73415
https://en.wikipedia.org/wiki/Sieve%20of%20Eratosthenes
Sieve of Eratosthenes
In mathematics, the sieve of Eratosthenes is an ancient algorithm for finding all prime numbers up to any given limit. It does so by iteratively marking as composite (i.e., not prime) the multiples of each prime, starting with the first prime number, 2. The multiples of a given prime are generated as a sequence of numbers starting from that prime, with constant difference between them equal to that prime. This is the sieve's key distinction from using trial division to sequentially test each candidate number for divisibility by each prime. Once all the multiples of each discovered prime have been marked as composites, the remaining unmarked numbers are primes. The earliest known reference to the sieve (κόσκινον Ἐρατοσθένους, kóskinon Eratosthénous) is in Nicomachus of Gerasa's Introduction to Arithmetic, an early 2nd cent. CE book which attributes it to Eratosthenes of Cyrene, a 3rd cent. BCE Greek mathematician, though describing the sieving by odd numbers instead of by primes. One of a number of prime number sieves, it is one of the most efficient ways to find all of the smaller primes. It may be used to find primes in arithmetic progressions. Overview A prime number is a natural number that has exactly two distinct natural number divisors: the number 1 and itself. To find all the prime numbers less than or equal to a given integer n by Eratosthenes' method: 1. Create a list of consecutive integers from 2 through n: (2, 3, 4, ..., n). 2. Initially, let p equal 2, the smallest prime number. 3. Enumerate the multiples of p by counting in increments of p from 2p to n, and mark them in the list (these will be 2p, 3p, 4p, ...; p itself should not be marked). 4. Find the smallest number in the list greater than p that is not marked. If there was no such number, stop. Otherwise, let p now equal this new number (which is the next prime), and repeat from step 3. When the algorithm terminates, the numbers remaining not marked in the list are all the primes below n. The main idea here is that every value given to p will be prime, because if it were composite it would be marked as a multiple of some other, smaller prime. Note that some of the numbers may be marked more than once (e.g., 15 will be marked both for 3 and 5). As a refinement, it is sufficient to mark the numbers in step 3 starting from p², as all the smaller multiples of p will have already been marked at that point. This means that the algorithm is allowed to terminate in step 4 when p² is greater than n. Another refinement is to initially list odd numbers only, (3, 5, ..., n), and count in increments of 2p in step 3, thus marking only odd multiples of p. This actually appears in the original algorithm. This can be generalized with wheel factorization, forming the initial list only from numbers coprime with the first few primes and not just from odds (i.e., numbers coprime with 2), and counting in the correspondingly adjusted increments so that only those multiples of p are generated that are coprime with those small primes in the first place.
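A minimal Python sketch of the steps above, with the two refinements just mentioned (marking each prime's multiples starting from p² and stopping once p² exceeds n); the function name and the use of a Boolean list are illustrative choices, not part of the original description:

def sieve_of_eratosthenes(n):
    """Return all primes <= n by marking the multiples of each prime as composite."""
    if n < 2:
        return []
    is_unmarked = [True] * (n + 1)      # is_unmarked[i]: i has not been crossed out yet
    is_unmarked[0] = is_unmarked[1] = False
    p = 2
    while p * p <= n:                   # refinement: stop once p*p exceeds n
        if is_unmarked[p]:              # p was never marked, so it is prime
            for multiple in range(p * p, n + 1, p):   # refinement: start marking at p*p
                is_unmarked[multiple] = False
        p += 1
    return [i for i in range(2, n + 1) if is_unmarked[i]]

print(sieve_of_eratosthenes(30))        # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

For n = 30 this reproduces the worked example in the next subsection.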
Example To find all the prime numbers less than or equal to 30, proceed as follows. First, generate a list of integers from 2 to 30: 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30. The first number in the list is 2; cross out every 2nd number in the list after 2 by counting up from 2 in increments of 2 (these will be all the multiples of 2 in the list): 2 3 5 7 9 11 13 15 17 19 21 23 25 27 29. The next number in the list after 2 is 3; cross out every 3rd number in the list after 3 by counting up from 3 in increments of 3 (these will be all the multiples of 3 in the list): 2 3 5 7 11 13 17 19 23 25 29. The next number not yet crossed out in the list after 3 is 5; cross out every 5th number in the list after 5 by counting up from 5 in increments of 5 (i.e. all the multiples of 5): 2 3 5 7 11 13 17 19 23 29. The next number not yet crossed out in the list after 5 is 7; the next step would be to cross out every 7th number in the list after 7, but they are all already crossed out at this point, as these numbers (14, 21, 28) are also multiples of smaller primes; moreover, since 7 × 7 is greater than 30, the sieving can stop here. The numbers not crossed out at this point in the list are all the prime numbers below 30: 2 3 5 7 11 13 17 19 23 29. Algorithm and variants Pseudocode The sieve of Eratosthenes can be expressed in pseudocode, as follows:

algorithm Sieve of Eratosthenes is
    input: an integer n > 1.
    output: all prime numbers from 2 through n.
    let A be an array of Boolean values, indexed by integers 2 to n, initially all set to true.
    for i = 2, 3, 4, ..., not exceeding √n do
        if A[i] is true
            for j = i², i²+i, i²+2i, i²+3i, ..., not exceeding n do
                set A[j] := false
    return all i such that A[i] is true.

This algorithm produces all primes not greater than n. It includes a common optimization, which is to start enumerating the multiples of each prime i from i². The time complexity of this algorithm is O(n log log n), provided the array update is an O(1) operation, as is usually the case. Segmented sieve As Sorenson notes, the problem with the sieve of Eratosthenes is not the number of operations it performs but rather its memory requirements. For large n, the range of primes may not fit in memory; worse, even for moderate n, its cache use is highly suboptimal. The algorithm walks through the entire array A, exhibiting almost no locality of reference. A solution to these problems is offered by segmented sieves, where only portions of the range are sieved at a time. These have been known since the 1970s, and work as follows: Divide the range 2 through n into segments of some size Δ ≤ √n. Find the primes in the first (i.e. the lowest) segment, using the regular sieve. For each of the following segments, in increasing order, with m being the segment's topmost value, find the primes in it as follows: set up a Boolean array of size Δ, and mark as non-prime the positions in the array corresponding to the multiples of each prime p found so far, by enumerating its multiples in steps of p starting from the lowest multiple of p between m − Δ and m. The remaining non-marked positions in the array correspond to the primes in the segment. It is not necessary to mark any multiples of these primes, because all of these primes are larger than √m, as for k ≥ 1, one has (kΔ + 1)² > (k + 1)Δ. If Δ is chosen to be √n, the space complexity of the algorithm is O(√n), while the time complexity is the same as that of the regular sieve. For ranges with an upper limit n so large that the sieving primes below √n required by the page segmented sieve of Eratosthenes cannot fit in memory, a slower but much more space-efficient sieve like the pseudosquares prime sieve, developed by Jonathan P. Sorenson, can be used instead.
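A sketch of the segmented approach in Python, assuming the segment size Δ is taken to be about √n and reusing a plain sieve for the first (lowest) segment; the helper names (simple_sieve, segmented_sieve) and the exact segment bookkeeping are illustrative assumptions rather than details taken from the text:

import math

def simple_sieve(limit):
    """Primes <= limit with the basic sieve (used for the first segment)."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for i in range(2, math.isqrt(limit) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, flag in enumerate(is_prime) if flag]

def segmented_sieve(n):
    """Primes <= n, sieving one segment of size about sqrt(n) at a time."""
    delta = max(2, math.isqrt(n))          # segment size, Delta ~ sqrt(n)
    base_primes = simple_sieve(delta)      # primes found in the first segment
    primes = list(base_primes)
    low = delta + 1
    while low <= n:
        high = min(low + delta - 1, n)     # current segment covers [low, high]
        marked = [False] * (high - low + 1)
        for p in base_primes:
            # lowest multiple of p inside the segment, never below p*p
            start = max(p * p, ((low + p - 1) // p) * p)
            for multiple in range(start, high + 1, p):
                marked[multiple - low] = True
        primes.extend(low + i for i, m in enumerate(marked) if not m)
        low = high + 1
    return primes

print(segmented_sieve(100))

Only the base primes up to √n and one segment-sized array are held in memory at any time, which is the point of the technique.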
Incremental sieve An incremental formulation of the sieve generates primes indefinitely (i.e., without an upper bound) by interleaving the generation of primes with the generation of their multiples (so that primes can be found in gaps between the multiples), where the multiples of each prime p are generated directly by counting up from the square of the prime in increments of p (or 2p for odd primes). The generation must be initiated only when the prime's square is reached, to avoid adverse effects on efficiency. It can be expressed symbolically under the dataflow paradigm as primes = [2, 3, ...] \ [[p², p²+p, ...] for p in primes], using list comprehension notation with \ denoting set subtraction of arithmetic progressions of numbers. Primes can also be produced by iteratively sieving out the composites through divisibility testing by sequential primes, one prime at a time. This is not the sieve of Eratosthenes but is often confused with it, even though the sieve of Eratosthenes directly generates the composites instead of testing for them. Trial division has worse theoretical complexity than that of the sieve of Eratosthenes in generating ranges of primes. When testing each prime, the optimal trial division algorithm uses all prime numbers not exceeding its square root, whereas the sieve of Eratosthenes produces each composite from its prime factors only, and gets the primes "for free", between the composites. The widely known 1975 functional sieve code by David Turner is often presented as an example of the sieve of Eratosthenes but is actually a sub-optimal trial division sieve. Algorithmic complexity The sieve of Eratosthenes is a popular way to benchmark computer performance. The time complexity of calculating all primes below n in the random access machine model is O(n log log n) operations, a direct consequence of the fact that the prime harmonic series asymptotically approaches log log n. It has an exponential time complexity with regard to the length of the input, though, which makes it a pseudo-polynomial algorithm. The basic algorithm requires O(n) of memory. The bit complexity of the algorithm is O(n (log n)(log log n)) bit operations with a memory requirement of O(n). The normally implemented page segmented version has the same operational complexity of O(n log log n) as the non-segmented version, but reduces the space requirement to the very minimal size of a segment page plus the memory needed to store the base primes less than the square root of the range, which are used to cull composites from successive page segments of size O(√n). A special (rarely, if ever, implemented) segmented version of the sieve of Eratosthenes, with basic optimizations, uses O(n) operations and O(√n log log n / log n) bits of memory. Using big O notation ignores constant factors and offsets that may be very significant for practical ranges: the sieve of Eratosthenes variation known as the Pritchard wheel sieve has O(n) performance, but its basic implementation requires either a "one large array" algorithm, which limits its usable range to the amount of available memory, or it needs to be page segmented to reduce memory use. When implemented with page segmentation in order to save memory, the basic algorithm still requires about O(n / log log n) bits of memory (much more than the requirement of the basic page segmented sieve of Eratosthenes using O(√n log log n / log n) bits of memory). Pritchard's work reduced the memory requirement at the cost of a large constant factor. Although the resulting wheel sieve has O(n) performance and an acceptable memory requirement, it is not faster than a reasonably wheel-factorized basic sieve of Eratosthenes for practical sieving ranges.
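Returning to the incremental formulation described earlier in this section, it can be sketched in Python as a generator that keeps a dictionary from each upcoming composite to the primes that produce it, starting each prime's stream of multiples at its square. The dictionary-based bookkeeping is one common way to realize the dataflow description, offered here as an assumption rather than the only possible implementation:

from itertools import count

def incremental_sieve():
    """Yield primes indefinitely, interleaving prime generation with the
    generation of their multiples; each prime's stream starts at p*p."""
    multiples = {}                 # upcoming composite -> list of step sizes (its prime factors seen so far)
    for candidate in count(2):
        if candidate not in multiples:
            yield candidate                                  # a gap between multiples: prime
            multiples[candidate * candidate] = [candidate]   # start its stream at p*p
        else:
            for step in multiples.pop(candidate):            # advance each colliding stream
                multiples.setdefault(candidate + step, []).append(step)

gen = incremental_sieve()
print([next(gen) for _ in range(10)])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]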
Euler's sieve Euler's proof of the zeta product formula contains a version of the sieve of Eratosthenes in which each composite number is eliminated exactly once. The same sieve was rediscovered and observed to take linear time by Gries & Misra (1978). It, too, starts with a list of numbers from 2 to n in order. On each step the first element is identified as the next prime, is multiplied with each element of the list (thus starting with itself), and the results are marked in the list for subsequent deletion. The initial element and the marked elements are then removed from the working sequence, and the process is repeated:
 [2] (3) 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 ...
 [3] (5) 7 11 13 17 19 23 25 29 31 35 37 41 43 47 49 53 55 59 61 65 67 71 73 77 79 ...
 [4] (7) 11 13 17 19 23 29 31 37 41 43 47 49 53 59 61 67 71 73 77 79 ...
 [5] (11) 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 ...
 [...]
Here the example is shown starting from odds, after the first step of the algorithm. Thus, on the kth step all the remaining multiples of the kth prime are removed from the list, which will thereafter contain only numbers coprime with the first k primes (cf. wheel factorization), so that the list will start with the next prime, and all the numbers in it below the square of its first element will be prime too. Thus, when generating a bounded sequence of primes, when the next identified prime exceeds the square root of the upper limit, all the remaining numbers in the list are prime. In the example given above, that is achieved on identifying 11 as the next prime, giving a list of all primes less than or equal to 80. Note that numbers that will be discarded by a step are still used while marking the multiples in that step, e.g., for the multiples of 3 it is 3 × 3 = 9, 3 × 5 = 15, 3 × 7 = 21, 3 × 9 = 27, ..., 3 × 15 = 45, ..., so care must be taken dealing with this.
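Euler's idea of eliminating each composite exactly once is often realized with an array recording the smallest prime factor assigned to each number. The following Python sketch is a common linear-sieve reformulation in the spirit of Gries and Misra, offered as an assumption rather than the literal working-list procedure shown above:

def euler_sieve(n):
    """Linear (Euler) sieve: each composite is crossed out exactly once,
    as (its smallest prime factor) * (a cofactor)."""
    smallest_factor = [0] * (n + 1)       # 0 means "never produced as a product", i.e. prime so far
    primes = []
    for i in range(2, n + 1):
        if smallest_factor[i] == 0:       # i was never crossed out: it is prime
            smallest_factor[i] = i
            primes.append(i)
        for p in primes:
            if p > smallest_factor[i] or i * p > n:
                break
            smallest_factor[i * p] = p    # i*p is eliminated exactly once, by its smallest prime p
    return primes

print(euler_sieve(80))   # all primes up to 80, matching the example above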
Mathematics
Prime numbers
null
73421
https://en.wikipedia.org/wiki/Lilium
Lilium
Lilium is a genus of herbaceous flowering plants growing from bulbs, all with large and often prominent flowers. Lilies are a group of flowering plants which are important in culture and literature in much of the world. Most species are native to the Northern Hemisphere; their range lies mainly in temperate climates and extends into the subtropics. Many other plants have "lily" in their common names, but do not belong to the same genus and are therefore not true lilies. True lilies are known to be highly toxic to cats. Description Lilies are tall perennials ranging in height from . They form naked or tunicless scaly underground bulbs which are their organs of perennation. In some North American species the base of the bulb develops into rhizomes, on which numerous small bulbs are found. Some species develop stolons. Most bulbs are buried deep in the ground, but a few species form bulbs near the soil surface. Many species form stem-roots. With these, the bulb grows naturally at some depth in the soil, and each year the new stem puts out adventitious roots above the bulb as it emerges from the soil. These roots are in addition to the basal roots that develop at the base of the bulb; a number of species also produce contractile roots that move the bulbs deeper into the soil. The flowers are large, often fragrant, and come in a wide range of colors including whites, yellows, oranges, pinks, reds and purples. Markings include spots and brush strokes. The plants are late spring- or summer-flowering. Flowers are borne in racemes or umbels at the tip of the stem, with six tepals spreading or reflexed, to give flowers varying from funnel shape to a "Turk's cap". The tepals are free from each other, and bear a nectary at the base of each flower. The ovary is 'superior', borne above the point of attachment of the anthers. The fruit is a three-celled capsule. Seeds ripen in late summer. They exhibit varying and sometimes complex germination patterns, many adapted to cool temperate climates. Most cool temperate species are deciduous and dormant in winter in their native environment. But a few species native to areas with hot summers and mild winters (Lilium candidum, Lilium catesbaei, Lilium longiflorum) lose their leaves and enter a short dormant period in summer or autumn, sprout from autumn to winter, forming dwarf stems bearing a basal rosette of leaves until, after they have received sufficient chilling, the stem begins to elongate in warming weather. The basic chromosome number is twelve (n=12). Taxonomy Taxonomical division in sections follows the classical division of Comber; species acceptance follows the World Checklist of Selected Plant Families; the taxonomy of section Pseudolirium is from the Flora of North America; the taxonomy of section Liriotypus is given in consideration of Resetnik et al. 2007; the taxonomy of Chinese species (various sections) follows the Flora of China; and the taxonomy of section Sinomartagon follows Nishikawa et al., as does the taxonomy of section Archelirion. The Sinomartagon are divided into 3 paraphyletic groups, while the Leucolirion are divided into 2 paraphyletic groups. There are seven sections: Martagon, Pseudolirium, Liriotypus, Archelirion, Sinomartagon, Leucolirion and Daurolirion. There are 119 species counted in this genus. For a full list of accepted species with their native ranges, see List of Lilium species. Some species formerly included within this genus have now been placed in other genera. These genera include Cardiocrinum, Notholirion, and Fritillaria. 
Several other generic names, including Lirium, Martagon and Nomocharis, are considered synonyms by most sources. Etymology The botanic name Lilium is the Latin form and is a Linnaean name. The Latin name is derived from the Greek word leírion, generally assumed to refer to true, white lilies as exemplified by the Madonna lily. The word was borrowed from Coptic (in the Fayyumic dialect), via standard Coptic and Demotic, ultimately from an Egyptian word for "flower". Meillet maintains that both the Egyptian and the Greek word are possible loans from an extinct, substratum language of the Eastern Mediterranean. The Greek word krínon was used by the Greeks, albeit for lilies of any color. The term "lily" has in the past been applied to numerous flowering plants, often with only superficial resemblance to the true lily, including water lily, fire lily, lily of the Nile, calla lily, trout lily, kaffir lily, cobra lily, lily of the valley, daylily, ginger lily, Amazon lily, leek lily, Peruvian lily, and others. All English translations of the Bible render the Hebrew shūshan, shōshan, shōshannā as "lily", but the "lily among the thorns" of Song of Solomon, for instance, may be the honeysuckle. Distribution and habitat The range of lilies in the Old World extends across much of Europe, across most of Asia to Japan, south to India, and east to Indochina and the Philippines. In the New World they extend from southern Canada through much of the United States. They are commonly adapted to either woodland habitats, often montane, or sometimes to grassland habitats. A few can survive in marshland, and epiphytes are known in tropical southeast Asia. In general they prefer moderately acidic or lime-free soils. Ecology Lilies are used as food plants by the larvae of some Lepidoptera species including the Dun-bar. The proliferation of deer (e.g. Odocoileus virginianus) in North America, mainly due to factors such as the elimination of large predators for human safety, is responsible there for a downturn in lily populations in the wild and is a threat to garden lilies as well. Fences as high as 8 feet may be required to prevent them from consuming the plants, an impractical solution for most wild areas. Cultivation Many species are widely grown in the garden in temperate, sub-tropical and tropical regions. Numerous ornamental hybrids have been developed. They are used in herbaceous borders, woodland and shrub plantings, and as patio plants. Some lilies, especially Lilium longiflorum, form important cut flower crops or potted plants. These are forced to flower outside of the normal flowering season for particular markets; for instance, Lilium longiflorum for the Easter trade, when it may be called the Easter lily. Lilies are usually planted as bulbs in the dormant season. They are best planted in a south-facing (northern hemisphere), slightly sloping aspect, in sun or part shade, at a depth of 2½ times the height of the bulb (except Lilium candidum, which should be planted at the surface). Most prefer a porous, loamy soil, and good drainage is essential. Most species bloom in July or August (northern hemisphere). The flowering periods of certain lily species begin in late spring, while others bloom in late summer or early autumn. They have contractile roots which pull the plant down to the correct depth, so it is better to plant them too shallow than too deep. A soil pH of around 6.5 is generally safe. Most grow best in well-drained soils, and plants are watered during the growing season. 
Some species and cultivars have strong wiry stems, but those with heavy flower heads are staked to stay upright. Awards The following lily species and cultivars currently hold the Royal Horticultural Society's Award of Garden Merit (confirmed 2017): African Queen Group (VI-/a) 2002 H6 'Casa Blanca' (VIIb/b-c) 1993 H6 'Fata Morgana' (Ia/b) 2002 H6 'Garden Party' (VIIb/b) 2002 H6 Golden Splendor Group (VIb-c/a) Lilium henryi (IXc/d) 1993 H6 Lilium mackliniae (IXc/a) 2012 H5 Lilium martagon – Turk's cap lily (IXc/d) Lilium pardalinum – leopard lily (IXc/d) Pink Perfection Group (VIb/a) Lilium regale – regal lily, king's lily (IXb/a) Classification of garden forms Numerous forms, mostly hybrids, are grown for the garden. They vary according to the species and interspecific hybrids that they derived from, and are classified in the following broad groups: Asiatic hybrids (Division I) These are derived from hybrids between species in Lilium section Sinomartagon. They are derived from central and East Asian species and interspecific hybrids, including Lilium amabile, Lilium bulbiferum, Lilium callosum, Lilium cernuum, Lilium concolor, Lilium dauricum, Lilium davidii, Lilium × hollandicum, Lilium lancifolium (syn. Lilium tigrinum), Lilium lankongense, Lilium leichtlinii, Lilium × maculatum, Lilium pumilum, Lilium × scottiae, Lilium wardii and Lilium wilsonii. These are plants with medium-sized, upright or outward facing flowers, mostly unscented. There are various cultivars such as Lilium 'Cappuccino', Lilium 'Dimension', Lilium 'Little Kiss' and Lilium 'Navona'. Dwarf (Patio, Border) varieties are much shorter, c.36–61 cm in height and were designed for containers. They often bear the cultivar name 'Tiny', such as the 'Lily Looks' series, e.g. 'Tiny Padhye', 'Tiny Dessert'. Martagon hybrids (Division II) These are based on Lilium dalhansonii, Lilium hansonii, Lilium martagon, Lilium medeoloides, and Lilium tsingtauense. The flowers are nodding, Turk's cap style (with the petals strongly recurved). Candidum (Euro-Caucasian) hybrids (Division III) This includes mostly European species: Lilium candidum, Lilium chalcedonicum, Lilium kesselringianum, Lilium monadelphum, Lilium pomponium, Lilium pyrenaicum and Lilium × testaceum. American hybrids (Division IV) These are mostly taller growing forms, originally derived from Lilium bolanderi, Lilium × burbankii, Lilium canadense, Lilium columbianum, Lilium grayi, Lilium humboldtii, Lilium kelleyanum, Lilium kelloggii, Lilium maritimum, Lilium michauxii, Lilium michiganense, Lilium occidentale, Lilium × pardaboldtii, Lilium pardalinum, Lilium parryi, Lilium parvum, Lilium philadelphicum, Lilium pitkinense, Lilium superbum, Lilium ollmeri, Lilium washingtonianum, and Lilium wigginsii. Many are clump-forming perennials with rhizomatous rootstocks. Longiflorum hybrids (Division V) These are cultivated forms of this species and its subspecies. They are most important as plants for cut flowers, and are less often grown in the garden than other hybrids. Trumpet lilies (Division VI), including Aurelian hybrids (with L. henryi) This group includes hybrids of many Asiatic species and their interspecific hybrids, including Lilium × aurelianense, Lilium brownii, Lilium × centigale, Lilium henryi, Lilium × imperiale, Lilium × kewense, Lilium leucanthum, Lilium regale, Lilium rosthornii, Lilium sargentiae, Lilium sulphureum and Lilium × sulphurgale. 
The flowers are trumpet shaped, facing outward or somewhat downward, and tend to be strongly fragrant, often especially night-fragrant. Oriental hybrids (Division VII) These are based on hybrids within Lilium section Archelirion, specifically Lilium auratum and Lilium speciosum, together with crossbreeds from several species native to Japan, including Lilium nobilissimum, Lilium rubellum, Lilium alexandrae, and Lilium japonicum. They are fragrant, and the flowers tend to be outward facing. Plants tend to be tall, and the flowers may be quite large. The whole group is sometimes referred to as "stargazers" because many of them appear to look upwards. (For the specific cultivar, see Lilium 'Stargazer'.) Other hybrids (Division VIII) Includes all other garden hybrids. Species (Division IX) All natural species and naturally occurring forms are included in this group. The flowers can be classified by flower aspect and form. Flower aspect: (a) up-facing, (b) out-facing, (c) down-facing. Flower form: (a) trumpet-shaped, (b) bowl-shaped, (c) flat (or with tepal tips recurved), (d) tepals strongly recurved (with the Turk's cap form as the ultimate state). Many newer commercial varieties are developed by using new technologies such as ovary culture and embryo rescue. Pests and diseases Aphids may infest plants. Leatherjackets feed on the roots. Larvae of the scarlet lily beetle can cause serious damage to the stems and leaves. The scarlet lily beetle lays its eggs and completes its life cycle only on true lilies (Lilium) and fritillaries (Fritillaria). Oriental, rubrum, tiger and trumpet lilies as well as Oriental trumpets (orienpets) and Turk's cap lilies and native North American Lilium species are all vulnerable, but the beetle prefers some types over others. The beetle may also be affecting native Canadian species and some rare and endangered species found in northeastern North America. Daylilies (Hemerocallis, not true lilies) are excluded from this category. Plants can suffer from damage caused by mice, deer and squirrels. Slugs, snails and millipedes attack seedlings, leaves and flowers. Brown spots on damp leaves may signal an infection of Botrytis elliptica, also known as lily blight, lily fire, and botrytis leaf blight. Various viral diseases can cause mottling of leaves and stunting of growth, including lily curl stripe, ringspot, and lily rosette virus. Propagation and growth Lilies can be propagated in several ways: by division of the bulbs; by growing on bulbils, which are adventitious bulbs formed on the stem; by scaling, for which whole scales are detached from the bulb and planted to form new bulbs; by seed, with many germination patterns, some of them complex; and by micropropagation techniques (which include tissue culture). Commercial quantities of lilies are often propagated in vitro and then planted out to grow into plants large enough to sell. A highly efficient technique for multiple shoot and propagule formation was described by Yadav et al. in 2013. Plant growth regulators (PGRs) are used to limit the height of lilies, especially those sold as potted plants. Commonly used chemicals include ancymidol, flurprimidol, paclobutrazol, and uniconazole, all of which are applied to the foliage to slow the biosynthesis of gibberellins, a class of plant hormones responsible for stem growth. Research A comparison of meiotic crossing-over (recombination) in lily and mouse led, in 1977, to the conclusion that diverse eukaryotes share a common pattern of meiotic crossing-over. 
Lilium longiflorum has been used for studying aspects of the basic molecular mechanism of genetic recombination during meiosis. Toxicity Some Lilium species are toxic to cats. This is known to be so especially for Lilium longiflorum, though other Lilium and the unrelated Hemerocallis can also cause the same symptoms with equal lethality. The true mechanism of toxicity is undetermined, but it involves damage to the renal tubular epithelium (composing the substance of the kidney and secreting, collecting, and conducting urine), which can cause acute kidney failure. Veterinary help should be sought, as a matter of urgency, for any cat that is suspected of eating any part of a lily – including licking pollen that may have brushed onto its coat. Due to the high mortality rate, medical care should be sought immediately once it is known a cat came into contact with lilies, ideally before any symptoms develop. Culinary uses Chinese cuisine Lily bulbs are starchy and edible as root vegetables, though bulbs of some species may be too bitter to eat. Lilium brownii var. viridulum, known as 百合 (bǎihé; Cantonese pak hop), is one of the most prominent edible lilies in China. Its bulbs are large in size and not bitter. They were even exported and sold in the San Francisco Chinatown in the 19th century, available both fresh and dry. A landrace called 龍牙百合 (lóngyá bǎihé), mainly cultivated in Hunan and Jiangxi, is especially renowned for its good-quality bulbs. L. lancifolium (卷丹) is widely cultivated in China, especially in Yixing, Huzhou and Longshan. Its bulbs are slightly bitter. L. davidii var. unicolor is mainly cultivated in Lanzhou, and its bulbs are valued for sweetness. Other edible Chinese lilies include L. brownii var. brownii, L. davidii var. davidii, L. concolor, L. pensylvanicum, L. distichum, L. martagon var. pilosiusculum, L. pumilum, L. rosthornii and L. speciosum var. gloriosoides. Researchers have also explored the possibility of using ornamental cultivars as edible lilies. The dried bulbs are commonly used in the south to flavor soup. They may be reconstituted and stir-fried, grated and used to thicken soup, or processed to extract starch. Their texture and taste draw comparisons with the potato, although the individual bulb scales are much smaller. The commonly marketed "lily" flower buds, called kam cham tsoi (金針菜) in Chinese cuisine, are actually from daylilies, Hemerocallis citrina, or possibly H. fulva. Flowers of H. graminea and Lilium bulbiferum were reported to have been eaten as well, but samples provided by the informant were strictly daylilies and did not include L. bulbiferum. Lily flowers and bulbs are eaten especially in the summer, for their perceived ability to reduce internal heat. A 19th-century English source reported that "Lily flowers are also said to be efficacious in pulmonary affections, and to have tonic properties". Asiatic lily cultivars are also imported from the Netherlands; the seedling bulbs must be imported from the Netherlands every year. The parts of Lilium species which are officially listed as food material in Taiwan are the flower and bulbs of Lilium lancifolium, Lilium brownii var. viridulum, Lilium pumilum and Lilium candidum. Japanese cuisine The lily bulb or yuri-ne is sometimes used in Japanese cuisine. It may be most familiar in the present day as an occasional ingredient in chawanmushi (savoury egg custard), where a few loosened scales of this optional ingredient are found embedded in the "hot pudding" of each serving. 
It could also be used as an ingredient in a clear soup. Yokan There is also the yuri-yōkan, one recipe of which calls for combining measures of yuri starch with agar dissolved in water and sugar. This was a specialty of Hamada, Shimane, and the shop established in 1885 became famous for it. Because a certain Viscount Jimyōin wrote a waka poem about the confection which mentioned hime-yuri "princess lily", one source stated that the hime-yuri (usually taken to mean L. concolor) had to have been used, but another source points out that the city of Hamada lies back to back, across a mountain range, with Fuchū, Hiroshima, which is renowned for its production of yama-yuri (L. auratum). Species used Current Japanese governmental sources list the following lily species as prominent in domestic consumption: the oni yuri or tiger lily Lilium lancifolium, the kooni yuri Lilium leichtlinii var. maximowiczii, and the gold-banded white yama-yuri L. auratum. But Japanese sources of c. 1895–1900 give a top-three list which replaces the kooni yuri with the sukashi-yuri, named from the gaps between its tepals. There is uncertainty regarding which species is meant by the hime-yuri used as food, because although this is usually the common name for L. concolor in most up-to-date literature, it used to refer ambiguously to the tiger lily as well, c. 1895–1900. The non-tiger-lily himeyuri is certainly described as quite palatable in the literature at the time, but the extent of exploitation could not have been as significant. North America The flower buds and roots of Lilium columbianum are traditionally gathered and eaten by North American indigenous peoples. Coast Salish, Nuu-chah-nulth and most western Washington peoples steam, boil or pit-cook the bulbs of Lilium columbianum. Bitter or peppery-tasting, they were mostly used as a flavoring, often in soup with meat or fish. Medicinal uses Traditional Chinese medicine lists the use of the following: 野百合 Lilium brownii, 百合 Lilium brownii var. viridulum, 渥丹 Lilium concolor, 毛百合 Lilium dauricum, 卷丹 Lilium lancifolium, 山丹 Lilium pumilum, 南川百合 Lilium rosthornii, 药百合 Lilium speciosum var. gloriosoides, 淡黄花百合 Lilium sulphureum. In Taiwan, governmental publications list Lilium lancifolium Thunb., Lilium brownii var. viridulum Baker, Lilium pumilum DC. In kanpō, or Chinese medicine as practiced in Japan, the official Japanese governmental pharmacopeia includes the use of lily bulb (known as byakugō in traditional pharmacological circles), listing the use of the following species: Lilium lancifolium, Lilium brownii, Lilium brownii var. colchesteri and Lilium pumilum. The scales flaked off from the bulbs are used, usually steamed. In South Korea, the Lilium species which are officially listed for medicinal use are 참나리 Lilium lancifolium Thunberg; 당나리 Lilium brownii var. viridulum Baker. In culture Symbolism In the Victorian language of flowers, lilies portray love, ardor, and affection for one's loved ones, while orange lilies stand for happiness, love, and warmth. Lilies are the flowers most commonly used at funerals, where they symbolically signify that the soul of the deceased has been restored to the state of innocence. Lilium formosanum, or Taiwanese lily, is called "the flower of broken bowl" by the elderly members of the Hakka ethnic group. 
They believe that because this lily grows near bodies of clean water, harming the lily may damage the environment, just like breaking the bowls that people rely on. A different viewpoint proposes that parents discourage kids from picking lilies by informing them of the possible repercussions, like their dinner bowls breaking if they harm the flower. The indigenous Rukai people who call this same species bariangalay consider it as a symbol of bravery and perseverance. In Western Christianity, Madonna lily or Lilium candidum has been associated with the Virgin Mary since at least the Medieval Era. Medieval and Renaissance depictions of the Virgin Mary, especially at the Annunciation, often show her with these flowers. Madonna lilies are also commonly included in depictions of Christ's resurrection. Lilium longiflorum, the Easter lily, is a symbol of Easter, and Lilium candidum, the Madonna lily, carries a great deal of symbolic value in many cultures. See the articles for more information. Heraldry Lilium bulbiferum has long been recognised as a symbol of the Orange Order in Northern Ireland. Lilium mackliniae is the state flower of Manipur. Lilium michauxii, the Carolina lily, is the official state flower of North Carolina. Idyllwild, California, hosts the Lemon Lily Festival, which celebrates Lilium parryi. Lilium philadelphicum is the floral emblem of Saskatchewan province in Canada, and is on the flag of Saskatchewan. Other plants referred to as lilies Lily of the valley, flame lilies, daylilies, water lilies and spider lilies are symbolically important flowers commonly referred to as lilies, but they are not in the genus Lilium.
Biology and health sciences
Monocots
null
73426
https://en.wikipedia.org/wiki/Iris%20%28plant%29
Iris (plant)
Iris is a flowering plant genus of 310 accepted species with showy flowers. As well as being the scientific name, iris is also widely used as a common name for all Iris species, as well as some belonging to other closely related genera. A common name for some species is flags, while the plants of the subgenus Scorpiris are widely known as junos, particularly in horticulture. It is a popular garden flower. The often-segregated, monotypic genera Belamcanda (blackberry lily, I. domestica), Hermodactylus (snake's head iris, I. tuberosa), and Pardanthopsis (vesper iris, I. dichotoma) are currently included in Iris. Three Iris varieties are used in the Iris flower data set outlined by Ronald Fisher in his 1936 paper The use of multiple measurements in taxonomic problems as an example of linear discriminant analysis. Description Irises are perennial plants, growing from creeping rhizomes (rhizomatous irises) or, in drier climates, from bulbs (bulbous irises). They have long, erect flowering stems which may be simple or branched, solid or hollow, and flattened or have a circular cross-section. The rhizomatous species usually have 3–10 basal sword-shaped leaves growing in dense clumps. The bulbous species also have 2–10 narrow leaves growing from the bulb. Flower The inflorescences are in the shape of a fan and contain one or more symmetrical six-lobed flowers. These grow on a pedicel or peduncle. The three sepals, which are usually spreading or droop downwards, are referred to as "falls". They expand from their narrow base (the "claw" or "haft"), into a broader expanded portion ("limb" or "blade") and can be adorned with veining, lines or dots. In the centre of the blade, some of the rhizomatous irises have a "beard", a row of fuzzy hairs at the base of each falls petal which gives pollinators a landing place and guides them to the nectar. The three, sometimes reduced, petals stand upright, partly behind the sepal bases. They are called "standards". Some smaller iris species have all six lobes pointing straight outwards, but generally limb and standards differ markedly in appearance. They are united at their base into a floral tube that lies above the ovary (This flower, with the petals, and other flower parts, above the ovary is known as an epigynous flower, and it is said to have an inferior ovary, that is an ovary below the other flower parts). The three styles divide towards the apex into petaloid branches; this is significant in pollination. The iris flower is of interest as an example of the relation between flowering plants and pollinating insects. The shape of the flower and the position of the pollen-receiving and stigmatic surfaces on the outer petals form a landing-stage for a flying insect, which in probing for nectar, will first come into contact with the perianth, then with the three stigmatic stamens in one whorled surface which is borne on an ovary formed of three carpels. The shelf-like transverse projection on the inner whorled underside of the stamens is beneath the overarching style arm below the stigma, so that the insect comes in contact with its pollen-covered surface only after passing the stigma; in backing out of the flower it will come in contact only with the non-receptive lower face of the stigma. Thus, an insect bearing pollen from one flower will, in entering a second, deposit the pollen on the stigma; in backing out of a flower, the pollen which it bears will not be rubbed off on the stigma of the same flower. 
The iris fruit is a capsule which opens up in three parts to reveal the numerous seeds within. In some species, the seeds bear an aril, such as Iris stolonifera which has light brown seeds with thick white aril. Etymology The genus takes its name from the Greek word îris "rainbow", which is also the name for the Greek goddess of the rainbow, Iris. Some authors state that the name refers to the wide variety of flower colors found among the many species. Taxonomy Iris is the largest genus of the family Iridaceae with up to 300 species – many of them natural hybrids. Plants of the World Online lists 310 accepted species from this genus as of 2022. Modern classifications, starting with Dykes (1913), have subdivided them. Dykes referred to the major subgroupings as sections. Subsequent authors such as Lawrence (1953) and Rodionenko (1987) have generally called them subgenera, while essentially retaining Dykes' groupings, using six subgenera further divided into twelve sections. Of these, section Limneris (subgenus Limneris) was further divided into sixteen series. Like some older sources, Rodionenko moved some of the bulbous subgenera (Xiphium, Scorpiris and Hermodactyloides) into separate genera (Xiphion, Juno and Iridodictyum respectively), but this has not been accepted by later writers such as Mathew (1989), although the latter kept Hermodactylus as a distinct genus, to include Hermodactylus tuberosus, now returned to Hermodactyloides as Iris tuberosa. Rodionenko also reduced the number of sections in subgenus Iris, from six to two, depending on the presence (Hexapogon) or absence (Iris) of arils on the seeds, referred to as arilate or nonarilate. Taylor (1976) provides arguments for not including all arilate species in Hexapogon. In general, modern classifications usually recognise six subgenera, of which five are restricted to the Old World; the sixth (subgenus Limniris) has a Holarctic distribution. The two largest subgenera are further divided into sections. The Iris subgenus has been divided into six sections; bearded irises (or pogon irises), Psammiris, Oncocyclus, Regelia, Hexapogon and Pseudoregelia. Iris subg. Limniris has been divided into 2 sections; Lophiris (or 'Evansias' or crested iris) and Limniris which was further divided into 16 series. Evolution The concept of introgressive hybridization (or "introgression") was first coined to describe the pattern of interspecific hybridization followed by backcrossing to the parentals that is common in this genus. Subgeneric division Subgenera Iris (Bearded rhizomatous irises) Limniris (Beardless rhizomatous irises) Xiphium (Smooth-bulbed bulbous irises: Formerly genus Xiphion) Nepalensis (Bulbous irises: Formerly genus Junopsis) Scorpiris (Smooth-bulbed bulbous irises: Formerly genus Juno) Hermodactyloides (Reticulate-bulbed bulbous irises: Formerly genus Iridodictyum) Sections, series and species Distribution and habitat Nearly all species are found in temperate Northern Hemisphere zones, from Europe to Asia and across North America. Although diverse in ecology, Iris is predominantly found in dry, semi-desert, or colder rocky mountainous areas. Other habitats include grassy slopes, meadowlands, woodland, bogs and riverbanks. Some irises like Iris setosa can tolerate damp (bogs) or dry sites (meadows), and Iris foetidissima can be found in woodland, hedge banks and scrub areas. Diseases Narcissus mosaic virus is most commonly known from Narcissus. 
Wylie et al., 2014, made the first identification of Narcissus mosaic virus infecting this garden plant genus, and the first record in Australia. Japanese iris necrotic ring virus also, commonly infects this genus. It was, however, unknown in Australia until Wylie et al., 2012, identified it in Australia on I. ensata. Cultivation Iris is extensively grown as ornamental plant in home and botanical gardens. Presby Memorial Iris Gardens in New Jersey, for example, is a living iris museum with over 10,000 plants, while in Europe the most famous iris garden is arguably the Giardino dell'Iris in Florence (Italy) which every year hosts a well attended iris breeders' competition. Irises, especially the multitude of bearded types, feature regularly in shows such as the Chelsea Flower Show. For garden cultivation, iris classification differs from taxonomic classification. Garden iris are classed as either bulb iris or rhizome iris (called rhizomatous) with a number of further subdivisions. Due to a wide variety of geographic origins, and thus great genetic diversity, cultivation needs of iris vary greatly. Generally, Irises grow well in most garden soil types providing they are well-drained, depending on the species. The earliest to bloom are species like I. reticulata and I. reichenbachii, which flower as early as February and March in the Northern Hemisphere, followed by the dwarf forms of I. pumila and others. In May or June, most of the tall bearded varieties start to bloom, such as the German iris and its variety florentina, sweet iris, Hungarian iris, lemon-yellow iris (I. flavescens), Iris sambucina, and their natural and horticultural hybrids such as those described under names like I. neglecta or I. squalens and best united under I. × lurida. The iris is promoted in the United Kingdom by the British Iris Society. The National Collection of Arthur Bliss Irises is held in Gloucestershire. The American Iris Society is the International Cultivar Registration Authority for Iris, and recognises over 30,000 registered cultivar names. Bearded rhizome iris Bearded iris are classified as dwarf, tall, or aril. In Europe, the most commonly found garden iris is a hybrid iris (falsely called German iris, I. germanica which is sterile) and its numerous cultivars. Various wild forms (including Iris aphylla) and naturally occurring hybrids of the Sweet iris (I. pallida) and the Hungarian iris (I. variegata) form the basis of almost all modern hybrid bearded irises. Median forms of bearded iris (intermediate bearded, or IB; miniature tall bearded, or MTB; etc.) are derived from crosses between tall and dwarf species like Iris pumila. The "beard", short hairs arranged to look like a long furry caterpillar, is found toward the back of the lower petals and its purpose is to guide pollinating insects toward the reproductive parts of the plant. Bearded irises have been cultivated to have much larger blooms than historically; the flowers are now twice the size of those a hundred years ago. Ruffles were introduced in the 1960s to help stabilize the larger petals. Bearded iris are easy to cultivate and propagate and have become very popular in gardens. A small selection is usually held by garden centres at appropriate times during the season, but there are thousands of cultivars available from specialist suppliers (more than 30,000 cultivars of tall bearded iris). They are best planted as bare root plants in late summer, in a sunny open position with the rhizome visible on the surface of the soil and facing the sun. 
They should be divided in summer every two or three years, when the clumps become congested. A truly red bearded iris, like a truly blue rose, remains an unattained goal despite frequent hybridizing and selection. There are species and selections, most notably based on the beardless rhizomatous Copper iris (I. fulva), which have a relatively pure red color. However, getting this color into a modern bearded iris breed has proven very difficult, and thus, the vast majority of irises are in the purple and blue range of the color spectrum, with yellow, pink, orange and white breeds also available. Irises, like many related genera, lack red-based hues because their anthocyanins are delphinidin-derived. Pelargonidin-derived anthocyanins would lend the sought-after red-based colors, but these genera are metabolically disinclined to produce pelargonidin. Dihydroflavonol 4-reductases in Iris's relatives selectively do not catalyse the reduction of dihydrokaempferol to leucopelargonidin, the precursor, and this is probably the case here as well. The other metabolic difficulty is the presence of flavonoid 3'-hydroxylase, which in Chrysanthemum inhibits pelargonidin synthesis. The bias in irises towards delphinidin-anthocyanins is so pronounced that they have served as the gene donors for transgenic attempts at the aforementioned blue roses. Although these have been technically successful (over 99% of their anthocyanins are blue), their growth is crippled and they have never been commercializable. AGM cultivars The following is a selection of bearded irises that have gained the Royal Horticultural Society's Award of Garden Merit: 'Alizes' (tall bearded, blue & white) 'Bumblebee Deelite' (miniature tall bearded, yellow/purple) 'Early Light' (tall bearded, pale yellow) 'Jane Phillips' (tall bearded, pale blue) 'Langport Wren' (intermediate bearded, maroon) 'Maui Moonlight' (intermediate bearded, pale yellow) 'Orinoco Flow' (border bearded, white/violet) 'Raspberry Blush' (intermediate bearded, pink) 'Sarah Taylor' (dwarf bearded, pale yellow) 'Thornbird' (tall bearded, pale yellow) 'Titan's Glory' (tall bearded, deep blue) Bearded iris Oncocyclus section This section contains the cushion irises or royal irises, a group of plants noted for their large, strongly marked flowers. Between 30 and 60 species are classified in this section, depending on the authority. Species of section Oncocyclus are generally strict endemics, typically occurring in a small number of scattered, disjunct populations, whose geographical isolation is enhanced by their pollination strategy and myrmecochory seed dispersal. Morphological divergence between populations usually follows a cline reflecting local adaptation to environmental conditions; furthermore, this largely overlaps divergence between species, making it difficult to identify discrete species boundaries in these irises. Compared with other irises, the cushion varieties are scantily furnished with narrow sickle-shaped leaves and the flowers are usually borne singly on the stalks; they are often very dark and in some species almost blackish. The cushion irises are somewhat fastidious growers, and to be successful with them they must be planted rather shallow in very gritty well-drained soil. They should not be disturbed in the autumn, and after the leaves have withered the roots should be protected from heavy rains until growth starts again naturally. 
Bearded iris Regelia section This section, closely allied to the cushion irises, includes several garden hybrids with species in section Oncocyclus, known as Regelio-cyclus irises. They are best planted in September or October in warm sunny positions, the rhizomes being lifted the following July after the leaves have withered. Beardless rhizome iris (subgenus Limniris) There are six major subgroupings of the beardless iris, depending on origin. They are divided into Pacific Coast, Siberica, Spuria, Louisiana, Japanese, and other. Beardless rhizomatous iris types commonly found in the European garden are the Siberian iris (I. sibirica) and its hybrids, and the Japanese Iris (I. ensata) and its hybrids. "Japanese iris" is also a catch-all term for the Japanese iris proper (hanashōbu), the blood iris (I. sanguinea, ayame) and the rabbit-ear iris (I. laevigata, kakitsubata). I. unguicularis is a late-winter-flowering species from Algeria, with sky-blue flowers with a yellow streak in the centre of each petal, produced from Winter to Spring. Yet another beardless rhizomatous iris popular in gardening is I. ruthenica, which has much the same requirements and characteristics as the tall bearded irises. In North America, Louisiana iris and its hybrids are often cultivated. Crested rhizome iris (subgenus Limniris) One specific species, Iris cristata from North America. Bulbing juno iris (subgenus Scorpiris) Often called 'junos', this type of iris is one of the more popular bulb irises in cultivation. They are generally earliest to bloom. Bulbing European iris (subgenus Xiphium) This group includes irises generally of European descent, and are also classified as Dutch, English, or Spanish iris. Iris reticulata and Iris persica, both of which are fragrant, are also popular with florists. Iris xiphium, the Spanish Iris (also known as Dutch Iris) and Iris latifolia, the English Iris. Despite the common names both the Spanish and English iris are of Spanish origin, and have very showy flowers, so they are popular with gardeners and florists. They are among the hardier bulbous irises, and can be grown in northern Europe. They require to be planted in thoroughly drained beds in very light open soil, moderately enriched, and should have a rather sheltered position. Both these present a long series of varieties of the most diverse colours, flowering in May, June and July, the smaller Spanish iris being the earlier of the two. Bulbing reticulate iris (subgenus Hermodactyloides) Reticulate irises with their characteristic bulbs, including the yellow I. danfordiae, and the various blue-purple I. histrioides and I. reticulata, flower as early as February and March. These reticulate-bulbed irises are miniatures and popular spring bulbs, being one of the first to bloom in the garden. Many of the smaller species of bulbous iris, being liable to perish from excess of moisture, should have a well-drained bed of good but porous soil made up for them, in some sunny spot, and in winter should be protected by a covering of half-decayed leaves or fresh coco-fiber. Uses Aromatic rhizomes Rhizomes of the German iris (I. germanica) and sweet iris (I. pallida) are traded as orris root and are used in perfume and medicine, though more common in ancient times than today. Today, Iris essential oil (absolute) from flowers are sometimes used in aromatherapy as sedative medicines. The dried rhizomes are also given whole to babies to help in teething. 
Gin brands such as Bombay Sapphire and Magellan Gin use orris root and sometimes iris flowers for flavor and color. For orris root production, iris rhizomes are harvested, dried, and aged for up to 5 years. In this time, the fats and oils inside the roots undergo degradation and oxidation, which produces many fragrant compounds that are valuable in perfumery. The scent is said to be similar to violets. The aged rhizomes are steam-distilled which produces a thick oily compound, known in the perfume industry as "iris butter" or orris oil. Iris rhizomes also contain notable amounts of terpenes, and organic acids such as ascorbic acid, myristic acid, tridecylenic acid and undecylenic acid. Iris rhizomes can be toxic. Larger blue flag (I. versicolor) and other species often grown in gardens and widely hybridized contain elevated amounts of the toxic glycoside iridin. These rhizomes can cause nausea, vomiting, diarrhea, and/or skin irritation, but poisonings are not normally fatal. Irises should only be used medicinally under professional guidance. Water purification In water purification, yellow iris (I. pseudacorus) is often used. The roots are usually planted in a substrate (e.g. lava-stone) in a reedbed-setup. The roots then improve water quality by consuming nutrient pollutants, such as from agricultural runoff. This highly aggressive grower is now considered a noxious weed and prohibited in some states of the US where it is found clogging natural waterways. In culture The iris has been used in art and as a symbol, including in heraldry. The symbolic meaning has evolved, in Christendom moving from a symbol of Mary mother of Jesus, to a French heraldic sign, the fleur-de-lis, and from French royalty it spread throughout Europe and beyond. Art Vincent van Gogh has painted several famous pictures of irises. The American artist Joseph Mason – a friend of John James Audubon – painted a precise image of what was then known as the Louisiana flag or copper iris (Iris fulva), to which Audubon subsequently added two Northern paraula birds (Parula americana) for inclusion as Plate 15 in his Birds of America. The artist Philip Hermogenes Calderon painted an iris in his 1856 work Broken Vows; he followed the principles of the Pre-Raphaelite Brotherhood. An ancient belief is that the iris serves as a warning to be heeded, as it was named for the messenger of Olympus. It also conveys images of lost love and silent grief, for young girls were led into the afterlife by the goddess Iris. Broken Vows was accompanied with poetry by Henry Wadsworth Longfellow when it was first exhibited. Contemporary artist George Gessert, who introduced the cultivation of flowers as an art form, has specialised in breeding irises. Local varieties as symbol Iris nigricans, the black iris is the national flower of Jordan. Iris bismarckiana, the Nazareth Iris, is the symbol of the city of Upper Nazareth. The Iris croatica is the unofficial national flower of Croatia. A stylized yellow iris is the symbol of Brussels, since historically the important Saint Gaugericus Island was carpeted in them. The iris symbol is now the sole feature on the flag of the Brussels-Capital Region. In 1998, Iris lacustris, the Dwarf Lake iris, was designated the state wildflower of Michigan, where the vast majority of populations exist. In 1990, the Louisiana iris was voted the state wildflower of Louisiana (see also fleur-de-lis:United States, New France), though the state flower is the magnolia blossom. 
An iris — species unspecified — is one of the state flowers of Tennessee. It is generally accepted that the species Iris versicolor, the Purple Iris, is the state flower alongside the wild-growing purple passionflower (Passiflora incarnata), the state's other floral emblem. Greeneville, Tennessee, is home to the annual Iris Festival celebrating the iris, local customs, and culture. The species Iris versicolor is also the provincial flower of Quebec, Canada, having replaced the Madonna lily which is not native to the province (see also fleur-de-lis: Canada). The provincial flag of Québec carries the harlequin blueflag (I. versicolor, iris versicolore in French). China It is thought in China that Iris anguifuga has the ability to keep snakes from entering the garden. It grows all winter, keeping snakes out, but then goes dormant in the spring, allowing the snakes back into the garden. In the autumn, the iris re-appears and can stop the snakes again. Ancient Greece In the Homeric Hymn to Demeter, the goddess Persephone and her companion nymphs (the Oceanids along with Artemis and Athena) were gathering flowers such as rose, crocus, violet, iris (also called 'agallis', ἀγαλλίς in Greek script), lily, larkspur, and hyacinth in a springtime meadow before she was abducted by the god Hades. It has been suggested that the 'agallis' mentioned was a dwarf iris, as described by leaf and root shape, and identified as Iris attica. Muslim culture In Iran and Kashmir, Iris kashmiriana and Iris germanica are most commonly grown in Muslim graveyards. Fleur-de-lis and associated heraldry French King Clovis I (466–511), when he converted to Christianity, changed his symbol on his banner from three toads to irises (the Virgin's flower). The fleur-de-lis, a stylized iris, first occurs in its modern use as the emblem of the House of Capet. The fleur-de-lis has been associated with France since Louis VII adopted it as a symbol in the 12th century. The yellow fleur-de-lis reflects the yellow iris (I. pseudacorus), common in Western Europe. Contemporary uses can be seen in the Quebec flag and the logo of the New Orleans Saints professional football team and on the flag of Saint Louis, Missouri. The red fleur-de-lis in the coat-of-arms and flag of Florence, Italy, descends from the white iris which is native to Florence and which grew even in its city walls. This white iris displayed against a red background was the symbol of Florence until the Medici family reversed the colors to signal a change in political power, setting in motion a centuries-long and still ongoing breeding program to hybridize a red iris. Scouting, fraternities & sororities The fleur-de-lis is the almost-universal symbol of Scouting and one of the symbols adopted by the sorority Kappa Kappa Gamma. The Iris versicolor is the official flower of Kappa Pi International Honorary Art Fraternity. Other The iris is one of the flowers listed as a birth flower for February.
Biology and health sciences
Monocots
null
73448
https://en.wikipedia.org/wiki/Mangrove
Mangrove
A mangrove is a shrub or tree that grows mainly in coastal saline or brackish water. Mangroves grow in an equatorial climate, typically along coastlines and tidal rivers. They have particular adaptations to take in extra oxygen and remove salt, allowing them to tolerate conditions that kill most plants. The term is also used for tropical coastal vegetation consisting of such species. Mangroves are taxonomically diverse due to convergent evolution in several plant families. They occur worldwide in the tropics and subtropics and even some temperate coastal areas, mainly between latitudes 30° N and 30° S, with the greatest mangrove area within 5° of the equator. Mangrove plant families first appeared during the Late Cretaceous to Paleocene epochs and became widely distributed in part due to the movement of tectonic plates. The oldest known fossils of mangrove palm date to 75 million years ago. Mangroves are salt-tolerant (halophytic) and are adapted to live in harsh coastal conditions. They contain a complex salt filtration system and a complex root system to cope with saltwater immersion and wave action. They are adapted to the low-oxygen conditions of waterlogged mud, but are most likely to thrive in the upper half of the intertidal zone. The mangrove biome, often called the mangrove forest or mangal, is a distinct saline woodland or shrubland habitat characterized by depositional coastal environments, where fine sediments (often with high organic content) collect in areas protected from high-energy wave action. Mangrove forests serve as vital habitats for a diverse array of aquatic species, offering a unique ecosystem that supports the intricate interplay of marine life and terrestrial vegetation. The saline conditions tolerated by various mangrove species range from brackish water, through pure seawater (3 to 4% salinity), to water concentrated by evaporation to over twice the salinity of ocean seawater (up to 9% salinity). Beginning in 2010, remote sensing technologies and global data have been used to assess areas, conditions and deforestation rates of mangroves around the world. In 2018, the Global Mangrove Watch Initiative released a new global baseline which estimates the total mangrove forest area of the world as of 2010, spanning 118 countries and territories. A 2022 study on losses and gains of tidal wetlands estimates a net decrease in global mangrove extent from 1999 to 2019. Mangrove loss continues due to human activity, with a global annual deforestation rate estimated at 0.16%, and per-country rates as high as 0.70%. Degradation in quality of remaining mangroves is also an important concern. There is interest in mangrove restoration for several reasons. Mangroves support sustainable coastal and marine ecosystems. They protect nearby areas from tsunamis and extreme weather events. Mangrove forests are also effective at carbon sequestration and storage. The success of mangrove restoration may depend heavily on engagement with local stakeholders, and on careful assessment to ensure that growing conditions will be suitable for the species chosen. The International Day for the Conservation of the Mangrove Ecosystem is celebrated every year on 26 July. Etymology The etymology of the English term mangrove is speculative and disputed. The term may have come to English from Portuguese or Spanish. Further back, it may be traced to South America and Cariban and Arawakan languages such as Taíno. 
Other possibilities include the Malay language. The English usage may reflect a corruption via folk etymology of the words mangrow and grove. The word "mangrove" is used in at least three senses: Most broadly to refer to the habitat and entire plant assemblage or mangal, for which the terms mangrove forest biome and mangrove swamp are also used; To refer to all trees and large shrubs in a mangrove swamp; and Narrowly to refer only to mangrove trees of the genus Rhizophora of the family Rhizophoraceae. Biology According to Hogarth (2015), among the recognized mangrove species there are about 70 species in 20 genera from 16 families that constitute the "true mangroves" – species that occur almost exclusively in mangrove habitats. Demonstrating convergent evolution, many of these species found similar solutions to the tropical conditions of variable salinity, tidal range (inundation), anaerobic soils, and intense sunlight. Plant biodiversity is generally low in a given mangrove. The greatest biodiversity of mangroves occurs in Southeast Asia, particularly in the Indonesian archipelago. Adaptations to low oxygen The red mangrove (Rhizophora mangle) survives in the most inundated areas, props itself above the water level with stilt or prop roots and then absorbs air through lenticels in its bark. The black mangrove (Avicennia germinans) lives on higher ground and develops many specialized root-like structures called pneumatophores, which stick up out of the soil like straws for breathing. The height these "breathing tubes" reach varies by species. The roots also contain wide aerenchyma to facilitate transport within the plants. Nutrient uptake Because the soil is perpetually waterlogged, little free oxygen is available. Anaerobic bacteria liberate nitrogen gas, soluble iron, inorganic phosphates, sulfides, and methane, which make the soil much less nutritious. Pneumatophores (aerial roots) allow mangroves to absorb gases directly from the atmosphere, and other nutrients such as iron, from the inhospitable soil. Mangroves store gases directly inside the roots, processing them even when the roots are submerged during high tide. Limiting salt intake Red mangroves exclude salt by having significantly impermeable roots that are highly suberised (impregnated with suberin), acting as an ultrafiltration mechanism to exclude sodium salts from the rest of the plant. One study found that roots of the Indian mangrove Avicennia officinalis exclude 90% to 95% of the salt in water taken up by the plant, depositing the excluded salt in the cortex of the root. An increase in the production of suberin and in the activity of a gene regulating cytochrome P450 was observed in correlation with an increase in the salinity of the water to which the plant was exposed. In a frequently cited concept that has become known as the "sacrificial leaf", salt which does accumulate in the shoot (sprout) then concentrates in old leaves, which the plant then sheds. However, recent research on the red mangrove Rhizophora mangle suggests that the older, yellowing leaves have no more measurable salt content than the other, greener leaves. Limiting water loss Because of the limited fresh water available in salty intertidal soils, mangroves limit the amount of water they lose through their leaves. They can restrict the opening of their stomata (pores on the leaf surfaces, which exchange carbon dioxide gas and water vapor during photosynthesis). 
They also vary the orientation of their leaves to avoid the harsh midday sun and so reduce evaporation from the leaves. A captive red mangrove grows only if its leaves are misted with fresh water several times a week, simulating frequent tropical rainstorms. Filtration of seawater A 2016 study by Kim et al. investigated the biophysical characteristics of sea water filtration in the roots of the mangrove Rhizophora stylosa from a plant hydrodynamic point of view. R. stylosa can grow even in saline water and the salt level in its roots is regulated within a certain threshold value through filtration. The root possesses a hierarchical, triple-layered pore structure in the epidermis and most Na+ ions are filtered at the first sublayer of the outermost layer. The high blockage of Na+ ions is attributed to the high surface zeta potential of the first layer. The second layer, which is composed of macroporous structures, also facilitates Na+ ion filtration. The study provides insights into the mechanism underlying water filtration through halophyte roots and could serve as a basis for the development of a bio-inspired method of desalination. Uptake of Na+ ions is desirable for halophytes to build up osmotic potential, absorb water and sustain turgor pressure. However, excess Na+ ions can be toxic. Halophytes therefore must adjust salinity delicately, balancing growth and survival strategies. From this point of view, a novel sustainable desalination method could be derived from halophytes, which are in contact with saline water through their roots. Halophytes exclude salt through their roots, secrete the accumulated salt through their aerial parts and sequester salt in senescent leaves and/or the bark. Mangroves are facultative halophytes, and Bruguiera is known for its special ultrafiltration system that can filter approximately 90% of Na+ ions from the surrounding seawater through the roots. The species also exhibits a high rate of salt rejection. The water-filtering process in mangrove roots has received considerable attention for several decades. The morphological structures of plants and their functions have evolved over a long history to withstand harsh environmental conditions. Increasing survival of offspring In this harsh environment, mangroves have evolved a special mechanism to help their offspring survive. Mangrove seeds are buoyant and are therefore suited to water dispersal. Unlike most plants, whose seeds germinate in soil, many mangroves (e.g. red mangrove) are viviparous, meaning their seeds germinate while still attached to the parent tree. Once germinated, the seedling grows either within the fruit (e.g. Aegialitis, Avicennia and Aegiceras), or out through the fruit (e.g. Rhizophora, Ceriops, Bruguiera and Nypa) to form a propagule (a ready-to-go seedling) which can produce its own food via photosynthesis. The mature propagule then drops into the water, which can transport it great distances. Propagules can survive desiccation and remain dormant for over a year before arriving in a suitable environment. Once a propagule is ready to root, its density changes so that the elongated shape now floats vertically rather than horizontally. In this position, it is more likely to lodge in the mud and root. If it does not root, it can alter its density and drift again in search of more favorable conditions. Taxonomy and evolution The following listings, based on Tomlinson, 2016, give the mangrove species in each listed plant genus and family. 
Mangrove environments in the Eastern Hemisphere harbor six times as many species of trees and shrubs as do mangroves in the New World. Genetic divergence of mangrove lineages from terrestrial relatives, in combination with fossil evidence, suggests mangrove diversity is limited by evolutionary transition into the stressful marine environment, and the number of mangrove lineages has increased steadily over the Tertiary with little global extinction. True mangroves Other mangroves Species distribution Mangroves are a type of tropical vegetation with some outliers established in subtropical latitudes, notably in South Florida and southern Japan, as well as South Africa, New Zealand and Victoria (Australia). These outliers result either from unbroken coastlines and island chains or from reliable supplies of propagules floating on warm ocean currents from rich mangrove regions. "At the limits of distribution, the formation is represented by scrubby, usually monotypic Avicennia-dominated vegetation, as at Westernport Bay and Corner Inlet, Victoria, Australia. The latter locality is the highest latitude (38° 45'S) at which mangroves occur naturally. The mangroves in New Zealand, which extend as far south as 37°, are of the same type; they start as low forest in the northern part of the North Island but become low scrub toward their southern limit. In both instances, the species is referred to as Avicennia marina var. australis, although genetic comparison is clearly needed. In Western Australia, A. marina extends as far south as Bunbury (33° 19'S). In the northern hemisphere, scrubby Avicennia germinans in Florida occurs as far north as St. Augustine on the east coast and Cedar Point on the west. There are records of A. germinans and Rhizophora mangle for Bermuda, presumably supplied by the Gulf Stream. In southern Japan, Kandelia obovata occurs to about 31°N (Tagawa in Hosakawa et al., 1977, but initially referred to as K. candel)." Mangrove forests Mangrove forests, also called mangrove swamps or mangals, are found in tropical and subtropical tidal areas. Areas where mangroves occur include estuaries and marine shorelines. The intertidal existence to which these trees are adapted represents the major limitation to the number of species able to thrive in their habitat. High tide brings in salt water, and when the tide recedes, solar evaporation of the seawater in the soil leads to further increases in salinity. The return of tide can flush out these soils, bringing them back to salinity levels comparable to that of seawater. At low tide, organisms are also exposed to increases in temperature and reduced moisture before being cooled and flooded by the tide. Thus, for a plant to survive in this environment, it must tolerate broad ranges of salinity, temperature, and moisture, as well as several other key environmental factors—thus only a select few species make up the mangrove tree community. About 110 species are considered mangroves, in the sense of being trees that grow in such a saline swamp, though only a few are from the mangrove plant genus, Rhizophora. However, a given mangrove swamp typically features only a small number of tree species. It is not uncommon for a mangrove forest in the Caribbean to feature only three or four tree species. For comparison, the tropical rainforest biome contains thousands of tree species, but this is not to say mangrove forests lack diversity. 
Though the trees themselves are few in species, the ecosystem that these trees create provides a home (habitat) for a great variety of other species, including as many as 174 species of marine megafauna. Mangrove plants require a number of physiological adaptations to overcome the problems of low environmental oxygen levels, high salinity, and frequent tidal flooding. Each species has its own solutions to these problems; this may be the primary reason why, on some shorelines, mangrove tree species show distinct zonation. Small environmental variations within a mangal may lead to greatly differing methods for coping with the environment. Therefore, the mix of species is partly determined by the tolerances of individual species to physical conditions, such as tidal flooding and salinity, but may also be influenced by other factors, such as crabs preying on plant seedlings. Once established, mangrove roots provide an oyster habitat and slow water flow, thereby enhancing sediment deposition in areas where it is already occurring. The fine, anoxic sediments under mangroves act as sinks for a variety of heavy (trace) metals, which colloidal particles in the sediments have concentrated from the water. Mangrove removal disturbs these underlying sediments, often creating problems of trace metal contamination of seawater and organisms of the area. Mangrove swamps protect coastal areas from erosion, storm surge (especially during tropical cyclones), and tsunamis. They limit high-energy wave erosion mainly during events such as storm surges and tsunamis. The mangroves' massive root systems are efficient at dissipating wave energy. Likewise, they slow down tidal water so that its sediment is deposited as the tide comes in, leaving all except fine particles when the tide ebbs. In this way, mangroves build their environments. Because of the uniqueness of mangrove ecosystems and the protection against erosion they provide, they are often the object of conservation programs, including national biodiversity action plans. The unique ecosystem found in the intricate mesh of mangrove roots offers a quiet marine habitat for young organisms. In areas where roots are permanently submerged, the organisms they host include algae, barnacles, oysters, sponges, and bryozoans, which all require a hard surface for anchoring while they filter-feed. Shrimps and mud lobsters use the muddy bottoms as their home. Mangrove crabs eat the mangrove leaves, adding nutrients to the mangal mud for other bottom feeders. In at least some cases, the export of carbon fixed in mangroves is important in coastal food webs. Mangrove forests contribute significantly to coastal ecosystems by fostering complex and diverse food webs. The intricate root systems of mangroves create a habitat conducive to the proliferation of microorganisms, crustaceans, and small fish, forming the foundational tiers of the food chain. This abundance of organisms serves as a critical food source for larger predators like birds, reptiles, and mammals within the ecosystem. Additionally, mangrove forests function as essential nurseries for many commercially important fish species, providing a sheltered environment rich in nutrients during their early life stages. The decomposition of leaves and organic matter in the water further enhances the nutrient content, supporting overall ecosystem productivity. In summary, mangrove forests play a crucial role in sustaining biodiversity and ecological balance within coastal food webs. 
Larger marine organisms benefit from the habitat as a nursery for their offspring. Lemon sharks depend on mangrove creeks to give birth to their pups. The ecosystem provides little competition and minimizes threats of predation to juvenile lemon sharks as they use the cover of mangroves to practice hunting before entering the food web of the ocean. Mangrove plantations in Vietnam, Thailand, the Philippines, and India host several commercially important species of fish and crustaceans. The mangrove food chain extends beyond the marine ecosystem. Coastal bird species inhabit the tidal ecosystems, feeding off small marine organisms and wetland insects. Common bird families found in mangroves around the world are egrets, kingfishers, herons, and hornbills, among many others depending on the region. Bird predation plays a key role in maintaining prey species along coastlines and within mangrove ecosystems. Mangrove forests can decay into peat deposits because of fungal and bacterial processes as well as by the action of termites. The material becomes peat under favorable geochemical, sedimentary, and tectonic conditions. The nature of these deposits depends on the environment and the types of mangroves involved. In Puerto Rico, the red, white, and black mangroves occupy different ecological niches and have slightly different chemical compositions, so the carbon content varies between the species, as well as between the different tissues of the plant (e.g., leaf matter versus roots). In Puerto Rico, there is a clear succession of these three trees from the lower elevations, which are dominated by red mangroves, to farther inland with a higher concentration of white mangroves. Mangrove forests are an important part of the cycling and storage of carbon in tropical coastal ecosystems. Knowing this, scientists seek to reconstruct the environment and investigate changes to the coastal ecosystem over thousands of years using sediment cores. However, an additional complication is the imported marine organic matter that also gets deposited in the sediment due to the tidal flushing of mangrove forests. Termites play an important role in the formation of peat from mangrove materials. They process fallen leaf litter, root systems and wood from mangroves into peat to build their nests, and stabilise the chemistry of this peat, which represents approximately 2% of above-ground carbon storage in mangroves. As the nests are buried over time this carbon is stored in the sediment and the carbon cycle continues. Mangroves are an important source of blue carbon. Globally, mangroves held a large stock of carbon in 2012. Two percent of global mangrove carbon was lost between 2000 and 2012, equivalent to a substantial potential release of carbon dioxide into Earth's atmosphere. Globally, mangroves have been shown to provide measurable economic protections to coastal communities affected by tropical storms. Mangrove microbiome Plant microbiomes play crucial roles in the health and productivity of mangroves. Many researchers have successfully applied knowledge acquired about plant microbiomes to produce specific inocula for crop protection. Such inocula can stimulate plant growth by releasing phytohormones and enhancing uptake of some mineral nutrients (particularly phosphorus and nitrogen). However, most of the plant microbiome studies have focused on the model plant Arabidopsis thaliana and economically important crop plants, such as rice, barley, wheat, maize and soybean. There is less information on the microbiomes of tree species. 
Plant microbiomes are determined by plant-related factors (e.g., genotype, organ, species, and health status) and environmental factors (e.g., land use, climate, and nutrient availability). Two of the plant-related factors, plant species and genotype, have been shown to play significant roles in shaping rhizosphere and plant microbiomes, as tree genotypes and species are associated with specific microbial communities. Different plant organs also have specific microbial communities depending on plant-associated factors (plant genotype, available nutrients, and organ-specific physicochemical conditions) and environmental conditions (associated with aboveground and underground surfaces and disturbances). Root microbiome Mangrove roots harbour a repertoire of microbial taxa that contribute to important ecological functions in mangrove ecosystems. Like typical terrestrial plants, mangroves depend upon mutually beneficial interactions with microbial communities. In particular, microbes residing in developed roots could help mangroves transform nutrients into usable forms before plant assimilation. These microbes also provide mangroves with phytohormones for suppressing phytopathogens or helping mangroves withstand heat and salinity. In turn, root-associated microbes receive carbon metabolites from the plant via root exudates; thus close associations between the plant and microbes are established for their mutual benefit. At the taxonomic class level, most of the Proteobacteria reported were Gammaproteobacteria, followed by Deltaproteobacteria and Alphaproteobacteria. Gammaproteobacteria, which include orders such as Alteromonadales and Vibrionales, are diverse in function and phylogeny; they are found in marine and coastal regions and are highly abundant in mangrove sediments, where they function as nutrient recyclers. Members of Deltaproteobacteria found in mangrove soil are mostly sulfur-related, consisting of Desulfobacterales, Desulfuromonadales, Desulfovibrionales, and Desulfarculales, among others. Highly diverse microbial communities (mainly bacteria and fungi) have been found to inhabit and function in mangrove roots. For example, diazotrophic bacteria in the vicinity of mangrove roots could perform biological nitrogen fixation, which provides 40–60% of the total nitrogen required by mangroves; the soil attached to mangrove roots lacks oxygen but is rich in organic matter, providing an optimal microenvironment for sulfate-reducing bacteria and methanogens; ligninolytic, cellulolytic, and amylolytic fungi are prevalent in the mangrove root environment; and rhizosphere fungi could help mangroves survive in waterlogged and nutrient-restricted environments. These studies have provided increasing evidence to support the importance of root-associated bacteria and fungi for mangrove growth and health. Recent studies have investigated the detailed structure of root-associated microbial communities at a continuous fine scale in other plants, where a microhabitat was divided into four root compartments: endosphere, episphere, rhizosphere, and nonrhizosphere or bulk soil. Moreover, the microbial communities in each compartment have been reported to have unique characteristics. Root exudates selectively enrich adapted microbial populations; however, these exudates were found to exert only marginal impacts on microbes in the bulk soil outside the rhizosphere. 
Furthermore, it was noted that the root episphere, rather than the rhizosphere, was primarily responsible for controlling the entry of specific microbial populations into the root, resulting in the selective enrichment of Proteobacteria in the endosphere. These findings provide new insights into the niche differentiation of root-associated microbial communities. Nevertheless, amplicon-based community profiling may not reveal the functional characteristics of root-associated microbial communities in plant growth and biogeochemical cycling. Unraveling functional patterns across the four root compartments holds great potential for understanding the mechanisms that mediate root–microbe interactions and enhance mangrove ecosystem functioning. The diversity of bacteria in disturbed mangroves is reported to be higher than in well-preserved mangroves. Studies comparing mangroves in different conservation states show that disturbance alters the bacterial composition of mangrove sediment, and that the changing chemistry of mangrove soils remodels the microbial community toward a new functional equilibrium. Suggestions for future mangrove microbial diversity research Despite many recent advances in metagenomic studies of bacterial diversity in mangrove sediments under various conditions, gaps remain in understanding the relationships between the microbial community (mainly bacteria), nutrient cycles in mangrove sediment, and the direct and indirect impacts on mangrove growth and on stand structures that serve as coastal barriers and provide other ecological services. To help bridge this gap, the systematic review by Lai et al. suggests sampling improvements and a fundamental environmental index for future reference. Mangrove virome Mangrove forests are one of the most carbon-rich biomes, accounting for 11% of the total input of terrestrial carbon into oceans. Viruses are thought to significantly influence local and global biogeochemical cycles, though as of 2019 little information was available about the community structure, genetic diversity and ecological roles of viruses in mangrove ecosystems. Viruses are the most abundant biological entities on earth, present in virtually all ecosystems. By lysing their hosts, that is, by rupturing their cell membranes, viruses control host abundance and affect the structure of host communities. Viruses also influence their host diversity and evolution through horizontal gene transfer, selection for resistance and manipulation of bacterial metabolisms. Importantly, marine viruses affect local and global biogeochemical cycles through the release of substantial amounts of organic carbon and nutrients from hosts and assist microbes in driving biogeochemical cycles with auxiliary metabolic genes (AMGs). It is presumed that AMGs augment the metabolism of virus-infected hosts and facilitate the production of new viruses. AMGs have been extensively explored in marine cyanophages and include genes involved in photosynthesis, carbon turnover, phosphate uptake and stress response. Cultivation-independent metagenomic analysis of viral communities has identified additional AMGs that are involved in motility, central carbon metabolism, photosystem I, energy metabolism, iron–sulphur clusters, anti-oxidation and sulphur and nitrogen cycling. Interestingly, a recent analysis of Pacific Ocean Virome data identified niche-specialised AMGs that contribute to depth-stratified host adaptations. 
Given that microbes drive global biogeochemical cycles, and a large fraction of microbes is infected by viruses at any given time, viral-encoded AMGs must play important roles in global biogeochemistry and microbial metabolic evolution. Mangrove forests are the only woody halophytes that live in salt water along the world's subtropical and tropical coastlines. Mangroves are one of the most productive and ecologically important ecosystems on earth. The rates of primary production of mangroves equal those of tropical humid evergreen forests and coral reefs. As a globally relevant component of the carbon cycle, mangroves sequester approximately 24 million metric tons of carbon each year. Most mangrove carbon is stored in soil and sizable belowground pools of dead roots, aiding in the conservation and recycling of nutrients beneath forests. Although mangroves cover only 0.5% of the earth's coastal area, they account for 10–15% of the coastal sediment carbon storage and 10–11% of the total input of terrestrial carbon into oceans. The disproportionate contribution of mangroves to carbon sequestration is now perceived as an important means to counterbalance greenhouse gas emissions. Despite the ecological importance of mangrove ecosystem, knowledge on mangrove biodiversity is notably limited. Previous reports mainly investigated the biodiversity of mangrove fauna, flora and bacterial communities. Particularly, little information is available about viral communities and their roles in mangrove soil ecosystems. In view of the importance of viruses in structuring and regulating host communities and mediating element biogeochemical cycles, exploring viral communities in mangrove ecosystems is essential. Additionally, the intermittent flooding of sea water and resulting sharp transition of mangrove environments may result in substantially different genetic and functional diversity of bacterial and viral communities in mangrove soils compared with those of other systems. Genome sequencing Rhizophoreae as revealed by whole-genome sequencing
Physical sciences
Forests
null
73469
https://en.wikipedia.org/wiki/Yucca
Yucca
Yucca is a genus of perennial shrubs and trees in the family Asparagaceae, subfamily Agavoideae. Its 40–50 species are notable for their rosettes of evergreen, tough, sword-shaped leaves and large terminal panicles of white or whitish flowers. They are native to the Americas and the Caribbean in a wide range of habitats, from humid rainforest and wet subtropical ecosystems to the hot and dry (arid) deserts and savanna. Early reports of the species were confused with the cassava (Manihot esculenta). Consequently, Linnaeus mistakenly derived the generic name from the Taíno word for the latter, yuca. The Aztecs, who have lived in Mexico since before the Spanish arrival, have their own Nahuatl name for the local yucca species (Yucca gigantea), from which a Spanish common name is derived; the same name is also used for Yucca filifera. Distribution The natural distribution range of the genus Yucca (49 species and 24 subspecies) covers a vast area of the Americas. The genus is represented throughout Mexico and extends into Guatemala (Yucca guatemalensis). It also extends northwards through Baja California in the west into the southwestern United States, and through the drier central states as far north as southern Alberta in Canada (Yucca glauca ssp. albertana). Yucca is also native to the coastal lowlands and dry beach scrub of the southeastern United States, along the Gulf of Mexico and the South Atlantic states from coastal Texas to Maryland. Yuccas have adapted to an equally vast range of climatic and ecological conditions. They are found in rocky deserts and badlands, in prairies and grassland, in mountainous regions, woodlands, in coastal sands (Yucca filamentosa), and even in subtropical and semitemperate zones. Several species occur in humid tropical zones (Yucca lacandonica), but most species occur in arid conditions, with the deserts of North America being regarded as the center of diversity for the genus. Ecology Yuccas have a very specialized, mutualistic pollination system, being pollinated by yucca moths (family Prodoxidae); the insect transfers the pollen from the stamens of one plant to the stigma of another, and at the same time lays an egg in the flower; the moth larva then feeds on some of the developing seeds, always leaving enough seed to perpetuate the species. Certain species of the yucca moth have evolved antagonistic features against the plant. They do not assist in the plant's pollination efforts while continuing to lay their eggs in the plant for protection. Yucca species are the host plants for the caterpillars of the yucca giant-skipper (Megathymus yuccae), ursine giant-skipper (Megathymus ursus), and Strecker's giant-skipper (Megathymus streckeri). Beetle herbivores include yucca weevils, in the Curculionidae. Uses Yuccas are widely grown as ornamental plants in gardens. Many species also bear edible parts, including fruits, seeds, flowers, flowering stems, and (more rarely) roots.
Biology and health sciences
Monocots
null
73592
https://en.wikipedia.org/wiki/Toxoplasmosis
Toxoplasmosis
Toxoplasmosis is a parasitic disease caused by Toxoplasma gondii, an apicomplexan. Infections with T. gondii are associated with a variety of neuropsychiatric and behavioral conditions. Occasionally, people may have a few weeks or months of mild, flu-like illness such as muscle aches and tender lymph nodes. In a small number of people, eye problems may develop. In those with a weak immune system, severe symptoms such as seizures and poor coordination may occur. If a person becomes infected during pregnancy, a condition known as congenital toxoplasmosis may affect the child. Toxoplasmosis is usually spread by eating poorly cooked food that contains cysts, by exposure to infected cat feces, or from an infected woman to her baby during pregnancy. Rarely, the disease may be spread by blood transfusion or organ transplantation. It is not otherwise spread between people. The parasite is only known to reproduce sexually in the cat family. However, it can infect most types of warm-blooded animals, including humans. Diagnosis is typically by testing blood for antibodies or by testing the amniotic fluid in a pregnant patient for the parasite's DNA. Prevention is by properly preparing and cooking food. Pregnant women are also recommended not to clean cat litter boxes or, if they must, to wear gloves and wash their hands afterwards. Treatment of otherwise healthy people is usually not needed. During pregnancy, spiramycin or pyrimethamine/sulfadiazine and folinic acid may be used for treatment. Up to half of the world's population is infected by T. gondii but has no symptoms. In the United States, approximately 11% of people have been infected, while in some areas of the world this is more than 60%. Approximately 200,000 cases of congenital toxoplasmosis occur a year. Charles Nicolle and Louis Manceaux first described the organism in 1908. In 1941, transmission from mother to child during pregnancy was confirmed. There is tentative evidence that otherwise asymptomatic infection may affect people's behavior. Signs and symptoms Infection has three stages: Acute Acute toxoplasmosis is often asymptomatic in healthy adults. However, symptoms may manifest and are often influenza-like: swollen lymph nodes, headaches, fever, and fatigue, or muscle aches and pains that last for a month or more. It is rare for a human with a fully functioning immune system to develop severe symptoms following infection. People with weakened immune systems are likely to experience headache, confusion, poor coordination, seizures, lung problems that may resemble tuberculosis or Pneumocystis jirovecii pneumonia (a common opportunistic infection that occurs in people with AIDS), or chorioretinitis caused by severe inflammation of the retina (ocular toxoplasmosis). Young children and immunocompromised people, such as those with HIV/AIDS, those taking certain types of chemotherapy, or those who have recently received an organ transplant, may develop severe toxoplasmosis. This can cause damage to the brain (encephalitis) or the eyes (necrotizing retinochoroiditis). Infants infected via placental transmission may be born with either of these problems, or with nasal malformations, although these complications are rare in newborns. The toxoplasmic trophozoites causing acute toxoplasmosis are referred to as tachyzoites, and are typically found in various tissues and body fluids, but rarely in blood or cerebrospinal fluid. 
Swollen lymph nodes are commonly found in the neck or under the chin, followed by the armpits and the groin. Swelling may occur at different times after the initial infection, persist, and recur for various times independently of antiparasitic treatment. It is usually found at single sites in adults, but in children, multiple sites may be more common. Enlarged lymph nodes will resolve within 1–2 months in 60% of cases. However, a quarter of those affected take 2–4 months to return to normal, and 8% take 4–6 months. A substantial number (6%) do not return to normal until much later. Latent Due to the absence of obvious symptoms, hosts easily become infected with T. gondii and develop toxoplasmosis without knowing it. Although mild, flu-like symptoms occasionally occur during the first few weeks following exposure, infection with T. gondii produces no readily observable symptoms in healthy human adults. In most immunocompetent people, the infection enters a latent phase, during which only bradyzoites (in tissue cysts) are present; these tissue cysts and even lesions can occur in the retinas, alveolar lining of the lungs (where an acute infection may mimic a Pneumocystis jirovecii infection), heart, skeletal muscle, and the central nervous system (CNS), including the brain. Cysts form in the CNS (brain tissue) upon infection with T. gondii and persist for the lifetime of the host. Most infants who are infected while in the womb have no symptoms at birth, but may develop symptoms later in life. Reviews of serological studies have estimated that 30–50% of the global population has been exposed to and may be chronically infected with latent toxoplasmosis, although infection rates differ significantly from country to country. This latent state of infection has recently been associated with numerous disease burdens, neural alterations, and subtle sex-dependent behavioral changes in immunocompetent humans, as well as an increased risk of motor vehicle collisions. Skin While rare, skin lesions may occur in the acquired form of the disease, including roseola and erythema multiforme-like eruptions, prurigo-like nodules, urticaria, and maculopapular lesions. Newborns may have punctate macules, ecchymoses, or "blueberry muffin" lesions. Diagnosis of cutaneous toxoplasmosis is based on the tachyzoite form of T. gondii being found in the epidermis. It is found in all levels of the epidermis, is about 6 by 2 μm and bow-shaped, with the nucleus being one-third of its size. It can be identified by electron microscopy or by Giemsa staining tissue where the cytoplasm shows blue, the nucleus red. Cause Parasitology In its lifecycle, T. gondii adopts several forms. Tachyzoites are responsible for acute infection; they divide rapidly and spread through the tissues of the body. Tachyzoites are also known as "tachyzoic merozoites", a descriptive term that conveys more precisely the parasitological nature of this stage. After proliferating, tachyzoites convert into bradyzoites, which are inside latent intracellular tissue cysts that form mainly in the muscles and brain. The formation of cysts is in part triggered by the pressure of the host immune system. The bradyzoites (also called "bradyzoic merozoites") are not responsive to antibiotics. Bradyzoites, once formed, can remain in the tissues for the lifespan of the host. In a healthy host, if some bradyzoites convert back into active tachyzoites, the immune system will quickly destroy them. 
However, in immunocompromised individuals, or in fetuses, which lack a developed immune system, the tachyzoites can run rampant and cause significant neurological damage. The parasite's survival is dependent on a balance between host survival and parasite proliferation. T. gondii achieves this balance by manipulating and dampening the host's immune response while enhancing its own reproductive advantage. Once it infects a normal host cell, it resists damage caused by the host's immune system, and changes the host's immune processes. As it forces its way into the host cell, the parasite forms a parasitophorous vacuole (PV) membrane from the membrane of the host cell. The PV encapsulates the parasite; it is resistant to the activity of the endolysosomal system and can take control of the host's mitochondria and endoplasmic reticulum. When first invading the cell, the parasite releases ROP proteins from the bulb of the rhoptry organelle. These proteins translocate to the nucleus and the surface of the PV membrane, where they can activate STAT pathways to modulate the expression of cytokines at the transcriptional level, and bind and inactivate the IRG proteins that would otherwise destroy the PV membrane, among other possible effects. Additionally, certain strains of T. gondii can secrete a protein known as GRA15, activating the NF-κB pathway, which upregulates the pro-inflammatory cytokine IL-12 in the early immune response, possibly leading to the parasite's latent phase. The parasite's ability to secrete these proteins depends on its genotype and affects its virulence. The parasite also influences an anti-apoptotic mechanism, allowing the infected host cells to persist and replicate. One method of resisting apoptosis is disrupting pro-apoptosis effector proteins, such as BAX and BAK. T. gondii causes conformational changes in these proteins, which prevent them from being transported to the various cellular compartments where they initiate apoptosis events. T. gondii does not, however, cause downregulation of the pro-apoptosis effector proteins. T. gondii also has the ability to initiate autophagy of the host's cells. This leads to a decrease in healthy, uninfected cells, and consequently fewer host cells to attack the infected cells. Research by Wang et al. found that infection leads to higher levels of autophagosomes in both normal and infected cells. Their research reveals that T. gondii causes host cell autophagy using a calcium-dependent pathway. Another study suggests that the parasite can directly affect calcium being released from calcium stores, which are important for the signalling processes of cells. The mechanisms above allow T. gondii to persist in a host. One limiting factor for the parasite is that its influence on host cells is stronger in a weak immune system and is quantity-dependent, so a larger number of T. gondii per host cell causes a more severe effect. The effect on the host also depends on the strength of the host immune system. Immunocompetent individuals normally show few or no symptoms, while severe complications or death can result in immunocompromised individuals. T. gondii has been shown to produce a protein called GRA28, released by the MYR1 secretory pathway, which interferes with gene expression in infected cells and results in cells that behave like dendritic cells, becoming highly mobile in the body. 
Since the parasite can change the host's immune response, it may also have an effect, positive or negative, on the immune response to other pathogenic threats. This includes, but is not limited to, the responses to infections by Helicobacter felis, Leishmania major, or other parasites, such as Nippostrongylus brasiliensis. Transmission Toxoplasmosis is generally transmitted through the mouth when Toxoplasma gondii oocysts or tissue cysts are accidentally eaten. Congenital transmission from mother to fetus can also occur. Transmission may also occur during solid organ transplantation or hematopoietic stem cell transplantation. Oral transmission may occur through: Ingestion of raw or partly cooked meat, especially pork, lamb, or venison containing Toxoplasma cysts: Infection prevalence in countries where undercooked meat is traditionally eaten has been related to this transmission method. Tissue cysts may also be ingested during hand-to-mouth contact after handling undercooked meat, or from using knives, utensils, or cutting boards contaminated by raw meat. Ingestion of unwashed fruit or vegetables that have been in contact with contaminated soil containing infected cat feces. Ingestion of cat feces containing oocysts: This can occur through hand-to-mouth contact following gardening, cleaning a cat's litter box, contact with children's sandpits; the parasite can survive in the environment for months. Ingestion of untreated, unfiltered water through direct consumption or utilization of water for food preparation. Ingestion of unpasteurized milk and milk products, particularly goat's milk. Ingestion of raw seafood. Cats excrete the pathogen in their feces for a number of weeks after contracting the disease, generally by eating an infected intermediate host such as a mammal (for example, a rodent) or a bird. Oocyst shedding usually starts from the third day after ingestion of infected intermediate hosts, and may continue for weeks. The oocysts are not infective when excreted. After about a day, the oocyst undergoes a process called sporulation and becomes potentially pathogenic. In addition to cats, birds and mammals, including human beings, are also intermediate hosts of the parasite and are involved in the transmission process. However, the pathogenicity varies with the age and species involved in infection and with the mode of transmission of T. gondii. Toxoplasmosis may also be transmitted through solid organ transplants. Toxoplasma-seronegative recipients who receive organs from recently infected Toxoplasma-seropositive donors are at risk. Organ recipients who have latent toxoplasmosis are at risk of the disease reactivating in their system due to the immunosuppression occurring during solid organ transplant. Recipients of hematopoietic stem cell transplants may experience higher risk of infection due to longer periods of immunosuppression. Heart and lung transplants carry the highest risk of toxoplasmosis infection because the striated muscle making up the heart can contain cysts; risks for other organs and tissues vary widely. Risk of transmission can be reduced by screening donors and recipients prior to the transplant procedure and providing treatment. Pregnancy precautions Congenital toxoplasmosis is a specific form of toxoplasmosis in which an unborn fetus is infected via the placenta. 
Congenital toxoplasmosis is associated with fetal death and miscarriage, and in infants, it is associated with hydrocephalus, cerebral calcifications and chorioretinitis, leading to encephalopathy and possibly blindness. If a woman receives her first exposure to T. gondii while pregnant, the fetus is at particular risk. A simple blood draw at the first prenatal doctor visit can determine whether or not a woman has had previous exposure and therefore whether or not she is at risk. A positive antibody titer indicates previous exposure and immunity, and largely ensures the safety of the unborn fetus. Not much evidence exists on the effect of education before pregnancy in preventing congenital toxoplasmosis. However, educating parents before the baby is born has been suggested to be effective because it may improve food, personal, and pet hygiene. More research is needed to find whether antenatal education can reduce congenital toxoplasmosis. For pregnant women with negative antibody titers, indicating no previous exposure to T. gondii, serological testing as frequently as monthly is advisable, as treatment during pregnancy for those women exposed to T. gondii for the first time dramatically decreases the risk of passing the parasite to the fetus. Since a baby's immune system does not develop fully for the first year of life, and the resilient cysts that form throughout the body are very difficult to eradicate with antiprotozoal drugs, an infection can be very serious in the young. Despite these risks, pregnant women are not routinely screened for toxoplasmosis in most countries, for reasons of cost-effectiveness and the high number of false positives generated; Portugal, France, Austria, Uruguay, and Italy are notable exceptions, and some regional screening programmes operate in Germany, Switzerland and Belgium. As invasive prenatal testing incurs some risk to the fetus (18.5 pregnancy losses per toxoplasmosis case prevented), postnatal or neonatal screening is preferred. The exceptions are cases where fetal abnormalities are noted, and thus screening can be targeted. Pregnant women should avoid handling raw meat, drinking raw milk (especially goat milk), and be advised to not eat raw or undercooked meat regardless of type. Because of the obvious relationship between Toxoplasma and cats, it is also often advised to avoid exposure to cat feces and to refrain from gardening (cat feces are common in garden soil), or at least to wear gloves when so engaged. Most cats are not actively shedding oocysts, since they get infected in the first six months of their life, when they shed oocysts for a short period of time (1–2 weeks). However, these oocysts get buried in the soil, sporulate and remain infectious for periods ranging from several months to more than a year. Numerous studies have shown living in a household with a cat is not a significant risk factor for T. gondii infection, though living with several kittens has some significance. In 2006, a Czech research team discovered that women with high levels of toxoplasmosis antibodies were significantly more likely to give birth to baby boys than baby girls. In most populations, the birth rate is around 51% boys, but people infected with T. gondii had up to a 72% chance of a boy. Diagnosis Toxoplasmosis in humans is diagnosed through biological, serological, histological, or molecular methods, or by some combination of the above. Toxoplasmosis can be difficult to distinguish from primary central nervous system lymphoma. 
Its symptoms mimic several other infectious diseases, so clinical signs are non-specific and are not sufficiently characteristic for a definite diagnosis. A failed trial of antimicrobial therapy (pyrimethamine, sulfadiazine, and folinic acid (USAN: leucovorin)) makes an alternative diagnosis more likely. T. gondii may also be detected in blood, amniotic fluid, or cerebrospinal fluid by using polymerase chain reaction. T. gondii may exist in a host as an inactive cyst that would likely evade detection. Serological testing can detect T. gondii antibodies in blood serum, using methods including the Sabin–Feldman dye test (DT), the indirect hemagglutination assay, the indirect fluorescent antibody assay (IFA), the direct agglutination test, the latex agglutination test (LAT), the enzyme-linked immunosorbent assay (ELISA), and the immunosorbent agglutination assay test (IAAT). The most commonly used tests to measure IgG antibody are the DT, the ELISA, the IFA, and the modified direct agglutination test. IgG antibodies usually appear within a week or two of infection, peak within one to two months, then decline at various rates. Toxoplasma IgG antibodies generally persist for life, and therefore may be present in the bloodstream as a result of either current or previous infection. To some extent, acute toxoplasmosis infections can be differentiated from chronic infections using an IgG avidity test, which is a variation on the ELISA. In the first response to infection, toxoplasma-specific IgG has a low affinity for the toxoplasma antigen; in the following weeks and months, IgG affinity for the antigen increases. Based on the IgG avidity test, if the IgG in the infected individual has a high affinity, it means that the infection began at least three to five months before testing. This is particularly useful in congenital infection, where pregnancy status and gestational age at time of infection determine treatment. In contrast to IgG, IgM antibodies can be used to detect acute infection but generally not chronic infection. The IgM antibodies appear sooner after infection than the IgG antibodies and disappear faster than IgG antibodies after recovery. In most cases, T. gondii-specific IgM antibodies can first be detected approximately a week after acquiring primary infection and decrease within one to six months; 25% of those infected are negative for T. gondii-specific IgM within seven months. However, IgM may be detectable months or years after infection, during the chronic phase, and false positives for acute infection are possible. The most commonly used tests for the measurement of IgM antibody are the double-sandwich IgM-ELISA, the IFA test, and the immunosorbent agglutination assay (IgM-ISAGA). Commercial test kits often have low specificity, and the reported results are frequently misinterpreted. In 2021, twenty commercial anti-Toxoplasma IgG assays were evaluated in a systematic review, in comparison with an accepted reference method. Most of them were enzyme immunoassays, followed by agglutination tests, immunochromatographic tests, and a Western blot assay. The mean sensitivity of IgG assays ranged from 89.7% to 100% for standard titers and from 13.4% to 99.2% for low IgG titers. A few studies pointed out the ability of some methods, especially WB, to detect IgG early after primary infection. The specificity of IgG assays was generally high, ranging from 91.3% to 100%, and was higher than 99% for most EIA assays. 
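For reference, these diagnostic performance measures follow the standard definitions used in test evaluation; the summary below is a minimal restatement in terms of the counts of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN), not figures taken from the cited review:
sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)
PPV = TP / (TP + FP)
NPV = TN / (TN + FN)
Unlike sensitivity and specificity, the predictive values PPV and NPV also depend on the prevalence of infection in the population being tested, which is why they are interpreted with respect to the specific group of patients at risk.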
The positive predictive value (PPV) was not a discriminant indicator among methods, whereas significant disparities (87.5–100%) were reported among negative predictive values (NPV), a key parameter assessing the ability to definitively rule out a Toxoplasma infection in patients at risk for opportunistic infections. Congenital Recommendations for the diagnosis of congenital toxoplasmosis include: prenatal diagnosis based on testing of amniotic fluid and ultrasound examinations; neonatal diagnosis based on molecular testing of placenta and cord blood, comparative mother-child serologic tests, and a clinical examination at birth; and early childhood diagnosis based on neurologic and ophthalmologic examinations and a serologic survey during the first year of life. During pregnancy, serological testing is recommended at three-week intervals. Even though diagnosis of toxoplasmosis heavily relies on serological detection of specific anti-Toxoplasma immunoglobulin, serological testing has limitations. For example, it may fail to detect the active phase of T. gondii infection because the specific anti-Toxoplasma IgG or IgM may not be produced until after several weeks of infection. As a result, a pregnant woman might test negative during the active phase of T. gondii infection, leading to undetected and therefore untreated congenital toxoplasmosis. Also, the test may not detect T. gondii infections in immunocompromised patients because the titers of specific anti-Toxoplasma IgG or IgM may not rise in this type of patient. Many PCR-based techniques have been developed to diagnose toxoplasmosis using clinical specimens that include amniotic fluid, blood, cerebrospinal fluid, and tissue biopsy. The most sensitive PCR-based technique is nested PCR, followed by hybridization of PCR products. The major downside to these techniques is that they are time-consuming and do not provide quantitative data. Real-time PCR is useful in pathogen detection, gene expression and regulation, and allelic discrimination. This PCR technique utilizes the 5' nuclease activity of Taq DNA polymerase to cleave a nonextendible, fluorescence-labeled hybridization probe during the extension phase of PCR. A second fluorescent dye, e.g., 6-carboxy-tetramethyl-rhodamine, quenches the fluorescence of the intact probe. The nuclease cleavage of the hybridization probe during the PCR releases the effect of quenching, resulting in an increase of fluorescence proportional to the amount of PCR product, which can be monitored by a sequence detector. Lymph nodes affected by Toxoplasma have characteristic changes, including poorly demarcated reactive germinal centers, clusters of monocytoid B cells, and scattered epithelioid histiocytes. The classic triad of congenital toxoplasmosis comprises chorioretinitis, hydrocephalus, and intracranial calcifications. Other consequences include sensorineural deafness, seizures, and intellectual disability. Congenital toxoplasmosis may also impact a child's hearing. Up to 30% of newborns have some degree of sensorineural hearing loss. The child's communication skills may also be affected. A study published in 2010 looked at 106 patients, all of whom received toxoplasmosis treatment prior to 2.5 months of age. Of this group, 26.4% presented with language disorders. Treatment Treatment is recommended for people with serious health problems, such as people with HIV whose CD4 counts are under 200 cells/mm3. 
Trimethoprim/sulfamethoxazole is the drug of choice to prevent toxoplasmosis, but not for treating active disease. A 2012 study shows a promising new way to treat the active and latent forms of this disease using two endochin-like quinolones. Acute The medications prescribed for acute toxoplasmosis are the following: pyrimethamine, an antimalarial medication; sulfadiazine, an antibiotic used in combination with pyrimethamine to treat toxoplasmosis (combination therapy is usually given with folinic acid (leucovorin) supplements to reduce the incidence of thrombocytopaenia, and is most useful in the setting of HIV); clindamycin; and spiramycin, an antibiotic used most often for pregnant women to prevent the infection of their children (other antibiotics, such as minocycline, have seen some use as a salvage therapy). If infected during pregnancy, spiramycin is recommended in the first and early second trimesters, while pyrimethamine/sulfadiazine and leucovorin are recommended in the late second and third trimesters. Latent In people with latent toxoplasmosis, the cysts are immune to these treatments, as the antibiotics do not reach the bradyzoites in sufficient concentration. The medications prescribed for latent toxoplasmosis are atovaquone, an antibiotic that has been used to kill Toxoplasma cysts in AIDS patients, and clindamycin, an antibiotic that, in combination with atovaquone, seemed to optimally kill cysts in mice. Congenital When a pregnant woman is diagnosed with acute toxoplasmosis, amniocentesis can be used to determine whether the fetus has been infected or not. When a pregnant woman develops acute toxoplasmosis, the tachyzoites have approximately a 30% chance of entering the placental tissue, and from there entering and infecting the fetus. As gestational age at the time of infection increases, the chance of fetal infection also increases. If the parasite has not yet reached the fetus, spiramycin can help to prevent placental transmission. If the fetus has been infected, the pregnant woman can be treated with pyrimethamine and sulfadiazine, with folinic acid, after the first trimester. They are treated after the first trimester because pyrimethamine has an antifolate effect, and lack of folic acid can interfere with fetal brain formation and cause thrombocytopaenia. Infection in earlier gestational stages correlates with poorer fetal and neonatal outcomes, particularly when the infection is untreated. Newborns who undergo 12 months of postnatal anti-toxoplasmosis treatment have a low chance of sensorineural hearing loss. Information regarding treatment milestones for children with congenital toxoplasmosis has been created for this group. Epidemiology T. gondii infections occur throughout the world, although infection rates differ significantly by country. For women of childbearing age, a survey of 99 studies within 44 countries found the areas of highest prevalence are within Latin America (about 50–80%), parts of Eastern and Central Europe (about 20–60%), the Middle East (about 30–50%), parts of Southeast Asia (about 20–60%), and parts of Africa (about 20–55%). In the United States, data from the National Health and Nutrition Examination Survey (NHANES) from 1999 to 2004 found 9.0% of US-born persons 12–49 years of age were seropositive for IgG antibodies against T. gondii, down from 14.1% as measured in the NHANES 1988–1994. In the 1999–2004 survey, 7.7% of US-born and 28.1% of foreign-born women 15–44 years of age were T. gondii seropositive.
A trend of decreasing seroprevalence has been observed by numerous studies in the United States and many European countries. Toxoplasma gondii is considered the second leading cause of foodborne-related deaths and the fourth leading cause of foodborne-related hospitalizations in the United States. The protist responsible for toxoplasmosis is T. gondii. Three major types of T. gondii are responsible for the patterns of toxoplasmosis throughout the world, named types I, II, and III. These three types of T. gondii have differing effects on certain hosts, mainly mice and humans, due to variation in their genotypes. Type I: virulent in mice and humans, seen in people with AIDS. Type II: non-virulent in mice, virulent in humans (mostly Europe and North America), seen in people with AIDS. Type III: non-virulent in mice, virulent mainly in animals but seen to a lesser degree in humans as well. Current serotyping techniques can only separate type I or III from type II parasites. Because the parasite poses a particular threat to fetuses when it is contracted during pregnancy, much of the global epidemiological data regarding T. gondii comes from seropositivity tests in women of childbearing age. Seropositivity tests look for the presence of antibodies against T. gondii in blood, so while seropositivity guarantees one has been exposed to the parasite, it does not necessarily guarantee one is chronically infected. History Toxoplasma gondii was first described in 1908 by Nicolle and Manceaux in Tunisia, and independently by Splendore in Brazil. Splendore reported the protozoan in a rabbit, while Nicolle and Manceaux identified it in a North African rodent, the gundi (Ctenodactylus gundi). In 1909 Nicolle and Manceaux differentiated the protozoan from Leishmania. Nicolle and Manceaux then named it Toxoplasma gondii after the curved shape of its infectious stage (from the Greek root toxon, meaning "bow"). The first recorded case of congenital toxoplasmosis was in 1923, but it was not identified as caused by T. gondii. Janků (1923) described in detail the autopsy results of an 11-month-old boy who had presented to hospital with hydrocephalus. The boy had classic marks of toxoplasmosis including chorioretinitis (inflammation of the choroid and retina of the eye). Histology revealed a number of "sporocytes", though Janků did not identify these as T. gondii. It was not until 1937 that the first detailed scientific analysis of T. gondii took place using techniques previously developed for analyzing viruses. In 1937 Sabin and Olitsky analyzed T. gondii in laboratory monkeys and mice. Sabin and Olitsky showed that T. gondii was an obligate intracellular parasite and that mice fed T. gondii-contaminated tissue also contracted the infection. Thus Sabin and Olitsky demonstrated T. gondii as a pathogen transmissible between animals. T. gondii was first described as a human pathogen in 1939 at Babies Hospital in New York City. Wolf, Cowen and Paige identified T. gondii infection in an infant girl delivered full-term by Caesarean section. The infant developed seizures and had chorioretinitis in both eyes at three days. The infant then developed encephalomyelitis and died at one month of age. Wolf, Cowen and Paige isolated T. gondii from brain tissue lesions. Intracranial injection of brain and spinal cord samples into mice, rabbits and rats produced encephalitis in the animals. Wolf, Cowen and Paige reviewed additional cases and concluded that T. gondii produced recognizable symptoms and could be transmitted from mother to child.
The first adult case of toxoplasmosis, with no neurological signs, was reported in 1940. Pinkerton and Weinman reported the presence of Toxoplasma in a 22-year-old man from Peru who died from a subsequent bacterial infection and fever. In 1948, a serological dye test was created by Sabin and Feldman based on the ability of the patient's antibodies to alter staining of Toxoplasma. The Sabin–Feldman dye test is now the gold standard for identifying Toxoplasma infection. Transmission of Toxoplasma by eating raw or undercooked meat was demonstrated by Desmonts et al. in Paris in 1965. Desmonts observed that the therapeutic consumption of raw beef or horse meat in a tuberculosis hospital was associated with a 50% per year increase in Toxoplasma antibodies, indicating that T. gondii was being transmitted through the raw meat. In 1974, Desmonts and Couvreur showed that infection during the first two trimesters produces the most harm to the fetus, that transmission depended on when mothers were infected during pregnancy, that mothers with antibodies before pregnancy did not transmit the infection to the fetus, and that spiramycin lowered the transmission to the fetus. Toxoplasma gained more attention in the 1970s with the rise of immune-suppressant treatment given after organ or bone marrow transplants and the AIDS epidemic of the 1980s. Patients with lowered immune system function are much more susceptible to disease. Society and culture "Crazy cat-lady" "Crazy cat-lady syndrome" is a term coined by news organizations to describe scientific findings that link the parasite Toxoplasma gondii to several mental disorders and behavioral problems. The suspected correlation between cat ownership in childhood and later development of schizophrenia suggested that further studies were needed to determine a risk factor for children; however, a later study found that childhood cat ownership was not predictive of psychotic experiences at ages 13 or 18. Researchers also found that cat ownership does not strongly increase the risk of a T. gondii infection in pregnant women. The term crazy cat-lady syndrome draws on both stereotype and popular cultural reference. It originated as instances of the aforementioned afflictions were noted amongst the populace. A cat lady is a cultural stereotype of a woman who compulsively hoards and dotes upon cats. The biologist Jaroslav Flegr is a proponent of the theory that toxoplasmosis affects human behaviour. Notable cases Tennis player Arthur Ashe developed neurological problems from toxoplasmosis (and was later found to be HIV-positive). Actor Merritt Butrick was HIV-positive and died from toxoplasmosis as a result of his already-weakened immune system. Pedro Zamora, reality television personality and HIV/AIDS activist, was diagnosed with toxoplasmosis as a result of his immune system being weakened by HIV. Prince François, Count of Clermont, pretender to the throne of France, had congenital toxoplasmosis; his disability caused him to be overlooked in the line of succession. Actress Leslie Ash contracted toxoplasmosis in the second month of pregnancy. British middle-distance runner Sebastian Coe contracted toxoplasmosis in 1983, which was probably transmitted by a cat while he trained in Italy. Tennis player Martina Navratilova experienced toxoplasmosis during the 1982 US Open.
Rates of infection in populations of the same species can also vary widely due to differences in location, diet, and other factors. Although infection with T. gondii has been noted in several species of Asian primates, seroprevalence of T. gondii antibodies was found for the first time in toque macaques (Macaca sinica), which are endemic to the island of Sri Lanka. Australian marsupials are particularly susceptible to toxoplasmosis. Wallabies, koalas, wombats, pademelons and small dasyurids can be killed by it, with eastern barred bandicoots typically dying within about 3 weeks of infection. It is estimated that 23% of wild swine worldwide are seropositive for T. gondii. Seroprevalence varies across the globe, with the highest seroprevalence in North America (32%) and Europe (26%) and the lowest in Asia (13%) and South America (5%). Geographical regions located at higher latitudes and regions that experience warmer, humid climates are associated with increased seroprevalence of T. gondii among wild boar. Wild boar infected with T. gondii pose a potential health risk for humans who consume their meat. Livestock Among livestock, pigs, sheep and goats have the highest rates of chronic T. gondii infection. The prevalence of T. gondii in meat-producing animals varies widely both within and among countries, and rates of infection have been shown to be dramatically influenced by varying farming and management practices. For instance, animals kept outdoors or in free-ranging environments are more at risk of infection than animals raised indoors or in commercial confinement operations. Pigs Worldwide, the percentage of pigs harboring viable parasites has been measured to be 3–71.43%, and in the United States (via bioassay in mice or cats) to be as high as 92.7% and as low as 0%, depending on the farm or herd. Surveys of seroprevalence (T. gondii antibodies in blood) are more common, and such measurements are indicative of the high relative seroprevalence in pigs across the world. Neonatal piglets have been found to experience the entire range of severity, including progression to stillbirth. This was especially demonstrated in a foundational 2006 report by Thiptara et al., describing a litter of three stillborn and six live piglets in Thailand. This observation has been relevant not only to that country but to toxoplasmosis control in porciculture around the world. Sheep Along with pigs, sheep and goats are among the most commonly infected livestock of epidemiological significance for human infection. Prevalence of viable T. gondii in sheep tissue has been measured (via bioassay) to be as high as 78% in the United States, and a 2011 survey of goats intended for consumption in the United States found a seroprevalence of 53.4%. A single live attenuated vaccine, Toxovax, is currently available to mitigate the negative impacts of congenital toxoplasmosis on the sheep industry. Chickens Due to a lack of exposure to the outdoors, chickens raised in large-scale indoor confinement operations are not commonly infected with T. gondii. Free-ranging or backyard-raised chickens are much more commonly infected. A survey of free-ranging chickens in the United States found their prevalence to be 17–100%, depending on the farm. Because chicken meat is generally cooked thoroughly before consumption, poultry is not generally considered to be a significant risk factor for human T. gondii infection. Cattle Although cattle and buffalo can be infected with T.
gondii, the parasite is generally eliminated or reduced to undetectable levels within a few weeks following exposure. Tissue cysts are rarely present in buffalo meat or beef, and meat from these animals is considered to be low-risk for harboring viable parasites. Horses Horses are considered resistant to chronic T. gondii infection. However, viable cells have been isolated from US horses slaughtered for export, and severe human toxoplasmosis in France has been epidemiologically linked to the consumption of horse meat. Domestic cats In 1942, the first case of feline toxoplasmosis was diagnosed and reported in a domestic cat in Middletown, New York. The investigators isolated oocysts from feline feces and found that the oocysts could be infectious for up to 12 months in the environment. The seroprevalence of T. gondii in domestic cats worldwide has been estimated to be around 30–40% and exhibits significant geographical variation. In the United States, no official national estimate has been made, but local surveys have shown levels varying between 16% and 80%. A 2012 survey of 445 purebred pet cats and 45 shelter cats in Finland found an overall seroprevalence of 48.4%, while a 2010 survey of feral cats from Giza, Egypt found a seroprevalence rate of 97.4%. Another survey from Colombia recorded seroprevalence of 89.3%, whereas a Chinese (Guangdong) study found just a 2.1% prevalence. T. gondii infection rates in domestic cats vary widely depending on the cats' diets and lifestyles. Feral cats that hunt for their food are more likely to be infected than domestic cats, and infection rates naturally also depend on the prevalence of T. gondii-infected prey such as birds and small mammals. Most infected cats will shed oocysts in their feces only once in their lifetime, typically for 3–10 days after infection. This shedding can release millions of oocysts, each capable of spreading and surviving for months. After infection, most cats will develop antibodies to T. gondii and will no longer shed oocysts. An estimated 1% of cats at any given time are actively shedding oocysts. Because there is no approved vaccine, it is difficult to control oocyst shedding in the cat population; the control programs that are readily available are of questionable efficacy. Research into feline vaccines for toxoplasmosis is ongoing, with several candidates showing positive results in clinical trials. Current methods of control of T. gondii in cats typically rely on preventing them from hunting (where they might acquire the parasite), not allowing the cat to consume raw meat, and maintaining good hygiene around litter boxes to minimize environmental oocyst contamination. Rodents Infection with T. gondii has been shown to alter the behavior of mice and rats in ways thought to increase the rodents' chances of being preyed upon by cats. Infected rodents show a reduction in their innate aversion to cat odors; while uninfected mice and rats will generally avoid areas marked with cat urine or with cat body odor, this avoidance is reduced or eliminated in infected animals. Moreover, some evidence suggests this loss of aversion may be specific to feline odors: when given a choice between two predator odors (cat or mink), infected rodents show a significantly stronger preference for cat odors than do uninfected controls. In rodents, T.
gondii–induced behavioral changes occur through epigenetic remodeling in neurons associated with observed behaviors; for example, it modifies epigenetic methylation to induce hypomethylation of arginine vasopressin-related genes in the medial amygdala to greatly decrease predator aversion. Similar epigenetically induced behavioral changes have also been observed in mouse models of addiction, where changes in the expression of histone-modifying enzymes via gene knockout or enzyme inhibition in specific neurons produced alterations in drug-related behaviors. Widespread histone–lysine acetylation in cortical astrocytes appears to be another epigenetic mechanism employed by T. gondii. T. gondii-infected rodents show a number of behavioral changes beyond altered responses to cat odors. Rats infected with the parasite show increased levels of activity and decreased neophobic behavior. Similarly, infected mice show alterations in patterns of locomotion and exploratory behavior during experimental tests. These patterns include traveling greater distances, moving at higher speeds, accelerating for longer periods of time, and showing a decreased pause-time when placed in new arenas. Infected rodents have also been shown to have lower anxiety, using traditional models such as elevated plus mazes, open field arenas, and social interaction tests. Marine mammals A University of California, Davis study of dead sea otters collected from 1998 to 2004 found toxoplasmosis was the cause of death for 13% of the animals. Proximity to freshwater outflows into the ocean was a major risk factor. Ingestion of oocysts from cat feces is considered to be the most likely ultimate source. Surface runoff containing wild cat feces and litter from domestic cats flushed down toilets are possible sources of oocysts. These same sources may have also introduced the toxoplasmosis infection to the endangered Hawaiian monk seal. Infection with the parasite has contributed to the death of at least four Hawaiian monk seals. A Hawaiian monk seal's infection with T. gondii was first noted in 2004. The parasite's spread threatens the recovery of this highly endangered pinniped. The parasites have been found in numerous cetacean species, such as the bottlenose dolphin, spinner dolphin, Risso's dolphin, Indo-Pacific humpback dolphin, striped dolphin, the beluga whale, and the critically endangered Māui dolphin and Hector's dolphin. A 2011 study of 161 Pacific Northwest marine mammals ranging from a sperm whale to harbor porpoises that had either become stranded or died found that 42 percent tested positive for both T. gondii and S. neurona. Researchers Black and Massie believe anchovies, which travel from estuaries into the open ocean, may be helping to spread the disease. Giant panda Toxoplasma gondii has been reported as the cause of death of a giant panda kept in a zoo in China, who died in 2014 of acute gastroenteritis and respiratory disease. Although seemingly anecdotal, this report emphasizes that all warm-blooded species are likely to be infected by T. gondii, including endangered species such as the giant panda. Research Chronic infection with T. gondii has traditionally been considered asymptomatic in people with normal immune function. Some evidence suggests latent infection may subtly influence a range of human behaviors and tendencies, and infection may alter the susceptibility to or intensity of a number of psychiatric or neurological disorders. 
In most of the current studies where positive correlations have been found between T. gondii antibody titers and certain behavioral traits or neurological disorders, T. gondii seropositivity tests are conducted after the onset of the examined disease or behavioral trait; that is, it is often unclear whether infection with the parasite increases the chances of having a certain trait or disorder, or if having a certain trait or disorder increases the chances of becoming infected with the parasite. Groups of individuals with certain behavioral traits or neurological disorders may share certain behavioral tendencies that increase the likelihood of exposure to and infection with T. gondii; as a result, it is difficult to confirm causal relationships between T. gondii infections and associated neurological disorders or behavioral traits. Mental health Some evidence links T. gondii to schizophrenia. Two 2012 meta-analyses found that the rates of antibodies to T. gondii in people with schizophrenia were 2.7 times higher than in controls. T. gondii antibody positivity was therefore considered an intermediate risk factor in relation to other known risk factors. Cautions noted include that the antibody tests do not detect toxoplasmosis directly, most people with schizophrenia do not have antibodies for toxoplasmosis, and publication bias might exist. While the majority of these studies tested people already diagnosed with schizophrenia for T. gondii antibodies, associations between T. gondii and schizophrenia have been found prior to the onset of schizophrenia symptoms. Sex differences in the age of schizophrenia onset may be explained in part by a second peak of T. gondii infection incidence during ages 25–30 in females only. Although a mechanism supporting the association between schizophrenia and T. gondii infection is unclear, studies have investigated a molecular basis of this correlation. Antipsychotic drugs used in schizophrenia appear to inhibit the replication of T. gondii tachyzoites in cell culture. Supposing a causal link exists between T. gondii and schizophrenia, studies have yet to determine why only some individuals with latent toxoplasmosis develop schizophrenia; some plausible explanations include differing genetic susceptibility, parasite strain differences, and differences in the route of the acquired T. gondii infection. Correlations have also been found between antibody titers to T. gondii and OCD, as well as suicide among people with mood disorders including bipolar disorder. Positive antibody titers to T. gondii appear to be uncorrelated with major depression or dysthymia. Although there is a correlation between T. gondii and many psychological disorders, the underlying mechanism is unclear. A 2016 study of 236 persons with high levels of toxoplasmosis antibodies found that "there was little evidence that T. gondii was related to increased risk of psychiatric disorder, poor impulse control, personality aberrations or neurocognitive impairment". Neurological disorders Latent infection has been linked to Parkinson's disease and Alzheimer's disease. Individuals with multiple sclerosis show infection rates around 15% lower than the general public. Traffic accidents Latent T. gondii infection in humans has been associated with a higher risk of automobile accidents, potentially due to impaired psychomotor performance or enhanced risk-taking personality profiles. Climate change Climate change has been reported to affect the occurrence, survival, distribution and transmission of T. 
gondii. T. gondii has been identified in the Canadian Arctic, a location that was once too cold for its survival. Higher temperatures increase the survival time of T. gondii. More snowmelt and precipitation can increase the number of T. gondii oocysts that are transported via river flow. Shifts in bird, rodent, and insect populations and migration patterns can impact the distribution of T. gondii due to their roles as reservoirs and vectors. Urbanization and natural environmental degradation are also suggested to affect T. gondii transmission and increase the risk of infection.
Biology and health sciences
Protozoan infections
Health
73614
https://en.wikipedia.org/wiki/Exocytosis
Exocytosis
Exocytosis () is a form of active transport and bulk transport in which a cell transports molecules (e.g., neurotransmitters and proteins) out of the cell (exo- + cytosis). As an active transport mechanism, exocytosis requires the use of energy to transport material. Exocytosis and its counterpart, endocytosis, are used by all cells because most chemical substances important to them are large polar molecules that cannot pass through the hydrophobic portion of the cell membrane by passive means. Exocytosis is the process by which a large amount of molecules are released; thus it is a form of bulk transport. Exocytosis occurs via secretory portals at the cell plasma membrane called porosomes. Porosomes are permanent cup-shaped lipoprotein structures at the cell plasma membrane, where secretory vesicles transiently dock and fuse to release intra-vesicular contents from the cell. In exocytosis, membrane-bound secretory vesicles are carried to the cell membrane, where they dock and fuse at porosomes and their contents (i.e., water-soluble molecules) are secreted into the extracellular environment. This secretion is possible because the vesicle transiently fuses with the plasma membrane. In the context of neurotransmission, neurotransmitters are typically released from synaptic vesicles into the synaptic cleft via exocytosis; however, neurotransmitters can also be released via reverse transport through membrane transport proteins. Exocytosis is also a mechanism by which cells are able to insert membrane proteins (such as ion channels and cell surface receptors), lipids, and other components into the cell membrane. Vesicles containing these membrane components fully fuse with and become part of the outer cell membrane. History The term was proposed by De Duve in 1963. Types In eukaryotes, there are two types of exocytosis: 1) Ca2+ triggered non-constitutive (i.e., regulated exocytosis) and 2) non-Ca2+ triggered constitutive (i.e., non-regulated). Ca2+ triggered non-constitutive exocytosis requires an external signal, a specific sorting signal on the vesicles, a clathrin coat, as well as an increase in intracellular calcium. In multicellular organisms, this mechanism initiates many forms of intercellular communication such as synaptic transmission, hormone secretion by neuroendocrine cells, and immune cells' secretion. In neurons and endocrine cells, the SNARE-proteins and SM-proteins catalyze the fusion by forming a complex that brings the two fusion membranes together. For instance, in synapses, the SNARE complex is formed by syntaxin-1 and SNAP25 at the plasma membrane and VAMP2 at the vesicle membrane. Exocytosis in neuronal chemical synapses is Ca2+ triggered and serves interneuronal signalling. The calcium sensors that trigger exocytosis might interact either with the SNARE complex or with the phospholipids of the fusing membranes. Synaptotagmin has been recognized as the major sensor for Ca2+ triggered exocytosis in animals. However, synaptotagmin proteins are absent in plants and unicellular eukaryotes. Other potential calcium sensors for exocytosis are EF-hand proteins (Ex: Calmodulin) and C2-domain (Ex: Ferlins, E-synaptotagmin, Doc2b) containing proteins. It is unclear how the different calcium sensors can cooperate together and mediate the calcium triggered kinetics of exocytosis in a specific fashion. 
Constitutive exocytosis is performed by all cells and serves the release of components of the extracellular matrix or the delivery of newly synthesized membrane proteins that are incorporated in the plasma membrane after the fusion of the transport vesicle. There is no clear consensus about the machinery and molecular processes that drive the formation, budding, translocation and fusion of the post-Golgi vesicles to the plasma membrane. The fusion involves membrane tethering (recognition) and membrane fusion. It is still unclear whether the machinery of constitutive and regulated secretion is different. The machinery required for constitutive exocytosis has not been studied as much as the mechanism of regulated exocytosis. Two tethering complexes, ELKS and the exocyst, are associated with constitutive exocytosis in mammals. ELKS is a large coiled-coil protein, also involved in synaptic exocytosis, that marks the 'hotspot' fusion points of secretory carriers. The exocyst is an octameric protein complex. In mammals, exocyst components localize to both the plasma membrane and the Golgi apparatus, and exocyst proteins are colocalized at the fusion points of post-Golgi vesicles. Membrane fusion in constitutive exocytosis is probably mediated by SNAP29 and Syntaxin19 at the plasma membrane and by YKT6 or VAMP3 at the vesicle membrane. Vesicular exocytosis in prokaryotic gram-negative bacteria is a third mechanism and the most recently described form of exocytosis. The periplasm is pinched off as bacterial outer membrane vesicles (OMVs) that translocate microbial biochemical signals into eukaryotic host cells or other microbes located nearby, giving the secreting microbe control over its environment, including invasion of the host, endotoxemia, and competition with other microbes for nutrients. This finding of membrane vesicle trafficking occurring at the host–pathogen interface also dispels the myth that exocytosis is purely a eukaryotic cell phenomenon. Steps Five steps are involved in exocytosis: Vesicle trafficking Certain vesicle-trafficking steps require the transportation of a vesicle over a moderately small distance. For example, vesicles that transport proteins from the Golgi apparatus to the cell surface are likely to use motor proteins and a cytoskeletal track to get closer to their target. Before tethering would have been appropriate, many of the proteins used for the active transport would have been instead set for passive transport, because the Golgi apparatus does not require ATP to transport proteins. Both the actin and the microtubule cytoskeleton are implicated in these processes, along with several motor proteins. Once the vesicles reach their targets, they come into contact with tethering factors that can restrain them. Vesicle tethering It is useful to distinguish the initial, loose tethering of vesicles to their target from the more stable, packing interactions. Tethering involves links over distances of more than about half the diameter of a vesicle from a given membrane surface (>25 nm). Tethering interactions are likely to be involved in concentrating synaptic vesicles at the synapse. Vesicle docking Secretory vesicles transiently dock and fuse at the porosome at the cell plasma membrane, via a tight t-/v-SNARE ring complex.
Vesicle priming In neuronal exocytosis, the term priming has been used to include all of the molecular rearrangements and ATP-dependent protein and lipid modifications that take place after initial docking of a synaptic vesicle but before exocytosis, such that the influx of calcium ions is all that is needed to trigger nearly instantaneous neurotransmitter release. In other cell types, whose secretion is constitutive (i.e. continuous, calcium ion independent, non-triggered), there is no priming. Vesicle fusion Transient vesicle fusion is driven by SNARE proteins, resulting in release of vesicle contents into the extracellular space (or, in the case of neurons, into the synaptic cleft). The merging of the donor and the acceptor membranes accomplishes three tasks: The surface of the plasma membrane increases (by the surface of the fused vesicle). This is important for the regulation of cell size, e.g., during cell growth. The substances within the vesicle are released into the exterior. These might be waste products or toxins, or signaling molecules like hormones or neurotransmitters during synaptic transmission. Proteins embedded in the vesicle membrane are now part of the plasma membrane. The side of the protein that was facing the inside of the vesicle now faces the outside of the cell. This mechanism is important for the regulation of transmembrane receptors and transporters. Vesicle retrieval Retrieval of synaptic vesicles occurs by endocytosis. Most synaptic vesicles are recycled without a full fusion into the membrane (kiss-and-run fusion) via the porosome. Non-constitutive exocytosis and subsequent endocytosis are highly energy-expending processes, and thus are dependent on mitochondria. Examination of cells following secretion using electron microscopy demonstrates an increased presence of partially empty vesicles following secretion. This suggested that during the secretory process, only a portion of the vesicular content is able to exit the cell. This could only be possible if the vesicle were to temporarily establish continuity with the cell plasma membrane at porosomes, expel a portion of its contents, then detach, reseal, and withdraw into the cytosol (endocytose). In this way, the secretory vesicle could be reused for subsequent rounds of exo-endocytosis, until completely empty of its contents.
Biology and health sciences
Cell processes
Biology
73638
https://en.wikipedia.org/wiki/Quercus%20alba
Quercus alba
Quercus alba, the white oak, is one of the preeminent hardwoods of eastern and central North America. It is a long-lived oak, native to eastern and central North America and found from Minnesota, Ontario, Quebec, and southern Maine south as far as northern Florida and eastern Texas. Specimens have been documented to be over 450 years old. Although called a white oak, it is very unusual to find an individual specimen with white bark; the usual colour is a light gray. The name comes from the colour of the finished wood. In the forest it can reach a magnificent height and in the open it develops into a massive broad-topped tree with large branches striking out at wide angles. Description Quercus alba typically reaches heights of at maturity, and its canopy can become quite massive as its lower branches are apt to extend far out laterally, parallel to the ground. Trees growing in a forest will become much taller than ones in an open area which develop to be short and massive. The Mingo Oak was the tallest known white oak at over two hundred feet with a trunk height of before it was felled in 1938. It is not unusual for the crown spread of a white oak tree to be as wide as it is tall, but specimens growing at high altitudes may only become small shrubs. The bark is a light ash-gray and peels somewhat from the top, bottom and/or sides. According to Chris Bolgiano in The Appalachian Forest: A Search for Roots and Renewal, the largest tree ever cut in West Virginia was a white oak that measured thirteen feet thick at its base. White oak may live 200 to 300 years, with some even older specimens known. The Wye Oak in Wye Mills, Maryland was estimated to be over 450 years old when it finally fell in a thunderstorm in 2002. Another noted white oak was the Basking Ridge white oak in New Jersey, estimated to have been over 600 years old when it died in 2016. The tree measured in circumference at the base and in circumference above the ground. The tree was tall, and its branches spread over from tip to tip. The oak, claimed to be the oldest in the United States, began showing signs of poor health in the mid-2010s. The tree was taken down in 2017. Sexual maturity begins at around 20 years, but the tree does not produce large crops of acorns until its 50th year and the amount varies from year to year. Acorns deteriorate quickly after ripening, the germination rate being only 10% for six-month-old seeds. As the acorns are prime food for insects and other animals, all may be consumed in years of small crops, leaving none that would become new trees. The acorns are usually sessile, and grow to in length, falling in early October. In spring, the young leaves are delicate, silvery pink, and covered with a soft blanket-like down. The petioles are short, and the clustered leaves close to the ends of the shoots are pale green and downy, resulting in the entire tree having a misty, frosty look. This condition continues for several days, passing through the opalescent changes of soft pink, silvery white, and finally, yellow green. The leaves grow to be long and wide and have a deep glossy green upper surface. They usually turn red or brown in autumn, but depending on climate, site, and individual tree genetics, some trees are nearly always red, or even purple in autumn. Some dead leaves may remain on the tree throughout winter until very early spring. The lobes can be shallow, extending less than halfway to the midrib, or deep and somewhat branching. 
Quercus alba is sometimes confused with the swamp white oak, a closely related species, and the bur oak. The white oak hybridizes freely with the bur oak, the post oak, and the chestnut oak. Detailed description Bark: Light gray, varying to dark gray and to white; shallow, fissured and scaly. Branchlets start out as bright green, later turn reddish-green, and finally, light gray. A distinguishing feature of this tree is that a little over halfway up the trunk, the bark tends to form overlapping scales that are easily noticed and aid in identification. Wood: Light brown with paler sapwood; strong, tough, heavy, fine-grained and durable. Specific gravity, 0.7470; weight of one cubic foot, 46.35 lbs; weight of one cubic meter 770 kg. Winter buds: Reddish brown, obtuse, long. Leaves: Alternate, long, wide. Obovate or oblong, seven to nine-lobed, usually seven-lobed with rounded lobes and rounded sinuses; lobes destitute of bristles; sinuses sometimes deep, sometimes shallow. On young trees the leaves are often repand. They come out of the bud conduplicate, are bright red above, pale below, and covered with white tomentum. The reddish hue fades in a week or less, and they become silvery greenish, white, and shiny; when mature, they are thin, bright yellow-green, shiny or dull above, pale, glaucous or smooth below; the midrib is stout and yellow, primary veins are conspicuous. In late autumn the leaves turn a deep red and drop, or on young trees, remain on the branches throughout winter. Petioles are short, stout, grooved, and flattened. Stipules are linear and caducous. Flowers: Appear in May when leaves are one-third grown. Staminate flowers are borne in hairy aments long; the calyx is bright yellow, hairy, and six to eight-lobed with lobes shorter than the stamens; anthers are yellow. Pistillate flowers are borne on short peduncles; involucral scales are hairy and reddish; calyx lobes are acute; stigmas are bright red. Acorns: Annual, sessile or stalked; nut ovoid or oblong, round at apex, light brown, shiny, long; cap is cup-shaped, encloses about one-fourth of the nut, tomentose on the outside, tuberculate at base, scales with short obtuse tips becoming smaller and thinner toward the rim. White Oak acorns (referring to Q. alba and all its close relatives) have no epigeal dormancy and germination begins readily without any treatment. In most cases, the oak root sprouts in the fall, with the leaves and stem appearing the next spring. The acorns take only one growing season to develop unlike the red oak group, which require two years for maturation. Chemistry Grandinin/roburin E, castalagin/vescalagin, gallic acid, monogalloyl glucose (glucogallin) and valoneic acid dilactone, monogalloyl glucose, digalloyl glucose, trigalloyl glucose, ellagic acid rhamnose, quercitrin and ellagic acid are phenolic compounds found in Q. alba. Distribution Quercus alba is fairly tolerant of a variety of habitats, and may be found on ridges, in valleys, and in between, in dry and moist habitats, and in moderately acid and alkaline soils. It is mainly a lowland tree, but reaches altitudes of 1,600 m (5,249 ft) in the Appalachian Mountains. It is often a component of the forest canopy in an oak-heath forest. Frequent fires in the Central Plains region of the United States prevented oak forests, including Q. alba, from expanding into the Midwest. 
However, a decrease in the frequency of these natural fires after European settlement caused rapid expansion of oak forests into the Great Plains, negatively affecting the natural prairie vegetation. Uses Cultivation Quercus alba is cultivated as an ornamental tree somewhat infrequently due to its slow growth and ultimately huge size. It is not tolerant of urban pollution or road salt and, due to its large taproot, is unsuited for use as a street tree or in parking strips/islands. Food The acorns are much less bitter than the acorns of red oaks. They can be eaten by humans but, if bitter, may need to have the tannins leached. They are also a valuable wildlife food, notably for turkeys, wood ducks, pheasants, grackles, jays, nuthatches, thrushes, woodpeckers, rabbits, squirrels, and deer. The white oak is the only known food plant of the Bucculatrix luteella and Bucculatrix ochrisuffusa caterpillars. The young shoots of many eastern oak species are readily eaten by deer. Dried oak leaves are also occasionally eaten by white-tailed deer in the fall or winter. Rabbits often browse twigs and can girdle stems. Woodcraft White oak has tyloses that give the wood a closed cellular structure, making it water- and rot-resistant. Because of this characteristic, white oak is used by coopers to make wine and whiskey barrels as the wood resists leaking. It has also been used in construction, shipbuilding, agricultural implements and in the interior finishing of houses. White oak splints have been used historically by Native Americans for basketry. White oak logs feature prominent medullary rays which produce a distinctive, decorative ray and fleck pattern when the wood is quarter sawn. Quarter sawn white oak was a signature wood used in mission style oak furniture by Gustav Stickley in the Craftsman style of the Arts and Crafts movement. White oak is used extensively in Japanese martial arts for some weapons, such as the bokken and jo. It is valued for its density, strength, resiliency and relatively low chance of splintering if broken by impact, relative to the substantially cheaper red oak. USS Constitution is made of white oak and southern live oak, conferring additional resistance to cannon fire. Reconstructive wood replacement of white oak parts comes from a special grove of Quercus alba known as the "Constitution Grove" at Naval Surface Warfare Center Crane Division. Musical instruments Deering Banjo Company has made several 5-string banjos using white oak, including members of the Vega series, the White Lotus, and the limited edition 40th anniversary model. White oak has a mellower timbre than more traditionally used maple, and yet still has enough power and projection to not require a metal tone ring. Oak barrels Barrels made of American white oak are commonly used for oak aging of wine, in which the wood is noted for imparting strong flavors. Also, by federal regulation, bourbon whiskey must be aged in charred new oak (generally understood to mean specifically American white oak) barrels. Culture White oak has served as the official state tree of Illinois since its selection by a vote of schoolchildren. There are two "official" white oaks serving as state trees, one located on the grounds of the governor's mansion, and the other in a schoolyard in the town of Rochelle. The white oak is also the state tree of Connecticut and Maryland. The Wye Oak, probably the oldest living white oak until it fell because of a thunderstorm on June 6, 2002, was the honorary state tree of Maryland.
The subject of a legend as old as the colony itself, the Charter Oak of Hartford, Connecticut, is one of the most famous white oaks in America. An image of the tree now adorns the reverse side of the Connecticut state quarter. The white oak from the movie The Shawshank Redemption, known as the "Shawshank tree" and the "Tree of Hope", was estimated to be more than 200 years old when it fell. The tree is seen during the last ten minutes of the movie. As the movie gained fame, the tree became popular as well, attracting tens of thousands of movie fans and tourists every year. A portion of the tree came down on July 29, 2011, when the tree was split by lightning during a storm. The remaining half of the tree fell during heavy winds just short of five years later, on July 22, 2016. The Bedford Oak is a 500-year-old white oak tree that sits in the town of Bedford in New York. It is the mascot of the town. It sits at the corner of the Hook Road and the old Bedford Road (now Cantitoe Street). The ground the tree stands on was deeded to the Town of Bedford in 1942 by Harold Whitman in memory of his wife, Georgia Squires Whitman. It has seen Westchester history from Native American settlements to the Revolutionary War to modern times. The video game Ace Attorney Investigations: Miles Edgeworth features a character named Quercus Alba who bears some resemblance to the white oak and plants in general. Threats to Oaks Insects and the damage they cause, particularly from nut weevils, moth larvae, and gall-forming cynipids, pose the greatest threat to acorn production in white oaks.
Biology and health sciences
Fagales
Plants
73644
https://en.wikipedia.org/wiki/Fagaceae
Fagaceae
The Fagaceae (; ) are a family of flowering plants that includes beeches, chestnuts and oaks, and comprises eight genera with about 927 species. Fagaceae in temperate regions are mostly deciduous, whereas in the tropics, many species occur as evergreen trees and shrubs. They are characterized by alternate simple leaves with pinnate venation, unisexual flowers in the form of catkins, and fruit in the form of cup-like (cupule) nuts. Their leaves are often lobed, and both petioles and stipules are generally present. Their fruits lack endosperm and lie in a scaly or spiny husk that may or may not enclose the entire nut, which may consist of one to seven seeds. In the oaks, genus Quercus, the fruit is a non-valved nut (usually containing one seed) called an acorn. The husk of the acorn in most oaks only forms a cup in which the nut sits. Other members of the family have fully enclosed nuts. Fagaceae is one of the most ecologically important woody plant families in the Northern Hemisphere, as oaks form the backbone of temperate forest in North America, Europe, and Asia, and are one of the most significant sources of wildlife food. Several members of the Fagaceae have important economic uses. Many species of oak, chestnut, and beech (genera Quercus, Castanea, and Fagus, respectively) are commonly used as timber for floors, furniture, cabinets, and wine barrels. Cork for stopping wine bottles and a myriad other uses is made from the bark of cork oak, Quercus suber. Chestnuts are the fruits from species of the genus Castanea. Numerous species from several genera are prominent ornamentals. Wood chips from the genus Fagus are often used in flavoring beers. Nuts of some species in the Asian tropical genera Castanopsis and Lithocarpus are edible and often used as ornamentals. Classification The Fagaceae are often divided into five or six subfamilies and are generally accepted to include 8 (to 10) genera (listed below). Monophyly of the Fagaceae is strongly supported by both morphological (especially fruit morphology) and molecular data. The Southern Hemisphere genus Nothofagus, commonly the southern beeches, was historically placed in the Fagaceae as sister to the genus Fagus, but recent molecular evidence suggests otherwise. While Nothofagus shares a number of common characteristics with the Fagaceae, such as cupule fruit structure, it differs significantly in a number of ways, including distinct stipule and pollen morphology, as well as having a different number of chromosomes. The currently accepted view by systematic botanists is to place Nothofagus in its own family, Nothofagaceae. Subfamilies and genera There are two subfamilies: Fagoideae Auth. K. Koch. Monotypic Fagus L.—beeches; about 10 to 13 species, north temperate east Asia, southwest Asia, Europe, eastern North America The genus Nothofagus (southern beeches: from the Southern Hemisphere), formerly included in the Fagaceae, is now treated in the separate monotypic family Nothofagaceae. Quercoideae Auth. Ørsted Castanea Mill. 1754—chestnuts; eight species, north temperate east Asia, southwest Asia, southeast Europe, eastern North America Castanopsis (D. Don) Spach 1841—chinquapins or chinkapins; about 125–130 species, southeast Asia Chrysolepis Hjelmq. 1948—golden chinkapins; two species, western United States Lithocarpus Blume 1826—stone oaks; about 330–340 species, warm temperate to tropical Asia Notholithocarpus P. S. Manos, C. H. Cannon & S.H.
Oh 2008 [2009]—Tanoaks; 1 species (formerly Lithocarpus densiflorus), endemic to California and southwest Oregon Quercus L. 1753—oaks; about 600 species, widespread Northern Hemisphere, crossing the equator in Indonesia Trigonobalanus Forman 1962—three species, tropical southeast Asia, Northern South America (Colombia) (three species of Colombobalanus and Formanodendron are included) The Quercus subgenus Cyclobalanopsis is treated as a distinct genus by the Flora of China, but as a section or subgenus by most taxonomists. Distribution The Fagaceae are widely distributed across the Northern Hemisphere. Genus-level diversity is concentrated in Southeast Asia, where most of the extant genera are thought to have evolved before migrating to Europe and North America (via the Bering Land Bridge). Members of the Fagaceae (such as Fagus grandifolia, Castanea dentata and Quercus alba in the Northeastern United States, or Fagus sylvatica, Quercus robur and Q. petraea in Europe) are often ecologically dominant in northern temperate forests. More than 400 species of Fagaceae, mostly Castanopsis and Lithocarpus, grow in tropical Southeast Asia, with some species in similar dominant roles over large areas. Phylogeny Modern molecular phylogenetics suggest the following relationships:
Biology and health sciences
Fagales
Plants
73664
https://en.wikipedia.org/wiki/Astrolabe
Astrolabe
An astrolabe ( , ; ; ) is an astronomical instrument dating to ancient times. It serves as a star chart and a physical model of the visible half-dome of the sky. Its various functions also make it an elaborate inclinometer and an analog calculation device capable of working out several kinds of problems in astronomy. In its simplest form it is a metal disc with a pattern of wires, cutouts, and perforations that allows a user to calculate astronomical positions precisely. It is able to measure the altitude above the horizon of a celestial body, day or night; it can be used to identify stars or planets, to determine local latitude given local time (and vice versa), to survey, or to triangulate. It was used in classical antiquity, the Islamic Golden Age, the European Middle Ages and the Age of Discovery for all these purposes. The astrolabe, which is a precursor to the sextant, is effective for determining latitude on land or on calm seas. Although it is less reliable on the heaving deck of a ship in rough seas, the mariner's astrolabe was developed to solve that problem. Applications The 10th-century astronomer ʿAbd al-Raḥmān al-Ṣūfī wrote a massive text of 386 chapters on the astrolabe, which reportedly described more than 1,000 applications for the astrolabe's various functions. These ranged from the astrological, the astronomical and the religious, to navigation, seasonal and daily time-keeping, and tide tables. At the time of their use, astrology was widely considered as much a serious science as astronomy, and study of the two went hand-in-hand. The astronomical interest varied between folk astronomy (of the pre-Islamic tradition in Arabia), which was concerned with celestial and seasonal observations, and mathematical astronomy, which would inform intellectual practices and precise calculations based on astronomical observations. In regard to the astrolabe's religious function, the demands of Islamic prayer times were to be astronomically determined to ensure precise daily timings, and the qibla, the direction of Mecca towards which Muslims must pray, could also be determined by this device. In addition to this, the lunar calendar that was informed by the calculations of the astrolabe was of great significance to the religion of Islam, given that it determines the dates of important religious observances such as Ramadan. Etymology The Oxford English Dictionary gives the translation "star-taker" for the English word astrolabe and traces it through medieval Latin to the Greek word : , from : "star", and : "to take". In the medieval Islamic world the Arabic word (i.e., astrolabe) was given various etymologies. In Arabic texts, the word is translated as (, ) – a direct translation of the Greek word. Al-Biruni quotes and criticises medieval scientist Hamza al-Isfahani, who stated: "asturlab is an Arabisation of this Persian phrase" (, meaning "taker of the stars"). In medieval Islamic sources, there is also a folk etymology of the word as "lines of lab", where "Lab" refers to a certain son of Idris (Enoch). This etymology is mentioned by a 10th-century scientist named al-Qummi but rejected by al-Khwarizmi. History Ancient era An astrolabe is essentially a plane (two-dimensional) version of an armillary sphere, which had already been invented in the Hellenistic period and probably been used by Hipparchus to produce his star catalogue. Theon of Alexandria () wrote a detailed treatise on the astrolabe.
The invention of the plane astrolabe is sometimes wrongly attributed to Theon's daughter Hypatia (born ; died ), but it is known to have been used much earlier. The misattribution comes from a misinterpretation of a statement in a letter written by Hypatia's pupil Synesius (), which mentions that Hypatia had taught him how to construct a plane astrolabe, but does not say that she invented it. Lewis argues that Ptolemy used an astrolabe to make the astronomical observations recorded in the Tetrabiblos. However, Emilie Savage-Smith notes "there is no convincing evidence that Ptolemy or any of his predecessors knew about the planispheric astrolabe". In chapter 5.1 of the Almagest, Ptolemy describes the construction of an armillary sphere, and it is usually assumed that this was the instrument he used. Astrolabes continued to be used in the Byzantine Empire. Christian philosopher John Philoponus wrote a treatise () on the astrolabe in Greek, which is the earliest extant treatise on the instrument. Mesopotamian bishop Severus Sebokht also wrote a treatise on the astrolabe in the Syriac language during the mid-7th century. Sebokht refers to the astrolabe as being made of brass in the introduction of his treatise, indicating that metal astrolabes were known in the Christian East well before they were developed in the Islamic world or in the Latin West. Medieval era Astrolabes were further developed in the medieval Islamic world, where Muslim astronomers introduced angular scales to the design, adding circles indicating azimuths on the horizon. The astrolabe was widely used throughout the Muslim world, chiefly as an aid to navigation and as a way of finding the Qibla, the direction of Mecca. Eighth-century mathematician Muhammad al-Fazari is the first person credited with building the astrolabe in the Islamic world. The mathematical background was established by the Muslim astronomer Albatenius in his treatise Kitab az-Zij, which was translated into Latin by Plato Tiburtinus (De Motu Stellarum). The earliest surviving astrolabe is dated AH 315. In the Islamic world, astrolabes were used to find the times of sunrise and the rising of fixed stars, to help schedule morning prayers (salat). In the 10th century, al-Sufi first described over 1,000 different uses of an astrolabe, in areas as diverse as astronomy, astrology, navigation, surveying, timekeeping, prayer, Salat, Qibla, etc. The spherical astrolabe, a variation of both the astrolabe and the armillary sphere, was invented during the Middle Ages by astronomers and inventors in the Islamic world. The earliest description of the spherical astrolabe dates to Al-Nayrizi (fl. 892–902). In the 12th century, Sharaf al-Dīn al-Tūsī invented the linear astrolabe, sometimes called the "staff of al-Tusi", which was "a simple wooden rod with graduated markings, but without sights. It was furnished with a plumb line and a double chord for making angular measurements and bore a perforated pointer". The geared mechanical astrolabe was invented by Abi Bakr of Isfahan in 1235. The first known metal astrolabe in Western Europe is the Destombes astrolabe, made from brass in the eleventh century in Portugal. Metal astrolabes avoided the warping that large wooden ones were prone to, allowing the construction of larger and therefore more accurate instruments. Metal astrolabes were heavier than wooden instruments of the same size, making them difficult to use in navigation. Herman Contractus of Reichenau Abbey examined the use of the astrolabe in Mensura Astrolabii during the 11th century.
In the last half of the 13th century, Peter of Maricourt wrote a treatise on the construction and use of a universal astrolabe entitled Nova compositio astrolabii particularis. Universal astrolabes can be found at the History of Science Museum, Oxford. David A. King, historian of Islamic instrumentation, describes the universal astrolabe designed by Ibn al-Sarraj of Aleppo (a.k.a. Ahmad bin Abi Bakr; fl. 1328) as "the most sophisticated astronomical instrument from the entire Medieval and Renaissance periods". English author Geoffrey Chaucer () compiled A Treatise on the Astrolabe for his son, mainly based on a work by Messahalla or Ibn al-Saffar. The same source was translated by French astronomer and astrologer Pélerin de Prusse and others. The first printed book on the astrolabe was Composition and Use of Astrolabe by Christian of Prachatice, also using Messahalla, but relatively original. In 1370, the first Indian treatise on the astrolabe was written by the Jain astronomer Mahendra Suri, titled Yantrarāja. A simplified astrolabe, known as a balesilha, was used by sailors to get an accurate reading of latitude while at sea. The use of the balesilha was promoted by Prince Henry (1394–1460) while navigating for Portugal. The astrolabe was almost certainly first brought north of the Pyrenees by Gerbert of Aurillac (future Pope Sylvester II), where it was integrated into the quadrivium at the school in Reims, France, sometime before the turn of the 11th century. In the 15th century, French instrument maker Jean Fusoris () also started remaking and selling astrolabes in his shop in Paris, along with portable sundials and other popular scientific devices of the day. Thirteen of his astrolabes survive to this day. Another notable example of craftsmanship in early 15th-century Europe is the astrolabe designed by Antonius de Pacento and made by Dominicus de Lanzano, dated 1420. In the 16th century, Johannes Stöffler published Elucidatio fabricae ususque astrolabii, a manual of the construction and use of the astrolabe. Four identical 16th-century astrolabes made by Georg Hartmann provide some of the earliest evidence for batch production by division of labor. Greek painter Ieremias Palladas incorporated a sophisticated astrolabe in his 1612 painting depicting Catherine of Alexandria. The painting, entitled Catherine of Alexandria, showed, in addition to the saint, a device labelled the 'system of the universe'. The device featured the classical planets with their Greek names: Helios (Sun), Selene (Moon), Hermes (Mercury), Aphrodite (Venus), Ares (Mars), Zeus (Jupiter), and Cronos (Saturn). The depicted device also had celestial spheres, following the Ptolemaic model, and Earth was shown as a blue sphere with circles of geographic coordinates. A complicated line representing the axis of the Earth covered the entire instrument. Astrolabes and clocks Mechanical astronomical clocks were initially influenced by the astrolabe; they could be seen in many ways as clockwork astrolabes designed to produce a continual display of the current position of the sun, stars, and planets. For example, Richard of Wallingford's clock () consisted essentially of a star map rotating behind a fixed rete, similar to that of an astrolabe. Many astronomical clocks use an astrolabe-style display, such as the famous clock at Prague, adopting a stereographic projection (see below) of the ecliptic plane. In recent times, astrolabe watches have become popular. 
For example, Swiss watchmaker Ludwig Oechslin designed and built an astrolabe wristwatch in conjunction with Ulysse Nardin in 1985. Dutch watchmaker Christiaan van der Klaauw also manufactures astrolabe watches today. Construction An astrolabe consists of a disk with a wide, raised rim, called the mater (mother), which is deep enough to hold one or more flat plates called tympans, or climates. A tympan is made for a specific latitude and is engraved with a stereographic projection of circles denoting azimuth and altitude and representing the portion of the celestial sphere above the local horizon. The rim of the mater is typically graduated into hours of time, degrees of arc, or both. Above the mater and tympan, the rete, a framework bearing a projection of the ecliptic plane and several pointers indicating the positions of the brightest stars, is free to rotate. These pointers are often just simple points, but depending on the skill of the craftsman can be very elaborate and artistic. There are examples of astrolabes with artistic pointers in the shape of balls, stars, snakes, hands, dogs' heads, and leaves, among others. The names of the indicated stars were often engraved on the pointers in Arabic or Latin. Some astrolabes have a narrow rule or label which rotates over the rete, and may be marked with a scale of declinations. The rete, representing the sky, functions as a star chart. When it is rotated, the stars and the ecliptic move over the projection of the coordinates on the tympan. One complete rotation corresponds to the passage of a day. The astrolabe is, therefore, a predecessor of the modern planisphere. On the back of the mater, there is often engraved a number of scales that are useful in the astrolabe's various applications. These vary from designer to designer, but might include curves for time conversions, a calendar for converting the day of the month to the sun's position on the ecliptic, trigonometric scales, and a graduation of 360 degrees around the back edge. The alidade is attached to the back face. An alidade can be seen in the lower right illustration of the Persian astrolabe above. When the astrolabe is held vertically, the alidade can be rotated and the sun or a star sighted along its length, so that its altitude in degrees can be read ("taken") from the graduated edge of the astrolabe; hence the word's Greek roots: "astron" (ἄστρον) = star + "lab-" (λαβ-) = to take. The alidade has vertical and horizontal cross-hairs which plot locations on an azimuthal ring called an almucantar (altitude-distance circle). An arm called a radius connects from the center of the astrolabe to the optical axis, which is parallel with another arm, also called a radius. The other radius contains graduations of altitude and distance measurements. A shadow square also appears on the back of some astrolabes, developed by Muslim astrologers in the 9th century, whereas devices of the Ancient Greek tradition featured only altitude scales on the back of the devices. This was used to convert between shadow lengths and the altitude of the sun, with uses ranging from surveying to measuring inaccessible heights. Devices were usually signed by their maker with an inscription appearing on the back of the astrolabe, and if the object had a patron, that name would appear inscribed on the front; in some cases the name of the reigning sultan or of the astrolabist's teacher appears in this place instead. 
The date of the astrolabe's construction was often also signed, which has allowed historians to determine that these devices are the second oldest scientific instrument in the world. The inscriptions on astrolabes also allowed historians to conclude that astronomers tended to make their own astrolabes, but that many were also made to order and kept in stock to sell, suggesting there was some contemporary market for the devices. Mathematical basis The construction and design of astrolabes are based on the application of the stereographic projection of the celestial sphere. The point from which the projection is usually made is the South Pole. The plane onto which the projection is made is that of the Equator. Designing a tympanum through stereographic projection The tympanum captures the celestial coordinate axes upon which the rete will rotate. It is the component that will enable the precise determination of a star's position at a specific time of day and year. Therefore, it should project: The zenith, which will vary depending on the latitude of the astrolabe user. The horizon line and almucantar or circles parallel to the horizon, which will allow for the determination of a celestial body's altitude (from the horizon to the zenith). The celestial meridian (north-south meridian, passing through the zenith) and secondary meridians (circles intersecting the north-south meridian at the zenith), which will enable the measurement of azimuth for a celestial body. The three main circles of latitude (Capricorn, Equator, and Cancer) to determine the exact moments of solstices and equinoxes throughout the year. The tropics and the equator define the tympanum On the right side of the image above: The blue sphere represents the celestial sphere. The blue arrow indicates the direction of true north (the North Star). The central blue point represents Earth (the observer's location). The geographic south of the celestial sphere acts as the projection pole. The celestial equatorial plane serves as the projection plane. Three parallel circles represent the projection on the celestial sphere of Earth's main circles of latitude: In orange, the celestial Tropic of Cancer. In purple, the celestial equator. In green, the celestial Tropic of Capricorn. When projecting onto the celestial equatorial plane, three concentric circles correspond to the celestial sphere's three circles of latitude (left side of the image). The largest of these, the projection on the celestial equatorial plane of the celestial Tropic of Capricorn, defines the size of the astrolabe's tympanum. The center of the tympanum (and the center of the three circles) is actually the north-south axis around which Earth rotates, and therefore, the rete of the astrolabe will rotate around this point as the hours of the day pass (due to Earth's rotational motion). The three concentric circles on the tympanum are useful for determining the exact moments of solstices and equinoxes throughout the year: if the sun's altitude at noon on the rete is known and coincides with the outer circle of the tympanum (Tropic of Capricorn), it signifies the winter solstice (the sun will be at the zenith for an observer at the Tropic of Capricorn, meaning summer in the southern hemisphere and winter in the northern hemisphere). If, on the other hand, its altitude coincides with the inner circle (Tropic of Cancer), it indicates the summer solstice. If its altitude is on the middle circle (equator), it corresponds to one of the two equinoxes. 
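One way to state this construction quantitatively is the following sketch (the symbol R_E, the radius chosen for the projected celestial equator on the plate, is introduced here purely for illustration): under stereographic projection from the south celestial pole onto the equatorial plane, a circle of constant declination δ projects to a circle centred on the tympanum with radius

r(\delta) = R_E \tan\left(45^\circ - \frac{\delta}{2}\right),

so the Tropic of Cancer (δ ≈ +23.4°) gives the innermost of the three circles, the celestial equator (δ = 0°) gives r = R_E, and the Tropic of Capricorn (δ ≈ −23.4°) gives the outermost circle, which fixes the overall size of the tympanum.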
The horizon and the measurement of altitude On the right side of the image above: The blue arrow indicates the direction of true north (the North Star). The central blue point represents Earth (the observer's location). The black arrow represents the zenith direction for the observer (which would vary depending on the observer's latitude). The two black circles represent the horizon surrounding the observer, which is perpendicular to the zenith vector and defines the portion of the celestial sphere visible to the observer, and its projection on the celestial equatorial plane. The geographic south of the celestial sphere acts as the projection pole. The celestial equatorial plane serves as the projection plane. When projecting the horizon onto the celestial equatorial plane, it transforms into a circle shifted upward relative to the center of the tympanum (which represents both the observer and the projection of the north-south axis). This implies that a portion of the celestial sphere will fall outside the outer circle of the tympanum (the projection of the celestial Tropic of Capricorn) and, therefore, won't be represented. Additionally, when drawing circles parallel to the horizon up to the zenith (almucantars), and projecting them on the celestial equatorial plane, as in the image above, a grid of nested circles is constructed, allowing for the determination of a star's altitude when the rete overlaps the designed tympanum. The meridians and the measurement of azimuth On the right side of the image above: The blue arrow indicates the direction of true north (the North Star). The central blue point represents Earth (the observer's location). The black arrow represents the zenith direction for the observer (which would vary depending on the observer's latitude). The two black circles represent the horizon surrounding the observer, which is perpendicular to the zenith vector and defines the portion of the celestial sphere visible to the observer, and its projection on the celestial equatorial plane. The five red dots represent the zenith, the nadir (the point on the celestial sphere opposite the zenith with respect to the observer), their projections on the celestial equatorial plane, and the center (with no physical meaning attached) of the circle obtained by projecting the secondary meridian (see below) on the celestial equatorial plane. The orange circle represents the celestial meridian (the meridian that goes, for the observer, from the north of the horizon to the south of the horizon passing through the zenith). The two red circles represent a secondary meridian with an azimuth of 40° East relative to the observer's horizon (which, like all secondary meridians, intersects the principal meridian at the zenith and nadir), and its projection on the celestial equatorial plane. The geographic south of the celestial sphere acts as the projection pole. The celestial equatorial plane serves as the projection plane. When projecting the celestial meridian, the result is a straight line that overlaps with the vertical axis of the tympanum, where the zenith and nadir are located. However, when projecting the 40° E meridian, another circle is obtained that passes through both the zenith and nadir projections, so its center is located on the perpendicular bisector of the segment connecting both points. 
In deed, the projection of the celestial meridian can be considered as a circle with an infinite radius (a straight line) whose center is on this bisection and at an infinite distance from these two points. If successive meridians that divide the celestial sphere into equal sectors (like "orange slices" radiating from the zenith) are projected, a family of curves passing through the zenith projection on the tympanum is obtained. These curves, once overlaid with the rete containing the major stars, allow for determining the azimuth of a star located on the rete and rotated for a specific time of day.
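The projections described above can be sketched in a few lines of code. The following is a minimal illustration rather than a recipe from any astrolabe-making source: the function names, the plate radius R_eq, and the drawing conventions (hour angle measured from the meridian, positive y toward the observer's south) are assumptions introduced here. It computes where a point of given declination and hour angle lands on the plate, and the centre offset and radius of the projected almucantar circles for an observer at a given latitude.

import math

def project_point(declination_deg, hour_angle_deg, R_eq=1.0):
    # Stereographic projection from the south celestial pole onto the
    # equatorial plane: the radial distance depends only on declination.
    rho = R_eq * math.tan(math.radians(45.0 - declination_deg / 2.0))
    h = math.radians(hour_angle_deg)
    return (rho * math.sin(h), rho * math.cos(h))

def almucantar_circle(altitude_deg, latitude_deg, R_eq=1.0):
    # Centre offset (measured along the meridian line of the tympanum)
    # and radius of the projected circle of constant altitude.
    a = math.radians(altitude_deg)
    phi = math.radians(latitude_deg)
    denom = math.sin(phi) + math.sin(a)
    return (R_eq * math.cos(phi) / denom, R_eq * math.cos(a) / denom)

# The three tympanum circles, and the horizon circle for latitude 40 N:
print(project_point(-23.44, 0.0))    # Tropic of Capricorn (outer circle)
print(project_point(0.0, 0.0))       # celestial equator
print(project_point(23.44, 0.0))     # Tropic of Cancer (inner circle)
print(almucantar_circle(0.0, 40.0))  # horizon: (centre offset, radius)

Setting the altitude to 0° recovers the horizon circle discussed above, while 90° collapses the circle onto the projected zenith; drawing the circles for intermediate altitudes reproduces the almucantar grid of the tympanum.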
Technology
Measuring instruments
null
74007
https://en.wikipedia.org/wiki/Technology%20assessment
Technology assessment
Technology assessment (TA) is a practical process of determining the value of a new or emerging technology in and of itself or against existing technologies. This is a means of assessing and rating the new technology from the time when it was first developed to the time when it is potentially accepted by the public and authorities for further use. In essence, TA could be defined as "a form of policy research that examines short- and long-term consequences (for example, societal, economic, ethical, legal) of the application of technology." General description TA is the study and evaluation of new technologies. It is a way of trying to forecast and prepare for upcoming technological advancements and their repercussions for society, and then make decisions based on those judgments. It is based on the conviction that new developments within, and discoveries by, the scientific community are relevant for the world at large rather than just for the scientific experts themselves, and that technological progress can never be free of ethical implications. Technology assessment was initially practiced in the 1960s in the United States, where it focused on analyzing the significance of "supersonic transportation, pollution of the environment and ethics of genetic screening." Also, technology assessment recognizes the fact that scientists normally are not trained ethicists themselves and accordingly ought to be very careful when passing ethical judgement on their own, or their colleagues', new findings, projects, or work in progress. TA is a very broad phenomenon which also includes aspects such as "diffusion of technology (and technology transfer), factors leading to rapid acceptance of new technology, and the role of technology and society." Technology assessment assumes a global perspective and is future-oriented, not anti-technological. TA considers its task as an interdisciplinary approach to solving already existing problems and preventing potential damage caused by the uncritical application and the commercialization of new technologies. Therefore, any results of technology assessment studies must be published, and particular consideration must be given to communication with political decision-makers. An important problem concerning technology assessment is the so-called Collingridge dilemma: on the one hand, impacts of new technologies cannot be easily predicted until the technology is extensively developed and widely used; on the other hand, control or change of a technology is difficult as soon as it is widely used. The dilemma emphasizes that technologies, in their early stages, are unpredictable with regard to their implications and rather difficult to regulate or control once they have been widely accepted by society. Shaping or directing a technology in the desired direction therefore becomes difficult for the authorities at that point. There have been several approaches put in place in order to tackle this dilemma, one of the common ones being "anticipation." In this approach, authorities and assessors "anticipate ethical impacts of a technology ("technomoral scenarios"), being too speculative to be reliable, or on ethically regulating technological developments ("sociotechnical experiments"), discarding anticipation of the future implications." 
Technology assessments, which are a form of cost–benefit analysis, are a medium for decision makers to evaluate and analyze solutions with regard to the particular technology being assessed, and to choose the best possible option, one which is cost-effective and obeys the authoritative and budgetary requirements. However, they are difficult if not impossible to carry out in an objective manner since subjective decisions and value judgments have to be made regarding a number of complex issues such as (a) the boundaries of the analysis (i.e., what costs are internalized and externalized), (b) the selection of appropriate indicators of potential positive and negative consequences of the new technology, (c) the monetization of non-market values, and (d) a wide range of ethical perspectives. Consequently, most technology assessments are neither objective nor value-neutral exercises but instead are greatly influenced and biased by the values of the most powerful stakeholders, which are in many cases the developers and proponents (i.e., corporations and governments) of new technologies under consideration. In the most extreme view, as expressed by Ian Barbour in Technology, Environment, and Human Values, technology assessment is "a one-sided apology for contemporary technology by people with a stake in its continuation." Overall, technology assessment is a very broad field which reaches beyond just technology and industrial phenomena. It handles the assessment of effects, consequences, and risks of a technology, but also serves a forecasting function, looking into the projection of opportunities and skill development as an input into strategic planning. Some of the major fields of TA are: information technology, hydrogen technologies, nuclear technology, molecular nanotechnology, pharmacology, organ transplants, gene technology, artificial intelligence, the Internet and many more. Forms and concepts of technology assessment The following types of concepts of TA are those that are most visible and practiced. There are, however, a number of further TA forms that are only proposed as concepts in the literature or are the label used by a particular TA institution. Parliamentary TA (PTA): TA activities of various kinds whose addressee is a parliament. PTA may be performed directly by members of those parliaments (e.g. in France and Finland) or on their behalf by related TA institutions (such as in the UK, in Germany and Denmark) or by organisations not directly linked to a Parliament (such as in the Netherlands and Switzerland). Expert TA (often also referred to as the classical TA or traditional TA concept): TA activities carried out by (a team of) TA and technical experts. Input from stakeholders and other actors is included only via written statements, documents and interviews, but not as in participatory TA. Participatory TA (pTA): TA activities which actively, systematically and methodologically involve various kinds of social actors as assessors and discussants, such as different kinds of civil society organisations, representatives of the state systems, but characteristically also individual stakeholders and citizens (lay persons), technical scientists and technical experts. Standard pTA methods include consensus conferences, focus groups, scenario workshops etc. Sometimes pTA is further divided into expert-stakeholder pTA and public pTA (including lay persons). 
The participatory assessment makes room for the inclusion of laypeople and establishes the value of varied points of view, interests and knowledge. It shows the importance of decision makers and actors having a varied set of mindsets and perspectives in order to make a combined, informed and rational decision. Constructive TA (CTA): This concept of TA, developed in the Netherlands but also applied and discussed elsewhere, attempts to broaden the design of new technology through feedback of TA activities into the actual construction of technology. Contrary to other forms of TA, CTA is not directed toward influencing regulatory practices by assessing the impacts of technology. Instead, CTA wants to address social issues around technology by influencing design practices. It aims to "mobilize insights on co-evolutionary dynamics of science, technology and society for anticipating and assessing technologies, rather than being predominantly concerned with assessing societal impacts of a quasi-given technology." This assessment establishes the value of involving users in the development and innovation process, encouraging the development and adaptation of new technology in their daily life. Discursive TA or Argumentative TA: This type of TA wants to deepen the political and normative debate about science, technology and society. It is inspired by ethics, policy discourse analysis and the sociology of expectations in science and technology. This mode of TA aims to clarify and bring under public and political scrutiny the normative assumptions and visions that drive the actors who are socially shaping science and technology. This assessment can be used as a tool to analyse and evaluate the background of each and every reaction or perception that arises around a technology; often some of the reactions these assessors receive are not related to science or technology. Some of the ways of analyzing actors and their reactions include "studying prospective users' everyday-life practices in their own right, and in naturalistic settings." Accordingly, argumentative TA not only addresses the side effects of technological change, but deals with both broader impacts of science and technology and the fundamental normative question of why developing a certain technology is legitimate and desirable. Technology assessment institutions around the world Many TA institutions are members of the European Parliamentary Technology Assessment (EPTA) network, some are working for the STOA panel of the European Parliament and formed the European Technology Assessment Group (ETAG). Centre for Technology Assessment (TA-SWISS), Bern, Switzerland. Department of Science, Technology and Policy Studies, University of Twente Institute of Technology Assessment (ITA) of the Austrian Academy of Sciences, Vienna Institute for Technology Assessment and Systems Analysis, Karlsruhe Institute of Technology, Germany (former) Office of Technology Assessment (OTA) The Danish Board of Technology Foundation, Copenhagen Norwegian Board of Technology, Oslo Oficina de Ciencia y Tecnología del Congreso (OficinaC), Spain Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST), Paris Parliamentary Office of Science and Technology (POST), London Rathenau Institute, The Hague Science and Technology Options Assessment (STOA) panel of the European Parliament, Brussels Science and Technology Policy Research (SPRU), Sussex Technology centre CAS (TC CAS), Prague, Czech Republic
Technology
General
null
74141
https://en.wikipedia.org/wiki/Subtraction
Subtraction
Subtraction (which is signified by the minus sign −) is one of the four arithmetic operations along with addition, multiplication and division. Subtraction is an operation that represents removal of objects from a collection. For example, in the adjacent picture, there are 5 − 2 peaches—meaning 5 peaches with 2 taken away, resulting in a total of 3 peaches. Therefore, the difference of 5 and 2 is 3; that is, 5 − 2 = 3. While primarily associated with natural numbers in arithmetic, subtraction can also represent removing or decreasing physical and abstract quantities using different kinds of objects including negative numbers, fractions, irrational numbers, vectors, decimals, functions, and matrices. In a sense, subtraction is the inverse of addition. That is, c − b = a if and only if c = a + b. In words: the difference of two numbers is the number that gives the first one when added to the second one. Subtraction follows several important patterns. It is anticommutative, meaning that changing the order changes the sign of the answer. It is also not associative, meaning that when one subtracts more than two numbers, the order in which subtraction is performed matters. Because 0 is the additive identity, subtraction of it does not change a number. Subtraction also obeys predictable rules concerning related operations, such as addition and multiplication. All of these rules can be proven, starting with the subtraction of integers and generalizing up through the real numbers and beyond. General binary operations that follow these patterns are studied in abstract algebra. In computability theory, since subtraction is not well-defined over the natural numbers, operations between numbers are instead defined using "truncated subtraction" or monus. Notation and terminology Subtraction is usually written using the minus sign "−" between the terms; that is, a − b in infix notation. The result is expressed with an equals sign. For example, 2 − 1 = 1 (pronounced as "two minus one equals one"), 4 − 2 = 2 (pronounced as "four minus two equals two"), 6 − 3 = 3 (pronounced as "six minus three equals three"), 4 − 6 = −2 (pronounced as "four minus six equals negative two"). There are also situations where subtraction is "understood", even though no symbol appears: A column of two numbers, with the lower number in red, usually indicates that the lower number in the column is to be subtracted, with the difference written below, under a line. This is most common in accounting. Formally, the number being subtracted is known as the subtrahend, while the number it is subtracted from is the minuend. The result is the difference. That is, minuend − subtrahend = difference. All of this terminology derives from Latin. "Subtraction" is an English word derived from the Latin verb subtrahere, which in turn is a compound of sub "from under" and trahere "to pull". Thus, to subtract is to draw from below, or to take away. Using the gerundive suffix -nd results in "subtrahend", "thing to be subtracted". Likewise, from minuere "to reduce or diminish", one gets "minuend", which means "thing to be diminished". Of integers and real numbers Integers Imagine a line segment of length b with the left end labeled a and the right end labeled c. Starting from a, it takes b steps to the right to reach c. This movement to the right is modeled mathematically by addition: a + b = c. From c, it takes b steps to the left to get back to a. This movement to the left is modeled by subtraction: c − b = a. Now, consider a line segment labeled with the numbers 1, 2, and 3. From position 3, it takes no steps to the left to stay at 3, so 3 − 0 = 3. 
It takes 2 steps to the left to get to position 1, so 3 − 2 = 1. This picture is inadequate to describe what would happen after going 3 steps to the left of position 3. To represent such an operation, the line must be extended. To subtract arbitrary natural numbers, one begins with a line containing every natural number (0, 1, 2, 3, 4, 5, 6, ...). From 3, it takes 3 steps to the left to get to 0, so 3 − 3 = 0. But 3 − 4 is still invalid, since it again leaves the line. The natural numbers are not a useful context for subtraction. The solution is to consider the integer number line (..., −3, −2, −1, 0, 1, 2, 3, ...). This way, it takes 4 steps to the left from 3 to get to −1: 3 − 4 = −1. Natural numbers Subtraction of natural numbers is not closed: the difference is not a natural number unless the minuend is greater than or equal to the subtrahend. For example, 26 cannot be subtracted from 11 to give a natural number. Such a case uses one of two approaches: Conclude that 26 cannot be subtracted from 11; subtraction becomes a partial function. Give the answer as an integer representing a negative number, so the result of subtracting 26 from 11 is −15. Real numbers The field of real numbers can be defined specifying only two binary operations, addition and multiplication, together with unary operations yielding additive and multiplicative inverses. The subtraction of a real number (the subtrahend) from another (the minuend) can then be defined as the addition of the minuend and the additive inverse of the subtrahend. For example, 3 − π = 3 + (−π). Alternatively, instead of requiring these unary operations, the binary operations of subtraction and division can be taken as basic. Properties Anti-commutativity Subtraction is anti-commutative, meaning that if one reverses the terms in a difference left-to-right, the result is the negative of the original result. Symbolically, if a and b are any two numbers, then a − b = −(b − a). Non-associativity Subtraction is non-associative, which comes up when one tries to define repeated subtraction. In general, the expression "a − b − c" can be defined to mean either (a − b) − c or a − (b − c), but these two possibilities lead to different answers. To resolve this issue, one must establish an order of operations, with different orders yielding different results. Predecessor In the context of integers, subtraction of one also plays a special role: for any integer a, the integer a − 1 is the largest integer less than a, also known as the predecessor of a. Units of measurement When subtracting two numbers with units of measurement such as kilograms or pounds, they must have the same unit. In most cases, the difference will have the same unit as the original numbers. Percentages Changes in percentages can be reported in at least two forms, percentage change and percentage point change. Percentage change represents the relative change between the two quantities as a percentage, while percentage point change is simply the number obtained by subtracting the two percentages. As an example, suppose that 30% of widgets made in a factory are defective. Six months later, 20% of widgets are defective. The percentage change is (20% − 30%) / 30% = −1/3 ≈ −33.3%, while the percentage point change is −10 percentage points. In computing The method of complements is a technique used to subtract one number from another using only the addition of positive numbers. This method was commonly used in mechanical calculators, and is still used in modern computers. 
To subtract a binary number y (the subtrahend) from another number x (the minuend), the ones' complement of y is added to x and one is added to the sum. The leading digit "1" of the result is then discarded. The method of complements is especially useful in binary (radix 2) since the ones' complement is very easily obtained by inverting each bit (changing "0" to "1" and vice versa). And adding 1 to get the two's complement can be done by simulating a carry into the least significant bit. For example: 01100100 (x, equals decimal 100) - 00010110 (y, equals decimal 22) becomes the sum: 01100100 (x) + 11101001 (ones' complement of y) + 1 (to get the two's complement) —————————— 101001110 Dropping the initial "1" gives the answer: 01001110 (equals decimal 78) The teaching of subtraction in schools Methods used to teach subtraction to elementary school students vary from country to country, and within a country, different methods are adopted at different times. In what is known in the United States as traditional mathematics, a specific process is taught to students at the end of the 1st year (or during the 2nd year) for use with multi-digit whole numbers, and is extended in either the fourth or fifth grade to include decimal representations of fractional numbers. In America Almost all American schools currently teach a method of subtraction using borrowing or regrouping (the decomposition algorithm) and a system of markings called crutches. Although a method of borrowing had been known and published in textbooks previously, the use of crutches in American schools spread after William A. Brownell published a study claiming that crutches were beneficial to students using this method. This system caught on rapidly, displacing the other methods of subtraction in use in America at that time. In Europe Some European schools employ a method of subtraction called the Austrian method, also known as the additions method. There is no borrowing in this method. There are also crutches (markings to aid memory), which vary by country. Comparing the two main methods Both these methods break up the subtraction as a process of one digit subtractions by place value. Starting with a least significant digit, a subtraction of the subtrahend: sj sj−1 ... s1 from the minuend mk mk−1 ... m1, where each si and mi is a digit, proceeds by writing down m1 − s1, m2 − s2, and so forth, as long as si does not exceed mi. Otherwise, mi is increased by 10 and some other digit is modified to correct for this increase. The American method corrects by attempting to decrease the minuend digit mi+1 by one (or continuing the borrow leftwards until there is a non-zero digit from which to borrow). The European method corrects by increasing the subtrahend digit si+1 by one. Example: 704 − 512. The minuend is 704, the subtrahend is 512. The minuend digits are m3 = 7, m2 = 0 and m1 = 4. The subtrahend digits are s3 = 5, s2 = 1 and s1 = 2. Beginning at the ones place, 4 is not less than 2, so the difference 2 is written down in the result's ones place. In the tens place, 0 is less than 1, so the 0 is increased by 10, and the difference with 1, which is 9, is written down in the tens place. The American method corrects for the increase of ten by reducing the digit in the minuend's hundreds place by one. That is, the 7 is struck through and replaced by a 6. The subtraction then proceeds in the hundreds place, where 6 is not less than 5, so the difference, 1, is written down in the result's hundreds place. We are now done; the result is 192. The Austrian method does not reduce the 7 to 6. 
Rather, it increases the subtrahend hundreds digit by one. A small mark is made near or below this digit (depending on the school). Then the subtraction proceeds by asking what number, when increased by 1 and with 5 added to it, makes 7. The answer is 1, and it is written down in the result's hundreds place. There is an additional subtlety in that the student always employs a mental subtraction table in the American method. The Austrian method often encourages the student to mentally use the addition table in reverse. In the example above, rather than adding 1 to 5, getting 6, and subtracting that from 7, the student is asked to consider what number, when increased by 1 and with 5 added to it, makes 7. Subtraction by hand Austrian method Example: Subtraction from left to right Example: American method In this method, each digit of the subtrahend is subtracted from the digit above it starting from right to left. If the top number is too small to subtract the bottom number from it, we add 10 to it; this 10 is "borrowed" from the top digit to the left, which we subtract 1 from. Then we move on to subtracting the next digit and borrowing as needed, until every digit has been subtracted. Example: Trade first A variant of the American method where all borrowing is done before all subtraction. Example: Partial differences The partial differences method is different from other vertical subtraction methods because no borrowing or carrying takes place. In their place, one places plus or minus signs depending on whether the minuend is greater or smaller than the subtrahend. The sum of the partial differences is the total difference. Example: Nonvertical methods Counting up Instead of finding the difference digit by digit, one can count up the numbers between the subtrahend and the minuend. Example: 1234 − 567 can be found by the following steps: 567 + 3 = 570, 570 + 30 = 600, 600 + 400 = 1000, and 1000 + 234 = 1234. Add up the value from each step to get the total difference: 3 + 30 + 400 + 234 = 667. Breaking up the subtraction Another method that is useful for mental arithmetic is to split up the subtraction into small steps. Example: 1234 − 567 can be solved in the following way: 1234 − 500 = 734 734 − 60 = 674 674 − 7 = 667 Same change The same change method uses the fact that adding or subtracting the same number from the minuend and subtrahend does not change the answer. One simply adds the amount needed to get zeros in the subtrahend. Example: 1234 − 567 can be solved as follows: adding 33 to both numbers gives 1267 − 600 = 667.
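As a small illustration of the method of complements described in the "In computing" section above, the following sketch (the function name and the 8-bit word size are assumptions chosen purely for this example) reproduces the worked binary subtraction 100 − 22 = 78:

def subtract_by_complement(x, y, bits=8):
    # Subtract y from x by adding the ones' complement of y plus 1
    # (the two's complement) and discarding the carry out of the
    # most significant bit, as in the worked example above.
    mask = (1 << bits) - 1           # e.g. 0b11111111 for 8 bits
    ones_complement = y ^ mask       # invert each bit of y
    total = x + ones_complement + 1  # add the two's complement
    return total & mask              # drop the leading carry bit

print(subtract_by_complement(100, 22))  # prints 78

The discarded leading carry corresponds to the initial "1" dropped in the worked example; the routine works for any word size large enough to hold both operands, and assumes the minuend is not smaller than the subtrahend.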
Mathematics
Basics
null
74202
https://en.wikipedia.org/wiki/Stork
Stork
Storks are large, long-legged, long-necked wading birds with long, stout bills. They belong to the family Ciconiidae, and make up the order Ciconiiformes . Ciconiiformes previously included a number of other families, such as herons and ibises, but those families have been moved to other orders. Storks dwell in many regions and tend to live in drier habitats than the closely related herons, spoonbills and ibises; they also lack the powder down that those groups use to clean off fish slime. Bill-clattering is an important mode of communication at the nest. Many species are migratory. Most storks eat frogs, fish, insects, earthworms, small birds and small mammals. There are 20 living species of storks in six genera. Various terms are used to refer to groups of storks, two frequently used ones being a muster of storks and a phalanx of storks. Storks tend to use soaring, gliding flight, which conserves energy. Soaring requires thermal air currents. Ottomar Anschütz's famous 1884 album of photographs of storks inspired the design of Otto Lilienthal's experimental gliders of the late nineteenth century. Storks are heavy, with wide wingspans: the marabou stork, with a wingspan of and weight up to , joins the Andean condor in having the widest wingspan of all living land birds. Their nests are often very large and may be used for many years. Some nests have been known to grow to over in diameter and about in depth. All storks were once thought to be monogamous, but this is only partially true. While storks are generally socially monogamous, some species exhibit regular extra-pair breeding. Popular conceptions of storks' fidelity, serial monogamy, and doting parental care contribute to their prominence in mythology and culture, especially in western folklore as the deliverers of newborn humans. All 20 stork species have been assessed by the IUCN and carry a confident Red List status. However, the assessment for several species were based on incorrect assumptions and a general absence of sound information on stork habits. Etymology The word "stork " was first used in its current sense by at least the 12th century in Middle English. It is derived from the Old English word "storc", which itself comes from the hypothesised Proto-Germanic and ultimately the Proto-Indo-European . The name refers to the rigid posture of storks, a meaning reflected in the related word stark, which is derived from the Old English "stearc". Several species of storks are known by other common names. The jabiru is named after the Tupí-Guarani words meaning "that which has" and "swollen", referring to its thickset neck. The marabou stork is named after the Arabic word for holy man, murābiṭ, due to the perceived holy nature of the species. The adjutants are named after the military rank, referring to their stiff, military-like gait. Systematics A DNA study found that the families Ardeidae, Balaenicipitidae, Scopidae and the Threskiornithidae belong to the Pelecaniformes. This would make Ciconiidae the only group. Storks were distinct and possibly widespread by the Oligocene. Like most families of aquatic birds, storks seem to have arisen in the Palaeogene, maybe 40–50 million years ago (mya). For the fossil record of living genera, documented since the Middle Miocene (about 15 mya) at least in some cases, see the genus articles. No species or subspecies of stork is known to have gone extinct in historic times. 
A systematic literature review uncovered nearly 1,000 papers on storks, but showed most stork species to lack scientific understanding, suggesting that many species should be classified as Data Deficient on the IUCN Red List. A Ciconia bone found in a rock shelter on the island of Réunion was probably of a bird taken there as food by early settlers; no known account mentions the presence of storks on the Mascarene Islands. Phylogeny The following phylogeny is recognized by the International Ornithological Congress, partially based on de Sousa et al. (2023): Fossil storks Genus Palaeoephippiorhynchus (fossil: Early Oligocene of Fayyum, Egypt) Genus Grallavis (fossil: Early Miocene of Saint-Gérand-le-Puy, France, and Djebel Zelten, Libya) – may be same as Prociconia Ciconiidae gen. et sp. indet. (Ituzaingó Late Miocene of Paraná, Argentina) Ciconiidae gen. et sp. indet. (Puerto Madryn Late Miocene of Punta Buenos Aires, Argentina) Genus Prociconia (fossil: Late Pleistocene of Brazil) – may belong to modern genus Jabiru or Ciconia Genus Pelargosteon (fossil: Early Pleistocene of Romania) Ciconiidae gen. et sp. indet. – formerly Aquilavus/Cygnus bilinicus (fossil: Early Miocene of Břešťany, Czech Republic) cf. Leptoptilos gen. et sp. indet. – formerly L. siwalicensis (fossil: Late Miocene? – Late Pliocene of Siwalik, India) Ciconiidae gen. et sp. indet. (fossil: Late Pleistocene of San Josecito Cavern, Mexico) Ciconia nana (fossil: Pleistocene of Darling Downs, Queensland, Australia) and Ciconia louisebolesae (fossil: Oligo-Miocene of Riversleigh WHA, Queensland, Australia) The fossil genera Eociconia (Middle Eocene of China) and Ciconiopsis (Deseado Early Oligocene of Patagonia, Argentina) are often tentatively placed with this family. A "ciconiiform" fossil fragment from the Touro Passo Formation found at Arroio Touro Passo (Rio Grande do Sul, Brazil) might be of the living wood stork M. americana; it is at most of Late Pleistocene age, a few 10,000s of years. 
These bills have evolved to help openbills feed on their primary prey item, aquatic snails. Although it is sometimes reported that storks lack syrinxes and are mute, they do have syrinxes, and are capable of making some sounds, although they do not do so often. The syrinxes of storks are "variably degenerate" however, and the syringeal membranes of some species are found between tracheal rings or cartilage, an unusual arrangement shared with the ovenbirds. Distribution and habitat Storks have a nearly cosmopolitan distribution, being absent from the poles, most of North America and large parts of Australia. The centres of stork diversity are in tropical Asia and sub-Saharan Africa, with eight and six breeding species respectively. Just three species are present in the New World: wood stork, maguari stork and jabiru, which is the tallest flying bird of the Americas. Two species, white and black stork, reach Europe and western temperate Asia, while one species, Oriental stork, reaches temperate areas of eastern Asia, and one species, black-necked stork, is found in Australasia. Storks are more diverse and common in the tropics, and the species that live in temperate climates for the most part migrate to avoid the worst of winter. They are fairly diverse in their habitat requirements. Some species, particularly the Mycteria "wood storks" and Anastomus openbills, are highly dependent on water and aquatic prey, but many other species are far less dependent on this habitat type, although they will frequently make use of it. Species like the marabou and Abdim's stork will frequently be found foraging in open grasslands of savannah. Preferred habitats include flooded grasslands, light woodland, marshes and paddyfields, wet meadows, river backwaters and ponds. Many species will select shallow pools, particularly when lakes or rivers are drying out, as they concentrate prey and make it harder for prey to escape, or when monsoonal rainfall increases water depth of larger waterbodies. Some species like the woolly-necked storks and lesser adjutant storks have adapted to changing crops of tropical agricultural landscapes that enables them to remain resident despite the transformations brought about by seasonal crops. In South Africa, the woolly-necked storks have adapted to artificial feeding and now largely nest on trees in gardens with swimming pools. Less typical habitats include the dense temperate forests used by European black storks, or the rainforest habitat sought by Storm's stork in South East Asia. They generally avoid marine habitats, with the exception of the lesser adjutant, milky stork and wood stork, all of which forage in mangroves, lagoons and estuarine mudflats. A number of species, especially woolly-necked storks, black-necked storks, Asian openbills and lesser adjutant Storks in south Asia, have adapted to highly modified human habitats, for foraging and breeding. In the absence of persecution several stork species breed close to people, and species such as the marabou, greater adjutant, and white stork feed at landfill sites. Migration and movements Storks vary in their tendency towards migration. Temperate species like the white stork, black stork and Oriental stork undertake long annual migrations in the winter. The routes taken by these species have developed to avoid long distance travel across water, and from Europe this usually means flying across the Straits of Gibraltar or east across the Bosphorus and through Israel and the Sinai. 
Studies of young birds denied the chance to travel with others of their species have shown that these routes are at least partially learnt, rather than being innate as they are in passerine migrants. Migrating black storks are split between those that make stopovers on the migration between Europe and their wintering grounds in Africa, and those that do not. The Abdim's stork is another migrant, albeit one that migrates within the tropics. It breeds in northern Africa, from Senegal to the Red Sea, during the wet season, and then migrates to Southern Africa. Many species that are not regular migrants will still make smaller movements if circumstances require it; others may migrate over part of their range. This can also include regular commutes from nesting sites to feeding areas. Wood storks have been observed feeding from their breeding colony. Behaviour Feeding and diet Storks are carnivorous predators, taking a range of reptiles, small mammals, insects, fish, amphibians and other small invertebrates. Storks usually hunt for animals in shallow water. Any plant material consumed is usually by accident. Mycteria storks are specialists in feeding on aquatic vertebrates, particularly when prey is concentrated by lowering water levels or flooding into shallows. On marine mudflats and mangrove swamps in Sumatra, milky storks feed on mudskippers, probing the burrow with the bill and even the whole head into the mud. The characteristic feeding method involves standing or walking in shallow water and holding the bill submerged in the water. When contact is made with prey the bill reflexively snaps shut in 25 milliseconds, one of the fastest reactions known in any vertebrate. The reaction is able to distinguish between prey items and inanimate objects like branches, although the exact mechanism is unknown. Openbills are specialists in freshwater molluscs, particularly apple snails. They feed in small groups, and sometimes African openbills ride on the backs of hippos while foraging. Having caught a snail it will return to land or at least to the shallows to eat it. The fine tip of the bill of the openbills is used to open the snail, and the saliva has a narcotic effect, which causes the snail to relax and simplifies the process of extraction. The other genera of storks are more generalised. Ciconia storks are very generalised in their diets, and some species including Abdim's stork and marabous will feed in large flocks on swarms of locusts and at wildfires. This is why white storks and Abdim's storks are known as "grasshopper birds". Ephippiorhynchus are carnivorous though have a very diverse diet when living on human modified habitats such as agricultural landscapes. The foraging method used by the generalists is to stalk or walk across grassland or shallow water, watching for prey. Breeding Storks range from being solitary breeders through loose breeding associations to fully colonial. The jabiru, Ephippiorhynchus storks and several species of Ciconia are entirely solitary when breeding. In contrast the Mycteria storks, Abdim's stork, openbills and Leptoptilos storks breed in colonies which can range from a couple of pairs to thousands. Many of these species breed in colonies with other waterbirds, which can include other species of storks, herons and egrets, pelicans, cormorants and ibises. White storks, Oriental storks and Maguari storks are all loosely colonial, and may breed in nests that are within visual range of others of the same species, but have little to do with one another. 
They also may nest solitarily, and the reasons why they choose to nest together or apart are not understood. Storks use trees in a variety of habitats to breed including forests, cities, farmlands, and large wetlands. In culture Many ancient mythologies feature stories and legends involving storks. In Ancient Egypt, saddle-billed storks were seen as being amongst the most powerful animals and were used to represent the ba, the Ancient Egyptian conception of the soul, during the Old Kingdom. Bennu, an Egyptian deity that was later the inspiration for the phoenix, may also have been inspired by a stork, although it was more likely an ibis or heron. Greek and Roman mythology portrays storks as models of parental devotion. The 3rd century Roman writer Aelian, citing the authority of Alexander of Myndus, noted in his De natura animalium (book 3, chapter 23) that aged storks flew away to oceanic islands where they were transformed into humans as a reward for their piety towards their parents. Storks were also thought to care for their aged parents, feeding them and even transporting them, and children's books depicted them as a model of filial values. A Greek law called Pelargonia, from the Ancient Greek word pelargos for stork, required citizens to take care of their aged parents. The Greeks also held that killing a stork could be punished with death. Storks feature in several of Aesop's Fables, most notably in The Farmer and the Stork, The Fox and the Stork, and The Frogs Who Desired a King. The first fable involves a stork who is caught with a group of cranes who are eating grain in a farmer's field, with the moral that those who associate with wicked people can be held accountable for their crimes. The Fox and the Stork involves a fox who invites a stork for dinner and provides soup in a dish that the stork cannot drink from, and is in turn invited for dinner by the stork and given food in a narrow jug which he cannot access. It cautions readers to follow the principle of do no harm. The third fable involves a group of frogs that are dissatisfied with the king that Zeus has given them, an inanimate log, and who are then punished with a new King Stork (a water-snake in some versions) who eats the frogs. King Stork has subsequently entered the English language as a term for a particularly tyrannical ruler. Associations with fertility According to European folklore, the white stork is responsible for bringing babies to new parents. The legend is very ancient, but was popularised by an 1839 Hans Christian Andersen story called "The Storks". German folklore held that storks found babies in caves or marshes and brought them to households in a basket on their backs or held in their beaks. These caves contained adebarsteine or "stork stones". The babies would then be given to the mother or dropped down the chimney. Households would notify when they wanted children by placing sweets for the stork on the window sill. Subsequently, the folklore has spread around the world to the Philippines and countries in South America. Birthmarks on the back of the head of newborn babies, nevus flammeus nuchae, are sometimes referred to as stork-bite. In Slavic mythology and pagan religion, storks were thought to carry unborn souls from Vyraj to Earth in spring and summer. This belief still persists in the modern folk culture of many Slavic countries, in the simplified child story that "storks bring children into the world". 
The fable played a famous role in the historical development of psychoanalysis: the name ‘chimney sweeping’, which the very first patient gave to her talking cure, is a free association with the place through which the bird was said to bring babies into the house. Psychoanalyst Marvin Margolis suggests the enduring nature of the stork fable of the newborn is linked to its addressing a psychological need, in that it allays the discomfort of discussing sex and procreation with children. Birds have long been associated with maternal symbols, from pagan goddesses such as Juno to the Holy Ghost, and the stork may have been chosen for its white plumage (depicting purity), size, and flight at high altitude (likened to flying between Earth and Heaven). There were negative aspects to stork folklore as well; a Polish folktale relates how God made the stork's plumage white, while the Devil gave it black wings, imbuing it with both good and evil impulses. They were also associated with handicapped or stillborn babies in Germany, explained as the stork having dropped the baby en route to the household, or as revenge or punishment for past wrongdoing. A mother who was confined to bed around the time of childbirth was said to have been "bitten" by the stork. In Denmark, storks were said to toss a nestling off the nest and then an egg in successive years. In medieval England, storks were also associated with adultery, possibly inspired by their courtship rituals. Their preening and posture saw them linked with the attribute of self-conceit. Children of African American slaves were sometimes told that white babies were brought by storks, while black babies were born from buzzard eggs. As food Storks have never been a particularly common food, but occasionally featured in medieval banquets. They may also have been eaten in Ancient Egypt.
Biology and health sciences
Pelecanimorphae
null
74204
https://en.wikipedia.org/wiki/Locust
Locust
Locusts (derived from the Latin locusta, locust or lobster) are various species of short-horned grasshoppers in the family Acrididae that have a swarming phase. These insects are usually solitary, but under certain circumstances they become more abundant and change their behaviour and habits, becoming gregarious. No taxonomic distinction is made between locust and grasshopper species; the basis for the definition is whether a species forms swarms under intermittently suitable conditions; this has evolved independently in multiple lineages, comprising at least 18 genera in 5 different subfamilies. Normally, these grasshoppers are innocuous, their numbers are low, and they do not pose a major economic threat to agriculture. However, under suitable conditions of drought followed by rapid vegetation growth, serotonin in their brains triggers dramatic changes: they start to breed abundantly, becoming gregarious and nomadic (loosely described as migratory) when their populations become dense enough. They form bands of wingless nymphs that later become swarms of winged adults. Both the bands and the swarms move around, rapidly strip fields, and damage crops. The adults are powerful fliers; they can travel great distances, consuming most of the green vegetation wherever the swarm settles. Locusts have formed plagues since prehistory. The ancient Egyptians carved them on their tombs and the insects are mentioned in the Iliad, the Mahabharata, the Bible and Quran. Swarms have devastated crops and have caused famines and human migrations. More recently, changes in agricultural practices and better surveillance of locust breeding grounds have allowed control measures at an early stage. Traditional locust control uses insecticides from the ground or air, but newer biological control methods are proving effective. Swarming behaviour decreased in the 20th century, but despite modern surveillance and control methods, swarms can still form; when suitable weather conditions occur and vigilance lapses, plagues can occur. Locusts are large insects and convenient for research and classroom study of zoology. They are edible by humans. They have been eaten throughout history and are considered a delicacy in many countries. Swarming grasshoppers Locusts are the swarming phase of certain species of short-horned grasshoppers in the family Acrididae. These insects are usually solitary, but under certain circumstances become more abundant and change their behaviour and habits, becoming gregarious. No taxonomic distinction is made between locust and grasshopper species; the basis for the definition is whether a species forms swarms under intermittently suitable conditions. In English, the term "locust" is used for grasshopper species that change morphologically and behaviourally on crowding, forming swarms that develop from bands of immature stages called hoppers. The change is described as density-dependent phenotypic plasticity. These changes are examples of phase polyphenism; they were first analysed and described by Boris Uvarov, who was instrumental in setting up the Anti-Locust Research Centre. He made his discoveries during his studies of the migratory locust in the Caucasus, whose solitary and gregarious phases had previously been thought to be separate species (Locusta migratoria and L. danica L.). He designated the two phases as solitaria and gregaria. These are called statary and migratory morphs, though strictly speaking, their swarms are nomadic rather than migratory. 
Charles Valentine Riley and Norman Criddle were involved in achieving the understanding and control of locusts. Swarming behaviour is a response to overcrowding. Increased tactile stimulation of the hind legs causes an increase in levels of serotonin. This causes the locust to change colour, eat much more, and breed much more easily. The transformation of the locust to the swarming form is induced by several contacts per minute over a four-hour period. A large swarm can consist of billions of locusts spread out over an area of thousands of square kilometres, with a population of up to 80 million per square kilometre (200 million per square mile). When desert locusts meet, their nervous systems release serotonin, which causes them to become mutually attracted, a prerequisite for swarming. The formation of initial bands of gregarious hoppers is called an "outbreak"; when these join into larger groups, the event is known as an "upsurge". Continuing agglomerations of upsurges on a regional level originating from a number of entirely separate breeding locations are known as "plagues". During outbreaks and the early stages of upsurges, only part of the locust population becomes gregarious, with scattered bands of hoppers spread out over a large area. As time goes by, the insects become more cohesive and the bands become concentrated in a smaller area. In the desert locust plague in Africa, the Middle East, and Asia that lasted from 1966 to 1969, the number of locusts increased from two to 30 billion over two generations, but the area covered decreased from over to . Solitary and gregarious phases One of the greatest differences between the solitary and gregarious phases is behavioural. The gregaria nymphs are attracted to each other, this being seen as early as the second instar. They soon form bands of many thousands of individuals. These groups behave like cohesive units and move across the landscape, mostly downhill, but making their way around barriers and merging with other bands. The attraction between the insects involves visual and olfactory cues. The bands seem to navigate using the sun. They pause to feed at intervals before continuing on, and may cover tens of kilometres over a few weeks. Locusts in the gregarious phase differ in morphology and development. In the desert locust and the migratory locust, for example, the gregaria nymphs become darker with strongly contrasting yellow and black markings, they grow larger, and have a longer nymphal period; the adults are larger with different body proportions, less sexual dimorphism, and higher metabolic rates; they mature more rapidly and start reproducing earlier, but have lower levels of fecundity. The mutual attraction between individual insects continues into adulthood, and they continue to act as a cohesive group. Individuals that get detached from a swarm fly back into the mass. Others that get left behind after feeding take off to rejoin the swarm when it passes overhead. When individuals at the front of the swarm settle to feed, others fly past overhead and settle in their turn, the whole swarm acting like a rolling unit with an ever-changing leading edge. The locusts spend much time on the ground feeding and resting, moving on when the vegetation is exhausted. They may then fly a considerable distance before settling in a location where transitory rainfall has caused a green flush of new growth. 
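To give a sense of the scale implied by the swarm figures quoted earlier (thousands of square kilometres at up to 80 million locusts per square kilometre), the short Python sketch below combines them in a back-of-envelope calculation. The swarm area and the per-locust daily intake (a desert locust is often quoted as eating roughly its own body weight, about 2 g, of vegetation per day) are illustrative assumptions, not figures taken from this article.

# Rough scale of a large locust swarm, using the density quoted above.
# The swarm area and the per-locust daily intake are illustrative assumptions.

area_km2 = 1000                  # assumed: "thousands of square kilometres" (lower end)
density_per_km2 = 80e6           # from the text: up to 80 million locusts per km2

population = area_km2 * density_per_km2
print(f"swarm population: {population:.1e} locusts")        # about 8e+10, i.e. tens of billions

intake_per_locust_g = 2.0        # assumed: roughly its own body weight (~2 g) per day
daily_intake_tonnes = population * intake_per_locust_g / 1e6
print(f"approximate daily consumption: {daily_intake_tonnes:,.0f} tonnes of vegetation")

Under these assumptions such a swarm would strip on the order of 160,000 tonnes of vegetation per day, which is consistent with the description of swarms rapidly stripping fields wherever they settle.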
Distribution and diversity Several species of grasshoppers swarm as locusts in different parts of the world, on all continents except Antarctica. For example, the Australian plague locust (Chortoicetes terminifera) swarms across Australia. The desert locust (Schistocerca gregaria) is probably the best known species owing to its wide distribution (North Africa, Middle East, and Indian subcontinent) and its ability to migrate over long distances. A major infestation covered much of western Africa from 2003 to 2005, after unusually heavy rain set up favourable ecological conditions for swarming. The first outbreaks occurred in Mauritania, Mali, Niger, and Sudan in 2003. The rain allowed swarms to develop and move north to Morocco and Algeria, threatening croplands. Swarms crossed Africa, appearing in Egypt, Jordan and Israel, the first time in those countries for 50 years. The cost of handling the infestation was put at US$122 million, and the damage to crops at up to $2.5 billion. The migratory locust (Locusta migratoria), sometimes classified into up to 10 subspecies, swarms in Africa, Asia, Australia, and New Zealand, but has become rare in Europe. In 2013, the Madagascan form of the migratory locust formed many swarms of over a billion insects, reaching "plague" status and covering about half the country by March 2013. Species such as the Senegalese grasshopper (Oedaleus senegalensis) and the African rice grasshopper (Hieroglyphus daganensis), both from the Sahel, often display locust-like behaviour and change morphologically on crowding. North America is the only continent besides Antarctica without a native locust species. The Rocky Mountain locust was formerly one of the most significant insect pests there, but it became extinct in 1902. In the 1930s, during the Dust Bowl, a second species of North American locust, the High Plains locust (Dissosteira longipennis), reached plague proportions in the American Midwest. Today, the High Plains locust is a rare species, leaving North America with no regularly swarming locusts. Evolution The fossilized wing of an indeterminate locust has been found in Early Oligocene-aged sediments of the Pabdeh Formation in Iran, which were deposited in a deep marine environment. The locust was likely migrating across the early Paratethys Sea, between the emergent Arabian Peninsula and central Iran, which were still separated by large areas of deep ocean at this time. This suggests that trans-oceanic locust migrations have been occurring for at least 30 million years, likely facilitated by the spread of grasslands at the time. Interaction with humans and animals Ancient times Study of literature shows how pervasive plagues of locusts were over the course of history. The insects arrived unexpectedly, often after a change of wind direction or weather, and the consequences were devastating. The Ancient Egyptians carved locusts on tombs in the period 2470 to 2220 BC. A devastating plague in Egypt is mentioned in the Book of Exodus in the Bible. A locust plague is mentioned in the Indian Mahabharata. The Iliad mentions locusts taking to the wing to escape fire. Plagues of locusts are mentioned in the Quran. In the ninth century BC, the Chinese authorities appointed anti-locust officers. In the New Testament, John the Baptist was said to survive in the wilderness on locusts and wild honey; and human-headed locusts appear in the Book of Revelation. Aristotle studied locusts and their breeding habits, and Livy recorded a devastating plague in Capua in 203 BC.
He mentioned human epidemics following locust plagues, which he associated with the stench from the putrefying corpses; the linking of human disease outbreaks to locust plagues was widespread. A pestilence in the northwestern provinces of China in 311 AD that killed 98% of the population locally was blamed on locusts, and may have been caused by an increase in numbers of rats (and their fleas) that devoured the locust carcasses. Recent times During the last two millennia, desert locust plagues have appeared sporadically in Africa, the Middle East, and Europe. Other species of locusts caused havoc in North and South America, Asia, and Australasia; in China, 173 outbreaks over 1924 years. The Bombay locust (Nomadacris succincta) was a major pest in India and southeastern Asia in the 18th and 19th centuries, but has seldom swarmed since the last plague in 1908. In the spring of 1747, locusts arrived outside Damascus, eating the majority of the crops and vegetation of the surrounding countryside. One local barber, Ahmad al-Budayri, recalled the locusts "came like a black cloud. They covered everything: the trees and the crops. May God Almighty save us!" The extinction of the Rocky Mountain locust has been a source of puzzlement. It had swarmed throughout the west of the United States and parts of Canada in the 19th century. Albert's swarm of 1875 was estimated to contain 12.5 trillion insects covering an area of (larger than the state of California) and to weigh 27.5 million tons. The last specimen was seen alive in Canada in 1902. Recent research suggests the breeding grounds of this insect in the valleys of the Rocky Mountains came under sustained agricultural development during the large influx of gold miners, destroying the underground eggs of the locust. The 1915 infestation across Palestine and Syria was one of the main contributors to the famine in Lebanon, which lasted from 1915 to 1918 and during which around 200,000 people died. Plagues became less common in the 20th century, but they continue to occur when the conditions are met. Monitoring Early intervention to prevent large locust swarms is more successful than later action once swarms have built up. The means to control locust populations is now available, but organisational, financial, and political problems may be difficult to overcome. Monitoring is the key to early detection and eradication. Ideally, a sufficient proportion of nomadic bands can be killed with insecticide before their swarming phase. This may be possible in richer countries like Morocco and Saudi Arabia, but neighbouring poorer countries such as Mauritania and Yemen lack the resources and may breed locust swarms that threaten the whole region. Several organisations around the world monitor the threat from locusts. They provide forecasts detailing regions likely to suffer from locust plagues in the near future. In Australia, this service is provided by the Australian Plague Locust Commission. It has been very successful in dealing with developing outbreaks, but has the great advantage of having a defined area to monitor and defend without locust invasions from elsewhere. In Central and Southern Africa, the service is provided by the International Locust Control Organization for Central and Southern Africa. In West and Northwest Africa, the service is co-ordinated by the Food and Agriculture Organization's Commission for Controlling the Desert Locust in the Western Region, and executed by locust control agencies belonging to each country concerned.
The FAO monitors the situation in the Caucasus and Central Asia, where over 25 million hectares of cultivated land are under threat. In February 2020, in an effort to end massive locust outbreaks, India decided to use drones and special equipment to monitor locusts and spray insecticides. Control Historically, people could do little to protect their crops from locusts, although eating the insects may have been some compensation. By the early 20th century, efforts were made to disrupt the development of the insects by cultivating the soil where eggs were laid, collecting hoppers with catching machines, killing them with flamethrowers, trapping them in ditches, and crushing them with rollers and other mechanical methods. By the 1950s, the organochloride dieldrin was found to be an extremely effective insecticide, but it was later banned in most countries because of its persistence in the environment and its accumulation in the food chain. In years when locust control is needed, the hoppers are targeted early by applying water-based contact pesticides from tractor-based sprayers. This is effective but slow and labour-intensive; a preferable method is spraying concentrated insecticide from aircraft over the insects or vegetation. The use of ultralow-volume spraying of contact pesticides from aircraft in overlapping swathes is effective against nomadic bands and can be used to treat large areas of land swiftly. Other modern technologies for planning locust control include GPS, GIS tools, and satellite imagery with rapid computer data management and analysis. A biological pesticide to control locusts was tested across Africa by a multinational team in 1997. Dried spores of the fungus Metarhizium acridum, sprayed in breeding areas, pierce the locust exoskeleton on germination and invade the body cavity, causing death. The fungus is passed from insect to insect and persists in the area, making repeated treatments unnecessary. This approach to locust control was used in Tanzania in 2009 to treat around 10,000 hectares in the Iku-Katavi National Park infested with adult locusts. The outbreak was contained without harm to the local elephants, hippopotamuses, and giraffes. As experimental models The locust is large and easy to breed and rear, and is used as an experimental model in research studies. It has been used in evolutionary biology research and to test the generalizability of conclusions reached about test organisms such as the fruit fly (Drosophila) and the housefly (Musca). It is a suitable school laboratory animal because of its robustness and ease of breeding and handling. At Tel Aviv University, scientists have been exploiting the acute sense of smell of the locust's antennae to detect different odors in various technologies. As food Locusts have been used as food throughout history. They are considered meat. Several cultures throughout the world consume insects, and locusts are considered a delicacy in many African, Middle Eastern, and Asian countries. They can be cooked in many ways, but are often fried, smoked, or dried. The Bible records that John the Baptist ate locusts and wild honey () while living in the wilderness. Attempts have been made to explain the text to mean ascetic vegetarian food such as carob beans, but the plain meaning of the Greek akrides is locust. The Torah prohibits the use of most insects as food, but it permits consuming certain types of locust; specifically, those that are red, yellow, or spotted grey. Islamic jurisprudence deems eating locusts to be halal.
The Prophet Muhammad was reported to have eaten locusts during a military raid with his companions. Locusts are eaten in the Arabian Peninsula, including Saudi Arabia. In 2014, consumption of locusts spiked around Ramadan, especially in the Al-Qassim Region, since many Saudis believe they are healthy to eat, but the Saudi Ministry of Health warned that pesticides made them unsafe. Yemenis also consume locusts, and expressed discontent over governmental plans to use pesticides against them. ʻAbd al-Salâm Shabînî described a locust recipe from Morocco. 19th-century European travellers observed Arabs in Arabia, Egypt, and Morocco selling, cooking, and eating locusts. They reported that locusts were eaten by Arabs in Egypt, in Palestine around the River Jordan, in Arabia, and in Morocco, while Syrian peasants did not eat them. In the Haouran region, fellahs living in poverty and suffering from famine ate locusts after removing the guts and head, while Bedouins swallowed locusts whole. Syrians, Copts, Greeks, Armenians, other Christians, and Arabs themselves reported that locusts were eaten frequently in Arabia, and one Arab described to a European traveller the different types of locusts that were favoured as food there. Persians use the anti-Arab racial slur Arabe malakh-khor (, literally "locust eater Arab") against Arabs. Locusts yield about five times more edible protein per unit of fodder than cattle, and produce lower levels of greenhouse gases in the process. The feed conversion rate of orthopterans is 1.7 kg/kg, while for beef it is typically about 10 kg/kg. The protein content in fresh weight is between 13 and 28 g / 100 g for adult locust, 14–18 g / 100 g for larvae, as compared to 19–26 g / 100 g for beef. The calculated protein efficiency ratio is low, with 1.69 for locust protein compared to 2.5 for standard casein. A serving of 100 g of desert locust provides 11.5 g of fat, 53.5% of which is unsaturated, and 286 mg of cholesterol. Among the fatty acids, palmitoleic, oleic, and linolenic acids were found to be the most abundant. Varying amounts of potassium, sodium, phosphorus, calcium, magnesium, iron, and zinc were present.
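The "about five times" figure quoted above can be checked directly from the feed conversion rates and protein contents given; the short Python sketch below does so using the midpoints of the quoted protein ranges. It deliberately ignores differences in edible fraction and water loss between a whole locust and a beef carcass, so it is only a consistency check, not a nutritional analysis.

# Consistency check of the "about five times more edible protein per unit
# of fodder" claim, using the figures quoted above. Midpoint protein
# contents are taken from the quoted ranges; edible-fraction differences
# are ignored.

feed_per_kg_gain = {"locust": 1.7, "beef": 10.0}     # kg of feed per kg of weight gain
protein_fraction = {"locust": 0.205, "beef": 0.225}  # fresh-weight protein (range midpoints)

protein_per_kg_feed = {k: protein_fraction[k] / feed_per_kg_gain[k] for k in feed_per_kg_gain}

for animal, kg_protein in sorted(protein_per_kg_feed.items()):
    print(f"{animal}: {kg_protein * 1000:.0f} g protein per kg of feed")

ratio = protein_per_kg_feed["locust"] / protein_per_kg_feed["beef"]
print(f"locust/beef ratio: about {ratio:.1f}x")      # roughly 5x, matching the text

Run as written, this gives roughly 120 g of protein per kg of feed for locusts against roughly 22 g for beef, a ratio of about 5.4, consistent with the stated factor of about five.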
Biology and health sciences
Orthoptera
null
74223
https://en.wikipedia.org/wiki/Ulmus%20americana
Ulmus americana
Ulmus americana, generally known as the American elm or, less commonly, as the white elm or water elm, is a species of elm native to eastern North America. The trees can live for several hundred years. It is a very hardy species that can withstand low winter temperatures, but it is affected by Dutch elm disease. The wood was seldom utilized until the advent of mechanical sawing. It is the state tree of Massachusetts and North Dakota. Description The American elm is a deciduous tree which, under ideal conditions, can grow to heights of . The trunk may have a diameter at breast height (dbh) of more than , supporting a high, spreading umbrella-like canopy. The leaves are alternate, long, with double-serrate margins and an oblique base. The leaves turn yellow in the fall. The perfect flowers are small, purple-brown and, being wind-pollinated, apetalous. The flowers are also protogynous, the female parts maturing before the male, thus reducing, but not eliminating, self-fertilization, and emerge in early spring before the leaves. The fruit is a flat samara long by 1.5 cm broad, with a circular papery wing surrounding the single seed. As in the closely related Ulmus laevis (European white elm), the flowers and seeds are borne on 1–3 cm long stems. American elm is wholly insensitive to daylight length (photoperiod), and will continue to grow well into autumn until injured by frost. Ploidy is 2n = 56, or more rarely, 2n = 28. For over 80 years, U. americana had been identified as a tetraploid, i.e. having double the usual number of chromosomes, making it unique within the genus. However, a study published in 2011 by the Agricultural Research Service of the United States Department of Agriculture revealed that about 20% of wild American elms are diploid and may even constitute another species. Moreover, several triploid trees known only in cultivation, such as 'Jefferson', are possessed of a high degree of resistance to DED, which ravaged American elms in the 20th century. This suggests that the diploid parent trees, which have markedly smaller cells than the tetraploid, may too be highly resistant to the disease. Taxonomy Ulmus americana was first described and named by Carl Linnaeus in his Species Plantarum, published in 1753. No subspecies or varieties are currently recognized. Distribution and habitat The American elm is native to eastern North America, occurring from Nova Scotia west to Alberta and Montana, and south to Florida and central Texas. It is an extremely hardy tree that can withstand winter temperatures as low as . The species occurs naturally in an assortment of habitats, most notably rich bottomlands, floodplains, stream banks, and swampy ground, although it also often thrives on hillsides, uplands and other well-drained soils. On more elevated terrain, as in the Appalachian Mountains, it is most often found along rivers. The species' wind-dispersed seeds enable it to spread rapidly as suitable areas of habitat become available. American elm fruits in late spring (which can be as early as February and as late as June depending on the climate), the seeds usually germinating immediately, with no cold stratification needed (occasionally some might remain dormant until the following year). The species attains its greatest growth potential in the Northeastern US, while elms in the Deep South and Texas grow much smaller and have shorter lifespans, although conversely their survival rate in the latter regions is higher owing to the climate being less favorable to the spread of DED. 
In the United States, the American elm is a principal member of four major forest cover types: black ash-American elm-red maple; silver maple-American elm; sugarberry-American elm-green ash; and sycamore-sweetgum-American elm, with the first two of these types also occurring in Canada. A sugar maple-ironwood-American elm cover type occurs on some hilltops near Témiscaming, Quebec. Ecology The leaves of the American elm serve as food for the larvae of a number of species of Lepidoptera. These include such butterflies as the Eastern Comma (Polygonia comma), Question Mark (Polygonia interrogationis), Mourning Cloak (Nymphalis antiopa), Painted Lady (Vanessa cardui) and Red-spotted Purple (Limenitis arthemis astyanax), as well as such moths as the Columbian Silkmoth (Hyalophora columbia) and the Banded Tussock Moth (Pale Tiger Moth) (Halysidota tessellaris). Pests and diseases The American elm is susceptible to Dutch Elm Disease and to elm yellows. In North America, there are three species of elm bark beetles: one native, Hylurgopinus rufipes ("native elm bark beetle"); and two invasive, Scolytus multistriatus ("smaller European elm bark beetle") and Scolytus schevyrewi ("banded elm bark beetle"). Although intensive feeding by elm bark beetles can kill weakened trees, their main impact is as vectors of DED. American elm is also moderately preferred for feeding and reproduction by the adult elm leaf beetle Xanthogaleruca luteola and highly preferred for feeding by the Japanese beetle Popillia japonica in the United States. U. americana is also the most susceptible of all the elms to verticillium wilt, whose external symptoms closely mimic those of DED. However, the condition is far less serious, and afflicted trees should recover the following year. Dutch elm disease Dutch elm disease (DED) is a fungal disease that has ravaged the American elm, causing catastrophic die-offs in cities across the range. It has been estimated that only approximately 1 in 100,000 American elm trees is DED-tolerant, most known survivors simply having escaped exposure to the disease. However, in some areas still not infested by DED, the American elm continues to thrive, notably in Florida, Alberta and British Columbia. There is a notable grove of old American elm trees in Manhattan's Central Park. The trees there were apparently spared because of the grove's isolation in such an intensely urban setting. The American elm is particularly susceptible to disease because the period of infection often coincides with the period, approximately 30 days, of rapid terminal growth when new springwood vessels are fully functional. Spores introduced outside of this period remain largely static within the xylem and are thus relatively ineffective. The American elm's biology in some ways has helped to spare it from obliteration by DED, in contrast to what happened to the American chestnut with the chestnut blight. The elm's seeds are largely wind-dispersed, and the tree grows quickly and begins bearing seeds at a young age. It grows well along roads or railroad tracks, and in abandoned lots and other disturbed areas, where it is highly tolerant of most stress factors. Elms have been able to survive and to reproduce in areas where the disease had eliminated old trees, although most of these young elms eventually succumb to the disease at a relatively young age. 
There is some reason to hope that these elms will preserve the genetic diversity of the original population, and that they eventually will hybridize with DED-resistant varieties that have been developed or that occur naturally. After 20 years of research, American scientists first developed DED-resistant strains of elms in the late 1990s. Elms in forest and other natural areas have been less affected by DED than trees in urban environments due to lower environmental stress from pollution and soil compaction and due to occurring in smaller, more isolated populations. Fungicidal injections can be administered to valuable American elms, to prevent infection. Such injections generally are effective as a preventive measure for up to three years when performed before any symptoms have appeared, but may be ineffective once the disease is evident. Cultivation In the 19th and early 20th century, American elm was a common street and park tree owing to its tolerance of urban conditions, rapid growth, and graceful form. This however led to extreme overplanting of the species, especially to form living archways over streets, which ultimately produced an unhealthy monoculture of elms that had no resistance to disease and pests. Elms do not naturally form pure stands and trees used in landscaping were grown from a handful of cultivars, causing extremely low genetic diversity. These trees' rapid growth and longevity, leading to great size within decades, made them popular before the advent of DED. Ohio botanist William B. Werthner, discussing the contrast between open-grown and forest-grown American elms, noted that: "In the open, with an abundance of air and light, the main trunk divides into several leading branches which leave the trunk at a sharp angle and continue to grow upward, gradually diverging, dividing and subdividing into long, flexible branchlets whose ends, at last, float lightly in the air, giving the tree a round, somewhat flattened top of beautifully regular proportions and characteristically fine twiggery." It is this distinctive growth form that is so valued in the open-grown American elms of street plantings, lawns, and parks; along most narrower streets, elms planted on opposite sides arch and blend together into a leafy canopy over the pavement. However, elms can assume many different sizes and forms depending on the location and climate zone. In 1926 the Klehm Nurseries of Arlington Heights, Illinois, wrote: "American Elms grown in the regular way from seedlings show extreme variability, growing up into trees of all shapes, some of them being very slow in growth while others are moderately rapid in development. The shapes run all the way from the true open excurrent growth to globular, or flat-topped, or pendant. As regards foliage, the leaves are from small to medium large, some shedding early and others late. This condition makes it difficult for the landscape architect to choose just the right trees to obtain the effect desired." The classic vase-shaped elm was mainly the result of selective breeding of a few cultivars and is much less likely to occur in the wild. American elms have been planted in North America beyond its natural range as far north as central Alberta. It also survives low desert heat at Phoenix, Arizona. Introductions across the Atlantic rarely prospered, even before the outbreak of DED. Introduced to the UK by James Gordon in 1752, the American elm was noted to be far more susceptible to insect foliage damage than native elms. 
The tree was propagated and marketed in the UK by the Hillier & Sons nursery, Winchester, Hampshire, from 1945, with 450 sold in the period 1962 to 1977, when production ceased with the advent of the more virulent form of Dutch elm disease. Introduced to Australasia, the tree was listed by Australian nurseries in the early 20th century. It is known to have been planted along the Avenue of Honour at Ballarat, Victoria, and the Avenue of Honour in Bacchus Marsh, Victoria. In addition, a heritage-listed planting of American elms can be found along Grant Crescent in Griffith, Australian Capital Territory. American elms are only rarely found in New Zealand. Cultivars Numerous cultivars have been raised, originally for their aesthetic merit but more recently for their resistance to Dutch elm disease. The total number of named cultivars is circa 45, at least 18 of which have probably been lost to cultivation as a consequence of DED or other factors. The disease-resistant selections made available to commerce to date include 'Valley Forge', 'New Harmony', 'Princeton', 'Jefferson', 'Lewis & Clark', 'Miller Park', 'St. Croix', 'Endurance', and a set of six different clones collectively known as 'American Liberty'. The United States National Arboretum released 'Valley Forge' and 'New Harmony' in late 1995, after screening tests performed in 1992–1993 showed both had unusually high levels of resistance to DED. 'Valley Forge' performed especially well in these tests. 'Princeton' has been in occasional cultivation since the 1920s. 'Princeton' gained renewed attention after its performance in the 1992–1993 screening tests showed that it also had a high degree of disease resistance. A later test performed in 2002–2003 confirmed the disease resistance of 'Princeton', 'Valley Forge' and 'New Harmony', as well as that of 'Jefferson'. Thus far, plantings of these four varieties generally appear to be successful. In 2005, approximately 90 'Princeton' elms were planted along Pennsylvania Avenue in front of the White House in Washington, D.C. The trees, whose maintenance the National Park Service (NPS) manages, remain healthy and are thriving. However, it has been noted that U. americana cultivars are not recommended for more than singular plantings as they have unresolved DED and elm yellows concerns. It has also been noted that monoculture plantings of U. americana cultivars, such as those along Pennsylvania Avenue, have disproportionate vulnerabilities to disease. Further, long-term studies of 'Princeton' in Europe and the United States have suggested that the cultivar's resistance to DED may be limited (see Pests and diseases of 'Princeton'). The National Elm Trial evaluated 19 elm cultivars commercially available in the United States in scientific plantings throughout the nation to assess and compare the strengths and weaknesses of each. The trial, which started in 2005, lasted for ten years. Based on the trial's final ratings, the preferred cultivars of U. americana are 'New Harmony' and 'Princeton'. 'Jefferson' was released to wholesale nurseries in 2004 and is becoming increasingly available for planting. However, 'Jefferson' has not been widely tested beyond Washington, D.C. The National Elm Trial provided no data on 'Jefferson' because an error in tree identification had occurred earlier in the nursery trade. The error may still be causing nurseries to sell 'Princeton' elms that are mislabeled as 'Jefferson', although one can distinguish between the two cultivars as the trees mature.
In 2007, the 'Elm Recovery Project' from the University of Guelph in Ontario, Canada, reported that cuttings from healthy surviving old elms surveyed across Ontario had been grown to produce a bank of resistant trees, isolated for selective breeding of highly resistant cultivars. In 1993, Mariam B. Sticklen and James L. Sherald reported the results of NPS-funded experiments conducted at Michigan State University in East Lansing that were designed to apply genetic engineering techniques to the development of DED-resistant strains of American elm trees. In 2007, AE Newhouse and F Schrodt of the State University of New York College of Environmental Science and Forestry in Syracuse reported that young transgenic American elm trees had shown reduced DED symptoms and normal mycorrhizal colonization. Hybrids and hybrid cultivars Ulmus 'Rebella' (U. americana × U. parvifolia) Thousands of attempts to cross the American elm with the Siberian elm U. pumila failed. Attempts at the Arnold Arboretum using ten other American, European and Asiatic species also ended in failure, attributed to the differences in ploidy and operational dichogamy, although the ploidy factor has been discounted by other authorities. Success was eventually achieved with the autumn-flowering Chinese elm Ulmus parvifolia by the late Prof. Eugene Smalley towards the end of his career at the University of Wisconsin–Madison after he overcame the problem of keeping Chinese elm pollen alive until spring. Only one of the hybrid clones was commercially released, as 'Rebella' in 2011 by the German nursery Eisele GmbH; the clone is not available in the United States. Other artificial hybridizations with American elm are rare, and now regarded with suspicion. Two such alleged successes by the nursery trade were 'Hamburg', and 'Kansas Hybrid', both with Siberian elm Ulmus pumila. However, given the repeated failure with the two species by research institutions, it is now believed that the "American elm" in question was more likely to have been the red elm, Ulmus rubra. Uses Wood The American elm's wood is coarse, hard, and tough, with interlacing, contorted fibers that make it difficult to split or chop, and cause it to warp after sawing. Accordingly, the wood originally had few uses, save for making hubs for wagon wheels. Later, with the advent of mechanical sawing, American elm wood was used for barrel staves, trunk-slats, and hoop-poles, and subsequently became fundamental to the manufacture of wooden automobile bodies, with the intricate fibers holding screws unusually well. Pioneer and traditional uses Young twigs and branchlets of the American elm have tough, fibrous bark that has been used as a tying and binding material, even for rope swings for children, and also for making whips. In culture Mary Eleanor Wilkins Freeman, in her 1903 book of short stories, Six Trees, wrote of the American elm: On 21 March 1941 the American elm was made the state tree of Massachusetts. The designation was in commemoration of the fact that George Washington reputedly took command of the Continental Army under an elm. Notable trees A number of mostly small to medium-sized American elms now survive in woodlands, suburban areas, and occasionally cities, where the survivors have often been relatively isolated from other elms and thus spared a severe exposure to the fungus. 
For example, in Central Park and Tompkins Square Park in New York City, stands of several large elms originally planted by Frederick Law Olmsted survive because of their isolation from neighboring areas in New York where there had been heavy mortality. The Olmsted-designed park system in Buffalo, New York, did not fare as well. A row of mature American elms lines Central Park along the entire length of Fifth Avenue from 59th to 110th Streets. In Akron, Ohio, there is a very old elm tree that has not been infected. In historical areas of Philadelphia, Pennsylvania, there are also a few mature American elms still standing — notably in Independence Square and the Quadrangle at the University of Pennsylvania, and also at the nearby campuses of Haverford College, Swarthmore College, and Pennsylvania State University, believed to be the largest remaining stand in the country. There are several large American Elm trees in western Massachusetts. A large specimen, which stands on Summer Street in the Berkshire County town of Lanesborough, Massachusetts, has been kept alive by antifungal treatments. Rutgers University has preserved 55 mature elms on and in the vicinity of Voorhees Mall on the College Avenue Campus in New Brunswick, New Jersey in addition to seven disease-resistant trees that have been planted in this area of the campus in recent years. The largest surviving urban forest of American elms in North America is believed to be in the city of Winnipeg, Manitoba, Canada, where close to 200,000 elms remain. The city of Winnipeg spends $3 million annually to aggressively combat the disease utilizing Dursban Turf and the Dutch Trig vaccine, losing 1,500–4,000 trees per year. Governmental agencies, educational institutions or other organizations in most of the states that are within the United States maintain lists of champion or big trees that describe the locations and characteristics of those states' largest American elm trees (see List of state champion American elm trees). The current U.S. national champion American elm tree is located in Iberville Parish, Louisiana. When measured in 2010, the tree had a trunk circumference of , a height of and an average crown spread of . The current Tree Register of the British Isles (TROBI) champion grows in Avondale Forest near Rathdrum, County Wicklow, Ireland. The tree had a height of and a dbh of (circumference of ) when measured in 2000. The tree replaced on the register a larger champion located in Woodvale Cemetery in Sussex, England, which in 1988 had a height of and a diameter of or circumference of . A prime example of the species was the Sauble Elm, which grew beside the banks of the Sauble River in Ontario, Canada, to a height of 43 m (140 ft), with a dbh of before succumbing to DED; when it was felled in 1968, a tree-ring count established that it had germinated in 1701. Other large or otherwise significant American elm trees have included: Treaty Elm The Treaty Elm, Philadelphia, Pennsylvania. In what is now Penn Treaty Park, the founder of Pennsylvania, William Penn, is said to have entered into a treaty of peace in 1683 with the native Lenape Turtle Clan under a picturesque elm tree immortalized in a painting by Benjamin West. West made the tree, already a local landmark, famous by incorporating it into his painting after hearing legends (of unknown veracity) about the tree being the location of the treaty. No documentary evidence exists of any treaty Penn signed beneath a particular tree. 
On March 6, 1810, a great storm blew the tree down. Measurements taken at the time showed it to have a circumference of , and its age was estimated to be 280 years. Wood from the tree was made into furniture, canes, walking sticks and various trinkets that Philadelphians kept as relics. Washington Elm (Massachusetts) The Washington Elm, Cambridge, Massachusetts. George Washington is said to have taken command of the American Continental Army under the Washington Elm in Cambridge on July 3, 1775. The tree survived until the 1920s and "was thought to be a survivor of the primeval forest". In 1872, a large branch fell from it and was used to construct a pulpit for a nearby church. The tree, an American white elm, became a celebrated attraction, with its own plaque, a fence constructed around it, and a road moved in order to help preserve it. The tree was cut down (or fell—sources differ) in October 1920 after an expert determined it was dead. The city of Cambridge had plans for it to be "carefully cut up and a piece sent to each state of the country and to the District of Columbia and Alaska," according to The Harvard Crimson. As late as the early 1930s, garden shops advertised that they had cuttings of the tree for sale, although the accuracy of the claims has been doubted. A Harvard "professor of plant anatomy" examined the tree rings days after the tree was felled and pronounced it between 204 and 210 years old, making it at most 62 years old when Washington took command of the troops at Cambridge. The tree would have been a little more than two feet in diameter (at 30 inches above ground) in 1773. In 1896, an alumnus of the University of Washington obtained a rooted cutting of the Cambridge tree and sent it to Professor Edmund Meany at the university. The cutting was planted, and cuttings were then taken from it, including one planted on February 18, 1932, the 200th anniversary of the birth of George Washington, for whom Washington state is named. That tree remains on the campus of the Washington State Capitol. Just to the west of the tree is a small elm from a cutting made in 1979. Washington Elm (District of Columbia) George Washington's Elm, Washington, D.C. George Washington supposedly had a favorite spot under an elm tree near the United States Capitol Building from which he would watch construction of the building. The elm stood near the Senate wing of the Capitol building until 1948. Logan Elm The Logan Elm that stood near Circleville, Ohio, was one of the largest American elms in the world. The tree had a trunk circumference of and a crown spread of . Weakened by DED, the tree died in 1964 from storm damage. The Logan Elm State Memorial commemorates the site and preserves various associated markers and monuments. According to tradition, Chief Logan of the Mingo tribe delivered a passionate speech at a peace-treaty meeting under this elm in 1774. "Herbie" Another notable American elm, named Herbie, was the tallest American elm in New England until it was cut down on January 19, 2010, after it succumbed to DED. Herbie was tall at its peak and had a circumference of , or a diameter of approximately . The tree stood in Yarmouth, Maine, where it was cared for by the town's tree warden, Frank Knight. When cut down, Herbie was 217 years old. Herbie's wood is of interest to dendroclimatologists, who will use cross-sections of the trunk to help answer questions about climate during the tree's lifetime.
The Glencorradale Elm The Glencorradale Elm on Prince Edward Island, Canada, is a surviving wild elm believed to be several hundred years old. Survivor Tree An American elm located in a parking lot directly across the street from the Alfred P. Murrah Federal Building in Oklahoma City survived the Oklahoma City bombing on April 19, 1995, that killed 168 people and destroyed the Murrah building. Damaged in the blast, with fragments lodged in its trunk and branches, it was nearly cut down in efforts to recover evidence. However, nearly a year later the tree began to bloom. Then known as the Survivor Tree, it became an important part of the Oklahoma City National Memorial, and is featured prominently on the official logo of the memorial. Parliament Hill Elm The Parliament Hill Elm was planted in Ottawa, Canada, in the late 1910s or early 1920s when Centre Block was rebuilt following the Great fire of 1916. The tree grew for approximately a century next to a statue of John A. Macdonald and was one of the few in the region to survive the spread of DED in the 1970s and 1980s. Despite protests from Ottawa area environmentalists and resistance from Opposition Members of Parliament the tree was removed in April 2019 to make way for new Centre Block renovations. Landscaped parks Central Park New York City's Central Park is home to approximately 1,200 American elms. The oldest of these elms were planted during the 1860s by Frederick Law Olmsted, making them among the oldest stands of American elms in the world. The trees are particularly noteworthy along the Mall and Literary Walk, where four lines of American elms stretch over the walkway forming a cathedral-like covering. A part of New York City's urban ecology, the elms improve air and water quality, reduce erosion and flooding, and decrease air temperatures during warm days. While the stand is still vulnerable to DED, in the 1980s the Central Park Conservancy undertook aggressive countermeasures such as heavy pruning and removal of extensively diseased trees. These efforts have largely been successful in saving the majority of the trees, although several are still lost each year. Younger American elms that have been planted in Central Park since the outbreak are of the DED-resistant 'Princeton' and 'Valley Forge' cultivars. National Mall Several rows of American elm trees that the National Park Service first planted during the 1930s line much of the 1.9 miles (3.0 km) length of the National Mall in Washington, D.C. DED first appeared on the trees during the 1950s and reached a peak in the 1970s. The NPS used a number of methods to control the epidemic, including sanitation, pruning, injecting trees with fungicide and replanting with DED-resistant cultivars. The NPS combated the disease's local insect vector, the smaller European elm bark beetle (Scolytus multistriatus), by trapping and by spraying with insecticides. As a result, the population of American elms planted on the Mall and its surrounding areas has remained intact for more than 80 years. Accessions North America Arnold Arboretum, US. Acc. nos. 250-53 (cult. material), 412-86 wild collected in the United States. Bernheim Arboretum and Research Forest, Clermont, Kentucky. No details available. Denver Botanic Gardens, US. One specimen, no details. Holden Arboretum, US. Acc. nos. 2005-17, 65-632, 80-663, all of unrecorded provenance. Longwood Gardens, US. Acc. nos. 1997-0074, L-0352, sources unrecorded. Missouri Botanical Garden, US. Acc. nos. 1969-6172, 1986-0206, 1986-0207, 1986-0208. 
New York Botanical Garden, US. Acc. nos. 877/97, 944/96, 1854/99, 2111/99, 06791, all unrecorded provenance. Phipps Conservatory & Botanical Gardens, US. Acc. nos. 00/1265, 99/0660. Scott Arboretum, US. Acc. no. S000339, no other details available. U S National Arboretum, Washington, D.C., US. Acc. nos. 64254, 64255, 64256, 66355, 66426, 68988, 69304, 66341. Europe Brighton & Hove City Council, UK. NCCPG elm collection. Dubrava Arboretum, Lithuania. No accession details available. Grange Farm Arboretum, Sutton St James, Spalding, Lincolnshire, UK. Acc. no. not known. Hortus Botanicus Nationalis, Salaspils, Latvia. Acc. nos. 18087,88,89,90,91,92. Linnaean Gardens of Uppsala, Sweden. Acc. nos. 1976-2713,0000-2170 Strona Arboretum, University of Life Sciences, Warsaw, Poland. No accession details available. Royal Botanic Garden Edinburgh, UK. Acc. no. 19901741, Ulmus americana L., wild collected in Canada; Acc. no. 19802124, Ulmus americana L.× pumila L. var. arborea, cultivated material Tallinn Botanic Garden, Estonia. No accession details available. Thenford House arboretum, Northamptonshire, UK. No accession details available. University of Copenhagen, Botanic Garden, Denmark. Acc. no. P1971-5201, wild collected in the US Wakehurst Place Garden, Wakehurst Place, UK. Acc. nos. 1994-67, 1994-68, 1991-1163.A. Australasia Eastwoodhill Arboretum, Gisborne, New Zealand. 11 trees, accession details not known. Art and photography The nobility and arching grace of the American Elm in its heyday, on farms, in villages, in towns and on campuses, were celebrated in the books of photographs of Wallace Nutting (Massachusetts Beautiful, N.Y. 1923, and other volumes in the series) and of Samuel Chamberlain (The New England Image, New York, 1962). Frederick Childe Hassam is notable among painters who have depicted American Elm. Gallery
Biology and health sciences
Rosales
Plants
74225
https://en.wikipedia.org/wiki/Chili%20pepper
Chili pepper
Chili peppers, also spelled chile or chilli ( ), are varieties of berry-fruit plants from the genus Capsicum, which are members of the nightshade family Solanaceae, cultivated for their pungency. Chili peppers are widely used in many cuisines as a spice to add "heat" to dishes. Capsaicin and the related capsaicinoids give chili peppers their intensity when ingested or applied topically. Chili peppers exhibit a range of heat and flavors. This diversity is the reason behind the availability of different types of chili powder, each offering its own taste and heat level. Chili peppers originated in Central or South America and were first cultivated in Mexico. European explorers brought chili peppers back to the Old World in the late 15th century as part of the Columbian Exchange, which led to the cultivation of multiple varieties across the world for food and traditional medicine. Five Capsicum species have been widely cultivated: annuum, baccatum, chinense, frutescens, and pubescens. History Origins Capsicum plants originated in modern-day Peru and Bolivia, and have been a part of human diets since about 7,500 BC. They are one of the oldest cultivated crops in the Americas. Chili peppers were cultivated in east-central Mexico some 6,000 years ago, and independently across different locations in the Americas including highland Peru and Bolivia, central Mexico, and the Amazon. They were among the first self-pollinating crops cultivated in those areas. Peru has the highest diversity of cultivated Capsicum; it is a center of diversification where varieties of all five domesticates were introduced, grown, and consumed in pre-Columbian times. The largest diversity of wild Capsicum peppers is consumed in Bolivia. Bolivian consumers distinguish two basic forms: ulupicas, species with small round fruits including C. eximium, C. cardenasii, C. eshbaughii, and C. caballeroi landraces; and arivivis with small elongated fruits including C. baccatum var. baccatum and C. chacoense varieties. Distribution to Europe When Christopher Columbus and his crew reached the Caribbean, they were the first Europeans to encounter Capsicum fruits. They called them "peppers" because, like black pepper (Piper nigrum), which had long been known in Europe, they have a hot spicy taste unlike other foods. Chilies were first brought back to Europe by the Spanish, who financed Columbus's voyages, at the start of the large-scale interchange of plants and culture between the New World and the Old World called the Columbian exchange. Chilies appear in Spanish records by 1493. Unlike Piper vines, which grow naturally only in the tropics, chilies could be grown in temperate climates. By the mid-1500s, they had become a common garden plant in Spain and were incorporated into numerous dishes. By 1526, they had appeared in Italy, in 1543 in Germany, and by 1569 in the Balkans, where they came to be processed into paprika. Distribution to the rest of the world The rapid introduction of chilies to Africa and Asia was likely through Portuguese and Spanish traders in the 16th century, though the details are unrecorded. The Portuguese introduced them first to Africa and Arabia, and then to their colonies and trading posts in Asia, including Goa, Sri Lanka, and Malacca. From there, chilies spread to neighboring regions in South Asia and western Southeast Asia via local trade and natural dispersal.
Around the same time, the Spanish also introduced chilies to the Philippines, where they spread to Melanesia, Micronesia, and other Pacific Islands via their monopoly of the Manila galleons. Their spread to East Asia in the late 16th century is less clear, but was likely also through local trade or through Portuguese and Spanish trading ports in Canton, China, and Nagasaki, Japan. The earliest known mention of the chili pepper in Chinese writing dates to 1591, though the pepper is thought to have entered the country in the 1570s. Producing chili peppers Cultivation Chili peppers are the shiny, brightly coloured fruits of species of Capsicum. Botanically they are berries. The plants are small, depending on variety, making them suitable for growing in pots, greenhouses, or commercially in polytunnels. The plants are perennial, provided they are protected from cold. The fruits can be green, orange, red, or purple, and vary in shape from round and knobbly to smooth and elongated. If the fruits are picked green and unripe, more flowers develop, yielding more fruit; fruits left on the plant can become hotter in taste, and acquire their ripe coloration, at the price of a reduced harvest. Ideal growing conditions for peppers include a sunny position with warm, loamy soil, ideally , that is moist but not waterlogged. The seeds germinate only when warm, close to . The plants prefer warm conditions, but can tolerate temperatures down to ; and are sensitive to cold. The flowers can self-pollinate. However, at extremely high temperatures, , pollen loses viability, and the flowers are much less likely to result in fruit. For flowering, Capsicum is a non-photoperiod-sensitive crop. Chilies are vulnerable to pests including aphids, glasshouse red spider mite, and glasshouse whitefly, all of which feed on plant sap. Common diseases include grey mould caused by Botrytis cinerea; this rots the tissues and produces a brownish-grey mould on the surface. Preparation Harvested chilies may be used fresh, or dried, typically on the ground in hot countries, to make a variety of products. Drying enables chilies grown in temperate regions to be used in winter. For home use, chilies can be dried by threading them with cotton and hanging them up in a warm, dry place. Products include whole dried chilies, chili flakes, and chili powder. Fresh or dried chilies are used to make hot sauce, a liquid condiment—usually bottled for commercial use—that adds spice to other dishes. Dried chilies are used to make chili oil, cooking oil infused with chili. Annual production In 2020, 36 million tonnes of green chilies and peppers (counted as any Capsicum or Pimenta fruits) were produced worldwide, with China producing 46% of the total. Species and cultivars Species of Capsicum that produce chili peppers are shown on the simplified phylogenetic tree, with examples of cultivars: Intensity Capsaicin The substances that give chili peppers their pungency (spicy heat) when ingested or applied topically are capsaicin (8-methyl-N-vanillyl-6-nonenamide) and several related chemicals, collectively called capsaicinoids. Pure capsaicin is a hydrophobic, colorless, odorless, and crystalline-to-waxy solid at room temperature. The quantity of capsaicin varies by variety, and depends on growing conditions. Water-stressed peppers usually produce stronger fruits. When a habanero plant is stressed, for example by shortage of water, the concentration of capsaicin increases in some parts of the fruit.
When peppers are consumed by mammals such as humans, capsaicin binds with pain receptors in the mouth and throat, potentially evoking pain via spinal relays to the brainstem and thalamus where heat and discomfort are perceived. However, birds are unable to perceive the hotness and so they can eat some of the hottest peppers. The intensity of the "heat" of chili peppers is commonly reported in Scoville heat units (SHU), invented by American pharmacist Wilbur Scoville in 1912. Historically, it was a measure of the dilution of an amount of chili extract added to sugar syrup before its heat becomes undetectable to a panel of tasters; the more it has to be diluted to be undetectable, the more powerful the variety, and therefore the higher the rating. Since the 1980s, spice heat has been assessed quantitatively by high-performance liquid chromatography (HPLC), which measures the concentration of heat-producing capsaicinoids, typically with capsaicin content as the main measure. Capsaicin is produced by the plant as a defense against mammalian predators. A study suggests that by protecting against attack by a hemipteran bug, the risk of disease caused by a Fusarium fungus carried by the insects is reduced. As evidence, the study notes that peppers increased the quantity of capsaicin in proportion to the damage caused by fungi on the plant's seeds. Intensity range of commonly used cultivars A wide range of intensity is found in commonly used peppers: Hottest by country The top 8 world's hottest chili peppers (by country) are: As food Nutritional value Red hot chili peppers are 88% water, 9% carbohydrates, 2% protein, and 0.4% fat (table). In a 100 gram reference amount, chili peppers supply 40 calories, and are a rich source of vitamin C and vitamin B6. Pungency Due to their unique pungency (spicy heat), chili peppers constitute a crucial part of many cuisines around the world, particularly in Chinese (especially in Sichuanese food), Mexican, Thai, Indian, New Mexican cuisine and many other South American, Caribbean and East Asian cuisines. In 21st-century Asian cuisine, chili peppers are commonly used across many regions. Chili is a key ingredient in many curries, providing the desired amount of heat; mild curries may be flavoured with many other spices, and may omit chili altogether. Cooking Chilies with a low capsaicin content can be cooked like bell peppers, for example stuffing and roasting them. Hotter varieties need to be handled with care to avoid contact with skin or eyes; washing does not efficiently remove capsaicin from skin. Chilies can be roasted over very hot coals or grilled for a short time, as they break up if overcooked. The leaves of every species of Capsicum are edible, being mildly bitter and nowhere near as hot as the fruits. They are cooked as greens in Filipino cuisine, where they are called dahon ng sili (literally "chili leaves"). They are used in the chicken soup tinola. In Korean cuisine, the leaves may be used in kimchi. Regional cuisines Chilies are present in many cuisines. In Peru, Papa a la huancaina is a dish of potatoes in a sauce of fresh cheese and aji amarillo chilies. In Thailand, kaeng tai pla fish curry is flavoured with a tai pla sauce made with garlic, shallots, galangal, kaffir lime, turmeric, fish paste, and bird's eye chilies. In Jamaica, jerk chicken is spiced with powerful habanero chilies and allspice. Goan vindaloo curry uses the extremely hot ghost pepper or bhut jolokia to create "perhaps [India's] hottest dish". 
In Bhutan, ema datshi, entirely made of chili mixed with local cheese, is the national dish. Many Mexican dishes use chilies of different types, including the jalapeño, poblano, habanero, serrano, chipotle, ancho, pasilla, guajillo, de árbol, cascabel and mulato. These offer a wide range of flavours including citrus, earthy, fruity, and grassy. They are used in many dishes and the spicy mole sauce and Mexican salsa sauces. Other uses Ornamental plants The contrast in color and appearance makes chili plants interesting to some as a purely decorative garden plant. Black pearl pepper: small cherry-shaped fruits and dark brown to black leaves Black Hungarian pepper: green foliage, highlighted by purple veins and purple flowers, jalapeño-shaped fruits Bishop's crown pepper, Christmas bell pepper: named for its distinct three-sided shape resembling a red bishop's crown or a red Christmas bell Constrained risk-taking Psychologist Paul Rozin suggests that eating ordinary chilies is an example of a "constrained risk" like riding a roller coaster, in which extreme sensations like pain and fear can be enjoyed because individuals know that these sensations are not actually harmful. This method lets people experience extreme feelings without any significant risk of bodily harm. Topical use and health research Capsaicin, the pungent chemical in chili peppers, is used as an analgesic in topical ointments, nasal sprays, and dermal patches to relieve pain. A 2022 review of preliminary research indicated that regular consumption of chili peppers was associated with weak evidence for a lower risk of death from cardiovascular diseases and cancer. Chemical irritants Capsaicin extracted from chilies is used in pepper sprays and some tear gas formulations as a chemical irritant, for use as less-lethal weapons for control of unruly individuals or crowds. Such products have considerable potential for misuse, and may cause injury or death. Conflicts between farmers and elephants have long been widespread in African and Asian countries, where elephants nightly destroy crops, raid grain houses, and sometimes kill people. Farmers have found the use of chilies effective in crop defense against elephants. Elephants do not like capsaicin due to their large and sensitive olfactory and nasal system. The smell of chili causes them discomfort and deters them from feeding on the crops. By planting a few rows of the fruit around valuable crops, farmers create a buffer zone through which the elephants are reluctant to pass. Chili dung bombs are also used for this purpose. They are bricks made of mixing dung and chili, and are burned, creating a noxious smoke that keeps hungry elephants out of farmers' fields. This can lessen dangerous physical confrontation between people and elephants. Birds do not have the same sensitivity to capsaicin as mammals, as they lack a specific pain receptor. Chili peppers are eaten by birds living in the chili peppers' natural range, possibly contributing to seed dispersal and evolution of the protective capsaicin in chili peppers, as a bird in flight can spread the seeds further away from the parent plant after they pass through its digestive system than any land or tree dwelling mammal could do so under the same circumstances, thus reducing competition for resources. Etymology and spelling The English word is with the same meaning. The name of the plant is unrelated to that of the country Chile. 
While pepper originally meant the genus Piper, not Capsicum, the Oxford English Dictionary and Merriam-Webster record both usages. The three primary spellings are chili, chile and chilli, all recognized by dictionaries. Chili is widely used in English of the United States and optionally in Canada. Chile is the most common Spanish spelling in Mexico and several other Latin American countries, and some parts of the United States. Chilli was the original Romanization of the Náhuatl language word for the fruit (chīlli), and is the preferred British spelling according to the Oxford English Dictionary. Chilli (and its plural chillies) is the most common spelling in former British colonies such as India and Sri Lanka. Safety The volatile oil in chili peppers may cause skin irritation, requiring hand washing and care when touching the eyes or any sensitive body parts. Consuming hot peppers may cause stomach pain, hyperventilation, sweating, vomiting, and symptoms possibly requiring hospitalization. Unscrupulous traders have illegally added at least eight different synthetic dyes, including Auramine O, Chrysoidine, Sudan stains I to IV, Para red, and Rhodamine B to chili products. All these chemicals are harmful. They can be detected by liquid chromatography used together with mass spectrometry. In popular culture The 16th century Spanish missionary and naturalist José de Acosta noted the supposed aphrodisiac power of chilies, but wrote that they were harmful to people's spiritual health. In the 1970s, the government of Peru forbade prison inmates to consume chilies, their explanation being that these were "not appropriate for men forced to live a limited lifestyle."
Biology and health sciences
Solanales
null
74240
https://en.wikipedia.org/wiki/Anaphylaxis
Anaphylaxis
Anaphylaxis (Greek: 'up' + 'guarding') is a serious, potentially fatal allergic reaction and medical emergency that is rapid in onset and requires immediate medical attention regardless of the use of emergency medication on site. It typically causes more than one of the following: an itchy rash, throat closing due to swelling that can obstruct or stop breathing; severe tongue swelling that can also interfere with or stop breathing; shortness of breath, vomiting, lightheadedness, loss of consciousness, low blood pressure, and medical shock. These symptoms typically start in minutes to hours and then increase very rapidly to life-threatening levels. Urgent medical treatment is required to prevent serious harm and death, even if the patient has used an epipen or has taken other medications in response, and even if symptoms appear to be improving. Common causes include allergies to insect bites and stings, allergies to foods – including nuts, milk, fish, shellfish, eggs and some fresh fruits or dried fruits; allergies to sulfites – a class of food preservatives and a byproduct in some fermented foods like vinegar; allergies to medications – including some antibiotics and non-steroidal anti-inflammatory drugs (NSAIDs) like aspirin; allergy to general anaesthetic (used to make people sleep during surgery); allergy to contrast agents – dyes used in some medical tests to help certain areas of the body show up better on scans; allergy to latex – a type of rubber found in some rubber gloves and condoms. Other causes can include physical exercise, and cases may also occur in some people due to escalating reactions to simple throat irritation or may also occur without an obvious reason. The mechanism involves the release of inflammatory mediators in a rapidly escalating cascade from certain types of white blood cells triggered by either immunologic or non-immunologic mechanisms. Diagnosis is based on the presenting symptoms and signs after exposure to a potential allergen or irritant and in some cases, reaction to physical exercise. The primary treatment of anaphylaxis is epinephrine injection into a muscle, intravenous fluids, then placing the person "in a reclining position with feet elevated to help restore normal blood flow". Additional doses of epinephrine may be required. Other measures, such as antihistamines and steroids, are complementary. Carrying an epinephrine autoinjector, commonly called an "epipen", and identification regarding the condition is recommended in people with a history of anaphylaxis. Immediately contacting ambulance / EMT services is always strongly recommended, regardless of any on-site treatment. Getting to a doctor or hospital as soon as possible is absolutely required in all cases, even if it appears to be getting better. Worldwide, 0.05–2% of the population is estimated to experience anaphylaxis at some point in life. Globally, as underreporting declined into the 2010s, the rate appeared to be increasing. It occurs most often in young people and females. About 99.7% of people hospitalized with anaphylaxis in the United States survive. Etymology The word is derived from , and . Signs and symptoms Anaphylaxis typically presents many different symptoms over minutes or hours with an average onset of 5 to 30 minutes if exposure is intravenous and up to 2 hours if from eating food. 
The most common areas affected include: skin (80–90%), respiratory (70%), gastrointestinal (30–45%), heart and vasculature (10–45%), and central nervous system (10–15%) with usually two or more being involved. Skin Symptoms typically include generalized hives, itchiness, flushing, or swelling (angioedema) of the affected tissues. Those with angioedema may describe a burning sensation of the skin rather than itchiness. Swelling of the tongue or throat occurs in up to about 20% of cases. Other features may include a runny nose and swelling of the conjunctiva. The skin may also be blue tinged because of lack of oxygen. Respiratory Respiratory symptoms and signs that may be present include shortness of breath, wheezes, or stridor. The wheezing is typically caused by spasms of the bronchial muscles while stridor is related to upper airway obstruction secondary to swelling. Hoarseness, pain with swallowing, or a cough may also occur. Cardiovascular While a fast heart rate caused by low blood pressure is more common, a Bezold–Jarisch reflex has been described in 10% of people, where a slow heart rate is associated with low blood pressure. A drop in blood pressure or shock (either distributive or cardiogenic) may cause the feeling of lightheadedness or loss of consciousness. Rarely very low blood pressure may be the only sign of anaphylaxis. Coronary artery spasm may occur with subsequent myocardial infarction, dysrhythmia, or cardiac arrest. Those with underlying coronary disease are at greater risk of cardiac effects from anaphylaxis. The coronary spasm is related to the presence of histamine-releasing cells in the heart. Other Gastrointestinal symptoms may include severe crampy abdominal pain, and vomiting. There may be confusion, a loss of bladder control or pelvic pain similar to that of uterine cramps. Dilation of blood vessels around the brain may cause headaches. A feeling of anxiety or of "impending doom" has also been described. Causes Anaphylaxis can occur in response to almost any foreign substance. Common triggers include venom from insect bites or stings, foods, and medication. Foods are the most common trigger in children and young adults, while medications and insect bites and stings are more common in older adults. Less common causes include: physical factors, biological agents such as semen, latex, hormonal changes, food additives and colors, and topical medications. Physical factors such as exercise (known as exercise-induced anaphylaxis) or temperature (either hot or cold) may also act as triggers through their direct effects on mast cells. Events caused by exercise are frequently associated with cofactors such as the ingestion of certain foods or taking an NSAID. In aspirin-exacerbated respiratory disease (AERD), alcohol is a common trigger. During anesthesia, neuromuscular blocking agents, antibiotics, and latex are the most common causes. The cause remains unknown in 32–50% of cases, referred to as "idiopathic anaphylaxis." Six vaccines (MMR, varicella, influenza, hepatitis B, tetanus, meningococcal) are recognized as a cause for anaphylaxis, and HPV may cause anaphylaxis as well. Food and alcohol Many foods can trigger anaphylaxis; this may occur upon the first known ingestion. Common triggering foods vary around the world due to cultural cuisine. In Western cultures, ingestion of or exposure to peanuts, wheat, nuts, certain types of seafood like shellfish, milk, fruit and eggs are the most prevalent causes. 
Sesame is common in the Middle East, while rice and chickpeas are frequently encountered as sources of anaphylaxis in Asia. Severe cases are usually caused by ingesting the allergen, but some people experience a severe reaction upon contact. Children can outgrow their allergies. By age 16, 80% of children with anaphylaxis to milk or eggs and 20% who experience isolated anaphylaxis to peanuts can tolerate these foods. Any type of alcohol, even in small amounts, can trigger anaphylaxis in people with AERD. Medication Any medication may potentially trigger anaphylaxis. The most common are β-lactam antibiotics (such as penicillin) followed by aspirin and NSAIDs. Other antibiotics are implicated less frequently. Anaphylactic reactions to NSAIDs are either agent specific or occur among those that are structurally similar meaning that those who are allergic to one NSAID can typically tolerate a different one or different group of NSAIDs. Other relatively common causes include chemotherapy, vaccines, protamine and herbal preparations. Some medications (vancomycin, morphine, x-ray contrast among others) cause anaphylaxis by directly triggering mast cell degranulation. The frequency of a reaction to an agent partly depends on the frequency of its use and partly on its intrinsic properties. Anaphylaxis to penicillin or cephalosporins occurs only after it binds to proteins inside the body with some agents binding more easily than others. Anaphylaxis to penicillin occurs once in every 2,000 to 10,000 courses of treatment, with death occurring in fewer than one in every 50,000 courses of treatment. Anaphylaxis to aspirin and NSAIDs occurs in about one in every 50,000 persons. If someone has a reaction to penicillin, his or her risk of a reaction to cephalosporins is greater but still less than one in 1,000. The old radiocontrast agents caused reactions in 1% of cases, while the newer lower osmolar agents cause reactions in 0.04% of cases. Venom Venom from stinging or biting insects such as Hymenoptera (ants, bees, and wasps) or Triatominae (kissing bugs) may cause anaphylaxis in susceptible people. Previous reactions that are anything more than a local reaction around the site of the sting, are a risk factor for future anaphylaxis; however, half of fatalities have had no previous systemic reaction. Risk factors People with atopic diseases such as asthma, eczema, or allergic rhinitis are at high risk of anaphylaxis from food, latex, and radiocontrast agents but not from injectable medications or stings. One study in children found that 60% had a history of previous atopic diseases, and of children who die from anaphylaxis, more than 90% have asthma. Those with mastocytosis or of a higher socioeconomic status are at increased risk. Pathophysiology Anaphylaxis is a severe allergic reaction of rapid onset affecting many body systems. It is due to the release of inflammatory mediators and cytokines from mast cells and basophils, typically due to an immunologic reaction but sometimes non-immunologic mechanism. Interleukin (IL)–4 and IL-13 are cytokines important in the initial generation of antibody and inflammatory cell responses to anaphylaxis. Immunologic In the immunologic mechanism, immunoglobulin E (IgE) binds to the antigen (the foreign material that provokes the allergic reaction). Antigen-bound IgE then activates FcεRI receptors on mast cells and basophils. This leads to the release of inflammatory mediators such as histamine. 
These mediators subsequently increase the contraction of bronchial smooth muscles, trigger vasodilation, increase the leakage of fluid from blood vessels, and cause heart muscle depression. There is also a non-immunologic mechanism that does not rely on IgE, but it is not known if this occurs in humans. Non-immunologic Non-immunologic mechanisms involve substances that directly cause the degranulation of mast cells and basophils. These include agents such as contrast medium, opioids, temperature (hot or cold), and vibration. Sulfites may cause reactions by both immunologic and non-immunologic mechanisms. Diagnosis Anaphylaxis is diagnosed on the basis of a person's signs and symptoms. When any one of the following three occurs within minutes or hours of exposure to an allergen there is a high likelihood of anaphylaxis: Involvement of the skin or mucosal tissue plus either respiratory difficulty or a low blood pressure causing symptoms Two or more of the following symptoms after a likely contact with an allergen: a. Involvement of the skin or mucosa b. Respiratory difficulties c. Low blood pressure d. Gastrointestinal symptoms Low blood pressure after exposure to a known allergen Skin involvement may include: hives, itchiness or a swollen tongue among others. Respiratory difficulties may include: shortness of breath, stridor, or low oxygen levels among others. Low blood pressure is defined as a greater than 30% decrease from a person's usual blood pressure. In adults a systolic blood pressure of less than 90 mmHg is often used. During an attack, blood tests for tryptase or histamine (released from mast cells) might be useful in diagnosing anaphylaxis due to insect stings or medications. However these tests are of limited use if the cause is food or if the person has a normal blood pressure, and they are not specific for the diagnosis. Classification There are three main classifications of anaphylaxis. Anaphylactic shock is associated with systemic vasodilation that causes low blood pressure which is by definition 30% lower than the person's baseline or below standard values. Biphasic anaphylaxis is the recurrence of symptoms within 1–72 hours after resolution of an initial anaphylactic episode. Estimates of incidence vary, between less than 1% and up to 20% of cases. The recurrence typically occurs within 8 hours. It is managed in the same manner as anaphylaxis. Anaphylactoid reaction, non-immune anaphylaxis, or pseudoanaphylaxis, is a type of anaphylaxis that does not involve an allergic reaction but is due to direct mast cell degranulation. Non-immune anaphylaxis is the current term, as of 2018, used by the World Allergy Organization with some recommending that the old terminology, "anaphylactoid", no longer be used. Allergy skin testing Allergy testing may help in determining the trigger. Skin allergy testing is available for certain foods and venoms. Blood testing for specific IgE can be useful to confirm milk, egg, peanut, tree nut and fish allergies. Skin testing is available to confirm penicillin allergies, but is not available for other medications. Non-immune forms of anaphylaxis can only be determined by history or exposure to the allergen in question, and not by skin or blood testing. Differential diagnosis It can sometimes be difficult to distinguish anaphylaxis from asthma, syncope, and panic attacks. 
Asthma however typically does not entail itching or gastrointestinal symptoms, syncope presents with pallor rather than a rash, and a panic attack may have flushing but does not have hives. Other conditions that may present similarly include: scrombroidosis and anisakiasis. Post-mortem findings In a person who died from anaphylaxis, autopsy may show an "empty heart" attributed to reduced venous return from vasodilation and redistribution of intravascular volume from the central to the peripheral compartment. Other signs are laryngeal edema, eosinophilia in lungs, heart and tissues, and evidence of myocardial hypoperfusion. Laboratory findings could detect increased levels of serum tryptase, increase in total and specific IgE serum levels. Prevention Avoidance of the trigger of anaphylaxis is recommended. In cases where this may not be possible, desensitization may be an option. Immunotherapy with Hymenoptera venoms is effective at desensitizing 80–90% of adults and 98% of children against allergies to bees, wasps, hornets, yellowjackets, and fire ants. Oral immunotherapy may be effective at desensitizing some people to certain food including milk, eggs, nuts and peanuts; however, adverse effects are common. For example, many people develop an itchy throat, cough, or lip swelling during immunotherapy. Desensitization is also possible for many medications, however it is advised that most people simply avoid the agent in question. In those who react to latex it may be important to avoid cross-reactive foods such as avocados, bananas, and potatoes among others. Management Anaphylaxis is a medical emergency that may require resuscitation measures such as airway management, supplemental oxygen, large volumes of intravenous fluids, and close monitoring. Passive leg raise may also be helpful in the emergency management. Administration of intravenous fluid bolus and epinephrine is the treatment of choice with antihistamines used as adjuncts. A period of in-hospital observation for between 2 and 24 hours is recommended for people once they have returned to normal due to concerns of biphasic anaphylaxis. Epinephrine Epinephrine (adrenaline) (1 in 1,000) is the primary treatment for anaphylaxis with no absolute contraindication to its use. It is recommended that an epinephrine solution be given intramuscularly into the mid anterolateral thigh as soon as the diagnosis is suspected. The injection may be repeated every 5 to 15 minutes if there is insufficient response. A second dose is needed in 16–35% of episodes with more than two doses rarely required. The intramuscular route is preferred over subcutaneous administration because the latter may have delayed absorption. It is recommended that after diagnosis and treatment of anaphylaxis, the patient should be kept under observation in an appropriate clinical setting until symptoms have fully resolved. Minor adverse effects from epinephrine include tremors, anxiety, headaches, and palpitations. People on β-blockers may be resistant to the effects of epinephrine. In this situation if epinephrine is not effective intravenous glucagon can be administered which has a mechanism of action independent of β-receptors. If necessary, it can also be given intravenously using a dilute epinephrine solution. Intravenous epinephrine, however, has been associated both with dysrhythmia and myocardial infarction. 
Epinephrine autoinjectors used for self-administration typically come in two doses, one for adults or children who weigh more than 25 kg and one for children who weigh 10 to 25 kg. Adjuncts Antihistamines (both H1 and H2), while commonly used and assumed effective based on theoretical reasoning, are poorly supported by evidence. A 2007 Cochrane review did not find any good-quality studies upon which to base recommendations and they are not believed to have an effect on airway edema or spasm. Corticosteroids are unlikely to make a difference in the current episode of anaphylaxis, but may be used in the hope of decreasing the risk of biphasic anaphylaxis. Their prophylactic effectiveness in these situations is uncertain. Nebulized salbutamol may be effective for bronchospasm that does not resolve with epinephrine. Methylene blue has been used in those not responsive to other measures due to its presumed effect of relaxing smooth muscle. Preparedness People prone to anaphylaxis are advised to have an allergy action plan. Parents are advised to inform schools of their children's allergies and what to do in case of an anaphylactic emergency. The action plan usually includes use of epinephrine autoinjectors, the recommendation to wear a medical alert bracelet, and counseling on avoidance of triggers. Immunotherapy is available for certain triggers to prevent future episodes of anaphylaxis. A multi-year course of subcutaneous desensitization has been found effective against stinging insects, while oral desensitization is effective for many foods. Prognosis In those in whom the cause is known and prompt treatment is available, the prognosis is good. Even if the cause is unknown, if appropriate preventive medication is available, the prognosis is generally good. Usually death occurs due to either respiratory failure (typically involving asphyxia) or cardiovascular complications, such as cardiovascular shock, with 0.7–20% of cases causing death. There have been cases of death occurring within minutes. Outcomes in those with exercise-induced anaphylaxis are typically good, with fewer and less severe episodes as people get older. Epidemiology The number of people who get anaphylaxis is 4–100 per 100,000 persons per year, with a lifetime risk of 0.05–2%. About 30% of affected people get more than one attack. Exercise-induced anaphylaxis affects about 1 in 2000 young people. Rates appear to be increasing: the numbers in the 1980s were approximately 20 per 100,000 per year, while in the 1990s it was 50 per 100,000 per year. The increase appears to be primarily for food-induced anaphylaxis. The risk is greatest in young people and females. Anaphylaxis leads to as many as 500–1,000 deaths per year (2.7 per million) in the United States, 20 deaths per year in the United Kingdom (0.33 per million), and 15 deaths per year in Australia (0.64 per million). Another estimate from the United States puts the death rate at 0.7 per million. Mortality rates have decreased between the 1970s and 2000s. In Australia, death from food-induced anaphylaxis occur primarily in women while deaths due to insect bites primarily occur in males. Death from anaphylaxis is most commonly triggered by medications. History The conditions of anaphylaxis has been known since ancient times. French physician François Magendie had described how rabbits were killed by repeated injections of egg albumin in 1839. However, the phenomenon was discovered by two French physiologists Charles Richet and Paul Portier. 
In 1901, Albert I, Prince of Monaco requested that Richet and Portier join him on a scientific expedition around the French coast of the Atlantic Ocean, specifically to study the toxin produced by cnidarians (like jellyfish and sea anemones). Richet and Portier boarded Albert's ship Princesse Alice II to collect specimens of the marine animals. Richet and Portier extracted a toxin called hypnotoxin from their collection of jellyfish (but the real source was later identified as Portuguese man o' war) and sea anemone (Actinia sulcata). In their first experiment on the ship, they injected a dog with the toxin in an attempt to immunise the dog, which instead developed a severe reaction (hypersensitivity). In 1902, they repeated the injections in their laboratory and found that dogs normally tolerated the toxin at first injection, but on re-exposure, three weeks later with the same dose, they always developed fatal shock. They also found that the effect was not related to the doses of toxin used, as even small amounts in secondary injections were lethal. Thus, instead of inducing the tolerance (prophylaxis) they had expected, they found that re-exposure to the toxin was deadly. In 1902, Richet introduced the term aphylaxis to describe the condition of lack of protection. He later changed the term to anaphylaxis on grounds of euphony. The term is from the Greek , , meaning "against", and , , meaning "protection". On 15 February 1902, Richet and Portier jointly presented their findings before the Société de Biologie in Paris. The moment is regarded as the birth of the study of allergy (allergology); the term allergy itself was coined by Clemens von Pirquet in 1906. Richet continued to study the phenomenon and was eventually awarded the Nobel Prize in Physiology or Medicine for his work on anaphylaxis in 1913. Research There are ongoing efforts to develop sublingual epinephrine to treat anaphylaxis. A sublingual epinephrine formulation, currently called AQST-108 (dipivefrin) and sponsored by Aquestive Therapeutics, is in phase 1 trials as of December 2021. Subcutaneous injection of the anti-IgE antibody omalizumab is being studied as a method of preventing recurrence, but it is not yet recommended.
Biology and health sciences
Specific diseases
Health
74263
https://en.wikipedia.org/wiki/Frame%20of%20reference
Frame of reference
In physics and astronomy, a frame of reference (or reference frame) is an abstract coordinate system, whose origin, orientation, and scale have been specified in physical space. It is based on a set of reference points, defined as geometric points whose position is identified both mathematically (with numerical coordinate values) and physically (signaled by conventional markers). An important special case is that of inertial reference frames, a stationary or uniformly moving frame. For n dimensions, reference points are sufficient to fully define a reference frame. Using rectangular Cartesian coordinates, a reference frame may be defined with a reference point at the origin and a reference point at one unit distance along each of the n coordinate axes. In Einsteinian relativity, reference frames are used to specify the relationship between a moving observer and the phenomenon under observation. In this context, the term often becomes observational frame of reference (or observational reference frame), which implies that the observer is at rest in the frame, although not necessarily located at its origin. A relativistic reference frame includes (or implies) the coordinate time, which does not equate across different reference frames moving relatively to each other. The situation thus differs from Galilean relativity, in which all possible coordinate times are essentially equivalent. Definition The need to distinguish between the various meanings of "frame of reference" has led to a variety of terms. For example, sometimes the type of coordinate system is attached as a modifier, as in Cartesian frame of reference. Sometimes the state of motion is emphasized, as in rotating frame of reference. Sometimes the way it transforms to frames considered as related is emphasized as in Galilean frame of reference. Sometimes frames are distinguished by the scale of their observations, as in macroscopic and microscopic frames of reference. In this article, the term observational frame of reference is used when emphasis is upon the state of motion rather than upon the coordinate choice or the character of the observations or observational apparatus. In this sense, an observational frame of reference allows study of the effect of motion upon an entire family of coordinate systems that could be attached to this frame. On the other hand, a coordinate system may be employed for many purposes where the state of motion is not the primary concern. For example, a coordinate system may be adopted to take advantage of the symmetry of a system. In a still broader perspective, the formulation of many problems in physics employs generalized coordinates, normal modes or eigenvectors, which are only indirectly related to space and time. It seems useful to divorce the various aspects of a reference frame for the discussion below. We therefore take observational frames of reference, coordinate systems, and observational equipment as independent concepts, separated as below: An observational frame (such as an inertial frame or non-inertial frame of reference) is a physical concept related to state of motion. A coordinate system is a mathematical concept, amounting to a choice of language used to describe observations. Consequently, an observer in an observational frame of reference can choose to employ any coordinate system (Cartesian, polar, curvilinear, generalized, ...) to describe observations made from that frame of reference. 
A change in the choice of this coordinate system does not change an observer's state of motion, and so does not entail a change in the observer's observational frame of reference. This viewpoint can be found elsewhere as well. This is not to dispute that some coordinate systems may be a better choice for some observations than others. Choice of what to measure and with what observational apparatus is a matter separate from the observer's state of motion and choice of coordinate system. Coordinate systems Although the term "coordinate system" is often used (particularly by physicists) in a nontechnical sense, the term "coordinate system" does have a precise meaning in mathematics, and sometimes that is what the physicist means as well. A coordinate system in mathematics is a facet of geometry or of algebra, in particular, a property of manifolds (for example, in physics, configuration spaces or phase spaces). The coordinates of a point r in an n-dimensional space are simply an ordered set of n numbers: r = [x1, x2, ..., xn]. In a general Banach space, these numbers could be (for example) coefficients in a functional expansion like a Fourier series. In a physical problem, they could be spacetime coordinates or normal mode amplitudes. In a robot design, they could be angles of relative rotations, linear displacements, or deformations of joints. Here we will suppose these coordinates can be related to a Cartesian coordinate system by a set of functions x = x(x1, x2, ..., xn), y = y(x1, x2, ..., xn), z = z(x1, x2, ..., xn), ..., where x, y, z, etc. are the n Cartesian coordinates of the point. Given these functions, coordinate surfaces are defined by the relations xj(x, y, z, ...) = constant, for j = 1, 2, ..., n. The intersection of these surfaces defines coordinate lines. At any selected point, tangents to the intersecting coordinate lines at that point define a set of basis vectors {e1, e2, ..., en} at that point. That is, ei(r) = lim(ε→0) [r(x1, ..., xi + ε, ..., xn) − r(x1, ..., xi, ..., xn)]/ε, which can be normalized to be of unit length. For more detail see curvilinear coordinates. Coordinate surfaces, coordinate lines, and basis vectors are components of a coordinate system. If the basis vectors are orthogonal at every point, the coordinate system is an orthogonal coordinate system. An important aspect of a coordinate system is its metric tensor gik, which determines the arc length ds in the coordinate system in terms of its coordinates: (ds)2 = gik dxi dxk, where repeated indices are summed over. As is apparent from these remarks, a coordinate system is a mathematical construct, part of an axiomatic system. There is no necessary connection between coordinate systems and physical motion (or any other aspect of reality). However, coordinate systems can include time as a coordinate, and can be used to describe motion. Thus, Lorentz transformations and Galilean transformations may be viewed as coordinate transformations. Observational frame of reference An observational frame of reference, often referred to as a physical frame of reference, a frame of reference, or simply a frame, is a physical concept related to an observer and the observer's state of motion. Here we adopt the view expressed by Kumar and Barve: an observational frame of reference is characterized only by its state of motion. However, there is lack of unanimity on this point. In special relativity, the distinction is sometimes made between an observer and a frame. According to this view, a frame is an observer plus a coordinate lattice constructed to be an orthonormal right-handed set of spacelike vectors perpendicular to a timelike vector. See Doran. This restricted view is not used here, and is not universally adopted even in discussions of relativity.
In general relativity the use of general coordinate systems is common (see, for example, the Schwarzschild solution for the gravitational field outside an isolated sphere). There are two types of observational reference frame: inertial and non-inertial. An inertial frame of reference is defined as one in which all laws of physics take on their simplest form. In special relativity these frames are related by Lorentz transformations, which are parametrized by rapidity. In Newtonian mechanics, a more restricted definition requires only that Newton's first law holds true; that is, a Newtonian inertial frame is one in which a free particle travels in a straight line at constant speed, or is at rest. These frames are related by Galilean transformations. These relativistic and Newtonian transformations are expressed in spaces of general dimension in terms of representations of the Poincaré group and of the Galilean group. In contrast to the inertial frame, a non-inertial frame of reference is one in which fictitious forces must be invoked to explain observations. An example is an observational frame of reference centered at a point on the Earth's surface. This frame of reference orbits around the center of the Earth, which introduces the fictitious forces known as the Coriolis force, centrifugal force, and gravitational force. (All of these forces including gravity disappear in a truly inertial reference frame, which is one of free-fall.) Measurement apparatus A further aspect of a frame of reference is the role of the measurement apparatus (for example, clocks and rods) attached to the frame (see Norton quote above). This question is not addressed in this article, and is of particular interest in quantum mechanics, where the relation between observer and measurement is still under discussion (see measurement problem). In physics experiments, the frame of reference in which the laboratory measurement devices are at rest is usually referred to as the laboratory frame or simply "lab frame." An example would be the frame in which the detectors for a particle accelerator are at rest. The lab frame in some experiments is an inertial frame, but it is not required to be (for example the laboratory on the surface of the Earth in many physics experiments is not inertial). In particle physics experiments, it is often useful to transform energies and momenta of particles from the lab frame where they are measured, to the center of momentum frame "COM frame" in which calculations are sometimes simplified, since potentially all kinetic energy still present in the COM frame may be used for making new particles. In this connection it may be noted that the clocks and rods often used to describe observers' measurement equipment in thought, in practice are replaced by a much more complicated and indirect metrology that is connected to the nature of the vacuum, and uses atomic clocks that operate according to the standard model and that must be corrected for gravitational time dilation. (See second, meter and kilogram). In fact, Einstein felt that clocks and rods were merely expedient measuring devices and they should be replaced by more fundamental entities based upon, for example, atoms and molecules. Generalization The discussion is taken beyond simple space-time coordinate systems by Brading and Castellani. Extension to coordinate systems using generalized coordinates underlies the Hamiltonian and Lagrangian formulations of quantum field theory, classical relativistic mechanics, and quantum gravity. 
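A minimal sketch (not part of the original article) of the lab-frame to center-of-momentum-frame transformation mentioned above: it boosts a hypothetical one-dimensional two-particle system, a beam particle hitting a target at rest, into the COM frame, working in natural units with c = 1. The function name lab_to_com and the example masses and momentum are illustrative assumptions only.

import math

def lab_to_com(m1, p_lab, m2):
    # Boost a beam particle (mass m1, momentum p_lab) and a target at rest (mass m2)
    # from the lab frame to the center-of-momentum frame (1-D, natural units c = 1).
    E1 = math.sqrt(m1 ** 2 + p_lab ** 2)      # beam energy
    E2 = m2                                   # target energy (at rest)
    E_tot, p_tot = E1 + E2, p_lab
    beta = p_tot / E_tot                      # velocity of the COM frame as seen in the lab
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    def boost(E, p):                          # Lorentz boost of (E, p) by velocity beta
        return gamma * (E - beta * p), gamma * (p - beta * E)
    return boost(E1, p_lab), boost(E2, 0.0)

# Hypothetical example: a 10 GeV/c proton hitting a proton at rest (masses in GeV).
beam_com, target_com = lab_to_com(0.938, 10.0, 0.938)
print(beam_com, target_com)   # the spatial momenta come out equal and opposite, as they must in the COM frame

In the COM frame the total spatial momentum vanishes, so the total energy returned here (roughly 4.5 GeV for this hypothetical example) is the invariant mass available for producing new particles, which is why the COM frame is convenient in the particle-physics setting described above.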
Instances International Terrestrial Reference Frame International Celestial Reference Frame In fluid mechanics, Lagrangian and Eulerian specification of the flow field Other frames Frame fields in general relativity Moving frame in mathematics
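Returning to the curvilinear-coordinate relations given under Coordinate systems above, the short sketch below (not part of the original article) reproduces them for plane polar coordinates using the sympy library: it builds the basis vectors as tangents to the coordinate lines and recovers the familiar polar metric, (ds)^2 = dr^2 + r^2 dθ^2. The variable names are illustrative only.

import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# Cartesian coordinates expressed through the curvilinear (polar) coordinates
position = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])

# Basis vectors: tangents to the coordinate lines, e_i = d(position)/d(coordinate)
e_r = position.diff(r)
e_theta = position.diff(theta)

# Metric tensor g_ik = e_i . e_k; for polar coordinates this is diag(1, r**2)
g = sp.Matrix([[e_r.dot(e_r), e_r.dot(e_theta)],
               [e_theta.dot(e_r), e_theta.dot(e_theta)]])
print(sp.simplify(g))   # Matrix([[1, 0], [0, r**2]])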
Physical sciences
Classical mechanics
null
74327
https://en.wikipedia.org/wiki/Principle%20of%20relativity
Principle of relativity
In physics, the principle of relativity is the requirement that the equations describing the laws of physics have the same form in all admissible frames of reference. For example, in the framework of special relativity, the Maxwell equations have the same form in all inertial frames of reference. In the framework of general relativity, the Maxwell equations or the Einstein field equations have the same form in arbitrary frames of reference. Several principles of relativity have been successfully applied throughout science, whether implicitly (as in Newtonian mechanics) or explicitly (as in Albert Einstein's special relativity and general relativity). Basic concepts Certain principles of relativity have been widely assumed in most scientific disciplines. One of the most widespread is the belief that any law of nature should be the same at all times; and scientific investigations generally assume that laws of nature are the same regardless of the person measuring them. These sorts of principles have been incorporated into scientific inquiry at the most fundamental of levels. Any principle of relativity prescribes a symmetry in natural law: that is, the laws must look the same to one observer as they do to another. According to a theoretical result called Noether's theorem, any such symmetry will also imply a conservation law alongside. For example, if two observers at different times see the same laws, then a quantity called energy will be conserved. In this light, relativity principles make testable predictions about how nature behaves. Special principle of relativity According to the first postulate of the special theory of relativity: This postulate defines an inertial frame of reference. The special principle of relativity states that physical laws should be the same in every inertial frame of reference, but that they may vary across non-inertial ones. This principle is used in both Newtonian mechanics and the theory of special relativity. Its influence in the latter is so strong that Max Planck named the theory after the principle. The principle requires physical laws to be the same for any body moving at constant velocity as they are for a body at rest. A consequence is that an observer in an inertial reference frame cannot determine an absolute speed or direction of travel in space, and may only speak of speed or direction relative to some other object. The principle does not extend to non-inertial reference frames because those frames do not, in general experience, seem to abide by the same laws of physics. In classical physics, fictitious forces are used to describe acceleration in non-inertial reference frames. In Newtonian mechanics The special principle of relativity was first explicitly enunciated by Galileo Galilei in 1632 in his Dialogue Concerning the Two Chief World Systems, using the metaphor of Galileo's ship. Newtonian mechanics added to the special principle several other concepts, including laws of motion, gravitation, and an assertion of an absolute time. When formulated in the context of these laws, the special principle of relativity states that the laws of mechanics are invariant under a Galilean transformation. In special relativity Joseph Larmor and Hendrik Lorentz discovered that Maxwell's equations, used in the theory of electromagnetism, were invariant only by a certain change of time and length units. 
This left some confusion among physicists, many of whom thought that a luminiferous aether was incompatible with the relativity principle as it was defined by Henri Poincaré. In their 1905 papers on electrodynamics, Henri Poincaré and Albert Einstein explained that with the Lorentz transformations the relativity principle holds perfectly. Einstein elevated the (special) principle of relativity to a postulate of the theory and derived the Lorentz transformations from this principle combined with the principle of the independence of the speed of light (in vacuum) from the motion of the source. These two principles were reconciled with each other by a re-examination of the fundamental meanings of space and time intervals. The strength of special relativity lies in its use of simple, basic principles, including the invariance of the laws of physics under a shift of inertial reference frames and the invariance of the speed of light in vacuum.
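As a brief worked statement (in LaTeX, not taken from the article) of how the two postulates fit together, the standard Lorentz transformation for two inertial frames in relative motion with speed v along the x-axis reads

x' = \gamma\,(x - v t), \qquad t' = \gamma\left(t - \frac{v x}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} .

A light signal satisfying x = c t then obeys x' = \gamma\,(c t - v t) = c\,\gamma\left(t - \frac{v t}{c}\right) = c\,t', so the same signal travels at speed c in both frames, which is exactly the consistency between the relativity principle and the light postulate described above.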
Physical sciences
Theory of relativity
null
74331
https://en.wikipedia.org/wiki/Andromeda%20Galaxy
Andromeda Galaxy
The Andromeda Galaxy is a barred spiral galaxy and is the nearest major galaxy to the Milky Way. It was originally named the Andromeda Nebula and is cataloged as Messier 31, M31, and NGC 224. Andromeda has a D25 isophotal diameter of about and is approximately from Earth. The galaxy's name stems from the area of Earth's sky in which it appears, the constellation of Andromeda, which itself is named after the princess who was the wife of Perseus in Greek mythology. The virial mass of the Andromeda Galaxy is of the same order of magnitude as that of the Milky Way, at . The mass of either galaxy is difficult to estimate with any accuracy, but it was long thought that the Andromeda Galaxy was more massive than the Milky Way by a margin of some 25% to 50%. However, this has been called into question by early-21st-century studies indicating a possibly lower mass for the Andromeda Galaxy and a higher mass for the Milky Way. The Andromeda Galaxy has a diameter of about , making it the largest member of the Local Group of galaxies in terms of extension. The Milky Way and Andromeda galaxies are expected to collide with each other in around 4–5 billion years, merging to potentially form a giant elliptical galaxy or a large lenticular galaxy. With an apparent magnitude of 3.4, the Andromeda Galaxy is among the brightest of the Messier objects, and is visible to the naked eye from Earth on moonless nights, even when viewed from areas with moderate light pollution. Observation history The Andromeda Galaxy is visible to the naked eye in dark skies. Around the year 964 CE, the Persian astronomer Abd al-Rahman al-Sufi described the Andromeda Galaxy in his Book of Fixed Stars as a "nebulous smear" or "small cloud". Star charts of that period labeled it as the Little Cloud. In 1612, the German astronomer Simon Marius gave an early description of the Andromeda Galaxy based on telescopic observations. Pierre Louis Maupertuis conjectured in 1745 that the blurry spot was an island universe. Charles Messier cataloged Andromeda as object M31 in 1764 and incorrectly credited Marius as the discoverer despite it being visible to the naked eye. In 1785, the astronomer William Herschel noted a faint reddish hue in the core region of Andromeda. He believed Andromeda to be the nearest of all the "great nebulae", and based on the color and magnitude of the nebula, he incorrectly guessed that it was no more than 2,000 times the distance of Sirius, or roughly . In 1850, William Parsons, 3rd Earl of Rosse, made a drawing of Andromeda's spiral structure. In 1864, William Huggins noted that the spectrum of Andromeda differed from that of a gaseous nebula. The spectrum of Andromeda displays a continuum of frequencies, superimposed with dark absorption lines that help identify the chemical composition of an object. Andromeda's spectrum is very similar to the spectra of individual stars, and from this, it was deduced that Andromeda has a stellar nature. In 1885, a supernova (known as S Andromedae) was seen in Andromeda, the first and so far only one observed in that galaxy. At the time, it was called "Nova 1885"—the difference between "novae" in the modern sense and supernovae was not yet known. Andromeda was considered to be a nearby object, and it was not realized that the "nova" was much brighter than ordinary novae. In 1888, Isaac Roberts took one of the first photographs of Andromeda, which was still commonly thought to be a nebula within our galaxy. 
Roberts mistook Andromeda and similar "spiral nebulae" as star systems being formed. In 1912, Vesto Slipher used spectroscopy to measure the radial velocity of Andromeda with respect to the Solar System—the largest velocity yet measured, at . "Island universes" hypothesis As early as 1755, the German philosopher Immanuel Kant proposed the hypothesis that the Milky Way is only one of many galaxies in his book Universal Natural History and Theory of the Heavens. Arguing that a structure like the Milky Way would look like a circular nebula viewed from above and like an ellipsoid if viewed from an angle, he concluded that the observed elliptical nebulae like Andromeda, which could not be explained otherwise at the time, were indeed galaxies similar to the Milky Way, not nebulae, as Andromeda was commonly believed to be. In 1917, Heber Curtis observed a nova within Andromeda. After searching the photographic record, 11 more novae were discovered. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred elsewhere in the sky. As a result, he was able to come up with a distance estimate of . Although this estimate is about fivefold lower than the best estimates now available, it was the first known estimate of the distance to Andromeda that was correct to within an order of magnitude (i.e., to within a factor of ten of the current estimates, which place the distance around 2.5 million light-years). Curtis became a proponent of the so-called "island universes" hypothesis: that spiral nebulae were actually independent galaxies. In 1920, the Great Debate between Harlow Shapley and Curtis took place concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the universe. To support his claim that the Great Andromeda Nebula is, in fact, an external galaxy, Curtis also noted the appearance of dark lanes within Andromeda that resembled the dust clouds in our own galaxy, as well as historical observations of the Andromeda Galaxy's significant Doppler shift. In 1922, Ernst Öpik presented a method to estimate the distance of Andromeda using the measured velocities of its stars. His result placed the Andromeda Nebula far outside our galaxy at a distance of about . Edwin Hubble settled the debate in 1925 when he identified extragalactic Cepheid variable stars for the first time on astronomical photos of Andromeda. These were made using the Hooker telescope, and they enabled the distance of the Great Andromeda Nebula to be determined. His measurement demonstrated conclusively that this feature was not a cluster of stars and gas within our own galaxy, but an entirely separate galaxy located a significant distance from the Milky Way. In 1943, Walter Baade was the first person to resolve stars in the central region of the Andromeda Galaxy. Baade identified two distinct populations of stars based on their metallicity, naming the young, high-velocity stars in the disk Type I and the older, red stars in the bulge Type II. This nomenclature was subsequently adopted for stars within the Milky Way and elsewhere. (The existence of two distinct populations had been noted earlier by Jan Oort.) Baade also discovered that there were two types of Cepheid variable stars, which resulted in doubling the distance estimate to Andromeda, as well as the remainder of the universe. In 1950, radio emissions from the Andromeda Galaxy were detected by Robert Hanbury Brown and Cyril Hazard at the Jodrell Bank Observatory. 
The first radio maps of the galaxy were made in the 1950s by John Baldwin and collaborators at the Cambridge Radio Astronomy Group. The core of the Andromeda Galaxy is called 2C 56 in the 2C radio astronomy catalog. In 1959 rapid rotation of the semi-stellar nucleus of M31 was discovered by Andre Lallemand, M. Duschene and Merle Walker at the Lick Observatory, using the 120-inch telescope, coudé spectrograph, and Lallemand electronographic camera. They estimated the mass of the nucleus to be about 1.3 × 10^7 solar masses. The second example of this phenomenon was found in 1961 in the nucleus of M32 by M. F. Walker at the Lick Observatory, using the same equipment as used for the discovery of the nucleus of M31. He estimated the nuclear mass to be between 0.8 and 1 × 10^7 solar masses. Such rotation is now considered to be evidence of the existence of supermassive black holes in the nuclei of these galaxies. In 2009, an occurrence of microlensing, a phenomenon caused by the deflection of light by a massive object, may have led to the first discovery of a planet in the Andromeda Galaxy. In 2020, observations of linearly polarized radio emission with the Westerbork Synthesis Radio Telescope, the Effelsberg 100-m Radio Telescope, and the Very Large Array revealed ordered magnetic fields aligned along the "10-kpc ring" of gas and star formation. General The estimated distance of the Andromeda Galaxy from our own was doubled in 1953 when it was discovered that there is a second, dimmer type of Cepheid variable star. In the 1990s, measurements of both standard red giants and red clump stars from the Hipparcos satellite were used to calibrate the Cepheid distances. Formation and history A major merger occurred 2 to 3 billion years ago at the Andromeda location, involving two galaxies with a mass ratio of approximately 4. The discovery of a recent merger in the Andromeda galaxy was first based on interpreting its anomalous age-velocity dispersion relation, as well as the fact that 2 billion years ago, star formation throughout Andromeda's disk was much more active than today. Modeling of this violent collision shows that it has formed most of the galaxy's (metal-rich) galactic halo, including the Giant Stream, and also the extended thick disk, the young age thin disk, and the static 10 kpc ring. During this epoch, its rate of star formation would have been very high, to the point of becoming a luminous infrared galaxy for roughly 100 million years. Modeling also recovers the bulge profile, the large bar, and the overall halo density profile. Andromeda and the Triangulum Galaxy (M33) might have had a very close passage 2–4 billion years ago, but it seems unlikely from the last measurements from the Hubble Space Telescope. Distance estimate At least four distinct techniques have been used to estimate distances from Earth to the Andromeda Galaxy. In 2003, using the infrared surface brightness fluctuations (I-SBF) and adjusting for the new period-luminosity value and a metallicity correction of −0.2 mag dex^−1 in (O/H), an estimate of was derived. A 2004 Cepheid variable method estimated the distance to be 2.51 ± 0.13 million light-years (770 ± 40 kpc). In 2005, an eclipsing binary star was discovered in the Andromeda Galaxy. The binary is made up of two hot blue stars of types O and B. By studying the eclipses of the stars, astronomers were able to measure their sizes. Knowing the sizes and temperatures of the stars, they were able to measure their absolute magnitude.
When the visual and absolute magnitudes are known, the distance to the star can be calculated. The stars lie at a distance of and the whole Andromeda Galaxy at about . This new value is in excellent agreement with the previous, independent Cepheid-based distance value. The TRGB method was also used in 2005 giving a distance of . Averaged together, these distance estimates give a value of . Mass estimates Until 2018, mass estimates for the Andromeda Galaxy's halo (including dark matter) gave a value of approximately , compared to for the Milky Way. This contradicted even earlier measurements that seemed to indicate that the Andromeda Galaxy and Milky Way are almost equal in mass. In 2018, the earlier measurements for equality of mass were re-established by radio results as approximately . In 2006, the Andromeda Galaxy's spheroid was determined to have a higher stellar density than that of the Milky Way, and its galactic stellar disk was estimated at twice the diameter of that of the Milky Way. The total mass of the Andromeda Galaxy is estimated to be between and . The stellar mass of M31 is , with 30% of that mass in the central bulge, 56% in the disk, and the remaining 14% in the stellar halo. The radio results (similar mass to the Milky Way Galaxy) should be taken as likeliest as of 2018, although clearly, this matter is still under active investigation by several research groups worldwide. As of 2019, current calculations based on escape velocity and dynamical mass measurements put the Andromeda Galaxy at , which is only half of the Milky Way's newer mass, calculated in 2019 at . In addition to stars, the Andromeda Galaxy's interstellar medium contains at least in the form of neutral hydrogen, at least as molecular hydrogen (within its innermost 10 kiloparsecs), and of dust. The Andromeda Galaxy is surrounded by a massive halo of hot gas that is estimated to contain half the mass of the stars in the galaxy. The nearly invisible halo stretches about a million light-years from its host galaxy, halfway to our Milky Way Galaxy. Simulations of galaxies indicate the halo formed at the same time as the Andromeda Galaxy. The halo is enriched in elements heavier than hydrogen and helium, formed from supernovae, and its properties are those expected for a galaxy that lies in the "green valley" of the Galaxy color-magnitude diagram (see below). Supernovae erupt in the Andromeda Galaxy's star-filled disk and eject these heavier elements into space. Over the Andromeda Galaxy's lifetime, nearly half of the heavy elements made by its stars have been ejected far beyond the galaxy's 200,000-light-year-diameter stellar disk. Luminosity estimates The estimated luminosity of the Andromeda Galaxy, , is about 25% higher than that of our own galaxy. However, the galaxy has a high inclination as seen from Earth, and its interstellar dust absorbs an unknown amount of light, so it is difficult to estimate its actual brightness and other authors have given other values for the luminosity of the Andromeda Galaxy (some authors even propose it is the second-brightest galaxy within a radius of 10 megaparsecs of the Milky Way, after the Sombrero Galaxy, with an absolute magnitude of around −22.21 or close). An estimation done with the help of Spitzer Space Telescope published in 2010 suggests an absolute magnitude (in the blue) of −20.89 (that with a color index of +0.63 translates to an absolute visual magnitude of −21.52, compared to −20.9 for the Milky Way), and a total luminosity in that wavelength of . 
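A minimal sketch (not from the article) of the distance-modulus relation underlying the magnitude-based estimates above, d = 10^((m − M + 5)/5) parsecs, ignoring interstellar extinction. The function name and the example magnitudes are hypothetical, chosen only so that the result lands near the Cepheid-based value of roughly 770 kpc quoted above; the second line simply rechecks the color-index conversion stated in the luminosity discussion.

def distance_pc(apparent_mag, absolute_mag):
    # Distance in parsecs from the distance modulus m - M (extinction ignored)
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

print(distance_pc(19.4, -5.0))   # about 7.6e5 pc (~760 kpc) for these hypothetical magnitudes
print(-20.89 - 0.63)             # -21.52, the visual absolute magnitude quoted above (blue magnitude minus the color index)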
The rate of star formation in the Milky Way is much higher, with the Andromeda Galaxy producing only about one solar mass per year compared to 3–5 solar masses for the Milky Way. The rate of novae in the Milky Way is also double that of the Andromeda Galaxy. This suggests that the latter once experienced a great star formation phase, but is now in a relative state of quiescence, whereas the Milky Way is experiencing more active star formation. Should this continue, the luminosity of the Milky Way may eventually overtake that of the Andromeda Galaxy. According to recent studies, the Andromeda Galaxy lies in what is known in the galaxy color–magnitude diagram as the "green valley", a region populated by galaxies like the Milky Way in transition from the "blue cloud" (galaxies actively forming new stars) to the "red sequence" (galaxies that lack star formation). Star formation activity in green valley galaxies is slowing as they run out of star-forming gas in the interstellar medium. In simulated galaxies with similar properties to the Andromeda Galaxy, star formation is expected to extinguish within about five billion years, even accounting for the expected, short-term increase in the rate of star formation due to the collision between the Andromeda Galaxy and the Milky Way. Structure Based on its appearance in visible light, the Andromeda Galaxy is classified as an SA(s)b galaxy in the de Vaucouleurs–Sandage extended classification system of spiral galaxies. However, infrared data from the 2MASS survey and the Spitzer Space Telescope showed that Andromeda is actually a barred spiral galaxy, like the Milky Way, with Andromeda's bar major axis oriented 55 degrees anti-clockwise from the disc major axis. There are various methods used in astronomy in defining the size of a galaxy, and each method can yield different results concerning one another. The most commonly employed is the D25 standard, the isophote where the photometric brightness of a galaxy in the B-band (445 nm wavelength of light, in the blue part of the visible spectrum) reaches 25 mag/arcsec2. The Third Reference Catalogue of Bright Galaxies (RC3) used this standard for Andromeda in 1991, yielding an isophotal diameter of at a distance of 2.5 million light-years. An earlier estimate from 1981 gave a diameter for Andromeda at . A study in 2005 by the Keck telescopes shows the existence of a tenuous sprinkle of stars, or galactic halo, extending outward from the galaxy. The stars in this halo behave differently from the ones in Andromeda's main galactic disc, where they show rather disorganized orbital motions as opposed to the stars in the main disc having more orderly orbits and uniform velocities of 200 km/s. This diffuse halo extends outwards away from Andromeda's main disc with the diameter of . The galaxy is inclined an estimated 77° relative to Earth (where an angle of 90° would be edge-on). Analysis of the cross-sectional shape of the galaxy appears to demonstrate a pronounced, S-shaped warp, rather than just a flat disk. A possible cause of such a warp could be gravitational interaction with the satellite galaxies near the Andromeda Galaxy. The Galaxy M33 could be responsible for some warp in Andromeda's arms, though more precise distances and radial velocities are required. Spectroscopic studies have provided detailed measurements of the rotational velocity of the Andromeda Galaxy as a function of radial distance from the core. 
The rotational velocity has a maximum value of at from the core, and it has its minimum possibly as low as at from the core. Further out, rotational velocity rises out to a radius of , where it reaches a peak of . The velocities slowly decline beyond that distance, dropping to around at . These velocity measurements imply a concentrated mass of about in the nucleus (a rough sketch of how such an enclosed-mass estimate follows from the rotation speed appears at the end of this passage). The total mass of the galaxy increases linearly out to , then more slowly beyond that radius. The spiral arms of the Andromeda Galaxy are outlined by a series of HII regions, first studied in great detail by Walter Baade and described by him as resembling "beads on a string". His studies show two spiral arms that appear to be tightly wound, although they are more widely spaced than in our galaxy. His descriptions of the spiral structure, as each arm crosses the major axis of the Andromeda Galaxy, are as follows: Since the Andromeda Galaxy is seen close to edge-on, it is difficult to study its spiral structure. Rectified images of the galaxy seem to show a fairly normal spiral galaxy, exhibiting two continuous trailing arms that are separated from each other by a minimum of about and that can be followed outward from a distance of roughly from the core. Alternative spiral structures have been proposed, such as a single spiral arm or a flocculent pattern of long, filamentary, and thick spiral arms. The most likely cause of the distortions of the spiral pattern is thought to be interaction with the satellite galaxies M32 and M110. This can be seen in the displacement of the neutral hydrogen clouds from the stars. In 1998, images from the European Space Agency's Infrared Space Observatory demonstrated that the overall form of the Andromeda Galaxy may be transitioning into a ring galaxy. The gas and dust within the galaxy are generally formed into several overlapping rings, with a particularly prominent ring formed at a radius of from the core, nicknamed by some astronomers the ring of fire. This ring is hidden in visible-light images of the galaxy because it is composed primarily of cold dust, and most of the star formation taking place in the Andromeda Galaxy is concentrated there. Later studies with the Spitzer Space Telescope showed that the Andromeda Galaxy's spiral structure in the infrared appears to be composed of two spiral arms that emerge from a central bar and continue beyond the large ring mentioned above. Those arms, however, are not continuous and have a segmented structure. Close examination of the inner region of the Andromeda Galaxy with the same telescope also showed a smaller dust ring that is believed to have been caused by the interaction with M32 more than 200 million years ago. Simulations show that the smaller galaxy passed through the disk of the Andromeda Galaxy along the latter's polar axis. This collision stripped more than half the mass from the smaller M32 and created the ring structures in Andromeda. It is the co-existence of the long-known large ring-like feature in the gas of Messier 31, together with this newly discovered inner ring-like structure, offset from the barycenter, that suggested a nearly head-on collision with the satellite M32, a milder version of the Cartwheel encounter. Studies of the extended halo of the Andromeda Galaxy show that it is roughly comparable to that of the Milky Way, with stars in the halo being generally "metal-poor", and increasingly so with greater distance.
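Returning to the rotation measurements above: the link between a measured rotation velocity and an implied enclosed mass is the Keplerian relation M(r) ≈ v²r/G for material orbiting the mass interior to radius r. The Python sketch below illustrates that arithmetic only; the velocity and radius values are made-up placeholders, not the measured M31 rotation curve.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass in kg
PC = 3.086e16      # metres per parsec

def enclosed_mass_solar(v_km_s: float, r_pc: float) -> float:
    # Keplerian enclosed mass M = v^2 * r / G, returned in solar masses.
    v = v_km_s * 1e3
    r = r_pc * PC
    return v * v * r / G / M_SUN

# Illustrative placeholder numbers only (not the published rotation-curve values):
print(f"{enclosed_mass_solar(225.0, 1300.0):.2e} solar masses enclosed")

With a placeholder velocity of 225 km/s at 1,300 pc, the enclosed mass comes out near 1.5 × 10¹⁰ solar masses, which shows why even modest rotation speeds measured close to the core imply a very large concentrated mass.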
This halo evidence indicates that the two galaxies have followed similar evolutionary paths. They are likely to have accreted and assimilated about 100–200 low-mass galaxies during the past 12 billion years. The stars in the extended halos of the Andromeda Galaxy and the Milky Way may extend nearly one-third the distance separating the two galaxies. Nucleus The Andromeda Galaxy is known to harbor a dense and compact star cluster at its very center, similar to our own galaxy. A large telescope creates a visual impression of a star embedded in the more diffuse surrounding bulge. In 1991, the Hubble Space Telescope was used to image the Andromeda Galaxy's inner nucleus. The nucleus consists of two concentrations separated by . The brighter concentration, designated P1, is offset from the center of the galaxy. The dimmer concentration, P2, falls at the true center of the galaxy and contains an embedded star cluster, called P3, containing many UV-bright A-type stars and the supermassive black hole, called M31*. The black hole is classified as a low-luminosity AGN (LLAGN) and has been detected only at radio wavelengths and in X-rays. It was quiescent in 2004–2005, but it was highly variable in 2006–2007. The mass of M31* was measured at 3–5 × 10⁷ solar masses in 1993, and at 1.1–2.3 × 10⁸ solar masses in 2005. The velocity dispersion of material around it is measured to be ≈ . It has been proposed that the observed double nucleus could be explained if P1 is the projection of a disk of stars in an eccentric orbit around the central black hole. The eccentricity is such that stars linger at the orbital apocenter, creating a concentration of stars. It has been postulated that such an eccentric disk could have formed as the result of a previous black hole merger, where the release of gravitational waves could have "kicked" the stars into their current eccentric distribution. P2 also contains a compact disk of hot, spectral-class A stars. The A stars are not evident in redder filters, but in blue and ultraviolet light they dominate the nucleus, causing P2 to appear more prominent than P1. While at the time of its discovery it was hypothesized that the brighter portion of the double nucleus was the remnant of a small galaxy "cannibalized" by the Andromeda Galaxy, this is no longer considered a viable explanation, largely because such a nucleus would have an exceedingly short lifetime due to tidal disruption by the central black hole. While this could be partially resolved if P1 had its own black hole to stabilize it, the distribution of stars in P1 does not suggest that there is a black hole at its center. Discrete sources Apparently, by late 1968, no X-rays had been detected from the Andromeda Galaxy. A balloon flight on 20 October 1970 set an upper limit for detectable hard X-rays from the Andromeda Galaxy. The Swift BAT all-sky survey successfully detected hard X-rays coming from a region centered 6 arcseconds away from the galaxy center. The emission above 25 keV was later found to originate from a single source named 3XMM J004232.1+411314, identified as a binary system in which a compact object (a neutron star or a black hole) accretes matter from a star. Multiple X-ray sources have since been detected in the Andromeda Galaxy, using observations from the European Space Agency's (ESA) XMM-Newton orbiting observatory. Robin Barnard et al. hypothesized that these are candidate black holes or neutron stars, which are heating the incoming gas to millions of kelvins and emitting X-rays.
Neutron stars and black holes can be distinguished mainly by measuring their masses. An observation campaign of the NuSTAR space mission identified 40 objects of this kind in the galaxy. In 2012, a microquasar, a radio burst emanating from a smaller black hole, was detected in the Andromeda Galaxy. The progenitor black hole is located near the galactic center and has about 10 . It was discovered through data collected by the European Space Agency's XMM-Newton probe and was subsequently observed by NASA's Swift Gamma-Ray Burst Mission and Chandra X-Ray Observatory, the Very Large Array, and the Very Long Baseline Array. The microquasar was the first observed within the Andromeda Galaxy and the first outside of the Milky Way Galaxy. Globular clusters There are approximately 460 globular clusters associated with the Andromeda Galaxy. The most massive of these clusters, identified as Mayall II and nicknamed Globular One, has a greater luminosity than any other known globular cluster in the Local Group of galaxies. It contains several million stars and is about twice as luminous as Omega Centauri, the brightest known globular cluster in the Milky Way. Globular One (or G1) has several stellar populations and a structure too massive for an ordinary globular. As a result, some consider G1 to be the remnant core of a dwarf galaxy that was consumed by Andromeda in the distant past. The globular with the greatest apparent brightness is G76, which is located in the eastern half of the southwest arm. Another massive globular cluster, named 037-B327 and discovered in 2006, which is heavily reddened by the Andromeda Galaxy's interstellar dust, was thought to be more massive than G1 and the largest cluster of the Local Group; however, other studies have shown it is actually similar in properties to G1. Unlike the globular clusters of the Milky Way, which show a relatively low age dispersion, the Andromeda Galaxy's globular clusters have a much larger range of ages: from systems as old as the galaxy itself to much younger systems, with ages ranging from a few hundred million to five billion years. In 2005, astronomers discovered a completely new type of star cluster in the Andromeda Galaxy. The new-found clusters contain hundreds of thousands of stars, a number similar to that found in globular clusters. What distinguishes them from the globular clusters is that they are much larger—several hundred light-years across—and hundreds of times less dense. The distances between the stars are, therefore, much greater within the newly discovered extended clusters. The most massive globular cluster in the Andromeda Galaxy, B023-G078, likely has a central intermediate-mass black hole of almost 100,000 solar masses. PA-99-N2 event and possible exoplanet in galaxy PA-99-N2 was a microlensing event detected in the Andromeda Galaxy in 1999. One of the explanations for this is the gravitational lensing of a red giant by a star with a mass between 0.02 and 3.6 times that of the Sun, which suggested that the star is likely orbited by a planet. This possible exoplanet would have a mass 6.34 times that of Jupiter. If confirmed, it would be the first extragalactic planet ever found. However, anomalies in the event were later found. Nearby and satellite galaxies Like the Milky Way, the Andromeda Galaxy has smaller satellite galaxies, consisting of over 20 known dwarf galaxies. The Andromeda Galaxy's dwarf galaxy population is very similar to the Milky Way's, but the galaxies are much more numerous.
The best-known and most readily observed satellite galaxies are M32 and M110. Based on current evidence, it appears that M32 underwent a close encounter with the Andromeda Galaxy in the past. M32 may once have been a larger galaxy that had its stellar disk removed by M31 and underwent a sharp increase of star formation in the core region, which lasted until the relatively recent past. M110 also appears to be interacting with the Andromeda Galaxy, and astronomers have found in the halo of the latter a stream of metal-rich stars that appear to have been stripped from these satellite galaxies. M110 does contain a dusty lane, which may indicate recent or ongoing star formation. M32 has a young stellar population as well. The Triangulum Galaxy is a non-dwarf galaxy that lies 750,000 light-years from Andromeda. It is currently unknown whether it is a satellite of Andromeda. In 2006, it was discovered that nine of the satellite galaxies lie in a plane that intersects the core of the Andromeda Galaxy; they are not randomly arranged as would be expected from independent interactions. This may indicate a common tidal origin for the satellites. Collision with the Milky Way The Andromeda Galaxy is approaching the Milky Way at about per second. It has been measured approaching relative to the Sun at around as the Sun orbits around the center of the galaxy at a speed of approximately . This makes the Andromeda Galaxy one of about 100 observable blueshifted galaxies. Andromeda Galaxy's tangential or sideways velocity concerning the Milky Way is relatively much smaller than the approaching velocity and therefore it is expected to collide directly with the Milky Way in about 2.5–4 billion years. A likely outcome of the collision is that the galaxies will merge to form a giant elliptical galaxy or possibly large disc galaxy. Such events are frequent among the galaxies in galaxy groups. The fate of Earth and the Solar System in the event of a collision is currently unknown. Before the galaxies merge, there is a small chance that the Solar System could be ejected from the Milky Way or join the Andromeda Galaxy. Amateur observation Under most viewing conditions, the Andromeda Galaxy is one of the most distant objects that can be seen with the naked eye, due to its sheer size. (M33 and, for observers with exceptionally good vision, M81 can be seen under very dark skies.) The galaxy is commonly located in the sky about the constellations Cassiopeia and Pegasus. Andromeda is best seen during autumn nights in the Northern Hemisphere when it passes high overhead, reaching its highest point around midnight in October, and two hours earlier each successive month. In the early evening, it rises in the east in September and sets in the west in February. From the Southern Hemisphere the Andromeda Galaxy is visible between October and December, best viewed from as far north as possible. Binoculars can reveal some larger structures of the galaxy and its two brightest satellite galaxies, M32 and M110. An amateur telescope can reveal Andromeda's disk, some of its brightest globular clusters, dark dust lanes, and the large star cloud NGC 206.
Physical sciences
Notable galaxies
null
74366
https://en.wikipedia.org/wiki/Square%20metre
Square metre
The square metre (international spelling as used by the International Bureau of Weights and Measures) or square meter (American spelling) is the unit of area in the International System of Units (SI) with symbol m². It is the area of a square with sides one metre in length. Adding and subtracting SI prefixes creates multiples and submultiples; however, as the unit is exponentiated, the quantities grow by the corresponding power of 10 raised to the same exponent as the unit. For example, 1 kilometre is 10³ (one thousand) times the length of 1 metre, but 1 square kilometre is (10³)² (10⁶, one million) times the area of 1 square metre, and 1 cubic kilometre is (10³)³ (10⁹, one billion) cubic metres. SI prefixes applied The square metre may be used with all SI prefixes used with the metre. Unicode characters Unicode has several characters used to represent metric area units, but these exist only for compatibility with East Asian character encodings and are not meant to be used in new documents. Instead, the Unicode superscript can be used, as in m². Conversions One square metre is equal to: square kilometre (km²) square centimetres (cm²) hectares (ha) decares (daa) ares (a) deciares (da) centiare (ca) acres cents square yards square feet square inches
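To make the prefix arithmetic above concrete, the Python sketch below converts areas expressed with a prefixed metre into square metres by squaring the length prefix factor; the prefix table is a small illustrative subset, not the full SI list.

# Length-prefix factors relative to the metre (illustrative subset of SI prefixes).
PREFIX = {"k": 1e3, "h": 1e2, "da": 1e1, "": 1.0, "d": 1e-1, "c": 1e-2, "m": 1e-3}

def area_in_square_metres(value: float, prefix: str) -> float:
    # A value in (prefix-)square metres equals value * (length factor)^2 in m^2.
    return value * PREFIX[prefix] ** 2

print(area_in_square_metres(1, "k"))    # 1 km^2   -> 1,000,000 m^2
print(area_in_square_metres(1, "c"))    # 1 cm^2   -> 0.0001 m^2
print(area_in_square_metres(2.5, "d"))  # 2.5 dm^2 -> 0.025 m^2

The same squaring is what makes 1 km² equal 10⁶ m² even though 1 km is only 10³ m.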
Physical sciences
Area
Basics and measurement
74390
https://en.wikipedia.org/wiki/Decay%20product
Decay product
In nuclear physics, a decay product (also known as a daughter product, daughter isotope, radio-daughter, or daughter nuclide) is the remaining nuclide left over from radioactive decay. Radioactive decay often proceeds via a sequence of steps (decay chain). For example, 238U decays to 234Th which decays to 234mPa which decays, and so on, to 206Pb (which is stable): In this example: 234Th, 234mPa,...,206Pb are the decay products of 238U. 234Th is the daughter of the parent 238U. 234mPa (234 metastable) is the granddaughter of 238U. These might also be referred to as the daughter products of 238U. Decay products are important in understanding radioactive decay and the management of radioactive waste. For elements above lead in atomic number, the decay chain typically ends with an isotope of lead or bismuth. Bismuth itself decays to thallium, but the decay is so slow as to be practically negligible. In many cases, individual members of the decay chain are as radioactive as the parent, but far smaller in volume/mass. Thus, although uranium is not dangerously radioactive when pure, some pieces of naturally occurring pitchblende are quite dangerous owing to their radium-226 content, which is soluble and not a ceramic like the parent. Similarly, thorium gas mantles are very slightly radioactive when new, but become more radioactive after only a few months of storage as the daughters of 232Th build up. Although it cannot be predicted whether any given atom of a radioactive substance will decay at any given time, the decay products of a radioactive substance are extremely predictable. Because of this, decay products are important to scientists in many fields who need to know the quantity or type of the parent product. Such studies are done to measure pollution levels (in and around nuclear facilities) and for other matters.
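The build-up of daughter activity described above (for example, in stored thorium gas mantles) can be illustrated with the standard two-member Bateman solution: for a parent decaying into a single daughter with decay constants λ₁ and λ₂, and no daughter atoms at t = 0, N₂(t) = N₁(0) · λ₁/(λ₂ − λ₁) · (exp(−λ₁t) − exp(−λ₂t)). The Python sketch below uses made-up half-lives in arbitrary units purely to show the shape of the growth; it does not model any specific chain mentioned in this article.

import math

def bateman_daughter(n1_0: float, t_half_parent: float, t_half_daughter: float, t: float) -> float:
    # Daughter population for a parent -> daughter chain, assuming no daughter atoms at t = 0.
    lam1 = math.log(2) / t_half_parent
    lam2 = math.log(2) / t_half_daughter
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

# Hypothetical half-lives (arbitrary time units), not data from the article:
for t in (0.0, 1.0, 5.0, 20.0, 100.0):
    print(t, round(bateman_daughter(1_000_000, 100.0, 5.0, t)))

With a long-lived parent and a short-lived daughter, the daughter population climbs from zero toward a level set by the parent's decay rate, which is why a freshly purified sample grows more radioactive over the first few daughter half-lives.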
Physical sciences
Nuclear physics
Physics
74549
https://en.wikipedia.org/wiki/Pomegranate
Pomegranate
The pomegranate (Punica granatum) is a fruit-bearing deciduous shrub in the family Lythraceae, subfamily Punicoideae, that grows between tall. Rich in symbolic and mythological associations in many cultures, it is thought to have originated from Afghanistan and Iran before being introduced and exported to other parts of Asia, Africa, and Europe. It was introduced into Spanish America in the late 16th century and into California by Spanish settlers in 1769. It is widely cultivated throughout West Asia and Caucasus region, South Asia, Central Asia, north and tropical Africa, the drier parts of Southeast Asia, and the Mediterranean Basin. The fruit is typically in season in the Northern Hemisphere from September to February, and in the Southern Hemisphere from March to May. Pomegranate and juice are variously used in baking, cooking, juice blends, garnishes, non-alcoholic drinks, and cocktails. Etymology The name pomegranate derives from medieval Latin 'apple' and 'seeded'. Possibly stemming from the old French word for the fruit, , the pomegranate was known in early English as apple of Grenada—a term which today survives only in heraldic blazons. This is a folk etymology, confusing the Latin granatus with the name of the Spanish city of Granada, which is derived from an unrelated Arabic word. Garnet derives from Old French by metathesis, from Medieval Latin as used in a different meaning 'of a dark red color'. This derivation may have originated from pomum granatum, describing the color of pomegranate pulp, or from granum, referring to 'red dye, cochineal'. The modern French term for pomegranate, , has given its name to the military grenade. Pomegranates were colloquially called wineapples or wine-apples in Ireland, although this term has fallen out of use. It still persists at the Moore Street open-air market, in central Dublin. Description The pomegranate is a shrub or small tree growing high, with multiple spiny branches. It is long-lived, with some specimens in France surviving for 200 years. P. granatum leaves are opposite or subopposite, glossy, narrow oblong, entire, long and broad. The flowers are bright red and in diameter, with three to seven petals. Some fruitless varieties are grown for the flowers alone. Fruit The pomegranate fruit husk is red-purple in color with an outer, hard pericarp, and an inner, spongy mesocarp (white "albedo"), which comprises the fruit inner wall where seeds attach. Membranes of the mesocarp are organized as nonsymmetric chambers that contain seeds which are embedded without attachment to the mesocarp. Pomegranate seeds are characterized by having sarcotesta, thick fleshy seed coats derived from the integuments or outer layers of the ovule's epidermal cells. The number of seeds in a pomegranate can vary from 200 to about 1,400. Botanically, the fruit is a berry with edible seeds and pulp produced from the ovary of a single flower. The fruit is intermediate in size between a lemon and a grapefruit, in diameter with a rounded shape and thick, reddish husk. In mature fruits, the juice obtained by compressing the seeds yields a tart flavor due to low pH (4.4) and high contents of polyphenols, which may cause a red indelible stain on fabrics. The pigmentation of pomegranate juice primarily results from the presence of anthocyanins and ellagitannins. Cultivation P. granatum is grown for its vegetable crop, and as ornamental trees and shrubs in parks and gardens. 
Mature specimens can develop sculptural twisted-bark, multiple trunks and a distinctive overall form. Pomegranates are drought-tolerant, and can be grown in dry areas with either a Mediterranean winter rainfall climate or in summer rainfall climates. In wetter areas, they can be prone to root decay from fungal diseases. They can tolerate moderate frost, down to about . Insect pests of the pomegranate can include the butterflies Virachola isocrates, Iraota timoleon, Deudorix epijarbas, and the leaf-footed bug Leptoglossus zonatus, and fruit flies and ants are attracted to unharvested ripe fruit. Propagation P. granatum reproduces sexually in nature but can be propagated using asexual reproduction. Propagation methods include layering, hardwood cuttings, softwood cuttings and tissue culture. Required conditions for rooting cuttings is warm temperature, within the 18 - 29 °C (65 - 85 °F) range, semi-humid environment, rooting hormone increases success rate but is not required. Grafting is possible but impractical and tends to yield low success rates. Varieties P. granatum var. nana is a dwarf variety of P. granatum popularly planted as an ornamental plant in gardens and larger containers, and used as a bonsai specimen tree. It could well be a wild form with a distinct origin. It has gained the Royal Horticultural Society's Award of Garden Merit. The only other species in the genus Punica is the Socotran pomegranate (P. protopunica), which is endemic to the Socotran archipelago of four islands located in the Arabian Sea, the largest island of which is also known as Socotra. The territory is part of Yemen. It differs in having pink (not red) flowers and smaller, less sweet fruit. Cultivars P. granatum has more than 500 named cultivars, but evidently has considerable synonymy in which the same genotype is named differently across regions of the world. Several characteristics between pomegranate genotypes vary for identification, consumer preference, preferred use, and marketing, the most important of which are fruit size, exocarp color (ranging from yellow to purple, with pink and red most common), seed-coat color (ranging from white to red), the hardness of seed, maturity, juice content and its acidity, sweetness, and astringency. Production and export The leading producers globally are India and China, followed by Iran, Turkey, Afghanistan, the US, Iraq, Pakistan, Syria, and Spain. During 2019, Chile, Peru, Egypt, Israel, India, and Turkey supplied pomegranates to the European market. Chile was the main supplier to the United States market, which has a limited supply from Southern California. China was self-sufficient for its pomegranate supply in 2019, while other South Asia markets were supplied mainly by India. Pomegranate production and exports in South Africa competed with South American shipments in 2012–18, with export destinations including Europe, the Middle East, the United Kingdom, and Russia. South Africa imports pomegranates mainly from Israel. History The pomegranate is native to a region from modern-day Iran to northern India. Pomegranates have been cultivated throughout the Middle East, India, and Mediterranean region for several millennia, and it is also cultivated in the Central Valley of California and in Arizona. Pomegranates may have been domesticated as early as the fifth millennium BC, as they were one of the first fruit trees to be domesticated in the eastern Mediterranean region. 
Carbonized exocarp of the fruit has been identified in early Bronze Age levels of Tell es-Sultan (Jericho) in the West Bank, as well as late Bronze Age levels of Hala Sultan Tekke on Cyprus and Tiryns. A large, dry pomegranate was found in the tomb of Djehuty, the butler of Queen Hatshepsut in Egypt; Mesopotamian records written in cuneiform mention pomegranates from the mid-third millennium BC onwards. Waterlogged pomegranate remains have been identified at the circa 14th century BC Uluburun shipwreck off the coast of Turkey. Other goods on the ship include perfume, ivory and gold jewelry, suggesting that pomegranates at this time may have been considered a luxury good. Other archaeological finds of pomegranate remains from the Late Bronze Age have been found primarily in elite residences, supporting this inference. It is also extensively grown in southern China and in Southeast Asia, whether originally spread along the route of the Silk Road or brought by sea traders. Kandahar is famous in Afghanistan for its high-quality pomegranates. Although not native to Korea or Japan, the pomegranate is widely grown there and many cultivars have been developed. It is widely used for bonsai because of its flowers and for the unusual twisted bark the older specimens can attain. The term "balaustine" () is also used for a pomegranate-red color. Spanish colonists later introduced the fruit to the Caribbean and America (Spanish America). However, in the English colonies, it was less at home: "Don't use the pomegranate inhospitably, a stranger that has come so far to pay his respects to thee," the English Quaker Peter Collinson wrote to the botanizing John Bartram in Philadelphia, 1762. "Plant it against the side of thy house, nail it close to the wall. In this manner it thrives wonderfully with us, and flowers beautifully, and bears fruit this hot year. I have twenty-four on one tree... Doctor Fothergill says, of all trees this is most salutiferous to mankind." The pomegranate had been introduced as an exotic to England the previous century, by John Tradescant the Elder, but the disappointment that it did not set fruit there led to its repeated introduction to the American colonies, even New England. It succeeded in the South: Bartram received a barrel of pomegranates and oranges from a correspondent in Charleston, South Carolina, 1764. John Bartram partook of "delitious" pomegranates with Noble Jones at Wormsloe Plantation, near Savannah, Georgia, in September 1765. Thomas Jefferson planted pomegranates at Monticello in 1771; he had them from George Wythe of Williamsburg. Use Culinary Pomegranate juice can be sweet or sour, but most fruits are moderate in taste, with sour notes from the acidic ellagitannins contained in the juice. Pomegranate juice has long been a common drink in Europe and the Middle East, and is distributed worldwide. Pomegranate juice is also used as a cooking ingredient. In Syria, pomegranate juice is added to intensify the flavor of some dishes such as kibbeh safarjaliyeh. Grenadine syrup, commonly used in cocktail, originally consisted of thickened and sweetened pomegranate juice, but today is typically a syrup made just of sugar and commercially produced natural and artificial flavors, preservatives, and food coloring, or using substitute fruits (such as berries). 
Before tomatoes (a New World fruit) arrived in the Middle East, pomegranate juice, pomegranate molasses, and vinegar were widely used in many Iranian foods; this mixture is still found in traditional recipes such as fesenjān, a thick sauce made from pomegranate juice and ground walnuts, usually spooned over duck or other poultry and rice, and in ash-e anar (pomegranate soup). Pomegranate seeds are used as a spice known as anar dana (from , pomegranate + seed), most notably in Indian and Pakistani cuisine. Dried whole seeds can often be obtained in ethnic Indian markets. These seeds are separated from the flesh, dried for 10–15 days, and used as an acidic agent for chutney and curry preparation. Ground anardana is also used, which results in a deeper flavoring in dishes and prevents the seeds from getting stuck in teeth. Seeds of the wild pomegranate variety known as daru from the Himalayas are regarded as high-quality sources for this spice. Dried pomegranate seeds, found in some natural specialty food markets, still contain some residual water, maintaining a natural sweet and tart flavor. Dried seeds can be used in several culinary applications, such as trail mix, granola bars, or as a topping for salad, yogurt, or ice cream. In Turkey, pomegranate sauce () is used as a salad dressing, to marinate meat, or simply to drink straight. Pomegranate seeds are also used in salads and sometimes as a garnish for desserts such as güllaç. Pomegranate syrup, also called pomegranate molasses, is used in muhammara, a roasted red pepper, walnut, and garlic spread popular in Syria and Turkey. In Greece, pomegranate is used in many recipes, including kollivozoumi, a creamy broth made from boiled wheat, pomegranates, and raisins; legume salad with wheat and pomegranate; traditional Middle Eastern lamb kebabs with pomegranate glaze; pomegranate eggplant relish; and avocado-pomegranate dip. Pomegranate is also made into a liqueur and into a popular fruit confectionery used as an ice cream topping, mixed with yogurt, or spread as jam on toast. In Mexico, pomegranate seeds are commonly used to adorn the traditional dish chiles en nogada, representing the red of the Mexican flag in a dish that evokes the green (poblano pepper), white (nogada sauce), and red (pomegranate seeds) of the tricolor. Other uses Pomegranate peels may be used to stain wool and silk in the carpet industry. Nutrition The edible portion of raw pomegranate is 78% water, 19% carbohydrates, 2% protein, and 1% fat. A serving of pomegranate sarcotesta provides 12% of the Daily Value (DV) for vitamin C, 16% DV for vitamin K, and 10% DV for folate, while the seeds are a rich source of dietary fiber (20% DV). Research Phytochemicals Processing The phenolic content of pomegranate juice is degraded by processing and pasteurization techniques. Juice The most abundant phytochemicals in pomegranate juice are polyphenols, including the hydrolyzable tannins called ellagitannins, formed when ellagic acid and gallic acid bind with a carbohydrate to form pomegranate ellagitannins, also known as punicalagins. The red color of the juice is attributed to anthocyanins, such as delphinidin, cyanidin, and glycosides of pelargonidin. Generally, an increase in juice pigmentation occurs during fruit ripening. Peel Pomegranate peel contains a high amount of polyphenols, condensed tannins, catechins, and prodelphinidins. The higher phenolic content of the peel yields extracts for use in dietary supplements and food preservatives.
Seed Pomegranate seed oil contains punicic acid (65%), palmitic acid (5%), stearic acid (2%), oleic acid (6%), and linoleic acid (7%). Health claims Despite limited research data, manufacturers and marketers of pomegranate juice have liberally used results from preliminary research to promote products. In February 2010, the FDA issued a warning letter to one such manufacturer, POM Wonderful, for using published literature to make illegal claims of unproven anti-disease effects. In May 2016, the US Federal Trade Commission declared that POM Wonderful could not make health claims in its advertising, followed by a US Supreme Court ruling that declined a request by POM Wonderful to review the court ruling, upholding the FTC decision. Symbolism Ancient Iran The pomegranate, known as in Persian, is a symbol of fertility, blessing, and favor in Iranian belief. Pomegranates are sacred in the Zoroastrian religion, and Zoroastrians have used them in their religious rituals. The yellow color of the pomegranate stamens symbolizes the sun and light. The pomegranate tree has been one of the most sacred plants in Iran; it is believed to have grown from the spot where the blood of Siavash (the legendary Iranian character known for his innocence) was spilled, and it is mentioned in Iranian Pahlavi scripts as a fruit of heaven. It is also believed that the invulnerability of Esfandiar (a legendary Iranian figure) was related to this sacred fruit. The Zoroastrians of Iran believe that the pomegranate is a blessed fruit; it is served in their festivals such as Mehregan and Nowruz, and especially in their wedding ceremonies, to wish the newly married couple a healthy child in the future. They also used to plant a pomegranate tree in their fire temples to use its leaves in their ceremonies. During the Iranian tradition of Yalda Night, people come together on the winter solstice and eat pomegranate fruit to celebrate the victory of light over darkness. In a relief from Persepolis, Darius the Great is holding a pomegranate flower with two buds. This Achaemenid king is receiving the representatives of all the subordinate lands of Greater Iran while holding a large flower in his hand as a sign of peace and friendship. Ancient Egypt Ancient Egyptians regarded the pomegranate as a symbol of prosperity and ambition. It was referred to by the Semitic names of jnhm or nhm. According to the Ebers Papyrus, one of the oldest medical writings from around 1500 BC, Egyptians used the pomegranate for treatment of tapeworm and other infections. Ancient and modern Greece A pomegranate is displayed on coins from Side, as "side" was the word for pomegranate in the local language and gave the city its name. The ancient Greek city of Side was in Pamphylia, a former region on the southern Mediterranean coast of Asia Minor (modern-day Antalya province, Turkey). The Greeks were familiar with the fruit long before it was introduced to Rome via Carthage, and it figures in multiple myths and artworks. In Ancient Greek mythology, the pomegranate was known as the "fruit of the dead" and believed to have sprung from the blood of Adonis. The myth of Persephone, the goddess of the underworld, prominently features her consumption of pomegranate seeds, requiring her to spend a certain number of months in the underworld every year. The number of seeds, and therefore months, varies. During the months that Persephone sits on the throne of the underworld beside her husband Hades, her mother Demeter mourns and no longer gives fertility to the earth.
This was an ancient Greek explanation for the seasons. According to Carl A. P. Ruck and Danny Staples, the chambered pomegranate is also a surrogate for the poppy's narcotic capsule, with its comparable shape and chambered interior. In another Greek myth, a girl named Side ("pomegranate") killed herself on her mother's grave in order to avoid suffering rape at the hands of her own father Ictinus. Her blood transformed into a pomegranate tree. In the fifth century BC, Polycleitus took ivory and gold to sculpt the seated Argive Hera in her temple. She held a scepter in one hand and offered a pomegranate, like a "royal orb", in the other. "About the pomegranate I must say nothing," whispered the traveller Pausanias in the second century, "for its story is somewhat of a holy mystery". The pomegranate has a calyx shaped like a crown. In Jewish tradition, it has been seen as the original "design" for the proper crown. Within the Heraion at the mouth of the Sele, near Paestum, Magna Graecia, is a chapel devoted to the Madonna del Granato, "Our Lady of the Pomegranate", "who by virtue of her epithet and the attribute of a pomegranate must be the Christian successor of the ancient Greek goddess Hera", observes the excavator of the Heraion of Samos, Helmut Kyrieleis. In modern times, the pomegranate still holds strong symbolic meanings for the Greeks. When one buys a new home, it is conventional for a house guest to bring as a first gift a pomegranate, which is placed under/near the ikonostasi (home altar) of the house, as a symbol of abundance, fertility, and good luck. When Greeks commemorate their dead, they make kollyva as offerings, which consist of boiled wheat, mixed with sugar and decorated with pomegranate. Pomegranate decorations for the home are very common in Greece and sold in most home goods stores. Ancient Israel and Judaism Hebrew Bible Some Jewish scholars believe the pomegranate was the forbidden fruit in the Garden of Eden. Pomegranates were known in Ancient Israel as the fruits that the scouts brought to Moses to demonstrate the fertility of the "Promised Land". The Book of Exodus describes the me'il ("robe of the ephod") worn by the Hebrew high priest as having pomegranates embroidered on the hem, alternating with golden bells, which could be heard as the high priest entered and left the Holy of Holies. According to the Books of Kings, the capitals of the two pillars (Jachin and Boaz) that stood in front of Solomon's Temple in Jerusalem were engraved with pomegranates. Solomon is said to have designed his coronet based on the pomegranate's "crown" (calyx). Pomegranates are one of the Seven Species (Hebrew: שבעת המינים, Shiv'at Ha-Minim) of fruits and grains enumerated in the Hebrew Bible () as special products of the Land of Israel, and the Songs of Solomon mentions pomegranate six times and contains this particular quote: "Thy lips are like a thread of scarlet, and thy speech is comely: thy temples are like a piece of a pomegranate within thy locks." (). Historical and traditional use The pomegranate appeared on the ancient coins of Judaea, see Hasmonean, Herodian and First Jewish Revolt coinage. The handles of Torah scrolls, when not in use, are sometimes covered with decorative silver globes similar in shape to pomegranates (Torah rimmonim). Consuming pomegranates on Rosh Hashana, the Jewish New Year, is traditional because, with its numerous seeds, it symbolizes fruitfulness. 
Talmud and Kabbalah The pomegranate is said to have 613 seeds representing the 613 commandments of the Torah, but it is a misconception. There is no clear source for this claim, although it is used as a metaphor in the Talmud for numerous good deeds. In European Christian motifs In the earliest incontrovertible appearance of Christ in a mosaic, a fourth-century floor mosaic from Hinton St Mary, Dorset, now in the British Museum, the bust of Christ and the chi rho are flanked by pomegranates. Pomegranates continue to be a motif often found in Christian religious decoration. They are often woven into the fabric of vestments and liturgical hangings or wrought in metalwork. Pomegranates figure in many religious paintings by the likes of Sandro Botticelli and Leonardo da Vinci, often in the hands of the Virgin Mary or the infant Jesus. The fruit, broken or bursting open, is a symbol of the fullness of Jesus' suffering and resurrection. In Islam Chapter 55 of the Quran mentions the pomegranate as a "favour" among many to be offered to those fearful to the "Lord" in "two Gardens". Armenia The pomegranate is one of the main fruits in Armenian culture (alongside apricots and grapes). Its juice is used with Armenian food and wine. The pomegranate is a symbol in Armenia, representing fertility, abundance, and marriage. It is also a semireligious icon. For example, the fruit played an integral role in a wedding custom widely practiced in ancient Armenia; a bride was given a pomegranate fruit, which she threw against a wall, breaking it into pieces. Scattered pomegranate seeds ensured the bride future children. The Color of Pomegranates, a movie directed by Sergei Parajanov, is a biography of the Armenian ashug Sayat-Nova (King of Song) which attempts to reveal the poet's life visually and poetically rather than literally. Azerbaijan Every fall the Goychay Pomegranate Festival is held in the city of Goychay. China Introduced to China during the Han dynasty (206BC–220AD), the pomegranate (), in older times, was considered an emblem of fertility and numerous progeny. Pictures of the ripe fruit with the seeds bursting forth were often hung in homes to bestow fertility and bless the dwelling with numerous offspring, an important facet of traditional Chinese culture. In modern times, the pomegranate has been used to symbolise national cohesion and ethnic unity by Xi Jinping, urging the Chinese population to "stick together like pomegranate seeds". India In some Hindu traditions, the pomegranate (Sanskrit: dāḍima) symbolizes prosperity and fertility, and is associated with both Bhumi (the earth goddess) and Ganesha (the one fond of the many-seeded fruit). Kurdish culture Pomegranate is an important fruit and symbol in Kurdish culture. It is accepted as a symbol of abundance and a sacred fruit of ancient Kurdish religions. Pomegranate is used as a symbol of abundance in Kurdish carpets. Gallery
Biology and health sciences
Myrtales
null
74553
https://en.wikipedia.org/wiki/Dermatology
Dermatology
Dermatology is the branch of medicine dealing with the skin. It is a speciality with both medical and surgical aspects. A dermatologist is a specialist medical doctor who manages diseases related to skin, hair, nails, and some cosmetic problems. Etymology Attested in English in 1819, the word "dermatology" derives from the Greek δέρματος (dermatos), genitive of δέρμα (derma), "skin" (itself from δέρω dero, "to flay") and -λογία -logia. Neo-Latin dermatologia was coined in 1630, an anatomical term with various French and German uses attested from the 1730s. History In 1708, the first great school of dermatology became a reality at the famous Hôpital Saint-Louis in Paris, and the first textbooks (Willan's, 1798–1808) and atlases (Alibert's, 1806–1816) appeared in print around the same time. Training United States After earning a medical degree (M.D. or D.O.), the length of training in the United States for a general dermatologist to be eligible for board certification by the American Academy of Dermatology, American Board of Dermatology, or American Osteopathic Board of Dermatology is four years. This training consists of an initial medical, transitional, surgical, or pediatric intern year followed by a three-year dermatology residency. Following this training, one- or two-year post-residency fellowships are available in immunodermatology, phototherapy, laser medicine, Mohs micrographic surgery, cosmetic surgery, dermatopathology, or pediatric dermatology. While these dermatology fellowships offer additional subspecialty training, many dermatologists proficiently provide these services without subspecialty fellowship training. For the past several years, dermatology residency positions in the United States have been one of the most competitive to obtain. According to the American Academy of Dermatology, dermatologists are trained to diagnose and manage over 3,000 distinct skin, hair, and nail conditions across patients spanning various age groups. The United States has been experiencing a national shortage of dermatologists for more than a decade. A study published by the Journal of the American Medical Association reported fewer than 3.4 dermatologists for every 100,000 people. United Kingdom In the UK, a dermatologist is a medically qualified practitioner who has gone on to specialize in medicine and then subspecialize in dermatology. This involves: Medical school for five years to obtain an MBBS, MBBCh, MB, or BChir degree Two years of foundation rotations in various specialties Two to three years training in general medicine to obtain a higher degree in medicine and become a member of the Royal College of Physicians Having obtained the MRCP examination, applying to become a Specialty Registrar (StR) in Dermatology and training for four years in dermatology Passing the Specialty Certificate Examination in dermatology before the end of training Upon successful completion of the four-year training period, the doctor becomes an accredited dermatologist and is able to apply for a consultant hospital post as a consultant dermatologist. Fields Cosmetic dermatology Dermatologists have been leaders in the field of cosmetic surgery. Some dermatologists complete fellowships in surgical dermatology. Many are trained in their residency on the use of botulinum toxin, fillers, and laser surgery. Some dermatologists perform cosmetic procedures including liposuction, blepharoplasty, and face lifts. Most dermatologists limit their cosmetic practice to minimally invasive procedures. 
Despite an absence of formal guidelines from the American Board of Dermatology, many cosmetic fellowships are offered in both surgery and laser medicine. Dermatopathology A dermatopathologist is a pathologist or dermatologist who specializes in the pathology of the skin. This field is shared by dermatologists and pathologists. Usually, a dermatologist or pathologist completes a one-year dermatopathology fellowship. This usually includes six months of general pathology and six months of dermatopathology. Alumni of both specialties can qualify as dermatopathologists. At the completion of a standard residency in dermatology, many dermatologists are also competent at dermatopathology. Some dermatopathologists qualify to sit for their examinations by completing a residency in dermatology and one in pathology. Trichology Trichology specializes in diseases that manifest with hair loss, hair abnormalities, hypertrichosis, and scalp changes. Trichoscopy is a medical diagnostic method used by dermatologists with a special interest in trichology. Immunodermatology This field specializes in the treatment of immune-mediated skin diseases such as lupus, bullous pemphigoid, and pemphigus vulgaris, among other immune-mediated skin disorders. Specialists in this field often run their own immunopathology labs. Immunodermatology testing is essential for the correct diagnosis and treatment of many diseases affecting epithelial organs including the skin, mucous membranes, and gastrointestinal and respiratory tracts. The various diseases often overlap in clinical and histological presentation and, although the diseases themselves are not common, may present with features of common skin disorders such as urticaria, eczema, and chronic itch. Therefore, the diagnosis of an immunodermatological disease is often delayed. Tests are performed on blood and tissues that are sent to various laboratories from medical facilities and referring physicians across the United States. Mohs surgery The dermatologic subspecialty called Mohs surgery focuses on the excision of skin cancers using a technique that allows intraoperative assessment of most of the peripheral and deep tumor margins. Developed in the 1930s by Frederic E. Mohs, the procedure is defined as a type of CCPDMA processing. Physicians trained in this technique must be comfortable with both pathology and surgery, and dermatologists receive extensive training in both during their residency. Physicians who perform Mohs surgery can receive training in this specialized technique during their dermatology residency, but many seek additional training either through formal preceptorships to become fellows of the American Society for Mohs Surgery or through one-year Mohs surgery fellowship training programs administered by the American College of Mohs Surgery. In 2020, the American Board of Dermatology (ABD) received approval from the American Board of Medical Specialties (ABMS) to establish a board-certification exam in the subspecialty of Micrographic Dermatologic Surgery (Mohs Surgery). The exam was first offered in October 2021 to any US board-certified dermatologist who practices Mohs surgery, regardless of whether they received their training in dermatology residency or as part of a fellowship. This technique requires the same doctor to act in two different capacities: surgeon and pathologist.
If either of the two responsibilities is assigned to another doctor or qualified health-care professional, it is not considered to be Mohs surgery. Pediatric dermatology Physicians can qualify for this specialization by completing both a pediatric residency and a dermatology residency, or by completing a post-residency fellowship. This field encompasses the complex diseases of neonates, hereditary skin diseases or genodermatoses, and the many difficulties of working with the pediatric population. Another area pediatric dermatologists may focus on is treating acne. Acne is formed when follicles under the skin become clogged. This can be caused by sebum, an oil that keeps the skin moist, or by dead skin cells clogging the pores. It is very common in teens and young adults, and can be treated by prescription from a dermatologist. Teledermatology Teledermatology is a form of dermatological practice in which telecommunication technologies are used to exchange medical information and treatment through audio, visual, and data communication, including photos of dermatologic conditions, between dermatologists and nondermatologists evaluating patients, as well as directly between dermatologists and patients at a distance. In India, during severe phases of the coronavirus pandemic, some dermatologists initiated online consultations with their patients using popular apps such as Practo, Apollo Pharmacy, Skin Beauty Pal, and Lybrate. This subspecialty deals with options for viewing skin conditions over a large distance in order to exchange knowledge, establish second-opinion services by experts, and provide follow-up for individuals with chronic skin conditions. Teledermatology can reduce wait times by allowing dermatologists to treat minor conditions online while serious conditions requiring immediate care are given priority for appointments. Dermatoepidemiology Dermatoepidemiology is the study of skin disease at the population level. One of its aspects is the determination of the global burden of skin diseases. From 1990 to 2013, skin disease constituted about 2% of total global disease disability as measured in disability-adjusted life-years. Comparative dermatology Comparative dermatology is a branch of dermatology that examines skin disorders across species, focusing on similarities and differences between humans and animals, such as dogs. This interdisciplinary approach is crucial for enhancing our understanding of dermatological conditions and developing more effective treatment and prevention strategies. Skin disorders are common in dogs, significantly affecting their quality of life and often requiring veterinary intervention. While some breeds are genetically predisposed to specific skin issues, there remains a notable gap in research comparing these canine conditions to similar human skin disorders. Addressing this gap can yield insights into the shared mechanisms underlying these diseases. For instance, atopic dermatitis is a common, itchy, and often difficult-to-treat condition. The Merck Veterinary Manual highlights various congenital and inherited skin disorders in dogs that are influenced by these factors, emphasizing the need for comparative research to improve disease management across species. By comparing the disease in animals and humans, researchers can gain insights into its progression and variability in response to treatments.
Furthermore, research into the genetic underpinnings of skin disorders has demonstrated that certain genetic mutations in dogs are associated with inherited skin diseases, which may serve as models for understanding similar human conditions. Environmental factors, such as allergens and pollutants, also play a significant role in skin health. Studies published in journals focusing on inflammatory skin conditions in humans and in veterinary research reveal how these environmental influences intersect with genetic predispositions, offering a comparative framework for further study. Treatment strategies for skin disorders also differ between veterinary and human medicine. Veterinary treatments often prioritize symptomatic relief and prevention, while human dermatological care may involve a broader range of targeted pharmaceutical options. Comparative analysis of these treatment methodologies could lead to the development of new therapies beneficial to both fields, as discussed in microbiological research into skin health. By emphasizing the comparative aspects of dermatology, researchers can contribute to a deeper understanding of skin health across species. This field underscores the importance of genetic research, environmental studies, and treatment innovations, as evidenced by ongoing research in dermatological and veterinary science. Therapies Therapies provided by dermatologists include: Excision and treatment of skin cancer Cryosurgery for the treatment of warts, skin cancers, and other dermatoses Cosmetic filler injections Intralesional treatment with steroid drugs or chemotherapy Laser therapy for the management of birthmarks, skin disorders (like vitiligo), tattoo removal, and cosmetic resurfacing and rejuvenation Chemical peels for the treatment of acne, melasma, and sun damage Photodynamic therapy for the treatment of skin cancer and precancerous growths Phototherapy, including the use of narrowband UVB, broadband UVB, and psoralen with UVA Tumescent liposuction: liposuction was invented by a gynecologist; a dermatologist (Jeffrey A. Klein) adapted the procedure to use local infusion of dilute anesthetic, a method called tumescent liposuction, which is now widely practiced by dermatologists, plastic surgeons, and gynecologists Radiation therapy: although rarely practiced by dermatologists, some continue to provide it in their offices Vitiligo surgery, including procedures such as autologous melanocyte transplant, suction blister grafting, and punch grafting Allergy testing, using "patch" testing for contact dermatitis Systemic therapies, including antibiotics, immunomodulators, and novel injectable products Topical therapies, drawing on the numerous products and compounds that can be applied to the skin Most dermatologic pharmacology can be categorized based on the Anatomical Therapeutic Chemical (ATC) classification system, specifically the ATC code D.
Biology and health sciences
Fields of medicine
null
74555
https://en.wikipedia.org/wiki/Acne
Acne
Acne (/ˈækni/ ACK-nee), also known as acne vulgaris, is a long-term skin condition that occurs when dead skin cells and oil from the skin clog hair follicles. Typical features of the condition include blackheads or whiteheads, pimples, oily skin, and possible scarring. It primarily affects skin with a relatively high number of oil glands, including the face, upper part of the chest, and back. The resulting appearance can lead to lack of confidence, anxiety, reduced self-esteem, and, in extreme cases, depression or thoughts of suicide. Susceptibility to acne is primarily genetic in 80% of cases. The roles of diet and cigarette smoking in the condition are unclear, and neither cleanliness nor exposure to sunlight is associated with acne. In both sexes, hormones called androgens appear to be part of the underlying mechanism by causing increased production of sebum. Another common factor is the excessive growth of the bacterium Cutibacterium acnes, which is present on the skin. Treatments for acne are available, including lifestyle changes, medications, and medical procedures. Eating fewer simple carbohydrates such as sugar may minimize the condition. Treatments applied directly to the affected skin, such as azelaic acid, benzoyl peroxide, and salicylic acid, are commonly used. Antibiotics and retinoids are available in formulations that are applied to the skin and taken by mouth for the treatment of acne. However, resistance to antibiotics may develop as a result of antibiotic therapy. Several types of birth control pills help prevent acne in women. Medical professionals typically reserve isotretinoin pills for severe acne, due to greater potential side effects. Early and aggressive treatment of acne is advocated by some in the medical community to decrease the overall long-term impact on individuals. In 2015, acne affected approximately 633 million people globally, making it the eighth-most common disease worldwide. Acne commonly occurs in adolescence and affects an estimated 80–90% of teenagers in the Western world. Some rural societies report lower rates of acne than industrialized ones. Children and adults may also be affected before and after puberty. Although acne becomes less common in adulthood, it persists in nearly half of affected people into their twenties and thirties, and a smaller group continues to have difficulties in their forties. Classification The severity of acne vulgaris (Gr. ἀκμή, "point" + L. vulgaris, "common") can be classified as mild, moderate, or severe to determine an appropriate treatment regimen. There is no universally accepted scale for grading acne severity. The presence of clogged skin follicles (known as comedones) limited to the face with occasional inflammatory lesions defines mild acne. Moderate severity acne is said to occur when a higher number of inflammatory papules and pustules occur on the face, compared to mild cases of acne, and appear on the trunk of the body. Severe acne is said to occur when nodules (the painful 'bumps' lying under the skin) are the characteristic facial lesions, and involvement of the trunk is extensive. The lesions are usually polymorphic, meaning they can take many forms, including open or closed comedones (commonly known as blackheads and whiteheads), papules, pustules, and even nodules or cysts. These lesions often leave behind sequelae, or abnormal conditions resulting from a previous disease, such as scarring or hyperpigmentation. Large nodules were previously called cysts.
The term nodulocystic has been used in the medical literature to describe severe cases of inflammatory acne. True cysts are rare in those with acne, and the term severe nodular acne is now the preferred terminology. Acne inversa (L. invertō, "upside-down") and acne rosacea (rosa, "rose-colored" + -āceus, "forming") are not forms of acne and are alternate names that respectively refer to the skin conditions hidradenitis suppurativa (HS) and rosacea. Although HS shares certain overlapping features with acne vulgaris, such as a tendency to clog skin follicles with skin cell debris, the condition otherwise lacks the hallmark features of acne and is therefore considered a distinct skin disorder. Signs and symptoms Typical features of acne include increased secretion of oily sebum by the skin, microcomedones, comedones, papules, nodules (large papules), pustules, and, often, scarring. The appearance of acne varies with skin color. It may result in psychological and social problems. Scars Acne scars are caused by inflammation within the dermis and are estimated to affect 95% of people with acne vulgaris. Abnormal healing and dermal inflammation create the scar. Scarring is most likely to take place with severe acne but may occur with any form of acne vulgaris. Acne scars are classified based on whether the abnormal healing response following dermal inflammation leads to excess collagen deposition or loss at the site of the acne lesion. Atrophic acne scars have lost collagen from the healing response and are the most common type of acne scar (accounting for approximately 75% of all acne scars). Ice-pick scars, boxcar scars, and rolling scars are subtypes of atrophic acne scars. Boxcar scars are round or ovoid indented scars with sharp borders and vary in size from 1.5 to 4 mm across. Ice-pick scars are narrow (less than 2 mm across), deep scars that extend into the dermis. Rolling scars are broader than ice-pick and boxcar scars (4–5 mm across) and have a wave-like pattern of depth in the skin. Hypertrophic scars are uncommon and are characterized by increased collagen content after the abnormal healing response. They are described as firm and raised from the skin. Hypertrophic scars remain within the original margins of the wound, whereas keloid scars can form scar tissue outside of these borders. Keloid scars from acne occur more often in men and people with darker skin, and usually occur on the trunk of the body. Pigmentation After an inflamed nodular acne lesion resolves, it is common for the skin to darken in that area, which is known as postinflammatory hyperpigmentation (PIH). The inflammation stimulates specialized pigment-producing skin cells (known as melanocytes) to produce more melanin pigment, which leads to the skin's darkened appearance. PIH occurs more frequently in people with darker skin color. Pigmented scar is a common term used for PIH, but is misleading as it suggests the color change is permanent. Often, PIH can be prevented by avoiding any aggravation of the nodule and can fade with time. However, untreated PIH can last for months, years, or even be permanent if deeper layers of skin are affected. Even minimal skin exposure to the sun's ultraviolet rays can sustain hyperpigmentation. Daily use of SPF 15 or higher sunscreen can minimize such a risk. Whitening agents such as azelaic acid or arbutin may be used to improve hyperpigmentation. Causes Risk factors for the development of acne, other than genetics, have not been conclusively identified.
Possible secondary contributors include hormones, infections, diet, and stress. Studies investigating the impact of smoking on the incidence and severity of acne have been inconclusive. Cleanliness (hygiene) and sunlight are not associated with acne. Genes Acne appears to be highly heritable; genetics explain 81% of the variation in the population. Studies performed in affected twins and first-degree relatives further demonstrate the strongly inherited nature of acne. Acne susceptibility is likely due to the influence of multiple genes, as the disease does not follow a classic (Mendelian) inheritance pattern. These gene candidates include certain variations in tumor necrosis factor-alpha (TNF-alpha), IL-1 alpha, and CYP1A1 genes, among others. The 308 G/A single nucleotide polymorphism variation in the gene for TNF is associated with an increased risk for acne. Acne can be a feature of rare genetic disorders such as Apert's syndrome. Severe acne may be associated with XYY syndrome. Hormones Hormonal activity, such as occurs during menstrual cycles and puberty, may contribute to the formation of acne. During puberty, an increase in sex hormones called androgens causes the skin follicle glands to grow larger and make more oily sebum. The androgen hormones testosterone, dihydrotestosterone (DHT), and dehydroepiandrosterone (DHEA) are all linked to acne. High levels of growth hormone (GH) and insulin-like growth factor 1 (IGF-1) are also associated with worsened acne. Both androgens and IGF-1 seem to be essential for acne to occur, as acne does not develop in individuals with complete androgen insensitivity syndrome (CAIS) or Laron syndrome (insensitivity to GH, resulting in very low IGF-1 levels). Medical conditions that commonly cause a high-androgen state, such as polycystic ovary syndrome, congenital adrenal hyperplasia, and androgen-secreting tumors, can cause acne in affected individuals. Conversely, people who lack androgenic hormones or are insensitive to the effects of androgens rarely have acne. Pregnancy can increase androgen levels, and consequently, oily sebum synthesis. Acne can be a side effect of testosterone replacement therapy or anabolic steroid use. Over-the-counter bodybuilding and dietary supplements often contain illegally added anabolic steroids. Infections The anaerobic bacterial species Cutibacterium acnes (formerly Propionibacterium acnes) contributes to the development of acne, but its exact role is not well understood. There are specific sub-strains of C. acnes associated with normal skin and others with moderate or severe inflammatory acne. It is unclear whether these undesirable strains evolve on-site or are acquired, or possibly both depending on the person. These strains have the capability of changing, perpetuating, or adapting to the abnormal cycle of inflammation, oil production, and inadequate sloughing of dead skin cells from acne pores. Infection with the parasitic mite Demodex is associated with the development of acne. It is unclear whether eradication of the mite improves acne. Diet High-glycemic-load diets have been found to have different degrees of effect on acne severity. Multiple randomized controlled trials and nonrandomized studies have found a lower-glycemic-load diet to be effective in reducing acne. There is weak observational evidence suggesting that dairy milk consumption is positively associated with a higher frequency and severity of acne. Milk contains whey protein and hormones such as bovine IGF-1 and precursors of dihydrotestosterone. 
Studies suggest these components promote the effects of insulin and IGF-1 and thereby increase the production of androgen hormones and sebum, and promote the formation of comedones. Available evidence does not support a link between eating chocolate or salt and acne severity. Few studies have examined the relationship between obesity and acne. Vitamin B12 may trigger skin outbreaks similar to acne (acneiform eruptions), or worsen existing acne when taken in doses exceeding the recommended daily intake. Stress There are few high-quality studies to demonstrate that stress causes or worsens acne. Although the link remains controversial, some research indicates that increased acne severity is associated with high stress levels in certain contexts, such as hormonal changes seen in premenstrual syndrome. Other Some individuals experience severe intensification of their acne when they are exposed to hot humid climates; this is due to bacteria and fungi thriving in warm, moist environments. This climate-induced acne exacerbation has been termed tropical acne. Mechanical obstruction of skin follicles with helmets or chinstraps can worsen pre-existing acne. However, acne caused by mechanical obstruction is technically not acne vulgaris, but another acneiform eruption known as acne mechanica. Several medications can also worsen pre-existing acne; this condition is the acne medicamentosa form of acne. Examples of such medications include lithium, hydantoin, isoniazid, glucocorticoids, iodides, bromides, and testosterone. When acne medicamentosa is specifically caused by anabolic–androgenic steroids, it can simply be referred to as steroid acne. Genetically susceptible individuals can get acne breakouts as a result of polymorphous light eruption, a condition triggered by sunlight and artificial UV light exposure. This form of acne is called acne aestivalis and is specifically caused by intense UVA light exposure. Affected individuals usually experience seasonal acne breakouts on their upper arms, shoulder girdle, back, and chest. The breakouts typically occur one to three days after exposure to intense UVA radiation. Unlike other forms of acne, the condition spares the face; this could possibly be a result of the pathogenesis of polymorphous light eruption, in which areas of the skin that are newly exposed to intense ultraviolet radiation are affected. Since faces are typically left uncovered at all stages of life, there is little to no likelihood of an eruption appearing there. Studies show that both polymorphous light eruption outbreaks and the acne aestivalis breakout response can be prevented by topical antioxidants combined with the application of a broad-spectrum sunscreen. Pathophysiology Acne vulgaris is a chronic skin disease of the pilosebaceous unit and develops due to blockages in the skin's hair follicles. Traditionally seen as a disease of adolescence, acne vulgaris is also observed in adults, including post-menopausal women. Acne vulgaris manifesting in adult females is called adult female acne (AFA), defined as a chronic inflammatory disease of the pilosebaceous unit.
In AFA particularly, a relative increase in androgen levels occurs during the menopausal transition as estrogen levels begin to decline, and this hormonal shift can manifest as acne. While most women with AFA exhibit few acne lesions and have normal androgen levels, baseline investigations, including an androgen testing panel, can help rule out associated comorbidities such as polycystic ovarian syndrome, congenital adrenal hyperplasia, or tumors. The blockages in the skin's hair follicles that cause acne vulgaris manifestations occur as a result of the following four abnormal processes: increased oily sebum production (influenced by androgens), excessive deposition of the protein keratin leading to comedo formation, colonization of the follicle by Cutibacterium acnes (C. acnes) bacteria, and the local release of pro-inflammatory chemicals in the skin. The earliest pathologic change is the formation of a plug (a microcomedone), which is driven primarily by excessive growth, reproduction, and accumulation of skin cells in the hair follicle. In healthy skin, the skin cells that have died come up to the surface and exit the pore of the hair follicle. In people with acne, the increased production of oily sebum causes the dead skin cells to stick together. The accumulation of dead skin cell debris and oily sebum blocks the pore of the hair follicle, thus forming the microcomedone. The C. acnes biofilm within the hair follicle worsens this process. If the microcomedone is superficial within the hair follicle, the skin pigment melanin is exposed to air, resulting in its oxidation and dark appearance (known as a blackhead or open comedo). In contrast, if the microcomedone occurs deep within the hair follicle, this causes the formation of a whitehead (known as a closed comedo). The main hormonal driver of oily sebum production in the skin is dihydrotestosterone. Another androgenic hormone responsible for increased sebaceous gland activity is DHEA-S. The adrenal glands secrete higher amounts of DHEA-S during adrenarche (a stage of puberty), and this leads to an increase in sebum production. In a sebum-rich skin environment, the naturally occurring and largely commensal skin bacterium C. acnes readily grows and can cause inflammation within and around the follicle due to activation of the innate immune system. C. acnes triggers skin inflammation in acne by increasing the production of several pro-inflammatory chemical signals (such as IL-1α, IL-8, TNF-α, and LTB4); IL-1α is essential to comedo formation. The ability of C. acnes to bind and activate a class of immune system receptors known as toll-like receptors (TLRs), especially TLR2 and TLR4, is a core mechanism of acne-related skin inflammation. Activation of TLR2 and TLR4 by C. acnes leads to increased secretion of IL-1α, IL-8, and TNF-α. The release of these inflammatory signals attracts various immune cells to the hair follicle, including neutrophils, macrophages, and Th1 cells. IL-1α stimulates increased skin cell activity and reproduction, which, in turn, fuels comedo development. Furthermore, sebaceous gland cells produce more antimicrobial peptides, such as HBD1 and HBD2, in response to the binding of TLR2 and TLR4. C. acnes also provokes skin inflammation by altering the fatty composition of oily sebum. Oxidation of the lipid squalene by C. acnes is of particular importance. Squalene oxidation activates NF-κB (a protein complex) and consequently increases IL-1α levels.
Additionally, squalene oxidation increases 5-lipoxygenase enzyme activity, which catalyzes the conversion of arachidonic acid to leukotriene B4 (LTB4). LTB4 promotes skin inflammation by acting on the peroxisome proliferator-activated receptor alpha (PPARα) protein. PPARα increases the activity of activator protein 1 (AP-1) and NF-κB, thereby leading to the recruitment of inflammatory T cells. The ability of C. acnes to convert sebum triglycerides to pro-inflammatory free fatty acids via secretion of the enzyme lipase further explains its inflammatory properties. These free fatty acids spur increased production of cathelicidin, HBD1, and HBD2, thus leading to further inflammation. This inflammatory cascade typically leads to the formation of inflammatory acne lesions, including papules, infected pustules, or nodules. If the inflammatory reaction is severe, the follicle can break into the deeper layers of the dermis and subcutaneous tissue and cause the formation of deep nodules. The involvement of AP-1 in the aforementioned inflammatory cascade activates matrix metalloproteinases, which contribute to local tissue destruction and scar formation. Along with the bacterium C. acnes, the bacterial species Staphylococcus epidermidis (S. epidermidis) also plays a part in the pathophysiology of acne vulgaris. The proliferation of S. epidermidis alongside C. acnes causes the formation of biofilms, which block the hair follicles and pores, creating an anaerobic environment under the skin. This enables increased growth of both C. acnes and S. epidermidis under the skin. The proliferation of C. acnes leads to the formation of biofilms and a biofilm matrix, making the acne even harder to treat. Diagnosis Acne vulgaris is diagnosed based on a medical professional's clinical judgment. The evaluation of a person with suspected acne should include taking a detailed medical history, including any family history of acne, a review of medications taken, and any signs or symptoms of excessive production of androgen hormones, cortisol, or growth hormone. Comedones (blackheads and whiteheads) must be present to diagnose acne. In their absence, an appearance similar to that of acne would suggest a different skin disorder. Microcomedones (the precursor to blackheads and whiteheads) are not visible to the naked eye when inspecting the skin and require a microscope to be seen. Many features may indicate that a person's acne vulgaris is sensitive to hormonal influences. Historical and physical clues that may suggest hormone-sensitive acne include onset between ages 20 and 30; worsening the week before a woman's period; acne lesions predominantly over the jawline and chin; and inflammatory/nodular acne lesions. Several scales exist to grade the severity of acne vulgaris, but disagreement persists about the ideal one for diagnostic use. Cook's acne grading scale uses photographs to grade severity from 0 to 8, with higher numbers representing more severe acne. This scale was the first to use a standardized photographic protocol to assess acne severity; since its creation in 1979, the scale has undergone several revisions. The Leeds acne grading technique counts acne lesions on the face, back, and chest and categorizes them as inflammatory or non-inflammatory. Leeds scores range from 0 (least severe) to 10 (most severe), though modified scales have a maximum score of 12. The Pillsbury acne grading scale classifies the severity of the acne from grade 1 (least severe) to grade 4 (most severe).
Differential diagnosis Many skin conditions can mimic acne vulgaris, and these are collectively known as acneiform eruptions. Such conditions include angiofibromas, epidermal cysts, flat warts, folliculitis, keratosis pilaris, milia, perioral dermatitis, and rosacea, among others. Age is one factor that may help distinguish between these disorders. Skin disorders such as perioral dermatitis and keratosis pilaris can appear similar to acne but tend to occur more frequently in childhood. Rosacea tends to occur more frequently in older adults. Facial redness triggered by heat or the consumption of alcohol or spicy food is also more suggestive of rosacea. The presence of comedones helps health professionals differentiate acne from skin disorders that are similar in appearance. Chloracne and occupational acne, caused by exposure to certain chemicals and industrial compounds, may look very similar to acne vulgaris. Management Many different treatments exist for acne. These include alpha hydroxy acid, anti-androgen medications, antibiotics, antiseborrheic medications, azelaic acid, benzoyl peroxide, hormonal treatments, keratolytic soaps, nicotinamide (niacinamide), retinoids, and salicylic acid. Acne treatments work in at least four different ways, including the following: reducing inflammation, hormonal manipulation, killing C. acnes, and normalizing skin cell shedding and sebum production in the pore to prevent blockage. Typical treatments include topical therapies such as antibiotics, benzoyl peroxide, and retinoids, and systemic therapies, including antibiotics, hormonal agents, and oral retinoids. Recommended therapies for first-line use in acne vulgaris treatment include topical retinoids, benzoyl peroxide, and topical or oral antibiotics. Procedures such as light therapy and laser therapy are not first-line treatments and typically have only an add-on role due to their high cost and limited evidence. Blue light therapy is of unclear benefit. Medications for acne target the early stages of comedo formation and are generally ineffective for visible skin lesions; acne generally improves between eight and twelve weeks after starting therapy. People often view acne as a short-term condition, some expecting it to disappear after puberty. This misconception can lead to reliance on self-management or to problems with long-term adherence to treatment. Communicating the long-term nature of the condition and better access to reliable information about acne can help people know what to expect from treatments. Skin care In general, it is recommended that people with acne do not wash affected skin more than twice daily. The application of a fragrance-free moisturizer to sensitive and acne-prone skin may reduce irritation. Skin irritation from acne medications typically peaks at two weeks after onset of use and tends to improve with continued use. Dermatologists recommend using cosmetic products that are specifically labeled non-comedogenic or oil-free, indicating that they will not clog pores. Acne vulgaris patients, even those with oily skin, should moisturize in order to support the skin's moisture barrier, since skin barrier dysfunction may contribute to acne. As an adjunct therapy, moisturizers, especially ceramide-containing moisturizers, are particularly helpful for the dry skin and irritation that commonly result from topical acne treatment. Studies show that ceramide-containing moisturizers are important for optimal skin care; they enhance acne therapy adherence and complement existing acne therapies.
In a study where acne patients used 1.2% clindamycin phosphate / 2.5% benzoyl peroxide gel in the morning and applied a micronized 0.05% tretinoin gel in the evening the overwhelming majority of patients experienced no cutaneous adverse events throughout the study. It was concluded that using ceramide cleanser and ceramide moisturizing cream caused the favorable tolerability, did not interfere with the treatment efficacy, and improved adherence to the regimen. The importance of preserving the acidic mantle and its barrier functions is widely accepted in the scientific community. Thus, maintaining a pH in the range 4.5 – 5.5 is essential in order to keep the skin surface in its optimal, healthy conditions. Diet Causal relationship is rarely observed with diet/nutrition and dermatologic conditions. Rather, associations – some of them compelling – have been found between diet and outcomes including disease severity and the number of conditions experienced by a patient. Evidence is emerging in support of medical nutrition therapy as a way of reducing the severity and incidence of dermatologic diseases, including acne. Researchers observed a link between high glycemic index diets and acne. Dermatologists also recommend a diet low in simple sugars as a method of improving acne. As of 2014, the available evidence is insufficient to use milk restriction for this purpose. Medications Benzoyl peroxide Benzoyl peroxide (BPO) is a first-line treatment for mild and moderate acne due to its effectiveness and mild side-effects (mainly skin irritation). In the skin follicle, benzoyl peroxide kills C. acnes by oxidizing its proteins through the formation of oxygen free radicals and benzoic acid. These free radicals likely interfere with the bacterium's metabolism and ability to make proteins. Additionally, benzoyl peroxide is mildly effective at breaking down comedones and inhibiting inflammation. Combination products use benzoyl peroxide with a topical antibiotic or retinoid, such as benzoyl peroxide/clindamycin and benzoyl peroxide/adapalene, respectively. Topical benzoyl peroxide is effective at treating acne. Side effects include increased skin photosensitivity, dryness, redness, and occasional peeling. Sunscreen use is often advised during treatment, to prevent sunburn. Lower concentrations of benzoyl peroxide are just as effective as higher concentrations in treating acne but are associated with fewer side effects. Unlike antibiotics, benzoyl peroxide does not appear to generate bacterial antibiotic resistance. Retinoids Retinoids are medications that reduce inflammation, normalize the follicle cell life cycle, and reduce sebum production. They are structurally related to vitamin A. Studies show dermatologists and primary care doctors underprescribe them for acne. The retinoids appear to influence the cell life cycle in the follicle lining. This helps prevent the accumulation of skin cells within the hair follicle that can create a blockage. They are a first-line acne treatment, especially for people with dark-colored skin. Retinoids are known to lead to faster improvement of postinflammatory hyperpigmentation. Topical retinoids include adapalene, retinol, retinaldehyde, isotretinoin, tazarotene, trifarotene, and tretinoin. They often cause an initial flare-up of acne and facial flushing and can cause significant skin irritation. Generally speaking, retinoids increase the skin's sensitivity to sunlight and are therefore recommended for use at night. 
Tretinoin is the least expensive of the topical retinoids and is the most irritating to the skin, whereas adapalene is the least irritating but costs significantly more. Most formulations of tretinoin are incompatible with benzoyl peroxide. Tazarotene is the most effective and expensive topical retinoid but is usually not as well tolerated. In 2019, a tazarotene lotion formulation, marketed as a less irritating option, was approved by the FDA. Retinol is a form of vitamin A that has similar but milder effects and is present in many over-the-counter moisturizers and other topical products. Isotretinoin is an oral retinoid that is very effective for severe nodular acne and for moderate acne that has not responded to other treatments. One to two months of use is typically adequate to see improvement. Acne often resolves completely or is much milder after a 4–6 month course of oral isotretinoin. After a single round of treatment, about 80% of people report an improvement, with more than 50% reporting complete remission. About 20% of people require a second course, but 80% of those report improvement, resulting in a cumulative efficacy rate of about 96% (80% responding to the first course plus 80% of the remaining 20%: 0.80 + 0.20 × 0.80 ≈ 0.96). There are concerns that isotretinoin is linked to adverse effects, like depression, suicidality, and anemia. There is no clear evidence to support some of these claims. Isotretinoin has been found in some studies to be superior to antibiotics or placebo in reducing acne lesions. However, a 2018 review comparing inflammatory lesions after treatment with antibiotics or isotretinoin found no difference. The frequency of adverse events was about twice as high with isotretinoin use, although these were mostly dryness-related events. No increased risk of suicide or depression was conclusively found. Medical authorities strictly regulate isotretinoin use in women of childbearing age due to its known harmful effects in pregnancy. For such a woman to be considered a candidate for isotretinoin, she must have a confirmed negative pregnancy test and use an effective form of birth control. In 2008, the United States started the iPLEDGE program to prevent isotretinoin use during pregnancy. iPLEDGE requires the woman to have two negative pregnancy tests and to use two types of birth control for at least one month before isotretinoin therapy begins and one month afterward. The effectiveness of the iPLEDGE program is controversial due to continued instances of contraception nonadherence. Antibiotics People may apply antibiotics to the skin or take them orally to treat acne. They work by killing C. acnes and reducing inflammation. Although multiple guidelines call for healthcare providers to reduce the rates of prescribed oral antibiotics, many providers do not follow this guidance. Oral antibiotics remain the most commonly prescribed systemic therapy for acne. Widespread broad-spectrum antibiotic overuse for acne has led to higher rates of antibiotic-resistant C. acnes strains worldwide, especially resistance to the commonly used tetracycline antibiotics (e.g., doxycycline) and macrolide antibiotics (e.g., topical erythromycin). Therefore, dermatologists prefer antibiotics as part of combination therapy and not for use alone. Commonly used antibiotics, either applied to the skin or taken orally, include clindamycin, erythromycin, metronidazole, sulfacetamide, and tetracyclines (e.g., doxycycline or minocycline). Doxycycline 40 milligrams daily (low-dose) appears to have similar efficacy to 100 milligrams daily and has fewer gastrointestinal side effects.
However, low-dose doxycycline is not FDA-approved for the treatment of acne. Antibiotics applied to the skin are typically used for mild to moderately severe acne. Oral antibiotics are generally more effective than topical antibiotics and produce faster resolution of inflammatory acne lesions than topical applications. The Global Alliance to Improve Outcomes in Acne recommends that topical and oral antibiotics not be used together. Oral antibiotics are recommended for no longer than three months, as antibiotic courses exceeding this duration are associated with the development of antibiotic resistance and show no clear benefit over shorter durations. If long-term oral antibiotics beyond three months are used, then it is recommended that benzoyl peroxide or a retinoid be used at the same time to limit the risk of C. acnes developing antibiotic resistance. The antibiotic dapsone is effective against inflammatory acne when applied to the skin. It is generally not a first-line choice due to its higher cost and a lack of clear superiority over other antibiotics. Topical dapsone is sometimes a preferred therapy in women or for people with sensitive or darker-toned skin. It is not recommended for use with benzoyl peroxide due to the risk of causing yellow-orange skin discoloration with this combination. Minocycline is an effective acne treatment, but it is not a first-line antibiotic due to a lack of evidence that it is better than other treatments, and concerns about its safety compared to other tetracyclines. Sarecycline is the most recent oral antibiotic developed specifically for the treatment of acne, and is FDA-approved for the treatment of moderate to severe inflammatory acne in patients nine years of age and older. It is a narrow-spectrum tetracycline antibiotic that exhibits the necessary antibacterial activity against pathogens related to acne vulgaris and a low propensity for inducing antibiotic resistance. In clinical trials, sarecycline demonstrated clinical efficacy in reducing inflammatory acne lesions as early as three weeks and reduced truncal (back and chest) acne. Hormonal agents In women, the use of combined birth control pills can improve acne. These medications contain an estrogen and a progestin. They work by decreasing the production of androgen hormones by the ovaries and by decreasing the free and hence biologically active fractions of androgens, resulting in lowered skin production of sebum and, consequently, reduced acne severity. First-generation progestins such as norethindrone and norgestrel have androgenic properties and may worsen acne. Although oral estrogens decrease IGF-1 levels in some situations, which could theoretically improve acne symptoms, combined birth control pills do not appear to affect IGF-1 levels in fertile women. Cyproterone acetate-containing birth control pills seem to decrease total and free IGF-1 levels. Combinations containing third- or fourth-generation progestins, including desogestrel, dienogest, drospirenone, or norgestimate, as well as birth control pills containing cyproterone acetate or chlormadinone acetate, are preferred for women with acne due to their stronger antiandrogenic effects. Studies have shown a 40 to 70% reduction in acne lesions with combined birth control pills. A 2014 review found that oral antibiotics appear to be somewhat more effective than birth control pills at reducing the number of inflammatory acne lesions at three months.
However, the two therapies are approximately equal in efficacy at six months for decreasing the number of inflammatory, non-inflammatory, and total acne lesions. The authors of the analysis suggested that birth control pills may be a preferred first-line acne treatment, over oral antibiotics, in certain women due to similar efficacy at six months and a lack of associated antibiotic resistance. In contrast to combined birth control pills, progestogen-only birth control forms that contain androgenic progestins have been associated with worsened acne. Antiandrogens such as cyproterone acetate and spironolactone can successfully treat acne, especially in women with signs of excessive androgen production, such as increased hairiness or skin production of sebum, or scalp hair loss. Spironolactone is an effective treatment for acne in adult women. Unlike combined birth control pills, it is not approved by the United States Food and Drug Administration for this purpose. Spironolactone is an aldosterone antagonist and is a useful acne treatment due to its ability to additionally block the androgen receptor at higher doses. Alone or in combination with a birth control pill, spironolactone has shown a 33 to 85% reduction in acne lesions in women. The effectiveness of spironolactone for acne appears to be dose-dependent. High-dose cyproterone acetate alone reportedly decreases acne symptoms in women by 75 to 90% within three months. It is usually combined with an estrogen to avoid menstrual irregularities and estrogen deficiency. The medication appears to be effective in the treatment of acne in males, with one study finding that a high dosage reduced inflammatory acne lesions by 73%. However, spironolactone and cyproterone acetate's side effects in males, such as gynecomastia, sexual dysfunction, and decreased bone mineral density, generally make their use for male acne impractical. Pregnant and lactating women should not receive antiandrogens for their acne due to a possibility of birth disorders such as hypospadias and feminization of male babies. Women who are sexually active and who can or may become pregnant should use an effective method of contraception to prevent pregnancy while taking an antiandrogen. Antiandrogens are often combined with birth control pills for this reason, which can result in additive efficacy. The FDA added a black-box warning to spironolactone about possible tumor risks based on preclinical research with very high doses (>100-fold clinical doses) and cautioned that unnecessary use of the medication should be avoided. However, several large epidemiological studies subsequently found no greater risk of tumors in association with spironolactone in humans. Conversely, strong associations of cyproterone acetate with certain brain tumors have been discovered and its use has been restricted. The brain tumor risk with cyproterone acetate is due to its strong progestogenic actions and is not related to antiandrogenic activity nor shared by other antiandrogens. Flutamide, a pure antagonist of the androgen receptor, is effective in treating acne in women. It appears to reduce acne symptoms by 80 to 90% even at low doses, with several studies showing complete acne clearance. In one study, flutamide decreased acne scores by 80% within three months, whereas spironolactone decreased symptoms by only 40% in the same period. In a large long-term study, 97% of women reported satisfaction with the control of their acne with flutamide. 
Although effective, flutamide has a risk of serious liver toxicity, and cases of death in women taking even low doses of the medication to treat androgen-dependent skin and hair conditions have occurred. As such, the use of flutamide for acne has become increasingly limited, and it has been argued that continued use of flutamide for such purposes is unethical. Bicalutamide, a pure androgen receptor antagonist with the same mechanism as flutamide and with comparable or superior antiandrogenic efficacy but with a far lower risk of liver toxicity, is an alternative option to flutamide in the treatment of androgen-dependent skin and hair conditions in women. Clascoterone is a topical antiandrogen that has demonstrated effectiveness in the treatment of acne in both males and females and was approved for clinical use for this indication in August 2020. It has shown no systemic absorption or associated antiandrogenic side effects. In a small direct head-to-head comparison, clascoterone showed greater effectiveness than topical isotretinoin. 5α-Reductase inhibitors such as finasteride and dutasteride may be useful for the treatment of acne in both males and females but have not been adequately evaluated for this purpose. Moreover, 5α-reductase inhibitors have a strong potential for producing birth defects in male babies and this limits their use in women. However, 5α-reductase inhibitors are frequently used to treat excessive facial/body hair in women and can be combined with birth control pills to prevent pregnancy. There is no evidence as of 2010 to support the use of cimetidine or ketoconazole in the treatment of acne. Hormonal treatments for acne such as combined birth control pills and antiandrogens may be considered first-line therapy for acne under many circumstances, including desired contraception, known or suspected hyperandrogenism, acne during adulthood, acne that flares premenstrually, and when symptoms of significant sebum production (seborrhea) are co-present. Hormone therapy is effective for acne both in women with hyperandrogenism and in women with normal androgen levels. Azelaic acid Azelaic acid is effective for mild to moderate acne when applied topically at a 15–20% concentration. Treatment twice daily for six months is necessary, and is as effective as topical benzoyl peroxide 5%, isotretinoin 0.05%, and erythromycin 2%. Azelaic acid is an effective acne treatment due to its ability to reduce skin cell accumulation in the follicle and its antibacterial and anti-inflammatory properties. It has a slight skin-lightening effect due to its ability to inhibit melanin synthesis. Therefore, it is useful in treating individuals with acne who are also affected by post-inflammatory hyperpigmentation. Azelaic acid may cause skin irritation. It is less effective and more expensive than retinoids. Azelaic acid also led to worse treatment response when compared to benzoyl peroxide. When compared to tretinoin, azelaic acid makes little or no treatment response. Salicylic acid Salicylic acid is a topically applied beta-hydroxy acid that stops bacteria from reproducing and has keratolytic properties. It is less effective than retinoid therapy. Salicylic acid opens obstructed skin pores and promotes the shedding of epithelial skin cells. Dry skin is the most commonly seen side effect with topical application, though darkening of the skin can occur in individuals with darker skin types. 
Other medications Topical and oral preparations of nicotinamide (the amide form of vitamin B3) are alternative medical treatments. Nicotinamide reportedly improves acne due to its anti-inflammatory properties (influencing neutrophil chemotaxis, inhibiting the release of histamine, suppressing the lymphocyte transformation test, and reducing nitric oxide synthase production induced by cytokines), its ability to suppress sebum production, and its wound healing properties. Topical and oral preparations of zinc are suggested treatments for acne; evidence to support their use for this purpose is limited. Zinc's capacities to reduce inflammation and sebum production as well as inhibit C. acnes growth are its proposed mechanisms for improving acne. Antihistamines may improve symptoms among those already taking isotretinoin due to their anti-inflammatory properties and their ability to suppress sebum production. Hydroquinone lightens the skin when applied topically by inhibiting tyrosinase, the enzyme responsible for converting the amino acid tyrosine to the skin pigment melanin, and is used to treat acne-associated post-inflammatory hyperpigmentation. By interfering with the production of melanin in the epidermis, hydroquinone leads to less hyperpigmentation as darkened skin cells are naturally shed over time. Improvement in skin hyperpigmentation is typically seen within six months when used twice daily. Hydroquinone is ineffective for hyperpigmentation affecting deeper layers of skin such as the dermis. The use of a sunscreen with SPF 15 or higher in the morning with reapplication every two hours is recommended when using hydroquinone. Its application only to affected areas lowers the risk of lightening the color of normal skin but can lead to a temporary ring of lightened skin around the hyperpigmented area. Hydroquinone is generally well-tolerated; side effects are typically mild (e.g., skin irritation) and occur with the use of a higher than the recommended 4% concentration. Most preparations contain the preservative sodium metabisulfite, which has been linked to rare cases of allergic reactions, including anaphylaxis and severe asthma exacerbations in susceptible people. In extremely rare cases, the frequent and improper application of high-dose hydroquinone has been associated with a systemic condition known as exogenous ochronosis (skin discoloration and connective tissue damage from the accumulation of homogentisic acid). Combination therapy Combination therapy—using medications of different classes together, each with a different mechanism of action—has been demonstrated to be a more effective approach to acne treatment than monotherapy. The use of topical benzoyl peroxide and antibiotics together is more effective than antibiotics alone. Similarly, using a topical retinoid with an antibiotic clears acne lesions faster than the use of antibiotics alone. Frequently used combinations include the following: antibiotic and benzoyl peroxide, antibiotic and topical retinoid, or topical retinoid and benzoyl peroxide. Dermatologists generally prefer combining benzoyl peroxide with a retinoid over the combination of a topical antibiotic with a retinoid. Both regimens are effective, but benzoyl peroxide does not lead to antibiotic resistance. Pregnancy Although sebaceous gland activity in the skin increases during the late stages of pregnancy, pregnancy has not been reliably associated with worsened acne severity. 
In general, topically applied medications are considered the first-line approach to acne treatment during pregnancy, as they have little systemic absorption and are therefore unlikely to harm a developing fetus. Highly recommended therapies include topically applied benzoyl peroxide (pregnancy category C) and azelaic acid (category B). Salicylic acid carries a category C safety rating due to higher systemic absorption (9–25%), and an association between the use of anti-inflammatory medications in the third trimester and adverse effects to the developing fetus including too little amniotic fluid in the uterus and early closure of the babies' ductus arteriosus blood vessel. Prolonged use of salicylic acid over significant areas of the skin or under occlusive (sealed) dressings is not recommended as these methods increase systemic absorption and the potential for fetal harm. Tretinoin (category C) and adapalene (category C) are very poorly absorbed, but certain studies have suggested teratogenic effects in the first trimester. The data examining the association between maternal topical retinoid exposure in the first trimester of pregnancy and adverse pregnancy outcomes is limited. A systematic review of observational studies concluded that such exposure does not appear to increase the risk of major birth defects, miscarriages, stillbirths, premature births, or low birth weight. Similarly, in studies examining the effects of topical retinoids during pregnancy, fetal harm has not been seen in the second and third trimesters. Nevertheless, since rare harms from topical retinoids are not ruled out, they are not recommended for use during pregnancy due to persistent safety concerns. Retinoids contraindicated for use during pregnancy include the topical retinoid tazarotene, and oral retinoids isotretinoin and acitretin (all category X). Spironolactone is relatively contraindicated for use during pregnancy due to its antiandrogen effects. Finasteride is not recommended as it is highly teratogenic. Topical antibiotics deemed safe during pregnancy include clindamycin, erythromycin, and metronidazole (all category B), due to negligible systemic absorption. Nadifloxacin and dapsone (category C) are other topical antibiotics that may be used to treat acne in pregnant women but have received less study. No adverse fetal events have been reported from the topical use of dapsone. If retinoids are used there is a high risk of abnormalities occurring in the developing fetus; women of childbearing age are therefore required to use effective birth control if retinoids are used to treat acne. Oral antibiotics deemed safe for pregnancy (all category B) include azithromycin, cephalosporins, and penicillins. Tetracyclines (category D) are contraindicated during pregnancy as they are known to deposit in developing fetal teeth, resulting in yellow discoloration and thinned tooth enamel. Their use during pregnancy has been associated with the development of acute fatty liver of pregnancy and is further avoided for this reason. Procedures Limited evidence supports comedo extraction, but it is an option for comedones that do not improve with standard treatment. Another procedure for immediate relief is the injection of a corticosteroid into an inflamed acne comedo. Electrocautery and electrofulguration are effective alternative treatments for comedones. Light therapy is a treatment method that involves delivering certain specific wavelengths of light to an area of skin affected by acne. 
Both regular and laser light have been used. The evidence for light therapy as a treatment for acne is weak and inconclusive. Various light therapies appear to provide a short-term benefit, but data for long-term outcomes, and outcomes in those with severe acne, are sparse; it may have a role for individuals whose acne has been resistant to topical medications. A 2016 meta-analysis was unable to conclude whether light therapies were more beneficial than placebo or no treatment, nor the duration of benefit. When regular light is used immediately following the application of a sensitizing substance to the skin such as aminolevulinic acid or methyl aminolevulinate, the treatment is referred to as photodynamic therapy (PDT). PDT has the most supporting evidence of all light therapy modalities. PDT treats acne by using various forms of light (e.g., blue light or red light) that preferentially target the pilosebaceous unit. Once the light activates the sensitizing substance, this generates free radicals and reactive oxygen species in the skin, which purposefully damage the sebaceous glands and kill C. acnes bacteria. Many different types of nonablative lasers (i.e., lasers that do not vaporize the top layer of the skin but rather induce a physiologic response in the skin from the light) have been used to treat acne, including those that use infrared wavelengths of light. Ablative lasers (such as CO2 and fractional types) have also been used to treat active acne and its scars. When ablative lasers are used, the treatment is often referred to as laser resurfacing because, as mentioned previously, the entire upper layers of the skin are vaporized. Ablative lasers are associated with higher rates of adverse effects compared with non-ablative lasers, with examples being post-inflammatory hyperpigmentation, persistent facial redness, and persistent pain. Physiologically, certain wavelengths of light, used with or without accompanying topical chemicals, are thought to kill bacteria and decrease the size and activity of the glands that produce sebum. Disadvantages of light therapy can include its cost, the need for multiple visits, the time required to complete the procedure(s), and pain associated with some of the treatment modalities. Typical side effects include skin peeling, temporary reddening of the skin, swelling, and post-inflammatory hyperpigmentation. Dermabrasion is an effective therapeutic procedure for reducing the appearance of superficial atrophic scars of the boxcar and rolling varieties. Ice-pick scars do not respond well to treatment with dermabrasion due to their depth. The procedure is painful and has many potential side effects such as skin sensitivity to sunlight, redness, and decreased pigmentation of the skin. Dermabrasion has fallen out of favor with the introduction of laser resurfacing. Unlike dermabrasion, there is no evidence that microdermabrasion is an effective treatment for acne. Dermal or subcutaneous fillers are substances injected into the skin to improve the appearance of acne scars. Fillers are used to increase natural collagen production in the skin and to increase skin volume and decrease the depth of acne scars. Examples of fillers used for this purpose include hyaluronic acid; poly(methyl methacrylate) microspheres with collagen; human and bovine collagen derivatives, and fat harvested from the person's own body (autologous fat transfer). 
Microneedling is a procedure in which an instrument with multiple rows of tiny needles is rolled over the skin to elicit a wound healing response and stimulate collagen production to reduce the appearance of atrophic acne scars in people with darker skin color. Notable adverse effects of microneedling include post-inflammatory hyperpigmentation and tram track scarring (described as discrete slightly raised scars in a linear distribution similar to a tram track). The latter is thought to be primarily attributable to improper technique by the practitioner, including the use of excessive pressure or inappropriately large needles. Subcision is useful for the treatment of superficial atrophic acne scars and involves the use of a small needle to loosen the fibrotic adhesions that result in the depressed appearance of the scar. Chemical peels can be used to reduce the appearance of acne scars. Mild peels include those using glycolic acid, lactic acid, salicylic acid, Jessner's solution, or a lower concentration (20%) of trichloroacetic acid. These peels only affect the epidermal layer of the skin and can be useful in the treatment of superficial acne scars as well as skin pigmentation changes from inflammatory acne. Higher concentrations of trichloroacetic acid (30–40%) are considered to be medium-strength peels and affect the skin as deep as the papillary dermis. Formulations of trichloroacetic acid concentrated to 50% or more are considered to be deep chemical peels. Medium-strength and deep-strength chemical peels are more effective for deeper atrophic scars but are more likely to cause side effects such as skin pigmentation changes, infection, and small white superficial cysts known as milia. Alternative medicine Researchers are investigating complementary therapies as treatments for people with acne. Low-quality evidence suggests topical application of tea tree oil or bee venom may reduce the total number of skin lesions in those with acne. Tea tree oil appears to be approximately as effective as benzoyl peroxide or salicylic acid but is associated with allergic contact dermatitis. Proposed mechanisms for tea tree oil's anti-acne effects include antibacterial action against C. acnes and anti-inflammatory properties. Numerous other plant-derived therapies have demonstrated positive effects against acne (e.g., basil oil and oligosaccharides from seaweed); however, few well-done studies have examined their use for this purpose. There is a lack of high-quality evidence for the use of acupuncture, herbal medicine, or cupping therapy for acne. Self-care Many over-the-counter treatments are available in many forms; these are often known as cosmeceuticals. Certain types of makeup may be useful to mask acne. In those with oily skin, a water-based product is often preferred. Prognosis Acne usually improves around the age of 20 but may persist into adulthood. Permanent physical scarring may occur. Rare complications from acne or its treatment include the formation of pyogenic granulomas, osteoma cutis, and acne with facial edema. Early and aggressive treatment of acne is advocated by some in the medical community to reduce the chances of these poor outcomes. Mental health impact There is good evidence to support the idea that acne and associated scarring negatively affect a person's psychological state, worsen mood, lower self-esteem, and are associated with a higher risk of anxiety disorders, depression, and suicidal thoughts.
Misperceptions about acne's causative and aggravating factors are common; people with acne often blame themselves for their condition, and they are often blamed by others as well. Such blame can worsen the affected person's sense of self-esteem. Until the 20th century, even among dermatologists, the list of causes was believed to include excessive sexual thoughts and masturbation. Dermatology's association with sexually transmitted infections, especially syphilis, contributed to the stigma. Another psychological complication of acne vulgaris is acne excoriée, which occurs when a person persistently picks and scratches pimples, irrespective of the severity of their acne. This can lead to significant scarring, changes in the affected person's skin pigmentation, and a cyclic worsening of the affected person's anxiety about their appearance. Epidemiology Globally, acne affects approximately 650 million people, or about 9.4% of the population, as of 2010. It affects nearly 90% of people in Western societies during their teenage years, but can occur before adolescence and may persist into adulthood. While acne that first develops between the ages of 21 and 25 is uncommon, it affects 54% of women and 40% of men older than 25 years of age and has a lifetime prevalence of 85%. About 20% of those affected have moderate or severe cases. It is slightly more common in females than males (9.8% versus 9.0%). In those over 40 years old, 1% of males and 5% of females still have problems. Rates appear to be lower in rural societies. While some research has found it affects people of all ethnic groups, acne may not occur in the non-Westernized peoples of Papua New Guinea and Paraguay. Acne affects 40–50 million people in the United States (16%) and approximately 3–5 million in Australia (23%). Severe acne tends to be more common in people of Caucasian or Amerindian descent than in people of African descent. History Historical records indicate that pharaohs had acne, which may be the earliest known reference to the disease. Sulfur's usefulness as a topical remedy for acne dates back to at least the reign of Cleopatra (69–30 BCE). The sixth-century Greek physician Aëtius of Amida reportedly coined the term "acne", which seems to be a reference to facial skin lesions that occur during "the 'acme' of life" (puberty). In the 18th century, the French physician and botanist François Boissier de Sauvages de Lacroix provided one of the earlier descriptions of acne. He used the term "psydracia achne" to describe small, red, and hard tubercles that altered a person's facial appearance during adolescence and were neither itchy nor painful. The recognition and characterization of acne progressed in 1776 when Josef Plenck (an Austrian physician) published a book that proposed the novel concept of classifying skin diseases by their elementary (initial) lesions. In 1808, the English dermatologist Robert Willan refined Plenck's work by providing the first detailed descriptions of several skin disorders using morphologic terminology that remains in use today. Thomas Bateman continued and expanded on Robert Willan's work as his student and provided the first descriptions and illustrations of acne accepted as accurate by modern dermatologists. Erasmus Wilson, in 1842, was the first to make the distinction between acne vulgaris and rosacea. The first professional medical monograph dedicated entirely to acne was written by Lucius Duncan Bulkley and published in New York in 1885.
Scientists initially hypothesized that acne represented a disease of the skin's hair follicle and occurred due to blockage of the pore by sebum. During the 1880s, they observed bacteria by microscopy in skin samples from people with acne. Investigators believed the bacteria caused comedones, sebum production, and ultimately acne. During the mid-twentieth century, dermatologists realized that no single hypothesized factor (sebum, bacteria, or excess keratin) fully accounted for the disease in its entirety. This led to the current understanding that acne could be explained by a sequence of related events, beginning with blockage of the skin follicle by excessive dead skin cells, followed by bacterial invasion of the hair follicle pore, changes in sebum production, and inflammation. The approach to acne treatment underwent significant changes during the twentieth century. Retinoids became a medical treatment for acne in 1943. Benzoyl peroxide was first proposed as a treatment in 1958 and remains a staple of acne treatment. The introduction of oral tetracycline antibiotics (such as minocycline) modified acne treatment in the 1950s. These reinforced the idea amongst dermatologists that bacterial growth on the skin plays an important role in causing acne. Subsequently, in the 1970s, tretinoin (original trade name Retin A) was found to be an effective treatment. The development of oral isotretinoin (sold as Accutane and Roaccutane) followed in 1980. After its introduction in the United States, scientists identified isotretinoin as a medication highly likely to cause birth defects if taken during pregnancy. In the United States, more than 2,000 women became pregnant while taking isotretinoin between 1982 and 2003, with most pregnancies ending in abortion or miscarriage. Approximately 160 babies were born with birth defects due to maternal use of isotretinoin during pregnancy. Treatment of acne with topical crushed dry ice, known as cryoslush, was first described in 1907 but is no longer commonly performed. Before 1960, the use of X-rays was also a common treatment. Society and culture The costs and social impact of acne are substantial. In the United States, acne vulgaris is responsible for more than 5 million doctor visits and costs billions of dollars each year in direct costs. Similarly, acne vulgaris is responsible for 3.5 million doctor visits each year in the United Kingdom. Sales for the top ten leading acne treatment brands in the US in 2015 amounted to $352 million. Acne vulgaris and its resultant scars are associated with significant social and academic difficulties that can last into adulthood. During the Great Depression, dermatologists discovered that young men with acne had difficulty obtaining jobs. Until the 1930s, many people viewed acne as a trivial problem of middle-class girls: trivial because, unlike smallpox and tuberculosis, no one died from it, and feminine because boys were much less likely to seek medical assistance for it. During World War II, some soldiers in tropical climates developed such severe and widespread tropical acne on their bodies that they were declared medically unfit for duty. Research Efforts to better understand the mechanisms of sebum production are underway. This research aims to develop medications that target and interfere with the hormones that are known to increase sebum production (e.g., IGF-1 and alpha-melanocyte-stimulating hormone). 
Other sebum-lowering medications such as topical antiandrogens, peroxisome proliferator-activated receptor modulators, and inhibitors of the stearoyl-CoA desaturase-1 enzyme are also a focus of research efforts. Particles that release nitric oxide into the skin to decrease skin inflammation caused by C. acnes and the immune system have shown promise for improving acne in early clinical trials. Another avenue of early-stage research has focused on how to best use laser and light therapy to selectively destroy sebum-producing glands in the skin's hair follicles to reduce sebum production and improve acne appearance. The use of antimicrobial peptides against C. acnes is under investigation as a treatment for acne aimed at overcoming antibiotic resistance. In 2007, scientists reported the first genome sequencing of a C. acnes bacteriophage (PA6). The authors proposed applying this research toward the development of bacteriophage therapy as an acne treatment to overcome the problems associated with long-term antibiotic use, such as bacterial resistance. Oral and topical probiotics are under evaluation as treatments for acne. Probiotics may have therapeutic effects for those affected by acne due to their ability to decrease skin inflammation and improve skin moisture by increasing the skin's ceramide content. As of 2014, knowledge of the effects of probiotics on acne in humans was limited. Decreased levels of retinoic acid in the skin may contribute to comedo formation. Researchers are investigating methods to increase the skin's production of retinoic acid to address this deficiency. A vaccine against inflammatory acne has shown promising results in mice and humans. Some have voiced concerns about creating a vaccine designed to neutralize a stable community of normal skin bacteria that is known to protect the skin from colonization by more harmful microorganisms. Other animals Acne can occur in cats, dogs, and horses.
Dynamic random-access memory
Dynamic random-access memory (dynamic RAM or DRAM) is a type of random-access semiconductor memory that stores each bit of data in a memory cell, usually consisting of a tiny capacitor and a transistor, both typically based on metal–oxide–semiconductor (MOS) technology. While most DRAM memory cell designs use a capacitor and transistor, some only use two transistors. In the designs where a capacitor is used, the capacitor can either be charged or discharged; these two states are taken to represent the two values of a bit, conventionally called 0 and 1. The electric charge on the capacitors gradually leaks away; without intervention the data on the capacitor would soon be lost. To prevent this, DRAM requires an external memory refresh circuit which periodically rewrites the data in the capacitors, restoring them to their original charge. This refresh process is the defining characteristic of dynamic random-access memory, in contrast to static random-access memory (SRAM) which does not require data to be refreshed. Unlike flash memory, DRAM is volatile memory (vs. non-volatile memory), since it loses its data quickly when power is removed. However, DRAM does exhibit limited data remanence. DRAM typically takes the form of an integrated circuit chip, which can consist of dozens to billions of DRAM memory cells. DRAM chips are widely used in digital electronics where low-cost and high-capacity computer memory is required. One of the largest applications for DRAM is the main memory (colloquially called the RAM) in modern computers and graphics cards (where the main memory is called the graphics memory). It is also used in many portable devices and video game consoles. In contrast, SRAM, which is faster and more expensive than DRAM, is typically used where speed is of greater concern than cost and size, such as the cache memories in processors. The need to refresh DRAM demands more complicated circuitry and timing than SRAM. This complexity is offset by the structural simplicity of DRAM memory cells: only one transistor and a capacitor are required per bit, compared to four or six transistors in SRAM. This allows DRAM to reach very high densities with a simultaneous reduction in cost per bit. Refreshing the data consumes power, causing a variety of techniques to be used to manage the overall power consumption. For this reason, DRAM usually needs to operate with a memory controller; the memory controller needs to know DRAM parameters, especially memory timings, to initialize DRAMs, which may be different depending on different DRAM manufacturers and part numbers. DRAM had a 47% increase in the price per bit in 2017, the largest jump in 30 years since the 45% jump in 1988, while in recent years the price has been going down. In 2018, a "key characteristic of the DRAM market is that there are currently only three major suppliers — Micron Technology, SK Hynix and Samsung Electronics" that are "keeping a pretty tight rein on their capacity". There is also Kioxia (previously Toshiba Memory Corporation after 2017 spin-off) which does not manufacture DRAM. Other manufacturers, such as Kingston Technology, make and sell DIMMs (but not the DRAM chips in them), and some, such as Viking Technology, sell stacked DRAM (used e.g. in the fastest exascale supercomputers) separately. Others sell DRAM integrated into other products, such as Fujitsu in its CPUs, AMD in its GPUs, and Nvidia, with HBM2 in some of its GPU chips. 
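As a purely illustrative sketch of the leakage-and-refresh behaviour described above (the decay constant, voltages, and read threshold below are arbitrary assumptions, not parameters of any real device), the following C fragment models one cell as a leaking capacitor and shows that a stored 1 survives only if it is periodically rewritten:

    #include <stdio.h>
    #include <math.h>

    /* Toy model: cell voltage decays exponentially toward 0 V.
       All values are illustrative assumptions, not real device parameters. */
    #define V_FULL       1.0    /* voltage representing a stored 1                 */
    #define V_THRESHOLD  0.5    /* below this, the bit can no longer be read as 1  */
    #define LEAK_TAU_MS  120.0  /* assumed leakage time constant                   */
    #define REFRESH_MS   64.0   /* refresh interval                                */

    static double decay(double v, double ms) {
        return v * exp(-ms / LEAK_TAU_MS);
    }

    int main(void) {
        double with_refresh = V_FULL, without_refresh = V_FULL;
        for (int t = 0; t < 10; t++) {          /* ten refresh periods = 640 ms     */
            with_refresh = decay(with_refresh, REFRESH_MS);
            if (with_refresh > V_THRESHOLD)     /* the cell still reads as a 1 ...  */
                with_refresh = V_FULL;          /* ... so refresh rewrites full charge */
            without_refresh = decay(without_refresh, REFRESH_MS);
        }
        printf("after 640 ms: refreshed cell %.2f V, unrefreshed cell %.2f V\n",
               with_refresh, without_refresh);
        return 0;
    }

Run to completion, the refreshed cell still holds a readable 1 while the unrefreshed cell has decayed to nearly 0 V, which is the behaviour the refresh circuitry exists to prevent.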
History Precursors The cryptanalytic machine code-named Aquarius used at Bletchley Park during World War II incorporated a hard-wired dynamic memory. Paper tape was read and the characters on it "were remembered in a dynamic store." The store used a large bank of capacitors, which were either charged or not, a charged capacitor representing cross (1) and an uncharged capacitor dot (0). Since the charge gradually leaked away, a periodic pulse was applied to top up those still charged (hence the term 'dynamic')". In November 1965, Toshiba introduced a bipolar dynamic RAM for its electronic calculator Toscal BC-1411. In 1966, Tomohisa Yoshimaru and Hiroshi Komikawa from Toshiba applied for a Japanese patent of a memory circuit composed of several transistors and a capacitor, in 1967 they applied for a patent in the US. The earliest forms of DRAM mentioned above used bipolar transistors. While it offered improved performance over magnetic-core memory, bipolar DRAM could not compete with the lower price of the then-dominant magnetic-core memory. Capacitors had also been used for earlier memory schemes, such as the drum of the Atanasoff–Berry Computer, the Williams tube and the Selectron tube. Single MOS DRAM In 1966, Dr. Robert Dennard invented modern DRAM architecture in which there's a single MOS transistor per capacitor, at the IBM Thomas J. Watson Research Center, while he was working on MOS memory and was trying to create an alternative to SRAM which required six MOS transistors for each bit of data. While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of the single-transistor MOS DRAM memory cell. He filed a patent in 1967, and was granted U.S. patent number 3,387,286 in 1968. MOS memory offered higher performance, was cheaper, and consumed less power, than magnetic-core memory. The patent describes the invention: "Each cell is formed, in one embodiment, using a single field-effect transistor and a single capacitor." MOS DRAM chips were commercialized in 1969 by Advanced Memory Systems, Inc of Sunnyvale, CA. This 1024 bit chip was sold to Honeywell, Raytheon, Wang Laboratories, and others. The same year, Honeywell asked Intel to make a DRAM using a three-transistor cell that they had developed. This became the Intel 1102 in early 1970. However, the 1102 had many problems, prompting Intel to begin work on their own improved design, in secrecy to avoid conflict with Honeywell. This became the first commercially available DRAM, the Intel 1103, in October 1970, despite initial problems with low yield until the fifth revision of the masks. The 1103 was designed by Joel Karp and laid out by Pat Earhart. The masks were cut by Barbara Maness and Judy Garcia. MOS memory overtook magnetic-core memory as the dominant memory technology in the early 1970s. The first DRAM with multiplexed row and column address lines was the Mostek MK4096 4 Kbit DRAM designed by Robert Proebsting and introduced in 1973. This addressing scheme uses the same address pins to receive the low half and the high half of the address of the memory cell being referenced, switching between the two halves on alternating bus cycles. 
This was a radical advance, effectively halving the number of address lines required, which enabled it to fit into packages with fewer pins, a cost advantage that grew with every jump in memory size. The MK4096 proved to be a very robust design for customer applications. At the 16 Kbit density, the cost advantage increased; the 16 Kbit Mostek MK4116 DRAM, introduced in 1976, achieved greater than 75% worldwide DRAM market share. However, as density increased to 64 Kbit in the early 1980s, Mostek and other US manufacturers were overtaken by Japanese DRAM manufacturers, which dominated the US and worldwide markets during the 1980s and 1990s. Early in 1985, Gordon Moore decided to withdraw Intel from producing DRAM. By 1986, many, but not all, United States chip makers had stopped making DRAMs. Micron Technology and Texas Instruments continued to produce them commercially, and IBM produced them for internal use. In 1985, when 64K DRAM memory chips were the most common memory chips used in computers, and when more than 60 percent of those chips were produced by Japanese companies, semiconductor makers in the United States accused Japanese companies of export dumping for the purpose of driving makers in the United States out of the commodity memory chip business. Prices for the 64K product plummeted to as low as 35 cents apiece from $3.50 within 18 months, with disastrous financial consequences for some U.S. firms. On 4 December 1985 the US Commerce Department's International Trade Administration ruled in favor of the complaint. Synchronous dynamic random-access memory (SDRAM) was developed by Samsung. The first commercial SDRAM chip was the Samsung KM48SL2000, which had a capacity of 16Mb, and was introduced in 1992. The first commercial DDR SDRAM (double data rate SDRAM) memory chip was Samsung's 64Mb DDR SDRAM chip, released in 1998. Later, in 2001, Japanese DRAM makers accused Korean DRAM manufacturers of dumping. In 2002, US computer makers made claims of DRAM price fixing. Principles of operation DRAM is usually arranged in a rectangular array of charge storage cells consisting of one capacitor and transistor per data bit. The figure to the right shows a simple example with a four-by-four cell matrix. Some DRAM matrices are many thousands of cells in height and width. The long horizontal lines connecting each row are known as word-lines. Each column of cells is composed of two bit-lines, each connected to every other storage cell in the column (the illustration to the right does not include this important detail). They are generally known as the + and − bit lines. A sense amplifier is essentially a pair of cross-connected inverters between the bit-lines. The first inverter is connected with input from the + bit-line and output to the − bit-line. The second inverter's input is from the − bit-line with output to the + bit-line. This results in positive feedback which stabilizes after one bit-line is fully at its highest voltage and the other bit-line is at the lowest possible voltage. Operations to read a data bit from a DRAM storage cell The sense amplifiers are disconnected. The bit-lines are precharged to exactly equal voltages that are in between high and low logic levels (e.g., 0.5 V if the two levels are 0 and 1 V). The bit-lines are physically symmetrical to keep the capacitance equal, and therefore at this time their voltages are equal. The precharge circuit is switched off. 
Because the bit-lines are relatively long, they have enough capacitance to maintain the precharged voltage for a brief time. This is an example of dynamic logic. The desired row's word-line is then driven high to connect a cell's storage capacitor to its bit-line. This causes the transistor to conduct, transferring charge from the storage cell to the connected bit-line (if the stored value is 1) or from the connected bit-line to the storage cell (if the stored value is 0). Since the capacitance of the bit-line is typically much higher than the capacitance of the storage cell, the voltage on the bit-line increases very slightly if the storage cell's capacitor is charged and decreases very slightly if it is discharged (e.g., 0.54 and 0.45 V in the two cases). As the other bit-line holds 0.50 V, there is a small voltage difference between the two twisted bit-lines. The sense amplifiers are now connected to the bit-line pairs. Positive feedback then occurs from the cross-connected inverters, thereby amplifying the small voltage difference between the odd and even row bit-lines of a particular column until one bit line is fully at the lowest voltage and the other is at the maximum high voltage. Once this has happened, the row is open (the desired cell data is available). All storage cells in the open row are sensed simultaneously, and the sense amplifier outputs are latched. A column address then selects which latch bit to connect to the external data bus. Reads of different columns in the same row can be performed without a row opening delay because, for the open row, all data has already been sensed and latched. While reading of columns in an open row is occurring, current is flowing back up the bit-lines from the output of the sense amplifiers and recharging the storage cells. This reinforces (i.e. refreshes) the charge in the storage cell by increasing the voltage in the storage capacitor if it was charged to begin with, or by keeping it discharged if it was empty. Note that due to the length of the bit-lines there is a fairly long propagation delay for the charge to be transferred back to the cell's capacitor. This takes significant time past the end of sense amplification, and thus overlaps with one or more column reads. When reading of all the columns in the currently open row is complete, the word-line is switched off to disconnect the storage cell capacitors (the row is closed) from the bit-lines. The sense amplifier is switched off, and the bit-lines are precharged again. To write to memory To store data, a row is opened and a given column's sense amplifier is temporarily forced to the desired high or low-voltage state, thus causing the bit-line to charge or discharge the cell storage capacitor to the desired value. Due to the sense amplifier's positive feedback configuration, it will hold a bit-line at stable voltage even after the forcing voltage is removed. During a write to a particular cell, all the columns in a row are sensed simultaneously just as during reading, so although only a single column's storage-cell capacitor charge is changed, the entire row is refreshed (written back in), as illustrated in the figure to the right. Refresh rate Typically, manufacturers specify that each row must be refreshed every 64 ms or less, as defined by the JEDEC standard. Some systems refresh every row in a burst of activity involving all rows every 64 ms. Other systems refresh one row at a time staggered throughout the 64 ms interval. 
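Returning to the read operation described above, the charge-sharing step can be illustrated with a few lines of C. The ten-to-one bit-line-to-cell capacitance ratio is an assumption (a similar figure is mentioned later in the article), and the 1 V and 0.5 V levels are the illustrative values from the text; the computed bit-line voltages come out close to the 0.54 V and 0.45 V example given above:

    #include <stdio.h>

    /* Charge sharing between a precharged bit-line and a storage cell.
       The 10:1 capacitance ratio and the voltage levels are assumptions. */
    #define C_CELL      1.0     /* relative units                         */
    #define C_BITLINE   10.0
    #define V_PRECHARGE 0.5
    #define V_HIGH      1.0

    /* Final shared voltage = total charge / total capacitance. */
    static double share(double v_cell) {
        return (C_BITLINE * V_PRECHARGE + C_CELL * v_cell) / (C_BITLINE + C_CELL);
    }

    int main(void) {
        double v1 = share(V_HIGH);  /* cell stored a 1 (charged capacitor)    */
        double v0 = share(0.0);     /* cell stored a 0 (discharged capacitor) */
        /* The sense amplifier only needs the sign of the difference from the
           reference bit-line, which stays at the precharge level.           */
        printf("stored 1: bit-line %.3f V (reads as 1: %s)\n", v1, v1 > V_PRECHARGE ? "yes" : "no");
        printf("stored 0: bit-line %.3f V (reads as 1: %s)\n", v0, v0 > V_PRECHARGE ? "yes" : "no");
        return 0;
    }

The tiny swing of a few tens of millivolts either side of the precharge level is exactly why the differential sense amplifier and the matched bit-line capacitances discussed later are needed.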
To illustrate staggered refresh: a system with 2^13 = 8,192 rows would require a refresh of one row every 7.8 μs, which is 64 ms divided by 8,192 rows. A few real-time systems refresh a portion of memory at a time determined by an external timer function that governs the operation of the rest of a system, such as the vertical blanking interval that occurs every 10–20 ms in video equipment. The row address of the row that will be refreshed next is maintained by external logic or a counter within the DRAM. A system that provides the row address (and the refresh command) does so to have greater control over when to refresh and which row to refresh. This is done to minimize conflicts with memory accesses, since such a system has both knowledge of the memory access patterns and the refresh requirements of the DRAM. When the row address is supplied by a counter within the DRAM, the system relinquishes control over which row is refreshed and only provides the refresh command. Some modern DRAMs are capable of self-refresh; no external logic is required to instruct the DRAM to refresh or to provide a row address. Under some conditions, most of the data in DRAM can be recovered even if the DRAM has not been refreshed for several minutes. Memory timing Many parameters are required to fully describe the timing of DRAM operation; a data sheet published in 1998, for example, lists them for two timing grades of asynchronous DRAM. The generally quoted number is the /RAS low to valid data out time. This is the time to open a row, settle the sense amplifiers, and deliver the selected column data to the output. This is also the minimum /RAS low time, which includes the time for the amplified data to be delivered back to recharge the cells. The time to read additional bits from an open page is much less, defined by the /CAS to /CAS cycle time. The quoted number is the clearest way to compare the performance of different DRAM memories, as it sets the slower limit regardless of the row length or page size. Bigger arrays necessarily result in larger bit line capacitance and longer propagation delays, which cause this time to increase, since the sense amplifier settling time depends on both the capacitance and the propagation latency. This is countered in modern DRAM chips by instead integrating many more complete DRAM arrays within a single chip, to accommodate more capacity without becoming too slow. When such a RAM is accessed by clocked logic, the times are generally rounded up to the nearest clock cycle. For example, when accessed by a 100 MHz state machine (i.e. a 10 ns clock), the 50 ns DRAM can perform the first read in five clock cycles, and additional reads within the same page every two clock cycles. This was generally described as 5-2-2-2 timing, as bursts of four reads within a page were common. When describing synchronous memory, timing is described by clock cycle counts separated by hyphens. These numbers represent the principal latencies in multiples of the DRAM clock cycle time. Note that the clock rate is half of the data transfer rate when double data rate signaling is used. JEDEC standard PC3200 timing is specified with a 200 MHz clock, while a premium-priced high-performance PC3200 DDR DRAM DIMM might be operated at tighter (lower-latency) timing. Minimum random access time has improved only modestly from the asynchronous DRAM's tRAC = 50 ns; even the premium 20 ns variety is only 2.5 times faster. CAS latency has improved even less, to about 10 ns. 
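The clock-cycle rounding just described can be made concrete with a small sketch; the 100 MHz controller clock, 50 ns first-access time, and 20 ns page-cycle time are the figures assumed from the text:

    #include <stdio.h>

    /* Round asynchronous DRAM access times up to whole controller clock cycles. */
    #define CLOCK_NS  10   /* 100 MHz state machine                  */
    #define T_RAC_NS  50   /* /RAS low to valid data out (first read) */
    #define T_PC_NS   20   /* page-mode cycle time (later reads)      */

    static int cycles(int ns) {          /* round up to the next clock edge */
        return (ns + CLOCK_NS - 1) / CLOCK_NS;
    }

    int main(void) {
        int first = cycles(T_RAC_NS);    /* 5 cycles */
        int later = cycles(T_PC_NS);     /* 2 cycles */
        int burst = first + 3 * later;   /* classic burst of four reads from one page */
        printf("%d-%d-%d-%d timing, %d cycles (%d ns) per four-word burst\n",
               first, later, later, later, burst, burst * CLOCK_NS);
        return 0;
    }

The program prints "5-2-2-2 timing, 11 cycles (110 ns) per four-word burst", matching the example in the text.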
DDR3 memory does, however, achieve 32 times higher bandwidth; due to internal pipelining and wide data paths, it can output two words every 1.25 ns (1,600 Mword/s), while the EDO DRAM can output one word per tPC = 20 ns (50 Mword/s). Timing abbreviations Memory cell design Each bit of data in a DRAM is stored as a positive or negative electrical charge in a capacitive structure. The structure providing the capacitance, as well as the transistors that control access to it, is collectively referred to as a DRAM cell. They are the fundamental building block in DRAM arrays. Multiple DRAM memory cell variants exist, but the most commonly used variant in modern DRAMs is the one-transistor, one-capacitor (1T1C) cell. The transistor is used to admit current into the capacitor during writes, and to discharge the capacitor during reads. The access transistor is designed to maximize drive strength and minimize transistor-transistor leakage (Kenner, p. 34). The capacitor has two terminals, one of which is connected to its access transistor, and the other to either ground or VCC/2. In modern DRAMs, the latter case is more common, since it allows faster operation. In modern DRAMs, a voltage of +VCC/2 across the capacitor is required to store a logic one, and a voltage of −VCC/2 across the capacitor is required to store a logic zero. The resultant charge is Q = ±(VCC/2)·C, where Q is the charge in coulombs and C is the capacitance in farads. Reading or writing a logic one requires the wordline to be driven to a voltage greater than the sum of VCC and the access transistor's threshold voltage (VTH). This voltage is called VCC pumped (VCCP). The time required to discharge a capacitor thus depends on what logic value is stored in the capacitor. A capacitor containing logic one begins to discharge when the voltage at the access transistor's gate terminal is above VCCP. If the capacitor contains a logic zero, it begins to discharge when the gate terminal voltage is above VTH. Capacitor design Up until the mid-1980s, the capacitors in DRAM cells were co-planar with the access transistor (they were constructed on the surface of the substrate), thus they were referred to as planar capacitors. The drive to increase both density and, to a lesser extent, performance, required denser designs. This was strongly motivated by economics, a major consideration for DRAM devices, especially commodity DRAMs. The minimization of DRAM cell area can produce a denser device and lower the cost per bit of storage. Starting in the mid-1980s, the capacitor was moved above or below the silicon substrate in order to meet these objectives. DRAM cells featuring capacitors above the substrate are referred to as stacked or folded plate capacitors. Those with capacitors buried beneath the substrate surface are referred to as trench capacitors. In the 2000s, manufacturers were sharply divided by the type of capacitor used in their DRAMs, and the relative cost and long-term scalability of both designs have been the subject of extensive debate. The majority of DRAMs, from major manufacturers such as Hynix, Micron Technology, and Samsung Electronics, use the stacked capacitor structure, whereas smaller manufacturers such as Nanya Technology use the trench capacitor structure (Jacob, pp. 355–357). The capacitor in the stacked capacitor scheme is constructed above the surface of the substrate. 
The capacitor is constructed from an oxide-nitride-oxide (ONO) dielectric sandwiched in between two layers of polysilicon plates (the top plate is shared by all DRAM cells in an IC), and its shape can be a rectangle, a cylinder, or some other more complex shape. There are two basic variations of the stacked capacitor, based on its location relative to the bitline—capacitor-under-bitline (CUB) and capacitor-over-bitline (COB). In the former, the capacitor is underneath the bitline, which is usually made of metal, and the bitline has a polysilicon contact that extends downwards to connect it to the access transistor's source terminal. In the latter, the capacitor is constructed above the bitline, which is almost always made of polysilicon, but is otherwise identical to the CUB variation. The advantage the COB variant possesses is the ease of fabricating the contact between the bitline and the access transistor's source, as it is physically close to the substrate surface. However, this requires the active area to be laid out at a 45-degree angle when viewed from above, which makes it difficult to ensure that the capacitor contact does not touch the bitline. CUB cells avoid this, but suffer from difficulties in inserting contacts in between bitlines, since the size of features this close to the surface is at or near the minimum feature size of the process technology (Kenner, pp. 33–42). The trench capacitor is constructed by etching a deep hole into the silicon substrate. The substrate volume surrounding the hole is then heavily doped to produce a buried n+ plate with low resistance. A layer of oxide-nitride-oxide dielectric is grown or deposited, and finally the hole is filled by depositing doped polysilicon, which forms the top plate of the capacitor. The top of the capacitor is connected to the access transistor's drain terminal via a polysilicon strap (Kenner, pp. 42–44). A trench capacitor's depth-to-width ratio in DRAMs of the mid-2000s can exceed 50:1 (Jacob, p. 357). Trench capacitors have numerous advantages. Since the capacitor is buried in the bulk of the substrate instead of lying on its surface, the area it occupies can be minimized to what is required to connect it to the access transistor's drain terminal without decreasing the capacitor's size, and thus capacitance (Jacob, pp. 356–357). Alternatively, the capacitance can be increased by etching a deeper hole without any increase to surface area (Kenner, p. 44). Another advantage of the trench capacitor is that its structure is under the layers of metal interconnect, allowing them to be more easily made planar, which enables it to be integrated in a logic-optimized process technology, which has many levels of interconnect above the substrate. The fact that the capacitor is under the logic means that it is constructed before the transistors are. This allows the capacitors to be fabricated with high-temperature processes that would otherwise degrade the logic transistors and their performance. This makes trench capacitors suitable for constructing embedded DRAM (eDRAM) (Jacob, p. 357). Disadvantages of trench capacitors are difficulties in reliably constructing the capacitor's structures within deep holes and in connecting the capacitor to the access transistor's drain terminal (Kenner, p. 44). Historical cell designs First-generation DRAM ICs (those with capacities of 1 Kbit), such as the archetypical Intel 1103, used a three-transistor, one-capacitor (3T1C) DRAM cell with separate read and write circuitry. 
The write wordline drove a write transistor which connected the capacitor to the write bitline just as in the 1T1C cell, but there was a separate read wordline and read transistor which connected an amplifier transistor to the read bitline. By the second generation, the drive to reduce cost by fitting the same amount of bits in a smaller area led to the almost universal adoption of the 1T1C DRAM cell, although a couple of devices with 4 and 16 Kbit capacities continued to use the 3T1C cell for performance reasons (Kenner, p. 6). These performance advantages included, most significantly, the ability to read the state stored by the capacitor without discharging it, avoiding the need to write back what was read out (non-destructive read). A second performance advantage relates to the 3T1C cell's separate transistors for reading and writing; the memory controller can exploit this feature to perform atomic read-modify-writes, where a value is read, modified, and then written back as a single, indivisible operation (Jacob, p. 459). Proposed cell designs The one-transistor, zero-capacitor (1T, or 1T0C) DRAM cell has been a topic of research since the late-1990s. 1T DRAM is a different way of constructing the basic DRAM memory cell, distinct from the classic one-transistor/one-capacitor (1T/1C) DRAM cell, which is also sometimes referred to as 1T DRAM, particularly in comparison to the 3T and 4T DRAM which it replaced in the 1970s. In 1T DRAM cells, the bit of data is still stored in a capacitive region controlled by a transistor, but this capacitance is no longer provided by a separate capacitor. 1T DRAM is a "capacitorless" bit cell design that stores data using the parasitic body capacitance that is inherent to silicon on insulator (SOI) transistors. Considered a nuisance in logic design, this floating body effect can be used for data storage. This gives 1T DRAM cells the greatest density as well as allowing easier integration with high-performance logic circuits since they are constructed with the same SOI process technologies. Refreshing of cells remains necessary, but unlike with 1T1C DRAM, reads in 1T DRAM are non-destructive; the stored charge causes a detectable shift in the threshold voltage of the transistor. Performance-wise, access times are significantly better than capacitor-based DRAMs, but slightly worse than SRAM. There are several types of 1T DRAMs: the commercialized Z-RAM from Innovative Silicon, the TTRAM from Renesas and the A-RAM from the UGR/CNRS consortium. Array structures DRAM cells are laid out in a regular rectangular, grid-like pattern to facilitate their control and access via wordlines and bitlines. The physical layout of the DRAM cells in an array is typically designed so that two adjacent DRAM cells in a column share a single bitline contact to reduce their area. DRAM cell area is given as nF2, where n is a number derived from the DRAM cell design, and F is the smallest feature size of a given process technology. This scheme permits comparison of DRAM size over different process technology generations, as DRAM cell area scales at linear or near-linear rates with respect to feature size. The typical area for modern DRAM cells varies between 6–8 F2. The horizontal wire, the wordline, is connected to the gate terminal of every access transistor in its row. The vertical bitline is connected to the source terminal of the transistors in its column. The lengths of the wordlines and bitlines are limited. 
The wordline length is limited by the desired performance of the array, since the propagation time of the signal that must traverse the wordline is determined by the RC time constant. The bitline length is limited by its capacitance (which increases with length), which must be kept within a range for proper sensing (as DRAMs operate by sensing the charge of the capacitor released onto the bitline). Bitline length is also limited by the amount of operating current the DRAM can draw and by how power can be dissipated, since these two characteristics are largely determined by the charging and discharging of the bitline. Bitline architecture Sense amplifiers are required to read the state contained in the DRAM cells. When the access transistor is activated, the electrical charge in the capacitor is shared with the bitline. The bitline's capacitance is much greater than that of the capacitor (approximately ten times). Thus, the change in bitline voltage is minute. Sense amplifiers are required to resolve the voltage differential into the levels specified by the logic signaling system. Modern DRAMs use differential sense amplifiers, and are accompanied by requirements as to how the DRAM arrays are constructed. Differential sense amplifiers work by driving their outputs to opposing extremes based on the relative voltages on pairs of bitlines. The sense amplifiers function effectively and efficiently only if the capacitance and voltages of these bitline pairs are closely matched. Besides ensuring that the lengths of the bitlines and the number of DRAM cells attached to them are equal, two basic approaches to array design have emerged to provide for the requirements of the sense amplifiers: open and folded bitline arrays. Open bitline arrays The first generation (1 Kbit) DRAM ICs, up until the 64 Kbit generation (and some 256 Kbit generation devices), had open bitline array architectures. In these architectures, the bitlines are divided into multiple segments, and the differential sense amplifiers are placed in between bitline segments. Because the sense amplifiers are placed between bitline segments, routing their outputs outside the array requires an additional layer of interconnect placed above those used to construct the wordlines and bitlines. The DRAM cells that are on the edges of the array do not have adjacent segments. Since the differential sense amplifiers require identical capacitance and bitline lengths from both segments, dummy bitline segments are provided. The advantage of the open bitline array is a smaller array area, although this advantage is slightly diminished by the dummy bitline segments. The disadvantage that caused the near disappearance of this architecture is the inherent vulnerability to noise, which affects the effectiveness of the differential sense amplifiers. Since each bitline segment does not have any spatial relationship to the other, it is likely that noise would affect only one of the two bitline segments. Folded bitline arrays The folded bitline array architecture routes bitlines in pairs throughout the array. The close proximity of the paired bitlines provides superior common-mode noise rejection characteristics over open bitline arrays. The folded bitline array architecture began appearing in DRAM ICs during the mid-1980s, beginning with the 256 Kbit generation. This architecture is favored in modern DRAM ICs for its superior noise immunity. 
This architecture is referred to as folded because it takes its basis from the open array architecture from the perspective of the circuit schematic. The folded array architecture appears to remove DRAM cells in alternate pairs (because two DRAM cells share a single bitline contact) from a column, then move the DRAM cells from an adjacent column into the voids. The location where the bitline twists occupies additional area. To minimize area overhead, engineers select the simplest and most area-minimal twisting scheme that is able to reduce noise under the specified limit. As process technology improves to reduce minimum feature sizes, the signal-to-noise problem worsens, since coupling between adjacent metal wires is inversely proportional to their pitch. The array folding and bitline twisting schemes that are used must increase in complexity in order to maintain sufficient noise reduction. Schemes that have desirable noise-immunity characteristics for a minimal impact in area are the topic of current research (Kenner, p. 37). Future array architectures Advances in process technology could result in open bitline array architectures being favored if they are able to offer better long-term area efficiencies, since folded array architectures require increasingly complex folding schemes to match any advance in process technology. The relationship between process technology, array architecture, and area efficiency is an active area of research. Row and column redundancy The first DRAM integrated circuits did not have any redundancy. An integrated circuit with a defective DRAM cell would be discarded. Beginning with the 64 Kbit generation, DRAM arrays have included spare rows and columns to improve yields. Spare rows and columns provide tolerance of minor fabrication defects which have caused a small number of rows or columns to be inoperable. The defective rows and columns are physically disconnected from the rest of the array by triggering a programmable fuse or by cutting the wire with a laser. The spare rows or columns are substituted in by remapping logic in the row and column decoders (Jacob, pp. 358–361). Error detection and correction Electrical or magnetic interference inside a computer system can cause a single bit of DRAM to spontaneously flip to the opposite state. The majority of one-off ("soft") errors in DRAM chips occur as a result of background radiation, chiefly neutrons from cosmic ray secondaries, which may change the contents of one or more memory cells or interfere with the circuitry used to read/write them. The problem can be mitigated by using redundant memory bits and additional circuitry that use these bits to detect and correct soft errors. In most cases, the detection and correction are performed by the memory controller; sometimes, the required logic is transparently implemented within DRAM chips or modules, enabling the ECC memory functionality for otherwise ECC-incapable systems. The extra memory bits are used to record parity and to enable missing data to be reconstructed by error-correcting code (ECC). Parity allows the detection of all single-bit errors (actually, any odd number of wrong bits). The most common error-correcting code, a SECDED Hamming code, allows a single-bit error to be corrected and, in the usual configuration, with an extra parity bit, double-bit errors to be detected. 
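To make the parity idea concrete, here is a minimal sketch using a single even-parity bit per byte (far simpler than the SECDED Hamming codes used by real ECC memory); it detects, but cannot locate or correct, any single flipped bit:

    #include <stdint.h>
    #include <stdio.h>

    /* Even parity over one byte: the parity bit makes the total number of 1s even. */
    static uint8_t parity8(uint8_t b) {
        uint8_t p = 0;
        for (int i = 0; i < 8; i++)
            p ^= (b >> i) & 1;
        return p;
    }

    int main(void) {
        uint8_t data = 0x5A;              /* value written to memory                */
        uint8_t stored_parity = parity8(data);

        uint8_t readback = data ^ 0x08;   /* a soft error flips bit 3 on read-back  */
        if (parity8(readback) != stored_parity)
            printf("single-bit error detected in 0x%02X\n", readback);
        else
            printf("no error detected\n");
        return 0;
    }

Because any single flip changes the number of 1s from even to odd, the recomputed parity disagrees with the stored bit and the error is reported; a SECDED code additionally stores enough check bits to identify which bit flipped and correct it.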
Recent studies give widely varying error rates, with over seven orders of magnitude difference, ranging from roughly one bit error per hour per gigabyte of memory to one bit error per century per gigabyte of memory. The Schroeder et al. 2009 study reported a 32% chance that a given computer in their study would suffer from at least one correctable error per year, and provided evidence that most such errors are intermittent hard rather than soft errors and that trace amounts of radioactive material that had gotten into the chip packaging were emitting alpha particles and corrupting the data. A 2010 study at the University of Rochester also gave evidence that a substantial fraction of memory errors are intermittent hard errors. Large scale studies on non-ECC main memory in PCs and laptops suggest that undetected memory errors account for a substantial number of system failures: a 2011 study reported a 1-in-1700 chance per 1.5% of memory tested (extrapolating to an approximately 26% chance for total memory) that a computer would have a memory error every eight months. Security Data remanence Although dynamic memory is only specified and guaranteed to retain its contents when supplied with power and refreshed every short period of time (often 64 ms), the memory cell capacitors often retain their values for a significantly longer time, particularly at low temperatures. Under some conditions most of the data in DRAM can be recovered even if it has not been refreshed for several minutes. This property can be used to circumvent security and recover data stored in the main memory that is assumed to be destroyed at power-down. The computer could be quickly rebooted and the contents of the main memory read out; alternatively, a computer's memory modules could be removed, cooled to prolong data remanence, and then transferred to a different computer to be read out. Such an attack was demonstrated to circumvent popular disk encryption systems, such as the open source TrueCrypt, Microsoft's BitLocker Drive Encryption, and Apple's FileVault. This type of attack against a computer is often called a cold boot attack. Memory corruption Dynamic memory, by definition, requires periodic refresh. Furthermore, reading conventional one-transistor, one-capacitor (1T1C) dynamic memory is a destructive operation, requiring a recharge of the storage cells in the row that has been read. If these processes are imperfect, a read operation can cause soft errors. In particular, there is a risk that some charge can leak between nearby cells, causing the refresh or read of one row to cause a disturbance error in an adjacent or even nearby row. The awareness of disturbance errors dates back to the first commercially available DRAM in the early 1970s (the Intel 1103). Despite the mitigation techniques employed by manufacturers, researchers proved in a 2014 analysis that commercially available DDR3 DRAM chips manufactured in 2012 and 2013 are susceptible to disturbance errors. The associated side effect that led to observed bit flips has been dubbed row hammer. Packaging Memory module Dynamic RAM ICs can be packaged in molded epoxy cases, with an internal lead frame for interconnections between the silicon die and the package leads. The original IBM PC design used ICs, including those for DRAM, packaged in dual in-line packages (DIP), soldered directly to the main board or mounted in sockets. As memory density skyrocketed, the DIP package was no longer practical. 
For convenience in handling, several dynamic RAM integrated circuits may be mounted on a single memory module, allowing installation of 16-bit, 32-bit or 64-bit wide memory in a single unit, without the requirement for the installer to insert multiple individual integrated circuits. Memory modules may include additional devices for parity checking or error correction. Over the evolution of desktop computers, several standardized types of memory module have been developed. Laptop computers, game consoles, and specialized devices may have their own formats of memory modules not interchangeable with standard desktop parts for packaging or proprietary reasons. Embedded DRAM that is integrated into an integrated circuit designed in a logic-optimized process (such as an application-specific integrated circuit, microprocessor, or an entire system on a chip) is called embedded DRAM (eDRAM). Embedded DRAM requires DRAM cell designs that can be fabricated without preventing the fabrication of fast-switching transistors used in high-performance logic, and modification of the basic logic-optimized process technology to accommodate the process steps required to build DRAM cell structures. Versions Since the fundamental DRAM cell and array have maintained the same basic structure for many years, the types of DRAM are mainly distinguished by the many different interfaces for communicating with DRAM chips. Asynchronous DRAM The original DRAM, now known by the retronym asynchronous DRAM, was the first type of DRAM in use. From its origins in the late 1960s, it was commonplace in computing up until around 1997, when it was mostly replaced by synchronous DRAM. In the present day, manufacture of asynchronous RAM is relatively rare. Principles of operation An asynchronous DRAM chip has power connections, some number of address inputs (typically 12), and a few (typically one or four) bidirectional data lines. There are three main active-low control signals: /RAS, the Row Address Strobe. The address inputs are captured on the falling edge of /RAS, and select a row to open. The row is held open as long as /RAS is low. /CAS, the Column Address Strobe. The address inputs are captured on the falling edge of /CAS, and select a column from the currently open row to read or write. /WE, Write Enable. This signal determines whether a given falling edge of /CAS is a read (if high) or write (if low). If low, the data inputs are also captured on the falling edge of /CAS. If high, the data outputs are enabled by the falling edge of /CAS and produce valid output after the internal access time. This interface provides direct control of internal timing: when /RAS is driven low, a /CAS cycle must not be attempted until the sense amplifiers have sensed the memory state, and /RAS must not be returned high until the storage cells have been refreshed. When /RAS is driven high, it must be held high long enough for precharging to complete. Although the DRAM is asynchronous, the signals are typically generated by a clocked memory controller, which limits their timing to multiples of the controller's clock cycle. For completeness, we mention two other control signals which are not essential to DRAM operation, but are provided for the convenience of systems using DRAM: /CS, Chip Select. When this is high, all other inputs are ignored. This makes it easy to build an array of DRAM chips which share the same control signals. 
Just as DRAM internally uses the word lines to select one row of storage cells to connect to the shared bit lines and sense amplifiers, /CS is used to select one row of DRAM chips to connect to the shared control, address, and data lines. /OE, Output Enable. This is an additional signal that (if high) inhibits output on the data I/O pins, while allowing all other operations to proceed normally. In many applications, /OE can be permanently connected low (output enabled whenever /RAS and /CAS are low and /WE is high), but in high-speed applications, judicious use of /OE can prevent bus contention between two DRAM chips connected to the same data lines. For example, it is possible to have two interleaved memory banks sharing the address and data lines, but each having its own /RAS, /CAS, and /OE connections. The memory controller can begin a read from the second bank while a read from the first bank is in progress, using the two /OE signals to permit only one bank's result to appear on the data bus at a time. RAS-only refresh Classic asynchronous DRAM is refreshed by opening each row in turn. The refresh cycles are distributed across the entire refresh interval in such a way that all rows are refreshed within the required interval. To refresh one row of the memory array using /RAS-only refresh (ROR), the following steps must occur: The row address of the row to be refreshed must be applied at the address input pins. /RAS must switch from high to low. /CAS must remain high. At the end of the required amount of time, /RAS must return high. This can be done by supplying a row address and pulsing /RAS low; it is not necessary to perform any /CAS cycles. An external counter is needed to iterate over the row addresses in turn. In some designs, the CPU handled RAM refresh. The Zilog Z80 is perhaps the best known example, as it has an internal row counter R which supplies the address for a special refresh cycle generated after each instruction fetch. In other systems, especially home computers, refresh was handled by the video circuitry as a side effect of its periodic scan of the frame buffer. CAS before RAS refresh For convenience, the counter was quickly incorporated into the DRAM chips themselves. If the /CAS line is driven low before /RAS (normally an illegal operation), then the DRAM ignores the address inputs and uses an internal counter to select the row to open. This is known as /CAS-before-/RAS (CBR) refresh. This became the standard form of refresh for asynchronous DRAM, and is the only form generally used with SDRAM. Hidden refresh Given support of /CAS-before-/RAS refresh, it is possible to deassert /RAS while holding /CAS low to maintain data output. If /RAS is then asserted again, this performs a CBR refresh cycle while the DRAM outputs remain valid. Because data output is not interrupted, this is known as hidden refresh. Hidden refresh is no faster than a normal read followed by a normal refresh, but does maintain the data output valid during the refresh cycle. Page mode DRAM Page mode DRAM is a minor modification to the first-generation DRAM IC interface which improves the performance of reads and writes to a row by avoiding the inefficiency of precharging and opening the same row repeatedly to access a different column. In page mode DRAM, after a row is opened by holding /RAS low, the row can be kept open, and multiple reads or writes can be performed to any of the columns in the row. Each column access is initiated by presenting a column address and asserting /CAS. For reads, after a delay (tCAC), valid data appears on the data out pins, which are held at high-Z before the appearance of valid data. 
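As an illustration only, the following C sketch walks through one page-mode read burst at the pin level. The helper functions (set_address, set_ras_n, set_cas_n, set_we_n, read_data) are hypothetical stand-ins for whatever bus-access mechanism a real memory controller or bit-banged interface would provide; here they merely print a bus trace so the sketch is self-contained. The multiplexed address pins carry the row address on the falling /RAS edge and then successive column addresses on successive falling /CAS edges while the row stays open:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical pin-level helpers; these only print a trace for illustration. */
    static void set_address(uint16_t a) { printf("ADDR  <= 0x%03X\n", a); }
    static void set_ras_n(int v)        { printf("/RAS  <= %d\n", v); }
    static void set_cas_n(int v)        { printf("/CAS  <= %d\n", v); }
    static void set_we_n(int v)         { printf("/WE   <= %d\n", v); }
    static uint8_t read_data(void)      { printf("DATA  => (sampled)\n"); return 0; }

    /* One page-mode read burst: row address on the falling /RAS edge, then
       successive column addresses on successive falling /CAS edges.        */
    static void page_mode_read_burst(uint16_t row, uint16_t col,
                                     uint8_t *out, int count) {
        set_we_n(1);                 /* read cycles only                        */
        set_address(row);
        set_ras_n(0);                /* row address latched, row opens          */
        for (int i = 0; i < count; i++) {
            set_address(col + i);    /* column address on the same pins         */
            set_cas_n(0);            /* column access begins                    */
            out[i] = read_data();    /* data valid after tCAC                   */
            set_cas_n(1);            /* row stays open for the next column      */
        }
        set_ras_n(1);                /* close the row and allow precharge       */
    }

    int main(void) {
        uint8_t buf[4];
        page_mode_read_burst(0x0A3, 0x010, buf, 4);  /* arbitrary row/column values */
        return 0;
    }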
For page-mode writes, the write enable (/WE) signal and the write data are presented along with the column address. Page mode DRAM was in turn later improved with a small modification which further reduced latency. DRAMs with this improvement are called fast page mode DRAMs (FPM DRAMs). In page mode DRAM, the chip does not capture the column address until /CAS is asserted, so the column access time (until data out is valid) begins when /CAS is asserted. In FPM DRAM, the column address can be supplied while /CAS is still deasserted, and the main column access time (tAA) begins as soon as the address is stable. The /CAS signal is then only needed to enable the output (the data out pins are held at high-Z while /CAS is deasserted), so the time from /CAS assertion to data valid (tCAC) is greatly reduced. Fast page mode DRAM was introduced in 1986 and was used with the Intel 80486. Static column is a variant of fast page mode in which the column address does not need to be latched, but rather the address inputs may be changed with /CAS held low, and the data output will be updated accordingly a few nanoseconds later. Nibble mode is another variant in which four sequential locations within the row can be accessed with four consecutive pulses of /CAS. The difference from normal page mode is that the address inputs are not used for the second through fourth /CAS edges but are generated internally starting with the address supplied for the first edge. The predictable addresses let the chip prepare the data internally and respond very quickly to the subsequent /CAS pulses. Extended data out DRAM Extended data out DRAM (EDO DRAM) was invented and patented in the 1990s by Micron Technology, which then licensed the technology to many other memory manufacturers. EDO RAM, sometimes referred to as hyper page mode enabled DRAM, is similar to fast page mode DRAM with the additional feature that a new access cycle can be started while keeping the data output of the previous cycle active. This allows a certain amount of overlap in operation (pipelining), allowing somewhat improved performance. It is up to 30% faster than FPM DRAM, which it began to replace in 1995 when Intel introduced the 430FX chipset with EDO DRAM support. Irrespective of the performance gains, FPM and EDO SIMMs can be used interchangeably in many (but not all) applications. To be precise, EDO DRAM begins data output on the falling edge of /CAS but does not disable the output when /CAS rises again. Instead, it holds the current output valid (thus extending the data output time) even as the DRAM begins decoding a new column address, until either a new column's data is selected by another falling edge of /CAS, or the output is switched off by the rising edge of /RAS (or, less commonly, by a change in another control signal such as /OE). This ability to start a new access even before the system has received the preceding column's data made it possible to design memory controllers which could carry out a column access (in the currently open row) in one clock cycle, or at least within two clock cycles instead of the previously required three. EDO's capabilities were able to partially compensate for the performance lost due to the lack of an L2 cache in low-cost, commodity PCs. More expensive notebooks also often lacked an L2 cache due to size and power limitations, and benefitted similarly. Even for systems with an L2 cache, the availability of EDO memory improved the average memory latency seen by applications over earlier FPM implementations. Single-cycle EDO DRAM became very popular on video cards toward the end of the 1990s. 
It was very low cost, yet nearly as efficient for performance as the far more costly VRAM. Burst EDO DRAM An evolution of EDO DRAM, burst EDO DRAM (BEDO DRAM), could process four memory addresses in one burst, saving an additional three clocks over optimally designed EDO memory. This was done by adding an address counter on the chip to keep track of the next address. BEDO also added a pipeline stage allowing the page-access cycle to be divided into two parts. During a memory-read operation, the first part accessed the data from the memory array to the output stage (second latch). The second part drove the data bus from this latch at the appropriate logic level. Since the data is already in the output buffer, quicker access time is achieved (up to 50% for large blocks of data) than with traditional EDO. Although BEDO DRAM showed additional optimization over EDO, by the time it was available the market had made a significant investment towards synchronous DRAM, or SDRAM. Even though BEDO RAM was superior to SDRAM in some ways, the latter technology quickly displaced BEDO. Synchronous dynamic RAM Synchronous dynamic RAM (SDRAM) significantly revises the asynchronous memory interface, adding a clock (and a clock enable) line. All other signals are received on the rising edge of the clock. The /RAS and /CAS inputs no longer act as strobes, but are instead, along with /WE, part of a 3-bit command. The /OE line's function is extended to a per-byte DQM signal, which controls data input (writes) in addition to data output (reads). This allows DRAM chips to be wider than 8 bits while still supporting byte-granularity writes. Many timing parameters remain under the control of the DRAM controller. For example, a minimum time must elapse between a row being activated and a read or write command. One important parameter must be programmed into the SDRAM chip itself, namely the CAS latency. This is the number of clock cycles allowed for internal operations between a read command and the first data word appearing on the data bus. The Load mode register command is used to transfer this value to the SDRAM chip. Other configurable parameters include the length of read and write bursts, i.e. the number of words transferred per read or write command. The most significant change, and the primary reason that SDRAM has supplanted asynchronous RAM, is the support for multiple internal banks inside the DRAM chip. Using a few bits of bank address that accompany each command, a second bank can be activated and begin reading data while a read from the first bank is in progress. By alternating banks, a single SDRAM device can keep the data bus continuously busy, in a way that asynchronous DRAM cannot. Single data rate synchronous DRAM Single data rate SDRAM (SDR SDRAM or SDR) is the original generation of SDRAM; it made a single transfer of data per clock cycle. Double data rate synchronous DRAM Double data rate SDRAM (DDR SDRAM or DDR) was a later development of SDRAM, used in PC memory beginning in 2000. Subsequent versions are numbered sequentially (DDR2, DDR3, etc.). DDR SDRAM internally performs double-width accesses at the clock rate, and uses a double data rate interface to transfer one half on each clock edge. DDR2 and DDR3 increased this factor to 4× and 8×, respectively, delivering 4-word and 8-word bursts over 2 and 4 clock cycles, respectively. The internal access rate is mostly unchanged (200 million per second for DDR-400, DDR2-800 and DDR3-1600 memory), but each access transfers more data. 
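A back-of-the-envelope sketch of the generational scaling just described; the 200 MHz internal core rate is the figure from the text, and the 64-bit module data bus is an assumption about a typical DIMM:

    #include <stdio.h>

    /* Peak module bandwidth = transfers per second x bus width. The internal
       core rate stays at 200 MHz while the prefetch factor doubles per generation. */
    struct gen { const char *name; int prefetch; };

    int main(void) {
        const double core_mhz = 200.0;     /* internal access rate (from the text) */
        const int bus_bits = 64;           /* typical DIMM data bus, assumed       */
        const struct gen gens[] = { {"DDR-400", 2}, {"DDR2-800", 4}, {"DDR3-1600", 8} };

        for (int i = 0; i < 3; i++) {
            double mts  = core_mhz * gens[i].prefetch;   /* megatransfers per second */
            double mbps = mts * bus_bits / 8.0;          /* megabytes per second     */
            printf("%-10s %6.0f MT/s  %8.0f MB/s\n", gens[i].name, mts, mbps);
        }
        return 0;
    }

For DDR-400 this yields 400 MT/s and 3,200 MB/s, which is where the PC3200 designation mentioned earlier comes from; DDR2-800 and DDR3-1600 double and quadruple that peak figure while the internal access rate stays the same.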
Direct Rambus DRAM Direct RAMBUS DRAM (DRDRAM) was developed by Rambus. First supported on motherboards in 1999, it was intended to become an industry standard, but was outcompeted by DDR SDRAM, making it technically obsolete by 2003. Reduced Latency DRAM Reduced Latency DRAM (RLDRAM) is a high performance double data rate (DDR) SDRAM that combines fast, random access with high bandwidth, mainly intended for networking and caching applications. Graphics RAM Graphics RAMs are asynchronous and synchronous DRAMs designed for graphics-related tasks such as texture memory and framebuffers, found on video cards. Video DRAM Video DRAM (VRAM) is a dual-ported variant of DRAM that was once commonly used to store the frame-buffer in some graphics adaptors. Window DRAM Window DRAM (WRAM) is a variant of VRAM that was once used in graphics adaptors such as the Matrox Millennium and ATI 3D Rage Pro. WRAM was designed to perform better and cost less than VRAM. WRAM offered up to 25% greater bandwidth than VRAM and accelerated commonly used graphical operations such as text drawing and block fills. Multibank DRAM Multibank DRAM (MDRAM) is a type of specialized DRAM developed by MoSys. It is constructed from a number of small memory banks, which are operated in an interleaved fashion, providing bandwidths suitable for graphics cards at a lower cost than memories such as SRAM. MDRAM also allows operations to two banks in a single clock cycle, permitting multiple concurrent accesses to occur if the accesses are independent. MDRAM was primarily used in graphics cards, such as those featuring the Tseng Labs ET6x00 chipsets. Boards based upon this chipset often had the unusual capacity of 2.25 MB because of MDRAM's ability to be implemented more easily in such capacities. A graphics card with 2.25 MB of MDRAM had enough memory to provide 24-bit color at a resolution of 1024×768—a very popular setting at the time. Synchronous graphics RAM Synchronous graphics RAM (SGRAM) is a specialized form of SDRAM for graphics adaptors. It adds functions such as bit masking (writing to a specified bit plane without affecting the others) and block write (filling a block of memory with a single colour). Unlike VRAM and WRAM, SGRAM is single-ported. However, it can open two memory pages at once, which simulates the dual-port nature of other video RAM technologies. Graphics double data rate SDRAM Graphics double data rate SDRAM is a type of specialized DDR SDRAM designed to be used as the main memory of graphics processing units (GPUs). GDDR SDRAM is distinct from commodity types of DDR SDRAM such as DDR3, although they share some core technologies. Its primary characteristics are higher clock frequencies for both the DRAM core and the I/O interface, which provide greater memory bandwidth for GPUs. As of 2020, there are seven successive generations of GDDR: GDDR2, GDDR3, GDDR4, GDDR5, GDDR5X, GDDR6 and GDDR6X. Pseudostatic RAM Pseudostatic RAM (PSRAM or PSDRAM) is dynamic RAM with built-in refresh and address-control circuitry to make it behave similarly to static RAM (SRAM). It combines the high density of DRAM with the ease of use of true SRAM. PSRAM is used in the Apple iPhone and other embedded systems such as the XFlar Platform. Some DRAM components have a self-refresh mode. While this involves much of the same logic that is needed for pseudo-static operation, this mode is often equivalent to a standby mode. 
It is provided primarily to allow a system to suspend operation of its DRAM controller to save power without losing data stored in DRAM, rather than to allow operation without a separate DRAM controller, as is the case with the PSRAMs mentioned above. An embedded variant of PSRAM was sold by MoSys under the name 1T-SRAM. It is a set of small DRAM banks with an SRAM cache in front, making it behave much like a true SRAM. It is used in the Nintendo GameCube and Wii video game consoles. Cypress Semiconductor's HyperRAM is a type of PSRAM supporting a JEDEC-compliant 8-pin HyperBus or Octal xSPI interface.
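Whether refresh is issued by an external memory controller or hidden inside a pseudostatic or self-refreshing device, the bookkeeping is the same: every row must be refreshed once per retention period. Below is a minimal C sketch of how distributed refresh commands are spaced, using the commonly quoted 64 ms retention figure and an assumed row count.

#include <stdio.h>

/* Distributed refresh: spread one refresh per row evenly across the
 * retention period.  64 ms and 8192 rows are typical round figures,
 * assumed here for illustration rather than taken from a datasheet.  */
int main(void)
{
    const double retention_ms = 64.0;
    const int rows = 8192;
    const double interval_us = retention_ms * 1000.0 / rows;

    printf("one refresh command every %.2f us (%d rows per %.0f ms)\n",
           interval_us, rows, retention_ms);
    return 0;
}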
Technology
Volatile memory
null
74591
https://en.wikipedia.org/wiki/Osteopathy
Osteopathy
Osteopathy, unlike osteopathic medicine, which is a branch of the medical profession in the United States, is a pseudoscientific system of alternative medicine that emphasizes physical manipulation of the body's muscle tissue and bones. In most countries, practitioners of osteopathy are not medically trained and are referred to as osteopaths. Osteopathic manipulation is the core set of techniques in osteopathy. Parts of osteopathy, such as craniosacral therapy, have been described by Quackwatch as having no therapeutic value and have been labeled by them as pseudoscience and quackery. The techniques are based on an ideology created by Andrew Taylor Still (1828–1917) which posits the existence of a "myofascial continuity"—a tissue layer that "links every part of the body with every other part". Osteopaths attempt to diagnose and treat what was originally called "the osteopathic lesion", but which is now named "somatic dysfunction", by manipulating a person's bones and muscles. Osteopathic Manipulative Treatment (OMT) techniques are most commonly used to treat back pain and other musculoskeletal issues. Osteopathic manipulation is still included in the curricula of osteopathic physicians or Doctors of Osteopathic Medicine (DO) training in the US. The Doctor of Osteopathic Medicine degree, however, became a medical degree and is no longer a degree of non-medical osteopathy. History The practice of osteopathy () began in the United States in 1874. The profession was founded by Andrew Taylor Still, a 19th-century American physician (MD), Civil War surgeon, and Kansas territorial and state legislator. He lived near Baldwin City, Kansas, during the American Civil War and it was there that he founded the practice of osteopathy. Still claimed that human illness was rooted in problems with the musculoskeletal system, and that osteopathic manipulations could solve these problems by harnessing the body's own self-repairing potential. Still's patients were forbidden from treatment by conventional medicine, as well as from other practices such as drinking alcohol. These practices derive from the belief, common in the early 19th century among proponents of alternative medicine, that the body's natural state tends toward health and inherently contains the capacity to battle any illness. This was opposed to orthodox practitioners, who held that intervention by a physician was necessary to restore health in the patient. Still established the basis for osteopathy, and the division between alternative medicine and traditional medicine had already been a major conflict for decades. The foundations of this divergence may be traced back to the mid-18th century when advances in physiology began to localize the causes and nature of diseases to specific organs and tissues. Doctors began shifting their focus from the patient to the internal state of the body, resulting in an issue labeled as the problem of the "vanishing patient". A stronger movement towards experimental and scientific medicine was then developed. In the perspective of the DO physicians, the sympathy and holism that were integral to medicine in the past were left behind. Heroic medicine became the convention for treating patients, with aggressive practices like bloodletting and prescribing chemicals such as mercury, becoming the forefront in therapeutics. Alternative medicine had its beginnings in the early 19th century, when gentler practices in comparison to heroic medicine began to emerge. 
As each side sought to defend its practice, a schism began to present itself in the medical marketplace, with both practitioners attempting to discredit the other. The osteopathic physicians—those who are now referred to as DO's—argued that the non-osteopathic physicians had an overly mechanistic approach to treating patients, treated the symptoms of disease instead of the original causes, and were blind to the harm they were causing their patients. Other practitioners had a similar argument, labeling osteopathic medicine as unfounded, passive, and dangerous to a disease-afflicted patient. This was the medical environment that pervaded throughout the 19th century, and the setting Still entered when he began developing his idea of osteopathy. After experiencing the loss of his wife and three daughters to spinal meningitis and noting that the current orthodox medical system could not save them, Still may have been prompted to shape his reformist attitudes towards conventional medicine. Still set out to reform the orthodox medical scene and establish a practice that did not so readily resort to drugs, purgatives, and harshly invasive therapeutics to treat a person suffering from ailment, similar to the mindset of the irregulars in the early 19th century. Thought to have been influenced by spiritualist figures such as Andrew Jackson Davis and ideas of magnetic and electrical healing, Still began practicing manipulative procedures that intended to restore harmony in the body. Over the course of the next twenty five years, Still attracted support for his medical philosophy that disapproved of orthodox medicine, and shaped his philosophy for osteopathy. Components included the idea that structure and function are interrelated and the importance of each piece of the body in the harmonious function of its whole. Still sought to establish a new medical school that could produce physicians trained under this philosophy, and be prepared to compete against the orthodox physicians. He established the American School of Osteopathy on 20 May 1892, in Kirksville, Missouri, with twenty-one students in the first class. Still described the foundations of osteopathy in his book "The Philosophy and Mechanical Principles of Osteopathy" in 1892. He named his new school of medicine "osteopathy", reasoning that "the bone, osteon, was the starting point from which [he] was to ascertain the cause of pathological conditions". He would eventually claim that he could "shake a child and stop scarlet fever, croup, diphtheria, and cure whooping cough in three days by a wring of its neck." When the state of Missouri granted the right to award the MD degree, he remained dissatisfied with the limitations of conventional medicine and instead chose to retain the distinction of the DO degree. In the early 20th century, osteopaths across the United States sought to establish law that would legitimize their medical degree to the standard of the modern medic. The processes were arduous, and not without conflict. In some states, it took years for the bills to be passed. Osteopaths were often ridiculed and in some cases arrested, but in each state, osteopaths managed to achieve the legal acknowledgement and action they set out to pursue. In 1898 the American Institute of Osteopathy started the Journal of Osteopathy and by that time four states recognized osteopathy as a profession. 
Practice According to the American Osteopathic Association (AOA), osteopathic manipulative treatment (OMT) is considered to be only one component of osteopathic medicine and may be used alone or in combination with pharmacotherapy, rehabilitation, surgery, patient education, diet, and exercise. OMT techniques are not necessarily unique to osteopathic medicine; other disciplines, such as physical therapy or chiropractic, use similar techniques. Indeed, many DOs do not practice OMT at all, and, over time, DOs in general practice use OMT less and less and instead apply the common medical treatments. One integral tenet of osteopathy is that problems in the body's anatomy can affect its proper functioning. Another tenet is the body's innate ability to heal itself. Many of osteopathic medicine's manipulative techniques are aimed at reducing or eliminating the impediments to proper structure and function so the self-healing mechanism can assume its role in restoring a person to health. Osteopathic medicine defines a concept of health care that embraces the concept of the unity of the living organism's structure (anatomy) and function (physiology). The AOA states that the four major principles of osteopathic medicine are the following: The body is an integrated unit of mind, body, and spirit. The body possesses self-regulatory mechanisms, having the inherent capacity to defend, repair, and remodel itself. Structure and function are reciprocally interrelated. Rational therapy is based on consideration of the first three principles. These principles are not held by Doctors of Osteopathic Medicine to be empirical laws; they serve, rather, as the underpinnings of the osteopathic approach to health and disease. Muscle energy Muscle energy techniques address somatic dysfunction through stretching and muscle contraction. For example, if a person is unable to fully abduct their arm, the treating physician raises the patient's arm near the end of the patient's range of motion, also called the edge of the restrictive barrier. The patient then tries to lower their arm, while the physician provides resistance. This resistance against the patient's motion allows for isotonic contraction of the patient's muscle. Once the patient relaxes, their range of motion increases slightly. The repetition of alternating cycles of contraction and subsequent relaxation help the treated muscle improve its range of motion. Muscle energy techniques are contraindicated in patients with fractures, crush injuries, joint dislocations, joint instability, severe muscle spasms or strains, severe osteoporosis, severe whiplash injury, vertebrobasilar insufficiency, severe illness, and recent surgery. Counterstrain Counterstrain is a system of diagnosis and treatment that considers the physical dysfunction to be a continuing, inappropriate strain reflex, which is inhibited during treatment by applying a position of mild strain in the direction exactly opposite to that of the reflex. After a counterstrain point tender to palpation has been diagnosed, the identified tender point is treated by the osteopathic physician who, while monitoring the tender point, positions the patient such that the point is no longer tender to palpation. This position is held for ninety seconds and the patient is subsequently returned to their normal posture. Most often this position of ease is usually achieved by shortening the muscle of interest. Improvement or resolution of the tenderness at the identified counterstrain point is the desired outcome. 
The use of counterstrain technique is contraindicated in patients with severe osteoporosis, pathology of the vertebral arteries, and in patients who are very ill or cannot voluntarily relax during the procedure. High-velocity, low-amplitude manipulation High velocity, low amplitude (HVLA) manipulation is a technique which employs a rapid, targeted, therapeutic force of brief duration that travels a short distance within the anatomic range of motion of a joint and engages the restrictive barrier in one or more places of motion to elicit release of restriction. The use of HVLA is contraindicated in patients with Down syndrome due to instability of the atlantoaxial joint which may stem from ligamentous laxity, and in pathologic bone conditions such as fracture, history of a pathologic fracture, osteomyelitis, osteoporosis, and severe cases of rheumatoid arthritis. HVLA is also contraindicated in patients with vascular disease such as aneurysms, or disease of the carotid arteries or vertebral arteries. People taking ciprofloxacin or anticoagulants, or who have local metastases should not receive HVLA. Myofascial release Myofascial release is a form of alternative treatment. The practitioners claim to treat skeletal muscle immobility and pain by relaxing contracted muscles. Palpatory feedback by the practitioner is said to be an integral part to achieving a release of myofascial tissues, accomplished by relaxing contracted muscles, increasing circulation and lymphatic drainage, and stimulating the stretch reflex of muscles and overlying fascia. Practitioners who perform myofascial release consider the fascia and its corresponding muscle to be the main targets of their procedure, but assert that other tissue may be affected as well, including other connective tissue. Fascia is the soft tissue component of the connective tissue that provides support and protection for most structures within the human body, including muscle. This soft tissue can become restricted due to psychogenic disease, overuse, trauma, infectious agents, or inactivity, often resulting in pain, muscle tension, and corresponding diminished blood flow. Some osteopaths search for small lumps of tissue, called "Chapman release points" as part of their diagnostic procedure. Lymphatic pump treatment Lymphatic pump treatment (LPT) is a manual technique intended to encourage lymph flow in a person's lymphatic system. The first modern lymphatic pump technique was developed in 1920, although osteopathic physicians used various forms of lymphatic techniques as early as the late 19th century. Relative contraindications for the use of lymphatic pump treatments include fractures, abscesses or localized infections, and severe bacterial infections with body temperature elevated higher than . Effectiveness A 2005 Cochrane review of osteopathic manipulative treatment (OMT) in asthma treatment concluded that there was insufficient evidence that OMT can be used to treat asthma. In 2013, a Cochrane review reviewed six randomized controlled trials which investigated the effect of four types of chest physiotherapy (including OMT) as adjunctive treatments for pneumonia in adults and concluded that "based on current limited evidence, chest physiotherapy might not be recommended as routine additional treatment for pneumonia in adults." Techniques investigated in the study included paraspinal inhibition, rib raising, and myofascial release. 
The review found that OMT did not reduce mortality and did not increase cure rate, but that OMT slightly reduced the duration of hospital stay and antibiotic use. A 2013 systematic review of the use of OMT for treating pediatric conditions concluded that its effectiveness was unproven. In 2014, a systematic review and meta-analysis of 15 randomized controlled trials found moderate-quality evidence that OMT reduces pain and improves functional status in acute and chronic nonspecific low back pain. The same analysis also found moderate-quality evidence for pain reduction for nonspecific low back pain in postpartum women and low-quality evidence for pain reduction in nonspecific low back pain in pregnant women. A 2013 systematic review found insufficient evidence to rate osteopathic manipulation for chronic nonspecific low back pain. In 2011, a systematic review found no compelling evidence that osteopathic manipulation was effective for the treatment of musculoskeletal pain. A 2018 systematic review found that there is no evidence for the reliability or specific efficacy of the techniques used in visceral osteopathy. The New England Journal of Medicine's 4 November 1999 issue concluded that patients with chronic low back pain can be treated effectively with manipulation. The United Kingdom's National Health Service says there is "limited evidence" that osteopathy "may be effective for some types of neck, shoulder or lower limb pain and recovery after hip or knee operations", but that there is no evidence that osteopathy is effective as a treatment for health conditions unrelated to the bones and muscles. Others have concluded that there is insufficient evidence to suggest efficacy for osteopathic-style manipulation in treating musculoskeletal pain. Criticism The American Medical Association listed DOs as "cultists" and deemed MD consultation of DOs unethical from 1923 until 1962. MDs regarded osteopathic treatments as rooted in "pseudoscientific dogma", and although physicians from both branches of medicine have been able to meet on common ground, tensions between the two continue. In 1988, Petr Skrabanek classified osteopathy as one of the "paranormal" forms of alternative medicine, commenting that it had a view of disease which had no meaning outside its own closed system. In a 1995 conference address, the president of the Association of American Medical Colleges, Jordan J. Cohen, pinpointed OMT as a defining difference between MDs and DOs; while he saw no quarrel with the appropriateness of manipulation for musculoskeletal treatment, the difficulty centered on "applying manipulative therapy to treat other systemic diseases"—at that point, Cohen maintained, "we enter the realm of skepticism on the part of the allopathic world." In 1998, Stephen Barrett of Quackwatch said that the worth of manipulative therapy had been exaggerated and that the American Osteopathic Association (AOA) was acting unethically by failing to condemn craniosacral therapy. The article attracted a letter from the law firm representing the AOA accusing Barrett of libel and demanding an apology to avert legal action. In response, Barrett made some slight modifications to his text, while maintaining its overall stance; he queried the AOA's reference to "the body's natural tendency toward good health", and challenged them to "provide [him] with adequate scientific evidence showing how this belief has been tested and demonstrated to be true." 
Barrett has been quoted as saying, "the pseudoscience within osteopathy can't compete with the science". In 1999, Joel D. Howell noted that osteopathy and medicine as practiced by MDs were becoming increasingly convergent. He suggested that this raised a paradox: "if osteopathy has become the functional equivalent of allopathy, what is the justification for its continued existence? And if there is value in therapy that is uniquely osteopathic – that is, based on osteopathic manipulation or other techniques – why should its use be limited to osteopaths?" In 2004, the osteopathic physician Bryan E. Bledsoe, a professor of emergency medicine, wrote disparagingly of the "pseudoscience" at the foundation of OMT. In his view, "OMT will and should follow homeopathy, magnetic healing, chiropractic, and other outdated practices into the pages of medical history." In 2010, Steven Salzberg wrote that OMT was promoted as a special distinguishing element of DO training, but that it amounted to no more than "'extra' training in pseudoscientific practices." It has been suggested that osteopathic physicians may be more likely than MDs to be involved in questionable practices such as orthomolecular therapy and homeopathy. Science writer Harriet Hall stated that DOs trained in the U.S. are Doctors of Osteopathic Medicine and are legally equivalent to MDs. "They must be distinguished from 'osteopaths', members of a less regulated or unregulated profession that is practiced in many countries. Osteopaths get inferior training that can't be compared to that of DOs." Regulation and legal status The osteopathic profession has evolved into two branches: non-physician manual medicine osteopaths and full-scope medical practice osteopathic physicians. The two groups are so distinct that in practice they function as separate professions. The regulation of non-physician manual medicine osteopaths varies greatly between jurisdictions. In Australia, Denmark, New Zealand, Switzerland, the UAE and the UK, non-physician manual medicine osteopaths are regulated by statute; their practice of osteopathy requires registration with the relevant regulatory authority. The Osteopathic International Alliance (OIA) publishes a country guide that details registration and practice rights, while the International Osteopathic Association maintains a list of all accredited osteopathic colleges. Several international and national organizations are involved in osteopathic education and political advocacy. The OIA is an international body that oversees national osteopathic and osteopathic medical associations, statutory regulators, and universities or medical schools offering osteopathic and osteopathic medical education. The following sections describe the legal status of osteopathy and osteopathic medicine in each country listed. Australia A majority of osteopaths work in private practice, with osteopaths working within aged care, traffic and workers compensation schemes or co-located with medical practitioners. Osteopaths are not considered physicians or medical doctors in Australia, rather as allied health professionals offering private practice care. The majority of private health insurance providers cover treatment performed by osteopaths, as do many government based schemes such as veteran's affairs or workers compensations schemes In addition, treatment performed by osteopaths is covered by the public healthcare system in Australia (Medicare) under the Chronic Disease Management plan. 
Osteopathy Australia (formerly the Australian Osteopathic Association) is a national organization representing the interests of Australian osteopaths, osteopathy as a profession in Australia, and consumers' right to access osteopathic services. Founded in 1955 in Victoria, the Australian Osteopathic Association became a national body in 1991 and became Osteopathy Australia in 2014; it is a member of the Osteopathic International Alliance. The Osteopathy Board of Australia is part of the Australian Health Practitioner Regulation Agency, which is the regulatory body for all recognized health care professions in Australia. The Osteopathy Board of Australia is separate from the Medical Board of Australia, which is the governing body that regulates medical practitioners. Osteopaths trained internationally may be eligible for registration in Australia, depending on their level of training and following relevant competency assessment. Students training to be osteopaths in Australia must study in an approved program in an accredited university. Current accredited courses are either four or five years in length. To achieve accreditation, university courses must demonstrate the capabilities of graduates. The capabilities are based on the CanMEDS competency framework that was developed by the Royal College of Physicians and Surgeons of Canada. A 2018 large-scale study, representing a response rate of 49.1% of the profession, indicated that the average age of the participants was 38.0 years, with 58.1% being female and the majority holding a Bachelor or higher degree qualification in osteopathy. The study also estimated that a total of 3.9 million patients consult osteopaths every year in Australia. Most osteopaths work in referral relationships with a range of other health services, managing patients primarily with musculoskeletal disorders. Canada In Canada, the titles "osteopath" and "osteopathic physician" are protected in some provinces by the medical regulatory college for physicians and surgeons. As of 2011, there were approximately 20 U.S.-trained osteopathic physicians, all of whom held a Doctor of Osteopathic Medicine degree, practicing in all of Canada. As of 2014, no training programs have been established for osteopathic physicians in Canada. The non-physician manual practice of osteopathy is practiced in most Canadian provinces. As of 2014, manual osteopathic practice is not a government-regulated health profession in any province, and those interested in pursuing osteopathic studies must register in private osteopathy schools. It is estimated that there are over 1,300 osteopathic manual practitioners in Canada, most of whom practice in Quebec and Ontario. Some sources indicate that there are between 1,000 and 1,200 osteopaths practicing in the province of Quebec, and although this number might seem quite elevated, many osteopathy clinics are adding patients to waiting lists due to a shortage of osteopaths in the province. Quebec Beginning in 2009, Université Laval in Quebec City was working with the Collège d'études ostéopathiques in Montreal on a project to implement a professional osteopathy program consisting of a bachelor's degree followed by a professional master's degree in osteopathy as manual therapy. However, due to the many doubts concerning the scientific credibility of osteopathy from the university's faculty of medicine, the program developers decided to abandon the project in 2011, after years of discussion, planning, and preparation for the program implementation. 
There was some controversy with the final decision of the university's committee regarding the continuous undergraduate and professional graduate program in osteopathy because the Commission of studies, which is in charge of evaluating new training programs offered by the university, had judged that the program had its place at Université Laval before receiving the unfavourable support decision from the faculty of medicine. Had the program been implemented, Université Laval would have been the first university institution in Quebec to offer a professional program in osteopathy as a manual therapy. Egypt and the Middle East Hesham Khalil introduced osteopathy in the Middle East at a local physical therapy conference in Cairo, Egypt in 2005 with a lecture titled "The global Osteopathic Concept / Holistic approach in Somatic Dysfunction". Since then he has toured the Middle East to introduce osteopathy in other Middle Eastern and North African countries, including Sudan, Jordan, Saudi Arabia, Qatar, UAE, Kuwait and Oman. In December 2007 the first Workshop on Global osteopathic approach was held at the Nasser Institute Hospital for Research and Treatment, sponsored by the Faculty of Physical Therapy, University of Cairo, Egypt. On 6 August 2010, the Egyptian Osteopathic Society (OsteoEgypt) was founded. OsteoEgypt promotes a two-tier model of osteopathy in Egypt and the Middle East. The event was timed to coincide with the birthday of A.T. Still. European Union There is no European regulatory authority for the practice of osteopathy or osteopathic medicine within the European Union; each country has its own rules. The UK's General Osteopathic Council, a regulatory body set up under the country's Osteopaths Act 1993, issued a position paper on European regulation of osteopathy in 2005. Belgium Since the early 1970s, osteopaths have been practicing in Belgium, during which time several attempts have been made to obtain an official status of health care profession. In 1999, a law was voted (the 'Colla-Law') providing a legal framework for osteopathy, amongst three other non-conventional medical professions. In 2011, the former Belgian Minister Onkelinx set up the Chambers for Non-Conventional Medicines and the Joint Commission provided for in the "Colla-law" (1999). Their goal was to discuss and reach an agreement between the various medical professions to rule on these practices. In February 2014, only one practice, homeopathy, received its recognition. The others, including osteopathy, remain unresolved. Finland Osteopathy has been a recognized health profession since 1994 in Finland. It is regulated by law along with chiropractors and naprapaths. These professions require at least a four-year education. Currently there are three osteopathic schools in Finland, one which is public and two private ones. France Osteopathy is a governmentally recognized profession and has title protection, . The most recent decree regarding osteopathy was enacted in 2014. Germany Germany has both osteopathy and osteopathic medicine. There is a difference in the osteopathic education between non-physician osteopaths, physiotherapists, and medical physicians. Physiotherapists are a recognized health profession and can achieve a degree of "Diploma in Osteopathic Therapy (D.O.T.)". Non-physician osteopaths are not medically licensed. They have an average total of 1200 hours of training, roughly half being in manual therapy and osteopathy, with no medical specialization before they attain their degree. 
Non-physician osteopaths in Germany officially work under the "Heilpraktiker" law. Heilpraktiker is a separate profession within the health care system. There are many schools of osteopathy in Germany; most are moving toward national recognition although such recognition does not currently exist. In Germany, there are state level rules governing which persons (non-physicians) may call themselves osteopaths. Portugal Osteopathy is a governmentally recognized health profession and the title of Osteopath is protected by Law (Act 45/2003, of 22 October, and Act 71/2013, of 2 September). Currently there are eight faculties that teach the four-year degree course of osteopathy (BSc Hon in Osteopathy). India Sri Sri University offers BSc and MSc Osteopathy programmes. New Zealand The practice of osteopathy is regulated by law, under the terms of the Health Practitioners Competence Assurance Act 2003 which came into effect on 18 September 2004. Under the act, it is a legal requirement to be registered with the Osteopathic Council of New Zealand (OCNZ), and to hold an annual practicing certificate issued by them, in order to practice as an osteopath. Each of the fifteen health professions regulated by the HPCA act work within the "Scope of Practice" determined and published by its professional board or council. Osteopaths in New Zealand are not fully licensed physicians. In New Zealand, in addition to the general scope of practice, osteopaths may also hold the Scope of Practice for Osteopaths using western medical acupuncture and related needling techniques. In New Zealand a course is offered at the Unitec Institute of Technology (Unitec). Australasian courses consist of a bachelor's degree in clinical science (osteopathy) followed by a master's degree. The Unitec double degree programme is the OCNZ prescribed qualification for registration in the scope of practice: Australian qualifications accredited by the Australian and New Zealand Osteopathic Council are also prescribed qualifications. Osteopaths registered and in good standing with the Australian Health Practitioner Regulation Agency – Osteopathy Board of Australian are eligible to register in New Zealand under the mutual recognition system operating between the two countries. Graduates from programs in every other country are required to complete an assessment procedure. The scope of practice for US-trained osteopathic physicians is unlimited on an exceptions basis. Full licensure to practice medicine is awarded on an exceptions basis following a hearing before the licensing authorities in New Zealand. Both the Medical Council of New Zealand and the OCNZ regulate osteopathic physicians in New Zealand. Currently, the country has no recognized osteopathic medical schools. United Kingdom The first school of osteopathy was established in London in 1917 by John Martin Littlejohn, a pupil of A.T. Still, who had been Dean of the Chicago College of Osteopathic Medicine. After many years of existing outside the mainstream of health care provision, the osteopathic profession in the UK was accorded formal recognition by Parliament in 1993 by the Osteopaths Act. This legislation now provides the profession of osteopathy the same legal framework of statutory self-regulation as other healthcare professions such as medicine and dentistry. This Act provides for "protection of title". 
A person who expressly or implicitly describes themself as an osteopath, osteopathic practitioner, osteopathic physician, osteopathist, osteotherapist, or any kind of osteopath is guilty of an offence unless they are registered as an osteopath. The General Osteopathic Council (GOsC) regulates the practice of osteopathy under the terms of the Act. Under British law, an osteopath must be registered with the GOsC to practice in the United Kingdom. More than 5,300 osteopaths were registered in the UK as of 2021. The General Osteopathic Council has a statutory duty to promote, develop and regulate the profession of osteopathy in the UK. Its duty is to protect the interests of the public by ensuring that all osteopaths maintain high standards of safety, competence and professional conduct throughout their professional lives. In order to be registered with the General Osteopathic Council an osteopath must hold a recognized qualification that meets the standards as set out by law in the GOsC's Standard of Practice. Osteopathic medicine is regulated by the General Osteopathic Council, (GOsC) under the terms of the Osteopaths Act 1993 and statement from the GMC. Practising osteopaths will usually have a BS or MSc in osteopathy. Accelerated courses leading to accreditation are available for those with a medical degree and physiotherapists. The London College of Osteopathic Medicine teaches osteopathy only to those who are already physicians. United States An osteopathic physician in the United States is a physician trained in the full scope of medical practice, with a degree of Doctor of Osteopathic Medicine (DO). With the increased internationalization of osteopathy, the American Osteopathic Association (AOA) recommended in 2010 that the older terms osteopathy and osteopath be reserved for "informal or historical discussions and for referring to previously named entities in the profession and foreign-trained osteopaths", and replaced in the US by osteopathic medicine and osteopathic physician. The American Association of Colleges of Osteopathic Medicine made a similar recommendation. Those trained only in manual osteopathic treatment, generally to relieve muscular and skeletal conditions, are referred to as osteopaths, and are not permitted to use the title DO in the United States to avoid confusion with osteopathic physicians.
Biology and health sciences
Alternative and traditional medicine
Health
74634
https://en.wikipedia.org/wiki/Aileron
Aileron
An aileron (French for "little wing" or "fin") is a hinged flight control surface usually forming part of the trailing edge of each wing of a fixed-wing aircraft. Ailerons are used in pairs to control the aircraft in roll (or movement around the aircraft's longitudinal axis), which normally results in a change in flight path due to the tilting of the lift vector. Movement around this axis is called 'rolling' or 'banking'. Considerable controversy exists over credit for the invention of the aileron. The Wright brothers and Glenn Curtiss fought a years-long legal battle over the Wright patent of 1906, which described a method of wing-warping to achieve lateral control. The brothers prevailed in several court decisions which found that Curtiss's use of ailerons violated the Wright patent. Ultimately, the First World War compelled the U.S. Government to legislate a legal resolution. A much earlier aileron concept was patented in 1868 by British scientist Matthew Piers Watt Boulton, based on his 1864 paper On Aërial Locomotion. History The name "aileron", from French, meaning "little wing", also refers to the extremities of a bird's wings used to control their flight. It first appeared in print in the 7th edition of Cassell's French-English Dictionary of 1877, with its lead meaning of "small wing". In the context of powered airplanes it appears in print about 1908. Prior to that, ailerons were often referred to as rudders, their older technical sibling, with no distinction between their orientations and functions, or more descriptively as horizontal rudders (in French, gouvernails horizontaux). Among the earliest printed aeronautical use of 'aileron' was that in the French aviation journal L'Aérophile of 1908. Ailerons had more or less completely supplanted other forms of lateral control, such as wing warping, by about 1915, well after the function of the rudder and elevator flight controls had been largely standardised. Although there were previously many conflicting claims over who first invented the aileron and its function, i.e., lateral or roll control, the flight control device was invented and described by the British scientist and metaphysicist Matthew Piers Watt Boulton in his 1864 paper On Aërial Locomotion. He was the first to patent an aileron control system in 1868. Boulton's description of his lateral flight control system was "the first record we have of appreciation of the necessity for active lateral control as distinguished from [passive lateral stability].... With this invention of Boulton's we have the birth of the present-day three torque method of airborne control" as was praised by Charles Manly. This was also endorsed by C.H. Gibbs-Smith. Boulton's British patent, No. 392 of 1868, issued about 35 years before ailerons were "reinvented" in France, became forgotten and lost from sight until after the flight control device was in general use. Gibbs-Smith stated on several occasions that if the Boulton patent had been revealed at the time of the Wright brothers' legal filings, they might not have been able to claim priority of invention for the lateral control of flying machines. The fact that the Wright brothers were able to gain a patent in 1906 did not invalidate Boulton's lost and forgotten invention. 
Ailerons were not used on manned aircraft until they were employed on Robert Esnault-Pelterie's glider in 1904, although in 1871 a French military engineer, Charles Renard, built and flew an unmanned glider incorporating ailerons on each side (which he termed 'winglets'), activated by a Boulton-style pendulum controlled single-axis autopilot device. The pioneering U.S. aeronautical engineer Octave Chanute published descriptions and drawings of the Wright brothers' 1902 glider in the leading aviation periodical of the day, L'Aérophile, in 1903. This prompted Esnault-Pelterie, a French military engineer, to build a Wright-style glider in 1904 that used ailerons in lieu of wing warping. The French journal L'Aérophile then published photos of the ailerons on Esnault-Pelterie's glider which were included in his June 1905 article, and its ailerons were widely copied afterward. The Wright brothers used wing warping instead of ailerons for roll control on their glider in 1902, and about 1904 their Flyer II was the only aircraft of its time able to do a coordinated banked turn. During the early years of powered flight the Wrights had better roll control on their designs than airplanes that used movable surfaces. From 1908, as aileron designs were refined it became clear that ailerons were much more effective and practical than wing warping. Ailerons also had the advantage of not weakening the airplane's wing structure as did the wing warping technique, which was one reason for Esnault-Pelterie's decision to switch to ailerons. By 1911 most biplanes used ailerons rather than wing warping—by 1915 ailerons had become almost universal on monoplanes as well. The U.S. Government, frustrated by the lack of its country's aeronautical advances in the years leading up to World War I, enforced a patent pool effectively putting an end to the Wright brothers patent war. The Wright company quietly changed its aircraft flight controls from wing warping to the use of ailerons at that time as well. Other early aileron designers Others who were previously thought to have been the first to introduce ailerons included: American John J. Montgomery included spring-loaded trailing edge flaps on his second glider (1885): these were operable by the pilot as ailerons. In 1886 his third glider design used rotation of the entire wing rather than just a trailing edge portion for roll control. By his own accounts all of these changes in addition to his use of an elevator for pitch control provided "entire control of the machine in the wind, preventing it from upsetting." New Zealander Richard Pearse reputedly made a powered flight in a monoplane that included small ailerons as early as 1902, but his claims are controversial—and sometimes inconsistent—and, even by his own reports, his aircraft were not well controlled. In 1906 Alberto Santos-Dumont's 14-bis was one of the earliest (if not the earliest) engine-powered, aileron-equipped aircraft to fly, as it was modified to have added octagonal-planform interplane ailerons in its outermost wing bays on November 12 of that year for its concluding flight sessions at the Chateau de Bagatelle's grounds; but those roll control surfaces were not true "trailing-edge" ailerons hinged directly to the wing panels' framework—for the 14-bis, these were instead pivoted around a horizontal axis centred on the forward outboard interplane struts, and protruded forward past the wings' leading edges - said to be very much like those on Robert Esnault-Pelterie's 1904 biplane glider design. 
On May 18, 1908, engineer and aircraft designer Frederick Baldwin, a member of the Aerial Experiment Association headed by Alexander Graham Bell, flew their first aileron-controlled aircraft, the AEA White Wing, which was later copied by the U.S. aeronautical pioneer Glenn Curtiss the same year, with the AEA June Bug. Henry Farman's ailerons on his 1909 Farman III were the first to resemble ailerons on modern aircraft as they were hinged directly to the wing planform structure, and thus were viewed as having a reasonable claim as the ancestor of the modern-day aileron. Wingtip ailerons were also used on the contemporary Bleriot VIII—the first known flightworthy aircraft to use the joystick and rudder bar pioneering form of modern flight controls in a single airframe, and the 1911-vintage Curtiss Model D pusher biplane had spanwise rectangular interplane ailerons of a similar nature to those on the final form of the Santos-Dumont 14-bis, but mounted on, and pivoted from the outer rear interplane struts instead. Another very late contestant included the American, William Whitney Christmas, who claimed to have invented the aileron in the 1914 patent for what would become the Christmas Bullet which was built in 1918. Both "Bullet" prototypes crashed during their first "flights" when their wings broke off in flight due to flutter as a result of being deliberately unbraced. Patents and lawsuits The Wright Brothers' Ohio patent attorney Henry Toulmin filed an expansive patent application and on May 22, 1906, the brothers were granted U.S. Patent 821393. The patent's importance lay in its claim of a new and useful method of controlling an airplane. The patent application included the claim for the lateral control of aircraft flight that was not limited to wing warping, but through any manipulation of the "....angular relations of the lateral margins of the airplanes [wings].... varied in opposite directions". Thus the patent explicitly stated that other methods besides wing-warping could be used for adjusting the outer portions of an airplane's wings to different angles on its right and left sides to achieve lateral roll control. John J. Montgomery was granted U.S. Patent 831173 at nearly the same time for his methods of wing warping. Both the Wright Brothers patent and Montgomery's patent were reviewed and approved by the same patent examiner at the United States Patent Office, William Townsend. At the time Townsend indicated that both methods of wing warping were invented independently and were sufficiently different to each justify their own patent award. Multiple U.S. court decisions favoured the expansive Wright patent, which the Wright Brothers sought to enforce with licensing fees starting from $1,000 per airplane, and said to range up to $1,000 per day. According to Louis S. Casey, a former curator of the Smithsonian Air & Space Museum in Washington, D.C., and other researchers, due to the patent they had received the Wrights stood firmly on the position that all flying using lateral roll control, anywhere in the world, would only be conducted under license by them. The Wrights subsequently became embroiled with numerous lawsuits they launched against aircraft builders who used lateral flight controls, and the brothers were consequently blamed for playing "...a major role in the lack of growth and aviation industry competition in the United States comparative to other nations like Germany leading up to and during World War I". 
Years of protracted legal conflict ensued with many other aircraft builders until the U.S. entered World War I, when the government imposed a legislated agreement among the parties which resulted in royalty payments of 1% to the Wrights. Ongoing controversy There are still conflicting claims today over who first invented the aileron. Other 19th century engineers and scientists, including Charles Renard, Alphonse Pénaud, and Louis Mouillard, had described similar flight control surfaces. Another technique for lateral flight control, wing warping, was also described or experimented with by several people including Jean-Marie Le Bris, John Montgomery, Clement Ader, Edson Gallaudet, D.D. Wells, and Hugo Mattullath. Aviation historian C.H. Gibbs-Smith wrote that the aileron was "....one of the most remarkable inventions... of aeronautical history, which was immediately lost sight of". In 1906 the Wright brothers obtained a patent not for the invention of an airplane (which had existed for a number of decades in the form of gliders) but for the invention of a system of aerodynamic control that manipulated a flying machine's surfaces, including lateral flight control, although rudders, elevators and ailerons had previously been invented. Flight dynamics Pairs of ailerons are typically interconnected so that when one is moved downward, the other is moved upward: the down-going aileron increases the lift on its wing while the up-going aileron reduces the lift on its wing, producing a rolling (also called 'banking') moment about the aircraft's longitudinal axis (which extends from the nose to the tail of an airplane). Ailerons are usually situated near the wing tip, but may sometimes also be situated nearer the wing root. Modern airliners may also have a second pair of ailerons on their wings, with the two positions distinguished by the terms 'outboard aileron' and 'inboard aileron'. An unwanted side effect of aileron operation is adverse yaw—a yawing moment in the opposite direction to the roll. Using the ailerons to roll an aircraft to the right produces a yawing motion to the left. As the aircraft rolls, adverse yaw is caused partly by the change in drag between the left and right wing. The rising wing generates increased lift, which causes increased induced drag. The descending wing generates reduced lift, which causes reduced induced drag. Profile drag caused by the deflected ailerons may add further to the difference, along with changes in the lift vectors as one rotates back while the other rotates forward. In a coordinated turn, adverse yaw is effectively compensated by the use of the rudder, which results in a sideforce on the vertical tail that opposes the adverse yaw by creating a favorable yawing moment. Another method of compensation is 'differential ailerons', which have been rigged so that the down-going aileron deflects less than the up-going one. In this case the opposing yaw moment is generated by a difference in profile drag between the left and right wingtips. Frise ailerons accentuate this profile drag imbalance by protruding beneath the wing of an upward-deflected aileron, most often by being hinged slightly behind the leading edge and near the bottom of the surface, with the lower section of the aileron surface's leading edge protruding slightly below the wing's undersurface when the aileron is deflected upwards, substantially increasing profile drag on that side. Ailerons may also be designed to use a combination of these methods. 
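The drag imbalance behind adverse yaw can be put into rough numbers with the standard induced-drag relation CDi = CL^2 / (pi * e * AR): the rising wing, which carries the extra lift, also carries extra induced drag. The C sketch below uses invented values for the wing and for the lift change produced by the ailerons; it shows only the sign and rough scale of the effect, not any particular aircraft.

#include <stdio.h>

/* Adverse yaw in rough numbers: the wing whose aileron deflects down
 * gains lift and therefore induced drag; the other wing loses both.
 * Wing parameters and the aileron-induced lift change are invented.  */
int main(void)
{
    const double pi = 3.14159265358979;
    const double e = 0.8;          /* assumed span efficiency factor   */
    const double ar = 7.0;         /* assumed aspect ratio             */
    const double cl_trim = 0.5;    /* lift coefficient before rolling  */
    const double dcl = 0.1;        /* lift change from aileron input   */

    double cl_rising  = cl_trim + dcl;   /* aileron down, wing rises   */
    double cl_falling = cl_trim - dcl;   /* aileron up, wing drops     */

    double cdi_rising  = cl_rising  * cl_rising  / (pi * e * ar);
    double cdi_falling = cl_falling * cl_falling / (pi * e * ar);

    printf("rising wing : CL = %.2f, CDi = %.4f\n", cl_rising,  cdi_rising);
    printf("falling wing: CL = %.2f, CDi = %.4f\n", cl_falling, cdi_falling);
    printf("extra drag on the rising wing, %.4f, yaws the nose away "
           "from the roll\n", cdi_rising - cdi_falling);
    return 0;
}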
With ailerons in the neutral position, the wing on the outside of the turn develops more lift than the opposite wing due to the variation in airspeed across the wing span, which tends to cause the aircraft to continue to roll. Once the desired angle of bank (degree of rotation about the longitudinal axis) has been obtained, the pilot uses opposite aileron to prevent the angle of bank from increasing due to this variation in lift across the wing span. This minor opposite use of the control must be maintained throughout the turn. The pilot also uses a slight amount of rudder in the same direction as the turn to counteract adverse yaw and to produce a "coordinated" turn wherein the fuselage is parallel to the flight path. A simple gauge on the instrument panel called the slip indicator, also known as "the ball", indicates when this coordination is achieved. Aileron components Horns and aerodynamic counterbalances Particularly on larger or faster aircraft, control forces may be extremely heavy. Borrowing a discovery from boats that extending a control surface's area forward of the hinge lightens the forces needed first appeared on ailerons during World War I when ailerons were extended beyond the wingtip and provided with a horn ahead of the hinge. Known as overhung ailerons, possibly the best known examples are the Handley Page Type O (first flight 17 December 1915), Sopwith Snipe, Fokker Dr.I and Fokker D.VII. Later examples brought the counterbalance in line with the wing to improve control and reduce drag. This is seen less often now, due to the Frise type aileron which provides the same benefit. Trim tabs Trim tabs are small movable sections resembling scaled down ailerons located at or near the trailing edge of the aileron. On most propeller powered aircraft, the rotation of the propeller(s) induces a counteracting roll movement due to Newton's third law of motion, in that every action has an equal and opposite reaction. To relieve the pilot of having to provide continuous pressure on the stick in one direction (which causes fatigue) trim tabs are provided to adjust or trim out the pressure needed against any unwanted movement. The tab itself is deflected in relation to the aileron, causing the aileron to move in the opposite direction. Trim tabs come in two forms, adjustable and fixed. A fixed trim tab is manually bent to the required amount of deflection, while the adjustable trim tab can be controlled from within the cockpit so that different power settings or flight attitudes can be compensated for. Some large aircraft from the 1950s (including the Canadair Argus) used free floating control surfaces that the pilot controlled only through the deflection of trim tabs, in which case additional tabs were also provided to fine-tune the control to provide straight and level flight. Spades Spades are flat metal plates, usually attached to the aileron lower surface, ahead of the aileron hinge, by a lever arm. They reduce the force needed by the pilot to deflect the aileron and are often seen on aerobatic aircraft. As the aileron is deflected upward, the spade produces a downward aerodynamic force, which tends to rotate the whole assembly so as to further deflect the aileron upward. The size of the spade (and its lever arm) determines how much force the pilot needs to apply to deflect the aileron. A spade works in the same manner as a horn but is more efficient due to the longer moment arm. 
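The force-lightening effect of horns and spades is a matter of moments about the hinge line: aerodynamic load acting on area ahead of the hinge partly cancels the load on the area behind it, so the pilot supplies only the difference. Here is a minimal C sketch with an assumed dynamic pressure and invented areas and moment arms; the uniform-loading model is a simplification made purely for illustration.

#include <stdio.h>

/* Hinge-moment bookkeeping for a control surface with a balance area
 * (horn or spade) ahead of the hinge line.  The uniform-loading model
 * and every number below are assumptions made for illustration only.  */
int main(void)
{
    const double q = 1000.0;       /* dynamic pressure, N/m^2            */
    const double area_aft = 0.30;  /* surface area behind the hinge, m^2 */
    const double arm_aft  = 0.12;  /* its effective moment arm, m        */
    const double area_bal = 0.05;  /* horn/spade area ahead of the hinge */
    const double arm_bal  = 0.20;  /* its moment arm, m                  */

    double opposing  = q * area_aft * arm_aft;  /* resists deflection    */
    double assisting = q * area_bal * arm_bal;  /* helps the deflection  */

    printf("hinge moment, no balance area : %.1f N*m\n", opposing);
    printf("hinge moment, with balance    : %.1f N*m\n", opposing - assisting);
    return 0;
}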
Mass balance weights To increase the speed at which control surface flutter (aeroelastic flutter) might become a risk, the center of gravity of the control surface is moved towards the hinge-line for that surface. To achieve this, lead weights may be added to the front of the aileron. In some aircraft the aileron construction may be too heavy to allow this system to work without an excessive increase in the weight of the aileron. In this case, the weight may be added to a lever arm to move the weight well out in front to the aileron body. These balance weights are tear drop shaped (to reduce drag), which make them appear quite different from spades, although both project forward and below the aileron. In addition to reducing the risk of flutter, mass balances also reduce the stick forces required to move the control surface in maneuvers. Aileron fences Some aileron designs, particularly when fitted on swept wings, include fences like wing fences flush with their inboard plane, in order to suppress some of the spanwise component of the airflow running on the top of the wing, which tends to disrupt the laminar flow above the aileron, when deflected downwards. Types of ailerons Single acting ailerons Used during aviation's pre-war "pioneer era" and into the early years of the First World War, these ailerons were each controlled by a single cable, which pulled the aileron up. When the aircraft was at rest, the ailerons hung vertically down. This type of aileron was used on the Farman III biplane 1909 and the Short 166. A "reverse" version of this, utilizing wing-warping, existed on the later version of the Santos-Dumont Demoiselle, which only warped the wingtips "downward". One of the disadvantages of this setup was a greater tendency to yaw than even with basic interconnected ailerons. During the 1930s a number of light aircraft used single acting controls but used springs to return the ailerons to their neutral positions when the stick was released. Wingtip ailerons Used on the first-ever airframe to have the combination of "joystick/rudder-bar" controls that directly led to the modern flight control system, the Blériot VIII in 1908, some designs of early aircraft used "wingtip" ailerons, where the entire wingtip was rotated to achieve roll control as a separate, pivoting roll-control surface—the AEA June Bug used a form of these, with both the experimental German Fokker V.1 of 1916 and the earlier versions of the Junkers J 7 all-duralumin metal demonstrator monoplane using them—the J 7 led directly to the Junkers D.I all-duralumin metal German fighter design of 1918, which had conventionally hinged ailerons. The main problem with this type of aileron is the dangerous tendency to stall if used aggressively, especially if the aircraft is already in danger of stalling, hence the use primarily on prototypes, and their replacement on production aircraft with more conventional ailerons. Frise ailerons Engineer Leslie George Frise (1897–1979) of the Bristol Aeroplane Company developed an aileron shape that is pivoted at about its 25 to 30% chord line and near its bottom surface , in order to decrease stick forces as aircraft became faster during the 1930s. When the aileron is deflected up (to make its wing go down), the leading edge of the aileron starts to protrude below the underside of the wing into the airflow beneath the wing. The moment of the leading edge in the airflow helps to move up the trailing edge, which decreases the stick force. 
The down moving aileron also adds energy to the boundary layer. The edge of the aileron directs air flow from the underside of the wing to the upper surface of the aileron, thus creating a lifting force added to the lift of the wing. This reduces the needed deflection of the aileron. Both the Canadian Fleet Model 2 biplane of 1930 and the 1938 popular US Piper J-3 Cub monoplane possessed Frise ailerons as designed and helped introduce them to a wide audience. A claimed benefit of the Frise aileron is the ability to counteract adverse yaw. To do so, the leading edge of the aileron has to be sharp or bluntly rounded, which adds significant drag to the upturned aileron and helps counterbalance the yaw force created by the other aileron turned down. This can add some unpleasant, nonlinear effect and/or potentially dangerous aerodynamic vibration (flutter). Adverse yaw moment is basically countered by aircraft yaw stability and also by the use of differential aileron movement. The Frise-type aileron also forms a slot, so air flows smoothly over the lowered aileron, making it more effective at high angles of attack. Frise-type ailerons may also be designed to function differentially. Like the differential aileron, the Frise-type aileron does not eliminate adverse yaw entirely. Coordinated rudder application is still needed when ailerons are applied. Differential ailerons By careful design of the mechanical linkages, the up aileron can be made to deflect more than the down aileron (e.g., US patent 1,565,097). This helps reduce the likelihood of a wing tip stall when aileron deflections are made at high angles of attack. In addition, the consequent differential in drag reduces adverse yaw (as also discussed above). The idea is that the loss of lift associated with the up aileron carries no penalty while the increase in lift associated with the down aileron is minimized. The rolling couple on the aircraft is always the difference in lift between the two wings. A designer at de Havilland invented a simple and practical linkage and their de Havilland Tiger Moth classic British biplane became one of the best-known aircraft, and one of the earliest, to use differential ailerons. Roll control without ailerons Wing warping On the earliest Pioneer Era aircraft, such as the Wright Flyer and the later, 1909-origin Blériot XI and Etrich Taube, lateral control was effected by twisting the outboard portion of the wing so as to increase or decrease lift by changing the angle of attack. This had the disadvantages of stressing the structure, being heavy on the controls, and of risking stalling the side with the increased angle of attack during a maneuver. By 1916, most designers had abandoned wing warping in favor of ailerons. Researchers at NASA and elsewhere have been taking a second look at wing warping again, although under new names. The NASA version is the X-53 Active Aeroelastic Wing while the United States Air Force tested the Adaptive Compliant Wing. Differential spoilers Spoilers are devices that when extended into the airflow over a wing, disrupt the airflow and reduce the amount of lift generated. Many modern aircraft designs, especially jet aircraft, use spoilers in lieu of, or to supplement ailerons, such as the F4 Phantom II and Northrop P-61 Black Widow, which had almost full width flaps (there were very small conventional ailerons at the wingtips as well). Roll induced by rudder All aircraft with dihedral have some form of yaw-roll coupling to promote stability. 
Common trainers like the Cessna 152/172 series can be roll controlled with rudder alone. The rudder of the Boeing 737 has more roll authority over the aircraft than the ailerons at high angles of attack. This led to two notable accidents when the rudder jammed in the fully deflected position causing rollovers (see Boeing 737 rudder issues). Some aircraft such as the Fokker Spin and model gliders lack any type of lateral control. Those aircraft use a higher amount of dihedral than conventional aircraft. Deflecting the rudder gives yaw and a lot of differential wing lift, giving a yaw induced roll moment. This type of control system is most commonly seen in the Flying Flea family of small aircraft and on simpler 2-function (pitch and yaw control) glider models or 3-function (pitch, yaw and throttle control) model powered aircraft, such as radio-controlled versions of "Old Timer" free-flight engine-powered model aircraft. Other methods Weight-shift control is widely used in hang gliders, powered hang gliders, and ultralight aircraft. Flight with disabled controls has been successful in a small number of aviation incidents. Reaction control valves as used in the Harrier family of military aircraft. Top rudder: this device was fitted to the British Army Aeroplane No 1. It comprised an all-flying fin mounted above the upper wing and pivoted about a vertical axis. In operation it applied a side force approximately above the centre of pressure, causing the craft to roll. The design also had all-flying ailerons between the wing planes, but these were removed at the time it made the first official flight of a British aircraft and roll control during the flight was achieved solely by use of the top rudder. Combinations with other control surfaces A control surface that combines an aileron and flap is called a flaperon. A single surface on each wing serves both purposes: Used as an aileron, the flaperons left and right are actuated differentially; when used as a flap, both flaperons are actuated downwards. When a flaperon is actuated downward (i.e., used as a flap), there is enough freedom of movement left to be able to still use the aileron function. Some aircraft have used differentially controlled spoilers or spoilerons to provide roll instead of conventional ailerons. The advantage is that the entire trailing edge of the wing may be devoted to flaps, providing better low speed control. The Northrop P-61 Black Widow used spoilers in this manner, in conjunction with full span zap flaps and some modern airliners use spoilers to assist the ailerons. On delta-winged aircraft, the ailerons are combined with the elevators to form an elevon. Several modern fighter aircraft may have no ailerons on their wings but provide roll control with an all moving horizontal tailplane. When horizontal tailplane stabilators can move differentially to perform the roll control function of ailerons, as they do on some modern fighter aircraft, they are termed 'tailerons' or 'rolling tails'. Tailerons additionally permit wider flaps on the aircraft's wings. Aileron struts combined movable surfaces with an airfoil shaped wing strut. Acting in the propeller slipstream increased their effectiveness, although their mechanical advantage is lowered due to the inboard location.
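Both the differential gearing and the flaperon mixing described above amount to a simple mapping from pilot commands to left and right surface deflections. The sketch below is a minimal, hypothetical Python mixer; the travel limits, the roughly 2:1 up/down differential, and the droop value are made-up example figures, not data for any real aircraft.

def mix_flaperons(roll_cmd, flap_cmd, up_travel=20.0, down_travel=10.0, max_droop=15.0):
    """Map roll (-1..1) and flap (0..1) commands to (left, right) deflections in degrees.

    Positive values are trailing-edge down. The up-going surface is geared to travel
    about twice as far as the down-going one (differential ailerons), and the flap
    command droops both surfaces together (flaperon behaviour).
    """
    roll_cmd = max(-1.0, min(1.0, roll_cmd))
    droop = max_droop * max(0.0, min(1.0, flap_cmd))
    up = up_travel * abs(roll_cmd)      # trailing-edge-up travel on the wing that is to go down
    down = down_travel * abs(roll_cmd)  # smaller trailing-edge-down travel on the rising wing
    if roll_cmd >= 0:                   # roll right: right surface up, left surface down
        left, right = droop + down, droop - up
    else:                               # roll left
        left, right = droop - up, droop + down
    return left, right

print(mix_flaperons(1.0, 0.0))   # (10.0, -20.0): full right roll, no flap
print(mix_flaperons(0.5, 1.0))   # (20.0, 5.0): combined roll command and full flap droop

The asymmetric travel limits are what give the differential behaviour, while the shared droop term is the flap function of a flaperon.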
Technology
Aircraft components
null
74748
https://en.wikipedia.org/wiki/Glaucoma
Glaucoma
Glaucoma is a group of eye diseases that can lead to damage of the optic nerve. The optic nerve transmits visual information from the eye to the brain. Glaucoma may cause vision loss if left untreated. It has been called the "silent thief of sight" because the loss of vision usually occurs slowly over a long period of time. A major risk factor for glaucoma is increased pressure within the eye, known as intraocular pressure (IOP). It is associated with old age, a family history of glaucoma, and certain medical conditions or the use of some medications. The word glaucoma comes from the Ancient Greek word (), meaning 'gleaming, blue-green, gray'. Of the different types of glaucoma, the most common are called open-angle glaucoma and closed-angle glaucoma. Inside the eye, a liquid called aqueous humor helps to maintain shape and provides nutrients. The aqueous humor normally drains through the trabecular meshwork. In open-angle glaucoma, the draining is impeded, causing the liquid to accumulate and pressure inside the eye to increase. This elevated pressure can damage the optic nerve. In closed-angle glaucoma, the drainage of the eye becomes suddenly blocked, leading to a rapid increase in intraocular pressure. This may lead to intense eye pain, blurred vision, and nausea. Closed-angle glaucoma is an emergency requiring immediate attention. If treated early, slowing or stopping the progression of glaucoma is possible. Regular eye examinations, especially if the person is over 40 or has a family history of glaucoma, are essential for early detection. Treatment typically includes prescription of eye drops, medication, laser treatment or surgery. The goal of these treatments is to decrease eye pressure. Glaucoma is a leading cause of blindness in African Americans, Hispanic Americans, and Asians. It occurs more commonly among older people, and closed-angle glaucoma is more common in women. Epidemiology In 2013 for the population aged 40-80 years, the global prevalence of glaucoma was estimated at 3.54%, thus affecting 64.3 million worldwide. The same year, 2.97 million people in North America had open-angle glaucoma. By 2040, the prevalence of all types of glaucoma was projected to increase to 111.82 million worldwide and to 4.72 million in North America. Globally, glaucoma is the second-leading cause of blindness, while cataracts are a more common cause. In the United States, glaucoma is a leading cause of blindness for African Americans, who have higher rates of primary open-angle glaucoma, and Hispanic Americans. Bilateral vision loss can negatively affect mobility and interfere with driving. A meta-analysis published in 2009 found that people with primary open-angle glaucoma do not have increased mortality rates, or increased risk of cardiovascular death. A 2024 JAMA Ophthalmology article reports that in 2022, an estimated 4.22 million people in the U.S. had glaucoma, with 1.49 million experiencing vision impairment due to the condition, according to a meta-analysis. The study found that Black adults were about twice as likely to be affected by glaucoma as White adults. Glaucoma prevalence was 1.62% among individuals aged 18 and older and 2.56% among those aged 40 and older, while vision-affecting glaucoma occurred in 0.57% and 0.91% of these age groups, respectively. Signs and symptoms Open-angle glaucoma usually presents with no symptoms early in the course of the disease, but it may gradually progress to involve difficulties with vision. 
It usually involves deficits in the peripheral vision followed by central vision loss as the disease progresses, but less commonly it may present as central vision loss or patchy areas of vision loss. On an eye examination, optic nerve changes are seen indicating damage to the optic nerve head (increased cup-to-disc ratio on fundoscopic examination). Acute angle-closure glaucoma, a medical emergency due to the risk of impending permanent vision loss, is characterized by sudden ocular pain, seeing halos around lights, red eye, very high intraocular pressure, nausea and vomiting, and suddenly decreased vision. Acute angle-closure glaucoma may further present with corneal edema, engorged conjunctival vessels, and a fixed and dilated pupil on examination. Opaque specks may occur in the lens in glaucoma, known as glaukomflecken. The word is German, meaning "glaucoma-specks". Risk factors Glaucoma can affect anyone. Some people have a higher risk or susceptibility to develop glaucoma due to certain risk factors, including increasing age, high intraocular pressure, a family history of glaucoma, and use of steroid medications. Ocular hypertension Ocular hypertension (increased pressure within the eye) is an important risk factor for glaucoma, but only about 10-70% of people - depending on ethnic group - with primary open-angle glaucoma actually have elevated ocular pressure. Ocular hypertension—an intraocular pressure above the traditional threshold of or even above —is not necessarily a pathological condition, but it increases the risk of developing glaucoma. A study with 1636 persons aged 40-80 who had an intraocular pressure above 24mmHg in at least one eye, but no indications of eye damages, showed that after five years, 9.5% of the untreated participants and 4.4% of the treated participants had developed glaucomatous symptoms, meaning that only about one in 10 untreated people with elevated intraocular pressure will develop glaucomatous symptoms over that period of time. Therefore, whether every person with an elevated intraocular pressure should receive glaucoma therapy is a matter of debate. As of 2018, most ophthalmologists favored treatment of those with additional risk factors. For eye pressures, a value of above atmospheric pressure is often used, with higher pressures leading to a greater risk. However, some may have high eye pressure for years and never develop damage. Conversely, optic nerve damage may occur with normal pressure, known as normal-tension glaucoma. In case of above-normal intraocular pressure, the mechanism of open-angle glaucoma is believed to be the impeded exit of aqueous humor through the trabecular meshwork, while in closed-angle glaucoma, the iris blocks the trabecular meshwork. Diagnosis is achieved by performing an eye examination. Often, the optic nerve shows an abnormal amount of cupping. Family history and genetics Positive family history is a risk factor for glaucoma. The relative risk of having primary open-angle glaucoma is increased about two- to four-fold for people who have a sibling with glaucoma. Glaucoma, particularly primary open-angle glaucoma, is associated with mutations in several genes, including MYOC, ASB10, WDR36, NTF4, TBK1, and RPGRIP1. Many of these genes are involved in critical cellular processes that are implicated in the development and progression of glaucoma, including regulation of intraocular pressure, retinal ganglion cell health, and optic nerve function. 
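To make the ocular-hypertension treatment figures quoted above concrete (9.5% of untreated versus 4.4% of treated participants developing glaucomatous symptoms over five years), the short Python sketch below works out the absolute and relative risk reduction and the implied number needed to treat. It only illustrates the arithmetic behind that paragraph and is not clinical guidance.

# Five-year rates of developing glaucomatous symptoms, as quoted above.
risk_untreated = 0.095   # 9.5% of untreated participants
risk_treated = 0.044     # 4.4% of treated participants

absolute_risk_reduction = risk_untreated - risk_treated             # 0.051, i.e. 5.1 percentage points
relative_risk_reduction = absolute_risk_reduction / risk_untreated  # ~0.54, i.e. ~54%
number_needed_to_treat = 1 / absolute_risk_reduction                # ~20 people treated per case prevented

print(f"ARR: {absolute_risk_reduction:.1%}")   # ARR: 5.1%
print(f"RRR: {relative_risk_reduction:.0%}")   # RRR: 54%
print(f"NNT: {number_needed_to_treat:.0f}")    # NNT: 20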
Normal-tension glaucoma, which comprises 30-90% of primary open-angle glaucoma (depending on ethnic group), is also associated with genetic mutations (including OPA1 and OPTN genes). Additionally, some rare genetic conditions increase the risk of glaucoma, such as Axenfeld-Rieger syndrome and primary congenital glaucoma, which is associated with mutations in CYP1B1 or LTBP2. They are inherited in an autosomal recessive fashion. Axenfeld-Rieger syndrome is inherited in an autosomal dominant fashion and is associated with PITX2 or FOXC1. Ethnicity The total prevalence of glaucoma is about the same in North America and Asia, but the prevalence of angle-closure glaucoma is four times higher in Asia than in North America. In the United States, glaucoma is more common in African Americans, Latinos, and Asian-Americans. Other Other factors can cause glaucoma, known as "secondary glaucoma", including prolonged use of steroids (steroid-induced glaucoma); conditions that severely restrict blood flow to the eye, such as severe diabetic retinopathy and central retinal vein occlusion (neovascular glaucoma); ocular trauma (angle-recession glaucoma); plateau iris; and inflammation of the middle layer of the pigmented vascular eye structure (uveitis), known as uveitic glaucoma. Pathophysiology The main effect of glaucoma is damage to the optic nerve. Eventually, this damage leads to vision loss, which can deteriorate with time. The underlying cause of open-angle glaucoma remains unclear. Several theories exist on its exact etiology. Intraocular pressure is a function of production of liquid aqueous humor by the ciliary processes of the eye, and its drainage through the trabecular meshwork. Aqueous humor flows from the ciliary processes into the posterior chamber, bounded posteriorly by the lens and the zonules of Zinn, and anteriorly by the iris. It then flows through the pupil of the iris into the anterior chamber, bounded posteriorly by the iris and anteriorly by the cornea. From here, the trabecular meshwork drains aqueous humor via the scleral venous sinus (Schlemm's canal) into scleral plexuses and general blood circulation. In open/wide-angle glaucoma, flow is reduced through the trabecular meshwork, due to the degeneration and obstruction of the trabecular meshwork, whose original function is to absorb the aqueous humor. Loss of aqueous humor absorption leads to increased resistance and thus a chronic, painless buildup of pressure in the eye. In primary angle-closure glaucoma, the iridocorneal angle is narrowed or completely closed, obstructing the flow of aqueous humor to the trabecular meshwork for drainage. This is usually due to the forward displacement of the iris against the cornea, resulting in angle closure. This accumulation of aqueous humor causes an acute increase in pressure and damage to the optic nerve. The pathophysiology of glaucoma is not well understood. Several theories exist regarding the mechanism of the damage to the optic nerve in glaucoma. The biomechanical theory hypothesizes that the retinal ganglion-cell axons (which form the optic nerve head and the retinal nerve fiber layer) are particularly susceptible to mechanical damage from increases in the intraocular pressure as they pass through pores at the lamina cribrosa. Thus, increases in intraocular pressure would cause nerve damage as seen in glaucoma. The vascular theory hypothesizes that a decreased blood supply to the retinal ganglions cells leads to nerve damage. 
This decrease in blood supply may be due to increasing intraocular pressures, and may also be due to systemic hypotension, vasospasm, or atherosclerosis. This is supported by evidence that those with low blood pressure, particularly low diastolic blood pressure, are at an increased risk of glaucoma. The primary neurodegeneration theory hypothesizes that a primary neurodegenerative process may be responsible for degeneration at the optic nerve head in glaucoma. This would be consistent with a possible mechanism of normal tension glaucoma (those with open-angle glaucoma with normal eye pressures) and is supported by evidence showing a correlation of glaucoma with Alzheimer's dementia and other causes of cognitive decline. Both experimental and clinical studies implicate that oxidative stress plays a role in the pathogenesis of open-angle glaucoma as well as in Alzheimer's disease. Degeneration of axons of the retinal ganglion cells (the optic nerve) is a hallmark of glaucoma. The inconsistent relationship of glaucomatous optic neuropathy with increased intraocular pressure has provoked hypotheses and studies on anatomic structure, eye development, nerve compression trauma, optic nerve blood flow, excitatory neurotransmitter, trophic factor, retinal ganglion cell or axon degeneration, glial support cell, immune system, aging mechanisms of neuron loss, and severing of the nerve fibers at the scleral edge. Diagnosis Screening for glaucoma is an integral part of a standard eye examination performed by optometrists and ophthalmologists. The workup for glaucoma involves taking a thorough case history, with the emphasis on assessment of risk factors. The baseline glaucoma evaluation tests include intraocular pressure measurement by using tonometry, anterior chamber angle assessment by optical coherence tomography, inspecting the drainage angle (gonioscopy), and retinal nerve fiber layer assessment with a fundus examination, measuring corneal thickness (pachymetry), and visual field testing. Types Glaucoma has been classified into specific types: Primary glaucoma and its variants Primary glaucoma (H40.1-H40.2) Primary open-angle glaucoma, also known as chronic open-angle glaucoma, chronic simple glaucoma, glaucoma simplex High-tension glaucoma Low-tension glaucoma Primary angle closure glaucoma, also known as primary closed-angle glaucoma, narrow-angle glaucoma, pupil-block glaucoma, acute congestive glaucoma Acute angle closure glaucoma (aka AACG) Chronic angle closure glaucoma Intermittent angle closure glaucoma Superimposed on chronic open-angle closure glaucoma ("combined mechanism" – uncommon) Variants of primary glaucoma Pigmentary glaucoma Exfoliation glaucoma, also known as pseudoexfoliative glaucoma or glaucoma capsulare Primary juvenile glaucoma Primary angle closure glaucoma is caused by contact between the iris and trabecular meshwork, which in turn obstructs outflow of the aqueous humor from the eye. This contact between iris and trabecular meshwork (TM) may gradually damage the function of the meshwork until it fails to keep pace with aqueous production, and the pressure rises. In over half of all cases, prolonged contact between iris and TM causes the formation of synechiae (effectively "scars"). These cause permanent obstruction of aqueous outflow. In some cases, pressure may rapidly build up in the eye, causing pain and redness (symptomatic, or so-called "acute"-angle closure). In this situation, the vision may become blurred, and halos may be seen around bright lights. 
Accompanying symptoms may include a headache and vomiting. Diagnosis is made from physical signs and symptoms - pupils mid-dilated and unresponsive to light, cornea edematous (cloudy), reduced vision, redness, and pain. However, the majority of cases are asymptomatic. Prior to the very severe loss of vision, these cases can only be identified by examination, generally by an eye-care professional. Developmental glaucoma Developmental glaucoma (Q15.0) Primary congenital glaucoma Infantile glaucoma Glaucoma associated with hereditary or familial diseases Secondary glaucoma Secondary glaucoma (H40.3-H40.6) Inflammatory glaucoma Uveitis of all types Fuchs heterochromic iridocyclitis Phacogenic glaucoma Angle-closure glaucoma with mature cataract Phacoanaphylactic glaucoma secondary to rupture of lens capsule Phacolytic glaucoma due to phacotoxic meshwork blockage Subluxation of lens Glaucoma secondary to intraocular hemorrhage Hyphema Hemolytic glaucoma, also known as erythroclastic glaucoma Traumatic glaucoma Angle recession glaucoma: Traumatic recession on anterior chamber angle Postsurgical glaucoma Aphakic pupillary block Ciliary block glaucoma Neovascular glaucoma (see below for more details) Drug-induced glaucoma Corticosteroid induced glaucoma Alpha-chymotrypsin glaucoma. Postoperative ocular hypertension from use of alpha chymotrypsin. Glaucoma of miscellaneous origin Associated with intraocular tumors Associated with retinal detachments Secondary to severe chemical burns of the eye Associated with essential iris atrophy Toxic glaucoma Neovascular glaucoma, an uncommon type of glaucoma, is difficult or nearly impossible to treat, and is often caused by proliferative diabetic retinopathy (PDR) or central retinal vein occlusion (CRVO). It may also be triggered by other conditions that result in ischemia of the retina or ciliary body. Individuals with poor blood flow to the eye are highly at risk for this condition. Neovascular glaucoma results when new, abnormal vessels begin developing in the angle of the eye that begin blocking the drainage. People with such condition begin to rapidly lose their eyesight. Sometimes, the disease appears very rapidly, especially after cataract surgery procedures. Toxic glaucoma is open-angle glaucoma with an unexplained significant rise of intraocular pressure following unknown pathogenesis. Intraocular pressure can sometimes reach . It characteristically manifests as ciliary body inflammation and massive trabecular edema that sometimes extends to Schlemm's canal. This condition is differentiated from malignant glaucoma by the presence of a deep and clear anterior chamber and a lack of aqueous misdirection. Also, the corneal appearance is not as hazy. A reduction in visual acuity can occur followed neuroretinal breakdown. Absolute glaucoma Absolute glaucoma (H44.5) is the end stage of all types of glaucoma. The eye has no vision, absence of pupillary light reflex and pupillary response, and has a stony appearance. Severe pain is present in the eye. The treatment of absolute glaucoma is a destructive procedure like cyclocryoapplication, cyclophotocoagulation, or injection of 99% alcohol. Visual field defects in glaucoma In glaucoma visual field defects result from damage to the retinal nerve fiber layer (RNFL). Field defects are seen mainly in primary open angle glaucoma. Because of the unique anatomy of the RNFL, many noticeable patterns are seen in the visual field. 
Most of the early glaucomatous changes are seen within the central visual field, mainly in Bjerrum's area, 10-20° from fixation. Following are the common glaucomatous field defects: Generalized depression: Generalized depression is seen in early stages of glaucoma and many other conditions. Mild constriction of central and peripheral visual field due to isopter contraction comes under generalized depression. If all the isopters show similar depression to the same point, it is then called a contraction of visual field. Relative paracentral scotomas are the areas where smaller and dimmer targets are not visualized by the patient. Larger and brighter targets can be seen. Small paracentral depressions, mainly superonasal are seen in normal tension glaucoma (NTG). The generalized depression of the entire field may be seen in cataract also. Baring of blind spot: "Baring of blind spot" means exclusion of blind spot from the central field due to inward curve of the outer boundary of 30° central field. It is only an early non-specific visual field change, without much diagnostic value in glaucoma. Small wing-shaped Paracentral scotoma: Small wing-shaped Paracentral scotoma within Bjerrum's area is the earliest clinically significant field defect seen in glaucoma. It may also be associated with nasal steps. Scotoma may be seen above or below the blind spot. Siedel's sickle-shaped scotoma: Paracentral scotoma joins with the blind spot to form the Seidel sign. Arcuate or Bjerrum's scotoma: It is formed at later stages of glaucoma by extension of Seidel's scotoma in an area either above or below the fixation point to reach the horizontal line. Peripheral breakthrough may occur due to damage of nerve fibers. Ring or Double arcuate scotoma: Two arcuate scotomas join to form a Ring or Double arcuate scotoma. This defect is seen in advanced stages of glaucoma. Roenne's central nasal step: It is created when two arcuate scotomas run in different arcs to form a right angled defect. This is also seen in advanced stages of glaucoma. Peripheral field defects: Peripheral field defects may occur in early or late stages of glaucoma. Roenne's peripheral nasal steps occur due to contraction of peripheral isopter. Tubular vision: Since macular fibers are the most resistant to glaucomatous damage, the central vision remains unaffected until end stages of glaucoma. Tubular vision or Tunnel vision is the loss of peripheral vision with retention of central vision, resulting in a constricted circular tunnel-like field of vision. It is seen in the end stages of glaucoma. Retinitis pigmentosa is another disease that causes tubular vision. Temporal island of vision: It is also seen in end stages of glaucoma. The temporal islands lie outside of the central 24 to 30° visual field, so it may not be visible with standard central field measurements done in glaucoma. Screening The United States Preventive Services Task Force stated, as of 2013, that there was insufficient evidence to recommend for or against screening for glaucoma. Therefore, there is no national screening program in the US. Screening, however, is recommended starting at age 40 by the American Academy of Ophthalmology. There is a glaucoma screening program in the UK. Those at risk are advised to have an eye examination at least once a year. 
Treatment The goal of glaucoma management for patients with increased intraocular pressure is to decrease the intraocular pressure (IOP), thus slowing the progression of glaucoma and preserving the quality of life for patients, with minimal side-effects. This requires appropriate diagnostic techniques and follow-up examinations, and judicious selection of treatments for the individual patient. Although increased IOP is only one of the major risk factors for glaucoma, lowering it via various pharmaceuticals and/or surgical techniques is currently the mainstay of glaucoma treatment. Vascular flow and neurodegenerative theories of glaucomatous optic neuropathy have prompted studies on various neuroprotective therapeutic strategies, including nutritional compounds, some of which may be regarded by clinicians as safe for use now, while others are on trial. Mental stress is also considered as consequence and cause of vision loss which means that stress management training, autogenic training and other techniques to cope with stress can be helpful. Medication There are several pressure-lowering medication groups that could be used in lowering the IOP, usually eyedrops. The choice of medication usually depends on the dose, duration and the side effects of each medication. However, in general, prostaglandin analogues are the first-line treatment for glaucoma. Prostaglandin analogues, such as latanoprost, bimatoprost and travoprost, reduce the IOP by increasing the aqueous fluid outflow through the draining angle. It is usually prescribed once daily at night. The systemic side effects of this class are minimal. However, they can cause local side effects including redness of the conjunctiva, change in the iris color and eyelash elongation. There are several other classes of medications that could be used as a second-line in case of treatment failure or presence of contraindications to prostaglandin analogues. These include: Topical beta-adrenergic receptor antagonists, such as timolol, levobunolol, and betaxolol, decrease aqueous humor production by the epithelium of the ciliary body. Alpha2-adrenergic agonists, such as brimonidine and apraclonidine, work by a dual mechanism, decreasing aqueous humor production and increasing uveoscleral outflow. Less-selective alpha agonists, such as epinephrine, decrease aqueous humor production through vasoconstriction of ciliary body blood vessels, useful only in open-angle glaucoma. Epinephrine's mydriatic effect, however, renders it unsuitable for closed-angle glaucoma due to further narrowing of the uveoscleral outflow (i.e. further closure of trabecular meshwork, which is responsible for absorption of aqueous humor). Miotic agents (parasympathomimetics), such as pilocarpine, work by contraction of the ciliary muscle, opening the trabecular meshwork and allowing increased outflow of the aqueous humour. Echothiophate, an acetylcholinesterase inhibitor, is used in chronic glaucoma. Carbonic anhydrase inhibitors, such as dorzolamide, brinzolamide, and acetazolamide, lower secretion of aqueous humor by inhibiting carbonic anhydrase in the ciliary body. Each of these medicines may have local and systemic side effects. Wiping the eye with an absorbent pad after the administration of eye drops may result in fewer adverse effects. Initially, glaucoma drops may reasonably be started in either one or in both eyes. The possible neuroprotective effects of various topical and systemic medications are also being investigated. 
Adherence Poor compliance with medications and follow-up visits is a major reason for treatment failure and disease progression in glaucoma patients. Poor adherence could lead to increased complication rates, thus increasing the need of non-pharmacological interventions including surgery. Patient education and communication must be ongoing to sustain successful treatment plans for this lifelong disease with no early symptoms. Laser Argon laser trabeculoplasty (ALT) may be used to treat open-angle glaucoma, but this is a temporary solution, not a cure. A 50-μm argon laser spot is aimed at the trabecular meshwork to stimulate the opening of the mesh to allow more outflow of aqueous fluid. Usually, half of the angle is treated at a time. Traditional laser trabeculoplasty uses a thermal argon laser in an argon laser trabeculoplasty procedure. Nd:YAG laser peripheral iridotomy (LPI) may be used in patients susceptible to or affected by angle closure glaucoma or pigment dispersion syndrome. During laser iridotomy, laser energy is used to make a small, full-thickness opening in the iris to equalize the pressure between the front and back of the iris, thus correcting any abnormal bulging of the iris. In people with narrow angles, this can uncover the trabecular meshwork. In some cases of intermittent or short-term angle closure, this may lower the eye pressure. Laser iridotomy reduces the risk of developing an attack of acute angle closure. In most cases, it also reduces the risk of developing chronic angle closure or of adhesions of the iris to the trabecular meshwork. Computational fluid dynamics (CFD) simulations have shown that an optimal iridotomy size to relieve the pressure differential between the anterior and posterior side of the iris is around 0.1 mm to 0.2 mm. This coincides with clinical practice of LPI where an iridotomy size of 150 to 200 microns is commonly used. However, larger iriditomy sizes are sometimes necessary. Surgery Both laser and conventional surgeries are performed to treat glaucoma. Surgery is the primary therapy for those with congenital glaucoma. Generally, these operations are a temporary solution, as there is not yet a cure for glaucoma. Canaloplasty Canaloplasty is a nonpenetrating procedure using microcatheter technology. To perform a canaloplasty, an incision is made into the eye to gain access to the Schlemm's canal in a similar fashion to a viscocanalostomy. A microcatheter will circumnavigate the canal around the iris, enlarging the main drainage channel and its smaller collector channels through the injection of a sterile, gel-like material called viscoelastic. The catheter is then removed and a suture is placed within the canal and tightened. By opening the canal, the pressure inside the eye may be relieved, although the reason is unclear, since the canal (of Schlemm) does not have any significant fluid resistance in glaucoma or healthy eyes. Long-term results are not available. Trabeculectomy The most common conventional surgery performed for glaucoma is the trabeculectomy. Here, a partial thickness flap is made in the scleral wall of the eye, and a window opening is made under the flap to remove a portion of the trabecular meshwork. The scleral flap is then sutured loosely back in place to allow fluid to flow out of the eye through this opening, resulting in lowered intraocular pressure and the formation of a bleb or fluid bubble on the surface of the eye. 
Scarring can occur around or over the flap opening, causing it to become less effective or lose effectiveness altogether. Traditionally, chemotherapeutic adjuvants, such as mitomycin C (MMC) or 5-fluorouracil (5-FU), are applied with soaked sponges on the wound bed to prevent filtering blebs from scarring by inhibiting fibroblast proliferation. Contemporary alternatives to prevent the scarring of the meshwork opening include the sole or combinative implementation of nonchemotherapeutic adjuvants such as the Ologen collagen matrix, which has been clinically shown to increase the success rates of surgical treatment. Collagen matrix prevents scarring by randomizing and modulating fibroblast proliferation in addition to mechanically preventing wound contraction and adhesion. Glaucoma drainage implants The first glaucoma drainage implant was developed in 1966. Since then, several types of implants have followed on from the original: the Baerveldt tube shunt, or the valved implants, such as the Ahmed glaucoma valve implant or the ExPress Mini Shunt and the later generation pressure ridge Molteno implants. These are indicated for glaucoma patients not responding to maximal medical therapy, with previous failed guarded filtering surgery (trabeculectomy). The flow tube is inserted into the anterior chamber of the eye, and the plate is implanted underneath the conjunctiva to allow a flow of aqueous fluid out of the eye into a chamber called a bleb. The first-generation Molteno and other nonvalved implants sometimes require the ligation of the tube until the bleb formed is mildly fibrosed and water-tight. This is done to reduce postoperative hypotony—sudden drops in postoperative intraocular pressure. Valved implants, such as the Ahmed glaucoma valve, attempt to control postoperative hypotony by using a mechanical valve. Ab interno implants, such as the Xen Gel Stent, are transscleral implants by an ab interno procedure to channel aqueous humor into the non-dissected Tenon's space, creating a subconjunctival drainage area similar to a bleb. The implants are transscleral and different from other ab interno implants that do not create a transscleral drainage, such as iStent, CyPass, or Hydrus. The ongoing scarring over the conjunctival dissipation segment of the shunt may become too thick for the aqueous humor to filter through. This may require preventive measures using antifibrotic medications, such as 5-fluorouracil or mitomycin-C (during the procedure), or other nonantifibrotic medication methods, such as collagen matrix implant, or biodegradable spacer, or later on create a necessity for revision surgery with the sole or combinative use of donor patch grafts or collagen matrix implant. Laser-assisted nonpenetrating deep sclerectomy The most common surgical approach currently used for the treatment of glaucoma is trabeculectomy, in which the sclera is punctured to alleviate intraocular pressure. Nonpenetrating deep sclerectomy (NPDS) surgery is a similar, but modified, procedure, in which instead of puncturing the scleral bed and trabecular meshwork under a scleral flap, a second deep scleral flap is created, excised, with further procedures of deroofing the Schlemm's canal, upon which, percolation of liquid from the inner eye is achieved and thus alleviating intraocular pressure, without penetrating the eye. NPDS is demonstrated to have significantly fewer side effects than trabeculectomy. 
However, NPDS is performed manually and requires higher level of skills that may be assisted with instruments. In order to prevent wound adhesion after deep scleral excision and to maintain good filtering results, NPDS as with other non-penetrating procedures is sometimes performed with a variety of biocompatible spacers or devices, such as the Aquaflow collagen wick, ologen Collagen Matrix, or Xenoplast glaucoma implant. Laser-assisted NPDS is performed with the use of a CO2 laser system. The laser-based system is self-terminating once the required scleral thickness and adequate drainage of the intraocular fluid have been achieved. This self-regulation effect is achieved as the CO2 laser essentially stops ablating as soon as it comes in contact with the intraocular percolated liquid, which occurs as soon as the laser reaches the optimal residual intact layer thickness. Clear lens extraction For people with chronic closed-angle glaucoma, lens extraction can relieve the block created by the pupil and help regulate the intraocular pressure. A study found that CLE is even more effective than laser peripheral iridotomy in patients with angle closure glaucoma. A systematic review comparing lens extraction and laser peripheral iridotomy for treating acute primary angle closure found that lens extraction potentially provides better intraocular pressure control and reduces medication needs over time. However, it remains uncertain if it significantly lowers the risk of recurrent episodes or reduces the need for additional operations. Treatment approaches for primary glaucoma Primary angle closure glaucoma: Once any symptoms have been controlled, the first line (and often definitive) treatment is laser iridotomy. This may be performed using either Nd:YAG or argon lasers, or in some cases by conventional incisional surgery. The goal of treatment is to reverse and prevent contact between the iris and trabecular meshwork. In early to moderately advanced cases, iridotomy is successful in opening the angle in around 75% of cases. In the other 25%, laser iridoplasty, medication (pilocarpine) or incisional surgery may be required. Primary open-angle glaucoma: Prostaglandin agonists work by opening uveoscleral passageways. Beta-blockers, such as timolol, work by decreasing aqueous formation. Carbonic anhydrase inhibitors decrease bicarbonate formation from ciliary processes in the eye, thus decreasing the formation of aqueous humor. Parasympathetic analogs are drugs that work on the trabecular outflow by opening up the passageway and constricting the pupil. Alpha 2 agonists (brimonidine, apraclonidine) both decrease fluid production (via inhibition of AC) and increase drainage. A review of people with primary open-angle glaucoma and ocular hypertension concluded that medical IOP-lowering treatment slowed down the progression of visual field loss. Neovascular glaucoma Anti-VEGF agents as injectable medications, along with other standard of care treatment for decreasing intraocular pressure, may improve pressure in people with neovascular glaucoma for short periods of time. Evidence suggests that this improvement may last 4–6 weeks. There is no sufficient evidence to suggest that anti-VEGF medications are effective either for short-term or for longer-term treatment. The short, medium, and long-term safety of anti-VEGF treatment has not been well investigated. Other Cannabis is not suggested for treatment of glaucoma by the American Glaucoma Society for adults or for children. 
Sepetaprost, investigational new drug Prognosis In open-angle glaucoma, the typical progression from normal vision to complete blindness takes about 25 years to 70 years without treatment, depending on the method of estimation used. History The association of elevated intraocular pressure (IOP) and glaucoma was first described by Englishman Richard Banister in 1622: "...that the Eye be grown more solid and hard, then naturally it should be...". Angle-closure glaucoma was treated with cataract extraction by John Collins Warren in Boston as early as 1806. The invention of the ophthalmoscope by Hermann Helmholtz in 1851 enabled ophthalmologists for the first time to identify the pathological hallmark of glaucoma, the excavation of the optic nerve head due to retinal ganglion cell loss. The first reliable instrument to measure intraocular pressure was invented by Norwegian ophthalmologist Hjalmar August Schiøtz in 1905. About half a century later, Hans Goldmann in Berne, Switzerland, developed his applanation tonometer which still today - despite numerous new innovations in diagnostics - is considered the gold standard of determining this crucial pathogenic factor. In the late 20th century, further pathomechanisms beyond elevated IOP were discovered and became the subject of research like insufficient blood supply – often associated with low or irregular blood pressure – to the retina and optic nerve head. The first drug to reduce IOP, pilocarpine, was introduced in the 1870s; other major innovations in pharmacological glaucoma therapy were the introduction of beta blocker eye drops in the 1970s and of prostaglandin analogues and topical (locally administered) carbonic anhydrase inhibitors in the mid-1990s. Early surgical techniques like iridectomy and fistulating methods have recently been supplemented by less invasive procedures like small implants, a range of options now widely called MIGS (micro-invasive glaucoma surgery). Etymology The word "glaucoma" comes from the Ancient Greek , a derivative of (glaukos), which commonly described the color of eyes which were not dark (i.e. blue, green, light gray). Eyes described as due to disease might have had a gray cataract in the Hippocratic era, or, in the early Common Era, the greenish pupillary hue sometimes seen in angle-closure glaucoma. This colour is reflected in the Chinese word for glaucoma, 青光眼 (qīngguāngyǎn), literally “cyan-light eye”. An alternative hypothesis connects the name to the Ancient Greek noun for 'owl', or (both glaux). Research Eye drops vs. other treatments The TAGS randomised controlled trial investigated if eye drops or trabeculectomy is more effective in treating advanced primary open-angle glaucoma. After two years researchers found that vision and quality of life are similar in both treatments. At the same time eye pressure was lower in people who underwent surgery and in the long-run surgery is more cost-effective. The LiGHT trial compared the effectiveness of eye drops and selective laser trabeculoplasty for open angle glaucoma. Both contributed to a similar quality of life but most people undergoing laser treatment were able to stop using eye drops. Laser trabeculoplasty was also shown to be more cost-effective. Comparison of effects of brimonidine and timolol A 2013 Cochrane systematic review compared the effect of brimonidine and timolol in slowing the progression of open angle glaucoma in adult participants. 
The results showed that participants assigned to brimonidine had less visual field progression than those assigned to timolol, though the results were not significant given the heavy loss to follow-up and limited evidence. The mean intraocular pressures for the two groups were similar. Participants in the brimonidine group had a higher occurrence of medication-related side effects than participants in the timolol group. Social disparities in glaucoma care and research A study conducted in the UK showed that people living in areas of high deprivation were likely to be diagnosed at a later stage of the disease. It also showed that there was a lack of professional ophthalmic services in areas of high deprivation. A 2017 study showed that there is a large difference in the volume of glaucoma testing in the US depending on the type of insurance. Researchers reviewed 21,766 persons aged 40 years or older with newly diagnosed open-angle glaucoma (OAG) and found that Medicaid recipients had a substantially lower volume of glaucoma testing performed compared with patients who had commercial health insurance. Results from a meta-analysis of 33,428 primary open-angle glaucoma (POAG) participants published in 2021 suggest that there are substantial ethnic and racial disparities in clinical trials in the US. Although ethnic and racial minorities have a higher disease burden, 70.7% of the study participants were White, as opposed to 16.8% Black and 3.4% Hispanic/Latino.
Biology and health sciences
Disabilities
Health
74800
https://en.wikipedia.org/wiki/Torus
Torus
In geometry, a torus (plural: tori or toruses) is a surface of revolution generated by revolving a circle in three-dimensional space one full revolution about an axis that is coplanar with the circle. The main types of toruses include ring toruses, horn toruses, and spindle toruses. A ring torus is sometimes colloquially referred to as a donut or doughnut. If the axis of revolution does not touch the circle, the surface has a ring shape and is called a torus of revolution, also known as a ring torus. If the axis of revolution is tangent to the circle, the surface is a horn torus. If the axis of revolution passes twice through the circle, the surface is a spindle torus (or self-crossing torus or self-intersecting torus). If the axis of revolution passes through the center of the circle, the surface is a degenerate torus, a double-covered sphere. If the revolved curve is not a circle, the surface is called a toroid, as in a square toroid. Real-world objects that approximate a torus of revolution include swim rings, inner tubes and ringette rings. A torus should not be confused with a solid torus, which is formed by rotating a disk, rather than a circle, around an axis. A solid torus is a torus plus the volume inside the torus. Real-world objects that approximate a solid torus include O-rings, non-inflatable lifebuoys, ring doughnuts, and bagels. In topology, a ring torus is homeomorphic to the Cartesian product of two circles, S¹ × S¹, and the latter is taken to be the definition in that context. It is a compact 2-manifold of genus 1. The ring torus is one way to embed this space into Euclidean space, but another way to do this is the Cartesian product of the embedding of S¹ in the plane with itself. This produces a geometric object called the Clifford torus, a surface in 4-space. In the field of topology, a torus is any topological space that is homeomorphic to a torus. The surface of a coffee cup and a doughnut are both topological tori with genus one. An example of a torus can be constructed by taking a rectangular strip of flexible material such as rubber, and joining the top edge to the bottom edge, and the left edge to the right edge, without any half-twists (compare Klein bottle). Etymology Torus is a Latin word for "a round, swelling, elevation, protuberance". Geometry A torus of revolution in 3-space can be parametrized as x(θ, φ) = (R + r cos θ) cos φ, y(θ, φ) = (R + r cos θ) sin φ, z(θ, φ) = r sin θ, using angular coordinates θ and φ representing rotation around the tube and rotation around the torus's axis of revolution, respectively, where the major radius R is the distance from the center of the tube to the center of the torus and the minor radius r is the radius of the tube. The ratio R/r is called the aspect ratio of the torus. The typical doughnut confectionery has an aspect ratio of about 3 to 2. An implicit equation in Cartesian coordinates for a torus radially symmetric about the z-axis is (√(x² + y²) − R)² + z² = r². Algebraically eliminating the square root gives a quartic equation, (x² + y² + z² + R² − r²)² = 4R²(x² + y²). The three classes of standard tori correspond to the three possible aspect ratios between R and r: When R > r, the surface will be the familiar ring torus or anchor ring. R = r corresponds to the horn torus, which in effect is a torus with no "hole". R < r describes the self-intersecting spindle torus; its inner shell is a lemon and its outer shell is an apple. When R = 0, the torus degenerates to the sphere of radius r. When r = 0, the torus degenerates to the circle of radius R. When R ≥ r, the interior of this torus is diffeomorphic (and, hence, homeomorphic) to a product of a Euclidean open disk and a circle.
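As a quick numerical sanity check of the parametrization and implicit equation above, the following Python sketch evaluates a few parametrized points and verifies that each satisfies the implicit equation; the radii are arbitrary example values.

import math

R, r = 3.0, 1.0   # arbitrary major and minor radii (R > r, so this is a ring torus)

def torus_point(theta, phi):
    """Point on the torus of revolution for tube angle theta and axial angle phi."""
    x = (R + r * math.cos(theta)) * math.cos(phi)
    y = (R + r * math.cos(theta)) * math.sin(phi)
    z = r * math.sin(theta)
    return x, y, z

for theta, phi in [(0.0, 0.0), (0.7, 2.1), (math.pi / 2, math.pi), (2.4, 5.8)]:
    x, y, z = torus_point(theta, phi)
    # Implicit form: (sqrt(x^2 + y^2) - R)^2 + z^2 = r^2
    residual = (math.hypot(x, y) - R) ** 2 + z ** 2 - r ** 2
    assert abs(residual) < 1e-12

print("all sampled points satisfy the implicit equation")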
The volume of this solid torus and the surface area of the torus are easily computed using Pappus's centroid theorem, giving a surface area A = 4π²Rr = (2πr)(2πR) and a volume V = 2π²Rr² = (πr²)(2πR). These formulas are the same as for a cylinder of length 2πR and radius r, obtained from cutting the tube along the plane of a small circle, and unrolling it by straightening out (rectifying) the line running around the center of the tube. The losses in surface area and volume on the inner side of the tube exactly cancel out the gains on the outer side. Expressing the surface area and the volume by the distance p of an outermost point on the surface of the torus to the center, and the distance q of an innermost point to the center (so that p = R + r and q = R − r), yields A = π²(p + q)(p − q) and V = ¼π²(p + q)(p − q)². As a torus is the product of two circles, a modified version of the spherical coordinate system is sometimes used. In traditional spherical coordinates there are three measures, R, the distance from the center of the coordinate system, and θ and φ, angles measured from the center point. As a torus has, effectively, two center points, the centerpoints of the angles are moved; φ measures the same angle as it does in the spherical system, but is known as the "toroidal" direction. The center point of θ is moved to the center of the tube, and is known as the "poloidal" direction. These terms were first used in a discussion of the Earth's magnetic field, where "poloidal" was used to denote "the direction toward the poles". In modern use, toroidal and poloidal are more commonly used to discuss magnetic confinement fusion devices. Topology Topologically, a torus is a closed surface defined as the product of two circles: S¹ × S¹. This can be viewed as lying in ℂ² and is a subset of the 3-sphere S³ of radius √2. This topological torus is also often called the Clifford torus. In fact, S³ is filled out by a family of nested tori in this manner (with two degenerate circles), a fact that is important in the study of S³ as a fiber bundle over S² (the Hopf bundle). The surface described above, given the relative topology from ℝ³, is homeomorphic to a topological torus as long as it does not intersect its own axis. A particular homeomorphism is given by stereographically projecting the topological torus into ℝ³ from the north pole of S³. The torus can also be described as a quotient of the Cartesian plane under the identifications (x, y) ~ (x + 1, y) ~ (x, y + 1) or, equivalently, as the quotient of the unit square by pasting the opposite edges together, described as a fundamental polygon ABA⁻¹B⁻¹. The fundamental group of the torus is just the direct product of the fundamental group of the circle with itself: π₁(T²) = π₁(S¹) × π₁(S¹) ≅ ℤ × ℤ. Intuitively speaking, this means that a closed path that circles the torus's "hole" (say, a circle that traces out a particular latitude) and then circles the torus's "body" (say, a circle that traces out a particular longitude) can be deformed to a path that circles the body and then the hole. So, strictly 'latitudinal' and strictly 'longitudinal' paths commute. An equivalent statement may be imagined as two shoelaces passing through each other, then unwinding, then rewinding. If a torus is punctured and turned inside out then another torus results, with lines of latitude and longitude interchanged. This is equivalent to building a torus from a cylinder, by joining the circular ends together, in two ways: around the outside like joining two ends of a garden hose, or through the inside like rolling a sock (with the toe cut off). Additionally, if the cylinder was made by gluing two opposite sides of a rectangle together, choosing the other two sides instead will cause the same reversal of orientation.
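The fundamental polygon ABA⁻¹B⁻¹ mentioned above corresponds to the standard presentation of the torus's fundamental group; in LaTeX notation:

\pi_1(T^2) \;=\; \langle a, b \mid a b a^{-1} b^{-1} = 1 \rangle \;\cong\; \mathbb{Z} \times \mathbb{Z},

where a and b are the 'latitudinal' and 'longitudinal' loops. The single relation states exactly that the two loops commute, which is the deformation argument given informally above.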
The first homology group of the torus is isomorphic to the fundamental group (this follows from Hurewicz theorem since the fundamental group is abelian). Two-sheeted cover The 2-torus is a twofold branched cover of the 2-sphere, with four ramification points. Every conformal structure on the 2-torus can be represented as such a two-sheeted cover of the 2-sphere. The points on the torus corresponding to the ramification points are the Weierstrass points. In fact, the conformal type of the torus is determined by the cross-ratio of the four points. n-dimensional torus The torus has a generalization to higher dimensions, the , often called the or for short. (This is the more typical meaning of the term "-torus", the other referring to holes or of genus .) Just as the ordinary torus is topologically the product space of two circles, the -dimensional torus is topologically equivalent to the product of circles. That is: The standard 1-torus is just the circle: . The torus discussed above is the standard 2-torus, . And similar to the 2-torus, the -torus, can be described as a quotient of under integral shifts in any coordinate. That is, the n-torus is modulo the action of the integer lattice (with the action being taken as vector addition). Equivalently, the -torus is obtained from the -dimensional hypercube by gluing the opposite faces together. An -torus in this sense is an example of an n-dimensional compact manifold. It is also an example of a compact abelian Lie group. This follows from the fact that the unit circle is a compact abelian Lie group (when identified with the unit complex numbers with multiplication). Group multiplication on the torus is then defined by coordinate-wise multiplication. Toroidal groups play an important part in the theory of compact Lie groups. This is due in part to the fact that in any compact Lie group one can always find a maximal torus; that is, a closed subgroup which is a torus of the largest possible dimension. Such maximal tori have a controlling role to play in theory of connected . Toroidal groups are examples of protori, which (like tori) are compact connected abelian groups, which are not required to be manifolds. Automorphisms of are easily constructed from automorphisms of the lattice , which are classified by invertible integral matrices of size with an integral inverse; these are just the integral matrices with determinant . Making them act on in the usual way, one has the typical toral automorphism on the quotient. The fundamental group of an n-torus is a free abelian group of rank . The th homology group of an -torus is a free abelian group of rank n choose . It follows that the Euler characteristic of the -torus is for all . The cohomology ring H•(, Z) can be identified with the exterior algebra over the -module whose generators are the duals of the nontrivial cycles. Configuration space As the -torus is the -fold product of the circle, the -torus is the configuration space of ordered, not necessarily distinct points on the circle. Symbolically, . The configuration space of unordered, not necessarily distinct points is accordingly the orbifold , which is the quotient of the torus by the symmetric group on letters (by permuting the coordinates). For , the quotient is the Möbius strip, the edge corresponding to the orbifold points where the two coordinates coincide. 
For this quotient may be described as a solid torus with cross-section an equilateral triangle, with a twist; equivalently, as a triangular prism whose top and bottom faces are connected with a 1/3 twist (120°): the 3-dimensional interior corresponds to the points on the 3-torus where all 3 coordinates are distinct, the 2-dimensional face corresponds to points with 2 coordinates equal and the 3rd different, while the 1-dimensional edge corresponds to points with all 3 coordinates identical. These orbifolds have found significant applications to music theory in the work of Dmitri Tymoczko and collaborators (Felipe Posada, Michael Kolinas, et al.), being used to model musical triads. Flat torus A flat torus is a torus with the metric inherited from its representation as the quotient, , where is a discrete subgroup of isomorphic to . This gives the quotient the structure of a Riemannian manifold, as well as the structure of an abelian Lie group. Perhaps the simplest example of this is when : , which can also be described as the Cartesian plane under the identifications . This particular flat torus (and any uniformly scaled version of it) is known as the "square" flat torus. This metric of the square flat torus can also be realised by specific embeddings of the familiar 2-torus into Euclidean 4-space or higher dimensions. Its surface has zero Gaussian curvature everywhere. It is flat in the same sense that the surface of a cylinder is flat. In 3 dimensions, one can bend a flat sheet of paper into a cylinder without stretching the paper, but this cylinder cannot be bent into a torus without stretching the paper (unless some regularity and differentiability conditions are given up, see below). A simple 4-dimensional Euclidean embedding of a rectangular flat torus (more general than the square one) is as follows: where R and P are positive constants determining the aspect ratio. It is diffeomorphic to a regular torus but not isometric. It can not be analytically embedded (smooth of class ) into Euclidean 3-space. Mapping it into 3-space requires one to stretch it, in which case it looks like a regular torus. For example, in the following map: If and in the above flat torus parametrization form a unit vector then u, v, and parameterize the unit 3-sphere as Hopf coordinates. In particular, for certain very specific choices of a square flat torus in the 3-sphere S3, where above, the torus will partition the 3-sphere into two congruent solid tori subsets with the aforesaid flat torus surface as their common boundary. One example is the torus defined by Other tori in having this partitioning property include the square tori of the form , where is a rotation of 4-dimensional space , or in other words is a member of the Lie group . It is known that there exists no (twice continuously differentiable) embedding of a flat torus into 3-space. (The idea of the proof is to take a large sphere containing such a flat torus in its interior, and shrink the radius of the sphere until it just touches the torus for the first time. Such a point of contact must be a tangency. But that would imply that part of the torus, since it has zero curvature everywhere, must lie strictly outside the sphere, which is a contradiction.) On the other hand, according to the Nash-Kuiper theorem, which was proven in the 1950s, an isometric C1 embedding exists. This is solely an existence proof and does not provide explicit equations for such an embedding. 
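In contrast with the three-dimensional situation just described, the four-dimensional embedding mentioned earlier is completely explicit. A sketch in Python, assuming the standard parametrization (R cos u, R sin u, P cos v, P sin v), which is one common way to realise such an embedding and is not copied from the text:

import math

def flat_torus_4d(u, v, R=1.0, P=1.0):
    # Rectangular flat torus embedded in Euclidean 4-space.
    # The induced metric is R^2 du^2 + P^2 dv^2, with constant coefficients,
    # so the Gaussian curvature is zero everywhere.
    return (R * math.cos(u), R * math.sin(u), P * math.cos(v), P * math.sin(v))

# With R^2 + P^2 = 1 the image lies on the unit 3-sphere, the situation
# described above in terms of Hopf coordinates.
R = P = 1 / math.sqrt(2)
point = flat_torus_4d(0.7, 2.1, R, P)
print(sum(c * c for c in point))  # 1.0 up to rounding

Writing down anything comparably explicit in three dimensions is much harder, as the next paragraph describes.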
In April 2012, an explicit C1 (continuously differentiable) isometric embedding of a flat torus into 3-dimensional Euclidean space was found. It is a flat torus in the sense that, as a metric space, it is isometric to a flat square torus. It is similar in structure to a fractal as it is constructed by repeatedly corrugating an ordinary torus at smaller scales. Like fractals, it has no defined Gaussian curvature. However, unlike fractals, it does have defined surface normals, yielding a so-called "smooth fractal". The key to obtaining the smoothness of this corrugated torus is to have the amplitudes of successive corrugations decreasing faster than their "wavelengths". (These infinitely recursive corrugations are used only for embedding into three dimensions; they are not an intrinsic feature of the flat torus.) This is the first time that any such embedding was defined by explicit equations or depicted by computer graphics. Conformal classification of flat tori In the study of Riemann surfaces, one says that any two smooth compact geometric surfaces are "conformally equivalent" when there exists a smooth homeomorphism between them that is both angle-preserving and orientation-preserving. The Uniformization theorem guarantees that every Riemann surface is conformally equivalent to one that has constant Gaussian curvature. In the case of a torus, the constant curvature must be zero. Then one defines the "moduli space" of the torus to contain one point for each conformal equivalence class, with the appropriate topology. It turns out that this moduli space M may be identified with a punctured sphere that is smooth except for two points that have less angle than 2π (radians) around them: One has total angle π and the other has total angle 2π/3. M may be turned into a compact space M* – topologically equivalent to a sphere – by adding one additional point that represents the limiting case as a rectangular torus approaches an aspect ratio of 0 in the limit. The result is that this compactified moduli space is a sphere with three points each having less than 2π total angle around them. (Such a point is termed a "cusp", and may be thought of as the vertex of a cone, also called a "conepoint".) This third conepoint will have zero total angle around it. Due to symmetry, M* may be constructed by glueing together two congruent geodesic triangles in the hyperbolic plane along their (identical) boundaries, where each triangle has angles of , , and . (The three angles of a hyperbolic triangle T determine T up to congruence.) As a result, the Gauss–Bonnet theorem shows that the area of each triangle can be calculated as , so it follows that the compactified moduli space M* has area equal to . The other two cusps occur at the points corresponding in M* to (a) the square torus (total angle ) and (b) the hexagonal torus (total angle ). These are the only conformal equivalence classes of flat tori that have any conformal automorphisms other than those generated by translations and negation. Genus g surface In the theory of surfaces there is a more general family of objects, the "genus" surfaces. A genus surface is the connected sum of two-tori. (And so the torus itself is the surface of genus 1.) To form a connected sum of two surfaces, remove from each the interior of a disk and "glue" the surfaces together along the boundary circles. (That is, merge the two boundary circles so they become just one circle.) 
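One invariant that behaves simply under this gluing is the Euler characteristic: removing an open disk from each surface lowers each characteristic by one, and the gluing circle contributes nothing, so the characteristic of a connected sum is the sum of the two characteristics minus 2. A short sketch in Python; the formula is standard surface topology, stated here for illustration rather than taken from the text:

def connected_sum_euler(chi_a, chi_b):
    # Euler characteristic of a connected sum of two closed surfaces.
    return chi_a + chi_b - 2

def genus_g_euler(g):
    # Connected sum of g tori (each of Euler characteristic 0),
    # starting from the sphere (characteristic 2) for g = 0.
    chi = 2
    for _ in range(g):
        chi = connected_sum_euler(chi, 0)
    return chi

print([genus_g_euler(g) for g in range(5)])  # [2, 0, -2, -4, -6], i.e. 2 - 2g

The same count, 2 minus twice the genus, is what underlies the generalization to polyhedra with several holes mentioned below.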
To form the connected sum of more than two surfaces, successively take the connected sum of two of them at a time until they are all connected. In this sense, a genus surface resembles the surface of doughnuts stuck together side by side, or a 2-sphere with handles attached. As examples, a genus zero surface (without boundary) is the two-sphere while a genus one surface (without boundary) is the ordinary torus. The surfaces of higher genus are sometimes called -holed tori (or, rarely, -fold tori). The terms double torus and triple torus are also occasionally used. The classification theorem for surfaces states that every compact connected surface is topologically equivalent to either the sphere or the connect sum of some number of tori, disks, and real projective planes. Toroidal polyhedra Polyhedra with the topological type of a torus are called toroidal polyhedra, and have Euler characteristic . For any number of holes, the formula generalizes to , where is the number of holes. The term "toroidal polyhedron" is also used for higher-genus polyhedra and for immersions of toroidal polyhedra. Automorphisms The homeomorphism group (or the subgroup of diffeomorphisms) of the torus is studied in geometric topology. Its mapping class group (the connected components of the homeomorphism group) is surjective onto the group of invertible integer matrices, which can be realized as linear maps on the universal covering space that preserve the standard lattice (this corresponds to integer coefficients) and thus descend to the quotient. At the level of homotopy and homology, the mapping class group can be identified as the action on the first homology (or equivalently, first cohomology, or on the fundamental group, as these are all naturally isomorphic; also the first cohomology group generates the cohomology algebra: Since the torus is an Eilenberg–MacLane space , its homotopy equivalences, up to homotopy, can be identified with automorphisms of the fundamental group); all homotopy equivalences of the torus can be realized by homeomorphisms – every homotopy equivalence is homotopic to a homeomorphism. Thus the short exact sequence of the mapping class group splits (an identification of the torus as the quotient of gives a splitting, via the linear maps, as above): The mapping class group of higher genus surfaces is much more complicated, and an area of active research. Coloring a torus The torus's Heawood number is seven, meaning every graph that can be embedded on the torus has a chromatic number of at most seven. (Since the complete graph can be embedded on the torus, and , the upper bound is tight.) Equivalently, in a torus divided into regions, it is always possible to color the regions using no more than seven colors so that no neighboring regions are the same color. (Contrast with the four color theorem for the plane.) de Bruijn torus In combinatorial mathematics, a de Bruijn torus is an array of symbols from an alphabet (often just 0 and 1) that contains every -by- matrix exactly once. It is a torus because the edges are considered wraparound for the purpose of finding matrices. Its name comes from the De Bruijn sequence, which can be considered a special case where is 1 (one dimension). Cutting a torus A solid torus of revolution can be cut by n (> 0) planes into at most parts. (This assumes the pieces may not be rearranged but must remain in place for all cuts.) 
The first 11 numbers of parts, for n = 0, 1, ..., 10 (including the case of n = 0, not covered by the above formulas), are as follows: 1, 2, 6, 13, 24, 40, 62, 91, 128, 174, 230, ...
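These values agree with the closed form (n³ + 3n² + 8n)/6 for n ≥ 1 planar cuts, with a single piece when n = 0; the closed form is the standard result for this counting problem, stated here because it matches every term listed above rather than quoted from the text. A quick check in Python:

def torus_pieces(n):
    # Maximum number of pieces of a solid torus after n planar cuts,
    # with the pieces left in place between cuts.
    return 1 if n == 0 else (n ** 3 + 3 * n ** 2 + 8 * n) // 6

print([torus_pieces(n) for n in range(11)])
# [1, 2, 6, 13, 24, 40, 62, 91, 128, 174, 230]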
Mathematics
Three-dimensional space
null
74819
https://en.wikipedia.org/wiki/Pupil
Pupil
The pupil is a hole located in the center of the iris of the eye that allows light to strike the retina. It appears black because light rays entering the pupil are either absorbed by the tissues inside the eye directly, or absorbed after diffuse reflections within the eye that mostly miss exiting the narrow pupil. The size of the pupil is controlled by the iris, and varies depending on many factors, the most significant being the amount of light in the environment. The term "pupil" was coined by Gerard of Cremona. In humans, the pupil is circular, but its shape varies between species; some cats, reptiles, and foxes have vertical slit pupils, goats and sheep have horizontally oriented pupils, and some catfish have annular types. In optical terms, the anatomical pupil is the eye's aperture and the iris is the aperture stop. The image of the pupil as seen from outside the eye is the entrance pupil, which does not exactly correspond to the location and size of the physical pupil because it is magnified by the cornea. On the inner edge lies a prominent structure, the collarette, marking the junction of the embryonic pupillary membrane covering the embryonic pupil. Function The iris is a contractile structure, consisting mainly of smooth muscle, surrounding the pupil. Light enters the eye through the pupil, and the iris regulates the amount of light by controlling the size of the pupil. This is known as the pupillary light reflex. The iris contains two groups of smooth muscles; a circular group called the sphincter pupillae, and a radial group called the dilator pupillae. When the sphincter pupillae contract, the iris decreases or constricts the size of the pupil. The dilator pupillae, innervated by sympathetic nerves from the superior cervical ganglion, cause the pupil to dilate when they contract. These muscles are sometimes referred to as intrinsic eye muscles. The sensory pathway (rod or cone, bipolar, ganglion) is linked with its counterpart in the other eye by a partial crossover of each eye's fibers. This causes the effect in one eye to carry over to the other. Effect of light The pupil gets wider in the dark and narrower in light. When narrow, the diameter is 2 to 4 millimeters. In the dark it will be the same at first, but will approach the maximum distance for a wide pupil 3 to 8 mm. However, in any human age group there is considerable variation in maximal pupil size. For example, at the peak age of 15, the dark-adapted pupil can vary from 4 mm to 9 mm with different individuals. After 25 years of age, the average pupil size decreases, though not at a steady rate. At this stage the pupils do not remain completely still, therefore may lead to oscillation, which may intensify and become known as hippus. The constriction of the pupil and near vision are closely tied. In bright light, the pupils constrict to prevent aberrations of light rays and thus attain their expected acuity; in the dark, this is not necessary, so it is chiefly concerned with admitting sufficient light into the eye. When bright light is shone on the eye, light-sensitive cells in the retina, including rod and cone photoreceptors and melanopsin ganglion cells, will send signals to the oculomotor nerve, specifically the parasympathetic part coming from the Edinger-Westphal nucleus, which terminates on the circular iris sphincter muscle. When this muscle contracts, it reduces the size of the pupil. This is the pupillary light reflex, which is an important test of brainstem function. 
Furthermore, the pupil will dilate if a person sees an object of interest. Clinical significance Effect of drugs If the drug pilocarpine is administered, the pupils will constrict and accommodation is increased due to the parasympathetic action on the circular muscle fibers, conversely, atropine will cause paralysis of accommodation (cycloplegia) and dilation of the pupil. Certain drugs cause constriction of the pupils, such as opioids. Other drugs, such as atropine, LSD, MDMA, mescaline, psilocybin mushrooms, cocaine and amphetamines may cause pupil dilation. The sphincter muscle has a parasympathetic innervation, and the dilator has a sympathetic innervation. In pupillary constriction induced by pilocarpine, not only is the sphincter nerve supply activated but that of the dilator is inhibited. The reverse is true, so control of pupil size is controlled by differences in contraction intensity of each muscle. Another term for the constriction of the pupil is miosis. Substances that cause miosis are described as miotic. Dilation of the pupil is mydriasis. Dilation can be caused by mydriatic substances such as an eye drop solution containing tropicamide. Diseases A condition called bene dilitatism occurs when the optic nerves are partially damaged. This condition is typified by chronically widened pupils due to the decreased ability of the optic nerves to respond to light. In normal lighting, people affected by this condition normally have dilated pupils, and bright lighting can cause pain. At the other end of the spectrum, people with this condition have trouble seeing in darkness. It is necessary for these people to be especially careful when driving at night due to their inability to see objects in their full perspective. This condition is not otherwise dangerous. Size The size of the pupil (often measured as diameter) can be a symptom of an underlying disease. Dilation of the pupil is known as mydriasis and contraction as miosis. Not all variations in size are indicative of disease however. In addition to dilation and contraction caused by light and darkness, it has been shown that solving simple multiplication problems affects the size of the pupil. The simple act of recollection can dilate the size of the pupil, however when the brain is required to process at a rate above its maximum capacity, the pupils contract. There is also evidence that pupil size is related to the extent of positive or negative emotional arousal experienced by a person. Myopic individuals have larger resting and dark dilated pupils than hyperopic and emmetropic individuals, likely due to requiring less accommodation (which results in pupil constriction). Some humans are able to exert direct control over their iris muscles, giving them the ability to manipulate the size of their pupils (i.e. dilating and constricting them) on command, without any changes in lighting condition or eye accommodation state. However, this ability is likely very rare and its purpose or advantages over those without it are unclear. Animals Not all animals have circular pupils. Some have slits or ovals which may be oriented vertically, as in crocodiles, vipers, cats and foxes, or horizontally as in some rays, flying frogs, mongooses and artiodactyls such as elk, red deer, reindeer and hippopotamus, as well as the domestic horse. Goats, sheep, toads and octopus pupils tend to be horizontal and rectangular with rounded corners. 
Some skates and rays have crescent shaped pupils, gecko pupils range from circular, to a slit, to a series of pinholes, and the cuttlefish pupil is a smoothly curving W shape. Although human pupils are normally circular, abnormalities like colobomas can result in unusual pupil shapes, such as teardrop, keyhole or oval pupil shapes. There may be differences in pupil shape even between closely related animals. In felids, there are differences between small- and large eyed species. The domestic cat (Felis sylvestris domesticus) has vertical slit pupils, its large relative the Siberian tiger (Panthera tigris altaica) has circular pupils and the Eurasian lynx (Lynx lynx) is intermediate between those of the domestic cat and the Siberian tiger. A similar difference between small and large species may be present in canines. The small red fox (Vulpes vulpes) has vertical slit pupils whereas their large relatives, the gray wolf (Canis lupus lupus) and domestic dogs (Canis lupus familiaris) have round pupils. Evolution and adaptation One explanation for the evolution of slit pupils is that they can exclude light more effectively than a circular pupil. This would explain why slit pupils tend to be found in the eyes of animals with a crepuscular or nocturnal lifestyle that need to protect their eyes during daylight. Constriction of a circular pupil (by a ring-shaped muscle) is less complete than closure of a slit pupil, which uses two additional muscles that laterally compress the pupil. For example, the cat's slit pupil can change the light intensity on the retina 135-fold compared to 10-fold in humans. However, this explanation does not account for circular pupils that can be closed to a very small size (e.g., 0.5 mm in the tarsier) and the rectangular pupils of many ungulates which do not close to a narrow slit in bright light. An alternative explanation is that a partially constricted circular pupil shades the peripheral zones of the lens which would lead to poorly focused images at relevant wavelengths. The vertical slit pupil allows for use of all wavelengths across the full diameter of the lens, even in bright light. It has also been suggested that in ambush predators such as some snakes, vertical slit pupils may aid in camouflage, breaking up the circular outline of the eye. Activity pattern and behavior In a study of Australian snakes, pupil shapes correlated both with diel activity times and with foraging behavior. Most snake species with vertical pupils were nocturnal and also ambush foragers, and most snakes with circular pupils were diurnal and active foragers. Overall, foraging behaviour predicted pupil shape accurately in more cases than did diel time of activity, because many active-foraging snakes with circular pupils were not diurnal. It has been suggested that there may be a similar link between foraging behaviour and pupil shape amongst the felidae and canidae discussed above. A 2015 study confirmed the hypothesis that elongated pupils have increased dynamic range, and furthered the correlations with diel activity. However it noted that other hypotheses could not explain the orientation of the pupils. They showed that vertical pupils enable ambush predators to optimise their depth perception, and horizontal pupils to optimise the field of view and image quality of horizontal contours. They further explained why elongated pupils are correlated with the animal's height. Society and culture The pupil plays a role in eye contact and nonverbal communication. 
The voluntary or involuntary enlargement or dilation of the pupils indicates cognitive arousal, interest in the subject of attention, and/or sexual arousal. On the other hand, when the pupil is voluntarily or involuntarily contracted, it could indicate the opposite - disinterest or disgust. Exceptionally large or dilated pupils are also perceived to be an attractive feature in body language. In a surprising number of unrelated languages, the etymological meaning of the term for pupil is "little person". This is true, for example, of the word pupil itself: this comes into English from Latin pūpilla, which means "doll, girl", and is a diminutive form of pupa, "girl". (The double meaning in Latin is preserved in English, where pupil means both "schoolchild" and "dark central portion of the eye within the iris".) This may be because the reflection of one's image in the pupil is a minuscule version of one's self. In the Old Babylonian period (c. 1800-1600 BC) in ancient Mesopotamia, the expression "protective spirit of the eye" is attested, perhaps arising from the same phenomenon. The English phrase apple of my eye arises from an Old English usage, in which the word apple meant not only the fruit but also the pupil or eyeball.
Biology and health sciences
Visual system
Biology
74844
https://en.wikipedia.org/wiki/Glasses
Glasses
Glasses, also known as eyeglasses or spectacles, are vision eyewear with clear or tinted lenses mounted in a frame that holds them in front of a person's eyes, typically utilizing a bridge over the nose and hinged arms, known as temples or temple pieces, that rest over the ears for support. Glasses are typically used for vision correction, such as with reading glasses and glasses used for nearsightedness; however, without the specialized lenses, they are sometimes used for cosmetic purposes. Safety glasses provide eye protection against flying debris for construction workers or lab technicians; these glasses may have protection on the sides of the eyes as well as in the lenses. Some types of safety glasses are used to protect against visible and near-visible light or radiation. Glasses are worn for eye protection in some sports, such as squash. Glasses wearers may use a strap to prevent the glasses from falling off. Wearers of glasses that are used only part of the time may have the glasses attached to a cord that goes around their neck to prevent the loss and breaking of the glasses. Sunglasses allow for better vision in bright daylight and are used to protect one's eyes against damage from excessive levels of ultraviolet light. Typical sunglasses lenses are tinted for protection against bright light or polarized to remove glare; photochromic glasses are clear or lightly tinted in dark or indoor conditions, but turn into sunglasses when they come into contact with ultraviolet light. Most over-the-counter sunglasses do not have corrective power in the lenses; however, special prescription sunglasses can be made. People with conditions that have photophobia as a primary symptom (like certain migraine disorders) often wear sunglasses or precision tinted glasses, even indoors and at night. Specialized glasses may be used for viewing specific visual information, for example, 3D glasses for 3D films (stereoscopy). Sometimes glasses are worn purely for fashion or aesthetic purposes. Even with glasses used for vision correction, a wide range of fashions are available, using plastic, metal, wire, and other materials for frames. Most glasses lens are made of plastic, polyethylene, and glass. Types Glasses can be marked or found by their primary function, but also appear in combinations such as prescription sunglasses or safety glasses with enhanced magnification. Corrective Corrective lenses are used to correct refractive errors by bending the light entering the eye in order to alleviate the effects of conditions such as nearsightedness (myopia), farsightedness (hypermetropia) or astigmatism. The ability of one's eyes to accommodate their focus to near and distant focus alters over time. A common condition in people over forty years old is presbyopia, which is caused by the eye's crystalline lens losing elasticity, progressively reducing the ability of the lens to accommodate (i.e. to focus on objects close to the eye). Few people have a pair of eyes that show exactly equal refractive characteristics; one eye may need a "stronger" (i.e. more refracting) lens than the other. Corrective lenses bring the image back into focus on the retina. They are made to conform to the prescription of an ophthalmologist or optometrist. A lensmeter can be used to verify the specifications of an existing pair of glasses. Corrective eyeglasses can significantly improve the life quality of the wearer. 
Not only do they enhance the wearer's visual experience, but can also reduce problems that result from eye strain, such as headaches or squinting. The most common type of corrective lens is "single vision", which has a uniform refractive index. For people with presbyopia and hyperopia, bifocal and trifocal glasses provide two or three different refractive indices, respectively, and progressive lenses have a continuous gradient. Lenses can also be manufactured with high refractive indices, which allow them to be more lightweight and thinner than their counterparts with "low" refractive indices. Reading glasses provide a separate set of glasses for focusing on close by objects. Reading glasses are available without prescription from drugstores, and offer a cheap, practical solution, though these have a pair of simple lenses of equal power, and so will not correct refraction problems like astigmatism or refractive or prismatic variations between the left and right eye. For the total correction of the individual's sight, glasses complying to a recent ophthalmic prescription are required. People who need glasses to see often have corrective lens restrictions on their driver's licenses that require them to wear their glasses every time they drive or risk fines or jail time. Some militaries issue prescription glasses to servicemen and women. These are typically GI glasses. Many state prisons in the United States issue glasses to inmates, often in the form of clear plastic aviators. Adjustable-focus eyeglasses might be used to replace bifocals or trifocals, or might be used to produce cheaper single-vision glasses (since they do not have to be custom-manufactured for every person). Pinhole glasses are a type of corrective glasses that do not use a lens. Pinhole glasses do not actually refract the light or change focal length. Instead, they create a diffraction limited system, which has an increased depth of field, similar to using a small aperture in photography. This form of correction has many limitations that prevent it from gaining popularity in everyday use. Pinhole glasses can be made in a DIY fashion by making small holes in a piece of card which is then held in front of the eyes with a strap or cardboard arms. Glasses may also house other corrective or assistive devices. After the development of the transistor in the 1940s, combined eyeglass-hearing aids became popular. With thick-rimmed glasses the fashion at the time, a hearing aid could be concealed in the temple part of the frame. These fell out of fashion after the 1970s, but there are still occasions when combined eyeglass-hearing aids may be useful. Safety Safety glasses are worn to protect the eyes in various situations. They are made with break-proof plastic lenses to protect the eye from flying debris or other matter. Construction workers, factory workers, machinists and lab technicians are often required to wear safety glasses to shield the eyes from flying debris or hazardous splatters such as blood or chemicals. As of 2017, dentists and surgeons in Canada and other countries are required to wear safety glasses to protect against infection from patients' blood or other body fluids. There are also safety glasses for welding, which are styled like wraparound sunglasses, but with much darker lenses, for use in welding where a full-sized welding helmet is inconvenient or uncomfortable. These are often called "flash goggles" because they provide protection from welding flash. 
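Returning briefly to corrective lenses: the thickness advantage of high-index materials mentioned above can be estimated with the thin-lens lensmaker's equation and the sagitta approximation. A rough sketch in Python; the powers, refractive indices, and blank diameter are illustrative values chosen for the example, not figures from the text:

import math

def plano_convex_radius(power_diopters, index):
    # Thin-lens lensmaker's equation for a plano-convex lens:
    # power = (n - 1) / R, so R = (n - 1) / power (R in metres).
    return (index - 1) / power_diopters

def sagitta(radius_m, semi_diameter_m):
    # Depth of the curved surface over the lens aperture; for shallow curves
    # s ~ h^2 / (2R), which sets the centre bulge of a plano-convex lens.
    return semi_diameter_m ** 2 / (2 * radius_m)

power = 4.0            # a +4.00 dioptre lens
semi_diameter = 0.025  # a 50 mm wide lens blank
for n in (1.50, 1.74):
    R = plano_convex_radius(power, n)
    print(n, round(R, 3), round(1000 * sagitta(R, semi_diameter), 2))
# index 1.50 needs R ~ 0.125 m and about 2.5 mm of sag;
# index 1.74 gets by with R ~ 0.185 m and about 1.69 mm, hence a thinner lens.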
Nylon frames are usually used for protective eyewear for sports because of their lightweight and flexible properties. Unlike most regular glasses, safety glasses often include protection beside the eyes as well as in front of the eyes. Sunglasses Sunglasses provide more comfort and protection against bright light and often against ultraviolet (UV) light. To properly protect the eyes from the dangers of UV light, sunglasses should have UV-400 blocker to provide good coverage against the entire light spectrum that poses a danger. Light polarization is an added feature that can be applied to sunglass lenses. Polarization filters are positioned to remove horizontally polarized rays of light, which eliminates glare from horizontal surfaces (allowing wearers to see into water when reflected light would otherwise overwhelm the scene). Polarized sunglasses may present some difficulties for pilots since reflections from water and other structures often used to gauge altitude may be removed. Liquid-crystal displays emit polarized light, making them sometimes difficult to view with polarized sunglasses. Sunglasses may be worn for aesthetic purposes, or simply to hide the eyes. Examples of sunglasses that were popular for these reasons include tea shades and mirrorshades. Many blind people wear nearly opaque glasses to hide their eyes for cosmetic reasons. Many people with light sensitivity conditions wear sunglasses or other tinted glasses to make the light more tolerable. Sunglasses may also have corrective lenses, which requires a prescription. Clip-on sunglasses or sunglass clips can be attached to another pair of glasses. Some wrap-around sunglasses are large enough to be worn over another pair of glasses. Otherwise, many people opt to wear contact lenses to correct their vision so that standard sunglasses can be used. Mixed double-frame (Flip glasses) The double frame uplifting glasses have one moving frame with one pair of lenses and the basic fixed frame with another pair of lenses (optional), that are connected by four-bar linkage. For example, sun lenses could be easily lifted up and down while mixed with myopia lenses that always stay on. Presbyopia lenses could be also combined and easily removed from the field of view if needed without taking off glasses. These glasses are often used for drivers going through tunnels, with the upper frame serving as sunglasses and the second frame as transparent lenses. 3D glasses The illusion of three dimensions on a two-dimensional surface can be created by providing each eye with different visual information. 3D glasses create the illusion of three dimensions by filtering a signal containing information for both eyes. The signal, often light reflected off a movie screen or emitted from an electronic display, is filtered so that each eye receives a slightly different image. The filters only work for the type of signal they were designed for. Anaglyph 3D glasses have a different colored filter for each eye, typically red and blue or red and green. A polarized 3D system on the other hand uses polarized filters. Polarized 3D glasses allow for color 3D, while the red-blue lenses produce an image with distorted coloration. An active shutter 3D system uses electronic shutters. Head-mounted displays can filter the signal electronically and then transmit light directly into the viewer's eyes. Anaglyph and polarized glasses are distributed to audiences at 3D movies. Polarized and active shutter glasses are used with many home theaters. 
Head-mounted displays are used by a single person, but the input signal can be shared between multiple units. Magnification (bioptics) Glasses can also provide magnification that is useful for people with vision impairments or specific occupational demands. An example would be bioptics or bioptic telescopes which have small telescopes mounted on, in, or behind their regular lenses. Newer designs use smaller lightweight telescopes, which can be embedded into the corrective glass and improve aesthetic appearance (mini telescopic spectacles). They may take the form of self-contained glasses that resemble goggles or binoculars, or may be attached to existing glasses. Recumbent glasses Recumbent or prism glasses are glasses that use a prism with a 90° refraction to allow the wearer to read or view a screen while lying on their back. Developed by Liverpudlian ophthalmologist Andrew McKie Reid in the 1930s to assist people bedbound by chronic illness or spinal injury, recumbent glasses have more recently been marketed not simply as an assistive device but also as 'lazy glasses'. They do not assist with vision, although they can be worn over regular corrective glasses. Yellow-tinted computer/gaming glasses Yellow-tinted glasses are a type of glasses with a minor yellow tint. They perform a slight color correction, on top of reducing eyestrain from lack of blinking. They may also be considered minor corrective non-prescription glasses. Depending on the company, these computer or gaming glasses can also filter out high energy blue and ultra-violet light from LCD screens, fluorescent lighting, and other sources of light. This allows for reduced eye-strain. These glasses can be ordered as standard or prescription lenses that fit into standard optical frames. Blue-light blocking glasses By the end of the 2010s, eyeglasses that filter out blue light from computers, smartphones and tablets are becoming increasingly popular in response to concerns about problems caused by blue light overexposure. The problems claimed range from dry eyes to eye strain, sleep cycle disruption, up to macular degeneration which can cause partial blindness. They may also block out ultraviolet (UV) radiation. However, there is no measurable UV light from computer monitors. The problem of computer vision syndrome (CVS) can result from focusing the eyes on a screen for long, continuous periods. Many times the glasses do not appear to have much of a tint, or, if any, a slight yellow tint, but they may be more heavily tinted. Long hours of computer use (not blue light) may cause eye strain. Many eye symptoms caused by computer use will lessen after the usage of the computer is stopped. Decreasing evening screen time and setting devices to night mode will improve sleep. Several studies have shown that blue light from computers does not lead to eye diseases, including macular degeneration. The total amount of light entering the eyes can be adjusted without glasses using the screen brightness settings. Similarly, the blue light can often specifically be adjusted using the "night mode" of different operating systems, which can usually be activated outside of nighttime hours. The American Academy of Ophthalmology (AAO) does not recommend special eyewear for computer use, although it recommends using prescription glasses measured specifically for computer screen distance (depending on individuals, but possibly 20–26 inches from the face), which are not the same as "blue-light blocking" glasses. 
The position of the College of Optometrists (UK) is that "the best scientific evidence currently available does not support the use of blue-blocking spectacle lenses in the general population to improve visual performance, alleviate the symptoms of eye fatigue or visual discomfort, improve sleep quality or conserve macula health." Frames The ophthalmic frame is the part of a pair of glasses that is designed to hold the lenses in the proper position. Ophthalmic frames come in a variety of styles, sizes, materials, shapes, and colors. Parts pair of eye wires or rims surrounding and holding the lenses in place bridge which connects the two eye wires chassis, the combination of the eye wires and the bridge top bar or brow bar, a bar just above the bridge providing structural support and/or style enhancement (country/Grandpa style). The addition of a top bar makes a pair of glasses aviator eyeglasses pair of brows or caps, plastic or metal caps which fit over the top of the eye wires for style enhancement and to provide additional support for the lenses. The addition of brows makes a pair of glasses browline glasses pair of nose pads that allows a comfortable resting of the eye wires on the nose pair of pad arms connect the nose pads to the eye wires pair of temples (earpieces) on either side of the skull pair of temple tips at the ends of the temples pair of end pieces connect the eye wires via the hinges to the temples pair of frame-front end pieces pair of hinges connect the end pieces to the temples, allowing a swivel movement. Spring-loaded flex hinges are a variant that is equipped with a small spring that affords the temples a greater range of movement and does not limit them to the traditional, 90-degree angle. Temple types Skull temples: bend down behind the ears, follow the contour of the skull and rest evenly against the skull Library temples: generally straight and do not bend down behind the ears. Hold the glasses primarily through light pressure against the side of the skull Convertible temples: used either as library or skull temples depending on the bent Riding bow temples: curve around the ear and extend down to the level of the ear lobe. Used mostly on athletic, children's, and industrial safety frames Comfort cable temples: similar to the riding bow, but made from a springy cable of coiled metal, sometimes inside a plastic or silicone sleeve. The tightness of the curl can be adjusted along its whole length, allowing the back to fit the wearer's ear curve perfectly. Used for physically active wearers, children, and people with high prescriptions (heavy lenses). See the image of 1920s frames above. Materials Plastic and polymer Cellulose acetate Optyl, a type of hypoallergenic material made especially for eyeglass frames. It features a type of elasticity that returns the material to its original shape. Cellulose propionate, a molded, durable plastic 3D-printed plastic using super-fine polyamide powder and Selective laser sintering processes – see Mykita Mylon (The frames can be 3-D printed by Fused Filament Fabrication for pennies of ABS, PLA or nylon) Nylon Metal Various metals and alloys may be used to make glasses, such as gold, silver, aluminum, beryllium, stainless steel, titanium, Monel, and nickel titanium. Natural material Natural materials such as wood, bone, ivory, leather and semi-precious or precious stones may also be used. Corrective lens shape Corrective lenses can be produced in many different shapes from a circular lens called a lens blank. 
Lens blanks are cut to fit the shape of the frame that will hold them. Frame styles vary and fashion trends change over time, resulting in a multitude of lens shapes. For lower power lenses, there are few restrictions, allowing for many trendy and fashionable shapes. Higher power lenses can distort peripheral vision and may become thick and heavy if a large lens shape is used. However, if the lens is too small, it can drastically reduce the field of view. Bifocal, trifocal, and progressive lenses generally require a taller lens shape to leave room for the different segments while preserving an adequate field of view through each segment. Frames with rounded edges are the most efficient for correcting myopic prescriptions, with perfectly round frames being the most efficient. Before the advent of eyeglasses as a fashion item, when frames were constructed with only functionality in mind, virtually all eyeglasses were either round, oval, panto, rectangular, octagonal, or square. It was not until glasses began to be seen as an accessory that different shapes were introduced to be more aesthetically pleasing than functional. History Precursors Scattered evidence exists for use of visual aid devices in Greek and Roman times, most prominently the use of an emerald by Emperor Nero as mentioned by Pliny the Elder. The use of a convex lens to form an enlarged/magnified image was most likely described in Ptolemy's Optics (which survives only in a poor Arabic translation). Ptolemy's description of lenses was commented upon and improved by Ibn Sahl (10th century) and most notably by Alhazen (Book of Optics, ). Latin translations of Ptolemy's Optics and of Alhazen became available in Europe in the 12th century, coinciding with the development of "reading stones". There are claims that single lens magnifying glasses were being used in China during the Northern Song dynasty (960–1127). Robert Grosseteste's treatise De iride (On the Rainbow), written between 1220 and 1235, mentions using optics to "read the smallest letters at incredible distances". A few years later in 1262, Roger Bacon is also known to have written on the magnifying properties of lenses. The development of the first eyeglasses took place in northern Italy in the second half of the 13th century. Independently of the development of optical lenses, some cultures developed "sunglasses" for eye protection, without any corrective properties. For example, flat panes of smoky quartz were used in 12th-century China, and the Inuit have used snow goggles for eye protection. Invention The earliest recorded comment on the use of lenses for optical purposes was made in 1268 by Roger Bacon. The first eyeglasses were estimated to have been made in Central Italy, most likely in Pisa or Florence, by about 1290: In a sermon delivered on 23 February 1306, the Dominican friar Giordano da Pisa (–1311) wrote "It is not yet twenty years since there was found the art of making eyeglasses, which make for good vision ... And it is so short a time that this new art, never before extant, was discovered. ... I saw the one who first discovered and practiced it, and I talked to him." Giordano's colleague Friar Alessandro della Spina of Pisa (d. 1313) was soon making eyeglasses. The Ancient Chronicle of the Dominican Monastery of St. Catherine in Pisa records: "Eyeglasses, having first been made by someone else, who was unwilling to share them, he [Spina] made them and shared them with everyone with a cheerful and willing heart." 
Venice quickly became an important center of manufacture, especially due to using the high-quality glass made at Murano. By 1301, there were guild regulations in Venice governing the sale of eyeglasses and a separate guild of Venetian spectacle makers was formed in 1320. In the fourteenth century, they were very common objects: Francesco Petrarca says in one of his letters that, until he was 60, he did not need glasses, and Franco Sacchetti mentions them often in his Trecentonovelle. The earliest pictorial evidence for the use of eyeglasses is Tommaso da Modena's 1352 portrait of the cardinal Hugh de Saint-Cher reading in a scriptorium. Another early example would be a depiction of eyeglasses found north of the Alps in an altarpiece of the church of Bad Wildungen, Germany, in 1403. These early glasses had convex lenses that could correct both hyperopia (farsightedness), and the presbyopia that commonly develops as a symptom of aging. Although concave lenses for myopia (near-sightedness) had made their first appearance in the mid-15th century, it was not until 1604 that Johannes Kepler published the first correct explanation as to why convex and concave lenses could correct presbyopia and myopia. Early frames for glasses consisted of two magnifying glasses riveted together by the handles so that they could grip the nose. These are referred to as "rivet spectacles". The earliest surviving examples were found under the floorboards at Kloster Wienhausen, a convent near Celle in Germany; they have been dated to circa 1400. The world's first specialist shop for spectacles—what we might regard today as an optician—opened in Strasbourg (then Holy Roman Empire, now France) in 1466. Other claims The 17th-century claim by Francesco Redi that Salvino degli Armati of Florence invented eyeglasses in the 13th century has been exposed as erroneous. Marco Polo is mistakenly claimed to have encountered eyeglasses during his travels in China in the 13th century. However, no such evidence appears in his accounts. Indeed, the earliest mentions of eyeglasses in China occur in the 15th century and those Chinese sources state that eyeglasses were imported. In 1907, Professor Berthold Laufer speculated, in his history of glasses, that for glasses to be mentioned in the literature of China and Europe at approximately the same time it was probable that they were not invented independently, and after ruling out the Turks, proposed India as a location. However, Joseph Needham speculated that the mention of glasses in the Chinese manuscript Laufer used "in part" to credit the prior invention of them in Asia did not exist in older versions of that manuscript, and the reference to them in later versions was added during the Ming dynasty. In 1971, Rishi Agarwal, in an article in the British Journal of Ophthalmology, states that Vyasatirtha was observed in possession of a pair of glasses in the 1520s, he argues that it "is, therefore, most likely that the use of lenses reached Europe via the Arabs, as did Hindu mathematics and the ophthalmological works of the ancient Hindu surgeon Sushruta", but all dates are given well after the existence of eyeglasses in Italy was established, including significant shipments of eyeglasses from Italy to the Middle East, with one shipment as large as 24,000 glasses, as well as a spectacles dispensary in Strasbourg in 1466. Later developments The American scientist Benjamin Franklin, who had both myopia and presbyopia, invented bifocals. 
Historians have from time to time produced evidence to suggest that others may have preceded him in the invention; however, a correspondence between George Whatley and John Fenno, editor of The Gazette of the United States, suggested that Franklin had indeed invented bifocals, and perhaps 50 years earlier than had been originally thought. The first lenses for correcting astigmatism were designed by the British astronomer George Airy in 1825. Over time, the construction of frames for glasses also evolved. Early eyepieces were designed to be either held in place by hand or by exerting pressure on the nose (pince-nez). Girolamo Savonarola suggested that eyepieces could be held in place by a ribbon passed over the wearer's head, this in turn secured by the weight of a hat. The modern style of glasses, held by temples passing over the ears, was developed sometime before 1727, possibly by the British optician Edward Scarlett. These designs were not immediately successful, however, and various styles with attached handles such as "scissors-glasses" and lorgnettes were also fashionable from the second half of the 18th century and into the early 19th century. In the early 20th century, Moritz von Rohr and Zeiss (with the assistance of H. Boegehold and A. Sonnefeld) developed the Zeiss Punktal spherical point-focus lenses that dominated the eyeglass lens field for many years. In 2008, Joshua Silver designed eyewear with adjustable corrective glasses. They work by using a built-in syringe to pump a silicone solution into a flexible lens. Despite the popularity of contact lenses and laser corrective eye surgery, glasses remain very common, as their technology has improved. For instance, it is now possible to purchase frames made of special memory metal alloys that return to their correct shape after being bent. Other frames have spring-loaded hinges. Either of these designs offer dramatically better ability to withstand the stresses of daily wear and the occasional accident. Modern frames are also often made from strong, lightweight materials such as titanium alloys, which were not available in earlier times. In fashion In the 1930s, "spectacles" were described as "medical appliances". Wearing spectacles was sometimes considered socially humiliating. In the 1970s, fashionable glasses started to become available through manufacturers, and governments also recognized the demand for stylized eyewear. Graham Pullin describes how devices for disability, like glasses, have traditionally been designed to camouflage against the skin and restore ability without being visible. In the past, design for disability has "been less about projecting a positive image as about trying not to project an image at all". Pullin uses the example of spectacles, traditionally categorized as a medical device for "patients", and outlines how they are now described as eyewear: a fashionable accessory. Much like other fashion designs and accessories, eyewear is created by designers, has reputable labels, and comes in collections, by season and designer. In recent years, it has become more common for consumers to purchase eyewear with non-prescription lenses as a fashion accessory. Society and culture Market United States The market for spectacles has been characterized as having highly inelastic demand. Advertising restrictions in the United States, for example, have correlated with higher prices, suggesting that adverts make the spectacles market more price-competitive. 
It has also been claimed to be monopolistically competitive, as in the case of Luxottica. There are claims that insufficiently free market competition inflates the prices of frames, which cost an average of $25–$50 U.S. to make, to an average retail price of $300 in the United States. This claim is disputed by some in the industry. The United States also prohibits the sale of glasses unless the user has a recent prescription from an optometrist or ophthalmologist, whereas in most of the world, glasses and contact lenses can be bought without needing to get a new eye exam first. This means that Americans who lose or break their glasses may be unable to see well until they can get, and pay for, an appointment with an optometrist. In most of the world, someone who has lost their glasses merely goes to the nearest store selling glasses and buys a replacement over the counter. Redistribution Some organizations like Lions Clubs International, Unite For Sight, ReSpectacle, and New Eyes for the Needy provide a way to donate glasses and sunglasses to people on low incomes or no income. Unite For Sight has redistributed more than 200,000 pairs. Fashion Many people require glasses for the reasons listed above. There are many shapes, colors, and materials that can be used when designing frames and lenses that can be utilized in various combinations. Oftentimes, the selection of a frame is made based on how it will affect the appearance of the wearer. Some people with good natural eyesight like to wear eyeglasses as a style accessory. In Japan, some companies ban women from wearing glasses. Personal image For most of their history, eyeglasses were seen as unfashionable, and carried several potentially negative connotations: wearing glasses caused individuals to be stigmatized and stereotyped as pious clergymen, as those in religious vocation were the most likely to be literate and therefore the most likely to need reading glasses, elderly, or physically weak and passive. The stigma began to fall away in the U.S. in the early 1900s when the popular Theodore Roosevelt was regularly photographed wearing eyeglasses, and in the 1910s when popular comedian Harold Lloyd wore a pair of horn-rimmed glasses as the "Glasses" character in his films. In the United Kingdom, wearing glasses was characterized in the nineteenth century as "a sure sign of the weakling and the mollycoddle", according to Neville Cardus, writing in 1928. "Tim" Killick was the first professional cricketer to play while wearing glasses "continuously", after his vision deteriorated in 1897. "With their aid he placed himself in the forefront among English professionals of all-round abilities." The American tenor Jan Peerce, plagued with poor eyesight, credited comedian Steve Allen for normalizing and even popularizing the wearing of eyeglasses in front of live television and stage audiences; prior to this, performers who read on early television were expected to squint or use contact lenses. Since then, eyeglasses have become an acceptable fashion item and often act as a key component in individuals' personal image. Musicians Buddy Holly and John Lennon became synonymous with the styles of eye-glasses they wore to the point that thick, black horn-rimmed glasses are often called "Buddy Holly glasses" and perfectly round metal eyeglass frames called "John Lennon glasses" (or, more recently, "Harry Potter glasses"). 
British comedic actor Eric Sykes was known in the United Kingdom for wearing thick, square, horn-rimmed glasses, which were in fact a sophisticated hearing aid that alleviated his deafness by allowing him to "hear" vibrations. Some celebrities have become so associated with their eyeglasses that they continued to wear them even after taking other measures against vision problems: U.S. Senator Barry Goldwater and comedian Drew Carey continued to wear non-prescription glasses after being fitted for contacts and getting laser eye surgery, respectively. Other celebrities have used glasses to differentiate themselves from the characters they play, such as Anne Kirkbride, who wore oversized 1980s-style round horn-rimmed glasses as Deirdre Barlow in the soap opera Coronation Street; and Masaharu Morimoto, who wears glasses to separate his professional persona as a chef from his stage persona as Iron Chef Japanese. In 2012, some NBA players wore lensless glasses with thick plastic frames like horn-rimmed glasses during post-game interviews, geek chic that draws comparisons to actor Jaleel White's infamous styling as TV character Steve Urkel. In superhero fiction, eyeglasses have become a standard component of various heroes' disguises as masks, allowing them to adopt a nondescript demeanor when they are not in their superhero personae: Superman is well known for wearing 1950s-style horn-rimmed glasses as Clark Kent, while Wonder Woman wears either round, Harold Lloyd-style glasses or 1970s-style bug-eye glasses as Diana Prince. An example of the halo effect is seen in the stereotype that those who wear glasses are intelligent. This belief can have positive consequences for people who wear glasses, for example in elections. Studies show that wearing glasses increases politicians' electoral success, at least in Western cultures. Styles In the 20th century, eyeglasses came to be considered a component of fashion; as such, various different styles have come in and out of popularity. Most are still in regular use, albeit with varying degrees of frequency. Aviator sunglasses Browline glasses Bug-eye glasses Cat eye glasses GI glasses Goggles Horn-rimmed glasses Lensless glasses Monocle Pince-nez Rimless glasses Sunglasses Wayfarer sunglasses Windsor glasses
Technology
Optical instruments
null
74845
https://en.wikipedia.org/wiki/Contact%20lens
Contact lens
Contact lenses, or simply contacts, are thin lenses placed directly on the surface of the eyes. Contact lenses are ocular prosthetic devices used by over 150 million people worldwide, and they can be worn to correct vision or for cosmetic or therapeutic reasons. In 2010, the worldwide market for contact lenses was estimated at $6.1 billion, while the US soft lens market was estimated at $2.1 billion. Multiple analysts estimated that the global market for contact lenses would reach $11.7 billion by 2015. the average age of contact lens wearers globally was 31 years old, and two-thirds of wearers were female. People choose to wear contact lenses for many reasons. Aesthetics and cosmetics are main motivating factors for people who want to avoid wearing glasses or to change the appearance or color of their eyes. Others wear contact lenses for functional or optical reasons. When compared with spectacles, contact lenses typically provide better peripheral vision, and do not collect moisture (from rain, snow, condensation, etc.) or perspiration. This can make them preferable for sports and other outdoor activities. Contact lens wearers can also wear sunglasses, goggles, or other eye wear of their choice without having to fit them with prescription lenses or worry about compatibility with glasses. Additionally, there are conditions such as keratoconus and aniseikonia that are typically corrected better with contact lenses than with glasses. History Origins and first functional prototypes Leonardo da Vinci is frequently credited with introducing the idea of contact lenses in his 1508 Codex of the eye, Manual D, wherein he described a method of directly altering corneal power by either submerging the head in a bowl of water or wearing a water-filled glass hemisphere over the eye. Neither idea was practically implementable in da Vinci's time. He did not suggest his idea be used for correcting vision; he was more interested in exploring mechanisms of accommodation. Descartes proposed a device for correcting vision consisting of a liquid-filled glass tube capped with a lens. However, the idea was impracticable, since the device was to be placed in direct contact with the cornea and thus would have made blinking impossible. In 1801, Thomas Young fashioned a pair of basic contact lenses based on Descartes' model. He used wax to affix water-filled lenses to his eyes, neutralizing their refractive power, which he corrected with another pair of lenses. John Herschel, in a footnote to the 1845 edition of the Encyclopedia Metropolitana, posed two ideas for the visual correction: the first "a spherical capsule of glass filled with animal jelly", the second "a mould of the cornea" that could be impressed on "some sort of transparent medium". Though Herschel reportedly never tested these ideas, they were later advanced by independent inventors, including Hungarian physician Joseph Dallos, who perfected a method of making molds from living eyes. This enabled the manufacture of lenses that, for the first time, conformed to the actual shape of the eye. Although Louis J. Girard invented a scleral contact lens in 1887, it was German ophthalmologist Adolf Gaston Eugen Fick who in 1888 fabricated the first successful afocal scleral contact lens. Approximately in diameter, the heavy blown-glass shells rested on the less sensitive rim of tissue surrounding the cornea and floated on a dextrose solution. 
He experimented with fitting the lenses initially on rabbits, then on himself, and lastly on a small group of volunteers, publishing his work, "Contactbrille", in the March 1888 edition of Archiv für Augenheilkunde. Large and unwieldy, Fick's lens could be worn only for a couple of hours at a time. August Müller of Kiel, Germany, corrected his own severe myopia with a more convenient blown-glass scleral contact lens of his own manufacture in 1888. The development of polymethyl methacrylate (PMMA) in the 1930s paved the way for the manufacture of plastic scleral lenses. In 1936, optometrist William Feinbloom introduced a hybrid lens composed of glass and plastic, and in 1937 it was reported that some 3,000 Americans were already wearing contact lenses. In 1939, Hungarian ophthalmologist Dr.István Györffy produced the first fully plastic contact lens. The following year, German optometrist Heinrich Wöhlk produced his own version of plastic lenses based on experiments performed during the 1930s. Corneal and rigid lenses (1949–1960s) In 1949, the first "corneal" lenses were developed. These were much smaller than the original scleral lenses, as they sat only on the cornea rather than across all of the visible ocular surface and could be worn up to 16 hours a day. PMMA corneal lenses became the first contact lenses to have mass appeal through the 1960s, as lens designs became more sophisticated with improving manufacturing technology. On October 18, 1964, in a television studio in Washington, D.C., Lyndon Baines Johnson became the first President in the history of the United States to appear in public wearing contact lenses, under the supervision of Dr. Alan Isen, who developed the first commercially viable soft-contact lenses in the United States. Early corneal lenses of the 1950s and 1960s were relatively expensive and fragile, resulting in the development of a market for contact lens insurance. Replacement Lens Insurance, Inc. (now known as RLI Corp.) phased out its original flagship product in 1994 after contact lenses became more affordable and easier to replace. Gas permeable and soft lenses (1959–present) One of the major disadvantages of PMMA lenses is that they allow no oxygen to get through to the conjunctiva and cornea, causing a number of adverse and potentially serious clinical effects. By the end of the 1970s and through the 1980s and 1990s, a range of oxygen-permeable but rigid materials were developed to overcome this problem. Chemist Norman Gaylord played a prominent role in the development of these new oxygen-permeable contact lenses. Collectively, these polymers are referred to as rigid gas permeable or RGP materials or lenses. Though all the above contact lens types—sclerals, PMMAs and RGPs—could be correctly referred to as "rigid" or "hard", the latter term is now used for the original PMMAs, which are still occasionally fitted and worn, whereas "rigid" is a generic term for all these lens types; thus, hard lenses (PMMAs) are a subset of rigid contact lenses. Occasionally, the term "gas permeable" is used to describe RGPs, which is somewhat misleading as soft contact lenses are also gas permeable in that they allow oxygen to get through to the ocular surface. The principal breakthrough in soft lenses was made by Czech chemists Otto Wichterle and Drahoslav Lím, who published their work "Hydrophilic gels for biological use" in the journal Nature in 1959. 
In 1965, National Patent Development Corporation (NPDC) bought the American rights to produce the lenses and then sublicensed the rights to Bausch & Lomb, which started to manufacture them in the United States. The Czech scientists' work led to the launch of the first hydrogel contact lenses in some countries in the 1960s and the first approval of the Soflens material by the US Food and Drug Administration (FDA) in 1971. These soft lenses were soon prescribed more often than rigid ones, due to the immediate and much greater comfort (rigid lenses require a period of adaptation before full comfort is achieved). Polymers from which soft lenses are manufactured improved over the next 25 years, primarily in terms of increasing oxygen permeability, by varying the ingredients. In 1972, British optometrist Rishi Agarwal was the first to suggest disposable soft contact lenses. In 1998, the first silicone hydrogel contact lenses were released by Ciba Vision in Mexico. These new materials combined the benefits of silicone, which has extremely high oxygen permeability, with the comfort and clinical performance of the conventional hydrogels that had been used for the previous 30 years. These contact lenses were initially advocated primarily for extended (overnight) wear, although more recently, daily (no overnight) wear silicone hydrogels have been launched. In a slightly modified molecule, a polar group is added without changing the structure of the silicone hydrogel. This is referred to as the Tanaka monomer because it was invented and patented by Kyoichi Tanaka of Menicon Co. of Japan in 1979. Second-generation silicone hydrogels, such as galyfilcon A (Acuvue Advance, Vistakon) and senofilcon A (Acuvue Oasys, Vistakon), use the Tanaka monomer. Vistakon improved the Tanaka monomer even further and added other molecules, which serve as an internal wetting agent. Comfilcon A (Biofinity, CooperVision) was the first third-generation polymer. Its patent claims that the material uses two siloxy macromers of diverse sizes that, when used in combination, produce very high oxygen permeability for a given water content. Enfilcon A (Avaira, CooperVision) is another third-generation material that is naturally wet; its water content is 46%. Types Contact lenses are classified in diverse ways, namely, by their primary function, material, wear schedule (how long a lens can be worn), and replacement schedule (how long before a lens needs to be discarded). Functions Correction of refractive error Corrective contact lenses are designed to improve vision, most commonly by correcting refractive error. This is done by directly focusing light so it enters the eye with the proper power for clear vision. A spherical contact lens bends light evenly in every direction (horizontally, vertically, etc.). Spherical lenses are typically used to correct myopia and hypermetropia. There are two ways that contact lenses can correct astigmatism. One way is with toric soft lenses that work essentially the same way as eyeglasses with cylindrical correction; a toric lens has a different focusing power horizontally than vertically, and as a result can correct for astigmatism. Another way is by using a rigid gas permeable lens; since most astigmatism is caused by the shape of the cornea, rigid lenses can improve vision because the front surface of the optical system is the perfectly spherical lens. Both approaches have advantages and drawbacks. 
Toric lenses must have the proper orientation to correct for astigmatism, so such lenses must have additional design characteristics to prevent them from rotating out of alignment. This can be done by weighting the bottom of the lens or by using other physical characteristics to rotate the lens back into position, but these mechanisms rarely work perfectly, so some misalignment is common and results in somewhat imperfect correction, and blurring of sight after blinking rotates the lens. Toric soft lenses have all the advantages of soft lenses in general, which are low initial cost, ease of fitting, and minimal adjustment period. Rigid gas permeable lenses usually provide superior optical correction but have become less popular relative to soft lenses due to higher initial costs, longer initial adjustment period, and more involved fitting. Correction of presbyopia Correction of presbyopia (a need for a reading prescription different from the prescription needed for distance) presents an additional challenge in the fitting of contact lenses. Two main strategies exist: multifocal lenses and monovision. Multifocal contact lenses (e.g. bifocals or progressives) are comparable to spectacles with bifocals or progressive lenses because they have multiple focal points. Multifocal contact lenses are typically designed for constant viewing through the center of the lens, but some designs do incorporate a shift in lens position to view through the reading power (similar to bifocal glasses). Monovision is the use of single-vision lenses (one focal point per lens) to focus an eye (typically the dominant one) for distance vision and the other for near work. The brain then learns to use this setup to see clearly at all distances. A technique called modified monovision uses multifocal lenses and also specializes one eye for distance and the other for near, thus gaining the benefits of both systems. Care is advised for persons with a previous history of strabismus and those with significant phorias, who are at risk of eye misalignment under monovision. Studies have shown no adverse effect to driving performance in adapted monovision contact lens wearers. Alternatively, a person may simply wear reading glasses over their distance contact lenses. Other types of vision correction For those with certain color deficiencies, a red-tinted "X-Chrom" contact lens may be used. Although such a lens does not restore normal color vision, it allows some color-blind people to distinguish colors better. Red-filtering contact lenses can also be an option for extreme light sensitivity in some visual deficiencies such as achromatopsia. ChromaGen contact lenses have been used and shown to have some limitations with vision at night although otherwise producing significant improvements in color vision. An earlier study showed very significant improvements in color vision and patient satisfaction. Later work that used these ChromaGen lenses with people with dyslexia in a randomised, double-blind, placebo-controlled trial showed highly significant improvements in reading ability over reading without the lenses. This system has been granted FDA approval for use in the United States. Magnification is another area being researched for future contact lens applications. Embedding of telescopic lenses and electronic components suggests that future uses of contact lenses may become extremely diverse. Cosmetic contact lenses A cosmetic contact lens is designed to change the appearance of the eye. 
These lenses may also correct refractive error. Although many brands of contact lenses are lightly tinted to make them easier to handle, cosmetic lenses worn to change eye color are far less common, accounting for only 3% of contact lens fits in 2004. In the United States, the FDA labels non-corrective cosmetic contact lenses as decorative contact lenses. Like any contact lens, cosmetic lenses carry risks of mild to serious complications, including ocular redness, irritation and infection. Due to their medical nature, colored contact lenses, similar to regular ones, are illegal to purchase in the United States without a valid prescription. Those with perfect vision can buy color contacts for cosmetic reasons, but they still need their eyes to be measured for a "plano" prescription, meaning one with zero vision correction. This is for safety reasons so the lenses will fit the eye without causing irritation or redness. Some colored contact lenses completely cover the iris, thus dramatically changing eye color. Other colored contact lenses merely tint the iris, highlighting its natural color. A new trend in Japan, South Korea and China is the circle contact lens, which extends the appearance of the iris onto the sclera by having a dark tinted area all around. The result is an appearance of a bigger, wider iris, a look reminiscent of dolls' eyes. Cosmetic lenses can have more direct medical applications. For example, some contact lenses can restore the appearance and, to some extent, the function of a damaged or missing iris. Therapeutic scleral lenses A scleral lens is a large, firm, transparent, oxygen-permeable contact lens that rests on the sclera and creates a tear-filled vault over the cornea. This positioning is usually chosen for patients whose corneas are too sensitive to support the lens directly. Scleral lenses may be used to improve vision and reduce pain and light sensitivity for people with disorders or injuries to the eye, such as severe dry eye syndrome (keratoconjunctivitis sicca), microphthalmia, keratoconus, corneal ectasia, Stevens–Johnson syndrome, Sjögren's syndrome, aniridia, neurotrophic keratitis (anesthetic corneas), complications post-LASIK, high order aberrations of the eye, complications post-corneal transplant and pellucid degeneration. Eye injuries such as surgical complications and distorted corneal implants, as well as chemical and burn injuries, may also be treated with scleral lenses. Therapeutic soft lenses Soft lenses are often used in the treatment and management of non-refractive disorders of the eye. A bandage contact lens allows the patient to see while protecting an injured or diseased cornea from the constant rubbing of blinking eyelids, thereby allowing it to heal. They are used in the treatment of conditions including bullous keratopathy, dry eyes, corneal abrasions and erosion, keratitis, corneal edema, descemetocele, corneal ectasia, Mooren's ulcer, anterior corneal dystrophy, and neurotrophic keratoconjunctivitis. Contact lenses that deliver drugs to the eye have also been developed. Materials Rigid lenses Glass lenses were never comfortable enough to gain widespread popularity. The first lenses to do so were those made from polymethyl methacrylate (PMMA or Perspex/Plexiglas), now commonly referred to as "hard" lenses. Their main disadvantage is that they do not allow oxygen to pass through to the cornea, which can cause a number of adverse, and often serious, clinical events. 
Starting in the late 1970s, improved rigid materials which were oxygen-permeable were developed. Contact lenses made from these materials are called rigid gas permeable lenses (RGPs). A rigid lens is able to cover the natural shape of the cornea with a new refracting surface. This means that a spherical rigid contact lens can correct corneal astigmatism. Rigid lenses can also be made as front-toric, back-toric, or bitoric designs. Rigid lenses can also correct corneas with irregular geometries, such as those with keratoconus or post-surgical ectasias. In most cases, patients with keratoconus see better through rigid lenses than through glasses. Rigid lenses are more chemically inert than soft lenses, allowing them to be worn in more challenging environments where chemical inertness matters. Soft lenses Soft lenses are more flexible than rigid lenses and can be gently rolled or folded without damaging the lens. While rigid lenses require a period of adaptation before comfort is achieved, new soft lens wearers typically report lens awareness rather than pain or discomfort. Hydrogel lenses rely on their water content to transmit oxygen through the lens to the cornea. As a result, higher water content lenses allow more oxygen to reach the cornea. In 1998, silicone hydrogel, or Si-hy, lenses became available. These materials have both the extremely high oxygen permeability of silicone and the comfort and clinical performance of the conventional hydrogels. Because silicone allows more oxygen permeability than water, oxygen permeability of silicone hydrogels is not tied to the lenses' water content. Lenses have now been developed with so much oxygen permeability that they are approved for overnight wear (extended wear). Lenses approved for daily wear are also available in silicone hydrogel materials. Current brands of soft lenses are either traditional hydrogel or silicone hydrogel. Because of drastic differences in oxygen permeability, replacement schedule, and other design characteristics, it is very important to follow the instructions of the eye care professional prescribing the lenses. When comparing traditional hydrogel soft lens contacts with silicone hydrogel versions, there is no clear evidence to recommend a superior lens. Disadvantages of silicone hydrogels are that they are slightly stiffer and the lens surface can be hydrophobic, thus less "wettable" – factors that can influence comfort of lens use. New manufacturing techniques and changes to multipurpose solutions have minimized these effects. Those new techniques are often broken down into 3 generations: 1st generation (plasma coating): A surface modification process called plasma coating alters the lens surface's hydrophobic nature; 2nd generation (wetting agents): Another technique incorporates internal rewetting agents to make the lens surface hydrophilic; 3rd generation (inherently wettable): A third process uses longer backbone polymer chains that result in less cross-linking and increased wetting without surface alterations or additive agents. Hybrid A small number of hybrid lenses exist. Typically, these contact lenses consist of a rigid center and a soft "skirt". A similar technique is the "piggybacking" of a smaller, rigid lens on the surface of a larger, soft lens. These techniques are often chosen to give the vision correction benefits of a rigid lens and the comfort of a soft lens. Wear schedule A "daily wear" (DW) contact lens is designed to be worn for one day and removed before sleeping. 
An "extended wear" (EW) contact lens is designed for continuous overnight wear, typically for up to 6 consecutive nights. Newer materials, such as silicone hydrogels, allow for even longer wear periods of up to 30 consecutive nights; these longer-wear lenses are often referred to as "continuous wear" (CW). EW and CW contact lenses can be worn overnight because of their high oxygen permeability. While awake, the eyes are mostly open, allowing oxygen from the air to dissolve into the tears and pass through the lens to the cornea. While asleep, oxygen is supplied from the blood vessels in the back of the eyelid. A lens hindering passage of oxygen to the cornea causes corneal hypoxia which can result in serious complications, such as corneal ulcer that, if left untreated, can permanently decrease vision. EW and CW contact lenses typically allow for a transfer of 5–6 times more oxygen than conventional softs, allowing the cornea to remain healthy, even with closed eyelids. Wearing lenses designed for daily wear overnight has an increased risk for corneal infections, corneal ulcers and corneal neovascularization—this latter condition, once it sets in, cannot be reversed and will eventually spoil vision acuity through diminishing corneal transparency. The most common complication of extended wear is giant papillary conjunctivitis (GPC), sometimes associated with a poorly fitting contact lens. Replacement schedule Contact lenses are often categorized by their replacement schedule. Single use lenses (called 1-day or daily disposables) are discarded after one use. Because they do not have to stand up to the wear and tear of repeated uses, these lenses can be made thinner and lighter, greatly improving their comfort. Lenses replaced frequently gather fewer deposits of allergens and germs, making these lenses preferable for patients with ocular allergies or for those who are prone to infection. Single-use lenses are also useful for people who wear contact lenses infrequently, or when losing a lens is likely or not easily replaced (such as when on vacation). They are also considered useful for children because cleaning or disinfecting is not needed, leading to improved compliance. Other disposable contact lenses are designed for replacement every two or four weeks. Quarterly or annual lenses, which used to be very common, are now much less so. Rigid gas permeable lenses are very durable and may last for several years without the need for replacement. PMMA hards were very durable and were commonly worn for 5 to 10 years but had several drawbacks. Lenses with different replacement schedules can be made of the same material. Although the materials are alike, differences in the manufacturing processes determine if the resulting lens will be a "daily disposable" or one recommended for two- or four-week replacement. However, sometimes manufacturers use absolutely identical lenses and just repackage them with different labels. Manufacturing Typically, soft contact lenses are mass-produced, while rigids are custom-made to exact specifications for the individual patient. Spin-cast lenses – A soft lens manufactured by whirling liquid silicone in a revolving mold at high speed. Diamond turning – This type is cut and polished on a CNC lathe. The lens starts out as a cylindrical disk held in the jaws of the lathe that is equipped with an industrial-grade diamond as the cutting tool. The CNC lathe may turn at nearly 6000 RPM as the cutter removes the desired amount of material from the inside of the lens. 
The concave (inner) surface of the lens is then polished with some fine abrasive paste, oil, and a small polyester cotton ball turned at high speeds. To hold the delicate lens in reverse manner, wax is used as an adhesive. The lens' convex (outer) surface is thus cut and polished by the same process. This method can be used to shape rigid as well as soft lenses. In the case of softs, the lens is cut from a dehydrated polymer that is rigid until water is reintroduced. Molded – Molding is used to manufacture some brands of soft contact lenses. Rotating molds are used and the molten material is added and shaped by centripetal forces. Injection molding and computer control are also used to create nearly perfect lenses. The lens is kept moist throughout the entire molding process and is never dried and rehydrated. Prescriptions The parameters specified in a contact lens prescription may include: Brand name Material Base curve radius (BC, BCR) Diameter (D, OAD) Optical power in diopters (dpt) Center thickness (CT) Prescriptions for contact lenses and glasses may be similar but are not interchangeable. Prescribing of contact lenses is usually restricted to various combinations of ophthalmologists, optometrists and opticians. An eye examination is needed to determine an individual's suitability for contact lens wear. This typically includes a refraction to determine the proper power of the lens and an assessment of the health of the eye's anterior segment. Many eye diseases inhibit contact lens wear, such as active infections, allergies, and dry eye. Keratometry is especially important in the fitting of rigid lenses. United States Contact lenses are prescribed by ophthalmologists, optometrists, or specially licensed opticians under the supervision of an eye doctor. They are typically ordered at the same office that conducts the eye exam and fitting. The Fairness to Contact Lens Consumers Act guarantees consumers a copy of their contact lens prescription, allowing them to obtain lenses at the provider of their choice. Usage Before touching the contact lens or the eye, it is important to wash hands thoroughly with soap and rinse well. Soaps containing moisturizers or allergens should be avoided as these can cause eye irritation. Drying of hands using towels or tissues before handling contact lenses can transfer lint (fluff) to the hands and, subsequently, to the lenses, causing irritation upon insertion. Towels, unless freshly laundered on high temperature wash, are frequently contaminated with large quantities of bacteria and, as such, should be avoided when handling lenses. Dust, lint and other debris may collect on the outside of contact lenses. Again, hand contact with this material, before handling contact lenses, may transfer it to the lenses themselves. Rinsing the case under a source of clean running water, before opening it, can help alleviate this problem. Next the lens should be removed from its case and inspected for defects (e.g. splits, folds, lint). A 'gritty' or rough appearance to the lens surface may indicate that a considerable quantity of proteins, lipids and debris has built up on it and that additional cleaning is required; this is often accompanied and felt by unusually high irritation upon insertion. Care should be taken to ensure the soft lens is not inserted inside-out. The edge of a lens turned inside out has a different appearance, especially when the lens is slightly folded. 
Insertion of an inside-out lens for a brief time (less than one minute) should not cause any damage to the eye. Some brands of lenses have markings on the rim that make it easier to tell the front of the lens apart from the back. Insertion Contact lenses are typically inserted into the eye by placing them on the pad of the index or middle finger with the concave side upward and then using that finger to place the lens on the eye. Rigid lenses should be placed directly on the cornea. Soft lenses may be placed on the sclera (white of the eye) and then slid into place. Another finger of the same hand, or a finger of the other hand, is used to keep the eye wide open. Alternatively, the user may close their eyes and then look towards their nose, sliding the lens into place over the cornea. Problems may arise if the lens folds, turns inside-out, slides off the finger prematurely, or adheres more tightly to the finger than the eye surface. A drop of solution may help the lens adhere to the eye. When the lens first contacts the eye, it should be comfortable. A brief period of irritation may occur, caused by a difference in pH and/or salinity between that of the lens solution and the tear. This discomfort fades quickly as the solution drains away and is replaced by the natural tears. However, if irritation persists, the cause could be a dirty, damaged, or inside-out lens. Removing and inspecting it for damage and proper orientation, and re-cleaning if necessary, should correct the problem. If discomfort continues, the lens should not be worn. In some cases, taking a break from lens wear for a day may correct the problem. In case of severe discomfort, or if it does not resolve by the next day, the person should be seen as soon as possible by an eye doctor to rule out potentially serious complications. Removal Removing contact lenses incorrectly can result in damage to the lens and injury to the eye, so certain precautions must be taken. Rigid contact lenses can best be removed by pulling the eyelid tight and then blinking, whereupon the lens drops out. With one finger on the outer corner of the eyelids, or lateral canthus, the person stretches the eyelids towards the ear; the increased tension of the eyelid margins against the edge of lens allows the blink to break the capillary action that adheres the lens to the eye. The other hand is typically cupped underneath the eye to catch the lens as it drops out. For soft lenses, which have a stronger adherence to the eye surface, this technique is less suitable. A soft contact lens may be removed by pinching the edge between the thumb and index finger. Moving the lens off the cornea first can improve comfort during removal and reduce risk of scratching the cornea with a fingernail. It is also possible to push or pull a soft lens far enough to the side or bottom of the eyeball to get it to fold then fall out, without pinching and thereby damaging it. If these techniques are used with a rigid lens, it may scratch the cornea. There are also small tools specifically for removing lenses. Usually made of flexible plastic, they resemble small tweezers, or plungers that suction onto the front of the lens. Typically, these tools are used only with rigid lenses. Extreme care must be exercised when using mechanical tools or fingernails to insert or remove contact lenses. Care Lens care varies depending on material and wear schedule. Daily disposables are discarded after a single use and thus require no cleaning. 
Other lenses need regular cleaning and disinfecting to prevent surface coating and infections. There are many ways to clean and care for contact lenses, typically called care systems or lens solutions: Multipurpose solutions The main attraction of multipurpose solutions is that the same solution can clean, rinse, disinfect and store lenses. Some multipurpose solutions also contain ingredients that improve the surface wettability and comfort of silicone hydrogel lenses. Studies showed that multipurpose solutions are ineffective against Acanthamoebae. There is preliminary research on creating a new multipurpose solution that kills amoeba. Hydrogen peroxide contact solutions Hydrogen peroxide can be used to disinfect contact lenses. Care should be taken not to get hydrogen peroxide in the eye because it is very painful and irritating. With "two-step" products, the hydrogen peroxide must be rinsed away with saline before the lenses may be worn. "One-step" systems allow the hydrogen peroxide to react completely, becoming pure water. Thus "one-step" hydrogen peroxide systems do not require the lenses to be rinsed before insertion, provided the solution has been given enough time to react. An exposure time of 2–3 hours to 3% hydrogen peroxide (non-neutralized solution) is sufficient to kill bacteria, HIV, fungi, and Acanthamoeba. This can be achieved by using a "two-step" product or a "one-step" tablet system if the catalytic tablet is not added until 2–3 hours have passed. However, the "one-step" catalytic disk systems are not effective against Acanthamoeba due to insufficient exposure time. Enzymatic cleaner Used for cleaning protein deposits off lenses, usually weekly, if the daily cleaner is not sufficient. Typically, this cleaner is in tablet form. Ultraviolet, vibration, or ultrasonic devices These devices are intended to disinfect and clean contact lenses. The lenses are inserted inside the portable device (running on batteries and/or plug-in) for 2 to 6 minutes, during which both microorganisms and protein build-up are supposed to be removed. However, these devices cannot be used to replace the manual rub and rinse method, because vibration and ultrasound cannot create the relative motion between contact lens and solution that is required for proper cleaning of the lens. These devices are not usually available from optical retailers but can be found in other stores. Rub and rinse method Contact lenses can be mechanically cleaned of more substantial protein, lipid and debris build-up by rubbing them between the clean pad of a finger and the palm of a hand, using a small amount of cleaning fluid as a lubricant; and by rinsing thereafter. This "rub and rinse" method is thought to be the most effective method for multipurpose solutions, and is the method indicated by the American Academy of Ophthalmology regardless of cleaning solution used. In 2010, the FDA recommended that manufacturers remove the "no rub" claim from product labeling, "because rub-and-rinse regimens help prevent microbial adhesion to the contact lens, help prevent formation of biofilms, and generally reduce the microbial load on the lens and the lens case." Physical rubbing devices This type of device mimics digital rubbing. The lenses are sandwiched by silicone parts inside the portable device. The device applies a gentle yet high-speed rubbing action on the lens surface and removes debris. Saline solution Sterile saline is used for rinsing the lens after cleaning and preparing it for insertion. 
Saline solutions do not disinfect, so they must be used in conjunction with some type of disinfection system. One advantage to saline is that it cannot cause an allergic response, so it is well suited for individuals with sensitive eyes or strong allergies. Daily cleaner Used to clean lenses on a daily basis. A few drops of cleaner are applied to the lens while it rests in the palm of the hand; the lens is rubbed for about 20 seconds with a clean fingertip (depending on the product) on each side. The lens must then be rinsed. This system is commonly used to care for rigid lenses. Water is not recommended for cleaning contact lenses. Insufficiently chlorinated tap water can lead to lens contamination, particularly by Acanthamoeba. On the other hand, sterile water will not kill any contaminants that get in from the environment. Aside from cleaning the contact lenses, the contact lens case should also be kept clean and replaced at least every three months. Contact lens solutions often contain preservatives such as benzalkonium chloride and benzyl alcohol. Preservative-free products usually have shorter shelf lives, but are better suited for individuals with an allergy or sensitivity to a preservative. In the past, thiomersal was used as a preservative. In 1989, thiomersal was responsible for about 10% of problems related to contact lenses. As a result, most products no longer contain thiomersal. Complications Contact lenses are generally safe as long as they are used correctly. Complications from contact lens wear affect roughly 5% of wearers yearly. Factors leading to eye damage vary, and improper use of a contact lens may affect the eyelid, the conjunctiva, and, most of all, the whole structure of the cornea. Poor lens care may lead to infections by various microorganisms including bacteria, fungi, and the amoeba Acanthamoeba (Acanthamoeba keratitis). Many complications arise when contact lenses are not worn as prescribed (improper wear schedule or lens replacement). Sleeping in lenses not designed or approved for extended wear is a common cause of complications. Many people go too long before replacing their contacts, wearing lenses designed for 1, 14, or 30 days of wear for multiple months or years. While this does save on the cost of lenses, it risks permanent damage to the eye and even loss of sight. For non-silicone-hydrogel lenses, one of the major factors that causes complications is that the contact lens is an oxygen barrier. The cornea needs a constant supply of oxygen to remain completely transparent and function as it should; it normally gets that oxygen from the surrounding air while awake, and from the blood vessels in the back of the eyelid while asleep. The most prominent risks associated with long-term, chronic low oxygen to the cornea include corneal neovascularization, increased epithelial permeability, bacterial adherence, microcysts, corneal edema, endothelial polymegethism, dry eye and potential increase in myopia. Much of the research into soft and rigid contact lens materials has centered on improving oxygen transmission through the lens. Silicone-hydrogel lenses available today have effectively eliminated hypoxia for most patients. Mishandling of contact lenses can also cause problems. Corneal abrasions can increase the chances of infection. When combined with improper cleaning and disinfection of the lens, the risk of infection further increases. 
Decreased corneal sensitivity after extended contact lens wear may cause a patient to miss some of the earliest symptoms of such complications. The way contact lenses interact with the natural tear layer is a major factor in determining lens comfort and visual clarity. People with dry eyes are particularly vulnerable to discomfort and episodes of brief blurry vision. Proper lens selection can minimize these effects. Long-term wear (over five years) of contact lenses may "decrease the entire corneal thickness and increase the corneal curvature and surface irregularity." Long-term wear of rigid contacts is associated with decreased corneal keratocyte density and increased number of epithelial Langerhans cells. All contact lenses sold in the United States are studied and approved as safe by the FDA when specific handling and care procedures, wear schedules, and replacement schedules are followed. Current research Contact lens sensors to monitor the ocular temperature have been demonstrated. Monitoring intraocular pressure with contact lens sensors is another area of contact lens research. A large segment of current contact lens research is directed towards the treatment and prevention of conditions resulting from contact lens contamination and colonization by foreign organisms. Clinicians tend to agree that the most significant complication of contact lens wear is microbial keratitis and that the most predominant microbial pathogen is Pseudomonas aeruginosa. Other organisms are also major causative factors in bacterial keratitis associated with contact lens wear, although their prevalence varies across different locations. These include both the Staphylococcus species (aureus and epidermidis) and the Streptococcus species, among others. Microbial keratitis is a serious focal point of current research due to its potentially devastating effect on the eye, including severe vision loss. One specific research topic of interest is how microbes such as Pseudomonas aeruginosa invade the eye and cause infection. Although the pathogenesis of microbial keratitis is not well understood, many different factors have been investigated. One group of researchers showed that corneal hypoxia exacerbated Pseudomonas binding to the corneal epithelium, internalization of the microbes, and induction of the inflammatory response. One way to alleviate hypoxia is to increase the amount of oxygen transmitted to the cornea. Although silicone-hydrogel lenses almost eliminate hypoxia in patients due to their very high levels of oxygen transmissibility, they also seem to provide a more efficient platform for bacterial contamination and corneal infiltration than other conventional hydrogel soft contact lenses. One study showed that Pseudomonas aeruginosa and Staphylococcus epidermidis adhere much more strongly to unworn silicone hydrogel contact lenses than conventional hydrogel lenses and that adhesion of Pseudomonas aeruginosa was 20 times stronger than that of Staphylococcus epidermidis. This might partly explain why Pseudomonas infections are the most predominant. However, another study conducted with worn and unworn silicone and conventional hydrogel contact lenses showed that worn silicone contact lenses were less prone to Staphylococcus epidermidis colonization than conventional hydrogel lenses. Besides bacterial adhesion and cleaning, micro and nano pollutants (biological and manmade) is an area of contact lens research that is growing. 
Small physical pollutants ranging from nanoplastics to fungal spores to plant pollen adhere to contact lens surfaces in high concentrations. It has been found that multipurpose solution and rubbing with fingers do not significantly clean the lenses. A group of researchers have suggested an alternative cleaning solution, PoPPR (polymer on polymer pollution removal). This cleaning technique takes advantage of a soft and porous polymer to physically peel pollutants off contact lenses. Another important area of contact lens research deals with patient compliance. Compliance is a major issue pertaining to the use of contact lenses because patient noncompliance often leads to contamination of the lens, storage case, or both. However, careful users can extend the wear of lenses through proper handling: there is, unfortunately, no disinterested research on the issue of "compliance" or the length of time a user can safely wear a lens beyond its stated use. The introduction of multipurpose solutions and daily disposable lenses has helped to alleviate some of the problems caused by inadequate cleaning, but new methods of combating microbial contamination are currently being developed. A silver-impregnated lens case has been developed which helps to eradicate any potentially contaminating microbes that come in contact with the lens case. Additionally, a number of antimicrobial agents are being developed that have been embedded into contact lenses themselves. Lenses with covalently attached selenium molecules have been shown to reduce bacterial colonization without adversely affecting the cornea of a rabbit eye, and octyl glucoside used as a lens surfactant significantly decreases bacterial adhesion. These compounds are of particular interest to contact lens manufacturers and prescribing optometrists because they do not require any patient compliance to effectively attenuate the effects of bacterial colonization. One area of research is in the field of bionic lenses. These are visual displays that include built-in electric circuits and light-emitting diodes and can harvest radio waves for their electric power. Bionic lenses can display information beamed from a mobile device, overcoming the small display size problem. The technology involves embedding nano- and microscale electronic devices in lenses. These lenses will also need to have an array of microlenses to focus the image so that it appears suspended in front of the wearer's eyes. The lens could also serve as a head-up display for pilots or gamers. Drug administration through contact lenses is also becoming an area of research. One application is a lens that releases anesthetic to the eye for post-surgery pain relief, especially after PRK (photorefractive keratectomy), in which the healing process takes several days. One experiment shows that silicone contact lenses that contain vitamin E deliver pain medication for up to seven days, compared with less than two hours for conventional lenses. Another line of contact lens research aims to address age-related macular degeneration (AMD). An international collaboration of researchers was able to develop a contact lens that can shift between magnified and normal vision. Previous solutions to AMD included bulky glasses or surgical implants. But the development of this new contact lens, which is made of polymethyl methacrylate, could offer an unobtrusive solution. 
In popular culture Films One of the earliest known motion pictures to introduce the use of contact lenses as a make-up artist's device for enhancing the eyes was by the innovative actor Lon Chaney in the 1926 film The Road to Mandalay to create the effect of a character who had a blind eye. Dr. Rueben Greenspoon applied them to Orson Welles for the film Citizen Kane in 1940. In the 1950s, contact lenses were starting to be used in British color horror films. An early example of this is the British actor Christopher Lee as the Dracula character in the 1958 color horror film Dracula, which helped to emphasize his horrific looking black pupils and red bloodshot eyes. Tony Curtis wore them in the 1968 film The Boston Strangler. Contact lenses were also used to better emphasize the sinister gaze of the demonic characters in 1968's Rosemary's Baby and 1973's The Exorcist. Colored custom-made contact lenses are now standard makeup for a number of special effects-based movies.
Technology
Optical instruments
null
74941
https://en.wikipedia.org/wiki/Body%20cavity
Body cavity
A body cavity is any space or compartment, or potential space, in an animal body. Cavities accommodate organs and other structures; cavities as potential spaces contain fluid. The two largest human body cavities are the ventral body cavity and the dorsal body cavity. The dorsal body cavity houses the brain and spinal cord. The membranes that surround the central nervous system organs (the brain and the spinal cord, in the cranial and spinal cavities) are the three meninges. The differently lined spaces contain different types of fluid. In the meninges, for example, the fluid is cerebrospinal fluid; in the abdominal cavity the fluid contained in the peritoneum is a serous fluid. In amniotes and some invertebrates the peritoneum lines their largest body cavity, called the coelom. Mammals Mammalian embryos develop two body cavities: the intraembryonic coelom and the extraembryonic coelom (or chorionic cavity). The intraembryonic coelom is lined by somatic and splanchnic lateral plate mesoderm, while the extraembryonic coelom is lined by extraembryonic mesoderm. The intraembryonic coelom is the only cavity that persists in the mammal at term, which is why its name is often contracted to simply coelomic cavity. Subdividing the coelomic cavity into compartments, for example, the pericardial cavity / pericardium, where the heart develops, simplifies discussion of the anatomies of complex animals. Cavitation in the early embryo is the process of forming the blastocoel, the fluid-filled cavity defining the blastula stage in non-mammals, or the blastocyst in mammals. Human body cavities The dorsal (posterior) cavity and the ventral (anterior) cavity are the largest body compartments. The dorsal body cavity includes the cranial cavity, enclosed by the skull and containing the brain, and the spinal cavity, enclosed by the spine and containing the spinal cord. The ventral body cavity includes the thoracic cavity, enclosed by the ribcage and containing the lungs and heart, and the abdominopelvic cavity. The abdominopelvic cavity can be divided into the abdominal cavity, enclosed by the ribcage and pelvis and containing the kidneys, ureters, stomach, intestines, liver, gallbladder, and pancreas, and the pelvic cavity, enclosed by the pelvis and containing the bladder, anus and reproductive system. Ventral body cavity The ventral cavity has two main subdivisions: the thoracic cavity and the abdominopelvic cavity. The thoracic cavity is the more superior subdivision of the ventral cavity, and is enclosed by the rib cage. The thoracic cavity contains the lungs surrounded by the pleural cavity, and the heart surrounded by the pericardial cavity, located in the mediastinum. The diaphragm forms the floor of the thoracic cavity and separates it from the more inferior abdominopelvic cavity. The abdominopelvic cavity is the largest cavity in the body, occupying the entire lower half of the trunk. Although no membrane physically divides the abdominopelvic cavity, it can be useful to distinguish between the abdominal cavity and the pelvic cavity. The abdominal cavity occupies the entire lower half of the trunk, anterior to the spine, and houses the organs of digestion. Just under the abdominal cavity, anterior to the buttocks, is the pelvic cavity. The pelvic cavity is funnel shaped, and is located inferior and anterior to the abdominal cavity, and houses the organs of reproduction. Dorsal body cavity The dorsal body cavity contains the cranial cavity and the spinal cavity. 
The cranial cavity is a large, bean-shaped cavity filling most of the upper skull where the brain is located. The spinal cavity is the very narrow, thread-like cavity running from the cranial cavity down the entire length of the spinal cord. In the dorsal cavity, the cranial cavity houses the brain, and the spinal cavity encloses the spinal cord. Just as the brain and spinal cord make up a continuous, uninterrupted structure, the cranial and spinal cavities that house them are also continuous. The brain and spinal cord are protected by the bones of the skull and vertebral column and by cerebrospinal fluid, a colorless fluid produced by the brain, which cushions the brain and spinal cord within the dorsal body cavity. Development At the end of the third week of gestation, the neural tube, which is a fold of one of the layers of the trilaminar germ disc, called the ectoderm, appears. This layer elevates and closes dorsally, while the gut tube rolls up and closes ventrally to create a "tube on top of a tube". The mesoderm, which is another layer of the trilaminar germ disc, holds the tubes together and the lateral plate mesoderm, the middle layer of the germ disc, splits to form a visceral layer associated with the gut and a parietal layer, which along with the overlying ectoderm, forms the lateral body wall. The space between the visceral and parietal layers of lateral plate mesoderm is the primitive body cavity. When the lateral body wall folds, it moves ventrally and fuses at the midline. The body cavity closes, except in the region of the connecting stalk. Here, the gut tube maintains an attachment to the yolk sac. The yolk sac is a membranous sac attached to the embryo, which provides nutrients and functions as the circulatory system of the very early embryo. The lateral body wall folds, pulling the amnion in with it so that the amnion surrounds the embryo and extends over the connecting stalk, which becomes the umbilical cord, which connects the fetus with the placenta. If the ventral body wall fails to close, ventral body wall defects can result, such as ectopia cordis, a congenital malformation in which the heart is abnormally located outside the thorax. Another defect is gastroschisis, a congenital defect in the anterior abdominal wall through which the abdominal contents freely protrude. Another possibility is bladder exstrophy, in which part of the urinary bladder is present outside the body. In normal circumstances, the parietal mesoderm will form the parietal layer of serous membranes lining the outside (walls) of the peritoneal, pleural, and pericardial cavities. The visceral layer will form the visceral layer of the serous membranes covering the lungs, heart, and abdominal organs. These layers are continuous at the root of each organ as the organs lie in their respective cavities. The peritoneum, a serous membrane that forms the lining of the abdominal cavity, forms in the gut layers and, in places, mesenteries extend from the gut as double layers of peritoneum. Mesenteries provide a pathway for vessels, nerves, and lymphatics to the organs. Initially, the gut tube from the caudal end of the foregut to the end of the hindgut is suspended from the dorsal body wall by dorsal mesentery. Ventral mesentery, derived from the septum transversum, exists only in the region of the terminal part of the esophagus, the stomach, and the upper portion of the duodenum. 
Function These cavities contain and protect delicate internal organs, and the ventral cavity allows for significant changes in the size and shape of the organs as they perform their functions. Anatomical structures are often described in terms of the cavity in which they reside. The body maintains its internal organization by means of membranes, sheaths, and other structures that separate compartments. The lungs, heart, stomach, and intestines, for example, can expand and contract without distorting other tissues or disrupting the activity of nearby organs. The ventral cavity includes the thoracic and abdominopelvic cavities and their subdivisions. The dorsal cavity includes the cranial and spinal cavities. Other animals Organisms can also be classified according to the type of body cavity they possess, such as pseudocoelomates and protostome coelomates. Coelom In amniotes and some invertebrates, the coelom is the large cavity lined by mesothelium, an epithelium derived from mesoderm. Organs formed inside the coelom can freely move, grow, and develop independently of the body wall, while fluid in the peritoneum cushions and protects them from shocks. Arthropods and most molluscs have a reduced (but still true) coelom, the hemocoel (of an open circulatory system) and the smaller gonocoel (a cavity that contains the gonads). Their hemocoel is often derived from the blastocoel.
Biology and health sciences
External anatomy and regions of the body
Biology
74964
https://en.wikipedia.org/wiki/Gauss%27s%20law
Gauss's law
In physics (specifically electromagnetism), Gauss's law, also known as Gauss's flux theorem (or sometimes Gauss's theorem), is one of Maxwell's equations. It is an application of the divergence theorem, and it relates the distribution of electric charge to the resulting electric field. Definition In its integral form, it states that the flux of the electric field out of an arbitrary closed surface is proportional to the electric charge enclosed by the surface, irrespective of how that charge is distributed. Even though the law alone is insufficient to determine the electric field across a surface enclosing any charge distribution, this may be possible in cases where symmetry mandates uniformity of the field. Where no such symmetry exists, Gauss's law can be used in its differential form, which states that the divergence of the electric field is proportional to the local density of charge. The law was first formulated by Joseph-Louis Lagrange in 1773, followed by Carl Friedrich Gauss in 1835, both in the context of the attraction of ellipsoids. It is one of Maxwell's equations, which form the basis of classical electrodynamics. Gauss's law can be used to derive Coulomb's law, and vice versa. Qualitative description In words, Gauss's law states: The net electric flux through any hypothetical closed surface is equal to 1/ε0 times the net electric charge enclosed within that closed surface. The closed surface is also referred to as a Gaussian surface. Gauss's law has a close mathematical similarity with a number of laws in other areas of physics, such as Gauss's law for magnetism and Gauss's law for gravity. In fact, any inverse-square law can be formulated in a way similar to Gauss's law: for example, Gauss's law itself is essentially equivalent to Coulomb's law, and Gauss's law for gravity is essentially equivalent to Newton's law of gravity, both of which are inverse-square laws. The law can be expressed mathematically using vector calculus in integral form and differential form; both are equivalent since they are related by the divergence theorem, also called Gauss's theorem. Each of these forms in turn can also be expressed two ways: in terms of a relation between the electric field E and the total electric charge, or in terms of the electric displacement field D and the free electric charge. Equation involving the E field Gauss's law can be stated using either the electric field E or the electric displacement field D. This section shows some of the forms with E; the form with D is below, as are other forms with E. Integral form Gauss's law may be expressed as Φ_E = Q/ε0, where Φ_E is the electric flux through a closed surface S enclosing any volume V, Q is the total charge enclosed within V, and ε0 is the electric constant. The electric flux is defined as a surface integral of the electric field: Φ_E = ∮_S E · dA, where E is the electric field, dA is a vector representing an infinitesimal element of area of the surface, and · represents the dot product of two vectors. In a curved spacetime, the flux of an electromagnetic field through a closed surface can be expressed analogously in terms of the speed of light, the time components of the electromagnetic tensor, the determinant of the metric tensor, and an orthonormal element of the two-dimensional surface surrounding the charge. Since the flux is defined as an integral of the electric field, this expression of Gauss's law is called the integral form. 
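As an illustrative numerical check of the integral form (not material from the article itself), the following Python sketch integrates the Coulomb field of a point charge over a closed sphere and compares the result with Q/ε0. The charge is deliberately placed off-centre inside the sphere to show that the enclosed flux does not depend on where the charge sits; the charge value, sphere radius, and grid resolution are arbitrary choices for the example.

```python
import numpy as np

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
q = 1e-9                 # hypothetical enclosed point charge, C
charge_pos = np.array([0.3, 0.0, 0.0])  # placed off-centre inside the sphere
R = 1.0                  # radius of the Gaussian sphere, m

# Discretise the sphere with a midpoint rule in spherical coordinates.
n_theta, n_phi = 400, 800
theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
T, P = np.meshgrid(theta, phi, indexing="ij")

# Outward unit normals, surface points, and patch areas of the sphere.
normals = np.stack([np.sin(T) * np.cos(P),
                    np.sin(T) * np.sin(P),
                    np.cos(T)], axis=-1)
points = R * normals
dA = R**2 * np.sin(T) * (np.pi / n_theta) * (2 * np.pi / n_phi)

# Coulomb field of the point charge evaluated at each surface patch.
r_vec = points - charge_pos
r = np.linalg.norm(r_vec, axis=-1, keepdims=True)
E = q / (4 * np.pi * eps0) * r_vec / r**3

# Net flux = sum of E . n dA over all patches; should match Q/eps0.
flux = np.sum(np.sum(E * normals, axis=-1) * dA)
print(flux, q / eps0)  # the two values agree closely
```

Moving the hypothetical charge anywhere inside the sphere leaves the computed flux essentially unchanged, while moving it outside drives the flux to zero, which is exactly what the integral form asserts.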
In problems involving conductors set at known potentials, the potential away from them is obtained by solving Laplace's equation, either analytically or numerically. The electric field is then calculated as the potential's negative gradient. Gauss's law makes it possible to find the distribution of electric charge: The charge in any given region of the conductor can be deduced by integrating the electric field to find the flux through a small box whose sides are perpendicular to the conductor's surface and by noting that the electric field is perpendicular to the surface, and zero inside the conductor. The reverse problem, when the electric charge distribution is known and the electric field must be computed, is much more difficult. The total flux through a given surface gives little information about the electric field, which can go in and out of the surface in arbitrarily complicated patterns. An exception is if there is some symmetry in the problem, which mandates that the electric field passes through the surface in a uniform way. Then, if the total flux is known, the field itself can be deduced at every point. Common examples of symmetries which lend themselves to Gauss's law include: cylindrical symmetry, planar symmetry, and spherical symmetry. See the article Gaussian surface for examples where these symmetries are exploited to compute electric fields. Differential form By the divergence theorem, Gauss's law can alternatively be written in the differential form ∇ · E = ρ/ε0, where ∇ · E is the divergence of the electric field, ε0 is the vacuum permittivity, and ρ is the total volume charge density (charge per unit volume). Equivalence of integral and differential forms The integral and differential forms are mathematically equivalent, by the divergence theorem. Here is the argument more specifically: the divergence theorem converts the surface integral of E into a volume integral of ∇ · E, the integral form equates that same flux to the enclosed charge divided by ε0, and since the resulting equality holds for every possible volume, the two integrands must be equal at every point. Equation involving the D field Free, bound, and total charge The electric charge that arises in the simplest textbook situations would be classified as "free charge"—for example, the charge which is transferred in static electricity, or the charge on a capacitor plate. In contrast, "bound charge" arises only in the context of dielectric (polarizable) materials. (All materials are polarizable to some extent.) When such materials are placed in an external electric field, the electrons remain bound to their respective atoms, but shift a microscopic distance in response to the field, so that they're more on one side of the atom than the other. All these microscopic displacements add up to give a macroscopic net charge distribution, and this constitutes the "bound charge". Although microscopically all charge is fundamentally the same, there are often practical reasons for wanting to treat bound charge differently from free charge. The result is that the more fundamental Gauss's law, in terms of E (above), is sometimes put into the equivalent form below, which is in terms of D and the free charge only. Integral form This formulation of Gauss's law states that Φ_D = Q_free, where Φ_D is the D-field flux through a surface S which encloses a volume V, and Q_free is the free charge contained in V. The flux Φ_D is defined analogously to the flux Φ_E of the electric field E through S: Φ_D = ∮_S D · dA. Differential form The differential form of Gauss's law, involving free charge only, states ∇ · D = ρ_free, where ∇ · D is the divergence of the electric displacement field, and ρ_free is the free electric charge density. 
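The equivalence argument referred to above can be written out compactly. The display below is a standard sketch of that argument for the E-field form (the D-field statement follows identically with D and the free charge); it is a restatement under the definitions given in this article, not additional recovered text.

```latex
% Apply the divergence theorem to the flux integral, then use the fact
% that the volume V (with boundary S) is arbitrary.
\begin{align*}
  \oint_{S} \mathbf{E}\cdot\mathrm{d}\mathbf{A}
    &= \int_{V} (\nabla\cdot\mathbf{E})\,\mathrm{d}V
    && \text{(divergence theorem)} \\
  \oint_{S} \mathbf{E}\cdot\mathrm{d}\mathbf{A}
    &= \frac{1}{\varepsilon_0}\int_{V} \rho\,\mathrm{d}V
    && \text{(integral form of Gauss's law)} \\
  \Longrightarrow\quad
  \int_{V} \Bigl(\nabla\cdot\mathbf{E} - \tfrac{\rho}{\varepsilon_0}\Bigr)\,\mathrm{d}V
    &= 0 \ \text{ for every } V
    \quad\Longrightarrow\quad
    \nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}.
\end{align*}
```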
Equivalence of total and free charge statements Equation for linear materials In homogeneous, isotropic, nondispersive, linear materials, there is a simple relationship between E and D: D = εE, where ε is the permittivity of the material. For the case of vacuum (also called free space), ε = ε₀. Under these circumstances, Gauss's law modifies to Φ_E = Q_free/ε for the integral form, and ∇ · E = ρ_free/ε for the differential form. Relation to Coulomb's law Deriving Gauss's law from Coulomb's law Strictly speaking, Gauss's law cannot be derived from Coulomb's law alone, since Coulomb's law gives the electric field due to an individual, electrostatic point charge only. However, Gauss's law can be proven from Coulomb's law if it is assumed, in addition, that the electric field obeys the superposition principle. The superposition principle states that the resulting field is the vector sum of fields generated by each particle (or the integral, if the charges are distributed smoothly in space). Since Coulomb's law only applies to stationary charges, there is no reason to expect Gauss's law to hold for moving charges based on this derivation alone. In fact, Gauss's law does hold for moving charges, and, in this respect, Gauss's law is more general than Coulomb's law. Deriving Coulomb's law from Gauss's law Strictly speaking, Coulomb's law cannot be derived from Gauss's law alone, since Gauss's law does not give any information regarding the curl of E (see Helmholtz decomposition and Faraday's law). However, Coulomb's law can be proven from Gauss's law if it is assumed, in addition, that the electric field from a point charge is spherically symmetric (this assumption, like Coulomb's law itself, is exactly true if the charge is stationary, and approximately true if the charge is in motion).
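As an illustrative worked example of that second derivation (added here, not part of the original text): take the Gaussian surface S to be a sphere of radius r centred on a point charge q. Spherical symmetry means the field is radial with the same magnitude E(r) everywhere on S, so the flux integral collapses to a simple product:

E(r) \, 4\pi r^{2} = \oint_S \mathbf{E} \cdot \mathrm{d}\mathbf{A} = \frac{q}{\varepsilon_0} \quad \Longrightarrow \quad \mathbf{E}(r) = \frac{q}{4\pi\varepsilon_0 r^{2}} \, \hat{\mathbf{r}},

which is exactly Coulomb's law for a point charge.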
Physical sciences
Electrodynamics
Physics
75005
https://en.wikipedia.org/wiki/Emergency%20medical%20technician
Emergency medical technician
An emergency medical technician (often, more simply, EMT) is a medical professional who provides emergency medical services. EMTs are most commonly found serving on ambulances and in fire departments in the US and Canada, as full-time and some part-time departments require their firefighters to be at least EMT certified. In English-speaking countries, paramedics are a separate profession with additional educational requirements, qualifications, and a wider scope of practice. EMTs are often employed by public ambulance services, municipal EMS agencies, governments, hospitals, and fire departments. Some EMTs are paid employees, while others (particularly those in rural areas) are volunteers. EMTs provide medical care under a set of protocols, which are typically written by a physician. Hazard controls EMTs are exposed to a variety of hazards such as lifting patients and equipment, treating those with infectious disease, handling hazardous substances, and transportation via ground or air vehicles. Employers can prevent occupational illness or injury by providing safe patient handling equipment, implementing a training program to educate EMTs on job hazards, and supplying PPE such as respirators, gloves, and isolation gowns when dealing with biological hazards. Infectious disease has become a major concern in light of the COVID-19 pandemic. In response, the U.S. Centers for Disease Control and Prevention and other agencies and organizations have issued guidance regarding workplace hazard controls for COVID-19. Some specific recommendations include modified call queries, symptom screening, universal PPE use, hand hygiene, physical distancing, and stringent disinfection protocols. Research on ambulance ventilation systems found that aerosols often recirculate throughout the compartment, creating a health hazard for EMTs when transporting sick patients capable of airborne transmission. Unidirectional airflow design can better protect workers. Canada There is a considerable degree of inter-provincial variation in Canadian paramedic practice. Although a national consensus (by way of the National Occupational Competency Profile) identifies certain knowledge, skills, and abilities as being most synonymous with a given level of paramedic practice, each province retains ultimate authority in legislating the actual administration and delivery of emergency medical services within its own borders. For this reason, any discussion of paramedic practice in Canada is necessarily broad and general. Specific regulatory frameworks and questions related to paramedic practice can only definitively be answered by consulting relevant provincial legislation, although provincial paramedic associations may often offer a simpler overview of this topic when it is restricted to a province-by-province basis. In Canada, the levels of paramedic practice as defined by the National Occupational Competency Profile are: emergency medical responder (EMR), primary care paramedic, advanced care paramedic, and critical care paramedic. Regulatory frameworks vary from province to province, and range from direct government regulation (such as Ontario's method of credentialing its practitioners with the title of A-EMCA, or advanced emergency medical care assistant) to professional self-regulating bodies, such as the Alberta College of Paramedics. In Alberta, for instance, only someone registered with the Alberta College of Paramedics can call themselves a paramedic; the title is legally protected.
Almost all provinces have moved to adopt the new titles, or have at least recognized the NOCP document as a benchmarking document to permit inter-provincial labour mobility of practitioners, regardless of how titles are specifically regulated within their own provincial systems. In this manner, the confusing myriad of titles and occupational descriptions can at least be discussed using a common language for comparison's sake. Emergency medical responder Most providers that work in ambulances are identified as "paramedics" by the public. However, in many cases, the most prevalent level of emergency pre-hospital care is that provided by an emergency medical responder (EMR). This is a level of practice recognized under the National Occupational Competency Profile. Although it is the least clinically comprehensive of the four levels, the large number of EMRs across Canada plays a critical role in the chain of survival. The EMR scope of practice generally extends no further than advanced first aid and oxygen therapy, administration of ASA, intramuscular epinephrine and glucagon, oral glucose, and intranasal Narcan, with the exception of automated external defibrillation (which is still considered a regulated medical act in most provinces in Canada). Primary care paramedic Primary care paramedics (PCP) represent the entry level of paramedic practice in Canadian provinces. The scope of practice includes performing semi-automated external defibrillation, interpretation of 4-lead ECGs, administration of symptom relief medications for a variety of emergency medical conditions (these include oxygen, epinephrine, dextrose, glucagon, salbutamol, ASA and nitroglycerine), performing trauma immobilization (including cervical immobilization), and other fundamental basic medical care. Primary care paramedics may also receive additional training in order to perform certain skills that are normally in the scope of practice of advanced care paramedics. This is regulated both provincially (by statute) and locally (by the medical director), and ordinarily entails an aspect of medical oversight by a specific body or group of physicians. This is often referred to as "medical control", or a role played by a base hospital. For example, in the provinces of Ontario, Quebec and Newfoundland and Labrador, many paramedic services allow primary care paramedics to perform 12-lead ECG interpretation, or initiate intravenous therapy to deliver a few additional medications.
Several sites in Canada have adopted pre-hospital fibrinolytics and rapid sequence induction, and prehospital medical research has permitted a great number of variations in the scope of practice for ACPs. Current programs include providing ACPs with discretionary, direct 24-hour access to PCI labs, bypassing the emergency department; this represents a fundamental change in the way that patients with ST-segment elevation myocardial infarctions (STEMI) are treated and has profoundly affected survival rates. ACPs may also bypass closer hospitals to get an identified stroke patient to a stroke centre. Critical care paramedic Critical care paramedics (CCPs) are paramedics who generally do not respond to 9-1-1 emergency calls, with the exception of helicopter "scene" calls. Instead they focus on transferring patients from the hospital they are currently in to other hospitals that can provide a higher level of care. CCPs often work in collaboration with registered nurses and respiratory therapists during hospital transfers; this ensures continuity of care. However, when acuity is manageable by a CCP, or when a registered nurse or respiratory therapist is not available, CCPs will work alone. Providing this care to the patient allows the sending hospital to avoid losing highly trained staff on hospital transfers. CCPs are able to provide all of the care that PCPs and ACPs provide. That being said, CCPs have comparatively little practical experience with advanced skills such as IV initiation, peripheral access to the cardiovascular system for fluid and drug administration, advanced airway management, and many other techniques. While a PCP and ACP may run 40–50 medical codes per year, a CCP may run 1–2 in an entire career. IV/IO starts are nearly non-existent in the field, and for this reason CCPs are required to spend nearly double the amount of time in classroom or in-hospital settings to keep current. In addition to this, they are trained in other skills such as medication infusion pumps, mechanical ventilation, and arterial line monitoring. CCPs often work in fixed and rotary wing aircraft when the weather permits and staff are available, but systems such as the Toronto EMS Critical Care Transport Program work in land ambulances. ORNGE transport operates both land and aircraft in Ontario. In British Columbia, CCPs work primarily in aircraft, with a dedicated critical care transport crew in Trail for long-distance transfers and a regular CCP street crew stationed in South Vancouver that often also performs medevacs when necessary. Training Paramedic training in Canada varies regionally; for example, the length of training may be eight months in British Columbia or two to four years in Ontario, Alberta, and Quebec. The nature of training and how it is regulated, like actual paramedic practice, varies from province to province. Republic of Ireland Emergency medical technician (EMT), paramedic (P) and advanced paramedic (AP) are legally defined and protected titles in the Republic of Ireland based on the standard set down by the Pre-Hospital Emergency Care Council (PHECC). Emergency medical technician is the entry-level standard of practitioner for employment within the ambulance service. Currently, EMTs are authorized to work on non-emergency ambulances only, as the standard for emergency (999) calls is a minimum of a two-paramedic crew, although this minimum requirement was relaxed to an EMT and paramedic crew during the COVID-19 crisis.
EMTs are a vital part of the private, voluntary and auxiliary services, where a practitioner must be on board any ambulance in the process of transporting a patient to hospital. Philippines Emergency medical technician (EMT), paramedic (P) and advanced paramedic (AP) are legally defined and protected titles in the Philippines based on the standard set down by the Department of Health. Spain Técnicos en Emergencias Sanitarias (TES) are trained for a total of 2,000 hours over 2 years, with 3 months of internship on ambulances at the very end. It is the only level of EMS worker in Spain. BLS ambulances can be driven with a B license, ALS ambulances with a C1. United Kingdom Emergency medical technician is a term that has existed for many years in the United Kingdom, but has no single defined scope. They may be known as emergency medical technicians or simply ambulance technicians. Most EMTs hold an Institute for Healthcare Development Ambulance Technician Certificate and are employed in private ambulance companies or in National Health Service ambulance trusts. As of 2016, the IHCD Ambulance Technician Certificate was replaced with the FAQ Level 4 Diploma for Associate Ambulance Practitioners and the QA Level 5 Diploma in First Response Emergency and Urgent Care (RQF). This provided a defined scope of practice agreed nationally by ambulance service trusts. Their role title, however, may still be defined by their employer as emergency medical technician. They can work autonomously, making their own clinical decisions within their training and remit. They may also work as a clinical lead working alongside an emergency care assistant, or as assistants themselves to a paramedic. As the role does not have a single defined scope, the skills they have can include: administration of select general sales list, pharmacy and prescription-only medicines under provision of the Human Medicines Regulations 2012; administration of medicines by select parenteral or non-parenteral routes, typically oral, intramuscular, inhaled, nebulised or sublingual; intermediate life support, including manual defibrillation and supraglottic airway placement; and the ability to discharge patients to different care pathways. The term emergency medical technician is not commonly used by members of the public in the United Kingdom. Instead, it is common for all ambulance personnel to be referred to as "paramedics", although the paramedic title is protected through registration with the Health and Care Professions Council.
Those applying for the NREMT certification must also complete a state-approved EMT psychomotor exam. It is possible for the candidate to be refused access to a state-approved course due to their age within the state. Levels NHTSA recognizes four levels of certification: Emergency medical responder (EMR) Emergency medical technician (EMT) Advanced emergency medical technician (AEMT) Paramedic Some states also recognize the advanced practice paramedic or critical care paramedic level as a state-specific licensure above that of paramedic. These critical care paramedics generally perform high acuity transports that require skills outside the scope of a standard paramedic (such as mechanical ventilation and management of cardiac assist devices). In addition, EMTs can seek out specialty certifications such as wilderness EMT, wilderness paramedic, tactical EMT, and flight paramedic. In 2009, the NREMT posted information about a transition to a new system of levels for emergency care providers developed by NHTSA with the National EMS Scope of Practice Project. By 2014, these new levels replaced the fragmented system found around the United States. The new classification includes emergency medical responder (replacing first responder), emergency medical technician (replacing EMT-basic), advanced emergency medical technician (replacing EMT-intermediate/85), and paramedic (replacing EMT-intermediate/99 and EMT-paramedic). Education requirements in transitioning to the new levels are substantially similar. Emergency Medical Responder (EMR) EMR is the most basic level of training, and is considered the bare minimum certification for rescuers that respond to medical emergencies. EMRs are typically on-call volunteers in rural communities, or are primarily employed as firefighters or search and rescue personnel. EMRs typically arrive quickly and assess and stabilize the patient before the transporting ambulance arrives, and then assist the crew with patient care and packaging. EMRs provide advanced first aid-level care, CPR, semi-automatic defibrillation, basic airway management (suction/oropharyngeal airway), oxygen therapy, and administration of basic, life-saving medications such as epinephrine and naloxone. Emergency Medical Technician (EMT) EMT is the next level of EMS certification and is considered the most common entry level of training. The procedures and skills allowed at this level include bleeding control, management of burns, splinting of suspected fractures and spinal injuries, childbirth, cardiopulmonary resuscitation, semi-automatic defibrillation, oral suctioning, insertion of oropharyngeal and nasopharyngeal airways, pulse oximetry, blood glucose monitoring, auscultation of lung sounds, and administration of a limited set of medications (including oxygen, epinephrine, dextrose, naloxone, albuterol, ipratropium bromide, glucagon, nitroglycerin, nitrous oxide, and acetylsalicylic acid). Some areas may add to the scope of practice, including intravenous access, insertion of supraglottic airway devices, and CPAP. Training requirements and treatment protocols vary from area to area. Advanced EMT Advanced EMT is the level of training between EMT and paramedic. They can provide intermediate life support (ILS) care including obtaining intravenous or intraosseous access, basic cardiac monitoring, fluid resuscitation, capnography, and administration of some additional medications. 
Paramedic Paramedics typically represent the highest level of pre-hospital medical provider, providing advanced life support (ALS) care. Paramedics perform a variety of medical procedures such as endotracheal intubation, rapid sequence induction, cricothyrotomy, fluid resuscitation, drug administration, obtaining intravenous and intraosseous access, manual defibrillation, electrocardiogram interpretation, capnography, cardioversion, transcutaneous pacing, pericardiocentesis, thoracostomy, ultrasonography, and blood chemistry interpretation. Staffing levels An ambulance with only EMTs is considered a basic life support (BLS) unit, an ambulance utilizing AEMTs is dubbed an "intermediate life support" (ILS) or "limited advanced life support" (LALS) unit, and an ambulance with paramedics is dubbed an "advanced life support" (ALS) unit. Many states allow ambulance crews to contain a mix of crew levels (e.g. an EMT and a paramedic, or an AEMT and a paramedic) to staff ambulances and operate at the level of the highest trained provider. Nothing prevents supplemental crew members from holding a different certification, though (e.g. if an ALS ambulance is required to have two paramedics, then it is acceptable to have two paramedics and an EMT). An emergency vehicle with only EMRs, or a combination of EMRs and EMTs, is still dubbed a "basic life support" (BLS) unit. An EMR must usually be overseen by an EMT-level provider or higher to work on a transporting ambulance. Education and training EMT training programs for certification vary greatly from course to course, provided that each course at least meets local and national requirements. In the United States, EMRs receive at least 40–80 hours of classroom training and EMTs receive at least 120–300 hours of classroom training. AEMTs generally have 100–300 hours of additional classroom training beyond the standard EMT training. Paramedics are trained for 1,500–2,500 hours or more. In addition to each level's didactic education, clinical rotations are typically also required. Similar in a sense to medical school clinical rotations, EMT students are required to spend a set amount of time in an ambulance and on a variety of hospital services (e.g. obstetrics, emergency medicine, surgery, intensive care unit, psychiatry) in order to complete a course and become eligible for the certification and licensure exams. The number of clinical hours, both in an ambulance and in the hospital, varies depending on local requirements, the level the student is obtaining, and the amount of time it takes the student to show competency. In addition, a minimum of continuing education (CE) hours is required to maintain certification. For example, to maintain NREMT certification, EMTs must obtain at least 48 hours of additional education and either complete a 24-hour refresher course or complete an additional 24 hours of CE that cover, on an hour-by-hour basis, the same topics as the refresher course would. Recertification for other levels follows a similar pattern. EMT training programs vary greatly in calendar length (number of days or months). For example, fast-track programs are available for EMTs that can be completed in as little as two weeks by holding class for 8 to 12 hours a day. Other training programs are months long, or up to two years for paramedics in associate degree programs. EMT training programs take place at numerous locations, such as universities, community colleges, technical schools, hospitals or EMS academies.
Every state in the United States has an EMS lead agency or state office of emergency medical services that regulates and accredits EMT training programs. Most of these offices have websites to provide information to the public and to individuals who are interested in becoming an EMT. Medical direction In the United States, an EMT's actions in the field are governed by state regulations, local regulations, and the policies of their EMS organization. The development of these policies is guided by a physician medical director, often with the advice of a medical advisory committee composed of paramedics and other health professionals. In California, for example, each county's local emergency medical service agency (LEMSA) issues a list of standard operating procedures or protocols, under the supervision of the California Emergency Medical Services Authority. These procedures often vary from county to county based on local needs, levels of training and clinical experience. New York State has similar procedures, whereby a regional medical-advisory council (REMAC) determines protocols for one or more counties in a geographical section of the state. Treatments and procedures administered by paramedics fall under one of two categories: off-line medical orders (standing orders) and on-line medical orders. On-line medical orders refer to procedures that must be explicitly approved by a base hospital physician or registered nurse through voice communication (generally by phone or radio) and are generally rare or high-risk procedures (e.g. vasopressor initiation). In addition, when multiple levels can perform the same procedure (e.g. AEMT-critical care and paramedics in New York), a procedure can be both an on-line and a standing order depending on the level of the provider. Since no set of protocols can cover every patient situation, many systems work with protocols as guidelines. Systems also have policies in place to handle medical direction when communication failures happen or in disaster situations. The NHTSA curriculum is the foundational standard of care for EMS providers in the US. Employment EMTs and paramedics are employed in varied settings, mainly in the prehospital environment such as EMS, fire, and police agencies. They can also be found in positions ranging from hospital and health care settings to industrial and entertainment positions. The prehospital environment is loosely divided into non-emergency (e.g. patient transport) and emergency (9-1-1 calls) services, but many ambulance services and EMS agencies provide both non-emergency and emergency care. In many places across the United States, it is not uncommon for the primary employer of EMRs, EMTs, and paramedics to be a fire department, with the fire department providing the primary emergency medical system response including "first responder" fire apparatus, as well as ambulances. In many other locations, emergency medical services are provided by a separate, or "third-party", municipal government emergency agency (e.g. Boston EMS, Austin-Travis County EMS). In still other locations, emergency medical services are provided by volunteer agencies. College and university campuses may provide emergency medical responses on their own campus using students. In some states of the US, many EMS agencies are run by independent non-profit volunteer first aid squads that are their own corporations set up as separate entities from fire departments.
In this environment, volunteers are scheduled to fill certain blocks of time to cover emergency calls. These volunteers have the same state certification as their paid counterparts.
Biology and health sciences
Health professionals
Health
75006
https://en.wikipedia.org/wiki/Paramedic
Paramedic
A paramedic is a healthcare professional trained in the medical model, whose main role has historically been to respond to emergency calls for medical help outside of a hospital. Paramedics work as part of the emergency medical services (EMS), most often in ambulances. They also have roles in emergency medicine, primary care, transfer medicine and remote/offshore medicine. The scope of practice of a paramedic varies between countries, but generally includes autonomous decision making around the emergency care of patients. Not all ambulance personnel are paramedics, although the term is sometimes used informally to refer to any ambulance personnel. In some English-speaking countries, there is an official distinction between paramedics and emergency medical technicians (or emergency care assistants), in which paramedics have additional educational requirements and a wider scope of practice. Functions and duties The paramedic role is closely related to other healthcare positions, especially the emergency medical technician, with paramedics often being at a higher grade with more responsibility and autonomy following substantially greater education and training. The primary role of a paramedic is to stabilize people with life-threatening injuries and transport these patients to a higher level of care (typically an emergency department). Due to the nature of their job, paramedics work in many environments, including roadways, people's homes, and, depending on their qualifications, wilderness environments, hospitals, aircraft, and with SWAT teams during police operations. Paramedics also work in non-emergency situations, such as transporting chronically ill patients to and from treatment centers, and in some areas they address social determinants of health and provide in-home care to ill patients at risk of hospitalization (a practice known as community paramedicine). The role of a paramedic varies widely across the world, as EMS providers operate with many different models of care. In the Anglo-American model, paramedics are autonomous decision-makers. In some countries such as the United Kingdom and South Africa, the paramedic role has developed into an autonomous health profession. In the Franco-German model, ambulance care is led by physicians. In some versions of this model, such as France, there is no direct equivalent to a paramedic. Ambulance staff have either the more advanced qualifications of a physician or less advanced training in first aid. In other versions of the Franco-German model, such as Germany, paramedics do exist. Their role is very similar to the role of paramedics in the Anglo-American model, with an advanced scope of autonomy and practice, and the added element of emergency physician backup, either virtually (Tele-Notarzt) or on scene with a rapid response vehicle / helicopter. The role of paramedics in Germany has evolved from support to physicians in the field to the central role in pre-hospital emergency care. The development of the profession has been a gradual move from simply transporting patients to hospital, to more advanced treatments in the field. In some countries, the paramedic may take on a role within a system designed to prevent hospitalization entirely; at the practitioner level, paramedics are able to prescribe certain medications or undertake 'see and refer' visits, where the paramedic directly refers a patient to specialist services without taking them to hospital.
Occupational hazards Paramedics are exposed to a variety of hazards such as lifting patients and equipment, treating those with infectious disease, handling hazardous substances, and transportation via ground or air vehicles. Employers can prevent occupational illness or injury by providing safe patient handling equipment, implementing a training program to educate paramedics on job hazards, and supplying PPE such as respirators, gloves, and isolation gowns when dealing with biological hazards. Infectious disease has become a major concern in light of the COVID-19 pandemic. In response, the U.S. Centers for Disease Control and Prevention and other agencies and organizations have issued guidance regarding workplace hazard controls for COVID-19. Some specific recommendations include modified call queries, symptom screening, universal PPE use, hand hygiene, physical distancing, and stringent disinfection protocols. Research on ambulance ventilation systems found that aerosols often recirculate throughout the compartment, creating a health hazard for paramedics when transporting sick patients capable of airborne transmission. Unidirectional airflow design can better protect workers. To further safeguard paramedics, incorporating evidence-based strategies for managing chemical exposures and environmental risks is crucial. Agencies such as OSHA, WHO and NIOSH offer comprehensive guidelines that highlight the integration of safety protocols, technological advancements, and procedural innovations to enhance paramedic safety and well-being. Physical injuries Paramedics are widely recognized to face high risks of physical injury in their line of work. More than 22,000 EMS providers visit the emergency room each year for work-related injuries. Physical injuries encountered when providing healthcare services include lifting injuries, back strains, and needlestick incidents. Injuries such as sprains and strains occur mostly in the back and neck, and injuries are most prevalent while responding to 911 calls, including during patient care and transport. These injuries are prevalent but not inevitable; preventive measures can minimize the chance of them occurring. Safe lifting techniques and patient-handling equipment are major factors in reducing paramedics’ physical injury risk. Workers with less than 10 years’ experience are most at risk, pointing to the need for targeted prevention strategies for newer employees. Employing such measures to reduce physical injuries can mitigate the hazards paramedics face and help them stay safe while rendering much-needed services. Infectious diseases The risk of contracting infectious diseases is common in the paramedic profession, and the COVID-19 pandemic has reinforced the necessity of following safety protocols. Protecting healthcare workers from needlestick injuries and infectious disease is critical: immediate disposal of sharps in puncture-resistant containers, wearing appropriate personal protective equipment (PPE), and strict adherence to post-exposure protocols all enhance safety, as does staying up to date with vaccinations, including those for influenza, COVID-19, and hepatitis B. Furthermore, adhering to infection control practices, such as hand hygiene, environmental cleaning, and specialized control programs, is vital for preventing infections like MRSA, TB, and COVID-19.
Proper use of personal protective equipment (PPE) and vaccination compliance are effective measures for reducing transmission of infectious diseases among paramedics. Exposures to bloodborne pathogens and body fluids, for example through needlestick injuries, put paramedics at risk of infectious diseases such as hepatitis B, hepatitis C, and HIV, affecting around 6,000 EMS workers. This underscores the need for science-based methods of preventing the occupational risks posed by infectious diseases, with an emphasis on preventative measures that protect the health of paramedic professionals and, at the same time, the community. Chemical exposures Paramedics encounter daily risks associated with handling hazardous chemicals, so they must understand how to deliver care safely while remaining protected. Chemical exposures in prehospital settings carry numerous risks, and the use of PPE and standard precautions is necessary to prevent harmful exposures. Orderly handling of hazardous materials and proper decontamination procedures are effective strategies for reducing these risks and limiting health hazards to paramedics. Environmental and operational hazards Paramedics face many environmental and operational risks, primarily during transportation, and these transportation-related hazards should be considered and addressed in prehospital care. Slips, trips, and falls; motor vehicle incidents; and violence or assaults contribute heavily to paramedics' occupational hazards, affecting thousands of paramedics annually. Paramedics need to know their vehicles' safety features and undergo thorough emergency driving training in order to reduce transportation-related dangers. Paramedics are frequently assaulted by patients or bystanders, affecting around 2,000 EMS workers annually, which further underscores the need for training in de-escalation. NIOSH and the Department of Homeland Security have conducted ambulance crash testing, resulting in the development of 10 test methods published by the Society of Automotive Engineers (SAE) to reduce and eliminate crash-related injuries to EMS workers. With effective training, these threats are more likely to be mitigated, and paramedics are better able to provide services as required. Protective measures and equipment One way of ensuring paramedics can work safely and effectively is to provide them with protective equipment and gear that mitigates the risks of their duties. PPE keeps paramedics’ occupational risks low. Examples of PPE include gloves, masks, and gowns or other specific clothing; they protect workers from physical, biological, and chemical hazards. The different types of PPE include respiratory, eye, face, and hand protection. For respiratory protection, paramedics can use N95 masks to filter airborne contaminants. Chemical splashes are also a common hazard, against which safety goggles provide eye protection. For hand protection, paramedics can wear gloves, mainly to guard against burns.
One of the principles of PPE is that choices should be guided by the specific risks associated with various emergencies, which warrant different PPE requirements. Mental health and stress management Paramedics work in a challenging profession and can be subject to different kinds of psychological stress, for instance post-traumatic stress disorder, depression, or severe burnout. This psychological burden is intertwined with the nature of the paramedics' work. Exposure to traumatic events such as accidents, medical emergencies, and violence is among the factors undermining the psychological health of paramedics. Mental health issues, including depression, anxiety, and substance abuse, are more likely among paramedics than in the general population because of the nature of the work. Stable support systems, which may include peer counselling and readily available mental health resources, are essential for building the resilience of paramedic professionals. Peer counselling programs appear to be an effective stress management strategy for paramedics: open discussions with peers who understand the work provide the supportive ground needed to manage and process feelings related to it. Health risks and monitoring Long-term health risks that need to be monitored in paramedics include post-traumatic stress disorder (PTSD), cardiovascular diseases (CVDs), and cancer. The prevalence of PTSD among the challenges paramedics encounter is a compelling reason to implement preventive mental health measures within the profession. There is also an elevated risk of CVDs because of the physical demands of emergency response operations, and cancer risk warrants ongoing study and individualized prevention. Added to this is the cumulative effect of fatigue, violence, and trauma on paramedics' health. As a result, systematic health monitoring and preventive measures are needed, together with continued study of long-term health risks and a prophylactic approach to maintaining the health of these healthcare professionals. Regulatory guidelines and recommendations Regulatory guidelines are fundamental in reducing occupational risk in paramedicine; authoritative bodies like the Occupational Safety and Health Administration (OSHA) and the World Health Organization (WHO) provide specific guidelines. For example, in the United States, physical, chemical, and biological hazards are managed under the guidelines and recommendations offered by NIOSH and OSHA, which target the healthcare industry in particular. These include properly using PPE, handling hazardous substances, and adequately managing workplace violence. The WHO provides a global perspective by laying down international standards to protect the well-being of healthcare staff, irrespective of whether the work is an emergency or routine operation. By promoting national and global safety standards, such regulatory bodies help ensure that evidence-based approaches safeguard occupational health. History Early history Throughout the evolution of pre-hospitalization care, there has been an ongoing association with military conflict.
One of the first indications of a formal process for managing injured people dates from the Imperial Legions of Rome, where aging Centurions, no longer able to fight, were given the task of organizing the removal of the wounded from the battlefield and providing some form of care. Such individuals, although not physicians, were probably among the world's earliest surgeons by default, being required to suture wounds and complete amputations. A similar situation existed in the Crusades, with the Knights Hospitaller of the Order of St. John of Jerusalem filling a similar function; this organisation continued, and evolved into what is now known throughout the Commonwealth of Nations as the St. John Ambulance and as the Order of Malta Ambulance Corps in the Republic of Ireland and various countries. Early ambulance services While civilian communities had organized ways to deal with prehospitalisation care and transportation of the sick and dying as far back as the bubonic plague in London between 1598 and 1665, such arrangements were typically ad hoc and temporary. In time, however, these arrangements began to formalize and become permanent. During the American Civil War, Jonathan Letterman devised a system of mobile field hospitals employing the first uses of the principles of triage. After returning home, some veterans began to attempt to apply what they had seen on the battlefield to their own communities, and commenced the creation of volunteer life-saving squads and ambulance corps. These early developments in formalized ambulance services were decided at local levels, and this led to services being provided by diverse operators such as the local hospital, police, fire brigade, or even funeral directors who often possessed the only local transport allowing a passenger to lie down. In most cases these ambulances were operated by drivers and attendants with little or no medical training, and it was some time before formal training began to appear in some units. An early example was the members of the Toronto Police Ambulance Service receiving a mandatory five days of training from St. John as early as 1889. Prior to World War I, motorized ambulances started to be developed, but once they proved their effectiveness on the battlefield during the war, the concept spread rapidly to civilian systems. In terms of advanced skills, once again the military led the way. During World War II and the Korean War, battlefield medics administered painkilling narcotics by injection in emergency situations, and pharmacists' mates on warships were permitted to do even more without the guidance of a physician. The Korean War also marked the first widespread use of helicopters to evacuate the wounded from forward positions to medical units, leading to the rise of the term "medevac". These innovations would not find their way into the civilian sphere for nearly twenty more years.
While both of these experiments had certain levels of success, the technology had not yet reached a sufficiently advanced level to be fully effective; for example, the Toronto portable defibrillator and heart monitor was powered by lead-acid car batteries, and weighed around . In 1966, a report called Accidental Death and Disability: The Neglected Disease of Modern Society—commonly known as The White Paper—was published in the United States. This paper presented data showing that soldiers who were seriously wounded on the battlefields during the Vietnam War had a better survival rate than people who were seriously injured in motor vehicle accidents on California's freeways. Key factors contributing to victim survival in transport to definitive care such as a hospital were identified as comprehensive trauma care, rapid transport to designated trauma facilities, and the presence of medical corpsmen who were trained to perform certain critical advanced medical procedures such as fluid replacement and airway management. As a result of The White Paper, the US government moved to develop minimum standards for ambulance training, ambulance equipment and vehicle design. These new standards were incorporated into Federal Highway Safety legislation and the states were advised to either adopt these standards into state laws or risk a reduction in Federal highway safety funding. The "White Paper" also prompted the inception of a number of emergency medical service (EMS) pilot units across the US including paramedic programs. The success of these units led to a rapid transition to make them fully operational. Founded in 1967, Freedom House Ambulance Service was the first civilian emergency medical service in the United States to be staffed by paramedics, most of whom were Black. New York City's Saint Vincent's Hospital developed the United States' first Mobile Coronary Care Unit (MCCU) under the medical direction of William Grace, MD, and based on Frank Pantridge's MCCU project in Belfast, Northern Ireland. In 1967, Eugene Nagle, MD and Jim Hirschmann, MD helped pioneer the United States' first EKG telemetry transmission to a hospital and then in 1968, a functional paramedic program in conjunction with the City of Miami Fire Department. In 1969, the City of Columbus Fire Department joined with the Ohio State University Medical Center to develop the "HEARTMOBILE" paramedic program under the medical direction of James Warren, MD and Richard Lewis, MD. In 1969, the Haywood County (NC) Volunteer Rescue Squad developed a paramedic program (then called Mobile Intensive Care Technicians) under the medical direction of Ralph Feichter, MD. In 1969, the initial Los Angeles paramedic training program was instituted in conjunction with Harbor General Hospital, now Harbor–UCLA Medical Center, under the medical direction of J. Michael Criley, MD and James Lewis, MD. In 1969, the Seattle "Medic 1" paramedic program was developed in conjunction with the Harborview Medical Center under the medical direction of Leonard Cobb, MD. The Marietta (GA) initial paramedic project was instituted in the Fall of 1970 in conjunction with Kennestone Hospital and Metro Ambulance Service, Inc. under the medical direction of Luther Fortson, MD. The Los Angeles County and City established paramedic programs following the passage of The Wedsworth-Townsend Act in 1970. Other cities and states passed their own paramedic bills, leading to the formation of services across the US. 
Many other countries also followed suit, and paramedic units formed around the world. In the military, however, the required telemetry and miniaturization technologies were more advanced, particularly due to initiatives such as the space program. It would take several more years before these technologies drifted through to civilian applications. In North America, physicians were judged to be too expensive to be used in the pre-hospital setting, although such initiatives were implemented, and sometimes still operate, in European countries and Latin America. Public notability While doing background research at Los Angeles' UCLA Harbor Medical Center for a proposed new show about doctors, television producer Robert A. Cinader, working for Jack Webb, happened to encounter "firemen who spoke like doctors and worked with them". This concept developed into the television series Emergency!, which ran from 1972 to 1977, portraying the exploits of this new profession called paramedics. The show gained popularity with emergency services personnel, the medical community, and the general public. When the show first aired in 1972, there were just six paramedic units operating in three pilot programs in the whole of the US, and the term paramedic was essentially unknown. By the time the program ended in 1977, there were paramedics operating in all fifty states. The show's technical advisor, James O. Page, was a pioneer of paramedicine and responsible for the UCLA paramedic program; he would go on to help establish paramedic programs throughout the US, and was the founding publisher of the Journal of Emergency Medical Services (JEMS). The JEMS magazine creation resulted from Page's previous purchase of the PARAMEDICS International magazine. Ron Stewart, the show's medical director, was instrumental in organizing emergency health services in southern California earlier in his career during the 1970s, in the paramedic program in Pittsburgh, and had a substantial role in the founding of the paramedic programs in Toronto and Nova Scotia, Canada. Evolution and growth Throughout the 1970s and 1980s, the paramedic field continued to evolve, with a shift in emphasis from patient transport to treatment both on scene and en route to hospitals. This led to some services changing their descriptions from "ambulance services" to "emergency medical services". The training, knowledge-base, and skill sets of both paramedics and emergency medical technicians (EMTs) were typically determined by local medical directors based primarily on the perceived needs of the community along with affordability. There were also large differences between localities in the amount and type of training required, and how it would be provided. This ranged from in-service training in local systems, through community colleges, and up to university level education. This emphasis on increasing qualifications has followed the progression of other health professions such as nursing, which also progressed from on the job training to university level qualifications. The variations in educational approaches and standards required for paramedics has led to large differences in the required qualifications between locations—both within individual countries and from country to country. Within the UK training is a three-year course equivalent to a bachelor's degree. Comparisons have been made between paramedics and nurses; with nurses now requiring degree entry (BSc) the knowledge deficit is large between the two fields. 
This has led to many countries passing laws to protect the title of "paramedic" (or its local equivalent) from use by anyone except those qualified and experienced to a defined standard. This usually means that paramedics must be registered with the appropriate body in their country; for example, all paramedics in the United Kingdom must be registered with the Health and Care Professions Council (HCPC) in order to call themselves a paramedic. In the United States, a similar system is operated by the National Registry of Emergency Medical Technicians (NREMT), although this is only accepted by forty of the fifty states. As paramedicine has evolved, a great deal of both the curriculum and skill set has existed in a state of flux. Requirements often originated and evolved at the local level, and were based upon the preferences of physician advisers and medical directors. Recommended treatments would change regularly, often shifting more like a fashion than a scientific discipline. Associated technologies also rapidly evolved and changed, with medical equipment manufacturers having to adapt equipment that worked inadequately outside of hospitals so that it could cope with the less controlled pre-hospital environment. Physicians began to take more interest in paramedics from a research perspective as well. By about 1990, the fluctuating trends began to diminish, being replaced by outcomes-based research. This research then drove further evolution of the practice of both paramedics and the emergency physicians who oversaw their work, with changes to procedures and protocols occurring only after significant research demonstrated their need and effectiveness (an example being ALS). Such changes affected everything from simple procedures such as CPR to drug protocols. As the profession grew, some paramedics went on to become not just research participants, but researchers in their own right, with their own projects and journal publications. In 2010, the American Board of Emergency Medicine created a medical subspecialty for physicians who work in emergency medical services. Changes in procedures also included the manner in which the work of paramedics was overseen and managed. In the early days, medical control and oversight were direct and immediate, with paramedics calling into a local hospital and receiving orders for every individual procedure or drug. While this still occurs in some jurisdictions, it has become increasingly rare. Day-to-day operations largely moved from direct and immediate medical control to pre-written protocols or standing orders, with the paramedic typically seeking advice after the options in the standing orders had been exhausted. Canada While the evolution of paramedicine described above is focused largely on the US, many other countries followed a similar pattern, although often with significant variations. Canada, for example, attempted a pilot paramedic training program at Queen's University, Kingston, Ontario, in 1972. The program, which was intended to upgrade the then mandatory 160 hours of training for ambulance attendants, was found to be too costly and premature. The program was abandoned after two years, and it was more than a decade before the legislative authority for its graduates to practice was put into place. An alternative program which provided 1,400 hours of training at the community college level prior to commencing employment was then tried, and made mandatory in 1977, with formal certification examinations being introduced in 1978.
Similar programs occurred at roughly the same time in Alberta and British Columbia, with other Canadian provinces gradually following, but with their own education and certification requirements. Advanced Care Paramedics were not introduced until 1984, when Toronto trained its first group internally, before the process spread across the country. By 2010, the Ontario system involved a two-year community college based program, including both hospital and field clinical components, prior to designation as a Primary Care Paramedic, although it is starting to head towards a university degree-based program. The province of Ontario announced that by September 2021, the entry-level primary care paramedic post-secondary program would be enhanced from a two-year diploma to a three-year advanced diploma in primary care paramedicine. As a result, advanced care paramedics in Ontario will require a minimum of four years of post-secondary education and critical care paramedics will require five years of post-secondary education. Israel In Israel, paramedics are trained in one of the following ways: a three-year degree in Emergency Medicine (B.EMS), a year and three months of IDF training, or MADA training. Paramedics manage and provide medical guidelines in mass casualty incidents. They operate in medevac units and ambulances. They are legalized under the 1976 Doctors Ordinance (Decree). A 2016 study at the Ben Gurion University of the Negev found that 73% of trained paramedics stop working within a five-year period, and 93% stop treating within 10 years. United Kingdom In the United Kingdom, ambulances were originally municipal services after the end of World War II. Training was frequently conducted internally, although national levels of coordination led to more standardization of staff training. Ambulance services were merged into county-level agencies in 1974, and then into regional agencies in 2006. The regional ambulance services, most often trusts, are under the authority of the National Health Service, and there is now significant standardization of training and skills. The UK model has three levels of ambulance staff. In increasing order of clinical skill these are: emergency care assistants, emergency medical technicians, and paramedics. Today, university qualifications are expected for paramedics, with the current entry level being a Bachelor of Science degree in Pre-Hospital Care or Paramedic Science. As the title "Paramedic" is legally protected, those using it must be registered with the Health and Care Professions Council (HCPC), and in order to qualify for registration an applicant must meet the standards for registration, which include having a degree obtained through an approved course. The change of entry requirements does not affect currently registered Paramedics, some of whom will still only have their entry qualification, but it is common for Paramedics to continue to progress through "top up" courses, for instance to work towards a Bachelor of Science degree. This has led to Paramedics holding a wide range of qualifications, with some qualifications (such as master's degrees in Advanced or Paramedic Practice) being a pre-requisite for paramedic prescribing. Paramedics work in various settings including NHS and independent ambulance providers, air ambulances, emergency departments and other alternative settings.
Some paramedics have gone on to become Paramedic Practitioners, a role that practises independently in the pre-hospital environment in a capacity similar to that of a nurse practitioner. This is a fully autonomous role, and such senior paramedics are now working in hospitals, community teams such as rapid response teams, and also in increasing numbers in general practice, where their role includes acute presentations, complex chronic care and end of life management. They work as part of the allied health professional team, including doctors, nurses, physician associates, physiotherapists, associate physicians, health care assistants and clinical pharmacists. Paramedic Practitioners also undertake examinations modelled upon the MRCGP (a combination of applied knowledge exams, clinical skills and workplace-based assessment) in order to use the title "specialist". There is also now a growing number of these advanced paramedics who are independent and supplementary prescribers. There are also 'Critical Care Paramedics' who specialise in acute emergency incidents. In 2018, the UK government changed legislation to allow paramedics to prescribe independently, opening new pathways for paramedics to progress into. This came into force on 1 April 2018, but did not immediately affect practice as guidance was still being written. United States In the United States, the minimum standard of paramedic training is considered vocational, but many colleges offer paramedic associate degree or bachelor's degree options. Paramedic education programs typically follow the U.S. NHTSA EMS Curriculum, DOT or National Registry of EMTs. While many regionally accredited community colleges offer paramedic programs and two-year associate degrees, a handful of universities also offer a four-year bachelor's degree component. To be accredited and nationally recognized, a paramedic program must meet the national standard course minimum of 1,500 or more hours of classroom training and 500+ clinical hours. Calendar length typically varies from 12 months to upwards of two years, excluding degree options, EMT training, work experience, and prerequisites. Candidates must be certified Emergency Medical Technicians before starting paramedic training. Entry requirements vary, but many paramedic programs also have prerequisites such as one year of required work experience as an emergency medical technician, or anatomy and physiology courses from an accredited college or university. Paramedics in some states must attend up to 50+ hours of ongoing education, plus maintain Pediatric Advanced Life Support and Advanced Cardiac Life Support certification. The National Registry requires 70+ hours of continuing education to maintain its certification, or one may re-certify by completing the written computer-based adaptive test (between 90 and 120 questions) every two years. Paramedicine continues to grow and evolve into a formal profession in its own right, complete with its own standards and body of knowledge, and in many locations paramedics have formed their own professional bodies. The early technician role, with limited training and a small, specific set of procedures, has become one that is beginning to require a foundation degree in countries such as Australia, South Africa, the UK, and increasingly in Canada and parts of the U.S. such as Oregon, where a degree is required for entry-level practice. 
Ukraine As a part of the 2017 Emergency Medicine Reform, the Ministry of Healthcare introduced two specialties — "paramedic" and "emergency medical technician". Structure of employment Paramedics are employed by a variety of different organizations, and the services they provide may occur under differing organizational structures, depending on the part of the world. A new and evolving role for paramedics involves the expansion of their practice into the provision of relatively basic primary health care and assessment services. Some paramedics have begun to specialize their practice, frequently in association with the environment in which they will work. Some early examples of this involved aviation medicine and the use of helicopters, and the transfer of critical care patients between facilities. While some jurisdictions still use physicians, nurses, and technicians for transporting patients, increasingly this role falls to specialized senior and experienced paramedics. Other areas of specialization include such roles as tactical paramedics working in police units, marine paramedics, hazardous materials (Hazmat) teams, Heavy Urban Search and Rescue, and paramedics on offshore oil platforms, oil and mineral exploration teams, and in the military. The majority of paramedics are employed by the emergency medical service for their area, although this employer could itself be working under a number of models, including a specific autonomous public ambulance service, a fire department, a hospital-based service, or a private company working under contract. In Washington, firefighters have been offered free paramedic training. There are also many paramedics who volunteer for backcountry or wilderness rescue teams, and small-town rescue squads. In the specific case of an ambulance service being maintained by a fire department, paramedics and EMTs may be required to maintain firefighting and rescue skills as well as medical skills, and vice versa. In some instances, such as Los Angeles County, a fire department may provide emergency medical services, but as a rapid response or rescue unit rather than a transport ambulance. The provision of municipal ambulance services and paramedics can vary by area, even within the same country or state. For instance, in Canada, the province of British Columbia operates a province-wide service (the British Columbia Ambulance Service) whereas in Ontario, the service is provided by each municipality, either as a distinct service, linked to the fire service, or contracted out to a third party. 
Scope of practice Common skills While there are varying degrees of training and expectations around the world, a set of skills practised by paramedics in the pre-hospital setting commonly includes: Advanced cardiac life support, or ACLS, including cardiopulmonary resuscitation, defibrillation, cardioversion, transcutaneous pacing, and administration of cardiac drugs Patient assessment, including acquisition of vital signs, physical exam, chest auscultation, history taking, electrocardiogram acquisition and interpretation, capnography, pulse oximetry, point-of-care ultrasound and basic blood chemistry interpretation (glucose, lactate) Airway management techniques including tracheal intubation, cricothyrotomy, rapid sequence induction, supraglottic airway insertion, manual repositioning, sterile suctioning, use of oropharyngeal and nasopharyngeal airway adjuncts, and manual removal of obstructions via direct laryngoscopy and use of Magill forceps Thoracostomy and pericardiocentesis to relieve pneumothorax and pericardial tamponade Intravenous (IV) and intraosseous (IO) cannulation Oxygen administration and positive pressure ventilation via bag-valve-mask, CPAP device, or ventilator Fluid resuscitation Administration of emergency drugs/medications (see section below) Bleeding control and management of shock Spinal injury management, including immobilization and safe transport Fracture management, including assessment, splinting, and dislocation reduction Obstetrics, including assessment, childbirth, and recognition of and procedures for obstetrical emergencies such as breech presentation, cord presentation, and placental abruption Management of burns, including classification, estimate of surface area, recognition of more serious burns, and treatment Triage of patients in a mass casualty incident Surgical procedures such as field amputation, escharotomy, or thoracotomy (if trained and credentialed) Emergency pharmacology Paramedics carry and administer a wide array of emergency medications. The specific medications they are permitted to administer vary widely, based on local standards of care and protocols. For an accurate description of permitted drugs or procedures in a given location, it is necessary to contact that jurisdiction directly. A representative list of medications may commonly include: Analgesic medications such as aspirin, ketorolac and paracetamol (acetaminophen), used to relieve pain or decrease nausea and vomiting Narcotics like morphine, pethidine, fentanyl, and methoxyflurane, used to treat severe pain. Beta and calcium channel blockers such as diltiazem, metoprolol and verapamil, used to slow down excessively high heart rates or to treat severe hypertension Parasympatholytic (anticholinergic) drugs such as atropine, used to speed up slow bradycardic heart rates Sympathomimetics such as dopamine, dobutamine, norepinephrine, and epinephrine, used for cardiac arrest, severe hypotension (low blood pressure), shock and sepsis. These are often known as "vasoactive" agents. 
Dextrose (often D50W, a solution of 50% dextrose in water), used to treat hypoglycemia (low blood sugar) Sedatives like midazolam, lorazepam, etomidate, and ketamine, used to reduce the irritability or agitation of patients, to relieve symptoms of seizure, or to provide procedural sedation Paralytics such as succinylcholine, rocuronium, and vecuronium, used when an emergency procedure such as rapid sequence intubation (RSI) is required Antipsychotics like haloperidol or ziprasidone, used to sedate combative patients Respiratory medications such as albuterol and ipratropium bromide, used to treat conditions such as asthma and acute bronchitis Steroids such as hydrocortisone and methylprednisolone, used to treat inflammatory respiratory conditions and adrenal crisis Cardiac medications such as nitroglycerin and aspirin, used to treat cardiac ailments such as angina and myocardial infarctions Diuretic medications such as furosemide, used to treat congestive heart failure and severe hypertension Antiarrhythmics such as amiodarone, adenosine, lidocaine and magnesium sulfate, used to treat abnormal heart rhythms such as ventricular tachycardia and ventricular fibrillation Antiemetics such as promethazine or ondansetron, used for nausea and vomiting Antidotes for a variety of toxins, such as naloxone (opioids), pralidoxime (organophosphates), sodium bicarbonate (tricyclic antidepressants), and hydroxocobalamin (cyanide). Blood products and tranexamic acid in cases of hemorrhagic shock Broad-spectrum antibiotics such as ceftriaxone or vancomycin for cases of sepsis Hormones like oxytocin to control post-partum bleeding Skills by certification level As described above, many jurisdictions have different levels of paramedic training, leading to variations in what procedures different paramedics may perform depending upon their qualifications. Three common general divisions of paramedic training are the basic technician, general paramedic or advanced technician, and advanced paramedic. Common skills that these three certification levels may practice are summarized in the table below. The skills for the higher levels also include those listed for lower levels. Medicolegal authority The medicolegal framework for paramedics is highly dependent on the overall structure of emergency medical services in the territory where they are working. In many localities, paramedics operate as a direct extension of a physician medical director and practice as an extension of the medical director's license. In the United States, a physician delegates authority under an individual state's Medical Practice Act. This gives a paramedic the ability to practice within a limited scope of practice defined in law, along with state DOH guidelines and medical control oversight. The authority to practice in this manner is granted in the form of standing orders (protocols, or off-line medical control) and direct physician consultation via phone or radio (on-line medical control). Under this paradigm, paramedics effectively assume the role of out-of-hospital field agents for regional emergency physicians, with independent clinical decision-making. In places where paramedics are recognised health care professionals registered with an appropriate body, they can conduct all procedures authorised for their profession, including the administration of prescription medication, and are personally answerable to a regulator. 
For example, in the United Kingdom, the Health and Care Professions Council regulates paramedics and can censure or strike a paramedic from the register. In some cases, paramedics may gain further qualifications to extend their status to that of a paramedic practitioner or advanced paramedic, which may allow them to administer a wider range of drugs and use a wider range of clinical skills. In some areas, paramedics are permitted to perform many advanced skills only while assisting a physician who is physically present, except in immediately life-threatening emergencies.
Biology and health sciences
Health professionals
Health
75022
https://en.wikipedia.org/wiki/Expansion%20card
Expansion card
In computing, an expansion card (also called an expansion board, adapter card, peripheral card or accessory card) is a printed circuit board that can be inserted into an electrical connector, or expansion slot (also referred to as a bus slot) on a computer's motherboard (see also backplane) to add functionality to a computer system. Sometimes the design of the computer's case and motherboard involves placing most (or all) of these slots onto a separate, removable card. Typically such cards are referred to as riser cards, in part because they project upward from the board and allow expansion cards to be placed above and parallel to the motherboard. Expansion cards allow the capabilities and interfaces of a computer system to be extended or supplemented in a way appropriate to the tasks it will perform. For example, a high-speed multi-channel data acquisition system would be of no use in a personal computer used for bookkeeping, but might be a key part of a system used for industrial process control. Expansion cards can often be installed or removed in the field, allowing a degree of user customization for particular purposes. Some expansion cards take the form of "daughterboards" that plug into connectors on a supporting system board. In personal computing, notable expansion buses and expansion card standards include the S-100 bus from 1974 associated with the CP/M operating system, the 50-pin expansion slots of the original Apple II computer from 1977 (unique to Apple), IBM's Industry Standard Architecture (ISA) introduced with the IBM PC in 1981, Acorn's tube expansion bus on the BBC Micro also from 1981, IBM's patented and proprietary Micro Channel architecture (MCA) from 1987 that never won favour in the clone market, the vastly improved Peripheral Component Interconnect (PCI) that displaced ISA in 1992, and PCI Express from 2003 which abstracts the interconnect into high-speed communication "lanes" and relegates all other functions into software protocol. History Vacuum-tube based computers had modular construction, but individual functions for peripheral devices filled a cabinet, not just a printed circuit board. Processor, memory and I/O cards became feasible with the development of integrated circuits. Expansion cards make processor systems adaptable to the needs of the user by making it possible to connect various types of devices, including I/O, additional memory, and optional features (such as a floating point unit) to the central processor. Minicomputers, starting with the PDP-8, were made of multiple cards communicating through, and powered by, a passive backplane. The first commercial microcomputer to feature expansion slots was the Micral N, in 1973. The first company to establish a de facto standard was MITS with the Altair 8800, developed 1974–1975, whose bus later became a multi-manufacturer standard, the S-100 bus. Many of these computers were also passive backplane designs, where all elements of the computer (processor, memory, and I/O) plugged into a card cage which passively distributed signals and power between the cards. Proprietary bus implementations for systems such as the Apple II co-existed with multi-manufacturer standards. IBM PC and descendants IBM introduced what would retroactively be called the Industry Standard Architecture (ISA) bus with the IBM PC in 1981. At that time, the technology was called the PC bus. The IBM XT, introduced in 1983, used the same bus (with slight exception). 
The 8-bit PC and XT bus was extended with the introduction of the IBM AT in 1984. This used a second connector for extending the address and data bus over the XT, but was backward compatible; 8-bit cards were still usable in the AT 16-bit slots. Industry Standard Architecture (ISA) became the designation for the IBM AT bus after other types were developed. Users of the ISA bus had to have in-depth knowledge of the hardware they were adding to properly connect the devices, since memory addresses, I/O port addresses, and DMA channels had to be configured by switches or jumpers on the card to match the settings in driver software. IBM's MCA bus, developed for the PS/2 in 1987, was a competitor to ISA, also their design, but fell out of favor due to the ISA's industry-wide acceptance and IBM's licensing of MCA. EISA, the 32-bit extended version of ISA championed by Compaq, was used on some PC motherboards until 1997, when Microsoft declared it a "legacy" subsystem in the PC 97 industry white-paper. Proprietary local buses (q.v. Compaq) and then the VESA Local Bus Standard, were late 1980s expansion buses that were tied but not exclusive to the 80386 and 80486 CPU bus. The PC/104 bus is an embedded bus that copies the ISA bus. Intel launched their PCI bus chipsets along with the P5-based Pentium CPUs in 1993. The PCI bus was introduced in 1991 as a replacement for ISA. The standard (now at version 3.0) is found on PC motherboards to this day. The PCI standard supports bus bridging: as many as ten daisy-chained PCI buses have been tested. CardBus, using the PCMCIA connector, is a PCI format that attaches peripherals to the Host PCI Bus via PCI to PCI Bridge. Cardbus is being supplanted by ExpressCard format. Intel introduced the AGP bus in 1997 as a dedicated video acceleration solution. AGP devices are logically attached to the PCI bus over a PCI-to-PCI bridge. Though termed a bus, AGP usually supports only a single card at a time (Legacy BIOS support issues). From 2005 PCI Express has been replacing both PCI and AGP. This standard, approved in 2004, implements the logical PCI protocol over a serial communication interface. PC/104(-Plus) or Mini PCI are often added for expansion on small form factor boards such as Mini-ITX. For their 1000 EX and 1000 HX models, Tandy Computer designed the PLUS expansion interface, an adaptation of the XT-bus supporting cards of a smaller form factor. Because it is electrically compatible with the XT bus (a.k.a. 8-bit ISA or XT-ISA), a passive adapter can be made to connect XT cards to a PLUS expansion connector. Another feature of PLUS cards is that they are stackable. Another bus that offered stackable expansion modules was the "sidecar" bus used by the IBM PCjr. This may have been electrically comparable to the XT bus; it most certainly had some similarities since both essentially exposed the 8088 CPU's address and data buses, with some buffering and latching, the addition of interrupts and DMA provided by Intel add-on chips, and a few system fault detection lines (Power Good, Memory Check, I/O Channel Check). Again, PCjr sidecars are not technically expansion cards, but expansion modules, with the only difference being that the sidecar is an expansion card enclosed in a plastic box (with holes exposing the connectors). External expansion buses Laptops are generally unable to accept most expansion cards intended for desktop computers. Consequently, several compact expansion standards were developed. 
The original PC Card expansion card standard is essentially a compact version of the ISA bus. The CardBus expansion card standard is an evolution of the PC card standard to make it into a compact version of the PCI bus. The original ExpressCard standard acts like it is either a USB 2.0 peripheral or a PCI Express 1.x x1 device. ExpressCard 2.0 adds SuperSpeed USB as another type of interface the card can use. Unfortunately, CardBus and ExpressCard are vulnerable to DMA attack unless the laptop has an IOMMU that is configured to thwart these attacks. One notable exception to the above is the inclusion of a single internal slot for a special reduced size version of the desktop standard. The most well known examples are Mini-PCI or Mini PCIe. Such slots were usually intended for a specific purpose such as offering "built-in" wireless networking or upgrading the system at production with a discrete GPU. Other families Most other computer lines, including those from Apple Inc., Tandy, Commodore, Amiga, and Atari, Inc., offered their own expansion buses. The Amiga used Zorro II. Apple used a proprietary system with seven 50-pin-slots for Apple II peripheral cards, then later used both variations on Processor Direct Slot and NuBus for its Macintosh series until 1995, when they switched to a PCI Bus. Generally speaking, most PCI expansion cards will function on any CPU platform which incorporates PCI bus hardware provided there is a software driver for that type. PCI video cards and any other cards that contain their own BIOS or other ROM are problematic, although video cards conforming to VESA Standards may be used for secondary monitors. DEC Alpha, IBM PowerPC, and NEC MIPS workstations used PCI bus connectors. Both Zorro II and NuBus were plug and play, requiring no hardware configuration by the user. Other computer buses were used for industrial control, instruments, and scientific systems. One specific example is HP-IB (or Hewlett Packard Interface Bus) which was ultimately standardized as IEEE-488 (aka GPIB). Some well-known historical standards include VMEbus, STD Bus, SBus (specific to Sun's SPARCStations), and numerous others. Video game consoles Many other video game consoles such as the Nintendo Entertainment System and the Sega Genesis included expansion buses in some form; In the case of at least the Genesis, the expansion bus was proprietary. In fact, the cartridge slots of many cartridge-based consoles (not counting the Atari 2600) would qualify as expansion buses, as they exposed both read and write capabilities of the system's internal bus. However, the expansion modules attached to these interfaces, though functionally the same as expansion cards, are not technically expansion cards, due to their physical form. Applications The primary purpose of an expansion card is to provide or expand on features not offered by the motherboard. For example, the original IBM PC did not have on-board graphics or hard drive capability. In that case, a graphics card and an ST-506 hard disk controller card provided graphics capability and hard drive interface respectively. Some single-board computers made no provision for expansion cards, and may only have provided IC sockets on the board for limited changes or customization. Since reliable multi-pin connectors are relatively costly, some mass-market systems such as home computers had no expansion slots and instead used a card-edge connector at the edge of the main board, putting the costly matching socket into the cost of the peripheral device. 
In the case of expansion of on-board capability, a motherboard may provide a single serial RS232 port or Ethernet port. An expansion card can be installed to offer multiple RS232 ports or multiple and higher-bandwidth Ethernet ports. In this case, the motherboard provides basic functionality but the expansion card offers additional or enhanced ports. Physical construction One edge of the expansion card holds the contacts (the edge connector or pin header) that fit into the slot. They establish the electrical contact between the electronics on the card and on the motherboard. Peripheral expansion cards generally have connectors for external cables. In the PC-compatible personal computer, these connectors were located in the support bracket at the back of the cabinet. Industrial backplane systems had connectors mounted on the top edge of the card, opposite to the backplane pins. Depending on the form factor of the motherboard and case, around one to seven expansion cards can be added to a computer system. Nineteen or more expansion cards can be installed in backplane systems. When many expansion cards are added to a system, total power consumption and heat dissipation become limiting factors. Some expansion cards take up more than one slot space. For example, many graphics cards on the market as of 2010 are dual-slot graphics cards, using the second slot as a place to put an active heat sink with a fan. Some cards are "low-profile" cards, meaning that they are shorter than standard cards and will fit in a lower-height computer chassis such as HTPC and SFF cases. (There is a "low profile PCI card" standard that specifies a much smaller bracket and board area.) The group of expansion cards that are used for external connectivity, such as network, SAN or modem cards, are commonly referred to as input/output cards (or I/O cards). Daughterboard A daughterboard, daughtercard, mezzanine board or piggyback board is an expansion card that attaches to a system directly. Daughterboards often have plugs, sockets, pins or other attachments for other boards. Daughterboards often have only internal connections within a computer or other electronic devices, and usually access the motherboard directly rather than through a computer bus. Such boards are used to improve various memory capacities of a computer, to enable the computer to connect to certain kinds of networks that it previously could not connect to, or to allow users to customize their computers for various purposes such as gaming. Daughterboards are sometimes used in computers in order to allow expansion cards to fit parallel to the motherboard, usually to maintain a small form factor. This form is also called a riser card, or riser. Daughterboards are also sometimes used to expand the basic functionality of an electronic device, such as when a certain model has features added to it and is released as a new or separate model. Rather than redesigning the first model completely, a daughterboard may be added to a special connector on the main board. These usually fit on top of and parallel to the board, separated by spacers or standoffs, and are sometimes called mezzanine cards due to being stacked like the mezzanine of a theatre. Wavetable cards (sample-based synthesis cards) are often mounted on sound cards in this manner. 
Some mezzanine card interface standards include the 400 pin FPGA Mezzanine Card (FMC); the 172 pin High-Speed Mezzanine Card (HSMC); the PCI Mezzanine Card (PMC); XMC mezzanines; the Advanced Mezzanine Card; IndustryPacks (VITA 4), the GreenSpring Computers Mezzanine modules; etc. Examples of daughterboard-style expansion cards include: Enhanced Graphics Adapter piggyback board, adds memory beyond 64 KB, up to 256 KB Expanded memory piggyback board, adds additional memory to some EMS and EEMS boards ADD daughterboard RAID daughterboard Network interface controller (NIC) daughterboard CPU Socket daughterboard Bluetooth daughterboard Modem daughterboard AD/DA/DIO daughter-card Communication daughterboard (CDC) Server Management daughterboard (SMDC) Serial ATA connector daughterboard Robotic daughterboard Access control List daughterboard Arduino "shield" daughterboards Beaglebone "cape" daughterboard Raspberry Pi "HAT add-on board" Network Daughterboard (NDB). Commonly integrates: bus interfaces logic, LLC, PHY and Magnetics onto a single board. Standards PCI Extended (PCI-X) PCI Express (PCIe) Mini PCIe M.2 Accelerated Graphics Port (AGP) Peripheral Component Interconnect (PCI) Industry Standard Architecture (ISA) Micro Channel architecture (MCA) VESA Local Bus (VLB) CardBus/PC card/PCMCIA (for notebook computers) ExpressCard (for notebook computers) Audio/modem riser (AMR) Communications and networking riser (CNR) CompactFlash (for handheld computers and high speed cameras and camcorders) SBus (1990s SPARC-based Sun computers) Amiga Zorro Bizarro (Commodore Amiga) NuBus (Apple Macintosh) FPGA Mezzanine Card (FMC)
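Because cards on buses such as PCI and PCI Express identify themselves to the host with vendor and device IDs, an operating system can enumerate whatever expansion cards are installed. The following is only a minimal sketch, assuming a Linux system where the kernel exposes each PCI/PCIe function under the sysfs path /sys/bus/pci/devices; it is an illustration of that idea, not a description of any particular card or driver.

import pathlib

# Minimal sketch (Linux-specific assumption): list the vendor and device IDs
# the kernel reports for each PCI/PCIe function found on the bus.
for dev in sorted(pathlib.Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()   # e.g. 0x8086
    device = (dev / "device").read_text().strip()
    print(dev.name, vendor, device)

Running it on a typical desktop prints one line per installed function (graphics card, network controller, and so on), which is how software-level enumeration of the slots described above usually looks in practice.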
Technology
Computer hardware
null
75075
https://en.wikipedia.org/wiki/Haloalkane
Haloalkane
The haloalkanes (also known as halogenoalkanes or alkyl halides) are alkanes containing one or more halogen substituents. They are a subset of the general class of halocarbons, although the distinction is not often made. Haloalkanes are widely used commercially. They are used as flame retardants, fire extinguishants, refrigerants, propellants, solvents, and pharmaceuticals. Subsequent to the widespread use in commerce, many halocarbons have also been shown to be serious pollutants and toxins. For example, the chlorofluorocarbons have been shown to lead to ozone depletion. Methyl bromide is a controversial fumigant. Only haloalkanes that contain chlorine, bromine, and iodine are a threat to the ozone layer, but fluorinated volatile haloalkanes in theory may have activity as greenhouse gases. Methyl iodide, a naturally occurring substance, however, does not have ozone-depleting properties and the United States Environmental Protection Agency has designated the compound a non-ozone layer depleter. For more information, see Halomethane. Haloalkanes, or alkyl halides, are compounds with the general formula "RX", where R is an alkyl or substituted alkyl group and X is a halogen (F, Cl, Br, I). Haloalkanes have been known for centuries. Chloroethane was produced in the 15th century. The systematic synthesis of such compounds developed in the 19th century in step with the development of organic chemistry and the understanding of the structure of alkanes. Methods were developed for the selective formation of C-halogen bonds. Especially versatile methods included the addition of halogens to alkenes, hydrohalogenation of alkenes, and the conversion of alcohols to alkyl halides. These methods are so reliable and so easily implemented that haloalkanes became cheaply available for use in industrial chemistry because the halide could be further replaced by other functional groups. While many haloalkanes are human-produced, substantial amounts are biogenic. Classes From the structural perspective, haloalkanes can be classified according to the connectivity of the carbon atom to which the halogen is attached. In primary (1°) haloalkanes, the carbon that carries the halogen atom is only attached to one other alkyl group. An example is chloroethane (CH3CH2Cl). In secondary (2°) haloalkanes, the carbon that carries the halogen atom has two C–C bonds. In tertiary (3°) haloalkanes, the carbon that carries the halogen atom has three C–C bonds. Haloalkanes can also be classified according to the type of halogen from group 17 that is present. Haloalkanes containing carbon bonded to fluorine, chlorine, bromine, and iodine result in organofluorine, organochlorine, organobromine and organoiodine compounds, respectively. Compounds containing more than one kind of halogen are also possible. Several classes of widely used haloalkanes are classified in this way: chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs) and hydrofluorocarbons (HFCs). These abbreviations are particularly common in discussions of the environmental impact of haloalkanes. Properties Haloalkanes generally resemble the parent alkanes in being colorless, relatively odorless, and hydrophobic. The melting and boiling points of chloro-, bromo-, and iodoalkanes are higher than those of the analogous alkanes, scaling with the atomic weight and number of halides. This effect is due to the increased strength of the intermolecular forces—from London dispersion to dipole-dipole interaction because of the increased polarizability. 
Thus tetraiodomethane (CI4) is a solid whereas tetrachloromethane (CCl4) is a liquid. Many fluoroalkanes, however, go against this trend and have lower melting and boiling points than their nonfluorinated analogues due to the decreased polarizability of fluorine. For example, methane (CH4) has a melting point of −182.5 °C whereas tetrafluoromethane (CF4) has a melting point of −183.6 °C. As they contain fewer C–H bonds, haloalkanes are less flammable than alkanes, and some are used in fire extinguishers. Haloalkanes are better solvents than the corresponding alkanes because of their increased polarity. Haloalkanes containing halogens other than fluorine are more reactive than the parent alkanes—it is this reactivity that is the basis of most controversies. Many are alkylating agents, with primary haloalkanes and those containing heavier halogens being the most active (fluoroalkanes do not act as alkylating agents under normal conditions). The ozone-depleting abilities of the CFCs arise from the photolability of the C–Cl bond. Natural occurrence An estimated 4,100,000,000 kg of chloromethane are produced annually by natural sources. The oceans are estimated to release 1 to 2 million tons of bromomethane annually. Nomenclature IUPAC The formal naming of haloalkanes should follow IUPAC nomenclature, which puts the halogen as a prefix to the alkane. For example, ethane with bromine becomes bromoethane, and methane with four chlorine groups becomes tetrachloromethane. However, many of these compounds already have an established trivial name, which is endorsed by the IUPAC nomenclature, for example chloroform (trichloromethane) and methylene chloride (dichloromethane). But nowadays, IUPAC nomenclature is used. To reduce confusion this article follows the systematic naming scheme throughout. Production Haloalkanes can be produced from virtually all organic precursors. From the perspective of industry, the most important ones are alkanes and alkenes. From alkanes Alkanes react with halogens by free radical halogenation. In this reaction a hydrogen atom is removed from the alkane, then replaced by a halogen atom by reaction with a diatomic halogen molecule. Free radical halogenation typically produces a mixture of compounds mono- or multihalogenated at various positions. From alkenes and alkynes In hydrohalogenation, an alkene reacts with a dry hydrogen halide (HX) electrophile like hydrogen chloride (HCl) or hydrogen bromide (HBr) to form a mono-haloalkane. The double bond of the alkene is replaced by two new bonds, one with the halogen and one with the hydrogen atom of the hydrohalic acid. Markovnikov's rule states that under normal conditions, hydrogen is attached to the unsaturated carbon with the most hydrogen substituents. The rule is violated when neighboring functional groups polarize the multiple bond, or in certain additions of hydrogen bromide (addition in the presence of peroxides and the Wohl-Ziegler reaction) which occur by a free-radical mechanism. Alkenes also react with halogens (X2) to form haloalkanes with two neighboring halogen atoms in a halogen addition reaction. Alkynes react similarly, forming tetrahalo compounds. This is sometimes known as "decolorizing" the halogen, since the reagent X2 is colored and the product is usually colorless and odorless. From alcohols Alcohols can be converted to haloalkanes. Direct reaction with a hydrohalic acid rarely gives a pure product, instead generating ethers. 
However, some exceptions are known: ionic liquids suppress the formation or promote the cleavage of ethers, hydrochloric acid converts tertiary alcohols to chloroalkanes, and primary and secondary alcohols convert similarly in the presence of a Lewis acid activator, such as zinc chloride. The latter is exploited in the Lucas test. In the laboratory, more active deoxygenating and halogenating agents combine with base to effect the conversion. In the "Darzens halogenation", thionyl chloride (SOCl2) with pyridine converts less reactive alcohols to chlorides. Both phosphorus pentachloride (PCl5) and phosphorus trichloride (PCl3) function similarly, and alcohols convert to bromoalkanes under hydrobromic acid or phosphorus tribromide (PBr3). The heavier halogens do not require preformed reagents: a catalytic amount of PBr3 may be used for the transformation using phosphorus and bromine; the PBr3 is formed in situ. Iodoalkanes may similarly be prepared using red phosphorus and iodine (equivalent to phosphorus triiodide). One family of named reactions relies on the deoxygenating effect of triphenylphosphine. In the Appel reaction, the reagents are a tetrahalomethane and triphenylphosphine; the co-products are haloform and triphenylphosphine oxide. In the Mitsunobu reaction, the reagents are any nucleophile, triphenylphosphine, and an azodicarboxylate; the co-products are triphenylphosphine oxide and a hydrazodiamide. From carboxylic acids Two methods for the synthesis of haloalkanes from carboxylic acids are the Hunsdiecker reaction and the Kochi reaction. Biosynthesis Many chloro- and bromoalkanes are formed naturally. The principal pathways involve the enzymes chloroperoxidase and bromoperoxidase. From amines by Sandmeyer's Method Primary aromatic amines yield diazonium ions in a solution of sodium nitrite. Upon heating this solution with copper(I) chloride, the diazonium group is replaced by -Cl. This is a comparatively easy method to make aryl halides, as the gaseous product can be separated easily from the aryl halide. When an iodide is to be made, copper chloride is not needed. Addition of potassium iodide with gentle shaking produces the haloalkane. Reactions Haloalkanes are reactive towards nucleophiles. They are polar molecules: the carbon to which the halogen is attached is slightly electropositive whereas the halogen is slightly electronegative. This results in an electron deficient (electrophilic) carbon which, inevitably, attracts nucleophiles. Substitution Substitution reactions involve the replacement of the halogen with another molecule—thus leaving saturated hydrocarbons, as well as the halogenated product. Haloalkanes behave as the R+ synthon, and readily react with nucleophiles. Hydrolysis, a reaction in which water breaks a bond, is a good example of the nucleophilic nature of haloalkanes. The polar bond attracts a hydroxide ion, OH− (NaOH(aq) being a common source of this ion). This OH− is a nucleophile with a clearly negative charge; as it has excess electrons, it donates them to the carbon, which results in a covalent bond between the two. Thus C–X is broken by heterolytic fission, resulting in a halide ion, X−. As can be seen, the OH is now attached to the alkyl group, creating an alcohol. (Hydrolysis of bromoethane, for example, yields ethanol.) Reactions with ammonia give primary amines. Chloro- and bromoalkanes are readily substituted by iodide in the Finkelstein reaction. The iodoalkanes produced easily undergo further reaction. Sodium iodide is used as a catalyst. Haloalkanes react with ionic nucleophiles (e.g. 
cyanide, thiocyanate, azide); the halogen is replaced by the respective group. This is of great synthetic utility: chloroalkanes are often inexpensively available. For example, after undergoing substitution reactions, cyanoalkanes may be hydrolyzed to carboxylic acids, or reduced to primary amines using lithium aluminium hydride. Azoalkanes may be reduced to primary amines by Staudinger reduction or lithium aluminium hydride. Amines may also be prepared from alkyl halides in amine alkylation, the Gabriel synthesis and the Delépine reaction, by undergoing nucleophilic substitution with potassium phthalimide or hexamine respectively, followed by hydrolysis. In the presence of a base, haloalkanes alkylate alcohols, amines, and thiols to obtain ethers, N-substituted amines, and thioethers respectively. They are substituted by Grignard reagents to give magnesium salts and an extended alkyl compound. Elimination In dehydrohalogenation reactions, the halogen and an adjacent proton are removed from halocarbons, thus forming an alkene. For example, with bromoethane and sodium hydroxide (NaOH) in ethanol, the hydroxide ion HO− abstracts a hydrogen atom. A bromide ion is then lost, resulting in ethene, H2O and NaBr. Thus, haloalkanes can be converted to alkenes. Similarly, dihaloalkanes can be converted to alkynes. In related reactions, 1,2-dibromocompounds are debrominated by zinc dust to give alkenes, and geminal dihalides can react with strong bases to give carbenes. Other Haloalkanes undergo free-radical reactions with elemental magnesium to give alkylmagnesium compounds: Grignard reagents. Haloalkanes also react with lithium metal to give organolithium compounds. Both Grignard reagents and organolithium compounds behave as the R− synthon. Alkali metals such as sodium and lithium are able to cause haloalkanes to couple in the Wurtz reaction, giving symmetrical alkanes. Haloalkanes, especially iodoalkanes, also undergo oxidative addition reactions to give organometallic compounds. Applications Chlorinated or fluorinated alkenes undergo polymerization. Important halogenated polymers include polyvinyl chloride (PVC) and polytetrafluoroethene (PTFE, or Teflon). Alkyl fluorides An estimated one fifth of pharmaceuticals contain fluorine, including several of the top drugs. Most of these compounds are alkyl fluorides. Examples include 5-fluorouracil, flunitrazepam (Rohypnol), fluoxetine (Prozac), paroxetine (Paxil), ciprofloxacin (Cipro), mefloquine and fluconazole. Fluorine-substituted ethers are volatile anesthetics, including the commercial products methoxyflurane, enflurane, isoflurane, sevoflurane and desflurane. Alkyl chlorides Some low molecular weight chlorinated hydrocarbons such as chloroform, dichloromethane, dichloroethene, and trichloroethane are useful solvents. Several million tons of chlorinated methanes are produced annually. Chloromethane is a precursor to chlorosilanes and silicones. Chlorodifluoromethane (CHClF2) is used to make Teflon. Alkyl bromides Large scale applications of alkyl bromides exploit their toxicity, which also limits their usefulness. Methyl bromide is also an effective fumigant, but its production and use are controversial. Alkyl iodides No large scale applications are known for alkyl iodides. Methyl iodide is a popular methylating agent in organic synthesis. Chlorofluorocarbons Chlorofluorocarbons were used almost universally as refrigerants and propellants due to their relatively low toxicity and high heat of vaporization. 
Starting in the 1980s, as their contribution to ozone depletion became known, their use was increasingly restricted, and they have now largely been replaced by HFCs. Environmental considerations Nature produces massive amounts of chloromethane and bromomethane. Most concern focuses on anthropogenic halocarbons, which are potential toxins, even carcinogens. Similarly, great interest has been shown in the remediation of man-made halocarbons such as those produced on a large scale, for example dry cleaning fluids. Volatile halocarbons degrade photochemically because the carbon-halogen bond can be labile. Some microorganisms dehalogenate halocarbons. While this behavior is intriguing, the rates of remediation are generally very slow. Safety As alkylating agents, haloalkanes are potential carcinogens. The more reactive members of this large class of compounds generally pose greater risk, e.g. carbon tetrachloride. 
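A few concrete equations illustrate the production and reaction chemistry described above. These are standard textbook examples rather than equations taken from this article, and the solvents noted are common choices, not requirements:

Hydrohalogenation (Markovnikov addition): CH3–CH=CH2 + HBr → CH3–CHBr–CH3 (2-bromopropane; in the presence of peroxides the anti-Markovnikov product CH3–CH2–CH2Br forms instead, by a free-radical pathway).
Finkelstein substitution: CH3CH2Br + NaI → CH3CH2I + NaBr (commonly run in acetone, where the precipitating NaBr helps drive the halide exchange).
Dehydrohalogenation (elimination): CH3CH2Br + NaOH (in ethanol) → CH2=CH2 + NaBr + H2O.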
Physical sciences
Organic compounds
null
75089
https://en.wikipedia.org/wiki/Racemic%20mixture
Racemic mixture
In chemistry, a racemic mixture or racemate () is one that has equal amounts of left- and right-handed enantiomers of a chiral molecule or salt. Racemic mixtures are rare in nature, but many compounds are produced industrially as racemates. History The first known racemic mixture was racemic acid, which Louis Pasteur found to be a mixture of the two enantiomeric isomers of tartaric acid. He manually separated the crystals of a mixture, starting from an aqueous solution of the sodium ammonium salt of racemic tartaric acid. Pasteur benefited from the fact that the ammonium tartrate salt gives enantiomeric crystals with distinct crystal forms (at 77 °F). Reasoning from the macroscopic scale down to the molecular, he reckoned that the molecules had to have non-superimposable mirror images. A sample with only a single enantiomer is an enantiomerically pure or enantiopure compound. Etymology From racemic acid found in grapes; from Latin racemus, meaning a bunch of grapes. This acid, when naturally produced in grapes, is only the right-handed version of the molecule, better known as tartaric acid. In many Germanic languages racemic acid is called “grape acid”, e.g. German traubensäure and Swedish druvsyra. Carl von Linné gave red elderberry the scientific name Sambucus racemosa as the Swedish name, druvfläder, means 'grape elder', so called because its berries grow in a grape-like cluster. Nomenclature A racemic mixture is denoted by the prefix (±)- or dl- (for sugars the prefix DL- may be used), indicating an equal (1:1) mixture of dextro and levo isomers. Also the prefix rac- (or racem-) or the symbols RS and SR (all in italic letters) are used. If the ratio is not 1:1 (or is not known), the prefix (+)/(−), D/L- or d/l- (with a slash) is used instead. The usage of d and l is discouraged by IUPAC. Properties A racemate is optically inactive (achiral), meaning that such materials do not rotate the polarization of plane-polarized light. Although the two enantiomers rotate plane-polarized light in opposite directions, the rotations cancel each other out because the negative (−, counterclockwise, levorotatory) and positive (+, clockwise, dextrorotatory) enantiomers are present in equal amounts. In contrast to the two pure enantiomers, which have identical physical properties except for the direction of rotation of plane-polarized light, a racemate sometimes has different properties from either of the pure enantiomers. Different melting points are most common, but different solubilities and boiling points are also possible. Pharmaceuticals may be available as a racemate or as the pure enantiomer, which might have different potencies. Because biological systems have many chiral asymmetries, pure enantiomers frequently have very different biological effects; examples include glucose and methamphetamine. Crystallization There are four ways to crystallize a racemate, three of which H. W. B. Roozeboom had distinguished by 1899: Conglomerate (sometimes racemic conglomerate) If the molecules of the substance have a much greater affinity for the same enantiomer than for the opposite one, a mechanical mixture of enantiomerically pure crystals will result. The mixture of enantiomerically pure R and S crystals forms a eutectic mixture. Consequently, the melting point of the conglomerate is always lower than that of the pure enantiomer. Addition of a small amount of one enantiomer to the conglomerate increases the melting point. Roughly 10% of racemic chiral compounds crystallize as conglomerates. 
Racemic compound (sometimes true racemate) If molecules have a greater affinity for the opposite enantiomer than for the same enantiomer, the substance forms a single crystalline phase in which the two enantiomers are present in an ordered 1:1 ratio in the elementary cell. Adding a small amount of one enantiomer to the racemic compound decreases the melting point, but the pure enantiomer can have a higher or lower melting point than the compound. A special case of racemic compounds is that of kryptoracemic compounds (or kryptoracemates), in which the crystal itself has handedness (is enantiomorphic), despite containing both enantiomorphs in a 1:1 ratio. Pseudoracemate (sometimes racemic solid solution) When there is no big difference in affinity between the same and opposite enantiomers, then in contrast to the racemic compound and the conglomerate, the two enantiomers will coexist in an unordered manner in the crystal lattice. Addition of a small amount of one enantiomer changes the melting point slightly or not at all. Quasiracemate A quasiracemate is a co-crystal of two similar but distinct compounds, one of which is left-handed and the other right-handed. Although chemically different, they are sterically similar (isosteric) and are still able to form a racemic crystalline phase. One of the first such racemates studied, by Pasteur in 1853, forms from a 1:2 mixture of the bis ammonium salt of (+)-tartaric acid and the bis ammonium salt of (−)-malic acid in water. Re-investigated in 2008, the crystals formed are dumbbell-shaped, with the central part consisting of ammonium (+)-bitartrate, whereas the outer parts are a quasiracemic mixture of ammonium (+)-bitartrate and ammonium (−)-bimalate. Resolution The separation of a racemate into its components, the individual enantiomers, is called a chiral resolution. Various methods exist for this separation, including crystallization, chromatography, and the use of various reagents. Synthesis Without a chiral influence (for example a chiral catalyst, solvent or starting material), a chemical reaction that makes a chiral product will always yield a racemate. That can make the synthesis of a racemate cheaper and easier than making the pure enantiomer, because it does not require special conditions. This fact also leads to the question of how biological homochirality evolved on what is presumed to be a racemic primordial Earth. The reagents of, and the reactions that produce, racemic mixtures are said to be "not stereospecific" or "not stereoselective", for their indecision in a particular stereoisomerism. A frequent scenario is that of a planar species (such as an sp2 carbon atom or a carbocation intermediate) acting as an electrophile. The nucleophile will have a 50% probability of 'hitting' either of the two sides of the planar grouping, thus producing a racemic mixture. Racemic pharmaceuticals Some drug molecules are chiral, and the enantiomers have different effects on biological entities. They can be sold as one enantiomer or as a racemic mixture. Examples include thalidomide, ibuprofen, cetirizine and salbutamol. A well-known drug that has different effects depending on its ratio of enantiomers is amphetamine. Adderall is an unequal mixture of both amphetamine enantiomers. A single Adderall dose combines the neutral sulfate salts of dextroamphetamine and amphetamine, with the dextro isomer of amphetamine saccharate and D/L-amphetamine aspartate monohydrate. 
The original Benzedrine was a racemic mixture, and isolated dextroamphetamine was later introduced to the market as Dexedrine. The prescription analgesic tramadol is also a racemate. In some cases (e.g., ibuprofen and thalidomide), the enantiomers interconvert or racemize in vivo. This means that preparing a pure enantiomer for medication is largely pointless. However, sometimes samples containing pure enantiomers may be made and sold at a higher cost in cases where the use requires specifically one isomer (e.g., for a stereospecific reagent); compare omeprazole and esomeprazole. Moving from a racemic drug to a chiral-specific drug may be done for a better safety profile or an improved therapeutic index. This process is called chiral switching and the resulting enantiopure drug is called a chiral switch. As examples, esomeprazole is a chiral switch of (±)-omeprazole and levocetirizine is a chiral switch of (±)-cetirizine. While often only one enantiomer of the drug may be active, there are cases in which the other enantiomer is harmful, as with salbutamol and thalidomide. The (R) enantiomer of thalidomide is effective against morning sickness, while the (S) enantiomer is teratogenic, causing birth defects. Since the drug racemizes, the drug cannot be considered safe for use by women of child-bearing age, and its use is tightly controlled when used to treat other illnesses. Methamphetamine is available by prescription under the brand name Desoxyn. The active component of Desoxyn is dextromethamphetamine hydrochloride. This is the right-handed isomer of methamphetamine. The left-handed isomer of methamphetamine, levomethamphetamine, is an OTC drug that is less centrally acting and more peripherally acting. Methedrine during the 20th century was a 50:50 racemic mixture of both methamphetamine isomers (levo and dextro). Wallach's rule Wallach's rule (first proposed by Otto Wallach) states that racemic crystals tend to be denser than their chiral counterparts. This rule has been substantiated by crystallographic database analysis.
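The cancellation of optical rotation described under Properties can be made concrete with a small calculation. The following is a minimal sketch in Python, not taken from the article; the specific-rotation value and the mixture compositions are arbitrary illustrative numbers.

# Minimal sketch: the observed rotation of a mixture is the pure-enantiomer
# specific rotation weighted by the enantiomeric excess of the sample.
def observed_rotation(specific_rotation, fraction_dextro):
    fraction_levo = 1.0 - fraction_dextro
    enantiomeric_excess = fraction_dextro - fraction_levo  # ranges from -1 to +1
    return specific_rotation * enantiomeric_excess

print(observed_rotation(+66.0, 0.50))  # racemate (1:1): 0.0, optically inactive
print(observed_rotation(+66.0, 0.75))  # 75:25 mixture: half the pure-enantiomer rotation

For a 1:1 racemate the enantiomeric excess is zero, so the observed rotation vanishes even though each enantiomer on its own is strongly optically active.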
Physical sciences
Stereochemistry
Chemistry
75110
https://en.wikipedia.org/wiki/Biot%E2%80%93Savart%20law
Biot–Savart law
In physics, specifically electromagnetism, the Biot–Savart law is an equation describing the magnetic field generated by a constant electric current. It relates the magnetic field to the magnitude, direction, length, and proximity of the electric current. The Biot–Savart law is fundamental to magnetostatics. It is valid in the magnetostatic approximation and consistent with both Ampère's circuital law and Gauss's law for magnetism. When magnetostatics does not apply, the Biot–Savart law should be replaced by Jefimenko's equations. The law is named after Jean-Baptiste Biot and Félix Savart, who discovered this relationship in 1820. Equation In the following equations, it is assumed that the medium is not magnetic (e.g., vacuum). This allows for straightforward derivation of the magnetic field B, while the fundamental vector here is H. Electric currents (along a closed curve/wire) The Biot–Savart law is used for computing the resultant magnetic flux density B at position r in 3D-space generated by a filamentary current I (for example due to a wire). A steady (or stationary) current is a continual flow of charges which does not change with time and in which charge neither accumulates nor depletes at any point. The law is a physical example of a line integral, being evaluated over the path C in which the electric currents flow (e.g. the wire). The equation in SI units (teslas, T) is B(r) = (μ0 / 4π) ∫C [I dℓ × r'] / |r'|^3, where dℓ is a vector along the path C whose magnitude is the length of the differential element of the wire in the direction of conventional current, ℓ is a point on path C, r' = r − ℓ is the full displacement vector from the wire element (dℓ at point ℓ) to the point at which the field is being computed (r), and μ0 is the magnetic constant. Alternatively: B(r) = (μ0 / 4π) ∫C [I dℓ × r̂'] / |r'|^2, where r̂' is the unit vector of r'. The symbols in boldface denote vector quantities. The integral is usually around a closed curve, since stationary electric currents can only flow around closed paths when they are bounded. However, the law also applies to infinitely long wires (this concept was used in the definition of the SI unit of electric current—the ampere—until 20 May 2019). To apply the equation, the point in space where the magnetic field is to be calculated is arbitrarily chosen (r). Holding that point fixed, the line integral over the path of the electric current is calculated to find the total magnetic field at that point. The application of this law implicitly relies on the superposition principle for magnetic fields, i.e. the fact that the magnetic field is a vector sum of the field created by each infinitesimal section of the wire individually. For example, consider the magnetic field of a loop of radius R carrying a current I. For a point at a distance x along the center line of the loop, the magnetic field vector at that point is B = [μ0 I R^2 / (2(x^2 + R^2)^(3/2))] x̂, where x̂ is the unit vector along the center-line of the loop (and the loop is taken to be centered at the origin). Loops such as the one described appear in devices like the Helmholtz coil, the solenoid, and the Magsail spacecraft propulsion system. Calculation of the magnetic field at points off the center line requires more complex mathematics involving elliptic integrals that require numerical solution or approximations. Electric current density (throughout conductor volume) The formulations given above work well when the current can be approximated as running through an infinitely-narrow wire. 
If the conductor has some thickness, the proper formulation of the Biot–Savart law (again in SI units) is B(r) = (μ0 / 4π) ∫V [J dV × r'] / |r'|^3, where r' is the vector from dV to the observation point r, dV is the volume element, and J is the current density vector in that volume (in SI in units of A/m2). In terms of the unit vector r̂' = r'/|r'|, this becomes B(r) = (μ0 / 4π) ∫V [J × r̂'] dV / |r'|^2. Constant uniform current In the special case of a uniform constant current I, the magnetic field is B(r) = (μ0 I / 4π) ∫C dℓ × r' / |r'|^3, i.e., the current can be taken out of the integral. Point charge at constant velocity In the case of a point charged particle q moving at a constant velocity v, Maxwell's equations give the following expression for the electric field and magnetic field: E = (q / 4πε0) (r̂' / |r'|^2) (1 − v^2/c^2) / (1 − (v^2/c^2) sin^2 θ)^(3/2) and B = (v × E) / c^2, where r̂' is the unit vector pointing from the current (non-retarded) position of the particle to the point at which the field is being measured, v/c is the speed in units of c, and θ is the angle between v and r'. Alternatively, these can be derived by considering Lorentz transformation of Coulomb's force (in four-force form) in the source charge's inertial frame. When v is much smaller than c, the electric field and magnetic field can be approximated as E ≈ (q / 4πε0) r̂' / |r'|^2 and B ≈ (μ0 q / 4π) v × r̂' / |r'|^2. These equations were first derived by Oliver Heaviside in 1888. Some authors call the above equation for B the "Biot–Savart law for a point charge" due to its close resemblance to the standard Biot–Savart law. However, this language is misleading as the Biot–Savart law applies only to steady currents and a point charge moving in space does not constitute a steady current. Magnetic responses applications The Biot–Savart law can be used in the calculation of magnetic responses even at the atomic or molecular level, e.g. chemical shieldings or magnetic susceptibilities, provided that the current density can be obtained from a quantum mechanical calculation or theory. Aerodynamics applications The Biot–Savart law is also used in aerodynamic theory to calculate the velocity induced by vortex lines. In the aerodynamic application, the roles of vorticity and current are reversed in comparison to the magnetic application. In Maxwell's 1861 paper 'On Physical Lines of Force', magnetic field strength H was directly equated with pure vorticity (spin), whereas B was a weighted vorticity that was weighted for the density of the vortex sea. Maxwell considered magnetic permeability μ to be a measure of the density of the vortex sea. Hence the relationship, magnetic induction current B = μH, was essentially a rotational analogy to the linear electric current relationship, electric convection current J = ρv, where ρ is electric charge density. B was seen as a kind of magnetic current of vortices aligned in their axial planes, with H being the circumferential velocity of the vortices. The electric current equation can be viewed as a convective current of electric charge that involves linear motion. By analogy, the magnetic equation is an inductive current involving spin. There is no linear motion in the inductive current along the direction of the B vector. The magnetic inductive current represents lines of force. In particular, it represents lines of inverse square law force. In aerodynamics the induced air currents form solenoidal rings around a vortex axis. Analogy can be made that the vortex axis is playing the role that electric current plays in magnetism. This puts the air currents of aerodynamics (fluid velocity field) into the equivalent role of the magnetic induction vector B in electromagnetism. 
In electromagnetism the B lines form solenoidal rings around the source electric current, whereas in aerodynamics, the air currents (velocity) form solenoidal rings around the source vortex axis. Hence in electromagnetism, the vortex plays the role of 'effect' whereas in aerodynamics, the vortex plays the role of 'cause'. Yet when we look at the B lines in isolation, we see exactly the aerodynamic scenario insomuch as B is the vortex axis and H is the circumferential velocity as in Maxwell's 1861 paper. In two dimensions, for a vortex line of infinite length, the induced velocity at a point is given by where is the strength of the vortex and r is the perpendicular distance between the point and the vortex line. This is similar to the magnetic field produced on a plane by an infinitely long straight thin wire normal to the plane. This is a limiting case of the formula for vortex segments of finite length (similar to a finite wire): where A and B are the (signed) angles between the point and the two ends of the segment. The Biot–Savart law, Ampère's circuital law, and Gauss's law for magnetism In a magnetostatic situation, the magnetic field B as calculated from the Biot–Savart law will always satisfy Gauss's law for magnetism and Ampère's circuital law: In a non-magnetostatic situation, the Biot–Savart law ceases to be true (it is superseded by Jefimenko's equations), while Gauss's law for magnetism and the Maxwell–Ampère law are still true. Theoretical background Initially, the Biot–Savart law was discovered experimentally, then this law was derived in different ways theoretically. In The Feynman Lectures on Physics, at first, the similarity of expressions for the electric potential outside the static distribution of charges and the magnetic vector potential outside the system of continuously distributed currents is emphasized, and then the magnetic field is calculated through the curl from the vector potential. Another approach involves a general solution of the inhomogeneous wave equation for the vector potential in the case of constant currents. The magnetic field can also be calculated as a consequence of the Lorentz transformations for the electromagnetic force acting from one charged particle on another particle. Two other ways of deriving the Biot–Savart law include: 1) Lorentz transformation of the electromagnetic tensor components from a moving frame of reference, where there is only an electric field of some distribution of charges, into a stationary frame of reference, in which these charges move. 2) the use of the method of retarded potentials.
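The induced-velocity formulas above for vortex lines mirror the Biot–Savart kernel exactly. Because the precise angle convention of the finite-segment formula is not reproduced in the text, the hedged sketch below (with invented values) simply integrates the kernel numerically along a finite straight filament and checks that a very long filament approaches the two-dimensional limit Γ / (2πd).

import numpy as np

def induced_velocity_segment(gamma, p1, p2, point, n=20000):
    """Velocity induced at `point` by a straight vortex filament of strength
    `gamma` running from p1 to p2, by numerically integrating the
    Biot-Savart-type kernel dV = gamma/(4*pi) * dl x r / |r|^3.
    Illustrative sketch only; a practical code would use the closed form."""
    p1, p2, point = map(np.asarray, (p1, p2, point))
    ts = (np.arange(n) + 0.5) / n                 # midpoints of n sub-segments
    pts = p1 + np.outer(ts, p2 - p1)              # positions along the filament
    dl = np.tile((p2 - p1) / n, (n, 1))           # filament length elements
    r = point - pts                               # element -> evaluation point
    rmag = np.linalg.norm(r, axis=1, keepdims=True)
    dv = gamma / (4 * np.pi) * np.cross(dl, r) / rmag**3
    return dv.sum(axis=0)

if __name__ == "__main__":
    gamma, d = 2.0, 0.5                           # vortex strength and offset (made-up values)
    # A very long segment should approach the 2-D infinite-line result gamma/(2*pi*d).
    v = induced_velocity_segment(gamma, [-500, 0, 0], [500, 0, 0], [0, d, 0])
    print("numeric |v| =", np.linalg.norm(v))
    print("2-D limit    =", gamma / (2 * np.pi * d))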
Physical sciences
Magnetostatics
Physics
75367
https://en.wikipedia.org/wiki/Non-Newtonian%20fluid
Non-Newtonian fluid
In physics and chemistry, a non-Newtonian fluid is a fluid that does not follow Newton's law of viscosity, that is, it has variable viscosity dependent on stress. In particular, the viscosity of non-Newtonian fluids can change when subjected to force. Ketchup, for example, becomes runnier when shaken and is thus a non-Newtonian fluid. Many salt solutions and molten polymers are non-Newtonian fluids, as are many commonly found substances such as custard, toothpaste, starch suspensions, corn starch, paint, blood, melted butter and shampoo. Most commonly, the viscosity (the resistance to gradual deformation by shear or tensile stresses) of non-Newtonian fluids is dependent on shear rate or shear rate history. Some non-Newtonian fluids with shear-independent viscosity, however, still exhibit normal stress-differences or other non-Newtonian behavior. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different. The fluid can even exhibit time-dependent viscosity. Therefore, a constant coefficient of viscosity cannot be defined. Although the concept of viscosity is commonly used in fluid mechanics to characterize the shear properties of a fluid, it can be inadequate to describe non-Newtonian fluids. They are best studied through several other rheological properties that relate stress and strain rate tensors under many different flow conditions—such as oscillatory shear or extensional flow—which are measured using different devices or rheometers. The properties are better studied using tensor-valued constitutive equations, which are common in the field of continuum mechanics. Regarding the viscosity of non-Newtonian fluids, there are pseudoplastic, plastic, and dilatant flows that are time-independent, and there are thixotropic and rheopectic flows that are time-dependent. Three well-known time-dependent non-Newtonian fluids, identified by their defining authors, are the Oldroyd-B model, Walters' Liquid B and Williamson fluids. A time-dependent self-similar analysis of the Ladyzenskaya-type model with a non-linear velocity-dependent stress tensor has been performed; unfortunately, no analytical solutions could be derived, but a rigorous mathematical existence theorem was given for the solution. For time-independent non-Newtonian fluids, the range of known analytic solutions is much broader. Types of non-Newtonian behavior Summary Shear thickening fluid The viscosity of a shear thickening (i.e., dilatant) fluid appears to increase when the shear rate increases. Corn starch suspended in water ("oobleck", see below) is a common example: when stirred slowly it looks milky, when stirred vigorously it feels like a very viscous liquid. Shear thinning fluid A familiar example of the opposite, a shear thinning fluid, or pseudoplastic fluid, is wall paint: the paint should flow readily off the brush when it is being applied to a surface but not drip excessively. Note that all thixotropic fluids are extremely shear thinning, but they are significantly time dependent, whereas the colloidal "shear thinning" fluids respond instantaneously to changes in shear rate. Thus, to avoid confusion, the latter classification is more clearly termed pseudoplastic. Another example of a shear thinning fluid is blood. This property is highly favoured within the body, as it allows the viscosity of blood to decrease with increased shear strain rate.
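The shear-thinning and shear-thickening behaviours just described are often summarized with the empirical power-law (Ostwald–de Waele) model, τ = K γ̇ⁿ, whose apparent viscosity is η = K γ̇ⁿ⁻¹; n < 1 gives shear thinning, n > 1 shear thickening, and n = 1 recovers a Newtonian fluid. That model is not named in this article, so the short Python sketch below, with made-up parameter values, is offered only as a common illustration.

import numpy as np

def apparent_viscosity(shear_rate, K, n):
    """Apparent viscosity of a power-law (Ostwald-de Waele) fluid:
    tau = K * gamma_dot**n  =>  eta = tau / gamma_dot = K * gamma_dot**(n - 1).
    n < 1: shear thinning, n > 1: shear thickening, n = 1: Newtonian."""
    return K * shear_rate ** (n - 1)

if __name__ == "__main__":
    rates = np.array([0.1, 1.0, 10.0, 100.0])     # shear rates, 1/s
    # Consistency K and index n below are illustrative, not measured values.
    for label, K, n in [("shear thinning (paint-like)", 10.0, 0.5),
                        ("Newtonian", 1.0e-3, 1.0),
                        ("shear thickening (oobleck-like)", 1.0, 1.5)]:
        print(label, apparent_viscosity(rates, K, n))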
Bingham plastic Fluids that have a linear shear stress/shear strain relationship but require a finite yield stress before they begin to flow (the plot of shear stress against shear strain does not pass through the origin) are called Bingham plastics. Several examples are clay suspensions, drilling mud, toothpaste, mayonnaise, chocolate, and mustard. The surface of a Bingham plastic can hold peaks when it is still. By contrast Newtonian fluids have flat featureless surfaces when still. Rheopectic or anti-thixotropic There are also fluids whose strain rate is a function of time. Fluids that require a gradually increasing shear stress to maintain a constant strain rate are referred to as rheopectic. An opposite case of this is a fluid that thins out with time and requires a decreasing stress to maintain a constant strain rate (thixotropic). Examples Many common substances exhibit non-Newtonian flows. These include: Soap solutions, cosmetics, and toothpaste Food such as butter, cheese, jam, mayonnaise, soup, taffy, and yogurt Natural substances such as magma, lava, gums, honey, and extracts such as vanilla extract Biological fluids such as blood, saliva, semen, mucus, and synovial fluid Slurries such as cement slurry and paper pulp, emulsions such as mayonnaise, and some kinds of dispersions Oobleck An inexpensive, non-toxic example of a non-Newtonian fluid is a suspension of starch (e.g., cornstarch/cornflour) in water, sometimes called "oobleck", "ooze", or "magic mud" (1 part of water to 1.5–2 parts of corn starch). The name "oobleck" is derived from the Dr. Seuss book Bartholomew and the Oobleck. Because of its dilatant properties, oobleck is often used in demonstrations that exhibit its unusual behavior. A person may walk on a large tub of oobleck without sinking due to its shear thickening properties, as long as the individual moves quickly enough to provide enough force with each step to cause the thickening. Also, if oobleck is placed on a large subwoofer driven at a sufficiently high volume, it will thicken and form standing waves in response to low frequency sound waves from the speaker. If a person were to punch or hit oobleck, it would thicken and act like a solid. After the blow, the oobleck will go back to its thin liquid-like state. Flubber (slime) Flubber, also commonly known as slime, is a non-Newtonian fluid, easily made from polyvinyl alcohol–based glues (such as white "school" glue) and borax. It flows under low stresses but breaks under higher stresses and pressures. This combination of fluid-like and solid-like properties makes it a Maxwell fluid. Its behaviour can also be described as being viscoplastic or gelatinous. Chilled caramel topping Another example of non-Newtonian fluid flow is chilled caramel ice cream topping (so long as it incorporates hydrocolloids such as carrageenan and gellan gum). The sudden application of force—by stabbing the surface with a finger, for example, or rapidly inverting the container holding it—causes the fluid to behave like a solid rather than a liquid. This is the "shear thickening" property of this non-Newtonian fluid. More gentle treatment, such as slowly inserting a spoon, will leave it in its liquid state. Trying to jerk the spoon back out again, however, will trigger the return of the temporary solid state. Silly Putty Silly Putty is a silicone polymer based suspension that will flow, bounce, or break, depending on strain rate. Plant resin Plant resin is a viscoelastic solid polymer. 
When left in a container, it will flow slowly as a liquid to conform to the contours of its container. If struck with greater force, however, it will shatter as a solid. Quicksand Quicksand is a shear thinning non-Newtonian colloid that gains viscosity at rest. Quicksand's non-Newtonian properties can be observed when it experiences a slight shock (for example, when someone walks on it or agitates it with a stick), shifting between its gel and sol phase and seemingly liquefying, causing objects on the surface of the quicksand to sink. Ketchup Ketchup is a shear thinning fluid. Shear thinning means that the fluid viscosity decreases with increasing shear stress. In other words, fluid motion is initially difficult at slow rates of deformation, but will flow more freely at high rates. Shaking an inverted bottle of ketchup can cause it to transition to a lower viscosity through shear thinning, making it easier to pour from the bottle. Dry granular flows Under certain circumstances, flows of granular materials can be modelled as a continuum, for example using the μ(I) rheology. Such continuum models tend to be non-Newtonian, since the apparent viscosity of granular flows increases with pressure and decreases with shear rate. The main difference is the shearing stress and rate of shear.
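Relating to the Bingham plastic description above, the following minimal sketch (with invented parameter values) illustrates the Bingham constitutive relation: no flow below the yield stress, and a linear stress–strain-rate relation governed by the plastic viscosity above it.

def bingham_shear_rate(tau, tau_yield, mu_plastic):
    """Shear rate of a Bingham plastic under shear stress `tau`:
    gamma_dot = 0                            for tau <= tau_yield (no flow)
    gamma_dot = (tau - tau_yield)/mu_plastic for tau >  tau_yield
    Parameter values used below are purely illustrative."""
    if tau <= tau_yield:
        return 0.0
    return (tau - tau_yield) / mu_plastic

if __name__ == "__main__":
    TAU_YIELD = 5.0      # Pa, hypothetical yield stress (a toothpaste-like paste)
    MU_PLASTIC = 0.2     # Pa*s, hypothetical plastic viscosity
    for tau in [1.0, 5.0, 10.0, 50.0]:
        rate = bingham_shear_rate(tau, TAU_YIELD, MU_PLASTIC)
        print(f"stress {tau:5.1f} Pa -> shear rate {rate:7.2f} 1/s")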
Physical sciences
Fluid mechanics
Physics
75485
https://en.wikipedia.org/wiki/Electrical%20discharge%20machining
Electrical discharge machining
Electrical discharge machining (EDM), also known as spark machining, spark eroding, die sinking, wire burning or wire erosion, is a metal fabrication process whereby a desired shape is obtained by using electrical discharges (sparks). Material is removed from the work piece by a series of rapidly recurring current discharges between two electrodes, separated by a dielectric liquid and subject to an electric voltage. One of the electrodes is called the tool-electrode, or simply the or , while the other is called the workpiece-electrode, or . The process depends upon the tool and work piece not making physical contact. Extremely hard materials like carbides, ceramics, titanium alloys and heat treated tool steels that are very difficult to machine using conventional machining can be precisely machined by EDM. When the voltage between the two electrodes is increased, the intensity of the electric field in the volume between the electrodes becomes greater, causing dielectric break down of the liquid, and produces an electric arc. As a result, material is removed from the electrodes. Once the current stops (or is stopped, depending on the type of generator), new liquid dielectric is conveyed into the inter-electrode volume, enabling the solid particles (debris) to be carried away and the insulating properties of the dielectric to be restored. Adding new liquid dielectric in the inter-electrode volume is commonly referred to as . After a current flow, the voltage between the electrodes is restored to what it was before the breakdown, so that a new liquid dielectric breakdown can occur to repeat the cycle. History The erosive effect of electrical discharges was first noted in 1770 by English physicist Joseph Priestley. Die-sink EDM Two Soviet scientists, B. R. Lazarenko and N. I. Lazarenko, were tasked in 1943 to investigate ways of preventing the erosion of tungsten electrical contacts due to sparking. They failed in this task but found that the erosion was more precisely controlled if the electrodes were immersed in a dielectric fluid. This led them to invent an EDM machine used for working difficult-to-machine materials such as tungsten. The Lazarenkos' machine is known as an R-C-type machine, after the resistor–capacitor circuit (RC circuit) used to charge the electrodes. Simultaneously but independently, an American team, Harold Stark, Victor Harding, and Jack Beaver, developed an EDM machine for removing broken drills and taps from aluminium castings. Initially constructing their machines from under-powered electric-etching tools, they were not very successful. But more powerful sparking units, combined with automatic spark repetition and fluid replacement with an electromagnetic interrupter arrangement produced practical machines. Stark, Harding, and Beaver's machines produced 60 sparks per second. Later machines based on their design used vacuum tube circuits that produced thousands of sparks per second, significantly increasing the speed of cutting. Wire-cut EDM The wire-cut type of machine arose in the 1960s for making tools (dies) from hardened steel. The tool electrode in wire EDM is simply a wire. To avoid the erosion of the wire causing it to break, the wire is wound between two spools so that the active part of the wire is constantly changing. The earliest numerical controlled (NC) machines were conversions of punched-tape vertical milling machines. The first commercially available NC machine built as a wire-cut EDM machine was manufactured in the USSR in 1967. 
Machines that could optically follow lines on a master drawing were developed by David H. Dulebohn's group in the 1960s at Andrew Engineering Company for milling and grinding machines. Master drawings were later produced by computer numerical controlled (CNC) plotters for greater accuracy. A wire-cut EDM machine using the CNC drawing plotter and optical line follower techniques was produced in 1974. Dulebohn later used the same plotter CNC program to directly control the EDM machine, and the first CNC EDM machine was produced in 1976. Commercial wire EDM capability and use has advanced substantially during recent decades. Feed rates have increased and surface finish can be finely controlled. Generalities Electrical discharge machining is a machining method primarily used for hard metals or those that would be very difficult to machine with traditional techniques. EDM typically works with materials that are electrically conductive, although methods have also been proposed for using EDM to machine insulating ceramics. EDM can cut intricate contours or cavities in pre-hardened steel without the need for heat treatment to soften and re-harden them. This method can be used with any other metal or metal alloy such as titanium, hastelloy, kovar, and inconel. Also, applications of this process to shape polycrystalline diamond tools have been reported. EDM is often included in the "non-traditional" or "non-conventional" group of machining methods together with processes such as electrochemical machining (ECM), water jet cutting (WJ, AWJ), laser cutting, and opposite to the "conventional" group (turning, milling, grinding, drilling, and any other process whose material removal mechanism is essentially based on mechanical forces). Ideally, EDM can be seen as a series of breakdown and restoration of the liquid dielectric in-between the electrodes. However, caution should be exerted in considering such a statement because it is an idealized model of the process, introduced to describe the fundamental ideas underlying the process. Yet, any practical application involves many aspects that may also need to be considered. For instance, the removal of the debris from the inter-electrode volume is likely to be always partial. Thus the electrical properties of the dielectric in the inter-electrodes volume can be different from their nominal values and can even vary with time. The inter-electrode distance, often also referred to as spark-gap, is the result of the control algorithms of the specific machine used. The control of such a distance appears logically to be central to this process. Also, not all of the current between the dielectric is of the ideal type described above: the spark-gap can be short-circuited by the debris. The control system of the electrode may fail to react quickly enough to prevent the two electrodes (tool and workpiece) from coming into contact, with a consequent short circuit. This is unwanted because a short circuit contributes to material removal differently from the ideal case. The flushing action can be inadequate to restore the insulating properties of the dielectric so that the current always happens in the point of the inter-electrode volume (this is referred to as arcing), with a consequent unwanted change of shape (damage) of the tool-electrode and workpiece. Ultimately, a description of this process in a suitable way for the specific purpose at hand is what makes the EDM area such a rich field for further investigation and research. 
To obtain a specific geometry, the EDM tool is guided along the desired path very close to the work; ideally it should not touch the workpiece, although in reality this may happen due to the performance of the specific motion control in use. In this way, a large number of current discharges (colloquially also called sparks) happen, each contributing to the removal of material from both tool and workpiece, where small craters are formed. The size of the craters is a function of the technological parameters set for the specific job at hand. They can be with typical dimensions ranging from the nanoscale (in micro-EDM operations) to some hundreds of micrometers in roughing conditions. The presence of these small craters on the tool results in the gradual erosion of the electrode. This erosion of the tool-electrode is also referred to as wear. Strategies are needed to counteract the detrimental effect of the wear on the geometry of the workpiece. One possibility is that of continuously replacing the tool-electrode during a machining operation. This is what happens if a continuously replaced wire is used as electrode. In this case, the correspondent EDM process is also called wire EDM. The tool-electrode can also be used in such a way that only a small portion of it is actually engaged in the machining process and this portion is changed on a regular basis. This is, for instance, the case when using a rotating disk as a tool-electrode. The corresponding process is often also referred to as EDM grinding. A further strategy consists in using a set of electrodes with different sizes and shapes during the same EDM operation. This is often referred to as multiple electrode strategy, and is most common when the tool electrode replicates in negative the wanted shape and is advanced towards the blank along a single direction, usually the vertical direction (i.e. z-axis). This resembles the sink of the tool into the dielectric liquid in which the workpiece is immersed, so, not surprisingly, it is often referred to as die-sinking EDM (also called conventional EDM and ram EDM). The corresponding machines are often called sinker EDM. Usually, the electrodes of this type have quite complex forms. If the final geometry is obtained using a usually simple-shaped electrode which is moved along several directions and is possibly also subject to rotations, often the term EDM milling is used. In any case, the severity of the wear is strictly dependent on the technological parameters used in the operation (for instance: polarity, maximum current, open circuit voltage). For example, in micro-EDM, also known as μ-EDM, these parameters are usually set at values which generates severe wear. Therefore, wear is a major problem in that area. The problem of wear to graphite electrodes is being addressed. In one approach, a digital generator, controllable within milliseconds, reverses polarity as electro-erosion takes place. That produces an effect similar to electroplating that continuously deposits the eroded graphite back on the electrode. In another method, a so-called "Zero Wear" circuit reduces how often the discharge starts and stops, keeping it on for as long a time as possible. Definition of the technological parameters Difficulties have been encountered in the definition of the technological parameters that drive the process. 
Two broad categories of generators, also known as power supplies, are in use on EDM machines commercially available: the group based on RC circuits and the group based on transistor-controlled pulses. In both categories, the primary parameters at setup are the current and frequency delivered. In RC circuits, however, little control is expected over the time duration of the discharge, which is likely to depend on the actual spark-gap conditions (size and pollution) at the moment of the discharge. Also, the open circuit voltage (i.e. the voltage between the electrodes when the dielectric is not yet broken) can be identified as steady state voltage of the RC circuit. In generators based on transistor control, the user is usually able to deliver a train of pulses of voltage to the electrodes. Each pulse can be controlled in shape, for instance, quasi-rectangular. In particular, the time between two consecutive pulses and the duration of each pulse can be set. The amplitude of each pulse constitutes the open circuit voltage. Thus, the maximum duration of discharge is equal to the duration of a pulse of voltage in the train. Two pulses of current are then expected not to occur for a duration equal or larger than the time interval between two consecutive pulses of voltage. The maximum current during a discharge that the generator delivers can also be controlled. Because other sorts of generators may also be used by different machine builders, the parameters that may actually be set on a particular machine will depend on the generator manufacturer. The details of the generators and control systems on their machines are not always easily available to their user. This is a barrier to describing unequivocally the technological parameters of the EDM process. Moreover, the parameters affecting the phenomena occurring between tool and electrode are also related to the controller of the motion of the electrodes. A framework to define and measure the electrical parameters during an EDM operation directly on inter-electrode volume with an oscilloscope external to the machine has been recently proposed by Ferri et al. These authors conducted their research in the field of μ-EDM, but the same approach can be used in any EDM operation. This would enable the user to estimate directly the electrical parameters that affect their operations without relying upon machine manufacturer's claims. When machining different materials in the same setup conditions, the actual electrical parameters of the process are significantly different. Material removal mechanism The first serious attempt at providing a physical explanation of the material removal during electric discharge machining is perhaps that of Van Dijck. Van Dijck presented a thermal model together with a computational simulation to explain the phenomena between the electrodes during electric discharge machining. However, as Van Dijck himself admitted in his study, the number of assumptions made to overcome the lack of experimental data at that time was quite significant. Further models of what occurs during electric discharge machining in terms of heat transfer were developed in the late eighties and early nineties. It resulted in three scholarly papers: the first presenting a thermal model of material removal on the cathode, the second presenting a thermal model for the erosion occurring on the anode and the third introducing a model describing the plasma channel formed during the passage of the discharge current through the dielectric liquid. 
Validation of these models is supported by experimental data provided by AGIE. These models give the most authoritative support for the claim that EDM is a thermal process, removing material from the two electrodes because of melting or vaporization, along with pressure dynamics established in the spark-gap by the collapsing of the plasma channel. However, for small discharge energies the models are inadequate to explain the experimental data. All these models hinge on a number of assumptions from such disparate research areas as submarine explosions, discharges in gases, and failure of transformers, so it is not surprising that alternative models have been proposed more recently in the literature trying to explain the EDM process. Among these, the model from Singh and Ghosh reconnects the removal of material from the electrode to the presence of an electrical force on the surface of the electrode that could mechanically remove material and create the craters. This would be possible because the material on the surface has altered mechanical properties due to an increased temperature caused by the passage of electric current. The authors' simulations showed how they might explain EDM better than a thermal model (melting or evaporation), especially for small discharge energies, which are typically used in μ-EDM and in finishing operations. Given the many available models, it appears that the material removal mechanism in EDM is not yet well understood and that further investigation is necessary to clarify it, especially considering the lack of experimental scientific evidence to build and validate the current EDM models. This explains an increased current research effort in related experimental techniques. Types Sinker EDM Sinker EDM, also called ram EDM, cavity type EDM or volume EDM, consists of an electrode and workpiece submerged in an insulating liquid such as, more typically, oil or, less frequently, other dielectric fluids. The electrode and workpiece are connected to a suitable power supply. The power supply generates an electrical potential between the two parts. As the electrode approaches the workpiece, dielectric breakdown occurs in the fluid, forming a plasma channel, and a small spark jumps. These sparks usually strike one at a time, because it is very unlikely that different locations in the inter-electrode space have the identical local electrical characteristics which would enable a spark to occur simultaneously in all such locations. These sparks happen in huge numbers at seemingly random locations between the electrode and the workpiece. As the base metal is eroded, and the spark gap subsequently increased, the electrode is lowered automatically by the machine so that the process can continue uninterrupted. Several hundred thousand sparks occur per second, with the actual duty cycle carefully controlled by the setup parameters. These controlling cycles are sometimes known as "on time" and "off time", which are more formally defined in the literature. The on time setting determines the length or duration of the spark. Hence, a longer on time produces a deeper cavity from each spark, creating a rougher finish on the workpiece. The reverse is true for a shorter on time. Off time is the period of time between sparks. Although not directly affecting the machining of the part, the off time allows the flushing of dielectric fluid through a nozzle to clean out the eroded debris. 
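To make the "on time" and "off time" terminology concrete, here is a small, hedged bookkeeping sketch. All numerical values are invented for illustration and are not machine settings from this article; the energy estimate is deliberately crude.

def edm_pulse_summary(on_time_us, off_time_us, gap_voltage_v, peak_current_a):
    """Rough bookkeeping for a transistor-controlled pulse train.
    All inputs are hypothetical example settings, not recommendations.
    Energy per discharge is estimated crudely as V * I * t_on, ignoring
    ignition delay and the actual current/voltage waveforms."""
    period_us = on_time_us + off_time_us
    duty_cycle = on_time_us / period_us
    frequency_hz = 1e6 / period_us
    energy_per_spark_j = gap_voltage_v * peak_current_a * on_time_us * 1e-6
    return duty_cycle, frequency_hz, energy_per_spark_j

if __name__ == "__main__":
    # Hypothetical roughing-type settings: 100 us on, 50 us off, 25 V gap, 12 A peak.
    duty, freq, energy = edm_pulse_summary(100.0, 50.0, 25.0, 12.0)
    print(f"duty cycle       : {duty:.2%}")
    print(f"spark frequency  : {freq:.0f} Hz")
    print(f"energy per spark : {energy * 1e3:.1f} mJ (very rough estimate)")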
Insufficient debris removal can cause repeated strikes in the same location which can lead to a short circuit. Modern controllers monitor the characteristics of the arcs and can alter parameters in microseconds to compensate. The typical part geometry is a complex 3D shape, often with small or odd shaped angles. Vertical, orbital, vectorial, directional, helical, conical, rotational, spin, and indexing machining cycles are also used. Wire EDM In wire electrical discharge machining (WEDM), also known as wire-cut EDM and wire cutting, a thin single-strand metal wire, usually brass, is fed through the workpiece, submerged in a tank of dielectric fluid, typically deionized water. Wire-cut EDM is typically used to cut plates as thick as 300mm and to make punches, tools, and dies from hard metals that are difficult to machine with other methods. The wire, which is constantly fed from a spool, is held between upper and lower diamond guides which is centered in a water nozzle head. The guides, usually CNC-controlled, move in the x–y plane. On most machines, the upper guide can also move independently in the z–u–v axis, giving rise to the ability to cut tapered and transitioning shapes (circle on the bottom, square at the top for example). The upper guide can control axis movements in the GCode standard, x–y–u–v–i–j–k–l–. This allows the wire-cut EDM to be programmed to cut very intricate and delicate shapes. The upper and lower diamond guides are usually accurate to , and can have a cutting path or kerf as small as using Ø wire, though the average cutting kerf that achieves the best economic cost and machining time is using Ø brass wire. The reason that the cutting width is greater than the width of the wire is because sparking occurs from the sides of the wire to the work piece, causing erosion. This "overcut" is necessary, for many applications it is adequately predictable and therefore can be compensated for (for instance in micro-EDM this is not often the case). Spools of wire are long — an 8 kg spool of 0.25 mm wire is just over 19 kilometers in length. Wire diameter can be as small as and the geometry precision is not far from ± . The wire-cut process uses water as its dielectric fluid, controlling its resistivity and other electrical properties with filters and PID controlled de-ionizer units. The water flushes the cut debris away from the cutting zone. Flushing is an important factor in determining the maximum feed rate for a given material thickness. Along with tighter tolerances, multi axis EDM wire-cutting machining centers have added features such as multi heads for cutting two parts at the same time, controls for preventing wire breakage, automatic self-threading features in case of wire breakage, and programmable machining strategies to optimize the operation. Wire-cutting EDM is commonly used when low residual stresses are desired, because it does not require high cutting forces for removal of material. If the energy per pulse is relatively low (as in finishing operations), little change in the mechanical properties of a material is expected due to these low residual stresses, although material that hasn't been stress-relieved can distort in the machining process. The work piece may undergo a significant thermal cycle, its severity depending on the technological parameters used. Such thermal cycles may cause formation of a recast layer on the part and residual tensile stresses on the work piece. 
If machining takes place after heat treatment, dimensional accuracy will not be affected by heat treat distortion. Fast hole drilling EDM Fast hole drilling EDM was designed for producing fast, accurate, small, deep holes. It is conceptually akin to sinker EDM but the electrode is a rotating tube conveying a pressurized jet of dielectric fluid. It can make a hole an inch deep in about a minute and is a good way to machine holes in materials too hard for twist-drill machining. This EDM drilling type is used largely in the aerospace industry, producing cooling holes into aero blades and other components. It is also used to drill holes in industrial gas turbine blades, in molds and dies, and in bearings. Applications Prototype production The EDM process is most widely used by the mold-making, tool, and die industries, but is becoming a common method of making prototype and production parts, especially in the aerospace, automobile and electronics industries in which production quantities are relatively low. In sinker EDM, a graphite, copper tungsten, or pure copper electrode is machined into the desired (negative) shape and fed into the workpiece on the end of a vertical ram. Coinage die making For the creation of dies for producing jewelry and badges, or blanking and piercing (through use of a pancake die) by the coinage (stamping) process, the positive master may be made from sterling silver, since (with appropriate machine settings) the master is significantly eroded and is used only once. The resultant negative die is then hardened and used in a drop hammer to produce stamped flats from cutout sheet blanks of bronze, silver, or low proof gold alloy. For badges these flats may be further shaped to a curved surface by another die. This type of EDM is usually performed submerged in an oil-based dielectric. The finished object may be further refined by hard (glass) or soft (paint) enameling, or electroplated with pure gold or nickel. Softer materials such as silver may be hand engraved as a refinement. Small hole drilling Small hole drilling EDM is used in a variety of applications. On wire-cut EDM machines, small hole drilling EDM is used to make a through hole in a workpiece through which to thread the wire for the wire-cut EDM operation. A separate EDM head specifically for small hole drilling is mounted on a wire-cut machine and allows large hardened plates to have finished parts eroded from them as needed and without pre-drilling. Small hole EDM is used to drill rows of holes into the leading and trailing edges of turbine blades used in jet engines. Gas flow through these small holes allows the engines to use higher temperatures than otherwise possible. The high-temperature, very hard, single crystal alloys employed in these blades makes conventional machining of these holes with high aspect ratio extremely difficult, if not impossible. Small hole EDM is also used to create microscopic orifices for fuel system components, spinnerets for synthetic fibers such as rayon, and other applications. There are also stand-alone small hole drilling EDM machines with an x–y axis also known as a super drill or hole popper that can machine blind or through holes. EDM drills bore holes with a long brass or copper tube electrode that rotates in a chuck with a constant flow of distilled or deionized water flowing through the electrode as a flushing agent and dielectric. The electrode tubes operate like the wire in wire-cut EDM machines, having a spark gap and wear rate. 
Some small-hole drilling EDMs are able to drill through 100 mm of soft or hardened steel in less than 10 seconds, averaging 50% to 80% wear rate. Holes of 0.3 mm to 6.1 mm can be achieved in this drilling operation. Brass electrodes are easier to machine but are not recommended for wire-cut operations due to eroded brass particles causing "brass on brass" wire breakage, therefore copper is recommended. Metal disintegration machining Several manufacturers produce EDM machines for the specific purpose of removing broken cutting tools and fasteners from work pieces. In this application, the process is termed "metal disintegration machining" or MDM. The metal disintegration process removes only the center of the broken tool or fastener, leaving the hole intact and allowing a ruined part to be reclaimed. Closed-loop manufacturing Closed-loop manufacturing can improve the accuracy and reduce the tool costs Advantages and disadvantages EDM is often compared to electrochemical machining. Advantages of EDM include: Ability to machine complex shapes that would otherwise be difficult to produce with conventional cutting tools. Machining of extremely hard material to very close tolerances. Very small work pieces can be machined where conventional cutting tools may damage the part from excess cutting tool pressure. There is no direct contact between tool and work piece. Therefore, delicate sections and weak materials can be machined without perceivable distortion. A good surface finish can be obtained; a very good surface may be obtained by redundant finishing paths. Very fine holes can be attained. Tapered holes may be produced. Pipe or container internal contours and internal corners down to R 0.001". Disadvantages of EDM include: Difficulty finding expert machinists. The slow rate of material removal. Potential fire hazard associated with use of combustible oil based dielectrics. The additional time and cost used for creating electrodes for ram/sinker EDM. Reproducing sharp corners on the workpiece is difficult due to electrode wear. Specific power consumption is very high. Power consumption is high. "Overcut" is formed. Excessive tool wear occurs during machining. Electrically non-conductive materials can be machined only with specific set-up of the process. A recast layer is formed at the cut surface due to melting of the material by the arc.
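Returning to the overcut discussed in the wire EDM section above (and listed among the disadvantages), the kerf left by the wire is wider than the wire itself because sparking erodes the workpiece on both sides. The short sketch below shows the usual geometric compensation; the wire diameter and per-side overcut are illustrative values, not figures from this article.

def wire_edm_offset(wire_diameter_mm, overcut_per_side_mm):
    """Geometric compensation for wire EDM: kerf width and programmed path
    offset from the desired contour.  Values used below are illustrative only."""
    kerf = wire_diameter_mm + 2.0 * overcut_per_side_mm
    offset = wire_diameter_mm / 2.0 + overcut_per_side_mm
    return kerf, offset

if __name__ == "__main__":
    # Hypothetical example: 0.25 mm brass wire, 0.02 mm spark gap (overcut) per side.
    kerf, offset = wire_edm_offset(0.25, 0.02)
    print(f"kerf width       : {kerf:.3f} mm")
    print(f"programmed offset: {offset:.3f} mm")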
Technology
Metallurgy
null
75654
https://en.wikipedia.org/wiki/Hyperthermia
Hyperthermia
Hyperthermia, also known simply as overheating, is a condition in which an individual's body temperature is elevated beyond normal due to failed thermoregulation. The person's body produces or absorbs more heat than it dissipates. When extreme temperature elevation occurs, it becomes a medical emergency requiring immediate treatment to prevent disability or death. Almost half a million deaths are recorded every year from hyperthermia. The most common causes include heat stroke and adverse reactions to drugs. Heat stroke is an acute temperature elevation caused by exposure to excessive heat, or combination of heat and humidity, that overwhelms the heat-regulating mechanisms of the body. The latter is a relatively rare side effect of many drugs, particularly those that affect the central nervous system. Malignant hyperthermia is a rare complication of some types of general anesthesia. Hyperthermia can also be caused by a traumatic brain injury. Hyperthermia differs from fever in that the body's temperature set point remains unchanged. The opposite is hypothermia, which occurs when the temperature drops below that required to maintain normal metabolism. The term is from Greek ὑπέρ, hyper, meaning "above", and θέρμος, thermos, meaning "heat". Classification In humans, hyperthermia is defined as a temperature greater than , depending on the reference used, that occurs without a change in the body's temperature set point. The normal human body temperature can be as high as in the late afternoon. Hyperthermia requires an elevation from the temperature that would otherwise be expected. Such elevations range from mild to extreme; body temperatures above can be life-threatening. Signs and symptoms An early stage of hyperthermia can be "heat exhaustion" (or "heat prostration" or "heat stress"), whose symptoms can include heavy sweating, rapid breathing and a fast, weak pulse. If the condition progresses to heat stroke, then hot, dry skin is typical as blood vessels dilate in an attempt to increase heat loss. An inability to cool the body through perspiration may cause dry skin. Hyperthermia from neurological disease may include little or no sweating, cardiovascular problems, and confusion or delirium. Other signs and symptoms vary. Accompanying dehydration can produce nausea, vomiting, headaches, and low blood pressure and the latter can lead to fainting or dizziness, especially if the standing position is assumed quickly. In severe heat stroke, confusion and aggressive behavior may be observed. Heart rate and respiration rate will increase (tachycardia and tachypnea) as blood pressure drops and the heart attempts to maintain adequate circulation. The decrease in blood pressure can then cause blood vessels to contract reflexively, resulting in a pale or bluish skin color in advanced cases. Young children, in particular, may have seizures. Eventually, organ failure, unconsciousness and death will result. Causes Heat stroke occurs when thermoregulation is overwhelmed by a combination of excessive metabolic production of heat (exertion), excessive environmental heat, and insufficient or impaired heat loss, resulting in an abnormally high body temperature. In severe cases, temperatures can exceed . Heat stroke may be non-exertional (classic) or exertional. Exertional Significant physical exertion in hot conditions can generate heat beyond the ability to cool, because, in addition to the heat, humidity of the environment may reduce the efficiency of the body's normal cooling mechanisms. 
Human heat-loss mechanisms are limited primarily to sweating (which dissipates heat by evaporation, assuming sufficiently low humidity) and vasodilation of skin vessels (which dissipates heat by convection proportional to the temperature difference between the body and its surroundings, according to Newton's law of cooling). Other factors, such as insufficient water intake, consuming alcohol, or lack of air conditioning, can worsen the problem. The increase in body temperature that results from a breakdown in thermoregulation affects the body biochemically. Enzymes involved in metabolic pathways within the body such as cellular respiration fail to work effectively at higher temperatures, and further increases can lead them to denature, reducing their ability to catalyse essential chemical reactions. This loss of enzymatic control affects the functioning of major organs with high energy demands such as the heart and brain. Loss of fluid and electrolytes cause heat cramps – slow muscular contraction and severe muscular spasm lasting between one and three minutes. Almost all cases of heat cramps involve vigorous physical exertion. Body temperature may remain normal or a little higher than normal and cramps are concentrated in heavily used muscles. Situational Situational heat stroke occurs in the absence of exertion. It mostly affects the young and elderly. In the elderly in particular, it can be precipitated by medications that reduce vasodilation and sweating, such as anticholinergic drugs, antihistamines, and diuretics. In this situation, the body's tolerance for high environmental temperature may be insufficient, even at rest. Heat waves are often followed by a rise in the death rate, and these 'classical hyperthermia' deaths typically involve the elderly and infirm. This is partly because thermoregulation involves cardiovascular, respiratory and renal systems which may be inadequate for the additional stress because of the existing burden of aging and disease, further compromised by medications. During the July 1995 heatwave in Chicago, there were at least 700 heat-related deaths. The strongest risk factors were being confined to bed, and living alone, while the risk was reduced for those with working air conditioners and those with access to transportation. Even then, reported deaths may be underestimated as diagnosis can be mis-classified as stroke or heart attack. Drugs Some drugs cause excessive internal heat production. The rate of drug-induced hyperthermia is higher where use of these drugs is higher. Many psychotropic medications, such as selective serotonin reuptake inhibitors (SSRIs), monoamine oxidase inhibitors (MAOIs), and tricyclic antidepressants, can cause hyperthermia. Serotonin syndrome is a rare adverse reaction to overdose of these medications or the use of several simultaneously. Similarly, neuroleptic malignant syndrome is an uncommon reaction to neuroleptic agents. These syndromes are differentiated by other associated symptoms, such as tremor in serotonin syndrome and "lead-pipe" muscle rigidity in neuroleptic malignant syndrome. Recreational drugs such as amphetamines and cocaine, PCP, dextromethorphan, LSD, and MDMA may cause hyperthermia. Malignant hyperthermia is a rare reaction to common anesthetic agents (such as halothane) or the paralytic agent succinylcholine. Those who have this reaction, which is potentially fatal, have a genetic predisposition. 
The use of anticholinergics, more specifically muscarinic antagonists are thought to cause mild hyperthermic episodes due to its parasympatholytic effects. The sympathetic nervous system, also known as the "fight-or-flight response", dominates by raising catecholamine levels by the blocked action of the "rest and digest system". Drugs that decouple oxidative phosphorylation may also cause hyperthermia. From this group of drugs the most well-known is 2,4-dinitrophenol which was used as a weight loss drug until dangers from its use became apparent. Personal protective equipment Those working in industry, in the military, or as first responders may be required to wear personal protective equipment (PPE) against hazards such as chemical agents, gases, fire, small arms and improvised explosive devices (IEDs). PPE includes a range of hazmat suits, firefighting turnout gear, body armor and bomb suits, among others. Depending on design, the wearer may be encapsulated in a microclimate, due to an increase in thermal resistance and decrease in vapor permeability. As physical work is performed, the body's natural thermoregulation (i.e. sweating) becomes ineffective. This is compounded by increased work rates, high ambient temperature and humidity levels, and direct exposure to the sun. The net effect is that desired protection from some environmental threats inadvertently increases the threat of heat stress. The effect of PPE on hyperthermia has been noted in fighting the 2014 Ebola virus epidemic in Western Africa. Doctors and healthcare workers were only able to work for 40 minutes at a time in their protective suits, fearing heat stroke. Other Other rare causes of hyperthermia include thyrotoxicosis and an adrenal gland tumor, called pheochromocytoma, both of which can cause increased heat production. Damage to the central nervous system from brain hemorrhage, traumatic brain injury, status epilepticus, and other kinds of injury to the hypothalamus can also cause hyperthermia. Pathophysiology A fever occurs when the core temperature is set higher, through the action of the pre-optic region of the anterior hypothalamus. For example, in response to a bacterial or viral infection, certain white blood cells within the blood will release pyrogens which have a direct effect on the anterior hypothalamus, causing body temperature to rise, much like raising the temperature setting on a thermostat. In contrast, hyperthermia occurs when the body temperature rises without a change in the heat control centers. Some of the gastrointestinal symptoms of acute exertional heatstroke, such as vomiting, diarrhea, and gastrointestinal bleeding, may be caused by barrier dysfunction and subsequent endotoxemia. Ultraendurance athletes have been found to have significantly increased plasma endotoxin levels. Endotoxin stimulates many inflammatory cytokines, which in turn may cause multiorgan dysfunction. Experimentally, monkeys treated with oral antibiotics prior to induction of heat stroke do not become endotoxemic. There is scientific support for the concept of a temperature set point; that is, maintenance of an optimal temperature for the metabolic processes that life depends on. Nervous activity in the preoptic-anterior hypothalamus of the brain triggers heat losing (sweating, etc.) or heat generating (shivering and muscle contraction, etc.) activities through stimulation of the autonomic nervous system. 
The pre-optic anterior hypothalamus has been shown to contain warm sensitive, cool sensitive, and temperature insensitive neurons, to determine the body's temperature setpoint. As the temperature that these neurons are exposed to rises above , the rate of electrical discharge of the warm-sensitive neurons increases progressively. Cold-sensitive neurons increase their rate of electrical discharge progressively below . Diagnosis Hyperthermia is generally diagnosed by the combination of unexpectedly high body temperature and a history that supports hyperthermia instead of a fever. Most commonly this means that the elevated temperature has occurred in a hot, humid environment (heat stroke) or in someone taking a drug for which hyperthermia is a known side effect (drug-induced hyperthermia). The presence of signs and symptoms related to hyperthermia syndromes, such as extrapyramidal symptoms characteristic of neuroleptic malignant syndrome, and the absence of signs and symptoms more commonly related to infection-related fevers, are also considered in making the diagnosis. If fever-reducing drugs lower the body temperature, even if the temperature does not return entirely to normal, then hyperthermia is excluded. Prevention When ambient temperature is excessive, humans and many other animals cool themselves below ambient by evaporative cooling of sweat (or other aqueous liquid; saliva in dogs, for example); this helps prevent potentially fatal hyperthermia. The effectiveness of evaporative cooling depends upon humidity. Wet-bulb temperature, which takes humidity into account, or more complex calculated quantities such as wet-bulb globe temperature (WBGT), which also takes solar radiation into account, give useful indications of the degree of heat stress and are used by several agencies as the basis for heat-stress prevention guidelines. (Wet-bulb temperature is essentially the lowest skin temperature attainable by evaporative cooling at a given ambient temperature and humidity.) A sustained wet-bulb temperature exceeding is likely to be fatal even to fit and healthy people unclothed in the shade next to a fan; at this temperature, environmental heat gain instead of loss occurs. , wet-bulb temperatures only very rarely exceeded anywhere, although significant global warming may change this. In cases of heat stress caused by physical exertion, hot environments, or protective equipment, prevention or mitigation by frequent rest breaks, careful hydration, and monitoring body temperature should be attempted. However, in situations demanding one is exposed to a hot environment for a prolonged period or must wear protective equipment, a personal cooling system is required as a matter of health and safety. There are a variety of active or passive personal cooling systems; these can be categorized by their power sources and whether they are person- or vehicle-mounted. Because of the broad variety of operating conditions, these devices must meet specific requirements concerning their rate and duration of cooling, their power source, and their adherence to health and safety regulations. Among other criteria are the user's need for physical mobility and autonomy. For example, active-liquid systems operate by chilling water and circulating it through a garment; the skin surface area is thereby cooled through conduction. This type of system has proven successful in certain military, law enforcement, and industrial applications. 
Bomb-disposal technicians wearing special suits to protect against improvised explosive devices (IEDs) use a small, ice-based chiller unit that is strapped to one leg; a liquid-circulating garment, usually a vest, is worn over the torso to maintain a safe core body temperature. By contrast, soldiers traveling in combat vehicles can face microclimate temperatures in excess of and require a multiple-user, vehicle-powered cooling system with rapid connection capabilities. Requirements for hazmat teams, the medical community, and workers in heavy industry vary further. Treatment The underlying cause must be removed. Mild hyperthermia caused by exertion on a hot day may be adequately treated through self-care measures, such as increased water consumption and resting in a cool place. Hyperthermia that results from drug exposure requires prompt cessation of that drug, and occasionally the use of other drugs as countermeasures. Antipyretics (e.g., acetaminophen, aspirin, other nonsteroidal anti-inflammatory drugs) have no role in the treatment of heatstroke because antipyretics interrupt the change in the hypothalamic set point caused by pyrogens; they are not expected to work on a healthy hypothalamus that has been overloaded, as in the case of heatstroke. In this situation, antipyretics actually may be harmful in patients who develop hepatic, hematologic, and renal complications because they may aggravate bleeding tendencies. When body temperature is significantly elevated, mechanical cooling methods are used to remove heat and to restore the body's ability to regulate its own temperatures. Passive cooling techniques, such as resting in a cool, shady area and removing clothing, can be applied immediately. Active cooling methods, such as sponging the head, neck, and trunk with cool water, remove heat from the body and thereby speed the body's return to normal temperatures. When methods such as immersion are impractical, misting the body with water and using a fan have also been shown to be effective. Sitting in a bathtub of tepid or cool water (immersion method) can remove a significant amount of heat in a relatively short period of time. It was once thought that immersion in very cold water is counterproductive, as it causes vasoconstriction in the skin and thereby prevents heat from escaping the body core. However, a British analysis of various studies stated: "this has never been proven experimentally. Indeed, a recent study using normal volunteers has shown that cooling rates were fastest when the coldest water was used." The analysis concluded that iced water immersion is the most-effective cooling technique for exertional heat stroke. No superior cooling method has been found for non-exertional heat stroke. Thus, aggressive ice-water immersion remains the gold standard for life-threatening heat stroke. When the body temperature reaches about , or if the affected person is unconscious or showing signs of confusion, hyperthermia is considered a medical emergency that requires treatment in a proper medical facility. Cardiopulmonary resuscitation (CPR) may be necessary if the person goes into cardiac arrest (cessation of the heartbeat). Once in a hospital, more aggressive cooling measures are available, including intravenous hydration, gastric lavage with iced saline, and even hemodialysis to cool the blood. Epidemiology Hyperthermia affects those who are unable to regulate their body heat, mainly due to environmental conditions.
The main risk factor for hyperthermia is the lack of ability to sweat. People who are dehydrated or who are older may not produce the sweat they need to regulate their body temperature. High heat conditions can put certain groups at risk for hyperthermia including: physically active individuals, soldiers, construction workers, landscapers and factory workers. Some people that do not have access to cooler living conditions, like people with lower socioeconomic status, may have a difficult time fighting the heat. People are at risk for hyperthermia during high heat and dry conditions, most commonly seen in the summer. Various cases of different types of hyperthermia have been reported. A research study was published in March 2019 that looked into multiple case reports of drug induced hyperthermia. The study concluded that psychotropic drugs such as anti-psychotics, antidepressants, and anxiolytics were associated with an increased heat-related mortality as opposed to the other drugs researched (anticholinergics, diuretics, cardiovascular agents, etc.). A different study was published in June 2019 that examined the association between hyperthermia in older adults and the temperatures in the United States. Hospitalization records of elderly patients in the US between 1991 and 2006 were analyzed and concluded that cases of hyperthermia were observed to be highest in regions with arid climates. The study discussed finding a disproportionately high number of cases of hyperthermia in early seasonal heat waves indicating that people were not yet practicing proper techniques to stay cool and prevent overheating in the early presence of warm, dry weather. In urban areas people are at an increased susceptibility to hyperthermia. This is due to a phenomenon called the urban heat island effect. Since the 20th century in the United States, the north-central region (Ohio, Indiana, Illinois, Missouri, Iowa, and Nebraska) was the region with the highest morbidity resulting from hyperthermia. Northeastern states had the next highest. Regions least affected by heat wave-related hyperthermia causing death were Southern and Pacific Coastal states. Northern cities in the United States are at greater risk of hyperthermia during heat waves due to the fact that people tend to have a lower minimum mortality temperature at higher latitudes. In contrast, cities residing in lower latitudes within the continental US typically have higher thresholds for ambient temperatures. In India, hundreds die every year from summer heat waves, including more than 2,500 in the year 2015. Later that same summer, the 2015 Pakistani heat wave killed about 2,000 people. An extreme 2003 European heat wave caused tens of thousands of deaths. Causes of hyperthermia include dehydration, use of certain medications, using cocaine and amphetamines or excessive alcohol use. Bodily temperatures greater than can be diagnosed as a hyperthermic case. As body temperatures increase or excessive body temperatures persist, individuals are at a heightened risk of developing progressive conditions. Greater risk complications of hyperthermia include heat stroke, organ malfunction, organ failure, and death. There are two forms of heat stroke; classical heatstroke and exertional heatstroke. Classical heatstroke occurs from extreme environmental conditions, such as heat waves. Those who are most commonly affected by classical heatstroke are very young, elderly or chronically ill. Exertional heatstroke appears in individuals after vigorous physical activity. 
Exertional heatstroke most commonly affects healthy people aged 15 to 50. Sweating is often present in exertional heatstroke. The associated mortality rate of heatstroke is 40 to 64%. Research Hyperthermia can also be deliberately induced using drugs or medical devices, and is being studied and applied in clinical routine as a treatment of some kinds of cancer. Research has shown that medically controlled hyperthermia can shrink tumours. This occurs when a high body temperature damages cancerous cells by destroying proteins and structures within each cell. Hyperthermia has also been investigated as a way of making cancerous tumours more susceptible to radiation treatment, which has allowed hyperthermia to be used to complement other forms of cancer therapy. Various techniques of hyperthermia in the treatment of cancer include local or regional hyperthermia, as well as whole body techniques.
https://en.wikipedia.org/wiki/Mass
Mass
Mass is an intrinsic property of a body. It was traditionally believed to be related to the quantity of matter in a body, until the discovery of the atom and particle physics. It was found that different atoms and different elementary particles, theoretically with the same amount of matter, have nonetheless different masses. Mass in modern physics has multiple definitions which are conceptually distinct, but physically equivalent. Mass can be experimentally defined as a measure of the body's inertia, meaning the resistance to acceleration (change of velocity) when a net force is applied. The object's mass also determines the strength of its gravitational attraction to other bodies. The SI base unit of mass is the kilogram (kg). In physics, mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale, rather than a balance scale comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity, but it would still have the same mass. This is because weight is a force, while mass is the property that (along with gravity) determines the strength of this force. In the Standard Model of physics, the mass of elementary particles is believed to be a result of their coupling with the Higgs boson in what is known as the Brout–Englert–Higgs mechanism. Phenomena There are several distinct phenomena that can be used to measure mass. Although some theorists have speculated that some of these phenomena could be independent of each other, current experiments have found no difference in results regardless of how it is measured: Inertial mass measures an object's resistance to being accelerated by a force (represented by the relationship F = ma). Active gravitational mass determines the strength of the gravitational field generated by an object. Passive gravitational mass measures the gravitational force exerted on an object in a known gravitational field. The mass of an object determines its acceleration in the presence of an applied force. The inertia and the inertial mass describe this property of physical bodies at the qualitative and quantitative level respectively. According to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates and is affected by a gravitational field. If a first body of mass mA is placed at a distance r (center of mass to center of mass) from a second body of mass mB, each body is subject to an attractive force F = GmAmB/r2, where G is the "universal gravitational constant". This is sometimes referred to as gravitational mass. Repeated experiments since the 17th century have demonstrated that inertial and gravitational mass are identical; since 1915, this observation has been incorporated a priori in the equivalence principle of general relativity. Units of mass The International System of Units (SI) unit of mass is the kilogram (kg). The kilogram is 1000 grams (g), and was first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice.
However, because precise measurement of a cubic decimetre of water at the specified temperature and pressure was difficult, in 1889 the kilogram was redefined as the mass of a metal object, and thus became independent of the metre and the properties of water; the metal standards used were a copper prototype of the grave in 1793, the platinum Kilogramme des Archives in 1799, and the platinum–iridium International Prototype of the Kilogram (IPK) in 1889. However, the masses of the IPK and its national copies have been found to drift over time. The re-definition of the kilogram and several other units came into effect on 20 May 2019, following a final vote by the CGPM in November 2018. The new definition uses only invariant quantities of nature: the speed of light, the caesium hyperfine frequency, the Planck constant and the elementary charge. Non-SI units accepted for use with SI units include: the tonne (t) (or "metric ton"), equal to 1000 kg; the electronvolt (eV), a unit of energy, used to express mass in units of eV/c2 through mass–energy equivalence; and the dalton (Da), equal to 1/12 of the mass of a free carbon-12 atom, approximately . Outside the SI system, other units of mass include: the slug (sl), an Imperial unit of mass (about 14.6 kg); the pound (lb), a unit of mass (about 0.45 kg), which is used alongside the similarly named pound (force) (about 4.5 N), a unit of force; the Planck mass (about ), a quantity derived from fundamental constants; the solar mass (), defined as the mass of the Sun, primarily used in astronomy to compare large masses such as stars or galaxies (≈ ); the mass of a particle, as identified with its inverse Compton wavelength (); and the mass of a star or black hole, as identified with its Schwarzschild radius (). Definitions In physical science, one may distinguish conceptually between at least seven different aspects of mass, or seven physical notions that involve the concept of mass. Every experiment to date has shown these seven values to be proportional, and in some cases equal, and this proportionality gives rise to the abstract concept of mass. There are a number of ways mass can be measured or operationally defined: Inertial mass is a measure of an object's resistance to acceleration when a force is applied. It is determined by applying a force to an object and measuring the acceleration that results from that force. An object with small inertial mass will accelerate more than an object with large inertial mass when acted upon by the same force. One says the body of greater mass has greater inertia. Active gravitational mass is a measure of the strength of an object's gravitational flux (gravitational flux is equal to the surface integral of gravitational field over an enclosing surface). The gravitational field can be measured by allowing a small "test object" to fall freely and measuring its free-fall acceleration. For example, an object in free-fall near the Moon is subject to a smaller gravitational field, and hence accelerates more slowly, than the same object would if it were in free-fall near the Earth. The gravitational field near the Moon is weaker because the Moon has less active gravitational mass. Passive gravitational mass is a measure of the strength of an object's interaction with a gravitational field. Passive gravitational mass is determined by dividing an object's weight by its free-fall acceleration.
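The two operational routes just described (inertial mass from force and acceleration, passive gravitational mass from weight and free-fall acceleration) can be illustrated with a small numerical sketch in Python. This is an illustration only; the force, acceleration, weight, and field values below are hypothetical round numbers, not measurements from any experiment.
# Illustrative sketch: two operational routes to the same mass (hypothetical numbers).
def inertial_mass(applied_force_newtons, measured_acceleration):
    # Inertial mass from Newton's second law: m = F / a.
    return applied_force_newtons / measured_acceleration

def passive_gravitational_mass(weight_newtons, free_fall_acceleration):
    # Passive gravitational mass: the object's weight divided by the local free-fall acceleration.
    return weight_newtons / free_fall_acceleration

# Hypothetical readings for one and the same object:
m_inertial = inertial_mass(20.0, 2.0)               # 20 N produces 2 m/s^2 -> 10.0 kg
m_passive = passive_gravitational_mass(98.1, 9.81)  # weighs 98.1 N where g = 9.81 m/s^2 -> 10.0 kg
print(m_inertial, m_passive)                         # both 10.0, as the equivalence of the two masses predicts
The point of the sketch is only that the two procedures, applied to the same body, return the same number, which is the empirical content of the equivalence discussed below.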
Two objects within the same gravitational field will experience the same acceleration; however, the object with a smaller passive gravitational mass will experience a smaller force (less weight) than the object with a larger passive gravitational mass. According to relativity, mass is nothing else than the rest energy of a system of particles, meaning the energy of that system in a reference frame where it has zero momentum. Mass can be converted into other forms of energy according to the principle of mass–energy equivalence. This equivalence is exemplified in a large number of physical processes including pair production, beta decay and nuclear fusion. Pair production and nuclear fusion are processes in which measurable amounts of mass are converted to kinetic energy or vice versa. Curvature of spacetime is a relativistic manifestation of the existence of mass. Such curvature is extremely weak and difficult to measure. For this reason, curvature was not discovered until after it was predicted by Einstein's theory of general relativity. Extremely precise atomic clocks on the surface of the Earth, for example, are found to measure less time (run slower) when compared to similar clocks in space. This difference in elapsed time is a form of curvature called gravitational time dilation. Other forms of curvature have been measured using the Gravity Probe B satellite. Quantum mass manifests itself as a difference between an object's quantum frequency and its wave number. The quantum mass of a particle is proportional to the inverse Compton wavelength and can be determined through various forms of spectroscopy. In relativistic quantum mechanics, mass is one of the irreducible representation labels of the Poincaré group. Weight vs. mass In everyday usage, mass and "weight" are often used interchangeably. For instance, a person's weight may be stated as 75 kg. In a constant gravitational field, the weight of an object is proportional to its mass, and it is unproblematic to use the same unit for both concepts. But because of slight differences in the strength of the Earth's gravitational field at different places, the distinction becomes important for measurements with a precision better than a few percent, and for places far from the surface of the Earth, such as in space or on other planets. Conceptually, "mass" (measured in kilograms) refers to an intrinsic property of an object, whereas "weight" (measured in newtons) measures an object's resistance to deviating from its current course of free fall, which can be influenced by the nearby gravitational field. No matter how strong the gravitational field, objects in free fall are weightless, though they still have mass. The force known as "weight" is proportional to mass and acceleration in all situations where the mass is accelerated away from free fall. For example, when a body is at rest in a gravitational field (rather than in free fall), it must be accelerated by a force from a scale or the surface of a planetary body such as the Earth or the Moon. This force keeps the object from going into free fall. Weight is the opposing force in such circumstances and is thus determined by the acceleration of free fall. On the surface of the Earth, for example, an object with a mass of 50 kilograms weighs 491 newtons, which means that 491 newtons is being applied to keep the object from going into free fall. 
By contrast, on the surface of the Moon, the same object still has a mass of 50 kilograms but weighs only 81.5 newtons, because only 81.5 newtons is required to keep this object from going into a free fall on the moon. Restated in mathematical terms, on the surface of the Earth, the weight W of an object is related to its mass m by W = mg, where g is the acceleration due to Earth's gravitational field (expressed as the acceleration experienced by a free-falling object). For other situations, such as when objects are subjected to mechanical accelerations from forces other than the resistance of a planetary surface, the weight force is proportional to the mass of an object multiplied by the total acceleration away from free fall, which is called the proper acceleration. Through such mechanisms, objects in elevators, vehicles, centrifuges, and the like, may experience weight forces many times those caused by resistance to the effects of gravity on objects, resulting from planetary surfaces. In such cases, the generalized weight W of an object is related to its mass m by the equation W = ma, where a is the proper acceleration of the object caused by all influences other than gravity. (Again, if gravity is the only influence, such as occurs when an object falls freely, its weight will be zero). Inertial vs. gravitational mass Although inertial mass, passive gravitational mass and active gravitational mass are conceptually distinct, no experiment has ever unambiguously demonstrated any difference between them. In classical mechanics, Newton's third law implies that active and passive gravitational mass must always be identical (or at least proportional), but the classical theory offers no compelling reason why the gravitational mass has to equal the inertial mass. That it does is merely an empirical fact. Albert Einstein developed his general theory of relativity starting with the assumption that the inertial and passive gravitational masses are the same. This is known as the equivalence principle. The particular equivalence often referred to as the "Galilean equivalence principle" or the "weak equivalence principle" has the most important consequence for freely falling objects. Suppose an object has inertial and gravitational masses m and M, respectively. If the only force acting on the object comes from a gravitational field g, the force on the object is F = Mg. Given this force, the acceleration of the object can be determined by Newton's second law: a = F/m. Putting these together, the gravitational acceleration is given by a = (M/m)g. This says that the ratio of gravitational to inertial mass of any object is equal to some constant K if and only if all objects fall at the same rate in a given gravitational field. This phenomenon is referred to as the "universality of free-fall". In addition, the constant K can be taken as 1 by defining our units appropriately. The first experiments demonstrating the universality of free-fall were—according to scientific 'folklore'—conducted by Galileo by dropping objects from the Leaning Tower of Pisa. This is most likely apocryphal: he is more likely to have performed his experiments with balls rolling down nearly frictionless inclined planes to slow the motion and increase the timing accuracy. Increasingly precise experiments have been performed, such as those performed by Loránd Eötvös, using the torsion balance pendulum, in 1889. To date, no deviation from universality, and thus from Galilean equivalence, has ever been found, at least to the precision 10−6.
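The Earth and Moon weight figures quoted above can be checked with a short Python calculation. This is a sketch only; the surface gravity values used (9.81 m/s2 for Earth, 1.63 m/s2 for the Moon) are assumed textbook approximations rather than figures stated in this article.
# Weight W = m * g for the same 50 kg object on Earth and on the Moon.
g_earth = 9.81   # m/s^2, approximate surface gravity of Earth (assumed)
g_moon = 1.63    # m/s^2, approximate surface gravity of the Moon (assumed)
mass = 50.0      # kg; the mass is the same in both places

weight_earth = mass * g_earth   # about 490.5 N, matching the ~491 N quoted above
weight_moon = mass * g_moon     # about 81.5 N, matching the figure quoted above
print(round(weight_earth, 1), round(weight_moon, 1))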
More precise experimental efforts are still being carried out. The universality of free-fall only applies to systems in which gravity is the only acting force. All other forces, especially friction and air resistance, must be absent or at least negligible. For example, if a hammer and a feather are dropped from the same height through the air on Earth, the feather will take much longer to reach the ground; the feather is not really in free-fall because the force of air resistance upwards against the feather is comparable to the downward force of gravity. On the other hand, if the experiment is performed in a vacuum, in which there is no air resistance, the hammer and the feather should hit the ground at exactly the same time (assuming the acceleration of both objects towards each other, and of the ground towards both objects, for its own part, is negligible). This can easily be done in a high school laboratory by dropping the objects in transparent tubes that have the air removed with a vacuum pump. It is even more dramatic when done in an environment that naturally has a vacuum, as David Scott did on the surface of the Moon during Apollo 15. A stronger version of the equivalence principle, known as the Einstein equivalence principle or the strong equivalence principle, lies at the heart of the general theory of relativity. Einstein's equivalence principle states that within sufficiently small regions of spacetime, it is impossible to distinguish between a uniform acceleration and a uniform gravitational field. Thus, the theory postulates that the force acting on a massive object caused by a gravitational field is a result of the object's tendency to move in a straight line (in other words its inertia) and should therefore be a function of its inertial mass and the strength of the gravitational field. Origin In theoretical physics, a mass generation mechanism is a theory which attempts to explain the origin of mass from the most fundamental laws of physics. To date, a number of different models have been proposed which advocate different views of the origin of mass. The problem is complicated by the fact that the notion of mass is strongly related to the gravitational interaction, but a theory of the latter has not yet been reconciled with the currently popular model of particle physics, known as the Standard Model. Pre-Newtonian concepts Weight as an amount The concept of amount is very old and predates recorded history. The concept of "weight" would incorporate "amount" and acquire a double meaning that was not clearly recognized as such. Humans, at some early era, realized that the weight of a collection of similar objects was directly proportional to the number of objects in the collection: W ∝ n, where W is the weight of the collection of similar objects and n is the number of objects in the collection. Proportionality, by definition, implies that two values have a constant ratio: W/n = constant, or equivalently W = kn for some constant k. An early use of this relationship is a balance scale, which balances the force of one object's weight against the force of another object's weight. The two sides of a balance scale are close enough that the objects experience similar gravitational fields. Hence, if they have similar masses then their weights will also be similar. This allows the scale, by comparing weights, to also compare masses. Consequently, historical weight standards were often defined in terms of amounts. The Romans, for example, used the carob seed (carat or siliqua) as a measurement standard.
If an object's weight was equivalent to 1728 carob seeds, then the object was said to weigh one Roman pound. If, on the other hand, the object's weight was equivalent to 144 carob seeds, then the object was said to weigh one Roman ounce (uncia). The Roman pound and ounce were both defined in terms of different-sized collections of the same common mass standard, the carob seed. The ratio of a Roman ounce (144 carob seeds) to a Roman pound (1728 carob seeds) was therefore 144/1728 = 1/12. Planetary motion In 1600 AD, Johannes Kepler sought employment with Tycho Brahe, who had some of the most precise astronomical data available. Using Brahe's precise observations of the planet Mars, Kepler spent the next five years developing his own method for characterizing planetary motion. In 1609, Johannes Kepler published his three laws of planetary motion, explaining how the planets orbit the Sun. In Kepler's final planetary model, he described planetary orbits as following elliptical paths with the Sun at a focal point of the ellipse. Kepler discovered that the square of the orbital period of each planet is directly proportional to the cube of the semi-major axis of its orbit, or equivalently, that the ratio of these two values is constant for all planets in the Solar System. On 25 August 1609, Galileo Galilei demonstrated his first telescope to a group of Venetian merchants, and in early January 1610, Galileo observed four dim objects near Jupiter, which he mistook for stars. However, after a few days of observation, Galileo realized that these "stars" were in fact orbiting Jupiter. These four objects (later named the Galilean moons in honor of their discoverer) were the first celestial bodies observed to orbit something other than the Earth or Sun. Galileo continued to observe these moons over the next eighteen months, and by the middle of 1611, he had obtained remarkably accurate estimates for their periods. Galilean free fall Sometime prior to 1638, Galileo turned his attention to the phenomenon of objects in free fall, attempting to characterize these motions. Galileo was not the first to investigate Earth's gravitational field, nor was he the first to accurately describe its fundamental characteristics. However, Galileo's reliance on scientific experimentation to establish physical principles would have a profound effect on future generations of scientists. It is unclear if these were just hypothetical experiments used to illustrate a concept, or if they were real experiments performed by Galileo, but the results obtained from these experiments were both realistic and compelling. A biography by Galileo's pupil Vincenzo Viviani stated that Galileo had dropped balls of the same material, but different masses, from the Leaning Tower of Pisa to demonstrate that their time of descent was independent of their mass. In support of this conclusion, Galileo had advanced the following theoretical argument: he asked, if two bodies of different masses and different rates of fall are tied by a string, does the combined system fall faster because it is now more massive, or does the lighter body in its slower fall hold back the heavier body? The only convincing resolution to this question is that all bodies must fall at the same rate. A later experiment was described in Galileo's Two New Sciences, published in 1638. One of Galileo's fictional characters, Salviati, describes an experiment using a bronze ball and a wooden ramp.
The wooden ramp was "12 cubits long, half a cubit wide and three finger-breadths thick" with a straight, smooth, polished groove. The groove was lined with "parchment, also smooth and polished as possible". And into this groove was placed "a hard, smooth and very round bronze ball". The ramp was inclined at various angles to slow the acceleration enough so that the elapsed time could be measured. The ball was allowed to roll a known distance down the ramp, and the time taken for the ball to move the known distance was measured. The time was measured using a water clock described as follows: a large vessel of water placed in an elevated position; to the bottom of this vessel was soldered a pipe of small diameter giving a thin jet of water, which we collected in a small glass during the time of each descent, whether for the whole length of the channel or for a part of its length; the water thus collected was weighed, after each descent, on a very accurate balance; the differences and ratios of these weights gave us the differences and ratios of the times, and this with such accuracy that although the operation was repeated many, many times, there was no appreciable discrepancy in the results. Galileo found that for an object in free fall, the distance that the object has fallen is always proportional to the square of the elapsed time: d ∝ t2. Galileo had shown that objects in free fall under the influence of the Earth's gravitational field have a constant acceleration, and Galileo's contemporary, Johannes Kepler, had shown that the planets follow elliptical paths under the influence of the Sun's gravitational mass. However, Galileo's free fall motions and Kepler's planetary motions remained distinct during Galileo's lifetime. Mass as distinct from weight According to K. M. Browne: "Kepler formed a [distinct] concept of mass ('amount of matter' (copia materiae)), but called it 'weight' as did everyone at that time." Finally, in 1686, Newton gave this distinct concept its own name. In the first paragraph of Principia, Newton defined quantity of matter as "density and bulk conjunctly", and mass as quantity of matter. Newtonian mass Robert Hooke had published his concept of gravitational forces in 1674, stating that all celestial bodies have an attraction or gravitating power towards their own centers, and also attract all the other celestial bodies that are within the sphere of their activity. He further stated that gravitational attraction increases by how much nearer the body wrought upon is to its own center. In correspondence with Isaac Newton from 1679 and 1680, Hooke conjectured that gravitational forces might decrease according to the square of the distance between the two bodies. Hooke urged Newton, who was a pioneer in the development of calculus, to work through the mathematical details of Keplerian orbits to determine if Hooke's hypothesis was correct. Newton's own investigations verified that Hooke was correct, but due to personal differences between the two men, Newton chose not to reveal this to Hooke. Isaac Newton kept quiet about his discoveries until 1684, at which time he told a friend, Edmond Halley, that he had solved the problem of gravitational orbits, but had misplaced the solution in his office. After being encouraged by Halley, Newton decided to develop his ideas about gravity and publish all of his findings.
In November 1684, Isaac Newton sent a document to Edmond Halley, now lost but presumed to have been titled De motu corporum in gyrum (Latin for "On the motion of bodies in an orbit"). Halley presented Newton's findings to the Royal Society of London, with a promise that a fuller presentation would follow. Newton later recorded his ideas in a three-book set, entitled Philosophiæ Naturalis Principia Mathematica (English: Mathematical Principles of Natural Philosophy). The first was received by the Royal Society on 28 April 1685–86; the second on 2 March 1686–87; and the third on 6 April 1686–87. The Royal Society published Newton's entire collection at their own expense in May 1686–87. Isaac Newton had bridged the gap between Kepler's gravitational mass and Galileo's gravitational acceleration, resulting in the discovery of the following relationship which governed both of these: g = μ/R2, where g is the apparent acceleration of a body as it passes through a region of space where gravitational fields exist, μ is the gravitational mass (standard gravitational parameter) of the body causing gravitational fields, and R is the radial coordinate (the distance between the centers of the two bodies). By finding the exact relationship between a body's gravitational mass and its gravitational field, Newton provided a second method for measuring gravitational mass. The mass of the Earth can be determined using Kepler's method (from the orbit of Earth's Moon), or it can be determined by measuring the gravitational acceleration on the Earth's surface, and multiplying that by the square of the Earth's radius. The mass of the Earth is approximately three-millionths of the mass of the Sun. To date, no other accurate method for measuring gravitational mass has been discovered. Newton's cannonball Newton's cannonball was a thought experiment used to bridge the gap between Galileo's gravitational acceleration and Kepler's elliptical orbits. It appeared in Newton's 1728 book A Treatise of the System of the World. According to Galileo's concept of gravitation, a dropped stone falls with constant acceleration down towards the Earth. However, Newton explains that when a stone is thrown horizontally (meaning sideways or perpendicular to Earth's gravity) it follows a curved path. "For a stone projected is by the pressure of its own weight forced out of the rectilinear path, which by the projection alone it should have pursued, and made to describe a curve line in the air; and through that crooked way is at last brought down to the ground. And the greater the velocity is with which it is projected, the farther it goes before it falls to the Earth." Newton further reasons that if an object were "projected in an horizontal direction from the top of a high mountain" with sufficient velocity, "it would reach at last quite beyond the circumference of the Earth, and return to the mountain from which it was projected." Universal gravitational mass In contrast to earlier theories (e.g. celestial spheres) which stated that the heavens were made of entirely different material, Newton's theory of mass was groundbreaking partly because it introduced universal gravitational mass: every object has gravitational mass, and therefore, every object generates a gravitational field. Newton further assumed that the strength of each object's gravitational field would decrease according to the square of the distance to that object.
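Newton's "second method" mentioned above can be sketched numerically: the measured surface acceleration times the square of the Earth's radius gives the standard gravitational parameter μ, and dividing by G gives the mass. The constants below (G, the mean radius, the surface acceleration) are assumed textbook values, not values stated in this article; the Python sketch is illustrative only.
# Sketch: Earth's mass from g = mu / R^2, so mu = g * R^2 and M = mu / G.
G = 6.674e-11       # m^3 kg^-1 s^-2, gravitational constant (assumed)
g_surface = 9.81    # m/s^2, free-fall acceleration measured at the surface (assumed)
R_earth = 6.371e6   # m, mean radius of the Earth (assumed)

mu_earth = g_surface * R_earth ** 2   # standard gravitational parameter, ~3.98e14 m^3/s^2
M_earth = mu_earth / G                # ~5.97e24 kg
print(f"{M_earth:.2e} kg")
Dividing this result by a textbook solar mass of roughly 1.99e30 kg gives about 3e-6, consistent with the "three-millionths" comparison above.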
If a large collection of small objects were formed into a giant spherical body such as the Earth or Sun, Newton calculated the collection would create a gravitational field proportional to the total mass of the body, and inversely proportional to the square of the distance to the body's center. For example, according to Newton's theory of universal gravitation, each carob seed produces a gravitational field. Therefore, if one were to gather an immense number of carob seeds and form them into an enormous sphere, then the gravitational field of the sphere would be proportional to the number of carob seeds in the sphere. Hence, it should be theoretically possible to determine the exact number of carob seeds that would be required to produce a gravitational field similar to that of the Earth or Sun. In fact, by unit conversion it is a simple matter of abstraction to realize that any traditional mass unit can theoretically be used to measure gravitational mass. Measuring gravitational mass in terms of traditional mass units is simple in principle, but extremely difficult in practice. According to Newton's theory, all objects produce gravitational fields and it is theoretically possible to collect an immense number of small objects and form them into an enormous gravitating sphere. However, from a practical standpoint, the gravitational fields of small objects are extremely weak and difficult to measure. Newton's books on universal gravitation were published in the 1680s, but the first successful measurement of the Earth's mass in terms of traditional mass units, the Cavendish experiment, did not occur until 1797, over a hundred years later. Henry Cavendish found that the Earth's density was 5.448 ± 0.033 times that of water. As of 2009, the Earth's mass in kilograms is only known to around five digits of accuracy, whereas its gravitational mass is known to over nine significant figures. Given two objects A and B, of masses MA and MB, separated by a displacement RAB, Newton's law of gravitation states that each object exerts a gravitational force on the other, of magnitude F = GMAMB/RAB2, where G is the universal gravitational constant. The above statement may be reformulated in the following way: if g is the magnitude of the gravitational field at a given location, then the gravitational force on an object with gravitational mass M is F = Mg. This is the basis by which masses are determined by weighing. In simple spring scales, for example, the force F is proportional to the displacement of the spring beneath the weighing pan, as per Hooke's law, and the scales are calibrated to take g into account, allowing the mass M to be read off. Assuming the gravitational field is equivalent on both sides of the balance, a balance measures relative weight, giving the relative gravitational mass of each object. Inertial mass Mass was traditionally believed to be a measure of the quantity of matter in a physical body, equal to the "amount of matter" in an object. For example, Barré de Saint-Venant argued in 1851 that every object contains a number of "points" (basically, interchangeable elementary particles), and that mass is proportional to the number of points the object contains. (In practice, this "amount of matter" definition is adequate for most of classical mechanics, and sometimes remains in use in basic education, if the priority is to teach the difference between mass and weight.)
This traditional "amount of matter" belief was contradicted by the fact that different atoms (and, later, different elementary particles) can have different masses, and was further contradicted by Einstein's theory of relativity (1905), which showed that the measurable mass of an object increases when energy is added to it (for example, by increasing its temperature or forcing it near an object that electrically repels it). This motivates a search for a different definition of mass that is more accurate than the traditional definition of "the amount of matter in an object". Inertial mass is the mass of an object measured by its resistance to acceleration. This definition has been championed by Ernst Mach and has since been developed into the notion of operationalism by Percy W. Bridgman. The simple classical mechanics definition of mass differs slightly from the definition in the theory of special relativity, but the essential meaning is the same. In classical mechanics, according to Newton's second law, we say that a body has a mass m if, at any instant of time, it obeys the equation of motion F = ma, where F is the resultant force acting on the body and a is the acceleration of the body's centre of mass. For the moment, we will put aside the question of what "force acting on the body" actually means. This equation illustrates how mass relates to the inertia of a body. Consider two objects with different masses. If we apply an identical force to each, the object with a bigger mass will experience a smaller acceleration, and the object with a smaller mass will experience a bigger acceleration. We might say that the larger mass exerts a greater "resistance" to changing its state of motion in response to the force. However, this notion of applying "identical" forces to different objects brings us back to the fact that we have not really defined what a force is. We can sidestep this difficulty with the help of Newton's third law, which states that if one object exerts a force on a second object, it will experience an equal and opposite force. To be precise, suppose we have two objects of constant inertial masses m1 and m2. We isolate the two objects from all other physical influences, so that the only forces present are the force exerted on m1 by m2, which we denote F12, and the force exerted on m2 by m1, which we denote F21. Newton's second law states that F12 = m1a1 and F21 = m2a2, where a1 and a2 are the accelerations of m1 and m2, respectively. Suppose that these accelerations are non-zero, so that the forces between the two objects are non-zero. This occurs, for example, if the two objects are in the process of colliding with one another. Newton's third law then states that F12 = −F21, and thus m1a1 = −m2a2. If a1 is non-zero, the ratio m1/m2 = |a2|/|a1| is well-defined, which allows us to measure the inertial mass of m1. In this case, m2 is our "reference" object, and we can define its mass as (say) 1 kilogram. Then we can measure the mass of any other object in the universe by colliding it with the reference object and measuring the accelerations. Additionally, mass relates a body's momentum p to its linear velocity v: p = mv, and the body's kinetic energy K to its velocity: K = (1/2)mv2. The primary difficulty with Mach's definition of mass is that it fails to take into account the potential energy (or binding energy) needed to bring two masses sufficiently close to one another to perform the measurement of mass.
This is most vividly demonstrated by comparing the mass of the proton in the nucleus of deuterium to the mass of the proton in free space (which is greater by about 0.239%; this is due to the binding energy of deuterium). Thus, for example, if the reference weight m2 is taken to be the mass of the neutron in free space, and the relative accelerations for the proton and neutron in deuterium are computed, then the above formula over-estimates the mass m1 (by 0.239%) for the proton in deuterium. At best, Mach's formula can only be used to obtain ratios of masses, that is, as m1 / m2 = |a2| / |a1|. An additional difficulty was pointed out by Henri Poincaré, which is that the measurement of instantaneous acceleration is impossible: unlike the measurement of time or distance, there is no way to measure acceleration with a single measurement; one must make multiple measurements (of position, time, etc.) and perform a computation to obtain the acceleration. Poincaré termed this an "insurmountable flaw" in the Mach definition of mass. Atomic masses Typically, the mass of objects is measured in terms of the kilogram, which since 2019 is defined in terms of fundamental constants of nature. The mass of an atom or other particle can be compared more precisely and more conveniently to that of another atom, and thus scientists developed the dalton (also known as the unified atomic mass unit). By definition, 1 Da (one dalton) is exactly one-twelfth of the mass of a carbon-12 atom, and thus, a carbon-12 atom has a mass of exactly 12 Da. In relativity Special relativity In some frameworks of special relativity, physicists have used different definitions of the term "mass". In these frameworks, two kinds of mass are defined: rest mass (invariant mass), and relativistic mass (which increases with velocity). Rest mass is the Newtonian mass as measured by an observer moving along with the object. Relativistic mass is the total quantity of energy in a body or system divided by c2. The two are related by the equation mrel = γm0, where γ is the Lorentz factor: γ = 1/√(1 − v2/c2). The invariant mass of systems is the same for observers in all inertial frames, while the relativistic mass depends on the observer's frame of reference. In order to formulate the equations of physics such that mass values do not change between observers, it is convenient to use rest mass. The rest mass of a body is also related to its energy E and the magnitude of its momentum p by the relativistic energy–momentum equation E2 = (pc)2 + (m0c2)2. So long as the system is closed with respect to mass and energy, both kinds of mass are conserved in any given frame of reference. The conservation of mass holds even as some types of particles are converted to others. Matter particles (such as atoms) may be converted to non-matter particles (such as photons of light), but this does not affect the total amount of mass or energy. Although things like heat may not be matter, all types of energy still continue to exhibit mass. Thus, mass and energy do not change into one another in relativity; rather, both are names for the same thing, and neither mass nor energy appear without the other. Both rest and relativistic mass can be expressed as an energy by applying the well-known relationship E = mc2, yielding the rest energy E0 = m0c2 and the "relativistic energy" (total system energy) E = mrelc2, respectively. The "relativistic" mass and energy concepts are related to their "rest" counterparts, but they do not have the same value as their rest counterparts in systems where there is a net momentum.
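The relations just stated can be checked numerically. The Python sketch below uses an arbitrary rest mass and speed (both purely illustrative values, not taken from this article) and verifies that the Lorentz factor, relativistic mass, momentum, and total energy satisfy the energy–momentum equation.
import math

c = 2.998e8    # m/s, speed of light (assumed textbook value)
m0 = 1.0       # kg, rest mass (arbitrary illustrative value)
v = 0.6 * c    # particle speed, arbitrary but below c

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor; 1.25 for v = 0.6c
m_rel = gamma * m0                            # relativistic mass
E = m_rel * c ** 2                            # total ("relativistic") energy
p = m_rel * v                                 # relativistic momentum

# E^2 should equal (pc)^2 + (m0 c^2)^2 to within floating-point error.
print(gamma, math.isclose(E ** 2, (p * c) ** 2 + (m0 * c ** 2) ** 2))   # 1.25 True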
Because the relativistic mass is proportional to the energy, it has gradually fallen into disuse among physicists. There is disagreement over whether the concept remains useful pedagogically. In bound systems, the binding energy must often be subtracted from the mass of the unbound system, because binding energy commonly leaves the system at the time it is bound. The mass of the system changes in this process merely because the system was not closed during the binding process, so the energy escaped. For example, the binding energy of atomic nuclei is often lost in the form of gamma rays when the nuclei are formed, leaving nuclides which have less mass than the free particles (nucleons) of which they are composed. Mass–energy equivalence also holds in macroscopic systems. For example, if one takes exactly one kilogram of ice, and applies heat, the mass of the resulting melt-water will be more than a kilogram: it will include the mass from the thermal energy (latent heat) used to melt the ice; this follows from the conservation of energy. This number is small but not negligible: about 3.7 nanograms. It is given by the latent heat of melting ice (334 kJ/kg) divided by the speed of light squared (c2 ≈ 9.0 × 1016 m2/s2). General relativity In general relativity, the equivalence principle is the equivalence of gravitational and inertial mass. At the core of this assertion is Albert Einstein's idea that the gravitational force as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (i.e. accelerated) frame of reference. However, it turns out that it is impossible to find an objective general definition for the concept of invariant mass in general relativity. At the core of the problem is the non-linearity of the Einstein field equations, making it impossible to write the gravitational field energy as part of the stress–energy tensor in a way that is invariant for all observers. For a given observer, this can be achieved by the stress–energy–momentum pseudotensor. In quantum physics In classical mechanics, the inert mass of a particle appears in the Euler–Lagrange equation as a parameter m: for a particle moving in a potential V(x), the equation of motion reads m(d2x/dt2) = −∂V/∂x. After quantization, replacing the position vector x with a wave function, the parameter m appears in the kinetic energy operator: −(ħ2/2m)∇2. In the ostensibly covariant (relativistically invariant) Dirac equation, and in natural units, this becomes (iγμ∂μ − m)ψ = 0, where the "mass" parameter m is now simply a constant associated with the quantum described by the wave function ψ. In the Standard Model of particle physics as developed in the 1960s, this term arises from the coupling of the field ψ to an additional field Φ, the Higgs field. In the case of fermions, the Higgs mechanism results in the replacement of the term mψ in the Lagrangian with a coupling term of the form GψΦψ. This shifts the explanandum of the value for the mass of each elementary particle to the value of the unknown coupling constant Gψ. Tachyonic particles and imaginary (complex) mass A tachyonic field, or simply tachyon, is a quantum field with an imaginary mass. Although tachyons (particles that move faster than light) are a purely hypothetical concept not generally believed to exist, fields with imaginary mass have come to play an important role in modern physics and are discussed in popular books on physics.
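Before continuing with tachyonic fields, the melt-water figure quoted earlier in this passage can be reproduced directly. This Python sketch uses only the latent heat value given above and an assumed textbook value for the speed of light.
# Mass equivalent of the latent heat needed to melt 1 kg of ice: delta_m = E / c^2.
latent_heat_J_per_kg = 334e3   # J/kg, latent heat of melting ice (value quoted above)
c = 2.998e8                    # m/s, speed of light (assumed textbook value)

delta_m_kg = latent_heat_J_per_kg / c ** 2   # about 3.7e-12 kg per kilogram of ice melted
delta_m_ng = delta_m_kg * 1e12               # convert kilograms to nanograms (1 ng = 1e-12 kg)
print(round(delta_m_ng, 1))                  # about 3.7 nanograms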
Under no circumstances do any excitations ever propagate faster than light in such theories—the presence or absence of a tachyonic mass has no effect whatsoever on the maximum velocity of signals (there is no violation of causality). While the field may have imaginary mass, any physical particles do not; the "imaginary mass" shows that the system becomes unstable, and sheds the instability by undergoing a type of phase transition called tachyon condensation (closely related to second-order phase transitions) that results in symmetry breaking in current models of particle physics. The term "tachyon" was coined by Gerald Feinberg in a 1967 paper, but it was soon realized that Feinberg's model in fact did not allow for superluminal speeds. Instead, the imaginary mass creates an instability in the configuration: any configuration in which one or more field excitations are tachyonic will spontaneously decay, and the resulting configuration contains no physical tachyons. This process is known as tachyon condensation. Well-known examples include the condensation of the Higgs boson in particle physics, and ferromagnetism in condensed matter physics. Although the notion of a tachyonic imaginary mass might seem troubling because there is no classical interpretation of an imaginary mass, the mass is not quantized. Rather, the scalar field is; even for tachyonic quantum fields, the field operators at spacelike separated points still commute (or anticommute), thus preserving causality. Therefore, information still does not propagate faster than light, and solutions grow exponentially, but not superluminally (there is no violation of causality). Tachyon condensation drives a physical system that has reached a local limit and might naively be expected to produce physical tachyons, to an alternate stable state where no physical tachyons exist. Once the tachyonic field reaches the minimum of the potential, its quanta are not tachyons any more but rather are ordinary particles with a positive mass-squared. This is a special case of the general rule, where unstable massive particles are formally described as having a complex mass, with the real part being their mass in the usual sense, and the imaginary part being the decay rate in natural units. However, in quantum field theory, a particle (a "one-particle state") is roughly defined as a state which is constant over time; i.e., an eigenstate of the Hamiltonian. An unstable particle is a state which is only approximately constant over time; if it exists long enough to be measured, it can be formally described as having a complex mass, with the real part of the mass greater than its imaginary part. If both parts are of the same magnitude, this is interpreted as a resonance appearing in a scattering process rather than a particle, as it is considered not to exist long enough to be measured independently of the scattering process. In the case of a tachyon, the real part of the mass is zero, and hence no concept of a particle can be attributed to it. In a Lorentz invariant theory, the same formulas that apply to ordinary slower-than-light particles (sometimes called "bradyons" in discussions of tachyons) must also apply to tachyons.
In particular, the energy–momentum relation E2 = p2c2 + m2c4 (where p is the relativistic momentum of the bradyon and m is its rest mass) should still apply, along with the formula for the total energy of a particle: E = mc2/√(1 − v2/c2). This equation shows that the total energy of a particle (bradyon or tachyon) contains a contribution from its rest mass (the "rest mass–energy") and a contribution from its motion, the kinetic energy. When v is larger than c, the denominator in the equation for the energy is "imaginary", as the value under the radical is negative. Because the total energy must be real, the numerator must also be imaginary: i.e. the rest mass m must be imaginary, as a pure imaginary number divided by another pure imaginary number is a real number.
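The sign argument above can be made concrete by evaluating the quantity 1 − v2/c2 that sits under the square root in the total-energy formula. The speeds in the Python sketch below are arbitrary illustrative fractions of c.
# The factor under the radical in E = m c^2 / sqrt(1 - v^2/c^2), as a function of beta = v / c.
def radicand(beta):
    return 1.0 - beta ** 2

for beta in (0.5, 0.99, 1.0, 1.5):
    print(beta, radicand(beta))
# 0.5  -> 0.75    : ordinary (bradyonic) particle, real denominator
# 0.99 -> 0.0199  : denominator shrinks, so the energy grows large
# 1.0  -> 0.0     : the energy would diverge at the speed of light
# 1.5  -> -1.25   : negative radicand; a real E then requires an imaginary rest mass m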
https://en.wikipedia.org/wiki/Manganese
Manganese
Manganese is a chemical element; it has symbol Mn and atomic number 25. It is a hard, brittle, silvery metal, often found in minerals in combination with iron. Manganese was first isolated in the 1770s. It is a transition metal with a multifaceted array of industrial alloy uses, particularly in stainless steels. It improves strength, workability, and resistance to wear. Manganese oxide is used as an oxidising agent; as a rubber additive; and in glass making, fertilisers, and ceramics. Manganese sulfate can be used as a fungicide. Manganese is also an essential human dietary element, important in macronutrient metabolism, bone formation, and free radical defense systems. It is a critical component in dozens of proteins and enzymes. It is found mostly in the bones, but also in the liver, kidneys, and brain. In the human brain, manganese is bound to manganese metalloproteins, most notably glutamine synthetase in astrocytes. It is familiar in the laboratory in the form of the deep violet salt potassium permanganate. It occurs at the active sites in some enzymes. Of particular interest is the use of a Mn-O cluster, the oxygen-evolving complex, in the production of oxygen by plants. Characteristics Physical properties Manganese is a silvery-gray metal that resembles iron. It is hard and very brittle, difficult to fuse, but easy to oxidize. Manganese and its common ions are paramagnetic. Manganese tarnishes slowly in air and oxidizes ("rusts") like iron in water containing dissolved oxygen. Isotopes Naturally occurring manganese is composed of one stable isotope, 55Mn. Several radioisotopes have been isolated and described, ranging in atomic weight from 46 u (46Mn) to 72 u (72Mn). The most stable are 53Mn with a half-life of 3.7 million years, 54Mn with a half-life of 312.2 days, and 52Mn with a half-life of 5.591 days. All of the remaining radioactive isotopes have half-lives of less than three hours, and the majority have half-lives of less than one minute. The primary decay mode in isotopes lighter than the most abundant stable isotope, 55Mn, is electron capture, and the primary mode in heavier isotopes is beta decay. Manganese also has three meta states. Manganese is part of the iron group of elements, which are thought to be synthesized in large stars shortly before supernova explosions. 53Mn decays to 53Cr with a half-life of 3.7 million years. Because of its relatively short half-life, 53Mn is relatively rare, produced by cosmic ray impacts on iron. Manganese isotopic contents are typically combined with chromium isotopic contents and have found application in isotope geology and radiometric dating. Mn–Cr isotopic ratios reinforce the evidence from 26Al and 107Pd for the early history of the Solar System. Variations in 53Cr/52Cr and Mn/Cr ratios from several meteorites suggest an initial 53Mn/55Mn ratio, which indicates that Mn–Cr isotopic composition must result from in situ decay of 53Mn in differentiated planetary bodies. Hence, 53Mn provides additional evidence for nucleosynthetic processes immediately before coalescence of the Solar System. Allotropes Four allotropes (structural forms) of solid manganese are known, labeled α, β, γ and δ, and occurring at successively higher temperatures. All are metallic, stable at standard pressure, and have a cubic crystal lattice, but they vary widely in their atomic structures. Alpha manganese (α-Mn) is the equilibrium phase at room temperature.
It has a body-centered cubic lattice and is unusual among elemental metals in having a very complex unit cell, with 58 atoms per cell (29 atoms per primitive unit cell) in four different types of site. It is paramagnetic at room temperature and antiferromagnetic at temperatures below . Beta manganese (β-Mn) forms when heated above the transition temperature of . It has a primitive cubic structure with 20 atoms per unit cell at two types of sites, which is as complex as that of any other elemental metal. It is easily obtained as a metastable phase at room temperature by rapid quenching. It does not show magnetic ordering, remaining paramagnetic down to the lowest temperature measured (1.1 K). Gamma manganese (γ-Mn) forms when heated above . It has a simple face-centered cubic structure (four atoms per unit cell). When quenched to room temperature it converts to β-Mn, but it can be stabilized at room temperature by alloying it with at least 5 percent of other elements (such as C, Fe, Ni, Cu, Pd or Au), and these solute-stabilized alloys distort into a face-centered tetragonal structure. Delta manganese (δ-Mn) forms when heated above and is stable up to the manganese melting point of . It has a body-centered cubic structure (two atoms per cubic unit cell). Chemical compounds Common oxidation states of manganese are +2, +3, +4, +6, and +7, although all oxidation states from −3 to +7 have been observed. Manganese in oxidation state +7 is represented by salts of the intensely purple permanganate anion [MnO4]−. Potassium permanganate is a commonly used laboratory reagent because of its oxidizing properties; it is used as a topical medicine (for example, in the treatment of fish diseases). Solutions of potassium permanganate were among the first stains and fixatives to be used in the preparation of biological cells and tissues for electron microscopy. Aside from various permanganate salts, Mn(VII) is represented by the unstable, volatile derivative Mn2O7. Oxyhalides (MnO3F and MnO3Cl) are powerful oxidizing agents. The most prominent example of Mn in the +6 oxidation state is the green anion manganate, [MnO4]2−. Manganate salts are intermediates in the extraction of manganese from its ores. Compounds with oxidation state +5 are somewhat elusive, and often found associated with an oxide (O2−) or nitride (N3−) ligand. One example is the blue anion hypomanganate [MnO4]3−. Mn(IV) is somewhat enigmatic because it is common in nature but far rarer in synthetic chemistry. The most common Mn ore, pyrolusite, is MnO2. It is the dark brown pigment of many cave drawings but is also a common ingredient in dry cell batteries. Complexes of Mn(IV) are well known, but they require elaborate ligands. Mn(IV)-OH complexes are intermediates in some enzymes, including the oxygen evolving center (OEC) in plants. Simple derivatives of Mn3+ are rarely encountered but can be stabilized by suitably basic ligands. Manganese(III) acetate is an oxidant useful in organic synthesis. Solid compounds of manganese(III) are characterized by their strong purple-red color and a preference for distorted octahedral coordination resulting from the Jahn-Teller effect. A particularly common oxidation state for manganese in aqueous solution is +2, which has a pale pink color. Many manganese(II) compounds are known, such as the aquo complexes derived from manganese(II) sulfate (MnSO4) and manganese(II) chloride (MnCl2). This oxidation state is also seen in the mineral rhodochrosite (manganese(II) carbonate).
Manganese(II) commonly exists with a high-spin, S = 5/2 ground state because of the high pairing energy for manganese(II). There are no spin-allowed d–d transitions in manganese(II), which explains its faint color. Organomanganese compounds Manganese forms a large variety of organometallic derivatives, i.e., compounds with Mn-C bonds. The organometallic derivatives include numerous examples of Mn in its lower oxidation states, i.e. Mn(−III) up through Mn(I). This area of organometallic chemistry is attractive because Mn is inexpensive and of relatively low toxicity. Of greatest commercial interest is "MMT", methylcyclopentadienyl manganese tricarbonyl, which is used as an anti-knock compound added to gasoline (petrol) in some countries. It features Mn(I). Consistent with other aspects of Mn(II) chemistry, manganocene (Mn(C5H5)2) is high-spin. In contrast, its neighboring metal iron forms an air-stable, low-spin derivative in the form of ferrocene (Fe(C5H5)2). When conducted under an atmosphere of carbon monoxide, reduction of Mn(II) salts gives dimanganese decacarbonyl, Mn2(CO)10, an orange and volatile solid. The air-stability of this Mn(0) compound (and its many derivatives) reflects the powerful electron-acceptor properties of carbon monoxide. Many alkene complexes and alkyne complexes are derived from Mn2(CO)10. In Mn(CH3)2(dmpe)2, Mn(II) is low spin, which contrasts with the high spin character of its precursor, MnBr2(dmpe)2 (dmpe = (CH3)2PCH2CH2P(CH3)2). Polyalkyl and polyaryl derivatives of manganese often exist in higher oxidation states, reflecting the electron-releasing properties of alkyl and aryl ligands. One example is [Mn(CH3)6]2−. History The origin of the name manganese is complex. In ancient times, two black minerals were identified from the regions of the Magnetes (either Magnesia, located within modern Greece, or Magnesia ad Sipylum, located within modern Turkey). They were both called magnes from their place of origin, but were considered to differ in sex. The male magnes attracted iron, and was the iron ore now known as lodestone or magnetite, which probably gave us the term magnet. The female magnes ore did not attract iron, but was used to decolorize glass. This female magnes was later called magnesia, known now in modern times as pyrolusite or manganese dioxide. Neither this mineral nor elemental manganese is magnetic. In the 16th century, manganese dioxide was called manganesum (note the two Ns instead of one) by glassmakers, possibly as a corruption and concatenation of two words, since alchemists and glassmakers eventually had to differentiate a magnesia nigra (the black ore) from magnesia alba (a white ore, also from Magnesia, also useful in glassmaking). Michele Mercati called magnesia nigra manganesa, and finally the metal isolated from it became known as manganese. The name magnesia eventually was then used to refer only to the white magnesia alba (magnesium oxide), which provided the name magnesium for the free element when it was isolated much later. Manganese dioxide, which is abundant in nature, has long been used as a pigment. The cave paintings in Gargas that are 30,000 to 24,000 years old are made from the mineral form of MnO2 pigments. Manganese compounds were used by Egyptian and Roman glassmakers, either to add to, or remove, color from glass. Use as "glassmakers soap" continued through the Middle Ages until modern times and is evident in 14th-century glass from Venice.
Because it was used in glassmaking, manganese dioxide was available for experiments by alchemists, the first chemists. Ignatius Gottfried Kaim (1770) and Johann Glauber (17th century) discovered that manganese dioxide could be converted to permanganate, a useful laboratory reagent. Kaim also may have reduced manganese dioxide to isolate the metal, but that is uncertain. By the mid-18th century, the Swedish chemist Carl Wilhelm Scheele used manganese dioxide to produce chlorine. First, hydrochloric acid, or a mixture of dilute sulfuric acid and sodium chloride, was made to react with manganese dioxide, and later hydrochloric acid from the Leblanc process was used and the manganese dioxide was recycled by the Weldon process. The production of chlorine and hypochlorite bleaching agents was a large consumer of manganese ores. Scheele and others were aware that pyrolusite (the mineral form of manganese dioxide) contained a new element. Johan Gottlieb Gahn isolated an impure sample of manganese metal in 1774, which he did by reducing the dioxide with carbon. The manganese content of some iron ores used in Greece led to speculations that steel produced from that ore contained additional manganese, making the Spartan steel exceptionally hard. Around the beginning of the 19th century, manganese was used in steelmaking and several patents were granted. In 1816, it was documented that iron alloyed with manganese was harder but not more brittle. In 1837, British academic James Couper noted an association between miners' heavy exposure to manganese and a form of Parkinson's disease. In 1912, United States patents were granted for protecting firearms against rust and corrosion with manganese phosphate electrochemical conversion coatings, and the process has seen widespread use ever since. The invention of the Leclanché cell in 1866 and the subsequent improvement of batteries containing manganese dioxide as cathodic depolarizer increased the demand for manganese dioxide. Until the development of nickel–cadmium and lithium batteries, most batteries contained manganese. The zinc–carbon battery and the alkaline battery normally use industrially produced manganese dioxide because naturally occurring manganese dioxide contains impurities. In the 20th century, manganese dioxide was widely used as the cathodic material for commercial disposable dry batteries of both the standard (zinc–carbon) and alkaline types. Manganese is essential to iron and steel production by virtue of its sulfur-fixing, deoxidizing, and alloying properties. This application was first recognized by the British metallurgist Robert Forester Mushet (1811–1891) who, in 1856, introduced the element, in the form of Spiegeleisen. Occurrence Manganese comprises about 1000 ppm (0.1%) of the Earth's crust and is the 12th most abundant element. Soils contain 7–9000 ppm of manganese, with an average of 440 ppm. The atmosphere contains 0.01 μg/m3. Manganese occurs principally as pyrolusite (MnO2), braunite (Mn2+Mn3+6SiO12), psilomelane, and to a lesser extent as rhodochrosite (MnCO3). The most important manganese ore is pyrolusite (MnO2). Other economically important manganese ores usually show a close spatial relation to the iron ores, such as sphalerite. Land-based resources are large but irregularly distributed. About 80% of the known world manganese resources are in South Africa; other important manganese deposits are in Ukraine, Australia, India, China, Gabon and Brazil.
According to a 1978 estimate, the ocean floor holds 500 billion tons of manganese nodules. Attempts to find economically viable methods of harvesting manganese nodules were abandoned in the 1970s. In South Africa, most identified deposits are located near Hotazel in the Northern Cape Province (the Kalahari manganese fields), with a 2011 estimate of 15 billion tons. In 2011 South Africa produced 3.4 million tons, topping all other nations. Manganese is mainly mined in South Africa, Australia, China, Gabon, Brazil, India, Kazakhstan, Ghana, Ukraine and Malaysia. Production For the production of ferromanganese, the manganese ore is mixed with iron ore and carbon, and then reduced either in a blast furnace or in an electric arc furnace. The resulting ferromanganese has a manganese content of 30–80%. Pure manganese used for the production of iron-free alloys is produced by leaching manganese ore with sulfuric acid and a subsequent electrowinning process. A more progressive extraction process involves directly reducing (a low-grade) manganese ore by heap leaching. This is done by percolating natural gas through the bottom of the heap; the natural gas provides the heat (which needs to be at least 850 °C) and the reducing agent (carbon monoxide). This reduces all of the manganese ore to manganese oxide (MnO), which is a leachable form. The ore then travels through a grinding circuit to reduce the particle size of the ore to between 150 and 250 μm, increasing the surface area to aid leaching. The ore is then added to a leach tank of sulfuric acid and ferrous iron (Fe2+) in a 1.6:1 ratio. The iron reacts with the manganese dioxide (MnO2) to form iron hydroxide (FeO(OH)) and elemental manganese (Mn). This process yields approximately 92% recovery of the manganese. For further purification, the manganese can then be sent to an electrowinning facility. Oceanic environment In 1972, the CIA's Project Azorian, through billionaire Howard Hughes, commissioned the ship Hughes Glomar Explorer with the cover story of harvesting manganese nodules from the sea floor. That triggered a rush of activity to collect manganese nodules, which was not actually practical until the 2020s. The real mission of Hughes Glomar Explorer was to raise a sunken Soviet submarine, the K-129, with the goal of retrieving Soviet code books. An abundant resource of manganese exists in the form of manganese nodules found on the ocean floor; these nodules are composed of about 29% manganese. The environmental impacts of nodule collection are of interest. Dissolved manganese (dMn) is found throughout the world's oceans, 90% of which originates from hydrothermal vents. Particulate Mn develops in buoyant plumes over an active vent source, while the dMn behaves conservatively. Mn concentrations vary throughout the ocean's water column. At the surface, dMn is elevated due to input from external sources such as rivers, dust, and shelf sediments. Coastal sediments normally have lower Mn concentrations, but these can increase due to anthropogenic discharges from industries such as mining and steel manufacturing, which enter the ocean from river inputs. Surface dMn concentrations can also be elevated biologically through photosynthesis and physically from coastal upwelling and wind-driven surface currents. Internal cycling such as photo-reduction from UV radiation can also elevate levels by speeding up the dissolution of Mn-oxides and oxidative scavenging, preventing Mn from sinking to deeper waters. 
Elevated levels at mid-depths can occur near mid-ocean ridges and hydrothermal vents. The hydrothermal vents release dMn-enriched fluid into the water. The dMn can then travel up to 4,000 km, because the microbial capsules present prevent exchange with particles and lower the sinking rates. Dissolved Mn concentrations are even higher when oxygen levels are low. Overall, dMn concentrations are normally higher in coastal regions and decrease when moving offshore. Soils Manganese occurs in soils in three oxidation states: as the divalent cation Mn2+ and as brownish-black oxides and hydroxides containing Mn(III,IV), such as MnOOH and MnO2. Soil pH and oxidation-reduction conditions affect which of these three forms of Mn is dominant in a given soil. At pH values less than 6 or under anaerobic conditions, Mn(II) dominates, while under more alkaline and aerobic conditions, Mn(III,IV) oxides and hydroxides predominate. These effects of soil acidity and aeration state on the form of Mn can be modified or controlled by microbial activity. Microbial respiration can cause both the oxidation of Mn2+ to the oxides and the reduction of the oxides to the divalent cation. The Mn(III,IV) oxides exist as brownish-black stains and small nodules on sand, silt, and clay particles. These surface coatings on other soil particles have high surface area and carry negative charge. The charged sites can adsorb and retain various cations, especially heavy metals (e.g., Cr3+, Cu2+, Zn2+, and Pb2+). In addition, the oxides can adsorb organic acids and other compounds. The adsorption of the metals and organic compounds can then cause them to be oxidized while the Mn(III,IV) oxides are reduced to Mn2+ (e.g., Cr3+ to Cr(VI) and colorless hydroquinone to tea-colored quinone polymers). Applications Steel Manganese is essential to iron and steel production by virtue of its sulfur-fixing, deoxidizing, and alloying properties. Manganese has no satisfactory substitute in these applications in metallurgy. Steelmaking, including its ironmaking component, has accounted for most manganese demand, presently in the range of 85% to 90% of the total demand. Manganese is a key component of low-cost stainless steel. Often ferromanganese (usually about 80% manganese) is the intermediate in modern processes. Small amounts of manganese improve the workability of steel at high temperatures by forming a high-melting sulfide and preventing the formation of a liquid iron sulfide at the grain boundaries. If the manganese content reaches 4%, the embrittlement of the steel becomes a dominant feature. The embrittlement decreases at higher manganese concentrations and reaches an acceptable level at 8%. Steel containing 8 to 15% manganese has a high tensile strength of up to 863 MPa. Steel with 12% manganese was discovered in 1882 by Robert Hadfield and is still known as Hadfield steel (mangalloy). It was used for British military steel helmets and later by the U.S. military. Aluminium alloys Manganese is used in the production of alloys with aluminium. Aluminium with roughly 1.5% manganese has increased resistance to corrosion through grains that absorb impurities which would otherwise lead to galvanic corrosion. The corrosion-resistant aluminium alloys 3004 and 3104 (0.8 to 1.5% manganese) are used for most beverage cans. Before 2000, more than 1.6 million tonnes of those alloys were used; at 1% manganese, this consumed 16,000 tonnes of manganese. 
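The beverage-can figures above amount to a one-line mass balance: alloy tonnage multiplied by manganese fraction. A minimal Python sketch, using only the numbers quoted in the text, reproduces the stated 16,000 tonnes:
# Mass balance for manganese in aluminium beverage-can alloys,
# using only the figures quoted above (1.6 million tonnes of alloy at ~1% Mn).
alloy_tonnes = 1.6e6   # tonnes of 3004/3104 alloy
mn_fraction = 0.01     # manganese mass fraction
mn_tonnes = alloy_tonnes * mn_fraction
print(f"Manganese consumed: {mn_tonnes:,.0f} tonnes")  # -> 16,000 tonnes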
Batteries Manganese(IV) oxide was used in the original type of dry cell battery as an electron acceptor from zinc, and is the blackish material in carbon–zinc type flashlight cells. The manganese dioxide is reduced to the manganese oxide-hydroxide MnO(OH) during discharging, preventing the formation of hydrogen at the anode of the battery: MnO2 + H2O + e− → MnO(OH) + OH−. The same material also functions in newer alkaline batteries (usually battery cells), which use the same basic reaction, but a different electrolyte mixture. In 2002, more than 230,000 tons of manganese dioxide was used for this purpose. Resistors Copper alloys of manganese, such as Manganin, are commonly found in metal element shunt resistors used for measuring relatively large amounts of current. These alloys have a very low temperature coefficient of resistance and are resistant to sulfur. This makes the alloys particularly useful in harsh automotive and industrial environments. Fertilizers and feed additive Manganese oxide and sulfate are components of fertilizers. In the year 2000, an estimated 20,000 tons of these compounds were used in fertilizers in the US alone. A comparable amount of Mn compounds was also used in animal feeds. Niche Methylcyclopentadienyl manganese tricarbonyl is an additive in some unleaded gasoline to boost octane rating and reduce engine knocking. Manganese(IV) oxide (manganese dioxide, MnO2) is used as a reagent in organic chemistry for the oxidation of benzylic alcohols (where the hydroxyl group is adjacent to an aromatic ring). Manganese dioxide has been used since antiquity to oxidize and neutralize the greenish tinge in glass from trace amounts of iron contamination. MnO2 is also used in the manufacture of oxygen and chlorine and in drying black paints. In some preparations, it is a brown pigment for paint and is a constituent of natural umber. Tetravalent manganese is used as an activator in red-emitting phosphors. While many compounds are known which show luminescence, the majority are not used in commercial applications due to low efficiency or deep red emission. However, several Mn4+-activated fluorides were reported as potential red-emitting phosphors for warm-white LEDs; to date, only K2SiF6:Mn4+ is commercially available for use in warm-white LEDs. The metal is occasionally used in coins; until 2000, the only United States coin to use manganese was the "wartime" nickel from 1942 to 1945. An alloy of 75% copper and 25% nickel was traditionally used for the production of nickel coins. However, because of a shortage of nickel metal during the war, it was replaced by the more readily available silver and manganese, resulting in an alloy of 56% copper, 35% silver and 9% manganese. Since 2000, dollar coins, for example the Sacagawea dollar and the Presidential $1 coins, are made from a brass containing 7% manganese with a pure copper core. In both the nickel and the dollar coin, manganese was used to duplicate the electromagnetic properties of a previous identically sized and valued coin in the mechanisms of vending machines. In the case of the later U.S. dollar coins, the manganese alloy was intended to duplicate the properties of the copper/nickel alloy used in the previous Susan B. Anthony dollar. Manganese compounds have been used as pigments and for the coloring of ceramics and glass. The brown color of ceramic is sometimes the result of manganese compounds. In the glass industry, manganese compounds are used for two effects. 
Manganese(III) reacts with iron(II) to reduce strong green color in glass by forming less-colored iron(III) and slightly pink manganese(II), compensating for the residual color of the iron(III). Larger quantities of manganese are used to produce pink colored glass. In 2009, Mas Subramanian and associates at Oregon State University discovered that manganese can be combined with yttrium and indium to form an intensely blue, non-toxic, inert, fade-resistant pigment, YInMn Blue, the first new blue pigment discovered in 200 years. Biochemistry Many classes of enzymes contain manganese cofactors including oxidoreductases, transferases, hydrolases, lyases, isomerases and ligases. Other enzymes containing manganese are arginase and a Mn-containing superoxide dismutase (Mn-SOD). Some reverse transcriptases of many retroviruses (although not lentiviruses such as HIV) contain manganese. Manganese-containing polypeptides are the diphtheria toxin, lectins, and integrins. The oxygen-evolving complex (OEC), containing four atoms of manganese, is a part of photosystem II contained in the thylakoid membranes of chloroplasts. The OEC is responsible for the terminal photooxidation of water during the light reactions of photosynthesis, i.e., it is the catalyst that makes the O2 produced by plants. Human health and nutrition Manganese is an essential human dietary element and is present as a coenzyme in several biological processes, which include macronutrient metabolism, bone formation, and free radical defense systems. Manganese is a critical component in dozens of proteins and enzymes. The human body contains about 12 mg of manganese, mostly in the bones. The soft tissue remainder is concentrated in the liver and kidneys. In the human brain, the manganese is bound to manganese metalloproteins, most notably glutamine synthetase in astrocytes. Regulation The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for minerals in 2001. For manganese, there was not sufficient information to set EARs and RDAs, so needs are described as estimates for Adequate Intakes (AIs). As for safety, the IOM sets Tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of manganese, the adult UL is set at 11 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). Manganese deficiency is rare. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For people ages 15 and older, the AI is set at 3.0 mg/day. AIs for pregnancy and lactation is 3.0 mg/day. For children ages 1–14 years, the AIs increase with age from 0.5 to 2.0 mg/day. The adult AIs are higher than the U.S. RDAs. The EFSA reviewed the same safety question and decided that there was insufficient information to set a UL. For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For manganese labeling purposes, 100% of the Daily Value was 2.0 mg, but as of 27 May 2016 it was revised to 2.3 mg to bring it into agreement with the RDA. A table of the old and new adult daily values is provided at Reference Daily Intake. 
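The percent Daily Value arithmetic described above is a simple ratio against the 2.3 mg figure. A brief Python sketch; the 0.5 mg serving content is a hypothetical example value, not a number from the text:
# Percent Daily Value (%DV) for manganese on US labels.
# The 2.3 mg Daily Value is quoted above; the serving content is hypothetical.
DAILY_VALUE_MG = 2.3
def percent_dv(mg_per_serving):
    """Return the %DV for a serving containing the given mass of Mn in mg."""
    return 100.0 * mg_per_serving / DAILY_VALUE_MG
print(f"{percent_dv(0.5):.0f}% DV")  # a serving with 0.5 mg Mn -> about 22% DV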
Excessive exposure or intake may lead to a condition known as manganism, a neurodegenerative disorder that causes dopaminergic neuronal death and symptoms similar to Parkinson's disease. Deficiency Manganese deficiency in humans, which is rare, results in a number of medical problems. A deficiency of manganese causes skeletal deformation in animals and inhibits the production of collagen in wound healing. Exposure In water Waterborne manganese has a greater bioavailability than dietary manganese. According to results from a 2010 study, higher levels of exposure to manganese in drinking water are associated with increased intellectual impairment and reduced intelligence quotients in school-age children. It is hypothesized that long-term exposure due to inhaling the naturally occurring manganese in shower water puts up to 8.7 million Americans at risk. However, data indicates that the human body can recover from certain adverse effects of overexposure to manganese if the exposure is stopped and the body can clear the excess. Mn levels can increase in seawater when hypoxic periods occur. Since 1990 there have been reports of Mn accumulation in marine organisms including fish, crustaceans, mollusks, and echinoderms. Specific tissues are targets in different species, including the gills, brain, blood, kidney, and liver/hepatopancreas. Physiological effects have been reported in these species. Mn can affect the renewal of immunocytes and their functionality, such as phagocytosis and activation of pro-phenoloxidase, suppressing the organisms' immune systems. This causes the organisms to be more susceptible to infections. As climate change occurs, pathogen distributions increase, and in order for organisms to survive and defend themselves against these pathogens, they need a healthy, strong immune system. If their systems are compromised from high Mn levels, they will not be able to fight off these pathogens and die. Gasoline Methylcyclopentadienyl manganese tricarbonyl (MMT) is an additive developed to replace lead compounds for gasolines to improve the octane rating. MMT is used only in a few countries. Fuels containing manganese tend to form manganese carbides, which damage exhaust valves. Air Compared to 1953, levels of manganese in air have dropped. Generally, exposure to ambient Mn air concentrations in excess of 5 μg Mn/m3 can lead to Mn-induced symptoms. Increased ferroportin protein expression in human embryonic kidney (HEK293) cells is associated with decreased intracellular Mn concentration and attenuated cytotoxicity, characterized by the reversal of Mn-reduced glutamate uptake and diminished lactate dehydrogenase leakage. Regulation Manganese exposure in United States is regulated by the Occupational Safety and Health Administration (OSHA). People can be exposed to manganese in the workplace by breathing it in or swallowing it. OSHA has set the legal limit (permissible exposure limit) for manganese exposure in the workplace as 5 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 1 mg/m3 over an 8-hour workday and a short term limit of 3 mg/m3. At levels of 500 mg/m3, manganese is immediately dangerous to life and health. Health and safety Manganese is essential for human health, albeit in milligram amounts. The current maximum safe concentration under U.S. EPA rules is 50 μg Mn/L. 
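To relate the airborne limits quoted above to an approximate dose, industrial hygiene calculations often multiply a concentration by the volume of air breathed during a shift. The sketch below assumes a nominal 10 m3 of air inhaled per 8-hour workday; that breathing volume is a conventional assumption, not a figure from the text:
# Rough inhaled-mass estimate for an 8-hour shift at a given airborne Mn level.
# The 10 m3 per shift breathing volume is an assumed nominal figure.
BREATHING_VOLUME_M3 = 10.0
def inhaled_mg(concentration_mg_per_m3):
    """Approximate mass of Mn (mg) inhaled over one 8-hour shift."""
    return concentration_mg_per_m3 * BREATHING_VOLUME_M3
for label, limit in [("OSHA PEL", 5.0), ("NIOSH REL", 1.0)]:
    print(f"{label} ({limit} mg/m3): ~{inhaled_mg(limit):.0f} mg Mn per shift")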
Manganism Manganese overexposure is most frequently associated with manganism, a rare neurological disorder associated with excessive manganese ingestion or inhalation. Historically, persons employed in the production or processing of manganese alloys have been at risk for developing manganism; however, health and safety regulations protect workers in developed nations. The disorder was first described in 1837 by British academic John Couper, who studied two patients who were manganese grinders. Manganism is a biphasic disorder. In its early stages, an intoxicated person may experience depression, mood swings, compulsive behaviors, and psychosis. Early neurological symptoms give way to late-stage manganism, which resembles Parkinson's disease. Symptoms include weakness, monotone and slowed speech, an expressionless face, tremor, forward-leaning gait, inability to walk backwards without falling, rigidity, and general problems with dexterity, gait and balance. Unlike Parkinson's disease, manganism is not associated with loss of the sense of smell and patients are typically unresponsive to treatment with L-DOPA. Symptoms of late-stage manganism become more severe over time even if the source of exposure is removed and brain manganese levels return to normal. Chronic manganese exposure has been shown to produce a parkinsonism-like illness characterized by movement abnormalities. This condition is not responsive to typical therapies used in the treatment of PD, suggesting an alternative pathway to the typical dopaminergic loss within the substantia nigra. Manganese may accumulate in the basal ganglia, leading to the abnormal movements. A mutation of the SLC30A10 gene, a manganese efflux transporter necessary for decreasing intracellular Mn, has been linked with the development of this Parkinsonism-like disease. The Lewy bodies typical to PD are not seen in Mn-induced parkinsonism. Animal experiments have given the opportunity to examine the consequences of manganese overexposure under controlled conditions. In (non-aggressive) rats, manganese induces mouse-killing behavior. Toxicity Manganese compounds are less toxic than those of other widespread metals, such as nickel and copper. However, exposure to manganese dusts and fumes should not exceed the ceiling value of 5 mg/m3 even for short periods because of its toxicity level. Manganese poisoning has been linked to impaired motor skills and cognitive disorders. Neurodegenerative diseases A protein called DMT1 is the major transporter in manganese absorption from the intestine and may be the major transporter of manganese across the blood–brain barrier. DMT1 also transports inhaled manganese across the nasal epithelium. The proposed mechanism for manganese toxicity is that dysregulation leads to oxidative stress, mitochondrial dysfunction, glutamate-mediated excitotoxicity, and aggregation of proteins.
Physical sciences
Chemical elements_2
null
19052
https://en.wikipedia.org/wiki/Molybdenum
Molybdenum
Molybdenum is a chemical element; it has symbol Mo (from Neo-Latin molybdaenum) and atomic number 42. The name is derived from Ancient Greek , meaning lead, since its ores were confused with lead ores. Molybdenum minerals have been known throughout history, but the element was discovered (in the sense of differentiating it as a new entity from the mineral salts of other metals) in 1778 by Carl Wilhelm Scheele. The metal was first isolated in 1781 by Peter Jacob Hjelm. Molybdenum does not occur naturally as a free metal on Earth; in its minerals, it is found only in oxidized states. The free element, a silvery metal with a grey cast, has the sixth-highest melting point of any element. It readily forms hard, stable carbides in alloys, and for this reason most of the world production of the element (about 80%) is used in steel alloys, including high-strength alloys and superalloys. Most molybdenum compounds have low solubility in water. Heating molybdenum-bearing minerals under oxygen and water affords molybdate ion , which forms quite soluble salts. Industrially, molybdenum compounds (about 14% of world production of the element) are used as pigments and catalysts. Molybdenum-bearing enzymes are by far the most common bacterial catalysts for breaking the chemical bond in atmospheric molecular nitrogen in the process of biological nitrogen fixation. At least 50 molybdenum enzymes are now known in bacteria, plants, and animals, although only bacterial and cyanobacterial enzymes are involved in nitrogen fixation. Most nitrogenases contain an iron–molybdenum cofactor, FeMoco, which is believed to contain either Mo(III) or Mo(IV). By contrast, Mo(VI) and Mo(IV) are complexed with molybdopterin in all other molybdenum-bearing enzymes. Molybdenum is an essential element for all higher eukaryotic organisms, including humans. A species of sponge, Theonella conica, is known for hyperaccumulation of molybdenum. Characteristics Physical properties In its pure form, molybdenum is a silvery-grey metal with a Mohs hardness of 5.5 and a standard atomic weight of 95.95 g/mol. It has a melting point of , sixth highest of the naturally occurring elements; only tantalum, osmium, rhenium, tungsten, and carbon have higher melting points. It has one of the lowest coefficients of thermal expansion among commercially used metals. Chemical properties Molybdenum is a transition metal with an electronegativity of 2.16 on the Pauling scale. It does not visibly react with oxygen or water at room temperature, but is attacked by halogens and hydrogen peroxide. Weak oxidation of molybdenum starts at ; bulk oxidation occurs at temperatures above 600 °C, resulting in molybdenum trioxide. Like many heavier transition metals, molybdenum shows little inclination to form a cation in aqueous solution, although the Mo3+ cation is known to form under carefully controlled conditions. Gaseous molybdenum consists of the diatomic species Mo2. That molecule is a singlet, with an additional pair of electrons in a bonding orbital on top of 5 conventional bonds; the result is a sextuple bond. Isotopes There are 39 known isotopes of molybdenum, ranging in atomic mass from 81 to 119, as well as 13 metastable nuclear isomers. Seven isotopes occur naturally, with atomic masses of 92, 94, 95, 96, 97, 98, and 100. Of these naturally occurring isotopes, only molybdenum-100 is unstable. Molybdenum-98 is the most abundant isotope, comprising 24.14% of all molybdenum. Molybdenum-100 has a half-life of about 10^19 y and undergoes double beta decay into ruthenium-100. 
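The quoted half-life translates into an expected decay rate through the usual relation λ = ln 2 / t½. A minimal Python sketch, using the roughly 10^19-year half-life and the 95.95 g/mol atomic weight given above; the ~9.7% natural abundance of molybdenum-100 is an assumed outside value, not stated in this passage:
import math
# Decay constant and activity of the Mo-100 fraction in 1 g of natural Mo.
# Half-life (~1e19 y) and atomic weight (95.95 g/mol) are from the text;
# the 9.7% Mo-100 abundance is an assumed value, not given in this passage.
AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7
half_life_s = 1e19 * SECONDS_PER_YEAR
decay_constant = math.log(2) / half_life_s       # per second
atoms_mo100 = (1.0 / 95.95) * AVOGADRO * 0.097   # Mo-100 atoms in 1 g of natural Mo
activity_bq = decay_constant * atoms_mo100       # decays per second
print(f"lambda ~ {decay_constant:.2e} per second")
print(f"activity ~ {activity_bq:.2e} Bq per gram of natural Mo")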
All unstable isotopes of molybdenum decay into isotopes of niobium, technetium, and ruthenium. Of the synthetic radioisotopes, the most stable is 93Mo, with a half-life of 4,839 years. The most common isotopic molybdenum application involves molybdenum-99, which is a fission product. It is a parent radioisotope to the short-lived gamma-emitting daughter radioisotope technetium-99m, a nuclear isomer used in various imaging applications in medicine. In 2008, the Delft University of Technology applied for a patent on the molybdenum-98-based production of molybdenum-99. Compounds Molybdenum forms chemical compounds in oxidation states −4 and from −2 to +6. Higher oxidation states are more relevant to its terrestrial occurrence and its biological roles, mid-level oxidation states are often associated with metal clusters, and very low oxidation states are typically associated with organomolybdenum compounds. The chemistry of molybdenum and tungsten shows strong similarities. The relative rarity of molybdenum(III), for example, contrasts with the pervasiveness of the chromium(III) compounds. The highest oxidation state is seen in molybdenum(VI) oxide (MoO3), whereas the normal sulfur compound is molybdenum disulfide, MoS2. From the perspective of commerce, the most important compounds are molybdenum disulfide (MoS2) and molybdenum trioxide (MoO3). The black disulfide is the main mineral. It is roasted in air to give the trioxide: 2 MoS2 + 7 O2 → 2 MoO3 + 4 SO2. The trioxide, which is volatile at high temperatures, is the precursor to virtually all other Mo compounds as well as alloys. Molybdenum has several oxidation states, the most stable being +4 and +6. Molybdenum(VI) oxide is soluble in strongly alkaline water, forming molybdates (MoO42−). Molybdates are weaker oxidants than chromates. They tend to form structurally complex oxyanions by condensation at lower pH values, such as [Mo7O24]6− and [Mo8O26]4−. Polymolybdates can incorporate other ions, forming polyoxometalates. The dark-blue phosphorus-containing heteropolymolybdate P[Mo12O40]3− is used for the spectroscopic detection of phosphorus. The broad range of oxidation states of molybdenum is reflected in the various molybdenum chlorides: Molybdenum(II) chloride MoCl2, which exists as the hexamer Mo6Cl12 and the related dianion [Mo6Cl14]2−. Molybdenum(III) chloride MoCl3, a dark red solid, which converts to the trianionic complex [MoCl6]3−. Molybdenum(IV) chloride MoCl4, a black solid, which adopts a polymeric structure. Molybdenum(V) chloride MoCl5, a dark green solid, which adopts a dimeric structure. Molybdenum(VI) chloride MoCl6, a black solid, which is monomeric and slowly decomposes to MoCl5 and Cl2 at room temperature. The accessibility of these oxidation states depends quite strongly on the halide counterion: although molybdenum(VI) fluoride is stable, molybdenum does not form a stable hexachloride, pentabromide, or tetraiodide. Like chromium and some other transition metals, molybdenum forms quadruple bonds, such as in Mo2(CH3COO)4 and [Mo2Cl8]4−. The Lewis acid properties of the butyrate and perfluorobutyrate dimers, Mo2(O2CR)4 and Rh2(O2CR)4, have been reported. Oxidation states of 0 and lower are possible with carbon monoxide as a ligand, as in molybdenum hexacarbonyl, Mo(CO)6. History Molybdenite—the principal ore from which molybdenum is now extracted—was previously known as molybdena. Molybdena was confused with and often utilized as though it were graphite. 
Like graphite, molybdenite can be used to blacken a surface or as a solid lubricant. Even when molybdena was distinguishable from graphite, it was still confused with the common lead ore PbS (now called galena); the name comes from Ancient Greek , meaning lead. (The Greek word itself has been proposed as a loanword from Anatolian Luvian and Lydian languages). Although (reportedly) molybdenum was deliberately alloyed with steel in one 14th-century Japanese sword (mfd. ), that art was never employed widely and was later lost. In the West in 1754, Bengt Andersson Qvist examined a sample of molybdenite and determined that it did not contain lead and thus was not galena. By 1778 Swedish chemist Carl Wilhelm Scheele stated firmly that molybdena was (indeed) neither galena nor graphite. Instead, Scheele correctly proposed that molybdena was an ore of a distinct new element, named molybdenum for the mineral in which it resided, and from which it might be isolated. Peter Jacob Hjelm successfully isolated molybdenum using carbon and linseed oil in 1781. For the next century, molybdenum had no industrial use. It was relatively scarce, the pure metal was difficult to extract, and the necessary techniques of metallurgy were immature. Early molybdenum steel alloys showed great promise of increased hardness, but efforts to manufacture the alloys on a large scale were hampered with inconsistent results, a tendency toward brittleness, and recrystallization. In 1906, William D. Coolidge filed a patent for rendering molybdenum ductile, leading to applications as a heating element for high-temperature furnaces and as a support for tungsten-filament light bulbs; oxide formation and degradation require that molybdenum be physically sealed or held in an inert gas. In 1913, Frank E. Elmore developed a froth flotation process to recover molybdenite from ores; flotation remains the primary isolation process. During World War I, demand for molybdenum spiked; it was used both in armor plating and as a substitute for tungsten in high-speed steels. Some British tanks were protected by 75 mm (3 in) manganese steel plating, but this proved to be ineffective. The manganese steel plates were replaced with much lighter molybdenum steel plates allowing for higher speed, greater maneuverability, and better protection. The Germans also used molybdenum-doped steel for heavy artillery, like in the super-heavy howitzer Big Bertha, because traditional steel melts at the temperatures produced by the propellant of the one ton shell. After the war, demand plummeted until metallurgical advances allowed extensive development of peacetime applications. In World War II, molybdenum again saw strategic importance as a substitute for tungsten in steel alloys. Occurrence and production Molybdenum is the 54th most abundant element in the Earth's crust with an average of 1.5 parts per million and the 25th most abundant element in the oceans, with an average of 10 parts per billion; it is the 42nd most abundant element in the Universe. The Soviet Luna 24 mission discovered a molybdenum-bearing grain (1 × 0.6 μm) in a pyroxene fragment taken from Mare Crisium on the Moon. The comparative rarity of molybdenum in the Earth's crust is offset by its concentration in a number of water-insoluble ores, often combined with sulfur in the same way as copper, with which it is often found. Though molybdenum is found in such minerals as wulfenite (PbMoO4) and powellite (CaMoO4), the main commercial source is molybdenite (MoS2). 
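Because molybdenite is the commercial source, the roasting and hydrogen-reduction chemistry laid out in the production section below reduces to simple stoichiometry. A minimal Python sketch of the theoretical mass balance, using the 95.95 g/mol atomic weight given earlier; the sulfur and oxygen atomic weights are standard values assumed here:
# Theoretical molybdenum yield from pure molybdenite (MoS2), following the
# roast-to-MoO3 / reduce-with-hydrogen route described in the production section.
# Mo = 95.95 g/mol is from the text; S and O atomic weights are standard values.
M_MO, M_S, M_O = 95.95, 32.06, 16.00
m_mos2 = M_MO + 2 * M_S    # molar mass of MoS2 (~160.07 g/mol)
m_moo3 = M_MO + 3 * M_O    # molar mass of MoO3 (~143.95 g/mol)
per_tonne = 1000.0         # kg of pure MoS2 feed
moo3_kg = per_tonne * m_moo3 / m_mos2
mo_kg = per_tonne * M_MO / m_mos2
print(f"1 t MoS2 -> {moo3_kg:.0f} kg MoO3 -> {mo_kg:.0f} kg Mo (theoretical)")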
Molybdenum is mined as a principal ore and is also recovered as a byproduct of copper and tungsten mining. The world's production of molybdenum was 250,000 tonnes in 2011, the largest producers being China (94,000 t), the United States (64,000 t), Chile (38,000 t), Peru (18,000 t) and Mexico (12,000 t). The total reserves are estimated at 10 million tonnes, and are mostly concentrated in China (4.3 Mt), the US (2.7 Mt) and Chile (1.2 Mt). By continent, 93% of world molybdenum production is about evenly shared between North America, South America (mainly in Chile), and China. Europe and the rest of Asia (mostly Armenia, Russia, Iran and Mongolia) produce the remainder. In molybdenite processing, the ore is first roasted in air at a temperature of . The process gives gaseous sulfur dioxide and the molybdenum(VI) oxide: 2MoS2 + 7O2 -> 2MoO3 + 4SO2 The resulting oxide is then usually extracted with aqueous ammonia to give ammonium molybdate: MoO3 + 2NH3 + H2O -> (NH4)2(MoO4) Copper, an impurity in molybdenite, is separated at this stage by treatment with hydrogen sulfide. Ammonium molybdate converts to ammonium dimolybdate, which is isolated as a solid. Heating this solid gives molybdenum trioxide: (NH4)2Mo2O7 -> 2MoO3 + 2NH3 + H2O Crude trioxide can be further purified by sublimation at . Metallic molybdenum is produced by reduction of the oxide with hydrogen: MoO3 + 3H2 -> Mo + 3H2O The molybdenum for steel production is reduced by the aluminothermic reaction with addition of iron to produce ferromolybdenum. A common form of ferromolybdenum contains 60% molybdenum. Molybdenum had a value of approximately $30,000 per tonne as of August 2009. It maintained a price at or near $10,000 per tonne from 1997 through 2003, and reached a peak of $103,000 per tonne in June 2005. In 2008, the London Metal Exchange announced that molybdenum would be traded as a commodity. Mining The Knaben mine in southern Norway, opened in 1885, was the first dedicated molybdenum mine. Closed in 1973 but reopened in 2007, it now produces of molybdenum disulfide per year. Large mines in Colorado (such as the Henderson mine and the Climax mine) and in British Columbia yield molybdenite as their primary product, while many porphyry copper deposits such as the Bingham Canyon Mine in Utah and the Chuquicamata mine in northern Chile produce molybdenum as a byproduct of copper-mining. Applications Alloys About 86% of molybdenum produced is used in metallurgy, with the rest used in chemical applications. The estimated global use is structural steel 35%, stainless steel 25%, chemicals 14%, tool & high-speed steels 9%, cast iron 6%, molybdenum elemental metal 6%, and superalloys 5%. Molybdenum can withstand extreme temperatures without significantly expanding or softening, making it useful in environments of intense heat, including military armor, aircraft parts, electrical contacts, industrial motors, and supports for filaments in light bulbs. Most high-strength steel alloys (for example, 41xx steels) contain 0.25% to 8% molybdenum. Even in these small portions, more than 43,000 tonnes of molybdenum are used each year in stainless steels, tool steels, cast irons, and high-temperature superalloys. Molybdenum is also used in steel alloys for its high corrosion resistance and weldability. Molybdenum contributes corrosion resistance to type-300 stainless steels (specifically type-316) and especially so in the so-called superaustenitic stainless steels (such as alloy AL-6XN, 254SMO and 1925hMo). 
Molybdenum increases lattice strain, thus increasing the energy required to dissolve iron atoms from the surface. Molybdenum is also used to enhance the corrosion resistance of ferritic (for example grade 444) and martensitic (for example 1.4122 and 1.4418) stainless steels. Because of its lower density and more stable price, molybdenum is sometimes used in place of tungsten. An example is the 'M' series of high-speed steels such as M2, M4 and M42 as substitution for the 'T' steel series, which contain tungsten. Molybdenum can also be used as a flame-resistant coating for other metals. Although its melting point is , molybdenum rapidly oxidizes at temperatures above making it better-suited for use in vacuum environments. TZM (Mo (~99%), Ti (~0.5%), Zr (~0.08%) and some C) is a corrosion-resisting molybdenum superalloy that resists molten fluoride salts at temperatures above . It has about twice the strength of pure Mo, and is more ductile and more weldable, yet in tests it resisted corrosion of a standard eutectic salt (FLiBe) and salt vapors used in molten salt reactors for 1100 hours with so little corrosion that it was difficult to measure. Due to its excellent mechanical properties under high temperature and high pressure, TZM alloys are extensively applied in the military industry. It is used as the valve body of torpedo engines, rocket nozzles and gas pipelines, where it can withstand extreme thermal and mechanical stresses. It is also used as radiation shields in nuclear applications. Other molybdenum-based alloys that do not contain iron have only limited applications. For example, because of its resistance to molten zinc, both pure molybdenum and molybdenum-tungsten alloys (70%/30%) are used for piping, stirrers and pump impellers that come into contact with molten zinc. Pure element applications Molybdenum powder is used as a fertilizer for some plants, such as cauliflower. Elemental molybdenum is used in NO, NO2, NOx analyzers in power plants for pollution controls. At , the element acts as a catalyst for NO2/NOx to form NO molecules for detection by infrared light. Molybdenum anodes replace tungsten in certain low voltage X-ray sources for specialized uses such as mammography. The radioactive isotope molybdenum-99 is used to generate technetium-99m, used for medical imaging The isotope is handled and stored as the molybdate. Compound applications Molybdenum disulfide (MoS2) is used as a solid lubricant and a high-pressure high-temperature (HPHT) anti-wear agent. It forms strong films on metallic surfaces and is a common additive to HPHT greases — in the event of a catastrophic grease failure, a thin layer of molybdenum prevents contact of the lubricated parts. When combined with small amounts of cobalt, MoS2 is also used as a catalyst in the hydrodesulfurization (HDS) of petroleum. In the presence of hydrogen, this catalyst facilitates the removal of nitrogen and especially sulfur from the feedstock, which otherwise would poison downstream catalysts. HDS is one of the largest scale applications of catalysis in industry. Molybdenum oxides are important catalysts for selective oxidation of organic compounds. The production of the commodity chemicals acrylonitrile and formaldehyde relies on MoOx-based catalysts. Molybdenum disilicide (MoSi2) is an electrically conducting ceramic with primary use in heating elements operating at temperatures above 1500 °C in air. Molybdenum trioxide (MoO3) is used as an adhesive between enamels and metals. 
Lead molybdate (wulfenite) co-precipitated with lead chromate and lead sulfate is a bright-orange pigment used with ceramics and plastics. The molybdenum-based mixed oxides are versatile catalysts in the chemical industry. Some examples are the catalysts for the oxidation of carbon monoxide, propylene to acrolein and acrylic acid, the ammoxidation of propylene to acrylonitrile. Molybdenum carbides, nitride and phosphides can be used for hydrotreatment of rapeseed oil. Ammonium heptamolybdate is used in biological staining. Molybdenum coated soda lime glass is used in CIGS (copper indium gallium selenide) solar cells, called CIGS solar cells. Phosphomolybdic acid is a stain used in thin-layer chromatography and trichrome staining in histochemistry. Biological role Mo-containing enzymes Molybdenum is an essential element in most organisms; a 2008 research paper speculated that a scarcity of molybdenum in the Earth's early oceans may have strongly influenced the evolution of eukaryotic life (which includes all plants and animals). At least 50 molybdenum-containing enzymes have been identified, mostly in bacteria. Those enzymes include aldehyde oxidase, sulfite oxidase and xanthine oxidase. With one exception, Mo in proteins is bound by molybdopterin to give the molybdenum cofactor. The only known exception is nitrogenase, which uses the FeMoco cofactor, which has the formula Fe7MoS9C. In terms of function, molybdoenzymes catalyze the oxidation and sometimes reduction of certain small molecules in the process of regulating nitrogen, sulfur, and carbon. In some animals, and in humans, the oxidation of xanthine to uric acid, a process of purine catabolism, is catalyzed by xanthine oxidase, a molybdenum-containing enzyme. The activity of xanthine oxidase is directly proportional to the amount of molybdenum in the body. An extremely high concentration of molybdenum reverses the trend and can inhibit purine catabolism and other processes. Molybdenum concentration also affects protein synthesis, metabolism, and growth. Mo is a component in most nitrogenases. Among molybdoenzymes, nitrogenases are unique in lacking the molybdopterin. Nitrogenases catalyze the production of ammonia from atmospheric nitrogen: The biosynthesis of the FeMoco active site is highly complex. Molybdate is transported in the body as MoO42−. Human metabolism and deficiency Molybdenum is an essential trace dietary element. Four mammalian Mo-dependent enzymes are known, all of them harboring a pterin-based molybdenum cofactor (Moco) in their active site: sulfite oxidase, xanthine oxidoreductase, aldehyde oxidase, and mitochondrial amidoxime reductase. People severely deficient in molybdenum have poorly functioning sulfite oxidase and are prone to toxic reactions to sulfites in foods. The human body contains about 0.07 mg of molybdenum per kilogram of body weight, with higher concentrations in the liver and kidneys and lower in the vertebrae. Molybdenum is also present within human tooth enamel and may help prevent its decay. Acute toxicity has not been seen in humans, and the toxicity depends strongly on the chemical state. Studies on rats show a median lethal dose (LD50) as low as 180 mg/kg for some Mo compounds. Although human toxicity data is unavailable, animal studies have shown that chronic ingestion of more than 10 mg/day of molybdenum can cause diarrhea, growth retardation, infertility, low birth weight, and gout; it can also affect the lungs, kidneys, and liver. Sodium tungstate is a competitive inhibitor of molybdenum. 
Dietary tungsten reduces the concentration of molybdenum in tissues. Low soil concentration of molybdenum in a geographical band from northern China to Iran results in a general dietary molybdenum deficiency and is associated with increased rates of esophageal cancer. Compared to the United States, which has a greater supply of molybdenum in the soil, people living in those areas have about 16 times greater risk for esophageal squamous cell carcinoma. Molybdenum deficiency has also been reported as a consequence of non-molybdenum supplemented total parenteral nutrition (complete intravenous feeding) for long periods of time. It results in high blood levels of sulfite and urate, in much the same way as molybdenum cofactor deficiency. Since pure molybdenum deficiency from this cause occurs primarily in adults, the neurological consequences are not as marked as in cases of congenital cofactor deficiency. A congenital molybdenum cofactor deficiency disease, seen in infants, is an inability to synthesize molybdenum cofactor, the heterocyclic molecule discussed above that binds molybdenum at the active site in all known human enzymes that use molybdenum. The resulting deficiency results in high levels of sulfite and urate, and neurological damage. Excretion Most molybdenum is excreted from the human body as molybdate in the urine. Furthermore, urinary excretion of molybdenum increases as dietary molybdenum intake increases. Small amounts of molybdenum are excreted from the body in the feces by way of the bile; small amounts also can be lost in sweat and in hair. Excess and copper antagonism High levels of molybdenum can interfere with the body's uptake of copper, producing copper deficiency. Molybdenum prevents plasma proteins from binding to copper, and it also increases the amount of copper that is excreted in urine. Ruminants that consume high levels of molybdenum suffer from diarrhea, stunted growth, anemia, and achromotrichia (loss of fur pigment). These symptoms can be alleviated by copper supplements, either dietary and injection. The effective copper deficiency can be aggravated by excess sulfur. Copper reduction or deficiency can also be deliberately induced for therapeutic purposes by the compound ammonium tetrathiomolybdate, in which the bright red anion tetrathiomolybdate is the copper-chelating agent. Tetrathiomolybdate was first used therapeutically in the treatment of copper toxicosis in animals. It was then introduced as a treatment in Wilson's disease, a hereditary copper metabolism disorder in humans; it acts both by competing with copper absorption in the bowel and by increasing excretion. It has also been found to have an inhibitory effect on angiogenesis, potentially by inhibiting the membrane translocation process that is dependent on copper ions. This is a promising avenue for investigation of treatments for cancer, age-related macular degeneration, and other diseases that involve a pathologic proliferation of blood vessels. In some grazing livestock, most strongly in cattle, molybdenum excess in the soil of pasturage can produce scours (diarrhea) if the pH of the soil is neutral to alkaline; see teartness. Mammography Molybdenum targets are used in mammography because they produce X-rays in the energy range of 17-20 keV, which is optimal for imaging soft tissues like the breast. 
The characteristic X-rays emitted from molybdenum provide high contrast between different types of tissues, allowing for the effective visualization of microcalcifications and other subtle abnormalities in breast tissue. This energy range also minimizes radiation dose while maximizing image quality, making molybdenum targets particularly suitable for breast cancer screening. Dietary recommendations In 2000, the then U.S. Institute of Medicine (now the National Academy of Medicine, NAM) updated its Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for molybdenum. If there is not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) is used instead. An AI of 2 micrograms (μg) of molybdenum per day was established for infants up to 6 months of age, and 3 μg/day from 7 to 12 months of age, both for males and females. For older children and adults, the following daily RDAs have been established for molybdenum: 17 μg from 1 to 3 years of age, 22 μg from 4 to 8 years, 34 μg from 9 to 13 years, 43 μg from 14 to 18 years, and 45 μg for persons 19 years old and older. All these RDAs are valid for both sexes. Pregnant or lactating females from 14 to 50 years of age have a higher daily RDA of 50 μg of molybdenum. As for safety, the NAM sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of molybdenum, the UL is 2000 μg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women and men ages 15 and older, the AI is set at 65 μg/day. Pregnant and lactating women have the same AI. For children aged 1–14 years, the AIs increase with age from 15 to 45 μg/day. The adult AIs are higher than the U.S. RDAs, but on the other hand, the European Food Safety Authority reviewed the same safety question and set its UL at 600 μg/day, which is much lower than the U.S. value. Labeling For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For molybdenum labeling purposes, 100% of the Daily Value was 75 μg, but as of May 27, 2016 it was revised to 45 μg. A table of the old and new adult daily values is provided at Reference Daily Intake. Food sources Average daily intake varies between 120 and 240 μg/day, which is higher than dietary recommendations. Pork, lamb, and beef liver each have approximately 1.5 parts per million of molybdenum. Other significant dietary sources include green beans, eggs, sunflower seeds, wheat flour, lentils, cucumbers, and cereal grain. Precautions Molybdenum dusts and fumes, generated by mining or metalworking, can be toxic, especially if ingested (including dust trapped in the sinuses and later swallowed). Low levels of prolonged exposure can cause irritation to the eyes and skin. Direct inhalation or ingestion of molybdenum and its oxides should be avoided. OSHA regulations specify the maximum permissible molybdenum exposure in an 8-hour day as 5 mg/m3. Chronic exposure to 60 to 600 mg/m3 can cause symptoms including fatigue, headaches and joint pains. At levels of 5000 mg/m3, molybdenum is immediately dangerous to life and health.
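The RDA schedule in the dietary recommendations section above maps cleanly onto a small lookup table. A minimal Python sketch (it deliberately ignores the separate infant AI values and the higher allowance for pregnancy and lactation):
# US RDAs for molybdenum by age, in micrograms per day, as quoted in the
# dietary recommendations section above.
RDA_UG_BY_AGE = [
    (range(1, 4), 17),    # ages 1-3
    (range(4, 9), 22),    # ages 4-8
    (range(9, 14), 34),   # ages 9-13
    (range(14, 19), 43),  # ages 14-18
]
ADULT_RDA_UG = 45         # 19 years and older
def molybdenum_rda_ug(age_years):
    """Return the RDA (micrograms/day) for ages 1 and up."""
    if age_years < 1:
        raise ValueError("infants under 1 year have an Adequate Intake, not an RDA")
    for ages, rda in RDA_UG_BY_AGE:
        if age_years in ages:
            return rda
    return ADULT_RDA_UG
print(molybdenum_rda_ug(10), molybdenum_rda_ug(30))  # -> 34 45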
Physical sciences
Chemical elements_2
null
19053
https://en.wikipedia.org/wiki/Mineral
Mineral
In geology and mineralogy, a mineral or mineral species is, broadly speaking, a solid substance with a fairly well-defined chemical composition and a specific crystal structure that occurs naturally in pure form. The geological definition of mineral normally excludes compounds that occur only in living organisms. However, some minerals are often biogenic (such as calcite) or organic compounds in the sense of chemistry (such as mellite). Moreover, living organisms often synthesize inorganic minerals (such as hydroxylapatite) that also occur in rocks. The concept of mineral is distinct from rock, which is any bulk solid geologic material that is relatively homogeneous at a large enough scale. A rock may consist of one type of mineral or may be an aggregate of two or more different types of minerals, spatially segregated into distinct phases. Some natural solid substances without a definite crystalline structure, such as opal or obsidian, are more properly called mineraloids. If a chemical compound occurs naturally with different crystal structures, each structure is considered a different mineral species. Thus, for example, quartz and stishovite are two different minerals consisting of the same compound, silicon dioxide. The International Mineralogical Association (IMA) is the generally recognized standard body for the definition and nomenclature of mineral species. The IMA currently recognizes 6,100 official mineral species. The chemical composition of a named mineral species may vary somewhat due to the inclusion of small amounts of impurities. Specific varieties of a species sometimes have conventional or official names of their own. For example, amethyst is a purple variety of the mineral species quartz. Some mineral species can have variable proportions of two or more chemical elements that occupy equivalent positions in the mineral's structure; for example, the formula of mackinawite is given as , meaning , where x is a variable number between 0 and 9. Sometimes a mineral with variable composition is split into separate species, more or less arbitrarily, forming a mineral group; that is the case of the silicates , the olivine group. Besides the essential chemical composition and crystal structure, the description of a mineral species usually includes its common physical properties such as habit, hardness, lustre, diaphaneity, colour, streak, tenacity, cleavage, fracture, crystal system, zoning, parting, specific gravity, magnetism, fluorescence, radioactivity, as well as its taste or smell and its reaction to acid. Minerals are classified by key chemical constituents; the two dominant systems are the Dana classification and the Strunz classification. Silicate minerals comprise approximately 90% of the Earth's crust. Other important mineral groups include the native elements (made up of a single pure element) and compounds (combinations of multiple elements), namely sulfides (e.g. galena PbS), oxides (e.g. quartz SiO2), halides (e.g. rock salt NaCl), carbonates (e.g. calcite CaCO3), sulfates (e.g. gypsum CaSO4·2H2O), silicates (e.g. orthoclase KAlSi3O8), molybdates (e.g. wulfenite PbMoO4) and phosphates (e.g. pyromorphite Pb5(PO4)3Cl). Definitions International Mineralogical Association The International Mineralogical Association has established the following requirements for a substance to be considered a distinct mineral: It must be a naturally occurring substance formed by natural geological processes, on Earth or other extraterrestrial bodies. 
This excludes compounds directly and exclusively generated by human activities (anthropogenic) or in living beings (biogenic), such as tungsten carbide, urinary calculi, calcium oxalate crystals in plant tissues, and seashells. However, substances with such origins may qualify if geological processes were involved in their genesis (as is the case of evenkite, derived from plant material; or taranakite, from bat guano; or alpersite, from mine tailings). Hypothetical substances are also excluded, even if they are predicted to occur in inaccessible natural environments like the Earth's core or other planets. It must be a solid substance in its natural occurrence. A major exception to this rule is native mercury: it is still classified as a mineral by the IMA, even though crystallizes only below −39 °C, because it was included before the current rules were established. Water and carbon dioxide are not considered minerals, even though they are often found as inclusions in other minerals; but water ice is considered a mineral. It must have a well-defined crystallographic structure; or, more generally, an ordered atomic arrangement. This property implies several macroscopic physical properties, such as crystal form, hardness, and cleavage. It excludes ozokerite, limonite, obsidian and many other amorphous (non-crystalline) materials that occur in geologic contexts. It must have a fairly well defined chemical composition. However, certain crystalline substances with a fixed structure but variable composition may be considered single mineral species. A common class of examples are solid solutions such as mackinawite, (Fe, Ni)9S8, which is mostly a ferrous sulfide with a significant fraction of iron atoms replaced by nickel atoms. Other examples include layered crystals with variable layer stacking, or crystals that differ only in the regular arrangement of vacancies and substitutions. On the other hand, some substances that have a continuous series of compositions, may be arbitrarily split into several minerals. The typical example is the olivine group (Mg, Fe)2SiO4, whose magnesium-rich and iron-rich end-members are considered separate minerals (forsterite and fayalite). The details of these rules are somewhat controversial. For instance, there have been several recent proposals to classify amorphous substances as minerals, but they have not been accepted by the IMA. The IMA is also reluctant to accept minerals that occur naturally only in the form of nanoparticles a few hundred atoms across, but has not defined a minimum crystal size. Some authors require the material to be a stable or metastable solid at room temperature (25 °C). However, the IMA only requires that the substance be stable enough for its structure and composition to be well-determined. For example, it recognizes meridianiite (a naturally occurring hydrate of magnesium sulfate) as a mineral, even though it is formed and stable only below 2 °C. , 6,100 mineral species are approved by the IMA. They are most commonly named after a person, followed by discovery location; names based on chemical composition or physical properties are the two other major groups of mineral name etymologies. Most names end in "-ite"; the exceptions are usually names that were well-established before the organization of mineralogy as a discipline, for example galena and diamond. Biogenic minerals A topic of contention among geologists and mineralogists has been the IMA's decision to exclude biogenic crystalline substances. 
For example, Lowenstam (1981) stated that "organisms are capable of forming a diverse array of minerals, some of which cannot be formed inorganically in the biosphere." Skinner (2005) views all solids as potential minerals and includes biominerals in the mineral kingdom, which are those that are created by the metabolic activities of organisms. Skinner expanded the previous definition of a mineral to classify "element or compound, amorphous or crystalline, formed through biogeochemical processes," as a mineral. Recent advances in high-resolution genetics and X-ray absorption spectroscopy are providing revelations on the biogeochemical relations between microorganisms and minerals that may shed new light on this question. For example, the IMA-commissioned "Working Group on Environmental Mineralogy and Geochemistry " deals with minerals in the hydrosphere, atmosphere, and biosphere. The group's scope includes mineral-forming microorganisms, which exist on nearly every rock, soil, and particle surface spanning the globe to depths of at least 1600 metres below the sea floor and 70 kilometres into the stratosphere (possibly entering the mesosphere). Biogeochemical cycles have contributed to the formation of minerals for billions of years. Microorganisms can precipitate metals from solution, contributing to the formation of ore deposits. They can also catalyze the dissolution of minerals. Prior to the International Mineralogical Association's listing, over 60 biominerals had been discovered, named, and published. These minerals (a sub-set tabulated in Lowenstam (1981)) are considered minerals proper according to Skinner's (2005) definition. These biominerals are not listed in the International Mineral Association official list of mineral names; however, many of these biomineral representatives are distributed amongst the 78 mineral classes listed in the Dana classification scheme. Skinner's (2005) definition of a mineral takes this matter into account by stating that a mineral can be crystalline or amorphous. Although biominerals are not the most common form of minerals, they help to define the limits of what constitutes a mineral proper. Nickel's (1995) formal definition explicitly mentioned crystallinity as a key to defining a substance as a mineral. A 2011 article defined icosahedrite, an aluminium-iron-copper alloy, as a mineral; named for its unique natural icosahedral symmetry, it is a quasicrystal. Unlike a true crystal, quasicrystals are ordered but not periodic. Rocks, ores, and gems A rock is an aggregate of one or more minerals or mineraloids. Some rocks, such as limestone or quartzite, are composed primarily of one mineral – calcite or aragonite in the case of limestone, and quartz in the latter case. Other rocks can be defined by relative abundances of key (essential) minerals; a granite is defined by proportions of quartz, alkali feldspar, and plagioclase feldspar. The other minerals in the rock are termed accessory minerals, and do not greatly affect the bulk composition of the rock. Rocks can also be composed entirely of non-mineral material; coal is a sedimentary rock composed primarily of organically derived carbon. In rocks, some mineral species and groups are much more abundant than others; these are termed the rock-forming minerals. The major examples of these are quartz, the feldspars, the micas, the amphiboles, the pyroxenes, the olivines, and calcite; except for the last one, all of these minerals are silicates. 
Overall, around 150 minerals are considered particularly important, whether in terms of their abundance or aesthetic value in terms of collecting. Commercially valuable minerals and rocks, other than gemstones, metal ores, or mineral fuels, are referred to as industrial minerals. For example, muscovite, a white mica, can be used for windows (sometimes referred to as isinglass), as a filler, or as an insulator. Ores are minerals that have a high concentration of a certain element, typically a metal. Examples are cinnabar (HgS), an ore of mercury; sphalerite (ZnS), an ore of zinc; cassiterite (SnO2), an ore of tin; and colemanite, an ore of boron. Gems are minerals with an ornamental value, and are distinguished from non-gems by their beauty, durability, and usually, rarity. There are about 20 mineral species that qualify as gem minerals, which constitute about 35 of the most common gemstones. Gem minerals are often present in several varieties, and so one mineral can account for several different gemstones; for example, ruby and sapphire are both corundum, Al2O3. Etymology The first known use of the word "mineral" in the English language (Middle English) was the 15th century. The word came from , from , mine, ore. The word "species" comes from the Latin species, "a particular sort, kind, or type with distinct look, or appearance". Chemistry The abundance and diversity of minerals is controlled directly by their chemistry, in turn dependent on elemental abundances in the Earth. The majority of minerals observed are derived from the Earth's crust. Eight elements account for most of the key components of minerals, due to their abundance in the crust. These eight elements, summing to over 98% of the crust by weight, are, in order of decreasing abundance: oxygen, silicon, aluminium, iron, magnesium, calcium, sodium and potassium. Oxygen and silicon are by far the two most important – oxygen composes 47% of the crust by weight, and silicon accounts for 28%. The minerals that form are those that are most stable at the temperature and pressure of formation, within the limits imposed by the bulk chemistry of the parent body. For example, in most igneous rocks, the aluminium and alkali metals (sodium and potassium) that are present are primarily found in combination with oxygen, silicon, and calcium as feldspar minerals. However, if the rock is unusually rich in alkali metals, there will not be enough aluminium to combine with all the sodium as feldspar, and the excess sodium will form sodic amphiboles such as riebeckite. If the aluminium abundance is unusually high, the excess aluminium will form muscovite or other aluminium-rich minerals. If silicon is deficient, part of the feldspar will be replaced by feldspathoid minerals. Precise predictions of which minerals will be present in a rock of a particular composition formed at a particular temperature and pressure requires complex thermodynamic calculations. However, approximate estimates may be made using relatively simple rules of thumb, such as the CIPW norm, which gives reasonable estimates for volcanic rock formed from dry magma. The chemical composition may vary between end member species of a solid solution series. For example, the plagioclase feldspars comprise a continuous series from sodium-rich end member albite (NaAlSi3O8) to calcium-rich anorthite (CaAl2Si2O8) with four recognized intermediate varieties between them (given in order from sodium- to calcium-rich): oligoclase, andesine, labradorite, and bytownite. 
Other examples of series include the olivine series of magnesium-rich forsterite and iron-rich fayalite, and the wolframite series of manganese-rich hübnerite and iron-rich ferberite. Chemical substitution and coordination polyhedra explain this common feature of minerals. In nature, minerals are not pure substances, and are contaminated by whatever other elements are present in the given chemical system. As a result, it is possible for one element to be substituted for another. Chemical substitution will occur between ions of a similar size and charge; for example, K+ will not substitute for Si4+ because of chemical and structural incompatibilities caused by a big difference in size and charge. A common example of chemical substitution is that of Si4+ by Al3+, which are close in charge, size, and abundance in the crust. In the example of plagioclase, there are three cases of substitution. Feldspars are all framework silicates, which have a silicon:oxygen ratio of 1:2, and the space for other elements is given by the substitution of Si4+ by Al3+ to give a base unit of [AlSi3O8]−; without the substitution, the formula would be charge-balanced as SiO2, giving quartz. The significance of this structural property will be explained further by coordination polyhedra. The second substitution occurs between Na+ and Ca2+; however, the difference in charge has to be accounted for by making a second substitution of Si4+ by Al3+. Coordination polyhedra are geometric representations of how a cation is surrounded by an anion. In mineralogy, coordination polyhedra are usually considered in terms of oxygen, due to its abundance in the crust. The base unit of silicate minerals is the silica tetrahedron – one Si4+ surrounded by four O2−. An alternate way of describing the coordination of the silicate is by a number: in the case of the silica tetrahedron, the silicon is said to have a coordination number of 4. Various cations have a specific range of possible coordination numbers; for silicon, it is almost always 4, except for very high-pressure minerals where the compound is compressed such that silicon is in six-fold (octahedral) coordination with oxygen. Bigger cations have bigger coordination numbers because of the increase in relative size as compared to oxygen (the last orbital subshell of heavier atoms is different too). Changes in coordination numbers lead to physical and mineralogical differences; for example, at high pressure, such as in the mantle, many minerals, especially silicates such as olivine and garnet, will change to a perovskite structure, where silicon is in octahedral coordination. Other examples are the aluminosilicates kyanite, andalusite, and sillimanite (polymorphs, since they share the formula Al2SiO5), which differ by the coordination number of the Al3+; these minerals transition from one another as a response to changes in pressure and temperature. In the case of silicate materials, the substitution of Si4+ by Al3+ allows for a variety of minerals because of the need to balance charges. Because the eight most common elements make up over 98% of the Earth's crust, the small quantities of the other elements that are typically present are substituted into the common rock-forming minerals. The distinctive minerals of most elements are quite rare, being found only where these elements have been concentrated by geological processes, such as hydrothermal circulation, to the point where they can no longer be accommodated in common minerals.
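The coupled substitution described above can be made concrete with a short charge-balance check. The sketch below is illustrative only (formal ionic charges are assumed and the names are chosen for clarity); it verifies that albite and anorthite are both electrically neutral, which is why exchanging Na+ for Ca2+ must be paired with exchanging Si4+ for Al3+:

charges = {"Na": +1, "Ca": +2, "Al": +3, "Si": +4, "O": -2}
albite = {"Na": 1, "Al": 1, "Si": 3, "O": 8}        # NaAlSi3O8
anorthite = {"Ca": 1, "Al": 2, "Si": 2, "O": 8}     # CaAl2Si2O8

def net_charge(formula):
    # Sum of formal charges over every ion in the formula unit
    return sum(charges[element] * count for element, count in formula.items())

print(net_charge(albite), net_charge(anorthite))    # 0 0
# The exchange Na+ + Si4+ <-> Ca2+ + Al3+ carries +5 on each side,
# so neutrality is preserved only when the two substitutions occur together.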
Changes in temperature and pressure and composition alter the mineralogy of a rock sample. Changes in composition can be caused by processes such as weathering or metasomatism (hydrothermal alteration). Changes in temperature and pressure occur when the host rock undergoes tectonic or magmatic movement into differing physical regimes. Changes in thermodynamic conditions make it favourable for mineral assemblages to react with each other to produce new minerals; as such, it is possible for two rocks to have an identical or a very similar bulk rock chemistry without having a similar mineralogy. This process of mineralogical alteration is related to the rock cycle. An example of a series of mineral reactions is illustrated as follows. Orthoclase feldspar (KAlSi3O8) is a mineral commonly found in granite, a plutonic igneous rock. When exposed to weathering, it reacts to form kaolinite (Al2Si2O5(OH)4, a sedimentary mineral, and silicic acid): 2 KAlSi3O8 + 5 H2O + 2 H+ → Al2Si2O5(OH)4 + 4 H2SiO3 + 2 K+ Under low-grade metamorphic conditions, kaolinite reacts with quartz to form pyrophyllite (Al2Si4O10(OH)2): Al2Si2O5(OH)4 + SiO2 → Al2Si4O10(OH)2 + H2O As metamorphic grade increases, the pyrophyllite reacts to form kyanite and quartz: Al2Si4O10(OH)2 → Al2SiO5 + 3 SiO2 + H2O Alternatively, a mineral may change its crystal structure as a consequence of changes in temperature and pressure without reacting. For example, quartz will change into a variety of its SiO2 polymorphs, such as tridymite and cristobalite at high temperatures, and coesite at high pressures. Physical properties Classifying minerals ranges from simple to difficult. A mineral can be identified by several physical properties, some of them being sufficient for full identification without equivocation. In other cases, minerals can only be classified by more complex optical, chemical or X-ray diffraction analysis; these methods, however, can be costly and time-consuming. Physical properties applied for classification include crystal structure and habit, hardness, lustre, diaphaneity, colour, streak, cleavage and fracture, and specific gravity. Other less general tests include fluorescence, phosphorescence, magnetism, radioactivity, tenacity (response to mechanical induced changes of shape or form), piezoelectricity and reactivity to dilute acids. Crystal structure and habit Crystal structure results from the orderly geometric spatial arrangement of atoms in the internal structure of a mineral. This crystal structure is based on regular internal atomic or ionic arrangement that is often expressed in the geometric form that the crystal takes. Even when the mineral grains are too small to see or are irregularly shaped, the underlying crystal structure is always periodic and can be determined by X-ray diffraction. Minerals are typically described by their symmetry content. Crystals are restricted to 32 point groups, which differ by their symmetry. These groups are classified in turn into more broad categories, the most encompassing of these being the six crystal families. These families can be described by the relative lengths of the three crystallographic axes, and the angles between them; these relationships correspond to the symmetry operations that define the narrower point groups. They are summarized below; a, b, and c represent the axes, and α, β, γ represent the angle opposite the respective crystallographic axis (e.g. α is the angle opposite the a-axis, viz. 
the angle between the b and c axes): The hexagonal crystal family is also split into two crystal systems – the trigonal, which has a three-fold axis of symmetry, and the hexagonal, which has a six-fold axis of symmetry. Chemistry and crystal structure together define a mineral. With a restriction to 32 point groups, minerals of different chemistry may have identical crystal structure. For example, halite (NaCl), galena (PbS), and periclase (MgO) all belong to the hexaoctahedral point group (isometric family), as they have a similar stoichiometry between their different constituent elements. In contrast, polymorphs are groupings of minerals that share a chemical formula but have a different structure. For example, pyrite and marcasite, both iron sulfides, have the formula FeS2; however, the former is isometric while the latter is orthorhombic. This polymorphism extends to other sulfides with the generic AX2 formula; these two groups are collectively known as the pyrite and marcasite groups. Polymorphism can extend beyond pure symmetry content. The aluminosilicates are a group of three minerals – kyanite, andalusite, and sillimanite – which share the chemical formula Al2SiO5. Kyanite is triclinic, while andalusite and sillimanite are both orthorhombic and belong to the dipyramidal point group. These differences arise from how aluminium is coordinated within the crystal structure. In all three minerals, one aluminium ion is always in six-fold coordination with oxygen. Silicon, as a general rule, is in four-fold coordination in all minerals; an exception is a case like stishovite (SiO2, an ultra-high pressure quartz polymorph with rutile structure). In kyanite, the second aluminium is in six-fold coordination; its chemical formula can be expressed as Al[6]Al[6]SiO5, to reflect its crystal structure. Andalusite has the second aluminium in five-fold coordination (Al[6]Al[5]SiO5) and sillimanite has it in four-fold coordination (Al[6]Al[4]SiO5). Differences in crystal structure and chemistry greatly influence other physical properties of the mineral. The carbon allotropes diamond and graphite have vastly different properties; diamond is the hardest natural substance, has an adamantine lustre, and belongs to the isometric crystal family, whereas graphite is very soft, has a greasy lustre, and crystallises in the hexagonal family. This difference is accounted for by differences in bonding. In diamond, the carbons are in sp3 hybrid orbitals, which means they form a framework where each carbon is covalently bonded to four neighbours in a tetrahedral fashion; on the other hand, graphite is composed of sheets of carbons in sp2 hybrid orbitals, where each carbon is bonded covalently to only three others. These sheets are held together by much weaker van der Waals forces, and this discrepancy translates to large macroscopic differences. Twinning is the intergrowth of two or more crystals of a single mineral species. The geometry of the twinning is controlled by the mineral's symmetry. As a result, there are several types of twins, including contact twins, reticulated twins, geniculated twins, penetration twins, cyclic twins, and polysynthetic twins. Contact, or simple twins, consist of two crystals joined at a plane; this type of twinning is common in spinel. Reticulated twins, common in rutile, are interlocking crystals resembling netting. Geniculated twins have a bend in the middle that is caused by the start of the twin.
Penetration twins consist of two single crystals that have grown into each other; examples of this twinning include cross-shaped staurolite twins and Carlsbad twinning in orthoclase. Cyclic twins are caused by repeated twinning around a rotation axis. This type of twinning occurs around three, four, five, six, or eight-fold axes, and the corresponding patterns are called threelings, fourlings, fivelings, sixlings, and eightlings. Sixlings are common in aragonite. Polysynthetic twins are similar to cyclic twins through the presence of repetitive twinning; however, instead of occurring around a rotational axis, polysynthetic twinning occurs along parallel planes, usually on a microscopic scale. Crystal habit refers to the overall shape of the aggregate crystal of any mineral. Several terms are used to describe this property. Common habits include acicular, which describes needle-like crystals as in natrolite; dendritic (tree-pattern), which is common in native copper or native gold with a groundmass (matrix); equant, which is typical of garnet; prismatic (elongated in one direction), as seen in kunzite or stibnite; botryoidal (like a bunch of grapes), seen in chalcedony; fibrous, which has fibre-like crystals as seen in wollastonite; tabular, which differs from bladed habit in that the former is platy whereas the latter has a defined elongation, as seen in muscovite; and massive, which has no definite shape, as seen in carnallite. Related to crystal form, the quality of crystal faces is diagnostic of some minerals, especially with a petrographic microscope. Euhedral crystals have a defined external shape, while anhedral crystals do not; those intermediate forms are termed subhedral. Hardness The hardness of a mineral defines how much it can resist scratching or indentation. This physical property is controlled by the chemical composition and crystalline structure of a mineral. The most commonly used scale of measurement is the ordinal Mohs hardness scale, which measures resistance to scratching. Defined by ten indicators, a mineral with a higher index scratches those below it. The scale ranges from talc, a phyllosilicate, to diamond, a carbon polymorph that is the hardest natural material; the ten reference minerals are, in order of increasing hardness, talc, gypsum, calcite, fluorite, apatite, orthoclase, quartz, topaz, corundum, and diamond. A mineral's hardness is a function of its structure. Hardness is not necessarily constant for all crystallographic directions; crystallographic weakness renders some directions softer than others. An example of this hardness variability exists in kyanite, which has a Mohs hardness of 5 parallel to [001] but 7 parallel to [100]. Other scales include the Shore hardness test, which measures the endurance of a mineral based on the indentation made by a spring-loaded device, as well as the Rockwell, Vickers, and Brinell hardness tests. Lustre and diaphaneity Lustre indicates how light reflects from the mineral's surface, with regard to its quality and intensity. There are numerous qualitative terms used to describe this property, which are split into metallic and non-metallic categories. Metallic and sub-metallic minerals have high reflectivity like metal; examples of minerals with this lustre are galena and pyrite. Non-metallic lustres include: adamantine, such as in diamond; vitreous, which is a glassy lustre very common in silicate minerals; pearly, such as in talc and apophyllite; resinous, such as in members of the garnet group; and silky, which is common in fibrous minerals such as asbestiform chrysotile. The diaphaneity of a mineral describes the ability of light to pass through it.
Transparent minerals do not diminish the intensity of light passing through them. An example of a transparent mineral is muscovite (potassium mica); some varieties are sufficiently clear to have been used for windows. Translucent minerals allow some light to pass, but less than those that are transparent. Jadeite and nephrite (mineral forms of jade) are examples of minerals with this property. Minerals that do not allow light to pass are called opaque. The diaphaneity of a mineral depends on the thickness of the sample. When a mineral is sufficiently thin (e.g., in a thin section for petrography), it may become transparent even if that property is not seen in a hand sample. In contrast, some minerals, such as hematite or pyrite, are opaque even in thin-section. Colour and streak Colour is the most obvious property of a mineral, but it is often non-diagnostic. It is caused by electromagnetic radiation interacting with electrons (except in the case of incandescence, which does not apply to minerals). Two broad classes of elements (idiochromatic and allochromatic) are defined with regard to their contribution to a mineral's colour: Idiochromatic elements are essential to a mineral's composition; their contribution to a mineral's colour is diagnostic. Examples of such minerals are malachite (green) and azurite (blue). In contrast, allochromatic elements in minerals are present in trace amounts as impurities. An example of such a mineral would be the ruby and sapphire varieties of the mineral corundum. The colours of pseudochromatic minerals are the result of interference of light waves. Examples include labradorite and bornite. In addition to simple body colour, minerals can have various other distinctive optical properties, such as play of colours, asterism, chatoyancy, iridescence, tarnish, and pleochroism. Several of these properties involve variability in colour. Play of colour, such as in opal, results in the sample reflecting different colours as it is turned, while pleochroism describes the change in colour as light passes through a mineral in a different orientation. Iridescence is a variety of the play of colours where light scatters off a coating on the surface of a crystal, off cleavage planes, or off layers having minor gradations in chemistry. In contrast, the play of colours in opal is caused by light refracting from ordered microscopic silica spheres within its physical structure. Chatoyancy ("cat's eye") is the wavy banding of colour that is observed as the sample is rotated; asterism, a variety of chatoyancy, gives the appearance of a star on the mineral grain. The latter property is particularly common in gem-quality corundum. The streak of a mineral refers to the colour of a mineral in powdered form, which may or may not be identical to its body colour. The most common way of testing this property is with a streak plate, which is made of porcelain and coloured either white or black. The streak of a mineral is independent of trace elements or any weathering surface. A common example of this property is illustrated with hematite, which is coloured black, silver or red in hand sample, but has a cherry-red to reddish-brown streak; or with chalcopyrite, which is brassy golden in colour and leaves a black streak. Streak is more often distinctive for metallic minerals, in contrast to non-metallic minerals whose body colour is created by allochromatic elements. Streak testing is constrained by the hardness of the mineral, as those harder than 7 powder the streak plate instead.
Cleavage, parting, fracture, and tenacity By definition, minerals have a characteristic atomic arrangement. Weakness in this crystalline structure causes planes of weakness, and the breakage of a mineral along such planes is termed cleavage. The quality of cleavage can be described based on how cleanly and easily the mineral breaks; common descriptors, in order of decreasing quality, are "perfect", "good", "distinct", and "poor". In particularly transparent minerals, or in thin-section, cleavage can be seen as a series of parallel lines marking the planar surfaces when viewed from the side. Cleavage is not a universal property among minerals; for example, quartz, consisting of extensively interconnected silica tetrahedra, does not have a crystallographic weakness which would allow it to cleave. In contrast, micas, which have perfect basal cleavage, consist of sheets of silica tetrahedra which are very weakly held together. As cleavage is a function of crystallography, there are a variety of cleavage types. Cleavage occurs typically in either one, two, three, four, or six directions. Basal cleavage in one direction is a distinctive property of the micas. Two-directional cleavage is described as prismatic, and occurs in minerals such as the amphiboles and pyroxenes. Minerals such as galena or halite have cubic (or isometric) cleavage in three directions, at 90°; when three directions of cleavage are present, but not at 90°, such as in calcite or rhodochrosite, it is termed rhombohedral cleavage. Octahedral cleavage (four directions) is present in fluorite and diamond, and sphalerite has six-directional dodecahedral cleavage. Minerals with many cleavages might not break equally well in all of the directions; for example, calcite has good cleavage in three directions, but gypsum has perfect cleavage in one direction, and poor cleavage in two other directions. Angles between cleavage planes vary between minerals. For example, as the amphiboles are double-chain silicates and the pyroxenes are single-chain silicates, the angle between their cleavage planes is different. The pyroxenes cleave in two directions at approximately 90°, whereas the amphiboles distinctively cleave in two directions separated by approximately 120° and 60°. The cleavage angles can be measured with a contact goniometer, which is similar to a protractor. Parting, sometimes called "false cleavage", is similar in appearance to cleavage but is instead produced by structural defects in the mineral, as opposed to systematic weakness. Parting varies from crystal to crystal of a mineral, whereas all crystals of a given mineral will cleave if the atomic structure allows for that property. In general, parting is caused by some stress applied to a crystal. The sources of the stresses include deformation (e.g. an increase in pressure), exsolution, or twinning. Minerals that often display parting include the pyroxenes, hematite, magnetite, and corundum. When a mineral is broken in a direction that does not correspond to a plane of cleavage, it is termed to have been fractured. There are several types of uneven fracture. The classic example is conchoidal fracture, like that of quartz; rounded surfaces are created, which are marked by smooth curved lines. This type of fracture occurs only in very homogeneous minerals. Other types of fracture are fibrous, splintery, and hackly. The latter describes a break along a rough, jagged surface; an example of this property is found in native copper. 
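As a rough illustration of how the cleavage-angle rule of thumb described above can be applied in practice, the sketch below classifies a contact-goniometer reading; the ±5° tolerance and the function name are illustrative assumptions, not a published standard:

def chain_silicate_from_cleavage(angle_deg, tol=5.0):
    # Guess single-chain vs double-chain silicate from the angle (in degrees)
    # between two cleavage directions, per the ~90 degree vs ~120/60 degree rule of thumb.
    if abs(angle_deg - 90.0) <= tol:
        return "pyroxene-like (single-chain, cleavage near 90 degrees)"
    if abs(angle_deg - 120.0) <= tol or abs(angle_deg - 60.0) <= tol:
        return "amphibole-like (double-chain, cleavage near 120/60 degrees)"
    return "indeterminate from the cleavage angle alone"

print(chain_silicate_from_cleavage(118))   # amphibole-like (double-chain, ...)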
Tenacity is related to both cleavage and fracture. Whereas fracture and cleavage describe the surfaces that are created when a mineral is broken, tenacity describes how resistant a mineral is to such breaking. Minerals can be described as brittle, ductile, malleable, sectile, flexible, or elastic. Specific gravity Specific gravity numerically describes the density of a mineral. The dimensions of density are mass divided by volume with units: kg/m3 or g/cm3. Specific gravity is defined as the density of the mineral divided by the density of water at 4 °C and thus is a dimensionless quantity, identical in all unit systems. It can be measured as the quotient of the sample's weight in air and the difference between its weight in air and its weight in water: SG = Wair / (Wair − Wwater). For most minerals, this property is not diagnostic. Rock-forming minerals – typically silicates or occasionally carbonates – have a specific gravity of 2.5–3.5. A high specific gravity, by contrast, is a diagnostic property of a mineral. A variation in chemistry (and consequently, mineral class) correlates to a change in specific gravity. Among more common minerals, oxides and sulfides tend to have a higher specific gravity as they include elements with higher atomic mass. A generalization is that minerals with metallic or adamantine lustre tend to have higher specific gravities than those having a non-metallic to dull lustre. For example, hematite, Fe2O3, has a specific gravity of 5.26 while galena, PbS, has a specific gravity of 7.2–7.6, which is a result of their high iron and lead content, respectively. A very high specific gravity is characteristic of native metals; for example, kamacite, an iron-nickel alloy common in iron meteorites, has a specific gravity of 7.9, and gold has an observed specific gravity between 15 and 19.3. Other properties Other properties can be used to diagnose minerals. These are less general, and apply to specific minerals. Dropping dilute acid (often 10% HCl) onto a mineral aids in distinguishing carbonates from other mineral classes. The acid reacts with the carbonate ([CO3]2−) group, which causes the affected area to effervesce, giving off carbon dioxide gas. This test can be further expanded to test the mineral in its original crystal form or powdered form. This test is used, for example, to distinguish calcite from dolomite, especially within their host rocks (limestone and dolomite, respectively). Calcite immediately effervesces in acid, whereas acid must be applied to powdered dolomite (often to a scratched surface in a rock) for it to effervesce. Zeolite minerals will not effervesce in acid; instead, they become frosted after 5–10 minutes, and if left in acid for a day, they dissolve or become a silica gel. Magnetism is a very conspicuous property of a few minerals. Among common minerals, magnetite exhibits this property strongly, and magnetism is also present, albeit not as strongly, in pyrrhotite and ilmenite. Some minerals exhibit electrical properties – for example, quartz is piezoelectric – but electrical properties are rarely used as diagnostic criteria for minerals because of incomplete data and natural variation. Minerals can also be tested for taste or smell. Halite, NaCl, is table salt; its potassium-bearing counterpart, sylvite, has a pronounced bitter taste. Sulfides have a characteristic smell, especially as samples are fractured, reacting, or powdered. Radioactivity is a rare property found in minerals containing radioactive elements.
The radioactive elements could be a defining constituent, such as uranium in uraninite, autunite, and carnotite, or present as trace impurities, as in zircon. The decay of a radioactive element damages the mineral crystal structure, rendering it locally amorphous (metamict state); the optical result, termed a radioactive halo or pleochroic halo, is observable with various techniques, such as thin-section petrography. Classification Earliest classifications In 315 BCE, Theophrastus presented his classification of minerals in his treatise On Stones. His classification was influenced by the ideas of his teachers Plato and Aristotle. Theophrastus classified minerals as stones, earths or metals. Georgius Agricola's classification of minerals in his book De Natura Fossilium, published in 1546, divided minerals into three types of substance: simple (stones, earths, metals, and congealed juices), compound (intimately mixed) and composite (separable). Linnaeus An early classification of minerals was given by Carl Linnaeus in his seminal 1735 book Systema Naturae. He divided the natural world into three kingdoms – plants, animals, and minerals – and classified each with the same hierarchy. In descending order, these were Phylum, Class, Order, Family, Tribe, Genus, and Species. However, while his system was justified by Charles Darwin's theory of species formation and has been largely adopted and expanded by biologists in the following centuries (who still use his Greek- and Latin-based binomial naming scheme), it had little success among mineralogists (although each distinct mineral is still formally referred to as a mineral species). Modern classification Minerals are classified by variety, species, series and group, in order of increasing generality. The basic level of definition is that of mineral species, each of which is distinguished from the others by unique chemical and physical properties. For example, quartz is defined by its formula, SiO2, and a specific crystalline structure that distinguishes it from other minerals with the same chemical formula (termed polymorphs). When there exists a range of composition between two mineral species, a mineral series is defined. For example, the biotite series is represented by variable amounts of the endmembers phlogopite, siderophyllite, annite, and eastonite. In contrast, a mineral group is a grouping of mineral species with some common chemical properties that share a crystal structure. The pyroxene group has a common formula of XY(Si,Al)2O6, where X and Y are both cations, with X typically bigger than Y; the pyroxenes are single-chain silicates that crystallize in either the orthorhombic or monoclinic crystal systems. Finally, a mineral variety is a specific type of mineral species that differs by some physical characteristic, such as colour or crystal habit. An example is amethyst, which is a purple variety of quartz. Two common classifications, Dana and Strunz, are used for minerals; both rely on composition, specifically with regard to important chemical groups, and structure. James Dwight Dana, a leading geologist of his time, first published his System of Mineralogy in 1837; it is now in its eighth edition. The Dana classification assigns a four-part number to a mineral species. Its class number is based on important compositional groups; the type gives the ratio of cations to anions in the mineral, and the last two numbers group minerals by structural similarity within a given type or class.
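A minimal sketch of how such a four-part identifier can be treated as structured data is shown below; the specific number used is hypothetical and the field names are illustrative, chosen only to mirror the class.type.group.species layout described above:

from typing import NamedTuple

class DanaNumber(NamedTuple):
    mineral_class: int   # compositional class
    mineral_type: int    # cation-to-anion ratio group within the class
    group: int           # structurally similar group within the type
    species: int         # individual species within the group

def parse_dana(code: str) -> DanaNumber:
    # Split an identifier such as "71.01.02.03" (hypothetical) into its four fields.
    parts = [int(p) for p in code.split(".")]
    if len(parts) != 4:
        raise ValueError("expected four dot-separated fields")
    return DanaNumber(*parts)

print(parse_dana("71.01.02.03"))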
The less commonly used Strunz classification, named for German mineralogist Karl Hugo Strunz, is based on the Dana system, but combines both chemical and structural criteria, the latter with regards to distribution of chemical bonds. As the composition of the Earth's crust is dominated by silicon and oxygen, silicates are by far the most important class of minerals in terms of rock formation and diversity. However, non-silicate minerals are of great economic importance, especially as ores. Non-silicate minerals are subdivided into several other classes by their dominant chemistry, which includes native elements, sulfides, halides, oxides and hydroxides, carbonates and nitrates, borates, sulfates, phosphates, and organic compounds. Most non-silicate mineral species are rare (constituting in total 8% of the Earth's crust), although some are relatively common, such as calcite, pyrite, magnetite, and hematite. There are two major structural styles observed in non-silicates: close-packing and silicate-like linked tetrahedra. Close-packed structures are a way to densely pack atoms while minimizing interstitial space. Hexagonal close-packing involves stacking layers where every other layer is the same ("ababab"), whereas cubic close-packing involves stacking groups of three layers ("abcabcabc"). Analogues to linked silica tetrahedra include (sulfate), (phosphate), (arsenate), and (vanadate) structures. The non-silicates have great economic importance, as they concentrate elements more than the silicate minerals do. The largest grouping of minerals by far are the silicates; most rocks are composed of greater than 95% silicate minerals, and over 90% of the Earth's crust is composed of these minerals. The two main constituents of silicates are silicon and oxygen, which are the two most abundant elements in the Earth's crust. Other common elements in silicate minerals correspond to other common elements in the Earth's crust, such as aluminium, magnesium, iron, calcium, sodium, and potassium. Some important rock-forming silicates include the feldspars, quartz, olivines, pyroxenes, amphiboles, garnets, and micas. Silicates The base unit of a silicate mineral is the [SiO4]4− tetrahedron. In the vast majority of cases, silicon is in four-fold or tetrahedral coordination with oxygen. In very high-pressure situations, silicon will be in six-fold or octahedral coordination, such as in the perovskite structure or the quartz polymorph stishovite (SiO2). In the latter case, the mineral no longer has a silicate structure, but that of rutile (TiO2), and its associated group, which are simple oxides. These silica tetrahedra are then polymerized to some degree to create various structures, such as one-dimensional chains, two-dimensional sheets, and three-dimensional frameworks. The basic silicate mineral where no polymerization of the tetrahedra has occurred requires other elements to balance out the base 4- charge. In other silicate structures, different combinations of elements are required to balance out the resultant negative charge. It is common for the Si4+ to be substituted by Al3+ because of similarity in ionic radius and charge; in those cases, the [AlO4]5− tetrahedra form the same structures as do the unsubstituted tetrahedra, but their charge-balancing requirements are different. 
The degree of polymerization can be described by both the structure formed and how many tetrahedral corners (or coordinating oxygens) are shared (for aluminium and silicon in tetrahedral sites): Orthosilicates (or nesosilicates) Have no linking of polyhedra, thus tetrahedra share no corners. Disilicates (or sorosilicates) Have two tetrahedra sharing one oxygen atom. Inosilicates are chain silicates Single-chain silicates have two shared corners, whereas double-chain silicates have two or three shared corners. Phyllosilicates Have a sheet structure which requires three shared oxygens; in the case of double-chain silicates, some tetrahedra must share two corners instead of three as otherwise a sheet structure would result. Framework silicates (or tectosilicates) Have tetrahedra that share all four corners. Ring silicates (or cyclosilicates) Only need tetrahedra to share two corners to form the cyclical structure. The silicate subclasses are described below in order of decreasing polymerization. Tectosilicates Tectosilicates, also known as framework silicates, have the highest degree of polymerization. With all corners of a tetrahedra shared, the silicon:oxygen ratio becomes 1:2. Examples are quartz, the feldspars, feldspathoids, and the zeolites. Framework silicates tend to be particularly chemically stable as a result of strong covalent bonds. Forming 12% of the Earth's crust, quartz (SiO2) is the most abundant mineral species. It is characterized by its high chemical and physical resistivity. Quartz has several polymorphs, including tridymite and cristobalite at high temperatures, high-pressure coesite, and ultra-high pressure stishovite. The latter mineral can only be formed on Earth by meteorite impacts, and its structure has been compressed so much that it has changed from a silicate structure to that of rutile (TiO2). The silica polymorph that is most stable at the Earth's surface is α-quartz. Its counterpart, β-quartz, is present only at high temperatures and pressures (changes to α-quartz below 573 °C at 1 bar). These two polymorphs differ by a "kinking" of bonds; this change in structure gives β-quartz greater symmetry than α-quartz, and they are thus also called high quartz (β) and low quartz (α). Feldspars are the most abundant group in the Earth's crust, at about 50%. In the feldspars, Al3+ substitutes for Si4+, which creates a charge imbalance that must be accounted for by the addition of cations. The base structure becomes either [AlSi3O8]− or [Al2Si2O8]2− There are 22 mineral species of feldspars, subdivided into two major subgroups – alkali and plagioclase – and two less common groups – celsian and banalsite. The alkali feldspars are most commonly in a series between potassium-rich orthoclase and sodium-rich albite; in the case of plagioclase, the most common series ranges from albite to calcium-rich anorthite. Crystal twinning is common in feldspars, especially polysynthetic twins in plagioclase and Carlsbad twins in alkali feldspars. If the latter subgroup cools slowly from a melt, it forms exsolution lamellae because the two components – orthoclase and albite – are unstable in solid solution. Exsolution can be on a scale from microscopic to readily observable in hand-sample; perthitic texture forms when Na-rich feldspar exsolve in a K-rich host. The opposite texture (antiperthitic), where K-rich feldspar exsolves in a Na-rich host, is very rare. 
Feldspathoids are structurally similar to feldspar, but differ in that they form in Si-deficient conditions, which allows for further substitution by Al3+. As a result, feldspathoids are almost never found in association with quartz. A common example of a feldspathoid is nepheline ((Na, K)AlSiO4); compared to alkali feldspar, nepheline has an Al2O3:SiO2 ratio of 1:2, as opposed to 1:6 in alkali feldspar. Zeolites often have distinctive crystal habits, occurring in needles, plates, or blocky masses. They form in the presence of water at low temperatures and pressures, and have channels and voids in their structure. Zeolites have several industrial applications, especially in waste water treatment. Phyllosilicates Phyllosilicates consist of sheets of polymerized tetrahedra. They are bound at three oxygen sites, which gives a characteristic silicon:oxygen ratio of 2:5. Important examples include the mica, chlorite, and the kaolinite-serpentine groups. In addition to the tetrahedra, phyllosilicates have a sheet of octahedra (elements in six-fold coordination by oxygen) that balance out the basic tetrahedra, which have a negative charge (e.g. [Si4O10]4−). These tetrahedral (T) and octahedral (O) sheets are stacked in a variety of combinations to create phyllosilicate layers. Within an octahedral sheet, there are three octahedral sites in a unit structure; however, not all of the sites may be occupied. If only two of the three sites are occupied, the mineral is termed dioctahedral, whereas if all three are occupied it is termed trioctahedral. The layers are weakly bound by van der Waals forces, hydrogen bonds, or sparse ionic bonds, which causes a crystallographic weakness, in turn leading to a prominent basal cleavage among the phyllosilicates. The kaolinite-serpentine group consists of T-O stacks (the 1:1 clay minerals); their hardness ranges from 2 to 4, as the sheets are held by hydrogen bonds. The 2:1 clay minerals (pyrophyllite-talc) consist of T-O-T stacks, but they are softer (hardness from 1 to 2), as they are instead held together by van der Waals forces. These two groups of minerals are subgrouped by octahedral occupation; specifically, kaolinite and pyrophyllite are dioctahedral, whereas serpentine and talc are trioctahedral. Micas are also T-O-T-stacked phyllosilicates, but differ from the other T-O-T and T-O-stacked subclass members in that they incorporate aluminium into the tetrahedral sheets (clay minerals have Al3+ in octahedral sites). Common examples of micas are muscovite and the biotite series. Mica T-O-T layers are bonded together by metal ions, giving them a greater hardness than other phyllosilicate minerals, though they retain perfect basal cleavage. The chlorite group is related to the mica group, but has a brucite-like (Mg(OH)2) layer between the T-O-T stacks. Because of their chemical structure, phyllosilicates typically have flexible, elastic, transparent layers that are electrical insulators and can be split into very thin flakes. Micas can be used in electronics as insulators, in construction, as an optical filler, or even in cosmetics. Chrysotile, a species of serpentine, is the most common mineral species in industrial asbestos, as it is less dangerous in terms of health than amphibole asbestos. Inosilicates Inosilicates consist of tetrahedra repeatedly bonded in chains. These chains can be single, where a tetrahedron is bound to two others to form a continuous chain; alternatively, two chains can be merged to create double-chain silicates. Single-chain silicates have a silicon:oxygen ratio of 1:3 (e.g.
[Si2O6]4−), whereas the double-chain variety has a ratio of 4:11, e.g. [Si8O22]12−. Inosilicates contain two important rock-forming mineral groups; single-chain silicates are most commonly pyroxenes, while double-chain silicates are often amphiboles. Higher-order chains exist (e.g. three-member, four-member, five-member chains, etc.) but they are rare. The pyroxene group consists of 21 mineral species. Pyroxenes have a general structure formula of XY(Si2O6), where X is a site that can vary in coordination number from six to eight, while Y is an octahedral site. Most varieties of pyroxene consist of permutations of Ca2+, Fe2+ and Mg2+ to balance the negative charge on the backbone. Pyroxenes are common in the Earth's crust (about 10%) and are a key constituent of mafic igneous rocks. Amphiboles have great variability in chemistry, described variously as a "mineralogical garbage can" or a "mineralogical shark swimming in a sea of elements". The backbone of the amphiboles is the [Si8O22]12− double chain; it is balanced by cations in three possible positions, although the third position is not always used, and one element can occupy both remaining ones. Finally, the amphiboles are usually hydrated, that is, they have a hydroxyl group ([OH]−), although it can be replaced by a fluoride, a chloride, or an oxide ion. Because of the variable chemistry, there are over 80 species of amphibole, although variations, as in the pyroxenes, most commonly involve mixtures of Ca2+, Fe2+ and Mg2+. Several amphibole mineral species can have an asbestiform crystal habit. These asbestos minerals form long, thin, flexible, and strong fibres, which are electrical insulators, chemically inert and heat-resistant; as such, they have several applications, especially in construction materials. However, asbestos is a known carcinogen and causes various other illnesses, such as asbestosis; amphibole asbestos (anthophyllite, tremolite, actinolite, grunerite, and riebeckite) is considered more dangerous than chrysotile serpentine asbestos. Cyclosilicates Cyclosilicates, or ring silicates, have a ratio of silicon to oxygen of 1:3. Six-member rings are most common, with a base structure of [Si6O18]12−; examples include the tourmaline group and beryl. Other ring structures exist, with 3-, 4-, 8-, 9-, and 12-membered rings having been described. Cyclosilicates tend to be strong, with elongated, striated crystals. Tourmalines have a very complex chemistry that can be described by a general formula XY3Z6(BO3)3T6O18V3W. The T6O18 is the basic ring structure, where T is usually Si4+, but substitutable by Al3+ or B3+. Tourmalines can be subgrouped by the occupancy of the X site, and from there further subdivided by the chemistry of the W site. The Y and Z sites can accommodate a variety of cations, especially various transition metals; this variability in structural transition metal content gives the tourmaline group greater variability in colour. Other cyclosilicates include beryl, Al2Be3Si6O18, whose varieties include the gemstones emerald (green) and aquamarine (bluish). Cordierite is structurally similar to beryl, and is a common metamorphic mineral. Sorosilicates Sorosilicates, also termed disilicates, have tetrahedron-tetrahedron bonding at one oxygen, which results in a 2:7 ratio of silicon to oxygen. The resultant common structural element is the [Si2O7]6− group. The most common disilicates by far are members of the epidote group. Epidotes are found in a variety of geologic settings, ranging from mid-ocean ridges to granites to metapelites.
Epidotes are built around the [(SiO4)(Si2O7)]10− structure; for example, the mineral species epidote has calcium, aluminium, and ferric iron to charge balance: Ca2Al2(Fe3+, Al)(SiO4)(Si2O7)O(OH). The presence of iron as Fe3+ and Fe2+ helps buffer oxygen fugacity, which in turn is a significant factor in petrogenesis. Other examples of sorosilicates include lawsonite, a metamorphic mineral forming in the blueschist facies (subduction zone setting with low temperature and high pressure), and vesuvianite, which takes up a significant amount of calcium in its chemical structure. Orthosilicates Orthosilicates consist of isolated tetrahedra that are charge-balanced by other cations. Also termed nesosilicates, this type of silicate has a silicon:oxygen ratio of 1:4 (e.g. SiO4). Typical orthosilicates tend to form blocky equant crystals, and are fairly hard. Several rock-forming minerals are part of this subclass, such as the aluminosilicates, the olivine group, and the garnet group. The aluminosilicates – kyanite, andalusite, and sillimanite, all Al2SiO5 – are structurally composed of one [SiO4]4− tetrahedron and one Al3+ in octahedral coordination. The remaining Al3+ can be in six-fold coordination (kyanite), five-fold (andalusite) or four-fold (sillimanite); which mineral forms in a given environment depends on pressure and temperature conditions. In the olivine structure, the main olivine series of (Mg, Fe)2SiO4 consists of magnesium-rich forsterite and iron-rich fayalite. Both iron and magnesium are in octahedral coordination with oxygen. Other mineral species having this structure exist, such as tephroite, Mn2SiO4. The garnet group has a general formula of X3Y2(SiO4)3, where X is a large eight-fold coordinated cation, and Y is a smaller six-fold coordinated cation. There are six ideal endmembers of garnet, split into two groups. The pyralspite garnets have Al3+ in the Y position: pyrope (Mg3Al2(SiO4)3), almandine (Fe3Al2(SiO4)3), and spessartine (Mn3Al2(SiO4)3). The ugrandite garnets have Ca2+ in the X position: uvarovite (Ca3Cr2(SiO4)3), grossular (Ca3Al2(SiO4)3) and andradite (Ca3Fe2(SiO4)3). While there are two subgroups of garnet, solid solutions exist between all six end-members. Other orthosilicates include zircon, staurolite, and topaz. Zircon (ZrSiO4) is useful in geochronology as uranium can substitute for Zr4+; furthermore, because of its very resistant structure, it is difficult to reset it as a chronometer. Staurolite is a common metamorphic intermediate-grade index mineral. It has a particularly complicated crystal structure that was only fully described in 1986. Topaz (Al2SiO4(F, OH)2), often found in granitic pegmatites associated with tourmaline, is a common gemstone mineral. Non-silicates Native elements Native elements are those that are not chemically bonded to other elements. This mineral group includes native metals, semi-metals, and non-metals, and various alloys and solid solutions. The metals are held together by metallic bonding, which confers distinctive physical properties such as their shiny metallic lustre, ductility and malleability, and electrical conductivity. Native elements are subdivided into groups by their structure or chemical attributes. The gold group, with a cubic close-packed structure, includes metals such as gold, silver, and copper. The platinum group is similar in structure to the gold group. The iron-nickel group is characterized by several iron-nickel alloy species.
Two examples are kamacite and taenite, which are found in iron meteorites; these species differ by the amount of Ni in the alloy: kamacite has less than 5–7% nickel and is a variety of native iron, whereas the nickel content of taenite ranges from 7% to 37%. Arsenic group minerals consist of semi-metals, which have only some metallic traits; for example, they lack the malleability of metals. Native carbon occurs in two allotropes, graphite and diamond; the latter forms at very high pressure in the mantle, which gives it a much stronger structure than graphite. Sulfides The sulfide minerals are chemical compounds of one or more metals or semimetals with a chalcogen or pnictogen, of which sulfur is most common. Tellurium, arsenic, or selenium can substitute for the sulfur. Sulfides tend to be soft, brittle minerals with a high specific gravity. Many sulfides, such as pyrite, have a sulfurous smell when powdered. Sulfides are susceptible to weathering, and many readily dissolve in water; these dissolved minerals can be later redeposited, which creates enriched secondary ore deposits. Sulfides are classified by the ratio of the metal or semimetal to the sulfur, such as M:S equal to 2:1, or 1:1. Many sulfide minerals are economically important as metal ores; examples include sphalerite (ZnS), an ore of zinc, galena (PbS), an ore of lead, cinnabar (HgS), an ore of mercury, and molybdenite (MoS2), an ore of molybdenum. Pyrite (FeS2) is the most commonly occurring sulfide, and can be found in most geological environments. It is not, however, an ore of iron, but can instead be oxidized to produce sulfuric acid. Related to the sulfides are the rare sulfosalts, in which a metallic element is bonded to sulfur and a semimetal such as antimony, arsenic, or bismuth. Like the sulfides, sulfosalts are typically soft, heavy, and brittle minerals. Oxides Oxide minerals are divided into three categories: simple oxides, hydroxides, and multiple oxides. Simple oxides are characterized by O2− as the main anion and primarily ionic bonding. They can be further subdivided by the ratio of oxygen to the cations. The periclase group consists of minerals with a 1:1 ratio. Oxides with a 2:1 ratio include cuprite (Cu2O) and water ice. Corundum group minerals have a 2:3 ratio, and include minerals such as corundum (Al2O3) and hematite (Fe2O3). Rutile group minerals have a ratio of 1:2; the eponymous species, rutile (TiO2), is the chief ore of titanium; other examples include cassiterite (SnO2; ore of tin), and pyrolusite (MnO2; ore of manganese). In hydroxides, the dominant anion is the hydroxyl ion, OH−. Bauxites are the chief aluminium ore, and are a heterogeneous mixture of the hydroxide minerals diaspore, gibbsite, and boehmite; they form in areas with a very high rate of chemical weathering (mainly tropical conditions). Finally, multiple oxides are compounds of two metals with oxygen. A major group within this class is the spinels, with a general formula of X2+Y3+2O4. Examples of species include spinel (MgAl2O4), chromite (FeCr2O4), and magnetite (Fe3O4). The latter is readily distinguishable by its strong magnetism, which occurs as it has iron in two oxidation states (Fe2+Fe3+2O4), which makes it a multiple oxide instead of a single oxide. Halides The halide minerals are compounds in which a halogen (fluorine, chlorine, iodine, or bromine) is the main anion. These minerals tend to be soft, weak, brittle, and water-soluble.
Common examples of halides include halite (NaCl, table salt), sylvite (KCl), and fluorite (CaF2). Halite and sylvite commonly form as evaporites, and can be dominant minerals in chemical sedimentary rocks. Cryolite, Na3AlF6, is a key mineral in the extraction of aluminium from bauxites; however, since its only significant occurrence, in a granitic pegmatite at Ivittuut, Greenland, has been depleted, synthetic cryolite is now made from fluorite. Carbonates The carbonate minerals are those in which the main anionic group is carbonate, [CO3]2−. Carbonates tend to be brittle, many have rhombohedral cleavage, and all react with acid. Due to the last characteristic, field geologists often carry dilute hydrochloric acid to distinguish carbonates from non-carbonates. The reaction of acid with carbonates, most commonly found as the polymorphs calcite and aragonite (CaCO3), relates to the dissolution and precipitation of the mineral, which is key to the formation of limestone caves, features within them such as stalactites and stalagmites, and karst landforms. Carbonates are most often formed as biogenic or chemical sediments in marine environments. The carbonate group is structurally a triangle, where a central C4+ cation is surrounded by three O2− anions; different groups of minerals form from different arrangements of these triangles. The most common carbonate mineral is calcite, which is the primary constituent of sedimentary limestone and metamorphic marble. Calcite, CaCO3, can have a significant percentage of magnesium substituting for calcium. Under high-Mg conditions, its polymorph aragonite will form instead; the marine geochemistry in this regard can be described as an aragonite or calcite sea, depending on which mineral preferentially forms. Dolomite is a double carbonate, with the formula CaMg(CO3)2. Secondary dolomitization of limestone is common, in which calcite or aragonite are converted to dolomite; this reaction increases pore space (the unit cell volume of dolomite is 88% that of calcite), which can create a reservoir for oil and gas. These two mineral species are members of eponymous mineral groups: the calcite group includes carbonates with the general formula XCO3, and the dolomite group constitutes minerals with the general formula XY(CO3)2. Sulfates The sulfate minerals all contain the sulfate anion, [SO4]2−. They tend to be transparent to translucent, soft, and many are fragile. Sulfate minerals commonly form as evaporites, where they precipitate out of evaporating saline waters. Sulfates can also be found in hydrothermal vein systems associated with sulfides, or as oxidation products of sulfides. Sulfates can be subdivided into anhydrous and hydrous minerals. The most common hydrous sulfate by far is gypsum, CaSO4⋅2H2O. It forms as an evaporite, and is associated with other evaporites such as calcite and halite; if it incorporates sand grains as it crystallizes, gypsum can form desert roses. Gypsum has very low thermal conductivity and maintains a low temperature when heated as it loses that heat by dehydrating; as such, gypsum is used as an insulator in materials such as plaster and drywall. The anhydrous equivalent of gypsum is anhydrite; it can form directly from seawater in highly arid conditions. The barite group has the general formula XSO4, where the X is a large 12-coordinated cation. Examples include barite (BaSO4), celestine (SrSO4), and anglesite (PbSO4); anhydrite is not part of the barite group, as the smaller Ca2+ is only in eight-fold coordination.
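The insulating, fire-resistant behaviour of gypsum noted above comes from the large amount of structural water it can release on heating; a back-of-the-envelope check of that amount, using rounded standard atomic masses (an assumption, not data from this article), is sketched below:

# Mass fraction of structural water in gypsum, CaSO4·2H2O
M = {"Ca": 40.08, "S": 32.06, "O": 16.00, "H": 1.008}
m_caso4 = M["Ca"] + M["S"] + 4 * M["O"]      # anhydrite part, about 136.1 g/mol
m_water = 2 * (2 * M["H"] + M["O"])          # two waters, about 36.0 g/mol
fraction = m_water / (m_caso4 + m_water)
print(f"water released on full dehydration: {fraction:.1%}")   # roughly 21%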
Phosphates The phosphate minerals are characterized by the tetrahedral [PO4]3− unit, although the structure can be generalized, and phosphorus is replaced by antimony, arsenic, or vanadium. The most common phosphate is the apatite group; common species within this group are fluorapatite (Ca5(PO4)3F), chlorapatite (Ca5(PO4)3Cl) and hydroxylapatite (Ca5(PO4)3(OH)). Minerals in this group are the main crystalline constituents of teeth and bones in vertebrates. The relatively abundant monazite group has a general structure of ATO4, where T is phosphorus or arsenic, and A is often a rare-earth element (REE). Monazite is important in two ways: first, as a REE "sink", it can sufficiently concentrate these elements to become an ore; secondly, monazite group elements can incorporate relatively large amounts of uranium and thorium, which can be used in monazite geochronology to date the rock based on the decay of the U and Th to lead. Organic minerals The Strunz classification includes a class for organic minerals. These rare compounds contain organic carbon, but can be formed by a geologic process. For example, whewellite, CaC2O4⋅H2O is an oxalate that can be deposited in hydrothermal ore veins. While hydrated calcium oxalate can be found in coal seams and other sedimentary deposits involving organic matter, the hydrothermal occurrence is not considered to be related to biological activity. Recent advances Mineral classification schemes and their definitions are evolving to match recent advances in mineral science. Recent changes have included the addition of an organic class, in both the new Dana and the Strunz classification schemes. The organic class includes a very rare group of minerals with hydrocarbons. The IMA Commission on New Minerals and Mineral Names adopted in 2009 a hierarchical scheme for the naming and classification of mineral groups and group names and established seven commissions and four working groups to review and classify minerals into an official listing of their published names. According to these new rules, "mineral species can be grouped in a number of different ways, on the basis of chemistry, crystal structure, occurrence, association, genetic history, or resource, for example, depending on the purpose to be served by the classification." Astrobiology It has been suggested that biominerals could be important indicators of extraterrestrial life and thus could play an important role in the search for past or present life on Mars. Furthermore, organic components (biosignatures) that are often associated with biominerals are believed to play crucial roles in both pre-biotic and biotic reactions. In January 2014, NASA reported that studies by the Curiosity and Opportunity rovers on Mars would search for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on the planet Mars became a primary NASA objective.
Physical sciences
Earth science
null
19054
https://en.wikipedia.org/wiki/Marble
Marble
Marble is a metamorphic rock consisting of carbonate minerals (most commonly calcite (CaCO3) or dolomite (CaMg(CO3)2) that have recrystallized under the influence of heat and pressure. It has a crystalline texture, and is typically not foliated (layered), although there are exceptions. In geology, the term marble refers to metamorphosed limestone, but its use in stonemasonry more broadly encompasses unmetamorphosed limestone. Marble is commonly used for sculpture and as a building material. Etymology The word "marble" derives from the Ancient Greek (), from (), "crystalline rock, shining stone", perhaps from the verb (), "to flash, sparkle, gleam"; R. S. P. Beekes has suggested that a "Pre-Greek origin is probable". This stem is also the ancestor of the English word "marmoreal", meaning "marble-like." While the English term "marble" resembles the French , most other European languages (with words like "marmoreal") more closely resemble the original Ancient Greek. Geology Marble is a rock resulting from metamorphism of sedimentary carbonate rocks, most commonly limestone or dolomite. Metamorphism causes variable re-crystallization of the original carbonate mineral grains. The resulting marble rock is typically composed of an interlocking mosaic of carbonate crystals. Primary sedimentary textures and structures of the original carbonate rock (protolith) have typically been modified or destroyed. Pure white marble is the result of metamorphism of a very pure (silicate-poor) limestone or dolomite protolith. The characteristic swirls and veins of many colored marble varieties, sometimes called striations, are usually due to various mineral impurities such as clay, silt, sand, iron oxides, or chert which were originally present as grains or layers in the limestone. Green coloration is often due to serpentine resulting from originally magnesium-rich limestone or dolomite with silica impurities. These various impurities have been mobilized and recrystallized by the intense pressure and heat of the metamorphism. Chemistry Degradation by acids Acids react with the calcium carbonate in marble, producing carbonic acid (which decomposes quickly to CO2 and H2O) and other soluble salts : CaCO3(s) + 2H+(aq) → Ca2+(aq) + CO2(g) + H2O (l) Outdoor marble statues, gravestones, or other marble structures are damaged by acid rain whether by carbonation, sulfation or the formation of "black-crust" (accumulation of calcium sulphate, nitrates and carbon particles). Vinegar and other acidic solutions should be avoided in the cleaning of marble products. Crystallization Crystallization refers to a method of imparting a glossy, more durable finish on to a marble floor (CaCO3). It involves polishing the surface with an acidic solution and a steel wool pad on a flooring machine. The chemical reaction below shows a typical process using magnesium fluorosilicate (MgSiF6) and hydrochloric acid (HCl) taking place. CaCO3(s) + MgSiF6(l) + 2HCl (l) → MgCl2(s) + CaSiF6(s) + CO2(g) + H2O(l) The resulting calcium hexafluorosilicate (CaSiF6) is bonded to the surface of the marble. This is harder, more glossy and stain resistant compared to the original surface. The other often used method of finishing marble is to polish with oxalic acid (H2C2O4), an organic acid. The resulting reaction is as follows: CaCO3(s) + H2C2O4(l) → CaC2O4(s) + CO2(g) + H2O(l) In this case the calcium oxalate (CaC2O4) formed in the reaction is washed away with the slurry, leaving a surface that has not been chemically changed. 
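To make the acid reactions above more concrete, the short sketch below estimates the mass of CO2 released when a given mass of calcite dissolves according to CaCO3 + 2H+ → Ca2+ + CO2 + H2O. It is a minimal illustration rather than anything from the source; the molar masses are rounded standard values.

    # Acid attack on calcite: CaCO3 + 2H+ -> Ca2+ + CO2 + H2O (one mole of CO2 per mole of CaCO3).
    M_CACO3 = 100.09  # g/mol, rounded
    M_CO2 = 44.01     # g/mol, rounded

    def co2_released_grams(calcite_grams):
        """Grams of CO2 released when the given mass of CaCO3 is fully dissolved by acid."""
        moles_caco3 = calcite_grams / M_CACO3
        return moles_caco3 * M_CO2

    # Example: dissolving 1 kg of pure calcite releases roughly 440 g of CO2.
    print(f"{co2_released_grams(1000.0):.0f} g CO2 per kg of calcite")

The same mole-for-mole bookkeeping applies to the crystallization reactions quoted above, with the relevant molar masses substituted.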
Microbial degradation The haloalkaliphilic methylotrophic bacterium Methylophaga murata was isolated from deteriorating marble in the Kremlin. Bacterial and fungal degradation was detected in four samples of marble from Milan Cathedral; black Cladosporium attacked dried acrylic resin using melanin. Types and features Examples of notable marble varieties and locations Features Marble is a rock composed of calcium and magnesium carbonate, mostly white and pink. Common marble varieties are granular limestone or dolomite. The hardness of marble is high because the internal structure of the rock becomes very uniform after long-term natural ageing and internal stresses dissipate, so marble does not deform with temperature changes and has strong wear resistance. It is a very popular building material. The following table is a summary of the features of marble. Uses Sculpture White marble has been prized for its use in sculptures since classical times. This preference has to do with its softness, which makes it easier to carve, its relative isotropy and homogeneity, and its relative resistance to shattering. Also, the low index of refraction of calcite allows light to penetrate 12.7 to 38 millimeters into the stone before being scattered out, resulting in the characteristic waxy look that brings a lifelike luster to marble sculptures of any kind; this is why many sculptors preferred and still prefer marble for sculpting the human form. Construction Construction marble is a stone which is composed of calcite, dolomite or serpentine that is capable of taking a polish. More generally in construction, specifically the dimension stone trade, the term marble is used for any crystalline calcitic rock (and some non-calcitic rocks) useful as building stone. For example, Tennessee marble is really a dense granular fossiliferous gray to pink to maroon Ordovician limestone that geologists call the Holston Formation. Ashgabat, the capital city of Turkmenistan, was recorded in the 2013 Guinness Book of Records as having the world's highest concentration of white marble buildings. Production The extraction of marble is performed by quarrying. Blocks are favoured for most purposes, and can be created through various techniques, including drilling and blasting, water jet and wedge methods. Limestones are often commercially and historically referred to as marble, which differs from the geological definition. Locations Marble production was dominated by four countries that accounted for almost half of world production of marble and decorative stone. China and Italy were the world leaders, representing 34% and 19% of world production respectively, followed by India and Spain with 16% and 13% respectively. In 2018 Turkey was the world leader in marble export, with a 42% share of global marble trade, followed by Italy with 18% and Greece with 10%. The largest importer of marble in 2018 was China with a 64% market share, followed by India with 11% and Italy with 5%. Ancient times White marbles throughout the Mediterranean basin were widely utilized during the Roman period. Extraction centers were unevenly distributed across the Italian Peninsula, mainland Greece, the Aegean Islands, Asia Minor, and smaller hubs like those in the Iberian Peninsula. The need for extensive trade arose due to this imbalance, leading to the widespread exchange of marble objects, including building elements, sculptures, and sarcophagi. 
There was a significant increase in the distribution of white marble from the late 1st century BC to the end of the 2nd century AD. A gradual decline in distribution started in the 3rd century AD. United States According to the United States Geological Survey, U.S. domestic marble production in 2006 was 46,400 tons valued at about $18.1 million, compared to 72,300 tons valued at $18.9 million in 2005. Crushed marble production (for aggregate and industrial uses) in 2006 was 11.8 million tons valued at $116 million, of which 6.5 million tons was finely ground calcium carbonate and the rest was construction aggregate. For comparison, 2005 crushed marble production was 7.76 million tons valued at $58.7 million, of which 4.8 million tons was finely ground calcium carbonate and the rest was construction aggregate. U.S. dimension marble demand is about 1.3 million tons. The DSAN World Demand for (finished) Marble Index has shown a growth of 12% annually for the 2000–2006 period, compared to 10.5% annually for the 2000–2005 period. The largest dimension marble application is tile. Palestine The stone and marble industry is one of the largest industries in Palestine, contributing 20–25% of its total industrial revenues and generating US$400–450 million in revenue annually. The industry employs 15,000–20,000 workers in 1,200–1,700 facilities across the West Bank, and amounts to 4.5% of the nation's GDP. The vast majority of the industry's exports are to Israel. Marble in the geologic sense does not naturally outcrop in Palestine, and the vast majority of commercially labeled marble produced in Palestine would be geologically classified as limestone. Occupational safety Particulate air pollution exposure has been found to be elevated in the marble production industry. Exposure to the dust produced by cutting marble could impair lung function or cause lung diseases such as silicosis in workers. Skin and eye problems are also a potential hazard. Mitigations such as dust filters or dust suppression are suggested, but more research needs to be carried out on the efficacy of such safety measures. In the United States, the Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for marble exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. Dust, debris and temperature fluctuations from working marble can endanger the eye health of employees. Workers involved in marble processing should therefore be provided with eye protection equipment, and education on occupational health risks and preventive measures should be strengthened. Cultural associations As the favorite medium for Greek and Roman sculptors and architects (see classical sculpture), marble has become a cultural symbol of tradition and refined taste. Its extremely varied and colorful patterns make it a favorite decorative material. Places named after the stone include Marblehead, Massachusetts; Marblehead, Ohio; Marble Arch, London; the Sea of Marmara; India's Marble Rocks; and the towns of Marble, Minnesota; Marble, Colorado; Marble Falls, Texas; and Marble Hill, Manhattan, New York. The Elgin Marbles are marble sculptures from the Parthenon in Athens that are on display in the British Museum. 
Impact on the environment Total world quarrying production in 2019 was approximately 316 million tonnes; however, quarrying waste accounted for 53% of this total production. In the process of marble mining and processing, around half of the excavated material will be waste; this is often then used as chips for flooring or wall finishes, and for uses for which high-calcium limestone is suitable. Sustainability Marble sludge waste can be used as a mineral filler in water-based paints. Using ground calcium carbonate as a filler in paint production can improve the brightness, hiding power and application performance of paint, and can also replace expensive pigments such as titanium dioxide. Recycling marble waste keeps a large amount of material out of landfill, reducing environmental pollution and improving the sustainability of marble production. Converting waste into economic income and using it to restore degraded soil can further improve the environment. Cleaning and preservation Marble is soft and porous, so it is easily stained by colored liquids and scratches easily; maintenance and cleaning are therefore particularly important. Preservation Prevent sand and dust from contacting the marble surface, and avoid exposing marble surfaces to alcohol, coloring agents and acidic liquids, which corrode them. Cleaning As a floor material, marble is easily scratched. Grit and dust on a marble floor should first be removed with a vacuum cleaner, after which a steam cleaner can be used to remove other dirt. A mild, pH-neutral, non-abrasive soap should be used for cleaning marble surfaces, applied with a soft foam, cotton cloth or rag. Gallery
Physical sciences
Petrology
null
19159
https://en.wikipedia.org/wiki/Mile
Mile
The mile, sometimes the international mile or statute mile to distinguish it from other miles, is a British imperial unit and United States customary unit of length; both are based on the older English unit of length equal to 5,280 English feet, or 1,760 yards. The statute mile was standardised between the Commonwealth of Nations and the United States by an international agreement in 1959, when it was formally redefined with respect to SI units as exactly 1,609.344 metres. With qualifiers, mile is also used to describe or translate a wide range of units derived from or roughly equivalent to the Roman mile (roughly 1.48 km), such as the nautical mile (now exactly 1,852 metres), the Italian mile, and the Chinese mile (now exactly 500 metres). The Romans divided their mile into 5,000 pedēs ("feet"), but the greater importance of furlongs in Elizabethan-era England meant that the statute mile was made equivalent to 8 furlongs or 5,280 feet in 1593. This form of the mile then spread across the British Empire, some successor states of which continue to employ the mile. The US Geological Survey now employs the metre for official purposes, but legacy data from its 1927 geodetic datum has meant that a separate US survey mile continues to see some use, although it was officially phased out in 2022. While most countries replaced the mile with the kilometre when switching to the International System of Units (SI), the international mile continues to be used in some countries, such as the United Kingdom, the United States, and a number of countries with fewer than one million inhabitants, most of which are UK or US territories or have close historical ties with the UK or US. Name The modern English word mile derives from Middle English mile and Old English mīl, which was cognate with all other Germanic terms for miles. These derived from the nominal ellipsis form of Latin mīlle passus ('mile') or mīlia passuum ('miles'), the Roman mile of one thousand paces. The present international mile is usually what is understood by the unqualified term mile. When this distance needs to be distinguished from the nautical mile, the international mile may also be described as a land mile or statute mile. In British English, statute mile may refer to the present international mile or to any other form of English mile since the 1593 Act of Parliament, which set it as a distance of 1,760 yards (5,280 feet). Under American law, however, statute mile refers to the US survey mile. Foreign and historical units translated into English as miles usually employ a qualifier to describe the kind of mile being used, but this may be omitted if it is obvious from the context, such as a discussion of the 2nd-century Antonine Itinerary describing its distances in terms of miles rather than Roman miles. Abbreviation The mile has been variously abbreviated in English—with and without a trailing period—as "mi", "M", "ml", and "m". The American National Institute of Standards and Technology now uses and recommends "mi" to avoid confusion with the SI metre (m) and millilitre (ml). However, derived units such as miles per hour or miles per gallon continue to be abbreviated as "mph" and "mpg" rather than "mi/h" and "mi/gal". In the United Kingdom, road signs use "m" as the abbreviation for mile, though height and width restrictions also use "m" as the symbol for the metre, which may be displayed alongside feet and inches. The BBC style holds that "there is no acceptable abbreviation for 'miles' and so it should be spelled out when used in describing areas." 
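As a quick cross-check of the definitional relationships quoted above (1,760 yards, 5,280 feet, and the 1959 value of exactly 1,609.344 metres), the following few lines of arithmetic are offered as a sketch; they are not from the source.

    # Cross-check of the statute/international mile definitions quoted in the text.
    YARDS_PER_MILE = 1760
    FEET_PER_YARD = 3
    METRES_PER_YARD = 0.9144   # exact, per the 1959 international yard and pound agreement

    feet_per_mile = YARDS_PER_MILE * FEET_PER_YARD       # 5,280 ft
    metres_per_mile = YARDS_PER_MILE * METRES_PER_YARD   # 1,609.344 m

    assert feet_per_mile == 5280
    assert abs(metres_per_mile - 1609.344) < 1e-9
    print(feet_per_mile, "ft =", metres_per_mile, "m")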
Historical Roman The Roman mile (,  "thousand paces";  m.p.; also and ) consisted of a thousand paces as measured by every other step—as in the total distance of the left foot hitting the ground 1,000 times. When Roman legionaries were well-fed and harshly driven in good weather, they thus created longer miles. The distance was indirectly standardised by Agrippa's establishment of a standard Roman foot (Agrippa's own) in 29 BC, and the definition of a pace as 5 feet. An Imperial Roman mile thus denoted 5,000 Roman feet. Surveyors and specialised equipment such as the decempeda and dioptra then spread its use. In modern times, Agrippa's Imperial Roman mile was empirically estimated to have been about in length, slightly less than the of the modern international mile. In Hellenic areas of the Empire, the Roman mile (, ) was used beside the native Greek units as equivalent to 8 stadia of 600 Greek feet. The continued to be used as a Byzantine unit and was also used as the name of the zero mile marker for the Byzantine Empire, the Milion, located at the head of the Mese near Hagia Sophia. The Roman mile spread throughout Europe, with its local variations giving rise to the different units. Also arising from the Roman mile is the milestone. All roads radiated out from the Roman Forum throughout the Empire – 50,000 (Roman) miles of stone-paved roads. At every mile was placed a shaped stone. Originally, these were obelisks made from granite, marble, or whatever local stone was available. On these was carved a Roman numeral, indicating the number of miles from the centre of Rome – the Forum. Hence, one can know how far one is from Rome. Italian The Italian mile (,  ) was traditionally considered a direct continuation of the Roman mile, equal to 1000 paces, although its actual value over time or between regions could vary greatly. It was often used in international contexts from the Middle Ages into the 17th century and is thus also known as the "geographical mile", although the geographical mile is now a separate standard unit. Arabic The Arabic mile (, ) was not the common Arabic unit of length; instead, Arabs and Persians traditionally used the longer parasang or "Arabic league". The Arabic mile was, however, used by medieval geographers and scientists and constituted a kind of precursor to the nautical or geographical mile. It extended the Roman mile to fit an astronomical approximation of 1 arcminute of latitude measured directly north-and-south along a meridian. Although the precise value of the approximation remains disputed, it was somewhere between 1.8 and 2.0 km. English The "old English mile" of the medieval and early modern periods varied but seems to have measured about 1.3 international miles (2.1 km). The old English mile varied over time and location within England. The old English mile has also been defined as 79,200 or 79,320 inches (1.25 or 1.2519 statute miles). The English long continued the Roman computations of the mile as 5,000 feet, 1,000 paces, or 8 longer divisions, which they equated with their "furrow's length" or furlong. The origins of English units are "extremely vague and uncertain", but seem to have been a combination of the Roman system with native British and Germanic systems both derived from multiples of the barleycorn. Probably by the reign of Edgar in the 10th century, the nominal prototype physical standard of English length was an arm-length iron bar (a yardstick) held by the king at Winchester; the foot was then one-third of its length. 
Henry I was said to have made a new standard in 1101 based on his own arm. Following the issuance of Magna Carta in 1215, the barons of Parliament directed John and his son to keep the king's standard measure () and weight at the Exchequer, which thereafter verified local standards until its abolition in the 19th century. New brass standards are known to have been constructed under Henry VII and Elizabeth I. Arnold's Customs of London recorded a mile shorter than previous ones, coming to 0.947 international miles (5,000 feet) or 1.524 km. Statute The English statute mile was established by a Weights and Measures Act of Parliament in 1593 during the reign of Queen Elizabeth I. The act on the Composition of Yards and Perches had shortened the length of the foot and its associated measures, causing the two methods of determining the mile to diverge. Owing to the importance of the surveyor's rod in deeds and surveying undertaken under Henry VIII, decreasing the length of the rod by would have amounted to a significant tax increase. Parliament instead opted to maintain the mile of 8 furlongs (which were derived from the rod) and to increase the number of feet per mile from the old Roman value. The applicable passage of the statute reads: "A Mile shall contain eight Furlongs, every Furlong forty Poles, and every Pole shall contain sixteen Foot and half." The statute mile therefore contained 5,280 feet or 1,760 yards. The distance was not uniformly adopted. Robert Morden had multiple scales on his 17th-century maps which included continuing local values: his map of Hampshire, for example, bore two different "miles" with a ratio of and his map of Dorset had three scales with a ratio of . In both cases, the traditional local units remained longer than the statute mile. The English statute mile was superseded in 1959 by the international mile by international agreement. Welsh The Welsh mile ( or ) was 3 statute miles and 1,470 yards long (6.17 km). It comprised 9,000 paces (), each of 3 Welsh feet () of 9 inches (). (The Welsh inch is usually reckoned as equivalent to the English inch.) Along with other Welsh units, it was said to have been codified under Dyfnwal the Bald and Silent and retained unchanged by Hywel the Good. Along with other Welsh units, it was discontinued following the conquest of Wales by Edward I of England in the 13th century. Scots The Scots mile was longer than the English mile, as mentioned by Robert Burns in the first verse of his poem "Tam o' Shanter". It comprised 8 (Scots) furlongs divided into 320 falls or faws (Scots rods). It varied from place to place but the most accepted equivalencies are 1,976 Imperial yards (1.123 statute miles or 1.81 km). It was legally abolished three times: first by a 1685 act of the Scottish Parliament, again by the 1707 Treaty of Union with England, and finally by the Weights and Measures Act 1824. It had continued in use as a customary unit through the 18th century but had become obsolete by its final abolition. Irish The Irish mile ( or ) measured 2,240 yards: approximately 1.27 statute miles or 2.048 kilometres. It was used in Ireland from the 16th century plantations until the 19th century, with residual use into the 20th century. The units were based on "English measure" but used a linear perch measuring as opposed to the English rod of . Dutch The Dutch mile () has had different definitions throughout history. One of the older definitions was 5,600 ells. 
But the length of an ell was not standardised, so that the length of a mile could range between 3,280 m and 4,280 m. In the sixteenth century, the Dutch had three different miles: small (), medium (), and large (). The Dutch mile had the historical definition of one hour's walking (), which was defined as 24 stadia, 3000 paces, or 15,000 Amsterdam or Rhineland feet (respectively 4,250 m or 4,710 m). The common Dutch mile was 32 stadia, 4,000 paces, or 20,000 feet (5,660 m or 6,280 m). The large mile was defined as 5000 paces. The common Dutch mile was preferred by mariners, equating with 15 to one degree of latitude or one degree of longitude on the equator. This was originally based upon Ptolemy's underestimate of the Earth's circumference. The ratio of 15 Dutch miles to a degree remained fixed while the length of the mile was changed as with improved calculations of the circumference of the Earth. In 1617, Willebrord Snellius calculated a degree of the circumference of the Earth at 28,500 (within 3.5% of the actual value), which resulted in a Dutch mile of 1900 rods. By the mid-seventeenth century, map scales assigned 2000 rods to the common Dutch mile, which equalled around 7,535 m (reducing the discrepancy with latitude measurement to less than 2%). The metric system was introduced in the Netherlands in 1816, and the metric mile became a synonym for the kilometre, being exactly 1,000 m. Since 1870, the term was replaced by the equivalent . Today, the word is no longer used, except as part of certain proverbs and compound terms like ("miles away"). German The German mile () was 24,000 German feet. The standardised Austrian mile used in southern Germany and the Austrian Empire was 7.586 km; the Prussian mile used in northern Germany was 7.5325 km. Following its standardisation by Ole Rømer in the late 17th century, the Danish mile () was precisely equal to the Prussian mile and likewise divided into 24,000 feet. These were sometimes treated as equivalent to 7.5 km. Earlier values had varied: the , for instance, had been 11.13 km. The Germans also used a longer version of the geographical mile. Breslau The Breslau mile, used in Breslau, and from 1630 officially in all of Silesia, equal to 11,250 ells, or about 6,700 meters. The mile equaled the distance from the Piaskowa Gate all the way to Psie Pole (Hundsfeld). By rolling a circle with a radius of 5 ells through Piaskowa Island, Ostrów Tumski and suburban tracts, passing eight bridges on the way, the standard Breslau mile was determined. Saxon The Saxon post mile ( or , introduced on occasion of a survey of the Saxon roads in the 1700s, corresponded to 2,000 Dresden rods, equivalent to 9.062 kilometres. Hungarian The Hungarian mile ( or ) varied from 8.3790 km to 8.9374 km before being standardised as 8.3536 km. Portuguese The Portuguese mile () used in Portugal and Brazil was 2.0873 km prior to metrication. Russian The Russian mile ( or , ) was 7.468 km, divided into 7 versts. Croatian The Croatian mile (), first devised by the Jesuit Stjepan Glavač on a 1673 map, is the length of an arc of the equator subtended by ° or 11.13 km exactly. The previous Croatian mile, now known as the "ban mile" (), had been the Austrian mile given above. Ottoman The Ottoman mile was 1,894.35 m, which was equal to 5,000 Ottoman foot. After 1933, the Ottoman mile was replaced with the modern Turkish mile (1,853.181 m). 
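For comparison, the kilometre equivalents quoted in this section for various historical miles can be gathered into a small lookup table; the values below are simply those given in the text, and the snippet itself is only an illustrative sketch.

    # Kilometre equivalents of historical "miles", as quoted in the surrounding text.
    MILES_KM = {
        "international": 1.609344,
        "Scots": 1.81,
        "Ottoman": 1.89435,
        "Irish": 2.048,
        "Portuguese": 2.0873,
        "Welsh": 6.17,
        "Russian": 7.468,
        "Prussian": 7.5325,
        "Austrian": 7.586,
        "Hungarian (standardised)": 8.3536,
        "Saxon post": 9.062,
        "Croatian": 11.13,
    }

    for name, km in sorted(MILES_KM.items(), key=lambda item: item[1]):
        print(f"{name:>25}: {km:8.4f} km = {km / 1.609344:5.2f} international miles")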
Japanese The CJK Compatibility Unicode block contains square-format versions of Japanese names for measurement units as written in katakana script. Among them, there is , after . International The international mile is precisely equal to (or  km as a fraction). It was established as part of the 1959 international yard and pound agreement reached by the United States, the United Kingdom, Canada, Australia, New Zealand, and the Union of South Africa, which resolved small but measurable differences that had arisen from separate physical standards each country had maintained for the yard. As with the earlier statute mile, it continues to comprise 1,760 yards or 5,280 feet. The old Imperial value of the yard was used in converting measurements to metric values in India in a 1976 Act of the Indian Parliament. However, the current National Topographic Database of the Survey of India is based on the metric WGS-84 datum, which is also used by the Global Positioning System. The difference from the previous standards was 2 ppm, or about 3.2 millimetres ( inch) per mile. The US standard was slightly longer and the old Imperial standards had been slightly shorter than the international mile. When the international mile was introduced in English-speaking countries, the basic geodetic datum in America was the North American Datum of 1927 (NAD27). This had been constructed by triangulation based on the definition of the foot in the Mendenhall Order of 1893, with 1 foot =  (≈0.304800609601) metres and the definition was retained for data derived from NAD27, but renamed the US survey foot to distinguish it from the international foot. Thus a survey mile =  × 5280 (≈1609.347218694) metres. An international mile = 1609.344 / ( × 5280) (=0.999998) survey miles. The exact length of the land mile varied slightly among English-speaking countries until the international yard and pound agreement in 1959 established the yard as exactly 0.9144 metres, giving a mile of exactly 1,609.344 metres. The US adopted this international mile for most purposes, but retained the pre-1959 mile for some land-survey data, terming it the U. S. survey mile. In the United States, statute mile normally refers to the survey mile, about 3.219 mm ( inch) longer than the international mile (the international mile is exactly 0.0002% less than the US survey mile). While most countries abandoned the mile when switching to the metric system, the international mile continues to be used in some countries, such as Liberia, Myanmar, the United Kingdom and the United States. It is also used in a number of territories with less than a million inhabitants, most of which are UK or US territories, or have close historical ties with the UK or US: American Samoa, Bahamas, Belize, British Virgin Islands, Cayman Islands, Dominica, Falkland Islands, Grenada, Guam, The N. Mariana Islands, Samoa, St. Lucia, St. Vincent & The Grenadines, St. Helena, St. Kitts & Nevis, the Turks & Caicos Islands, and the US Virgin Islands. The mile is even encountered in Canada, though this is predominantly in rail transport and horse racing, as the roadways have been metricated since 1977. Ireland gradually replaced miles with kilometres, including in speed measurements; the process was completed in 2005. US survey The US survey mile is 5,280 US survey feet, or 1,609.347 metres and 0.30480061 metres respectively. Both are very slightly longer than the international mile and international foot. 
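The survey-mile figures just quoted follow directly from the Mendenhall definition of the foot as the ratio 1200/3937 m, consistent with the ≈0.304800609601 m value given above. The sketch below is not from the source; it simply reproduces those figures and the roughly 2 ppm (about 3.2 mm per mile) difference.

    from fractions import Fraction

    SURVEY_FOOT_M = Fraction(1200, 3937)          # US survey foot (Mendenhall Order of 1893)
    INTERNATIONAL_FOOT_M = Fraction(3048, 10000)  # 0.3048 m exactly (1959 agreement)

    survey_mile_m = 5280 * SURVEY_FOOT_M                 # ~1609.347218694 m
    international_mile_m = 5280 * INTERNATIONAL_FOOT_M   # 1609.344 m exactly

    print(float(survey_mile_m))                          # 1609.3472186944...
    print(float(international_mile_m))                   # 1609.344
    print(float(international_mile_m / survey_mile_m))   # 0.999998 (about 2 ppm smaller)
    print(float(survey_mile_m - international_mile_m) * 1000, "mm per mile")  # ~3.2 mm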
In the United States, the term statute mile formally refers to the survey mile, but for most purposes, the difference of less than between the survey mile and the international mile (1609.344 metres exactly) is insignificant—one international mile is US survey miles—so statute mile can be used for either. But in some cases, such as in the US State Plane Coordinate Systems (SPCSs), which can stretch over hundreds of miles, the accumulated difference can be significant, so it is important to note that the reference is to the US survey mile. The United States redefined its yard in 1893, and this resulted in US and Imperial measures of distance having very slightly different lengths. The North American Datum of 1983 (NAD83), which replaced the NAD27, is defined in metres. State Plane Coordinate Systems were then updated, but the National Geodetic Survey left individual states to decide which (if any) definition of the foot they would use. All State Plane Coordinate Systems are defined in metres, and 42 of the 50 states only use the metre-based State Plane Coordinate Systems. However, eight states also have State Plane Coordinate Systems defined in feet, seven of them in US survey feet and one in international feet. State legislation in the US is important for determining which conversion factor from the metric datum is to be used for land surveying and real estate transactions, even though the difference (2 ppm) is hardly significant, given the precision of normal surveying measurements over short distances (usually much less than a mile). Twenty-four states have legislated that surveying measures be based on the US survey foot, eight have legislated that they be based on the international foot, and eighteen have not specified which conversion factor to use. SPCS 83 legislation refers to state legislation that has been passed or updated using the newer 1983 NAD data. Most states have done so. Two states, Alaska and Missouri, and two jurisdictions, Guam and Puerto Rico, do not specify which foot to use. Two states, Alabama and Hawaii, and four jurisdictions, Washington, DC, US Virgin Islands, American Samoa and Northern Mariana Islands, do not have SPCS 83 legislation. In October 2019, US National Geodetic Survey and National Institute of Standards and Technology announced their joint intent to retire the US survey foot and US survey mile, as permitted by their 1959 decision, with effect on January 1, 2023. Nautical The nautical mile was originally defined as one minute of arc along a meridian of the Earth. Navigators use dividers to step off the distance between two points on the navigational chart, then place the open dividers against the minutes-of-latitude scale at the edge of the chart, and read off the distance in nautical miles. The Earth is not perfectly spherical but an oblate spheroid, so the length of a minute of latitude increases by 1% from the equator to the poles, as seen for example in the WGS84 ellipsoid, with at the equator, at the poles and average . Since 1929 the international nautical mile is defined by the First International Extraordinary Hydrographic Conference in Monaco as exactly 1,852 metres (which is or ). In the United States, the nautical mile was defined in the 19th century as , whereas in the United Kingdom, the Admiralty nautical mile was defined as and was about one minute of latitude in the latitudes of the south of the UK. Other nations had different definitions of the nautical mile. Related units The nautical mile per hour is known as the knot. 
Nautical miles and knots are almost universally used for aeronautical and maritime navigation, because of their relationship with degrees and minutes of latitude and the convenience of using the latitude scale on a map for distance measuring. The data mile is used in radar-related subjects and is equal to 6,000 feet (1.8288 kilometres). The radar mile is a unit of time (in the same way that the light year is a unit of distance), equal to the time required for a radar pulse to travel a distance of two miles (one mile each way). Thus, the radar statute mile is 10.8 μs and the radar nautical mile is 12.4 μs. Geographical The geographical mile is based upon the length of a meridian of latitude. The German geographical mile () was previously ° of latitude (7.4127 km). Metric The informal term "metric mile" is used in some countries, in sports such as track and field athletics and speed skating, to denote a distance of . The 1500 meters is the premier middle distance running event in Olympic sports. In United States high-school competition, the term is sometimes used for a race of . Scandinavian The Scandinavian mile () remains in common use in Norway and Sweden, where it has meant precisely 10 km since metrication in 1889. It is used in informal situations and in measurements of fuel consumption, which are often given as litres per . In formal situations (such as official road signs) only kilometres are given. The Swedish mile was standardised as 36,000 Swedish feet or in 1649; before that it varied by province from about 6 to 14.485 km. Before metrication, the Norwegian mile was . The traditional Finnish was translated as and also set equal to 10 km during metrication in 1887, but is much less commonly used. Comparison table A comparison of the different lengths for a "mile", in different countries and at different times in history, is given in the table below. Leagues are also included in this list because, in terms of length, they fall in between the short West European miles and the long North, Central and Eastern European miles. Similar units: 1,066.8 m – verst, see also obsolete Russian units of measurement Idioms The mile is still used in a variety of idioms, even in English-speaking countries that have moved from the Imperial to the metric system (for example, Australia, Canada, or New Zealand). These idioms include: A country mile is used colloquially to denote a very long distance. "A miss is as good as a mile" (failure by a narrow margin is no better than any other failure) "Give him an inch and he'll take a mile" – a corruption of "Give him an inch and he'll take an ell" (the person in question will become greedy if shown generosity) "Missed by a mile" (missed by a wide margin) "Go a mile a minute" (move very quickly) "Talk a mile a minute" (speak at a rapid rate) "To go the extra mile" (to put in extra effort) "Miles away" (lost in thought, or daydreaming) "Milestone" (an event indicating significant progress) Glasgow's miles better, a touristic campaign.
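Returning to the data mile and radar mile defined under Related units above, the quoted figures follow from the definitions (a radar mile is the round-trip time for a pulse travelling at the speed of light over one mile). The check below is a sketch, not source code.

    C = 299_792_458.0            # speed of light, m/s
    STATUTE_MILE_M = 1609.344
    NAUTICAL_MILE_M = 1852.0
    DATA_MILE_M = 6000 * 0.3048  # 6,000 ft

    def radar_mile_microseconds(distance_m):
        """Round-trip (out-and-back) travel time of a radar pulse over the given distance."""
        return 2 * distance_m / C * 1e6

    print(f"data mile           = {DATA_MILE_M / 1000:.4f} km")                        # 1.8288 km
    print(f"radar statute mile  = {radar_mile_microseconds(STATUTE_MILE_M):.2f} us")   # ~10.74 us (commonly rounded to ~10.8)
    print(f"radar nautical mile = {radar_mile_microseconds(NAUTICAL_MILE_M):.2f} us")  # ~12.36 us (commonly rounded to ~12.4)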
Physical sciences
Length and distance
null
19167
https://en.wikipedia.org/wiki/Manuscript
Manuscript
A manuscript (abbreviated MS for singular and MSS for plural) was, traditionally, any document written by hand or typewritten, as opposed to mechanically printed or reproduced in some indirect or automated way. More recently, the term has come to be understood to further include any written, typed, or word-processed copy of an author's work, as distinguished from the rendition as a printed version of the same. Before the arrival of prints, all documents and books were manuscripts. Manuscripts are not defined by their contents, which may combine writing with mathematical calculations, maps, music notation, explanatory figures, or illustrations. Terminology The word "manuscript" derives from the (from , hand and from , to write), and is first recorded in English in 1597. An earlier term in English that shares the meaning of a handwritten document is "hand-writ" (or "handwrit"), which is first attested around 1175 and is now rarely used.. The study of the writing (the "hand") in surviving manuscripts is termed palaeography (or paleography). The traditional abbreviations are MS for manuscript and MSS for manuscripts, while the forms MS., ms or ms. for singular, and MSS., mss or mss. for plural (with or without the full stop, all uppercase or all lowercase) are also accepted. The second s is not simply the plural; by an old convention, a doubling of the last letter of the abbreviation expresses the plural, just as pp. means "pages". A manuscript may be a codex (i.e. bound as a book), a scroll, or bound differently or consist of loose pages. Illuminated manuscripts are enriched with pictures, border decorations, elaborately embossed initial letters or full-page illustrations. Parts Cover Flyleaf (blank sheet) Colophon (publication information) incipit (the first few words of the text) decoration; illustrations dimensions Shelfmark or Signature in holding library (as opposed to printed Catalog number) works/compositions included in same ms codicological elements: deletions method: erasure? overstrike? dots above letters? headers/footers page format/layout: columns? text and surrounding commentary/additions/glosses? interpolations (passage not written by the original author) owners' marginal notations/corrections owner signatures dedication/inscription censor signatures collation (quires) (binding order) foliation page numeration binding manuscripts bound together in a single volume: convolute: volume containing different manuscripts fascicle: individual manuscript, part of a convolute Materials paper parchment papyrus to preserve text ink writing implement used pencil to help with the writing process pastedown (blank paper for inside cover) Paleographic elements script (one or more) dating line fillers rubrication (red ink text) ruled lines catchwords historical elements of the ms: blood, wine etc. stains condition: smokiness evidence of fire mold wormed Reproduction The mechanical reproduction of a manuscript is called facsimile. Digital reproductions can be called (high-resolution) scans or digital images. History Before the inventions of printing, in China by woodblock and in Europe by movable type in a printing press, all written documents had to be both produced and reproduced by hand. In the west, manuscripts were produced in form of scrolls (volumen in Latin) or books (codex, plural codices). Manuscripts were produced on vellum and other parchment, on papyrus, and on paper. 
In Indian Subcontinent and Southeast Asia, palm leaf manuscripts, with a distinctive long rectangular shape, were used dating back to the 5th century BCE or earlier, and in some cases continued to be used until the 19th century. In China, bamboo and wooden slips were used prior to the introduction of paper. In Russia, birch bark documents as old as from the 11th century have survived. Paper spread from China via the Islamic world to Europe by the 14th century, and by the late 15th century had largely replaced parchment for many purposes there. When Greek or Latin works were published, numerous professional copies were sometimes made simultaneously by scribes in a scriptorium, each making a single copy from an original that was declaimed aloud. The oldest written manuscripts have been preserved by the perfect dryness of their Middle Eastern resting places, whether placed within sarcophagi in Egyptian tombs, or reused as mummy-wrappings, discarded in the middens of Oxyrhynchus or secreted for safe-keeping in jars and buried (Nag Hammadi library) or stored in dry caves (Dead Sea scrolls). Volcanic ash preserved some of the Roman library of the Villa of the Papyri in Herculaneum. Manuscripts in Tocharian languages, written on palm leaves, survived in desert burials in the Tarim Basin of Central Asia. Ironically, the manuscripts that were being most carefully preserved in the libraries of antiquity are virtually all lost. Papyrus has a life of at most a century or two in relatively humid Italian or Greek conditions; only those works copied onto parchment, usually after the general conversion to Christianity, have survived, and by no means all of those. Originally, all books were in manuscript form. In China, and later other parts of East Asia, woodblock printing was used for books from about the 7th century. The earliest dated example is the Diamond Sutra of 868. In the Islamic world and the West, all books were in manuscript until the introduction of movable type printing in about 1450. Manuscript copying of books continued for a least a century, as printing remained expensive. Private or government documents remained hand-written until the invention of the typewriter in the late 19th century. Because of the likelihood of errors being introduced each time a manuscript was copied, the filiation of different versions of the same text is a fundamental part of the study and criticism of all texts that have been transmitted in manuscript. In Southeast Asia, in the first millennium, documents of sufficiently great importance were inscribed on soft metallic sheets such as copperplate, softened by refiner's fire and inscribed with a metal stylus. In the Philippines, for example, as early as 900 AD, specimen documents were not inscribed by stylus, but were punched much like the style of today's dot-matrix printers. This type of document was rare compared to the usual leaves and bamboo staves that were inscribed. However, neither the leaves nor paper were as durable as the metal document in the hot, humid climate. In Burma, the kammavaca, Buddhist manuscripts, were inscribed on brass, copper or ivory sheets, and even on discarded monk robes folded and lacquered. In Italy some important Etruscan texts were similarly inscribed on thin gold plates: similar sheets have been discovered in Bulgaria. Technically, these are all inscriptions rather than manuscripts. 
In the Western world, from the classical period through the early centuries of the Christian era, manuscripts were written without spaces between the words (scriptio continua), which makes them especially hard for the untrained to read. Extant copies of these early manuscripts written in Greek or Latin and usually dating from the 4th century to the 8th century, are classified according to their use of either all upper case or all lower case letters. Hebrew manuscripts, such as the Dead Sea scrolls make no such differentiation. Manuscripts using all upper case letters are called majuscule, those using all lower case are called minuscule. Usually, the majuscule scripts such as uncial are written with much more care. The scribe lifted his pen between each stroke, producing an unmistakable effect of regularity and formality. On the other hand, while minuscule scripts can be written with pen-lift, they may also be cursive, that is, use little or no pen-lift. Islamic world Islamic manuscripts were produced in different ways depending on their use and time period. Parchment (vellum) was a common way to produce manuscripts. Manuscripts eventually transitioned to using paper in later centuries with the diffusion of paper making in the Islamic empire. When Muslims encountered paper in Central Asia, its use and production spread to Iran, Iraq, Syria, Egypt, and North Africa during the 8th century. Africa 4,203 of Timbuktu's manuscripts were burned or stolen during the armed conflict in Mali between 2012 and 2013. 90% of these manuscripts were saved by the population organized around the NGO "Sauvegarde et valorisation des manuscrits pour la défense de la culture islamique" (SAVAMA-DCI). Some 350,000 manuscripts were transported to safety, and 300,000 of them were still in Bamako in 2022. An international consultation on the safeguarding, accessibility and promotion of ancient manuscripts in the Sahel was held at the UNESCO office in Bamako in 2020. Western world Most surviving pre-modern manuscripts use the codex format (as in a modern book), which had replaced the scroll by Late Antiquity. Parchment or vellum, as the best type of parchment is known, had also replaced papyrus, which was not nearly so long lived and has survived to the present almost exclusively in the very dry climate of Egypt, although it was widely used across the Roman world. Parchment is made of animal skin, normally calf, sheep, or goat, but also other animals. With all skins, the quality of the finished product is based on how much preparation and skill was put into turning the skin into parchment. Parchment made from calf or sheep was the most common in Northern Europe, while civilizations in Southern Europe preferred goatskin. Often, if the parchment is white or cream in color and veins from the animal can still be seen, it is calfskin. If it is yellow, greasy or in some cases shiny, then it was made from sheepskin. Vellum comes from the Latin word vitulinum which means "of calf"/ "made from calf". For modern parchment makers and calligraphers, and apparently often in the past, the terms parchment and vellum are used based on the different degrees of quality, preparation and thickness, and not according to which animal the skin came from, and because of this, the more neutral term "membrane" is often used by modern academics, especially where the animal has not been established by testing. 
Scripts Merovingian script, or "Luxeuil minuscule", is named after an abbey in Western France, the Luxeuil Abbey, founded by the Irish missionary St Columba . Caroline minuscule is a calligraphic script developed as a writing standard in Europe so that the Latin alphabet could be easily recognized by the literate class from different regions. It was used in the Holy Roman Empire between approximately 800 and 1200. Codices, classical and Christian texts, and educational material were written in Carolingian minuscule throughout the Carolingian Renaissance. The script developed into blackletter and became obsolete, though its revival in the Italian renaissance forms the basis of more recent scripts. In Introduction to Manuscript Studies, Clemens and Graham associate the beginning of this text coming from the Abby of Saint-Martin at Tours. Caroline Minuscule arrived in England in the second half of the 10th century. Its adoption there, replacing Insular script, was encouraged by the importation of continental European manuscripts by Saints Dunstan, Aethelwold, and Oswald. This script spread quite rapidly, being employed in many English centres for copying Latin texts. English scribes adapted the Carolingian script, giving it proportion and legibility. This new revision of the Caroline minuscule was called English Protogothic Bookhand. Another script that is derived from the Caroline Minuscule was the German Protogothic Bookhand. It originated in southern Germany during the second half of the 12th century. All the individual letters are Caroline; but just as with English Protogothic Bookhand it evolved. This can be seen most notably in the arm of the letter h. It has a hairline that tapers out by curving to the left. When first read the German Protogothic h looks like the German Protogothic b. Many more scripts sprang out of the German Protogothic Bookhand. After those came Bastard Anglicana, which is best described as: The coexistence in the Gothic period of formal hands employed for the copying of books and cursive scripts used for documentary purposes eventually resulted in cross-fertilization between these two fundamentally different writing styles. Notably, scribes began to upgrade some of the cursive scripts. A script that has been thus formalized is known as a bastard script (whereas a bookhand that has had cursive elements fused onto it is known as a hybrid script). The advantage of such a script was that it could be written more quickly than a pure bookhand; it thus recommended itself to scribes in a period when demand for books was increasing and authors were tending to write longer texts. In England during the fourteenth and fifteenth centuries, many books were written in the script known as Bastard Anglicana. Genres From ancient texts to medieval maps, anything written down for study would have been done with manuscripts. Some of the most common genres were bibles, religious commentaries, philosophy, law and government texts. Biblical "The Bible was the most studied book of the Middle Ages". The Bible was the center of medieval religious life. Along with the Bible came scores of commentaries. Commentaries were written in volumes, with some focusing on just single pages of scripture. Across Europe, there were universities that prided themselves on their biblical knowledge. Along with universities, certain cities also had their own celebrities of biblical knowledge during the medieval period. 
Book of hours A book of hours is a type of devotional text which was widely popular during the Middle Ages. They are the most common type of surviving medieval illuminated manuscripts. Each book of hours contain a similar collection of texts, prayers, and psalms but decoration can vary between each and each example. Many have minimal illumination, often restricted to ornamented initials, but books of hours made for wealthier patrons can be extremely extravagant with full-page miniatures. These books were used for owners to recite prayers privately eight different times, or hours, of the day. Liturgical books and calendars Along with Bibles, large numbers of manuscripts made in the Middle Ages were received in Church. Due to the complex church system of rituals and worship these books were the most elegantly written and finely decorated of all medieval manuscripts. Liturgical books usually came in two varieties. Those used during mass and those for divine office. Most liturgical books came with a calendar in the front. This served as a quick reference point for important dates in Jesus' life and to tell church officials which saints were to be honored and on what day. Modern variations In the context of library science, a manuscript is defined as any hand-written item in the collections of a library or an archive. For example, a library's collection of hand-written letters or diaries is considered a manuscript collection. Such manuscript collections are described in finding aids, similar to an index or table of contents to the collection, in accordance with national and international content standards such as DACS and ISAD(G). In other contexts, however, the use of the term "manuscript" no longer necessarily means something that is hand-written. By analogy a typescript has been produced on a typewriter. Publishing In book, magazine, and music publishing, a manuscript is an autograph or copy of a work, written by an author, composer or copyist. Such manuscripts generally follow standardized typographic and formatting rules, in which case they can be called fair copy (whether original or copy). The staff paper commonly used for handwritten music is, for this reason, often called "manuscript paper". Film and theatre In film and theatre, a manuscript, or script for short, is an author's or dramatist's text, used by a theatre company or film crew during the production of the work's performance or filming. More specifically, a motion picture manuscript is called a screenplay; a television manuscript, a teleplay; a manuscript for the theatre, a stage play; and a manuscript for audio-only performance is often called a radio play, even when the recorded performance is disseminated via non-radio means. Insurance In insurance, a manuscript policy is one that is negotiated between the insurer and the policyholder, as opposed to an off-the-shelf form supplied by the insurer. Preservation About 300,000 Latin, 55,000 Greek, 30,000 Armenian and 12,000 Georgian medieval manuscripts have survived. National Geographic estimates that 700,000 African manuscripts have survived at the University of Timbuktu in Mali. Repositories Major U.S. 
repositories of medieval manuscripts include: The Morgan Library & Museum = 1,300 (including papyri) Beinecke Rare Book and Manuscript Library, Yale = 1,100 Walters Art Museum = 1,000 Houghton Library, Harvard = 850 Van Pelt Library, Penn = 650 Huntington Library = 400 Robbins Collection = 300 Newberry Library = 260 Cornell University Library = 150 Many European libraries have far larger collections. Arnamagnæan Institute Árni Magnússon Institute for Icelandic Studies British Library#Collections of manuscripts Kungliga biblioteket Because they are books, pre-modern manuscripts are best described using bibliographic rather than archival standards. The standard endorsed by the American Library Association is known as AMREMM. A growing digital catalog of pre-modern manuscripts is Digital Scriptorium, hosted by the University of California at Berkeley.
Technology
Printing
null
19192
https://en.wikipedia.org/wiki/Mean
Mean
A mean is a quantity representing the "center" of a collection of numbers and is intermediate to the extreme values of the set of numbers. There are several kinds of means (or "measures of central tendency") in mathematics, especially in statistics. Each attempts to summarize or typify a given group of data, illustrating the magnitude and sign of the data set. Which of these measures is most illuminating depends on what is being measured, and on context and purpose. The arithmetic mean, also known as "arithmetic average", is the sum of the values divided by the number of values. The arithmetic mean of a set of numbers x1, x2, ..., xn is typically denoted using an overhead bar, x̄. If the numbers are from observing a sample of a larger group, the arithmetic mean is termed the sample mean (x̄) to distinguish it from the group mean (or expected value) of the underlying distribution, denoted μ or μx. Outside probability and statistics, a wide range of other notions of mean are often used in geometry and mathematical analysis; examples are given below. Types of means Pythagorean means In mathematics, the three classical Pythagorean means are the arithmetic mean (AM), the geometric mean (GM), and the harmonic mean (HM). These means were studied with proportions by Pythagoreans and later generations of Greek mathematicians because of their importance in geometry and music. Arithmetic mean (AM) The arithmetic mean (or simply mean or average) of a list of numbers is the sum of all of the numbers divided by their count: AM = (x1 + x2 + ... + xn) / n. Similarly, the mean of a sample x1, x2, ..., xn, usually denoted by x̄, is the sum of the sampled values divided by the number of items in the sample. For example, the arithmetic mean of the five values 4, 36, 45, 50, 75 is (4 + 36 + 45 + 50 + 75) / 5 = 210 / 5 = 42. Geometric mean (GM) The geometric mean is an average that is useful for sets of positive numbers that are interpreted according to their product (as is the case with rates of growth) and not their sum (as is the case with the arithmetic mean): GM = (x1 · x2 · ... · xn)^(1/n). For example, the geometric mean of the five values 4, 36, 45, 50, 75 is (4 × 36 × 45 × 50 × 75)^(1/5) = 24,300,000^(1/5) = 30. Harmonic mean (HM) The harmonic mean is an average which is useful for sets of numbers which are defined in relation to some unit, as in the case of speed (i.e., distance per unit of time): HM = n / (1/x1 + 1/x2 + ... + 1/xn). For example, the harmonic mean of the five values 4, 36, 45, 50, 75 is 5 / (1/4 + 1/36 + 1/45 + 1/50 + 1/75) = 5 / (1/3) = 15. If we have five pumps that can empty a tank of a certain size in respectively 4, 36, 45, 50, and 75 minutes, then the harmonic mean of 15 tells us that these five different pumps working together will pump at the same rate as five pumps that can each empty the tank in 15 minutes. Relationship between AM, GM, and HM AM, GM, and HM satisfy the inequalities AM ≥ GM ≥ HM; equality holds if and only if all the elements of the given sample are equal. Statistical location In descriptive statistics, the mean may be confused with the median, mode or mid-range, as any of these may incorrectly be called an "average" (more formally, a measure of central tendency). The mean of a set of observations is the arithmetic average of the values; however, for skewed distributions, the mean is not necessarily the same as the middle value (median), or the most likely value (mode). For example, mean income is typically skewed upwards by a small number of people with very large incomes, so that the majority have an income lower than the mean. By contrast, the median income is the level at which half the population is below and half is above. The mode income is the most likely income and favors the larger number of people with lower incomes. 
While the median and mode are often more intuitive measures for skewed data such as incomes, many skewed distributions are in fact best described by their mean, including the exponential and Poisson distributions.
Mean of a probability distribution
The mean of a probability distribution is the long-run arithmetic average value of a random variable having that distribution. If the random variable is denoted by X, then the mean is also known as the expected value of X, denoted E(X). For a discrete probability distribution, the mean is given by E(X) = Σ x P(x), where the sum is taken over all possible values x of the random variable and P(x) is the probability mass function. For a continuous distribution, the mean is E(X) = ∫ x f(x) dx, where f(x) is the probability density function. In all cases, including those in which the distribution is neither discrete nor continuous, the mean is the Lebesgue integral of the random variable with respect to its probability measure. The mean need not exist or be finite; for some probability distributions the mean is infinite (+∞ or −∞), while for others the mean is undefined.
Generalized means
Power mean
The generalized mean, also known as the power mean or Hölder mean, is an abstraction of the quadratic, arithmetic, geometric, and harmonic means. It is defined for a set of n positive numbers x1, ..., xn by x̄(m) = ((x1^m + x2^m + ... + xn^m) / n)^(1/m). By choosing different values for the parameter m, the following types of means are obtained: m → ∞ gives the maximum, m = 2 the quadratic mean, m = 1 the arithmetic mean, m → 0 the geometric mean (as a limit), m = −1 the harmonic mean, and m → −∞ the minimum.
f-mean
This can be generalized further as the generalized f-mean, x̄ = f^(-1)((f(x1) + f(x2) + ... + f(xn)) / n), and again a suitable choice of an invertible f will give particular means: f(x) = x^m yields the power mean, f(x) = x the arithmetic mean, f(x) = log x the geometric mean, and f(x) = 1/x the harmonic mean.
Weighted arithmetic mean
The weighted arithmetic mean (or weighted average) is used if one wants to combine average values from samples of different sizes taken from the same population: x̄ = (Σ x̄i wi) / (Σ wi), where x̄i and wi are the mean and size of sample i, respectively. In other applications, the weights represent a measure of the reliability of the influence upon the mean by the respective values.
Truncated mean
Sometimes, a set of numbers might contain outliers (i.e., data values which are much lower or much higher than the others). Often, outliers are erroneous data caused by artifacts. In this case, one can use a truncated mean. It involves discarding given parts of the data at the top or the bottom end, typically an equal amount at each end, and then taking the arithmetic mean of the remaining data. The number of values removed is indicated as a percentage of the total number of values.
Interquartile mean
The interquartile mean is a specific example of a truncated mean. It is simply the arithmetic mean of the values that remain after the lowest quarter and the highest quarter of the (ordered) values have been removed, and so it is also a specific example of a weighted mean with a particular set of weights.
Mean of a function
In some circumstances, mathematicians may calculate a mean of an infinite (or even an uncountable) set of values. This can happen when calculating the mean value of a function f(x). Intuitively, a mean of a function can be thought of as calculating the area under a section of a curve and then dividing by the length of that section. This can be done crudely by counting squares on graph paper, or more precisely by integration. For a function f defined on an interval [a, b], the mean value is f̄ = (1 / (b − a)) ∫ f(x) dx, with the integral taken from a to b. In this case, care must be taken to make sure that the integral converges. But the mean may be finite even if the function itself tends to infinity at some points.
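The mean of a function described above can be approximated numerically. The following is a minimal Python sketch using a midpoint Riemann sum; the function and interval are chosen only for illustration.

from math import sin, pi

def mean_of_function(f, a, b, steps=100_000):
    # Approximates (1 / (b - a)) * integral of f from a to b with a midpoint Riemann sum.
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h / (b - a)

# Mean value of sin(x) over [0, pi]; the exact value is 2/pi, approximately 0.6366.
print(mean_of_function(sin, 0, pi))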
Mean of angles and cyclical quantities Angles, times of day, and other cyclical quantities require modular arithmetic to add and otherwise combine numbers. In all these situations, there will not be a unique mean. For example, the times an hour before and after midnight are equidistant to both midnight and noon. It is also possible that no mean exists. Consider a color wheel—there is no mean to the set of all colors. In these situations, you must decide which mean is most useful. You can do this by adjusting the values before averaging, or by using a specialized approach for the mean of circular quantities. Fréchet mean The Fréchet mean gives a manner for determining the "center" of a mass distribution on a surface or, more generally, Riemannian manifold. Unlike many other means, the Fréchet mean is defined on a space whose elements cannot necessarily be added together or multiplied by scalars. It is sometimes also known as the Karcher mean (named after Hermann Karcher). Triangular sets In geometry, there are thousands of different definitions for the center of a triangle that can all be interpreted as the mean of a triangular set of points in the plane. Swanson's rule This is an approximation to the mean for a moderately skewed distribution. It is used in hydrocarbon exploration and is defined as: where , and are the 10th, 50th and 90th percentiles of the distribution, respectively. Other means Arithmetic-geometric mean Arithmetic-harmonic mean Cesàro mean Chisini mean Contraharmonic mean Elementary symmetric mean Geometric-harmonic mean Grand mean Heinz mean Heronian mean Identric mean Lehmer mean Logarithmic mean Moving average Neuman–Sándor mean Quasi-arithmetic mean Root mean square (quadratic mean) Rényi's entropy (a generalized f-mean) Spherical mean Stolarsky mean Weighted geometric mean Weighted harmonic mean
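Returning to the cyclical quantities discussed at the start of this section: one standard approach, the mean of circular quantities, averages the unit vectors corresponding to each angle and takes the direction of their resultant. A minimal sketch in Python:

import math

def circular_mean_deg(angles_deg):
    # Average the unit vectors for each angle, then take the direction of the resultant.
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c))

# Two angles just either side of zero on a 360-degree dial (e.g. times an hour before and
# after midnight mapped onto a clock face): the naive arithmetic mean of 350 and 10 is 180,
# but the circular mean is approximately 0.
print(circular_mean_deg([350, 10]))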
Mathematics
Statistics and probability
null
19200
https://en.wikipedia.org/wiki/Molecular%20biology
Molecular biology
Molecular biology is a branch of biology that seeks to understand the molecular basis of biological activity in and between cells, including biomolecular synthesis, modification, mechanisms, and interactions. Though cells and other microscopic structures had been observed in living organisms as early as the 18th century, a detailed understanding of the mechanisms and interactions governing their behavior did not emerge until the 20th century, when technologies used in physics and chemistry had advanced sufficiently to permit their application in the biological sciences. The term 'molecular biology' was first used in 1945 by the English physicist William Astbury, who described it as an approach focused on discerning the underpinnings of biological phenomena—i.e. uncovering the physical and chemical structures and properties of biological molecules, as well as their interactions with other molecules and how these interactions explain observations of so-called classical biology, which instead studies biological processes at larger scales and higher levels of organization. In 1953, Francis Crick, James Watson, Rosalind Franklin, and their colleagues at the Medical Research Council Unit, Cavendish Laboratory, were the first to describe the double helix model for the chemical structure of deoxyribonucleic acid (DNA), which is often considered a landmark event for the nascent field because it provided a physico-chemical basis by which to understand the previously nebulous idea of nucleic acids as the primary substance of biological inheritance. They proposed this structure based on previous research done by Franklin, which was conveyed to them by Maurice Wilkins and Max Perutz. Their work led to the discovery of DNA in other microorganisms, plants, and animals. The field of molecular biology includes techniques which enable scientists to learn about molecular processes. These techniques are used to efficiently target new drugs, diagnose disease, and better understand cell physiology. Some clinical research and medical therapies arising from molecular biology are covered under gene therapy, whereas the use of molecular biology or molecular cell biology in medicine is now referred to as molecular medicine. History of molecular biology Molecular biology sits at the intersection of biochemistry and genetics; as these scientific disciplines emerged and evolved in the 20th century, it became clear that they both sought to determine the molecular mechanisms which underlie vital cellular functions. Advances in molecular biology have been closely related to the development of new technologies and their optimization. Molecular biology has been elucidated by the work of many scientists, and thus the history of the field depends on an understanding of these scientists and their experiments. The field of genetics arose from attempts to understand the set of rules underlying reproduction and heredity, and the nature of the hypothetical units of heredity known as genes. Gregor Mendel pioneered this work in 1866, when he first described the laws of inheritance he observed in his studies of mating crosses in pea plants. One such law of genetic inheritance is the law of segregation, which states that diploid individuals with two alleles for a particular gene will pass one of these alleles to their offspring. Because of his critical work, the study of genetic inheritance is commonly referred to as Mendelian genetics. A major milestone in molecular biology was the discovery of the structure of DNA. 
This work began in 1869 with Friedrich Miescher, a Swiss biochemist who first identified a substance he called nuclein, which we now know to be deoxyribonucleic acid, or DNA. He discovered this unique substance by studying the components of pus-filled bandages, and noting the unique properties of the "phosphorus-containing substances". Another notable contributor to the DNA model was Phoebus Levene, who proposed the "polynucleotide model" of DNA in 1919 as a result of his biochemical experiments on yeast. In 1950, Erwin Chargaff expanded on the work of Levene and elucidated a few critical properties of nucleic acids: first, the sequence of nucleic acids varies across species. Second, the total concentration of purines (adenine and guanine) is always equal to the total concentration of pyrimidines (cytosine and thymine). This is now known as Chargaff's rule. In 1953, James Watson and Francis Crick published the double helical structure of DNA, based on the X-ray crystallography work done by Rosalind Franklin which was conveyed to them by Maurice Wilkins and Max Perutz. Watson and Crick described the structure of DNA and conjectured about the implications of this unique structure for possible mechanisms of DNA replication. Watson and Crick were awarded the Nobel Prize in Physiology or Medicine in 1962, along with Wilkins, for proposing a model of the structure of DNA.
In 1961, it was demonstrated that when a gene encodes a protein, three sequential bases of a gene's DNA specify each successive amino acid of the protein. Thus the genetic code is a triplet code, where each triplet (called a codon) specifies a particular amino acid. Furthermore, it was shown that the codons do not overlap with each other in the DNA sequence encoding a protein, and that each sequence is read from a fixed starting point. During 1962–1964, through the use of conditional lethal mutants of a bacterial virus, fundamental advances were made in our understanding of the functions and interactions of the proteins employed in the machinery of DNA replication, DNA repair, DNA recombination, and in the assembly of molecular structures.
Griffith's experiment
In 1928, Frederick Griffith encountered a virulence property in pneumococcus bacteria, which was killing lab rats. According to Mendelian thinking, prevalent at that time, gene transfer could occur only from parent to daughter cells. Griffith advanced another theory, proposing that gene transfer can also occur between members of the same generation, a process known as horizontal gene transfer (HGT). This phenomenon is now referred to as genetic transformation. Griffith's experiment addressed the pneumococcus bacteria, which had two different strains, one virulent and smooth and one avirulent and rough. The smooth strain had a glistening appearance owing to the presence of a specific type of polysaccharide capsule – a polymer of glucose and glucuronic acid. Because of this polysaccharide layer, a host's immune system cannot recognize the bacterium, and it kills the host. The other, avirulent, rough strain lacks this polysaccharide capsule and has a dull, rough appearance. The presence or absence of the capsule in a strain is known to be genetically determined. Smooth and rough strains occur in several different types, such as S-I, S-II, S-III, etc. and R-I, R-II, R-III, etc. respectively. All these subtypes of S and R bacteria differ from each other in the antigen type they produce.
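Chargaff's observation above can be illustrated with a short computation: for double-stranded DNA, base pairing forces the amount of adenine to equal thymine and guanine to equal cytosine, so total purines equal total pyrimidines. A minimal Python sketch, using a made-up sequence:

from collections import Counter

top = "ATGCGCTAATCGGCATTAG"                              # hypothetical top strand
bottom = top.translate(str.maketrans("ACGT", "TGCA"))    # its complementary strand

counts = Counter(top + bottom)                           # base counts over both strands of the duplex
purines = counts["A"] + counts["G"]
pyrimidines = counts["C"] + counts["T"]
print(counts)
print("purines:", purines, "pyrimidines:", pyrimidines)  # always equal for duplex DNA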
Avery–MacLeod–McCarty experiment The Avery–MacLeod–McCarty experiment was a landmark study conducted in 1944 that demonstrated that DNA, not protein as previously thought, carries genetic information in bacteria. Oswald Avery, Colin Munro MacLeod, and Maclyn McCarty used an extract from a strain of pneumococcus that could cause pneumonia in mice. They showed that genetic transformation in the bacteria could be accomplished by injecting them with purified DNA from the extract. They discovered that when they digested the DNA in the extract with DNase, transformation of harmless bacteria into virulent ones was lost. This provided strong evidence that DNA was the genetic material, challenging the prevailing belief that proteins were responsible. It laid the basis for the subsequent discovery of its structure by Watson and Crick. Hershey–Chase experiment Confirmation that DNA is the genetic material which is cause of infection came from the Hershey–Chase experiment. They used E.coli and bacteriophage for the experiment. This experiment is also known as blender experiment, as kitchen blender was used as a major piece of apparatus. Alfred Hershey and Martha Chase demonstrated that the DNA injected by a phage particle into a bacterium contains all information required to synthesize progeny phage particles. They used radioactivity to tag the bacteriophage's protein coat with radioactive sulphur and DNA with radioactive phosphorus, into two different test tubes respectively. After mixing bacteriophage and E.coli into the test tube, the incubation period starts in which phage transforms the genetic material in the E.coli cells. Then the mixture is blended or agitated, which separates the phage from E.coli cells. The whole mixture is centrifuged and the pellet which contains E.coli cells was checked and the supernatant was discarded. The E.coli cells showed radioactive phosphorus, which indicated that the transformed material was DNA not the protein coat. The transformed DNA gets attached to the DNA of E.coli and radioactivity is only seen onto the bacteriophage's DNA. This mutated DNA can be passed to the next generation and the theory of Transduction came into existence. Transduction is a process in which the bacterial DNA carry the fragment of bacteriophages and pass it on the next generation. This is also a type of horizontal gene transfer. Meselson–Stahl experiment The Meselson-Stahl experiment was a landmark experiment in molecular biology that provided evidence for the semiconservative replication of DNA. Conducted in 1958 by Matthew Meselson and Franklin Stahl, the experiment involved growing E. coli bacteria in a medium containing heavy isotope of nitrogen (15N) for several generations. This caused all the newly synthesized bacterial DNA to be incorporated with the heavy isotope. After allowing the bacteria to replicate in a medium containing normal nitrogen (14N), samples were taken at various time points. These samples were then subjected to centrifugation in a density gradient, which separated the DNA molecules based on their density. The results showed that after one generation of replication in the 14N medium, the DNA formed a band of intermediate density between that of pure 15N DNA and pure 14N DNA. 
This supported the semiconservative DNA replication proposed by Watson and Crick, where each strand of the parental DNA molecule serves as a template for the synthesis of a new complementary strand, resulting in two daughter DNA molecules, each consisting of one parental and one newly synthesized strand. The Meselson-Stahl experiment provided compelling evidence for the semiconservative replication of DNA, which is fundamental to the understanding of genetics and molecular biology. Modern molecular biology In the early 2020s, molecular biology entered a golden age defined by both vertical and horizontal technical development. Vertically, novel technologies are allowing for real-time monitoring of biological processes at the atomic level. Molecular biologists today have access to increasingly affordable sequencing data at increasingly higher depths, facilitating the development of novel genetic manipulation methods in new non-model organisms. Likewise, synthetic molecular biologists will drive the industrial production of small and macro molecules through the introduction of exogenous metabolic pathways in various prokaryotic and eukaryotic cell lines. Horizontally, sequencing data is becoming more affordable and used in many different scientific fields. This will drive the development of industries in developing nations and increase accessibility to individual researchers. Likewise, CRISPR-Cas9 gene editing experiments can now be conceived and implemented by individuals for under $10,000 in novel organisms, which will drive the development of industrial and medical applications. Relationship to other biological sciences The following list describes a viewpoint on the interdisciplinary relationships between molecular biology and other related fields. Molecular biology is the study of the molecular underpinnings of the biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions. Biochemistry is the study of the chemical substances and vital processes occurring in living organisms. Biochemists focus heavily on the role, function, and structure of biomolecules such as proteins, lipids, carbohydrates and nucleic acids. Genetics is the study of how genetic differences affect organisms. Genetics attempts to predict how mutations, individual genes and genetic interactions can affect the expression of a phenotype While researchers practice techniques specific to molecular biology, it is common to combine these with methods from genetics and biochemistry. Much of molecular biology is quantitative, and recently a significant amount of work has been done using computer science techniques such as bioinformatics and computational biology. Molecular genetics, the study of gene structure and function, has been among the most prominent sub-fields of molecular biology since the early 2000s. Other branches of biology are informed by molecular biology, by either directly studying the interactions of molecules in their own right such as in cell biology and developmental biology, or indirectly, where molecular techniques are used to infer historical attributes of populations or species, as in fields in evolutionary biology such as population genetics and phylogenetics. There is also a long tradition of studying biomolecules "from the ground up", or molecularly, in biophysics. Techniques of molecular biology Molecular cloning Molecular cloning is used to isolate and then transfer a DNA sequence of interest into a plasmid vector. 
This recombinant DNA technology was first developed in the 1960s. In this technique, a DNA sequence coding for a protein of interest is cloned using polymerase chain reaction (PCR), and/or restriction enzymes, into a plasmid (expression vector). The plasmid vector usually has at least 3 distinctive features: an origin of replication, a multiple cloning site (MCS), and a selective marker (usually antibiotic resistance). Additionally, upstream of the MCS are the promoter regions and the transcription start site, which regulate the expression of cloned gene. This plasmid can be inserted into either bacterial or animal cells. Introducing DNA into bacterial cells can be done by transformation via uptake of naked DNA, conjugation via cell-cell contact or by transduction via viral vector. Introducing DNA into eukaryotic cells, such as animal cells, by physical or chemical means is called transfection. Several different transfection techniques are available, such as calcium phosphate transfection, electroporation, microinjection and liposome transfection. The plasmid may be integrated into the genome, resulting in a stable transfection, or may remain independent of the genome and expressed temporarily, called a transient transfection. DNA coding for a protein of interest is now inside a cell, and the protein can now be expressed. A variety of systems, such as inducible promoters and specific cell-signaling factors, are available to help express the protein of interest at high levels. Large quantities of a protein can then be extracted from the bacterial or eukaryotic cell. The protein can be tested for enzymatic activity under a variety of situations, the protein may be crystallized so its tertiary structure can be studied, or, in the pharmaceutical industry, the activity of new drugs against the protein can be studied. Polymerase chain reaction Polymerase chain reaction (PCR) is an extremely versatile technique for copying DNA. In brief, PCR allows a specific DNA sequence to be copied or modified in predetermined ways. The reaction is extremely powerful and under perfect conditions could amplify one DNA molecule to become 1.07 billion molecules in less than two hours. PCR has many applications, including the study of gene expression, the detection of pathogenic microorganisms, the detection of genetic mutations, and the introduction of mutations to DNA. The PCR technique can be used to introduce restriction enzyme sites to ends of DNA molecules, or to mutate particular bases of DNA, the latter is a method referred to as site-directed mutagenesis. PCR can also be used to determine whether a particular DNA fragment is found in a cDNA library. PCR has many variations, like reverse transcription PCR (RT-PCR) for amplification of RNA, and, more recently, quantitative PCR which allow for quantitative measurement of DNA or RNA molecules. Gel electrophoresis Gel electrophoresis is a technique which separates molecules by their size using an agarose or polyacrylamide gel. This technique is one of the principal tools of molecular biology. The basic principle is that DNA fragments can be separated by applying an electric current across the gel - because the DNA backbone contains negatively charged phosphate groups, the DNA will migrate through the agarose gel towards the positive end of the current. Proteins can also be separated on the basis of size using an SDS-PAGE gel, or on the basis of size and their electric charge by using what is known as a 2D gel electrophoresis. 
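The amplification figure quoted in the polymerase chain reaction description above follows from simple doubling arithmetic. A minimal sketch, assuming perfect efficiency (every template molecule is copied in every cycle):

# Under ideal conditions PCR doubles the number of target molecules each cycle.
start_copies = 1
for cycles in (10, 20, 30):
    print(cycles, "cycles ->", start_copies * 2 ** cycles, "copies")
# 30 cycles -> 1073741824 copies (about 1.07 billion); at a few minutes per cycle,
# 30 cycles can be completed in under two hours.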
The Bradford protein assay The Bradford assay is a molecular biology technique which enables the fast, accurate quantitation of protein molecules utilizing the unique properties of a dye called Coomassie Brilliant Blue G-250. Coomassie Blue undergoes a visible color shift from reddish-brown to bright blue upon binding to protein. In its unstable, cationic state, Coomassie Blue has a background wavelength of 465 nm and gives off a reddish-brown color. When Coomassie Blue binds to protein in an acidic solution, the background wavelength shifts to 595 nm and the dye gives off a bright blue color. Proteins in the assay bind Coomassie blue in about 2 minutes, and the protein-dye complex is stable for about an hour, although it is recommended that absorbance readings are taken within 5 to 20 minutes of reaction initiation. The concentration of protein in the Bradford assay can then be measured using a visible light spectrophotometer, and therefore does not require extensive equipment. This method was developed in 1975 by Marion M. Bradford, and has enabled significantly faster, more accurate protein quantitation compared to previous methods: the Lowry procedure and the biuret assay. Unlike the previous methods, the Bradford assay is not susceptible to interference by several non-protein molecules, including ethanol, sodium chloride, and magnesium chloride. However, it is susceptible to influence by strong alkaline buffering agents, such as sodium dodecyl sulfate (SDS). Macromolecule blotting and probing The terms northern, western and eastern blotting are derived from what initially was a molecular biology joke that played on the term Southern blotting, after the technique described by Edwin Southern for the hybridisation of blotted DNA. Patricia Thomas, developer of the RNA blot which then became known as the northern blot, actually did not use the term. Southern blotting Named after its inventor, biologist Edwin Southern, the Southern blot is a method for probing for the presence of a specific DNA sequence within a DNA sample. DNA samples before or after restriction enzyme (restriction endonuclease) digestion are separated by gel electrophoresis and then transferred to a membrane by blotting via capillary action. The membrane is then exposed to a labeled DNA probe that has a complement base sequence to the sequence on the DNA of interest. Southern blotting is less commonly used in laboratory science due to the capacity of other techniques, such as PCR, to detect specific DNA sequences from DNA samples. These blots are still used for some applications, however, such as measuring transgene copy number in transgenic mice or in the engineering of gene knockout embryonic stem cell lines. Northern blotting The northern blot is used to study the presence of specific RNA molecules as relative comparison among a set of different samples of RNA. It is essentially a combination of denaturing RNA gel electrophoresis, and a blot. In this process RNA is separated based on size and is then transferred to a membrane that is then probed with a labeled complement of a sequence of interest. The results may be visualized through a variety of ways depending on the label used; however, most result in the revelation of bands representing the sizes of the RNA detected in sample. The intensity of these bands is related to the amount of the target RNA in the samples analyzed. 
The procedure is commonly used to study when and how much gene expression is occurring by measuring how much of that RNA is present in different samples, assuming that no post-transcriptional regulation occurs and that the levels of mRNA reflect proportional levels of the corresponding protein being produced. It is one of the most basic tools for determining at what time, and under what conditions, certain genes are expressed in living tissues. Western blotting A western blot is a technique by which specific proteins can be detected from a mixture of proteins. Western blots can be used to determine the size of isolated proteins, as well as to quantify their expression. In western blotting, proteins are first separated by size, in a thin gel sandwiched between two glass plates in a technique known as SDS-PAGE. The proteins in the gel are then transferred to a polyvinylidene fluoride (PVDF), nitrocellulose, nylon, or other support membrane. This membrane can then be probed with solutions of antibodies. Antibodies that specifically bind to the protein of interest can then be visualized by a variety of techniques, including colored products, chemiluminescence, or autoradiography. Often, the antibodies are labeled with enzymes. When a chemiluminescent substrate is exposed to the enzyme it allows detection. Using western blotting techniques allows not only detection but also quantitative analysis. Analogous methods to western blotting can be used to directly stain specific proteins in live cells or tissue sections. Eastern blotting The eastern blotting technique is used to detect post-translational modification of proteins. Proteins blotted on to the PVDF or nitrocellulose membrane are probed for modifications using specific substrates. Microarrays A DNA microarray is a collection of spots attached to a solid support such as a microscope slide where each spot contains one or more single-stranded DNA oligonucleotide fragments. Arrays make it possible to put down large quantities of very small (100 micrometre diameter) spots on a single slide. Each spot has a DNA fragment molecule that is complementary to a single DNA sequence. A variation of this technique allows the gene expression of an organism at a particular stage in development to be qualified (expression profiling). In this technique the RNA in a tissue is isolated and converted to labeled complementary DNA (cDNA). This cDNA is then hybridized to the fragments on the array and visualization of the hybridization can be done. Since multiple arrays can be made with exactly the same position of fragments, they are particularly useful for comparing the gene expression of two different tissues, such as a healthy and cancerous tissue. Also, one can measure what genes are expressed and how that expression changes with time or with other factors. There are many different ways to fabricate microarrays; the most common are silicon chips, microscope slides with spots of ~100 micrometre diameter, custom arrays, and arrays with larger spots on porous membranes (macroarrays). There can be anywhere from 100 spots to more than 10,000 on a given array. Arrays can also be made with molecules other than DNA. Allele-specific oligonucleotide Allele-specific oligonucleotide (ASO) is a technique that allows detection of single base mutations without the need for PCR or gel electrophoresis. 
Short (20–25 nucleotides in length), labeled probes are exposed to the non-fragmented target DNA, hybridization occurs with high specificity due to the short length of the probes and even a single base change will hinder hybridization. The target DNA is then washed and the unhybridized probes are removed. The target DNA is then analyzed for the presence of the probe via radioactivity or fluorescence. In this experiment, as in most molecular biology techniques, a control must be used to ensure successful experimentation. In molecular biology, procedures and technologies are continually being developed and older technologies abandoned. For example, before the advent of DNA gel electrophoresis (agarose or polyacrylamide), the size of DNA molecules was typically determined by rate sedimentation in sucrose gradients, a slow and labor-intensive technique requiring expensive instrumentation; prior to sucrose gradients, viscometry was used. Aside from their historical interest, it is often worth knowing about older technology, as it is occasionally useful to solve another new problem for which the newer technique is inappropriate.
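The single-base discrimination described above can be sketched computationally. The following toy Python example, with made-up sequences, scores a short probe against two target alleles by counting mismatches at the best ungapped alignment; under stringent hybridization and wash conditions, only the perfect match is retained.

def best_mismatch_count(probe, target):
    # Fewest mismatches over all ungapped alignments of the probe against the target.
    return min(
        sum(p != t for p, t in zip(probe, target[i:i + len(probe)]))
        for i in range(len(target) - len(probe) + 1)
    )

wild_type = "GGCACCTGACTCCTGAGGAGA"   # hypothetical 21-nt target region
variant   = "GGCACCTGACTCCTGTGGAGA"   # same region with a single base substitution
probe     = "ACCTGACTCCTGAGGAG"       # hypothetical allele-specific oligonucleotide

print(best_mismatch_count(probe, wild_type))  # 0 -> probe hybridizes
print(best_mismatch_count(probe, variant))    # 1 -> hybridization hindered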
Biology and health sciences
Biology
null
19319
https://en.wikipedia.org/wiki/Myelin
Myelin
Myelin ( ) is a lipid-rich material that surrounds nerve cell axons to insulate them and increase the rate at which electrical impulses (called action potentials) pass along the axon. The myelinated axon can be likened to an electrical wire (the axon) with insulating material (myelin) around it. However, unlike the plastic covering on an electrical wire, myelin does not form a single long sheath over the entire length of the axon. Rather, myelin ensheaths the axon segmentally: in general, each axon is encased in multiple long sheaths with short gaps between, called nodes of Ranvier. At the nodes of Ranvier, which are approximately one thousandth of a mm (one micrometre (μm) in length, the axon's membrane (axolemma) is bare of myelin. Myelin's best known function is to increase the rate at which information, encoded as electrical charges, passes along the axon's length. Myelin achieves this by eliciting saltatory conduction. Saltatory conduction refers to the fact that electrical impulses 'jump' along the axon, over long myelin sheaths, from one node of Ranvier to the next. Thus, information is passed around 100 times faster along a myelinated axon than a non-myelinated one. At the molecular level, the myelin sheath increases the distance between extracellular and intracellular ions, reducing the accumulation of electrical charges. The discontinuous structure of the myelin sheath results in the action potential "jumping" from one node of Ranvier over a long (c. 0.1 mm – >1 mm, or 100–1000 micron) myelinated stretch of the axon called the internodal segment or "internode", before "recharging" at the next node of Ranvier. This 'jumping' continues until the action potential reaches the axon terminal. Once there, the electrical signal provokes the release of chemical neurotransmitters across the synapse, which bind to receptors on the post-synaptic cell (e.g. another neuron, myocyte or secretory cell). Myelin is made by glial cells, which are non-neuronal cells that provide nutritional and homeostatic support to the axons. This is because axons, being elongated structures, are too far from the soma to be supported by the neurons themselves. In the central nervous system (brain, spinal cord and optic nerves), myelination is formed by specialized glial cells called oligodendrocytes, each of which sends out processes (limb-like extensions from the cell body) to myelinate multiple nearby axons; while in the peripheral nervous system, myelin is formed by Schwann cells (neurolemmocytes), which only myelinate a section of one axon. In the CNS, axons carry electrical signals from one nerve cell body to another. The "insulating" function for myelin is essential for efficient motor function (i.e. movement such as walking), sensory function (e.g. sight, hearing, smell, the feeling of touch or pain) and cognition (e.g. acquiring and recalling knowledge), as demonstrated by the consequence of disorders that affect myelination, such as the genetically determined leukodystrophies; the acquired inflammatory demyelinating disorder, multiple sclerosis; and the inflammatory demyelinating peripheral neuropathies. Due to its high prevalence, multiple sclerosis, which specifically affects the central nervous system (brain, spinal cord and optic nerve), is the best known disorder of myelin. Development The process of generating myelin is called myelination or myelinogenesis. In the CNS, oligodendrocyte progenitor cells (OPCs) differentiate into mature oligodendrocytes, which form myelin. 
In humans, myelination begins early in the 3rd trimester, although only little myelin is present in either the CNS or the PNS at the time of birth. During infancy, myelination progresses rapidly, with increasing numbers of axons acquiring myelin sheaths. This corresponds with the development of cognitive and motor skills, including language comprehension, speech acquisition, crawling and walking. Myelination continues through adolescence and early adulthood and although largely complete at this time, myelin sheaths can be added in grey matter regions such as the cerebral cortex, throughout life. Species distribution Vertebrates Myelin is considered a defining characteristic of the jawed vertebrates (gnathostomes), though axons are ensheathed by a type of cell, called glial cells, in invertebrates. These glial wraps are quite different from vertebrate compact myelin, formed, as indicated above, by concentric wrapping of the myelinating cell process multiple times around the axon. Myelin was first described in 1854 by Rudolf Virchow, although it was over a century later, following the development of electron microscopy, that its glial cell origin and its ultrastructure became apparent. In vertebrates, not all axons are myelinated. For example, in the PNS, a large proportion of axons are unmyelinated. Instead, they are ensheathed by non-myelinating Schwann cells known as Remak SCs and arranged in Remak bundles. In the CNS, non-myelinated axons (or intermittently myelinated axons, meaning axons with long non-myelinated regions between myelinated segments) intermingle with myelinated ones and are entwined, at least partially, by the processes of another type of glial cell the astrocyte. Invertebrates Functionally equivalent myelin-like sheaths are found in several invertebrate taxa, including oligochaete annelids, and crustacean taxa such as penaeids, palaemonids, and calanoids. These myelin-like sheaths share several structural features with the sheaths found in vertebrates including multiplicity of membranes, condensation of membrane, and nodes. However, the nodes in vertebrates are annular; i.e. they encircle the axon. In contrast, nodes found in the sheaths of invertebrates are either annular or fenestrated; i.e. they are restricted to "spots". The fastest recorded conduction speed (across both vertebrates and invertebrates) is found in the ensheathed axons of the Kuruma shrimp, an invertebrate, ranging between 90 and 200 m/s (cf. 100–120 m/s for the fastest myelinated vertebrate axon). Composition CNS myelin differs slightly in composition and configuration from PNS myelin, but both perform the same "insulating" function (see above). Being rich in lipid, myelin appears white, hence the name given to the "white matter" of the CNS. Both CNS white matter tracts (e.g. the optic nerve, corticospinal tract and corpus callosum) and PNS nerves (e.g. the sciatic nerve and the auditory nerve, which also appear white) each comprise thousands to millions of axons, largely aligned in parallel. Blood vessels provide the route for oxygen and energy substrates such as glucose to reach these fibre tracts, which also contain other cell types including astrocytes and microglia in the CNS and macrophages in the PNS. In terms of total mass, myelin comprises approximately 40% water; the dry mass comprises between 60% and 75% lipid and between 15% and 25% protein. 
Protein content includes myelin basic protein (MBP), which is abundant in the CNS where it plays a critical, non-redundant role in formation of compact myelin; myelin oligodendrocyte glycoprotein (MOG), which is specific to the CNS; and proteolipid protein (PLP), which is the most abundant protein in CNS myelin, but only a minor component of PNS myelin. In the PNS, myelin protein zero (MPZ or P0) has a similar role to that of PLP in the CNS in that it is involved in holding together the multiple concentric layers of glial cell membrane that constitute the myelin sheath. The primary lipid of myelin is a glycolipid called galactocerebroside. The intertwining hydrocarbon chains of sphingomyelin strengthen the myelin sheath. Cholesterol is an essential lipid component of myelin, without which myelin fails to form. Function The main purpose of myelin is to increase the speed at which electrical impulses (known as action potentials) propagate along the myelinated fiber. In unmyelinated fibers, action potentials travel as continuous waves, but, in myelinated fibers, they "hop" or propagate by saltatory conduction. The latter is markedly faster than the former, at least for axons over a certain diameter. Myelin decreases capacitance and increases electrical resistance across the axonal membrane (the axolemma). It has been suggested that myelin permits larger body size by maintaining agile communication between distant body parts. Myelinated fibers lack voltage-gated sodium channels along the myelinated internodes, exposing them only at the nodes of Ranvier. Here, they are highly abundant and densely packed. Positively charged sodium ions can enter the axon through these voltage-gated channels, leading to depolarisation of the membrane potential at the node of Ranvier. The resting membrane potential is then rapidly restored due to positively charged potassium ions leaving the axon through potassium channels. The sodium ions inside the axon then diffuse rapidly through the axoplasm (axonal cytoplasm), to the adjacent myelinated internode and ultimately to the next (distal) node of Ranvier, triggering the opening of the voltage gated sodium channels and entry of sodium ions at this site. Although the sodium ions diffuse through the axoplasm rapidly, diffusion is decremental by nature, thus nodes of Ranvier have to be (relatively) closely spaced, to secure action potential propagation. The action potential "recharges" at consecutive nodes of Ranvier as the axolemmal membrane potential depolarises to approximately +35 mV. Along the myelinated internode, energy-dependent sodium/potassium pumps pump the sodium ions back out of the axon and potassium ions back into the axon to restore the balance of ions between the intracellular (inside the cell, i.e. axon in this case) and extracellular (outside the cell) fluids. Whilst the role of myelin as an "axonal insulator" is well-established, other functions of myelinating cells are less well known or only recently established. The myelinating cell "sculpts" the underlying axon by promoting the phosphorylation of neurofilaments, thus increasing the diameter or thickness of the axon at the internodal regions; helps cluster molecules on the axolemma (such as voltage-gated sodium channels) at the node of Ranvier; and modulates the transport of cytoskeletal structures and organelles such as mitochondria, along the axon. In 2012, evidence came to light to support a role for the myelinating cell in "feeding" the axon. 
In other words, the myelinating cell seems to act as a local "fueling station" for the axon, which uses a great deal of energy to restore the normal balance of ions between it and its environment, following the generation of action potentials. When a peripheral fiber is severed, the myelin sheath provides a track along which regrowth can occur. However, the myelin layer does not ensure a perfect regeneration of the nerve fiber. Some regenerated nerve fibers do not find the correct muscle fibers, and some damaged motor neurons of the peripheral nervous system die without regrowth. Damage to the myelin sheath and nerve fiber is often associated with increased functional insufficiency. Unmyelinated fibers and myelinated axons of the mammalian central nervous system do not regenerate.
Clinical significance
Demyelination
Demyelination is the loss of the myelin sheath insulating the nerves, and is the hallmark of some neurodegenerative autoimmune diseases, including multiple sclerosis, acute disseminated encephalomyelitis, neuromyelitis optica, transverse myelitis, chronic inflammatory demyelinating polyneuropathy, Guillain–Barré syndrome, central pontine myelinosis, inherited demyelinating diseases such as leukodystrophy, and Charcot–Marie–Tooth disease. People with pernicious anaemia can also develop nerve damage if the condition is not diagnosed quickly. Subacute combined degeneration of the spinal cord secondary to pernicious anaemia can lead to anything from slight peripheral nerve damage to severe damage to the central nervous system, affecting speech, balance, and cognitive awareness. When myelin degrades, conduction of signals along the nerve can be impaired or lost, and the nerve eventually withers. A more serious case of myelin deterioration is called Canavan disease. The immune system may play a role in demyelination associated with such diseases, including inflammation causing demyelination by overproduction of cytokines via upregulation of tumor necrosis factor or interferon. There is MRI evidence that docosahexaenoic acid (DHA) ethyl ester improves myelination in generalized peroxisomal disorders.
Symptoms
Demyelination results in diverse symptoms determined by the functions of the affected neurons. It disrupts signals between the brain and other parts of the body; symptoms differ from patient to patient, and have different presentations upon clinical observation and in laboratory studies. Typical symptoms include blurriness in the central visual field that affects only one eye and may be accompanied by pain upon eye movement; double vision; loss of vision or hearing; odd sensations in the legs, arms, chest, or face, such as tingling or numbness (neuropathy); weakness of the arms or legs; cognitive disruption, including speech impairment and memory loss; heat sensitivity (symptoms worsen or reappear upon exposure to heat, such as a hot shower); loss of dexterity; difficulty coordinating movement or balance disorder; difficulty controlling bowel movements or urination; fatigue; and tinnitus.
Myelin repair
Research to repair damaged myelin sheaths is ongoing. Techniques include surgically implanting oligodendrocyte precursor cells in the central nervous system and inducing myelin repair with certain antibodies. While results in mice have been encouraging (via stem cell transplantation), whether this technique can be effective in replacing myelin loss in humans is still unknown.
Cholinergic treatments, such as acetylcholinesterase inhibitors (AChEIs), may have beneficial effects on myelination, myelin repair, and myelin integrity. Increasing cholinergic stimulation also may act through subtle trophic effects on brain developmental processes and particularly on oligodendrocytes and the lifelong myelination process they support. Increasing oligodendrocyte cholinergic stimulation, AChEIs, and other cholinergic treatments, such as nicotine, possibly could promote myelination during development and myelin repair in older age. Glycogen synthase kinase 3β inhibitors such as lithium chloride have been found to promote myelination in mice with damaged facial nerves. Cholesterol is a necessary nutrient for the myelin sheath, along with vitamin B12. Dysmyelination Dysmyelination is characterized by a defective structure and function of myelin sheaths; unlike demyelination, it does not produce lesions. Such defective sheaths often arise from genetic mutations affecting the biosynthesis and formation of myelin. The shiverer mouse represents one animal model of dysmyelination. Human diseases where dysmyelination has been implicated include leukodystrophies (Pelizaeus–Merzbacher disease, Canavan disease, phenylketonuria) and schizophrenia.
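As a numerical aside to the Function section above: the reduction in membrane capacitance (and increase in radial resistance) produced by wrapping many membrane layers around the axon can be approximated by treating the layers as capacitors and resistors in series. A minimal sketch with assumed, textbook-style values rather than figures taken from this article:

c_single_layer = 1.0     # assumed specific capacitance of one membrane layer, in microfarads per cm^2
r_single_layer = 10.0    # assumed specific resistance of one membrane layer, in kiloohm * cm^2
layers = 100             # assumed number of membrane layers in a compact myelin sheath

c_sheath = c_single_layer / layers   # capacitors in series: capacitance falls about 100-fold
r_sheath = r_single_layer * layers   # resistors in series: resistance rises about 100-fold
print(c_sheath, "uF/cm^2,", r_sheath, "kOhm*cm^2")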
Biology and health sciences
Nervous system
Biology
19322
https://en.wikipedia.org/wiki/Mesozoic
Mesozoic
The Mesozoic Era is the era of Earth's geological history lasting from about 252 to 66 million years ago, comprising the Triassic, Jurassic and Cretaceous Periods. It is characterized by the dominance of gymnosperms such as cycads, ginkgoaceae and araucarian conifers, and of archosaurian reptiles such as the dinosaurs; a hot greenhouse climate; and the tectonic break-up of Pangaea. The Mesozoic is the middle of the three eras since complex life evolved: the Paleozoic, the Mesozoic, and the Cenozoic. The era began in the wake of the Permian–Triassic extinction event, the largest mass extinction in Earth's history, and ended with the Cretaceous–Paleogene extinction event, another mass extinction whose victims included the non-avian dinosaurs, pterosaurs, mosasaurs, and plesiosaurs. The Mesozoic was a time of significant tectonic, climatic, and evolutionary activity. The supercontinent Pangaea began to break apart into separate landmasses. The climate of the Mesozoic was varied, alternating between warming and cooling periods. Overall, however, the Earth was hotter than it is today. Dinosaurs first appeared in the Mid-Triassic, and became the dominant terrestrial vertebrates in the Late Triassic or Early Jurassic, occupying this position for about 150 or 135 million years until their demise at the end of the Cretaceous. Archaic birds appeared in the Jurassic, having evolved from a branch of theropod dinosaurs, then true toothless birds appeared in the Cretaceous. The first mammals also appeared during the Mesozoic, but would remain small—less than 15 kg (33 lb)—until the Cenozoic. Flowering plants appeared in the Early Cretaceous and would rapidly diversify through the end of the era, replacing conifers and other gymnosperms (sensu lato), such as ginkgoales, cycads and bennettitales, as the dominant group of plants.
Naming
The phrase "Age of Reptiles" was introduced by the 19th century paleontologist Gideon Mantell, who viewed it as dominated by diapsids such as Iguanodon, Megalosaurus, Plesiosaurus, and Pterodactylus. The current name was proposed in 1840 by the British geologist John Phillips (1800–1874). "Mesozoic" literally means 'middle life', deriving from the Greek prefix meso- ('between') and zōon ('animal, living being'). In this way, the Mesozoic is comparable to the Cenozoic ('new life') and Paleozoic ('old life') eras, as well as the Proterozoic ('earlier life') Eon. The Mesozoic Era was originally described as the "secondary" era, following the "primary" (Paleozoic), and preceding the Tertiary.
Geologic periods
Following the Paleozoic, the Mesozoic extended roughly 186 million years, from about 252 million years ago to about 66 million years ago, when the Cenozoic Era began. This time frame is separated into three geologic periods. From oldest to youngest:
Triassic Period (about 252 to 201 million years ago)
Jurassic Period (about 201 to 145 million years ago)
Cretaceous Period (about 145 to 66 million years ago)
The lower boundary of the Mesozoic is set by the Permian–Triassic extinction event, during which it has been estimated that up to 90–96% of marine species became extinct, although those approximations have been brought into question, with some paleontologists estimating the actual numbers as low as 81%. It is also known as the "Great Dying" because it is considered the largest mass extinction in the Earth's history. The upper boundary of the Mesozoic is set at the Cretaceous–Paleogene extinction event (or K–Pg extinction event), which may have been caused by an asteroid impactor that created Chicxulub Crater on the Yucatán Peninsula. Towards the Late Cretaceous, large volcanic eruptions are also believed to have contributed to the Cretaceous–Paleogene extinction event.
Approximately 50% of all genera became extinct, including all of the non-avian dinosaurs. Triassic The Triassic ranges roughly from 252 million to 201 million years ago, preceding the Jurassic Period. The period is bracketed between the Permian–Triassic extinction event and the Triassic–Jurassic extinction event, two of the "big five", and it is divided into three major epochs: Early, Middle, and Late Triassic. The Early Triassic, about 252 to 247 million years ago, was dominated by deserts in the interior of the Pangaea supercontinent. The Earth had just witnessed a massive die-off in which 95% of all life became extinct, and the most common vertebrate life on land were Lystrosaurus, labyrinthodonts, and Euparkeria along with many other creatures that managed to survive the Permian extinction. Temnospondyls reached peak diversity during the early Triassic. The Middle Triassic, from 247 to 237 million years ago, featured the beginnings of the breakup of Pangaea and the opening of the Tethys Ocean. Ecosystems had recovered from the Permian extinction. Algae, sponge, corals, and crustaceans all had recovered, and new aquatic reptiles evolved, such as ichthyosaurs and nothosaurs. On land, pine forests flourished, as did groups of insects such as mosquitoes and fruit flies. Reptiles began to get bigger and bigger, and the first crocodilians and dinosaurs evolved, which sparked competition with the large amphibians that had previously ruled the freshwater world, respectively mammal-like reptiles on land. Following the bloom of the Middle Triassic, the Late Triassic, from 237 to 201 million years ago, featured frequent heat spells and moderate precipitation (10–20 inches per year). The recent warming led to a boom of dinosaurian evolution on land as the continents began to separate from each other (Nyasasaurus from 243 to 210 million years ago, approximately 235–30 ma, some of them separated into Sauropodomorphs, Theropods and Herrerasaurids), as well as the first pterosaurs. During the Late Triassic, some advanced cynodonts gave rise to the first Mammaliaformes. All this climatic change, however, resulted in a large die-out known as the Triassic–Jurassic extinction event, in which many archosaurs (excluding pterosaurs, dinosaurs and crocodylomorphs), most synapsids, and almost all large amphibians became extinct, as well as 34% of marine life, in the Earth's fourth mass extinction event. The cause is debatable; flood basalt eruptions at the Central Atlantic magmatic province is cited as one possible cause. Jurassic The Jurassic ranges from 200 million years to 145 million years ago and features three major epochs: The Early Jurassic, the Middle Jurassic, and the Late Jurassic. The Early Jurassic spans from 200 to 175 million years ago. The climate was tropical and much more humid than the Triassic, as a result of the large seas appearing between the land masses. In the oceans, plesiosaurs, ichthyosaurs and ammonites were abundant. On land, dinosaurs and other archosaurs staked their claim as the dominant race, with theropods such as Dilophosaurus at the top of the food chain. The first true crocodiles evolved, pushing the large amphibians to near extinction. All-in-all, archosaurs rose to rule the world. Meanwhile, the first true mammals evolved, remaining relatively small, but spreading widely; the Jurassic Castorocauda, for example, had adaptations for swimming, digging and catching fish. 
Fruitafossor, from the late Jurassic Period about 150 million years ago, was about the size of a chipmunk, and its teeth, forelimbs and back suggest that it dug open the nests of social insects (probably termites, as ants had not yet appeared) ; Volaticotherium was able to glide for short distances, such as modern flying squirrels. The first multituberculates such as Rugosodon evolved. The Middle Jurassic spans from 175 to 163 million years ago. During this epoch, dinosaurs flourished as huge herds of sauropods, such as Brachiosaurus and Diplodocus, filled the fern prairies, chased by many new predators such as Allosaurus. Conifer forests made up a large portion of the forests. In the oceans, plesiosaurs were quite common, and ichthyosaurs flourished. This epoch was the peak of the reptiles. The Late Jurassic spans from 163 to 145 million years ago. During this epoch, the first avialans, such as Archaeopteryx, evolved from small coelurosaurian dinosaurs. The increase in sea levels opened up the Atlantic seaway, which has grown continually larger until today. The further separation of the continents gave opportunity for the diversification of new dinosaurs. Cretaceous The Cretaceous is the longest period of the Mesozoic, but has only two epochs: Early and Late Cretaceous. The Early Cretaceous spans from 145 to 100 million years ago. The Early Cretaceous saw the expansion of seaways and a decline in diversity of sauropods, stegosaurs, and other high-browsing groups, with sauropods particularly scarce in North America. Seasons came back into effect and the poles got seasonally colder, but some dinosaurs still inhabited the polar forests year round, such as Leaellynasaura and Muttaburrasaurus. The poles were too cold for crocodiles, and became the last stronghold for large amphibians such as Koolasuchus. Pterosaurs got larger as genera such as Tapejara and Ornithocheirus evolved. Mammals continued to expand their range: eutriconodonts produced fairly large, wolverine-like predators such as Repenomamus and Gobiconodon, early therians began to expand into metatherians and eutherians, and cimolodont multituberculates went on to become common in the fossil record. The Late Cretaceous spans from 100 to 66 million years ago. The Late Cretaceous featured a cooling trend that would continue in the Cenozoic Era. Eventually, tropics were restricted to the equator and areas beyond the tropic lines experienced extreme seasonal changes in weather. Dinosaurs still thrived, as new taxa such as Tyrannosaurus, Ankylosaurus, Triceratops and hadrosaurs dominated the food web. In the oceans, mosasaurs ruled, filling the role of the ichthyosaurs, which, after declining, had disappeared in the Cenomanian-Turonian boundary event. Though pliosaurs had gone extinct in the same event, long-necked plesiosaurs such as Elasmosaurus continued to thrive. Flowering plants, possibly appearing as far back as the Triassic, became truly dominant for the first time. Pterosaurs in the Late Cretaceous declined for poorly understood reasons, though this might be due to tendencies of the fossil record, as their diversity seems to be much higher than previously thought. Birds became increasingly common and diversified into a variety of enantiornithe and ornithurine forms. Though mostly small, marine hesperornithes became relatively large and flightless, adapted to life in the open sea. Metatherians and primitive eutherian also became common and even produced large and specialised genera such as Didelphodon and Schowalteria. 
Still, the dominant mammals were multituberculates, cimolodonts in the north and gondwanatheres in the south. At the end of the Cretaceous, the Deccan traps and other volcanic eruptions were poisoning the atmosphere. As this continued, it is thought that a large meteor smashed into earth 66 million years ago, creating the Chicxulub Crater in an event known as the K-Pg Extinction (formerly K-T), the fifth and most recent mass extinction event, in which 75% of life became extinct, including all non-avian dinosaurs. Paleogeography and tectonics Compared to the vigorous convergent plate mountain-building of the late Paleozoic, Mesozoic tectonic deformation was comparatively mild. The sole major Mesozoic orogeny occurred in what is now the Arctic, creating the Innuitian orogeny, the Brooks Range, the Verkhoyansk and Cherskiy Ranges in Siberia, and the Khingan Mountains in Manchuria. This orogeny was related to the opening of the Arctic Ocean and suturing of the North China and Siberian cratons to Asia. In contrast, the era featured the dramatic rifting of the supercontinent Pangaea, which gradually split into a northern continent, Laurasia, and a southern continent, Gondwana. This created the passive continental margin that characterizes most of the Atlantic coastline (such as along the U.S. East Coast) today. By the end of the era, the continents had rifted into nearly their present forms, though not their present positions. Laurasia became North America and Eurasia, while Gondwana split into South America, Africa, Australia, Antarctica and the Indian subcontinent, which collided with the Asian plate during the Cenozoic, giving rise to the Himalayas. Climate The Triassic was generally dry, a trend that began in the late Carboniferous, and highly seasonal, especially in the interior of Pangaea. Low sea levels may have also exacerbated temperature extremes. With its high specific heat capacity, water acts as a temperature-stabilizing heat reservoir, and land areas near large bodies of water—especially oceans—experience less variation in temperature. Because much of Pangaea's land was distant from its shores, temperatures fluctuated greatly, and the interior probably included expansive deserts. Abundant red beds and evaporites such as halite support these conclusions, but some evidence suggests the generally dry climate of the Triassic was punctuated by episodes of increased rainfall. The most important humid episodes were the Carnian Pluvial Event and one in the Rhaetian, a few million years before the Triassic–Jurassic extinction event. Sea levels began to rise during the Jurassic, probably caused by an increase in seafloor spreading. The formation of new crust beneath the surface displaced ocean waters by as much as above today's sea level, flooding coastal areas. Furthermore, Pangaea began to rift into smaller divisions, creating new shoreline around the Tethys Ocean. Temperatures continued to increase, then began to stabilize. Humidity also increased with the proximity of water, and deserts retreated. The climate of the Cretaceous is less certain and more widely disputed. Probably, higher levels of carbon dioxide in the atmosphere are thought to have almost eliminated the north–south temperature gradient: temperatures were about the same across the planet, and about 10°C higher than today. The circulation of oxygen to the deep ocean may also have been disrupted, preventing the decomposition of large volumes of organic matter, which was eventually deposited as "black shale". 
Different studies have come to different conclusions about the amount of oxygen in the atmosphere during different parts of the Mesozoic, with some concluding oxygen levels were lower than the current level (about 21%) throughout the Mesozoic, some concluding they were lower in the Triassic and part of the Jurassic but higher in the Cretaceous, and some concluding they were higher throughout most or all of the Triassic, Jurassic and Cretaceous. Life Flora The dominant land plant species of the time were gymnosperms, which are vascular, cone-bearing, non-flowering plants such as conifers that produce seeds without a coating. This contrasts with the earth's current flora, in which the dominant land plants in terms of number of species are angiosperms. The earliest members of the genus Ginkgo first appeared during the Middle Jurassic. This genus is represented today by a single species, Ginkgo biloba. Modern conifer groups began to radiate during the Jurassic. Bennettitales, an extinct group of gymnosperms with foliage superficially resembling that of cycads, gained a global distribution during the Late Triassic and represented one of the most common groups of Mesozoic seed plants. Flowering plants radiated during the early Cretaceous, first in the tropics, but the even temperature gradient allowed them to spread toward the poles throughout the period. By the end of the Cretaceous, angiosperms dominated tree floras in many areas, although some evidence suggests that biomass was still dominated by cycads and ferns until after the Cretaceous–Paleogene extinction. Some plant species had distributions that were markedly different from those of succeeding periods; for example, the Schizaeales, a fern order, were skewed to the Northern Hemisphere in the Mesozoic, but are now better represented in the Southern Hemisphere. Fauna The extinction of nearly all animal species at the end of the Permian Period allowed for the radiation of many new lifeforms. In particular, the extinction of the large herbivorous pareiasaurs and carnivorous gorgonopsians left those ecological niches empty. Some were filled by the surviving cynodonts and dicynodonts, the latter of which subsequently became extinct. Recent research indicates that it took much longer for the reestablishment of complex ecosystems with high biodiversity, complex food webs, and specialized animals in a variety of niches: recovery began in the mid-Triassic, 4 million to 6 million years after the extinction, and was not complete until 30 million years after it. Animal life was then dominated by archosaurs such as dinosaurs and pterosaurs, along with aquatic reptiles such as ichthyosaurs, plesiosaurs, and mosasaurs. The climatic changes of the late Jurassic and Cretaceous favored further adaptive radiation. The Jurassic was the height of archosaur diversity, and the first birds and eutherian mammals also appeared. Some have argued that insects diversified in symbiosis with angiosperms, because insect anatomy, especially the mouth parts, seems particularly well-suited for flowering plants. However, all major insect mouth parts preceded angiosperms, and insect diversification actually slowed when they arrived, so their anatomy must originally have been suited for some other purpose. Microbiota At the dawn of the Mesozoic, ocean plankton communities transitioned from ones dominated by green archaeplastidans to ones dominated by endosymbiotic algae with red-algal-derived plastids.
This transition is speculated to have been caused by an increasing paucity of many trace metals in the Mesozoic ocean.
Moon
The Moon is Earth's only natural satellite. It orbits at an average distance of , about 30 times the diameter of Earth. Tidal forces between Earth and the Moon have synchronized the Moon's orbital period (lunar month) with its rotation period (lunar day) at 29.5 Earth days, causing the same side of the Moon to always face Earth. The Moon's gravitational pull is the main driver of Earth's tides. In geophysical terms, the Moon is a planetary-mass object or satellite planet. Its mass is 1.2% that of the Earth, and its diameter is , roughly one-quarter of Earth's (about as wide as the contiguous United States). Within the Solar System, it is the largest and most massive satellite in relation to its parent planet, the fifth-largest and fifth-most massive moon overall, and larger and more massive than all known dwarf planets. Its surface gravity is about one-sixth of Earth's, about half that of Mars, and the second-highest among all moons in the Solar System, after Jupiter's moon Io. The body of the Moon is differentiated and terrestrial, with no significant hydrosphere, atmosphere, or magnetic field. It formed 4.51 billion years ago, not long after Earth's formation, out of the debris from a giant impact between Earth and a hypothesized Mars-sized body called Theia. The lunar surface is covered in lunar dust and marked by mountains, impact craters, their ejecta, ray-like streaks, rilles and, mostly on the near side of the Moon, by dark maria ('seas'), which are plains of cooled lava. These maria were formed when molten lava flowed into ancient impact basins. The Moon is, except when passing through Earth's shadow during a lunar eclipse, always illuminated by the Sun, but from Earth the visible illumination shifts during its orbit, producing the lunar phases. The Moon is the brightest celestial object in Earth's night sky. This is mainly due to its large angular diameter, while the reflectance of the lunar surface is comparable to that of asphalt. The apparent size is nearly the same as that of the Sun, allowing it to cover the Sun completely during a total solar eclipse. From Earth about 59% of the lunar surface is visible over time due to cyclical shifts in perspective (libration), making parts of the far side of the Moon visible. The Moon has been an important source of inspiration and knowledge for humans, having been crucial to cosmography, mythology, religion, art, time keeping, natural science, and spaceflight. The first human-made objects to fly to an extraterrestrial body were sent to the Moon, starting in 1959 with the flyby of the Soviet Union's Luna 1 and the intentional impact of Luna 2. In 1966, the first soft landing (by Luna 9) and orbital insertion (by Luna 10) followed. On July 20, 1969, humans for the first time stepped on an extraterrestrial body, landing on the Moon at Mare Tranquillitatis with the lander Eagle of the United States' Apollo 11 mission. Five more crews were sent between then and 1972, each with two men landing on the surface. The longest stay was 75 hours by the Apollo 17 crew. Since then, exploration of the Moon has continued robotically, and crewed missions are being planned to return beginning in the late 2020s. Names and etymology The English proper name for Earth's natural satellite is typically written as Moon, with a capital M. 
The noun moon is derived from Old English , which stems from Proto-Germanic *mēnōn, which in turn comes from Proto-Indo-European *mēnsis 'month' (from earlier *mēnōt, genitive *mēneses) which may be related to the verb 'measure' (of time). Occasionally, the name Luna is used in scientific writing and especially in science fiction to distinguish the Earth's moon from others, while in poetry "Luna" has been used to denote personification of the Moon. Cynthia is a rare poetic name for the Moon personified as a goddess, while Selene (literally 'Moon') is the Greek goddess of the Moon. The English adjective pertaining to the Moon is lunar, derived from the Latin word for the Moon, . Selenian is an adjective used to describe the Moon as a world, rather than as a celestial object, but its use is rare. It is derived from , the Greek word for the Moon, and its cognate selenic was originally a rare synonym but now nearly always refers to the chemical element selenium. The element name selenium and the prefix seleno- (as in selenography, the study of the physical features of the Moon) come from this Greek word. Artemis, the Greek goddess of the wilderness and the hunt, also came to be identified with Selene, and was sometimes called Cynthia after her birthplace on Mount Cynthus. Her Roman equivalent is Diana. The names Luna, Cynthia, and Selene are reflected in technical terms for lunar orbits such as apolune, pericynthion and selenocentric. The astronomical symbols for the Moon are the crescent and decrescent , for example in M☾ 'lunar mass'. Natural history Lunar geologic timescale The lunar geological periods are named after their characteristic features, from most impact craters outside the dark mare, to the mare and later craters, and finally the young, still bright and therefore readily visible craters with ray systems like Copernicus or Tycho. Formation Isotope dating of lunar samples suggests the Moon formed around 50 million years after the origin of the Solar System. Historically, several formation mechanisms have been proposed, but none satisfactorily explains the features of the Earth–Moon system. A fission of the Moon from Earth's crust through centrifugal force would require too great an initial rotation rate of Earth. Gravitational capture of a pre-formed Moon depends on an unfeasibly extended atmosphere of Earth to dissipate the energy of the passing Moon. A co-formation of Earth and the Moon together in the primordial accretion disk does not explain the depletion of metals in the Moon. None of these hypotheses can account for the high angular momentum of the Earth–Moon system. The prevailing theory is that the Earth–Moon system formed after a giant impact of a Mars-sized body (named Theia) with the proto-Earth. The oblique impact blasted material into orbit about the Earth and the material accreted and formed the Moon just beyond the Earth's Roche limit of ~. Giant impacts are thought to have been common in the early Solar System. Computer simulations of giant impacts have produced results that are consistent with the mass of the lunar core and the angular momentum of the Earth–Moon system. These simulations show that most of the Moon derived from the impactor, rather than the proto-Earth. However, models from 2007 and later suggest a larger fraction of the Moon derived from the proto-Earth. Other bodies of the inner Solar System such as Mars and Vesta have, according to meteorites from them, very different oxygen and tungsten isotopic compositions compared to Earth. 
However, Earth and the Moon have nearly identical isotopic compositions. The isotopic equalization of the Earth–Moon system might be explained by the post-impact mixing of the vaporized material that formed the two, although this is debated. The impact would have released enough energy to liquefy both the ejecta and the Earth's crust, forming a magma ocean. The liquefied ejecta could have then re-accreted into the Earth–Moon system. The newly formed Moon would have had its own magma ocean; its depth is estimated from about to . While the giant-impact theory explains many lines of evidence, some questions are still unresolved, most of which involve the Moon's composition. Models in which the Moon acquires a significant amount of its material from the proto-Earth are more difficult to reconcile with geochemical data for the isotopes of zirconium, oxygen, silicon, and other elements. A study published in 2022, using high-resolution simulations (up to particles), found that giant impacts can immediately place a satellite with similar mass and iron content to the Moon into orbit far outside Earth's Roche limit. Even satellites that initially pass within the Roche limit can reliably and predictably survive, by being partially stripped and then torqued onto wider, stable orbits. On November 1, 2023, scientists reported that, according to computer simulations, remnants of Theia could still be present inside the Earth. Natural development The newly formed Moon settled into a much closer Earth orbit than it has today. Each body therefore appeared much larger in the sky of the other, eclipses were more frequent, and tidal effects were stronger. Due to tidal acceleration, the Moon's orbit around Earth has become significantly larger, with a longer period. Following formation, the Moon has cooled and most of its atmosphere has been stripped. The lunar surface has since been shaped by large impact events and many small ones, forming a landscape featuring craters of all ages. The Moon was volcanically active until 1.2 billion years ago; this volcanism laid down the prominent lunar maria. Most of the mare basalts erupted during the Imbrian period, 3.3–3.7 billion years ago, though some are as young as 1.2 billion years and some as old as 4.2 billion years. There are differing explanations for the eruption of mare basalts, particularly for their uneven distribution, with maria appearing mainly on the near side. Causes of the distribution of the lunar highlands on the far side are also not well understood. Topographic measurements show the near-side crust is thinner than that of the far side. One possible scenario, then, is that large impacts on the near side may have made it easier for lava to flow onto the surface. Physical characteristics The Moon is a very slightly scalene ellipsoid due to tidal stretching, with its long axis displaced 30° from facing the Earth, due to gravitational anomalies from impact basins. Its shape is more elongated than current tidal forces can account for. This 'fossil bulge' indicates that the Moon solidified when it orbited at half its current distance to the Earth, and that it is now too cold for its shape to restore hydrostatic equilibrium at its current orbital distance. Size and mass The Moon is by size and mass the fifth largest natural satellite of the Solar System, categorizable as one of its planetary-mass moons, making it a satellite planet under the geophysical definitions of the term. It is smaller than Mercury and considerably larger than the largest dwarf planet of the Solar System, Pluto.
The Moon is the largest natural satellite in the Solar System relative to its primary planet. The Moon's diameter is about 3,500 km, more than one-quarter of Earth's, with the face of the Moon comparable to the width of either mainland Australia, Europe or the contiguous United States. The whole surface area of the Moon is about 38 million square kilometers, comparable to that of the Americas. The Moon's mass is of Earth's; it is the second densest among the planetary moons and has the second highest surface gravity, after Io, at , with an escape velocity of . Structure The Moon is a differentiated body that was initially in hydrostatic equilibrium but has since departed from this condition. It has a geochemically distinct crust, mantle, and core. The Moon has a solid iron-rich inner core with a radius possibly as small as and a fluid outer core primarily made of liquid iron with a radius of roughly . Around the core is a partially molten boundary layer with a radius of about . This structure is thought to have developed through the fractional crystallization of a global magma ocean shortly after the Moon's formation 4.5 billion years ago. Crystallization of this magma ocean would have created a mafic mantle from the precipitation and sinking of the minerals olivine, clinopyroxene, and orthopyroxene; after about three-quarters of the magma ocean had crystallized, lower-density plagioclase minerals could form and float upward to form a crust on top. The final liquids to crystallize would have been initially sandwiched between the crust and mantle, with a high abundance of incompatible and heat-producing elements. Consistent with this perspective, geochemical mapping made from orbit suggests a crust of mostly anorthosite. Moon rock samples of the flood lavas that erupted onto the surface from partial melting in the mantle confirm the mafic mantle composition, which is more iron-rich than that of Earth. The crust is on average about thick. The Moon is the second-densest satellite in the Solar System, after Io. However, the inner core of the Moon is small, with a radius of about or less, around 20% of the radius of the Moon. Its composition is not well understood but is probably metallic iron alloyed with a small amount of sulfur and nickel; analyses of the Moon's time-variable rotation suggest that it is at least partly molten. The pressure at the lunar core is estimated to be . Gravitational field On average the Moon's surface gravity is , about half of the surface gravity of Mars and about a sixth of Earth's. The Moon's gravitational field is not uniform. The details of the gravitational field have been measured through tracking the Doppler shift of radio signals emitted by orbiting spacecraft. The main lunar gravity features are mascons, large positive gravitational anomalies associated with some of the giant impact basins, partly caused by the dense mare basaltic lava flows that fill those basins. The anomalies greatly influence the orbit of spacecraft about the Moon. There are some puzzles: lava flows by themselves cannot explain all of the gravitational signature, and some mascons exist that are not linked to mare volcanism. Magnetic field The Moon has an external magnetic field of less than 0.2 nanoteslas, or less than one hundred thousandth that of Earth. The Moon does not have a global dipolar magnetic field and only has crustal magnetization likely acquired early in its history when a dynamo was still operating.
Early in its history, 4 billion years ago, its magnetic field strength was likely close to that of Earth today. This early dynamo field apparently expired by about one billion years ago, after the lunar core had crystallized. Theoretically, some of the remnant magnetization may originate from transient magnetic fields generated during large impacts through the expansion of plasma clouds. These clouds are generated during large impacts in an ambient magnetic field. This is supported by the fact that the largest crustal magnetizations are situated near the antipodes of the giant impact basins. Atmosphere The Moon has an atmosphere so tenuous as to be nearly a vacuum, with a total mass of less than . The surface pressure of this small mass is around 3 × 10⁻¹⁵ atm (0.3 nPa); it varies with the lunar day. Its sources include outgassing and sputtering, a product of the bombardment of lunar soil by solar wind ions. Elements that have been detected include sodium and potassium, produced by sputtering (also found in the atmospheres of Mercury and Io); helium-4 and neon from the solar wind; and argon-40, radon-222, and polonium-210, outgassed after their creation by radioactive decay within the crust and mantle. The absence of such neutral species (atoms or molecules) as oxygen, nitrogen, carbon, hydrogen and magnesium, which are present in the regolith, is not understood. Water vapor has been detected by Chandrayaan-1 and found to vary with latitude, with a maximum at ~60–70 degrees; it is possibly generated from the sublimation of water ice in the regolith. These gases either return into the regolith because of the Moon's gravity or are lost to space, either through solar radiation pressure or, if they are ionized, by being swept away by the solar wind's magnetic field. Studies of Moon magma samples retrieved by the Apollo missions demonstrate that the Moon had once possessed a relatively thick atmosphere for a period of 70 million years between 3 and 4 billion years ago. This atmosphere, sourced from gases ejected from lunar volcanic eruptions, was twice the thickness of that of present-day Mars. The ancient lunar atmosphere was eventually stripped away by solar winds and dissipated into space. A permanent Moon dust cloud exists around the Moon, generated by small particles from comets. Estimates suggest that about 5 tons of comet particles strike the Moon's surface every 24 hours, resulting in the ejection of dust particles. The dust stays above the Moon for approximately 10 minutes, taking 5 minutes to rise and 5 minutes to fall. On average, 120 kilograms of dust are present above the Moon, rising up to 100 kilometers above the surface. Dust counts made by LADEE's Lunar Dust EXperiment (LDEX) found that particle counts peaked during the Geminid, Quadrantid, Northern Taurid, and Omicron Centaurid meteor showers, when the Earth and Moon pass through comet debris. The lunar dust cloud is asymmetric, being denser near the boundary between the Moon's dayside and nightside. Surface conditions Ionizing radiation from cosmic rays, the Sun and the resulting neutron radiation produces average radiation levels of 1.369 millisieverts per day during the lunar daytime. This is about 2.6 times more than on the International Space Station (0.53 millisieverts per day at about 400 km above Earth), 5–10 times more than during a trans-Atlantic flight, and 200 times more than on Earth's surface.
For further comparison, radiation on a flight to Mars is about 1.84 millisieverts per day, and on Mars itself about 0.64 millisieverts per day on average, with some locations possibly having levels as low as 0.342 millisieverts per day. Solar radiation electrostatically charges the highly abrasive lunar dust, causing it to levitate and spread easily; the sticky dust can damage lungs and equipment. The Moon's axial tilt with respect to the ecliptic is only 1.5427°, much less than the 23.44° of Earth. Because of this small tilt, the Moon's solar illumination varies much less with the seasons than on Earth, and it allows for the existence of some peaks of eternal light at the Moon's north pole, at the rim of the crater Peary. The surface is exposed to drastic temperature differences ranging from to depending on the solar irradiance. Because of the lack of atmosphere, temperatures vary greatly depending on whether areas are in sunlight or shadow, so topographical details play a decisive role in local surface temperatures. Parts of many craters, particularly the bottoms of many polar craters, are permanently shadowed; these "craters of eternal darkness" have extremely low temperatures. The Lunar Reconnaissance Orbiter measured the lowest summer temperatures in craters at the southern pole, at , and just close to the winter solstice in the north polar crater Hermite. This is the coldest temperature in the Solar System ever measured by a spacecraft, colder even than the surface of Pluto. Blanketed on top of the Moon's crust is a highly comminuted (broken into ever smaller particles) and impact-gardened, mostly gray surface layer called regolith, formed by impact processes. The finer regolith, the lunar soil of silicon dioxide glass, has a texture resembling snow and a scent resembling spent gunpowder. The regolith of older surfaces is generally thicker than for younger surfaces: it varies in thickness from in the highlands to in the maria. Beneath the finely comminuted regolith layer is the megaregolith, a layer of highly fractured bedrock many kilometers thick. These extreme conditions are considered to make it unlikely for spacecraft to harbor bacterial spores at the Moon for longer than just one lunar orbit. Surface features The topography of the Moon has been measured with laser altimetry and stereo image analysis. Its most extensive topographic feature is the giant far-side South Pole–Aitken basin, some in diameter, the largest crater on the Moon and the second-largest confirmed impact crater in the Solar System. At deep, its floor is the lowest point on the surface of the Moon. The highest elevations of the Moon's surface are located directly to the northeast, which might have been thickened by the oblique formation impact of the South Pole–Aitken basin. Other large impact basins such as Imbrium, Serenitatis, Crisium, Smythii, and Orientale possess regionally low elevations and elevated rims. The far side of the lunar surface is on average about higher than that of the near side. The discovery of fault scarp cliffs suggests that the Moon has shrunk by about 90 metres (300 ft) within the past billion years. Similar shrinkage features exist on Mercury. Mare Frigoris, a basin near the north pole long assumed to be geologically dead, has cracked and shifted. Since the Moon does not have tectonic plates, its tectonic activity is slow, and cracks develop as it loses heat.
Scientists have confirmed the presence of a cave on the Moon near the Sea of Tranquillity, not far from the 1969 Apollo 11 landing site. The cave, identified as an entry point to a collapsed lava tube, is roughly 45 meters wide and up to 80 meters long. This discovery marks the first confirmed entry point to a lunar cave. The analysis was based on photos taken in 2010 by NASA's Lunar Reconnaissance Orbiter. The cave's stable temperature of around could provide a hospitable environment for future astronauts, protecting them from extreme temperatures, solar radiation, and micrometeorites. However, challenges include accessibility and risks of avalanches and cave-ins. This discovery offers potential for future lunar bases or emergency shelters. Volcanic features The main features visible from Earth with the naked eye are dark and relatively featureless lunar plains called maria (singular mare; Latin for "seas", as they were once believed to be filled with water), which are vast solidified pools of ancient basaltic lava. Although similar to terrestrial basalts, lunar basalts have more iron and no minerals altered by water. The majority of these lava deposits erupted or flowed into the depressions associated with impact basins, though the Moon's largest expanse of basalt flooding, Oceanus Procellarum, does not correspond to an obvious impact basin. Different episodes of lava flow in maria can often be recognized by variations in surface albedo and distinct flow margins. As the maria formed, cooling and contraction of the basaltic lava created wrinkle ridges in some areas. These low, sinuous ridges can extend for hundreds of kilometers and often outline buried structures within the mare. Another result of maria formation is the creation of concentric depressions along the edges, known as arcuate rilles. These features occur as the mare basalts sink inward under their own weight, causing the edges to fracture and separate. In addition to the visible maria, the Moon has mare deposits covered by ejecta from impacts. Called cryptomares, these hidden mares are likely older than the exposed ones. Conversely, mare lava has obscured many impact melt sheets and pools. Impact melts are formed when intense shock pressures from collisions vaporize and melt zones around the impact site. Where still exposed, impact melt can be distinguished from mare lava by its distribution, albedo, and texture. Sinuous rilles, found in and around maria, are likely extinct lava channels or collapsed lava tubes. They typically originate from volcanic vents, meandering and sometimes branching as they progress. The largest examples, such as Schröter's Valley and Rima Hadley, are significantly longer, wider, and deeper than terrestrial lava channels, sometimes featuring bends and sharp turns that, again, are uncommon on Earth. Mare volcanism has altered impact craters in various ways, including filling them to varying degrees and raising and fracturing their floors through uplift of mare material beneath their interiors. Examples of such craters include Taruntius and Gassendi. Some craters, such as Hyginus, are of wholly volcanic origin, forming as calderas or collapse pits. Such craters are relatively rare and tend to be smaller (typically a few kilometers wide), shallower, and more irregularly shaped than impact craters. They also lack the upturned rims characteristic of impact craters. Several geologic provinces containing shield volcanoes and volcanic domes are found within the near-side maria.
There are also some regions of pyroclastic deposits, scoria cones and non-basaltic domes made of particularly high-viscosity lava. Almost all maria are on the near side of the Moon, and cover 31% of the surface of the near side compared with 2% of the far side. This is likely due to a concentration of heat-producing elements under the crust on the near side, which would have caused the underlying mantle to heat up, partially melt, rise to the surface and erupt. Most of the Moon's mare basalts erupted during the Imbrian period, 3.3–3.7 billion years ago, though some are as young as 1.2 billion years and others as old as 4.2 billion years. In 2006, a study of Ina, a tiny depression in Lacus Felicitatis, found jagged, relatively dust-free features that, because of the lack of erosion by infalling debris, appeared to be only 2 million years old. Moonquakes and releases of gas indicate continued lunar activity. Evidence of recent lunar volcanism has been identified at 70 irregular mare patches, some less than 50 million years old. This raises the possibility of a much warmer lunar mantle than previously believed, at least on the near side where the deep crust is substantially warmer because of the greater concentration of radioactive elements. Evidence has been found for basaltic volcanism 2–10 million years old within the crater Lowell, inside the Orientale basin. Some combination of an initially hotter mantle and local enrichment of heat-producing elements in the mantle could be responsible for prolonged activity on the far side in the Orientale basin. The lighter-colored regions of the Moon are called terrae, or more commonly highlands, because they are higher than most maria. They have been radiometrically dated as having formed 4.4 billion years ago and may represent plagioclase cumulates of the lunar magma ocean. In contrast to Earth, no major lunar mountains are believed to have formed as a result of tectonic events. The concentration of maria on the near side likely reflects the substantially thicker crust of the highlands of the far side, which may have formed in a slow-velocity impact of a second moon of Earth a few tens of millions of years after the Moon's formation. Alternatively, it may be a consequence of asymmetrical tidal heating when the Moon was much closer to the Earth. Impact craters A major geologic process that has affected the Moon's surface is impact cratering, with craters formed when asteroids and comets collide with the lunar surface. There are estimated to be roughly 300,000 craters wider than on the Moon's near side. Lunar craters exhibit a variety of forms, depending on their size. In order of increasing diameter, the basic types are simple craters with smooth bowl-shaped interiors and upturned rims, complex craters with flat floors, terraced walls and central peaks, peak-ring basins, and multi-ring basins with two or more concentric rings of peaks. The vast majority of impact craters are circular, but some, like Cantor and Janssen, have more polygonal outlines, possibly guided by underlying faults and joints. Others, such as the Messier pair, Schiller, and Daniell, are elongated. Such elongation can result from highly oblique impacts, binary asteroid impacts, fragmentation of impactors before surface strike, or closely spaced secondary impacts.
The lunar geologic timescale is based on the most prominent impact events, such as multi-ring formations like Nectaris, Imbrium, and Orientale that are between hundreds and thousands of kilometers in diameter and associated with a broad apron of ejecta deposits that form a regional stratigraphic horizon. The lack of an atmosphere, weather, and recent geological processes means that many of these craters are well-preserved. Although only a few multi-ring basins have been definitively dated, they are useful for assigning relative ages. Because impact craters accumulate at a nearly constant rate, counting the number of craters per unit area can be used to estimate the age of the surface. However, care needs to be exercised with the crater counting technique due to the potential presence of secondary craters. Ejecta from impacts can create secondary craters that often appear in clusters or chains but can also occur as isolated formations at a considerable distance from the impact. These can resemble primary craters, and may even dominate small crater populations, so their unidentified presence can distort age estimates. The radiometric ages of impact-melted rocks collected during the Apollo missions cluster between 3.8 and 4.1 billion years; this has been used to propose a Late Heavy Bombardment period of increased impacts. High-resolution images from the Lunar Reconnaissance Orbiter in the 2010s show a contemporary crater-production rate significantly higher than was previously estimated. A secondary cratering process caused by distal ejecta is thought to churn the top two centimeters of regolith on a timescale of 81,000 years. This rate is 100 times faster than the rate computed from models based solely on direct micrometeorite impacts. Lunar swirls Lunar swirls are enigmatic features found across the Moon's surface. They are characterized by a high albedo, appear optically immature (i.e., they have the optical characteristics of a relatively young regolith), and often have a sinuous shape. Their shape is often accentuated by low albedo regions that wind between the bright swirls. They are located in places with enhanced surface magnetic fields and many are located at the antipodal point of major impacts. Well-known swirls include the Reiner Gamma feature and Mare Ingenii. They are hypothesized to be areas that have been partially shielded from the solar wind, resulting in slower space weathering. Presence of water Liquid water cannot persist on the lunar surface. When exposed to solar radiation, water quickly decomposes through a process known as photodissociation and is lost to space. However, since the 1960s, scientists have hypothesized that water ice may be deposited by impacting comets or possibly produced by the reaction of oxygen-rich lunar rocks with hydrogen from the solar wind, leaving traces of water that could possibly persist in cold, permanently shadowed craters at either pole on the Moon. Computer simulations suggest that up to of the surface may be in permanent shadow. The presence of usable quantities of water on the Moon is an important factor in rendering lunar habitation a cost-effective plan; the alternative of transporting water from Earth would be prohibitively expensive. In the years since, signatures of water have been found on the lunar surface. In 1994, the bistatic radar experiment on the Clementine spacecraft indicated the existence of small, frozen pockets of water close to the surface.
However, later radar observations by Arecibo suggested that these findings might instead be rocks ejected from young impact craters. In 1998, the neutron spectrometer on the Lunar Prospector spacecraft showed that high concentrations of hydrogen are present in the first meter of depth in the regolith near the polar regions. Volcanic lava beads, brought back to Earth aboard Apollo 15, showed small amounts of water in their interior. The 2008 Chandrayaan-1 spacecraft has since confirmed the existence of surface water ice, using the on-board Moon Mineralogy Mapper. The spectrometer observed absorption lines characteristic of hydroxyl in reflected sunlight, providing evidence of large quantities of water ice on the lunar surface. The spacecraft showed that concentrations may possibly be as high as 1,000 ppm. Using the mapper's reflectance spectra, indirect lighting of areas in shadow confirmed water ice within 20° latitude of both poles in 2018. In 2009, LCROSS sent an impactor into a permanently shadowed polar crater, and detected at least of water in a plume of ejected material. Another examination of the LCROSS data showed the amount of detected water to be closer to . In May 2011, 615–1410 ppm of water was reported in melt inclusions in lunar sample 74220, the famous high-titanium "orange glass soil" of volcanic origin collected during the Apollo 17 mission in 1972. The inclusions were formed during explosive eruptions on the Moon approximately 3.7 billion years ago. This concentration is comparable with that of magma in Earth's upper mantle. Although of considerable selenological interest, this insight does not mean that water is easily available since the sample originated many kilometers below the surface, and the inclusions are so difficult to access that it took 39 years to find them with a state-of-the-art ion microprobe instrument. Analysis of the findings of the Moon Mineralogy Mapper (M3) revealed in August 2018 for the first time "definitive evidence" for water-ice on the lunar surface. The data revealed the distinct reflective signatures of water-ice, as opposed to dust and other reflective substances. The ice deposits were found at the north and south poles, although the ice is more abundant in the south, where water is trapped in permanently shadowed craters and crevices, allowing it to persist as surface ice because it is shielded from the Sun. In October 2020, astronomers reported detecting molecular water on the sunlit surface of the Moon with several independent spacecraft, including the Stratospheric Observatory for Infrared Astronomy (SOFIA). Earth–Moon system Orbit The Earth and the Moon form the Earth–Moon satellite system with a shared center of mass, or barycenter. This barycenter is (about a quarter of Earth's radius) beneath the Earth's surface. The Moon's orbit is slightly elliptical, with an orbital eccentricity of 0.055. The semi-major axis of the geocentric lunar orbit, called the lunar distance, is approximately 400,000 km (250,000 miles or 1.28 light-seconds), roughly 9.5 times Earth's circumference. The Moon makes a complete orbit around Earth with respect to the fixed stars, its sidereal period, about once every 27.3 days. However, because the Earth–Moon system is simultaneously moving in its orbit around the Sun, it takes slightly longer, 29.5 days, to return to the same lunar phase, completing a full cycle, as seen from Earth. This synodic period or synodic month is commonly known as the lunar month and is equal to the length of the solar day on the Moon.
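As a rough illustration of how the sidereal and synodic months relate, the short Python sketch below (using standard approximate period values, not figures taken from this article) recovers the 29.5-day lunar month from the 27.3-day sidereal orbit once Earth's simultaneous motion around the Sun is accounted for:

    # The Moon must travel a bit more than one full orbit to show the same phase
    # again, because the Earth-Moon system has meanwhile moved along its orbit
    # around the Sun: 1/T_synodic = 1/T_sidereal - 1/T_year.
    T_sidereal = 27.321661   # days, orbit relative to the fixed stars (approx.)
    T_year = 365.25636       # days, Earth's sidereal year (approx.)
    T_synodic = 1 / (1 / T_sidereal - 1 / T_year)
    print(round(T_synodic, 2))   # about 29.53 days, the familiar lunar month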
Due to tidal locking, the Moon has a 1:1 spin–orbit resonance. This rotation–orbit ratio makes the Moon's orbital period around Earth equal to its rotation period. This is why only one side of the Moon, its so-called near side, is visible from Earth. That said, while the Moon's movement is in resonance, it is not without nuances such as libration, which produce slightly changing perspectives and, over time and across locations on Earth, make about 59% of the Moon's surface visible. Unlike most satellites of other planets, the Moon's orbital plane is closer to the ecliptic plane than to the planet's equatorial plane. The Moon's orbit is subtly perturbed by the Sun and Earth in many small, complex and interacting ways. For example, the plane of the Moon's orbit gradually rotates once every 18.61 years, which affects other aspects of lunar motion. These follow-on effects are mathematically described by Cassini's laws. Tidal effects The gravitational attraction that Earth and the Moon (as well as the Sun) exert on each other manifests in a slightly greater attraction on the sides closest to each other, resulting in tidal forces. Ocean tides are the most widely experienced result of this, but tidal forces also considerably affect other mechanics of Earth, as well as the Moon and their system. The lunar solid crust experiences tides of around amplitude over 27 days, with three components: a fixed one due to Earth, because they are in synchronous rotation, a variable tide due to orbital eccentricity and inclination, and a small varying component from the Sun. The Earth-induced variable component arises from changing distance and libration, a result of the Moon's orbital eccentricity and inclination (if the Moon's orbit were perfectly circular and uninclined, there would only be solar tides). Recent research suggests that the Moon's influence on the Earth may contribute to maintaining Earth's magnetic field. The cumulative effects of stress built up by these tidal forces produce moonquakes. Moonquakes are much less common and weaker than earthquakes, although moonquakes can last for up to an hour – significantly longer than terrestrial quakes – because of scattering of the seismic vibrations in the dry fragmented upper crust. The existence of moonquakes was an unexpected discovery from seismometers placed on the Moon by Apollo astronauts from 1969 through 1972. The most commonly known effect of tidal forces is elevated sea levels called ocean tides. While the Moon exerts most of the tidal force, the Sun also exerts tidal forces, contributing as much as 40% of the Moon's tidal force; their interplay produces the spring and neap tides. The tides are two bulges in the Earth's oceans, one on the side facing the Moon and the other on the side opposite. As the Earth rotates on its axis, one of the ocean bulges (high tide) is held in place "under" the Moon, while another such tide is opposite. The tide under the Moon is explained by the Moon's gravity being stronger on the water close to it. The tide on the opposite side can be explained either by the centrifugal force as the Earth orbits the barycenter or by the water's inertia: the Moon's gravity is stronger on the nearer solid Earth, which is pulled away from the farther water. Thus, there are two high tides and two low tides in about 24 hours.
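The quoted figure for the Sun's share of the tide-raising force can be checked with a back-of-the-envelope estimate. The sketch below is only an approximation using standard round-number masses and mean distances (not values taken from this article); it relies on the fact that the tide-raising force of a body scales with its mass divided by the cube of its distance:

    # Tidal (differential) force scales as M / d**3, so the Sun-to-Moon ratio is
    # (M_sun / M_moon) * (d_moon / d_sun)**3.
    M_sun, M_moon = 1.989e30, 7.342e22   # kg, approximate masses
    d_sun, d_moon = 1.496e11, 3.844e8    # m, approximate mean distances
    ratio = (M_sun / M_moon) * (d_moon / d_sun) ** 3
    print(round(ratio, 2))   # roughly 0.46 with these mean-distance inputs

The result comes out a little under half; the instantaneous value is lower or higher than this, and lower than with mean distances at some points of the orbits, which is consistent with the range of figures commonly quoted for the solar contribution.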
Since the Moon orbits the Earth in the same direction as the Earth's rotation, the high tides occur about every 12 hours and 25 minutes; the extra 25 minutes arise because the Moon has meanwhile moved along its orbit, so the Earth must rotate slightly further to bring the same point back "under" the Moon. If the Earth were a water world (one with no continents) it would produce a tide of only one meter, and that tide would be very predictable, but the ocean tides are greatly modified by other effects: the frictional coupling of water to Earth's rotation through the ocean floors, the inertia of water's movement, ocean basins that grow shallower near land, and the sloshing of water between different ocean basins. As a result, the timing of the tides at most points on the Earth is a product of observations that are explained, incidentally, by theory. System evolution Delays in the tidal peaks of both ocean and solid-body tides cause torque in opposition to the Earth's rotation. This "drains" angular momentum and rotational kinetic energy from the Earth, slowing its rotation. That angular momentum, lost from the Earth, is transferred to the Moon in a process known as tidal acceleration, which lifts the Moon into a higher orbit while lowering orbital speed around the Earth. Thus the distance between Earth and Moon is increasing, and the Earth's rotation is slowing in reaction. Measurements from laser reflectors left during the Apollo missions (lunar ranging experiments) have found that the Moon's distance increases by per year (roughly the rate at which human fingernails grow). Atomic clocks show that Earth's day lengthens by about 17 microseconds every year, slowly increasing the rate at which UTC is adjusted by leap seconds. This tidal drag makes the rotation period of the Earth and the orbital period of the Moon very slowly converge toward a match. This matching first results in tidally locking the lighter body of the orbital system, as is already the case with the Moon. Theoretically, in 50 billion years, the Earth's rotation will have slowed to the point of matching the Moon's orbital period, causing the Earth to always present the same side to the Moon. However, the Sun will become a red giant, most likely engulfing the Earth–Moon system long before then. If the Earth–Moon system isn't engulfed by the enlarged Sun, the drag from the solar atmosphere can cause the orbit of the Moon to decay. Once the orbit of the Moon decays to a distance of , it will cross Earth's Roche limit, meaning that tidal interaction with Earth would break apart the Moon, turning it into a ring system. Most of the orbiting rings will begin to decay, and the debris will impact Earth. Hence, even if the Sun does not swallow up Earth, the planet may be left moonless. Position and appearance The Moon's highest altitude at culmination varies with its phase, or more precisely its orbital position, and with the time of year, or more precisely the orientation of the Earth's axis. The full moon is highest in the sky during winter and lowest during summer (for each hemisphere respectively), while around the dark moon the pattern is reversed. At the North and South Poles the Moon is above the horizon around the clock for two weeks every tropical month (about 27.3 days), comparable to the polar day of the tropical year. Zooplankton in the Arctic use moonlight when the Sun is below the horizon for months on end. The apparent orientation of the Moon depends on its position in the sky and the hemisphere of the Earth from which it is being viewed. In the northern hemisphere it appears upside down compared to the view from the southern hemisphere.
Sometimes the "horns" of a crescent moon appear to be pointing more upwards than sideways. This phenomenon is called a wet moon and occurs more frequently in the tropics. The distance between the Moon and Earth varies from around (perigee) to (apogee), making the Moon's distance and apparent size fluctuate up to 14%. On average the Moon's angular diameter is about 0.52°, roughly the same apparent size as the Sun. In addition, a purely psychological effect, known as the Moon illusion, makes the Moon appear larger when close to the horizon. Rotation The tidally locked synchronous rotation of the Moon as it orbits the Earth results in it always keeping nearly the same face turned towards the planet. The side of the Moon that faces Earth is called the near side, and the opposite side the far side. The far side is often inaccurately called the "dark side", but it is in fact illuminated as often as the near side, receiving sunlight once every 29.5 Earth days; from dark moon to new moon it is the near side that is dark. The Moon originally rotated at a faster rate, but early in its history its rotation slowed and became tidally locked in this orientation as a result of frictional effects associated with tidal deformations caused by Earth. With time, the energy of rotation of the Moon on its axis was dissipated as heat, until there was no rotation of the Moon relative to Earth. In 2016, planetary scientists using data collected on the 1998–99 NASA Lunar Prospector mission found two hydrogen-rich areas (most likely former water ice) on opposite sides of the Moon. It is speculated that these patches were the poles of the Moon billions of years ago before it was tidally locked to Earth. Illumination and phases Half of the Moon's surface is always illuminated by the Sun (except during a lunar eclipse). Earth also reflects light onto the Moon, observable at times as Earthlight when it is reflected back to Earth from areas of the near side of the Moon that are not illuminated by the Sun. Since the Moon's axial tilt with respect to the ecliptic is 1.5427°, in every draconic year (346.62 days) the Sun moves from being 1.5427° north of the lunar equator to being 1.5427° south of it and then back, just as on Earth the Sun moves from the Tropic of Cancer to the Tropic of Capricorn and back once every tropical year. The poles of the Moon are therefore in the dark for half a draconic year (or with only part of the Sun visible) and then lit for half a draconic year. The amount of sunlight falling on horizontal areas near the poles depends on the altitude angle of the Sun, but these "seasons" have little effect in more equatorial areas. With the different positions of the Moon, different areas of it are illuminated by the Sun. This illumination of different lunar areas, as viewed from Earth, produces the different lunar phases during the synodic month. The phase is equal to the area of the visible lunar sphere that is illuminated by the Sun. This area or degree of illumination is given by (1 − cos e)/2, where e is the elongation (i.e., the angle between the Moon, the observer on Earth, and the Sun). The Moon's brightness and apparent size also change because of its elliptical orbit around Earth. At perigee (closest), since the Moon is up to 14% closer to Earth than at apogee (most distant), it subtends a solid angle which is up to 30% larger. Consequently, given the same phase, the Moon's brightness also varies by up to 30% between apogee and perigee. A full (or new) moon at such a position is called a supermoon.
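As a minimal sketch of the illumination relation just given (assuming the simple (1 − cos e)/2 formula and ignoring the small difference between elongation and phase angle), the following Python lines reproduce the familiar special cases:

    import math

    # Illuminated fraction of the lunar disc as a function of elongation e,
    # the Moon-observer-Sun angle, using (1 - cos e) / 2.
    def illuminated_fraction(elongation_deg):
        return (1.0 - math.cos(math.radians(elongation_deg))) / 2.0

    for e in (0, 90, 180):   # new moon, quarter moon, full moon
        print(e, round(illuminated_fraction(e), 2))   # 0.0, 0.5, 1.0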
Observational phenomena There has been historical controversy over whether observed features on the Moon's surface change over time. Today, many of these claims are thought to be illusory, resulting from observation under different lighting conditions, poor astronomical seeing, or inadequate drawings. However, outgassing does occasionally occur and could be responsible for a minor percentage of the reported lunar transient phenomena. Recently, it has been suggested that a roughly diameter region of the lunar surface was modified by a gas release event about a million years ago. Albedo and color The Moon has an exceptionally low albedo, giving it a reflectance that is slightly brighter than that of worn asphalt. Despite this, it is the brightest object in the sky after the Sun. This is due partly to the brightness enhancement of the opposition surge; the Moon at quarter phase is only one-tenth as bright, rather than half as bright, as at full moon. Additionally, color constancy in the visual system recalibrates the relations between the colors of an object and its surroundings, and because the surrounding sky is comparatively dark, the sunlit Moon is perceived as a bright object. The edges of the full moon seem as bright as the center, without limb darkening, because of the reflective properties of lunar soil, which retroreflects light more towards the Sun than in other directions. The Moon's color depends on the light the Moon reflects, which in turn depends on the Moon's surface and its features, having for example large darker regions. In general, the lunar surface reflects a brown-tinged gray light. At times, the Moon can appear red or blue. It may appear red during a lunar eclipse, because of the red spectrum of the Sun's light being refracted onto the Moon by Earth's atmosphere. Because of this red color, lunar eclipses are also sometimes called blood moons. The Moon can also seem red when it appears at low angles and through a thick atmosphere. The Moon may appear blue depending on the presence of certain particles in the air, such as volcanic particles, in which case it can be called a blue moon. Because the words "red moon" and "blue moon" can also be used to refer to specific full moons of the year, they do not always refer to the presence of red or blue moonlight. Eclipses Eclipses only occur when the Sun, Earth, and Moon are all in a straight line (termed "syzygy"). Solar eclipses occur at new moon, when the Moon is between the Sun and Earth. In contrast, lunar eclipses occur at full moon, when Earth is between the Sun and Moon. The apparent size of the Moon is roughly the same as that of the Sun, with both being viewed at close to one-half a degree wide. The Sun is much larger than the Moon, but it is the vastly greater distance that gives it the same apparent size as the much closer and much smaller Moon from the perspective of Earth. The variations in apparent size, due to the non-circular orbits, are nearly the same as well, though occurring in different cycles. This makes possible both total (with the Moon appearing larger than the Sun) and annular (with the Moon appearing smaller than the Sun) solar eclipses. In a total eclipse, the Moon completely covers the disc of the Sun and the solar corona becomes visible to the naked eye. Because the distance between the Moon and Earth is very slowly increasing over time, the angular diameter of the Moon is decreasing. 
As it evolves toward becoming a red giant, the size of the Sun, and its apparent diameter in the sky, are slowly increasing. The combination of these two changes means that hundreds of millions of years ago, the Moon would always completely cover the Sun during solar eclipses, and no annular eclipses were possible. Likewise, hundreds of millions of years in the future, the Moon will no longer cover the Sun completely, and total solar eclipses will not occur. As the Moon's orbit around Earth is inclined by about 5.145° (5° 9') to the orbit of Earth around the Sun, eclipses do not occur at every full and new moon. For an eclipse to occur, the Moon must be near the intersection of the two orbital planes. The periodicity and recurrence of eclipses of the Sun by the Moon, and of the Moon by Earth, is described by the saros, which has a period of approximately 18 years. Because the Moon continuously blocks the view of a half-degree-wide circular area of the sky, the related phenomenon of occultation occurs when a bright star or planet passes behind the Moon and is occulted: hidden from view. In this way, a solar eclipse is an occultation of the Sun. Because the Moon is comparatively close to Earth, occultations of individual stars are not visible everywhere on the planet, nor at the same time. Because of the precession of the lunar orbit, each year different stars are occulted. History of exploration and human presence Pre-telescopic observation (before 1609) Some believe that cave paintings of bulls and geometric shapes from up to 40,000 BP, or 20,000–30,000-year-old tally sticks, were used to observe the phases of the Moon, keeping time through its waxing and waning. One of the earliest-discovered possible depictions of the Moon is a 3,000 BCE rock carving, Orthostat 47, at Knowth, Ireland. Lunar deities featuring crescents, like Nanna/Sin, are attested from the 3rd millennium BCE. The oldest securely identified astronomical depiction of the Moon, however, is the Nebra sky disc from . The ancient Greek philosopher Anaxagoras () reasoned that the Sun and Moon were both giant spherical rocks, and that the latter reflected the light of the former. Elsewhere in the to , Babylonian astronomers had recorded the 18-year Saros cycle of lunar eclipses, and Indian astronomers had described the Moon's monthly elongation. The Chinese astronomer Shi Shen gave instructions for predicting solar and lunar eclipses. In Aristotle's (384–322 BC) description of the universe, the Moon marked the boundary between the spheres of the mutable elements (earth, water, air and fire), and the imperishable stars of aether, an influential philosophy that would dominate for centuries. Archimedes (287–212 BC) designed a planetarium that could calculate the motions of the Moon and other objects in the Solar System. In the , Seleucus of Seleucia correctly thought that tides were due to the attraction of the Moon, and that their height depends on the Moon's position relative to the Sun. In the same century, Aristarchus computed the size and distance of the Moon from Earth, obtaining a value of about twenty times the radius of Earth for the distance. The Chinese of the Han dynasty believed the Moon to be energy equated to qi and their 'radiating influence' theory recognized that the light of the Moon was merely a reflection of the Sun; Jing Fang (78–37 BC) noted the sphericity of the Moon.
Ptolemy (90–168 AD) greatly improved on the numbers of Aristarchus, calculating a mean distance of 59 times Earth's radius and a diameter of 0.292 Earth diameters, close to the correct values of about 60 and 0.273 respectively. In the 2nd century AD, Lucian wrote the novel A True Story, in which the heroes travel to the Moon and meet its inhabitants. In 510 AD, the Indian astronomer Aryabhata mentioned in his Aryabhatiya that reflected sunlight is the cause of the shining of the Moon. The astronomer and physicist Ibn al-Haytham (965–1039) found that sunlight was not reflected from the Moon like a mirror, but that light was emitted from every part of the Moon's sunlit surface in all directions. Shen Kuo (1031–1095) of the Song dynasty created an allegory equating the waxing and waning of the Moon to a round ball of reflective silver that, when doused with white powder and viewed from the side, would appear to be a crescent. During the Middle Ages, before the invention of the telescope, the Moon was increasingly recognized as a sphere, though many believed that it was "perfectly smooth". Telescopic exploration (1609–1959) In 1609, Galileo Galilei used an early telescope to make drawings of the Moon for his book , and deduced that it was not smooth but had mountains and craters. Thomas Harriot had made but not published such drawings a few months earlier. Telescopic mapping of the Moon followed: later in the 17th century, the efforts of Giovanni Battista Riccioli and Francesco Maria Grimaldi led to the system of naming of lunar features in use today. The more exact 1834–1836 of Wilhelm Beer and Johann Heinrich von Mädler, and their associated 1837 book , the first trigonometrically accurate study of lunar features, included the heights of more than a thousand mountains, and introduced the study of the Moon at accuracies possible in earthly geography. Lunar craters, first noted by Galileo, were thought to be volcanic until the 1870s proposal of Richard Proctor that they were formed by collisions. This view gained support in 1892 from the experimentation of geologist Grove Karl Gilbert, and from comparative studies from 1920 to the 1940s, leading to the development of lunar stratigraphy, which by the 1950s was becoming a new and growing branch of astrogeology. First missions to the Moon (1959–1976) After World War II, the first launch systems were developed, and by the end of the 1950s they had reached capabilities that allowed the Soviet Union and the United States to launch spacecraft into space. The Cold War fueled a closely followed development of launch systems by the two states, resulting in the so-called Space Race and its later phase, the Moon Race, accelerating efforts and interest in the exploration of the Moon. After the first spaceflight, that of Sputnik 1 in 1957 during the International Geophysical Year, the spacecraft of the Soviet Union's Luna program were the first to accomplish a number of goals. Following three unnamed, failed missions in 1958, Luna 1 became the first human-made object to escape Earth's gravity, passing near the Moon in 1959. Later that year Luna 2 became the first human-made object to reach the Moon's surface, by intentionally impacting it. By the end of the year Luna 3 became the first human-made object to reach the normally occluded far side of the Moon, taking the first photographs of it. The first spacecraft to perform a successful lunar soft landing was Luna 9 and the first vehicle to orbit the Moon was Luna 10, both in 1966. Following President John F.
Kennedy's 1961 commitment to a crewed Moon landing before the end of the decade, the United States, under NASA leadership, launched a series of uncrewed probes to develop an understanding of the lunar surface in preparation for human missions: the Jet Propulsion Laboratory's Ranger program, the Lunar Orbiter program and the Surveyor program. The crewed Apollo program was developed in parallel; after a series of uncrewed and crewed tests of the Apollo spacecraft in Earth orbit, and spurred on by a potential Soviet lunar human landing, in 1968 Apollo 8 made the first human mission to lunar orbit (the first Earthlings, two tortoises, had circled the Moon three months earlier on the Soviet Union's Zond 5, followed by turtles on Zond 6). The first human landing on the Moon, and on any extraterrestrial body, came when Neil Armstrong, the commander of the American mission Apollo 11, set foot on the Moon at 02:56 UTC on July 21, 1969. Considered the culmination of the Space Race, the landing was watched by an estimated 500 million people worldwide through the transmission of the Apollo TV camera, the largest television audience for a live broadcast at that time. At the same time the Soviet Union's robotic sample return mission Luna 15 was in orbit around the Moon, which together with Apollo 11 made the first ever case of two extraterrestrial missions being conducted simultaneously. The Apollo missions 11 to 17 (except Apollo 13, which aborted its planned lunar landing) returned about 380 kg of lunar rock and soil in 2,196 separate samples. Scientific instrument packages were installed on the lunar surface during all the Apollo landings. Long-lived instrument stations, including heat flow probes, seismometers, and magnetometers, were installed at the Apollo 12, 14, 15, 16, and 17 landing sites. Direct transmission of data to Earth concluded in late 1977 because of budgetary considerations, but as the stations' lunar laser ranging corner-cube retroreflector arrays are passive instruments, they are still being used. Apollo 17 in 1972 remains the last crewed mission to the Moon. Explorer 49 in 1973 was the last dedicated U.S. probe to the Moon until the 1990s. The Soviet Union continued sending robotic missions to the Moon until 1976, deploying with Luna 17 in 1970 the first remote-controlled rover on an extraterrestrial surface, Lunokhod 1, and collecting and returning 0.3 kg of rock and soil samples with three Luna sample return missions (Luna 16 in 1970, Luna 20 in 1972, and Luna 24 in 1976). Moon Treaty and explorational absence (1976–1990) Following the last Soviet mission to the Moon of 1976, there was little further lunar exploration for fourteen years. Astronautics had shifted its focus towards the exploration of the inner (e.g. Venera program) and outer (e.g. Pioneer 10, 1972) Solar System planets, but also towards Earth orbit, developing and continuously operating, besides communication satellites, Earth observation satellites (e.g. Landsat program, 1972), space telescopes and particularly space stations (e.g. Salyut program, 1971). The negotiation of the Moon Treaty in 1979, and its subsequent ratification in 1984, was the only major activity regarding the Moon until 1990. Renewed exploration (1990–present) In 1990 Hiten-Hagoromo, the first dedicated lunar mission since 1976, reached the Moon. Sent by Japan, it became the first mission to the Moon that was not a Soviet or U.S. mission. In 1994, the U.S. 
sent a spacecraft (Clementine) to the Moon again, on its first dedicated lunar mission since 1973. This mission obtained the first near-global topographic map of the Moon, and the first global multispectral images of the lunar surface. In 1998, this was followed by the Lunar Prospector mission, whose instruments indicated the presence of excess hydrogen at the lunar poles, which is likely to have been caused by the presence of water ice in the upper few meters of the regolith within permanently shadowed craters. The following years saw a series of first missions to the Moon by a new group of states actively exploring the Moon. Between 2004 and 2006 the first spacecraft of the European Space Agency (ESA), SMART-1, reached the Moon, recording the first detailed survey of chemical elements on the lunar surface. The Chinese Lunar Exploration Program reached the Moon for the first time with the orbiter Chang'e 1 (2007–2009), obtaining a full image map of the Moon. India reached, orbited and impacted the Moon for the first time in 2008 with its Chandrayaan-1 orbiter and Moon Impact Probe, becoming the fifth and sixth state to do so, creating a high-resolution chemical, mineralogical and photo-geological map of the lunar surface, and confirming the presence of water molecules in lunar soil. The U.S. launched the Lunar Reconnaissance Orbiter (LRO) and the LCROSS impactor on June 18, 2009. LCROSS completed its mission by making a planned and widely observed impact in the crater Cabeus on October 9, 2009, whereas LRO is currently in operation, obtaining precise lunar altimetry and high-resolution imagery. China continued its lunar program in 2010 with Chang'e 2, mapping the surface at a higher resolution over an eight-month period, and in 2013 with Chang'e 3, a lunar lander along with a lunar rover named Yutu (Jade Rabbit). This was the first lunar rover mission since Lunokhod 2 in 1973 and the first lunar soft landing since Luna 24 in 1976, making China the third country to achieve this. In 2014 the first privately funded probe, the Manfred Memorial Moon Mission, reached the Moon. Another Chinese rover mission, Chang'e 4, achieved the first landing on the Moon's far side in early 2019. Also in 2019, India successfully sent its second probe, Chandrayaan-2, to the Moon. In 2020, China carried out its first robotic sample return mission (Chang'e 5), bringing back 1,731 grams of lunar material to Earth. The U.S. developed plans for returning to the Moon beginning in 2004, and with the signing of the U.S.-led Artemis Accords in 2020, the Artemis program aims to return astronauts to the Moon in the 2020s. The Accords have been joined by a growing number of countries. The introduction of the Artemis Accords has fueled a renewed discussion about the international framework and cooperation of lunar activity, building on the Moon Treaty and the ESA-led Moon Village concept. In 2023 and 2024, India and Japan became the fourth and fifth countries to soft-land a spacecraft on the Moon, following the Soviet Union and the United States in the 1960s, and China in the 2010s. Notably, Japan's spacecraft, the Smart Lander for Investigating Moon, survived three lunar nights. In 2024, the IM-1 lander became the first commercially built lander to land on the Moon. China launched Chang'e 6 on May 3, 2024, which conducted another lunar sample return, this time from the far side of the Moon. It also carried a Chinese rover to conduct infrared spectroscopy of the lunar surface. Pakistan sent a lunar orbiter called ICUBE-Q along with Chang'e 6. 
Nova-C 2, iSpace Lander and Blue Ghost are all planned to launch to the Moon in 2024. Future Besides the progressing Artemis program and the supporting Commercial Lunar Payload Services, which lead an international and commercial crewed opening up of the Moon and aim to send the first woman, person of color and non-US citizen to the Moon in the 2020s, China is continuing its ambitious Chang'e program and has announced joint missions with Russia's struggling Luna-Glob program. Both the Chinese and US lunar programs aim to establish a lunar base with their international partners in the 2030s, though the US and its partners will first establish the orbital Lunar Gateway station in the 2020s, from which Artemis missions will send down the Human Landing System to set up temporary surface camps. While the Apollo missions were explorational in nature, the Artemis program plans to establish a more permanent presence. To this end, NASA is partnering with industry leaders to establish key elements such as modern communication infrastructure. A 4G connectivity demonstration is to be launched aboard an Intuitive Machines Nova-C lander in 2024. Another focus is on in situ resource utilization, which is a key part of the DARPA lunar programs. DARPA has requested that industry partners develop a 10-year lunar architecture plan to enable the beginning of a lunar economy. Human presence In 1959 the first extraterrestrial probes reached the Moon (Luna program), just over a year into the space age, which began with the first ever orbital flight. Since then, humans have sent a range of probes and people to the Moon. The first stay of people on the Moon took place in 1969, beginning a series of crewed exploration missions (the Apollo program), the last of which took place in 1972. Uninterrupted presence has since been maintained through the remains of impactors, landers and lunar orbiters. Some landers and orbiters have maintained a small lunar infrastructure, providing continuous observation and communication at the Moon. Increasing human activity in cislunar space as well as on the Moon's surface, particularly missions to the far side of the Moon or the lunar north and south polar regions, creates a need for lunar infrastructure. For that purpose, orbiters around the Moon or at the Earth–Moon Lagrange points have been operated since 2006, with highly eccentric orbits providing continuous communication, as with the Queqiao and Queqiao-2 relay satellites or the planned first extraterrestrial space station, the Lunar Gateway. Human impact While the Moon has the lowest planetary protection target-categorization, its degradation as a pristine body and scientific place has been discussed. If astronomy is to be performed from the Moon, it will need to be free from physical and radio pollution. While the Moon has no significant atmosphere, traffic and impacts on the Moon cause clouds of dust that can spread far and possibly contaminate the original state of the Moon and its special scientific value. Scholar Alice Gorman asserts that, although the Moon is inhospitable, it is not dead, and that sustainable human activity would require treating the Moon's ecology as a co-participant. The so-called "Tardigrade affair", in which the Beresheet lander crashed in 2019 while carrying tardigrades, has been discussed as an example of insufficient measures and missing international regulation for planetary protection. 
Space debris around the Moon has been considered a future challenge as the number of lunar missions increases, particularly as a danger for such missions. Lunar waste management has accordingly been raised as an issue which future lunar missions, particularly on the surface, need to tackle. Human remains have been transported to the Moon, including by private companies such as Celestis and Elysium Space. Because the Moon has been sacred or significant to many cultures, the practice of space burials has attracted criticism from indigenous leaders. For example, then-Navajo Nation president Albert Hale criticized NASA for sending the cremated ashes of scientist Eugene Shoemaker to the Moon in 1998. Besides the remains of human activity on the Moon, there have been some intended permanent installations like the Moon Museum art piece, Apollo 11 goodwill messages, six lunar plaques, the Fallen Astronaut memorial, and other artifacts. Long-term missions that remain active include orbiters such as the 2009-launched Lunar Reconnaissance Orbiter, surveying the Moon for future missions, as well as landers such as the 2013-launched Chang'e 3, whose Lunar Ultraviolet Telescope is still operational. Five retroreflectors, installed on the Moon since the 1970s, have since been used for accurate measurements of the physical librations through laser ranging to the Moon. There are several missions by different agencies and companies planned to establish a long-term human presence on the Moon, with the Lunar Gateway, part of the Artemis program, as the currently most advanced project. Astronomy from the Moon The Moon has been used as a site for astronomical and Earth observations. The Earth appears in the Moon's sky with an apparent size of 1° 48′ to 2°, three to four times the apparent size of the Moon or Sun in Earth's sky, or about the apparent width of two little fingers at arm's length. Observations from the Moon started as early as 1966 with the first images of Earth from the Moon, taken by Lunar Orbiter 1. Of particular cultural significance is the photograph called Earthrise, taken by Bill Anders of Apollo 8 in 1968. In April 1972 the Apollo 16 mission set up the first dedicated telescope on the Moon, the Far Ultraviolet Camera/Spectrograph, recording various astronomical photos and spectra. The Moon is recognized as an excellent site for telescopes. It is relatively nearby; certain craters near the poles are permanently dark and cold and especially useful for infrared telescopes; and radio telescopes on the far side would be shielded from the radio chatter of Earth. The lunar soil, although it poses a problem for any moving parts of telescopes, can be mixed with carbon nanotubes and epoxies and employed in the construction of mirrors up to 50 meters in diameter. A lunar zenith telescope can be made cheaply with an ionic liquid. Living on the Moon The only instances of humans living on the Moon have taken place in an Apollo Lunar Module for several days at a time (for example, during the Apollo 17 mission). One challenge to astronauts during their stay on the surface is that lunar dust sticks to their suits and is carried into their quarters. Astronauts could taste and smell the dust, which smells like gunpowder and was called the "Apollo aroma". This fine lunar dust can cause health issues. In 2019, at least one plant seed sprouted in an experiment on the Chang'e 4 lander. It was carried from Earth along with other small life in its Lunar Micro Ecosystem. 
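The apparent-size figures quoted above for Earth in the lunar sky can be checked with a small back-of-the-envelope calculation. The following Python sketch is illustrative only; the radii and the rounded perigee, apogee, and mean distances used here are assumed typical values, not data drawn from this article or from any mission.

import math

# Assumed round values for illustration only
EARTH_RADIUS_KM = 6371.0     # mean radius of Earth
MOON_RADIUS_KM = 1737.4      # mean radius of the Moon
PERIGEE_KM = 363300.0        # approximate closest center-to-center distance
APOGEE_KM = 405500.0         # approximate farthest center-to-center distance
MEAN_DISTANCE_KM = 384400.0  # approximate mean center-to-center distance

def angular_diameter_deg(radius_km, distance_km):
    # Full angular diameter of a sphere of the given radius seen from the given distance.
    return math.degrees(2.0 * math.asin(radius_km / distance_km))

earth_min = angular_diameter_deg(EARTH_RADIUS_KM, APOGEE_KM)              # about 1.8 degrees (1° 48')
earth_max = angular_diameter_deg(EARTH_RADIUS_KM, PERIGEE_KM)             # about 2.0 degrees
moon_from_earth = angular_diameter_deg(MOON_RADIUS_KM, MEAN_DISTANCE_KM)  # about 0.52 degrees

print(f"Earth seen from the Moon: {earth_min:.2f} to {earth_max:.2f} degrees")
print(f"Moon seen from Earth: {moon_from_earth:.2f} degrees, "
      f"ratio {earth_min / moon_from_earth:.1f} to {earth_max / moon_from_earth:.1f}")

With these assumed inputs the result is roughly 1.8° to 2.0° for Earth and about 0.52° for the Moon, a ratio of about 3.5 to 3.9, consistent with the "three to four times" figure quoted in the Astronomy from the Moon passage above.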
Legal status Although Luna landers scattered pennants of the Soviet Union on the Moon, and U.S. flags were symbolically planted at their landing sites by the Apollo astronauts, no nation claims ownership of any part of the Moon's surface. Likewise, no private ownership of parts of the Moon, or of the Moon as a whole, is considered credible. The 1967 Outer Space Treaty defines the Moon and all outer space as the "province of all mankind". It restricts the use of the Moon to peaceful purposes, explicitly banning military installations and weapons of mass destruction. A majority of countries are parties to this treaty. The 1979 Moon Agreement was created to elaborate on this and restrict the exploitation of the Moon's resources by any single nation, leaving it to a yet unspecified international regulatory regime. As of January 2020, it has been signed and ratified by 18 nations, none of which have human spaceflight capabilities. Since 2020, countries have joined the U.S. in its Artemis Accords, which challenge the treaty. The U.S. has furthermore emphasized in a presidential executive order ("Encouraging International Support for the Recovery and Use of Space Resources") that "the United States does not view outer space as a 'global commons'", and it calls the Moon Agreement "a failed attempt at constraining free enterprise". With Australia having signed and ratified both the Moon Treaty in 1986 and the Artemis Accords in 2020, there has been discussion of whether the two can be harmonized. In this light, an Implementation Agreement for the Moon Treaty has been advocated as a way to compensate for the shortcomings of the Moon Treaty and to harmonize it with other laws and agreements such as the Artemis Accords, allowing it to be more widely accepted. In the face of such increasing commercial and national interest, particularly in prospecting territories, U.S. lawmakers introduced specific legislation in late 2020 for the conservation of historic landing sites, and interest groups have argued for designating such sites World Heritage Sites and declaring zones of scientific value protected zones, all of which add to the legal availability and territorialization of the Moon. In 2021, the Declaration of the Rights of the Moon was created by a group of "lawyers, space archaeologists and concerned citizens", drawing on precedents in the Rights of Nature movement and the concept of legal personality for non-human entities in space. Coordination and regulation Increasing human activity at the Moon has raised the need for coordination to safeguard international and commercial lunar activity. Issues ranging from cooperation to mere coordination, for example through the development of a shared lunar time, have been raised. In particular the establishment of an international or United Nations regulatory regime for lunar human activity has been called for by the Moon Treaty and suggested through an Implementation Agreement, but remains contentious. Current lunar programs are multilateral, with the US-led Artemis program and the China-led International Lunar Research Station. For broader international cooperation and coordination, the International Lunar Exploration Working Group (ILEWG), the Moon Village Association (MVA) and, more generally, the International Space Exploration Coordination Group (ISECG) have been established. In culture and life Timekeeping Since pre-historic times people have taken note of the Moon's phases and its waxing and waning cycle and have used them to keep track of time. 
Tally sticks, notched bones dating as far back as 20,000–30,000 years ago, are believed by some to mark the phases of the Moon. The counting of the days between the Moon's phases eventually gave rise to generalized time periods of lunar cycles as months, and possibly of its phases as weeks. The words for the month in a range of different languages carry this relation between the period of the month and the Moon etymologically. The English month as well as moon, and its cognates in other Indo-European languages (e.g. the Latin mensis and the Ancient Greek meis or mēn, meaning "month"), stem from the Proto-Indo-European (PIE) root of moon, *méh1nōt, derived from the PIE verbal root *meh1-, "to measure", "indicat[ing] a functional conception of the Moon, i.e. marker of the month" (cf. the English words measure and menstrual). To give another example from a different language family, the Chinese language uses the same word (月) for moon as for month, a character which can furthermore be found in the symbols for the word week (星期). This lunar timekeeping gave rise to the historically dominant, but varied, lunisolar calendars. The 7th-century Islamic calendar is an example of a purely lunar calendar, where months are traditionally determined by the visual sighting of the hilal, or earliest crescent moon, over the horizon. Of particular significance has been the occasion of full moon, highlighted and celebrated in a range of calendars and cultures, an example being the Buddhist Vesak. The full moon around the southern or northern autumnal equinox is often called the harvest moon and is celebrated with festivities such as the Harvest Moon Festival of the Chinese lunar calendar, its second most important celebration after the Chinese lunisolar Lunar New Year. Furthermore, the association of time with the Moon can also be found in religion, for example in the ancient Egyptian temporal and lunar deity Khonsu. Cultural representation Since prehistoric times humans have depicted and later described their perception of the Moon and its importance for them and their cosmologies. It has been characterized in many different ways, from having a spirit or being a deity to being an aspect of a deity, or a factor in astrology. Crescent For the representation of the Moon, especially its lunar phases, the crescent (🌙) has been a recurring symbol in a range of cultures since at least 3,000 BCE, or possibly earlier if the bull horns in the earliest cave paintings, dating to 40,000 BP, are counted. In writing systems such as Chinese the crescent developed into the symbol 月, the word for Moon, and in ancient Egyptian it became a hieroglyph meaning Moon, spelled like the name of the ancient Egyptian lunar deity Iah, with whom the other ancient Egyptian lunar deities Khonsu and Thoth were associated. Iconographically the crescent was used in Mesopotamia as the primary symbol of Nanna/Sîn, the ancient Sumerian lunar deity, who was the father of Inanna/Ishtar, the goddess of the planet Venus (symbolized as the eight-pointed Star of Ishtar), and Utu/Shamash, the god of the Sun (symbolized as a disc, optionally with eight rays), all three often depicted next to each other. Nanna/Sîn is, like some other lunar deities, for example Iah and Khonsu of ancient Egypt, Mene/Selene of ancient Greece and Luna of ancient Rome, depicted as a horned deity, featuring crescent-shaped headgear or crowns. 
The particular arrangement of the crescent with a star, known as the star and crescent (☪️), goes back to the Bronze Age, representing either the Sun and Moon, or the Moon and the planet Venus, in combination. It came to represent the lunar goddess Artemis and, via the patronage of Hecate, who as a triple deity under the epithet trimorphos/trivia included aspects of Artemis/Diana, came to be used as a symbol of Byzantium, with the Virgin Mary (Queen of Heaven) later taking her place, depicted in Marian veneration standing on a crescent and adorned with stars. Since then the heraldic use of the star and crescent has proliferated, with Byzantium's symbolism possibly influencing the development of the Ottoman flag, specifically the combination of the Turkish crescent with a star, and the emblem becoming a popular symbol for Islam (as the hilal of the Islamic calendar) and for a range of nations. Other associations The features of the Moon, the contrasting brighter highlands and darker maria, have been seen by different cultures as forming abstract shapes. Such shapes include, among others, the Man in the Moon (e.g. Coyolxāuhqui) and the Moon Rabbit (e.g. the Chinese Tu'er Ye, or in Indigenous American mythologies the aspect of the Mayan Moon goddess, from which Awilix is possibly derived, or of Metztli/Tēcciztēcatl). Occasionally some lunar deities have also been depicted driving a chariot across the sky, such as the Hindu Chandra/Soma, the Greek Artemis, who is associated with Selene, or Luna, Selene's ancient Roman equivalent. In terms of color and material, the Moon has been associated in Western alchemy with silver, while gold is associated with the Sun. Through a miracle, the so-called splitting of the Moon, association with the Moon in Islam applies also to Muhammad. Representation in modern culture The perception of the Moon in modern times has been informed by telescope-enabled modern astronomy and later by spaceflight-enabled actual human activity at the Moon, particularly the culturally impactful lunar landings. These new insights inspired cultural references, connecting romantic reflections about the Moon with speculative fiction, such as science fiction dealing with the Moon. More recently, the Moon has been seen as a place for economic expansion into space, with missions prospecting for lunar resources. This has been accompanied by renewed public and critical reflection on humanity's cultural and legal relation to the celestial body, especially regarding colonialism, as in the 1970 poem "Whitey on the Moon". In this light, the Moon's nature has been invoked, particularly for lunar conservation and as a commons. In 2021, 20 July, the date of the first crewed Moon landing, became the annual International Moon Day. Lunar effect The lunar effect is a purported but unproven correlation between specific stages of the roughly 29.5-day lunar cycle and behavior and physiological changes in living beings on Earth, including humans. The Moon has long been associated with insanity and irrationality; the words lunacy and lunatic are derived from the Latin name for the Moon, Luna. The philosophers Aristotle and Pliny the Elder argued that the full moon induced insanity in susceptible individuals, believing that the brain, which is mostly water, must be affected by the Moon and its power over the tides; however, the Moon's gravity is too slight to affect any single person. 
Even today, people who believe in a lunar effect claim that admissions to psychiatric hospitals, traffic accidents, homicides or suicides increase during a full moon, but dozens of studies invalidate these claims.
Physical sciences
Science and medicine
null
19338
https://en.wikipedia.org/wiki/Mountain%20range
Mountain range
A mountain range or hill range is a series of mountains or hills arranged in a line and connected by high ground. A mountain system or mountain belt is a group of mountain ranges with similarity in form, structure, and alignment that have arisen from the same cause, usually an orogeny. Mountain ranges are formed by a variety of geological processes, but most of the significant ones on Earth are the result of plate tectonics. Mountain ranges are also found on many planetary mass objects in the Solar System and are likely a feature of most terrestrial planets. Mountain ranges are usually segmented by highlands or mountain passes and valleys. Individual mountains within the same mountain range do not necessarily have the same geologic structure or petrology. They may be a mix of different orogenic expressions and terranes, for example thrust sheets, uplifted blocks, fold mountains, and volcanic landforms, resulting in a variety of rock types. Major ranges Most geologically young mountain ranges on the Earth's land surface are associated with either the Pacific Ring of Fire or the Alpide belt. The Pacific Ring of Fire includes the Andes of South America and extends through the North American Cordillera and the Aleutian Range, on through the Kamchatka Peninsula, Japan, Taiwan, the Philippines and Papua New Guinea, to New Zealand. The Andes is about 7,000 km long and is often considered the world's longest mountain system. The Alpide belt stretches 15,000 km across southern Eurasia, from Java in Maritime Southeast Asia to the Iberian Peninsula in Western Europe, including the ranges of the Himalayas, Karakoram, Hindu Kush, Alborz, Caucasus, and the Alps. The Himalayas contain the highest mountains in the world, including Mount Everest, which is 8,849 m high. Mountain ranges outside these two systems include the Arctic Cordillera, Appalachians, Great Dividing Range, East Siberians, Altais, Scandinavians, Qinling, Western Ghats, Vindhyas, Byrrangas, and the Annamite Range. If the definition of a mountain range is stretched to include underwater mountains, then the mid-ocean ridge system forms the longest continuous mountain system on Earth, with a length of about 65,000 km. Climate The position of mountain ranges influences climate, such as rain or snow. When air masses move up and over mountains, the air cools, producing orographic precipitation (rain or snow). As the air descends on the leeward side, it warms again (following the adiabatic lapse rate) and is drier, having been stripped of much of its moisture. Often, a rain shadow will affect the leeward side of a range. As a consequence, large mountain ranges, such as the Andes, compartmentalize continents into distinct climate regions. Erosion Mountain ranges are constantly subjected to erosional forces which work to tear them down. The basins adjacent to an eroding mountain range are then filled with sediments that are buried and turned into sedimentary rock. Erosion is at work while the mountains are being uplifted, and continues until the mountains are reduced to low hills and plains. The early Cenozoic uplift of the Rocky Mountains of Colorado provides an example. As the uplift was occurring, a great thickness of mostly Mesozoic sedimentary strata was removed by erosion over the core of the mountain range and spread as sand and clays across the Great Plains to the east. This mass of rock was removed as the range was actively undergoing uplift. The removal of such a mass from the core of the range most likely caused further uplift as the region adjusted isostatically in response to the removed weight. 
Rivers are traditionally believed to be the principal cause of mountain range erosion, by cutting into bedrock and transporting sediment. Computer simulation has shown that as mountain belts change from tectonically active to inactive, the rate of erosion drops because there are fewer abrasive particles in the water and fewer landslides. Extraterrestrial "Montes" Mountains on other planets and natural satellites of the Solar System, including the Moon, are often isolated and formed mainly by processes such as impacts, though there are examples of mountain ranges (or "Montes") somewhat similar to those on Earth. Saturn's moon Titan and Pluto, in particular, exhibit large mountain ranges in chains composed mainly of ices rather than rock. Examples include the Mithrim Montes and Doom Mons on Titan, and Tenzing Montes and Hillary Montes on Pluto. Some terrestrial planets other than Earth also exhibit rocky mountain ranges, such as Maxwell Montes on Venus, which is taller than any mountain on Earth, and Tartarus Montes on Mars. Jupiter's moon Io has mountain ranges formed from tectonic processes, including the Boösaule, Dorian, Hi'iaka and Euboea Montes.
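The rain-shadow mechanism described in the Climate section above can be illustrated with a rough calculation of the foehn-type warming on the leeward side of a range. The Python sketch below uses assumed textbook-style values only: a dry adiabatic lapse rate of about 9.8 °C per km, a typical moist rate of about 6 °C per km, a 3 km crest, and a condensation level at 1 km; none of these numbers describe any particular mountain range.

# Assumed illustrative values, not measurements of a real range
DRY_LAPSE_C_PER_KM = 9.8    # dry adiabatic lapse rate
MOIST_LAPSE_C_PER_KM = 6.0  # typical moist (saturated) lapse rate

def leeward_temperature_c(windward_temp_c, condensation_level_km, crest_km):
    # Rising air cools at the dry rate up to the condensation level,
    # then at the slower moist rate while clouds form and precipitation falls.
    temp_at_crest = (windward_temp_c
                     - DRY_LAPSE_C_PER_KM * condensation_level_km
                     - MOIST_LAPSE_C_PER_KM * (crest_km - condensation_level_km))
    # Descending air on the leeward side warms at the dry rate the whole way down,
    # because much of its moisture was lost as precipitation on the windward slope.
    return temp_at_crest + DRY_LAPSE_C_PER_KM * crest_km

print(leeward_temperature_c(25.0, 1.0, 3.0))  # about 32.6 C

Under these assumptions an air parcel starting at 25 °C reaches the 3 km crest at about 3.2 °C and arrives at the foot of the leeward slope at about 32.6 °C, warmer and much drier than where it started, which is the rain-shadow effect described above.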
Physical sciences
Landforms
null
19344
https://en.wikipedia.org/wiki/March
March
March is the third month of the year in both the Julian and Gregorian calendars. Its length is 31 days. In the Northern Hemisphere, the meteorological beginning of spring occurs on the first day of March. The March equinox on the 20th or 21st marks the astronomical beginning of spring in the Northern Hemisphere and the beginning of autumn in the Southern Hemisphere, where September is the seasonal equivalent of the Northern Hemisphere's March. History The name of March comes from Martius, the first month of the earliest Roman calendar. It was named after Mars, the Roman god of war, and an ancestor of the Roman people through his sons Romulus and Remus. His month Martius was the beginning of the season for warfare, and the festivals held in his honor during the month were mirrored by others in October, when the season for these activities came to a close. Martius remained the first month of the Roman calendar year perhaps as late as 153 BC, and several religious observances in the first half of the month were originally new year's celebrations. Even in late antiquity, Roman mosaics picturing the months sometimes still placed March first. March 1 began the numbered year in Russia until the end of the 15th century. Great Britain and its colonies continued to use March 25 until 1752, when they finally adopted the Gregorian calendar (the fiscal year in the UK continues to begin on 6 April, initially identical to 25 March in the former Julian calendar). Many other cultures, for example in Iran or Ethiopia, still celebrate the beginning of the New Year in March. March is the first month of spring in the Northern Hemisphere (North America, Europe, Asia and part of Africa) and the first month of fall or autumn in the Southern Hemisphere (South America, part of Africa, and Oceania). Ancient Roman observances celebrated in March include Agonium Martiale, celebrated on March 1, March 14, and March 17, Matronalia, celebrated on March 1, Junonalia, celebrated on March 7, Equirria, celebrated on March 14, Mamuralia, celebrated on either March 14 or March 15, Hilaria on March 15 and then through March 22–28, Argei, celebrated on March 16–17, Liberalia and Bacchanalia, celebrated March 17, Quinquatria, celebrated March 19–23, and Tubilustrium, celebrated March 23. These dates do not correspond to the modern Gregorian calendar. Other names In Finnish, the month is called maaliskuu, which is believed to originate from maallinen kuu. The latter means earthy month and may refer to the first appearance of "earth" from under the winter's snow. In Ukrainian, the month is called березень/berezenʹ, and in Czech březen, both meaning birch tree. Historical names for March include the Saxon Lentmonat, named after the March equinox and gradual lengthening of days, and the eventual namesake of Lent. Saxons also called March Rhed-monat or Hreth-monath (deriving from their goddess Rhedam/Hreth), and Angles called it Hyld-monath, which became the English Lide. In Slovene, the traditional name is sušec, meaning the month when the earth becomes dry enough so that it is possible to cultivate it. The name was first written in 1466 in the Škofja Loka manuscript. Other names were used too, for example brezen and breznik, "the month of birches". The Turkish word Mart derives from the name of the god Mars. Symbols March's birthstones are aquamarine and bloodstone. These stones symbolize courage. Its birth flower is the daffodil. The zodiac signs are Pisces until approximately March 20 and Aries from approximately March 21 onward. 
Observances This list does not necessarily imply either official status or general observance. Month-long In Catholic tradition, March is the Month of Saint Joseph. Endometriosis Awareness Month (International observance) National Nutrition Month (Canada) Season for Nonviolence: January 30 – April 4 (International observance) Women's History Month (Australia, United Kingdom, United States) Women's Role in History Month (Philippines) American Cerebral Palsy Awareness Month Irish-American Heritage Month Multiple Sclerosis Awareness Month Music in our Schools Month National Athletic Training Month National Bleeding Disorders Awareness Month National Celery Month National Frozen Food Month National Kidney Month National Nutrition Month National Professional Social Work Month National Reading Awareness Month Youth Art Month Non-Gregorian (All Baháʼí, Islamic, and Jewish observances begin at the sundown prior to the date listed, and end at sundown of the date in question unless otherwise noted.) List of observances set by the Baháʼí calendar List of observances set by the Chinese calendar List of observances set by the Hebrew calendar List of observances set by the Islamic calendar List of observances set by the Solar Hijri calendar Movable List of movable Eastern Christian observances List of movable Western Christian observances National Corndog Day (United States): March 21 Equal Pay Day (United States): March 31 First Sunday Children's Day (New Zealand) First week, March 1 to 7 Global Money Week School day closest to March 2 Read Across America Day First Monday Casimir Pulaski Day (United States) First Tuesday Grandmother's Day (France) First Thursday World Book Day (UK and Ireland) World Maths Day First Friday Employee Appreciation Day (United States, Canada) Second Sunday Daylight saving time begins (United States and Canada) Week of March 8: March 8–14 Women of Aviation Worldwide Week Monday closest to March 9, unless March 9 falls on a Saturday Baron Bliss Day (Belize) Second Monday Canberra Day (Australia) Commonwealth Day (Commonwealth of Nations) Second Wednesday Decoration Day (Liberia) No Smoking Day (United Kingdom) Second Thursday World Kidney Day Friday of the second full week of March World Sleep Day Third week in March National Poison Prevention Week (United States) Third Monday Birthday of Benito Juarez (Mexico) March 19th, unless the 19th is a Sunday, then March 20 Feast of Joseph of Nazareth (Western Christianity) Father's Day (Spain, Portugal, Italy, Honduras, and Bolivia) Las Fallas, celebrated on the week leading to March 19. (Valencia) "Return of the Swallow", annual observance of the swallows' return to Mission San Juan Capistrano in California. Third Wednesday National Festival of Trees (Netherlands) March equinox: c. March 20 Nowruz, The Iranian new year. 
(Observed Internationally) Chunfen (East Asia) Dísablót (some Asatru groups) Earth Equinox Day Equinox of the Gods/New Year (Thelema) Higan (Japan) International Astrology Day Mabon (Southern Hemisphere) (Neo-paganism) Ostara (Northern Hemisphere) (Neo-paganism) Shunbun no Hi (Japan) Sigrblót (The Troth) Summer Finding (Asatru Free Assembly) Sun-Earth Day (United States) Vernal Equinox Day/Kōreisai (Japan) World Storytelling Day Fourth Monday Labour Day (Christmas Island, Australia) Fourth Tuesday American Diabetes Alert Day (United States) Last Saturday Earth Hour (International observance) Last Sunday European Summer Time begins Last Monday Seward's Day (Alaska, United States) Fixed March 1 Baba Marta (Bulgaria), Beer Day (Iceland) Commemoration of Mustafa Barzani's Death (Iraqi Kurdistan) Heroes' Day (Paraguay) Independence Day (Bosnia and Herzegovina) Mărțișor (Romania and Moldavia) National Pig Day (United States) Remembrance Day (Marshall Islands) Saint David's Day (Wales) Samiljeol (South Korea) Self-injury Awareness Day (International observance) World Civil Defence Day March 2 National Banana Creme Pie Day (United States) National Reading Day (United States) Omizu-okuri ("Water Carrying") Festival (Obama, Japan) Peasant's Day (Burma) Texas Independence Day (Texas, United States) Victory at Adwa Day (Ethiopia) March 3 Hinamatsuri (Japan) Liberation Day (Bulgaria) Martyr's Day (Malawi) Mother's Day (Georgia) National Canadian Bacon Day (United States) Sportsmen's Day (Egypt) World Wildlife Day March 4 National Grammar Day (United States) St Casimir's Day (Poland and Lithuania) March 5 Custom Chief's Day (Vanuatu) Day of Physical Culture and Sport (Azerbaijan) Learn from Lei Feng Day (China) National Absinthe Day (United States) National Cheez Doodle Day (United States) St Piran's Day (Cornwall) March 6 European Day of the Righteous () Foundation Day (Norfolk Island) Independence Day (Ghana) March 7 Liberation of Sulaymaniyah (Iraqi Kurdistan) National Crown Roast of Pork Day (United States) Teacher's Day (Albania) March 8 International Women's Day International Women's Collaboration Brew Day Mother's Day (primarily Eastern Europe, Russia, and the former Soviet bloc) National Peanut Cluster Day (United States) National Potato Salad Day (United States) March 9 National Crabmeat Day (United States) National Meatball Day (United States) Teachers' Day (Lebanon) March 10 Harriet Tubman Day (United States of America) Holocaust Remembrance Day (Bulgaria) Hote Matsuri (Shiogama, Japan) National Blueberry Popover Day (United States) National Mario Day (United States) National Women and Girls HIV/AIDS Awareness Day (United States) Tibetan Uprising Day (Tibetan independence movement) March 11 Day of Restoration of Independence of Lithuania Johnny Appleseed Day (United States) Moshoeshoe Day (Lesotho) Oatmeal Nut Waffles Day (United States) March 12 Arbor Day (China) Arbor Day (Taiwan) Aztec New Year Girl Scout Birthday (United States) National Baked Scallops Day (United States) National Day (Mauritius) Tree Day (North Macedonia) World Day Against Cyber Censorship Youth Day (Zambia) March 13 Anniversary of the election of Pope Francis (Vatican City) Kasuga Matsuri (Kasuga Grand Shrine, Nara, Japan) L. 
Ron Hubbard's birthday (Scientology) Liberation of Duhok City (Iraqi Kurdistan) National Coconut Torte Day (United States) March 14 Multiple Sclerosis Awareness Week March 14 to March 20 (United States) Pi Day White Day (Asia) March 15 Hōnen Matsuri (Japan) International Day Against Police Brutality J. J. Roberts' Birthday (Liberia) National Day (Hungary) World Consumer Rights Day World Contact Day World Day of Muslim Culture, Peace, Dialogue and Film World Speech Day Youth Day (Palau) March 16 Day of the Book Smugglers (Lithuania) Remembrance day of the Latvian legionnaires (Latvia) Halabja Day (Iraqi Kurdistan) Saint Urho's Day (Finnish Americans and Finnish Canadians) March 17 Children's Day (Bangladesh) Evacuation Day (Massachusetts) (Suffolk County, Massachusetts) Saint Patrick's Day (Ireland, Irish diaspora) March 18 Anniversary of the Oil Expropriation (Mexico) Flag Day (Aruba) Gallipoli Memorial Day (Turkey) Men's and Soldiers' Day (Mongolia) Teacher's Day (Syria) March 19 Kashubian Unity Day (Poland) Minna Canth's Birthday (Finland) March 20 Feast of the Supreme Ritual (Thelema) Great American Meatout (United States) International Day of Happiness (United Nations) Independence Day (Tunisia) International Francophonie Day (Organisation internationale de la Francophonie), and its related observance: UN French Language Day (United Nations) Liberation of Kirkuk City (Iraqi Kurdistan) National Native HIV/AIDS Awareness Day (United States) World Sparrow Day March 21 Arbor Day (Portugal) Birth of Benito Juárez, a Fiestas Patrias (Mexico) Harmony Day (Australia) Human Rights Day (South Africa) Independence Day (Namibia) International Colour Day (International observance) International Day for the Elimination of Racial Discrimination (International observance) International Day of Forests (International observance) Mother's Day (most of the Arab world) National Tree Planting Day (Lesotho) Truant's Day (Poland, Faroe Islands) World Down Syndrome Day (International observance) World Poetry Day (International observance) World Puppetry Day (International observance) Youth Day (Tunisia) March 22 Emancipation Day (Puerto Rico) World Water Day March 23 Day of the Sea (Bolivia) Ministry of Environment and Natural Resources Day (Azerbaijan) National Chips and Dip Day (United States) Pakistan Day (Pakistan) Promised Messiah Day (Ahmadiyya) World Meteorological Day March 24 Commonwealth Covenant Day (Northern Mariana Islands, United States) Day of Remembrance for Truth and Justice (Argentina) Day of National Revolution (Kyrgyzstan) International Day for the Right to the Truth Concerning Gross Human Rights Violations and for the Dignity of Victims (United Nations) National Tree Planting Day (Uganda) Student Day (Scientology) World Tuberculosis Day March 25 Anniversary of the Arengo and the Feast of the Militants (San Marino) Cultural Workers Day (Russia) Empress Menen's Birthday (Rastafari) EU Talent Day (European Union) Feast of the Annunciation (Christianity), and its related observances: Lady Day (United Kingdom) (see Quarter Days) International Day of the Unborn Child (international) Mother's Day (Slovenia) Waffle Day (Sweden) Freedom Day (Belarus) International Day of Remembrance of the Victims of Slavery and the Transatlantic Slave Trade International Day of Solidarity with Detained and Missing Staff Members (United Nations General Assembly) Maryland Day (Maryland, United States) Revolution Day (Greece) Struggle for Human Rights Day (Slovakia) Tolkien Reading Day (Tolkien fandom) March 26 
Independence Day (Bangladesh) Martyr's Day or Day of Democracy (Mali) Prince Kūhiō Day (Hawaii, United States) Purple Day (Canada and United States) March 27 Armed Forces Day (Myanmar) International whisk(e)y day World Theatre Day (International) March 28 Commemoration of Sen no Rikyū (Schools of Japanese tea ceremony) Serfs Emancipation Day (Tibet) Teachers' Day (Czech Republic and Slovakia) March 29 Boganda Day (Central African Republic) Commemoration of the 1947 Rebellion (Madagascar) Day of the Young Combatant (Chile) Youth Day (Taiwan) March 30 Land Day (Palestine) National Doctors' Day (United States) Spiritual Baptist/Shouter Liberation Day (Trinidad and Tobago) World Idli Day March 31 César Chávez Day (United States) Culture Day (Public holidays in the Federated States of Micronesia) Day of Genocide of Azerbaijanis (Azerbaijan) Freedom Day (Malta) International Transgender Day of Visibility King Nangklao Memorial Day (Thailand) National Backup Day (United States) National Clams on the Half Shell Day (United States) Thomas Mundy Peterson Day (New Jersey, United States) Transfer Day (US Virgin Islands)
Technology
Months
null
19345
https://en.wikipedia.org/wiki/May
May
May is the fifth month of the year in the Julian and Gregorian calendars. Its length is 31 days. May is a month of spring in the Northern Hemisphere, and autumn in the Southern Hemisphere. Therefore, May in the Southern Hemisphere is the seasonal equivalent of November in the Northern Hemisphere and vice versa. Late May typically marks the start of the summer vacation season in the United States (Memorial Day) and Canada (Victoria Day), which ends on Labor Day, the first Monday of September. May (in Latin, Maius) was named for the Greek goddess Maia, who was identified with the Roman era goddess of fertility, Bona Dea, whose festival was held in May. Conversely, the Roman poet Ovid provides a second etymology, in which he says that the month of May is named for the maiores, Latin for "elders," and that the following month (June) is named for the iuniores, or "young people" (Fasti VI.88). The Eta Aquariids meteor shower appears in May. It is visible from about April 21 to about May 20 each year, with peak activity on or around May 6. The Arietids shower from May 22 to July 2 and peaks on June 7. The Virginids also shower at various dates in May. Ancient Roman observances Under the calendar of ancient Rome, the festival of Bona Dea fell on May 1, Argei fell on May 14 or May 15, Agonalia fell on May 21, and Ambarvalia on May 29. Floralia was held April 27 during the Republican era, or April 28 on the Julian calendar, and lasted until May 3. Lemuria fell on 9, 11, and 13 May under the Julian calendar. The College of Aesculapius and Hygia celebrated two festivals of Rosalia, one on May 11 and one on May 22. Rosalia was also celebrated at Pergamon on May 24–26. A military Rosalia festival, Rosaliae signorum, also occurred on May 31. Ludi Fabarici was celebrated May 29 – June 1. Mercury would receive a sacrifice on the Ides of May (May 15). Tubilustrium took place on May 23 as well as in March. These dates do not correspond to the modern Gregorian calendar. Symbols May's birthstone is the emerald, which is emblematic of love and success. Birth flowers are the Lily of the Valley and Crataegus monogyna. Both are native throughout the cool temperate Northern Hemisphere in Asia, Europe, and in the southern Appalachian Mountains in the United States, but have been naturalized throughout the temperate climatic world. The "Mayflower" Epigaea repens is a North American harbinger of May, and the floral emblem of both Nova Scotia and Massachusetts. Its native range extends from Newfoundland south to Florida, west to Kentucky in the southern range, and to Northwest Territories in the north. The zodiac signs are Taurus (until May 20) and Gemini (May 21 onward). Observances Month-long Working class history month Better Hearing and Speech Month In Catholic tradition, May is the Month of the Blessed Virgin Mary. 
See May devotions to the Blessed Virgin Mary Flores de Mayo (Philippines) Celiac Awareness Month Cystic Fibrosis Awareness Month Ehlers-Danlos Syndrome Awareness month Garden for Wildlife month Huntington's Disease Awareness Month (International) International Mediterranean Diet Month Kaamatan harvest festival (Labuan, Sabah) New Zealand Music Month (New Zealand) National Pet Month (United Kingdom) National Smile Month (United Kingdom) Season of Emancipation (April 14 to August 23) (Barbados) Skin Cancer Awareness Month South Asian Heritage Month (International) World Trade Month United States Asian American and Pacific Islander Heritage Month National ALS Awareness Month Bicycle Month National Brain Tumor Awareness Month National Burger Month Community Action Awareness Month (North Dakota) National Electrical Safety Month National Foster Care Month National Golf Month Jewish American Heritage Month Haitian Heritage Month Hepatitis Awareness Month Mental Health Awareness Month National Military Appreciation Month National Moving Month National Osteoporosis Month National Stroke Awareness Month National Water Safety Month Older Americans Month Non-Gregorian (All Baha'i, Islamic, and Jewish observances begin at the sundown prior to the date listed, and end at sundown of the date in question unless otherwise noted.) List of observances set by the Bahá'í calendar List of observances set by the Chinese calendar List of observances set by the Hebrew calendar List of observances set by the Islamic calendar List of observances set by the Solar Hijri calendar Movable, 2019 Phi Ta Khon (Dan Sai, Loei province, Isan, Thailand) Dates are selected by village mediums and can take place anywhere between March and July. National Small Business Week (United States): May 5 – 11 National Hurricane Preparedness Week (United States): May 5 – 11 New Zealand Sign Language Week: May 6 – 12 Green Office Week (Britain, United States): May 13 – 17 Walk Safely to School Day (Australia): May 17 Emergency Medical Services Week (United States): May 19 – 25 Bike to Work Week Victoria (May 27 – June 2) Western Christian Special devotions to the Virgin Mary take place in May. See May devotions to the Blessed Virgin Mary. 
Labour Day: May 1 International Workers' Day Sunday after Divine Mercy Sunday: May 5 Jubilate Sunday Monday and Tuesday in the week following the third Sunday of Easter: May 6–7 Hocktide (England) Fourth Sunday after Easter: May 12 Cantate Sunday Good Shepherd Sunday Fourth Friday after Easter: May 17 Store Bededag (Denmark) Third Sunday of May: May 19 Feast of Our Lady of the Audience Sunday preceding the Rogation days: May 26 Rogation Sunday Monday, Tuesday, and Wednesday preceding Feast of the Ascension: May 27–29 Minor Rogation days 39 days after Easter: May 30 Feast of the Ascension Father's Day (Germany) Festa della Sensa (Venice) Global Day of Prayer Sheep Festival (Cameroon) Eastern Christian Wednesday after Pascha: May 1 Bright Wednesday Thursday after Pascha: May 2 Bright Thursday Friday after Pascha: May 3 Bright Friday Saturday after Pascha: May 4 Bright Saturday 8th day after Pascha: May 5 Thomas Sunday 2nd Tuesday of Pascha, or 2nd Monday of Pascha, depending on region: May 6 or May 7 Radonitsa (Russian Orthodox) 2nd Sunday following Pascha: May 12 Sunday of the Myrrhbearers 4th Sunday of Pascha: May 26 Sunday of the Paralytic Wednesday after the Sunday of the Paralytic: May 29 Mid-Pentecost Movable civic Last Friday in April to the first Sunday in May National Arbour Week (Ontario, Canada) First Thursday Arbour Day (Nova Scotia, Canada) National Day of Prayer (United States) National Day of Reason (United States) First Saturday Kentucky Derby Free Comic Book Day Green Up Day (Vermont, United States) World Naked Gardening Day First Sunday Mother's Day (Angola, Cape Verde, Hungary, Lithuania, Mozambique, Portugal, Spain) World Laughter Day Children's Day (South Korea) First full week National Teacher Appreciation Week (United States) North American Occupational Safety and Health Week Tuesday of First full week National Teacher Appreciation Day (United States) Wednesday of first full week Occupational Safety and Health Professional Day Second week in May National Stuttering Awareness Week (United States) First Tuesday World Asthma Day Friday preceding Second Sunday in May Military Spouse Day (United States) National Public Gardens Day (United States) Saturday closest to May 10 National Train Day (United States) Second Saturday International Migratory Bird Day (Canada, the United States, Mexico, Central and South America, and the Caribbean) National Tree Planting Day (Mongolia) National Train Day Second Weekend National Mills Weekend (United Kingdom) World Migratory Bird Day Second Sunday National Nursing Home Week (United States) Children's Day (Spain) Father's Day (Romania) Mother's Day (Anguilla, Aruba, Australia, Austria, Bahamas, Barbados, Bangladesh, Belgium, Belize, Bermuda, Bonaire, Brazil, Brunei, Canada, Chile, Colombia, Cuba, Croatia, Curaçao, Czech Republic, Denmark, Ecuador, Estonia, Finland, Germany, Greece, Grenada, Honduras, Hong Kong, Iceland, India, Italy, Jamaica, Japan, Latvia, Malta, Malaysia, the Netherlands, New Zealand, Pakistan, Peru, Philippines, Puerto Rico, Singapore, Slovakia, South Africa, Suriname, Switzerland, Taiwan, Trinidad and Tobago, Turkey, United States, Uruguay, Venezuela, Zimbabwe) State Flag and State Emblem Day (Belarus) World Fair Trade Day Week of May 12 National Nursing Week (United States) Third Weekend, including Friday Sanja Matsuri (Tokyo, Japan) Third Friday Arbour Day (Prince Edward Island, Canada) National Defense Transportation Day Endangered Species Day (United States) National Pizza Party Day (United States) Third Saturday 
The Preakness Stakes is run, second jewel in the triple crown of horse racing. Armed Forces Day (United States) Culture Freedom Day Sanja Matsuri World Whisky Day Third Sunday Commemoration Day of Fallen Soldiers Father's Day (Tonga) Feast of Our Lady of the Audience Sanja Matsuri (Tokyo, Japan) Monday on or before May 24 Victoria Day (Scotland) Third Monday Discovery Day (Cayman Islands) Monday on or before May 25 National Patriots' Day (Quebec) Last Monday preceding May 25 Victoria Day (Canada) May 24, or the nearest weekday if May 24 falls on a weekend Bermuda Day (Bermuda) Saturday closest to May 30 Armed Forces Day (Spain) Last Weekend Kyiv Day (Kyiv) Last Sunday Arbor Day (Venezuela) Children's Day (Hungary) Mother's Day (Algeria, Dominican Republic, Haiti, Mauritius, Morocco, Sweden, Tunisia) Turkmen Carpet Day (Turkmenistan) Last Monday Heroes' Day (Turks and Caicos Islands) Memorial Day (United States), a public holiday, is on May 30, but observed on the last Monday in May. Ratu Sir Lala Sukuna Day (Fiji), removed as a national holiday in 2010. Last Wednesday World Multiple Sclerosis Day Last Thursday Take a Girl Child to Work Day (South Africa) Fixed April 29 to May 5 in Japan, which includes four different holidays, is called "Golden Week". Many workers have up to 10 days off. There is also 'May sickness', where new students or workers start to be tired of their new routine. (In Japan the school year and fiscal year start on April 1.) Mayovka, in the context of the late Russian Empire, was a picnic in the countryside or in a park in the early days of May, hence the name. Eventually, "mayovka" (specifically, "proletarian mayovka") came to mean an illegal celebration of May 1 by revolutionary public, typically presented as an innocent picnic. May 1 Armed Forces Day (Mauritania) Beltane (Ireland, Neopaganism) Constitution Day (Argentina) Lei Day (Hawaii, United States) May Day (International observance) May 2 Anniversary of the Dos de Mayo Uprising (Community of Madrid, Spain) Birth Anniversary of Third Druk Gyalpo (Bhutan) Flag Day (Poland) Indonesia National Education Day May 3 Constitution Day (Poland) Constitution Memorial Day (Japan) Roodmas Sun Day (International) World Press Freedom Day May 4 Anti-Bullying Day (United Nations) Bird Day (United States) Cassinga Day (Namibia) Death of Milan Rastislav Štefánik Day (Slovakia) Greenery Day (Japan) International Firefighters' Day May Fourth Movement commemorations: Literary Day (Taiwan) Youth Day (China) Remembrance Day for Martyrs and Disabled (Afghanistan) Remembrance of the Dead (Netherlands) Restoration of Independence day (Latvia) Star Wars Day (International observance) World Give Day Youth Day (Fiji) May 5 Children's Day (Japan, Korea) Cinco de Mayo Constitution Day (Kyrgyzstan) Coronation Day (Thailand) Europe Day in Europe (uncommon usage, largely replaced by May 9). 
Feast of al-Khadr or Saint George (Palestinian people) Indian Arrival Day (Guyana) International Midwives' Day Liberation Day (Denmark) Liberation Day (Netherlands) Lusophone Culture Day (Community of Portuguese Language Countries) Martyrs' Day (Albania) Patriots' Victory Day (Ethiopia) Senior Citizens Day (Palau) Tango no sekku (Japan) May 6 Martyrs' Day (Gabon) Martyrs' Day (Lebanon and Syria) International No Diet Day Teachers' Day (Jamaica) The first day of Hıdırellez (Turkey) St George's Day related observances (Eastern Orthodox Church): Day of Bravery, also known as Gergyovden (Bulgaria) Đurđevdan (Gorani, Roma) Police Day (Georgia) Yuri's Day (Russian Orthodox Church) May 7 Defender of the Fatherland Day (Kazakhstan) Dien Bien Phu Victory Day (Vietnam) Radio Day (Russia, Bulgaria) May 8 Miguel Hidalgo's birthday (Mexico) Parents' Day (South Korea) Time of Remembrance and Reconciliation for Those Who Lost Their Lives during the Second World War, continues to May 9 Truman Day (Missouri, United States) White Lotus Day (Theosophy) World Red Cross and Red Crescent Day Veterans Day (Norway) VE Day in Western Europe. In Eastern Europe it is celebrated on May 9. May 9 Anniversary of Dianetics (Church of Scientology) Europe Day (European Union) Liberation Day (Guernsey), commemorating the end of the German occupation of the Channel Islands during World War II. Liberation Day (Jersey), commemorating the end of the German occupation of the Channel Islands during World War II. Time of Remembrance and Reconciliation for Those Who Lost Their Lives during the Second World War, continued from May 8. Victory Day observances, celebration of the Soviet Union victory over Nazi Germany (Soviet Union, Azerbaijan, Belarus, Bosnia and Herzegovina, Georgia, Israel, Kazakhstan, Kyrgyzstan, Moldova, Russia, Serbia, Tajikistan, Turkmenistan, Uzbekistan) Victory Day over Nazism in World War II (Ukraine) Victory and Peace Day (Armenia) marks both the capture of Shusha (1992) in the First Nagorno-Karabakh War, and the end of World War II. May 10 Children's Day (Maldives) Confederate Memorial Day (North Carolina and South Carolina) Constitution Day (Federated States of Micronesia) Golden Spike Day (1869 – Completion of the First transcontinental railroad – Promontory Summit, Utah) Independence Day (Romania), celebrating the declaration of independence of Romania from the Ottoman Empire in 1877. Liberation Day (Sark), commemorating the end of the German occupation of the Channel Islands during World War II. May 11 National Technology Day (India) Statehood Day (Minnesota) Vietnam Human Rights Day (Vietnam) May 12 Saint Andrea the First Day (Georgia (country)) Day of the Finnish Identity (Finland) International Myalgic Encephalomyelitis/Chronic Fatigue Syndrome Awareness Day International Nurses Day May 13 Abbotsbury Garland Day (Dorset, England) Heroes' Day (Romania) Rotuma Day (Rotuma, Fiji) May 14 Hastings Banda's Birthday (Malawi) First day of Izumo-taisha Shrine Grand Festival. (Izumo-taisha, Japan) National Unification Day (Liberia) May 15 Beginning of Tourette Syndrome awareness month. 
(it ends on June 15); Army Day (Slovenia); Constituent Assembly Day (Lithuania); Independence Day (Paraguay); International Day of Families; Nakba Day (Palestinian communities); Peace Officers Memorial Day (United States); Republic Day (Lithuania); Saint Ubaldo Day; Teachers' Day (Colombia, Mexico, South Korea)
May 16: Martyrs of Sudan (Episcopal Church (USA)); St Brendan Birthday & Feast day; Mass Graves Day (Iraq); National Day, declared by Salva Kiir Mayardit (South Sudan); Teachers' Day (Malaysia); International Day of Light
May 17: National Day Against Homophobia (Canada); International Day Against Homophobia, Transphobia and Biphobia, also known as IDAHOT; Birthday of the Raja (Perlis); Children's Day (Norway); Constitution Day (Nauru); Galician Literature Day (Galicia (Spain)); World Hypertension Day; World Information Society Day; Liberation Day (Democratic Republic of the Congo); Navy Day (Argentina); Norwegian Constitution Day
May 18: Baltic Fleet Day (Russia); Battle of Las Piedras Day (Uruguay); Day of Remembrance of Crimean Tatar genocide (Ukraine); Flag and Universities Day (Haiti); Independence Day (Somaliland) (unrecognized); International Museum Day; Mullivaikkal Remembrance Day (Sri Lankan Tamils); Revival, Unity, and Poetry of Magtymguly Day (Turkmenistan); Teacher's Day (Syria); Victory Day (Sri Lanka); World AIDS Vaccine Day
May 19: Commemoration of Atatürk, Youth and Sports Day (Turkey, Northern Cyprus); Greek Genocide Remembrance Day (Greece); Hồ Chí Minh's Birthday (Vietnam); Malcolm X Day (United States of America); National Asian & Pacific Islander HIV/AIDS Awareness Day; Hepatitis Testing Day (United States)
May 20: Day of Remembrance (Cambodia); Emancipation Day (Florida); European Maritime Day (European Council); Independence Day (Cuba); Independence Day, East Timor; Josephine Baker Day (NAACP); National Awakening Day (Indonesia); National Day (Cameroon); World Metrology Day
May 21: Afro-Colombian Day (Colombia); Circassian Day of Mourning (Circassians); Day of Patriots and Military (Hungary); Navy Day (Chile); Saint Helena Day, celebrates the discovery of Saint Helena in 1502;
World Day for Cultural Diversity for Dialogue and Development (International); One of the three festivals of Vejovis (Roman Empire)
May 22: Abolition Day (Martinique); Harvey Milk Day (California); International Day for Biological Diversity (International); National Maritime Day (United States); National Sovereignty Day (Haiti); Republic Day (Sri Lanka); Translation of the Relics of Saint Nicholas from Myra to Bari (Ukraine); Unity Day (Yemen); World Goth Day
May 23: Constitution Day (Germany); Labour Day (Jamaica); Students' Day (Mexico); World Turtle Day
May 24: Feast of Mary Help of Christians (Roman Catholicism); Aldersgate Day/Wesley Day (Methodism); Battle of Pichincha Day (Ecuador); Commonwealth Day (Belize); Independence Day (Eritrea); Lubiri Memorial Day (Buganda); Saints Cyril and Methodius Day (Eastern Orthodox Church) and its related observance: Bulgarian Education and Culture and Slavonic Literature Day (Bulgaria), Saints Cyril and Methodius, Slavonic Enlighteners' Day (North Macedonia)
May 25: Africa Day (African Union); African Liberation Day (African Union); Day of Youth; Geek Pride Day; Independence Day (Jordan); Liberation Day (Lebanon); May Revolution (or Revolución de Mayo), a national holiday in Argentina; International Missing Children's Day; Last bell (Russia, post-Soviet countries); National Day (Argentina); National Missing Children's Day (United States); National Tap Dance Day (United States); Towel Day
May 26: Crown Prince's Birthday (Denmark); Independence Day (Guyana); Independence Day (Georgia); Mother's Day (Poland); National Day of Healing (Australia); National Paper Airplane Day (United States)
May 27: Armed Forces Day (Nicaragua); Children's Day (Nigeria); Mother's Day (Bolivia); Navy Day (Japan); Slavery Abolition Day (Guadeloupe, Saint Barthélemy, Saint Martin); World MS Day; Start of National Reconciliation Week (Australia)
May 28: Armed Forces Day (Croatia); Downfall of the Derg Day (Ethiopia); Flag Day (Philippines) (Display of the flag in all places until June 12 is encouraged); Independence Day (Armenia); Republic Day (Nepal); TDFR Republic Day; Youm-e-Takbir (Pakistan)
May 29: Army Day (Argentina); International Day of United Nations Peacekeepers (International); Oak Apple Day (England), and its related observance: Castleton Garland Day (Castleton); Statehood Day (Rhode Island and Wisconsin); Veterans Day (Sweden); World Digestive Health Day
May 30: Anguilla Day (Anguilla); Canary Islands Day (Spain); Indian Arrival Day (Trinidad and Tobago); Lod Massacre Remembrance Day (Puerto Rico); Mother's Day (Nicaragua); Parliament Day (Croatia)
May 31: Anniversary of Royal Brunei Malay Regiment (Brunei); Castile–La Mancha Day (Castile-La Mancha); Visitation of Mary (Western Christianity); World No Tobacco Day (International)
Technology
Months
null
19356
https://en.wikipedia.org/wiki/Mental%20disorder
Mental disorder
A mental disorder, also referred to as a mental illness, a mental health condition, or a psychiatric disability, is a behavioral or mental pattern that causes significant distress or impairment of personal functioning. A mental disorder is also characterized by a clinically significant disturbance in an individual's cognition, emotional regulation, or behavior, often in a social context. Such disturbances may occur as single episodes, may be persistent, or may be relapsing–remitting. There are many different types of mental disorders, with signs and symptoms that vary widely between specific disorders. A mental disorder is one aspect of mental health. The causes of mental disorders are often unclear. Theories incorporate findings from a range of fields. Disorders may be associated with particular regions or functions of the brain. Disorders are usually diagnosed or assessed by a mental health professional, such as a clinical psychologist, psychiatrist, psychiatric nurse, or clinical social worker, using various methods such as psychometric tests, but often relying on observation and questioning. Cultural and religious beliefs, as well as social norms, should be taken into account when making a diagnosis. Services for mental disorders are usually based in psychiatric hospitals, outpatient clinics, or in the community. Treatments are provided by mental health professionals. Common treatment options are psychotherapy or psychiatric medication, while lifestyle changes, social interventions, peer support, and self-help are also options. In a minority of cases, there may be involuntary detention or treatment. Prevention programs have been shown to reduce depression. In 2019, common mental disorders around the globe included depression, which affects about 264 million people; dementia, which affects about 50 million; bipolar disorder, which affects about 45 million; and schizophrenia and other psychoses, which affect about 20 million people. Neurodevelopmental disorders include attention deficit hyperactivity disorder (ADHD), autism spectrum disorder (ASD), and intellectual disability, whose onset occurs early in the developmental period. Stigma and discrimination can add to the suffering and disability associated with mental disorders, leading to various social movements attempting to increase understanding and challenge social exclusion. Definition The definition and classification of mental disorders are key issues for researchers as well as service providers and those who may be diagnosed. For a mental state to be classified as a disorder, it generally needs to cause dysfunction. Most international clinical documents use the term mental "disorder", while "illness" is also common. It has been noted that using the term "mental" (i.e., of the mind) is not necessarily meant to imply separateness from the brain or body. According to the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), published in 1994, a mental disorder is a psychological syndrome or pattern that is associated with distress (e.g., via a painful symptom), disability (impairment in one or more important areas of functioning), increased risk of death, or causes a significant loss of autonomy; however, it excludes normal responses such as the grief from loss of a loved one and also excludes deviant behavior for political, religious, or societal reasons not arising from a dysfunction in the individual. 
DSM-IV predicates the definition with caveats, stating that, as in the case with many medical terms, mental disorder "lacks a consistent operational definition that covers all situations", noting that different levels of abstraction can be used for medical definitions, including pathology, symptomology, deviance from a normal range, or etiology, and that the same is true for mental disorders, so that sometimes one type of definition is appropriate and sometimes another, depending on the situation. In 2013, the American Psychiatric Association (APA) redefined mental disorders in the DSM-5 as "a syndrome characterized by clinically significant disturbance in an individual's cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning." The final draft of ICD-11 contains a very similar definition. The terms "mental breakdown" or "nervous breakdown" may be used by the general population to mean a mental disorder. The terms "nervous breakdown" and "mental breakdown" have not been formally defined through a medical diagnostic system such as the DSM-5 or ICD-10 and are nearly absent from scientific literature regarding mental illness. Although "nervous breakdown" is not rigorously defined, surveys of laypersons suggest that the term refers to a specific acute time-limited reactive disorder involving symptoms such as anxiety or depression, usually precipitated by external stressors. Many health experts today refer to a nervous breakdown as a mental health crisis. Nervous illness In addition to the concept of mental disorder, some people have argued for a return to the old-fashioned concept of nervous illness. In How Everyone Became Depressed: The Rise and Fall of the Nervous Breakdown (2013), Edward Shorter, a professor of psychiatry and the history of medicine, says: Classifications There are currently two widely established systems that classify mental disorders: ICD-11 Chapter 06: Mental, behavioural or neurodevelopmental disorders, part of the International Classification of Diseases produced by the WHO (in effect since 1 January 2022). Diagnostic and Statistical Manual of Mental Disorders (DSM-5) produced by the APA since 1952. Both of these list categories of disorder and provide standardized criteria for diagnosis. They have deliberately converged their codes in recent revisions so that the manuals are often broadly comparable, although significant differences remain. Other classification schemes may be used in non-western cultures, for example, the Chinese Classification of Mental Disorders, and other manuals may be used by those of alternative theoretical persuasions, such as the Psychodynamic Diagnostic Manual. In general, mental disorders are classified separately from neurological disorders, learning disabilities or intellectual disability. Unlike the DSM and ICD, some approaches are not based on identifying distinct categories of disorder using dichotomous symptom profiles intended to separate the abnormal from the normal. There is significant scientific debate about the relative merits of categorical versus such non-categorical (or hybrid) schemes, also known as continuum or dimensional models. A spectrum approach may incorporate elements of both. 
In the scientific and academic literature on the definition or classification of mental disorder, one extreme argues that it is entirely a matter of value judgements (including of what is normal) while another proposes that it is or could be entirely objective and scientific (including by reference to statistical norms). Common hybrid views argue that the concept of mental disorder is objective even if only a "fuzzy prototype" that can never be precisely defined, or conversely that the concept always involves a mixture of scientific facts and subjective value judgments. Although the diagnostic categories are referred to as 'disorders', they are presented as medical diseases, but are not validated in the same way as most medical diagnoses. Some neurologists argue that classification will only be reliable and valid when based on neurobiological features rather than clinical interview, while others suggest that the differing ideological and practical perspectives need to be better integrated. The DSM and ICD approach remains under attack both because of the implied causality model and because some researchers believe it better to aim at underlying brain differences which can precede symptoms by many years. Dimensional models The high degree of comorbidity between disorders in categorical models such as the DSM and ICD has led some to propose dimensional models. Studies of comorbidity between disorders have demonstrated two latent (unobserved) factors or dimensions in the structure of mental disorders that are thought to possibly reflect etiological processes. These two dimensions reflect a distinction between internalizing disorders, such as mood or anxiety symptoms, and externalizing disorders such as behavioral or substance use symptoms. A single general factor of psychopathology, similar to the g factor for intelligence, has been empirically supported. The p factor model supports the internalizing-externalizing distinction, but also supports the formation of a third dimension of thought disorders such as schizophrenia. Biological evidence also supports the validity of the internalizing-externalizing structure of mental disorders, with twin and adoption studies supporting heritable factors for externalizing and internalizing disorders. A leading dimensional model is the Hierarchical Taxonomy of Psychopathology. Disorders There are many different categories of mental disorder, and many different facets of human behavior and personality that can become disordered. Anxiety disorders Anxiety or fear that interferes with normal functioning may be classified as an anxiety disorder. Commonly recognized categories include specific phobias, generalized anxiety disorder, social anxiety disorder, panic disorder, agoraphobia, obsessive–compulsive disorder and post-traumatic stress disorder. Mood disorders Other affective (emotion/mood) processes can also become disordered. Mood disorder involving unusually intense and sustained sadness, melancholia, or despair is known as major depression (also known as unipolar or clinical depression). Milder but still prolonged depression can be diagnosed as dysthymia. Bipolar disorder (also known as manic depression) involves abnormally "high" or pressured mood states, known as mania or hypomania, alternating with normal or depressed moods. The extent to which unipolar and bipolar mood phenomena represent distinct categories of disorder, or mix and merge along a dimension or spectrum of mood, is subject to some scientific debate. 
Psychotic disorders Patterns of belief, language use and perception of reality can become dysregulated (e.g., delusions, thought disorder, hallucinations). Psychotic disorders in this domain include schizophrenia and delusional disorder. Schizoaffective disorder is a category used for individuals showing aspects of both schizophrenia and affective disorders. Schizotypy is a category used for individuals showing some of the characteristics associated with schizophrenia, but without meeting cutoff criteria. Personality disorders Personality—the fundamental characteristics of a person that influence thoughts and behaviors across situations and time—may be considered disordered if judged to be abnormally rigid and maladaptive. Although treated separately by some, the commonly used categorical schemes include them as mental disorders, albeit on a separate axis II in the case of the DSM-IV. A number of different personality disorders are listed, including those sometimes classed as eccentric, such as paranoid, schizoid and schizotypal personality disorders; types that have been described as dramatic or emotional, such as antisocial, borderline, histrionic or narcissistic personality disorders; and those sometimes classed as fear-related, such as anxious-avoidant, dependent, or obsessive–compulsive personality disorders. Personality disorders, in general, are defined as emerging in childhood, or at least by adolescence or early adulthood. The ICD also has a category for enduring personality change after a catastrophic experience or psychiatric illness. If an inability to sufficiently adjust to life circumstances begins within three months of a particular event or situation, and ends within six months after the stressor stops or is eliminated, it may instead be classed as an adjustment disorder. There is an emerging consensus that personality disorders, similar to personality traits in general, incorporate a mixture of acute dysfunctional behaviors that may resolve in short periods, and maladaptive temperamental traits that are more enduring. Furthermore, there are also non-categorical schemes that rate all individuals via a profile of different dimensions of personality without a symptom-based cutoff from normal personality variation, for example through schemes based on dimensional models. Neurodevelopmental disorders Neurodevelopmental disorders are a group of mental disorders that affect the central nervous system, that is, the brain and spinal cord. These disorders can appear in early childhood and may persist into adulthood. A few of the common ones are attention deficit hyperactivity disorder (ADHD), autism spectrum disorder (ASD), intellectual disabilities, motor disorders, and communication disorders among others. Contributing causes include genetic factors (genetics, family medical history), environmental factors (excessive stress, exposure to neurotoxins, pollution, viral infections, and bacterial infections), physical factors (traumatic brain injury, illness), and prenatal factors (birth defects, exposure to drugs during pregnancy, low birth weight). Neurodevelopmental disorders can be managed with behavioral therapy, applied behavioral analysis (ABA), educational interventions, specific medications, and other such treatments. Approximately 8 in 10 people with autism experience a mental health problem in their lifetime, compared with about 1 in 4 of the general population. 
Eating disorders An eating disorder is a serious mental health condition that involves an unhealthy relationship with food and body image. They can cause severe physical and psychological problems. Eating disorders involve disproportionate concern in matters of food and weight. Categories of disorder in this area include anorexia nervosa, bulimia nervosa, exercise bulimia or binge eating disorder. Sleep disorders Sleep disorders are associated with disruption to normal sleep patterns. A common sleep disorder is insomnia, which is described as difficulty falling and/or staying asleep. Other sleep disorders include narcolepsy, sleep apnea, REM sleep behavior disorder, chronic sleep deprivation, and restless leg syndrome. Narcolepsy is a condition of extreme tendencies to fall asleep whenever and wherever. People with narcolepsy feel refreshed after their random sleep, but eventually get sleepy again. Narcolepsy diagnosis requires an overnight stay at a sleep center for analysis, during which doctors ask for a detailed sleep history and sleep records. Doctors also use actigraphs and polysomnography. Doctors will do a multiple sleep latency test, which measures how long it takes a person to fall asleep. Sleep apnea, when breathing repeatedly stops and starts during sleep, can be a serious sleep disorder. Three types of sleep apnea include obstructive sleep apnea, central sleep apnea, and complex sleep apnea. Sleep apnea can be diagnosed at home or with polysomnography at a sleep center. An ear, nose, and throat doctor may further help with the sleeping habits. Sexuality related Sexual disorders include dyspareunia and various kinds of paraphilia (sexual arousal to objects, situations, or individuals that are considered abnormal or harmful to the person or others). Other Impulse control disorders: People who are abnormally unable to resist certain urges or impulses that could be harmful to themselves or others, may be classified as having an impulse control disorder, and disorders such as kleptomania (stealing) or pyromania (fire-setting). Various behavioral addictions, such as gambling addiction, may be classed as a disorder. Obsessive–compulsive disorder can sometimes involve an inability to resist certain acts but is classed separately as being primarily an anxiety disorder. Substance use disorders: This disorder refers to the use of drugs (legal or illegal, including alcohol) that persists despite significant problems or harm related to its use. Substance dependence and substance abuse fall under this umbrella category in the DSM. Substance use disorder may be due to a pattern of compulsive and repetitive use of a drug that results in tolerance to its effects and withdrawal symptoms when use is reduced or stopped. Dissociative disorders: People with severe disturbances of their self-identity, memory, and general awareness of themselves and their surroundings may be classified as having these types of disorders, including depersonalization derealization disorder or dissociative identity disorder (which was previously referred to as multiple personality disorder or "split personality"). Cognitive disorders: These affect cognitive abilities, including learning and memory. This category includes delirium and mild and major neurocognitive disorder (previously termed dementia). Somatoform disorders may be diagnosed when there are problems that appear to originate in the body that are thought to be manifestations of a mental disorder. This includes somatization disorder and conversion disorder. 
There are also disorders of how a person perceives their body, such as body dysmorphic disorder. Neurasthenia is an old diagnosis involving somatic complaints as well as fatigue and low spirits/depression, which is officially recognized by the ICD-10 but no longer by the DSM-IV. Factitious disorders are diagnosed where symptoms are thought to be reported for personal gain. Symptoms are often deliberately produced or feigned, and may relate to either symptoms in the individual or in someone close to them, particularly people they care for. There are attempts to introduce a category of relational disorder, where the diagnosis is of a relationship rather than on any one individual in that relationship. The relationship may be between children and their parents, between couples, or others. There already exists, under the category of psychosis, a diagnosis of shared psychotic disorder where two or more individuals share a particular delusion because of their close relationship with each other. There are a number of uncommon psychiatric syndromes, which are often named after the person who first described them, such as Capgras syndrome, De Clerambault syndrome, Othello syndrome, Ganser syndrome, Cotard delusion, and Ekbom syndrome, and additional disorders such as the Couvade syndrome and Geschwind syndrome. Signs and symptoms Course The onset of psychiatric disorders usually occurs from childhood to early adulthood. Impulse-control disorders and a few anxiety disorders tend to appear in childhood. Some other anxiety disorders, substance disorders, and mood disorders emerge later in the mid-teens. Symptoms of schizophrenia typically manifest from late adolescence to early twenties. The likely course and outcome of mental disorders vary and are dependent on numerous factors related to the disorder itself, the individual as a whole, and the social environment. Some disorders may last a brief period of time, while others may be long-term in nature. All disorders can have a varied course. Long-term international studies of schizophrenia have found that over a half of individuals recover in terms of symptoms, and around a fifth to a third in terms of symptoms and functioning, with many requiring no medication. While some have serious difficulties and support needs for many years, "late" recovery is still plausible. The World Health Organization (WHO) concluded that the long-term studies' findings converged with others in "relieving patients, carers and clinicians of the chronicity paradigm which dominated thinking throughout much of the 20th century." A follow-up study by Tohen and coworkers revealed that around half of people initially diagnosed with bipolar disorder achieve symptomatic recovery (no longer meeting criteria for the diagnosis) within six weeks, and nearly all achieve it within two years, with nearly half regaining their prior occupational and residential status in that period. Less than half go on to experience a new episode of mania or major depression within the next two years. Disability Some disorders may be very limited in their functional effects, while others may involve substantial disability and support needs. In this context, the terms psychiatric disability and psychological disability are sometimes used instead of mental disorder. The degree of ability or disability may vary over time and across different life domains. 
Furthermore, psychiatric disability has been linked to institutionalization, discrimination and social exclusion as well as to the inherent effects of disorders. Alternatively, functioning may be affected by the stress of having to hide a condition in work or school, etc., by adverse effects of medications or other substances, or by mismatches between illness-related variations and demands for regularity. It is also the case that, while often being characterized in purely negative terms, some mental traits or states labeled as psychiatric disabilities can also involve above-average creativity, non-conformity, goal-striving, meticulousness, or empathy. In addition, the public perception of the level of disability associated with mental disorders can change. Nevertheless, internationally, people report equal or greater disability from commonly occurring mental conditions than from commonly occurring physical conditions, particularly in their social roles and personal relationships. The proportion with access to professional help for mental disorders is far lower, however, even among those assessed as having a severe psychiatric disability. Disability in this context may or may not involve such things as: Basic activities of daily living. Including looking after the self (health care, grooming, dressing, shopping, cooking etc.) or looking after accommodation (chores, DIY tasks, etc.) Interpersonal relationships. Including communication skills, ability to form relationships and sustain them, ability to leave the home or mix in crowds or particular settings Occupational functioning. Ability to acquire an employment and hold it, cognitive and social skills required for the job, dealing with workplace culture, or studying as a student. In terms of total disability-adjusted life years (DALYs), which is an estimate of how many years of life are lost due to premature death or to being in a state of poor health and disability, psychiatric disabilities rank amongst the most disabling conditions. Unipolar (also known as Major) depressive disorder is the third leading cause of disability worldwide, of any condition mental or physical, accounting for 65.5 million years lost. The first systematic description of global disability arising in youth, in 2011, found that among 10- to 24-year-olds nearly half of all disability (current and as estimated to continue) was due to psychiatric disabilities, including substance use disorders and conditions involving self-harm. Second to this were accidental injuries (mainly traffic collisions) accounting for 12 percent of disability, followed by communicable diseases at 10 percent. The psychiatric disabilities associated with most disabilities in high-income countries were unipolar major depression (20%) and alcohol use disorder (11%). In the eastern Mediterranean region, it was unipolar major depression (12%) and schizophrenia (7%), and in Africa it was unipolar major depression (7%) and bipolar disorder (5%). Suicide, which is often attributed to some underlying mental disorder, is a leading cause of death among teenagers and adults under 35. There are an estimated 10 to 20 million non-fatal attempted suicides every year worldwide. Risk factors The predominant view is that genetic, psychological, and environmental factors all contribute to the development or progression of mental disorders. Different risk factors may be present at different ages, with risk occurring as early as during prenatal period. 
Genetics A number of psychiatric disorders are linked to a family history (including depression, narcissistic personality disorder and anxiety). Twin studies have also revealed a very high heritability for many mental disorders (especially autism and schizophrenia). Although researchers have been looking for decades for clear linkages between genetics and mental disorders, that work has not yet yielded specific genetic biomarkers that might lead to better diagnosis and better treatments. Statistical research looking at eleven disorders found widespread assortative mating between people with mental illness. That means that individuals with one of these disorders were two to three times more likely than the general population to have a partner with a mental disorder. Sometimes people seemed to have preferred partners with the same mental illness. Thus, people with schizophrenia or ADHD are seven times more likely to have affected partners with the same disorder. This is even more pronounced for people with autism spectrum disorders, who are 10 times more likely to have a spouse with the same disorder. Environment During the prenatal stage, factors like unwanted pregnancy, lack of adaptation to pregnancy, or substance use during pregnancy increase the risk of developing a mental disorder. Maternal stress and birth complications including prematurity and infections have also been implicated in increasing susceptibility for mental illness. Infants neglected or not provided optimal nutrition have a higher risk of developing cognitive impairment. Social influences have also been found to be important, including abuse, neglect, bullying, social stress, traumatic events, and other negative or overwhelming life experiences. Aspects of the wider community have also been implicated, including employment problems, socioeconomic inequality, lack of social cohesion, problems linked to migration, and features of particular societies and cultures. The specific risks and pathways to particular disorders are less clear, however. Nutrition also plays a role in mental disorders. In schizophrenia and psychosis, risk factors include migration and discrimination, childhood trauma, bereavement or separation in families, recreational use of drugs, and urbanicity. In anxiety, risk factors may include parenting factors such as parental rejection, lack of parental warmth, high hostility, harsh discipline, high maternal negative affect, anxious childrearing, modelling of dysfunctional and drug-abusing behavior, and child abuse (emotional, physical and sexual). Adults with a poor work-life balance are at higher risk of developing anxiety. For bipolar disorder, stress (such as childhood adversity) is not a specific cause, but does place genetically and biologically vulnerable individuals at risk for a more severe course of illness. Drug use Mental disorders are associated with drug use, including cannabis, alcohol and caffeine, use of which appears to promote anxiety. For psychosis and schizophrenia, usage of a number of drugs has been associated with development of the disorder, including cannabis, cocaine, and amphetamines. There has been debate regarding the relationship between usage of cannabis and bipolar disorder. Cannabis has also been associated with depression. Adolescents are at increased risk for tobacco, alcohol and drug use; peer pressure is the main reason why adolescents start using substances. 
At this age, the use of substances could be detrimental to the development of the brain and place adolescents at higher risk of developing a mental disorder. Chronic disease People living with chronic conditions like HIV and diabetes are at higher risk of developing a mental disorder. People living with diabetes experience significant stress from the biological impact of the disease, which places them at risk for developing anxiety and depression. Diabetic patients also have to deal with the emotional stress of trying to manage the disease. Conditions like heart disease, stroke, respiratory conditions, cancer, and arthritis increase the risk of developing a mental disorder when compared to the general population. Personality traits Risk factors for mental illness include a propensity for high neuroticism or "emotional instability". In anxiety, risk factors may include temperament and attitudes (e.g. pessimism). Causal models Mental disorders can arise from multiple sources, and in many cases there is no single accepted or consistent cause currently established. An eclectic or pluralistic mix of models may be used to explain particular disorders. The primary paradigm of contemporary mainstream Western psychiatry is said to be the biopsychosocial model which incorporates biological, psychological and social factors, although this may not always be applied in practice. Biological psychiatry follows a biomedical model where many mental disorders are conceptualized as disorders of brain circuits likely caused by developmental processes shaped by a complex interplay of genetics and experience. A common assumption is that disorders may have resulted from genetic and developmental vulnerabilities, exposed by stress in life (for example in a diathesis–stress model), although there are various views on what causes differences between individuals. Some types of mental disorders may be viewed as primarily neurodevelopmental disorders. Evolutionary psychology may be used as an overall explanatory theory, while attachment theory is another kind of evolutionary-psychological approach sometimes applied in the context of mental disorders. Psychoanalytic theories have continued to evolve alongside cognitive-behavioral and systemic-family approaches. A distinction is sometimes made between a "medical model" and a "social model" of psychiatric disability. Diagnosis Psychiatrists seek to provide a medical diagnosis of individuals by an assessment of symptoms, signs and impairment associated with particular types of mental disorder. Other mental health professionals, such as clinical psychologists, may or may not apply the same diagnostic categories to their clinical formulation of a client's difficulties and circumstances. The majority of mental health problems are, at least initially, assessed and treated by family physicians (in the UK, general practitioners) during consultations, who may refer a patient on for more specialist diagnosis in acute or chronic cases. Routine diagnostic practice in mental health services typically involves an interview known as a mental status examination, where evaluations are made of appearance and behavior, self-reported symptoms, mental health history, and current life circumstances. The views of other professionals, relatives, or other third parties may be taken into account. A physical examination to check for ill health or the effects of medications or other drugs may be conducted. 
Psychological testing is sometimes used via paper-and-pen or computerized questionnaires, which may include algorithms based on ticking off standardized diagnostic criteria, and in rare specialist cases neuroimaging tests may be requested, but such methods are more commonly found in research studies than routine clinical practice. Time and budgetary constraints often limit practicing psychiatrists from conducting more thorough diagnostic evaluations. It has been found that most clinicians evaluate patients using an unstructured, open-ended approach, with limited training in evidence-based assessment methods, and that inaccurate diagnosis may be common in routine practice. In addition, comorbidity is very common in psychiatric diagnosis, where the same person meets the criteria for more than one disorder. On the other hand, a person may have several different difficulties only some of which meet the criteria for being diagnosed. There may be specific problems with accurate diagnosis in developing countries. More structured approaches are being increasingly used to measure levels of mental illness. HoNOS is the most widely used measure in English mental health services, being used by at least 61 trusts. In HoNOS a score of 0–4 is given for each of 12 factors, based on functional living capacity. Research has been supportive of HoNOS, although some questions have been asked about whether it provides adequate coverage of the range and complexity of mental illness problems, and whether the fact that often only 3 of the 12 scales vary over time gives enough subtlety to accurately measure outcomes of treatment. Criticism Since the 1980s, Paula Caplan has been concerned about the subjectivity of psychiatric diagnosis, and people being arbitrarily "slapped with a psychiatric label." Caplan says because psychiatric diagnosis is unregulated, doctors are not required to spend much time interviewing patients or to seek a second opinion. The Diagnostic and Statistical Manual of Mental Disorders can lead a psychiatrist to focus on narrow checklists of symptoms, with little consideration of what is actually causing the person's problems. So, according to Caplan, getting a psychiatric diagnosis and label often stands in the way of recovery. In 2013, psychiatrist Allen Frances wrote a paper entitled "The New Crisis of Confidence in Psychiatric Diagnosis", which said that "psychiatric diagnosis... still relies exclusively on fallible subjective judgments rather than objective biological tests." Frances was also concerned about "unpredictable overdiagnosis." For many years, marginalized psychiatrists (such as Peter Breggin, Thomas Szasz) and outside critics (such as Stuart A. Kirk) have "been accusing psychiatry of engaging in the systematic medicalization of normality." More recently these concerns have come from insiders who have worked for and promoted the American Psychiatric Association (e.g., Robert Spitzer, Allen Frances). A 2002 editorial in the British Medical Journal warned of inappropriate medicalization leading to disease mongering, where the boundaries of the definition of illnesses are expanded to include personal problems as medical problems or risks of diseases are emphasized to broaden the market for medications. Gary Greenberg, a psychoanalyst, in his book "the Book of Woe", argues that mental illness is really about suffering and how the DSM creates diagnostic labels to categorize people's suffering. 
Indeed, the psychiatrist Thomas Szasz, in his book "The Medicalization of Everyday Life", also argues that what is labeled psychiatric illness is not always biological in nature (e.g., social problems, poverty, etc.) and may even be a part of the human condition. Potential routine use of MRI/fMRI in diagnosis In 2018, the American Psychological Association commissioned a review to reach a consensus on whether modern clinical MRI/fMRI can be used in the diagnosis of mental health disorders. The criteria presented by the APA stated that biomarkers used in diagnosis should "have a sensitivity of at least 80% for detecting a particular psychiatric disorder", should "have a specificity of at least 80% for distinguishing this disorder from other psychiatric or medical disorders", "should be reliable, reproducible, and ideally be noninvasive, simple to perform, and inexpensive", and that proposed biomarkers should be verified by two independent studies, each by a different investigator with different population samples, and published in a peer-reviewed journal. The review concluded that although neuroimaging diagnosis may technically be feasible, very large studies are needed to evaluate specific biomarkers, which were not available. Prevention The 2004 WHO report "Prevention of Mental Disorders" stated that "Prevention of these disorders is obviously one of the most effective ways to reduce the [disease] burden." The 2011 European Psychiatric Association (EPA) guidance on prevention of mental disorders states "There is considerable evidence that various psychiatric conditions can be prevented through the implementation of effective evidence-based interventions." A 2011 UK Department of Health report on the economic case for mental health promotion and mental illness prevention found that "many interventions are outstandingly good value for money, low in cost and often become self-financing over time, saving public expenditure". In 2016, the National Institute of Mental Health re-affirmed prevention as a research priority area. Parenting may affect the child's mental health, and evidence suggests that helping parents to be more effective with their children can address mental health needs. Universal prevention (aimed at a population that has no increased risk for developing a mental disorder, such as school programs or mass media campaigns) needs very high numbers of people to show an effect (sometimes known as the "power" problem). Approaches to overcome this are (1) focus on high-incidence groups (e.g. by targeting groups with high risk factors), (2) use multiple interventions to achieve greater, and thus more statistically valid, effects, (3) use cumulative meta-analyses of many trials, and (4) run very large trials. 
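To see why this "power" problem arises, consider a rough sample-size calculation for a universal prevention trial. The numbers below are illustrative assumptions rather than figures from the source: a baseline one-year incidence of 4% in the control group, a hoped-for reduction to 3% in the intervention group, a two-sided 5% significance level, and 80% power, plugged into the standard normal-approximation formula for comparing two proportions. A minimal sketch in Python:

from statistics import NormalDist

def sample_size_per_group(p_control, p_intervention, alpha=0.05, power=0.80):
    # Approximate per-group sample size for detecting a difference between
    # two incidence proportions (normal-approximation formula).
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = NormalDist().inv_cdf(power)           # critical value for desired power
    variance = p_control * (1 - p_control) + p_intervention * (1 - p_intervention)
    effect = p_control - p_intervention
    return (z_alpha + z_beta) ** 2 * variance / effect ** 2

# Illustrative (not sourced) figures: incidence falling from 4% to 3%.
print(round(sample_size_per_group(0.04, 0.03)))  # roughly 5,300 participants per arm

Even a one-percentage-point absolute reduction in a fairly common outcome calls for on the order of five thousand participants in each arm, which is why the approaches listed above favor high-risk groups, pooled meta-analyses, or very large trials.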
Compulsory treatment while in the community versus non-compulsory treatment does not appear to make much of a difference except by maybe decreasing victimization. Lifestyle Lifestyle strategies, including dietary changes, exercise and quitting smoking may be of benefit. Therapy There is also a wide range of psychotherapists (including family therapy), counselors, and public health professionals. In addition, there are peer support roles where personal experience of similar issues is the primary source of expertise. A major option for many mental disorders is psychotherapy. There are several main types. Cognitive behavioral therapy (CBT) is widely used and is based on modifying the patterns of thought and behavior associated with a particular disorder. Other psychotherapies include dialectic behavioral therapy (DBT) and interpersonal psychotherapy (IPT). Psychoanalysis, addressing underlying psychic conflicts and defenses, has been a dominant school of psychotherapy and is still in use. Systemic therapy or family therapy is sometimes used, addressing a network of significant others as well as an individual. Some psychotherapies are based on a humanistic approach. There are many specific therapies used for particular disorders, which may be offshoots or hybrids of the above types. Mental health professionals often employ an eclectic or integrative approach. Much may depend on the therapeutic relationship, and there may be problems with trust, confidentiality and engagement. Medication A major option for many mental disorders is psychiatric medication and there are several main groups. Antidepressants are used for the treatment of clinical depression, as well as often for anxiety and a range of other disorders. Anxiolytics (including sedatives) are used for anxiety disorders and related problems such as insomnia. Mood stabilizers are used primarily in bipolar disorder. Antipsychotics are used for psychotic disorders, notably for positive symptoms in schizophrenia, and also increasingly for a range of other disorders. Stimulants are commonly used, notably for ADHD. Despite the different conventional names of the drug groups, there may be considerable overlap in the disorders for which they are actually indicated, and there may also be off-label use of medications. There can be problems with adverse effects of medication and adherence to them, and there is also criticism of pharmaceutical marketing and professional conflicts of interest. However, these medications in combination with non-pharmacological methods, such as cognitive-behavioral therapy (CBT) are seen to be most effective in treating mental disorders. Other Electroconvulsive therapy (ECT) is sometimes used in severe cases when other interventions for severe intractable depression have failed. ECT is usually indicated for treatment resistant depression, severe vegetative symptoms, psychotic depression, intense suicidal ideation, depression during pregnancy, and catatonia. Psychosurgery is considered experimental but is advocated by some neurologists in certain rare cases. Counseling (professional) and co-counseling (between peers) may be used. Psychoeducation programs may provide people with the information to understand and manage their problems. Creative therapies are sometimes used, including music therapy, art therapy or drama therapy. Lifestyle adjustments and supportive measures are often used, including peer support, self-help groups for mental health and supported housing or supported employment (including social firms). 
Some advocate dietary supplements. Reasonable accommodations (adjustments and supports) might be put in place to help an individual cope and succeed in environments despite potential disability related to mental health problems. This could include an emotional support animal or specifically trained psychiatric service dog. cannabis is specifically not recommended as a treatment. Epidemiology Mental disorders are common. Worldwide, more than one in three people in most countries report sufficient criteria for at least one at some point in their life. In the United States, 46% qualify for a mental illness at some point. An ongoing survey indicates that anxiety disorders are the most common in all but one country, followed by mood disorders in all but two countries, while substance disorders and impulse-control disorders were consistently less prevalent. Rates varied by region. A review of anxiety disorder surveys in different countries found average lifetime prevalence estimates of 16.6%, with women having higher rates on average. A review of mood disorder surveys in different countries found lifetime rates of 6.7% for major depressive disorder (higher in some studies, and in women) and 0.8% for Bipolar I disorder. In the United States the frequency of disorder is: anxiety disorder (28.8%), mood disorder (20.8%), impulse-control disorder (24.8%) or substance use disorder (14.6%). A 2004 cross-Europe study found that approximately one in four people reported meeting criteria at some point in their life for at least one of the DSM-IV disorders assessed, which included mood disorders (13.9%), anxiety disorders (13.6%), or alcohol disorder (5.2%). Approximately one in ten met the criteria within a 12-month period. Women and younger people of either gender showed more cases of the disorder. A 2005 review of surveys in 16 European countries found that 27% of adult Europeans are affected by at least one mental disorder in a 12-month period. An international review of studies on the prevalence of schizophrenia found an average (median) figure of 0.4% for lifetime prevalence; it was consistently lower in poorer countries. Studies of the prevalence of personality disorders (PDs) have been fewer and smaller-scale, but one broad Norwegian survey found a five-year prevalence of almost 1 in 7 (13.4%). Rates for specific disorders ranged from 0.8% to 2.8%, differing across countries, and by gender, educational level and other factors. A US survey that incidentally screened for personality disorder found a rate of 14.79%. Approximately 7% of a preschool pediatric sample were given a psychiatric diagnosis in one clinical study, and approximately 10% of 1- and 2-year-olds receiving developmental screening have been assessed as having significant emotional/behavioral problems based on parent and pediatrician reports. While rates of psychological disorders are often the same for men and women, women tend to have a higher rate of depression. Each year 73 million women are affected by major depression, and suicide is ranked 7th as the cause of death for women between the ages of 20–59. Depressive disorders account for close to 41.9% of the psychiatric disabilities among women compared to 29.3% among men. History Ancient civilizations Ancient civilizations described and treated a number of mental disorders. Mental illnesses were well known in ancient Mesopotamia, where diseases and mental disorders were believed to be caused by specific deities. 
Because hands symbolized control over a person, mental illnesses were known as "hands" of certain deities. One psychological illness was known as Qāt Ištar, meaning "Hand of Ishtar". Others were known as "Hand of Shamash", "Hand of the Ghost", and "Hand of the God". Descriptions of these illnesses, however, are so vague that it is usually impossible to determine which illnesses they correspond to in modern terminology. Mesopotamian doctors kept detailed record of their patients' hallucinations and assigned spiritual meanings to them. The royal family of Elam was notorious for its members often being insane. The Greeks coined terms for melancholy, hysteria and phobia and developed the humorism theory. Mental disorders were described, and treatments developed, in Persia, Arabia and in the medieval Islamic world. Europe Middle Ages Conceptions of madness in the Middle Ages in Christian Europe were a mixture of the divine, diabolical, magical and humoral, and transcendental. In the early modern period, some people with mental disorders may have been victims of the witch-hunts. While not every witch and sorcerer accused were mentally ill, all mentally ill were considered to be witches or sorcerers. Many terms for mental disorders that found their way into everyday use first became popular in the 16th and 17th centuries. Eighteenth century By the end of the 17th century and into the Enlightenment, madness was increasingly seen as an organic physical phenomenon with no connection to the soul or moral responsibility. Asylum care was often harsh and treated people like wild animals, but towards the end of the 18th century a moral treatment movement gradually developed. Clear descriptions of some syndromes may be rare before the 19th century. Nineteenth century Industrialization and population growth led to a massive expansion of the number and size of insane asylums in every Western country in the 19th century. Numerous different classification schemes and diagnostic terms were developed by different authorities, and the term psychiatry was coined (1808), though medical superintendents were still known as alienists. Twentieth century The turn of the 20th century saw the development of psychoanalysis, which would later come to the fore, along with Kraepelin's classification scheme. Asylum "inmates" were increasingly referred to as "patients", and asylums were renamed as hospitals. Europe and the United States Early in the 20th century in the United States, a mental hygiene movement developed, aiming to prevent mental disorders. Clinical psychology and social work developed as professions. World War I saw a massive increase of conditions that came to be termed "shell shock". World War II saw the development in the U.S. of a new psychiatric manual for categorizing mental disorders, which along with existing systems for collecting census and hospital statistics led to the first Diagnostic and Statistical Manual of Mental Disorders. The International Classification of Diseases (ICD) also developed a section on mental disorders. The term stress, having emerged from endocrinology work in the 1930s, was increasingly applied to mental disorders. Electroconvulsive therapy, insulin shock therapy, lobotomies and the neuroleptic chlorpromazine came to be used by mid-century. In the 1960s there were many challenges to the concept of mental illness itself. 
These challenges came from psychiatrists like Thomas Szasz who argued that mental illness was a myth used to disguise moral conflicts; from sociologists such as Erving Goffman who said that mental illness was merely another example of how society labels and controls non-conformists; from behavioral psychologists who challenged psychiatry's fundamental reliance on unobservable phenomena; and from gay rights activists who criticised the APA's listing of homosexuality as a mental disorder. A study published in Science by Rosenhan received much publicity and was viewed as an attack on the efficacy of psychiatric diagnosis. Deinstitutionalization gradually occurred in the West, with isolated psychiatric hospitals being closed down in favor of community mental health services. A consumer/survivor movement gained momentum. Other kinds of psychiatric medication gradually came into use, such as "psychic energizers" (later antidepressants) and lithium. Benzodiazepines gained widespread use in the 1970s for anxiety and depression, until dependency problems curtailed their popularity. Advances in neuroscience, genetics, and psychology led to new research agendas. Cognitive behavioral therapy and other psychotherapies developed. The DSM and then ICD adopted new criteria-based classifications, and the number of "official" diagnoses saw a large expansion. Through the 1990s, new SSRI-type antidepressants became some of the most widely prescribed drugs in the world, as later did antipsychotics. Also during the 1990s, a recovery approach developed. Africa and Nigeria Most Africans view mental disturbances as an external spiritual attack on the person. Those who have a mental illness are thought to be under a spell or bewitched. More often than not, people view a mentally ill person as possessed by an evil spirit, and the condition is seen more from a sociological perspective than as a psychological disorder. The WHO estimated that fewer than 10% of mentally ill Nigerians have access to a psychiatrist or health worker, because there is a low ratio of mental-health specialists available in a country of 200 million people. WHO estimates that the number of mentally ill Nigerians ranges from 40 million to 60 million. Disorders such as depression, anxiety, schizophrenia, personality disorder, old age-related disorder, and substance-abuse disorder are common in Nigeria, as in other countries in Africa. Nigeria is still nowhere near being equipped to solve prevailing mental health challenges. With little scientific research carried out and insufficient mental-health hospitals in the country, traditional healers provide specialized psychotherapy care and pharmacotherapy to those who require their services. Society and culture Different societies or cultures, even different individuals in a subculture, can disagree as to what constitutes optimal versus pathological biological and psychological functioning. Research has demonstrated that cultures vary in the relative importance placed on, for example, happiness, autonomy, or social relationships for pleasure. Likewise, the fact that a behavior pattern is valued, accepted, encouraged, or even statistically normative in a culture does not necessarily mean that it is conducive to optimal psychological functioning. People in all cultures find some behaviors bizarre or even incomprehensible. But just what they feel is bizarre or incomprehensible is ambiguous and subjective. These differences in determination can become highly contentious. 
The process by which conditions and difficulties come to be defined and treated as medical conditions and problems, and thus come under the authority of doctors and other health professionals, is known as medicalization or pathologization. Mental illness in the Latin American community There is a perception in Latin American communities, especially among older people, that discussing problems with mental health can create embarrassment and shame for the family. This results in fewer people seeking treatment. Latin Americans from the US are slightly more likely to have a mental health disorder than first-generation Latin American immigrants, although differences between ethnic groups were found to disappear after adjustment for place of birth. From 2015 to 2018, rates of serious mental illness in young adult Latin Americans increased by 60%, from 4% to 6.4%. The prevalence of major depressive episodes in young and adult Latin Americans increased from 8.4% to 11.3%. More than a third of Latin Americans reported more than one bad mental health day in the last three months. The rate of suicide among Latin Americans was about half the rate of non-Latin American white Americans in 2018, and suicide was the second-leading cause of death among Latin Americans ages 15 to 34. However, Latin American suicide rates rose steadily after 2020 in relation to the COVID-19 pandemic, even as the national rate declined. Family relations are an integral part of the Latin American community. Some research has shown that Latin Americans are more likely to rely on family bonds, or familismo, as a source of therapy while struggling with mental health issues. Because Latin Americans have a high rate of religiosity, and because there is less stigma associated with religion than with psychiatric services, religion may play a more important therapeutic role for the mentally ill in Latin American communities. However, research has also suggested that religion may play a role in stigmatizing mental illness in Latin American communities, which can discourage community members from seeking professional help. Religion Religious, spiritual, or transpersonal experiences and beliefs meet many criteria of delusional or psychotic disorders. A belief or experience can sometimes be shown to produce distress or disability, the ordinary standard for judging mental disorders. There is a link between religion and schizophrenia, a complex mental disorder characterized by a difficulty in recognizing reality, regulating emotional responses, and thinking in a clear and logical manner. Those with schizophrenia commonly report some type of religious delusion, and religion itself may be a trigger for schizophrenia. Movements Controversy has often surrounded psychiatry, and the term anti-psychiatry was coined by the psychiatrist David Cooper in 1967. The anti-psychiatry message is that psychiatric treatments are ultimately more damaging than helpful to patients, and psychiatry's history involves what may now be seen as dangerous treatments. Electroconvulsive therapy, which was used widely between the 1930s and 1960s, was one of these. Lobotomy was another practice that was ultimately seen as too invasive and brutal. Diazepam and other sedatives were sometimes over-prescribed, which led to an epidemic of dependence. There was also concern about the large increase in prescribing psychiatric drugs for children. Some charismatic psychiatrists came to personify the movement against psychiatry. The most influential of these was R.D.
Laing who wrote a series of best-selling books, including The Divided Self. Thomas Szasz wrote The Myth of Mental Illness. Some ex-patient groups have become militantly anti-psychiatric, often referring to themselves as survivors. Giorgio Antonucci has questioned the basis of psychiatry through his work on the dismantling of two psychiatric hospitals (in the city of Imola), carried out from 1973 to 1996. The consumer/survivor movement (also known as user/survivor movement) is made up of individuals (and organizations representing them) who are clients of mental health services or who consider themselves survivors of psychiatric interventions. Activists campaign for improved mental health services and for more involvement and empowerment within mental health services, policies and wider society. Patient advocacy organizations have expanded with increasing deinstitutionalization in developed countries, working to challenge the stereotypes, stigma and exclusion associated with psychiatric conditions. There is also a carers rights movement of people who help and support people with mental health conditions, who may be relatives, and who often work in difficult and time-consuming circumstances with little acknowledgement and without pay. An anti-psychiatry movement fundamentally challenges mainstream psychiatric theory and practice, including in some cases asserting that psychiatric concepts and diagnoses of 'mental illness' are neither real nor useful. Alternatively, a movement for global mental health has emerged, defined as 'the area of study, research and practice that places a priority on improving mental health and achieving equity in mental health for all people worldwide'. Cultural bias Diagnostic guidelines of the 2000s, namely the DSM and to some extent the ICD, have been criticized as having a fundamentally Euro-American outlook. Opponents argue that even when diagnostic criteria are used across different cultures, it does not mean that the underlying constructs have validity within those cultures, as even reliable application can prove only consistency, not legitimacy. Advocating a more culturally sensitive approach, critics such as Carl Bell and Marcello Maviglia contend that the cultural and ethnic diversity of individuals is often discounted by researchers and service providers. Cross-cultural psychiatrist Arthur Kleinman contends that the Western bias is ironically illustrated in the introduction of cultural factors to the DSM-IV. Disorders or concepts from non-Western or non-mainstream cultures are described as "culture-bound", whereas standard psychiatric diagnoses are given no cultural qualification whatsoever, revealing to Kleinman an underlying assumption that Western cultural phenomena are universal. Kleinman's negative view towards the culture-bound syndrome is largely shared by other cross-cultural critics. Common responses included both disappointment over the large number of documented non-Western mental disorders still left out and frustration that even those included are often misinterpreted or misrepresented. Many mainstream psychiatrists are dissatisfied with the new culture-bound diagnoses, although for partly different reasons. Robert Spitzer, a lead architect of the DSM-III, has argued that adding cultural formulations was an attempt to appease cultural critics, and has stated that they lack any scientific rationale or support. 
Spitzer also posits that the new culture-bound diagnoses are rarely used, maintaining that the standard diagnoses apply regardless of the culture involved. In general, mainstream psychiatric opinion remains that if a diagnostic category is valid, cross-cultural factors are either irrelevant or are significant only to specific symptom presentations. Clinical conceptions of mental illness also overlap with personal and cultural values in the domain of morality, so much so that it is sometimes argued that separating the two is impossible without fundamentally redefining the essence of being a particular person in a society. In clinical psychiatry, persistent distress and disability indicate an internal disorder requiring treatment; but in another context, that same distress and disability can be seen as an indicator of emotional struggle and the need to address social and structural problems. This dichotomy has led some academics and clinicians to advocate a postmodernist conceptualization of mental distress and well-being. Such approaches, along with cross-cultural and "heretical" psychologies centered on alternative cultural and ethnic and race-based identities and experiences, stand in contrast to the mainstream psychiatric community's alleged avoidance of any explicit involvement with either morality or culture. In many countries there are attempts to challenge perceived prejudice against minority groups, including alleged institutional racism within psychiatric services. There are also ongoing attempts to improve professional cross cultural sensitivity. Laws and policies Three-quarters of countries around the world have mental health legislation. Compulsory admission to mental health facilities (also known as involuntary commitment) is a controversial topic. It can impinge on personal liberty and the right to choose, and carry the risk of abuse for political, social, and other reasons; yet it can potentially prevent harm to self and others, and assist some people in attaining their right to healthcare when they may be unable to decide in their own interests. Because of this it is a concern of medical ethics. All human rights oriented mental health laws require proof of the presence of a mental disorder as defined by internationally accepted standards, but the type and severity of disorder that counts can vary in different jurisdictions. The two most often used grounds for involuntary admission are said to be serious likelihood of immediate or imminent danger to self or others, and the need for treatment. Applications for someone to be involuntarily admitted usually come from a mental health practitioner, a family member, a close relative, or a guardian. Human-rights-oriented laws usually stipulate that independent medical practitioners or other accredited mental health practitioners must examine the patient separately and that there should be regular, time-bound review by an independent review body. The individual should also have personal access to independent advocacy. For involuntary treatment to be administered (by force if necessary), it should be shown that an individual lacks the mental capacity for informed consent (i.e. to understand treatment information and its implications, and therefore be able to make an informed choice to either accept or refuse). 
Legal challenges in some areas have resulted in supreme court decisions that a person does not have to agree with a psychiatrist's characterization of the issues as constituting an "illness", nor agree with a psychiatrist's conviction in medication, but only recognize the issues and the information about treatment options. Proxy consent (also known as surrogate or substituted decision-making) may be transferred to a personal representative, a family member, or a legally appointed guardian. Moreover, patients may be able to make, when they are considered well, an advance directive stipulating how they wish to be treated should they be deemed to lack mental capacity in the future. The right to supported decision-making, where a person is helped to understand and choose treatment options before they can be declared to lack capacity, may also be included in the legislation. There should at the very least be shared decision-making as far as possible. Involuntary treatment laws are increasingly extended to those living in the community, for example outpatient commitment laws (known by different names) are used in New Zealand, Australia, the United Kingdom, and most of the United States. The World Health Organization reports that in many instances national mental health legislation takes away the rights of persons with mental disorders rather than protecting rights, and is often outdated. In 1991, the United Nations adopted the Principles for the Protection of Persons with Mental Illness and the Improvement of Mental Health Care, which established minimum human rights standards of practice in the mental health field. In 2006, the UN formally agreed the Convention on the Rights of Persons with Disabilities to protect and enhance the rights and opportunities of disabled people, including those with psychiatric disabilities. The term insanity, sometimes used colloquially as a synonym for mental illness, is often used technically as a legal term. The insanity defense may be used in a legal trial (known as the mental disorder defence in some countries). Perception and discrimination Stigma The social stigma associated with mental disorders is a widespread problem. The US Surgeon General stated in 1999 that: "Powerful and pervasive, stigma prevents people from acknowledging their own mental health problems, much less disclosing them to others." Additionally, researcher Wulf Rössler in 2016, in his article, "The Stigma of Mental Disorders" stated In the United States, racial and ethnic minorities are more likely to experience mental health disorders often due to low socioeconomic status, and discrimination. In Taiwan, people with mental disorders often face misconceptions from the general public. These misconceptions include the belief that mental health issues stem from excessive worry, having too much free time, a lack of progress or ambition, not taking life seriously, neglecting real-life responsibilities, mental weakness, unwillingness to be resilient, perfectionism, or a lack of courage. Employment discrimination is reported to play a significant part in the high rate of unemployment among those with a diagnosis of mental illness. An Australian study found that having a psychiatric disability is a bigger barrier to employment than a physical disability. The mentally ill are stigmatized in Chinese society and can not legally marry. Efforts are being undertaken worldwide to eliminate the stigma of mental illness, although the methods and outcomes used have sometimes been criticized. 
Media and general public Media coverage of mental illness comprises predominantly negative and pejorative depictions, for example, of incompetence, violence or criminality, with far less coverage of positive issues such as accomplishments or human rights issues. Such negative depictions, including in children's cartoons, are thought to contribute to stigma and negative attitudes in the public and in those with mental health problems themselves, although more sensitive or serious cinematic portrayals have increased in prevalence. In the United States, the Carter Center has created fellowships for journalists in South Africa, the U.S., and Romania, to enable reporters to research and write stories on mental health topics. Former US First Lady Rosalynn Carter began the fellowships not only to train reporters in how to sensitively and accurately discuss mental health and mental illness, but also to increase the number of stories on these topics in the news media. There is also a World Mental Health Day, which in the United States and Canada falls within a Mental Illness Awareness Week. The general public have been found to hold a strong stereotype of dangerousness and desire for social distance from individuals described as mentally ill. A US national survey found that a higher percentage of people rated individuals described as displaying the characteristics of a mental disorder as "likely to do something violent to others" than the percentage of people who rated individuals described as being troubled. In the article "Discrimination Against People with a Mental Health Diagnosis: Qualitative Analysis of Reported Experiences", an individual with a mental disorder revealed: "If people don't know me and don't know about the problems, they'll talk to me quite happily. Once they've seen the problems or someone's told them about me, they tend to be a bit more wary." In addition, in the article "Stigma and its Impact on Help-Seeking for Mental Disorders: What Do We Know?" by George Schomerus and Matthias Angermeyer, it is affirmed that "Family doctors and psychiatrists have more pessimistic views about the outcomes for mental illnesses than the general public (Jorm et al., 1999), and mental health professionals hold more negative stereotypes about mentally ill patients, but, reassuringly, they are less accepting of restrictions towards them." Recent depictions in media have included leading characters successfully living with and managing a mental illness, including bipolar disorder in Homeland (2011) and post-traumatic stress disorder in Iron Man 3 (2013). Violence Despite public or media opinion, national studies have indicated that severe mental illness does not independently predict future violent behavior, on average, and is not a leading cause of violence in society. There is a statistical association with various factors that do relate to violence (in anyone), such as substance use and various personal, social, and economic factors. A 2015 review found that in the United States, about 4% of violence is attributable to people diagnosed with mental illness, and a 2014 study found that 7.5% of crimes committed by mentally ill people were directly related to the symptoms of their mental illness. The majority of people with serious mental illness are never violent.
In fact, findings consistently indicate that it is many times more likely that people diagnosed with a serious mental illness living in the community will be the victims rather than the perpetrators of violence. In a study of individuals diagnosed with "severe mental illness" living in a US inner-city area, a quarter were found to have been victims of at least one violent crime over the course of a year, a proportion eleven times higher than the inner-city average, and higher in every category of crime including violent assaults and theft. People with a diagnosis may find it more difficult to secure prosecutions, however, due in part to prejudice and being seen as less credible. However, there are some specific diagnoses, such as childhood conduct disorder or adult antisocial personality disorder or psychopathy, which are defined by, or are inherently associated with, conduct problems and violence. There are conflicting findings about the extent to which certain specific symptoms, notably some kinds of psychosis (hallucinations or delusions) that can occur in disorders such as schizophrenia, delusional disorder or mood disorder, are linked to an increased risk of serious violence on average. The mediating factors of violent acts, however, are most consistently found to be mainly socio-demographic and socio-economic factors such as being young, male, of lower socioeconomic status and, in particular, substance use (including alcohol use) to which some people may be particularly vulnerable. High-profile cases have led to fears that serious crimes, such as homicide, have increased due to deinstitutionalization, but the evidence does not support this conclusion. Violence that does occur in relation to mental disorder (against the mentally ill or by the mentally ill) typically occurs in the context of complex social interactions, often in a family setting rather than between strangers. It is also an issue in health care settings and the wider community. Mental health The recognition and understanding of mental health conditions have changed over time and across cultures and there are still variations in definition, assessment, and classification, although standard guideline criteria are widely used. In many cases, there appears to be a continuum between mental health and mental illness, making diagnosis complex. According to the World Health Organization, over a third of people in most countries report problems at some time in their life which meet the criteria for diagnosis of one or more of the common types of mental disorder. Corey M Keyes has created a two continua model of mental illness and health which holds that both are related, but distinct dimensions: one continuum indicates the presence or absence of mental health, the other the presence or absence of mental illness. For example, people with optimal mental health can also have a mental illness, and people who have no mental illness can also have poor mental health. Other animals Psychopathology in non-human primates has been studied since the mid-20th century. Over 20 behavioral patterns in captive chimpanzees have been documented as (statistically) abnormal for frequency, severity or oddness—some of which have also been observed in the wild. Captive great apes show gross behavioral abnormalities such as stereotypy of movements, self-mutilation, disturbed emotional reactions (mainly fear or aggression) towards companions, lack of species-typical communications, and generalized learned helplessness. 
In some cases such behaviors are hypothesized to be equivalent to symptoms associated with psychiatric disorders in humans such as depression, anxiety disorders, eating disorders and post-traumatic stress disorder. Concepts of antisocial, borderline and schizoid personality disorders have also been applied to non-human great apes. The risk of anthropomorphism is often raised concerning such comparisons, and assessment of non-human animals cannot incorporate evidence from linguistic communication. However, available evidence may range from nonverbal behaviors—including physiological responses and homologous facial displays and acoustic utterances—to neurochemical studies. It is pointed out that human psychiatric classification is often based on statistical description and judgment of behaviors (especially when speech or language is impaired) and that the use of verbal self-report is itself problematic and unreliable. Psychopathology has generally been traced, at least in captivity, to adverse rearing conditions such as early separation of infants from mothers; early sensory deprivation; and extended periods of social isolation. Studies have also indicated individual variation in temperament, such as sociability or impulsiveness. Particular causes of problems in captivity have included integration of strangers into existing groups and a lack of individual space, in which context some pathological behaviors have also been seen as coping mechanisms. Remedial interventions have included careful individually tailored re-socialization programs, behavior therapy, environment enrichment, and on rare occasions psychiatric drugs. Socialization has been found to work 90% of the time in disturbed chimpanzees, although restoration of functional sexuality and caregiving is often not achieved. Laboratory researchers sometimes try to develop animal models of human mental disorders, including by inducing or treating symptoms in animals through genetic, neurological, chemical or behavioral manipulation, but this has been criticized on empirical grounds and opposed on animal rights grounds.
Biology and health sciences
Illness and injury
null
19372
https://en.wikipedia.org/wiki/Minute
Minute
The minute is a unit of time defined as equal to 60 seconds. One hour contains 60 minutes. Although not a unit in the International System of Units (SI), the minute is accepted for use in the SI. The SI symbol for minutes is min (without a dot). The prime symbol is also sometimes used informally to denote minutes. In the UTC time standard, a minute on rare occasions has 61 seconds, a consequence of leap seconds; there is also a provision to insert a negative leap second, which would result in a 59-second minute, but this has never happened in more than 40 years under this system. History Al-Biruni first subdivided the hour sexagesimally into minutes, seconds, thirds and fourths in 1000 CE while discussing Jewish months. Historically, the word "minute" comes from the Latin pars minuta prima, meaning "first small part". This division of the hour can be further refined with a "second small part" (Latin: pars minuta secunda), and this is where the word "second" comes from. For even further refinement, the term "third" (1/60 of a second) remains in some languages, for example Polish (tercja) and Turkish (salise), although most modern usage subdivides seconds by using decimals. The symbol notation of the prime for minutes and double prime for seconds can be seen as indicating the first and second cut of the hour (similar to how the foot is the first cut of the yard or perhaps chain, with inches as the second cut). In 1267, the medieval scientist Roger Bacon, writing in Latin, defined the division of time between full moons as a number of hours, minutes, seconds, thirds, and fourths (horae, minuta, secunda, tertia, and quarta) after noon on specified calendar dates. Jost Bürgi was the first clock maker to include a minute hand on a clock, which he made for the astronomer Tycho Brahe in 1577. The introduction of the minute hand into watches was possible only after the invention of the hairspring by Thomas Tompion, an English watchmaker, in 1675.
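To make the sexagesimal chain of subdivisions described above concrete, the following minimal Python sketch (illustrative only; the function name and the example duration are chosen here for demonstration and are not part of any standard library) splits a duration given in hours into whole hours, minutes, seconds, thirds, and fourths, each unit being 1/60 of the one before it.

from fractions import Fraction

def sexagesimal_parts(hours, levels=4):
    # Split a non-negative duration in hours into whole hours plus `levels`
    # successive base-60 subdivisions (minutes, seconds, thirds, fourths).
    hours = Fraction(hours)      # exact arithmetic avoids floating-point drift
    whole = int(hours)
    frac = hours - whole
    parts = []
    for _ in range(levels):
        frac *= 60
        part = int(frac)
        parts.append(part)
        frac -= part
    return whole, parts

# Example: 2 hours plus one third of an hour is 2 h, 20 min, 0 s, 0 thirds, 0 fourths.
print(sexagesimal_parts(Fraction(7, 3)))   # (2, [20, 0, 0, 0])

A calendar- or UTC-aware implementation would additionally have to allow for the occasional 61-second minute (and, in principle, a 59-second minute) produced by leap seconds, which this sketch ignores.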
Physical sciences
Time
null
19374
https://en.wikipedia.org/wiki/Model%20organism
Model organism
A model organism is a non-human species that is extensively studied to understand particular biological phenomena, with the expectation that discoveries made in the model organism will provide insight into the workings of other organisms. Model organisms are widely used to research human disease when human experimentation would be unfeasible or unethical. This strategy is made possible by the common descent of all living organisms, and the conservation of metabolic and developmental pathways and genetic material over the course of evolution. Research using animal models has been central to most of the achievements of modern medicine. It has contributed most of the basic knowledge in fields such as human physiology and biochemistry, and has played significant roles in fields such as neuroscience and infectious disease. The results have included the near-eradication of polio and the development of organ transplantation, and have benefited both humans and animals. From 1910 to 1927, Thomas Hunt Morgan's work with the fruit fly Drosophila melanogaster identified chromosomes as the vector of inheritance for genes, and Eric Kandel wrote that Morgan's discoveries "helped transform biology into an experimental science". Research in model organisms led to further medical advances, such as the production of the diphtheria antitoxin and the 1922 discovery of insulin and its use in treating diabetes, which had previously meant death. Modern general anaesthetics such as halothane were also developed through studies on model organisms, and are necessary for modern, complex surgical operations. Other 20th-century medical advances and treatments that relied on research performed in animals include organ transplant techniques, the heart-lung machine, antibiotics, and the whooping cough vaccine. In researching human disease, model organisms allow for better understanding the disease process without the added risk of harming an actual human. The species of the model organism is usually chosen so that it reacts to disease or its treatment in a way that resembles human physiology, even though care must be taken when generalizing from one organism to another. However, many drugs, treatments and cures for human diseases are developed in part with the guidance of animal models. Treatments for animal diseases have also been developed, including for rabies, anthrax, glanders, feline immunodeficiency virus (FIV), tuberculosis, Texas cattle fever, classical swine fever (hog cholera), heartworm, and other parasitic infections. Animal experimentation continues to be required for biomedical research, and is used with the aim of solving medical problems such as Alzheimer's disease, AIDS, multiple sclerosis, spinal cord injury, many headaches, and other conditions in which there is no useful in vitro model system available. Model organisms are drawn from all three domains of life, as well as viruses. One of the first model systems for molecular biology was the bacterium Escherichia coli (E. coli), a common constituent of the human digestive system. The mouse (Mus musculus) has been used extensively as a model organism and is associated with many important biological discoveries of the 20th and 21st centuries. Other examples include baker's yeast (Saccharomyces cerevisiae), the T4 phage virus, the fruit fly Drosophila melanogaster, the flowering plant Arabidopsis thaliana, and guinea pigs (Cavia porcellus). Several of the bacterial viruses (bacteriophage) that infect E. 
coli also have been very useful for the study of gene structure and gene regulation (e.g. phages Lambda and T4). Disease models are divided into three categories: homologous animals have the same causes, symptoms and treatment options as would humans who have the same disease, isomorphic animals share the same symptoms and treatments, and predictive models are similar to a particular human disease in only a couple of aspects, but are useful in isolating and making predictions about mechanisms of a set of disease features. History The use of animals in research dates back to ancient Greece, with Aristotle (384–322 BCE) and Erasistratus (304–258 BCE) among the first to perform experiments on living animals. Discoveries in the 18th and 19th centuries included Antoine Lavoisier's use of a guinea pig in a calorimeter to prove that respiration was a form of combustion, and Louis Pasteur's demonstration of the germ theory of disease in the 1880s using anthrax in sheep. Research using animal models has been central to most of the achievements of modern medicine. It has contributed most of the basic knowledge in fields such as human physiology and biochemistry, and has played significant roles in fields such as neuroscience and infectious disease. For example, the results have included the near-eradication of polio and the development of organ transplantation, and have benefited both humans and animals. From 1910 to 1927, Thomas Hunt Morgan's work with the fruit fly Drosophila melanogaster identified chromosomes as the vector of inheritance for genes. Drosophila became one of the first, and for some time the most widely used, model organisms, and Eric Kandel wrote that Morgan's discoveries "helped transform biology into an experimental science". D. melanogaster remains one of the most widely used eukaryotic model organisms. During the same time period, studies on mouse genetics in the laboratory of William Ernest Castle in collaboration with Abbie Lathrop led to generation of the DBA ("dilute, brown and non-agouti") inbred mouse strain and the systematic generation of other inbred strains. The mouse has since been used extensively as a model organism and is associated with many important biological discoveries of the 20th and 21st centuries. In the late 19th century, Emil von Behring isolated the diphtheria toxin and demonstrated its effects in guinea pigs. He went on to develop an antitoxin against diphtheria in animals and then in humans, which resulted in the modern methods of immunization and largely ended diphtheria as a threatening disease. The diphtheria antitoxin is famously commemorated in the Iditarod race, which is modeled after the delivery of antitoxin in the 1925 serum run to Nome. The success of animal studies in producing the diphtheria antitoxin has also been attributed as a cause for the decline of the early 20th-century opposition to animal research in the United States. Subsequent research in model organisms led to further medical advances, such as Frederick Banting's research in dogs, which determined that the isolates of pancreatic secretion could be used to treat dogs with diabetes. This led to the 1922 discovery of insulin (with John Macleod) and its use in treating diabetes, which had previously meant death. John Cade's research in guinea pigs discovered the anticonvulsant properties of lithium salts, which revolutionized the treatment of bipolar disorder, replacing the previous treatments of lobotomy or electroconvulsive therapy. 
Modern general anaesthetics, such as halothane and related compounds, were also developed through studies on model organisms, and are necessary for modern, complex surgical operations. In the 1940s, Jonas Salk used rhesus monkey studies to isolate the most virulent forms of the polio virus, which led to his creation of a polio vaccine. The vaccine, which was made publicly available in 1955, reduced the incidence of polio 15-fold in the United States over the following five years. Albert Sabin improved the vaccine by passing the polio virus through animal hosts, including monkeys; the Sabin vaccine was produced for mass consumption in 1963, and had virtually eradicated polio in the United States by 1965. It has been estimated that developing and producing the vaccines required the use of 100,000 rhesus monkeys, with 65 doses of vaccine produced from each monkey. Sabin wrote in 1992, "Without the use of animals and human beings, it would have been impossible to acquire the important knowledge needed to prevent much suffering and premature death not only among humans, but also among animals." Other 20th-century medical advances and treatments that relied on research performed in animals include organ transplant techniques, the heart-lung machine, antibiotics, and the whooping cough vaccine. Treatments for animal diseases have also been developed, including for rabies, anthrax, glanders, feline immunodeficiency virus (FIV), tuberculosis, Texas cattle fever, classical swine fever (hog cholera), heartworm, and other parasitic infections. Animal experimentation continues to be required for biomedical research, and is used with the aim of solving medical problems such as Alzheimer's disease, AIDS, multiple sclerosis, spinal cord injury, many headaches, and other conditions in which there is no useful in vitro model system available. Selection Models are those organisms with a wealth of biological data that make them attractive to study as examples for other species and/or natural phenomena that are more difficult to study directly. Continual research on these organisms focuses on a wide variety of experimental techniques and goals from many different levels of biology—from ecology, behavior and biomechanics, down to the tiny functional scale of individual tissues, organelles and proteins. Inquiries about the DNA of organisms are classed as genetic models (with short generation times, such as the fruitfly and nematode worm), experimental models, and genomic parsimony models, investigating pivotal position in the evolutionary tree. Historically, model organisms include a handful of species with extensive genomic research data, such as the NIH model organisms. Often, model organisms are chosen on the basis that they are amenable to experimental manipulation. This usually will include characteristics such as short life-cycle, techniques for genetic manipulation (inbred strains, stem cell lines, and methods of transformation) and non-specialist living requirements. Sometimes, the genome arrangement facilitates the sequencing of the model organism's genome, for example, by being very compact or having a low proportion of junk DNA (e.g. yeast, arabidopsis, or pufferfish). When researchers look for an organism to use in their studies, they look for several traits. Among these are size, generation time, accessibility, manipulation, genetics, conservation of mechanisms, and potential economic benefit. 
As comparative molecular biology has become more common, some researchers have sought model organisms from a wider assortment of lineages on the tree of life. Phylogeny and genetic relatedness The primary reason for the use of model organisms in research is the evolutionary principle that all organisms share some degree of relatedness and genetic similarity due to common ancestry. The study of taxonomic human relatives, then, can provide a great deal of information about mechanism and disease within the human body that can be useful in medicine. Various phylogenetic trees for vertebrates have been constructed using comparative proteomics, genetics, genomics as well as the geochemical and fossil record. These estimations tell us that humans and chimpanzees last shared a common ancestor about 6 million years ago (mya). As our closest relatives, chimpanzees have a lot of potential to tell us about mechanisms of disease (and what genes may be responsible for human intelligence). However, chimpanzees are rarely used in research and are protected from highly invasive procedures. Rodents are the most common animal models. Phylogenetic trees estimate that humans and rodents last shared a common ancestor ~80-100mya. Despite this distant split, humans and rodents have far more similarities than they do differences. This is due to the relative stability of large portions of the genome, making the use of vertebrate animals particularly productive. Genomic data is used to make close comparisons between species and determine relatedness. Humans share about 99% of their genome with chimpanzees (98.7% with bonobos) and over 90% with the mouse. With so much of the genome conserved across species, it is relatively impressive that the differences between humans and mice can be accounted for in approximately six thousand genes (of ~30,000 total). Scientists have been able to take advantage of these similarities in generating experimental and predictive models of human disease. Use There are many model organisms. One of the first model systems for molecular biology was the bacterium Escherichia coli, a common constituent of the human digestive system. Several of the bacterial viruses (bacteriophage) that infect E. coli also have been very useful for the study of gene structure and gene regulation (e.g. phages Lambda and T4). However, it is debated whether bacteriophages should be classified as organisms, because they lack metabolism and depend on functions of the host cells for propagation. In eukaryotes, several yeasts, particularly Saccharomyces cerevisiae ("baker's" or "budding" yeast), have been widely used in genetics and cell biology, largely because they are quick and easy to grow. The cell cycle in a simple yeast is very similar to the cell cycle in humans and is regulated by homologous proteins. The fruit fly Drosophila melanogaster is studied, again, because it is easy to grow for an animal, has various visible congenital traits and has a polytene (giant) chromosome in its salivary glands that can be examined under a light microscope. The roundworm Caenorhabditis elegans is studied because it has very defined development patterns involving fixed numbers of cells, and it can be rapidly assayed for abnormalities. Disease models Animal models serving in research may have an existing, inbred or induced disease or injury that is similar to a human condition. These test conditions are often termed as animal models of disease. 
The use of animal models allows researchers to investigate disease states in ways which would be inaccessible in a human patient, performing procedures on the non-human animal that imply a level of harm that would not be considered ethical to inflict on a human. The best models of disease are similar in etiology (mechanism of cause) and phenotype (signs and symptoms) to the human equivalent. However, complex human diseases can often be better understood in a simplified system in which individual parts of the disease process are isolated and examined. For instance, behavioral analogues of anxiety or pain in laboratory animals can be used to screen and test new drugs for the treatment of these conditions in humans. A 2000 study found that animal models concorded (coincided on true positives and false negatives) with human toxicity in 71% of cases, with 63% for nonrodents alone and 43% for rodents alone. In 1987, Davidson et al. suggested that selection of an animal model for research be based on nine considerations. These include Animal models can be classified as homologous, isomorphic or predictive. Animal models can also be more broadly classified into four categories: 1) experimental, 2) spontaneous, 3) negative, 4) orphan. Experimental models are most common. These refer to models of disease that resemble human conditions in phenotype or response to treatment but are induced artificially in the laboratory. Some examples include: the use of metrazol (pentylenetetrazol) as an animal model of epilepsy; induction of mechanical brain injury as an animal model of post-traumatic epilepsy; injection of the neurotoxin 6-hydroxydopamine into dopaminergic parts of the basal ganglia as an animal model of Parkinson's disease; immunisation with an auto-antigen to induce an immune response to model autoimmune diseases such as experimental autoimmune encephalomyelitis; occlusion of the middle cerebral artery as an animal model of ischemic stroke; injection of blood into the basal ganglia of mice as a model for hemorrhagic stroke; induction of sepsis and septic shock by impairing the integrity of barrier tissues or administering live pathogens or toxins; infecting animals with pathogens to reproduce human infectious diseases; injecting animals with agonists or antagonists of various neurotransmitters to reproduce human mental disorders; using ionizing radiation to cause tumors; using gene transfer to cause tumors; implanting animals with tumors to test and develop treatments using ionizing radiation; genetically selected models (such as diabetic mice, also known as NOD mice); various animal models for screening of drugs for the treatment of glaucoma; the use of the ovariectomized rat in osteoporosis research; and the use of Plasmodium yoelii as a model of human malaria. Spontaneous models refer to diseases that are analogous to human conditions that occur naturally in the animal being studied. These models are rare, but informative. Negative models essentially refer to control animals, which are useful for validating an experimental result. Orphan models refer to diseases for which there is no human analog and occur exclusively in the species studied. The increase in knowledge of the genomes of non-human primates and other mammals that are genetically close to humans is allowing the production of genetically engineered animal tissues, organs and even animal species which express human diseases, providing a more robust model of human diseases in an animal model.
Animal models observed in the sciences of psychology and sociology are often termed animal models of behavior. It is difficult to build an animal model that perfectly reproduces the symptoms of depression in patients. Depression, as other mental disorders, consists of endophenotypes that can be reproduced independently and evaluated in animals. An ideal animal model offers an opportunity to understand molecular, genetic and epigenetic factors that may lead to depression. By using animal models, the underlying molecular alterations and the causal relationship between genetic or environmental alterations and depression can be examined, which would afford a better insight into pathology of depression. In addition, animal models of depression are indispensable for identifying novel therapies for depression. Important model organisms Model organisms are drawn from all three domains of life, as well as viruses. The most widely studied prokaryotic model organism is Escherichia coli (E. coli), which has been intensively investigated for over 60 years. It is a common, gram-negative gut bacterium which can be grown and cultured easily and inexpensively in a laboratory setting. It is the most widely used organism in molecular genetics, and is an important species in the fields of biotechnology and microbiology, where it has served as the host organism for the majority of work with recombinant DNA. Simple model eukaryotes include baker's yeast (Saccharomyces cerevisiae) and fission yeast (Schizosaccharomyces pombe), both of which share many characters with higher cells, including those of humans. For instance, many cell division genes that are critical for the development of cancer have been discovered in yeast. Chlamydomonas reinhardtii, a unicellular green alga with well-studied genetics, is used to study photosynthesis and motility. C. reinhardtii has many known and mapped mutants and expressed sequence tags, and there are advanced methods for genetic transformation and selection of genes. Dictyostelium discoideum is used in molecular biology and genetics, and is studied as an example of cell communication, differentiation, and programmed cell death. Among invertebrates, the fruit fly Drosophila melanogaster is famous as the subject of genetics experiments by Thomas Hunt Morgan and others. They are easily raised in the lab, with rapid generations, high fecundity, few chromosomes, and easily induced observable mutations. The nematode Caenorhabditis elegans is used for understanding the genetic control of development and physiology. It was first proposed as a model for neuronal development by Sydney Brenner in 1963, and has been extensively used in many different contexts since then. C. elegans was the first multicellular organism whose genome was completely sequenced, and as of 2012, the only organism to have its connectome (neuronal "wiring diagram") completed. Arabidopsis thaliana is currently the most popular model plant. Its small stature and short generation time facilitates rapid genetic studies, and many phenotypic and biochemical mutants have been mapped. A. thaliana was the first plant to have its genome sequenced. Among vertebrates, guinea pigs (Cavia porcellus) were used by Robert Koch and other early bacteriologists as a host for bacterial infections, becoming a byword for "laboratory animal", but are less commonly used today. The classic model vertebrate is currently the mouse (Mus musculus). 
Many inbred strains exist, as well as lines selected for particular traits, often of medical interest, e.g. body size, obesity, muscularity, and voluntary wheel-running behavior. The rat (Rattus norvegicus) is particularly useful as a toxicology model, and as a neurological model and source of primary cell cultures, owing to the larger size of organs and suborganellar structures relative to the mouse, while eggs and embryos from Xenopus tropicalis and Xenopus laevis (African clawed frog) are used in developmental biology, cell biology, toxicology, and neuroscience. Likewise, the zebrafish (Danio rerio) has a nearly transparent body during early development, which provides unique visual access to the animal's internal anatomy during this time period. Zebrafish are used to study development, toxicology and toxicopathology, specific gene function and roles of signaling pathways. Other important model organisms and some of their uses include: T4 phage (viral infection), Tetrahymena thermophila (intracellular processes), maize (transposons), hydras (regeneration and morphogenesis), cats (neurophysiology), chickens (development), dogs (respiratory and cardiovascular systems), Nothobranchius furzeri (aging), non-human primates such as the rhesus macaque and chimpanzee (hepatitis, HIV, Parkinson's disease, cognition, and vaccines), and ferrets (SARS-CoV-2). Selected model organisms The organisms below have become model organisms because they facilitate the study of certain characters or because of their genetic accessibility. For example, E. coli was one of the first organisms for which genetic techniques such as transformation or genetic manipulation were developed. The genomes of all model species have been sequenced, including their mitochondrial/chloroplast genomes. Model organism databases exist to provide researchers with a portal from which to download sequences (DNA, RNA, or protein) or to access functional information on specific genes, for example the sub-cellular localization of the gene product or its physiological role. Limitations Many animal models serving as test subjects in biomedical research, such as rats and mice, may be selectively sedentary, obese and glucose intolerant. This may confound their use to model human metabolic processes and diseases, as these can be affected by dietary energy intake and exercise. Similarly, there are differences between the immune systems of model organisms and humans that lead to significantly altered responses to stimuli, although the underlying principles of genome function may be the same. The impoverished environments inside standard laboratory cages deny research animals the mental and physical challenges that are necessary for healthy emotional development. Without day-to-day variety, risks and rewards, and complex environments, some have argued that animal models are irrelevant models of human experience. Mice differ from humans in several immune properties: mice are more resistant to some toxins than humans; have a lower total neutrophil fraction in the blood, a lower neutrophil enzymatic capacity, lower activity of the complement system, and a different set of pentraxins involved in the inflammatory process; and lack genes for important components of the immune system, such as IL-8, IL-37, TLR10, ICAM-3, etc. Laboratory mice reared in specific-pathogen-free (SPF) conditions usually have a rather immature immune system with a deficit of memory T cells.
These mice may have limited diversity of the microbiota, which directly affects the immune system and the development of pathological conditions. Moreover, persistent virus infections (for example, herpesviruses) are activated in humans with septic complications, but not in SPF mice, and may change resistance to bacterial coinfections. “Dirty” mice are possibly better suited to mimicking human pathologies. In addition, inbred mouse strains are used in the overwhelming majority of studies, while the human population is heterogeneous, pointing to the importance of studies in interstrain hybrid, outbred, and nonlinear mice. Unintended bias Some studies suggest that inadequate published data in animal testing may result in irreproducible research, with details about how experiments are done omitted from published papers, or with differences in testing that may introduce bias. Examples of hidden bias include a 2014 study from McGill University in Montreal, Canada, which suggested that mice handled by men rather than by women showed higher stress levels. Another study in 2016 suggested that gut microbiomes in mice may have an impact upon scientific research. Alternatives Ethical concerns, as well as the cost, maintenance and relative inefficiency of animal research, have encouraged the development of alternative methods for the study of disease. Cell culture, or in vitro study, provides an alternative that preserves the physiology of the living cell but does not require the sacrifice of an animal for mechanistic studies. Human induced pluripotent stem cells can also elucidate new mechanisms for understanding cancer and cell regeneration. Imaging studies (such as MRI or PET scans) enable non-invasive study of human subjects. Recent advances in genetics and genomics can identify disease-associated genes, which can be targeted for therapies. Many biomedical researchers argue that there is no substitute for a living organism when studying complex interactions in disease pathology or treatments. Ethics Debate about the ethical use of animals in research dates at least as far back as 1822, when the British Parliament, under pressure from British and Indian intellectuals, enacted the first law for animal protection, preventing cruelty to cattle. This was followed by the Cruelty to Animals Acts of 1835 and 1849, which criminalized ill-treating, over-driving, and torturing animals. In 1876, under pressure from the National Anti-Vivisection Society, the Cruelty to Animals Act was amended to include regulations governing the use of animals in research. This new act stipulated that 1) experiments must be proven absolutely necessary for instruction, or to save or prolong human life; 2) animals must be properly anesthetized; and 3) animals must be killed as soon as the experiment is over. Today, these three principles are central to the laws and guidelines governing the use of animals in research. In the U.S., the Animal Welfare Act of 1970 (see also Laboratory Animal Welfare Act) set standards for animal use and care in research. This law is enforced by APHIS's Animal Care program. In academic settings in which NIH funding is used for animal research, institutions are governed by the NIH Office of Laboratory Animal Welfare (OLAW). At each site, OLAW guidelines and standards are upheld by a local review board called the Institutional Animal Care and Use Committee (IACUC). All laboratory experiments involving living animals are reviewed and approved by this committee.
In addition to proving the potential for benefit to human health, minimization of pain and distress, and timely and humane euthanasia, experimenters must justify their protocols based on the principles of Replacement, Reduction and Refinement. "Replacement" refers to efforts to engage alternatives to animal use. This includes the use of computer models, non-living tissues and cells, and replacement of “higher-order” animals (primates and mammals) with “lower-order” animals (e.g. cold-blooded animals, invertebrates) wherever possible. "Reduction" refers to efforts to minimize the number of animals used during the course of an experiment, as well as prevention of unnecessary replication of previous experiments. To satisfy this requirement, mathematical calculations of statistical power are employed to determine the minimum number of animals that can be used to get a statistically significant experimental result. "Refinement" refers to efforts to make experimental design as painless and efficient as possible in order to minimize the suffering of each animal subject.
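The statistical power calculation mentioned under "Reduction" can be illustrated in code. The following Python sketch is illustrative only: it assumes the third-party statsmodels package is available, and the effect size, significance level, and power values are invented for the example rather than taken from any cited study. It estimates the smallest group size able to detect a given effect with a two-sample t-test.

import math
from statsmodels.stats.power import TTestIndPower

# Solve for the group size (nobs1) needed to detect an assumed effect.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.8,   # assumed standardized difference between groups (Cohen's d)
    alpha=0.05,        # significance level
    power=0.8,         # desired probability of detecting a true effect
    ratio=1.0,         # equal group sizes
)
print("Minimum animals per group:", math.ceil(n_per_group))   # about 26 under these assumptions

Using the smallest group size that still yields adequate power, rather than a habitual round number, is one way the "Reduction" principle is put into practice.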
Biology and health sciences
Basics
null
19391
https://en.wikipedia.org/wiki/Midwifery
Midwifery
Midwifery is the health science and health profession that deals with pregnancy, childbirth, and the postpartum period (including care of the newborn), in addition to the sexual and reproductive health of women throughout their lives. In many countries, midwifery is a medical profession (special for its independent and direct specialized education; should not be confused with the medical specialty, which depends on a previous general training). A professional in midwifery is known as a midwife. A 2013 Cochrane review concluded that "most women should be offered midwifery-led continuity models of care and women should be encouraged to ask for this option although caution should be exercised in applying this advice to women with substantial medical or obstetric complications." The review found that midwifery-led care was associated with a reduction in the use of epidurals, with fewer episiotomies or instrumental births, and a decreased risk of losing the baby before 24 weeks' gestation. However, midwifery-led care was also associated with a longer mean length of labor as measured in hours. Main areas of midwifery Pregnancy First trimester Trimester means "three months". A normal pregnancy lasts about nine months and has three trimesters. First trimester screening varies by country. Women are typically offered urinalysis (UA) and blood tests including a complete blood count (CBC), blood typing (including Rh screen), syphilis, hepatitis, HIV, and rubella testing. Additionally, women may have chlamydia testing via a urine sample, and women considered at high risk are screened for sickle cell disease and thalassemia. Women must consent to all tests before they are carried out. The woman's blood pressure, height and weight are measured. Her past pregnancies and family, social, and medical history are discussed. Women may have an ultrasound scan during the first trimester which may be used to help find the estimated due date. Some women may have genetic testing, such as screening for Down syndrome. Diet, exercise, and common disorders of pregnancy such as morning sickness are discussed. Second trimester The mother visits the midwife monthly or more often during the second trimester. The mother's partner and/or the birth companion may accompany her. The midwife will discuss pregnancy issues such as fatigue, heartburn, varicose veins, and other common problems such as back pain. Blood pressure and weight are monitored and the midwife measures the mother's abdomen to see if the baby is growing as expected. Lab tests such as a UA, CBC, and glucose tolerance test are done if clinically indicated. Third trimester In the third trimester the midwife will see the mother every two weeks until week 36 and every week after that. Weight, blood pressure, and abdominal measurements will continue to be done. Lab tests such as a CBC and UA may be done with additional testing done for at-risk pregnancies. The midwife palpates the woman's abdomen to establish the lie, presentation and position of the fetus and later, the engagement. A pelvic exam may be done to see if the mother's cervix is dilating. The midwife and the mother discuss birthing options and write a birth care plan. Childbirth Labor and delivery Midwives are qualified to assist with a normal vaginal delivery while more complicated deliveries are handled by a health care provider who has had further training. Childbirth is divided into four stages. First stage of labor The first stage of labour involves the opening of the cervix. 
In the early parts of this stage the cervix will become soft and thin, thus preparing for the delivery of the baby. The first stage of labour is complete when the cervix has dilated the full 10 cm. During the first stage of labor the mother begins to feel strong and regular contractions that come every 5 to 20 minutes and last 30 to 60 seconds. Contractions gradually become stronger, more frequent, and longer lasting. Second stage of labor During the second stage the baby begins to move down the birth canal. As the baby moves to the opening of the vagina it "crowns", meaning the top of the head can be seen at the vaginal entrance. At one time an "episiotomy" (an incision in the tissue at the opening of the vagina) was done routinely because it was believed that it prevented excessive tearing and healed more readily than a natural tear. However, more recent research shows that a surgical incision may be more extensive than a natural tear, and is more likely to contribute to later incontinence and pain during sex than a natural tear would have. The midwife assists the baby as needed and, when it has fully emerged, cuts the umbilical cord. If desired, either of the baby's parents may cut the cord. In the past the cord was cut shortly after birth, but there is growing evidence that delayed cord clamping may benefit the infant. Third stage of labor The third stage of labour is when the mother delivers the placenta. In order to do this she may need to push; just as in the first stage of labour, she may experience one or two contractions. The midwife may assist the mother in delivering the placenta by gently pulling on the umbilical cord. Fourth stage of labor The fourth stage of labor is the period beginning immediately after the birth and extending for about six weeks. The World Health Organization describes this period as the most critical and yet the most neglected phase in the lives of mothers and babies. Until recently babies were routinely removed from their mothers following birth; however, beginning around 2000, some authorities began to suggest that early skin-to-skin contact (placing the naked baby on the mother's chest) is of benefit to both mother and infant. As of 2014, early skin-to-skin contact is endorsed by all major organizations that are responsible for the well-being of infants. Thus, to help establish bonding and successful breastfeeding, the midwife carries out immediate mother and infant assessments as the infant lies on the mother's chest and removes the infant for further observations only after they have had their first breastfeed. Following the birth, if the mother had an episiotomy or a tearing of the perineum, it is sutured. The midwife does regular assessments for uterine contraction, fundal height, and vaginal bleeding. Throughout labor and delivery the mother's vital signs (temperature, blood pressure, and pulse) are closely monitored and her fluid intake and output are measured. The midwife also monitors the baby's pulse rate, palpates the mother's abdomen to monitor the baby's position, and does vaginal examinations as indicated. If the birth deviates from the norm at any stage, the midwife requests assistance from the multi-disciplinary team. Birthing positions Until the last century, most women used both upright and alternative positions to give birth. 
The lithotomy position was not used until the advent of forceps in the seventeenth century, and since then childbirth has progressively moved from a woman-supported experience in the home to a medical intervention within the hospital. There are significant advantages to assuming an upright position in labor and birth, such as stronger and more efficient uterine contractions aiding cervical dilatation, increased pelvic inlet and outlet diameters, and improved uterine contractility. Upright positions in the second stage include sitting, squatting, kneeling, and being on hands and knees. Postpartum period For women who have a hospital birth, the minimum hospital stay is six hours. Women who leave before this do so against medical advice. Women may choose when to leave the hospital. Full postnatal assessments are conducted daily whilst inpatient, or more frequently if needed. A postnatal assessment includes the woman's observations, general well-being, breasts (either a discussion and assistance with breastfeeding or a discussion about lactation suppression), abdominal palpation (if she has not had a caesarean section) to check for involution of the uterus, or a check of her caesarean wound (the dressing does not need to be removed for this), a check of her perineum, particularly if she tore or had stitches, reviewing her lochia, ensuring she has passed urine and had her bowels open, and checking for signs and symptoms of a DVT. The baby is also checked for jaundice, signs of adequate feeding, or other concerns. The baby has a nursery exam between six and seventy-two hours of birth to check for conditions such as heart defects, hip problems, or eye problems. In the community, the community midwife sees the woman at least until day ten. This does not mean she sees the woman and baby daily, but she cannot discharge them from her care until day ten at the earliest. Postnatal checks include the neonatal screening test (NST, or heel prick test) around day five. The baby is weighed and the midwife plans visits according to the health and needs of mother and baby. They are discharged to the care of the health visitor. Care of the newborn At birth, the baby receives an Apgar score at one minute and five minutes of age, at the least. This is a score out of 10 that assesses the baby in five different areas, each worth between 0 and 2 points. These areas are: colour, respiratory effort, tone, heart rate, and response to stimuli. The midwife checks the baby for any obvious problems, weighs the baby, and measures the head circumference. The midwife ensures the cord has been clamped securely and that the baby has the appropriate name tags on (if in hospital). Babies' lengths are not routinely measured. The midwife performs these checks as close to the mother as possible and returns the baby to the mother quickly. Skin-to-skin contact is encouraged, as it regulates the baby's heart rate, breathing, oxygen saturation, and temperature, and promotes bonding and breastfeeding. In some countries, such as Chile, the midwife is the professional who can direct neonatal intensive care units. This is an advantage for these professionals, who can use their knowledge of perinatology to provide high-quality care of newborns with medical or surgical conditions. Midwifery-led continuity of care Midwifery-led continuity of care is where one or more midwives have the primary responsibility for the continuity of care for childbearing women, with a multidisciplinary network of consultation and referral with other health care providers. 
This is different from "medical-led care" where an obstetrician or family physician is primarily responsible. In "shared-care" models, responsibility may be shared between a midwife, an obstetrician and/or a family physician. The midwife is part of very intimate situations with the mother. For this reason, many say that the most important thing to look for in a midwife is feeling comfortable with them, as one will go to them with every question or problem. According to a Cochrane review of public health systems in Australia, Canada, Ireland, New Zealand and the United Kingdom, "most women should be offered midwifery-led continuity models of care and women should be encouraged to ask for this option although caution should be exercised in applying this advice to women with substantial medical or obstetric complications." Midwifery-led care has effects including the following: a reduction in the use of epidurals, with fewer episiotomies or instrumental births; a longer mean length of labour as measured in hours; increased chances of being cared for in labour by a midwife known by the childbearing woman; increased chances of having a spontaneous vaginal birth; a decreased risk of preterm birth; and a decreased risk of losing the baby before 24 weeks' gestation, although there appears to be no difference in the risk of losing the baby after 24 weeks or overall. There was no difference in the number of Caesarean sections. All trials in the Cochrane review included licensed midwives, and none included lay or traditional midwives. Also, no trial included out-of-hospital birth. Compared to women in other care models, women in continuity models of midwifery care are more satisfied with their care. The updated version of the Cochrane review also shows a cost-saving effect in continuity models, compared to other midwifery models of care. In continuity models of midwifery care, the midwife-woman relationship develops over time. This deepened relationship has been shown to be of great importance and is described in a systematic review as "the vehicle through which personalised care, trust and empowerment are achieved in the continuity of care midwifery model". In some cultures, midwifery is the most traditional way of carrying out a pregnancy and childbirth, and it has been conducted for multiple generations. Childbearing women in these cultures, in Zimbabwe for example, feel that health facilities are not as comforting as cultural roots of care. Also, according to the World Health Organization, women should be able to have their children wherever they feel the most safe, so if having a midwife and proceeding with an at-home birth is what makes some women feel safe, then midwifery-led continuity of care might be the best option for them. History Ancient history In ancient Egypt, midwifery was a recognized female occupation, as attested by the Ebers Papyrus which dates from 1900 to 1550 BCE. Five columns of this papyrus deal with obstetrics and gynecology, especially concerning the acceleration of parturition (the action or process of giving birth to offspring) and the birth prognosis of the newborn. The Westcar papyrus, dated to 1700 BCE, includes instructions for calculating the expected date of confinement and describes different styles of birth chairs. Bas reliefs in the royal birth rooms at Luxor and other temples also attest to the heavy presence of midwifery in this culture. 
Midwifery in Greco-Roman antiquity covered a wide range of women, including old women who continued folk medical traditions in the villages of the Roman Empire, trained midwives who garnered their knowledge from a variety of sources, and highly trained women who were considered physicians. However, there were certain characteristics desired in a "good" midwife, as described by the physician Soranus of Ephesus in the 2nd century. He states in his work, Gynecology, that "a suitable person will be literate, with her wits about her, possessed of a good memory, loving work, respectable and generally not unduly handicapped as regards her senses [i.e., sight, smell, hearing], sound of limb, robust, and, according to some people, endowed with long slim fingers and short nails at her fingertips." Soranus also recommends that the midwife be of sympathetic disposition (although she need not have borne a child herself) and that she keep her hands soft for the comfort of both mother and child. Pliny, another physician from this time, valued nobility and a quiet and inconspicuous disposition in a midwife. There appear to have been three "grades" of midwives: the first was technically proficient; the second may have read some of the texts on obstetrics and gynecology; and the third was highly trained and reasonably considered a medical specialist with a concentration in midwifery. Agnodice or Agnodike (Gr. Ἀγνοδίκη) was the earliest historical, and likely apocryphal, midwife mentioned among the ancient Greeks. Midwives were known by many different titles in antiquity, including iatrinē (Gr., nurse), maia (Gr., midwife), obstetrix (Lat., obstetrician), and medica (Lat., doctor). It appears as though midwifery was treated differently in the Eastern end of the Mediterranean basin as opposed to the West. In the East, some women advanced beyond the profession of midwife (maia) to that of gynaecologist (iatros gynaikeios, translated as women's doctor), for which formal training was required. Also, there were some gynecological tracts circulating in the medical and educated circles of the East that were written by women with Greek names, although these women were few in number. Based on these facts, it would appear that midwifery in the East was a respectable profession in which respectable women could earn their livelihoods and enough esteem to publish works read and cited by male physicians. In fact, a number of Roman legal provisions strongly suggest that midwives enjoyed status and remuneration comparable to that of male doctors. One example of such a midwife is Salpe of Lemnos, who wrote on women's diseases and was mentioned several times in the works of Pliny. However, in the Roman West, information about practicing midwives comes mainly from funerary epitaphs. Two hypotheses are suggested by looking at a small sample of these epitaphs. The first is that midwifery was not a profession to which freeborn women of families that had enjoyed free status for several generations were attracted; therefore it seems that most midwives were of servile origin. Second, since most of these funeral epitaphs describe the women as freed, it can be proposed that midwives were generally valued enough, and earned enough income, to be able to gain their freedom. It is not known from these epitaphs how certain slave women were selected for training as midwives. Slave girls may have been apprenticed, and it is most likely that mothers taught their daughters. 
The actual duties of the midwife in antiquity consisted mainly of assisting in the birthing process, although they may also have helped with other medical problems relating to women when needed. Often, the midwife would call for the assistance of a physician when a more difficult birth was anticipated. In many cases the midwife brought along two or three assistants. In antiquity, it was believed by both midwives and physicians that a normal delivery was made easier when a woman sat upright. Therefore, during parturition, midwives brought a stool to the home where the delivery was to take place. In the seat of the birthstool was a crescent-shaped hole through which the baby would be delivered. The birthstool or chair often had armrests for the mother to grasp during the delivery. Most birthstools or chairs had backs which the patient could press against, but Soranus suggests that in some cases the chairs were backless and an assistant would stand behind the mother to support her. The midwife sat facing the mother, encouraging and supporting her through the birth, perhaps offering instruction on breathing and pushing, sometimes massaging her vaginal opening, and supporting her perineum during the delivery of the baby. The assistants may have helped by pushing downwards on the top of the mother's abdomen. Finally, the midwife received the infant, placed it in pieces of cloth, cut the umbilical cord, and cleansed the baby. The child was sprinkled with "fine and powdery salt, or natron or aphronitre" to soak up the birth residue, rinsed, and then powdered and rinsed again. Next, the midwives cleared away any and all mucus present from the nose, mouth, ears, or anus. Midwives were encouraged by Soranus to put olive oil in the baby's eyes to cleanse away any birth residue, and to place a piece of wool soaked in olive oil over the umbilical cord. After the delivery, the midwife made the initial call on whether or not an infant was healthy and fit to rear. She inspected the newborn for congenital deformities and tested its cry to hear whether or not it was robust and hearty. Ultimately, midwives made a determination about the chances for an infant's survival and likely recommended that a newborn with any severe deformities be exposed. A 2nd-century terracotta relief from the Ostian tomb of Scribonia Attice, wife of physician-surgeon M. Ulpius Amerimnus, details a childbirth scene. Scribonia was a midwife and the relief shows her in the midst of a delivery. A patient sits in the birth chair, gripping the handles, and the midwife's assistant stands behind her providing support. Scribonia sits on a low stool in front of the woman, modestly looking away while also assisting the delivery by dilating and massaging the vagina, as encouraged by Soranus. The services of a midwife were not inexpensive; this fact suggests that poorer women who could not afford the services of a professional midwife often had to make do with female relatives. Many wealthier families had their own midwives. However, the vast majority of women in the Greco-Roman world very likely received their maternity care from hired midwives. They may have been highly trained or possessed only a rudimentary knowledge of obstetrics. Also, many families had a choice of whether or not they wanted to employ a midwife who practiced the traditional folk medicine or the newer methods of professional parturition. As with many other aspects of life in antiquity, the quality of gynecological care often depended heavily on the socioeconomic status of the patient. 
Post-classical history Modern history From the 18th century, a conflict between surgeons and midwives arose, as medical men began to assert that their modern scientific techniques were better for mothers and infants than the folk medicine practiced by midwives. As doctors and medical associations pushed for a legal monopoly on obstetrical care, midwifery became outlawed or heavily regulated throughout the United States and Canada. In Northern Europe and Russia the situation for midwives was a little easier: in the Duchy of Estonia in Imperial Russia, Professor Christian Friedrich Deutsch established a midwifery school for women at the University of Dorpat in 1811, which existed until World War I and was the predecessor of the Tartu Health Care College. Training lasted for seven months, and at the end a certificate for practice was issued to the female students. Despite accusations that midwives were "incompetent and ignorant", some argued that poorly trained surgeons were far more of a danger to pregnant women. In 1846, the physician Ignaz Semmelweis observed that more women died in maternity wards staffed by male surgeons than by female midwives, and traced these outbreaks of puerperal fever back to (then all-male) medical students not washing their hands properly after dissecting cadavers, but his sanitary recommendations were ignored until acceptance of germ theory became widespread. The argument that surgeons were more dangerous than midwives lasted until the study of bacteriology became popular in the early 1900s and hospital hygiene was improved. Women began to feel safer giving birth in hospitals because of the amount of aid and the ease of birth that they experienced with doctors. "Physicians trained in the new century found a great contrast between their hospital and obstetrics practice in women's homes where they could not maintain sterile conditions or have trained help." German social scientists Gunnar Heinsohn and Otto Steiger theorize that midwifery became a target of persecution and repression by public authorities because midwives possessed highly specialized knowledge and skills regarding not only assisting birth, but also contraception and abortion. Contemporary By the late 20th century, midwives were already recognized as highly trained and specialized professionals in obstetrics. However, at the beginning of the 21st century, the medical perception of pregnancy and childbirth as potentially pathological and dangerous still dominates Western culture. Midwives who work in hospital settings also have been influenced by this view, although by and large they are trained to view birth as a normal and healthy process. While midwives play a much larger role in the care of pregnant mothers in Europe than in America, the medicalized model of birth still has influence in those countries, even though the World Health Organization recommends a natural, normal and humanized birth. The midwifery model of pregnancy and childbirth as a normal and healthy process plays a much larger role in Sweden and the Netherlands than in the rest of Europe, however. Swedish midwives stand out, since they administer 80 percent of prenatal care and more than 80 percent of family planning services in Sweden. Midwives in Sweden attend all normal births in public hospitals, and Swedish women tend to have fewer interventions in hospitals than American women. The Dutch infant mortality rate is one of the lowest in the world, at 4.0 deaths per thousand births, while the United States ranks twenty-second. 
Midwives in the Netherlands and Sweden owe a great deal of their success to supportive government policies.
Biology and health sciences
Health professionals
Health
19446
https://en.wikipedia.org/wiki/Magnetic%20resonance%20imaging
Magnetic resonance imaging
Magnetic resonance imaging (MRI) is a medical imaging technique used in radiology to generate pictures of the anatomy and the physiological processes inside the body. MRI scanners use strong magnetic fields, magnetic field gradients, and radio waves to form images of the organs in the body. MRI does not involve X-rays or the use of ionizing radiation, which distinguishes it from computed tomography (CT) and positron emission tomography (PET) scans. MRI is a medical application of nuclear magnetic resonance (NMR) which can also be used for imaging in other NMR applications, such as NMR spectroscopy. MRI is widely used in hospitals and clinics for medical diagnosis, staging and follow-up of disease. Compared to CT, MRI provides better contrast in images of soft tissues, e.g. in the brain or abdomen. However, it may be perceived as less comfortable by patients, due to the usually longer and louder measurements with the subject in a long, confining tube, although "open" MRI designs mostly relieve this. Additionally, implants and other non-removable metal in the body can pose a risk and may exclude some patients from undergoing an MRI examination safely. MRI was originally called NMRI (nuclear magnetic resonance imaging), but "nuclear" was dropped to avoid negative associations. Certain atomic nuclei are able to absorb radio frequency (RF) energy when placed in an external magnetic field; the resultant evolving spin polarization can induce an RF signal in a radio frequency coil and thereby be detected. In other words, the nuclear magnetic spin of protons in the hydrogen nuclei resonates with the RF incident waves and emit coherent radiation with compact direction, energy (frequency) and phase. This coherent amplified radiation is easily detected by RF antennas close to the subject being examined. It is a process similar to masers. In clinical and research MRI, hydrogen atoms are most often used to generate a macroscopic polarized radiation that is detected by the antennas. Hydrogen atoms are naturally abundant in humans and other biological organisms, particularly in water and fat. For this reason, most MRI scans essentially map the location of water and fat in the body. Pulses of radio waves excite the nuclear spin energy transition, and magnetic field gradients localize the polarization in space. By varying the parameters of the pulse sequence, different contrasts may be generated between tissues based on the relaxation properties of the hydrogen atoms therein. Since its development in the 1970s and 1980s, MRI has proven to be a versatile imaging technique. While MRI is most prominently used in diagnostic medicine and biomedical research, it also may be used to form images of non-living objects, such as mummies. Diffusion MRI and functional MRI extend the utility of MRI to capture neuronal tracts and blood flow respectively in the nervous system, in addition to detailed spatial images. The sustained increase in demand for MRI within health systems has led to concerns about cost effectiveness and overdiagnosis. Mechanism Construction and physics In most medical applications, hydrogen nuclei, which consist solely of a proton, that are in tissues create a signal that is processed to form an image of the body in terms of the density of those nuclei in a specific region. Given that the protons are affected by fields from other atoms to which they are bonded, it is possible to separate responses from hydrogen in specific compounds. 
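The resonance described above follows the Larmor relation, a standard result of NMR physics that is only implicit in the passage above; the worked example below, using the commonly quoted gyromagnetic ratio for hydrogen, is illustrative rather than drawn from this article:
f = \frac{\gamma}{2\pi} B_0, \qquad \frac{\gamma}{2\pi} \approx 42.58\ \mathrm{MHz/T}\ \text{for } {}^{1}\mathrm{H}
so at a clinical field strength of 1.5 T the protons resonate at roughly 42.58 × 1.5 ≈ 63.9 MHz, and at 3 T at roughly 127.7 MHz; the RF system of a scanner is therefore tuned to the field strength of its main magnet.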
To perform a study, the person is positioned within an MRI scanner that forms a strong magnetic field around the area to be imaged. First, energy from an oscillating magnetic field is temporarily applied to the patient at the appropriate resonance frequency. Scanning with X and Y gradient coils causes a selected region of the patient to experience the exact magnetic field required for the energy to be absorbed. The atoms are excited by a RF pulse and the resultant signal is measured by a receiving coil. The RF signal may be processed to deduce position information by looking at the changes in RF level and phase caused by varying the local magnetic field using gradient coils. As these coils are rapidly switched during the excitation and response to perform a moving line scan, they create the characteristic repetitive noise of an MRI scan as the windings move slightly due to magnetostriction. The contrast between different tissues is determined by the rate at which excited atoms return to the equilibrium state. Exogenous contrast agents may be given to the person to make the image clearer. The major components of an MRI scanner are the main magnet, which polarizes the sample, the shim coils for correcting shifts in the homogeneity of the main magnetic field, the gradient system which is used to localize the region to be scanned and the RF system, which excites the sample and detects the resulting NMR signal. The whole system is controlled by one or more computers. MRI requires a magnetic field that is both strong and uniform to a few parts per million across the scan volume. The field strength of the magnet is measured in teslas – and while the majority of systems operate at 1.5 T, commercial systems are available between 0.2 and 7 T. 3T MRI systems, also called 3 Tesla MRIs, have stronger magnets than 1.5 systems and are considered better for images of organs and soft tissue. Whole-body MRI systems for research applications operate in e.g. 9.4T, 10.5T, 11.7T. Even higher field whole-body MRI systems e.g. 14 T and beyond are in conceptual proposal or in engineering design. Most clinical magnets are superconducting magnets, which require liquid helium to keep them at low temperatures. Lower field strengths can be achieved with permanent magnets, which are often used in "open" MRI scanners for claustrophobic patients. Lower field strengths are also used in a portable MRI scanner approved by the FDA in 2020. Recently, MRI has been demonstrated also at ultra-low fields, i.e., in the microtesla-to-millitesla range, where sufficient signal quality is made possible by prepolarization (on the order of 10–100 mT) and by measuring the Larmor precession fields at about 100 microtesla with highly sensitive superconducting quantum interference devices (SQUIDs). T1 and T2 Each tissue returns to its equilibrium state after excitation by the independent relaxation processes of T1 (spin-lattice; that is, magnetization in the same direction as the static magnetic field) and T2 (spin-spin; transverse to the static magnetic field). To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR). This image weighting is useful for assessing the cerebral cortex, identifying fatty tissue, characterizing focal liver lesions, and in general, obtaining morphological information, as well as for post-contrast imaging. To create a T2-weighted image, magnetization is allowed to decay before measuring the MR signal by changing the echo time (TE). 
This image weighting is useful for detecting edema and inflammation, revealing white matter lesions, and assessing zonal anatomy in the prostate and uterus. The information from MRI scans comes in the form of image contrasts based on differences in the rate of relaxation of nuclear spins following their perturbation by an oscillating magnetic field (in the form of radiofrequency pulses through the sample). The relaxation rates are a measure of the time it takes for a signal to decay back to an equilibrium state from either the longitudinal or transverse plane. Magnetization builds up along the z-axis in the presence of a magnetic field, B0, such that the magnetic dipoles in the sample will, on average, align with the z-axis, summing to a total magnetization Mz. This magnetization along z is defined as the equilibrium magnetization; magnetization is defined as the sum of all magnetic dipoles in a sample. Following the equilibrium magnetization, a 90° radiofrequency (RF) pulse flips the direction of the magnetization vector into the xy-plane, and is then switched off. The initial magnetic field B0, however, is still applied. Thus, the spin magnetization vector will slowly return from the xy-plane back to the equilibrium state. The time it takes for the magnetization vector to return to its equilibrium value, Mz, is referred to as the longitudinal relaxation time, T1; the rate at which this happens is simply the reciprocal of the relaxation time, R1 = 1/T1. Similarly, the time it takes for Mxy to return to zero is T2, with the corresponding rate R2 = 1/T2. Magnetization as a function of time is defined by the Bloch equations. T1 and T2 values are dependent on the chemical environment of the sample; hence their utility in MRI. Soft tissue and muscle tissue relax at different rates, yielding the image contrast in a typical scan. The standard display of MR images is to represent fluid characteristics in black-and-white images, in which different tissues appear with characteristic signal intensities depending on the weighting used. Diagnostics Usage by organ or system MRI has a wide range of applications in medical diagnosis and around 50,000 scanners are estimated to be in use worldwide. MRI affects diagnosis and treatment in many specialties although the effect on improved health outcomes is disputed in certain cases. MRI is the investigation of choice in the preoperative staging of rectal and prostate cancer and has a role in the diagnosis, staging, and follow-up of other tumors, as well as for determining areas of tissue for sampling in biobanking. Neuroimaging MRI is the investigative tool of choice for neurological cancers over CT, as it offers better visualization of the posterior cranial fossa, containing the brainstem and the cerebellum. The contrast provided between grey and white matter makes MRI the best choice for many conditions of the central nervous system, including demyelinating diseases, dementia, cerebrovascular disease, infectious diseases, Alzheimer's disease and epilepsy. Since many images are taken milliseconds apart, it shows how the brain responds to different stimuli, enabling researchers to study both the functional and structural brain abnormalities in psychological disorders. MRI also is used in guided stereotactic surgery and radiosurgery for treatment of intracranial tumors, arteriovenous malformations, and other surgically treatable conditions using a device known as the N-localizer. 
New tools that implement artificial intelligence in healthcare have demonstrated higher image quality and improved morphometric analysis in neuroimaging with the application of a denoising system. The record for the highest spatial resolution of a whole intact brain (postmortem) is 100 microns, from Massachusetts General Hospital. The data was published in Nature on 30 October 2019. Though MRI is used widely in research on mental disabilities, based on a 2024 systematic literature review and meta-analysis commissioned by the Patient-Centered Outcomes Research Institute (PCORI), available research using MRI scans to diagnose ADHD showed great variability. The authors conclude that MRI cannot be reliably used to assist in making a clinical diagnosis of ADHD. Cardiovascular Cardiac MRI is complementary to other imaging techniques, such as echocardiography, cardiac CT, and nuclear medicine. It can be used to assess the structure and the function of the heart. Its applications include assessment of myocardial ischemia and viability, cardiomyopathies, myocarditis, iron overload, vascular diseases, and congenital heart disease. Musculoskeletal Applications in the musculoskeletal system include spinal imaging, assessment of joint disease, and soft tissue tumors. Also, MRI techniques can be used for diagnostic imaging of systemic muscle diseases, including genetic muscle diseases. Swallowing movement of the throat and oesophagus can cause motion artifact over the imaged spine; therefore, a saturation pulse applied over this region (the throat and oesophagus) can help to avoid this artifact. Motion artifact arising from the pumping of the heart can be reduced by timing the MRI pulse according to heart cycles. Flow artifacts from blood vessels can be reduced by applying saturation pulses above and below the region of interest. Liver and gastrointestinal Hepatobiliary MR is used to detect and characterize lesions of the liver, pancreas, and bile ducts. Focal or diffuse disorders of the liver may be evaluated using diffusion-weighted, opposed-phase imaging and dynamic contrast enhancement sequences. Extracellular contrast agents are used widely in liver MRI, and newer hepatobiliary contrast agents also provide the opportunity to perform functional biliary imaging. Anatomical imaging of the bile ducts is achieved by using a heavily T2-weighted sequence in magnetic resonance cholangiopancreatography (MRCP). Functional imaging of the pancreas is performed following administration of secretin. MR enterography provides non-invasive assessment of inflammatory bowel disease and small bowel tumors. MR-colonography may play a role in the detection of large polyps in patients at increased risk of colorectal cancer. Angiography Magnetic resonance angiography (MRA) generates pictures of the arteries to evaluate them for stenosis (abnormal narrowing) or aneurysms (vessel wall dilatations, at risk of rupture). MRA is often used to evaluate the arteries of the neck and brain, the thoracic and abdominal aorta, the renal arteries, and the legs (called a "run-off"). A variety of techniques can be used to generate the pictures, such as administration of a paramagnetic contrast agent (gadolinium) or using a technique known as "flow-related enhancement" (e.g., 2D and 3D time-of-flight sequences), where most of the signal on an image is due to blood that recently moved into that plane (see also FLASH MRI). 
Techniques involving phase accumulation (known as phase contrast angiography) can also be used to generate flow velocity maps easily and accurately. Magnetic resonance venography (MRV) is a similar procedure that is used to image veins. In this method, the tissue is now excited inferiorly, while the signal is gathered in the plane immediately superior to the excitation plane—thus imaging the venous blood that recently moved from the excited plane. Contrast agents MRI for imaging anatomical structures or blood flow do not require contrast agents since the varying properties of the tissues or blood provide natural contrasts. However, for more specific types of imaging, exogenous contrast agents may be given intravenously, orally, or intra-articularly. Most contrast agents are either paramagnetic (e.g.: gadolinium, manganese, europium), and are used to shorten T1 in the tissue they accumulate in, or super-paramagnetic (SPIONs), and are used to shorten T2 and T2* in healthy tissue reducing its signal intensity (negative contrast agents). The most commonly used intravenous contrast agents are based on chelates of gadolinium, which is highly paramagnetic. In general, these agents have proved safer than the iodinated contrast agents used in X-ray radiography or CT. Anaphylactoid reactions are rare, occurring in approx. 0.03–0.1%. Of particular interest is the lower incidence of nephrotoxicity, compared with iodinated agents, when given at usual doses—this has made contrast-enhanced MRI scanning an option for patients with renal impairment, who would otherwise not be able to undergo contrast-enhanced CT. Gadolinium-based contrast reagents are typically octadentate complexes of gadolinium(III). The complex is very stable (log K > 20) so that, in use, the concentration of the un-complexed Gd3+ ions should be below the toxicity limit. The 9th place in the metal ion's coordination sphere is occupied by a water molecule which exchanges rapidly with water molecules in the reagent molecule's immediate environment, affecting the magnetic resonance relaxation time. In December 2017, the Food and Drug Administration (FDA) in the United States announced in a drug safety communication that new warnings were to be included on all gadolinium-based contrast agents (GBCAs). The FDA also called for increased patient education and requiring gadolinium contrast vendors to conduct additional animal and clinical studies to assess the safety of these agents. Although gadolinium agents have proved useful for patients with kidney impairment, in patients with severe kidney failure requiring dialysis there is a risk of a rare but serious illness, nephrogenic systemic fibrosis, which may be linked to the use of certain gadolinium-containing agents. The most frequently linked is gadodiamide, but other agents have been linked too. Although a causal link has not been definitively established, current guidelines in the United States are that dialysis patients should only receive gadolinium agents where essential and that dialysis should be performed as soon as possible after the scan to remove the agent from the body promptly. In Europe, where more gadolinium-containing agents are available, a classification of agents according to potential risks has been released. In 2008, a new contrast agent named gadoxetate, brand name Eovist (US) or Primovist (EU), was approved for diagnostic use: This has the theoretical benefit of a dual excretion path. 
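The T1-shortening action of the paramagnetic agents described above is commonly summarized by a linear relaxivity model, in which relaxation rates add: 1/T1,observed = 1/T1,native + r1·[CA], where r1 is the agent's longitudinal relaxivity and [CA] its concentration. The short Python sketch below illustrates this model only; the tissue T1, relaxivity, and concentration values are illustrative assumptions rather than properties of any particular agent or tissue.

def t1_with_contrast(t1_native_s, r1_per_mM_per_s, conc_mM):
    # Relaxation rates (in 1/s) add under the linear relaxivity model:
    # R1_observed = R1_native + r1 * [CA]
    r1_observed = 1.0 / t1_native_s + r1_per_mM_per_s * conc_mM
    return 1.0 / r1_observed  # observed T1 in seconds

# Assumed values: native tissue T1 of 1.0 s, relaxivity of 4 per mM per s, 0.5 mM agent
print(round(t1_with_contrast(1.0, 4.0, 0.5), 3))  # prints 0.333, i.e. a markedly shortened T1

With a shortened T1, tissue that takes up the agent recovers its longitudinal magnetization faster and therefore appears brighter on T1-weighted images, which is the basis of contrast enhancement.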
Sequences An MRI sequence is a particular setting of radiofrequency pulses and gradients, resulting in a particular image appearance. The T1 and T2 weighting can also be described as MRI sequences. Specialized configurations Magnetic resonance spectroscopy Magnetic resonance spectroscopy (MRS) is used to measure the levels of different metabolites in body tissues, which can be achieved through a variety of single voxel or imaging-based techniques. The MR signal produces a spectrum of resonances that corresponds to different molecular arrangements of the isotope being "excited". This signature is used to diagnose certain metabolic disorders, especially those affecting the brain, and to provide information on tumor metabolism. Magnetic resonance spectroscopic imaging (MRSI) combines both spectroscopic and imaging methods to produce spatially localized spectra from within the sample or patient. The spatial resolution is much lower (limited by the available SNR), but the spectra in each voxel contains information about many metabolites. Because the available signal is used to encode spatial and spectral information, MRSI requires high SNR achievable only at higher field strengths (3 T and above). The high procurement and maintenance costs of MRI with extremely high field strengths inhibit their popularity. However, recent compressed sensing-based software algorithms (e.g., SAMV) have been proposed to achieve super-resolution without requiring such high field strengths. Real-time Interventional MRI The lack of harmful effects on the patient and the operator make MRI well-suited for interventional radiology, where the images produced by an MRI scanner guide minimally invasive procedures. Such procedures use no ferromagnetic instruments. A specialized growing subset of interventional MRI is intraoperative MRI, in which an MRI is used in surgery. Some specialized MRI systems allow imaging concurrent with the surgical procedure. More typically, the surgical procedure is temporarily interrupted so that MRI can assess the success of the procedure or guide subsequent surgical work. Magnetic resonance guided focused ultrasound In guided therapy, high-intensity focused ultrasound (HIFU) beams are focused on a tissue, that are controlled using MR thermal imaging. Due to the high energy at the focus, the temperature rises to above 65 °C (150 °F) which completely destroys the tissue. This technology can achieve precise ablation of diseased tissue. MR imaging provides a three-dimensional view of the target tissue, allowing for the precise focusing of ultrasound energy. The MR imaging provides quantitative, real-time, thermal images of the treated area. This allows the physician to ensure that the temperature generated during each cycle of ultrasound energy is sufficient to cause thermal ablation within the desired tissue and if not, to adapt the parameters to ensure effective treatment. Multinuclear imaging Hydrogen has the most frequently imaged nucleus in MRI because it is present in biological tissues in great abundance, and because its high gyromagnetic ratio gives a strong signal. However, any nucleus with a net nuclear spin could potentially be imaged with MRI. Such nuclei include helium-3, lithium-7, carbon-13, fluorine-19, oxygen-17, sodium-23, phosphorus-31 and xenon-129. 23Na and 31P are naturally abundant in the body, so they can be imaged directly. 
Gaseous isotopes such as 3He or 129Xe must be hyperpolarized and then inhaled as their nuclear density is too low to yield a useful signal under normal conditions. 17O and 19F can be administered in sufficient quantities in liquid form (e.g. 17O-water) that hyperpolarization is not a necessity. Using helium or xenon has the advantage of reduced background noise, and therefore increased contrast for the image itself, because these elements are not normally present in biological tissues. Moreover, the nucleus of any atom that has a net nuclear spin and that is bonded to a hydrogen atom could potentially be imaged via heteronuclear magnetization transfer MRI that would image the high-gyromagnetic-ratio hydrogen nucleus instead of the low-gyromagnetic-ratio nucleus that is bonded to the hydrogen atom. In principle, heteronuclear magnetization transfer MRI could be used to detect the presence or absence of specific chemical bonds. Multinuclear imaging is primarily a research technique at present. However, potential applications include functional imaging and imaging of organs poorly seen on 1H MRI (e.g., lungs and bones) or as alternative contrast agents. Inhaled hyperpolarized 3He can be used to image the distribution of air spaces within the lungs. Injectable solutions containing 13C or stabilized bubbles of hyperpolarized 129Xe have been studied as contrast agents for angiography and perfusion imaging. 31P can potentially provide information on bone density and structure, as well as functional imaging of the brain. Multinuclear imaging holds the potential to chart the distribution of lithium in the human brain, this element finding use as an important drug for those with conditions such as bipolar disorder. Molecular imaging by MRI MRI has the advantages of having very high spatial resolution and is very adept at morphological imaging and functional imaging. MRI does have several disadvantages though. First, MRI has a sensitivity of around 10−3 mol/L to 10−5 mol/L, which, compared to other types of imaging, can be very limiting. This problem stems from the fact that the population difference between the nuclear spin states is very small at room temperature. For example, at 1.5 teslas, a typical field strength for clinical MRI, the difference between high and low energy states is approximately 9 molecules per 2 million. Improvements to increase MR sensitivity include increasing magnetic field strength and hyperpolarization via optical pumping or dynamic nuclear polarization. There are also a variety of signal amplification schemes based on chemical exchange that increase sensitivity. To achieve molecular imaging of disease biomarkers using MRI, targeted MRI contrast agents with high specificity and high relaxivity (sensitivity) are required. To date, many studies have been devoted to developing targeted-MRI contrast agents to achieve molecular imaging by MRI. Commonly, peptides, antibodies, or small ligands, and small protein domains, such as HER-2 affibodies, have been applied to achieve targeting. To enhance the sensitivity of the contrast agents, these targeting moieties are usually linked to high payload MRI contrast agents or MRI contrast agents with high relaxivities. A new class of gene targeting MR contrast agents has been introduced to show gene action of unique mRNA and gene transcription factor proteins. These new contrast agents can trace cells with unique mRNA, microRNA and virus; tissue response to inflammation in living brains. 
MR imaging reports changes in gene expression that correlate positively with TaqMan analysis and with optical and electron microscopy. Parallel MRI It takes time to gather MRI data using sequential applications of magnetic field gradients. Even for the most streamlined of MRI sequences, there are physical and physiologic limits to the rate of gradient switching. Parallel MRI circumvents these limits by gathering some portion of the data simultaneously, rather than in a traditional sequential fashion. This is accomplished using arrays of radiofrequency (RF) detector coils, each with a different 'view' of the body. A reduced set of gradient steps is applied, and the remaining spatial information is filled in by combining signals from various coils, based on their known spatial sensitivity patterns. The resulting acceleration is limited by the number of coils and by the signal-to-noise ratio (which decreases with increasing acceleration), but two- to four-fold accelerations may commonly be achieved with suitable coil array configurations, and substantially higher accelerations have been demonstrated with specialized coil arrays. Parallel MRI may be used with most MRI sequences. After a number of early suggestions for using arrays of detectors to accelerate imaging went largely unremarked in the MRI field, parallel imaging saw widespread development and application following the introduction of the SiMultaneous Acquisition of Spatial Harmonics (SMASH) technique in 1996–7. The SENSitivity Encoding (SENSE) and Generalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) techniques are the parallel imaging methods in most common use today. The advent of parallel MRI resulted in extensive research and development in image reconstruction and RF coil design, as well as in a rapid expansion of the number of receiver channels available on commercial MR systems. Parallel MRI is now used routinely for MRI examinations in a wide range of body areas and clinical or research applications. Quantitative MRI Most MRI focuses on qualitative interpretation of MR data by acquiring spatial maps of relative variations in signal strength which are "weighted" by certain parameters. Quantitative methods instead attempt to determine spatial maps of accurate tissue relaxometry parameter values or magnetic field, or to measure the size of certain spatial features. Examples of quantitative MRI methods are T1-mapping (notably used in cardiac magnetic resonance imaging), T2-mapping, quantitative susceptibility mapping (QSM), quantitative fluid flow MRI (e.g. some cerebrospinal fluid flow MRI), and magnetic resonance elastography (MRE). Quantitative MRI aims to increase the reproducibility of MR images and interpretations, but has historically required longer scan times. Quantitative MRI (or qMRI) sometimes more specifically refers to multi-parametric quantitative MRI, the mapping of multiple tissue relaxometry parameters in a single imaging session. Efforts to make multi-parametric quantitative MRI faster have produced sequences which map multiple parameters simultaneously, either by building separate encoding methods for each parameter into the sequence, or by fitting MR signal evolution to a multi-parameter model. Hyperpolarized gas MRI Traditional MRI generates poor images of lung tissue because there are fewer water molecules with protons that can be excited by the magnetic field. Using hyperpolarized gas, an MRI scan can identify ventilation defects in the lungs. 
Before the scan, a patient is asked to inhale hyperpolarized xenon mixed with a buffer gas of helium or nitrogen. The resulting lung images are much higher quality than with traditional MRI. Safety MRI is, in general, a safe technique, although injuries may occur as a result of failed safety procedures or human error. Contraindications to MRI include most cochlear implants and cardiac pacemakers, shrapnel, and metallic foreign bodies in the eyes. Magnetic resonance imaging in pregnancy appears to be safe, at least during the second and third trimesters if done without contrast agents. Since MRI does not use any ionizing radiation, its use is generally favored in preference to CT when either modality could yield the same information. Some patients experience claustrophobia and may require sedation or shorter MRI protocols. Amplitude and rapid switching of gradient coils during image acquisition may cause peripheral nerve stimulation. MRI uses powerful magnets and can therefore cause magnetic materials to move at great speeds, posing a projectile risk, and may cause fatal accidents. However, as millions of MRIs are performed globally each year, fatalities are extremely rare. MRI machines can produce loud noise, up to 120 dB(A). This can cause hearing loss, tinnitus and hyperacusis, so appropriate hearing protection is essential for anyone inside the MRI scanner room during the examination. Overuse Medical societies issue guidelines for when physicians should use MRI on patients and recommend against overuse. MRI can detect health problems or confirm a diagnosis, but medical societies often recommend that MRI not be the first procedure for creating a plan to diagnose or manage a patient's complaint. A common case is to use MRI to seek a cause of low back pain; the American College of Physicians, for example, recommends against imaging (including MRI) as unlikely to result in a positive outcome for the patient. Artifacts An MRI artifact is a visual artifact, that is, an anomaly during visual representation. Many different artifacts can occur during magnetic resonance imaging (MRI), some affecting the diagnostic quality, while others may be confused with pathology. Artifacts can be classified as patient-related, signal processing-dependent and hardware (machine)-related. Non-medical use MRI is used industrially mainly for routine analysis of chemicals. The nuclear magnetic resonance technique is also used, for example, to measure the ratio between water and fat in foods, monitoring of flow of corrosive fluids in pipes, or to study molecular structures such as catalysts. Being non-invasive and non-damaging, MRI can be used to study the anatomy of plants, their water transportation processes and water balance. It is also applied to veterinary radiology for diagnostic purposes. Outside this, its use in zoology is limited due to the high cost; but it can be used on many species. In palaeontology it is used to examine the structure of fossils. Forensic imaging provides graphic documentation of an autopsy, which manual autopsy does not. CT scanning provides quick whole-body imaging of skeletal and parenchymal alterations, whereas MR imaging gives better representation of soft tissue pathology. All that being said, MRI is more expensive, and more time-consuming to utilize. Moreover, the quality of MR imaging deteriorates below 10 °C. History In 1971 at Stony Brook University, Paul Lauterbur applied magnetic field gradients in all three dimensions and a back-projection technique to create NMR images. 
He published the first images of two tubes of water in 1973 in the journal Nature, followed by the picture of a living animal, a clam, and in 1974 by the image of the thoracic cavity of a mouse. Lauterbur called his imaging method zeugmatography, a term which was replaced by (N)MR imaging. In the late 1970s, the physicist Peter Mansfield developed further MRI techniques, such as the echo-planar imaging (EPI) technique. Raymond Damadian, who built one of the first scanners, carried out work on nuclear magnetic resonance (NMR) that has been incorporated into MRI. Advances in semiconductor technology were crucial to the development of practical MRI, which requires a large amount of computational power. This was made possible by the rapidly increasing number of transistors on a single integrated circuit chip. Mansfield and Lauterbur were awarded the 2003 Nobel Prize in Physiology or Medicine for their "discoveries concerning magnetic resonance imaging".
Technology
Imaging
null
19447
https://en.wikipedia.org/wiki/Group%20%28mathematics%29
Group (mathematics)
In mathematics, a group is a set with an operation that associates an element of the set to every pair of elements of the set (as does every binary operation) and satisfies the following constraints: the operation is associative, it has an identity element, and every element of the set has an inverse element. Many mathematical structures are groups endowed with other properties. For example, the integers with the addition operation form an infinite group, which is generated by a single element called (these properties characterize the integers in a unique way). The concept of a group was elaborated for handling, in a unified way, many mathematical structures such as numbers, geometric shapes and polynomial roots. Because the concept of groups is ubiquitous in numerous areas both within and outside mathematics, some authors consider it as a central organizing principle of contemporary mathematics. In geometry, groups arise naturally in the study of symmetries and geometric transformations: The symmetries of an object form a group, called the symmetry group of the object, and the transformations of a given type form a general group. Lie groups appear in symmetry groups in geometry, and also in the Standard Model of particle physics. The Poincaré group is a Lie group consisting of the symmetries of spacetime in special relativity. Point groups describe symmetry in molecular chemistry. The concept of a group arose in the study of polynomial equations, starting with Évariste Galois in the 1830s, who introduced the term group (French: ) for the symmetry group of the roots of an equation, now called a Galois group. After contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. In addition to their abstract properties, group theorists also study the different ways in which a group can be expressed concretely, both from a point of view of representation theory (that is, through the representations of the group) and of computational group theory. A theory has been developed for finite groups, which culminated with the classification of finite simple groups, completed in 2004. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become an active area in group theory. Definition and illustration First example: the integers One of the more familiar groups is the set of integers together with addition. For any two integers and , the sum is also an integer; this closure property says that is a binary operation on . The following properties of integer addition serve as a model for the group axioms in the definition below. For all integers , and , one has . Expressed in words, adding to first, and then adding the result to gives the same final result as adding to the sum of and . This property is known as associativity. If is any integer, then and . Zero is called the identity element of addition because adding it to any integer returns the same integer. For every integer , there is an integer such that and . The integer is called the inverse element of the integer and is denoted . 
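In symbols, writing the inverse of an integer a as −a, the three properties just described read, for all integers a, b and c (a standard restatement, supplied here because the symbolic forms are omitted in the text above):
(a + b) + c = a + (b + c)    (associativity)
a + 0 = 0 + a = a    (identity element)
a + (−a) = (−a) + a = 0    (inverse element)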
The integers, together with the operation , form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the following definition is developed. Definition A group is a non-empty set together with a binary operation on , here denoted "", that combines any two elements and of to form an element of , denoted , such that the following three requirements, known as group axioms, are satisfied: Associativity For all , , in , one has . Identity element There exists an element in such that, for every in , one has and . Such an element is unique (see below). It is called the identity element (or sometimes neutral element) of the group. Inverse element For each in , there exists an element in such that and , where is the identity element. For each , the element is unique (see below); it is called the inverse of and is commonly denoted . Notation and terminology Formally, a group is an ordered pair of a set and a binary operation on this set that satisfies the group axioms. The set is called the underlying set of the group, and the operation is called the group operation or the group law. A group and its underlying set are thus two different mathematical objects. To avoid cumbersome notation, it is common to abuse notation by using the same symbol to denote both. This reflects also an informal way of thinking: that the group is the same as the set except that it has been enriched by additional structure provided by the operation. For example, consider the set of real numbers , which has the operations of addition and multiplication . Formally, is a set, is a group, and is a field. But it is common to write to denote any of these three objects. The additive group of the field is the group whose underlying set is and whose operation is addition. The multiplicative group of the field is the group whose underlying set is the set of nonzero real numbers and whose operation is multiplication. More generally, one speaks of an additive group whenever the group operation is notated as addition; in this case, the identity is typically denoted , and the inverse of an element is denoted . Similarly, one speaks of a multiplicative group whenever the group operation is notated as multiplication; in this case, the identity is typically denoted , and the inverse of an element is denoted . In a multiplicative group, the operation symbol is usually omitted entirely, so that the operation is denoted by juxtaposition, instead of . The definition of a group does not require that for all elements and in . If this additional condition holds, then the operation is said to be commutative, and the group is called an abelian group. It is a common convention that for an abelian group either additive or multiplicative notation may be used, but for a nonabelian group only multiplicative notation is used. Several other notations are commonly used for groups whose elements are not numbers. For a group whose elements are functions, the operation is often function composition ; then the identity may be denoted id. In the more specific cases of geometric transformation groups, symmetry groups, permutation groups, and automorphism groups, the symbol is often omitted, as for multiplicative groups. Many other variants of notation may be encountered. Second example: a symmetry group Two figures in the plane are congruent if one can be changed into the other using a combination of rotations, reflections, and translations. 
Any figure is congruent to itself. However, some figures are congruent to themselves in more than one way, and these extra congruences are called symmetries. A square has eight symmetries. These are: the identity operation leaving everything unchanged, denoted id; rotations of the square around its center by 90°, 180°, and 270° clockwise, denoted by , and , respectively; reflections about the horizontal and vertical middle line ( and ), or through the two diagonals ( and ). These symmetries are functions. Each sends a point in the square to the corresponding point under the symmetry. For example, sends a point to its rotation 90° clockwise around the square's center, and sends a point to its reflection across the square's vertical middle line. Composing two of these symmetries gives another symmetry. These symmetries determine a group called the dihedral group of degree four, denoted . The underlying set of the group is the above set of symmetries, and the group operation is function composition. Two symmetries are combined by composing them as functions, that is, applying the first one to the square, and the second one to the result of the first application. The result of performing first and then is written symbolically from right to left as ("apply the symmetry after performing the symmetry "). This is the usual notation for composition of functions. A Cayley table lists the results of all such compositions possible. For example, rotating by 270° clockwise () and then reflecting horizontally () is the same as performing a reflection along the diagonal (). Using the above symbols, highlighted in blue in the Cayley table: Given this set of symmetries and the described operation, the group axioms can be understood as follows. Binary operation: Composition is a binary operation. That is, is a symmetry for any two symmetries and . For example, that is, rotating 270° clockwise after reflecting horizontally equals reflecting along the counter-diagonal (). Indeed, every other combination of two symmetries still gives a symmetry, as can be checked using the Cayley table. Associativity: The associativity axiom deals with composing more than two symmetries: Starting with three elements , and of , there are two possible ways of using these three symmetries in this order to determine a symmetry of the square. One of these ways is to first compose and into a single symmetry, then to compose that symmetry with . The other way is to first compose and , then to compose the resulting symmetry with . These two ways must give always the same result, that is, For example, can be checked using the Cayley table: Identity element: The identity element is , as it does not change any symmetry when composed with it either on the left or on the right. Inverse element: Each symmetry has an inverse: , the reflections , , , and the 180° rotation are their own inverse, because performing them twice brings the square back to its original orientation. The rotations and are each other's inverses, because rotating 90° and then rotation 270° (or vice versa) yields a rotation over 360° which leaves the square unchanged. This is easily verified on the table. In contrast to the group of integers above, where the order of the operation is immaterial, it does matter in , as, for example, but . In other words, is not abelian. History The modern concept of an abstract group developed out of several fields of mathematics. 
The original motivation for group theory was the quest for solutions of polynomial equations of degree higher than 4. The 19th-century French mathematician Évariste Galois, extending prior work of Paolo Ruffini and Joseph-Louis Lagrange, gave a criterion for the solvability of a particular polynomial equation in terms of the symmetry group of its roots (solutions). The elements of such a Galois group correspond to certain permutations of the roots. At first, Galois's ideas were rejected by his contemporaries, and published only posthumously. More general permutation groups were investigated in particular by Augustin Louis Cauchy. Arthur Cayley's On the theory of groups, as depending on the symbolic equation (1854) gives the first abstract definition of a finite group. Geometry was a second field in which groups were used systematically, especially symmetry groups as part of Felix Klein's 1872 Erlangen program. After novel geometries such as hyperbolic and projective geometry had emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas, Sophus Lie founded the study of Lie groups in 1884. The third field contributing to group theory was number theory. Certain abelian group structures had been used implicitly in Carl Friedrich Gauss's number-theoretical work Disquisitiones Arithmeticae (1798), and more explicitly by Leopold Kronecker. In 1847, Ernst Kummer made early attempts to prove Fermat's Last Theorem by developing groups describing factorization into prime numbers. The convergence of these various sources into a uniform theory of groups started with Camille Jordan's (1870). Walther von Dyck (1882) introduced the idea of specifying a group by means of generators and relations, and was also the first to give an axiomatic definition of an "abstract group", in the terminology of the time. As of the 20th century, groups gained wide recognition by the pioneering work of Ferdinand Georg Frobenius and William Burnside (who worked on representation theory of finite groups), Richard Brauer's modular representation theory and Issai Schur's papers. The theory of Lie groups, and more generally locally compact groups was studied by Hermann Weyl, Élie Cartan and many others. Its algebraic counterpart, the theory of algebraic groups, was first shaped by Claude Chevalley (from the late 1930s) and later by the work of Armand Borel and Jacques Tits. The University of Chicago's 1960–61 Group Theory Year brought together group theorists such as Daniel Gorenstein, John G. Thompson and Walter Feit, laying the foundation of a collaboration that, with input from numerous other mathematicians, led to the classification of finite simple groups, with the final step taken by Aschbacher and Smith in 2004. This project exceeded previous mathematical endeavours by its sheer size, in both length of proof and number of researchers. Research concerning this classification proof is ongoing. Group theory remains a highly active mathematical branch, impacting many other fields, as the examples below illustrate. Elementary consequences of the group axioms Basic facts about all groups that can be obtained directly from the group axioms are commonly subsumed under elementary group theory. For example, repeated applications of the associativity axiom show that the unambiguity of generalizes to more than three factors. Because this implies that parentheses can be inserted anywhere within such a series of terms, parentheses are usually omitted. 
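The parenthesization remark can be illustrated with a small computation. In the sketch below the group is taken, purely for illustration, to be the permutations of three symbols under composition; all five ways of bracketing a four-fold product give the same element, so the parentheses carry no information.

# Permutations of {0, 1, 2} written as tuples p with p[i] the image of i.
compose = lambda f, g: tuple(f[g[i]] for i in range(3))

f, g, h, k = (1, 0, 2), (0, 2, 1), (2, 1, 0), (1, 2, 0)
# All five parenthesizations of the four-fold product give the same group element.
results = {
    compose(compose(compose(f, g), h), k),
    compose(compose(f, compose(g, h)), k),
    compose(compose(f, g), compose(h, k)),
    compose(f, compose(compose(g, h), k)),
    compose(f, compose(g, compose(h, k))),
}
print(len(results) == 1)      # True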
Uniqueness of identity element The group axioms imply that the identity element is unique; that is, there exists only one identity element: any two identity elements and of a group are equal, because the group axioms imply . It is thus customary to speak of the identity element of the group. Uniqueness of inverses The group axioms also imply that the inverse of each element is unique. Let a group element have both and as inverses. Then Therefore, it is customary to speak of the inverse of an element. Division Given elements and of a group , there is a unique solution in to the equation , namely . It follows that for each in , the function that maps each to is a bijection; it is called left multiplication by or left translation by . Similarly, given and , the unique solution to is . For each , the function that maps each to is a bijection called right multiplication by or right translation by . Equivalent definition with relaxed axioms The group axioms for identity and inverses may be "weakened" to assert only the existence of a left identity and left inverses. From these one-sided axioms, one can prove that the left identity is also a right identity and a left inverse is also a right inverse for the same element. Since they define exactly the same structures as groups, collectively the axioms are not weaker. In particular, assuming associativity and the existence of a left identity (that is, ) and a left inverse for each element (that is, ), one can show that every left inverse is also a right inverse of the same element as follows. Indeed, one has Similarly, the left identity is also a right identity: These proofs require all three axioms (associativity, existence of left identity and existence of left inverse). For a structure with a looser definition (like a semigroup) one may have, for example, that a left identity is not necessarily a right identity. The same result can be obtained by only assuming the existence of a right identity and a right inverse. However, only assuming the existence of a left identity and a right inverse (or vice versa) is not sufficient to define a group. For example, consider the set with the operator satisfying and . This structure does have a left identity (namely, ), and each element has a right inverse (which is for both elements). Furthermore, this operation is associative (since the product of any number of elements is always equal to the rightmost element in that product, regardless of the order in which these operations are done). However, is not a group, since it lacks a right identity. Basic concepts When studying sets, one uses concepts such as subset, function, and quotient by an equivalence relation. When studying groups, one uses instead subgroups, homomorphisms, and quotient groups. These are the analogues that take the group structure into account. Group homomorphisms Group homomorphisms are functions that respect group structure; they may be used to relate two groups. A homomorphism from a group to a group is a function such that It would be natural to require also that respect identities, , and inverses, for all in . However, these additional requirements need not be included in the definition of homomorphisms, because they are already implied by the requirement of respecting the group operation. The identity homomorphism of a group is the homomorphism that maps each element of to itself. An inverse homomorphism of a homomorphism is a homomorphism such that and , that is, such that for all in and such that for all in . 
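A familiar illustration of a homomorphism, used here only as an example, is the exponential map from the additive group of real numbers to the multiplicative group of positive real numbers. The sketch below spot-checks, on a few sample values, that the map respects the operation, and that respecting the operation already forces the identity and inverses to be respected, as stated above.

import math

f = math.exp                                  # f(x) = e**x
samples = [-2.0, -0.5, 0.0, 1.0, 3.25]

# f respects the operations: addition on one side, multiplication on the other.
print(all(math.isclose(f(a + b), f(a) * f(b)) for a in samples for b in samples))   # True
# Respecting the operation already forces identities and inverses to correspond:
print(math.isclose(f(0.0), 1.0))                               # 0 maps to 1
print(all(math.isclose(f(-a), 1.0 / f(a)) for a in samples))   # -a maps to 1 / f(a)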
An isomorphism is a homomorphism that has an inverse homomorphism; equivalently, it is a bijective homomorphism. Groups and are called isomorphic if there exists an isomorphism . In this case, can be obtained from simply by renaming its elements according to the function ; then any statement true for is true for , provided that any specific elements mentioned in the statement are also renamed. The collection of all groups, together with the homomorphisms between them, form a category, the category of groups. An injective homomorphism factors canonically as an isomorphism followed by an inclusion, for some subgroup of . Injective homomorphisms are the monomorphisms in the category of groups. Subgroups Informally, a subgroup is a group contained within a bigger one, : it has a subset of the elements of , with the same operation. Concretely, this means that the identity element of must be contained in , and whenever and are both in , then so are and , so the elements of , equipped with the group operation on restricted to , indeed form a group. In this case, the inclusion map is a homomorphism. In the example of symmetries of a square, the identity and the rotations constitute a subgroup , highlighted in red in the Cayley table of the example: any two rotations composed are still a rotation, and a rotation can be undone by (i.e., is inverse to) the complementary rotations 270° for 90°, 180° for 180°, and 90° for 270°. The subgroup test provides a necessary and sufficient condition for a nonempty subset of a group to be a subgroup: it is sufficient to check that for all elements and in . Knowing a group's subgroups is important in understanding the group as a whole. Given any subset of a group , the subgroup generated by consists of all products of elements of and their inverses. It is the smallest subgroup of containing . In the example of symmetries of a square, the subgroup generated by and consists of these two elements, the identity element , and the element . Again, this is a subgroup, because combining any two of these four elements or their inverses (which are, in this particular case, these same elements) yields an element of this subgroup. Cosets In many situations it is desirable to consider two group elements the same if they differ by an element of a given subgroup. For example, in the symmetry group of a square, once any reflection is performed, rotations alone cannot return the square to its original position, so one can think of the reflected positions of the square as all being equivalent to each other, and as inequivalent to the unreflected positions; the rotation operations are irrelevant to the question whether a reflection has been performed. Cosets are used to formalize this insight: a subgroup determines left and right cosets, which can be thought of as translations of by an arbitrary group element . In symbolic terms, the left and right cosets of , containing an element , are The left cosets of any subgroup form a partition of ; that is, the union of all left cosets is equal to and two left cosets are either equal or have an empty intersection. The first case happens precisely when , i.e., when the two elements differ by an element of . Similar considerations apply to the right cosets of . The left cosets of may or may not be the same as its right cosets. If they are (that is, if all in satisfy ), then is said to be a normal subgroup. 
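A small concrete case, chosen only for illustration, makes the partition property visible: take the group of the integers 0 through 5 with addition that wraps around at 6, and the subgroup {0, 3}. The sketch below lists the cosets, checks that they cover the group without overlap, and checks that two elements determine the same coset exactly when they differ by an element of the subgroup.

n = 6
G = set(range(n))                         # integers 0..5 with addition wrapping around at 6
H = {0, 3}                                # a subgroup
coset = lambda g: frozenset((g + h) % n for h in H)     # the coset g + H

cosets = {coset(g) for g in G}
print(sorted(sorted(c) for c in cosets))                 # [[0, 3], [1, 4], [2, 5]]
# The cosets partition G: together they cover every element exactly once.
print(sum(len(c) for c in cosets) == len(G))             # True
# Two elements give the same coset exactly when they differ by an element of H.
print(all((coset(a) == coset(b)) == ((a - b) % n in H) for a in G for b in G))   # True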
In , the group of symmetries of a square, with its subgroup of rotations, the left cosets are either equal to , if is an element of itself, or otherwise equal to (highlighted in green in the Cayley table of ). The subgroup is normal, because and similarly for the other elements of the group. (In fact, in the case of , the cosets generated by reflections are all equal: .) Quotient groups Suppose that is a normal subgroup of a group , and denotes its set of cosets. Then there is a unique group law on for which the map sending each element to is a homomorphism. Explicitly, the product of two cosets and is , the coset serves as the identity of , and the inverse of in the quotient group is . The group , read as " modulo ", is called a quotient group or factor group. The quotient group can alternatively be characterized by a universal property. The elements of the quotient group are and . The group operation on the quotient is shown in the table. For example, . Both the subgroup and the quotient are abelian, but is not. Sometimes a group can be reconstructed from a subgroup and quotient (plus some additional data), by the semidirect product construction; is an example. The first isomorphism theorem implies that any surjective homomorphism factors canonically as a quotient homomorphism followed by an isomorphism: . Surjective homomorphisms are the epimorphisms in the category of groups. Presentations Every group is isomorphic to a quotient of a free group, in many ways. For example, the dihedral group is generated by the right rotation and the reflection in a vertical line (every element of is a finite product of copies of these and their inverses). Hence there is a surjective homomorphism from the free group on two generators to sending to and to . Elements in are called relations; examples include . In fact, it turns out that is the smallest normal subgroup of containing these three elements; in other words, all relations are consequences of these three. The quotient of the free group by this normal subgroup is denoted . This is called a presentation of by generators and relations, because the first isomorphism theorem for yields an isomorphism . A presentation of a group can be used to construct the Cayley graph, a graphical depiction of a discrete group. Examples and applications Examples and applications of groups abound. A starting point is the group of integers with addition as group operation, introduced above. If instead of addition multiplication is considered, one obtains multiplicative groups. These groups are predecessors of important constructions in abstract algebra. Groups are also applied in many other mathematical areas. Mathematical objects are often examined by associating groups to them and studying the properties of the corresponding groups. For example, Henri Poincaré founded what is now called algebraic topology by introducing the fundamental group. By means of this connection, topological properties such as proximity and continuity translate into properties of groups. Elements of the fundamental group of a topological space are equivalence classes of loops, where loops are considered equivalent if one can be smoothly deformed into another, and the group operation is "concatenation" (tracing one loop then the other). For example, as shown in the figure, if the topological space is the plane with one point removed, then loops which do not wrap around the missing point (blue) can be smoothly contracted to a single point and are the identity element of the fundamental group. 
A loop which wraps around the missing point times cannot be deformed into a loop which wraps times (with ), because the loop cannot be smoothly deformed across the hole, so each class of loops is characterized by its winding number around the missing point. The resulting group is isomorphic to the integers under addition. In more recent applications, the influence has also been reversed to motivate geometric constructions by a group-theoretical background. In a similar vein, geometric group theory employs geometric concepts, for example in the study of hyperbolic groups. Further branches crucially applying groups include algebraic geometry and number theory. In addition to the above theoretical applications, many practical applications of groups exist. Cryptography relies on the combination of the abstract group theory approach together with algorithmical knowledge obtained in computational group theory, in particular when implemented for finite groups. Applications of group theory are not restricted to mathematics; sciences such as physics, chemistry and computer science benefit from the concept. Numbers Many number systems, such as the integers and the rationals, enjoy a naturally given group structure. In some cases, such as with the rationals, both addition and multiplication operations give rise to group structures. Such number systems are predecessors to more general algebraic structures known as rings and fields. Further abstract algebraic concepts such as modules, vector spaces and algebras also form groups. Integers The group of integers under addition, denoted , has been described above. The integers, with the operation of multiplication instead of addition, do not form a group. The associativity and identity axioms are satisfied, but inverses do not exist: for example, is an integer, but the only solution to the equation in this case is , which is a rational number, but not an integer. Hence not every element of has a (multiplicative) inverse. Rationals The desire for the existence of multiplicative inverses suggests considering fractions Fractions of integers (with nonzero) are known as rational numbers. The set of all such irreducible fractions is commonly denoted . There is still a minor obstacle for , the rationals with multiplication, being a group: because zero does not have a multiplicative inverse (i.e., there is no such that ), is still not a group. However, the set of all nonzero rational numbers does form an abelian group under multiplication, also denoted . Associativity and identity element axioms follow from the properties of integers. The closure requirement still holds true after removing zero, because the product of two nonzero rationals is never zero. Finally, the inverse of is , therefore the axiom of the inverse element is satisfied. The rational numbers (including zero) also form a group under addition. Intertwining addition and multiplication operations yields more complicated structures called rings and – if division by other than zero is possible, such as in – fields, which occupy a central position in abstract algebra. Group theoretic arguments therefore underlie parts of the theory of those entities. Modular arithmetic Modular arithmetic for a modulus defines any two elements and that differ by a multiple of to be equivalent, denoted by . Every integer is equivalent to one of the integers from to , and the operations of modular arithmetic modify normal arithmetic by replacing the result of any operation by its equivalent representative. 
Modular addition, defined in this way for the integers from to , forms a group, denoted as or , with as the identity element and as the inverse element of . A familiar example is addition of hours on the face of a clock, where 12 rather than 0 is chosen as the representative of the identity. If the hour hand is on and is advanced hours, it ends up on , as shown in the illustration. This is expressed by saying that is congruent to "modulo " or, in symbols, For any prime number , there is also the multiplicative group of integers modulo . Its elements can be represented by to . The group operation, multiplication modulo , replaces the usual product by its representative, the remainder of division by . For example, for , the four group elements can be represented by . In this group, , because the usual product is equivalent to : when divided by it yields a remainder of . The primality of ensures that the usual product of two representatives is not divisible by , and therefore that the modular product is nonzero. The identity element is represented by , and associativity follows from the corresponding property of the integers. Finally, the inverse element axiom requires that given an integer not divisible by , there exists an integer such that that is, such that evenly divides . The inverse can be found by using Bézout's identity and the fact that the greatest common divisor equals . In the case above, the inverse of the element represented by is that represented by , and the inverse of the element represented by is represented by , as . Hence all group axioms are fulfilled. This example is similar to above: it consists of exactly those elements in the ring that have a multiplicative inverse. These groups, denoted , are crucial to public-key cryptography. Cyclic groups A cyclic group is a group all of whose elements are powers of a particular element . In multiplicative notation, the elements of the group are where means , stands for , etc. Such an element is called a generator or a primitive element of the group. In additive notation, the requirement for an element to be primitive is that each element of the group can be written as In the groups introduced above, the element is primitive, so these groups are cyclic. Indeed, each element is expressible as a sum all of whose terms are . Any cyclic group with elements is isomorphic to this group. A second example for cyclic groups is the group of th complex roots of unity, given by complex numbers satisfying . These numbers can be visualized as the vertices on a regular -gon, as shown in blue in the image for . The group operation is multiplication of complex numbers. In the picture, multiplying with corresponds to a counter-clockwise rotation by 60°. From field theory, the group is cyclic for prime : for example, if , is a generator since , , , and . Some cyclic groups have an infinite number of elements. In these groups, for every non-zero element , all the powers of are distinct; despite the name "cyclic group", the powers of the elements do not cycle. An infinite cyclic group is isomorphic to , the group of integers under addition introduced above. As these two prototypes are both abelian, so are all cyclic groups. The study of finitely generated abelian groups is quite mature, including the fundamental theorem of finitely generated abelian groups; and reflecting this state of affairs, many group-related notions, such as center and commutator, describe the extent to which a given group is not abelian. 
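Both groups can be computed directly for a small prime, here p = 5 (the prime and the generator below are illustrative choices). Every nonzero residue has a multiplicative inverse modulo p, and the powers of a single generator run through the whole group, exhibiting it as cyclic.

p = 5
units = list(range(1, p))                    # the nonzero residues 1, 2, ..., p-1

# Every element has an inverse modulo p; pow(a, -1, p) computes it (Python 3.8+),
# which works because gcd(a, p) = 1 for a prime modulus.
print(all(a * pow(a, -1, p) % p == 1 for a in units))        # True

# The group is cyclic: the powers of one generator run through every element.
g = 3
print([pow(g, k, p) for k in range(1, p)])                   # [3, 4, 2, 1]
print(sorted(pow(g, k, p) for k in range(1, p)) == units)    # True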
Symmetry groups Symmetry groups are groups consisting of symmetries of given mathematical objects, principally geometric entities, such as the symmetry group of the square given as an introductory example above, although they also arise in algebra such as the symmetries among the roots of polynomial equations dealt with in Galois theory (see below). Conceptually, group theory can be thought of as the study of symmetry. Symmetries in mathematics greatly simplify the study of geometrical or analytical objects. A group is said to act on another mathematical object if every group element can be associated to some operation on and the composition of these operations follows the group law. For example, an element of the (2,3,7) triangle group acts on a triangular tiling of the hyperbolic plane by permuting the triangles. By a group action, the group pattern is connected to the structure of the object being acted on. In chemistry, point groups describe molecular symmetries, while space groups describe crystal symmetries in crystallography. These symmetries underlie the chemical and physical behavior of these systems, and group theory enables simplification of quantum mechanical analysis of these properties. For example, group theory is used to show that optical transitions between certain quantum levels cannot occur simply because of the symmetry of the states involved. Group theory helps predict the changes in physical properties that occur when a material undergoes a phase transition, for example, from a cubic to a tetrahedral crystalline form. An example is ferroelectric materials, where the change from a paraelectric to a ferroelectric state occurs at the Curie temperature and is related to a change from the high-symmetry paraelectric state to the lower symmetry ferroelectric state, accompanied by a so-called soft phonon mode, a vibrational lattice mode that goes to zero frequency at the transition. Such spontaneous symmetry breaking has found further application in elementary particle physics, where its occurrence is related to the appearance of Goldstone bosons. Finite symmetry groups such as the Mathieu groups are used in coding theory, which is in turn applied in error correction of transmitted data, and in CD players. Another application is differential Galois theory, which characterizes functions having antiderivatives of a prescribed form, giving group-theoretic criteria for when solutions of certain differential equations are well-behaved. Geometric properties that remain stable under group actions are investigated in (geometric) invariant theory. General linear group and representation theory Matrix groups consist of matrices together with matrix multiplication. The general linear group consists of all invertible -by- matrices with real entries. Its subgroups are referred to as matrix groups or linear groups. The dihedral group example mentioned above can be viewed as a (very small) matrix group. Another important matrix group is the special orthogonal group . It describes all possible rotations in dimensions. Rotation matrices in this group are used in computer graphics. Representation theory is both an application of the group concept and important for a deeper understanding of groups. It studies the group by its group actions on other spaces. A broad class of group representations are linear representations in which the group acts on a vector space, such as the three-dimensional Euclidean space . 
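A concrete linear representation, given here only as an illustration, is the group of plane rotations acting on two-dimensional space. The sketch below builds 2-by-2 rotation matrices and checks, for sample angles, that composing two rotations gives the rotation by the sum of the angles, that the rotation by zero is the identity matrix, and that the inverse of a rotation is the rotation by the opposite angle.

import math

def rot(theta):
    # 2x2 rotation matrix for the angle theta, stored as a nested tuple.
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)) for i in range(2))

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

a, b = 0.7, 1.1
print(close(matmul(rot(a), rot(b)), rot(a + b)))     # True: composition adds the angles (closure)
print(close(rot(0.0), ((1.0, 0.0), (0.0, 1.0))))     # True: the zero rotation is the identity
print(close(matmul(rot(a), rot(-a)), rot(0.0)))      # True: the inverse is the opposite rotation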
A representation of a group on an -dimensional real vector space is simply a group homomorphism from the group to the general linear group. This way, the group operation, which may be abstractly given, translates to the multiplication of matrices making it accessible to explicit computations. A group action gives further means to study the object being acted on. On the other hand, it also yields information about the group. Group representations are an organizing principle in the theory of finite groups, Lie groups, algebraic groups and topological groups, especially (locally) compact groups. Galois groups Galois groups were developed to help solve polynomial equations by capturing their symmetry features. For example, the solutions of the quadratic equation are given by Each solution can be obtained by replacing the sign by or ; analogous formulae are known for cubic and quartic equations, but do not exist in general for degree 5 and higher. In the quadratic formula, changing the sign (permuting the resulting two solutions) can be viewed as a (very simple) group operation. Analogous Galois groups act on the solutions of higher-degree polynomial equations and are closely related to the existence of formulas for their solution. Abstract properties of these groups (in particular their solvability) give a criterion for the ability to express the solutions of these polynomials using solely addition, multiplication, and roots similar to the formula above. Modern Galois theory generalizes the above type of Galois groups by shifting to field theory and considering field extensions formed as the splitting field of a polynomial. This theory establishes—via the fundamental theorem of Galois theory—a precise relationship between fields and groups, underlining once again the ubiquity of groups in mathematics. Finite groups A group is called finite if it has a finite number of elements. The number of elements is called the order of the group. An important class is the symmetric groups , the groups of permutations of objects. For example, the symmetric group on 3 letters is the group of all possible reorderings of the objects. The three letters ABC can be reordered into ABC, ACB, BAC, BCA, CAB, CBA, forming in total 6 (factorial of 3) elements. The group operation is composition of these reorderings, and the identity element is the reordering operation that leaves the order unchanged. This class is fundamental insofar as any finite group can be expressed as a subgroup of a symmetric group for a suitable integer , according to Cayley's theorem. Parallel to the group of symmetries of the square above, can also be interpreted as the group of symmetries of an equilateral triangle. The order of an element in a group is the least positive integer such that , where represents that is, application of the operation "" to copies of . (If "" represents multiplication, then corresponds to the th power of .) In infinite groups, such an may not exist, in which case the order of is said to be infinity. The order of an element equals the order of the cyclic subgroup generated by this element. More sophisticated counting techniques, for example, counting cosets, yield more precise statements about finite groups: Lagrange's Theorem states that for a finite group the order of any finite subgroup divides the order of . The Sylow theorems give a partial converse. The dihedral group of symmetries of a square is a finite group of order 8. 
In this group, the order of is 4, as is the order of the subgroup that this element generates. The order of the reflection elements etc. is 2. Both orders divide 8, as predicted by Lagrange's theorem. The groups of multiplication modulo a prime have order . Finite abelian groups Any finite abelian group is isomorphic to a product of finite cyclic groups; this statement is part of the fundamental theorem of finitely generated abelian groups. Any group of prime order is isomorphic to the cyclic group (a consequence of Lagrange's theorem). Any group of order is abelian, isomorphic to or . But there exist nonabelian groups of order ; the dihedral group of order above is an example. Simple groups When a group has a normal subgroup other than and itself, questions about can sometimes be reduced to questions about and . A nontrivial group is called simple if it has no such normal subgroup. Finite simple groups are to finite groups as prime numbers are to positive integers: they serve as building blocks, in a sense made precise by the Jordan–Hölder theorem. Classification of finite simple groups Computer algebra systems have been used to list all groups of order up to 2000. But classifying all finite groups is a problem considered too hard to be solved. The classification of all finite simple groups was a major achievement in contemporary group theory. There are several infinite families of such groups, as well as 26 "sporadic groups" that do not belong to any of the families. The largest sporadic group is called the monster group. The monstrous moonshine conjectures, proved by Richard Borcherds, relate the monster group to certain modular functions. The gap between the classification of simple groups and the classification of all groups lies in the extension problem. Groups with additional structure An equivalent definition of group consists of replacing the "there exist" part of the group axioms by operations whose result is the element that must exist. So, a group is a set equipped with a binary operation (the group operation), a unary operation (which provides the inverse) and a nullary operation, which has no operand and results in the identity element. Otherwise, the group axioms are exactly the same. This variant of the definition avoids existential quantifiers and is used in computing with groups and for computer-aided proofs. This way of defining groups lends itself to generalizations such as the notion of group object in a category. Briefly, this is an object with morphisms that mimic the group axioms. Topological groups Some topological spaces may be endowed with a group law. In order for the group law and the topology to interweave well, the group operations must be continuous functions; informally, and must not vary wildly if and vary only a little. Such groups are called topological groups, and they are the group objects in the category of topological spaces. The most basic examples are the group of real numbers under addition and the group of nonzero real numbers under multiplication. Similar examples can be formed from any other topological field, such as the field of complex numbers or the field of -adic numbers. These examples are locally compact, so they have Haar measures and can be studied via harmonic analysis. 
Other locally compact topological groups include the group of points of an algebraic group over a local field or adele ring; these are basic to number theory Galois groups of infinite algebraic field extensions are equipped with the Krull topology, which plays a role in infinite Galois theory. A generalization used in algebraic geometry is the étale fundamental group. Lie groups A Lie group is a group that also has the structure of a differentiable manifold; informally, this means that it looks locally like a Euclidean space of some fixed dimension. Again, the definition requires the additional structure, here the manifold structure, to be compatible: the multiplication and inverse maps are required to be smooth. A standard example is the general linear group introduced above: it is an open subset of the space of all -by- matrices, because it is given by the inequality where denotes an -by- matrix. Lie groups are of fundamental importance in modern physics: Noether's theorem links continuous symmetries to conserved quantities. Rotation, as well as translations in space and time, are basic symmetries of the laws of mechanics. They can, for instance, be used to construct simple models—imposing, say, axial symmetry on a situation will typically lead to significant simplification in the equations one needs to solve to provide a physical description. Another example is the group of Lorentz transformations, which relate measurements of time and velocity of two observers in motion relative to each other. They can be deduced in a purely group-theoretical way, by expressing the transformations as a rotational symmetry of Minkowski space. The latter serves—in the absence of significant gravitation—as a model of spacetime in special relativity. The full symmetry group of Minkowski space, i.e., including translations, is known as the Poincaré group. By the above, it plays a pivotal role in special relativity and, by implication, for quantum field theories. Symmetries that vary with location are central to the modern description of physical interactions with the help of gauge theory. An important example of a gauge theory is the Standard Model, which describes three of the four known fundamental forces and classifies all known elementary particles. Generalizations More general structures may be defined by relaxing some of the axioms defining a group. The table gives a list of several structures generalizing groups. For example, if the requirement that every element has an inverse is eliminated, the resulting algebraic structure is called a monoid. The natural numbers (including zero) under addition form a monoid, as do the nonzero integers under multiplication . Adjoining inverses of all elements of the monoid produces a group , and likewise adjoining inverses to any (abelian) monoid produces a group known as the Grothendieck group of . A group can be thought of as a small category with one object in which every morphism is an isomorphism: given such a category, the set is a group; conversely, given a group , one can build a small category with one object in which . More generally, a groupoid is any small category in which every morphism is an isomorphism. In a groupoid, the set of all morphisms in the category is usually not a group, because the composition is only partially defined: is defined only when the source of matches the target of . Groupoids arise in topology (for instance, the fundamental groupoid) and in the theory of stacks. 
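The monoid of natural numbers mentioned above can be checked directly: associativity and an identity element are present, but inverses are not, which is exactly what separates a monoid from a group. The sketch below is a simple numerical spot check on a finite window of values.

# The natural numbers under addition: associativity and an identity (0) are present,
# but no positive number has an additive inverse among the naturals, so the structure
# is a monoid rather than a group.
naturals = range(0, 1000)
print(all((a + b) + c == a + (b + c) for a in range(6) for b in range(6) for c in range(6)))  # True
print(all(0 + a == a and a + 0 == a for a in naturals))      # True: 0 is the identity
print(any(1 + b == 0 for b in naturals))                     # False: 1 has no inverse here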
Finally, it is possible to generalize any of these concepts by replacing the binary operation with an -ary operation (i.e., an operation taking arguments, for some nonnegative integer ). With the proper generalization of the group axioms, this gives a notion of -ary group.
Mathematics
Algebra
null
19528
https://en.wikipedia.org/wiki/Mechanical%20engineering
Mechanical engineering
Mechanical engineering is the study of physical machines that may involve force and movement. It is an engineering branch that combines engineering physics and mathematics principles with materials science to design, analyze, manufacture, and maintain mechanical systems. It is one of the oldest and broadest of the engineering branches. Mechanical engineering requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, design, structural analysis, and electricity. In addition to these core principles, mechanical engineers use tools such as computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided engineering (CAE), and product lifecycle management to design and analyze manufacturing plants, industrial equipment and machinery, heating and cooling systems, transport systems, motor vehicles, aircraft, watercraft, robotics, medical devices, weapons, and others. Mechanical engineering emerged as a field during the Industrial Revolution in Europe in the 18th century; however, its development can be traced back several thousand years around the world. In the 19th century, developments in physics led to the emergence of mechanical engineering science. The field has continually evolved to incorporate advancements; today mechanical engineers are pursuing developments in such areas as composites, mechatronics, and nanotechnology. It also overlaps with aerospace engineering, metallurgical engineering, civil engineering, structural engineering, electrical engineering, manufacturing engineering, chemical engineering, industrial engineering, and other engineering disciplines to varying degrees. Mechanical engineers may also work in the field of biomedical engineering, specifically with biomechanics, transport phenomena, biomechatronics, bionanotechnology, and modelling of biological systems. History The application of mechanical engineering can be seen in the archives of various ancient and medieval societies. The six classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) have been known since prehistoric times. Mesopotamian civilization is credited with the invention of the wheel by several, mainly older, sources. However, some recent sources either suggest that it was invented independently in both Mesopotamia and Eastern Europe or credit prehistoric Eastern Europeans with the invention of the wheel. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale, and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC. The Saqiyah was developed in the Kingdom of Kush during the 4th century BC. It relied on animal power, reducing the demand on human energy. Reservoirs in the form of Hafirs were developed in Kush to store water and boost irrigation. Bloomeries and blast furnaces were developed during the seventh century BC in Meroe. Kushite sundials applied mathematics in the form of advanced trigonometry. The earliest practical water-powered machines, the water wheel and watermill, first appeared in the Persian Empire, in what are now Iraq and Iran, by the early 4th century BC. In ancient Greece, the works of Archimedes (287–212 BC) influenced mechanics in the Western tradition.
The geared Antikythera mechanism was an analog computer invented around the 2nd century BC. In Roman Egypt, Heron of Alexandria (c. 10–70 AD) created the first steam-powered device (the aeolipile). In China, Zhang Heng (78–139 AD) improved a water clock and invented a seismometer, and Ma Jun (200–265 AD) invented a chariot with differential gears. The medieval Chinese horologist and engineer Su Song (1020–1101 AD) incorporated an escapement mechanism into his astronomical clock tower two centuries before escapement devices were found in medieval European clocks. He also invented the world's first known endless power-transmitting chain drive. The cotton gin was invented in India by the 6th century AD, and the spinning wheel was invented in the Islamic world by the early 11th century. Dual-roller gins appeared in India and China between the 12th and 14th centuries. The worm gear roller gin appeared in the Indian subcontinent during the early Delhi Sultanate era of the 13th to 14th centuries. During the Islamic Golden Age (7th to 15th century), Muslim inventors made remarkable contributions in the field of mechanical technology. Among them, Al-Jazari wrote his famous Book of Knowledge of Ingenious Mechanical Devices in 1206, presenting many mechanical designs. In the 17th century, important breakthroughs in the foundations of mechanical engineering occurred in England and on the Continent. The Dutch mathematician and physicist Christiaan Huygens invented the pendulum clock in 1657, which served as the most reliable timekeeper for almost 300 years, and published a work dedicated to clock designs and the theory behind them. In England, Isaac Newton formulated Newton's Laws of Motion and developed the calculus, which would become the mathematical basis of physics. Newton was reluctant to publish his works for years, but he was finally persuaded to do so by his colleagues, such as Edmond Halley. Gottfried Wilhelm Leibniz, who earlier designed a mechanical calculator, is also credited with developing the calculus during the same time period. During the early 19th-century Industrial Revolution, machine tools were developed in England, Germany, and Scotland. This allowed mechanical engineering to develop as a separate field within engineering. These developments brought with them manufacturing machines and the engines to power them. The first British professional society of mechanical engineers, the Institution of Mechanical Engineers, was formed in 1847, thirty years after the civil engineers had formed the first such professional society, the Institution of Civil Engineers. On the European continent, Johann von Zimmermann (1820–1901) founded the first factory for grinding machines in Chemnitz, Germany, in 1848. In the United States, the American Society of Mechanical Engineers (ASME) was formed in 1880, becoming the third such professional engineering society, after the American Society of Civil Engineers (1852) and the American Institute of Mining Engineers (1871). The first schools in the United States to offer an engineering education were the United States Military Academy in 1817, an institution now known as Norwich University in 1819, and Rensselaer Polytechnic Institute in 1825. Education in mechanical engineering has historically been based on a strong foundation in mathematics and science. Education Degrees in mechanical engineering are offered at various universities worldwide.
Mechanical engineering programs typically take four to five years of study depending on the place and university and result in a Bachelor of Engineering (B.Eng. or B.E.), Bachelor of Science (B.Sc. or B.S.), Bachelor of Science Engineering (B.Sc.Eng.), Bachelor of Technology (B.Tech.), Bachelor of Mechanical Engineering (B.M.E.), or Bachelor of Applied Science (B.A.Sc.) degree, in or with emphasis in mechanical engineering. In Spain, Portugal and most of South America, where neither B.S. nor B.Tech. programs have been adopted, the formal name for the degree is "Mechanical Engineer", and the course work is based on five or six years of training. In Italy the course work is based on five years of education, and training, but in order to qualify as an Engineer one has to pass a state exam at the end of the course. In Greece, the coursework is based on a five-year curriculum. In the United States, most undergraduate mechanical engineering programs are accredited by the Accreditation Board for Engineering and Technology (ABET) to ensure similar course requirements and standards among universities. The ABET web site lists 302 accredited mechanical engineering programs as of 11 March 2014. Mechanical engineering programs in Canada are accredited by the Canadian Engineering Accreditation Board (CEAB), and most other countries offering engineering degrees have similar accreditation societies. In Australia, mechanical engineering degrees are awarded as Bachelor of Engineering (Mechanical) or similar nomenclature, although there are an increasing number of specialisations. The degree takes four years of full-time study to achieve. To ensure quality in engineering degrees, Engineers Australia accredits engineering degrees awarded by Australian universities in accordance with the global Washington Accord. Before the degree can be awarded, the student must complete at least 3 months of on the job work experience in an engineering firm. Similar systems are also present in South Africa and are overseen by the Engineering Council of South Africa (ECSA). In India, to become an engineer, one needs to have an engineering degree like a B.Tech. or B.E., have a diploma in engineering, or by completing a course in an engineering trade like fitter from the Industrial Training Institute (ITIs) to receive a "ITI Trade Certificate" and also pass the All India Trade Test (AITT) with an engineering trade conducted by the National Council of Vocational Training (NCVT) by which one is awarded a "National Trade Certificate". A similar system is used in Nepal. Some mechanical engineers go on to pursue a postgraduate degree such as a Master of Engineering, Master of Technology, Master of Science, Master of Engineering Management (M.Eng.Mgt. or M.E.M.), a Doctor of Philosophy in engineering (Eng.D. or Ph.D.) or an engineer's degree. The master's and engineer's degrees may or may not include research. The Doctor of Philosophy includes a significant research component and is often viewed as the entry point to academia. The Engineer's degree exists at a few institutions at an intermediate level between the master's degree and the doctorate. Coursework Standards set by each country's accreditation society are intended to provide uniformity in fundamental subject material, promote competence among graduating engineers, and to maintain confidence in the engineering profession as a whole. 
Engineering programs in the U.S., for example, are required by ABET to show that their students can "work professionally in both thermal and mechanical systems areas." The specific courses required to graduate, however, may differ from program to program. Universities and institutes of technology will often combine multiple subjects into a single class or split a subject into multiple classes, depending on the faculty available and the university's major area(s) of research. The fundamental subjects required for mechanical engineering usually include: Mathematics (in particular, calculus, differential equations, and linear algebra) Basic physical sciences (including physics and chemistry) Statics and dynamics Strength of materials and solid mechanics Materials engineering, composites Thermodynamics, heat transfer, energy conversion, and HVAC Fuels, combustion, internal combustion engine Fluid mechanics (including fluid statics and fluid dynamics) Mechanism and Machine design (including kinematics and dynamics) Instrumentation and measurement Manufacturing engineering, technology, or processes Vibration, control theory and control engineering Hydraulics and Pneumatics Mechatronics and robotics Engineering design and product design Drafting, computer-aided design (CAD) and computer-aided manufacturing (CAM) Mechanical engineers are also expected to understand and be able to apply basic concepts from chemistry, physics, tribology, chemical engineering, civil engineering, and electrical engineering. All mechanical engineering programs include multiple semesters of mathematical classes including calculus, and advanced mathematical concepts including differential equations, partial differential equations, linear algebra, differential geometry, and statistics, among others. In addition to the core mechanical engineering curriculum, many mechanical engineering programs offer more specialized programs and classes, such as control systems, robotics, transport and logistics, cryogenics, fuel technology, automotive engineering, biomechanics, vibration, optics and others, if a separate department does not exist for these subjects. Most mechanical engineering programs also require varying amounts of research or community projects to gain practical problem-solving experience. In the United States it is common for mechanical engineering students to complete one or more internships while studying, though this is not typically mandated by the university. Cooperative education is another option. Future work skills research puts demand on study components that feed student's creativity and innovation. Job duties Mechanical engineers research, design, develop, build, and test mechanical and thermal devices, including tools, engines, and machines. Mechanical engineers typically do the following: Analyze problems to see how mechanical and thermal devices might help solve the problem. Design or redesign mechanical and thermal devices using analysis and computer-aided design. Develop and test prototypes of devices they design. Analyze the test results and change the design as needed. Oversee the manufacturing process for the device. Manage a team of professionals in specialized fields like mechanical drafting and designing, prototyping, 3D printing or/and CNC Machines specialists. Mechanical engineers design and oversee the manufacturing of many products ranging from medical devices to new batteries. 
They also design power-producing machines such as electric generators, internal combustion engines, and steam and gas turbines as well as power-using machines, such as refrigeration and air-conditioning systems. Like other engineers, mechanical engineers use computers to help create and analyze designs, run simulations and test how a machine is likely to work. License and regulation Engineers may seek licensure from a state, provincial, or national government. The purpose of this process is to ensure that engineers possess the necessary technical knowledge, real-world experience, and knowledge of the local legal system to practice engineering at a professional level. Once certified, the engineer is given the title of Professional Engineer (in the United States, Canada, Japan, South Korea, Bangladesh and South Africa), Chartered Engineer (in the United Kingdom, Ireland, India and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union). In the U.S., to become a licensed Professional Engineer (PE), an engineer must pass the comprehensive FE (Fundamentals of Engineering) exam, work a minimum of 4 years as an Engineering Intern (EI) or Engineer-in-Training (EIT), and pass the "Principles and Practice" or PE (Practicing Engineer or Professional Engineer) exams. The requirements and steps of this process are set forth by the National Council of Examiners for Engineering and Surveying (NCEES), composed of engineering and land surveying licensing boards representing all U.S. states and territories. In Australia (Queensland and Victoria) an engineer must be registered as a Professional Engineer within the state in which they practice, for example Registered Professional Engineer of Queensland or Victoria, RPEQ or RPEV, respectively. In the UK, current graduates require a BEng plus an appropriate master's degree or an integrated MEng degree, a minimum of four years of post-graduate on-the-job competency development, and a peer-reviewed project report to become a Chartered Mechanical Engineer (CEng, MIMechE) through the Institution of Mechanical Engineers. CEng MIMechE can also be obtained via an examination route administered by the City and Guilds of London Institute. In most developed countries, certain engineering tasks, such as the design of bridges, electric power plants, and chemical plants, must be approved by a professional engineer or a chartered engineer. "Only a licensed engineer, for instance, may prepare, sign, seal and submit engineering plans and drawings to a public authority for approval, or to seal engineering work for public and private clients." This requirement can be written into state and provincial legislation, such as in the Canadian provinces, for example Ontario's or Quebec's Engineer Act. In other countries, such as the UK, no such legislation exists; however, practically all certifying bodies maintain a code of ethics, independent of legislation, that they expect all members to abide by or risk expulsion. Salaries and workforce statistics The total number of engineers employed in the U.S. in 2015 was roughly 1.6 million. Of these, 278,340 were mechanical engineers (17.28%), the largest discipline by size. In 2012, the median annual income of mechanical engineers in the U.S. workforce was $80,580. The median income was highest when working for the government ($92,030), and lowest in education ($57,090). In 2014, the total number of mechanical engineering jobs was projected to grow 5% over the next decade.
As of 2009, the average starting salary was $58,800 with a bachelor's degree. Subdisciplines The field of mechanical engineering can be thought of as a collection of many mechanical engineering science disciplines. Several of these subdisciplines which are typically taught at the undergraduate level are listed below, with a brief explanation and the most common application of each. Some of these subdisciplines are unique to mechanical engineering, while others are a combination of mechanical engineering and one or more other disciplines. Most work that a mechanical engineer does uses skills and techniques from several of these subdisciplines, as well as specialized subdisciplines. Specialized subdisciplines, as used in this article, are more likely to be the subject of graduate studies or on-the-job training than undergraduate research. Several specialized subdisciplines are discussed in this section. Mechanics Mechanics is, in the most general sense, the study of forces and their effect upon matter. Typically, engineering mechanics is used to analyze and predict the acceleration and deformation (both elastic and plastic) of objects under known forces (also called loads) or stresses. Subdisciplines of mechanics include Statics, the study of non-moving bodies under known loads, how forces affect static bodies Dynamics, the study of how forces affect moving bodies. Dynamics includes kinematics (about movement, velocity, and acceleration) and kinetics (about forces and resulting accelerations). Mechanics of materials, the study of how different materials deform under various types of stress Fluid mechanics, the study of how fluids react to forces Kinematics, the study of the motion of bodies (objects) and systems (groups of objects), while ignoring the forces that cause the motion. Kinematics is often used in the design and analysis of mechanisms. Continuum mechanics, a method of applying mechanics that assumes that objects are continuous (rather than discrete) Mechanical engineers typically use mechanics in the design or analysis phases of engineering. If the engineering project were the design of a vehicle, statics might be employed to design the frame of the vehicle, in order to evaluate where the stresses will be most intense. Dynamics might be used when designing the car's engine, to evaluate the forces in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose appropriate materials for the frame and engine. Fluid mechanics might be used to design a ventilation system for the vehicle (see HVAC), or to design the intake system for the engine. Mechatronics and robotics Mechatronics is a combination of mechanics and electronics. It is an interdisciplinary branch of mechanical engineering, electrical engineering and software engineering that is concerned with integrating electrical and mechanical engineering to create hybrid automation systems. In this way, machines can be automated through the use of electric motors, servo-mechanisms, and other electrical systems in conjunction with special software. A common example of a mechatronics system is a CD-ROM drive. Mechanical systems open and close the drive, spin the CD and move the laser, while an optical system reads the data on the CD and converts it to bits. Integrated software controls the process and communicates the contents of the CD to the computer. 
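To make the interplay of sensing, computation, and actuation concrete, the following is a minimal illustrative sketch, written in Python, of the kind of feedback loop a mechatronic system might run; it is not drawn from any particular product, and the motor model, gain, time step, and setpoint are arbitrary assumptions chosen only for demonstration.

# Toy mechatronics example: a proportional controller drives a simulated
# motor toward a target position. All numbers are illustrative assumptions.

def simulate(target=90.0, gain=2.0, dt=0.01, steps=500):
    position = 0.0                       # current motor angle, degrees
    for _ in range(steps):
        error = target - position        # sensor reading compared with the setpoint
        position += gain * error * dt    # crude first-order model of the motor's response
    return position

if __name__ == "__main__":
    print("Final position:", round(simulate(), 2))   # converges toward 90 degrees

In a real system the "sensor reading" would come from an encoder or similar device and the "command" would drive a motor through power electronics, but the structure of the loop is the same.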
Robotics is the application of mechatronics to create robots, which are often used in industry to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot). Robots are used extensively in industrial automation engineering. They allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform economically, and ensure better quality. Many companies employ assembly lines of robots, especially in the automotive industry, and some factories are so robotized that they can run by themselves. Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential uses, ranging from recreation to domestic applications. Structural analysis Structural analysis is the branch of mechanical engineering (and also civil engineering) devoted to examining why and how objects fail, and to fixing those objects and improving their performance. Structural failures occur in two general modes: static failure and fatigue failure. Static structural failure occurs when, upon being loaded (having a force applied), the object being analyzed either breaks or is deformed plastically, depending on the criterion for failure. Fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. Fatigue failure occurs because of imperfections in the object: a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle (propagation) until the crack is large enough to cause ultimate failure. Failure is not simply defined as when a part breaks, however; it is defined as when a part does not operate as intended. Some systems, such as the perforated top sections of some plastic bags, are designed to break. If these systems do not break, failure analysis might be employed to determine the cause. Structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure. Engineers often use online documents and books such as those published by ASM to aid them in determining the type of failure and possible causes. Once theory is applied to a mechanical design, physical testing is often performed to verify calculated results. Structural analysis may be used in an office when designing parts, in the field to analyze failed parts, or in laboratories where parts might undergo controlled failure tests. Thermodynamics and thermo-science Thermodynamics is an applied science used in several branches of engineering, including mechanical and chemical engineering. At its simplest, thermodynamics is the study of energy, its use and transformation through a system. Typically, engineering thermodynamics is concerned with changing energy from one form to another. As an example, automotive engines convert chemical energy (enthalpy) from the fuel into heat, and then into mechanical work that eventually turns the wheels. Thermodynamics principles are used by mechanical engineers in the fields of heat transfer, thermofluids, and energy conversion. 
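As a small numerical illustration of these energy-conversion ideas, the Python sketch below computes the ideal (Carnot) efficiency limit of a heat engine operating between two reservoir temperatures and the maximum work obtainable from a given heat input; the temperatures and heat input are arbitrary example values, not data from any specific engine.

# Ideal heat-engine illustration: Carnot efficiency between a hot and a
# cold reservoir, and the work obtainable from a given heat input.
# All numbers below are arbitrary example values.

def carnot_efficiency(t_hot_k, t_cold_k):
    # Maximum possible thermal efficiency between two reservoir temperatures (kelvin).
    return 1.0 - t_cold_k / t_hot_k

t_hot = 900.0    # hot-side temperature, K (assumed)
t_cold = 300.0   # ambient temperature, K (assumed)
q_in = 50.0      # heat supplied per cycle, kJ (assumed)

eta = carnot_efficiency(t_hot, t_cold)
print(f"Carnot efficiency: {eta:.1%}")                       # 66.7%
print(f"Maximum work from {q_in} kJ of heat: {eta * q_in:.1f} kJ")

Real engines fall well short of this limit, but the calculation shows the kind of first-pass estimate thermodynamics provides.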
Mechanical engineers use thermo-science to design engines and power plants, heating, ventilation, and air-conditioning (HVAC) systems, heat exchangers, heat sinks, radiators, refrigeration, insulation, and others. Design and drafting Drafting or technical drawing is the means by which mechanical engineers design products and create instructions for manufacturing parts. A technical drawing can be a computer model or hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. A U.S. mechanical engineer or skilled worker who creates technical drawings may be referred to as a drafter or draftsman. Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions. Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Optionally, an engineer may also manually manufacture a part using the technical drawings. However, with the advent of computer numerically controlled (CNC) manufacturing, parts can now be fabricated without the need for constant technician input. Manual manufacturing is now generally limited to spray coating, surface finishing, and other processes that cannot economically or practically be done by a machine. Drafting is used in nearly every subdiscipline of mechanical engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD). Modern tools Many mechanical engineering companies, especially those in industrialized nations, have incorporated computer-aided engineering (CAE) programs into their existing design and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and the ease of use in designing mating interfaces and tolerances. Other CAE programs commonly used by mechanical engineers include product lifecycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict product response to expected loads, as well as fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM). Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. No physical prototype need be created until the design nears completion, allowing hundreds or thousands of designs to be evaluated, instead of a relative few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity, complex contact between mating parts, or non-Newtonian flows. As mechanical engineering begins to merge with other disciplines, as seen in mechatronics, multidisciplinary design optimization (MDO) is being used with other CAE programs to automate and improve the iterative design process. 
MDO tools wrap around existing CAE processes, allowing product evaluation to continue even after the analyst goes home for the day. They also use sophisticated optimization algorithms to more intelligently explore possible designs, often finding better, innovative solutions to difficult multidisciplinary design problems. Areas of research Mechanical engineers are constantly pushing the boundaries of what is physically possible in order to produce safer, cheaper, and more efficient machines and mechanical systems. Some technologies at the cutting edge of mechanical engineering are listed below (see also exploratory engineering). Micro electro-mechanical systems (MEMS) Micron-scale mechanical components such as springs, gears, fluidic and heat transfer devices are fabricated from a variety of substrate materials such as silicon, glass and polymers like SU8. Examples of MEMS components are the accelerometers that are used as car airbag sensors, modern cell phones, gyroscopes for precise positioning and microfluidic devices used in biomedical applications. Friction stir welding (FSW) Friction stir welding, a new type of welding, was discovered in 1991 by The Welding Institute (TWI). The innovative steady state (non-fusion) welding technique joins materials previously un-weldable, including several aluminum alloys. It plays an important role in the future construction of airplanes, potentially replacing rivets. Current uses of this technology to date include welding the seams of the aluminum main Space Shuttle external tank, Orion Crew Vehicle, Boeing Delta II and Delta IV Expendable Launch Vehicles and the SpaceX Falcon 1 rocket, armor plating for amphibious assault ships, and welding the wings and fuselage panels of the new Eclipse 500 aircraft from Eclipse Aviation among an increasingly growing pool of uses. Composites Composites or composite materials are a combination of materials which provide different physical characteristics than either material separately. Composite material research within mechanical engineering typically focuses on designing (and, subsequently, finding applications for) stronger or more rigid materials while attempting to reduce weight, susceptibility to corrosion, and other undesirable factors. Carbon fiber reinforced composites, for instance, have been used in such diverse applications as spacecraft and fishing rods. Mechatronics Mechatronics is the synergistic combination of mechanical engineering, electronic engineering, and software engineering. The discipline of mechatronics began as a way to combine mechanical principles with electrical engineering. Mechatronic concepts are used in the majority of electro-mechanical systems. Typical electro-mechanical sensors used in mechatronics are strain gauges, thermocouples, and pressure transducers. Nanotechnology At the smallest scales, mechanical engineering becomes nanotechnology—one speculative goal of which is to create a molecular assembler to build molecules and materials via mechanosynthesis. For now that goal remains within exploratory engineering. Areas of current mechanical engineering research in nanotechnology include nanofilters, nanofilms, and nanostructures, among others. Finite element analysis Finite Element Analysis is a computational tool used to estimate stress, strain, and deflection of solid bodies. It uses a mesh setup with user-defined sizes to measure physical quantities at a node. The more nodes there are, the higher the precision. 
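To give a flavour of what the method does, the following Python sketch assembles and solves a very small one-dimensional finite element model: an axial bar fixed at one end and pulled at the other, discretized into a few two-node elements. The material properties, geometry, and load are invented example values, and commercial FEA packages are vastly more general.

# Minimal 1-D finite element example: axial bar, fixed at the left end,
# loaded by a force at the right end. All numbers are illustrative.
import numpy as np

E = 200e9        # Young's modulus, Pa (steel-like, assumed)
A = 1e-4         # cross-sectional area, m^2 (assumed)
L = 1.0          # total bar length, m
F = 10e3         # applied end load, N
n_elems = 4      # number of equal-length elements

n_nodes = n_elems + 1
le = L / n_elems
k_local = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Assemble the global stiffness matrix from the element matrices.
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elems):
    K[e:e + 2, e:e + 2] += k_local

# Load vector: point force at the free (last) node.
f = np.zeros(n_nodes)
f[-1] = F

# Apply the fixed boundary condition at node 0 by removing its row and column.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

print("Nodal displacements (m):", u)
print("Tip displacement (m):", u[-1])   # equals F*L/(E*A) = 5e-4 m for this simple case

The same assemble-and-solve pattern, generalized to two or three dimensions and to millions of nodes, is what commercial FEA codes carry out.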
This field is not new, as the basis of Finite Element Analysis (FEA) or the Finite Element Method (FEM) dates back to 1941, but the evolution of computers has made FEA/FEM a viable option for the analysis of structural problems. Many commercial software applications such as NASTRAN, ANSYS, and ABAQUS are widely used in industry for research and the design of components. Some 3D modeling and CAD software packages have added FEA modules. More recently, cloud simulation platforms such as SimScale have become more common. Other techniques, such as the finite difference method (FDM) and the finite volume method (FVM), are employed to solve problems relating to heat and mass transfer, fluid flows, fluid–surface interaction, etc. Biomechanics Biomechanics is the application of mechanical principles to biological systems, such as humans, animals, plants, organs, and cells. Biomechanics also aids in creating prosthetic limbs and artificial organs for humans. Biomechanics is closely related to engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems. In the past decade, reverse engineering of materials found in nature, such as bone matter, has gained funding in academia. The structure of bone matter is optimized for its purpose of bearing a large amount of compressive stress per unit weight. The goal is to replace crude steel with bio-material for structural design. Over the past decade, the finite element method (FEM) has also entered the biomedical sector, highlighting further engineering aspects of biomechanics. FEM has since established itself as an alternative to in vivo surgical assessment and gained wide acceptance in academia. The main advantage of computational biomechanics lies in its ability to determine the endo-anatomical response of an anatomy without being subject to ethical restrictions. This has led FE modelling to the point of becoming ubiquitous in several fields of biomechanics, while several projects have even adopted an open source philosophy (e.g. BioSpine). Computational fluid dynamics Computational fluid dynamics, usually abbreviated as CFD, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as turbulent flows. Initial validation of such software is performed using a wind tunnel, with the final validation coming in full-scale testing, e.g. flight tests. Acoustical engineering Acoustical engineering is one of many sub-disciplines of mechanical engineering and is the application of acoustics, the study of sound and vibration. These engineers work to reduce noise pollution in mechanical devices and in buildings by soundproofing or removing sources of unwanted noise. The study of acoustics can range from designing a more efficient hearing aid, microphone, headphone, or recording studio to enhancing the sound quality of an orchestra hall. Acoustical engineering also deals with the vibration of different mechanical systems. 
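As a simple illustration of the vibration analysis just mentioned, the following Python sketch computes the undamped natural frequency of a single-degree-of-freedom mass–spring system, a textbook model often used as a first approximation for machine vibration; the mass and stiffness values are arbitrary assumptions.

# Natural frequency of a mass-spring system: omega_n = sqrt(k/m).
# The mass and stiffness below are arbitrary example values.
import math

m = 2.0       # mass, kg (assumed)
k = 8000.0    # spring stiffness, N/m (assumed)

omega_n = math.sqrt(k / m)            # natural frequency, rad/s
f_n = omega_n / (2.0 * math.pi)       # natural frequency, Hz

print(f"omega_n = {omega_n:.1f} rad/s, f_n = {f_n:.1f} Hz")

Keeping excitation frequencies away from such natural frequencies is one of the basic goals of vibration engineering.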
Related fields Manufacturing engineering, aerospace engineering, automotive engineering and marine engineering are sometimes grouped with mechanical engineering. A bachelor's degree in these areas typically differs from a mechanical engineering degree by only a few specialized classes.
Technology
Technology: General
null
19544
https://en.wikipedia.org/wiki/Microevolution
Microevolution
Microevolution is the change in allele frequencies that occurs over time within a population. This change is due to four different processes: mutation, selection (natural and artificial), gene flow and genetic drift. This change happens over a relatively short (in evolutionary terms) amount of time compared to the changes termed macroevolution. Population genetics is the branch of biology that provides the mathematical structure for the study of the process of microevolution. Ecological genetics concerns itself with observing microevolution in the wild. Typically, observable instances of evolution are examples of microevolution; for example, bacterial strains that have antibiotic resistance. Microevolution provides the raw material for macroevolution. Difference from macroevolution Macroevolution is guided by sorting of interspecific variation ("species selection"), as opposed to sorting of intraspecific variation in microevolution. Species selection may occur as (a) effect-macroevolution, where organism-level traits (aggregate traits) affect speciation and extinction rates, and (b) strict-sense species selection, where species-level traits (e.g. geographical range) affect speciation and extinction rates. Macroevolution does not produce evolutionary novelties, but it determines their proliferation within the clades in which they evolved, and it adds species-level traits as non-organismic factors of sorting to this process. Four processes Mutation Mutations are changes in the DNA sequence of a cell's genome and are caused by radiation, viruses, transposons and mutagenic chemicals, as well as errors that occur during meiosis or DNA replication. Errors are introduced particularly often in the process of DNA replication, in the polymerization of the second strand. These errors can also be induced by the organism itself, by cellular processes such as hypermutation. Mutations can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low—1 error in every 10–100 million bases—due to the proofreading ability of DNA polymerases. (Without proofreading error rates are a thousandfold higher; because many viruses rely on DNA and RNA polymerases that lack proofreading ability, they experience higher mutation rates.) Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure. Chemical damage to DNA occurs naturally as well, and cells use DNA repair mechanisms to repair mismatches and breaks in DNA—nevertheless, the repair sometimes fails to return the DNA to its original sequence. In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations. Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment making some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence—duplications, inversions or deletions of entire regions, or the accidental exchanging of whole parts between different chromosomes (called translocation). Mutation can result in several different types of change in DNA sequences; these can either have no effect, alter the product of a gene, or prevent the gene from functioning. 
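As a toy illustration of mutation as a random process, the following Python sketch introduces point mutations into a short DNA sequence at a fixed per-base probability; the sequence and the (unrealistically high) mutation rate are invented purely for demonstration and do not represent real genomic data.

# Toy simulation of point mutations: each base mutates independently with
# a fixed probability. The sequence and rate are illustrative only.
import random

def mutate(sequence, rate, rng):
    bases = "ACGT"
    out = []
    for base in sequence:
        if rng.random() < rate:
            # replace the base with one of the other three bases
            out.append(rng.choice([b for b in bases if b != base]))
        else:
            out.append(base)
    return "".join(out)

rng = random.Random(1)
original = "ATGCGTACGTTAGCATGCAA"
mutant = mutate(original, rate=0.05, rng=rng)
changed = sum(1 for a, b in zip(original, mutant) if a != b)
print(original)
print(mutant)
print("bases changed:", changed)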
Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70 percent of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial. Due to the damaging effects that mutations can have on cells, organisms have evolved mechanisms such as DNA repair to remove mutations. Therefore, the optimal mutation rate for a species is a trade-off between costs of a high mutation rate, such as deleterious mutations, and the metabolic costs of maintaining systems to reduce the mutation rate, such as DNA repair enzymes. Viruses that use RNA as their genetic material have rapid mutation rates, which can be an advantage since these viruses will evolve constantly and rapidly, and thus evade the defensive responses of e.g. the human immune system. Mutations can involve large sections of DNA becoming duplicated, usually through genetic recombination. These duplications are a major source of raw material for evolving new genes, with tens to hundreds of genes duplicated in animal genomes every million years. Most genes belong to larger families of genes of shared ancestry. Novel genes are produced by several methods, commonly through the duplication and mutation of an ancestral gene, or by recombining parts of different genes to form new combinations with new functions. Here, domains act as modules, each with a particular and independent function, that can be mixed together to produce genes encoding new proteins with novel properties. For example, the human eye uses four genes to make structures that sense light: three for color vision and one for night vision; all four arose from a single ancestral gene. Another advantage of duplicating a gene (or even an entire genome) is that this increases redundancy; this allows one gene in the pair to acquire a new function while the other copy performs the original function. Other types of mutation occasionally create new genes from previously noncoding DNA. Selection Selection is the process by which heritable traits that make it more likely for an organism to survive and successfully reproduce become more common in a population over successive generations. It is sometimes valuable to distinguish between naturally occurring selection, natural selection, and selection that is a manifestation of choices made by humans, artificial selection. This distinction is rather diffuse. Natural selection is nevertheless the dominant part of selection. The natural genetic variation within a population of organisms means that some individuals will survive more successfully than others in their current environment. Factors which affect reproductive success are also important, an issue which Charles Darwin developed in his ideas on sexual selection. Natural selection acts on the phenotype, or the observable characteristics of an organism, but the genetic (heritable) basis of any phenotype which gives a reproductive advantage will become more common in a population (see allele frequency). Over time, this process can result in adaptations that specialize organisms for particular ecological niches and may eventually result in the speciation (the emergence of new species). Natural selection is one of the cornerstones of modern biology. 
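The effect of selection on allele frequencies can be made concrete with the standard one-locus, two-allele viability model from population genetics. The Python sketch below iterates the textbook recursion for the frequency p of allele A given relative fitnesses of the three genotypes; the fitness values and starting frequency are arbitrary example numbers.

# One-locus, two-allele selection model (random mating, discrete generations):
# p' = (p^2 * w_AA + p*q * w_Aa) / w_bar, where w_bar is the mean fitness.
# Fitness values and the starting frequency are illustrative assumptions.

def next_generation(p, w_AA, w_Aa, w_aa):
    q = 1.0 - p
    w_bar = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa   # population mean fitness
    return (p*p*w_AA + p*q*w_Aa) / w_bar

p = 0.01   # initial frequency of the favoured allele A
for gen in range(0, 101, 20):
    print(f"generation {gen:3d}: p = {p:.3f}")
    for _ in range(20):
        p = next_generation(p, w_AA=1.0, w_Aa=0.95, w_aa=0.90)

Even a modest fitness advantage drives the favoured allele from rarity toward high frequency over tens of generations, which is microevolution in its simplest quantitative form.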
The term was introduced by Darwin in his groundbreaking 1859 book On the Origin of Species, in which natural selection was described by analogy to artificial selection, a process by which animals and plants with traits considered desirable by human breeders are systematically favored for reproduction. The concept of natural selection was originally developed in the absence of a valid theory of heredity; at the time of Darwin's writing, nothing was known of modern genetics. The union of traditional Darwinian evolution with subsequent discoveries in classical and molecular genetics is termed the modern evolutionary synthesis. Natural selection remains the primary explanation for adaptive evolution. Genetic drift Genetic drift is the change in the relative frequency in which a gene variant (allele) occurs in a population due to random sampling. That is, the alleles in the offspring in the population are a random sample of those in the parents. And chance has a role in determining whether a given individual survives and reproduces. A population's allele frequency is the fraction or percentage of its gene copies compared to the total number of gene alleles that share a particular form. Genetic drift is an evolutionary process which leads to changes in allele frequencies over time. It may cause gene variants to disappear completely, and thereby reduce genetic variability. In contrast to natural selection, which makes gene variants more common or less common depending on their reproductive success, the changes due to genetic drift are not driven by environmental or adaptive pressures, and may be beneficial, neutral, or detrimental to reproductive success. The effect of genetic drift is larger in small populations, and smaller in large populations. Vigorous debates wage among scientists over the relative importance of genetic drift compared with natural selection. Ronald Fisher held the view that genetic drift plays at the most a minor role in evolution, and this remained the dominant view for several decades. In 1968 Motoo Kimura rekindled the debate with his neutral theory of molecular evolution which claims that most of the changes in the genetic material are caused by genetic drift. The predictions of neutral theory, based on genetic drift, do not fit recent data on whole genomes well: these data suggest that the frequencies of neutral alleles change primarily due to selection at linked sites, rather than due to genetic drift by means of sampling error. Gene flow Gene flow is the exchange of genes between populations, which are usually of the same species. Examples of gene flow within a species include the migration and then breeding of organisms, or the exchange of pollen. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Migration into or out of a population can change allele frequencies, as well as introducing genetic variation into a population. Immigration may add new genetic material to the established gene pool of a population. Conversely, emigration may remove genetic material. As barriers to reproduction between two diverging populations are required for the populations to become new species, gene flow may slow this process by spreading genetic differences between the populations. Gene flow is hindered by mountain ranges, oceans and deserts or even man-made structures such as the Great Wall of China, which has hindered the flow of plant genes. 
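A minimal quantitative sketch of gene flow is the classic continent–island (one-island) migration model, in which a fraction m of an island population is replaced by migrants each generation. The Python example below iterates the standard recursion; the migration rate and allele frequencies are invented example values.

# Continent-island model of gene flow: each generation a fraction m of the
# island population is replaced by migrants carrying allele frequency p_m.
# Recursion: p(t+1) = (1 - m) * p(t) + m * p_m. Values are illustrative.

def island_frequency(p0, p_migrant, m, generations):
    p = p0
    history = [p]
    for _ in range(generations):
        p = (1.0 - m) * p + m * p_migrant
        history.append(p)
    return history

freqs = island_frequency(p0=0.9, p_migrant=0.1, m=0.05, generations=50)
for gen in (0, 10, 25, 50):
    print(f"generation {gen:2d}: island allele frequency = {freqs[gen]:.3f}")

The island's allele frequency is pulled steadily toward that of the migrant pool, illustrating how ongoing gene flow homogenizes diverging populations.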
Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile, due to the two different sets of chromosomes being unable to pair up during meiosis. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridization in developing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example. Hybridization is, however, an important means of speciation in plants, since polyploidy (having more than two copies of each chromosome) is tolerated in plants more readily than in animals. Polyploidy is important in hybrids as it allows reproduction, with the two different sets of chromosomes each being able to pair with an identical partner during meiosis. Polyploid hybrids also have more genetic diversity, which allows them to avoid inbreeding depression in small populations. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as when one bacteria acquires resistance genes it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean beetle Callosobruchus chinensis may also have occurred. An example of larger-scale transfers are the eukaryotic bdelloid rotifers, which appear to have received a range of genes from bacteria, fungi, and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains. Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and prokaryotes, during the acquisition of chloroplasts and mitochondria. Gene flow is the transfer of alleles from one population to another. Migration into or out of a population may be responsible for a marked change in allele frequencies. Immigration may also result in the addition of new genetic variants to the established gene pool of a particular species or population. There are a number of factors that affect the rate of gene flow between different populations. One of the most significant factors is mobility, as greater mobility of an individual tends to give it greater migratory potential. Animals tend to be more mobile than plants, although pollen and seeds may be carried great distances by animals or wind. Maintained gene flow between two populations can also lead to a combination of the two gene pools, reducing the genetic variation between the two groups. It is for this reason that gene flow strongly acts against speciation, by recombining the gene pools of the groups, and thus, repairing the developing differences in genetic variation that would have led to full speciation and creation of daughter species. For example, if a species of grass grows on both sides of a highway, pollen is likely to be transported from one side to the other and vice versa. 
If this pollen is able to fertilise the plant where it ends up and produce viable offspring, then the alleles in the pollen have effectively been able to move from the population on one side of the highway to the other. Origin and extended use of the term Origin The term microevolution was first used by botanist Robert Greenleaf Leavitt in the journal Botanical Gazette in 1909, addressing what he called the "mystery" of how formlessness gives rise to form. ..The production of form from formlessness in the egg-derived individual, the multiplication of parts and the orderly creation of diversity among them, in an actual evolution, of which anyone may ascertain the facts, but of which no one has dissipated the mystery in any significant measure. This microevolution forms an integral part of the grand evolution problem and lies at the base of it, so that we shall have to understand the minor process before we can thoroughly comprehend the more general one... However, Leavitt was using the term to describe what we would now call developmental biology; it was not until Russian Entomologist Yuri Filipchenko used the terms "macroevolution" and "microevolution" in 1927 in his German language work, Variabilität und Variation, that it attained its modern usage. The term was later brought into the English-speaking world by Filipchenko's student Theodosius Dobzhansky in his book Genetics and the Origin of Species (1937). Use in creationism In young Earth creationism and baraminology a central tenet is that evolution can explain diversity in a limited number of created kinds which can interbreed (which they call "microevolution") while the formation of new "kinds" (which they call "macroevolution") is impossible. This acceptance of "microevolution" only within a "kind" is also typical of old Earth creationism. Scientific organizations such as the American Association for the Advancement of Science describe microevolution as small scale change within species, and macroevolution as the formation of new species, but otherwise not being different from microevolution. In macroevolution, an accumulation of microevolutionary changes leads to speciation. The main difference between the two processes is that one occurs within a few generations, whilst the other takes place over thousands of years (i.e. a quantitative difference). Essentially they describe the same process; although evolution beyond the species level results in beginning and ending generations which could not interbreed, the intermediate generations could. Opponents to creationism argue that changes in the number of chromosomes can be accounted for by intermediate stages in which a single chromosome divides in generational stages, or multiple chromosomes fuse, and cite the chromosome difference between humans and the other great apes as an example. Creationists insist that since the actual divergence between the other great apes and humans was not observed, the evidence is circumstantial. Describing the fundamental similarity between macro and microevolution in his authoritative textbook "Evolutionary Biology," biologist Douglas Futuyma writes, Contrary to the claims of some antievolution proponents, evolution of life forms beyond the species level (i.e. speciation) has indeed been observed and documented by scientists on numerous occasions. 
In creation science, creationists accepted speciation as occurring within a "created kind" or "baramin", but objected to what they called "third level-macroevolution" of a new genus or higher rank in taxonomy. There is ambiguity in the ideas as to where to draw a line on "species", "created kinds", and what events and lineages fall within the rubric of microevolution or macroevolution.
Biology and health sciences
Basics_4
Biology
19545
https://en.wikipedia.org/wiki/MySQL
MySQL
MySQL () is an open-source relational database management system (RDBMS). Its name is a combination of "My", the name of co-founder Michael Widenius's daughter My, and "SQL", the acronym for Structured Query Language. A relational database organizes data into one or more data tables in which data may be related to each other; these relations help structure the data. SQL is a language that programmers use to create, modify and extract data from the relational database, as well as control user access to the database. In addition to relational databases and SQL, an RDBMS like MySQL works with an operating system to implement a relational database in a computer's storage system, manages users, allows for network access and facilitates testing database integrity and creation of backups. MySQL is free and open-source software under the terms of the GNU General Public License, and is also available under a variety of proprietary licenses. MySQL was owned and sponsored by the Swedish company MySQL AB, which was bought by Sun Microsystems (now Oracle Corporation). In 2010, when Oracle acquired Sun, Widenius forked the open-source MySQL project to create MariaDB. MySQL has stand-alone clients that allow users to interact directly with a MySQL database using SQL, but more often, MySQL is used with other programs to implement applications that need relational database capability. MySQL is a component of the LAMP web application software stack (and others), which is an acronym for Linux, Apache, MySQL, Perl/PHP/Python. MySQL is used by many database-driven web applications, including Drupal, Joomla, phpBB, and WordPress. MySQL is also used by many popular websites, including Facebook, Flickr, MediaWiki, Twitter, and YouTube. Overview MySQL is written in C and C++. Its SQL parser is written in yacc, but it uses a home-brewed lexical analyzer. MySQL works on many system platforms, including AIX, BSDi, FreeBSD, HP-UX, ArcaOS, eComStation, IBM i, IRIX, Linux, macOS, Microsoft Windows, NetBSD, Novell NetWare, OpenBSD, OpenSolaris, OS/2 Warp, QNX, Oracle Solaris, Symbian, SunOS, SCO OpenServer, SCO UnixWare, Sanos and Tru64. A port of MySQL to OpenVMS also exists. The MySQL server software itself and the client libraries use dual-licensing distribution. They are offered under GPL version 2, or a proprietary license. Support can be obtained from the official manual. Free support additionally is available in different IRC channels and forums. Oracle offers paid support via its MySQL Enterprise products. They differ in the scope of services and in price. Additionally, a number of third party organisations exist to provide support and services. MySQL has received positive reviews, and reviewers noticed it "performs extremely well in the average case" and that the "developer interfaces are there, and the documentation (not to mention feedback in the real world via Web sites and the like) is very, very good". It has also been tested to be a "fast, stable and true multi-user, multi-threaded SQL database server". History MySQL was created by a Swedish company, MySQL AB, founded by Swedes David Axmark and Allan Larsson, along with Finnish Michael "Monty" Widenius. Original development of MySQL by Widenius and Axmark began in 1994. The first version of MySQL appeared on 23 May 1995. It was initially created for personal usage from mSQL based on the low-level language ISAM, which the creators considered too slow and inflexible. They created a new SQL interface, while keeping the same API as mSQL. 
By keeping the API consistent with the mSQL system, many developers were able to use MySQL instead of the (proprietarily licensed) mSQL antecedent. Milestones Additional milestones in MySQL development included: First internal release on 23 May 1995 Version 3.19: End of 1996, from www.tcx.se Version 3.20: January 1997 Windows version was released on 8 January 1998 for Windows 95 and NT Version 3.21: production release 1998, from www.mysql.com Version 3.22: alpha, beta from 1998 Version 3.23: beta from June 2000, production release 22 January 2001 Version 4.0: beta from August 2002, production release March 2003 (unions). Version 4.1: beta from June 2004, production release October 2004 (R-trees and B-trees, subqueries, prepared statements). Version 5.0: beta from March 2005, production release October 2005 (cursors, stored procedures, triggers, views, XA transactions). The developer of the Federated Storage Engine states that "The Federated Storage Engine is a proof-of-concept storage engine", but the main distributions of MySQL version 5.0 included it and turned it on by default. Documentation of some of the short-comings appears in "MySQL Federated Tables: The Missing Manual". Sun Microsystems acquired MySQL AB in 2008. Version 5.1: production release 27 November 2008 (event scheduler, partitioning, plugin API, row-based replication, server log tables) Version 5.1 contained 20 known crashing and wrong result bugs in addition to the 35 present in version 5.0 (almost all fixed as of release 5.1.51). MySQL 5.1 and 6.0-alpha showed poor performance when used for data warehousing partly due to its inability to utilize multiple CPU cores for processing a single query. Oracle acquired Sun Microsystems on 27 January 2010. The day Oracle announced the purchase of Sun, Michael "Monty" Widenius forked MySQL, launching MariaDB, and took a swath of MySQL developers with him. MySQL Server 5.5 was generally available (). Enhancements and features include: The default storage engine is InnoDB, which supports transactions and referential integrity constraints. Improved InnoDB I/O subsystem Improved SMP support Semisynchronous replication. SIGNAL and RESIGNAL statement in compliance with the SQL standard. Support for supplementary Unicode character sets utf16, utf32, and utf8mb4. New options for user-defined partitioning. MySQL Server 6.0.11-alpha was announced on 22 May 2009 as the last release of the 6.0 line. Future MySQL Server development uses a New Release Model. Features developed for 6.0 are being incorporated into future releases. The general availability of MySQL 5.6 was announced in February 2013. New features included performance improvements to the query optimizer, higher transactional throughput in InnoDB, new NoSQL-style memcached APIs, improvements to partitioning for querying and managing very large tables, column type that correctly stores milliseconds, improvements to replication, and better performance monitoring by expanding the data available through the . The InnoDB storage engine also included support for full-text search and improved group commit performance. The general availability of MySQL 5.7 was announced in October 2015. As of MySQL 5.7.8, August 2015, MySQL supports a native JSON data type defined by RFC 7159. MySQL Server 8.0 was announced in April 2018, including NoSQL Document Store, atomic and crash safe DDL sentences and JSON Extended syntax, new functions, such as JSON table functions, improved sorting, and partial updates. 
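To illustrate the native JSON support mentioned above, the sketch below stores and queries a JSON document from Python. It assumes a reachable MySQL 5.7.8 or later server, an existing database named example (a hypothetical name), and the third-party mysql-connector-python driver; the credentials and table name are placeholders, not part of MySQL itself.

# Minimal illustration of MySQL's native JSON column type from Python.
# Assumes a MySQL 5.7.8+ server, a database called "example" (hypothetical),
# and the mysql-connector-python package; credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app", password="secret",
                               database="example")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS parts (id INT PRIMARY KEY, attrs JSON)")
cur.execute("REPLACE INTO parts (id, attrs) VALUES (%s, %s)",
            (1, '{"material": "steel", "mass_kg": 2.5}'))
conn.commit()

# JSON_EXTRACT (or the -> operator) pulls values out of the stored document.
cur.execute("SELECT JSON_EXTRACT(attrs, '$.material') FROM parts WHERE id = 1")
print(cur.fetchone()[0])   # prints the JSON value "steel"

cur.close()
conn.close()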
The previous milestone release, MySQL Server 8.0.0-dmr, was announced on 12 September 2016. MySQL was declared DBMS of the year 2019 by the DB-Engines ranking. Release history Work on version 6 stopped after the Sun Microsystems acquisition. The MySQL Cluster product uses version 7. The decision was made to jump to version 8 as the next major version number. Legal disputes and acquisitions On 15 June 2001, NuSphere sued MySQL AB, TcX DataKonsult AB and its original authors Michael ("Monty") Widenius and David Axmark in U.S. District Court in Boston for "breach of contract, tortious interference with third party contracts and relationships and unfair competition". In 2002, MySQL AB sued Progress NuSphere for copyright and trademark infringement in a United States district court. NuSphere had allegedly violated MySQL AB's copyright by linking MySQL's GPL'ed code with the NuSphere Gemini table without being in compliance with the license. After a preliminary hearing before Judge Patti Saris on 27 February 2002, the parties entered settlement talks and eventually settled. After the hearing, FSF commented that "Judge Saris made clear that she sees the GNU GPL to be an enforceable and binding license." In October 2005, Oracle Corporation acquired Innobase OY, the Finnish company that developed the third-party InnoDB storage engine that allows MySQL to provide such functionality as transactions and foreign keys. After the acquisition, an Oracle press release mentioned that the contracts that make the company's software available to MySQL AB would be due for renewal (and presumably renegotiation) some time in 2006. During the MySQL Users Conference in April 2006, MySQL AB issued a press release that confirmed that MySQL AB and Innobase OY agreed to a "multi-year" extension of their licensing agreement. In February 2006, Oracle Corporation acquired Sleepycat Software, makers of the Berkeley DB, a database engine providing the basis for another MySQL storage engine. This had little effect, as Berkeley DB was not widely used, and was dropped (due to lack of use) in MySQL 5.1.12, a pre-GA release of MySQL 5.1 released in October 2006. In January 2008, Sun Microsystems bought MySQL AB for $1 billion. In April 2009, Oracle Corporation entered into an agreement to purchase Sun Microsystems, then owners of the MySQL copyright and trademark. Sun's board of directors unanimously approved the deal. It was also approved by Sun's shareholders, and by the U.S. government on 20 August 2009. On 14 December 2009, Oracle pledged to continue to enhance MySQL as it had done for the previous four years. A movement against Oracle's acquisition of MySQL AB, to "Save MySQL" from Oracle, was started by one of the MySQL AB founders, Monty Widenius. The petition of 50,000+ developers and users called upon the European Commission to block approval of the acquisition. At the same time, some Free Software opinion leaders (including Pamela Jones of Groklaw, Jan Wildeboer and Carlo Piana, who also acted as co-counsel in the merger regulation procedure) advocated for the unconditional approval of the merger. As part of the negotiations with the European Commission, Oracle committed that the MySQL server would continue until at least 2015 to use the dual-licensing strategy long used by MySQL AB, with proprietary and GPL versions available. EU antitrust regulators had been "pressuring it to divest MySQL as a condition for approval of the merger". 
But the US Department of Justice, at the request of Oracle, pressured the EU to approve the merger unconditionally. The European Commission eventually unconditionally approved Oracle's acquisition of MySQL AB on 21 January 2010. In January 2010, before Oracle's acquisition of MySQL AB, Monty Widenius started a GPL-only fork, MariaDB. MariaDB is based on the same code base as MySQL server 5.5 and aims to maintain compatibility with Oracle-provided versions. Features MySQL is offered under two different editions: the open source MySQL Community Server and the proprietary Enterprise Server. MySQL Enterprise Server is differentiated by a series of proprietary extensions which install as server plugins, but otherwise shares the version numbering system and is built from the same code base. Major features as available in MySQL 5.6: A broad subset of ANSI SQL 99, as well as extensions Cross-platform support Stored procedures, using a procedural language that closely adheres to SQL/PSM Triggers Cursors Updatable views Online Data Definition Language (DDL) when using the InnoDB Storage Engine. Information schema Performance Schema that collects and aggregates statistics about server execution and query performance for monitoring purposes. A set of SQL Mode options to control runtime behavior, including a strict mode to better adhere to SQL standards. X/Open XA distributed transaction processing (DTP) support; two phase commit as part of this, using the default InnoDB storage engine Transactions with savepoints when using the default InnoDB Storage Engine. The NDB Cluster Storage Engine also supports transactions. ACID compliance when using InnoDB and NDB Cluster Storage Engines SSL support Query caching Sub-SELECTs (i.e. nested SELECTs) Built-in replication support Asynchronous replication: master-slave from one master to many slaves or many masters to one slave Semi synchronous replication: Master to slave replication where the master waits on replication Synchronous replication: Multi-master replication is provided in MySQL Cluster. Virtual Synchronous: Self managed groups of MySQL servers with multi master support can be done using: Galera Cluster or the built in Group Replication plugin Full-text indexing and searching Embedded database library Unicode support Partitioned tables with pruning of partitions in optimizer Shared-nothing clustering through MySQL Cluster Multiple storage engines, allowing one to choose the one that is most effective for each table in the application. Native storage engines InnoDB, MyISAM, Merge, Memory (heap), Federated, Archive, CSV, Blackhole, NDB Cluster. Commit grouping, gathering multiple transactions from multiple connections together to increase the number of commits per second. The developers release minor updates of the MySQL Server approximately every two months. The sources can be obtained from MySQL's website or from MySQL's GitHub repository, both under the GPL license. Limitations When using some storage engines other than the default of InnoDB, MySQL does not comply with the full SQL standard for some of the implemented functionality, including foreign key references. Check constraints are parsed but ignored by all storage engines before MySQL version 8.0.15. Up until MySQL 5.7, triggers are limited to one per action / timing, meaning that at most one trigger can be defined to be executed after an operation, and one before on the same table. No triggers can be defined on views. 
Before MySQL 8.0.28, built-in date and time functions did not correctly handle times after 03:14:07 UTC on 19 January 2038 (the year 2038 problem). In 2017, an attempt to solve the problem was submitted, but it was not used for the final solution that was shipped in 2022. Deployment MySQL can be built and installed manually from source code, but it is more commonly installed from a binary package unless special customizations are required. On most Linux distributions, the package management system can download and install MySQL with minimal effort, though further configuration is often required to adjust security and optimization settings. Though MySQL began as a low-end alternative to more powerful proprietary databases, it has gradually evolved to support higher-scale needs as well. It is still most commonly used in small to medium scale single-server deployments, either as a component in a LAMP-based web application or as a standalone database server. Much of MySQL's appeal originates in its relative simplicity and ease of use, which is enabled by an ecosystem of open source tools such as phpMyAdmin. In the medium range, MySQL can be scaled by deploying it on more powerful hardware, such as a multi-processor server with gigabytes of memory. There are, however, limits to how far performance can scale on a single server ('scaling up'), so on larger scales, multi-server MySQL ('scaling out') deployments are required to provide improved performance and reliability. A typical high-end configuration can include a powerful master database which handles data write operations and is replicated to multiple slaves that handle all read operations. The master server continually pushes binlog events to connected slaves, so in the event of failure a slave can be promoted to become the new master, minimizing downtime. Further improvements in performance can be achieved by caching the results from database queries in memory using memcached, or breaking down a database into smaller chunks called shards which can be spread across a number of distributed server clusters. High availability software Oracle MySQL offers a high availability solution with a mix of tools including the MySQL Router and the MySQL Shell. They are based on Group Replication, an open-source plugin. MariaDB offers a similar set of products. Cloud deployment MySQL can also be run on cloud computing platforms such as Microsoft Azure, Amazon Elastic Compute Cloud, and Oracle Cloud Infrastructure. Some common deployment models for MySQL on the cloud are: Virtual machine image In this implementation, cloud users can upload a machine image of their own with MySQL installed, or use a ready-made machine image with an optimized installation of MySQL on it, such as the one provided by Amazon EC2. MySQL as a service Some cloud platforms offer MySQL "as a service". In this configuration, application owners do not have to install and maintain the MySQL database on their own. Instead, the database service provider takes responsibility for installing and maintaining the database, and application owners pay according to their usage. Notable cloud-based MySQL services are Amazon Relational Database Service, Oracle MySQL HeatWave Database Service, Azure Database for MySQL, Rackspace, HP Converged Cloud, Heroku, and Jelastic. In this model the database service provider takes responsibility for maintaining the host and database. 
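As a rough sketch of how an application might exploit the primary/replica (master/slave) layout described above, the Python fragment below sends writes to the primary server and reads to a replica. The host names, credentials, and the orders table are hypothetical placeholders, and the example again assumes the third-party mysql-connector-python driver; production systems usually delegate this routing to middleware such as MySQL Router or a connection proxy, and must account for replication lag.

# Illustrative read/write split over a replicated MySQL deployment.
# Host names, credentials, and the "orders" table are hypothetical.
import mysql.connector

primary = mysql.connector.connect(host="db-primary.example.com", user="app",
                                  password="secret", database="shop")
replica = mysql.connector.connect(host="db-replica1.example.com", user="app",
                                  password="secret", database="shop")

# Writes go to the primary, which records them in its binary log.
cur = primary.cursor()
cur.execute("INSERT INTO orders (customer, total) VALUES (%s, %s)", ("alice", 42.0))
primary.commit()
cur.close()

# Reads are served by a replica that applies the primary's binlog events
# (possibly with a short delay).
cur = replica.cursor()
cur.execute("SELECT COUNT(*) FROM orders")
print("orders seen by replica:", cur.fetchone()[0])
cur.close()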
User interfaces Graphical user interfaces A graphical user interface (GUI) is a type of interface that allows users to interact with electronic devices or programs through graphical icons and visual indicators such as secondary notation, as opposed to text-based interfaces, typed command labels or text navigation. Third-party proprietary and free graphical administration applications (or "front ends") are available that integrate with MySQL and enable users to work with database structure and data visually. MySQL Workbench MySQL Workbench is the integrated environment for MySQL. It was developed by MySQL AB, and enables users to graphically administer MySQL databases and visually design database structures. MySQL Workbench is available in three editions: the free and open-source Community Edition, which may be downloaded from the MySQL website; the proprietary Standard Edition, which extends and improves the feature set of the Community Edition; and the MySQL Cluster CGE. Other GUI tools Adminer Database Workbench DBeaver DBEdit HeidiSQL LibreOffice Base Navicat OpenOffice.org Base phpMyAdmin SQLBuddy SQLyog Toad for MySQL Webmin Command-line interfaces A command-line interface is a means of interacting with a computer program where the user issues commands to the program by typing in successive lines of text (command lines). MySQL ships with many command-line tools, of which the main interface is the command-line client. MySQL Utilities is a set of utilities designed to perform common maintenance and administrative tasks. Originally included as part of the MySQL Workbench, the utilities are a stand-alone download available from Oracle. Percona Toolkit is a cross-platform toolkit for MySQL, developed in Perl. Percona Toolkit can be used to prove replication is working correctly, fix corrupted data, automate repetitive tasks, and speed up servers. Percona Toolkit is included with several Linux distributions such as CentOS and Debian, and packages are available for Fedora and Ubuntu as well. Percona Toolkit was originally developed as Maatkit, but as of late 2011, Maatkit is no longer developed. MySQL Shell is a tool for interactive use and administration of the MySQL database. It supports JavaScript, Python or SQL modes and it can be used for administration and access purposes. Application programming interfaces Many programming languages with language-specific APIs include libraries for accessing MySQL databases. These include MySQL Connector/Net for .NET/CLI languages, and the JDBC driver for Java. In addition, an ODBC interface called MySQL Connector/ODBC allows additional programming languages that support the ODBC interface to communicate with a MySQL database, such as ASP or ColdFusion. The HTSQL URL-based query method also ships with a MySQL adapter, allowing direct interaction between a MySQL database and any web client via structured URLs. Other drivers exist for languages such as Python and Node.js. Project forks A variety of MySQL forks exist, including the following. Current MariaDB MariaDB is a community-developed fork of the MySQL relational database management system intended to remain free under the GNU GPL. The fork has been led by the original developers of MySQL, who forked it due to concerns over its acquisition by Oracle. Percona Server for MySQL Percona Server for MySQL, forked by Percona, aims to retain close compatibility with the official MySQL releases. Also included in Percona Server for MySQL is XtraDB, Percona's fork of the InnoDB Storage Engine. 
Abandoned Drizzle Drizzle was a free software/open source relational database management system (DBMS) that was forked from the now-defunct 6.0 development branch of the MySQL DBMS. Like MySQL, Drizzle had a client/server architecture and uses SQL as its primary command language. Drizzle was distributed under version 2 and 3 of the GNU General Public License (GPL) with portions, including the protocol drivers and replication messaging under the BSD license. WebScaleSQL WebScaleSQL was a software branch of MySQL 5.6, and was announced on 27 March 2014 by Facebook, Google, LinkedIn and Twitter as a joint effort to provide a centralized development structure for extending MySQL with new features specific to its large-scale deployments, such as building large replicated databases running on server farms. Thus, WebScaleSQL opened a path toward deduplicating the efforts each company had been putting into maintaining its own branch of MySQL, and toward bringing together more developers. By combining the efforts of these companies and incorporating various changes and new features into MySQL, WebScaleSQL aimed at supporting the deployment of MySQL in large-scale environments. The project's source code is licensed under version 2 of the GNU General Public License, and is hosted on GitHub. OurDelta The OurDelta distribution, created by the Australian company Open Query (later acquired by Catalyst IT Australia), had two versions: 5.0, which was based on MySQL, and 5.1, which was based on MariaDB. It included patches developed by Open Query and by other notable members of the MySQL community including Jeremy Cole and Google. Once the patches were incorporated into the MariaDB mainline, OurDelta's objectives were achieved and OurDelta passed on its build and packaging toolchain to Monty Program (now MariaDB Plc).
Technology
Office and data management
null
19553
https://en.wikipedia.org/wiki/Microprocessor
Microprocessor
A microprocessor is a computer processor for which the data processing logic and control are included on a single integrated circuit (IC), or a small number of ICs. The microprocessor contains the arithmetic, logic, and control circuitry required to perform the functions of a computer's central processing unit (CPU). The IC is capable of interpreting and executing program instructions and performing arithmetic operations. The microprocessor is a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results (also in binary form) as output. Microprocessors contain both combinational logic and sequential digital logic, and operate on numbers and symbols represented in the binary number system. The integration of a whole CPU onto a single or a few integrated circuits using Very-Large-Scale Integration (VLSI) greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by highly automated metal–oxide–semiconductor (MOS) fabrication processes, resulting in a relatively low unit price. Single-chip processors increase reliability because there are fewer electrical connections that can fail. As microprocessor designs improve, the cost of manufacturing a chip (with smaller components built on a semiconductor chip the same size) generally stays the same, according to Rock's law. Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits, typically of TTL type. Microprocessors combined this into one or a few large-scale ICs. While there is disagreement over who deserves credit for the invention of the microprocessor, the first commercially available microprocessor was the Intel 4004, designed by Federico Faggin and introduced in 1971. Continued increases in microprocessor capacity have since rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers. A microprocessor is distinct from a microcontroller and from a system on a chip, which combine a processor with additional components such as memory and peripherals. A microprocessor is related to, but distinct from, a digital signal processor, a specialized microprocessor chip with its architecture optimized for the operational needs of digital signal processing. Structure The complexity of an integrated circuit is bounded by physical limitations on the number of transistors that can be put onto one chip, the number of package terminations that can connect the processor to other parts of the system, the number of interconnections it is possible to make on the chip, and the heat that the chip can dissipate. Advancing technology makes more complex and powerful chips feasible to manufacture. A minimal hypothetical microprocessor might include only an arithmetic logic unit (ALU) and a control logic section. The ALU performs addition, subtraction, and operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation (zero value, negative number, overflow, or others). The control logic retrieves instruction codes from memory and initiates the sequence of operations required for the ALU to carry out the instruction. A single operation code might affect many individual data paths, registers, and other elements of the processor.
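The status-register behaviour described above can be illustrated with a toy sketch of an 8-bit ALU addition that updates zero, negative, and carry flags; the 8-bit width and the particular flag names are illustrative assumptions rather than any specific processor's design.

def alu_add8(a, b):
    full = a + b                          # sum before truncation to the word size
    result = full & 0xFF                  # keep the low 8 bits
    flags = {
        "zero": result == 0,              # result was all zeros
        "negative": bool(result & 0x80),  # bit 7 (the sign bit) is set
        "carry": full > 0xFF,             # the addition overflowed 8 bits
    }
    return result, flags

print(alu_add8(0xF0, 0x20))  # (0x10, carry set): 0xF0 + 0x20 does not fit in 8 bits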
As integrated circuit technology advanced, it was feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger; the ability to put more transistors on a chip allowed word sizes to increase from 4- and 8-bit words up to today's 64-bit words. Additional features were added to the processor architecture; more on-chip registers sped up programs, and complex instructions could be used to make more compact programs. Floating-point arithmetic, for example, was often not available on 8-bit microprocessors, but had to be carried out in software. Integration of the floating-point unit, first as a separate integrated circuit and then as part of the same microprocessor chip, sped up floating-point calculations. Occasionally, physical limitations of integrated circuits made such practices as a bit-slice approach necessary. Instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, for example, 32-bit words using integrated circuits with a capacity for only four bits each. The ability to put large numbers of transistors on one chip makes it feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory and increases the processing speed of the system for many applications. Processor clock frequency has increased more rapidly than external memory speed, so cache memory is necessary if the processor is not to be delayed by slower external memory. The design of some processors has become complicated enough to be difficult to fully test, and this has caused problems at large cloud providers. Special-purpose designs A microprocessor is a general-purpose processing entity. Several specialized processing devices have followed: A digital signal processor (DSP) is specialized for signal processing. Graphics processing units (GPUs) are processors designed primarily for real-time rendering of images. Other specialized units exist for video processing and machine vision (see hardware acceleration). Microcontrollers are used in embedded systems and peripheral devices. Systems on chip (SoCs) often integrate one or more microprocessor and microcontroller cores with other components such as radio modems, and are used in smartphones and tablet computers. Speed and power considerations Microprocessors can be selected for differing applications based on their word size, which is a measure of their complexity. Longer word sizes allow each clock cycle of a processor to carry out more computation, but correspond to physically larger integrated circuit dies with higher standby and operating power consumption. 4-, 8- or 12-bit processors are widely integrated into microcontrollers operating embedded systems. Where a system is expected to handle larger volumes of data or require a more flexible user interface, 16-, 32- or 64-bit processors are used. An 8- or 16-bit processor may be selected over a 32-bit processor for system on a chip or microcontroller applications that require extremely low-power electronics, or are part of a mixed-signal integrated circuit with noise-sensitive on-chip analog electronics such as high-resolution analog-to-digital converters, or both.
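The bit-slice approach described above, and the cost of emulating wide arithmetic on a narrow processor discussed just below, both rest on the same mechanism: a wide addition is performed as a chain of narrow additions with the carry passed from one slice to the next. A minimal sketch, with the 4-bit slice width and 32-bit word size chosen purely for illustration:

def add_by_slices(a, b, slice_bits=4, word_bits=32):
    mask = (1 << slice_bits) - 1
    carry = 0
    result = 0
    for shift in range(0, word_bits, slice_bits):
        piece = ((a >> shift) & mask) + ((b >> shift) & mask) + carry
        result |= (piece & mask) << shift   # keep this slice of the sum
        carry = piece >> slice_bits         # propagate the carry to the next slice
    return result & ((1 << word_bits) - 1)  # wrap around like fixed-width hardware

assert add_by_slices(0xFFFFFFFF, 1) == 0            # overflow wraps to zero
assert add_by_slices(0x12345678, 0x11111111) == 0x23456789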
Running 32-bit arithmetic on an 8-bit chip can end up using more power, because the chip must execute multiple instructions in software to carry out each wide operation; whether an 8-bit or a 32-bit chip is more power-efficient for equivalent routines therefore depends on the workload. Embedded applications Thousands of items that were traditionally not computer-related include microprocessors. These include household appliances, vehicles (and their accessories), tools and test instruments, toys, light switches/dimmers and electrical circuit breakers, smoke alarms, battery packs, and hi-fi audio/visual components (from DVD players to phonograph turntables). Such products as cellular telephones, DVD video systems and HDTV broadcast systems fundamentally require consumer devices with powerful, low-cost microprocessors. Increasingly stringent pollution control standards effectively require automobile manufacturers to use microprocessor engine management systems to allow optimal control of emissions over the widely varying operating conditions of an automobile. Non-programmable controls would require bulky or costly implementations to achieve the results possible with a microprocessor. A microprocessor control program (embedded software) can be tailored to fit the needs of a product line, allowing upgrades in performance with minimal redesign of the product. Unique features can be implemented in a product line's various models at negligible production cost. Microprocessor control of a system can provide control strategies that would be impractical to implement using electromechanical controls or purpose-built electronic controls. For example, an internal combustion engine's control system can adjust ignition timing based on engine speed, load, temperature, and any observed tendency for knocking—allowing the engine to operate on a range of fuel grades. History The advent of low-cost computers on integrated circuits has transformed modern society. General-purpose microprocessors in personal computers are used for computation, text editing, multimedia display, and communication over the Internet. Many more microprocessors are part of embedded systems, providing digital control over myriad objects from appliances to automobiles to cellular phones and industrial process control. Microprocessors perform binary operations based on Boolean logic, named after George Boole. The ability to operate computer systems using Boolean logic was first proven in a 1938 thesis by master's student Claude Shannon, who later went on to become a professor. Shannon is considered "the father of information theory". In 1951, microprogramming was invented by Maurice Wilkes at the University of Cambridge, UK, from the realisation that the central processor could be controlled by a specialised program in a dedicated ROM. Wilkes is also credited with the idea of symbolic labels, macros and subroutine libraries. Following the development of MOS integrated circuit chips in the early 1960s, MOS chips reached higher transistor density and lower manufacturing costs than bipolar integrated circuits by 1964. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on several MOS LSI chips.
Designers in the late 1960s were striving to integrate the central processing unit (CPU) functions of a computer onto a handful of MOS LSI chips, called microprocessor unit (MPU) chipsets. While there is disagreement over who invented the microprocessor, the first commercially available microprocessor was the Intel 4004, released as a single MOS LSI chip in 1971. The single-chip microprocessor was made possible with the development of MOS silicon-gate technology (SGT). The earliest MOS transistors had aluminium metal gates, which Italian physicist Federico Faggin replaced with silicon self-aligned gates to develop the first silicon-gate MOS chip at Fairchild Semiconductor in 1968. Faggin later joined Intel and used his silicon-gate MOS technology to develop the 4004, along with Marcian Hoff, Stanley Mazor and Masatoshi Shima, in 1971. The 4004 was designed for Busicom, which had earlier proposed a multi-chip design in 1969, before Faggin's team at Intel changed it into a new single-chip design. The 4-bit Intel 4004 was soon followed by the 8-bit Intel 8008 in 1972. The MP944 chipset used in the F-14 Central Air Data Computer in 1970 has also been cited as an early microprocessor, but was not known to the public until declassified in 1998. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers and various kinds of automation, followed soon after. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers from the mid-1970s on. The first use of the term "microprocessor" is attributed to Viatron Computer Systems, describing the custom integrated circuit used in their System 21 small computer system announced in 1968. Since the early 1970s, the increase in capacity of microprocessors has followed Moore's law; this originally suggested that the number of components that can be fitted onto a chip doubles every year. Moore later revised the period to two years, which is roughly the rate observed with present technology. First projects These projects delivered a microprocessor at about the same time: Garrett AiResearch's Central Air Data Computer (CADC) (1970), Texas Instruments' TMS 1802NC (September 1971) and Intel's 4004 (November 1971, based on an earlier 1969 Busicom design). Arguably, the Four-Phase Systems AL1 microprocessor was also delivered in 1969. Four-Phase Systems AL1 (1969) The Four-Phase Systems AL1 was an 8-bit bit-slice chip containing eight registers and an ALU. It was designed by Lee Boysel in 1969. At the time, it formed part of a nine-chip, 24-bit CPU with three AL1s. It was later called a microprocessor when, in response to 1990s litigation by Texas Instruments, Boysel constructed a courtroom demonstration system in which a single AL1 with a 1969 datestamp formed part of a working computer system, together with RAM, ROM, and an input-output device. The AL1 was not sold individually, but was part of the System IV/70 announced in September 1970 and first delivered in February 1972. Garrett AiResearch CADC (1970) In 1968, Garrett AiResearch (who employed designers Ray Holt and Steve Geller) was invited to produce a digital computer to compete with electromechanical systems then under development for the main flight control computer in the US Navy's new F-14 Tomcat fighter. The design was complete by 1970, and used a MOS-based chipset as the core CPU.
The design was significantly (approximately 20 times) smaller and much more reliable than the mechanical systems it competed against, and was used in all of the early Tomcat models. This system contained "a 20-bit, pipelined, parallel multi-microprocessor". The Navy refused to allow publication of the design until 1997; since its declassification in 1998, the documentation on the CADC and the MP944 chipset has become well known. Ray Holt's autobiographical story of this design and development is presented in the book The Accidental Engineer. Ray Holt graduated from California State Polytechnic University, Pomona in 1968, and began his computer design career with the CADC. From its inception, the project was shrouded in secrecy until 1998, when, at Holt's request, the US Navy allowed the documents into the public domain. Holt has claimed that no one has compared this microprocessor with those that came later. (The later convergence of DSP and microcontroller architectures in a single device is known as a digital signal controller.) Gilbert Hyatt (1970) In 1990, American engineer Gilbert Hyatt was awarded U.S. Patent No. 4,942,516, which was based on a 16-bit serial computer he built at his Northridge, California, home in 1969 from boards of bipolar chips after quitting his job at Teledyne in 1968; though the patent had been submitted in December 1970, prior to Texas Instruments' filings for the TMX 1795 and TMS 0100, Hyatt's invention was never manufactured. This nonetheless led to claims that Hyatt was the inventor of the microprocessor, and to the payment of substantial royalties through a Philips N.V. subsidiary, until Texas Instruments prevailed in a complex legal battle in 1996, when the U.S. Patent Office overturned key parts of the patent while allowing Hyatt to keep it. Hyatt said in a 1990 Los Angeles Times article that his invention would have been created had his prospective investors backed him, and that the venture investors leaked details of his chip to the industry, though he did not elaborate with evidence to support this claim. In the same article, The Chip author T.R. Reid was quoted as saying that historians may ultimately place Hyatt as a co-inventor of the microprocessor, in the way that Intel's Noyce and TI's Kilby share credit for the invention of the chip in 1958: "Kilby got the idea first, but Noyce made it practical. The legal ruling finally favored Noyce, but they are considered co-inventors. The same could happen here." Hyatt would go on to fight a decades-long legal battle with the state of California over alleged unpaid taxes on his patent's windfall after 1990, which would culminate in a landmark Supreme Court case addressing states' sovereign immunity in Franchise Tax Board of California v. Hyatt (2019). Texas Instruments TMX 1795 (1970–1971) Texas Instruments developed in 1970–1971 a one-chip CPU replacement for the Datapoint 2200 terminal, the TMX 1795 (later TMC 1795). Like Intel's later 8008, it was rejected by customer Datapoint. According to Gary Boone, the TMX 1795 never reached production, though it reached a prototype state by February 24, 1971. Since it was built to the same specification, its instruction set was very similar to that of the Intel 8008. Texas Instruments TMS 1802NC (1971) The TMS1802NC, announced September 17, 1971, was the first microcontroller and at launch implemented a four-function calculator.
The TMS1802NC, despite its designation, was not part of the TMS 1000 series; it was later redesignated as part of the TMS 0100 series, which was used in the TI Datamath calculator. It was marketed as a calculator-on-a-chip and also as "fully programmable", but this programming had to be done during manufacturing. Its chip integrated a CPU with an 11-bit instruction word, 3520 bits (320 instructions) of ROM and 182 bits of RAM. Pico/General Instrument (1971) In 1971, Pico Electronics and General Instrument (GI) introduced their first collaboration in ICs, a complete single-chip calculator IC for the Monroe/Litton Royal Digital III calculator. This chip could also arguably lay claim to being one of the first microprocessors or microcontrollers, having ROM, RAM and a RISC instruction set on-chip. The layout for the four layers of the PMOS process was hand drawn at x500 scale on mylar film, a significant task at the time given the complexity of the chip. Pico was a spinout by five GI design engineers whose vision was to create single-chip calculator ICs. They had significant previous design experience on multiple calculator chipsets with both GI and Marconi-Elliott. The key team members had originally been tasked by Elliott Automation to create an 8-bit computer in MOS and had helped establish a MOS Research Laboratory in Glenrothes, Scotland in 1967. Calculators were becoming the largest single market for semiconductors, so Pico and GI went on to have significant success in this burgeoning market. GI continued to innovate in microprocessors and microcontrollers with products including the CP1600, IOB1680 and PIC1650. In 1987, the GI Microelectronics business was spun out into the Microchip PIC microcontroller business. Intel 4004 (1971) The Intel 4004 is often (falsely) regarded as the first true microprocessor built on a single chip, priced at . The first known advertisement for the 4004 is dated November 15, 1971, and appeared in Electronic News. The microprocessor was designed by a team consisting of Italian engineer Federico Faggin, American engineers Marcian Hoff and Stanley Mazor, and Japanese engineer Masatoshi Shima. The project that produced the 4004 originated in 1969, when Busicom, a Japanese calculator manufacturer, asked Intel to build a chipset for high-performance desktop calculators. Busicom's original design called for a programmable chip set consisting of seven different chips. Three of the chips were to make a special-purpose CPU with its program stored in ROM and its data stored in shift register read-write memory. Ted Hoff, the Intel engineer assigned to evaluate the project, believed the Busicom design could be simplified by using dynamic RAM storage for data, rather than shift register memory, and a more traditional general-purpose CPU architecture. Hoff came up with a four-chip architectural proposal: a ROM chip for storing the programs, a dynamic RAM chip for storing data, a simple I/O device, and a 4-bit central processing unit (CPU). Although not a chip designer, he felt the CPU could be integrated into a single chip, but as he lacked the technical know-how the idea remained just a wish for the time being. While the architecture and specifications of the MCS-4 came from the interaction of Hoff with Stanley Mazor, a software engineer reporting to him, and with Busicom engineer Masatoshi Shima during 1969, Mazor and Hoff moved on to other projects.
In April 1970, Intel hired Italian engineer Federico Faggin as project leader, a move that ultimately made the final single-chip CPU design a reality (Shima meanwhile designed the Busicom calculator firmware and assisted Faggin during the first six months of the implementation). Faggin, who originally developed the silicon gate technology (SGT) in 1968 at Fairchild Semiconductor and designed the world's first commercial integrated circuit using SGT, the Fairchild 3708, had the correct background to lead the project into what would become the first commercial general-purpose microprocessor. Since SGT was his very own invention, Faggin also used it to create his new methodology for random logic design that made it possible to implement a single-chip CPU with the proper speed, power dissipation and cost. The manager of Intel's MOS Design Department at the time of the MCS-4 development was Leslie L. Vadász, but Vadász's attention was completely focused on the mainstream business of semiconductor memories, so he left the leadership and the management of the MCS-4 project to Faggin, who was ultimately responsible for leading the 4004 project to its realization. Production units of the 4004 were first delivered to Busicom in March 1971 and shipped to other customers in late 1971. 8-bit designs The Intel 4004 was followed in 1972 by the Intel 8008, Intel's first 8-bit microprocessor. The 8008 was not, however, an extension of the 4004 design, but instead the culmination of a separate design project at Intel, arising from a contract with Computer Terminals Corporation, of San Antonio TX, for a chip for a terminal they were designing, the Datapoint 2200—fundamental aspects of the design came not from Intel but from CTC. In 1968, CTC's Vic Poor and Harry Pyle developed the original design for the instruction set and operation of the processor. In 1969, CTC contracted two companies, Intel and Texas Instruments, to make a single-chip implementation, known as the CTC 1201. In late 1970 or early 1971, TI dropped out, being unable to make a reliable part. In 1970, with Intel yet to deliver the part, CTC opted to use their own implementation in the Datapoint 2200, using traditional TTL logic instead (thus the first machine to run "8008 code" was not in fact a microprocessor at all and was delivered a year earlier). Intel's version of the 1201 microprocessor arrived in late 1971, but was too late, slow, and required a number of additional support chips. CTC had no interest in using it. CTC had originally contracted Intel for the chip, and would have owed them for their design work. To avoid paying for a chip they did not want (and could not use), CTC released Intel from their contract and allowed them free use of the design. Intel marketed it as the 8008 in April 1972, as the world's first 8-bit microprocessor. It was the basis for the famous "Mark-8" computer kit advertised in the magazine Radio-Electronics in 1974. This processor had an 8-bit data bus and a 14-bit address bus. The 8008 was the precursor to the successful Intel 8080 (1974), which offered improved performance over the 8008 and required fewer support chips. Federico Faggin conceived and designed the 8080 using high-voltage N-channel MOS. The Zilog Z80 (1976) was also a Faggin design, using low-voltage N-channel with depletion load, as were derivative Intel 8-bit processors: all were designed with the methodology Faggin created for the 4004.
Motorola released the competing 6800 in August 1974, and the similar MOS Technology 6502 was released in 1975 (both designed largely by the same people). The 6502 family rivaled the Z80 in popularity during the 1980s. A low overall cost, little packaging, simple computer bus requirements, and sometimes the integration of extra circuitry (e.g. the Z80's built-in memory refresh circuitry) allowed the home computer "revolution" to accelerate sharply in the early 1980s. This delivered such inexpensive machines as the Sinclair ZX81, which sold for . A variation of the 6502, the MOS Technology 6510 was used in the Commodore 64 and yet another variant, the 8502, powered the Commodore 128. The Western Design Center, Inc (WDC) introduced the CMOS WDC 65C02 in 1982 and licensed the design to several firms. It was used as the CPU in the Apple IIe and IIc personal computers as well as in medical implantable grade pacemakers and defibrillators, automotive, industrial and consumer devices. WDC pioneered the licensing of microprocessor designs, later followed by ARM (32-bit) and other microprocessor intellectual property (IP) providers in the 1990s. Motorola introduced the MC6809 in 1978. It was an ambitious and well thought-through 8-bit design that was source compatible with the 6800, and implemented using purely hard-wired logic (subsequent 16-bit microprocessors typically used microcode to some extent, as CISC design requirements were becoming too complex for pure hard-wired logic). Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to its innovative and powerful instruction set architecture. A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (aka CDP1802, RCA COSMAC) (introduced in 1976), which was used on board the Galileo probe to Jupiter (launched 1989, arrived 1995). RCA COSMAC was the first to implement CMOS technology. The CDP1802 was used because it could be run at very low power, and because a variant was available fabricated using a special production process, silicon on sapphire (SOS), which provided much better protection against cosmic radiation and electrostatic discharge than that of any other processor of the era. Thus, the SOS version of the 1802 was said to be the first radiation-hardened microprocessor. The RCA 1802 had a static design, meaning that the clock frequency could be made arbitrarily low, or even stopped. This let the Galileo spacecraft use minimum electric power for long uneventful stretches of a voyage. Timers or sensors would awaken the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication. Current versions of the Western Design Center 65C02 and 65C816 also have static cores, and thus retain data even when the clock is completely halted. 12-bit designs The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a range of peripheral support and memory ICs. The microprocessor recognised the DEC PDP-8 minicomputer instruction set. As such it was sometimes referred to as the CMOS-PDP8. Since it was also produced by Harris Corporation, it was also known as the Harris HM-6100. By virtue of its CMOS technology and associated benefits, the 6100 was being incorporated into some military designs until the early 1980s. 16-bit designs The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8. 
Other early multi-chip 16-bit microprocessors include the MCP-1600 that Digital Equipment Corporation (DEC) used in the LSI-11 OEM board set and the packaged PDP-11/03 minicomputer—and the Fairchild Semiconductor MicroFlame 9440, both introduced in 1975–76. In late 1974, National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by an NMOS version, the INS8900. Next came the General Instrument CP1600, released in February 1975, which was used mainly in the Intellivision console. Another early single-chip 16-bit microprocessor was TI's TMS 9900, which was also compatible with their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package, moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110. The Western Design Center (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIGS and later the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time. Intel "upsized" their 8080 design into the 16-bit Intel 8086, the first member of the x86 family, which powers most modern PC type computers. Intel introduced the 8086 as a cost-effective way of porting software from the 8080 lines, and succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an 8-bit external data bus, was the microprocessor in the first IBM PC. Intel then released the 80186 and 80188, the 80286 and, in 1985, the 32-bit 80386, cementing their PC market dominance with the processor family's backwards compatibility. The 80186 and 80188 were essentially versions of the 8086 and 8088, enhanced with some onboard peripherals and a few new instructions. Although Intel's 80186 and 80188 were not used in IBM PC type designs, second-source versions from NEC, the V20 and V30, frequently were. The 8086 and successors had an innovative but limited method of memory segmentation, while the 80286 introduced a full-featured segmented memory management unit (MMU). The 80386 introduced a flat 32-bit memory model with paged memory management. The Intel x86 processors up to and including the 80386 do not include floating-point units (FPUs). Intel introduced the 8087, 80187, 80287 and 80387 math coprocessors to add hardware floating-point and transcendental function capabilities to the 8086 through 80386 CPUs. The 8087 works with the 8086/8088 and 80186/80188, the 80187 works with the 80186 but not the 80188, the 80287 works with the 80286 and the 80387 works with the 80386. The combination of an x86 CPU and an x87 coprocessor forms a single multi-chip microprocessor; the two chips are programmed as a unit using a single integrated instruction set. The 8087 and 80187 coprocessors are connected in parallel with the data and address buses of their parent processor and directly execute instructions intended for them.
The 80287 and 80387 coprocessors are interfaced to the CPU through I/O ports in the CPU's address space; this is transparent to the program, which does not need to know about or access these I/O ports directly, and instead accesses the coprocessor and its registers through normal instruction opcodes. 32-bit designs 16-bit designs had only been on the market briefly when 32-bit implementations started to appear. The most significant of the 32-bit designs is the Motorola MC68000, introduced in 1979. The 68k, as it was widely known, had 32-bit registers in its programming model but used 16-bit internal data paths, three 16-bit arithmetic logic units, and a 16-bit external data bus (to reduce pin count), and externally supported only 24-bit addresses (internally it worked with full 32-bit addresses). In PC-based IBM-compatible mainframes the MC68000 internal microcode was modified to emulate the 32-bit System/370 IBM mainframe. Motorola generally described it as a 16-bit processor. The combination of high performance, large (16 megabytes, or 2^24 bytes) memory space and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did other designs in the mid-1980s, including the Atari ST and Amiga. The world's first single-chip fully 32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980, and general production in 1982. After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world's first desktop super microcomputer; in the "Companion", the world's first 32-bit laptop computer; and in "Alexander", the world's first book-sized super microcomputer, featuring ROM-pack memory cartridges similar to today's gaming consoles. All these systems ran the UNIX System V operating system. The first commercial, single-chip, fully 32-bit microprocessor available on the market was the HP FOCUS. Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a commercial success. It had an advanced capability-based object-oriented architecture, but poor performance compared to contemporary architectures such as Intel's own 80286 (introduced 1982), which was almost four times as fast on typical benchmark tests. However, the results for the iAPX 432 were partly due to a rushed and therefore suboptimal Ada compiler. Motorola's success with the 68000 led to the MC68010, which added virtual memory support. The MC68020, introduced in 1984, added full 32-bit data and address buses. The 68020 became hugely popular in the Unix supermicrocomputer market, and many small companies (e.g., Altos, Charles River Data Systems, Cromemco) produced desktop-size systems. The MC68030 was introduced next, improving upon the previous design by integrating the MMU into the chip. The continued success led to the MC68040, which included an FPU for better math performance. The 68050 failed to achieve its performance goals and was not released, and the follow-up MC68060 was released into a market saturated by much faster RISC designs. The 68k family faded from use in the early 1990s. Other large companies designed the 68020 and follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were Intel Pentiums in PCs.
The ColdFire processor cores are derivatives of the 68020. During this time (early to mid-1980s), National Semiconductor introduced a very similar 16-bit pinout, 32-bit internal microprocessor called the NS 16032 (later renamed 32016), followed by the full 32-bit version named the NS 32032. Later, National Semiconductor produced the NS 32132, which allowed two CPUs to reside on the same memory bus with built-in arbitration. The NS32016/32 outperformed the MC68000/10, but the NS32332—which arrived at approximately the same time as the MC68020—did not have enough performance. The third-generation chip, the NS32532, was different. It had about double the performance of the MC68030, which was released around the same time. The appearance of RISC processors like the AM29000 and MC88000 (now both dead) influenced the architecture of the final core, the NS32764. Technically advanced—with a superscalar RISC core, 64-bit bus, and internally overclocked—it could still execute Series 32000 instructions through real-time translation. When National Semiconductor decided to leave the Unix market, the chip was redesigned into the Swordfish Embedded processor with a set of on-chip peripherals. The chip turned out to be too expensive for the laser printer market and was killed. The design team went to Intel and there designed the Pentium processor, which is very similar to the NS32764 core internally. The big success of the Series 32000 was in the laser printer market, where the NS32CG16 with microcoded BitBlt instructions had very good price/performance and was adopted by large companies like Canon. By the mid-1980s, Sequent introduced the first SMP server-class computer using the NS 32032. This was one of the design's few wins, and it disappeared in the late 1980s. The MIPS R2000 (1984) and R3000 (1989) were highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers by SGI, among others. Other designs included the Zilog Z80000, which arrived too late to market to stand a chance and disappeared quickly. The ARM first appeared in 1985. This is a RISC processor design, which has since come to dominate the 32-bit embedded systems processor space due in large part to its power efficiency, its licensing model, and its wide selection of system development tools. Semiconductor manufacturers generally license cores and integrate them into their own system on a chip products; only a few vendors, such as Apple, are licensed to modify the ARM cores or create their own. Most cell phones include an ARM processor, as do a wide variety of other products. There are microcontroller-oriented ARM cores without virtual memory support, as well as symmetric multiprocessor (SMP) applications processors with virtual memory. From 1993 to 2003, the 32-bit x86 architectures became increasingly dominant in desktop, laptop, and server markets, and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix built later versions of the architecture based on their own designs. During this span, these processors increased in complexity (transistor count) and capability (instructions/second) by at least three orders of magnitude. Intel's Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the public at large.
64-bit designs in personal computers While 64-bit microprocessor designs have been in use in several markets since the early 1990s (including the Nintendo 64 gaming console in 1996), the early 2000s saw the introduction of 64-bit microprocessors targeted at the PC market. With AMD's introduction of a 64-bit architecture backwards-compatible with x86, x86-64 (also called AMD64), in September 2003, followed by Intel's near fully compatible 64-bit extensions (first called IA-32e or EM64T, later renamed Intel 64), the 64-bit desktop era began. Both versions can run 32-bit legacy applications without any performance penalty as well as new 64-bit software. With operating systems such as Windows XP x64, Windows Vista x64, Windows 7 x64, Linux, BSD, and macOS that run 64-bit natively, the software is also geared to fully utilize the capabilities of such processors. The move to 64 bits is more than just an increase in register size from IA-32, as it also doubles the number of general-purpose registers. The move to 64 bits by PowerPC had been intended since the architecture's design in the early 90s and was not a major cause of incompatibility. Existing integer registers are extended, as are all related data pathways, but, as was the case with IA-32, both floating-point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal. In 2011, ARM introduced the new 64-bit ARM architecture. RISC In the mid-1980s to early 1990s, a crop of new high-performance reduced instruction set computer (RISC) microprocessors appeared, influenced by discrete RISC-like CPU designs such as the IBM 801 and others. RISC microprocessors were initially used in special-purpose machines and Unix workstations, but then gained wide acceptance in other roles. The first commercial RISC microprocessor design was released in 1984, by MIPS Computer Systems, the 32-bit R2000 (the R1000 was not released). In 1986, HP released its first system with a PA-RISC CPU. In 1987, the non-Unix Acorn computers' 32-bit, then cache-less, ARM2-based Acorn Archimedes became the first commercial success using the ARM architecture, then known as Acorn RISC Machine (ARM); first silicon ARM1 in 1985. The R3000 made the design truly practical, and the R4000 introduced the world's first commercially available 64-bit RISC microprocessor. Competing projects would result in the IBM POWER and Sun SPARC architectures. Soon every major vendor was releasing a RISC design, including the AT&T CRISP, AMD 29000, Intel i860 and Intel i960, Motorola 88000, and DEC Alpha. In the late 1990s, only two 64-bit RISC architectures were still produced in volume for non-embedded applications: SPARC and Power ISA. As ARM became increasingly powerful, in the early 2010s it became the third RISC architecture in the general computing segment. SMP and multi-core design SMP (symmetric multiprocessing) is a configuration of two, four, or more CPUs, typically in pairs, that has been used in servers, certain workstations and desktop personal computers since the 1990s. A multi-core processor is a single CPU that contains more than one microprocessor core.
A popular two-socket motherboard from Abit, the BP6, was released in 1999 as one of the first SMP-enabled motherboards aimed at PC enthusiasts; earlier, the Intel Pentium Pro had been the first commercial CPU offered to system builders and enthusiasts for such configurations. The Abit BP6 supports two Intel Celeron CPUs, and when used with an SMP-enabled operating system (Windows NT/2000/Linux) many applications obtain much higher performance than with a single CPU. The early Celerons are easily overclockable, and hobbyists ran these relatively inexpensive CPUs clocked as high as 533 MHz, far beyond Intel's specification. After discovering the capability of these motherboards, Intel removed access to the multiplier in later CPUs. In 2001 IBM released the POWER4 CPU, a processor developed over five years of research that began in 1996 with a team of 250 researchers. The effort was supported by the development of remote collaboration and by assigning younger engineers to work with more experienced engineers. The team's work achieved success with the new microprocessor: the POWER4, a two-in-one (dual-core) CPU that more than doubled performance at half the price of the competition, was a major advance in computing. The magazine eWeek wrote: "The newly designed 1GHz Power4 represents a tremendous leap over its predecessor". An industry analyst, Brad Day of Giga Information Group, said: "IBM is getting very aggressive, and this server is a game changer". The POWER4 won the "Analysts' Choice Award for Best Workstation/Server Processor of 2001", and the POWER line went on to further notable achievements, including the later POWER7-based Watson system's victory against top players on the U.S. television quiz show Jeopardy!. Intel's CPUs codenamed Yonah launched on January 6, 2006, and were manufactured with two dies packaged on a multi-chip module. In a hotly contested marketplace, AMD and others released new versions of multi-core CPUs: AMD's SMP-enabled Athlon MP CPUs from the Athlon XP line in 2001, Sun's eight-core Niagara and Niagara 2, and AMD's Athlon X2, released in June 2007. The companies were engaged in a never-ending race for speed; more demanding software mandated more processing power and faster CPU speeds. By 2012, dual- and quad-core processors were widely used in PCs and laptops; newer processors, like the higher-cost professional-level Intel Xeons, provide additional cores that execute instructions in parallel, so software performance typically increases, provided the software is designed to utilize the advanced hardware. Operating systems provide support for multi-core and SMP CPUs, and many software applications, including large-workload and resource-intensive applications such as 3-D games, are programmed to take advantage of multi-core and multi-CPU systems. Apple, Intel, and AMD currently lead the market with multi-core desktop and workstation CPUs, although they frequently leapfrog each other for the lead in the performance tier. Intel retains higher frequencies and thus has the fastest single-core performance, while AMD is often the leader in multi-threaded routines due to a more advanced ISA and the process node its CPUs are fabricated on. Multiprocessing concepts for multi-core/multi-CPU configurations are related to Amdahl's law. Market statistics In 1997, about 55% of all CPUs sold in the world were 8-bit microcontrollers, of which over 2 billion were sold. In 2002, less than 10% of all the CPUs sold in the world were 32-bit or more. Of all the 32-bit CPUs sold, about 2% are used in desktop or laptop personal computers.
Most microprocessors are used in embedded control applications such as household appliances, automobiles, and computer peripherals. Taken as a whole, the average price for a microprocessor, microcontroller, or DSP is just over . In 2003, about $44 billion (equivalent to about $ billion in ) worth of microprocessors were manufactured and sold. Although about half of that money was spent on CPUs used in desktop or laptop personal computers, those account for only about 2% of all CPUs sold. The quality-adjusted price of laptop microprocessors improved −25% to −35% per year in 2004–2010, and the rate of improvement slowed to −15% to −25% per year in 2010–2013. About 10 billion CPUs were manufactured in 2008. Most new CPUs produced each year are embedded.
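The SMP and multi-core section above notes that these configurations are related to Amdahl's law, which bounds the speedup obtainable from additional cores by the fraction of the work that can actually run in parallel. A minimal sketch of the relation; the 90% parallel fraction is chosen purely for illustration.

def amdahl_speedup(parallel_fraction, cores):
    # Speedup = 1 / ((1 - p) + p / n): the serial part (1 - p) is never sped up.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (2, 4, 8, 64):
    print(cores, round(amdahl_speedup(0.90, cores), 2))
# Even with 90% of the work parallelisable, 64 cores yield only about an 8.8x
# speedup, because the remaining serial 10% dominates the running time.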
Technology
Computer hardware
null
19555
https://en.wikipedia.org/wiki/Molecule
Molecule
A molecule is a group of two or more atoms that are held together by attractive forces known as chemical bonds; depending on context, the term may or may not include ions that satisfy this criterion. In quantum physics, organic chemistry, and biochemistry, the distinction from ions is dropped and molecule is often used when referring to polyatomic ions. A molecule may be homonuclear, that is, it consists of atoms of one chemical element, e.g. two atoms in the oxygen molecule (O2); or it may be heteronuclear, a chemical compound composed of more than one element, e.g. water (two hydrogen atoms and one oxygen atom; H2O). In the kinetic theory of gases, the term molecule is often used for any gaseous particle regardless of its composition. This relaxes the requirement that a molecule contains two or more atoms, since the noble gases are individual atoms. Atoms and complexes connected by non-covalent interactions, such as hydrogen bonds or ionic bonds, are typically not considered single molecules. Concepts similar to molecules have been discussed since ancient times, but modern investigation into the nature of molecules and their bonds began in the 17th century. Refined over time by scientists such as Robert Boyle, Amedeo Avogadro, Jean Perrin, and Linus Pauling, the study of molecules is today known as molecular physics or molecular chemistry. Etymology According to Merriam-Webster and the Online Etymology Dictionary, the word "molecule" derives from the Latin "moles", denoting a small unit of mass. The word is derived from French molécule (1678), from Neo-Latin molecula, diminutive of Latin moles "mass, barrier". The word, which until the late 18th century was used only in Latin form, became popular after being used in works of philosophy by Descartes. History The definition of the molecule has evolved as knowledge of the structure of molecules has increased. Earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties. This definition often breaks down since many substances in ordinary experience, such as rocks, salts, and metals, are composed of large crystalline networks of chemically bonded atoms or ions, but are not made of discrete molecules. The modern concept of molecules can be traced back to pre-scientific Greek philosophers such as Leucippus and Democritus, who argued that all the universe is composed of atoms and voids. Circa 450 BC Empedocles imagined fundamental elements (fire, earth, air, and water) and "forces" of attraction and repulsion allowing the elements to interact. A fifth element, the incorruptible quintessence aether, was considered to be the fundamental building block of the heavenly bodies. The viewpoint of Leucippus and Empedocles, along with the aether, was accepted by Aristotle and passed to medieval and renaissance Europe. In a more concrete manner, however, the concept of aggregates or units of bonded atoms, i.e. "molecules", traces its origins to Robert Boyle's 1661 hypothesis, in his famous treatise The Sceptical Chymist, that matter is composed of clusters of particles and that chemical change results from the rearrangement of the clusters. Boyle argued that matter's basic elements consisted of various sorts and sizes of particles, called "corpuscles", which were capable of arranging themselves into groups. In 1789, William Higgins published views on what he called combinations of "ultimate" particles, which foreshadowed the concept of valency bonds.
If, for example, according to Higgins, the force between the ultimate particle of oxygen and the ultimate particle of nitrogen were 6, then the strength of the force would be divided accordingly, and similarly for the other combinations of ultimate particles. Amedeo Avogadro created the word "molecule". In his 1811 paper "Essay on Determining the Relative Masses of the Elementary Molecules of Bodies", he essentially states, according to Partington's A Short History of Chemistry, the hypothesis now known as Avogadro's law. In coordination with these concepts, in 1833 the French chemist Marc Antoine Auguste Gaudin presented a clear account of Avogadro's hypothesis, regarding atomic weights, by making use of "volume diagrams", which clearly show both semi-correct molecular geometries, such as a linear water molecule, and correct molecular formulas, such as H2O. In 1917, an unknown American undergraduate chemical engineer named Linus Pauling was learning the Dalton hook-and-eye bonding method, which was the mainstream description of bonds between atoms at the time. Pauling, however, was not satisfied with this method and looked to the newly emerging field of quantum physics for a new method. In 1926, French physicist Jean Perrin received the Nobel Prize in physics for proving, conclusively, the existence of molecules. He did this by calculating the Avogadro constant using three different methods, all involving liquid phase systems. First, he used a gamboge soap-like emulsion, second by doing experimental work on Brownian motion, and third by confirming Einstein's theory of particle rotation in the liquid phase. In 1927, the physicists Fritz London and Walter Heitler applied the new quantum mechanics to the treatment of the saturable, nondynamic forces of attraction and repulsion, i.e., exchange forces, of the hydrogen molecule. Their valence bond treatment of this problem, in their joint paper, was a landmark in that it brought chemistry under quantum mechanics. Their work was an influence on Pauling, who had just received his doctorate and visited Heitler and London in Zürich on a Guggenheim Fellowship. Subsequently, in 1931, building on the work of Heitler and London and on theories found in Lewis' famous article, Pauling published his ground-breaking article "The Nature of the Chemical Bond", in which he used quantum mechanics to calculate properties and structures of molecules, such as angles between bonds and rotation about bonds. Building on these concepts, Pauling developed hybridization theory to account for bonds in molecules such as CH4, in which four sp³ hybridised orbitals are overlapped by hydrogen's 1s orbital, yielding four sigma (σ) bonds. The four bonds are of the same length and strength, which yields a tetrahedral molecular structure. Molecular science The science of molecules is called molecular chemistry or molecular physics, depending on whether the focus is on chemistry or physics. Molecular chemistry deals with the laws governing the interaction between molecules that results in the formation and breakage of chemical bonds, while molecular physics deals with the laws governing their structure and properties. In practice, however, this distinction is vague. In molecular sciences, a molecule consists of a stable system (bound state) composed of two or more atoms. Polyatomic ions may sometimes be usefully thought of as electrically charged molecules.
The term unstable molecule is used for very reactive species, i.e., short-lived assemblies (resonances) of electrons and nuclei, such as radicals, molecular ions, Rydberg molecules, transition states, van der Waals complexes, or systems of colliding atoms as in Bose–Einstein condensates. Prevalence Molecules as components of matter are common. They also make up most of the oceans and atmosphere. Most organic substances are molecules. The substances of life are molecules, e.g. proteins, the amino acids of which they are composed, the nucleic acids (DNA and RNA), sugars, carbohydrates, fats, and vitamins. The nutrient minerals are generally ionic compounds, and thus are not molecules, e.g. iron sulfate. However, the majority of familiar solid substances on Earth are made partly or completely of crystals or ionic compounds, which are not made of molecules. These include all of the minerals that make up the substance of the Earth, sand, clay, pebbles, rocks, boulders, bedrock, the molten interior, and the core of the Earth. All of these contain many chemical bonds, but are not made of identifiable molecules. No typical molecule can be defined for salts or for covalent crystals, although these are often composed of repeating unit cells that extend either in a plane, e.g. graphene; or three-dimensionally e.g. diamond, quartz, sodium chloride. The theme of repeated unit-cellular-structure also holds for most metals which are condensed phases with metallic bonding. Thus solid metals are not made of molecules. In glasses, which are solids that exist in a vitreous disordered state, the atoms are held together by chemical bonds with no presence of any definable molecule, nor any of the regularity of repeating unit-cellular-structure that characterizes salts, covalent crystals, and metals. Bonding Molecules are generally held together by covalent bonding. Several non-metallic elements exist only as molecules in the environment either in compounds or as homonuclear molecules, not as free atoms: for example, hydrogen. A metallic crystal can be regarded as a single giant molecule held together by metallic bonding, but metals behave very differently from discrete molecules, so the description is of limited use. Covalent A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms. These electron pairs are termed shared pairs or bonding pairs, and the stable balance of attractive and repulsive forces between atoms, when they share electrons, is termed covalent bonding. Ionic Ionic bonding is a type of chemical bond that involves the electrostatic attraction between oppositely charged ions, and is the primary interaction occurring in ionic compounds. The ions are atoms that have lost one or more electrons (termed cations) and atoms that have gained one or more electrons (termed anions). This transfer of electrons is termed electrovalence in contrast to covalence. In the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be of a more complicated nature, e.g. molecular ions like NH4+ or SO42−. At normal temperatures and pressures, ionic bonding mostly creates solids (or occasionally liquids) without separate identifiable molecules, but the vaporization/sublimation of such materials does produce separate molecules where electrons are still transferred fully enough for the bonds to be considered ionic rather than covalent.
Molecular size Most molecules are far too small to be seen with the naked eye, although molecules of many polymers can reach macroscopic sizes, including biopolymers such as DNA. Molecules commonly used as building blocks for organic synthesis have a dimension of a few angstroms (Å) to several dozen Å, or around one billionth of a meter. Single molecules cannot usually be observed by light (as noted above), but small molecules and even the outlines of individual atoms may be traced in some circumstances by use of an atomic force microscope. Some of the largest molecules are macromolecules or supermolecules. The smallest molecule is the diatomic hydrogen (H2), with a bond length of 0.74 Å. Effective molecular radius is the size a molecule displays in solution. The table of permselectivity for different substances contains examples. Molecular formulas Chemical formula types The chemical formula for a molecule uses one line of chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, and plus (+) and minus (−) signs. These are limited to one typographic line of symbols, which may include subscripts and superscripts. A compound's empirical formula is a very simple type of chemical formula. It is the simplest integer ratio of the chemical elements that constitute it. For example, water is always composed of a 2:1 ratio of hydrogen to oxygen atoms, and ethanol (ethyl alcohol) is always composed of carbon, hydrogen, and oxygen in a 2:6:1 ratio. However, this does not determine the kind of molecule uniquely – dimethyl ether has the same ratios as ethanol, for instance. Molecules with the same atoms in different arrangements are called isomers. Also carbohydrates, for example, have the same ratio (carbon:hydrogen:oxygen= 1:2:1) (and thus the same empirical formula) but different total numbers of atoms in the molecule. The molecular formula reflects the exact number of atoms that compose the molecule and so characterizes different molecules. However different isomers can have the same atomic composition while being different molecules. The empirical formula is often the same as the molecular formula but not always. For example, the molecule acetylene has molecular formula C2H2, but the simplest integer ratio of elements is CH. The molecular mass can be calculated from the chemical formula and is expressed in conventional atomic mass units equal to 1/12 of the mass of a neutral carbon-12 (12C isotope) atom. For network solids, the term formula unit is used in stoichiometric calculations. Structural formula For molecules with a complicated 3-dimensional structure, especially involving atoms bonded to four different substituents, a simple molecular formula or even semi-structural chemical formula may not be enough to completely specify the molecule. In this case, a graphical type of formula called a structural formula may be needed. Structural formulas may in turn be represented with a one-dimensional chemical name, but such chemical nomenclature requires many words and terms which are not part of chemical formulas. Molecular geometry Molecules have fixed equilibrium geometries—bond lengths and angles— about which they continuously oscillate through vibrational and rotational motions. A pure substance is composed of molecules with the same average geometrical structure. The chemical formula and the structure of a molecule are the two important factors that determine its properties, particularly its reactivity. 
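The link between a molecular formula and the molecular mass described above can be made concrete with a small sketch; the atomic masses below are rounded values used only for illustration. Because the mass depends only on the formula, isomers such as ethanol and dimethyl ether (both C2H6O) come out identical, which is one way of seeing that a formula alone does not fully identify a molecule.

ATOMIC_MASS = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999}  # rounded values

def molecular_mass(formula):
    # formula maps an element symbol to the number of atoms of that element
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

ethanol = {"C": 2, "H": 6, "O": 1}          # C2H6O
dimethyl_ether = {"C": 2, "H": 6, "O": 1}   # same molecular formula, different structure

print(round(molecular_mass(ethanol), 3))                           # about 46.069
print(molecular_mass(ethanol) == molecular_mass(dimethyl_ether))   # True: isomers share a mass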
Isomers share a chemical formula but normally have very different properties because of their different structures. Stereoisomers, a particular type of isomer, may have very similar physico-chemical properties and at the same time different biochemical activities. Molecular spectroscopy Molecular spectroscopy deals with the response (spectrum) of molecules interacting with probing signals of known energy (or frequency, according to the Planck relation). Molecules have quantized energy levels that can be analyzed by detecting the molecule's energy exchange through absorbance or emission. Spectroscopy does not generally refer to diffraction studies where particles such as neutrons, electrons, or high energy X-rays interact with a regular arrangement of molecules (as in a crystal). Microwave spectroscopy commonly measures changes in the rotation of molecules, and can be used to identify molecules in outer space. Infrared spectroscopy measures the vibration of molecules, including stretching, bending or twisting motions. It is commonly used to identify the kinds of bonds or functional groups in molecules. Changes in the arrangements of electrons yield absorption or emission lines in ultraviolet, visible or near infrared light, and result in colour. Nuclear magnetic resonance spectroscopy measures the environment of particular nuclei in the molecule, and can be used to characterise the numbers of atoms in different positions in a molecule. Theoretical aspects The study of molecules by molecular physics and theoretical chemistry is largely based on quantum mechanics and is essential for the understanding of the chemical bond. The simplest of molecules is the hydrogen molecule-ion, H2+, and the simplest of all the chemical bonds is the one-electron bond. H2+ is composed of two positively charged protons and one negatively charged electron, which means that the Schrödinger equation for the system can be solved more easily due to the lack of electron–electron repulsion. With the development of fast digital computers, approximate solutions for more complicated molecules became possible and are one of the main aspects of computational chemistry. When trying to define rigorously whether an arrangement of atoms is sufficiently stable to be considered a molecule, IUPAC suggests that it "must correspond to a depression on the potential energy surface that is deep enough to confine at least one vibrational state". This definition does not depend on the nature of the interaction between the atoms, but only on the strength of the interaction. In fact, it includes weakly bound species that would not traditionally be considered molecules, such as the helium dimer, He2, which has one vibrational bound state and is so loosely bound that it is only likely to be observed at very low temperatures. Whether or not an arrangement of atoms is sufficiently stable to be considered a molecule is inherently an operational definition. Philosophically, therefore, a molecule is not a fundamental entity (in contrast, for instance, to an elementary particle); rather, the concept of a molecule is the chemist's way of making a useful statement about the strengths of atomic-scale interactions in the world that we observe.
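As a small numerical illustration of the molecular-mass calculation described under "Molecular formulas" above, the following minimal Python sketch sums rounded standard atomic masses for a composition given as an element-count mapping; the mass table and the function name are illustrative assumptions rather than part of any standard library.

# Approximate standard atomic masses in unified atomic mass units (Da)
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999, "S": 32.06}

def molecular_mass(composition):
    """Sum atomic masses for a composition given as {element: count}."""
    return sum(ATOMIC_MASS[element] * count for element, count in composition.items())

# Ethanol, molecular formula C2H6O
print(molecular_mass({"C": 2, "H": 6, "O": 1}))   # roughly 46.07 Da
# Acetylene, molecular formula C2H2 (empirical formula CH)
print(molecular_mass({"C": 2, "H": 2}))           # roughly 26.04 Da

The second example shows why the molecular formula, not the empirical formula, is needed to compute a molecular mass: the empirical formula CH alone does not fix the number of atoms in the molecule.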
Physical sciences
Chemistry
null
19559
https://en.wikipedia.org/wiki/Mechanics
Mechanics
Mechanics () is the area of physics concerned with the relationships between force, matter, and motion among physical objects. Forces applied to objects may result in displacements, which are changes of an object's position relative to its environment. Theoretical expositions of this branch of physics have their origins in Ancient Greece, for instance, in the writings of Aristotle and Archimedes (see History of classical mechanics and Timeline of classical mechanics). During the early modern period, scientists such as Galileo Galilei, Johannes Kepler, Christiaan Huygens, and Isaac Newton laid the foundation for what is now known as classical mechanics. As a branch of classical physics, mechanics deals with bodies that are either at rest or are moving with velocities significantly less than the speed of light. It can also be defined as the physical science that deals with the motion of and forces on bodies not in the quantum realm. History Antiquity The ancient Greek philosophers were among the first to propose that abstract principles govern nature. The main theory of mechanics in antiquity was Aristotelian mechanics, though an alternative theory is presented in the pseudo-Aristotelian Mechanical Problems, often attributed to one of his successors. There is another tradition that goes back to the ancient Greeks where mathematics is used more extensively to analyze bodies statically or dynamically, an approach that may have been stimulated by prior work of the Pythagorean Archytas. Examples of this tradition include pseudo-Euclid (On the Balance), Archimedes (On the Equilibrium of Planes, On Floating Bodies), Hero (Mechanica), and Pappus (Collection, Book VIII). Medieval age In the Middle Ages, Aristotle's theories were criticized and modified by a number of figures, beginning with John Philoponus in the 6th century. A central problem was that of projectile motion, which was discussed by Hipparchus and Philoponus. Persian Islamic polymath Ibn Sīnā published his theory of motion in The Book of Healing (1020). He said that an impetus is imparted to a projectile by the thrower, and viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sina made a distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gains mayl when it is in opposition to its natural motion. So he concluded that continuation of motion is attributed to the inclination that is transferred to the object, and that the object will remain in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon, consistent with Newton's first law of motion. On the question of a body subject to a constant (uniform) force, the 12th-century Jewish-Arab scholar Hibat Allah Abu'l-Barakat al-Baghdaadi (born Nathanel, Iraqi, of Baghdad) stated that constant force imparts constant acceleration. According to Shlomo Pines, al-Baghdaadi's theory of motion was "the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]." Influenced by earlier writers such as Ibn Sina and al-Baghdaadi, the 14th-century French priest Jean Buridan developed the theory of impetus, which later developed into the modern theories of inertia, velocity, acceleration and momentum.
This work and others were developed in 14th-century England by the Oxford Calculators such as Thomas Bradwardine, who studied and formulated various laws regarding falling bodies. The concept of uniformly accelerated motion (as of falling bodies) was worked out by the 14th-century Oxford Calculators. Early modern age Two central figures in the early modern age are Galileo Galilei and Isaac Newton. Galileo's final statement of his mechanics, particularly of falling bodies, is his Two New Sciences (1638). Newton's 1687 Philosophiæ Naturalis Principia Mathematica provided a detailed mathematical account of mechanics, using the newly developed mathematics of calculus and providing the basis of Newtonian mechanics. There is some dispute over priority of various ideas: Newton's Principia is certainly the seminal work and has been tremendously influential, and many of the mathematical results therein could not have been stated earlier without the development of the calculus. However, many of the ideas, particularly as pertain to inertia and falling bodies, had been developed by prior scholars such as Christiaan Huygens and by less-known medieval predecessors. Precise credit is at times difficult or contentious because scientific language and standards of proof changed, so whether medieval statements are equivalent to modern statements or sufficient proof, or instead merely similar to modern statements and hypotheses, is often debatable. Modern age Two main modern developments in mechanics are Einstein's general relativity and quantum mechanics, both developed in the 20th century based in part on earlier 19th-century ideas. The development of modern continuum mechanics, particularly in the areas of elasticity, plasticity, fluid dynamics, electrodynamics, and thermodynamics of deformable media, started in the second half of the 20th century. Types of mechanical bodies The often-used term body needs to stand for a wide assortment of objects, including particles, projectiles, spacecraft, stars, parts of machinery, parts of solids, parts of fluids (gases and liquids), etc. Other distinctions between the various sub-disciplines of mechanics concern the nature of the bodies being described. Particles are bodies with little (known) internal structure, treated as mathematical points in classical mechanics. Rigid bodies have size and shape, but retain a simplicity close to that of the particle, adding just a few so-called degrees of freedom, such as orientation in space. Otherwise, bodies may be semi-rigid, i.e. elastic, or non-rigid, i.e. fluid. These subjects have both classical and quantum divisions of study. For instance, the motion of a spacecraft, regarding its orbit and attitude (rotation), is described by the relativistic theory of classical mechanics, while the analogous movements of an atomic nucleus are described by quantum mechanics. Sub-disciplines The following are the three main designations consisting of various subjects that are studied in mechanics. Note that there is also the "theory of fields" which constitutes a separate discipline in physics, formally treated as distinct from mechanics, whether it be classical fields or quantum fields. But in actual practice, subjects belonging to mechanics and fields are closely interwoven. Thus, for instance, forces that act on particles are frequently derived from fields (electromagnetic or gravitational), and particles generate fields by acting as sources.
In fact, in quantum mechanics, particles themselves are fields, as described theoretically by the wave function. Classical The following are described as forming classical mechanics: Newtonian mechanics, the original theory of motion (kinematics) and forces (dynamics) Analytical mechanics is a reformulation of Newtonian mechanics with an emphasis on system energy, rather than on forces. There are two main branches of analytical mechanics: Hamiltonian mechanics, a theoretical formalism, based on the principle of conservation of energy Lagrangian mechanics, another theoretical formalism, based on the principle of the least action Classical statistical mechanics generalizes ordinary classical mechanics to consider systems in an unknown state; often used to derive thermodynamic properties. Celestial mechanics, the motion of bodies in space: planets, comets, stars, galaxies, etc. Astrodynamics, spacecraft navigation, etc. Solid mechanics, elasticity, plasticity, or viscoelasticity exhibited by deformable solids Fracture mechanics Acoustics, sound (density, variation, propagation) in solids, fluids and gases Statics, semi-rigid bodies in mechanical equilibrium Fluid mechanics, the motion of fluids Soil mechanics, mechanical behavior of soils Continuum mechanics, mechanics of continua (both solid and fluid) Hydraulics, mechanical properties of liquids Fluid statics, liquids in equilibrium Applied mechanics (also known as engineering mechanics) Biomechanics, solids, fluids, etc. in biology Biophysics, physical processes in living organisms Relativistic or Einsteinian mechanics Quantum The following are categorized as being part of quantum mechanics: Schrödinger wave mechanics, used to describe the movements of the wavefunction of a single particle. Matrix mechanics is an alternative formulation that allows considering systems with a finite-dimensional state space. Quantum statistical mechanics generalizes ordinary quantum mechanics to consider systems in an unknown state; often used to derive thermodynamic properties. Particle physics, the motion, structure, and behavior of fundamental particles Nuclear physics, the motion, structure, and reactions of nuclei Condensed matter physics, quantum gases, solids, liquids, etc. Historically, classical mechanics had been around for nearly a quarter millennium before quantum mechanics developed. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica, developed over the seventeenth century. Quantum mechanics developed later, over the early twentieth century, precipitated by Planck's postulate and Albert Einstein's explanation of the photoelectric effect. Both fields are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the extensive use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them. Quantum mechanics is of broader scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers, i.e.
if quantum mechanics is applied to large systems (e.g. a baseball), the result would be almost the same as if classical mechanics had been applied. Quantum mechanics has superseded classical mechanics at the foundation level and is indispensable for the explanation and prediction of processes at the molecular, atomic, and sub-atomic level. However, for macroscopic processes classical mechanics is able to solve problems which are unmanageably difficult (mainly due to computational limits) in quantum mechanics and hence remains useful and well used. Modern descriptions of such behavior begin with a careful definition of such quantities as displacement (distance moved), time, velocity, acceleration, mass, and force. Until about 400 years ago, however, motion was explained from a very different point of view. For example, following the ideas of Greek philosopher and scientist Aristotle, scientists reasoned that a cannonball falls down because its natural position is in the Earth; the Sun, the Moon, and the stars travel in circles around the Earth because it is the nature of heavenly objects to travel in perfect circles. Often cited as the father of modern science, Galileo brought together the ideas of other great thinkers of his time and began to calculate motion in terms of distance travelled from some starting position and the time that it took. He showed that the speed of falling objects increases steadily during the time of their fall. This acceleration is the same for heavy objects as for light ones, provided air friction (air resistance) is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass and relating these to acceleration. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity. For atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion. Relativistic Akin to the distinction between quantum and classical mechanics, Albert Einstein's general and special theories of relativity have expanded the scope of Newton and Galileo's formulation of mechanics. The differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a body approaches the speed of light. For instance, in Newtonian mechanics, the kinetic energy of a free particle is E = (1/2)mv^2, whereas in relativistic mechanics, it is E = (γ − 1)mc^2 (where γ is the Lorentz factor; this formula reduces to the Newtonian expression in the low energy limit). The two expressions are compared numerically in the sketch below. For high-energy processes, quantum mechanics must be adjusted to account for special relativity; this has led to the development of quantum field theory. Professional organizations Applied Mechanics Division, American Society of Mechanical Engineers Fluid Dynamics Division, American Physical Society Society for Experimental Mechanics Institution of Mechanical Engineers is the United Kingdom's qualifying body for mechanical engineers and has been the home of Mechanical Engineers for over 150 years. International Union of Theoretical and Applied Mechanics
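As a numerical illustration of the two kinetic-energy expressions quoted above, the following minimal Python sketch evaluates (1/2)mv^2 and (γ − 1)mc^2 for a 1 kg body at several fractions of the speed of light; the function names and the chosen speeds are illustrative assumptions.

import math

def newtonian_ke(m, v):
    """Newtonian kinetic energy: (1/2) * m * v**2."""
    return 0.5 * m * v**2

def relativistic_ke(m, v, c=299_792_458.0):
    """Relativistic kinetic energy: (gamma - 1) * m * c**2, with gamma the Lorentz factor."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma - 1.0) * m * c**2

c = 299_792_458.0   # speed of light in m/s
m = 1.0             # a 1 kg test body
for fraction in (0.01, 0.1, 0.5, 0.9):
    v = fraction * c
    print(f"v = {fraction:4.2f} c:  Newtonian {newtonian_ke(m, v):.3e} J,"
          f"  relativistic {relativistic_ke(m, v):.3e} J")

At one percent of the speed of light the two values agree to better than 0.01 percent, while at 0.9c the relativistic value is already more than three times the Newtonian one.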
Physical sciences
Basics_10
null
19562
https://en.wikipedia.org/wiki/Mandelbrot%20set
Mandelbrot set
The Mandelbrot set () is a two-dimensional set with a relatively simple definition that exhibits great complexity, especially as it is magnified. It is popular for its aesthetic appeal and fractal structures. The set is defined in the complex plane as the complex numbers for which the function does not diverge to infinity when iterated starting at , i.e., for which the sequence , , etc., remains bounded in absolute value. This set was first defined and drawn by Robert W. Brooks and Peter Matelski in 1978, as part of a study of Kleinian groups. Afterwards, in 1980, Benoit Mandelbrot obtained high-quality visualizations of the set while working at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York. Images of the Mandelbrot set exhibit an infinitely complicated boundary that reveals progressively ever-finer recursive detail at increasing magnifications; mathematically, the boundary of the Mandelbrot set is a fractal curve. The "style" of this recursive detail depends on the region of the set boundary being examined. Mandelbrot set images may be created by sampling the complex numbers and testing, for each sample point , whether the sequence goes to infinity. Treating the real and imaginary parts of as image coordinates on the complex plane, pixels may then be colored according to how soon the sequence crosses an arbitrarily chosen threshold (the threshold must be at least 2, as −2 is the complex number with the largest magnitude within the set, but otherwise the threshold is arbitrary). If is held constant and the initial value of is varied instead, the corresponding Julia set for the point is obtained. The Mandelbrot set has become popular outside mathematics both for its aesthetic appeal and as an example of a complex structure arising from the application of simple rules. It is one of the best-known examples of mathematical visualization, mathematical beauty, and motif. History The Mandelbrot set has its origin in complex dynamics, a field first investigated by the French mathematicians Pierre Fatou and Gaston Julia at the beginning of the 20th century. The fractal was first defined and drawn in 1978 by Robert W. Brooks and Peter Matelski as part of a study of Kleinian groups. On 1 March 1980, at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York, Benoit Mandelbrot first visualized the set. Mandelbrot studied the parameter space of quadratic polynomials in an article that appeared in 1980. The mathematical study of the Mandelbrot set really began with work by the mathematicians Adrien Douady and John H. Hubbard (1985), who established many of its fundamental properties and named the set in honor of Mandelbrot for his influential work in fractal geometry. The mathematicians Heinz-Otto Peitgen and Peter Richter became well known for promoting the set with photographs, books (1986), and an internationally touring exhibit of the German Goethe-Institut (1985). The cover article of the August 1985 Scientific American introduced the algorithm for computing the Mandelbrot set. The cover was created by Peitgen, Richter and Saupe at the University of Bremen. The Mandelbrot set became prominent in the mid-1980s as a computer-graphics demo, when personal computers became powerful enough to plot and display the set in high resolution. The work of Douady and Hubbard occurred during an increase in interest in complex dynamics and abstract mathematics, and the study of the Mandelbrot set has been a centerpiece of this field ever since. 
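The sampling test described above can be sketched in a few lines of Python; the function name, the iteration cap and the escape threshold of 2 are illustrative choices rather than part of any standard library.

def escapes(c, max_iterations=100, threshold=2.0):
    """Return the iteration at which z = z*z + c (starting from z = 0) first
    exceeds the threshold in absolute value, or None if it stays bounded."""
    z = 0
    for n in range(max_iterations):
        if abs(z) > threshold:
            return n
        z = z * z + c
    return None

# c = 1 escapes quickly (0, 1, 2, 5, 26, ...), so 1 lies outside the set;
# c = -1 cycles between 0 and -1 and never escapes, so -1 is (numerically) inside;
# c = 0.25, the cusp of the main cardioid, also stays bounded.
print(escapes(1))      # a small iteration count
print(escapes(-1))     # None
print(escapes(0.25))   # None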
Formal definition The Mandelbrot set is the uncountable set of values of c in the complex plane for which the orbit of the critical point under iteration of the quadratic map remains bounded. Thus, a complex number c is a member of the Mandelbrot set if, when starting with and applying the iteration repeatedly, the absolute value of remains bounded for all . For example, for c = 1, the sequence is 0, 1, 2, 5, 26, ..., which tends to infinity, so 1 is not an element of the Mandelbrot set. On the other hand, for , the sequence is 0, −1, 0, −1, 0, ..., which is bounded, so −1 does belong to the set. The Mandelbrot set can also be defined as the connectedness locus of the family of quadratic polynomials , the subset of the space of parameters for which the Julia set of the corresponding polynomial forms a connected set. In the same way, the boundary of the Mandelbrot set can be defined as the bifurcation locus of this quadratic family, the subset of parameters near which the dynamic behavior of the polynomial (when it is iterated repeatedly) changes drastically. Basic properties The Mandelbrot set is a compact set, since it is closed and contained in the closed disk of radius 2 centred on zero. A point belongs to the Mandelbrot set if and only if for all . In other words, the absolute value of must remain at or below 2 for to be in the Mandelbrot set, , and if that absolute value exceeds 2, the sequence will escape to infinity. Since , it follows that , establishing that will always be in the closed disk of radius 2 around the origin. The intersection of with the real axis is the interval . The parameters along this interval can be put in one-to-one correspondence with those of the real logistic family, The correspondence is given by This gives a correspondence between the entire parameter space of the logistic family and that of the Mandelbrot set. Douady and Hubbard showed that the Mandelbrot set is connected. They constructed an explicit conformal isomorphism between the complement of the Mandelbrot set and the complement of the closed unit disk. Mandelbrot had originally conjectured that the Mandelbrot set is disconnected. This conjecture was based on computer pictures generated by programs that are unable to detect the thin filaments connecting different parts of . Upon further experiments, he revised his conjecture, deciding that should be connected. A topological proof of the connectedness was discovered in 2001 by Jeremy Kahn. The dynamical formula for the uniformisation of the complement of the Mandelbrot set, arising from Douady and Hubbard's proof of the connectedness of , gives rise to external rays of the Mandelbrot set. These rays can be used to study the Mandelbrot set in combinatorial terms and form the backbone of the Yoccoz parapuzzle. The boundary of the Mandelbrot set is the bifurcation locus of the family of quadratic polynomials. In other words, the boundary of the Mandelbrot set is the set of all parameters for which the dynamics of the quadratic map exhibits sensitive dependence on i.e. changes abruptly under arbitrarily small changes of It can be constructed as the limit set of a sequence of plane algebraic curves, the Mandelbrot curves, of the general type known as polynomial lemniscates. The Mandelbrot curves are defined by setting , and then interpreting the set of points in the complex plane as a curve in the real Cartesian plane of degree in x and y. Each curve is the mapping of an initial circle of radius 2 under . 
These algebraic curves appear in images of the Mandelbrot set computed using the "escape time algorithm" mentioned below. Other properties Main cardioid and period bulbs The main cardioid is the period 1 continent. It is the region of parameters for which the map has an attracting fixed point. It consists of all parameters of the form for some in the open unit disk. To the left of the main cardioid, attached to it at the point , a circular bulb, the period-2 bulb is visible. The bulb consists of for which has an attracting cycle of period 2. It is the filled circle of radius 1/4 centered around −1. More generally, for every positive integer , there are circular bulbs tangent to the main cardioid called period-q bulbs (where denotes the Euler phi function), which consist of parameters for which has an attracting cycle of period . More specifically, for each primitive th root of unity (where ), there is one period-q bulb called the bulb, which is tangent to the main cardioid at the parameter and which contains parameters with -cycles having combinatorial rotation number . More precisely, the periodic Fatou components containing the attracting cycle all touch at a common point (commonly called the -fixed point). If we label these components in counterclockwise orientation, then maps the component to the component . The change of behavior occurring at is known as a bifurcation: the attracting fixed point "collides" with a repelling period-q cycle. As we pass through the bifurcation parameter into the -bulb, the attracting fixed point turns into a repelling fixed point (the -fixed point), and the period-q cycle becomes attracting. Hyperbolic components Bulbs that are interior components of the Mandelbrot set in which the maps have an attracting periodic cycle are called hyperbolic components. It is conjectured that these are the only interior regions of and that they are dense in . This problem, known as density of hyperbolicity, is one of the most important open problems in complex dynamics. Hypothetical non-hyperbolic components of the Mandelbrot set are often referred to as "queer" or ghost components. For real quadratic polynomials, this question was proved in the 1990s independently by Lyubich and by Graczyk and Świątek. (Note that hyperbolic components intersecting the real axis correspond exactly to periodic windows in the Feigenbaum diagram. So this result states that such windows exist near every parameter in the diagram.) Not every hyperbolic component can be reached by a sequence of direct bifurcations from the main cardioid of the Mandelbrot set. Such a component can be reached by a sequence of direct bifurcations from the main cardioid of a little Mandelbrot copy (see below). Each of the hyperbolic components has a center, which is a point c such that the inner Fatou domain for has a super-attracting cycle—that is, that the attraction is infinite. This means that the cycle contains the critical point 0, so that 0 is iterated back to itself after some iterations. Therefore, for some n. If we call this polynomial (letting it depend on c instead of z), we have that and that the degree of is . Therefore, constructing the centers of the hyperbolic components is possible by successively solving the equations . The number of new centers produced in each step is given by Sloane's . Local connectivity It is conjectured that the Mandelbrot set is locally connected. This conjecture is known as MLC (for Mandelbrot locally connected). By the work of Adrien Douady and John H. 
Hubbard, this conjecture would result in a simple abstract "pinched disk" model of the Mandelbrot set. In particular, it would imply the important hyperbolicity conjecture mentioned above. The work of Jean-Christophe Yoccoz established local connectivity of the Mandelbrot set at all finitely renormalizable parameters; that is, roughly speaking those contained only in finitely many small Mandelbrot copies. Since then, local connectivity has been proved at many other points of , but the full conjecture is still open. Self-similarity The Mandelbrot set is self-similar under magnification in the neighborhoods of the Misiurewicz points. It is also conjectured to be self-similar around generalized Feigenbaum points (e.g., −1.401155 or −0.1528 + 1.0397i), in the sense of converging to a limit set. The Mandelbrot set in general is quasi-self-similar, as small slightly different versions of itself can be found at arbitrarily small scales. These copies of the Mandelbrot set are all slightly different, mostly because of the thin threads connecting them to the main body of the set. Further results The Hausdorff dimension of the boundary of the Mandelbrot set equals 2 as determined by a result of Mitsuhiro Shishikura. The fact that this is greater by a whole integer than its topological dimension, which is 1, reflects the extreme fractal nature of the Mandelbrot set boundary. Roughly speaking, Shishikura's result states that the Mandelbrot set boundary is so "wiggly" that it locally fills space as efficiently as a two-dimensional planar region. Curves with Hausdorff dimension 2, despite being (topologically) 1-dimensional, are oftentimes capable of having nonzero area (more formally, a nonzero planar Lebesgue measure). Whether this is the case for the Mandelbrot set boundary is an unsolved problem. It has been shown that the generalized Mandelbrot set in higher-dimensional hypercomplex number spaces (i.e. when the power of the iterated variable tends to infinity) is convergent to the unit (-1)-sphere. In the Blum–Shub–Smale model of real computation, the Mandelbrot set is not computable, but its complement is computably enumerable. Many simple objects (e.g., the graph of exponentiation) are also not computable in the BSS model. At present, it is unknown whether the Mandelbrot set is computable in models of real computation based on computable analysis, which correspond more closely to the intuitive notion of "plotting the set by a computer". Hertling has shown that the Mandelbrot set is computable in this model if the hyperbolicity conjecture is true. Relationship with Julia sets As a consequence of the definition of the Mandelbrot set, there is a close correspondence between the geometry of the Mandelbrot set at a given point and the structure of the corresponding Julia set. For instance, a value of c belongs to the Mandelbrot set if and only if the corresponding Julia set is connected. Thus, the Mandelbrot set may be seen as a map of the connected Julia sets. This principle is exploited in virtually all deep results on the Mandelbrot set. For example, Shishikura proved that, for a dense set of parameters in the boundary of the Mandelbrot set, the Julia set has Hausdorff dimension two, and then transfers this information to the parameter plane. Similarly, Yoccoz first proved the local connectivity of Julia sets, before establishing it for the Mandelbrot set at the corresponding parameters. 
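Referring back to the main cardioid and period-2 bulb described earlier, both regions admit simple closed-form interior tests that renderers commonly use to skip the escape-time iteration for points known to lie inside the set. The sketch below is an assumed implementation of those standard tests: the cardioid test is the usual algebraic form of the condition c = μ/2·(1 − μ/2) with |μ| < 1, and the period-2 test is the disk of radius 1/4 about −1.

def in_main_cardioid(c):
    """Interior test for the main cardioid, the region of parameters with an
    attracting fixed point (c = mu/2 * (1 - mu/2) for |mu| < 1)."""
    x, y = c.real, c.imag
    q = (x - 0.25) ** 2 + y ** 2
    return q * (q + (x - 0.25)) < 0.25 * y ** 2

def in_period2_bulb(c):
    """Interior test for the period-2 bulb: the disk of radius 1/4 centred at -1."""
    return abs(c + 1) < 0.25

print(in_main_cardioid(0 + 0j))     # True: c = 0 has the attracting fixed point 0
print(in_period2_bulb(-1 + 0j))     # True: centre of the period-2 bulb
print(in_main_cardioid(0.3 + 0j))   # False: beyond the cusp at c = 1/4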
Geometry For every rational number , where p and q are coprime, a hyperbolic component of period q bifurcates from the main cardioid at a point on the edge of the cardioid corresponding to an internal angle of . The part of the Mandelbrot set connected to the main cardioid at this bifurcation point is called the p/q-limb. Computer experiments suggest that the diameter of the limb tends to zero like . The best current estimate known is the Yoccoz-inequality, which states that the size tends to zero like . A period-q limb will have "antennae" at the top of its limb. The period of a given bulb is determined by counting these antennas. The numerator of the rotation number, p, is found by numbering each antenna counterclockwise from the limb from 1 to and finding which antenna is the shortest. Pi in the Mandelbrot set In an attempt to demonstrate that the thickness of the p/q-limb is zero, David Boll carried out a computer experiment in 1991, where he computed the number of iterations required for the series to diverge for ( being the location thereof). As the series does not diverge for the exact value of , the number of iterations required increases with a small . It turns out that multiplying the value of with the number of iterations required yields an approximation of that becomes better for smaller . For example, for = 0.0000001, the number of iterations is 31415928 and the product is 3.1415928. In 2001, Aaron Klebanoff proved Boll's discovery. Fibonacci sequence in the Mandelbrot set The Mandelbrot Set features a fundamental cardioid shape adorned with numerous bulbs directly attached to it. Understanding the arrangement of these bulbs requires a detailed examination of the Mandelbrot Set's boundary. As one zooms into specific portions with a geometric perspective, precise deducible information about the location within the boundary and the corresponding dynamical behavior for parameters drawn from associated bulbs emerges. The iteration of the quadratic polynomial , where  is a parameter drawn from one of the bulbs attached to the main cardioid within the Mandelbrot Set, gives rise to maps featuring attracting cycles of a specified period  and a rotation number . In this context, the attracting cycle of  exhibits rotational motion around a central fixed point, completing an average of  revolutions at each iteration. The bulbs within the Mandelbrot Set are distinguishable by both their attracting cycles and the geometric features of their structure. Each bulb is characterized by an antenna attached to it, emanating from a junction point and displaying a certain number of spokes indicative of its period. For instance, the bulb is identified by its attracting cycle with a rotation number of . Its distinctive antenna-like structure comprises a junction point from which five spokes emanate. Among these spokes, called the principal spoke is directly attached to the bulb, and the 'smallest' non-principal spoke is positioned approximately of a turn counterclockwise from the principal spoke, providing a distinctive identification as a -bulb. This raises the question: how does one discern which among these spokes is the 'smallest'? In the theory of external rays developed by Douady and Hubbard, there are precisely two external rays landing at the root point of a satellite hyperbolic component of the Mandelbrot Set. Each of these rays possesses an external angle that undergoes doubling under the angle doubling map . 
According to this theorem, when two rays land at the same point, no other rays between them can intersect. Thus, the 'size' of this region is measured by determining the length of the arc between the two angles. If the root point of the main cardioid is the cusp at , then the main cardioid is the -bulb. The root point of any other bulb is just the point where this bulb is attached to the main cardioid. This prompts the inquiry: which is the largest bulb between the root points of the and -bulbs? It is clearly the -bulb. And note that is obtained from the previous two fractions by Farey addition, i.e., adding the numerators and adding the denominators Similarly, the largest bulb between the and -bulbs is the -bulb, again given by Farey addition. The largest bulb between the and -bulb is the -bulb, while the largest bulb between the and -bulbs is the -bulb, and so on. The arrangement of bulbs within the Mandelbrot set follows a remarkable pattern governed by the Farey tree, a structure encompassing all rationals between and . This ordering positions the bulbs along the boundary of the main cardioid precisely according to the rational numbers in the unit interval. Starting with the bulb at the top and progressing towards the circle, the sequence unfolds systematically: the largest bulb between and is , between and is , and so forth. Intriguingly, the denominators of the periods of circular bulbs at sequential scales in the Mandelbrot Set conform to the Fibonacci number sequence, the sequence that is made by adding the previous two terms – 1, 2, 3, 5, 8, 13, 21... The Fibonacci sequence manifests in the number of spiral arms at a unique spot on the Mandelbrot set, mirrored both at the top and bottom. This distinctive location demands the highest number of iterations of  for a detailed fractal visual, with intricate details repeating as one zooms in. Image gallery of a zoom sequence The boundary of the Mandelbrot set shows more intricate detail the closer one looks or magnifies the image. The following is an example of an image sequence zooming to a selected c value. The magnification of the last image relative to the first one is about 1010 to 1. Relating to an ordinary computer monitor, it represents a section of a Mandelbrot set with a diameter of 4 million kilometers. The seahorse "body" is composed by 25 "spokes" consisting of two groups of 12 "spokes" each and one "spoke" connecting to the main cardioid. These two groups can be attributed by some metamorphosis to the two "fingers" of the "upper hand" of the Mandelbrot set; therefore, the number of "spokes" increases from one "seahorse" to the next by 2; the "hub" is a Misiurewicz point. Between the "upper part of the body" and the "tail", there is a distorted copy of the Mandelbrot set, called a "satellite". The islands in the third-to-last step seem to consist of infinitely many parts, as is the case for the corresponding Julia set . They are connected by tiny structures, so that the whole represents a simply connected set. The tiny structures meet each other at a satellite in the center that is too small to be recognized at this magnification. The value of for the corresponding is not the image center but, relative to the main body of the Mandelbrot set, has the same position as the center of this image relative to the satellite shown in the 6th step. Inner structure While the Mandelbrot set is typically rendered showing outside boundary detail, structure within the bounded set can also be revealed. 
For example, while calculating whether or not a given c value is bound or unbound, while it remains bound, the maximum value that this number reaches can be compared to the c value at that location. If the sum of squares method is used, the calculated number would be max:(real^2 + imaginary^2) - c:(real^2 + imaginary^2). The magnitude of this calculation can be rendered as a value on a gradient. This produces results like the following, gradients with distinct edges and contours as the boundaries are approached. The animations serve to highlight the gradient boundaries. Generalizations Multibrot sets Multibrot sets are bounded sets found in the complex plane for members of the general monic univariate polynomial family of recursions . For an integer d, these sets are connectedness loci for the Julia sets built from the same formula. The full cubic connectedness locus has also been studied; here one considers the two-parameter recursion , whose two critical points are the complex square roots of the parameter k. A parameter is in the cubic connectedness locus if both critical points are stable. For general families of holomorphic functions, the boundary of the Mandelbrot set generalizes to the bifurcation locus. The Multibrot set is obtained by varying the value of the exponent d. The article has a video that shows the development from d = 0 to 7, at which point there are 6 i.e. lobes around the perimeter. In general, when d is a positive integer, the central region in each of these sets is always an epicycloid of cusps. A similar development with negative integral exponents results in clefts on the inside of a ring, where the main central region of the set is a hypocycloid of cusps. Higher dimensions There is no perfect extension of the Mandelbrot set into 3D, because there is no 3D analogue of the complex numbers for it to iterate on. There is an extension of the complex numbers into 4 dimensions, the quaternions, that creates a perfect extension of the Mandelbrot set and the Julia sets into 4 dimensions. These can then be either cross-sectioned or projected into a 3D structure. The quaternion (4-dimensional) Mandelbrot set is simply a solid of revolution of the 2-dimensional Mandelbrot set (in the j-k plane), and is therefore uninteresting to look at. Taking a 3-dimensional cross section at results in a solid of revolution of the 2-dimensional Mandelbrot set around the real axis. Other non-analytic mappings Of particular interest is the tricorn fractal, the connectedness locus of the anti-holomorphic family . The tricorn (also sometimes called the Mandelbar) was encountered by Milnor in his study of parameter slices of real cubic polynomials. It is not locally connected. This property is inherited by the connectedness locus of real cubic polynomials. Another non-analytic generalization is the Burning Ship fractal, which is obtained by iterating the following: . Computer drawings There exist a multitude of various algorithms for plotting the Mandelbrot set via a computing device. Here, the most widely used and simplest algorithm will be demonstrated, namely, the naïve "escape time algorithm". In the escape time algorithm, a repeating calculation is performed for each x, y point in the plot area and based on the behavior of that calculation, a color is chosen for that pixel. The x and y locations of each point are used as starting values in a repeating, or iterating calculation (described in detail below). The result of each iteration is used as the starting values for the next. 
The values are checked during each iteration to see whether they have reached a critical "escape" condition, or "bailout". If that condition is reached, the calculation is stopped, the pixel is drawn, and the next x, y point is examined. The color of each point represents how quickly the values reached the escape point. Often black is used to show values that fail to escape before the iteration limit, and gradually brighter colors are used for points that escape. This gives a visual representation of how many cycles were required before reaching the escape condition. To render such an image, the region of the complex plane we are considering is subdivided into a certain number of pixels. To color any such pixel, let c be the midpoint of that pixel. Iterate the critical point 0 under the map z ↦ z^2 + c, checking at each step whether the orbit point has a radius larger than 2. When this is the case, c does not belong to the Mandelbrot set, and color the pixel according to the number of iterations used to find out. Otherwise, keep iterating up to a fixed number of steps, after which we decide that our parameter is "probably" in the Mandelbrot set, or at least very close to it, and color the pixel black. In pseudocode, this algorithm would look as follows. The algorithm does not use complex numbers and manually simulates complex-number operations using two real numbers, for those who do not have a complex data type. The program may be simplified if the programming language includes complex-data-type operations.

for each pixel (Px, Py) on the screen do
    x0 := scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.00, 0.47))
    y0 := scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1.12, 1.12))
    x := 0.0
    y := 0.0
    iteration := 0
    max_iteration := 1000
    while (x^2 + y^2 ≤ 2^2 AND iteration < max_iteration) do
        xtemp := x^2 - y^2 + x0
        y := 2*x*y + y0
        x := xtemp
        iteration := iteration + 1
    color := palette[iteration]
    plot(Px, Py, color)

Here, relating the pseudocode to c and z: c = x0 + i*y0 and z = x + i*y, so z^2 + c = (x^2 - y^2 + x0) + (2*x*y + y0)*i, and so, as can be seen in the pseudocode in the computation of x and y: x = x^2 - y^2 + x0 and y = 2*x*y + y0. To get colorful images of the set, the assignment of a color to each value of the number of executed iterations can be made using one of a variety of functions (linear, exponential, etc.). Python code Here is the code implementing the above algorithm in Python:

import numpy as np
import matplotlib.pyplot as plt

# setting parameters (these values can be changed)
xDomain, yDomain = np.linspace(-2, 2, 500), np.linspace(-2, 2, 500)
bound = 2
max_iterations = 50         # any positive integer value
colormap = "nipy_spectral"  # set to any matplotlib valid colormap
func = lambda z, p, c: z**p + c

# computing 2-d array to represent the mandelbrot-set
iterationArray = []
for y in yDomain:
    row = []
    for x in xDomain:
        z = 0
        p = 2
        c = complex(x, y)
        for iterationNumber in range(max_iterations):
            if abs(z) >= bound:
                row.append(iterationNumber)
                break
            else:
                try:
                    z = func(z, p, c)
                except ValueError:
                    z = c
                except ZeroDivisionError:
                    z = c
        else:
            row.append(0)
    iterationArray.append(row)

# plotting the data
ax = plt.axes()
ax.set_aspect("equal")
graph = ax.pcolormesh(xDomain, yDomain, iterationArray, cmap=colormap)
plt.colorbar(graph)
plt.xlabel("Real-Axis")
plt.ylabel("Imaginary-Axis")
plt.show()

The value of the power variable p can be modified to generate an image of the equivalent multibrot set (z ↦ z^p + c). For example, setting p = 5 produces the associated image.
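The "Pi in the Mandelbrot set" observation described earlier can be reproduced with a short escape-time loop. The sketch below is an illustrative experiment, not part of the code above; it uses the same bailout radius of 2, and the chosen values of epsilon are arbitrary.

def iterations_to_escape(c, bailout=2.0, max_iterations=100_000_000):
    """Count iterations of z -> z*z + c (from z = 0) until |z| exceeds the bailout."""
    z = 0.0 + 0.0j
    for n in range(1, max_iterations + 1):
        z = z * z + c
        if abs(z) > bailout:
            return n
    return None

# Boll's experiment at the "neck" c = -3/4 + epsilon*i:
# epsilon times the escape count approaches pi as epsilon shrinks.
for epsilon in (1e-2, 1e-3, 1e-4):
    n = iterations_to_escape(complex(-0.75, epsilon))
    print(f"epsilon = {epsilon:g}: {n} iterations, product = {n * epsilon:.4f}")

The printed products approach π as epsilon shrinks, matching the behaviour Boll reported.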
Mathematics
Other
null
19566
https://en.wikipedia.org/wiki/Main-group%20element
Main-group element
In chemistry and atomic physics, the main group is the group of elements (sometimes called the representative elements) whose lightest members are represented by helium, lithium, beryllium, boron, carbon, nitrogen, oxygen, and fluorine as arranged in the periodic table of the elements. The main group includes the elements (except hydrogen, which is sometimes not included) in groups 1 and 2 (s-block), and groups 13 to 18 (p-block). The s-block elements are primarily characterised by one main oxidation state, and the p-block elements, when they have multiple oxidation states, often have common oxidation states separated by two units. Main-group elements (with some of the lighter transition metals) are the most abundant elements on Earth, in the Solar System, and in the universe. Group 12 elements are often considered to be transition metals; however, zinc (Zn), cadmium (Cd), and mercury (Hg) share some properties of both groups, and some scientists believe they should be included in the main group. Occasionally, even the group 3 elements as well as the lanthanides and actinides have been included, because especially the group 3 elements and many lanthanides are electropositive elements with only one main oxidation state like the group 1 and 2 elements. The position of the actinides is more questionable, but the most common and stable of them, thorium (Th) and uranium (U), are similar to main-group elements as thorium is an electropositive element with only one main oxidation state (+4), and uranium has two main ones separated by two oxidation units (+4 and +6). In older nomenclature, the main-group elements are groups IA and IIA, and groups IIIB to 0 (CAS groups IIIA to VIIIA). Group 12 is labelled as group IIB in both systems. Group 3 is labelled as group IIIA in the older nomenclature (CAS group IIIB).
Physical sciences
Periodic table
Chemistry
19568
https://en.wikipedia.org/wiki/Microscope
Microscope
A microscope () is a laboratory instrument used to examine objects that are too small to be seen by the naked eye. Microscopy is the science of investigating small objects and structures using a microscope. Microscopic means being invisible to the eye unless aided by a microscope. There are many types of microscopes, and they may be grouped in different ways. One way is to describe the method an instrument uses to interact with a sample and produce images, either by sending a beam of light or electrons through a sample in its optical path, by detecting photon emissions from a sample, or by scanning across and a short distance from the surface of a sample using a probe. The most common microscope (and the first to be invented) is the optical microscope, which uses lenses to refract visible light that passed through a thinly sectioned sample to produce an observable image. Other major types of microscopes are the fluorescence microscope, electron microscope (both the transmission electron microscope and the scanning electron microscope) and various types of scanning probe microscopes. History Although objects resembling lenses date back 4,000 years and there are Greek accounts of the optical properties of water-filled spheres (5th century BC) followed by many centuries of writings on optics, the earliest known use of simple microscopes (magnifying glasses) dates back to the widespread use of lenses in eyeglasses in the 13th century. The earliest known examples of compound microscopes, which combine an objective lens near the specimen with an eyepiece to view a real image, appeared in Europe around 1620. The inventor is unknown, even though many claims have been made over the years. Several revolve around the spectacle-making centers in the Netherlands, including claims it was invented in 1590 by Zacharias Janssen (claim made by his son) or Zacharias' father, Hans Martens, or both, claims it was invented by their neighbor and rival spectacle maker, Hans Lippershey (who applied for the first telescope patent in 1608), and claims it was invented by expatriate Cornelis Drebbel, who was noted to have a version in London in 1619. Galileo Galilei (also sometimes cited as compound microscope inventor) seems to have found after 1610 that he could close focus his telescope to view small objects and, after seeing a compound microscope built by Drebbel exhibited in Rome in 1624, built his own improved version. Giovanni Faber coined the name microscope for the compound microscope Galileo submitted to the in 1625 (Galileo had called it the occhiolino 'little eye'). René Descartes (Dioptrique, 1637) describes microscopes wherein a concave mirror, with its concavity towards the object, is used, in conjunction with a lens, for illuminating the object, which is mounted on a point fixing it at the focus of the mirror. Rise of modern light microscopes The first detailed account of the microscopic anatomy of organic tissue based on the use of a microscope did not appear until 1644, in Giambattista Odierna's L'occhio della mosca, or The Fly's Eye. The microscope was still largely a novelty until the 1660s and 1670s when naturalists in Italy, the Netherlands and England began using them to study biology. Italian scientist Marcello Malpighi, called the father of histology by some historians of biology, began his analysis of biological structures with the lungs. The publication in 1665 of Robert Hooke's Micrographia had a huge impact, largely because of its impressive illustrations. 
Hooke created tiny lenses of small glass globules made by fusing the ends of threads of spun glass. A significant contribution came from Antonie van Leeuwenhoek who achieved up to 300 times magnification using a simple single lens microscope. He sandwiched a very small glass ball lens between the holes in two metal plates riveted together, and attached a needle, adjustable by screws, to mount the specimen. Then, Van Leeuwenhoek re-discovered red blood cells (after Jan Swammerdam) and spermatozoa, and helped popularise the use of microscopes to view biological ultrastructure. On 9 October 1676, van Leeuwenhoek reported the discovery of micro-organisms. The performance of a compound light microscope depends on the quality and correct use of the condenser lens system to focus light on the specimen and the objective lens to capture the light from the specimen and form an image. Early instruments were limited until this principle was fully appreciated and developed from the late 19th to very early 20th century, and until electric lamps were available as light sources. In 1893 August Köhler developed a key principle of sample illumination, Köhler illumination, which is central to achieving the theoretical limits of resolution for the light microscope. This method of sample illumination produces even lighting and overcomes the limited contrast and resolution imposed by early techniques of sample illumination. Further developments in sample illumination came from the development of phase contrast by Frits Zernike in the 1930s (recognised with the Nobel Prize in Physics in 1953), and differential interference contrast illumination by Georges Nomarski in 1955; both of which allow imaging of unstained, transparent samples. Electron microscopes In the early 20th century a significant alternative to the light microscope was developed, an instrument that uses a beam of electrons rather than light to generate an image. The German physicist Ernst Ruska, working with electrical engineer Max Knoll, developed the first prototype electron microscope in 1931, a transmission electron microscope (TEM). The transmission electron microscope works on similar principles to an optical microscope but uses electrons in the place of light and electromagnets in the place of glass lenses. Use of electrons, instead of light, allows for much higher resolution. Development of the transmission electron microscope was quickly followed in 1935 by the development of the scanning electron microscope by Max Knoll. Although TEMs were being used for research before WWII, and became popular afterwards, the SEM was not commercially available until 1965. Transmission electron microscopes became popular following the Second World War. Ernst Ruska, working at Siemens, developed the first commercial transmission electron microscope and, in the 1950s, major scientific conferences on electron microscopy started being held. In 1965, the first commercial scanning electron microscope was developed by Professor Sir Charles Oatley and his postgraduate student Gary Stewart, and marketed by the Cambridge Instrument Company as the "Stereoscan". One of the latest discoveries made about using an electron microscope is the ability to identify a virus. Since this microscope produces a visible, clear image of small organelles, there is no need for reagents to see the virus or harmful cells, resulting in a more efficient way to detect pathogens.
Scanning probe microscopes From 1981 to 1983 Gerd Binnig and Heinrich Rohrer worked at IBM in Zürich, Switzerland to study the quantum tunnelling phenomenon. They created a practical instrument, a scanning probe microscope from quantum tunnelling theory, that read very small forces exchanged between a probe and the surface of a sample. The probe approaches the surface so closely that electrons can flow continuously between probe and sample, making a current from surface to probe. The microscope was not initially well received due to the complex nature of the underlying theoretical explanations. In 1984 Jerry Tersoff and D.R. Hamann, while at AT&T's Bell Laboratories in Murray Hill, New Jersey, began publishing articles that tied theory to the experimental results obtained by the instrument. This was closely followed in 1985 with functioning commercial instruments, and in 1986 with Gerd Binnig, Quate, and Gerber's invention of the atomic force microscope, then Binnig's and Rohrer's Nobel Prize in Physics for the SPM. New types of scanning probe microscope have continued to be developed as the ability to machine ultra-fine probes and tips has advanced. Fluorescence microscopes The most recent developments in light microscopy largely centre on the rise of fluorescence microscopy in biology. During the last decades of the 20th century, particularly in the post-genomic era, many techniques for fluorescent staining of cellular structures were developed. The main groups of techniques involve targeted chemical staining of particular cell structures, for example, the chemical compound DAPI to label DNA, use of antibodies conjugated to fluorescent reporters, see immunofluorescence, and fluorescent proteins, such as green fluorescent protein. These techniques use these different fluorophores for analysis of cell structure at a molecular level in both live and fixed samples. The rise of fluorescence microscopy drove the development of a major modern microscope design, the confocal microscope. The principle was patented in 1957 by Marvin Minsky, although laser technology limited practical application of the technique. It was not until 1978 that Thomas and Christoph Cremer developed the first practical confocal laser scanning microscope, and the technique rapidly gained popularity through the 1980s. Super resolution microscopes Much current research (in the early 21st century) on optical microscope techniques is focused on development of superresolution analysis of fluorescently labelled samples. Structured illumination can improve resolution by around two to four times and techniques like stimulated emission depletion (STED) microscopy are approaching the resolution of electron microscopes. These techniques work around the diffraction limit, which arises from the illuminating or excitation light and would otherwise cap the achievable resolution. Stefan Hell was awarded the 2014 Nobel Prize in Chemistry for the development of the STED technique, along with Eric Betzig and William Moerner who adapted fluorescence microscopy for single-molecule visualization. X-ray microscopes X-ray microscopes are instruments that use electromagnetic radiation usually in the soft X-ray band to image objects. Technological advances in X-ray lens optics in the early 1970s made the instrument a viable imaging choice. They are often used in tomography (see micro-computed tomography) to produce three dimensional images of objects, including biological materials that have not been chemically fixed.
Currently, research is being done to improve optics for hard X-rays, which have greater penetrating power. Types Microscopes can be separated into several different classes. One grouping is based on what interacts with the sample to generate the image, i.e., light or photons (optical microscopes), electrons (electron microscopes) or a probe (scanning probe microscopes). Alternatively, microscopes can be classified based on whether they analyze the sample via a scanning point (confocal optical microscopes, scanning electron microscopes and scanning probe microscopes) or analyze the sample all at once (wide field optical microscopes and transmission electron microscopes). Wide field optical microscopes and transmission electron microscopes both use the theory of lenses (optics for light microscopes and electromagnet lenses for electron microscopes) in order to magnify the image generated by the passage of a wave transmitted through the sample, or reflected by the sample. The waves used are electromagnetic (in optical microscopes) or electron beams (in electron microscopes). Resolution in these microscopes is limited by the wavelength of the radiation used to image the sample, where shorter wavelengths allow for a higher resolution. Scanning optical and electron microscopes, like the confocal microscope and scanning electron microscope, use lenses to focus a spot of light or electrons onto the sample, then analyze the signals generated by the beam interacting with the sample. The point is then scanned over the sample to analyze a rectangular region. Magnification of the image is achieved by displaying the data from scanning a physically small sample area on a relatively large screen. These microscopes have the same resolution limit as wide field optical, probe, and electron microscopes. Scanning probe microscopes also analyze a single point in the sample and then scan the probe over a rectangular sample region to build up an image. As these microscopes do not use electromagnetic or electron radiation for imaging, they are not subject to the same resolution limit as the optical and electron microscopes described above. Optical microscope The most common type of microscope (and the first invented) is the optical microscope. This is an optical instrument containing one or more lenses producing an enlarged image of a sample placed in the focal plane. Optical microscopes use refractive glass lenses (occasionally plastic or quartz) to focus light on the eye or onto another light detector. Mirror-based optical microscopes operate in the same manner. Typical magnification of a light microscope, assuming visible range light, is up to 1,250× with a theoretical resolution limit of around 0.250 micrometres or 250 nanometres. This limits practical magnification to ~1,500×. Specialized techniques (e.g., scanning confocal microscopy, Vertico SMI) may exceed this magnification but the resolution is diffraction limited. The use of shorter wavelengths of light, such as ultraviolet, is one way to improve the spatial resolution of the optical microscope, as are devices such as the near-field scanning optical microscope. Sarfus is a recent optical technique that increases the sensitivity of a standard optical microscope to a point where it is possible to directly visualize nanometric films (down to 0.3 nanometre) and isolated nano-objects (down to 2 nm in diameter). The technique is based on the use of non-reflecting substrates for cross-polarized reflected light microscopy.
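As a worked illustration of the resolution limit quoted above (the values are assumed for the example, not taken from this article): the diffraction-limited resolution of a light microscope is often estimated with the Abbe relation

 d ≈ λ / (2 × NA)

where λ is the wavelength of the light and NA is the numerical aperture of the objective. For green light (λ ≈ 550 nm) and a high-quality oil-immersion objective (NA ≈ 1.4), d ≈ 550 / 2.8 ≈ 200 nm; with a more modest numerical aperture of about 1.1 the same relation gives roughly 250 nm, consistent with the ~0.25 micrometre limit stated above.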
Ultraviolet light enables the resolution of microscopic features as well as the imaging of samples that are transparent to the eye. Near infrared light can be used to visualize circuitry embedded in bonded silicon devices, since silicon is transparent in this region of wavelengths. In fluorescence microscopy, many wavelengths of light ranging from the ultraviolet to the visible can be used to cause samples to fluoresce, which allows viewing by eye or with sensitive cameras. Phase-contrast microscopy is an optical microscopic illumination technique in which small phase shifts in the light passing through a transparent specimen are converted into amplitude or contrast changes in the image. The use of phase contrast does not require staining to view the slide. This microscope technique made it possible to study the cell cycle in live cells. The traditional optical microscope has more recently evolved into the digital microscope. In addition to, or instead of, directly viewing the object through the eyepieces, a type of sensor similar to those used in a digital camera is used to obtain an image, which is then displayed on a computer monitor. These sensors may use CMOS or charge-coupled device (CCD) technology, depending on the application. Digital microscopy with very low light levels to avoid damage to vulnerable biological samples is available using sensitive photon-counting digital cameras. It has been demonstrated that a light source providing pairs of entangled photons may minimize the risk of damage to the most light-sensitive samples. In this application of ghost imaging to photon-sparse microscopy, the sample is illuminated with infrared photons, each of which is spatially correlated with an entangled partner in the visible band for efficient imaging by a photon-counting camera. Electron microscope The two major types of electron microscopes are transmission electron microscopes (TEMs) and scanning electron microscopes (SEMs). They both have a series of electromagnetic and electrostatic lenses to focus a high-energy beam of electrons on a sample. In a TEM the electrons pass through the sample, analogous to basic optical microscopy. This requires careful sample preparation, since electrons are scattered strongly by most materials. The samples must also be very thin (below 100 nm) in order for the electrons to pass through them. Cross-sections of cells stained with osmium and heavy metals reveal clear organelle membranes and proteins such as ribosomes. With a 0.1 nm level of resolution, detailed views of viruses (20–300 nm) and a strand of DNA (2 nm in width) can be obtained. In contrast, the SEM has raster coils to scan the surface of bulk objects with a fine electron beam. Therefore, specimens do not necessarily need to be sectioned, but coating with a nanometric metal or carbon layer may be needed for nonconductive samples. SEM allows fast surface imaging of samples, possibly in thin water vapor to prevent drying. Scanning probe The different types of scanning probe microscopes arise from the many different types of interactions that occur when a small probe is scanned over and interacts with a specimen. These interactions or modes can be recorded or mapped as a function of location on the surface to form a characterization map. The three most common types of scanning probe microscopes are atomic force microscopes (AFM), near-field scanning optical microscopes (NSOM or SNOM, scanning near-field optical microscopy), and scanning tunneling microscopes (STM).
An atomic force microscope has a fine probe, usually of silicon or silicon nitride, attached to a cantilever; the probe is scanned over the surface of the sample, and the forces that cause an interaction between the probe and the surface of the sample are measured and mapped. A near-field scanning optical microscope is similar to an AFM, but its probe consists of a light source in an optical fiber covered with a tip that usually has an aperture for the light to pass through. The microscope can capture either transmitted or reflected light to measure very localized optical properties of the surface, commonly of a biological specimen. Scanning tunneling microscopes have a metal tip with a single apical atom; the tip is attached to a tube through which a current flows. The tip is scanned over the surface of a conductive sample until a tunneling current flows; the current is kept constant by computer movement of the tip, and an image is formed by the recorded movements of the tip. Other types Scanning acoustic microscopes use sound waves to measure variations in acoustic impedance. Similar to sonar in principle, they are used for such jobs as detecting defects in the subsurfaces of materials, including those found in integrated circuits. On February 4, 2013, Australian engineers built a "quantum microscope" which provides unparalleled precision. Mobile apps Mobile app microscopes can optionally be used as optical microscopes when the flashlight is activated. However, mobile app microscopes are harder to use because of visual noise, are often limited to around 40×, and are constrained by the resolution limits of the camera lens itself.
Technology
Optical
null
19583
https://en.wikipedia.org/wiki/Monomer
Monomer
A monomer (mono-, "one" + -mer, "part") is a molecule that can react together with other monomer molecules to form a larger polymer chain or three-dimensional network in a process called polymerization. Classification Monomers can be classified by type and, more broadly, into two classes based on the kind of polymer they form. By type: natural vs synthetic (e.g. glycine vs caprolactam); polar vs nonpolar (e.g. vinyl acetate vs ethylene); cyclic vs linear (e.g. ethylene oxide vs ethylene glycol). By type of polymer they form: those that participate in condensation polymerization, and those that participate in addition polymerization. Differing stoichiometry causes each class to create its respective form of polymer. The polymerization of one kind of monomer gives a homopolymer. Many polymers are copolymers, meaning that they are derived from two different monomers. In the case of condensation polymerizations, the ratio of comonomers is usually 1:1. For example, the formation of many nylons requires equal amounts of a dicarboxylic acid and diamine. In the case of addition polymerizations, the comonomer content is often only a few percent. For example, small amounts of 1-octene monomer are copolymerized with ethylene to give specialized polyethylene. Synthetic monomers Ethylene gas (H2C=CH2) is the monomer for polyethylene. Other modified ethylene derivatives include tetrafluoroethylene (F2C=CF2), which leads to Teflon; vinyl chloride (H2C=CHCl), which leads to PVC; and styrene (C6H5CH=CH2), which leads to polystyrene. Epoxide monomers may be cross-linked with themselves, or with the addition of a co-reactant, to form epoxy resins. BPA (bisphenol A) is the monomer precursor for polycarbonate. Terephthalic acid is a comonomer that, with ethylene glycol, forms polyethylene terephthalate. Dimethylsilicon dichloride is a monomer that, upon hydrolysis, gives polydimethylsiloxane. Ethyl methacrylate is an acrylic monomer that, when combined with an acrylic polymer, cures to form an acrylate plastic used to create artificial nail extensions. Biopolymers The term "monomeric protein" may also be used to describe one of the proteins making up a multiprotein complex. Natural monomers Some of the main biopolymers are listed below: Amino acids For proteins, the monomers are amino acids. Polymerization occurs at ribosomes. Usually about 20 types of amino acid monomers are used to produce proteins. Hence proteins are not homopolymers. Nucleotides For polynucleic acids (DNA/RNA), the monomers are nucleotides, each of which is made of a pentose sugar, a nitrogenous base and a phosphate group. Nucleotide monomers are found in the cell nucleus. Four types of nucleotide monomers are precursors to DNA and four different nucleotide monomers are precursors to RNA. Glucose and related sugars For carbohydrates, the monomers are monosaccharides. The most abundant natural monomer is glucose, which is linked by glycosidic bonds into the polymers cellulose, starch, and glycogen. Isoprene Isoprene is a natural monomer that polymerizes to form a natural rubber, most often cis-1,4-polyisoprene, but also trans-1,4-polyisoprene. Synthetic rubbers are often based on butadiene, which is structurally related to isoprene.
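As an illustrative, idealized equation for the condensation polymerization mentioned above (written here as an example, not taken from this article), the 1:1 copolymerization of terephthalic acid with ethylene glycol to give polyethylene terephthalate eliminates one molecule of water for each ester linkage formed:

 n HOOC-C6H4-COOH + n HO-CH2CH2-OH → [-OC-C6H4-CO-O-CH2CH2-O-]n + approximately 2n H2O

This loss of a small molecule at every linkage is what distinguishes condensation polymerization from addition polymerization, in which monomers such as ethylene simply add to the growing chain without eliminating anything.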
Physical sciences
Polymers
Chemistry
19588
https://en.wikipedia.org/wiki/Mitochondrion
Mitochondrion
A mitochondrion () is an organelle found in the cells of most eukaryotes, such as animals, plants and fungi. Mitochondria have a double membrane structure and use aerobic respiration to generate adenosine triphosphate (ATP), which is used throughout the cell as a source of chemical energy. They were discovered by Albert von Kölliker in 1857 in the voluntary muscles of insects. Meaning a thread-like granule, the term mitochondrion was coined by Carl Benda in 1898. The mitochondrion is popularly nicknamed the "powerhouse of the cell", a phrase popularized by Philip Siekevitz in a 1957 Scientific American article of the same name. Some cells in some multicellular organisms lack mitochondria (for example, mature mammalian red blood cells). The multicellular animal Henneguya salminicola is known to have retained mitochondrion-related organelles despite a complete loss of their mitochondrial genome. A large number of unicellular organisms, such as microsporidia, parabasalids and diplomonads, have reduced or transformed their mitochondria into other structures, e.g. hydrogenosomes and mitosomes. The oxymonads Monocercomonoides, Streblomastix, and Blattamonas have completely lost their mitochondria. Mitochondria are commonly between 0.75 and 3 μm in cross section, but vary considerably in size and structure. Unless specifically stained, they are not visible. In addition to supplying cellular energy, mitochondria are involved in other tasks, such as signaling, cellular differentiation, and cell death, as well as maintaining control of the cell cycle and cell growth. Mitochondrial biogenesis is in turn temporally coordinated with these cellular processes. Mitochondria have been implicated in several human disorders and conditions, such as mitochondrial diseases, cardiac dysfunction, heart failure and autism. The number of mitochondria in a cell can vary widely by organism, tissue, and cell type. A mature red blood cell has no mitochondria, whereas a liver cell can have more than 2000. The mitochondrion is composed of compartments that carry out specialized functions. These compartments or regions include the outer membrane, intermembrane space, inner membrane, cristae, and matrix. Although most of a eukaryotic cell's DNA is contained in the cell nucleus, the mitochondrion has its own genome ("mitogenome") that is substantially similar to bacterial genomes. This finding has led to general acceptance of the endosymbiotic hypothesis - that free-living prokaryotic ancestors of modern mitochondria permanently fused with eukaryotic cells in the distant past, evolving such that modern animals, plants, fungi, and other eukaryotes are able to respire to generate cellular energy. Structure Mitochondria may have a number of different shapes. A mitochondrion contains outer and inner membranes composed of phospholipid bilayers and proteins. The two membranes have different properties. Because of this double-membraned organization, there are five distinct parts to a mitochondrion: The outer mitochondrial membrane, The intermembrane space (the space between the outer and inner membranes), The inner mitochondrial membrane, The cristae space (formed by infoldings of the inner membrane), and The matrix (space within the inner membrane), which is a fluid. Mitochondria have folding to increase surface area, which in turn increases ATP (adenosine triphosphate) production. Mitochondria stripped of their outer membrane are called mitoplasts. 
Outer membrane The outer mitochondrial membrane, which encloses the entire organelle, is 60 to 75 angstroms (Å) thick. It has a protein-to-phospholipid ratio similar to that of the cell membrane (about 1:1 by weight). It contains large numbers of integral membrane proteins called porins. A major trafficking protein is the pore-forming voltage-dependent anion channel (VDAC). The VDAC is the primary transporter of nucleotides, ions and metabolites between the cytosol and the intermembrane space. It is formed as a beta barrel that spans the outer membrane, similar to that in the gram-negative bacterial outer membrane. Larger proteins can enter the mitochondrion if a signaling sequence at their N-terminus binds to a large multisubunit protein called translocase in the outer membrane, which then actively moves them across the membrane. Mitochondrial pro-proteins are imported through specialised translocation complexes. The outer membrane also contains enzymes involved in such diverse activities as the elongation of fatty acids, oxidation of epinephrine, and the degradation of tryptophan. These enzymes include monoamine oxidase, rotenone-insensitive NADH-cytochrome c-reductase, kynurenine hydroxylase and fatty acid Co-A ligase. Disruption of the outer membrane permits proteins in the intermembrane space to leak into the cytosol, leading to cell death. The outer mitochondrial membrane can associate with the endoplasmic reticulum (ER) membrane in a structure called the MAM (mitochondria-associated ER-membrane). This is important in ER-mitochondria calcium signaling and is involved in the transfer of lipids between the ER and mitochondria. Outside the outer membrane are small (diameter: 60 Å) particles named sub-units of Parson. Intermembrane space The mitochondrial intermembrane space is the space between the outer membrane and the inner membrane. It is also known as the perimitochondrial space. Because the outer membrane is freely permeable to small molecules, the concentrations of small molecules, such as ions and sugars, in the intermembrane space are the same as in the cytosol. However, large proteins must have a specific signaling sequence to be transported across the outer membrane, so the protein composition of this space is different from the protein composition of the cytosol. One protein that is localized to the intermembrane space in this way is cytochrome c. Inner membrane The inner mitochondrial membrane contains proteins with three types of functions: those that perform the electron transport chain redox reactions; ATP synthase, which generates ATP in the matrix; and specific transport proteins that regulate metabolite passage into and out of the mitochondrial matrix. It contains more than 151 different polypeptides, and has a very high protein-to-phospholipid ratio (more than 3:1 by weight, which is about 1 protein for 15 phospholipids). The inner membrane is home to around 1/5 of the total protein in a mitochondrion. Additionally, the inner membrane is rich in an unusual phospholipid, cardiolipin. This phospholipid was originally discovered in cow hearts in 1942, and is usually characteristic of mitochondrial and bacterial plasma membranes. Cardiolipin contains four fatty acids rather than two, and may help to make the inner membrane impermeable, and its disruption can lead to multiple clinical disorders including neurological disorders and cancer. Unlike the outer membrane, the inner membrane does not contain porins, and is highly impermeable to all molecules.
Almost all ions and molecules require special membrane transporters to enter or exit the matrix. Proteins are ferried into the matrix via the translocase of the inner membrane (TIM) complex or via OXA1L. In addition, there is a membrane potential across the inner membrane, formed by the action of the enzymes of the electron transport chain. Inner membrane fusion is mediated by the inner membrane protein OPA1. Cristae The inner mitochondrial membrane is compartmentalized into numerous folds called cristae, which expand the surface area of the inner mitochondrial membrane, enhancing its ability to produce ATP. For typical liver mitochondria, the area of the inner membrane is about five times as large as that of the outer membrane. This ratio is variable and mitochondria from cells that have a greater demand for ATP, such as muscle cells, contain even more cristae. Mitochondria within the same cell can have substantially different crista-density, with the ones that are required to produce more energy having much more crista-membrane surface. These folds are studded with small round bodies known as F particles or oxysomes. Matrix The matrix is the space enclosed by the inner membrane. It contains about 2/3 of the total proteins in a mitochondrion. The matrix is important in the production of ATP with the aid of the ATP synthase contained in the inner membrane. The matrix contains a highly concentrated mixture of hundreds of enzymes, special mitochondrial ribosomes, tRNA, and several copies of the mitochondrial DNA genome. Of the enzymes, the major functions include oxidation of pyruvate and fatty acids, and the citric acid cycle. The DNA molecules are packaged into nucleoids by proteins, one of which is TFAM. Function The most prominent roles of mitochondria are to produce the energy currency of the cell, ATP (i.e., phosphorylation of ADP), through respiration and to regulate cellular metabolism. The central set of reactions involved in ATP production are collectively known as the citric acid cycle, or the Krebs cycle, and oxidative phosphorylation. However, the mitochondrion has many other functions in addition to the production of ATP. Energy conversion A dominant role for the mitochondria is the production of ATP, as reflected by the large number of proteins in the inner membrane for this task. This is done by oxidizing the major products of glucose: pyruvate, and NADH, which are produced in the cytosol. This type of cellular respiration, known as aerobic respiration, is dependent on the presence of oxygen. When oxygen is limited, the glycolytic products will be metabolized by anaerobic fermentation, a process that is independent of the mitochondria. The production of ATP from glucose and oxygen has an approximately 13-times higher yield during aerobic respiration compared to fermentation. Plant mitochondria can also produce a limited amount of ATP either by breaking the sugar produced during photosynthesis or without oxygen by using the alternate substrate nitrite. ATP crosses out through the inner membrane with the help of a specific protein, and across the outer membrane via porins. After conversion of ATP to ADP by dephosphorylation that releases energy, ADP returns via the same route. 
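As a rough illustration of the yield comparison above (the per-glucose figures are commonly quoted approximations, not taken from this article): anaerobic fermentation yields about 2 ATP per glucose, from glycolysis alone, whereas complete aerobic oxidation of glucose is commonly estimated to yield on the order of 26 to 30 ATP once the cost of shuttling intermediates into the mitochondrion is taken into account. Taking the lower estimate, 26 ÷ 2 = 13, which is in line with the approximately 13-fold difference stated above.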
Pyruvate and the citric acid cycle Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix where they can either be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, or they can be carboxylated (by pyruvate carboxylase) to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle and is therefore an anaplerotic reaction, increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g., in muscle) are suddenly increased by activity. In the citric acid cycle, all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that the additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence, the addition of any one of them to the cycle has an anaplerotic effect, and its removal has a cataplerotic effect. These anaplerotic and cataplerotic reactions will, during the course of the cycle, increase or decrease the amount of oxaloacetate available to combine with acetyl-CoA to form citric acid. This in turn increases or decreases the rate of ATP production by the mitochondrion, and thus the availability of ATP to the cell. Acetyl-CoA, on the other hand, derived from pyruvate oxidation, or from the beta-oxidation of fatty acids, is the only fuel to enter the citric acid cycle. With each turn of the cycle one molecule of acetyl-CoA is consumed for every molecule of oxaloacetate present in the mitochondrial matrix, and is never regenerated. It is the oxidation of the acetate portion of acetyl-CoA that produces CO2 and water, with the energy thus released captured in the form of ATP. In the liver, the carboxylation of cytosolic pyruvate into intra-mitochondrial oxaloacetate is an early step in the gluconeogenic pathway, which converts lactate and de-aminated alanine into glucose, under the influence of high levels of glucagon and/or epinephrine in the blood. Here, the addition of oxaloacetate to the mitochondrion does not have a net anaplerotic effect, as another citric acid cycle intermediate (malate) is immediately removed from the mitochondrion to be converted to cytosolic oxaloacetate, and ultimately to glucose, in a process that is almost the reverse of glycolysis. The enzymes of the citric acid cycle are located in the mitochondrial matrix, with the exception of succinate dehydrogenase, which is bound to the inner mitochondrial membrane as part of Complex II. The citric acid cycle oxidizes the acetyl-CoA to carbon dioxide, and, in the process, produces reduced cofactors (three molecules of NADH and one molecule of FADH2) that are a source of electrons for the electron transport chain, and a molecule of GTP (which is readily converted to an ATP). O2 and NADH: energy-releasing reactions The electrons from NADH and FADH2 are transferred to oxygen (O2) and hydrogen (protons) in several steps via an electron transport chain. NADH and FADH2 molecules are produced within the matrix via the citric acid cycle and in the cytoplasm by glycolysis. Reducing equivalents from the cytoplasm can be imported via the malate-aspartate shuttle system of antiporter proteins or fed into the electron transport chain using a glycerol phosphate shuttle.
The major energy-releasing reactions that make the mitochondrion the "powerhouse of the cell" occur at protein complexes I, III and IV in the inner mitochondrial membrane (NADH dehydrogenase (ubiquinone), cytochrome c reductase, and cytochrome c oxidase). At complex IV, O2 reacts with the reduced form of iron in cytochrome c: O2 + 4 H+ + 4 Fe2+ (cytochrome c) → 2 H2O + 4 Fe3+ (cytochrome c), releasing a lot of free energy from the reactants without breaking bonds of an organic fuel. The free energy put in to remove an electron from Fe2+ is released at complex III when Fe3+ of cytochrome c reacts to oxidize ubiquinol (QH2): 2 Fe3+ (cytochrome c) + QH2 → 2 Fe2+ (cytochrome c) + Q + 2 H+. The ubiquinone (Q) generated reacts, in complex I, with NADH: Q + H+ + NADH → QH2 + NAD+. While the reactions are controlled by an electron transport chain, free electrons are not amongst the reactants or products in the three reactions shown and therefore do not affect the free energy released, which is used to pump protons (H+) into the intermembrane space. This process is efficient, but a small percentage of electrons may prematurely reduce oxygen, forming reactive oxygen species such as superoxide. This can cause oxidative stress in the mitochondria and may contribute to the decline in mitochondrial function associated with aging. As the proton concentration increases in the intermembrane space, a strong electrochemical gradient is established across the inner membrane. The protons can return to the matrix through the ATP synthase complex, and their potential energy is used to synthesize ATP from ADP and inorganic phosphate (Pi). This process is called chemiosmosis, and was first described by Peter Mitchell, who was awarded the 1978 Nobel Prize in Chemistry for his work. Later, part of the 1997 Nobel Prize in Chemistry was awarded to Paul D. Boyer and John E. Walker for their clarification of the working mechanism of ATP synthase. Heat production Under certain conditions, protons can re-enter the mitochondrial matrix without contributing to ATP synthesis. This process is known as proton leak or mitochondrial uncoupling and is due to the facilitated diffusion of protons into the matrix. The process results in the unharnessed potential energy of the proton electrochemical gradient being released as heat. The process is mediated by a proton channel called thermogenin, or UCP1. Thermogenin is primarily found in brown adipose tissue, or brown fat, and is responsible for non-shivering thermogenesis. Brown adipose tissue is found in mammals, and is at its highest levels in early life and in hibernating animals. In humans, brown adipose tissue is present at birth and decreases with age. Mitochondrial fatty acid synthesis Mitochondrial fatty acid synthesis (mtFASII) is essential for cellular respiration and mitochondrial biogenesis. It is also thought to play a role as a mediator in intracellular signaling due to its influence on the levels of bioactive lipids, such as lysophospholipids and sphingolipids. Octanoyl-ACP (C8) is considered to be the most important end product of mtFASII, which also forms the starting substrate of lipoic acid biosynthesis. Since lipoic acid is the cofactor of important mitochondrial enzyme complexes, such as the pyruvate dehydrogenase complex (PDC), α-ketoglutarate dehydrogenase complex (OGDC), branched-chain α-ketoacid dehydrogenase complex (BCKDC), and in the glycine cleavage system (GCS), mtFASII has an influence on energy metabolism.
Other products of mtFASII play a role in the regulation of mitochondrial translation, FeS cluster biogenesis and assembly of oxidative phosphorylation complexes. Furthermore, with the help of mtFASII and acylated ACP, acetyl-CoA regulates its consumption in mitochondria. Uptake, storage and release of calcium ions The concentration of free calcium in the cell can regulate an array of reactions and is important for signal transduction in the cell. Mitochondria can transiently store calcium, a contributing process for the cell's homeostasis of calcium. Their ability to rapidly take in calcium for later release makes them good "cytosolic buffers" for calcium. The endoplasmic reticulum (ER) is the most significant storage site of calcium, and there is a significant interplay between the mitochondrion and ER with regard to calcium. The calcium is taken up into the matrix by the mitochondrial calcium uniporter on the inner mitochondrial membrane. It is primarily driven by the mitochondrial membrane potential. Release of this calcium back into the cell's interior can occur via a sodium-calcium exchange protein or via "calcium-induced-calcium-release" pathways. This can initiate calcium spikes or calcium waves with large changes in the membrane potential. These can activate a series of second messenger system proteins that can coordinate processes such as neurotransmitter release in nerve cells and release of hormones in endocrine cells. Ca influx to the mitochondrial matrix has recently been implicated as a mechanism to regulate respiratory bioenergetics by allowing the electrochemical potential across the membrane to transiently "pulse" from ΔΨ-dominated to pH-dominated, facilitating a reduction of oxidative stress. In neurons, concomitant increases in cytosolic and mitochondrial calcium act to synchronize neuronal activity with mitochondrial energy metabolism. Mitochondrial matrix calcium levels can reach tens of micromolar, which is necessary for the activation of isocitrate dehydrogenase, one of the key regulatory enzymes of the Krebs cycle. Cellular proliferation regulation The relationship between cellular proliferation and mitochondria has been investigated. Tumor cells require ample ATP to synthesize bioactive compounds such as lipids, proteins, and nucleotides for rapid proliferation. The majority of ATP in tumor cells is generated via the oxidative phosphorylation pathway (OxPhos). Interference with OxPhos causes cell cycle arrest, suggesting that mitochondria play a role in cell proliferation. Mitochondrial ATP production is also vital for cell division and differentiation in infection, in addition to basic functions in the cell, including the regulation of cell volume, solute concentration, and cellular architecture. ATP levels differ at various stages of the cell cycle, suggesting that there is a relationship between the abundance of ATP and the cell's ability to enter a new cell cycle. ATP's role in the basic functions of the cell makes the cell cycle sensitive to changes in the availability of mitochondrially derived ATP. The variation in ATP levels at different stages of the cell cycle supports the hypothesis that mitochondria play an important role in cell cycle regulation. Although the specific mechanisms linking mitochondria and cell cycle regulation are not well understood, studies have shown that low-energy cell cycle checkpoints monitor energy capability before committing to another round of cell division.
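A rough equilibrium estimate (with illustrative values not taken from this article) indicates why the membrane potential can drive the calcium uptake through the uniporter described above. For a divalent cation at 37 °C, the Nernst relation gives an equilibrium ratio of

 [Ca2+]matrix / [Ca2+]cytosol = 10^(2ΔΨ / 61.5 mV)

where ΔΨ is the magnitude of the inner-membrane potential in millivolts. A typical value of roughly 180 mV (matrix negative) therefore corresponds to an equilibrium ratio approaching 10^6. Actual matrix calcium remains far below this value, in the tens-of-micromolar range noted above, because uptake is continuously balanced by the efflux pathways mentioned earlier.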
Programmed cell death and innate immunity Programmed cell death (PCD) is crucial for various physiological functions, including organ development and cellular homeostasis. It serves as an intrinsic mechanism to prevent malignant transformation and plays a fundamental role in immunity by aiding in antiviral defense, pathogen elimination, inflammation, and immune cell recruitment. Mitochondria have long been recognized for their central role in the intrinsic pathway of apoptosis, a form of PCD. In recent decades, they have also been identified as a signalling hub for much of the innate immune system. The endosymbiotic origin of mitochondria distinguishes them from other cellular components, and the exposure of mitochondrial elements to the cytosol can trigger the same pathways as infection markers. These pathways lead to apoptosis, autophagy, or the induction of proinflammatory genes. Mitochondria contribute to apoptosis by releasing cytochrome c, which directly induces the formation of apoptosomes. Additionally, they are a source of various damage-associated molecular patterns (DAMPs). These DAMPs are often recognised by the same pattern-recognition receptors (PRRs) that respond to pathogen-associated molecular patterns (PAMPs) during infections. For example, mitochondrial DNA (mtDNA) resembles bacterial DNA due to its lack of CpG methylation and can be detected by Toll-like receptor 9 and cGAS. Double-stranded RNA (dsRNA), produced due to bidirectional mitochondrial transcription, can activate viral sensing pathways through RIG-I-like receptors. Additionally, the N-formylation of mitochondrial proteins, similar to that of bacterial proteins, can be recognized by formyl peptide receptors. Normally, these mitochondrial components are sequestered from the rest of the cell but are released following mitochondrial membrane permeabilization during apoptosis or passively after mitochondrial damage. However, mitochondria also play an active role in innate immunity, releasing mtDNA in response to metabolic cues. Mitochondria are also the localization site for immune and apoptosis regulatory proteins, such as BAX, MAVS (located on the outer membrane), and NLRX1 (found in the matrix). These proteins are modulated by the mitochondrial metabolic status and mitochondrial dynamics. Additional functions Mitochondria play a central role in many other metabolic tasks, such as: signaling through mitochondrial reactive oxygen species; regulation of the membrane potential; calcium signaling (including calcium-evoked apoptosis); regulation of cellular metabolism; certain heme synthesis reactions (see also: Porphyrin); steroid synthesis; hormonal signaling (mitochondria are sensitive and responsive to hormones, in part by the action of mitochondrial estrogen receptors (mtERs), which have been found in various tissues and cell types, including brain and heart); and the development and function of immune cells. Neuronal mitochondria also contribute to cellular quality control by reporting neuronal status towards microglia through specialised somatic junctions. Mitochondria of developing neurons contribute to intercellular signaling towards microglia, and this communication is indispensable for the proper regulation of brain development. Some mitochondrial functions are performed only in specific types of cells. For example, mitochondria in liver cells contain enzymes that allow them to detoxify ammonia, a waste product of protein metabolism.
A mutation in the genes regulating any of these functions can result in mitochondrial diseases. Mitochondrial proteins (proteins transcribed from mitochondrial DNA) vary depending on the tissue and the species. In humans, 615 distinct types of proteins have been identified from cardiac mitochondria, whereas in rats, 940 proteins have been reported. The mitochondrial proteome is thought to be dynamically regulated. Organization and distribution Mitochondria (or related structures) are found in all eukaryotes (except the Oxymonad Monocercomonoides). Although commonly depicted as bean-like structures they form a highly dynamic network in the majority of cells where they constantly undergo fission and fusion. The population of all the mitochondria of a given cell constitutes the chondriome. Mitochondria vary in number and location according to cell type. A single mitochondrion is often found in unicellular organisms, while human liver cells have about 1000–2000 mitochondria per cell, making up 1/5 of the cell volume. The mitochondrial content of otherwise similar cells can vary substantially in size and membrane potential, with differences arising from sources including uneven partitioning at cell division, leading to extrinsic differences in ATP levels and downstream cellular processes. The mitochondria can be found nestled between myofibrils of muscle or wrapped around the sperm flagellum. Often, they form a complex 3D branching network inside the cell with the cytoskeleton. The association with the cytoskeleton determines mitochondrial shape, which can affect the function as well: different structures of the mitochondrial network may afford the population a variety of physical, chemical, and signalling advantages or disadvantages. Mitochondria in cells are always distributed along microtubules and the distribution of these organelles is also correlated with the endoplasmic reticulum. Recent evidence suggests that vimentin, one of the components of the cytoskeleton, is also critical to the association with the cytoskeleton. Mitochondria-associated ER membrane (MAM) The mitochondria-associated ER membrane (MAM) is another structural element that is increasingly recognized for its critical role in cellular physiology and homeostasis. Once considered a technical snag in cell fractionation techniques, the alleged ER vesicle contaminants that invariably appeared in the mitochondrial fraction have been re-identified as membranous structures derived from the MAM—the interface between mitochondria and the ER. Physical coupling between these two organelles had previously been observed in electron micrographs and has more recently been probed with fluorescence microscopy. Such studies estimate that at the MAM, which may comprise up to 20% of the mitochondrial outer membrane, the ER and mitochondria are separated by a mere 10–25 nm and held together by protein tethering complexes. Purified MAM from subcellular fractionation is enriched in enzymes involved in phospholipid exchange, in addition to channels associated with Ca signaling. These hints of a prominent role for the MAM in the regulation of cellular lipid stores and signal transduction have been borne out, with significant implications for mitochondrial-associated cellular phenomena, as discussed below. Not only has the MAM provided insight into the mechanistic basis underlying such physiological processes as intrinsic apoptosis and the propagation of calcium signaling, but it also favors a more refined view of the mitochondria. 
Though often seen as static, isolated 'powerhouses' hijacked for cellular metabolism through an ancient endosymbiotic event, the evolution of the MAM underscores the extent to which mitochondria have been integrated into overall cellular physiology, with intimate physical and functional coupling to the endomembrane system. Phospholipid transfer The MAM is enriched in enzymes involved in lipid biosynthesis, such as phosphatidylserine synthase on the ER face and phosphatidylserine decarboxylase on the mitochondrial face. Because mitochondria are dynamic organelles constantly undergoing fission and fusion events, they require a constant and well-regulated supply of phospholipids for membrane integrity. But mitochondria are not only a destination for the phospholipids they finish synthesis of; rather, this organelle also plays a role in inter-organelle trafficking of the intermediates and products of phospholipid biosynthetic pathways, ceramide and cholesterol metabolism, and glycosphingolipid anabolism. Such trafficking capacity depends on the MAM, which has been shown to facilitate transfer of lipid intermediates between organelles. In contrast to the standard vesicular mechanism of lipid transfer, evidence indicates that the physical proximity of the ER and mitochondrial membranes at the MAM allows for lipid flipping between opposed bilayers. Despite this unusual and seemingly energetically unfavorable mechanism, such transport does not require ATP. Instead, in yeast, it has been shown to be dependent on a multiprotein tethering structure termed the ER-mitochondria encounter structure, or ERMES, although it remains unclear whether this structure directly mediates lipid transfer or is required to keep the membranes in sufficiently close proximity to lower the energy barrier for lipid flipping. The MAM may also be part of the secretory pathway, in addition to its role in intracellular lipid trafficking. In particular, the MAM appears to be an intermediate destination between the rough ER and the Golgi in the pathway that leads to very-low-density lipoprotein, or VLDL, assembly and secretion. The MAM thus serves as a critical metabolic and trafficking hub in lipid metabolism. Calcium signaling A critical role for the ER in calcium signaling was acknowledged before such a role for the mitochondria was widely accepted, in part because the low affinity of Ca channels localized to the outer mitochondrial membrane seemed to contradict this organelle's purported responsiveness to changes in intracellular Ca flux. But the presence of the MAM resolves this apparent contradiction: the close physical association between the two organelles results in Ca microdomains at contact points that facilitate efficient Ca transmission from the ER to the mitochondria. Transmission occurs in response to so-called "Ca puffs" generated by spontaneous clustering and activation of IP3R, a canonical ER membrane Ca channel. The fate of these puffs—in particular, whether they remain restricted to isolated locales or integrated into Ca waves for propagation throughout the cell—is determined in large part by MAM dynamics. Although reuptake of Ca by the ER (concomitant with its release) modulates the intensity of the puffs, thus insulating mitochondria to a certain degree from high Ca exposure, the MAM often serves as a firewall that essentially buffers Ca puffs by acting as a sink into which free ions released into the cytosol can be funneled. 
This Ca tunneling occurs through the low-affinity Ca receptor VDAC1, which recently has been shown to be physically tethered to the IP3R clusters on the ER membrane and enriched at the MAM. The ability of mitochondria to serve as a Ca sink is a result of the electrochemical gradient generated during oxidative phosphorylation, which makes tunneling of the cation an exergonic process. Normal, mild calcium influx from cytosol into the mitochondrial matrix causes transient depolarization that is corrected by pumping out protons. But transmission of Ca is not unidirectional; rather, it is a two-way street. The properties of the Ca pump SERCA and the channel IP3R present on the ER membrane facilitate feedback regulation coordinated by MAM function. In particular, the clearance of Ca by the MAM allows for spatio-temporal patterning of Ca signaling because Ca alters IP3R activity in a biphasic manner. SERCA is likewise affected by mitochondrial feedback: uptake of Ca by the MAM stimulates ATP production, thus providing energy that enables SERCA to reload the ER with Ca for continued Ca efflux at the MAM. Thus, the MAM is not a passive buffer for Ca puffs; rather it helps modulate further Ca signaling through feedback loops that affect ER dynamics. Regulating ER release of Ca at the MAM is especially critical because only a certain window of Ca uptake sustains the mitochondria, and consequently the cell, at homeostasis. Sufficient intraorganelle Ca signaling is required to stimulate metabolism by activating dehydrogenase enzymes critical to flux through the citric acid cycle. However, once Ca signaling in the mitochondria passes a certain threshold, it stimulates the intrinsic pathway of apoptosis in part by collapsing the mitochondrial membrane potential required for metabolism. Studies examining the role of pro- and anti-apoptotic factors support this model; for example, the anti-apoptotic factor Bcl-2 has been shown to interact with IP3Rs to reduce Ca filling of the ER, leading to reduced efflux at the MAM and preventing collapse of the mitochondrial membrane potential post-apoptotic stimuli. Given the need for such fine regulation of Ca signaling, it is perhaps unsurprising that dysregulated mitochondrial Ca has been implicated in several neurodegenerative diseases, while the catalogue of tumor suppressors includes a few that are enriched at the MAM. Molecular basis for tethering Recent advances in the identification of the tethers between the mitochondrial and ER membranes suggest that the scaffolding function of the molecular elements involved is secondary to other, non-structural functions. In yeast, ERMES, a multiprotein complex of interacting ER- and mitochondrial-resident membrane proteins, is required for lipid transfer at the MAM and exemplifies this principle. One of its components, for example, is also a constituent of the protein complex required for insertion of transmembrane beta-barrel proteins into the lipid bilayer. However, a homologue of the ERMES complex has not yet been identified in mammalian cells. Other proteins implicated in scaffolding likewise have functions independent of structural tethering at the MAM; for example, ER-resident and mitochondrial-resident mitofusins form heterocomplexes that regulate the number of inter-organelle contact sites, although mitofusins were first identified for their role in fission and fusion events between individual mitochondria. Glucose-related protein 75 (grp75) is another dual-function protein. 
In addition to the matrix pool of grp75, a portion serves as a chaperone that physically links the mitochondrial and ER Ca channels VDAC and IP3R for efficient Ca transmission at the MAM. Another potential tether is Sigma-1R, a non-opioid receptor whose stabilization of ER-resident IP3R may preserve communication at the MAM during the metabolic stress response. Perspective The MAM is a critical signaling, metabolic, and trafficking hub in the cell that allows for the integration of ER and mitochondrial physiology. Coupling between these organelles is not simply structural but functional as well, and critical for overall cellular physiology and homeostasis. The MAM thus offers a perspective on mitochondria that diverges from the traditional view of this organelle as a static, isolated unit appropriated for its metabolic capacity by the cell. Instead, this mitochondrial-ER interface emphasizes the integration of the mitochondria, the product of an endosymbiotic event, into diverse cellular processes. Recently it has also been shown that mitochondria and MAMs in neurons are anchored to specialised intercellular communication sites (so-called somatic junctions). Microglial processes monitor and protect neuronal functions at these sites, and MAMs are thought to have an important role in this type of cellular quality control. Origin and evolution There are two hypotheses about the origin of mitochondria: endosymbiotic and autogenous. The endosymbiotic hypothesis suggests that mitochondria were originally prokaryotic cells, capable of implementing oxidative mechanisms that were not possible for eukaryotic cells; they became endosymbionts living inside the eukaryote. In the autogenous hypothesis, mitochondria were born by splitting off a portion of DNA from the nucleus of the eukaryotic cell at the time of divergence with the prokaryotes; this DNA portion would have been enclosed by membranes, which could not be crossed by proteins. Since mitochondria have many features in common with bacteria, the endosymbiotic hypothesis is the more widely accepted of the two accounts. A mitochondrion contains DNA, which is organized as several copies of a single, usually circular chromosome. This mitochondrial chromosome contains genes for redox proteins, such as those of the respiratory chain. The CoRR hypothesis proposes that this co-location is required for redox regulation. The mitochondrial genome codes for some RNAs of ribosomes, and the 22 tRNAs necessary for the translation of mRNAs into protein. The circular structure is also found in prokaryotes. The proto-mitochondrion was probably closely related to Rickettsia. However, the exact relationship of the ancestor of mitochondria to the alphaproteobacteria, and whether the mitochondrion was formed at the same time as or after the nucleus, remains controversial. For example, it has been suggested that the SAR11 clade of bacteria shares a relatively recent common ancestor with the mitochondria, while phylogenomic analyses indicate that mitochondria evolved from a Pseudomonadota lineage that is closely related to or a member of alphaproteobacteria. Some papers describe mitochondria as sister to the alphaproteobacteria, with the two together forming a sister group to the Marineproteo1 group, and these in turn forming a sister group to the Magnetococcidae. The ribosomes coded for by the mitochondrial DNA are similar to those from bacteria in size and structure. They closely resemble the bacterial 70S ribosome and not the 80S cytoplasmic ribosomes, which are coded for by nuclear DNA.
The endosymbiotic relationship of mitochondria with their host cells was popularized by Lynn Margulis. The endosymbiotic hypothesis suggests that mitochondria descended from aerobic bacteria that somehow survived endocytosis by another cell, and became incorporated into the cytoplasm. The ability of these bacteria to conduct respiration in host cells that had relied on glycolysis and fermentation would have provided a considerable evolutionary advantage. This symbiotic relationship probably developed 1.7 to 2 billion years ago. A few groups of unicellular eukaryotes have only vestigial mitochondria or derived structures: The microsporidians, metamonads, and archamoebae. These groups appear as the most primitive eukaryotes on phylogenetic trees constructed using rRNA information, which once suggested that they appeared before the origin of mitochondria. However, this is now known to be an artifact of long-branch attraction: They are derived groups and retain genes or organelles derived from mitochondria (e. g., mitosomes and hydrogenosomes). Hydrogenosomes, mitosomes, and related organelles as found in some loricifera (e. g. Spinoloricus) and myxozoa (e. g. Henneguya zschokkei) are together classified as MROs, mitochondrion-related organelles. Monocercomonoides and other oxymonads appear to have lost their mitochondria completely and at least some of the mitochondrial functions seem to be carried out by cytoplasmic proteins now. Mitochondrial genetics Mitochondria contain their own genome. The human mitochondrial genome is a circular double-stranded DNA molecule of about 16 kilobases. It encodes 37 genes: 13 for subunits of respiratory complexes I, III, IV and V, 22 for mitochondrial tRNA (for the 20 standard amino acids, plus an extra gene for leucine and serine), and 2 for rRNA (12S and 16S rRNA). One mitochondrion can contain two to ten copies of its DNA. One of the two mitochondrial DNA (mtDNA) strands has a disproportionately higher ratio of the heavier nucleotides adenine and guanine, and this is termed the heavy strand (or H strand), whereas the other strand is termed the light strand (or L strand). The weight difference allows the two strands to be separated by centrifugation. mtDNA has one long non-coding stretch known as the non-coding region (NCR), which contains the heavy strand promoter (HSP) and light strand promoter (LSP) for RNA transcription, the origin of replication for the H strand (OriH) localized on the L strand, three conserved sequence boxes (CSBs 1–3), and a termination-associated sequence (TAS). The origin of replication for the L strand (OriL) is localized on the H strand 11,000 bp downstream of OriH, located within a cluster of genes coding for tRNA. As in prokaryotes, there is a very high proportion of coding DNA and an absence of repeats. Mitochondrial genes are transcribed as multigenic transcripts, which are cleaved and polyadenylated to yield mature mRNAs. Most proteins necessary for mitochondrial function are encoded by genes in the cell nucleus and the corresponding proteins are imported into the mitochondrion. The exact number of genes encoded by the nucleus and the mitochondrial genome differs between species. Most mitochondrial genomes are circular. In general, mitochondrial DNA lacks introns, as is the case in the human mitochondrial genome; however, introns have been observed in some eukaryotic mitochondrial DNA, such as that of yeast and protists, including Dictyostelium discoideum. Between protein-coding regions, tRNAs are present. 
Mitochondrial tRNA genes have different sequences from the nuclear tRNAs, but lookalikes of mitochondrial tRNAs have been found in the nuclear chromosomes with high sequence similarity. In animals, the mitochondrial genome is typically a single circular chromosome that is approximately 16 kb long and has 37 genes. The genes, while highly conserved, may vary in location. Curiously, this pattern is not found in the human body louse (Pediculus humanus). Instead, this mitochondrial genome is arranged in 18 minicircular chromosomes, each of which is 3–4 kb long and has one to three genes. This pattern is also found in other sucking lice, but not in chewing lice. Recombination has been shown to occur between the minichromosomes. Human population genetic studies The near-absence of genetic recombination in mitochondrial DNA makes it a useful source of information for studying population genetics and evolutionary biology. Because all the mitochondrial DNA is inherited as a single unit, or haplotype, the relationships between mitochondrial DNA from different individuals can be represented as a gene tree. Patterns in these gene trees can be used to infer the evolutionary history of populations. The classic example of this is in human evolutionary genetics, where the molecular clock can be used to provide a recent date for mitochondrial Eve. This is often interpreted as strong support for a recent modern human expansion out of Africa. Another human example is the sequencing of mitochondrial DNA from Neanderthal bones. The relatively large evolutionary distance between the mitochondrial DNA sequences of Neanderthals and living humans has been interpreted as evidence for the lack of interbreeding between Neanderthals and modern humans. However, mitochondrial DNA reflects only the history of the females in a population. This can be partially overcome by the use of paternal genetic sequences, such as the non-recombining region of the Y-chromosome. Recent measurements of the molecular clock for mitochondrial DNA reported a value of 1 mutation every 7884 years dating back to the most recent common ancestor of humans and apes, which is consistent with estimates of mutation rates of autosomal DNA (about 10^-8 per base per generation). Alternative genetic code While slight variations on the standard genetic code had been predicted earlier, none was discovered until 1979, when researchers studying human mitochondrial genes determined that they used an alternative code. Nonetheless, the mitochondria of many other eukaryotes, including most plants, use the standard code. Many slight variants have been discovered since, including various alternative mitochondrial codes. Further, the AUA, AUC, and AUU codons are all allowable start codons. Some of these differences should be regarded as pseudo-changes in the genetic code due to the phenomenon of RNA editing, which is common in mitochondria. In higher plants, it was thought that CGG encoded for tryptophan and not arginine; however, the codon in the processed RNA was discovered to be the UGG codon, consistent with the standard genetic code for tryptophan. Of note, the arthropod mitochondrial genetic code has undergone parallel evolution within a phylum, with some organisms uniquely translating AGG to lysine.
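To make the reassignments concrete, the short sketch below (an illustration written for this article, not code from any cited source) compares how a few codons are read under the standard genetic code and the vertebrate mitochondrial code:

# Python sketch: selected codons whose meaning differs between the standard
# genetic code and the vertebrate mitochondrial genetic code.
standard_code = {"UGA": "Stop", "AUA": "Ile", "AGA": "Arg", "AGG": "Arg"}
vertebrate_mito_code = {"UGA": "Trp", "AUA": "Met", "AGA": "Stop", "AGG": "Stop"}

for codon in ("UGA", "AUA", "AGA", "AGG"):
    # Print each codon with its interpretation under the two codes.
    print(codon, "standard:", standard_code[codon],
          "| mitochondrial:", vertebrate_mito_code[codon])

Running the sketch simply prints each codon with its two interpretations, showing, for example, that UGA, a stop codon in the standard code, is read as tryptophan in vertebrate mitochondria.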
Replication and inheritance Mitochondria divide by mitochondrial fission, a form of binary fission that is also done by bacteria although the process is tightly regulated by the host eukaryotic cell and involves communication between and contact with several other organelles. The regulation of this division differs between eukaryotes. In many single-celled eukaryotes, their growth and division are linked to the cell cycle. For example, a single mitochondrion may divide synchronously with the nucleus. This division and segregation process must be tightly controlled so that each daughter cell receives at least one mitochondrion. In other eukaryotes (in mammals for example), mitochondria may replicate their DNA and divide mainly in response to the energy needs of the cell, rather than in phase with the cell cycle. When the energy needs of a cell are high, mitochondria grow and divide. When energy use is low, mitochondria are destroyed or become inactive. In such examples mitochondria are apparently randomly distributed to the daughter cells during the division of the cytoplasm. Mitochondrial dynamics, the balance between mitochondrial fusion and fission, is an important factor in pathologies associated with several disease conditions. The hypothesis of mitochondrial binary fission has relied on the visualization by fluorescence microscopy and conventional transmission electron microscopy (TEM). The resolution of fluorescence microscopy (≈200 nm) is insufficient to distinguish structural details, such as double mitochondrial membrane in mitochondrial division or even to distinguish individual mitochondria when several are close together. Conventional TEM has also some technical limitations in verifying mitochondrial division. Cryo-electron tomography was recently used to visualize mitochondrial division in frozen hydrated intact cells. It revealed that mitochondria divide by budding. An individual's mitochondrial genes are inherited only from the mother, with rare exceptions. In humans, when an egg cell is fertilized by a sperm, the mitochondria, and therefore the mitochondrial DNA, usually come from the egg only. The sperm's mitochondria enter the egg, but do not contribute genetic information to the embryo. Instead, paternal mitochondria are marked with ubiquitin to select them for later destruction inside the embryo. The egg cell contains relatively few mitochondria, but these mitochondria divide to populate the cells of the adult organism. This mode is seen in most organisms, including the majority of animals. However, mitochondria in some species can sometimes be inherited paternally. This is the norm among certain coniferous plants, although not in pine trees and yews. For Mytilids, paternal inheritance only occurs within males of the species. It has been suggested that it occurs at a very low level in humans. Uniparental inheritance leads to little opportunity for genetic recombination between different lineages of mitochondria, although a single mitochondrion can contain 2–10 copies of its DNA. What recombination does take place maintains genetic integrity rather than maintaining diversity. However, there are studies showing evidence of recombination in mitochondrial DNA. It is clear that the enzymes necessary for recombination are present in mammalian cells. Further, evidence suggests that animal mitochondria can undergo recombination. The data are more controversial in humans, although indirect evidence of recombination exists. 
Entities undergoing uniparental inheritance and with little to no recombination may be expected to be subject to Muller's ratchet, the accumulation of deleterious mutations until functionality is lost. Animal populations of mitochondria avoid this buildup through a developmental process known as the mtDNA bottleneck. The bottleneck exploits stochastic processes in the cell to increase the cell-to-cell variability in mutant load as an organism develops: a single egg cell with some proportion of mutant mtDNA thus produces an embryo where different cells have different mutant loads. Cell-level selection may then act to remove those cells with more mutant mtDNA, leading to a stabilization or reduction in mutant load between generations. The mechanism underlying the bottleneck is debated, with a recent mathematical and experimental metastudy providing evidence for a combination of random partitioning of mtDNAs at cell divisions and random turnover of mtDNA molecules within the cell. DNA repair Mitochondria can repair oxidative DNA damage by mechanisms analogous to those occurring in the cell nucleus. The proteins employed in mtDNA repair are encoded by nuclear genes, and are translocated to the mitochondria. The DNA repair pathways in mammalian mitochondria include base excision repair, double-strand break repair, direct reversal and mismatch repair. Alternatively, DNA damage may be bypassed, rather than repaired, by translesion synthesis. Of the several DNA repair process in mitochondria, the base excision repair pathway has been most comprehensively studied. Base excision repair is carried out by a sequence of enzyme-catalyzed steps that include recognition and excision of a damaged DNA base, removal of the resulting abasic site, end processing, gap filling and ligation. A common damage in mtDNA that is repaired by base excision repair is 8-oxoguanine produced by oxidation of guanine. Double-strand breaks can be repaired by homologous recombinational repair in both mammalian mtDNA and plant mtDNA. Double-strand breaks in mtDNA can also be repaired by microhomology-mediated end joining. Although there is evidence for the repair processes of direct reversal and mismatch repair in mtDNA, these processes are not well characterized. Lack of mitochondrial DNA Some organisms have lost mitochondrial DNA altogether. In these cases, genes encoded by the mitochondrial DNA have been lost or transferred to the nucleus. Cryptosporidium have mitochondria that lack any DNA, presumably because all their genes have been lost or transferred. In Cryptosporidium, the mitochondria have an altered ATP generation system that renders the parasite resistant to many classical mitochondrial inhibitors such as cyanide, azide, and atovaquone. Mitochondria that lack their own DNA have been found in a marine parasitic dinoflagellate from the genus Amoebophyra. This microorganism, A. cerati, has functional mitochondria that lack a genome. In related species, the mitochondrial genome still has three genes, but in A. cerati only a single mitochondrial gene — the cytochrome c oxidase I gene (cox1) — is found, and it has migrated to the genome of the nucleus. Dysfunction and disease Mitochondrial diseases Damage and subsequent dysfunction in mitochondria is an important factor in a range of human diseases due to their influence in cell metabolism. Mitochondrial disorders often present as neurological disorders, including autism. 
They can also manifest as myopathy, diabetes, multiple endocrinopathy, and a variety of other systemic disorders. Diseases caused by mutation in the mtDNA include Kearns–Sayre syndrome, MELAS syndrome and Leber's hereditary optic neuropathy. In the vast majority of cases, these diseases are transmitted by a female to her children, as the zygote derives its mitochondria and hence its mtDNA from the ovum. Diseases such as Kearns–Sayre syndrome, Pearson syndrome, and progressive external ophthalmoplegia are thought to be due to large-scale mtDNA rearrangements, whereas other diseases such as MELAS syndrome, Leber's hereditary optic neuropathy, MERRF syndrome, and others are due to point mutations in mtDNA. It has also been reported that drug-tolerant cancer cells have an increased number and size of mitochondria, which suggests an increase in mitochondrial biogenesis. A 2022 study in Nature Nanotechnology reported that cancer cells can hijack the mitochondria from immune cells via physical tunneling nanotubes. In other diseases, defects in nuclear genes lead to dysfunction of mitochondrial proteins. This is the case in Friedreich's ataxia, hereditary spastic paraplegia, and Wilson's disease. These diseases are inherited through the nuclear genome in a Mendelian fashion, as applies to most other genetic diseases. A variety of disorders can be caused by nuclear mutations of oxidative phosphorylation enzymes, such as coenzyme Q10 deficiency and Barth syndrome. Environmental influences may interact with hereditary predispositions and cause mitochondrial disease. For example, there may be a link between pesticide exposure and the later onset of Parkinson's disease. Other pathologies with etiology involving mitochondrial dysfunction include schizophrenia, bipolar disorder, dementia, Alzheimer's disease, Parkinson's disease, epilepsy, stroke, cardiovascular disease, myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS), retinitis pigmentosa, and diabetes mellitus. Mitochondria-mediated oxidative stress plays a role in cardiomyopathy in type 2 diabetics. Increased fatty acid delivery to the heart increases fatty acid uptake by cardiomyocytes, resulting in increased fatty acid oxidation in these cells. This process increases the reducing equivalents available to the electron transport chain of the mitochondria, ultimately increasing reactive oxygen species (ROS) production. ROS increase uncoupling proteins (UCPs) and potentiate proton leakage through the adenine nucleotide translocator (ANT), the combination of which uncouples the mitochondria. Uncoupling then increases oxygen consumption by the mitochondria, compounding the increase in fatty acid oxidation. This creates a vicious cycle of uncoupling; furthermore, even though oxygen consumption increases, ATP synthesis does not increase proportionally because the mitochondria are uncoupled. Reduced ATP availability ultimately results in an energy deficit presenting as reduced cardiac efficiency and contractile dysfunction. To compound the problem, impaired sarcoplasmic reticulum calcium release and reduced mitochondrial reuptake limit peak cytosolic levels of the important signaling ion during muscle contraction. Decreased intra-mitochondrial calcium concentration reduces dehydrogenase activation and ATP synthesis. So in addition to lower ATP synthesis due to fatty acid oxidation, ATP synthesis is impaired by poor calcium signaling as well, causing cardiac problems for diabetics.
Mitochondria also modulate processes such as testicular somatic cell development, spermatogonial stem cell differentiation, luminal acidification, testosterone production in testes, and more. Thus, dysfunction of mitochondria in spermatozoa can be a cause of infertility. In efforts to combat mitochondrial disease, mitochondrial replacement therapy (MRT) has been developed. This form of in vitro fertilization uses donor mitochondria, which avoids the transmission of diseases caused by mutations of mitochondrial DNA. However, this therapy is still being researched and raises concerns about genetic modification as well as safety. These diseases are rare but can be extremely debilitating and progressive, thus posing complex ethical questions for public policy. Relationships to aging There may be some leakage of the electrons transferred in the respiratory chain to form reactive oxygen species. This was thought to result in significant oxidative stress in the mitochondria with high mutation rates of mitochondrial DNA. The hypothesized link between aging and oxidative stress is not new and was proposed in 1956; it was later refined into the mitochondrial free radical theory of aging. A vicious cycle was thought to occur, as oxidative stress leads to mitochondrial DNA mutations, which can lead to enzymatic abnormalities and further oxidative stress. A number of changes can occur to mitochondria during the aging process. Tissues from elderly humans show a decrease in enzymatic activity of the proteins of the respiratory chain. However, mutated mtDNA can only be found in about 0.2% of very old cells. Large deletions in the mitochondrial genome have been hypothesized to lead to high levels of oxidative stress and neuronal death in Parkinson's disease. Mitochondrial dysfunction has also been shown to occur in amyotrophic lateral sclerosis. Since mitochondria play a pivotal role in ovarian function, by providing the ATP necessary for the development from germinal vesicle to mature oocyte, decreased mitochondrial function can lead to inflammation, resulting in premature ovarian failure and accelerated ovarian aging. The resulting dysfunction is then reflected in quantitative damage (such as mtDNA copy number and mtDNA deletions), qualitative damage (such as mutations and strand breaks) and oxidative damage (such as dysfunctional mitochondria due to ROS), which are not only relevant in ovarian aging, but also perturb oocyte-cumulus crosstalk in the ovary, are linked to genetic disorders (such as Fragile X) and can interfere with embryo selection. History The first observations of intracellular structures that probably represented mitochondria were published in 1857, by the physiologist Albert von Kölliker. Richard Altmann, in 1890, established them as cell organelles and called them "bioblasts". In 1898, Carl Benda coined the term "mitochondria" from the Greek μίτος, mitos, "thread", and χονδρίον, chondrion, "granule". Leonor Michaelis discovered that Janus green can be used as a supravital stain for mitochondria in 1900. In 1904, Friedrich Meves made the first recorded observation of mitochondria in plants in cells of the white waterlily, Nymphaea alba, and in 1908, along with Claudius Regaud, suggested that they contain proteins and lipids. Benjamin F. Kingsbury, in 1912, first related them to cell respiration, but almost exclusively based on morphological observations. In 1913, Otto Heinrich Warburg linked respiration to particles which he had obtained from extracts of guinea-pig liver and which he called "grana".
Warburg and Heinrich Otto Wieland, who had also postulated a similar particle mechanism, disagreed on the chemical nature of respiration. It was not until 1925, when David Keilin discovered cytochromes, that the respiratory chain was described. In 1939, experiments using minced muscle cells demonstrated that cellular respiration using one oxygen molecule can form four adenosine triphosphate (ATP) molecules, and in 1941, the concept of the phosphate bonds of ATP being a form of energy in cellular metabolism was developed by Fritz Albert Lipmann. In the following years, the mechanism behind cellular respiration was further elaborated, although its link to the mitochondria was not known. The introduction of tissue fractionation by Albert Claude allowed mitochondria to be isolated from other cell fractions and biochemical analysis to be conducted on them alone. In 1946, he concluded that cytochrome oxidase and other enzymes responsible for the respiratory chain were isolated to the mitochondria. Eugene Kennedy and Albert Lehninger discovered in 1948 that mitochondria are the site of oxidative phosphorylation in eukaryotes. Over time, the fractionation method was further developed, improving the quality of the mitochondria isolated, and other elements of cell respiration were determined to occur in the mitochondria. The first high-resolution electron micrographs appeared in 1952, replacing the Janus Green stains as the preferred way to visualize mitochondria. This led to a more detailed analysis of the structure of the mitochondria, including confirmation that they were surrounded by a membrane. It also showed a second membrane inside the mitochondria that folded up in ridges dividing up the inner chamber and that the size and shape of the mitochondria varied from cell to cell. The popular term "powerhouse of the cell" was coined by Philip Siekevitz in 1957. In 1967, it was discovered that mitochondria contained ribosomes. In 1968, methods were developed for mapping the mitochondrial genes, with the genetic and physical map of yeast mitochondrial DNA completed in 1976. In November 2024, researchers from the United States reported that mitochondria divide into two distinct forms when cells are starved, which could help explain how cancers thrive in hostile conditions.
Biology and health sciences
Organelles and other cell parts
null
19589
https://en.wikipedia.org/wiki/Minimax
Minimax
Minimax (sometimes Minmax, MM or saddle point) is a decision rule used in artificial intelligence, decision theory, game theory, statistics, and philosophy for minimizing the possible loss for a worst case (maximum loss) scenario. When dealing with gains, it is referred to as "maximin" – to maximize the minimum gain. Originally formulated for several-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty. Game theory In general games The maximin value is the highest value that the player can be sure to get without knowing the actions of the other players; equivalently, it is the lowest value the other players can force the player to receive when they know the player's action. Its formal definition is: Where: is the index of the player of interest. denotes all other players except player . is the action taken by player . denotes the actions taken by all other players. is the value function of player . Calculating the maximin value of a player is done in a worst-case approach: for each possible action of the player, we check all possible actions of the other players and determine the worst possible combination of actions – the one that gives player the smallest value. Then, we determine which action player can take in order to make sure that this smallest value is the highest possible. For example, consider the following game for two players, where the first player ("row player") may choose any of three moves, labelled , , or , and the second player ("column player") may choose either of two moves, or . The result of the combination of both moves is expressed in a payoff table: (where the first number in each of the cell is the pay-out of the row player and the second number is the pay-out of the column player). For the sake of example, we consider only pure strategies. Check each player in turn: The row player can play , which guarantees them a payoff of at least (playing is risky since it can lead to payoff , and playing can result in a payoff of ). Hence: . The column player can play and secure a payoff of at least (playing puts them in the risk of getting ). Hence: . If both players play their respective maximin strategies , the payoff vector is . The minimax value of a player is the smallest value that the other players can force the player to receive, without knowing the player's actions; equivalently, it is the largest value the player can be sure to get when they know the actions of the other players. Its formal definition is: The definition is very similar to that of the maximin value – only the order of the maximum and minimum operators is inverse. In the above example: The row player can get a maximum value of (if the other player plays ) or (if the other player plays ), so: The column player can get a maximum value of (if the other player plays ), (if ) or (if ). Hence: For every player , the maximin is at most the minimax: Intuitively, in maximin the maximization comes after the minimization, so player tries to maximize their value before knowing what the others will do; in minimax the maximization comes before the minimization, so player is in a much better position – they maximize their value knowing what the others did. 
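As a concrete illustration of these two quantities, the short sketch below computes the row player's maximin and minimax values over pure strategies. The payoff matrix is a small hypothetical example chosen for this sketch; its numbers are not the payoffs of the game discussed above.

# Minimal sketch: maximin and minimax values for the row player of a two-player game,
# computed over pure strategies. The matrix is hypothetical (rows = row player's actions,
# columns = column player's actions, entries = row player's payoff).
payoff = [
    [3, -2],    # row action 0
    [5, -10],   # row action 1
    [-4, 4],    # row action 2
]

# maximin: pick the row whose worst-case (minimum over columns) payoff is largest
maximin = max(min(row) for row in payoff)

# minimax: for each column, take the best the row player could do, then take the
# worst of those best cases (the minimization is applied after the maximization)
minimax = min(max(payoff[r][c] for r in range(len(payoff)))
              for c in range(len(payoff[0])))

print("row player's maximin value:", maximin)   # here: -2
print("row player's minimax value:", minimax)   # here: 4, and maximin <= minimax holds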
Another way to understand the notation is by reading from right to left: When we write the initial set of outcomes depends on both and We first marginalize away from , by maximizing over (for every possible value of ) to yield a set of marginal outcomes which depends only on We then minimize over over these outcomes. (Conversely for maximin.) Although it is always the case that and the payoff vector resulting from both players playing their minimax strategies, in the case of or in the case of cannot similarly be ranked against the payoff vector resulting from both players playing their maximin strategy. In zero-sum games In two-player zero-sum games, the minimax solution is the same as the Nash equilibrium. In the context of zero-sum games, the minimax theorem is equivalent to: For every two-person zero-sum game with finitely many strategies, there exists a value and a mixed strategy for each player, such that (a) Given Player 2's strategy, the best payoff possible for Player 1 is , and (b) Given Player 1's strategy, the best payoff possible for Player 2 is −. Equivalently, Player 1's strategy guarantees them a payoff of regardless of Player 2's strategy, and similarly Player 2 can guarantee themselves a payoff of −. The name minimax arises because each player minimizes the maximum payoff possible for the other – since the game is zero-sum, they also minimize their own maximum loss (i.e., maximize their minimum payoff).
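For two-player zero-sum games, the value and an optimal mixed strategy guaranteed by the minimax theorem can be computed by linear programming, which is one standard approach rather than something prescribed by this article. The sketch below assumes scipy is available and uses a hypothetical 2x2 payoff matrix for the row player.

# Minimal sketch: solving a two-player zero-sum game by linear programming to obtain the
# game value v and an optimal mixed strategy for the row player. The payoff matrix is
# hypothetical, and the use of scipy here is an assumption of this sketch.
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, -1.0],
              [-1.0, 1.0]])          # row player's payoffs; the column player receives -A
m, n = A.shape

# Variables: z = (x_1, ..., x_m, v). Maximizing v is the same as minimizing -v.
c = np.zeros(m + 1)
c[-1] = -1.0

# For every column j: v - sum_i x_i * A[i, j] <= 0
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)

# Mixed-strategy probabilities sum to 1
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])

bounds = [(0, None)] * m + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

x, v = res.x[:m], res.x[-1]
print("optimal mixed strategy for the row player:", x)   # about [0.4, 0.6] for this matrix
print("game value v:", v)                                # about 0.2 for this matrix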
Mathematics
Game theory
null
19594
https://en.wikipedia.org/wiki/Missile
Missile
A missile is an airborne ranged weapon capable of self-propelled flight aided usually by a propellant, jet engine or rocket motor. Historically, 'missile' referred to any projectile that is thrown, shot or propelled towards a target; this usage is still recognized today, with unguided jet- or rocket-propelled weapons generally described as rocket artillery. Airborne explosive devices without propulsion are referred to as shells if fired by an artillery piece and bombs if dropped by an aircraft. Missiles are also generally guided towards specific targets, and such weapons are termed guided missiles or guided rockets. Missile systems usually have five system components: targeting, guidance system, flight system, engine, and warhead. Missiles are primarily classified into different types based on firing source and target such as surface-to-surface, air-to-surface, surface-to-air and air-to-air missiles. History Rockets were the precursor to modern missiles and the first rockets were used as propulsion systems for arrows as early as the 10th century in China. Usage of rockets as weapons before modern rocketry is attested to in China, Korea, India and Europe. In the 18th century, iron-cased rockets were used in India by the Kingdom of Mysore and the Maratha Empire against the British; these were later developed into the Congreve rocket and used in the Napoleonic Wars. In the early 20th century, American Robert Goddard and German Hermann Oberth developed early rockets propelled by jet engines. In the 1920s, the Soviet Union developed solid-fuel rockets at the Gas Dynamics Laboratory. Later, the first missiles to be used operationally were a series of rocket-based missiles developed by Nazi Germany during World War II, including the V-1 flying bomb and the V-2 rocket, which used a mechanical autopilot to keep the missile flying along a pre-chosen route. Less well known were a series of anti-ship and anti-aircraft missiles, typically based on a simple radio control (command guidance) system directed by the operator. However, these early systems in World War II were only built in small numbers. After World War II, the advent of the Cold War and development of nuclear weapons necessitated faster, more accurate and more versatile missiles with longer range, and missile development was pursued by multiple countries. Proliferation restrictions Various attempts have been made to control the spread of long-range missiles capable of carrying weapons of mass destruction, such as the Missile Technology Control Regime (1987) and the International Code of Conduct against Ballistic Missile Proliferation (2002). These were voluntary agreements and not international treaties. Though not legally binding, more than 140 countries have been part of these agreements, and provide prior information on missile programs, expected launches, and tests. The gradual introduction of missile-launched hypersonic glide vehicles since 2019, anti-satellite missiles, and the deployment of dual-use missiles capable of carrying both conventional and nuclear warheads are proliferation concerns. Components Guidance, targeting and flight systems A missile is most often guided by a guidance system, though there are missiles that are unguided during some phases of flight. Missile guidance refers to methods of guiding a missile to its intended target. Effective guidance is important because reaching the target position accurately and precisely is a critical factor for its effectiveness.
The missile guidance system accomplishes this by four steps: tracking the target, computing the directions using tracking information, directing the computed inputs to steering control and steering the missile by directing inputs to motors or flight control surfaces. The guidance system consists of three sections: launch, mid-course and terminal, with the same or different systems employed across sections. The guidance and homing systems are generally classified broadly into active, semi-active and passive. In active homing systems, the missile carries the equipment needed to illuminate the target with radiation and to receive the reflected energy. Once homing is initiated, the missile steers itself independently towards the target. In semi-active systems, the source of the radiation is located outside the missile, usually in the launch vehicle, which might be an aircraft or a ship, and the missile receives the radiation to direct itself towards the target. As the source is located outside, the launch vehicle needs to continue supporting the missile until it is guided to the intended target. In a passive system, the missile relies solely on the information from the target. The homing system might use light such as infrared, laser or visible light, radio waves or other electromagnetic radiation to illuminate the target. Once the guidance system identifies the target, the target might need to be tracked continuously if it is in motion. A guidance system might use an inertial navigation system (INS), which consists of gyroscopes and accelerometers, or might use satellite guidance (such as GPS) to track the missile's position relative to a known target. The missile computers will compute the flight path required to steer the missile towards the target. In command guidance, a human operator may steer the missile manually, or a support or launch system will transmit commands over either optical fiber or radio to guide the missile. The flight system uses the data from the targeting or guidance system to maneuver the missile in flight, which might be accomplished using vectored engine thrust or aerodynamic maneuvering using flight control surfaces such as wings, fins and canards. Engine Missiles are powered by propellants that ignite to produce thrust and might employ various types of rocket or jet engines. Rockets might be fueled by solid propellants, which are comparatively easier to maintain and enable faster deployment. These propellants contain a fuel and oxidizer mixed in select proportions, with the grain size and burn chamber determining the rate and duration of burn. Larger missiles might use liquid-propellant rockets, where propulsion is provided by a single liquid fuel or a combination of liquid fuels. A hybrid system uses solid rocket fuel with a liquid oxidizer. Jet engines are generally used in cruise missiles, most commonly of the turbojet type, because of their relative simplicity and low frontal area, while turbofans and ramjets can also be used in theory. Long-range missiles have multiple engine stages and might use a similar type or a mix of engine types. Some missiles may have additional propulsion from another source at launch such as a catapult, cannon or tank gun. Warhead Missiles have one or more explosive warheads, although other weapon types may also be used. The warheads of a missile provide its primary destructive power, while the kinetic energy of the weapon and unused fuel might cause secondary destruction.
Warheads are most commonly of the high explosive type, often employing shaped charges to exploit the accuracy of a guided weapon to destroy hardened targets. The warhead might carry conventional, incendiary, nuclear, chemical, biological or radiological payloads. Classification Missiles can be classified into categories by various parameters such as type, launch platform and target, range, propulsion and guidance system. Missiles are generally categorized into strategic or tactical missile systems. Tactical missile systems are short-range systems used to carry out a limited strike in a smaller area and might carry conventional or nuclear warheads. Strategic missiles are long-range weapons used to strike targets beyond the immediate vicinity and are mostly designed to carry nuclear warheads, though other warheads can also be fitted. Strategic Strategic weapons are often classified into cruise and ballistic missiles. Ballistic missiles are powered by rockets during launch and follow a trajectory that arches upwards before descending to reach their intended targets, while cruise missiles are continuously powered by jet engines and travel at a flatter trajectory. Ballistic A ballistic missile is initially powered by one or more rocket stages before following an unpowered trajectory that arches upwards and then descends to reach its intended target. It can carry both nuclear and conventional warheads. A ballistic missile might reach supersonic or hypersonic speeds and often travels out of the Earth's atmosphere before re-entry. It usually has three stages of flight: Boost phase: First phase at launch when one or more stages of rocket engine(s) fire, propelling the missile Mid-course phase: Second phase when the rocket engines stop firing and the missile continues ascending upwards on the given trajectory Terminal phase: Final phase when the warhead(s) detach and descend towards the target Ballistic missiles are categorized based on range as: Short-range : less than Medium-range : to Intermediate-range : to Inter-continental : greater than Cruise A cruise missile is a guided missile that remains in the atmosphere and flies the major portion of its flight at a constant speed. It is designed to deliver a large warhead over long distances with high precision and is propelled by jet engines. A cruise missile can be launched from multiple platforms and is often self-guided. It flies at lower speeds (often subsonic or supersonic) and close to the surface of the Earth, which expends more fuel but makes it difficult to detect. Tactical Missiles might also be classified by launch platform and target into surface-to-air, surface-to-surface, air-to-air, air-to-surface, anti-ship and anti-tank. Anti-ship An anti-ship missile (AShM) is designed for use against large boats and ships such as destroyers and aircraft carriers. Most anti-ship missiles are of the sea-skimming variety, and many use a combination of inertial guidance and active radar homing. A large number of other anti-ship missiles use infrared homing to follow the heat that is emitted by a ship; it is also possible for anti-ship missiles to be guided by radio command all the way. Many anti-ship missiles can be launched from a variety of weapons systems including surface warships, submarines, fighter aircraft, maritime patrol aircraft, helicopters, shore batteries, land vehicles and by infantry.
An anti-submarine missile is a standoff anti-submarine weapon, a variant of the anti-ship missile, used to deliver an explosive warhead aimed directly at a submarine, a depth charge, or a homing torpedo. Anti-tank An anti-tank guided missile (ATGM) is a guided missile primarily designed to hit and destroy heavily armored military vehicles. ATGMs range in size from shoulder-launched weapons, which can be transported by a single soldier, to larger tripod-mounted or vehicle and aircraft mounted missile systems. Earlier man-portable anti-tank weapons, such as anti-tank rifles and magnetic anti-tank mines, had a short range, but sophisticated anti-tank missiles can be directed to more distant targets by several different guidance systems, including laser guidance, television camera, or wire guidance. Air-to-air An air-to-air missile (AAM) is a missile fired from a fighter aircraft for the purpose of destroying another aircraft. AAMs are typically powered by one or more rocket motors, usually solid-fueled but sometimes liquid-fueled. A homing system based on radar or heat emission is generally used, and sometimes a combination of the two. Short-range missiles used to engage opposing aircraft at ranges of less than 16 km often use infrared guidance, while long-range missiles mostly rely upon radar guidance. Air-to-surface An air-to-surface missile (ASM) is a missile fired from an attack aircraft, strike fighter or an attack helicopter for the purpose of destroying land-based targets. Such missiles are typically guided; unguided glide bombs are not considered missiles. The most common propulsion systems are rocket motors for short range and jet engines for long range, but ramjets are also used. Missile guidance is typically via laser, infrared homing, optical or satellite. Air-to-surface missiles provide aircraft conducting ground attack with a greater standoff distance, engaging targets from far away and outside the range of short-range air defenses. Surface-to-air A surface-to-air missile (SAM) is a missile designed to be launched from the ground to destroy aircraft, other missiles or flying objects. It is a type of anti-aircraft system; missiles have replaced most other forms of anti-aircraft weapons due to their increased range and accuracy. Anti-aircraft guns are now used only for specialized close-in firing roles. Missiles can be mounted in clusters on vehicles or towed on trailers and can be hand-operated by infantry. SAMs frequently use solid propellants and may be guided by radar or infrared sensors or by a human operator using optical tracking. Surface-to-surface A surface-to-surface missile (SSM) is a missile designed to be launched from the ground or the sea and strike targets on land. They may be fired from hand-held or vehicle-mounted devices, from fixed installations or from a ship. They are often powered by a rocket engine or sometimes fired by an explosive charge, since the launching platform is typically stationary or moving slowly. They usually have fins and/or wings for lift and stability, although hyper-velocity or short-ranged missiles may use body lift or fly a ballistic trajectory. Most anti-tank and anti-ship missiles are part of surface-to-surface missile systems. Anti-satellite An anti-satellite weapon (ASAT) is a space weapon designed to incapacitate or destroy satellites for strategic or tactical purposes. Although no ASAT system has been utilized in warfare, a few countries have successfully shot down their own satellites to demonstrate their ASAT capabilities in a show of force.
ASATs have also been used to remove decommissioned satellites. ASAT roles include defensive measures against an adversary's space-based and nuclear weapons, a force multiplier for a nuclear first strike, a countermeasure against an adversary's anti-ballistic missile defense (ABM), an asymmetric counter to a technologically superior adversary, and a counter-value weapon.
Technology
Explosive weapons
null
19595
https://en.wikipedia.org/wiki/Mendelian%20inheritance
Mendelian inheritance
Mendelian inheritance (also known as Mendelism) is a type of biological inheritance following the principles originally proposed by Gregor Mendel in 1865 and 1866, re-discovered in 1900 by Hugo de Vries and Carl Correns, and later popularized by William Bateson. These principles were initially controversial. When Mendel's theories were integrated with the Boveri–Sutton chromosome theory of inheritance by Thomas Hunt Morgan in 1915, they became the core of classical genetics. Ronald Fisher combined these ideas with the theory of natural selection in his 1930 book The Genetical Theory of Natural Selection, putting evolution onto a mathematical footing and forming the basis for population genetics within the modern evolutionary synthesis. History The principles of Mendelian inheritance were named for and first derived by Gregor Johann Mendel, a nineteenth-century Moravian monk who formulated his ideas after conducting simple hybridization experiments with pea plants (Pisum sativum) he had planted in the garden of his monastery. Between 1856 and 1863, Mendel cultivated and tested some 5,000 pea plants. From these experiments, he induced two generalizations which later became known as Mendel's Principles of Heredity or Mendelian inheritance. He described his experiments in a two-part paper, Versuche über Pflanzen-Hybriden (Experiments on Plant Hybridization), that he presented to the Natural History Society of Brno on 8 February and 8 March 1865, and which was published in 1866. Mendel's results were at first largely ignored. Although they were not completely unknown to biologists of the time, they were not seen as generally applicable, even by Mendel himself, who thought they only applied to certain categories of species or traits. A major roadblock to understanding their significance was the importance attached by 19th-century biologists to the apparent blending of many inherited traits in the overall appearance of the progeny, now known to be due to multi-gene interactions, in contrast to the organ-specific binary characters studied by Mendel. In 1900, however, his work was "re-discovered" by three European scientists, Hugo de Vries, Carl Correns, and Erich von Tschermak. The exact nature of the "re-discovery" has been debated: De Vries published first on the subject, mentioning Mendel in a footnote, while Correns pointed out Mendel's priority after having read De Vries' paper and realizing that he himself did not have priority. De Vries may not have acknowledged truthfully how much of his knowledge of the laws came from his own work and how much came only after reading Mendel's paper. Later scholars have accused Von Tschermak of not truly understanding the results at all. Regardless, the "re-discovery" made Mendelism an important but controversial theory. Its most vigorous promoter in Europe was William Bateson, who coined the terms "genetics" and "allele" to describe many of its tenets. The model of heredity was contested by other biologists because it implied that heredity was discontinuous, in opposition to the apparently continuous variation observable for many traits. Many biologists also dismissed the theory because they were not sure it would apply to all species. However, later work by biologists and statisticians such as Ronald Fisher showed that if multiple Mendelian factors were involved in the expression of an individual trait, they could produce the diverse results observed, thus demonstrating that Mendelian genetics is compatible with natural selection. 
Thomas Hunt Morgan and his assistants later integrated Mendel's theoretical model with the chromosome theory of inheritance, in which the chromosomes of cells were thought to hold the actual hereditary material, and created what is now known as classical genetics, a highly successful foundation which eventually cemented Mendel's place in history. Mendel's findings allowed scientists such as Fisher and J.B.S. Haldane to predict the expression of traits on the basis of mathematical probabilities. An important aspect of Mendel's success can be traced to his decision to start his crosses only with plants he demonstrated were true-breeding. He only measured discrete (binary) characteristics, such as color, shape, and position of the seeds, rather than quantitatively variable characteristics. He expressed his results numerically and subjected them to statistical analysis. His method of data analysis and his large sample size gave credibility to his data. He had the foresight to follow several successive generations (P, F1, F2, F3) of pea plants and record their variations. Finally, he performed "test crosses" (backcrossing descendants of the initial hybridization to the initial true-breeding lines) to reveal the presence and proportions of recessive characters. Inheritance tools Punnett Squares Punnett Squares are a well known genetics tool that was created by an English geneticist, Reginald Punnett, which can visually demonstrate all the possible genotypes that an offspring can receive, given the genotypes of their parents. Each parent carries two alleles, which can be shown on the top and the side of the chart, and each contribute one of them towards reproduction at a time. Each of the squares in the middle demonstrates the number of times each pairing of parental alleles could combine to make potential offspring. Using probabilities, one can then determine which genotypes the parents can create, and at what frequencies they can be created. For example, if two parents both have a heterozygous genotype, then there would be a 50% chance for their offspring to have the same genotype, and a 50% chance they would have a homozygous genotype. Since they could possibly contribute two identical alleles, the 50% would be halved to 25% to account for each type of homozygote, whether this was a homozygous dominant genotype, or a homozygous recessive genotype. Pedigrees Pedigrees are visual tree like representations that demonstrate exactly how alleles are being passed from past generations to future ones. They also provide a diagram displaying each individual that carries a desired allele, and exactly which side of inheritance it was received from, whether it was from their mother's side or their father's side. Pedigrees can also be used to aid researchers in determining the inheritance pattern for the desired allele, because they share information such as the gender of all individuals, the phenotype, a predicted genotype, the potential sources for the alleles, and also based its history, how it could continue to spread in the future generations to come. By using pedigrees, scientists have been able to find ways to control the flow of alleles over time, so that alleles that act problematic can be resolved upon discovery. Mendel's genetic discoveries Five parts of Mendel's discoveries were an important divergence from the common theories at the time and were the prerequisite for the establishment of his rules. Characters are unitary, that is, they are discrete e.g.: purple vs. white, tall vs. dwarf. 
There is no medium-sized plant or light purple flower. Genetic characteristics have alternate forms, each inherited from one of two parents. Today these are called alleles. One allele is dominant over the other. The phenotype reflects the dominant allele. Gametes are created by random segregation. Heterozygotic individuals produce gametes with an equal frequency of the two alleles. Different traits have independent assortment. In modern terms, genes are unlinked. According to customary terminology, the principles of inheritance discovered by Gregor Mendel are here referred to as Mendelian laws, although today's geneticists also speak of Mendelian rules or Mendelian principles, as there are many exceptions summarized under the collective term Non-Mendelian inheritance. The laws were initially formulated by the geneticist Thomas Hunt Morgan in 1916. Mendel selected for the experiment the following characters of pea plants: Form of the ripe seeds (round or roundish, surface shallow or wrinkled) Colour of the seed–coat (white, gray, or brown, with or without violet spotting) Colour of the seeds and cotyledons (yellow or green) Flower colour (white or violet-red) Form of the ripe pods (simply inflated, not contracted, or constricted between the seeds and wrinkled) Colour of the unripe pods (yellow or green) Position of the flowers (axial or terminal) Length of the stem When he crossed purebred white flower and purple flower pea plants (the parental or P generation) by artificial pollination, the resulting flower colour was not a blend. Rather than being a mix of the two, the offspring in the first generation (F1-generation) were all purple-flowered. Therefore, he called this biological trait dominant. When he allowed self-fertilization in the uniform looking F1-generation, he obtained both colours in the F2 generation with a purple flower to white flower ratio of 3 : 1. In some of the other characters also one of the traits was dominant. He then conceived the idea of heredity units, which he called hereditary "factors". Mendel found that there are alternative forms of factors—now called genes—that account for variations in inherited characteristics. For example, the gene for flower color in pea plants exists in two forms, one for purple and the other for white. The alternative "forms" are now called alleles. For each trait, an organism inherits two alleles, one from each parent. These alleles may be the same or different. An organism that has two identical alleles for a gene is said to be homozygous for that gene (and is called a homozygote). An organism that has two different alleles for a gene is said to be heterozygous for that gene (and is called a heterozygote). Mendel hypothesized that allele pairs separate randomly, or segregate, from each other during the production of the gametes in the seed plant (egg cell) and the pollen plant (sperm). Because allele pairs separate during gamete production, a sperm or egg carries only one allele for each inherited trait. When sperm and egg unite at fertilization, each contributes its allele, restoring the paired condition in the offspring. Mendel also found that each pair of alleles segregates independently of the other pairs of alleles during gamete formation. The genotype of an individual is made up of the many alleles it possesses. The phenotype is the result of the expression of all characteristics that are genetically determined by its alleles as well as by its environment. 
The presence of an allele does not mean that the trait will be expressed in the individual that possesses it. If the two alleles of an inherited pair differ (the heterozygous condition), then one determines the organism's appearance and is called the dominant allele; the other has no noticeable effect on the organism's appearance and is called the recessive allele. Mendel's laws of inheritance Law of Dominance and Uniformity If two parents are mated with each other who differ in one genetic characteristic for which they are both homozygous (each pure-bred), all offspring in the first generation (F1) are equal to the examined characteristic in genotype and phenotype showing the dominant trait. This uniformity rule or reciprocity rule applies to all individuals of the F1-generation. The principle of dominant inheritance discovered by Mendel states that in a heterozygote the dominant allele will cause the recessive allele to be "masked": that is, not expressed in the phenotype. Only if an individual is homozygous with respect to the recessive allele will the recessive trait be expressed. Therefore, a cross between a homozygous dominant and a homozygous recessive organism yields a heterozygous organism whose phenotype displays only the dominant trait. The F1 offspring of Mendel's pea crosses always looked like one of the two parental varieties. In this situation of "complete dominance", the dominant allele had the same phenotypic effect whether present in one or two copies. But for some characteristics, the F1 hybrids have an appearance in between the phenotypes of the two parental varieties. A cross between two four o'clock (Mirabilis jalapa) plants shows an exception to Mendel's principle, called incomplete dominance. Flowers of heterozygous plants have a phenotype somewhere between the two homozygous genotypes. In cases of intermediate inheritance (incomplete dominance) in the F1-generation Mendel's principle of uniformity in genotype and phenotype applies as well. Research about intermediate inheritance was done by other scientists. The first was Carl Correns with his studies about Mirabilis jalapa. Law of Segregation of genes The Law of Segregation of genes applies when two individuals, both heterozygous for a certain trait are crossed, for example, hybrids of the F1-generation. The offspring in the F2-generation differ in genotype and phenotype so that the characteristics of the grandparents (P-generation) regularly occur again. In a dominant-recessive inheritance, an average of 25% are homozygous with the dominant trait, 50% are heterozygous showing the dominant trait in the phenotype (genetic carriers), 25% are homozygous with the recessive trait and therefore express the recessive trait in the phenotype. The genotypic ratio is 1: 2 : 1, and the phenotypic ratio is 3: 1. In the pea plant example, the capital "B" represents the dominant allele for purple blossom and lowercase "b" represents the recessive allele for white blossom. The pistil plant and the pollen plant are both F1-hybrids with genotype "B b". Each has one allele for purple and one allele for white. In the offspring, in the F2-plants in the Punnett-square, three combinations are possible. The genotypic ratio is 1 BB : 2 Bb : 1 bb. But the phenotypic ratio of plants with purple blossoms to those with white blossoms is 3 : 1 due to the dominance of the allele for purple. Plants with homozygous "b b" are white flowered like one of the grandparents in the P-generation. 
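The 1 : 2 : 1 genotype ratio and 3 : 1 phenotype ratio just described can be reproduced with a small Punnett-square sketch. The code below is an illustration written for the pea example above; the function and variable names are not from the source.

# Minimal sketch: Punnett square for a monohybrid cross between two heterozygotes (Bb x Bb),
# with "B" (purple) dominant over "b" (white). Genotypes are normalized so "bB" counts as "Bb".
from collections import Counter
from itertools import product

def punnett_square(parent1, parent2):
    """Return counts of offspring genotypes for two diploid parents, e.g. 'Bb' and 'Bb'."""
    offspring = ("".join(sorted(a + b)) for a, b in product(parent1, parent2))
    return Counter(offspring)

genotypes = punnett_square("Bb", "Bb")
print(genotypes)                      # Counter({'Bb': 2, 'BB': 1, 'bb': 1})  ->  1 : 2 : 1

# Phenotype ratio: any genotype containing the dominant allele "B" shows the dominant trait.
dominant = sum(n for g, n in genotypes.items() if "B" in g)
recessive = genotypes.get("bb", 0)
print(f"purple : white = {dominant} : {recessive}")   # 3 : 1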
In cases of incomplete dominance the same segregation of alleles takes place in the F2-generation, but here the phenotypes also show a ratio of 1 : 2 : 1, as the heterozygotes differ in phenotype from the homozygotes because the genetic expression of one allele compensates for the missing expression of the other allele only partially. This results in an intermediate inheritance which was later described by other scientists. In some literature sources, the principle of segregation is cited as the "first law". Nevertheless, Mendel did his crossing experiments with heterozygous plants after obtaining these hybrids by crossing two purebred plants, discovering the principle of dominance and uniformity first. Molecular proof of segregation of genes was subsequently found through observation of meiosis by two scientists independently, the German zoologist Oscar Hertwig in 1876, and the Belgian zoologist Edouard Van Beneden in 1883. Most alleles are located in chromosomes in the cell nucleus. Paternal and maternal chromosomes get separated in meiosis because during spermatogenesis the chromosomes are segregated among the four sperm cells that arise from one mother sperm cell, and during oogenesis the chromosomes are distributed between the polar bodies and the egg cell. Every individual organism contains two alleles for each trait. They segregate (separate) during meiosis such that each gamete contains only one of the alleles. When the gametes unite in the zygote the alleles—one from the mother, one from the father—get passed on to the offspring. An offspring thus receives a pair of alleles for a trait by inheriting homologous chromosomes from the parent organisms: one allele for each trait from each parent. Heterozygous individuals with the dominant trait in the phenotype are genetic carriers of the recessive trait. Law of Independent Assortment The Law of Independent Assortment proposes that alleles for separate traits are passed independently of one another. That is, the biological selection of an allele for one trait has nothing to do with the selection of an allele for any other trait. Mendel found support for this law in his dihybrid cross experiments. In his monohybrid crosses, an idealized 3:1 ratio between dominant and recessive phenotypes resulted. In dihybrid crosses, however, he found a 9:3:3:1 ratio. This shows that each of the two alleles is inherited independently from the other, with a 3:1 phenotypic ratio for each. Independent assortment occurs in eukaryotic organisms during meiotic metaphase I, and produces a gamete with a mixture of the organism's chromosomes. The physical basis of the independent assortment of chromosomes is the random orientation of each bivalent chromosome along the metaphase plate with respect to the other bivalent chromosomes. Along with crossing over, independent assortment increases genetic diversity by producing novel genetic combinations. There are many deviations from the principle of independent assortment due to genetic linkage. Of the 46 chromosomes in a normal diploid human cell, half are maternally derived (from the mother's egg) and half are paternally derived (from the father's sperm). This occurs as sexual reproduction involves the fusion of two haploid gametes (the egg and sperm) to produce a zygote and a new organism, in which every cell has two sets of chromosomes (diploid).
During gametogenesis, the normal complement of 46 chromosomes needs to be halved to 23 to ensure that the resulting haploid gamete can join with another haploid gamete to produce a diploid organism. In independent assortment, the chromosomes that result are randomly sorted from all possible maternal and paternal chromosomes. Because zygotes end up with a mix instead of a pre-defined "set" from either parent, chromosomes are considered to be assorted independently. As such, the zygote can end up with any combination of paternal or maternal chromosomes. For human gametes, with 23 chromosomes, the number of possibilities is 2²³ or 8,388,608 possible combinations. This contributes to the genetic variability of progeny. Generally, the recombination of genes has important implications for many evolutionary processes. Mendelian trait A Mendelian trait is one whose inheritance follows Mendel's principles—namely, the trait depends only on a single locus, whose alleles are either dominant or recessive. Many traits are inherited in a non-Mendelian fashion. Non-Mendelian inheritance Mendel himself warned that care was needed in extrapolating his patterns to other organisms or traits. Indeed, many organisms have traits whose inheritance works differently from the principles he described; these traits are called non-Mendelian. For example, Mendel focused on traits whose genes have only two alleles, such as "A" and "a". However, many genes have more than two alleles. He also focused on traits determined by a single gene. But some traits, such as height, depend on many genes rather than just one. Traits dependent on multiple genes are called polygenic traits.
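The counts mentioned above are easy to reproduce: with 23 chromosome pairs, independent assortment alone allows 2²³ distinct gametes, and combining two independently assorting genes in a dihybrid cross gives the 9:3:3:1 phenotype ratio. The sketch below verifies both; it is an illustration written for this article, not code taken from the source.

# Minimal sketch verifying two counts from the text: the number of gametes possible from
# independent assortment of 23 chromosome pairs, and the 9:3:3:1 phenotype ratio of a
# dihybrid cross (AaBb x AaBb) with complete dominance at both loci.
from collections import Counter
from itertools import product

print(2 ** 23)                         # 8388608 possible chromosome combinations per gamete

gametes = ["".join(p) for p in product("Aa", "Bb")]   # AB, Ab, aB, ab (equally likely)

def phenotype(genotype):
    """'A' if at least one dominant A allele is present, otherwise 'a'; likewise for B/b."""
    first = "A" if "A" in genotype else "a"
    second = "B" if "B" in genotype else "b"
    return first + second

counts = Counter(phenotype(g1 + g2) for g1, g2 in product(gametes, gametes))
print(counts)                          # Counter({'AB': 9, 'Ab': 3, 'aB': 3, 'ab': 1})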
Biology and health sciences
Genetics and taxonomy
null
19605
https://en.wikipedia.org/wiki/Main%20sequence
Main sequence
In astronomy, the main sequence is a classification of stars which appear on plots of stellar color versus brightness as a continuous and distinctive band. Stars on this band are known as main-sequence stars or dwarf stars, and positions of stars on and off the band are believed to indicate their physical properties, as well as their progress through several types of star life-cycles. These are the most numerous true stars in the universe and include the Sun. Color-magnitude plots are known as Hertzsprung–Russell diagrams after Ejnar Hertzsprung and Henry Norris Russell. After condensation and ignition of a star, it generates thermal energy in its dense core region through nuclear fusion of hydrogen into helium. During this stage of the star's lifetime, it is located on the main sequence at a position determined primarily by its mass but also based on its chemical composition and age. The cores of main-sequence stars are in hydrostatic equilibrium, where outward thermal pressure from the hot core is balanced by the inward pressure of gravitational collapse from the overlying layers. The strong dependence of the rate of energy generation on temperature and pressure helps to sustain this balance. Energy generated at the core makes its way to the surface and is radiated away at the photosphere. The energy is carried by either radiation or convection, with the latter occurring in regions with steeper temperature gradients, higher opacity, or both. The main sequence is sometimes divided into upper and lower parts, based on the dominant process that a star uses to generate energy. The Sun, along with main sequence stars below about 1.5 times the mass of the Sun (), primarily fuse hydrogen atoms together in a series of stages to form helium, a sequence called the proton–proton chain. Above this mass, in the upper main sequence, the nuclear fusion process mainly uses atoms of carbon, nitrogen, and oxygen as intermediaries in the CNO cycle that produces helium from hydrogen atoms. Main-sequence stars with more than two solar masses undergo convection in their core regions, which acts to stir up the newly created helium and maintain the proportion of fuel needed for fusion to occur. Below this mass, stars have cores that are entirely radiative with convective zones near the surface. With decreasing stellar mass, the proportion of the star forming a convective envelope steadily increases. The main-sequence stars below undergo convection throughout their mass. When core convection does not occur, a helium-rich core develops surrounded by an outer layer of hydrogen. The more massive a star is, the shorter its lifespan on the main sequence. After the hydrogen fuel at the core has been consumed, the star evolves away from the main sequence on the HR diagram, into a supergiant, red giant, or directly to a white dwarf. History In the early part of the 20th century, information about the types and distances of stars became more readily available. The spectra of stars were shown to have distinctive features, which allowed them to be categorized. Annie Jump Cannon and Edward Charles Pickering at Harvard College Observatory developed a method of categorization that became known as the Harvard Classification Scheme, published in the Harvard Annals in 1901. In Potsdam in 1906, the Danish astronomer Ejnar Hertzsprung noticed that the reddest stars—classified as K and M in the Harvard scheme—could be divided into two distinct groups. These stars are either much brighter than the Sun or much fainter. 
To distinguish these groups, he called them "giant" and "dwarf" stars. The following year he began studying star clusters; large groupings of stars that are co-located at approximately the same distance. For these stars, he published the first plots of color versus luminosity. These plots showed a prominent and continuous sequence of stars, which he named the Main Sequence. At Princeton University, Henry Norris Russell was following a similar course of research. He was studying the relationship between the spectral classification of stars and their actual brightness as corrected for distance—their absolute magnitude. For this purpose, he used a set of stars that had reliable parallaxes and many of which had been categorized at Harvard. When he plotted the spectral types of these stars against their absolute magnitude, he found that dwarf stars followed a distinct relationship. This allowed the real brightness of a dwarf star to be predicted with reasonable accuracy. Of the red stars observed by Hertzsprung, the dwarf stars also followed the spectra-luminosity relationship discovered by Russell. However, giant stars are much brighter than dwarfs and so do not follow the same relationship. Russell proposed that "giant stars must have low density or great surface brightness, and the reverse is true of dwarf stars". The same curve also showed that there were very few faint white stars. In 1933, Bengt Strömgren introduced the term Hertzsprung–Russell diagram to denote a luminosity-spectral class diagram. This name reflected the parallel development of this technique by both Hertzsprung and Russell earlier in the century. As evolutionary models of stars were developed during the 1930s, it was shown that, for stars with the same composition, the star's mass determines its luminosity and radius. Conversely, when a star's chemical composition and its position on the main sequence are known, the star's mass and radius can be deduced. This became known as the Vogt–Russell theorem; named after Heinrich Vogt and Henry Norris Russell. It was subsequently discovered that this relationship breaks down somewhat for stars of the non-uniform composition. A refined scheme for stellar classification was published in 1943 by William Wilson Morgan and Philip Childs Keenan. The MK classification assigned each star a spectral type—based on the Harvard classification—and a luminosity class. The Harvard classification had been developed by assigning a different letter to each star based on the strength of the hydrogen spectral line before the relationship between spectra and temperature was known. When ordered by temperature and when duplicate classes were removed, the spectral types of stars followed, in order of decreasing temperature with colors ranging from blue to red, the sequence O, B, A, F, G, K, and M. (A popular mnemonic for memorizing this sequence of stellar classes is "Oh Be A Fine Girl/Guy, Kiss Me".) The luminosity class ranged from I to V, in order of decreasing luminosity. Stars of luminosity class V belonged to the main sequence. In April 2018, astronomers reported the detection of the most distant "ordinary" (i.e., main sequence) star, named Icarus (formally, MACS J1149 Lensed Star 1), at 9 billion light-years away from Earth. 
Formation and evolution

When a protostar is formed from the collapse of a giant molecular cloud of gas and dust in the local interstellar medium, the initial composition is homogeneous throughout, consisting of about 70% hydrogen, 28% helium, and trace amounts of other elements, by mass. The initial mass of the star depends on the local conditions within the cloud. (The mass distribution of newly formed stars is described empirically by the initial mass function.) During the initial collapse, this pre-main-sequence star generates energy through gravitational contraction. Once sufficiently dense, stars begin converting hydrogen into helium and giving off energy through an exothermic nuclear fusion process.

When nuclear fusion of hydrogen becomes the dominant energy production process and the excess energy gained from gravitational contraction has been lost, the star lies along a curve on the Hertzsprung–Russell diagram (or HR diagram) called the standard main sequence. Astronomers will sometimes refer to this stage as "zero-age main sequence", or ZAMS. The ZAMS curve can be calculated using computer models of stellar properties at the point when stars begin hydrogen fusion. From this point, the brightness and surface temperature of stars typically increase with age. A star remains near its initial position on the main sequence until a significant amount of hydrogen in the core has been consumed, then begins to evolve into a more luminous star. (On the HR diagram, the evolving star moves up and to the right of the main sequence.) Thus the main sequence represents the primary hydrogen-burning stage of a star's lifetime.

Classification

Main-sequence stars are divided into the following types:
O-type main-sequence star
B-type main-sequence star
A-type main-sequence star
F-type main-sequence star
G-type main-sequence star
K-type main-sequence star
M-type main-sequence star
M-type (and, to a lesser extent, K-type) main-sequence stars are usually referred to as red dwarfs.

Properties

The majority of stars on a typical HR diagram lie along the main-sequence curve. This line is pronounced because both the spectral type and the luminosity depend only on a star's mass, at least to zeroth-order approximation, as long as it is fusing hydrogen at its core—and that is what almost all stars spend most of their "active" lives doing. The temperature of a star determines its spectral type via its effect on the physical properties of plasma in its photosphere. A star's energy emission as a function of wavelength is influenced by both its temperature and composition. A key indicator of this energy distribution is given by the color index, B − V, which measures the difference between the star's magnitude in blue (B) and green-yellow (V) light as obtained by means of filters. This difference in magnitude provides a measure of a star's temperature.
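The relationship between the B − V color index and temperature can be illustrated with a short calculation. The Python sketch below uses an empirical black-body-based fit of the Ballesteros type to convert a color index into an approximate effective temperature; the numerical coefficients are quoted as an illustrative approximation rather than values from this article, and the result is only a rough estimate.

def temperature_from_color_index(b_minus_v):
    """Rough effective temperature in kelvin from the B - V color index.

    The coefficients below are an illustrative empirical approximation
    (a Ballesteros-type fit), not values taken from this article.
    """
    return 4600.0 * (1.0 / (0.92 * b_minus_v + 1.7)
                     + 1.0 / (0.92 * b_minus_v + 0.62))

if __name__ == "__main__":
    # A Sun-like color index of about 0.65 should give roughly 5800 K.
    for b_v in (-0.2, 0.0, 0.65, 1.5):
        print(f"B - V = {b_v:5.2f}  ->  about {temperature_from_color_index(b_v):5.0f} K")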
Dwarf terminology

Main-sequence stars are called dwarf stars, but this terminology is partly historical and can be somewhat confusing. For the cooler stars, dwarfs such as red dwarfs, orange dwarfs, and yellow dwarfs are indeed much smaller and dimmer than other stars of those colors. However, for hotter blue and white stars, the difference in size and brightness between so-called "dwarf" stars that are on the main sequence and so-called "giant" stars that are not, becomes smaller. For the hottest stars the difference is not directly observable and for these stars, the terms "dwarf" and "giant" refer to differences in spectral lines which indicate whether a star is on or off the main sequence. Nevertheless, very hot main-sequence stars are still sometimes called dwarfs, even though they have roughly the same size and brightness as the "giant" stars of that temperature. The common use of "dwarf" to mean the main sequence is confusing in another way because there are dwarf stars that are not main-sequence stars. For example, a white dwarf is the dead core left over after a star has shed its outer layers, and is much smaller than a main-sequence star, roughly the size of Earth. These represent the final evolutionary stage of many main-sequence stars.

Parameters

By treating the star as an idealized energy radiator known as a black body, the luminosity L and radius R can be related to the effective temperature Teff by the Stefan–Boltzmann law, L = 4πR^2 σ Teff^4, where σ is the Stefan–Boltzmann constant. As the position of a star on the HR diagram shows its approximate luminosity, this relation can be used to estimate its radius. The mass, radius, and luminosity of a star are closely interlinked, and their respective values can be approximated by three relations. First is the Stefan–Boltzmann law, which relates the luminosity L, the radius R and the surface temperature Teff. Second is the mass–luminosity relation, which relates the luminosity L and the mass M. Finally, the relationship between M and R is close to linear. The ratio of M to R increases by a factor of only three over 2.5 orders of magnitude of M. This relation is roughly proportional to the star's inner temperature TI, and its extremely slow increase reflects the fact that the rate of energy generation in the core strongly depends on this temperature, whereas it has to fit the mass–luminosity relation. Thus, a too-high or too-low temperature will result in stellar instability. A better approximation is to take ε = L/M, the energy generation rate per unit mass; ε is proportional to TI^15, where TI is the core temperature. This is suitable for stars at least as massive as the Sun, exhibiting the CNO cycle, and gives a better fit for the relationship between radius and mass.
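The Stefan–Boltzmann relation above can be inverted to estimate a star's radius once its luminosity and effective temperature are known. The following Python sketch does exactly that; the solar constants used for the sanity check are standard rounded values assumed for this example, not figures from this article.

import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # nominal solar luminosity, W (rounded standard value)
R_SUN = 6.957e8          # nominal solar radius, m (rounded standard value)

def radius_from_luminosity(luminosity_w, t_eff_k):
    """Radius in metres of a black body with the given luminosity and temperature.

    Inverts L = 4 * pi * R**2 * sigma * T_eff**4.
    """
    return math.sqrt(luminosity_w / (4.0 * math.pi * SIGMA * t_eff_k ** 4))

if __name__ == "__main__":
    # Sanity check: the Sun (T_eff about 5772 K) should come out near one solar radius.
    r = radius_from_luminosity(L_SUN, 5772.0)
    print("estimated solar radius:", round(r / R_SUN, 3), "solar radii")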
Sample parameters

The table below shows typical values for stars along the main sequence. The values of luminosity (L), radius (R), and mass (M) are relative to the Sun—a dwarf star with a spectral classification of G2 V. The actual values for a star may vary by as much as 20–30% from the values listed below.

Energy generation

All main-sequence stars have a core region where energy is generated by nuclear fusion. The temperature and density of this core are at the levels necessary to sustain the energy production that will support the remainder of the star. A reduction of energy production would cause the overlaying mass to compress the core, resulting in an increase in the fusion rate because of higher temperature and pressure. Likewise, an increase in energy production would cause the star to expand, lowering the pressure at the core. Thus the star forms a self-regulating system in hydrostatic equilibrium that is stable over the course of its main-sequence lifetime. Main-sequence stars employ two types of hydrogen fusion processes, and the rate of energy generation from each type depends on the temperature in the core region. Astronomers divide the main sequence into upper and lower parts, based on which of the two is the dominant fusion process. In the lower main sequence, energy is primarily generated as the result of the proton–proton chain, which directly fuses hydrogen together in a series of stages to produce helium. Stars in the upper main sequence have sufficiently high core temperatures to efficiently use the CNO cycle. This process uses atoms of carbon, nitrogen, and oxygen as intermediaries in the process of fusing hydrogen into helium. At a stellar core temperature of 18 million kelvin, the PP process and CNO cycle are equally efficient, and each type generates half of the star's net luminosity. As this is the core temperature of a star of about 1.5 solar masses, the upper main sequence consists of stars above this mass. Thus, roughly speaking, stars of spectral class F or cooler belong to the lower main sequence, while A-type stars or hotter are upper main-sequence stars. The transition in primary energy production from one form to the other spans a range difference of less than a single solar mass. In the Sun, a one solar-mass star, only 1.5% of the energy is generated by the CNO cycle. By contrast, somewhat more massive stars generate almost their entire energy output through the CNO cycle. Observations also place an upper limit on the mass of a main-sequence star. The theoretical explanation for this limit is that stars above this mass can not radiate energy fast enough to remain stable, so any additional mass will be ejected in a series of pulsations until the star reaches a stable limit. The lower limit for sustained proton–proton nuclear fusion is about 0.08 solar masses, or roughly 80 times the mass of Jupiter. Below this threshold are sub-stellar objects that can not sustain hydrogen fusion, known as brown dwarfs.
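The division between the two fusion channels described above can be made concrete with a rough numerical comparison. In the Python sketch below, the proton–proton chain and the CNO cycle are given power-law temperature dependences of roughly T^4 and T^17, which are common textbook approximations assumed only for illustration; both rates are normalized to be equal at 18 million kelvin, as stated in the text.

T_EQUAL = 18e6  # kelvin: core temperature where the two channels contribute equally

def pp_rate(t_core):
    # proton-proton chain, approximately T^4 near these temperatures (assumption)
    return (t_core / T_EQUAL) ** 4

def cno_rate(t_core):
    # CNO cycle, approximately T^17 near these temperatures (assumption)
    return (t_core / T_EQUAL) ** 17

if __name__ == "__main__":
    for t in (14e6, 16e6, 18e6, 21e6, 25e6):
        pp, cno = pp_rate(t), cno_rate(t)
        share = cno / (pp + cno)
        print(f"core temperature {t / 1e6:4.0f} MK: CNO share of energy generation about {share:5.1%}")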
Structure

Because there is a temperature difference between the core and the surface, or photosphere, energy is transported outward. The two modes for transporting this energy are radiation and convection. A radiation zone, where energy is transported by radiation, is stable against convection and there is very little mixing of the plasma. By contrast, in a convection zone the energy is transported by bulk movement of plasma, with hotter material rising and cooler material descending. Convection is a more efficient mode for carrying energy than radiation, but it will only occur under conditions that create a steep temperature gradient. In massive stars the rate of energy generation by the CNO cycle is very sensitive to temperature, so the fusion is highly concentrated at the core. Consequently, there is a high temperature gradient in the core region, which results in a convection zone for more efficient energy transport. This mixing of material around the core removes the helium ash from the hydrogen-burning region, allowing more of the hydrogen in the star to be consumed during the main-sequence lifetime. The outer regions of a massive star transport energy by radiation, with little or no convection. Intermediate-mass stars such as Sirius may transport energy primarily by radiation, with a small core convection region. Medium-sized, low-mass stars like the Sun have a core region that is stable against convection, with a convection zone near the surface that mixes the outer layers. This results in a steady buildup of a helium-rich core, surrounded by a hydrogen-rich outer region. By contrast, cool, very low-mass stars are convective throughout. Thus the helium produced at the core is distributed across the star, producing a relatively uniform atmosphere and a proportionately longer main-sequence lifespan.

Luminosity-color variation

As non-fusing helium accumulates in the core of a main-sequence star, the reduction in the abundance of hydrogen per unit mass results in a gradual lowering of the fusion rate within that mass. Since it is fusion-supplied power that maintains the pressure of the core and supports the higher layers of the star, the core gradually gets compressed. This brings hydrogen-rich material into a shell around the helium-rich core at a depth where the pressure is sufficient for fusion to occur. The high power output from this shell pushes the higher layers of the star further out. This causes a gradual increase in the radius and consequently the luminosity of the star over time. For example, the luminosity of the early Sun was only about 70% of its current value. As a star ages it thus changes its position on the HR diagram. This evolution is reflected in a broadening of the main sequence band, which contains stars at various evolutionary stages. Other factors that broaden the main sequence band on the HR diagram include uncertainty in the distance to stars and the presence of unresolved binary stars that can alter the observed stellar parameters. However, even perfect observation would show a fuzzy main sequence, because mass is not the only parameter that affects a star's color and luminosity. Variations in chemical composition caused by the initial abundances, the star's evolutionary status, interaction with a close companion, rapid rotation, or a magnetic field can all slightly change a main-sequence star's HR diagram position, to name just a few factors. As an example, there are metal-poor stars (with a very low abundance of elements with higher atomic numbers than helium) that lie just below the main sequence and are known as subdwarfs. These stars are fusing hydrogen in their cores and so they mark the lower edge of the main sequence fuzziness caused by variance in chemical composition. A nearly vertical region of the HR diagram, known as the instability strip, is occupied by pulsating variable stars known as Cepheid variables. These stars vary in magnitude at regular intervals, giving them a pulsating appearance. The strip intersects the upper part of the main sequence in the region of class A and F stars, which are between one and two solar masses. Pulsating stars in this part of the instability strip are called Delta Scuti variables. Main-sequence stars in this region experience only small changes in magnitude, so this variation is difficult to detect. Other classes of unstable main-sequence stars, like Beta Cephei variables, are unrelated to this instability strip.
Lifetime

The total amount of energy that a star can generate through nuclear fusion of hydrogen is limited by the amount of hydrogen fuel that can be consumed at the core. For a star in equilibrium, the thermal energy generated at the core must be at least equal to the energy radiated at the surface. Since the luminosity gives the amount of energy radiated per unit time, the total life span can be estimated, to first approximation, as the total energy produced divided by the star's luminosity. For a star of roughly half a solar mass or more, when the hydrogen supply in its core is exhausted and it expands to become a red giant, it can start to fuse helium atoms to form carbon. The energy output of the helium fusion process per unit mass is only about a tenth the energy output of the hydrogen process, and the luminosity of the star increases. This results in a much shorter length of time in this stage compared to the main-sequence lifetime. (For example, the Sun is predicted to spend far less time burning helium than the roughly 12 billion years it spends burning hydrogen.) Thus, about 90% of the observed stars above this mass will be on the main sequence. On average, main-sequence stars are known to follow an empirical mass–luminosity relationship. The luminosity (L) of the star is roughly proportional to the total mass (M) as the power law L ∝ M^3.5. This relationship applies to main-sequence stars over a broad range of masses. The amount of fuel available for nuclear fusion is proportional to the mass of the star. Thus, the lifetime of a star on the main sequence can be estimated by comparing it to solar evolutionary models. The Sun has been a main-sequence star for about 4.5 billion years and it will become a red giant in 6.5 billion years, for a total main-sequence lifetime of roughly 10^10 years. Hence the main-sequence lifetime can be estimated as τMS ≈ 10^10 years × (M/M☉) × (L☉/L), where M and L are the mass and luminosity of the star, respectively, M☉ is a solar mass, L☉ is the solar luminosity and τMS is the star's estimated main-sequence lifetime. Although more massive stars have more fuel to burn and might intuitively be expected to last longer, they also radiate a proportionately greater amount with increased mass. This is required by the stellar equation of state; for a massive star to maintain equilibrium, the outward pressure of radiated energy generated in the core not only must but will rise to match the titanic inward gravitational pressure of its envelope. Thus, the most massive stars may remain on the main sequence for only a few million years, while stars with less than a tenth of a solar mass may last for over a trillion years. The exact mass–luminosity relationship depends on how efficiently energy can be transported from the core to the surface. A higher opacity has an insulating effect that retains more energy at the core, so the star does not need to produce as much energy to remain in hydrostatic equilibrium. By contrast, a lower opacity means energy escapes more rapidly and the star must burn more fuel to remain in equilibrium. A sufficiently high opacity can result in energy transport via convection, which changes the conditions needed to remain in equilibrium. In high-mass main-sequence stars, the opacity is dominated by electron scattering, which is nearly constant with increasing temperature. Thus the luminosity only increases as the cube of the star's mass. For less massive stars, the opacity becomes dependent on temperature, resulting in the luminosity varying approximately as the fourth power of the star's mass. For very low-mass stars, molecules in the atmosphere also contribute to the opacity, and the luminosity varies approximately as the mass to the power of 2.3, producing a flattening of the slope on a graph of mass versus luminosity. Even these refinements are only an approximation, however, and the mass–luminosity relation can vary depending on a star's composition.
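The two rounded figures quoted in this section, the L ∝ M^3.5 power law and the 10^10-year solar lifetime, combine into a simple scaling for main-sequence lifetimes. The Python sketch below applies that scaling; it is an order-of-magnitude illustration under those assumptions, not a stellar-evolution calculation.

SOLAR_LIFETIME_YEARS = 1.0e10   # rounded solar main-sequence lifetime used in the text
ML_EXPONENT = 3.5               # L proportional to M**3.5 (approximate empirical law)

def main_sequence_lifetime_years(mass_in_solar_masses):
    """Estimated main-sequence lifetime in years, scaled from the Sun.

    lifetime ~ fuel / luminosity ~ M / L, with L ~ M**3.5, so
    lifetime ~ 1e10 yr * M**(1 - 3.5) in solar units.
    """
    return SOLAR_LIFETIME_YEARS * mass_in_solar_masses ** (1.0 - ML_EXPONENT)

if __name__ == "__main__":
    for m in (0.5, 1.0, 2.0, 10.0, 40.0):
        print(f"M = {m:5.1f} solar masses  ->  about {main_sequence_lifetime_years(m):.1e} years")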
Evolutionary tracks

When a main-sequence star has consumed the hydrogen at its core, the loss of energy generation causes its gravitational collapse to resume and the star evolves off the main sequence. The path which the star follows across the HR diagram is called an evolutionary track. The least massive stars are predicted to become white dwarfs directly when energy generation by nuclear fusion of hydrogen at their core comes to a halt, but stars in this mass range have main-sequence lifetimes longer than the current age of the universe, so no stars are old enough for this to have occurred. In more massive stars, the hydrogen surrounding the helium core reaches sufficient temperature and pressure to undergo fusion, forming a hydrogen-burning shell and causing the outer layers of the star to expand and cool. The stage as these stars move away from the main sequence is known as the subgiant branch; it is relatively brief and appears as a gap in the evolutionary track since few stars are observed at that point. When the helium core of low-mass stars becomes degenerate, or the outer layers of intermediate-mass stars cool sufficiently to become opaque, their hydrogen shells increase in temperature and the stars start to become more luminous. This is known as the red-giant branch; it is a relatively long-lived stage and it appears prominently in H–R diagrams. These stars will eventually end their lives as white dwarfs. The most massive stars do not become red giants; instead, their cores quickly become hot enough to fuse helium and eventually heavier elements, and they are known as supergiants. They follow approximately horizontal evolutionary tracks from the main sequence across the top of the H–R diagram. Supergiants are relatively rare and do not show prominently on most H–R diagrams. Their cores will eventually collapse, usually leading to a supernova and leaving behind either a neutron star or black hole. When a cluster of stars is formed at about the same time, the main-sequence lifespan of these stars will depend on their individual masses. The most massive stars will leave the main sequence first, followed in sequence by stars of ever lower masses. The position where stars in the cluster are leaving the main sequence is known as the turnoff point. By knowing the main-sequence lifespan of stars at this point, it becomes possible to estimate the age of the cluster.
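The cluster-dating argument in the final paragraph can be phrased as a one-line estimate: the age of the cluster is approximately the main-sequence lifetime of stars now sitting at the turnoff point. The Python sketch below reuses the rough lifetime scaling from the Lifetime section; the example turnoff mass is a hypothetical illustrative value.

def cluster_age_years(turnoff_mass_solar, solar_lifetime_years=1.0e10, ml_exponent=3.5):
    """Rough cluster age from the mass of stars at the main-sequence turnoff.

    Stars at the turnoff are just exhausting their core hydrogen, so the
    cluster age is taken to equal their estimated main-sequence lifetime.
    """
    return solar_lifetime_years * turnoff_mass_solar ** (1.0 - ml_exponent)

if __name__ == "__main__":
    # Hypothetical cluster whose turnoff sits near 1.3 solar masses:
    print(f"estimated cluster age: about {cluster_age_years(1.3):.1e} years")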
Physical sciences
Stellar astronomy
null
19614
https://en.wikipedia.org/wiki/Molecular%20orbital
Molecular orbital
In chemistry, a molecular orbital () is a mathematical function describing the location and wave-like behavior of an electron in a molecule. This function can be used to calculate chemical and physical properties such as the probability of finding an electron in any specific region. The terms atomic orbital and molecular orbital were introduced by Robert S. Mulliken in 1932 to mean one-electron orbital wave functions. At an elementary level, they are used to describe the region of space in which a function has a significant amplitude. In an isolated atom, the orbital electrons' location is determined by functions called atomic orbitals. When multiple atoms combine chemically into a molecule by forming a valence chemical bond, the electrons' locations are determined by the molecule as a whole, so the atomic orbitals combine to form molecular orbitals. The electrons from the constituent atoms occupy the molecular orbitals. Mathematically, molecular orbitals are an approximate solution to the Schrödinger equation for the electrons in the field of the molecule's atomic nuclei. They are usually constructed by combining atomic orbitals or hybrid orbitals from each atom of the molecule, or other molecular orbitals from groups of atoms. They can be quantitatively calculated using the Hartree–Fock or self-consistent field (SCF) methods. Molecular orbitals are of three types: bonding orbitals which have an energy lower than the energy of the atomic orbitals which formed them, and thus promote the chemical bonds which hold the molecule together; antibonding orbitals which have an energy higher than the energy of their constituent atomic orbitals, and so oppose the bonding of the molecule, and non-bonding orbitals which have the same energy as their constituent atomic orbitals and thus have no effect on the bonding of the molecule. Overview A molecular orbital (MO) can be used to represent the regions in a molecule where an electron occupying that orbital is likely to be found. Molecular orbitals are approximate solutions to the Schrödinger equation for the electrons in the electric field of the molecule's atomic nuclei. However calculating the orbitals directly from this equation is far too intractable a problem. Instead they are obtained from the combination of atomic orbitals, which predict the location of an electron in an atom. A molecular orbital can specify the electron configuration of a molecule: the spatial distribution and energy of one (or one pair of) electron(s). Most commonly a MO is represented as a linear combination of atomic orbitals (the LCAO-MO method), especially in qualitative or very approximate usage. They are invaluable in providing a simple model of bonding in molecules, understood through molecular orbital theory. Most present-day methods in computational chemistry begin by calculating the MOs of the system. A molecular orbital describes the behavior of one electron in the electric field generated by the nuclei and some average distribution of the other electrons. In the case of two electrons occupying the same orbital, the Pauli principle demands that they have opposite spin. Necessarily this is an approximation, and highly accurate descriptions of the molecular electronic wave function do not have orbitals (see configuration interaction). Molecular orbitals are, in general, delocalized throughout the entire molecule. 
Moreover, if the molecule has symmetry elements, its nondegenerate molecular orbitals are either symmetric or antisymmetric with respect to any of these symmetries. In other words, the application of a symmetry operation S (e.g., a reflection, rotation, or inversion) to molecular orbital ψ results in the molecular orbital being unchanged or reversing its mathematical sign: Sψ = ±ψ. In planar molecules, for example, molecular orbitals are either symmetric (sigma) or antisymmetric (pi) with respect to reflection in the molecular plane. If molecules with degenerate orbital energies are also considered, a more general statement that molecular orbitals form bases for the irreducible representations of the molecule's symmetry group holds. The symmetry properties of molecular orbitals means that delocalization is an inherent feature of molecular orbital theory and makes it fundamentally different from (and complementary to) valence bond theory, in which bonds are viewed as localized electron pairs, with allowance for resonance to account for delocalization. In contrast to these symmetry-adapted canonical molecular orbitals, localized molecular orbitals can be formed by applying certain mathematical transformations to the canonical orbitals. The advantage of this approach is that the orbitals will correspond more closely to the "bonds" of a molecule as depicted by a Lewis structure. As a disadvantage, the energy levels of these localized orbitals no longer have physical meaning. (The discussion in the rest of this article will focus on canonical molecular orbitals. For further discussions on localized molecular orbitals, see: natural bond orbital and sigma-pi and equivalent-orbital models.) Formation of molecular orbitals Molecular orbitals arise from allowed interactions between atomic orbitals, which are allowed if the symmetries (determined from group theory) of the atomic orbitals are compatible with each other. Efficiency of atomic orbital interactions is determined from the overlap (a measure of how well two orbitals constructively interact with one another) between two atomic orbitals, which is significant if the atomic orbitals are close in energy. Finally, the number of molecular orbitals formed must be equal to the number of atomic orbitals in the atoms being combined to form the molecule. Qualitative discussion For an imprecise, but qualitatively useful, discussion of the molecular structure, the molecular orbitals can be obtained from the "Linear combination of atomic orbitals molecular orbital method" ansatz. Here, the molecular orbitals are expressed as linear combinations of atomic orbitals. Linear combinations of atomic orbitals (LCAO) Molecular orbitals were first introduced by Friedrich Hund and Robert S. Mulliken in 1927 and 1928. The linear combination of atomic orbitals or "LCAO" approximation for molecular orbitals was introduced in 1929 by Sir John Lennard-Jones. His ground-breaking paper showed how to derive the electronic structure of the fluorine and oxygen molecules from quantum principles. This qualitative approach to molecular orbital theory is part of the start of modern quantum chemistry. Linear combinations of atomic orbitals (LCAO) can be used to estimate the molecular orbitals that are formed upon bonding between the molecule's constituent atoms. Similar to an atomic orbital, a Schrödinger equation, which describes the behavior of an electron, can be constructed for a molecular orbital as well. 
Linear combinations of atomic orbitals, or the sums and differences of the atomic wavefunctions, provide approximate solutions to the Hartree–Fock equations which correspond to the independent-particle approximation of the molecular Schrödinger equation. For simple diatomic molecules, the wavefunctions obtained are represented mathematically by the equations Ψ = ca·ψa + cb·ψb and Ψ* = ca·ψa − cb·ψb, where Ψ and Ψ* are the molecular wavefunctions for the bonding and antibonding molecular orbitals, respectively, ψa and ψb are the atomic wavefunctions from atoms a and b, respectively, and ca and cb are adjustable coefficients. These coefficients can be positive or negative, depending on the energies and symmetries of the individual atomic orbitals. As the two atoms become closer together, their atomic orbitals overlap to produce areas of high electron density, and, as a consequence, molecular orbitals are formed between the two atoms. The atoms are held together by the electrostatic attraction between the positively charged nuclei and the negatively charged electrons occupying bonding molecular orbitals.

Bonding, antibonding, and nonbonding MOs

When atomic orbitals interact, the resulting molecular orbital can be of three types: bonding, antibonding, or nonbonding.
Bonding MOs: Bonding interactions between atomic orbitals are constructive (in-phase) interactions. Bonding MOs are lower in energy than the atomic orbitals that combine to produce them.
Antibonding MOs: Antibonding interactions between atomic orbitals are destructive (out-of-phase) interactions, with a nodal plane, where the wavefunction of the antibonding orbital is zero, between the two interacting atoms. Antibonding MOs are higher in energy than the atomic orbitals that combine to produce them.
Nonbonding MOs: Nonbonding MOs are the result of no interaction between atomic orbitals because of lack of compatible symmetries. Nonbonding MOs will have the same energy as the atomic orbitals of one of the atoms in the molecule.

Sigma and pi labels for MOs

The type of interaction between atomic orbitals can be further categorized by the molecular-orbital symmetry labels σ (sigma), π (pi), δ (delta), φ (phi), γ (gamma), etc. These are the Greek letters corresponding to the atomic orbitals s, p, d, f and g respectively. The number of nodal planes containing the internuclear axis between the atoms concerned is zero for σ MOs, one for π, two for δ, three for φ and four for γ.

σ symmetry

An MO with σ symmetry results from the interaction of either two atomic s-orbitals or two atomic pz-orbitals. An MO will have σ-symmetry if the orbital is symmetric with respect to the axis joining the two nuclear centers, the internuclear axis. This means that rotation of the MO about the internuclear axis does not result in a phase change. A σ* orbital, the sigma antibonding orbital, also maintains the same phase when rotated about the internuclear axis. The σ* orbital has a nodal plane that is between the nuclei and perpendicular to the internuclear axis.

π symmetry

An MO with π symmetry results from the interaction of either two atomic px orbitals or py orbitals. An MO will have π symmetry if the orbital is asymmetric with respect to rotation about the internuclear axis. This means that rotation of the MO about the internuclear axis will result in a phase change. There is one nodal plane containing the internuclear axis, if real orbitals are considered. A π* orbital, the pi antibonding orbital, will also produce a phase change when rotated about the internuclear axis.
The π* orbital also has a second nodal plane between the nuclei.

δ symmetry

An MO with δ symmetry results from the interaction of two atomic dxy or dx2−y2 orbitals. Because these molecular orbitals involve low-energy d atomic orbitals, they are seen in transition-metal complexes. A δ bonding orbital has two nodal planes containing the internuclear axis, and a δ* antibonding orbital also has a third nodal plane between the nuclei.

φ symmetry

Theoretical chemists have conjectured that higher-order bonds, such as phi bonds corresponding to overlap of f atomic orbitals, are possible. There is no known example of a molecule purported to contain a phi bond.

Gerade and ungerade symmetry

For molecules that possess a center of inversion (centrosymmetric molecules) there are additional labels of symmetry that can be applied to molecular orbitals. Centrosymmetric molecules include:
Homonuclear diatomics, X2
Octahedral, EX6
Square planar, EX4.
Non-centrosymmetric molecules include:
Heteronuclear diatomics, XY
Tetrahedral, EX4.
If inversion through the center of symmetry in a molecule results in the same phases for the molecular orbital, then the MO is said to have gerade (g) symmetry, from the German word for even. If inversion through the center of symmetry in a molecule results in a phase change for the molecular orbital, then the MO is said to have ungerade (u) symmetry, from the German word for odd. For a bonding MO with σ-symmetry, the orbital is σg (s' + s'' is symmetric), while for an antibonding MO with σ-symmetry the orbital is σu, because inversion of s' − s'' is antisymmetric. For a bonding MO with π-symmetry the orbital is πu, because inversion through the center of symmetry would produce a sign change (the two p atomic orbitals are in phase with each other but the two lobes have opposite signs), while an antibonding MO with π-symmetry is πg, because inversion through the center of symmetry would not produce a sign change (the two p orbitals are antisymmetric by phase).

MO diagrams

The qualitative approach of MO analysis uses a molecular orbital diagram to visualize bonding interactions in a molecule. In this type of diagram, the molecular orbitals are represented by horizontal lines; the higher a line, the higher the energy of the orbital, and degenerate orbitals are placed on the same level with a space between them. Then, the electrons to be placed in the molecular orbitals are slotted in one by one, keeping in mind the Pauli exclusion principle and Hund's rule of maximum multiplicity (only 2 electrons, having opposite spins, per orbital; place as many unpaired electrons on one energy level as possible before starting to pair them). For more complicated molecules, the wave mechanics approach loses utility in a qualitative understanding of bonding (although it is still necessary for a quantitative approach). Some properties:
A basis set of orbitals includes those atomic orbitals that are available for molecular orbital interactions, which may be bonding or antibonding
The number of molecular orbitals is equal to the number of atomic orbitals included in the linear expansion or the basis set
If the molecule has some symmetry, the degenerate atomic orbitals (with the same atomic energy) are grouped in linear combinations (called symmetry-adapted atomic orbitals (SO)), which belong to the representation of the symmetry group, so the wave functions that describe the group are known as symmetry-adapted linear combinations (SALC).
The number of molecular orbitals belonging to one group representation is equal to the number of symmetry-adapted atomic orbitals belonging to this representation Within a particular representation, the symmetry-adapted atomic orbitals mix more if their atomic energy levels are closer. The general procedure for constructing a molecular orbital diagram for a reasonably simple molecule can be summarized as follows: 1. Assign a point group to the molecule. 2. Look up the shapes of the SALCs. 3. Arrange the SALCs of each molecular fragment in order of energy, noting first whether they stem from s, p, or d orbitals (and put them in the order s < p < d), and then their number of internuclear nodes. 4. Combine SALCs of the same symmetry type from the two fragments, and from N SALCs form N molecular orbitals. 5. Estimate the relative energies of the molecular orbitals from considerations of overlap and relative energies of the parent orbitals, and draw the levels on a molecular orbital energy level diagram (showing the origin of the orbitals). 6. Confirm, correct, and revise this qualitative order by carrying out a molecular orbital calculation by using commercial software. Bonding in molecular orbitals Orbital degeneracy Molecular orbitals are said to be degenerate if they have the same energy. For example, in the homonuclear diatomic molecules of the first ten elements, the molecular orbitals derived from the px and the py atomic orbitals result in two degenerate bonding orbitals (of low energy) and two degenerate antibonding orbitals (of high energy). Ionic bonds In an ionic bond, oppositely charged ions are bonded by electrostatic attraction. It is possible to describe ionic bonds with molecular orbital theory by treating them as extremely polar bonds. Their bonding orbitals are very close in energy to the atomic orbitals of the anion. They are also very similar in character to the anion's atomic orbitals, which means the electrons are completely shifted to the anion. In computer diagrams, the orbitals are centered on the anion's core. Bond order The bond order, or number of bonds, of a molecule can be determined by combining the number of electrons in bonding and antibonding molecular orbitals. A pair of electrons in a bonding orbital creates a bond, whereas a pair of electrons in an antibonding orbital negates a bond. For example, N2, with eight electrons in bonding orbitals and two electrons in antibonding orbitals, has a bond order of three, which constitutes a triple bond. Bond strength is proportional to bond order—a greater amount of bonding produces a more stable bond—and bond length is inversely proportional to it—a stronger bond is shorter. There are rare exceptions to the requirement of molecule having a positive bond order. Although Be2 has a bond order of 0 according to MO analysis, there is experimental evidence of a highly unstable Be2 molecule having a bond length of 245 pm and bond energy of 10 kJ/mol. HOMO and LUMO The highest occupied molecular orbital and lowest unoccupied molecular orbital are often referred to as the HOMO and LUMO, respectively. The difference of the energies of the HOMO and LUMO is called the HOMO-LUMO gap. This notion is often the matter of confusion in literature and should be considered with caution. Its value is usually located between the fundamental gap (difference between ionization potential and electron affinity) and the optical gap. 
In addition, the HOMO-LUMO gap can be related to a bulk material band gap or transport gap, which is usually much smaller than the fundamental gap.

Examples

Homonuclear diatomics

Homonuclear diatomic MOs contain equal contributions from each atomic orbital in the basis set. This is shown in the homonuclear diatomic MO diagrams for H2, He2, and Li2, all of which contain symmetric orbitals.

H2

As a simple MO example, consider the electrons in a hydrogen molecule, H2 (see molecular orbital diagram), with the two atoms labelled H' and H". The lowest-energy atomic orbitals, 1s' and 1s", do not transform according to the symmetries of the molecule. However, the following symmetry-adapted combinations do: the symmetric combination 1s' + 1s" and the antisymmetric combination 1s' − 1s". The symmetric combination (called a bonding orbital) is lower in energy than the basis orbitals, and the antisymmetric combination (called an antibonding orbital) is higher. Because the H2 molecule has two electrons, they can both go in the bonding orbital, making the system lower in energy (hence more stable) than two free hydrogen atoms. This is called a covalent bond. The bond order is equal to the number of bonding electrons minus the number of antibonding electrons, divided by 2. In this example, there are 2 electrons in the bonding orbital and none in the antibonding orbital; the bond order is 1, and there is a single bond between the two hydrogen atoms.

He2

On the other hand, consider the hypothetical molecule of He2 with the atoms labeled He' and He". As with H2, the lowest-energy atomic orbitals are the 1s' and 1s", and do not transform according to the symmetries of the molecule, while the symmetry-adapted combinations do. The symmetric combination—the bonding orbital—is lower in energy than the basis orbitals, and the antisymmetric combination—the antibonding orbital—is higher. Unlike H2, with two valence electrons, He2 has four in its neutral ground state. Two electrons fill the lower-energy bonding orbital, σg(1s), while the remaining two fill the higher-energy antibonding orbital, σu*(1s). Thus, the resulting electron density around the molecule does not support the formation of a bond between the two atoms; without a stable bond holding the atoms together, the molecule would not be expected to exist. Another way of looking at it is that there are two bonding electrons and two antibonding electrons; therefore, the bond order is 0 and no bond exists (the molecule has one bound state supported by the Van der Waals potential).

Li2

Dilithium Li2 is formed from the overlap of the 1s and 2s atomic orbitals (the basis set) of two Li atoms. Each Li atom contributes three electrons for bonding interactions, and the six electrons fill the three MOs of lowest energy, σg(1s), σu*(1s), and σg(2s). Using the equation for bond order, it is found that dilithium has a bond order of one, a single bond.

Noble gases

Considering a hypothetical molecule of He2, since the basis set of atomic orbitals is the same as in the case of H2, we find that both the bonding and antibonding orbitals are filled, so there is no energy advantage to the pair. HeH would have a slight energy advantage, but not as much as H2 + 2 He, so the molecule is very unstable and exists only briefly before decomposing into hydrogen and helium. In general, we find that atoms such as He that have full energy shells rarely bond with other atoms. Except for short-lived Van der Waals complexes, there are very few noble gas compounds known.
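The electron bookkeeping used in the H2, He2, and Li2 examples above can be written as a short helper that fills electrons into an assumed energy-ordered list of molecular orbital levels and then applies bond order = (bonding electrons − antibonding electrons) / 2. The Python sketch below uses simplified level orderings for these small molecules, chosen only for illustration.

def bond_order(levels, n_electrons):
    """Aufbau-style filling: levels is a list of (name, is_bonding, capacity)."""
    bonding = antibonding = 0
    remaining = n_electrons
    for name, is_bonding, capacity in levels:
        placed = min(capacity, remaining)
        remaining -= placed
        if is_bonding:
            bonding += placed
        else:
            antibonding += placed
    return (bonding - antibonding) / 2

H2_LEVELS = [("sigma_g(1s)", True, 2), ("sigma_u*(1s)", False, 2)]
LI2_LEVELS = [("sigma_g(1s)", True, 2), ("sigma_u*(1s)", False, 2), ("sigma_g(2s)", True, 2)]

if __name__ == "__main__":
    print("H2  bond order:", bond_order(H2_LEVELS, 2))   # 1.0: a single bond
    print("He2 bond order:", bond_order(H2_LEVELS, 4))   # 0.0: same 1s-derived levels, no bond
    print("Li2 bond order:", bond_order(LI2_LEVELS, 6))  # 1.0: a single bond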
Heteronuclear diatomics While MOs for homonuclear diatomic molecules contain equal contributions from each interacting atomic orbital, MOs for heteronuclear diatomics contain different atomic orbital contributions. Orbital interactions to produce bonding or antibonding orbitals in heteronuclear diatomics occur if there is sufficient overlap between atomic orbitals as determined by their symmetries and similarity in orbital energies. HF In hydrogen fluoride HF overlap between the H 1s and F 2s orbitals is allowed by symmetry but the difference in energy between the two atomic orbitals prevents them from interacting to create a molecular orbital. Overlap between the H 1s and F 2pz orbitals is also symmetry allowed, and these two atomic orbitals have a small energy separation. Thus, they interact, leading to creation of σ and σ* MOs and a molecule with a bond order of 1. Since HF is a non-centrosymmetric molecule, the symmetry labels g and u do not apply to its molecular orbitals. Quantitative approach To obtain quantitative values for the molecular energy levels, one needs to have molecular orbitals that are such that the configuration interaction (CI) expansion converges fast towards the full CI limit. The most common method to obtain such functions is the Hartree–Fock method, which expresses the molecular orbitals as eigenfunctions of the Fock operator. One usually solves this problem by expanding the molecular orbitals as linear combinations of Gaussian functions centered on the atomic nuclei (see linear combination of atomic orbitals and basis set (chemistry)). The equation for the coefficients of these linear combinations is a generalized eigenvalue equation known as the Roothaan equations, which are in fact a particular representation of the Hartree–Fock equation. There are a number of programs in which quantum chemical calculations of MOs can be performed, including Spartan. Simple accounts often suggest that experimental molecular orbital energies can be obtained by the methods of ultra-violet photoelectron spectroscopy for valence orbitals and X-ray photoelectron spectroscopy for core orbitals. This, however, is incorrect as these experiments measure the ionization energy, the difference in energy between the molecule and one of the ions resulting from the removal of one electron. Ionization energies are linked approximately to orbital energies by Koopmans' theorem. While the agreement between these two values can be close for some molecules, it can be very poor in other cases.
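The Roothaan equations mentioned above take the form of a generalized eigenvalue problem, FC = SCε, in a non-orthogonal basis. The following Python sketch solves a toy two-function problem of that form; the Fock and overlap matrix elements are made-up illustrative numbers, not the output of an actual Hartree–Fock calculation.

import numpy as np
from scipy.linalg import eigh

# Toy problem of the Roothaan form F C = S C eps with two basis functions.
# All matrix elements are made-up illustrative numbers (in hartree).
F = np.array([[-1.00, -0.60],
              [-0.60, -1.00]])   # model "Fock" matrix
S = np.array([[1.00, 0.45],
              [0.45, 1.00]])     # overlap matrix of the non-orthogonal basis

orbital_energies, C = eigh(F, S)   # generalized symmetric eigenvalue problem
print("orbital energies:", orbital_energies.round(4))
print("MO coefficients (one column per orbital):")
print(C.round(4))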
Physical sciences
Molecular physics
null
19622
https://en.wikipedia.org/wiki/Materials%20science
Materials science
Materials science is an interdisciplinary field of researching and discovering materials. Materials engineering is an engineering field of finding uses for materials in other fields and industries. The intellectual origins of materials science stem from the Age of Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study. Materials scientists emphasize understanding how the history of a material (processing) influences its structure, and thus the material's properties and performance. The understanding of processing -structure-properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy. Materials science is also an important part of forensic engineering and failure analysis investigating materials, products, structures or components, which fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents. History The material of choice of a given era is often a defining point. Phases such as Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary examples. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science were products of the Space Race; the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building space vehicles enabling the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials. Before the 1960s (and in some cases decades after), many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th and early 20th-century emphasis on metals and ceramics. The growth of material science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s, "to expand the national program of basic research and training in the materials sciences." In comparison with mechanical engineering, the nascent material science field focused on addressing materials from the macro-level and on the approach that materials are designed on the basis of knowledge of behavior at the microscopic level. 
Due to the expanded knowledge of the link between atomic and molecular processes as well as the overall properties of materials, the design of materials came to be based on specific desired properties. The materials science field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, biomaterials, and nanomaterials, generally classified into three distinct groups: ceramics, metals, and polymers. A prominent change in materials science during recent decades has been the active use of computer simulations to find new materials, predict properties, and understand phenomena.

Fundamentals

A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications. There are a myriad of materials around us. New and advanced materials that are being developed include nanomaterials, biomaterials, and energy materials, to name a few. The basis of materials science is studying the interplay between the structure of materials, the processing methods to make that material, and the resulting material properties. The complex combination of these produces the performance of a material in a specific application. Many features across many length scales impact material performance, from the constituent chemical elements to its microstructure and macroscopic features from processing. Together with the laws of thermodynamics and kinetics, materials scientists aim to understand and improve materials.

Structure

Structure is one of the most important components of the field of materials science. The very definition of the field holds that it is concerned with the investigation of "the relationships that exist between the structures and properties of materials". Materials science examines the structure of materials from the atomic scale all the way up to the macro scale. Characterization is the way materials scientists examine the structure of a material. This involves methods such as diffraction with X-rays, electrons or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy, chromatography, thermal analysis, electron microscope analysis, etc. Structure is studied at the following levels.

Atomic structure

Atomic structure deals with the atoms of the materials, and how they are arranged to give rise to molecules, crystals, etc. Many of the electrical, magnetic and chemical properties of materials arise from this level of structure. The length scales involved are in angstroms (Å). The chemical bonding and atomic arrangement (crystallography) are fundamental to studying the properties and behavior of any material.

Bonding

To obtain a full understanding of the material structure and how it relates to its properties, the materials scientist must study how the different atoms, ions and molecules are arranged and bonded to each other. This involves the study and use of quantum chemistry or quantum physics. Solid-state physics, solid-state chemistry and physical chemistry are also involved in the study of bonding and structure.

Crystallography

Crystallography is the science that examines the arrangement of atoms in crystalline solids. Crystallography is a useful tool for materials scientists.
One of the fundamental concepts regarding the crystal structure of a material is the unit cell, which is the smallest unit of a crystal lattice (space lattice) that repeats to make up the macroscopic crystal structure. The most common structural materials include parallelepiped and hexagonal lattice types. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure. Further, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Examples of crystal defects include dislocations (of edge and screw type), vacancies, and self-interstitials, among other linear, planar, and three-dimensional types of defects. Mostly, materials do not occur as a single crystal, but in polycrystalline form, as an aggregate of small crystals or grains with different orientations. Because of this, the powder diffraction method, which uses diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination. Most materials have a crystalline structure, but some important materials do not exhibit regular crystal structure. Polymers display varying degrees of crystallinity, and many are completely non-crystalline. Glass, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements. The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic and mechanical descriptions of physical properties.
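As a small worked example of unit-cell geometry, the Python sketch below computes the atomic packing factor of the face-centered cubic and body-centered cubic cells, treating atoms as touching hard spheres. The relations between lattice parameter and atomic radius used here (a = 2·√2·r for FCC and a = 4r/√3 for BCC) are the standard hard-sphere assumptions of this kind of exercise, not values from this article.

import math

def atomic_packing_factor(atoms_per_cell, lattice_parameter_in_radii):
    """Fraction of the unit-cell volume occupied by touching hard spheres of unit radius."""
    sphere_volume = atoms_per_cell * (4.0 / 3.0) * math.pi   # total sphere volume, r = 1
    cell_volume = lattice_parameter_in_radii ** 3
    return sphere_volume / cell_volume

if __name__ == "__main__":
    fcc = atomic_packing_factor(4, 2.0 * math.sqrt(2.0))   # FCC: 4 atoms, a = 2*sqrt(2)*r
    bcc = atomic_packing_factor(2, 4.0 / math.sqrt(3.0))   # BCC: 2 atoms, a = 4r/sqrt(3)
    print(f"FCC packing factor: {fcc:.3f}")   # about 0.74
    print(f"BCC packing factor: {bcc:.3f}")   # about 0.68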
Nanostructure

Materials whose atoms and molecules form constituents at the nanoscale (i.e., they form nanostructures) are called nanomaterials. Nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit. Nanostructure deals with objects and structures that are in the 1–100 nm range. In many materials, atoms or molecules agglomerate to form objects at the nanoscale. This causes many interesting electrical, magnetic, optical, and mechanical properties. In describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale. Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm. Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater. Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) often are used synonymously, although UFP can reach into the micrometre range. The term 'nanostructure' is often used when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure.

Microstructure

Microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25× magnification. It deals with objects from 100 nm to a few cm. The microstructure of a material (which can be broadly classified into metallic, polymeric, ceramic and composite) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behavior, wear resistance, and so on. Most of the traditional materials (such as metals and ceramics) are microstructured. The manufacture of a perfect crystal of a material is physically impossible. For example, any crystalline material will contain defects such as precipitates, grain boundaries (Hall–Petch relationship), vacancies, interstitial atoms or substitutional atoms. The microstructure of materials reveals these larger defects, and advances in simulation have allowed an increased understanding of how defects can be used to enhance material properties.

Macrostructure

Macrostructure is the appearance of a material on the scale of millimeters to meters; it is the structure of the material as seen with the naked eye.

Properties

Materials exhibit myriad properties, including the following.
Mechanical properties, see Strength of materials
Chemical properties, see Chemistry
Electrical properties, see Electricity
Thermal properties, see Thermodynamics
Optical properties, see Optics and Photonics
Magnetic properties, see Magnetism
The properties of a material determine its usability and hence its engineering application.

Processing

Synthesis and processing involve the creation of a material with the desired micro-nanostructure. A material cannot be used in industry if no economically viable production method for it has been developed. Therefore, developing processing methods for materials that are reasonably effective and cost-efficient is vital to the field of materials science. Different materials require different processing or synthesis methods. For example, the processing of metals has historically defined eras such as the Bronze Age and Iron Age and is studied under the branch of materials science named physical metallurgy. Chemical and physical methods are also used to synthesize other materials such as polymers, ceramics, semiconductors, and thin films. As of the early 21st century, new methods are being developed to synthesize nanomaterials such as graphene.

Thermodynamics

Thermodynamics is concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints common to all materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The behavior of these microscopic particles is described by, and the laws of thermodynamics are derived from, statistical mechanics. The study of thermodynamics is fundamental to materials science. It forms the foundation to treat general phenomena in materials science and engineering, including chemical reactions, magnetism, polarizability, and elasticity. It explains fundamental tools such as phase diagrams and concepts such as phase equilibrium.

Kinetics

Chemical kinetics is the study of the rates at which systems that are out of equilibrium change under the influence of various forces.
When applied to materials science, it deals with how a material changes with time (moves from a non-equilibrium to an equilibrium state) due to the application of a certain field. It details the rate at which various processes evolve in materials, including changes in shape, size, composition and structure. Diffusion is important in the study of kinetics, as this is the most common mechanism by which materials undergo change. Kinetics is essential in the processing of materials because, among other things, it details how the microstructure changes with application of heat. Research Materials science is a highly active area of research. Together with materials science departments, physics, chemistry, and many engineering departments are involved in materials research. Materials research covers a broad range of topics; the following non-exhaustive list highlights a few important research areas. Nanomaterials Nanomaterials are, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 1000 nanometers (10⁻⁹ meter), but usually between 1 nm and 100 nm. Nanomaterials research takes a materials science-based approach to nanotechnology, using advances in materials metrology and synthesis, which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, or mechanical properties. The field of nanomaterials is loosely organized, like the traditional field of chemistry, into organic (carbon-based) nanomaterials, such as fullerenes, and inorganic nanomaterials based on other elements, such as silicon. Examples of nanomaterials include fullerenes, carbon nanotubes, nanocrystals, etc. Biomaterials A biomaterial is any matter, surface, or construct that interacts with biological systems. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science. Biomaterials can be derived either from nature or synthesized in a laboratory using a variety of chemical approaches with metallic components, polymers, bioceramics, or composite materials. They are often intended or adapted for medical applications, such as biomedical devices which perform, augment, or replace a natural function. Such functions may be benign, like being used for a heart valve, or may be bioactive with a more interactive functionality such as hydroxylapatite-coated hip implants. Biomaterials are also used every day in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as an organ transplant material. Electronic, optical, and magnetic Semiconductors, metals, and ceramics are used today to form highly complex systems, such as integrated electronic circuits, optoelectronic devices, and magnetic and optical mass storage media. These materials form the basis of our modern computing world, and hence research into these materials is of vital importance. Semiconductors are a traditional example of these types of materials. They are materials that have properties that are intermediate between conductors and insulators. Their electrical conductivities are very sensitive to the concentration of impurities, which allows the use of doping to achieve desirable electronic properties. Hence, semiconductors form the basis of the traditional computer. 
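The sensitivity to impurity concentration can be made concrete with a minimal Python sketch based on the standard mass-action relation n·p = ni² for a nondegenerate, fully ionized n-type semiconductor. The room-temperature intrinsic carrier concentration of silicon used below is an assumed, commonly quoted textbook figure, and the function name is introduced only for this illustration.

# Approximate intrinsic carrier concentration of silicon at 300 K (per cm^3);
# an assumed textbook value used here purely for illustration.
N_I_SILICON = 1.0e10

def carrier_concentrations(donor_density, n_i=N_I_SILICON):
    """Electron and hole densities in an n-type sample, assuming full donor
    ionization (n is approximately the donor density) and the mass-action
    law n * p = n_i**2."""
    n = donor_density
    p = n_i ** 2 / n
    return n, p

# Doping silicon with roughly one donor per ten million host atoms
# (~5e15 donors per cm^3) raises the electron density about five orders
# of magnitude above the intrinsic value, while holes become negligible.
electrons, holes = carrier_concentrations(5.0e15)
print(f"n = {electrons:.1e} cm^-3, p = {holes:.1e} cm^-3")

Since conductivity scales roughly with the majority-carrier density, impurity levels this small dominate the electrical behavior of the material, which is why doping is such a powerful tool.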
This field also includes new areas of research such as superconducting materials, spintronics, metamaterials, etc. The study of these materials involves knowledge of materials science and solid-state physics or condensed matter physics. Computational materials science With continuing increases in computing power, simulating the behavior of materials has become possible. This enables materials scientists to understand behavior and mechanisms, design new materials, and explain properties formerly poorly understood. Efforts surrounding integrated computational materials engineering are now focusing on combining computational methods with experiments to drastically reduce the time and effort to optimize materials properties for a given application. This involves simulating materials at all length scales, using methods such as density functional theory, molecular dynamics, Monte Carlo, dislocation dynamics, phase field, finite element, and many more. Industry Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing methods (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytic methods (characterization methods such as electron microscopy, X-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, small-angle X-ray scattering (SAXS), etc.). Besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. Thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. Often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. For example, steels are classified based on 1/10 and 1/100 weight percentages of the carbon and other alloying elements they contain. Thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced. Solid materials are generally grouped into three basic classifications: ceramics, metals, and polymers. This broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. An item that is often made from each of these materials types is the beverage container. The material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. Ceramic (glass) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. Metal (aluminum alloy) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. However, the cans are opaque, expensive to produce, and are easily dented and punctured. 
Polymers (polyethylene plastic) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass. Ceramics and glasses Another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. Many ceramics and glasses exhibit covalent or ionic-covalent bonding with SiO₂ (silica) as a fundamental building block. Ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. The vast majority of commercial glasses contain a metal oxide fused with silica. At the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. Windowpanes and eyeglasses are important examples. Fibers of glass are also used for long-range telecommunication and optical transmission. Scratch-resistant Corning Gorilla Glass is a well-known example of the application of materials science to drastically improve the properties of common components. Engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. Alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. Hot pressing provides higher density material. Chemical vapor deposition can place a film of a ceramic on another material. Cermets are ceramic particles containing some metals. The wear resistance of tools is derived from cemented carbides, with the metal phase of cobalt and nickel typically added to modify properties. Ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. This process involves the strategic addition of second-phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. This approach enhances fracture toughness, paving the way for the creation of advanced, high-performance ceramics in various industries. Composites Another application of materials science in industry is making composite materials. These are structured materials composed of two or more macroscopic phases. Applications range from structural elements such as steel-reinforced concrete to the thermal insulating tiles that play a key and integral role in NASA's Space Shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is reinforced Carbon-Carbon (RCC), the light gray material that withstands the high re-entry temperatures and protects the Space Shuttle's wing leading edges and nose cap. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolyzed to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured-pyrolyzed to convert the furfuryl alcohol to carbon. To provide oxidation resistance for reusability, the outer layers of the RCC are converted to silicon carbide. Other examples can be seen in the "plastic" casings of television sets, cell-phones and so on. 
These plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. These additions may be termed reinforcing fibers, or dispersants, depending on their purpose. Polymers Polymers are chemical compounds made up of a large number of repeating units linked together like chains. Polymers are the raw materials (the resins) used to make what are commonly called plastics and rubber. Plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Plastics formerly and currently in widespread use include polyethylene, polypropylene, polyvinyl chloride (PVC), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. Rubbers include natural rubber, styrene-butadiene rubber, chloroprene, and butadiene rubber. Plastics are generally classified as commodity, specialty and engineering plastics. Polyvinyl chloride (PVC) is widely used and inexpensive, and its annual production quantities are large. It lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. Its fabrication and processing are simple and well-established. The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts. The term "additives" in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties. Polycarbonate would normally be considered an engineering plastic (other examples include PEEK, ABS). Such plastics are valued for their superior strengths and other special material properties. They are usually not used for disposable applications, unlike commodity plastics. Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electro-fluorescence, high thermal stability, etc. The dividing lines between the various types of plastics are based not on material but rather on their properties and applications. For example, polyethylene (PE) is a cheap, low-friction polymer commonly used to make disposable bags for shopping and trash, and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for underground gas and water pipes, and another variety called ultra-high-molecular-weight polyethylene (UHMWPE) is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low-friction socket in implanted hip joints. Metal alloys The alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion of metals today both by quantity and commercial value. Iron alloyed with various proportions of carbon gives low, mid and high carbon steels. An iron-carbon alloy is only considered steel if the carbon level is between 0.01% and 2.00% by weight. For steels, the hardness and tensile strength are related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. Heat treatment processes such as quenching and tempering can significantly change these properties, however. In contrast, certain metal alloys exhibit unusually low thermal expansion, so that their dimensions remain nearly unchanged across a range of temperatures. 
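The carbon-content window for steels quoted above lends itself to a small illustration. In the minimal Python sketch below, the 0.01–2.00 wt% limits follow the definition given in the text, while the low/medium/high cut-offs (about 0.30 and 0.60 wt% C) are conventional approximations assumed only for this example.

def classify_iron_carbon_alloy(carbon_wt_pct):
    """Rough classification of an iron-carbon alloy by carbon content.

    The 0.01-2.00 wt% window for steel follows the definition above; the
    low/medium/high boundaries are commonly cited approximations."""
    if carbon_wt_pct < 0.01:
        return "commercially pure iron (below the steel range)"
    if carbon_wt_pct <= 0.30:
        return "low-carbon steel"
    if carbon_wt_pct <= 0.60:
        return "medium-carbon steel"
    if carbon_wt_pct <= 2.00:
        return "high-carbon steel"
    return "above the steel range (the cast irons)"

print(classify_iron_carbon_alloy(0.05))   # low-carbon steel
print(classify_iron_carbon_alloy(0.95))   # high-carbon steel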
Cast iron is defined as an iron–carbon alloy with more than 2.00% but less than 6.67% carbon. Stainless steel is defined as a regular steel alloy with greater than 10% by weight alloying content of chromium. Nickel and molybdenum are typically also added in stainless steels. Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known for a long time (since the Bronze Age), while the alloys of the other three metals were developed relatively recently; due to the chemical reactivity of these metals, the electrolytic extraction processes required became available only comparatively recently. The alloys of aluminium, titanium and magnesium are also known and valued for their high strength-to-weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These materials are ideal for situations where high strength-to-weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications. Semiconductors A semiconductor is a material that has a resistivity between that of a conductor and that of an insulator. Modern-day electronics run on semiconductors, and the industry had an estimated US$530 billion market in 2021. A semiconductor's electronic properties can be greatly altered by intentionally introducing impurities, in a process referred to as doping. Semiconductor materials are used to build diodes, transistors, light-emitting diodes (LEDs), and analog and digital electric circuits, among their many uses. Semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. Semiconductor devices are manufactured both as single discrete devices and as integrated circuits (ICs), which consist of a number—from a few to millions—of devices manufactured and interconnected on a single semiconductor substrate. Of all the semiconductors in use today, silicon makes up the largest portion both by quantity and commercial value. Monocrystalline silicon is used to produce wafers used in the semiconductor and electronics industry. Gallium arsenide (GaAs) is the second most popular semiconductor used. Due to its higher electron mobility and saturation velocity compared to silicon, it is a material of choice for high-speed electronics applications. These superior properties are compelling reasons to use GaAs circuitry in mobile phones, satellite communications, microwave point-to-point links and higher frequency radar systems. Other semiconductor materials, including germanium, silicon carbide, and gallium nitride, have various applications. Relation with other fields Materials science evolved, starting in the 1950s, because it was recognized that creating, discovering and designing new materials had to be approached in a unified manner. Thus, materials science and engineering emerged in many ways: renaming and/or combining existing metallurgy and ceramics engineering departments; splitting from existing solid state physics research (itself growing into condensed matter physics); pulling in relatively new polymer engineering and polymer science; recombining from the previous, as well as chemistry, chemical engineering, mechanical engineering, and electrical engineering; and more. The field of materials science and engineering is important both from a scientific perspective and for practical applications. 
Materials are of the utmost importance for engineers (and other applied fields) because the use of appropriate materials is crucial when designing systems. As a result, materials science is an increasingly important part of an engineer's education. Materials physics is the use of physics to describe the physical properties of materials. It is a synthesis of physical sciences such as chemistry, solid mechanics, solid state physics, and materials science. Materials physics is considered a subset of condensed matter physics and applies fundamental condensed matter concepts to complex multiphase media, including materials of technological interest. Current fields that materials physicists work in include electronic, optical, and magnetic materials, novel materials and structures, quantum phenomena in materials, nonequilibrium physics, and soft condensed matter physics. New experimental and computational tools are constantly improving how materials systems are modeled and studied, and these are also areas in which materials physicists work. The field is inherently interdisciplinary, and the materials scientists or engineers must be aware of, and make use of, the methods of the physicist, chemist and engineer. Conversely, fields such as life sciences and archaeology can inspire the development of new materials and processes, in bioinspired and paleoinspired approaches. Thus, there remain close relationships with these fields. In turn, many physicists, chemists and engineers find themselves working in materials science due to the significant overlaps between the fields.
Emerging technologies
Subdisciplines
The main branches of materials science stem from the four main classes of materials: ceramics, metals, polymers and composites.
Ceramic engineering
Metallurgy
Polymer science and engineering
Composite engineering
There are additionally broadly applicable, materials-independent endeavors.
Materials characterization (spectroscopy, microscopy, diffraction)
Computational materials science
Materials informatics and selection
There are also relatively broad focuses across materials on specific phenomena and techniques.
Crystallography
Surface science
Tribology
Microelectronics
Related or interdisciplinary fields
Condensed matter physics, solid-state physics and solid-state chemistry
Nanotechnology
Mineralogy
Supramolecular chemistry
Biomaterials science
Professional societies
American Ceramic Society
ASM International
Association for Iron and Steel Technology
Materials Research Society
The Minerals, Metals & Materials Society
Physical sciences
Chemistry: General
null
19636
https://en.wikipedia.org/wiki/Mathematical%20logic
Mathematical logic
Mathematical logic is the study of formal logic within mathematics. Major subareas include model theory, proof theory, set theory, and recursion theory (also known as computability theory). Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic such as their expressive or deductive power. However, it can also include uses of logic to characterize correct mathematical reasoning or to establish foundations of mathematics. Since its inception, mathematical logic has both contributed to and been motivated by the study of foundations of mathematics. This study began in the late 19th century with the development of axiomatic frameworks for geometry, arithmetic, and analysis. In the early 20th century it was shaped by David Hilbert's program to prove the consistency of foundational theories. Results of Kurt Gödel, Gerhard Gentzen, and others provided partial resolution to the program, and clarified the issues involved in proving consistency. Work in set theory showed that almost all ordinary mathematics can be formalized in terms of sets, although there are some theorems that cannot be proven in common axiom systems for set theory. Contemporary work in the foundations of mathematics often focuses on establishing which parts of mathematics can be formalized in particular formal systems (as in reverse mathematics) rather than trying to find theories in which all of mathematics can be developed. Subfields and scope The Handbook of Mathematical Logic in 1977 makes a rough division of contemporary mathematical logic into four areas: set theory, model theory, recursion theory, and proof theory and constructive mathematics (considered as parts of a single area). Additionally, sometimes the field of computational complexity theory is also included together with mathematical logic. Each area has a distinct focus, although many techniques and results are shared among multiple areas. The borderlines amongst these fields, and the lines separating mathematical logic and other fields of mathematics, are not always sharp. Gödel's incompleteness theorem marks not only a milestone in recursion theory and proof theory, but has also led to Löb's theorem in modal logic. The method of forcing is employed in set theory, model theory, and recursion theory, as well as in the study of intuitionistic mathematics. The mathematical field of category theory uses many formal axiomatic methods, and includes the study of categorical logic, but category theory is not ordinarily considered a subfield of mathematical logic. Because of its applicability in diverse fields of mathematics, mathematicians including Saunders Mac Lane have proposed category theory as a foundational system for mathematics, independent of set theory. These foundations use toposes, which resemble generalized models of set theory that may employ classical or nonclassical logic. History Mathematical logic emerged in the mid-19th century as a subfield of mathematics, reflecting the confluence of two traditions: formal philosophical logic and mathematics. Mathematical logic, also called 'logistic', 'symbolic logic', the 'algebra of logic', and, more recently, simply 'formal logic', is the set of logical theories elaborated in the course of the nineteenth century with the aid of an artificial notation and a rigorously deductive method. Before this emergence, logic was studied with rhetoric, with calculationes, through the syllogism, and with philosophy. 
The first half of the 20th century saw an explosion of fundamental results, accompanied by vigorous debate over the foundations of mathematics. Early history Theories of logic were developed in many cultures in history, including China, India, Greece and the Islamic world. Greek methods, particularly Aristotelian logic (or term logic) as found in the Organon, found wide application and acceptance in Western science and mathematics for millennia. The Stoics, especially Chrysippus, began the development of propositional logic. In 18th-century Europe, attempts to treat the operations of formal logic in a symbolic or algebraic way had been made by philosophical mathematicians including Leibniz and Lambert, but their labors remained isolated and little known. 19th century In the middle of the nineteenth century, George Boole and then Augustus De Morgan presented systematic mathematical treatments of logic. Their work, building on work by algebraists such as George Peacock, extended the traditional Aristotelian doctrine of logic into a sufficient framework for the study of foundations of mathematics. In 1847, Vatroslav Bertić made substantial work on algebraization of logic, independently from Boole. Charles Sanders Peirce later built upon the work of Boole to develop a logical system for relations and quantifiers, which he published in several papers from 1870 to 1885. Gottlob Frege presented an independent development of logic with quantifiers in his Begriffsschrift, published in 1879, a work generally considered as marking a turning point in the history of logic. Frege's work remained obscure, however, until Bertrand Russell began to promote it near the turn of the century. The two-dimensional notation Frege developed was never widely adopted and is unused in contemporary texts. From 1890 to 1905, Ernst Schröder published Vorlesungen über die Algebra der Logik in three volumes. This work summarized and extended the work of Boole, De Morgan, and Peirce, and was a comprehensive reference to symbolic logic as it was understood at the end of the 19th century. Foundational theories Concerns that mathematics had not been built on a proper foundation led to the development of axiomatic systems for fundamental areas of mathematics such as arithmetic, analysis, and geometry. In logic, the term arithmetic refers to the theory of the natural numbers. Giuseppe Peano published a set of axioms for arithmetic that came to bear his name (Peano axioms), using a variation of the logical system of Boole and Schröder but adding quantifiers. Peano was unaware of Frege's work at the time. Around the same time Richard Dedekind showed that the natural numbers are uniquely characterized by their induction properties. Dedekind proposed a different characterization, which lacked the formal logical character of Peano's axioms. Dedekind's work, however, proved theorems inaccessible in Peano's system, including the uniqueness of the set of natural numbers (up to isomorphism) and the recursive definitions of addition and multiplication from the successor function and mathematical induction. In the mid-19th century, flaws in Euclid's axioms for geometry became known. In addition to the independence of the parallel postulate, established by Nikolai Lobachevsky in 1826, mathematicians discovered that certain theorems taken for granted by Euclid were not in fact provable from his axioms. 
Among these is the theorem that a line contains at least two points, or that circles of the same radius whose centers are separated by that radius must intersect. Hilbert developed a complete set of axioms for geometry, building on previous work by Pasch. The success in axiomatizing geometry motivated Hilbert to seek complete axiomatizations of other areas of mathematics, such as the natural numbers and the real line. This would prove to be a major area of research in the first half of the 20th century. The 19th century saw great advances in the theory of real analysis, including theories of convergence of functions and Fourier series. Mathematicians such as Karl Weierstrass began to construct functions that stretched intuition, such as nowhere-differentiable continuous functions. Previous conceptions of a function as a rule for computation, or a smooth graph, were no longer adequate. Weierstrass began to advocate the arithmetization of analysis, which sought to axiomatize analysis using properties of the natural numbers. The modern (ε, δ)-definition of limit and continuous functions was already developed by Bolzano in 1817, but remained relatively unknown. Cauchy in 1821 defined continuity in terms of infinitesimals (see Cours d'Analyse, page 34). In 1858, Dedekind proposed a definition of the real numbers in terms of Dedekind cuts of rational numbers, a definition still employed in contemporary texts. Georg Cantor developed the fundamental concepts of infinite set theory. His early results developed the theory of cardinality and proved that the reals and the natural numbers have different cardinalities. Over the next twenty years, Cantor developed a theory of transfinite numbers in a series of publications. In 1891, he published a new proof of the uncountability of the real numbers that introduced the diagonal argument, and used this method to prove Cantor's theorem that no set can have the same cardinality as its powerset. Cantor believed that every set could be well-ordered, but was unable to produce a proof for this result, leaving it as an open problem in 1895. 20th century In the early decades of the 20th century, the main areas of study were set theory and formal logic. The discovery of paradoxes in informal set theory caused some to wonder whether mathematics itself is inconsistent, and to look for proofs of consistency. In 1900, Hilbert posed a famous list of 23 problems for the next century. The first two of these were to resolve the continuum hypothesis and prove the consistency of elementary arithmetic, respectively; the tenth was to produce a method that could decide whether a multivariate polynomial equation over the integers has a solution. Subsequent work to resolve these problems shaped the direction of mathematical logic, as did the effort to resolve Hilbert's Entscheidungsproblem, posed in 1928. This problem asked for a procedure that would decide, given a formalized mathematical statement, whether the statement is true or false. Set theory and paradoxes Ernst Zermelo gave a proof that every set could be well-ordered, a result Georg Cantor had been unable to obtain. To achieve the proof, Zermelo introduced the axiom of choice, which drew heated debate and research among mathematicians and the pioneers of set theory. The immediate criticism of the method led Zermelo to publish a second exposition of his result, directly addressing criticisms of his proof. This paper led to the general acceptance of the axiom of choice in the mathematics community. 
Skepticism about the axiom of choice was reinforced by recently discovered paradoxes in naive set theory. Cesare Burali-Forti was the first to state a paradox: the Burali-Forti paradox shows that the collection of all ordinal numbers cannot form a set. Very soon thereafter, Bertrand Russell discovered Russell's paradox in 1901, and Jules Richard discovered Richard's paradox. Zermelo provided the first set of axioms for set theory. These axioms, together with the additional axiom of replacement proposed by Abraham Fraenkel, are now called Zermelo–Fraenkel set theory (ZF). Zermelo's axioms incorporated the principle of limitation of size to avoid Russell's paradox. In 1910, the first volume of Principia Mathematica by Russell and Alfred North Whitehead was published. This seminal work developed the theory of functions and cardinality in a completely formal framework of type theory, which Russell and Whitehead developed in an effort to avoid the paradoxes. Principia Mathematica is considered one of the most influential works of the 20th century, although the framework of type theory did not prove popular as a foundational theory for mathematics. Fraenkel proved that the axiom of choice cannot be proved from the axioms of Zermelo's set theory with urelements. Later work by Paul Cohen showed that the addition of urelements is not needed, and the axiom of choice is unprovable in ZF. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory. Symbolic logic Leopold Löwenheim and Thoralf Skolem obtained the Löwenheim–Skolem theorem, which says that first-order logic cannot control the cardinalities of infinite structures. Skolem realized that this theorem would apply to first-order formalizations of set theory, and that it implies any such formalization has a countable model. This counterintuitive fact became known as Skolem's paradox. In his doctoral thesis, Kurt Gödel proved the completeness theorem, which establishes a correspondence between syntax and semantics in first-order logic. Gödel used the completeness theorem to prove the compactness theorem, demonstrating the finitary nature of first-order logical consequence. These results helped establish first-order logic as the dominant logic used by mathematicians. In 1931, Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which proved the incompleteness (in a different meaning of the word) of all sufficiently strong, effective first-order theories. This result, known as Gödel's incompleteness theorem, establishes severe limitations on axiomatic foundations for mathematics, striking a strong blow to Hilbert's program. It showed the impossibility of providing a consistency proof of arithmetic within any formal theory of arithmetic. Hilbert, however, did not acknowledge the importance of the incompleteness theorem for some time. Gödel's theorem shows that a consistency proof of any sufficiently strong, effective axiom system cannot be obtained in the system itself, if the system is consistent, nor in any weaker system. This leaves open the possibility of consistency proofs that cannot be formalized within the system they consider. Gentzen proved the consistency of arithmetic using a finitistic system together with a principle of transfinite induction. Gentzen's result introduced the ideas of cut elimination and proof-theoretic ordinals, which became key tools in proof theory. 
Gödel gave a different consistency proof, which reduces the consistency of classical arithmetic to that of intuitionistic arithmetic in higher types. The first textbook on symbolic logic for the layman was written by Lewis Carroll, author of Alice's Adventures in Wonderland, in 1896. Beginnings of the other branches Alfred Tarski developed the basics of model theory. Beginning in 1935, a group of prominent mathematicians collaborated under the pseudonym Nicolas Bourbaki to publish Éléments de mathématique, a series of encyclopedic mathematics texts. These texts, written in an austere and axiomatic style, emphasized rigorous presentation and set-theoretic foundations. Terminology coined by these texts, such as the words bijection, injection, and surjection, and the set-theoretic foundations the texts employed, were widely adopted throughout mathematics. The study of computability came to be known as recursion theory or computability theory, because early formalizations by Gödel and Kleene relied on recursive definitions of functions. When these definitions were shown equivalent to Turing's formalization involving Turing machines, it became clear that a new concept – the computable function – had been discovered, and that this definition was robust enough to admit numerous independent characterizations. In his work on the incompleteness theorems in 1931, Gödel lacked a rigorous concept of an effective formal system; he immediately realized that the new definitions of computability could be used for this purpose, allowing him to state the incompleteness theorems in generality that could only be implied in the original paper. Numerous results in recursion theory were obtained in the 1940s by Stephen Cole Kleene and Emil Leon Post. Kleene introduced the concepts of relative computability, foreshadowed by Turing, and the arithmetical hierarchy. Kleene later generalized recursion theory to higher-order functionals. Kleene and Georg Kreisel studied formal versions of intuitionistic mathematics, particularly in the context of proof theory. Formal logical systems At its core, mathematical logic deals with mathematical concepts expressed using formal logical systems. These systems, though they differ in many details, share the common property of considering only expressions in a fixed formal language. The systems of propositional logic and first-order logic are the most widely studied today, because of their applicability to foundations of mathematics and because of their desirable proof-theoretic properties. Stronger classical logics such as second-order logic or infinitary logic are also studied, along with Non-classical logics such as intuitionistic logic. First-order logic First-order logic is a particular formal system of logic. Its syntax involves only finite expressions as well-formed formulas, while its semantics are characterized by the limitation of all quantifiers to a fixed domain of discourse. Early results from formal logic established limitations of first-order logic. The Löwenheim–Skolem theorem (1919) showed that if a set of sentences in a countable first-order language has an infinite model then it has at least one model of each infinite cardinality. This shows that it is impossible for a set of first-order axioms to characterize the natural numbers, the real numbers, or any other infinite structure up to isomorphism. As the goal of early foundational studies was to produce axiomatic theories for all parts of mathematics, this limitation was particularly stark. 
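A standard illustration of this limitation (one not drawn from the sources discussed here) uses the complete theory of arithmetic. Let $\mathrm{Th}(\mathbb{N})$ denote the set of all first-order sentences true in the natural numbers; since it has an infinite model, the theorem yields, for every infinite cardinal $\kappa$, a model $M_\kappa$ with

\[ M_\kappa \models \mathrm{Th}(\mathbb{N}), \qquad |M_\kappa| = \kappa . \]

Any uncountable $M_\kappa$ satisfies exactly the same first-order sentences as $\mathbb{N}$ yet cannot be isomorphic to it, so even the complete first-order theory of the natural numbers fails to pin them down up to isomorphism.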
Gödel's completeness theorem established the equivalence between semantic and syntactic definitions of logical consequence in first-order logic. It shows that if a particular sentence is true in every model that satisfies a particular set of axioms, then there must be a finite deduction of the sentence from the axioms. The compactness theorem first appeared as a lemma in Gödel's proof of the completeness theorem, and it took many years before logicians grasped its significance and began to apply it routinely. It says that a set of sentences has a model if and only if every finite subset has a model, or in other words that an inconsistent set of formulas must have a finite inconsistent subset. The completeness and compactness theorems allow for sophisticated analysis of logical consequence in first-order logic and the development of model theory, and they are a key reason for the prominence of first-order logic in mathematics. Gödel's incompleteness theorems establish additional limits on first-order axiomatizations. The first incompleteness theorem states that for any consistent, effectively given (defined below) logical system that is capable of interpreting arithmetic, there exists a statement that is true (in the sense that it holds for the natural numbers) but not provable within that logical system (and which indeed may fail in some non-standard models of arithmetic which may be consistent with the logical system). For example, in every logical system capable of expressing the Peano axioms, the Gödel sentence holds for the natural numbers but cannot be proved. Here a logical system is said to be effectively given if it is possible to decide, given any formula in the language of the system, whether the formula is an axiom, and one which can express the Peano axioms is called "sufficiently strong." When applied to first-order logic, the first incompleteness theorem implies that any sufficiently strong, consistent, effective first-order theory has models that are not elementarily equivalent, a stronger limitation than the one established by the Löwenheim–Skolem theorem. The second incompleteness theorem states that no sufficiently strong, consistent, effective axiom system for arithmetic can prove its own consistency, which has been interpreted to show that Hilbert's program cannot be completed. Other classical logics Many logics besides first-order logic are studied. These include infinitary logics, which allow for formulas to provide an infinite amount of information, and higher-order logics, which include a portion of set theory directly in their semantics. The most well studied infinitary logic is $L_{\omega_1,\omega}$. In this logic, quantifiers may only be nested to finite depths, as in first-order logic, but formulas may have finite or countably infinite conjunctions and disjunctions within them. Thus, for example, it is possible to say that an object is a whole number using a formula of $L_{\omega_1,\omega}$ such as $(x = 0) \lor (x = 1) \lor (x = 2) \lor \cdots$. Higher-order logics allow for quantification not only of elements of the domain of discourse, but also of subsets of the domain of discourse, sets of such subsets, and other objects of higher type. The semantics are defined so that, rather than having a separate domain for each higher-type quantifier to range over, the quantifiers instead range over all objects of the appropriate type. The logics studied before the development of first-order logic, for example Frege's logic, had similar set-theoretic aspects. 
Although higher-order logics are more expressive, allowing complete axiomatizations of structures such as the natural numbers, they do not satisfy analogues of the completeness and compactness theorems from first-order logic, and are thus less amenable to proof-theoretic analysis. Another type of logics are fixed-point logics, which allow inductive definitions of the kind one writes for primitive recursive functions. One can formally define an extension of first-order logic — a notion which encompasses all logics in this section because they behave like first-order logic in certain fundamental ways, but does not encompass all logics in general, e.g. it does not encompass intuitionistic, modal or fuzzy logic. Lindström's theorem implies that the only extension of first-order logic satisfying both the compactness theorem and the downward Löwenheim–Skolem theorem is first-order logic. Nonclassical and modal logic Modal logics include additional modal operators, such as an operator which states that a particular formula is not only true, but necessarily true. Although modal logic is not often used to axiomatize mathematics, it has been used to study the properties of first-order provability and set-theoretic forcing. Intuitionistic logic was developed by Heyting to study Brouwer's program of intuitionism, in which Brouwer himself avoided formalization. Intuitionistic logic specifically does not include the law of the excluded middle, which states that each sentence is either true or its negation is true. Kleene's work with the proof theory of intuitionistic logic showed that constructive information can be recovered from intuitionistic proofs. For example, any provably total function in intuitionistic arithmetic is computable; this is not true in classical theories of arithmetic such as Peano arithmetic. Algebraic logic Algebraic logic uses the methods of abstract algebra to study the semantics of formal logics. A fundamental example is the use of Boolean algebras to represent truth values in classical propositional logic, and the use of Heyting algebras to represent truth values in intuitionistic propositional logic. Stronger logics, such as first-order logic and higher-order logic, are studied using more complicated algebraic structures such as cylindric algebras. Set theory Set theory is the study of sets, which are abstract collections of objects. Many of the basic notions, such as ordinal and cardinal numbers, were developed informally by Cantor before formal axiomatizations of set theory were developed. The first such axiomatization, due to Zermelo, was extended slightly to become Zermelo–Fraenkel set theory (ZF), which is now the most widely used foundational theory for mathematics. Other formalizations of set theory have been proposed, including von Neumann–Bernays–Gödel set theory (NBG), Morse–Kelley set theory (MK), and New Foundations (NF). Of these, ZF, NBG, and MK are similar in describing a cumulative hierarchy of sets. New Foundations takes a different approach; it allows objects such as the set of all sets at the cost of restrictions on its set-existence axioms. The system of Kripke–Platek set theory is closely related to generalized recursion theory. Two famous statements in set theory are the axiom of choice and the continuum hypothesis. The axiom of choice, first stated by Zermelo, was proved independent of ZF by Fraenkel, but has come to be widely accepted by mathematicians. 
It states that given a collection of nonempty sets there is a single set C that contains exactly one element from each set in the collection. The set C is said to "choose" one element from each set in the collection. While the ability to make such a choice is considered obvious by some, since each set in the collection is nonempty, the lack of a general, concrete rule by which the choice can be made renders the axiom nonconstructive. Stefan Banach and Alfred Tarski showed that the axiom of choice can be used to decompose a solid ball into a finite number of pieces which can then be rearranged, with no scaling, to make two solid balls of the original size. This theorem, known as the Banach–Tarski paradox, is one of many counterintuitive results of the axiom of choice. The continuum hypothesis, first proposed as a conjecture by Cantor, was listed by David Hilbert as one of his 23 problems in 1900. Gödel showed that the continuum hypothesis cannot be disproven from the axioms of Zermelo–Fraenkel set theory (with or without the axiom of choice), by developing the constructible universe of set theory in which the continuum hypothesis must hold. In 1963, Paul Cohen showed that the continuum hypothesis cannot be proven from the axioms of Zermelo–Fraenkel set theory. This independence result did not completely settle Hilbert's question, however, as it is possible that new axioms for set theory could resolve the hypothesis. Recent work along these lines has been conducted by W. Hugh Woodin, although its importance is not yet clear. Contemporary research in set theory includes the study of large cardinals and determinacy. Large cardinals are cardinal numbers with particular properties so strong that the existence of such cardinals cannot be proved in ZFC. The existence of the smallest large cardinal typically studied, an inaccessible cardinal, already implies the consistency of ZFC. Despite the fact that large cardinals have extremely high cardinality, their existence has many ramifications for the structure of the real line. Determinacy refers to the possible existence of winning strategies for certain two-player games (the games are said to be determined). The existence of these strategies implies structural properties of the real line and other Polish spaces. Model theory Model theory studies the models of various formal theories. Here a theory is a set of formulas in a particular formal logic and signature, while a model is a structure that gives a concrete interpretation of the theory. Model theory is closely related to universal algebra and algebraic geometry, although the methods of model theory focus more on logical considerations than those fields. The set of all models of a particular theory is called an elementary class; classical model theory seeks to determine the properties of models in a particular elementary class, or determine whether certain classes of structures form elementary classes. The method of quantifier elimination can be used to show that definable sets in particular theories cannot be too complicated. Tarski established quantifier elimination for real-closed fields, a result which also shows the theory of the field of real numbers is decidable. He also noted that his methods were equally applicable to algebraically closed fields of arbitrary characteristic. A modern subfield developing from this is concerned with o-minimal structures. Morley's categoricity theorem, proved by Michael D. 
Morley, states that if a first-order theory in a countable language is categorical in some uncountable cardinality, i.e. all models of this cardinality are isomorphic, then it is categorical in all uncountable cardinalities. A trivial consequence of the continuum hypothesis is that a complete theory with less than continuum many nonisomorphic countable models can have only countably many. Vaught's conjecture, named after Robert Lawson Vaught, says that this is true even independently of the continuum hypothesis. Many special cases of this conjecture have been established. Recursion theory Recursion theory, also called computability theory, studies the properties of computable functions and the Turing degrees, which divide the uncomputable functions into sets that have the same level of uncomputability. Recursion theory also includes the study of generalized computability and definability. Recursion theory grew from the work of Rózsa Péter, Alonzo Church and Alan Turing in the 1930s, which was greatly extended by Kleene and Post in the 1940s. Classical recursion theory focuses on the computability of functions from the natural numbers to the natural numbers. The fundamental results establish a robust, canonical class of computable functions with numerous independent, equivalent characterizations using Turing machines, λ calculus, and other systems. More advanced results concern the structure of the Turing degrees and the lattice of recursively enumerable sets. Generalized recursion theory extends the ideas of recursion theory to computations that are no longer necessarily finite. It includes the study of computability in higher types as well as areas such as hyperarithmetical theory and α-recursion theory. Contemporary research in recursion theory includes the study of applications such as algorithmic randomness, computable model theory, and reverse mathematics, as well as new results in pure recursion theory. Algorithmically unsolvable problems An important subfield of recursion theory studies algorithmic unsolvability; a decision problem or function problem is algorithmically unsolvable if there is no possible computable algorithm that returns the correct answer for all legal inputs to the problem. The first results about unsolvability, obtained independently by Church and Turing in 1936, showed that the Entscheidungsproblem is algorithmically unsolvable. Turing proved this by establishing the unsolvability of the halting problem, a result with far-ranging implications in both recursion theory and computer science. There are many known examples of undecidable problems from ordinary mathematics. The word problem for groups was proved algorithmically unsolvable by Pyotr Novikov in 1955 and independently by W. Boone in 1959. The busy beaver problem, developed by Tibor Radó in 1962, is another well-known example. Hilbert's tenth problem asked for an algorithm to determine whether a multivariate polynomial equation with integer coefficients has a solution in the integers. Partial progress was made by Julia Robinson, Martin Davis and Hilary Putnam. The algorithmic unsolvability of the problem was proved by Yuri Matiyasevich in 1970. Proof theory and constructive mathematics Proof theory is the study of formal proofs in various logical deduction systems. These proofs are represented as formal mathematical objects, facilitating their analysis by mathematical techniques. 
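The unsolvability of the halting problem mentioned above rests on a short diagonal argument, which can be sketched in a few lines of Python. The decider halts below is hypothetical and is assumed only for the sake of contradiction; the sketch is illustrative, not a runnable decision procedure.

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical total decider: would return True exactly when the given
    program, run on the given input, eventually halts."""
    raise NotImplementedError  # no correct, total implementation can exist

def diagonal(program_source: str) -> None:
    """Do the opposite of whatever halts() predicts for a program run on itself."""
    if halts(program_source, program_source):
        while True:   # predicted to halt, so loop forever instead
            pass
    # predicted to run forever, so halt immediately

# Feeding diagonal() its own source code is contradictory: if the call halts,
# halts() claimed it would not; if it runs forever, halts() claimed it would
# halt. Hence no program can implement halts() correctly for all inputs.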
Several deduction systems are commonly considered, including Hilbert-style deduction systems, systems of natural deduction, and the sequent calculus developed by Gentzen. The study of constructive mathematics, in the context of mathematical logic, includes the study of systems in non-classical logic such as intuitionistic logic, as well as the study of predicative systems. An early proponent of predicativism was Hermann Weyl, who showed it is possible to develop a large part of real analysis using only predicative methods. Because proofs are entirely finitary, whereas truth in a structure is not, it is common for work in constructive mathematics to emphasize provability. The relationship between provability in classical (or nonconstructive) systems and provability in intuitionistic (or constructive, respectively) systems is of particular interest. Results such as the Gödel–Gentzen negative translation show that it is possible to embed (or translate) classical logic into intuitionistic logic, allowing some properties about intuitionistic proofs to be transferred back to classical proofs. Recent developments in proof theory include the study of proof mining by Ulrich Kohlenbach and the study of proof-theoretic ordinals by Michael Rathjen. Applications "Mathematical logic has been successfully applied not only to mathematics and its foundations (G. Frege, B. Russell, D. Hilbert, P. Bernays, H. Scholz, R. Carnap, S. Lesniewski, T. Skolem), but also to physics (R. Carnap, A. Dittrich, B. Russell, C. E. Shannon, A. N. Whitehead, H. Reichenbach, P. Fevrier), to biology (J. H. Woodger, A. Tarski), to psychology (F. B. Fitch, C. G. Hempel), to law and morals (K. Menger, U. Klug, P. Oppenheim), to economics (J. Neumann, O. Morgenstern), to practical questions (E. C. Berkeley, E. Stamm), and even to metaphysics (J. [Jan] Salamucha, H. Scholz, J. M. Bochenski). Its applications to the history of logic have proven extremely fruitful (J. Lukasiewicz, H. Scholz, B. Mates, A. Becker, E. Moody, J. Salamucha, K. Duerr, Z. Jordan, P. Boehner, J. M. Bochenski, S. [Stanislaw] T. Schayer, D. Ingalls)." "Applications have also been made to theology (F. Drewnowski, J. Salamucha, I. Thomas)." Connections with computer science The study of computability theory in computer science is closely related to the study of computability in mathematical logic. There is a difference of emphasis, however. Computer scientists often focus on concrete programming languages and feasible computability, while researchers in mathematical logic often focus on computability as a theoretical concept and on noncomputability. The theory of semantics of programming languages is related to model theory, as is program verification (in particular, model checking). The Curry–Howard correspondence between proofs and programs relates to proof theory, especially intuitionistic logic. Formal calculi such as the lambda calculus and combinatory logic are now studied as idealized programming languages. Computer science also contributes to mathematics by developing techniques for the automatic checking or even finding of proofs, such as automated theorem proving and logic programming. Descriptive complexity theory relates logics to computational complexity. The first significant result in this area, Fagin's theorem (1974) established that NP is precisely the set of languages expressible by sentences of existential second-order logic. 
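As a concrete illustration of Fagin's theorem, a standard textbook example (not taken from this article) is graph 3-colorability, an NP property of finite graphs. With $E$ the edge relation of the input graph and $R$, $G$, $B$ second-order variables ranging over sets of vertices, it is defined by the existential second-order sentence

\[ \exists R\, \exists G\, \exists B\; \Bigl[ \forall x \bigl( R(x) \lor G(x) \lor B(x) \bigr) \;\land\; \forall x\, \forall y \Bigl( E(x,y) \rightarrow \neg \bigl( (R(x) \land R(y)) \lor (G(x) \land G(y)) \lor (B(x) \land B(y)) \bigr) \Bigr) \Bigr]. \]

Once the sets $R$, $G$, $B$ are guessed, the first-order part can be checked in polynomial time, which is exactly the guess-and-verify pattern that Fagin's theorem shows characterizes NP.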
Foundations of mathematics In the 19th century, mathematicians became aware of logical gaps and inconsistencies in their field. It was shown that Euclid's axioms for geometry, which had been taught for centuries as an example of the axiomatic method, were incomplete. The use of infinitesimals, and the very definition of function, came into question in analysis, as pathological examples such as Weierstrass' nowhere-differentiable continuous function were discovered. Cantor's study of arbitrary infinite sets also drew criticism. Leopold Kronecker famously stated "God made the integers; all else is the work of man," endorsing a return to the study of finite, concrete objects in mathematics. Although Kronecker's argument was carried forward by constructivists in the 20th century, the mathematical community as a whole rejected them. David Hilbert argued in favor of the study of the infinite, saying "No one shall expel us from the Paradise that Cantor has created." Mathematicians began to search for axiom systems that could be used to formalize large parts of mathematics. In addition to removing ambiguity from previously naive terms such as function, it was hoped that this axiomatization would allow for consistency proofs. In the 19th century, the main method of proving the consistency of a set of axioms was to provide a model for it. Thus, for example, non-Euclidean geometry can be proved consistent by defining point to mean a point on a fixed sphere and line to mean a great circle on the sphere. The resulting structure, a model of elliptic geometry, satisfies the axioms of plane geometry except the parallel postulate. With the development of formal logic, Hilbert asked whether it would be possible to prove that an axiom system is consistent by analyzing the structure of possible proofs in the system, and showing through this analysis that it is impossible to prove a contradiction. This idea led to the study of proof theory. Moreover, Hilbert proposed that the analysis should be entirely concrete, using the term finitary to refer to the methods he would allow but not precisely defining them. This project, known as Hilbert's program, was seriously affected by Gödel's incompleteness theorems, which show that the consistency of formal theories of arithmetic cannot be established using methods formalizable in those theories. Gentzen showed that it is possible to produce a proof of the consistency of arithmetic in a finitary system augmented with axioms of transfinite induction, and the techniques he developed to do so were seminal in proof theory. A second thread in the history of foundations of mathematics involves nonclassical logics and constructive mathematics. The study of constructive mathematics includes many different programs with various definitions of constructive. At the most accommodating end, proofs in ZF set theory that do not use the axiom of choice are called constructive by many mathematicians. More limited versions of constructivism limit themselves to natural numbers, number-theoretic functions, and sets of natural numbers (which can be used to represent real numbers, facilitating the study of mathematical analysis). A common idea is that a concrete means of computing the values of the function must be known before the function itself can be said to exist. In the early 20th century, Luitzen Egbertus Jan Brouwer founded intuitionism as a part of philosophy of mathematics. 
This philosophy, poorly understood at first, stated that in order for a mathematical statement to be true to a mathematician, that person must be able to intuit the statement, to not only believe its truth but understand the reason for its truth. A consequence of this definition of truth was the rejection of the law of the excluded middle, for there are statements that, according to Brouwer, could not be claimed to be true while their negations also could not be claimed true. Brouwer's philosophy was influential, and the cause of bitter disputes among prominent mathematicians. Kleene and Kreisel would later study formalized versions of intuitionistic logic (Brouwer rejected formalization, and presented his work in unformalized natural language). With the advent of the BHK interpretation and Kripke models, intuitionism became easier to reconcile with classical mathematics.
Mathematics
Discrete mathematics
null
19638
https://en.wikipedia.org/wiki/MEMS
MEMS
MEMS (micro-electromechanical systems) is the technology of microscopic devices incorporating both electronic and moving parts. MEMS are made up of components between 1 and 100 micrometres in size (i.e., 0.001 to 0.1 mm), and MEMS devices generally range in size from 20 micrometres to a millimetre (i.e., 0.02 to 1.0 mm), although components arranged in arrays (e.g., digital micromirror devices) can be more than 1000 mm2. They usually consist of a central unit that processes data (an integrated circuit chip such as microprocessor) and several components that interact with the surroundings (such as microsensors). Because of the large surface area to volume ratio of MEMS, forces produced by ambient electromagnetism (e.g., electrostatic charges and magnetic moments), and fluid dynamics (e.g., surface tension and viscosity) are more important design considerations than with larger scale mechanical devices. MEMS technology is distinguished from molecular nanotechnology or molecular electronics in that the latter two must also consider surface chemistry. The potential of very small machines was appreciated before the technology existed that could make them (see, for example, Richard Feynman's famous 1959 lecture There's Plenty of Room at the Bottom). MEMS became practical once they could be fabricated using modified semiconductor device fabrication technologies, normally used to make electronics. These include molding and plating, wet etching (KOH, TMAH) and dry etching (RIE and DRIE), electrical discharge machining (EDM), and other technologies capable of manufacturing small devices. They merge at the nanoscale into nanoelectromechanical systems (NEMS) and nanotechnology. History An early example of a MEMS device is the resonant-gate transistor, an adaptation of the MOSFET, developed by Robert A. Wickstrom for Harvey C. Nathanson in 1965. Another early example is the resonistor, an electromechanical monolithic resonator patented by Raymond J. Wilfinger between 1966 and 1971. During the 1970s to early 1980s, a number of MOSFET microsensors were developed for measuring physical, chemical, biological and environmental parameters. The term "MEMS" was introduced in 1986. S.C. Jacobsen (PI) and J.E. Wood (Co-PI) introduced the term "MEMS" by way of a proposal to DARPA (15 July 1986), titled "Micro Electro-Mechanical Systems (MEMS)", granted to the University of Utah. The term "MEMS" was presented by way of an invited talk by S.C. Jacobsen, titled "Micro Electro-Mechanical Systems (MEMS)", at the IEEE Micro Robots and Teleoperators Workshop, Hyannis, MA Nov. 9–11, 1987. The term "MEMS" was published by way of a submitted paper by J.E. Wood, S.C. Jacobsen, and K.W. Grace, titled "SCOFSS: A Small Cantilevered Optical Fiber Servo System", in the IEEE Proceedings Micro Robots and Teleoperators Workshop, Hyannis, MA Nov. 9–11, 1987. CMOS transistors have been manufactured on top of MEMS structures. Types There are two basic types of MEMS switch technology: capacitive and ohmic. A capacitive MEMS switch is developed using a moving plate or sensing element, which changes the capacitance. Ohmic switches are controlled by electrostatically controlled cantilevers. Ohmic MEMS switches can fail from metal fatigue of the MEMS actuator (cantilever) and contact wear, since cantilevers can deform over time. Materials The fabrication of MEMS evolved from the process technology in semiconductor device fabrication, i.e. 
the basic techniques are deposition of material layers, patterning by photolithography and etching to produce the required shapes. Silicon Silicon is the material used to create most integrated circuits used in consumer electronics in the modern industry. The economies of scale, ready availability of inexpensive high-quality materials, and ability to incorporate electronic functionality make silicon attractive for a wide variety of MEMS applications. Silicon also has significant advantages engendered through its material properties. In single crystal form, silicon is an almost perfect Hookean material, meaning that when it is flexed there is virtually no hysteresis and hence almost no energy dissipation. As well as making for highly repeatable motion, this also makes silicon very reliable as it suffers very little fatigue and can have service lifetimes in the range of billions to trillions of cycles without breaking. Semiconductor nanostructures based on silicon are gaining increasing importance in the field of microelectronics and MEMS in particular. Silicon nanowires, fabricated through the thermal oxidation of silicon, are of further interest in electrochemical conversion and storage, including nanowire batteries and photovoltaic systems. Polymers Even though the electronics industry provides an economy of scale for the silicon industry, crystalline silicon is still a complex and relatively expensive material to produce. Polymers on the other hand can be produced in huge volumes, with a great variety of material characteristics. MEMS devices can be made from polymers by processes such as injection molding, embossing or stereolithography and are especially well suited to microfluidic applications such as disposable blood testing cartridges. Metals Metals can also be used to create MEMS elements. While metals do not have some of the advantages displayed by silicon in terms of mechanical properties, when used within their limitations, metals can exhibit very high degrees of reliability. Metals can be deposited by electroplating, evaporation, and sputtering processes. Commonly used metals include gold, nickel, aluminium, copper, chromium, titanium, tungsten, platinum, and silver. Ceramics The nitrides of silicon, aluminium and titanium as well as silicon carbide and other ceramics are increasingly applied in MEMS fabrication due to advantageous combinations of material properties. AlN crystallizes in the wurtzite structure and thus shows pyroelectric and piezoelectric properties enabling sensors, for instance, with sensitivity to normal and shear forces. TiN, on the other hand, exhibits a high electrical conductivity and large elastic modulus, making it possible to implement electrostatic MEMS actuation schemes with ultrathin beams. Moreover, the high resistance of TiN against biocorrosion qualifies the material for applications in biogenic environments. The figure shows an electron-microscopic picture of a MEMS biosensor with a 50 nm thin bendable TiN beam above a TiN ground plate. Both can be driven as opposite electrodes of a capacitor, since the beam is fixed in electrically isolating side walls. When a fluid is suspended in the cavity its viscosity may be derived from bending the beam by electrical attraction to the ground plate and measuring the bending velocity. Basic processes Deposition processes One of the basic building blocks in MEMS processing is the ability to deposit thin films of material with a thickness anywhere from one micrometre to about 100 micrometres. 
The NEMS process is the same, although the measurement of film deposition ranges from a few nanometres to one micrometre. There are two types of deposition processes, as follows. Physical deposition Physical vapor deposition ("PVD") consists of a process in which a material is removed from a target, and deposited on a surface. Techniques to do this include the process of sputtering, in which an ion beam liberates atoms from a target, allowing them to move through the intervening space and deposit on the desired substrate, and evaporation, in which a material is evaporated from a target using either heat (thermal evaporation) or an electron beam (e-beam evaporation) in a vacuum system. Chemical deposition Chemical deposition techniques include chemical vapor deposition (CVD), in which a stream of source gas reacts on the substrate to grow the material desired. This can be further divided into categories depending on the details of the technique, for example LPCVD (low-pressure chemical vapor deposition) and PECVD (plasma-enhanced chemical vapor deposition). Oxide films can also be grown by the technique of thermal oxidation, in which the (typically silicon) wafer is exposed to oxygen and/or steam, to grow a thin surface layer of silicon dioxide. Patterning Patterning is the transfer of a pattern into a material. Lithography Lithography in a MEMS context is typically the transfer of a pattern into a photosensitive material by selective exposure to a radiation source such as light. A photosensitive material is a material that experiences a change in its physical properties when exposed to a radiation source. If a photosensitive material is selectively exposed to radiation (e.g. by masking some of the radiation) the pattern of the radiation on the material is transferred to the material exposed, as the properties of the exposed and unexposed regions differs. This exposed region can then be removed or treated providing a mask for the underlying substrate. Photolithography is typically used with metal or other thin film deposition, wet and dry etching. Sometimes, photolithography is used to create structure without any kind of post etching. One example is SU8 based lens where SU8 based square blocks are generated. Then the photoresist is melted to form a semi-sphere which acts as a lens. Electron beam lithography (often abbreviated as e-beam lithography) is the practice of scanning a beam of electrons in a patterned fashion across a surface covered with a film (called the resist), ("exposing" the resist) and of selectively removing either exposed or non-exposed regions of the resist ("developing"). The purpose, as with photolithography, is to create very small structures in the resist that can subsequently be transferred to the substrate material, often by etching. It was developed for manufacturing integrated circuits, and is also used for creating nanotechnology architectures. The primary advantage of electron beam lithography is that it is one of the ways to beat the diffraction limit of light and make features in the nanometer range. This form of maskless lithography has found wide usage in photomask-making used in photolithography, low-volume production of semiconductor components, and research & development. The key limitation of electron beam lithography is throughput, i.e., the very long time it takes to expose an entire silicon wafer or glass substrate. A long exposure time leaves the user vulnerable to beam drift or instability which may occur during the exposure. 
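To get a feel for the scale of the throughput limitation just described, exposure time can be estimated as (resist dose × exposed area) / beam current. The following minimal Python sketch uses made-up but plausible parameter values; the dose, area, and current figures are assumptions for illustration, not data from this article.

def ebeam_write_time_hours(dose_uC_per_cm2: float, area_cm2: float, beam_current_nA: float) -> float:
    # Total charge to deliver (coulombs) divided by beam current (amperes) gives seconds.
    charge_C = dose_uC_per_cm2 * 1e-6 * area_cm2
    seconds = charge_C / (beam_current_nA * 1e-9)
    return seconds / 3600.0

# Hypothetical example: 100 uC/cm2 resist dose, 700 cm2 of exposed area, 10 nA beam.
print(ebeam_write_time_hours(100.0, 700.0, 10.0))  # roughly 1900 hours, i.e. months of writing

Even with generous assumptions, serially scanned exposure of a full wafer takes orders of magnitude longer than a single flood exposure through a photomask, which is why the technique is reserved for mask making, low-volume parts, and research.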
Also, the turn-around time for reworking or re-design is lengthened unnecessarily if the pattern is not being changed the second time. It is known that focused-ion beam lithography has the capability of writing extremely fine lines (less than 50 nm line and space has been achieved) without proximity effect. However, because the writing field in ion-beam lithography is quite small, large area patterns must be created by stitching together the small fields. Ion track technology is a deep cutting tool with a resolution limit around 8 nm applicable to radiation resistant minerals, glasses and polymers. It is capable of generating holes in thin films without any development process. Structural depth can be defined either by ion range or by material thickness. Aspect ratios up to several 10⁴ can be reached. The technique can shape and texture materials at a defined inclination angle. Random pattern, single-ion track structures and an aimed pattern consisting of individual single tracks can be generated. X-ray lithography is a process used in the electronic industry to selectively remove parts of a thin film. It uses X-rays to transfer a geometric pattern from a mask to a light-sensitive chemical photoresist, or simply "resist", on the substrate. A series of chemical treatments then engraves the produced pattern into the material underneath the photoresist. Diamond patterning is a method of forming diamond MEMS. It is achieved by the lithographic application of diamond films to a substrate such as silicon. The patterns can be formed by selective deposition through a silicon dioxide mask, or by deposition followed by micromachining or focused ion beam milling. Etching processes There are two basic categories of etching processes: wet etching and dry etching. In the former, the material is dissolved when immersed in a chemical solution. In the latter, the material is sputtered or dissolved using reactive ions or a vapor phase etchant. Wet etching Wet chemical etching consists of the selective removal of material by dipping a substrate into a solution that dissolves it. The chemical nature of this etching process provides good selectivity, which means the etching rate of the target material is considerably higher than that of the mask material if selected carefully. Wet etching can be performed using either isotropic wet etchants or anisotropic wet etchants. Isotropic wet etchants etch in all directions of the crystalline silicon at approximately equal rates. Anisotropic wet etchants preferentially etch along certain crystal planes at faster rates than other planes, thereby allowing more complicated 3-D microstructures to be implemented. Wet anisotropic etchants are often used in conjunction with boron etch stops, wherein the surface of the silicon is heavily doped with boron, resulting in a silicon material layer that is resistant to the wet etchants. This has been used in MEMS pressure sensor manufacturing, for example. Etching progresses at the same speed in all directions. Long and narrow holes in a mask will produce v-shaped grooves in the silicon. The surface of these grooves can be atomically smooth if the etch is carried out correctly, with dimensions and angles being extremely accurate. Some single crystal materials, such as silicon, will have different etching rates depending on the crystallographic orientation of the substrate. 
This is known as anisotropic etching, and one of the most common examples is the etching of silicon in KOH (potassium hydroxide), where Si <111> planes etch approximately 100 times slower than other planes (crystallographic orientations). Therefore, etching a rectangular hole in a (100)-Si wafer results in a pyramid-shaped etch pit with 54.7° walls, instead of a hole with curved sidewalls as with isotropic etching. Hydrofluoric acid is commonly used as an aqueous etchant for silicon dioxide (SiO2, also known as BOX for SOI), usually in 49% concentrated form, 5:1, 10:1 or 20:1 BOE (buffered oxide etchant) or BHF (Buffered HF). It was first used in medieval times for glass etching. It was used in IC fabrication for patterning the gate oxide until the process step was replaced by RIE. Hydrofluoric acid is considered one of the more dangerous acids in the cleanroom. Electrochemical etching (ECE) for dopant-selective removal of silicon is a common method to automate and to selectively control etching. An active p–n diode junction is required, and either type of dopant can be the etch-resistant ("etch-stop") material. Boron is the most common etch-stop dopant. In combination with wet anisotropic etching as described above, ECE has been used successfully for controlling silicon diaphragm thickness in commercial piezoresistive silicon pressure sensors. Selectively doped regions can be created either by implantation, diffusion, or epitaxial deposition of silicon. Dry etching Xenon difluoride (XeF2) is a dry vapor phase isotropic etch for silicon originally applied for MEMS in 1995 at the University of California, Los Angeles. Primarily used for releasing metal and dielectric structures by undercutting silicon, XeF2 has the advantage of a stiction-free release, unlike wet etchants. Its etch selectivity to silicon is very high, allowing it to work with photoresist, SiO2, silicon nitride, and various metals for masking. Its reaction with silicon is "plasmaless": it is purely chemical and spontaneous, and it is often operated in pulsed mode. Models of the etching action are available, and university laboratories and various commercial tools offer solutions using this approach. Modern VLSI processes avoid wet etching, and use plasma etching instead. Plasma etchers can operate in several modes by adjusting the parameters of the plasma. Ordinary plasma etching operates between 0.1 and 5 Torr. (This unit of pressure, commonly used in vacuum engineering, equals approximately 133.3 pascals.) The plasma produces energetic free radicals, neutrally charged, that react at the surface of the wafer. Since neutral particles attack the wafer from all angles, this process is isotropic. Plasma etching can be isotropic, i.e., exhibiting a lateral undercut rate on a patterned surface approximately the same as its downward etch rate, or can be anisotropic, i.e., exhibiting a smaller lateral undercut rate than its downward etch rate. Such anisotropy is maximized in deep reactive ion etching. The use of the term anisotropy for plasma etching should not be conflated with the use of the same term when referring to orientation-dependent etching. The source gas for the plasma usually contains small molecules rich in chlorine or fluorine. For instance, carbon tetrachloride (CCl4) etches silicon and aluminium, and trifluoromethane etches silicon dioxide and silicon nitride. A plasma containing oxygen is used to oxidize ("ash") photoresist and facilitate its removal. Ion milling, or sputter etching, uses lower pressures, often as low as 10−4 Torr (10 mPa). 
It bombards the wafer with energetic ions of noble gases, often Ar+, which knock atoms from the substrate by transferring momentum. Because the etching is performed by ions, which approach the wafer approximately from one direction, this process is highly anisotropic. On the other hand, it tends to display poor selectivity. Reactive-ion etching (RIE) operates under conditions intermediate between sputter and plasma etching (between 10−3 and 10−1 Torr). Deep reactive-ion etching (DRIE) modifies the RIE technique to produce deep, narrow features. In reactive-ion etching (RIE), the substrate is placed inside a reactor, and several gases are introduced. A plasma is struck in the gas mixture using an RF power source, which breaks the gas molecules into ions. The ions accelerate towards, and react with, the surface of the material being etched, forming another gaseous material. This is known as the chemical part of reactive ion etching. There is also a physical part, which is similar to the sputtering deposition process. If the ions have high enough energy, they can knock atoms out of the material to be etched without a chemical reaction. It is a very complex task to develop dry etch processes that balance chemical and physical etching, since there are many parameters to adjust. By changing the balance it is possible to influence the anisotropy of the etching: since the chemical part is isotropic and the physical part highly anisotropic, the combination can form sidewalls that have shapes from rounded to vertical. Deep reactive ion etching (DRIE) is a special subclass of RIE that is growing in popularity. In this process, etch depths of hundreds of micrometers are achieved with almost vertical sidewalls. The primary technology is based on the so-called "Bosch process", named after the German company Robert Bosch, which filed the original patent, where two different gas compositions alternate in the reactor. Currently, there are two variations of the DRIE. The first variation consists of three distinct steps (the original Bosch process) while the second variation only consists of two steps. In the first variation, the etch cycle is as follows: (i) isotropic etch; (ii) passivation; (iii) anisotropic etch for floor cleaning. In the second variation, steps (i) and (iii) are combined. Both variations operate similarly. The C4F8 creates a polymer on the surface of the substrate, and the second gas composition (SF6 and O2) etches the substrate. The polymer is immediately sputtered away by the physical part of the etching, but only on the horizontal surfaces and not the sidewalls. Since the polymer only dissolves very slowly in the chemical part of the etching, it builds up on the sidewalls and protects them from etching. As a result, etching aspect ratios of 50 to 1 can be achieved. The process can easily be used to etch completely through a silicon substrate, and etch rates are 3–6 times higher than wet etching. After preparing a large number of MEMS devices on a silicon wafer, individual dies have to be separated, which is called die preparation in semiconductor technology. For some applications, the separation is preceded by wafer backgrinding in order to reduce the wafer thickness. Wafer dicing may then be performed either by sawing using a cooling liquid or a dry laser process called stealth dicing. Manufacturing technologies Bulk micromachining is the oldest paradigm of silicon-based MEMS. The whole thickness of a silicon wafer is used for building the micro-mechanical structures. 
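Bulk micromachining often relies on the anisotropic KOH etch described earlier, whose fixed 54.7° sidewall angle makes cavity geometry easy to predict: a long, narrow mask opening of width W self-terminates in a V-groove of depth roughly 0.71·W. The short Python sketch below is pure geometry with an assumed opening width, not a process recipe.

import math

SIDEWALL_ANGLE_DEG = 54.74  # angle between the {111} sidewalls and the (100) wafer surface

def v_groove_depth_um(mask_opening_um: float) -> float:
    # The two inclined {111} sidewalls meet under the centre of the mask opening.
    return (mask_opening_um / 2.0) * math.tan(math.radians(SIDEWALL_ANGLE_DEG))

print(v_groove_depth_um(100.0))  # a 100 um opening self-terminates at about 70.7 um depth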
Silicon is machined using various etching processes. Bulk micromachining has been essential in enabling high performance pressure sensors and accelerometers that changed the sensor industry in the 1980s and 1990s. Surface micromachining uses layers deposited on the surface of a substrate as the structural materials, rather than using the substrate itself. Surface micromachining was created in the late 1980s to render micromachining of silicon more compatible with planar integrated circuit technology, with the goal of combining MEMS and integrated circuits on the same silicon wafer. The original surface micromachining concept was based on thin polycrystalline silicon layers patterned as movable mechanical structures and released by sacrificial etching of the underlying oxide layer. Interdigital comb electrodes were used to produce in-plane forces and to detect in-plane movement capacitively. This MEMS paradigm has enabled the manufacturing of low cost accelerometers for e.g. automotive air-bag systems and other applications where low performance and/or high g-ranges are sufficient. Analog Devices has pioneered the industrialization of surface micromachining and has realized the co-integration of MEMS and integrated circuits. Wafer bonding involves joining two or more substrates (usually having the same diameter) to one another to form a composite structure. There are several types of wafer bonding processes that are used in microsystems fabrication including: direct or fusion wafer bonding, wherein two or more wafers are bonded together that are usually made of silicon or some other semiconductor material; anodic bonding wherein a boron-doped glass wafer is bonded to a semiconductor wafer, usually silicon; thermocompression bonding, wherein an intermediary thin-film material layer is used to facilitate wafer bonding; and eutectic bonding, wherein a thin-film layer of gold is used to bond two silicon wafers. Each of these methods have specific uses depending on the circumstances. Most wafer bonding processes rely on three basic criteria for successfully bonding: the wafers to be bonded are sufficiently flat; the wafer surfaces are sufficiently smooth; and the wafer surfaces are sufficiently clean. The most stringent criteria for wafer bonding is usually the direct fusion wafer bonding since even one or more small particulates can render the bonding unsuccessful. In comparison, wafer bonding methods that use intermediary layers are often far more forgiving. Both bulk and surface silicon micromachining are used in the industrial production of sensors, ink-jet nozzles, and other devices. But in many cases the distinction between these two has diminished. A new etching technology, deep reactive-ion etching, has made it possible to combine good performance typical of bulk micromachining with comb structures and in-plane operation typical of surface micromachining. While it is common in surface micromachining to have structural layer thickness in the range of 2 μm, in HAR silicon micromachining the thickness can be from 10 to 100 μm. The materials commonly used in HAR silicon micromachining are thick polycrystalline silicon, known as epi-poly, and bonded silicon-on-insulator (SOI) wafers although processes for bulk silicon wafer also have been created (SCREAM). Bonding a second wafer by glass frit bonding, anodic bonding or alloy bonding is used to protect the MEMS structures. Integrated circuits are typically not combined with HAR silicon micromachining. 
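The interdigitated comb electrodes mentioned above are usually analysed with a simple parallel-plate model: each engaged finger contributes a capacitance of about 2·ε0·t·x/g, so the lateral electrostatic force is N·ε0·t·V²/g, independent of the engagement x. The sketch below uses assumed, illustrative dimensions rather than figures from this article.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def comb_drive_force_N(n_fingers: int, thickness_m: float, gap_m: float, volts: float) -> float:
    # Lateral force of an electrostatic comb drive in the parallel-plate approximation.
    return n_fingers * EPS0 * thickness_m * volts**2 / gap_m

# Hypothetical device: 100 fingers, 2 um structural thickness, 2 um gaps, 10 V drive.
print(comb_drive_force_N(100, 2e-6, 2e-6, 10.0))  # about 8.9e-8 N, a fraction of a micronewton

Forces of this sub-micronewton scale are why comb-driven surface-micromachined devices favour very compliant suspensions and capacitive rather than force-based readout.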
Applications Some common commercial applications of MEMS include: Inkjet printers, which use piezoelectrics or thermal bubble ejection to deposit ink on paper. Accelerometers in modern cars for a large number of purposes including airbag deployment and electronic stability control. Inertial measurement units (IMUs): MEMS accelerometers. MEMS gyroscopes in remote controlled, or autonomous, helicopters, planes and multirotors (also known as drones), used for automatically sensing and balancing flying characteristics of roll, pitch and yaw. MEMS magnetic field sensor (magnetometer) may also be incorporated in such devices to provide directional heading. MEMS inertial navigation systems (INSs) of modern cars, airplanes, submarines and other vehicles to detect yaw, pitch, and roll; for example, the autopilot of an airplane. Accelerometers in consumer electronics devices such as game controllers (Nintendo Wii), personal media players / cell phones (virtually all smartphones, various HTC PDA models), augmented reality (AR) and virtual reality (VR) devices, and a number of digital cameras (various Canon Digital IXUS models). Also used in PCs to park the hard disk head when free-fall is detected, to prevent damage and data loss. MEMS barometers. MEMS microphones in portable devices, e.g., mobile phones, head sets and laptops. The market for smart microphones includes smartphones, wearable devices, smart home and automotive applications. Precision temperature-compensated resonators in real-time clocks. Silicon pressure sensors e.g., car tire pressure sensors, and disposable blood pressure sensors. Displays e.g., the digital micromirror device (DMD) chip in a projector based on DLP technology, which has a surface with several hundred thousand micromirrors or single micro-scanning-mirrors also called microscanners. The MEMS mirrors can also be used in conjunction with laser scanning to project an image. Optical switching technology, which is used for switching technology and alignment for data communications. RF switches and relays. Bio-MEMS applications in medical and health related technologies including lab-on-a-chip (taking advantage of microfluidics and micropumps), biosensors, chemosensors as well as embedded components of medical devices e.g. stents. Interferometric modulator display (IMOD) applications in consumer electronics (primarily displays for mobile devices), used to create interferometric modulation − reflective display technology as found in mirasol displays. Fluid acceleration, such as for micro-cooling. Micro-scale energy harvesting including piezoelectric, electrostatic and electromagnetic micro harvesters. Micromachined ultrasound transducers. MEMS-based loudspeakers focusing on applications such as in-ear headphones and hearing aids. MEMS oscillators. MEMS-based scanning probe microscopes including atomic force microscopes. LiDAR (light detection and ranging). Industry structure The global market for micro-electromechanical systems, which includes products such as automobile airbag systems, display systems and inkjet cartridges totaled $40 billion in 2006 according to Global MEMS/Microsystems Markets and Opportunities, a research report from SEMI and Yole Development and is forecasted to reach $72 billion by 2011. Companies with strong MEMS programs come in many sizes. Larger firms specialize in manufacturing high volume inexpensive components or packaged solutions for end markets such as automobiles, biomedical, and electronics. 
Smaller firms provide value in innovative solutions and absorb the expense of custom fabrication with high sales margins. Both large and small companies typically invest in R&D to explore new MEMS technology. The market for materials and equipment used to manufacture MEMS devices topped $1 billion worldwide in 2006. Materials demand is driven by substrates, making up over 70 percent of the market, packaging coatings and increasing use of chemical mechanical planarization (CMP). While MEMS manufacturing continues to be dominated by used semiconductor equipment, there is a migration to 200mm lines and select new tools, including etch and bonding for certain MEMS applications.
Technology
Machinery and tools: General
null
19662
https://en.wikipedia.org/wiki/Mean%20value%20theorem
Mean value theorem
In mathematics, the mean value theorem (or Lagrange's mean value theorem) states, roughly, that for a given planar arc between two endpoints, there is at least one point at which the tangent to the arc is parallel to the secant through its endpoints. It is one of the most important results in real analysis. This theorem is used to prove statements about a function on an interval starting from local hypotheses about derivatives at points of the interval. History A special case of this theorem for inverse interpolation of the sine was first described by Parameshvara (1380–1460), from the Kerala School of Astronomy and Mathematics in India, in his commentaries on Govindasvāmi and Bhāskara II. A restricted form of the theorem was proved by Michel Rolle in 1691; the result was what is now known as Rolle's theorem, and was proved only for polynomials, without the techniques of calculus. The mean value theorem in its modern form was stated and proved by Augustin Louis Cauchy in 1823. Many variations of this theorem have been proved since then. Statement Let be a continuous function on the closed interval and differentiable on the open interval where Then there exists some in such that: The mean value theorem is a generalization of Rolle's theorem, which assumes , so that the right-hand side above is zero. The mean value theorem is still valid in a slightly more general setting. One only needs to assume that is continuous on , and that for every in the limit exists as a finite number or equals or . If finite, that limit equals . An example where this version of the theorem applies is given by the real-valued cube root function mapping , whose derivative tends to infinity at the origin. Proof The expression gives the slope of the line joining the points and , which is a chord of the graph of , while gives the slope of the tangent to the curve at the point . Thus the mean value theorem says that given any chord of a smooth curve, we can find a point on the curve lying between the end-points of the chord such that the tangent of the curve at that point is parallel to the chord. The following proof illustrates this idea. Define , where is a constant. Since is continuous on and differentiable on , the same is true for . We now want to choose so that satisfies the conditions of Rolle's theorem. Namely By Rolle's theorem, since is differentiable and , there is some in for which , and it follows from the equality that, Implications Theorem 1: Assume that is a continuous, real-valued function, defined on an arbitrary interval of the real line. If the derivative of at every interior point of the interval exists and is zero, then is constant in the interior. Proof: Assume the derivative of at every interior point of the interval exists and is zero. Let be an arbitrary open interval in . By the mean value theorem, there exists a point in such that This implies that . Thus, is constant on the interior of and thus is constant on by continuity. (See below for a multivariable version of this result.) Remarks: Only continuity of , not differentiability, is needed at the endpoints of the interval . No hypothesis of continuity needs to be stated if is an open interval, since the existence of a derivative at a point implies the continuity at this point. (See the section continuity and differentiability of the article derivative.) The differentiability of can be relaxed to one-sided differentiability, a proof is given in the article on semi-differentiability. 
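In standard notation, the statement and the proof device described above read as follows (supplied here for concreteness, in the usual textbook form): if f is continuous on the closed interval [a, b] and differentiable on the open interval (a, b), with a < b, then there exists c in (a, b) such that

\[ f'(c) = \frac{f(b) - f(a)}{b - a}. \]

The proof applies Rolle's theorem to the auxiliary function g(x) = f(x) − rx, choosing the constant r = (f(b) − f(a))/(b − a) so that g(a) = g(b); any point c with g'(c) = 0 then satisfies the displayed equality. Likewise, the constancy statement of Theorem 1 follows because, for any two points x1 < x2 in the interval, the theorem gives f(x2) − f(x1) = f'(c)(x2 − x1) = 0.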
Theorem 2: If for all in an interval of the domain of these functions, then is constant, i.e. where is a constant on . Proof: Let , then on the interval , so the above theorem 1 tells that is a constant or . Theorem 3: If is an antiderivative of on an interval , then the most general antiderivative of on is where is a constant. Proof: It directly follows from the theorem 2 above. Cauchy's mean value theorem Cauchy's mean value theorem, also known as the extended mean value theorem, is a generalization of the mean value theorem. It states: if the functions and are both continuous on the closed interval and differentiable on the open interval , then there exists some , such that Of course, if and , this is equivalent to: Geometrically, this means that there is some tangent to the graph of the curve which is parallel to the line defined by the points and . However, Cauchy's theorem does not claim the existence of such a tangent in all cases where and are distinct points, since it might be satisfied only for some value with , in other words a value for which the mentioned curve is stationary; in such points no tangent to the curve is likely to be defined at all. An example of this situation is the curve given by which on the interval goes from the point to , yet never has a horizontal tangent; however it has a stationary point (in fact a cusp) at . Cauchy's mean value theorem can be used to prove L'Hôpital's rule. The mean value theorem is the special case of Cauchy's mean value theorem when . Proof The proof of Cauchy's mean value theorem is based on the same idea as the proof of the mean value theorem. Suppose . Define , where is fixed in such a way that , namely Since and are continuous on and differentiable on , the same is true for . All in all, satisfies the conditions of Rolle's theorem: consequently, there is some in for which . Now using the definition of we have: and thus If , then, applying Rolle's theorem to , it follows that there exists in for which . Using this choice of , Cauchy's mean value theorem (trivially) holds. Mean value theorem in several variables The mean value theorem generalizes to real functions of multiple variables. The trick is to use parametrization to create a real function of one variable, and then apply the one-variable theorem. Let be an open subset of , and let be a differentiable function. Fix points such that the line segment between lies in , and define . Since is a differentiable function in one variable, the mean value theorem gives: for some between 0 and 1. But since and , computing explicitly we have: where denotes a gradient and a dot product. This is an exact analog of the theorem in one variable (in the case this is the theorem in one variable). By the Cauchy–Schwarz inequality, the equation gives the estimate: In particular, when is convex and the partial derivatives of are bounded, is Lipschitz continuous (and therefore uniformly continuous). As an application of the above, we prove that is constant if the open subset is connected and every partial derivative of is 0. Pick some point , and let . We want to show for every . For that, let . Then is closed in and nonempty. It is open too: for every , for every in open ball centered at and contained in . Since is connected, we conclude . The above arguments are made in a coordinate-free manner; hence, they generalize to the case when is a subset of a Banach space. 
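For concreteness, the two results just discussed can be written out in the usual symbolic form. Cauchy's mean value theorem asserts the existence of c in (a, b) with

\[ \bigl(f(b) - f(a)\bigr)\, g'(c) \;=\; \bigl(g(b) - g(a)\bigr)\, f'(c), \]

which, when g(a) ≠ g(b) and g'(c) ≠ 0, takes the familiar quotient form f'(c)/g'(c) = (f(b) − f(a))/(g(b) − g(a)). The several-variables version obtained by parametrizing the segment from x to y reads

\[ f(y) - f(x) \;=\; \nabla f\bigl((1 - t)\,x + t\,y\bigr) \cdot (y - x) \quad \text{for some } t \in (0, 1), \]

and the Cauchy–Schwarz inequality then bounds |f(y) − f(x)| by the supremum of ‖∇f‖ along the segment times ‖y − x‖, which is the Lipschitz estimate mentioned above.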
Mean value theorem for vector-valued functions There is no exact analog of the mean value theorem for vector-valued functions (see below). However, there is an inequality which can be applied to many of the same situations to which the mean value theorem is applicable in the one dimensional case: Mean value inequality Jean Dieudonné in his classic treatise Foundations of Modern Analysis discards the mean value theorem and replaces it by mean inequality as the proof is not constructive and one cannot find the mean value and in applications one only needs mean inequality. Serge Lang in Analysis I uses the mean value theorem, in integral form, as an instant reflex but this use requires the continuity of the derivative. If one uses the Henstock–Kurzweil integral one can have the mean value theorem in integral form without the additional assumption that derivative should be continuous as every derivative is Henstock–Kurzweil integrable. The reason why there is no analog of mean value equality is the following: If is a differentiable function (where is open) and if , is the line segment in question (lying inside ), then one can apply the above parametrization procedure to each of the component functions of f (in the above notation set ). In doing so one finds points on the line segment satisfying But generally there will not be a single point on the line segment satisfying for all simultaneously. For example, define: Then , but and are never simultaneously zero as ranges over . The above theorem implies the following: In fact, the above statement suffices for many applications and can be proved directly as follows. (We shall write for for readability.) Cases where the theorem cannot be applied All conditions for the mean value theorem are necessary: is differentiable on is continuous on is real-valued When one of the above conditions is not satisfied, the mean value theorem is not valid in general, and so it cannot be applied. The necessity of the first condition can be seen by the counterexample where the function on [-1,1] is not differentiable. The necessity of the second condition can be seen by the counterexample where the function satisfies criteria 1 since on but not criteria 2 since and for all so no such exists. The theorem is false if a differentiable function is complex-valued instead of real-valued. For example, if for all real , then while for any real . Mean value theorems for definite integrals First mean value theorem for definite integrals Let f : [a, b] → R be a continuous function. Then there exists c in (a, b) such that This follows at once from the fundamental theorem of calculus, together with the mean value theorem for derivatives. Since the mean value of f on [a, b] is defined as we can interpret the conclusion as f achieves its mean value at some c in (a, b). In general, if f : [a, b] → R is continuous and g is an integrable function that does not change sign on [a, b], then there exists c in (a, b) such that Second mean value theorem for definite integrals There are various slightly different theorems called the second mean value theorem for definite integrals. A commonly found version is as follows: If is a positive monotonically decreasing function and is an integrable function, then there exists a number x in (a, b] such that Here stands for , the existence of which follows from the conditions. Note that it is essential that the interval (a, b] contains b. 
A variant not having this requirement is: If is a monotonic (not necessarily decreasing and positive) function and is an integrable function, then there exists a number x in (a, b) such that If the function returns a multi-dimensional vector, then the MVT for integration is not true, even if the domain of is also multi-dimensional. For example, consider the following 2-dimensional function defined on an -dimensional cube: Then, by symmetry it is easy to see that the mean value of over its domain is (0,0): However, there is no point in which , because everywhere. Generalizations Linear algebra Assume that and are differentiable functions on that are continuous on . Define There exists such that . Notice that and if we place , we get Cauchy's mean value theorem. If we place and we get Lagrange's mean value theorem. The proof of the generalization is quite simple: each of and are determinants with two identical rows, hence . The Rolle's theorem implies that there exists such that . Probability theory Let X and Y be non-negative random variables such that E[X] < E[Y] < ∞ and (i.e. X is smaller than Y in the usual stochastic order). Then there exists an absolutely continuous non-negative random variable Z having probability density function Let g be a measurable and differentiable function such that E[g(X)], E[g(Y)] < ∞, and let its derivative g′ be measurable and Riemann-integrable on the interval [x, y] for all y ≥ x ≥ 0. Then, E[g′(Z)] is finite and Complex analysis As noted above, the theorem does not hold for differentiable complex-valued functions. Instead, a generalization of the theorem is stated such: Let f : Ω → C be a holomorphic function on the open convex set Ω, and let a and b be distinct points in Ω. Then there exist points u, v on the interior of the line segment from a to b such that Where Re() is the real part and Im() is the imaginary part of a complex-valued function.
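The linear-algebra generalization sketched above can be made explicit (standard presentation, stated here for concreteness). For f, g, h continuous on [a, b] and differentiable on (a, b), define

\[ D(x) \;=\; \begin{vmatrix} f(x) & g(x) & h(x) \\ f(a) & g(a) & h(a) \\ f(b) & g(b) & h(b) \end{vmatrix}. \]

Then D(a) = D(b) = 0, because each of those determinants has two identical rows, so Rolle's theorem yields some c in (a, b) with D'(c) = 0. Taking h(x) = 1 recovers Cauchy's mean value theorem, and taking h(x) = 1 and g(x) = x recovers Lagrange's.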
Mathematics
Real analysis
null
19673
https://en.wikipedia.org/wiki/MP3
MP3
MP3 (formally MPEG-1 Audio Layer III or MPEG-2 Audio Layer III) is a coding format for digital audio developed largely by the Fraunhofer Society in Germany under the lead of Karlheinz Brandenburg. It was designed to greatly reduce the amount of data required to represent audio, yet still sound like a faithful reproduction of the original uncompressed audio to most listeners; for example, compared to CD-quality digital audio, MP3 compression can commonly achieve a 75–95% reduction in size, depending on the bit rate. In popular usage, MP3 often refers to files of sound or music recordings stored in the MP3 file format (.mp3) on consumer electronic devices. Originally defined in 1991 as the third audio format of the MPEG-1 standard, it was retained and further extended—defining additional bit rates and support for more audio channels—as the third audio format of the subsequent MPEG-2 standard. MP3 as a file format commonly designates files containing an elementary stream of MPEG-1 Audio or MPEG-2 Audio encoded data, without other complexities of the MP3 standard. Concerning audio compression, which is its most apparent element to end-users, MP3 uses lossy compression to encode data using inexact approximations and the partial discarding of data, allowing for a large reduction in file sizes when compared to uncompressed audio. The combination of small size and acceptable fidelity led to a boom in the distribution of music over the Internet in the late 1990s, with MP3 serving as an enabling technology at a time when bandwidth and storage were still at a premium. The MP3 format soon became associated with controversies surrounding copyright infringement, music piracy, and the file-ripping and sharing services MP3.com and Napster, among others. With the advent of portable media players (including "MP3 players"), a product category also including smartphones, MP3 support remains near-universal and a de facto standard for digital audio. History The Moving Picture Experts Group (MPEG) designed MP3 as part of its MPEG-1, and later MPEG-2, standards. MPEG-1 Audio (MPEG-1 Part 3), which included MPEG-1 Audio Layer I, II, and III, was approved as a committee draft for an ISO/IEC standard in 1991, finalized in 1992, and published in 1993 as ISO/IEC 11172-3:1993. An MPEG-2 Audio (MPEG-2 Part 3) extension with lower sample and bit rates was published in 1995 as ISO/IEC 13818-3:1995. It requires only minimal modifications to existing MPEG-1 decoders (recognition of the MPEG-2 bit in the header and addition of the new lower sample and bit rates). Background The MP3 lossy compression algorithm takes advantage of a perceptual limitation of human hearing called auditory masking. In 1894, the American physicist Alfred M. Mayer reported that a tone could be rendered inaudible by another tone of lower frequency. In 1959, Richard Ehmer described a complete set of auditory curves regarding this phenomenon. Between 1967 and 1974, Eberhard Zwicker did work in the areas of tuning and masking of critical frequency-bands, which in turn built on the fundamental research in the area from Harvey Fletcher and his collaborators at Bell Labs. Perceptual coding was first used for speech coding compression with linear predictive coding (LPC), which has origins in the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966. In 1978, Bishnu S. Atal and Manfred R. 
Schroeder at Bell Labs proposed an LPC speech codec, called adaptive predictive coding, that used a psychoacoustic coding-algorithm exploiting the masking properties of the human ear. Further optimization by Schroeder and Atal with J.L. Hall was later reported in a 1979 paper. That same year, a psychoacoustic masking codec was also proposed by M. A. Krasner, who published and produced hardware for speech (not usable as music bit-compression), but the publication of his results in a relatively obscure Lincoln Laboratory Technical Report did not immediately influence the mainstream of psychoacoustic codec-development. The discrete cosine transform (DCT), a type of transform coding for lossy compression, proposed by Nasir Ahmed in 1972, was developed by Ahmed with T. Natarajan and K. R. Rao in 1973; they published their results in 1974. This led to the development of the modified discrete cosine transform (MDCT), proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987, following earlier work by Princen and Bradley in 1986. The MDCT later became a core part of the MP3 algorithm. Ernst Terhardt and other collaborators constructed an algorithm describing auditory masking with high accuracy in 1982. This work added to a variety of reports from authors dating back to Fletcher, and to the work that initially determined critical ratios and critical bandwidths. In 1985, Atal and Schroeder presented code-excited linear prediction (CELP), an LPC-based perceptual speech-coding algorithm with auditory masking that achieved a significant data compression ratio for its time. IEEE's refereed Journal on Selected Areas in Communications reported on a wide variety of (mostly perceptual) audio compression algorithms in 1988. The "Voice Coding for Communications" edition published in February 1988 reported on a wide range of established, working audio bit compression technologies, some of them using auditory masking as part of their fundamental design, and several showing real-time hardware implementations. Development The genesis of the MP3 technology is fully described in a paper from Professor Hans Musmann, who chaired the ISO MPEG Audio group for several years. In December 1988, MPEG called for an audio coding standard. In June 1989, 14 audio coding algorithms were submitted. Because of certain similarities between these coding proposals, they were clustered into four development groups. The first group was ASPEC, by Fraunhofer Gesellschaft, AT&T, France Telecom, Deutsche and Thomson-Brandt. The second group was MUSICAM, by Matsushita, CCETT, ITT and Philips. The third group was ATAC (ATRAC Coding), by Fujitsu, JVC, NEC and Sony. And the fourth group was SB-ADPCM, by NTT and BTRL. The immediate predecessors of MP3 were "Optimum Coding in the Frequency Domain" (OCF), and Perceptual Transform Coding (PXFM). These two codecs, along with block-switching contributions from Thomson-Brandt, were merged into a codec called ASPEC, which was submitted to MPEG, and which won the quality competition, but that was mistakenly rejected as too complex to implement. The first practical implementation of an audio perceptual coder (OCF) in hardware (Krasner's hardware was too cumbersome and slow for practical use), was an implementation of a psychoacoustic transform coder based on Motorola 56000 DSP chips. Another predecessor of the MP3 format and technology is to be found in the perceptual codec MUSICAM based on an integer arithmetics 32 sub-bands filter bank, driven by a psychoacoustic model. 
It was primarily designed for Digital Audio Broadcasting (digital radio) and digital TV, and its basic principles were disclosed to the scientific community by CCETT (France) and IRT (Germany) in Atlanta during an IEEE-ICASSP conference in 1991, after having worked on MUSICAM with Matsushita and Philips since 1989. This codec incorporated into a broadcasting system using COFDM modulation was demonstrated on air and in the field with Radio Canada and CRC Canada during the NAB show (Las Vegas) in 1991. The implementation of the audio part of this broadcasting system was based on a two-chip encoder (one for the subband transform, one for the psychoacoustic model designed by the team of G. Stoll (IRT Germany), later known as psychoacoustic model I) and a real-time decoder using one Motorola 56001 DSP chip running an integer arithmetics software designed by Y.F. Dehery's team (CCETT, France). The simplicity of the corresponding decoder together with the high audio quality of this codec using for the first time a 48 kHz sampling rate, a 20 bits/sample input format (the highest available sampling standard in 1991, compatible with the AES/EBU professional digital input studio standard) were the main reasons to later adopt the characteristics of MUSICAM as the basic features for an advanced digital music compression codec. During the development of the MUSICAM encoding software, Stoll and Dehery's team made thorough use of a set of high-quality audio assessment material selected by a group of audio professionals from the European Broadcasting Union, and later used as a reference for the assessment of music compression codecs. The subband coding technique was found to be efficient, not only for the perceptual coding of high-quality sound materials but especially for the encoding of critical percussive sound materials (drums, triangle,...), due to the specific temporal masking effect of the MUSICAM sub-band filterbank (this advantage being a specific feature of short transform coding techniques). As a doctoral student at Germany's University of Erlangen-Nuremberg, Karlheinz Brandenburg began working on digital music compression in the early 1980s, focusing on how people perceive music. He completed his doctoral work in 1989. MP3 is directly descended from OCF and PXFM, representing the outcome of the collaboration of Brandenburg — working as a postdoctoral researcher at AT&T-Bell Labs with James D. Johnston ("JJ") of AT&T-Bell Labs — with the Fraunhofer Institute for Integrated Circuits, Erlangen (where he worked with Bernhard Grill and four other researchers – "The Original Six"), with relatively minor contributions from the MP2 branch of psychoacoustic sub-band coders. In 1990, Brandenburg became an assistant professor at Erlangen-Nuremberg. While there, he continued to work on music compression with scientists at the Fraunhofer Society's Heinrich Herz Institute. In 1993, he joined the staff of Fraunhofer HHI. An acapella version of the song "Tom's Diner" by Suzanne Vega was the first song used by Brandenburg to develop the MP3 format. It was used as a benchmark to see how well MP3's compression algorithm handled the human voice. Brandenburg adopted the song for testing purposes, listening to it again and again each time he refined the compression algorithm, making sure it did not adversely affect the reproduction of Vega's voice. Accordingly, he dubbed Vega the "Mother of MP3". Instrumental music had been easier to compress, but Vega's voice sounded unnatural in early versions of the format. 
Brandenburg eventually met Vega and heard Tom's Diner performed live. Standardization In 1991, two available proposals were assessed for an MPEG audio standard: MUSICAM (Masking pattern adapted Universal Subband Integrated Coding And Multiplexing) and ASPEC (Adaptive Spectral Perceptual Entropy Coding). The MUSICAM technique, proposed by Philips (Netherlands), CCETT (France), the Institute for Broadcast Technology (Germany), and Matsushita (Japan), was chosen due to its simplicity and error robustness, as well as for its high level of computational efficiency. The MUSICAM format, based on sub-band coding, became the basis for the MPEG Audio compression format, incorporating, for example, its frame structure, header format, sample rates, etc. While much of MUSICAM technology and ideas were incorporated into the definition of MPEG Audio Layer I and Layer II, the filter bank alone and the data structure based on 1152 samples framing (file format and byte-oriented stream) of MUSICAM remained in the Layer III (MP3) format, as part of the computationally inefficient hybrid filter bank. Under the chairmanship of Professor Musmann of the Leibniz University Hannover, the editing of the standard was delegated to Leon van de Kerkhof (Netherlands), Gerhard Stoll (Germany), and Yves-François Dehery (France), who worked on Layer I and Layer II. ASPEC was the joint proposal of AT&T Bell Laboratories, Thomson Consumer Electronics, Fraunhofer Society, and CNET. It provided the highest coding efficiency. A working group consisting of van de Kerkhof, Stoll, Leonardo Chiariglione (CSELT VP for Media), Yves-François Dehery, Karlheinz Brandenburg (Germany) and James D. Johnston (United States) took ideas from ASPEC, integrated the filter bank from Layer II, added some of their ideas such as the joint stereo coding of MUSICAM and created the MP3 format, which was designed to achieve the same quality at 128 kbit/s as MP2 at . The algorithms for MPEG-1 Audio Layer I, II and III were approved in 1991 and finalized in 1992 as part of MPEG-1, the first standard suite by MPEG, which resulted in the international standard ISO/IEC 11172-3 (a.k.a. MPEG-1 Audio or MPEG-1 Part 3), published in 1993. Files or data streams conforming to this standard must handle sample rates of 48k, 44100, and 32k and continue to be supported by current MP3 players and decoders. Thus the first generation of MP3 defined interpretations of MP3 frame data structures and size layouts. The compression efficiency of encoders is typically defined by the bit rate because the compression ratio depends on the bit depth and sampling rate of the input signal. Nevertheless, compression ratios are often published. They may use the compact disc (CD) parameters as references (44.1 kHz, 2 channels at 16 bits per channel or 2×16 bit), or sometimes the Digital Audio Tape (DAT) SP parameters (48 kHz, 2×16 bit). Compression ratios with this latter reference are higher, which demonstrates the problem with the use of the term compression ratio for lossy encoders. Karlheinz Brandenburg used a CD recording of Suzanne Vega's song "Tom's Diner" to assess and refine the MP3 compression algorithm. This song was chosen because of its nearly monophonic nature and wide spectral content, making it easier to hear imperfections in the compression format during playbacks. 
This particular track has an interesting property in that the two channels are almost, but not completely, the same, leading to a case where Binaural Masking Level Depression causes spatial unmasking of noise artifacts unless the encoder properly recognizes the situation and applies corrections similar to those detailed in the MPEG-2 AAC psychoacoustic model. Some more critical audio excerpts (glockenspiel, triangle, accordion, etc.) were taken from the EBU V3/SQAM reference compact disc and have been used by professional sound engineers to assess the subjective quality of the MPEG Audio formats. Going public A reference simulation software implementation, written in the C language and later known as ISO 11172-5, was developed (in 1991–1996) by the members of the ISO MPEG Audio committee to produce bit-compliant MPEG Audio files (Layer 1, Layer 2, Layer 3). It was approved as a committee draft of the ISO/IEC technical report in March 1994 and printed as document CD 11172-5 in April 1994. It was approved as a draft technical report (DTR/DIS) in November 1994, finalized in 1996 and published as international standard ISO/IEC TR 11172-5:1998 in 1998. The reference software in C language was later published as a freely available ISO standard. Working in non-real time on several operating systems, it was able to demonstrate the first real-time hardware decoding (DSP based) of compressed audio. Some other real-time implementations of MPEG Audio encoders and decoders were available for digital broadcasting (radio DAB, television DVB) towards consumer receivers and set-top boxes. On 7 July 1994, the Fraunhofer Society released the first software MP3 encoder, called l3enc. The filename extension .mp3 was chosen by the Fraunhofer team on 14 July 1995 (previously, the files had been named .bit). With the first real-time software MP3 player WinPlay3 (released 9 September 1995) many people were able to encode and play back MP3 files on their PCs. Because of the relatively small hard drives of the era (≈500–1000 MB) lossy compression was essential to store multiple albums' worth of music on a home computer as full recordings (as opposed to MIDI notation, or tracker files which combined notation with short recordings of instruments playing single notes). Fraunhofer example implementation A hacker named SoloH discovered the source code of the "dist10" MPEG reference implementation shortly after the release on the servers of the University of Erlangen. He developed a higher-quality version and spread it on the internet. This code started the widespread CD ripping and digital music distribution as MP3 over the internet. Further versions Further work on MPEG audio was finalized in 1994 as part of the second suite of MPEG standards, MPEG-2, more formally known as international standard ISO/IEC 13818-3 (a.k.a. MPEG-2 Part 3 or backward compatible MPEG-2 Audio or MPEG-2 Audio BC), originally published in 1995. MPEG-2 Part 3 (ISO/IEC 13818-3) defined 42 additional bit rates and sample rates for MPEG-1 Audio Layer I, II and III. The new sampling rates are exactly half that of those originally defined in MPEG-1 Audio. This reduction in sampling rates serves to cut the available frequency fidelity in half while likewise cutting the bit rate by 50%. MPEG-2 Part 3 also enhanced MPEG-1's audio by allowing the coding of audio programs with more than two channels, up to 5.1 multichannel. An MP3 coded with MPEG-2 results in half of the bandwidth reproduction of MPEG-1 appropriate for piano and singing. 
A third generation of "MP3" style data streams (files) extended the MPEG-2 ideas and implementation but was named MPEG-2.5 audio since MPEG-3 already had a different meaning. This extension was developed at Fraunhofer IIS, the registered patent holder of MP3, by reducing the frame sync field in the MP3 header from 12 to 11 bits. As in the transition from MPEG-1 to MPEG-2, MPEG-2.5 adds additional sampling rates exactly half of those available using MPEG-2. It thus widens the scope of MP3 to include human speech and other applications yet requires only 25% of the bandwidth (frequency reproduction) possible using MPEG-1 sampling rates. While not an ISO-recognized standard, MPEG-2.5 is widely supported by both inexpensive Chinese and brand-name digital audio players as well as computer software-based MP3 encoders (LAME), decoders (FFmpeg) and players (MPC) adding additional MP3 frame types. Each generation of MP3 thus supports 3 sampling rates exactly half that of the previous generation for a total of 9 varieties of MP3 format files. The sample rate comparison table between MPEG-1, 2, and 2.5 is given later in the article. MPEG-2.5 is supported by LAME (since 2000), Media Player Classic (MPC), iTunes, and FFmpeg. MPEG-2.5 was not developed by MPEG (see above) and was never approved as an international standard. MPEG-2.5 is thus an unofficial or proprietary extension to the MP3 format. It is nonetheless ubiquitous and especially advantageous for low-bit-rate human speech applications. The ISO standard ISO/IEC 11172-3 (a.k.a. MPEG-1 Audio) defined three formats: the MPEG-1 Audio Layer I, Layer II and Layer III. The ISO standard ISO/IEC 13818-3 (a.k.a. MPEG-2 Audio) defined an extended version of MPEG-1 Audio: MPEG-2 Audio Layer I, Layer II, and Layer III. MPEG-2 Audio (MPEG-2 Part 3) should not be confused with MPEG-2 AAC (MPEG-2 Part 7 – ISO/IEC 13818-7). LAME is the most advanced MP3 encoder. LAME includes a variable bit rate (VBR) encoding which uses a quality parameter rather than a bit rate goal. Later versions (2008+) support an n.nnn quality goal which automatically selects MPEG-2 or MPEG-2.5 sampling rates as appropriate for human speech recordings that need only 5512 Hz bandwidth resolution. Internet distribution In the second half of the 1990s, MP3 files began to spread on the Internet, often via underground pirated song networks. The first known experiment in Internet distribution was organized in the early 1990s by the Internet Underground Music Archive, better known by the acronym IUMA. After some experiments using uncompressed audio files, this archive started to deliver on the native worldwide low-speed Internet some compressed MPEG Audio files using the MP2 (Layer II) format and later on used MP3 files when the standard was fully completed. The popularity of MP3s began to rise rapidly with the advent of Nullsoft's audio player Winamp, released in 1997, which still had in 2023 a community of 80 million active users. In 1998, the first portable solid-state digital audio player MPMan, developed by SaeHan Information Systems, which is headquartered in Seoul, South Korea, was released and the Rio PMP300 was sold afterward in 1998, despite legal suppression efforts by the RIAA. In November 1997, the website mp3.com was offering thousands of MP3s created by independent artists for free. The small size of MP3 files enabled widespread peer-to-peer file sharing of music ripped from CDs, which would have previously been nearly impossible. 
The first large peer-to-peer filesharing network, Napster, was launched in 1999. The ease of creating and sharing MP3s resulted in widespread copyright infringement. Major record companies argued that this free sharing of music reduced sales, and called it "music piracy". They reacted by pursuing lawsuits against Napster, which was eventually shut down and later sold, and against individual users who engaged in file sharing. Unauthorized MP3 file sharing continues on next-generation peer-to-peer networks. Some authorized services, such as Beatport, Bleep, Juno Records, eMusic, Zune Marketplace, Walmart.com, Rhapsody, the recording industry approved re-incarnation of Napster, and Amazon.com sell unrestricted music in the MP3 format.
Design
File structure
An MP3 file is made up of MP3 frames, which consist of a header and a data block. This sequence of frames is called an elementary stream. Due to the "bit reservoir", frames are not independent items and cannot usually be extracted on arbitrary frame boundaries. The MP3 data blocks contain the (compressed) audio information in terms of frequencies and amplitudes. The MP3 header consists of a sync word, which is used to identify the beginning of a valid frame. This is followed by a bit indicating that this is the MPEG standard and two bits that indicate that layer 3 is used; hence MPEG-1 Audio Layer 3 or MP3. After this, the values will differ, depending on the MP3 file. ISO/IEC 11172-3 defines the range of values for each section of the header along with the specification of the header. Most MP3 files today contain ID3 metadata, which precedes or follows the MP3 frames. The data stream can contain an optional checksum. Joint stereo is done only on a frame-to-frame basis.
Encoding and decoding
In short, MP3 compression works by reducing the accuracy of certain components of sound that are considered (by psychoacoustic analysis) to be beyond the hearing capabilities of most humans. This method is commonly referred to as perceptual coding or psychoacoustic modeling. The remaining audio information is then recorded in a space-efficient manner using MDCT and FFT algorithms. The MP3 encoding algorithm is generally split into four parts. Part 1 divides the audio signal into smaller pieces, called frames, and an MDCT filter is then performed on the output. Part 2 passes the sample into a 1024-point fast Fourier transform (FFT), then the psychoacoustic model is applied and another MDCT filter is performed on the output. Part 3 quantizes and encodes each sample, a step known as noise allocation, which adjusts itself to meet the bit rate and sound masking requirements. Part 4 formats the bitstream, called an audio frame, which is made up of 4 parts: the header, error check, audio data, and ancillary data. The MPEG-1 standard does not include a precise specification for an MP3 encoder but does provide examples of psychoacoustic models, rate loops, and the like in the non-normative part of the original standard. MPEG-2 doubles the number of sampling rates that are supported and MPEG-2.5 adds 3 more. When this was written, the suggested implementations were quite dated. Implementers of the standard were supposed to devise algorithms suitable for removing parts of the information from the audio input. As a result, many different MP3 encoders became available, each producing files of differing quality. Comparisons were widely available, so it was easy for a prospective user of an encoder to research the best choice.
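The frame header layout sketched above lends itself to a short worked example. The following Python fragment is a minimal, illustrative parser only: it handles just MPEG-1 Layer III headers, uses the standard bit positions and bitrate/sample-rate index tables, does not skip ID3v2 tags, and the file name example.mp3 is a placeholder. The frame length uses the commonly cited Layer III relation of 144 × bit rate ÷ sample rate plus the padding bit.

# Simplified sketch of locating and decoding the first MP3 frame header in a file.
# Tables are restricted to MPEG-1 Layer III for brevity.

MPEG1_LAYER3_BITRATES = [None, 32, 40, 48, 56, 64, 80, 96, 112,
                         128, 160, 192, 224, 256, 320, None]   # kbit/s
MPEG1_SAMPLE_RATES = [44100, 48000, 32000, None]               # Hz

def parse_first_frame(data: bytes):
    """Scan for the 11-bit sync word and decode basic header fields."""
    for i in range(len(data) - 3):
        hdr = int.from_bytes(data[i:i + 4], "big")
        if hdr >> 21 != 0x7FF:          # the 11 sync bits must all be 1
            continue
        version = (hdr >> 19) & 0x3     # 3 = MPEG-1, 2 = MPEG-2, 0 = MPEG-2.5
        layer = (hdr >> 17) & 0x3       # 1 = Layer III
        if version != 3 or layer != 1:  # keep the sketch to MPEG-1 Layer III
            continue
        bitrate = MPEG1_LAYER3_BITRATES[(hdr >> 12) & 0xF]
        sample_rate = MPEG1_SAMPLE_RATES[(hdr >> 10) & 0x3]
        padding = (hdr >> 9) & 0x1
        if bitrate is None or sample_rate is None:
            continue
        # 1152 samples per frame, 8 bits per byte -> the constant 144
        frame_bytes = 144 * bitrate * 1000 // sample_rate + padding
        return {"offset": i, "bitrate_kbps": bitrate,
                "sample_rate": sample_rate, "frame_bytes": frame_bytes}
    return None

with open("example.mp3", "rb") as f:          # hypothetical input file
    print(parse_first_frame(f.read(65536)))

At 128 kbit/s and 44.1 kHz this yields frames of 417 or 418 bytes, depending on the padding bit.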
Some encoders that were proficient at encoding at higher bit rates (such as LAME) were not necessarily as good at lower bit rates. Over time, LAME evolved on the SourceForge website until it became the de facto CBR MP3 encoder. Later an ABR mode was added. Work progressed on true variable bit rate using a quality goal between 0 and 10. Eventually, numbers (such as -V 9.600) could generate excellent quality low bit rate voice encoding at only using the MPEG-2.5 extensions.
MP3 uses an overlapping MDCT structure. Each MPEG-1 MP3 frame is 1152 samples, divided into two granules of 576 samples. These samples, initially in the time domain, are transformed in one block to 576 frequency-domain samples by MDCT. MP3 also allows the use of shorter blocks in a granule, down to a size of 192 samples; this feature is used when a transient is detected. Doing so limits the temporal spread of quantization noise accompanying the transient (see psychoacoustics). Frequency resolution is limited by the small long block window size, which decreases coding efficiency. Time resolution can be too low for highly transient signals and may cause smearing of percussive sounds. Due to the tree structure of the filter bank, pre-echo problems are made worse, as the combined impulse response of the two filter banks does not, and cannot, provide an optimum solution in time/frequency resolution. Additionally, the combining of the two filter banks' outputs creates aliasing problems that must be handled partially by the "aliasing compensation" stage; however, that creates excess energy to be coded in the frequency domain, thereby decreasing coding efficiency.
Decoding, on the other hand, is carefully defined in the standard. Most decoders are "bitstream compliant", which means that the decompressed output that they produce from a given MP3 file will be the same, within a specified degree of rounding tolerance, as the output specified mathematically in the ISO/IEC standard document (ISO/IEC 11172-3). Therefore, the comparison of decoders is usually based on how computationally efficient they are (i.e., how much memory or CPU time they use in the decoding process). Over time this concern has become less of an issue as CPU clock rates transitioned from MHz to GHz. Encoder/decoder overall delay is not defined, which means there is no official provision for gapless playback. However, some encoders such as LAME can attach additional metadata that will allow players that can handle it to deliver seamless playback.
Quality
When performing lossy audio encoding, such as creating an MP3 data stream, there is a trade-off between the amount of data generated and the sound quality of the results. The person generating an MP3 selects a bit rate, which specifies how many kilobits per second of audio is desired. The higher the bit rate, the larger the MP3 data stream will be, and, generally, the closer it will sound to the original recording. With too low a bit rate, compression artifacts (i.e., sounds that were not present in the original recording) may be audible in the reproduction. Some audio is hard to compress because of its randomness and sharp attacks. When this type of audio is compressed, artifacts such as ringing or pre-echo are usually heard. A sample of applause or a triangle instrument with a relatively low bit rate provides good examples of compression artifacts.
Most subjective tests of perceptual codecs tend to avoid using these types of sound materials; however, the artifacts generated by percussive sounds are barely perceptible due to the specific temporal masking feature of the 32 sub-band filterbank of Layer II on which the format is based. Besides the bit rate of an encoded piece of audio, the quality of MP3-encoded sound also depends on the quality of the encoder algorithm as well as the complexity of the signal being encoded. As the MP3 standard allows quite a bit of freedom with encoding algorithms, different encoders do feature quite different quality, even with identical bit rates. As an example, in a public listening test featuring two early MP3 encoders set at about , one scored 3.66 on a 1–5 scale, while the other scored only 2.22. Quality is dependent on the choice of encoder and encoding parameters.
This observation caused a revolution in audio encoding. Early on, bit rate was the prime and only consideration. At the time MP3 files were of the very simplest type: they used the same bit rate for the entire file; this process is known as constant bit rate (CBR) encoding. Using a constant bit rate makes encoding simpler and less CPU-intensive. However, it is also possible to optimize the size of the file by creating files where the bit rate changes throughout the file. These are known as variable bit rate (VBR) files. The bit reservoir and VBR encoding were part of the original MPEG-1 standard. The concept behind them is that, in any piece of audio, some sections are easier to compress, such as silence or music containing only a few tones, while others will be more difficult to compress. So, the overall quality of the file may be increased by using a lower bit rate for the less complex passages and a higher one for the more complex parts. With some advanced MP3 encoders, it is possible to specify a given quality, and the encoder will adjust the bit rate accordingly. Users who desire a particular "quality setting" that is transparent to their ears can use this value when encoding all of their music and, generally speaking, will not need to worry about performing personal listening tests on each piece of music to determine the correct bit rate.
Perceived quality can be influenced by the listening environment (ambient noise), listener attention, listener training, and in most cases by listener audio equipment (such as sound cards, speakers, and headphones). Furthermore, for lectures and other human-speech applications, sufficient quality may be achieved with a lower quality setting, which also reduces encoding time and complexity. A test given to new students by Stanford University Music Professor Jonathan Berger showed that student preference for MP3-quality music has risen each year. Berger said the students seem to prefer the 'sizzle' sounds that MP3s bring to music.
Sound artist and composer Ryan Maguire's project "The Ghost in the MP3", an in-depth study of MP3 audio quality, isolates the sounds lost during MP3 compression. In 2015, he released the track "moDernisT" (an anagram of "Tom's Diner"), composed exclusively from the sounds deleted during MP3 compression of the song "Tom's Diner", the track originally used in the formulation of the MP3 standard. A detailed account of the techniques used to isolate the sounds deleted during MP3 compression, along with the conceptual motivation for the project, was published in the 2014 Proceedings of the International Computer Music Conference.
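To make the CBR/VBR trade-off described above concrete, the following back-of-the-envelope sketch compares a constant 128 kbit/s encoding of a four-minute track with a hypothetical variable-rate encoding; the per-passage bit rates are invented purely for illustration.

# Rough file-size arithmetic contrasting CBR with an invented VBR allocation.

def size_mb(bitrate_kbps: float, seconds: float) -> float:
    """Audio payload size in megabytes (ignores tags and container overhead)."""
    return bitrate_kbps * 1000 * seconds / 8 / 1_000_000

track_seconds = 240                           # a 4-minute track
print(size_mb(128, track_seconds))            # CBR 128 kbit/s -> 3.84 MB

# A VBR encoder might spend fewer bits on quiet passages and more on busy ones.
passages = [(60, 96), (120, 112), (60, 160)]  # (seconds, kbit/s), illustrative
vbr_total = sum(size_mb(rate, secs) for secs, rate in passages)
avg_rate = sum(rate * secs for secs, rate in passages) / track_seconds
print(vbr_total, avg_rate)                    # 3.6 MB at an average 120 kbit/s

In this invented example the variable-rate file comes out slightly smaller than the 128 kbit/s CBR file while still spending the most bits on the busiest minute.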
Bit rate Bit rate is the product of the sample rate and number of bits per sample used to encode the music. CD audio is 44100 samples per second. The number of bits per sample also depends on the number of audio channels. The CD is stereo and 16 bits per channel. So, multiplying 44100 by 32 gives 1411200—the bit rate of uncompressed CD digital audio. MP3 was designed to encode this data at or less. If less complex passages are detected by the MP3 algorithms then lower bit rates may be employed. When using MPEG-2 instead of MPEG-1, MP3 supports only lower sampling rates (16,000, 22,050, or 24,000 samples per second) and offers choices of bit rate as low as but no higher than . By lowering the sampling rate, MPEG-2 layer III removes all frequencies above half the new sampling rate that may have been present in the source audio. As shown in these two tables, 14 selected bit rates are allowed in MPEG-1 Audio Layer III standard: 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256 and , along with the 3 highest available sampling rates of 32, 44.1 and 48 kHz. MPEG-2 Audio Layer III also allows 14 somewhat different (and mostly lower) bit rates of 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144, with sampling rates of 16, 22.05 and 24 kHz which are exactly half that of MPEG-1. MPEG-2.5 Audio Layer III frames are limited to only 8 bit rates of 8, 16, 24, 32, 40, 48, 56 and with 3 even lower sampling rates of 8, 11.025, and 12 kHz. On earlier systems that only support the MPEG-1 Audio Layer III standard, MP3 files with a bit rate below might be played back sped-up and pitched-up. Earlier systems also lack fast forwarding and rewinding playback controls on MP3. MPEG-1 frames contain the most detail in mode, the highest allowable bit rate setting, with silence and simple tones still requiring . MPEG-2 frames can capture up to 12 kHz sound reproductions needed up to . MP3 files made with MPEG-2 do not have 20 kHz bandwidth because of the Nyquist–Shannon sampling theorem. Frequency reproduction is always strictly less than half of the sampling rate, and imperfect filters require a larger margin for error (noise level versus sharpness of filter), so an 8 kHz sampling rate limits the maximum frequency to 4 kHz, while a 48 kHz sampling rate limits an MP3 to a maximum 24 kHz sound reproduction. MPEG-2 uses half and MPEG-2.5 only a quarter of MPEG-1 sample rates. For the general field of human speech reproduction, a bandwidth of 5,512 Hz is sufficient to produce excellent results (for voice) using the sampling rate of 11,025 and VBR encoding from 44,100 (standard) WAV file. English speakers average 41–42 kbit/s with -V 9.6 setting but this may vary with the amount of silence recorded or the rate of delivery (wpm). Resampling to 12,000 (6K bandwidth) is selected by the LAME parameter -V 9.4. Likewise -V 9.2 selects a 16,000 sample rate and a resultant 8K lowpass filtering. Older versions of LAME and FFmpeg only support integer arguments for the variable bit rate quality selection parameter. The n.nnn quality parameter (-V) is documented at lame.sourceforge.net but is only supported in LAME with the new style VBR variable bit rate quality selector—not average bit rate (ABR). A sample rate of 44.1 kHz is commonly used for music reproduction because this is also used for CD audio, the main source used for creating MP3 files. A great variety of bit rates are used on the Internet. A bit rate of is commonly used, at a compression ratio of 11:1, offering adequate audio quality in a relatively small space. 
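The figures quoted above follow from straightforward arithmetic. The sketch below, a back-of-the-envelope check rather than anything taken from the standard, recomputes the uncompressed CD bit rate and approximate compression ratios for a few common MP3 bit rates (the 192 kbit/s row is included simply as another widely used rate), and tabulates the nine Layer III sampling rates across the three generations.

# Worked arithmetic for the figures quoted above.

cd_bitrate = 44_100 * 16 * 2          # samples/s x bits/sample x channels
print(cd_bitrate)                     # 1_411_200 bit/s, i.e. 1411.2 kbit/s

for mp3_rate in (128, 160, 192):      # common MP3 bit rates, kbit/s
    print(mp3_rate, round(cd_bitrate / (mp3_rate * 1000), 1))  # ~11.0, 8.8, 7.4

# The nine Layer III sampling rates, halving with each generation (Hz):
SAMPLE_RATES = {
    "MPEG-1":   (32_000, 44_100, 48_000),
    "MPEG-2":   (16_000, 22_050, 24_000),
    "MPEG-2.5": ( 8_000, 11_025, 12_000),
}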
As Internet bandwidth availability and hard drive sizes have increased, higher bit rates up to are widespread. Uncompressed audio as stored on an audio-CD has a bit rate of 1,411.2 kbit/s, (16 bit/sample × 44,100 samples/second × 2 channels / 1,000 bits/kilobit), so the bit rates 128, 160, and represent compression ratios of approximately 11:1, 9:1 and 7:1 respectively. Non-standard bit rates up to can be achieved with the LAME encoder and the free format option, although few MP3 players can play those files. According to the ISO standard, decoders are only required to be able to decode streams up to . Early MPEG Layer III encoders used what is now called constant bit rate (CBR). The software was only able to use a uniform bit rate on all frames in an MP3 file. Later more sophisticated MP3 encoders were able to use the bit reservoir to target an average bit rate selecting the encoding rate for each frame based on the complexity of the sound in that portion of the recording. A more sophisticated MP3 encoder can produce variable bit rate audio. MPEG audio may use bit rate switching on a per-frame basis, but only layer III decoders must support it. VBR is used when the goal is to achieve a fixed level of quality. The final file size of a VBR encoding is less predictable than with constant bit rate. Average bit rate is a type of VBR implemented as a compromise between the two: the bit rate is allowed to vary for more consistent quality, but is controlled to remain near an average value chosen by the user, for predictable file sizes. Although an MP3 decoder must support VBR to be standards compliant, historically some decoders have bugs with VBR decoding, particularly before VBR encoders became widespread. The most evolved LAME MP3 encoder supports the generation of VBR, ABR, and even the older CBR MP3 formats. Layer III audio can also use a "bit reservoir", a partially full frame's ability to hold part of the next frame's audio data, allowing temporary changes in effective bit rate, even in a constant bit rate stream. Internal handling of the bit reservoir increases encoding delay. There is no scale factor band 21 (sfb21) for frequencies above approx 16 kHz, forcing the encoder to choose between less accurate representation in band 21 or less efficient storage in all bands below band 21, the latter resulting in wasted bit rate in VBR encoding. Ancillary data The ancillary data field can be used to store user-defined data. The ancillary data is optional and the number of bits available is not explicitly given. The ancillary data is located after the Huffman code bits and ranges to where the next frame's main_data_begin points to. Encoder mp3PRO used ancillary data to encode extra information which could improve audio quality when decoded with its algorithm. Metadata A "tag" in an audio file is a section of the file that contains metadata such as the title, artist, album, track number, or other information about the file's contents. The MP3 standards do not define tag formats for MP3 files, nor is there a standard container format that would support metadata and obviate the need for tags. However, several de facto standards for tag formats exist. As of 2010, the most widespread are ID3v1 and ID3v2, and the more recently introduced APEv2. These tags are normally embedded at the beginning or end of MP3 files, separate from the actual MP3 frame data. MP3 decoders either extract information from the tags or just treat them as ignorable, non-MP3 junk data. 
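As a concrete illustration of the tag formats mentioned above, an ID3v1 tag is a fixed 128-byte block at the very end of the file beginning with the marker "TAG". The sketch below reads one under that assumption; the file name is a placeholder, and the more elaborate ID3v2 format (a variable-length structure usually placed at the start of the file) is not handled.

# Minimal sketch of reading an ID3v1 tag: the final 128 bytes of the file.
import struct

def read_id3v1(path: str):
    with open(path, "rb") as f:
        f.seek(-128, 2)               # 128 bytes from the end of the file
        block = f.read(128)
    if block[:3] != b"TAG":
        return None                   # no ID3v1 tag present
    title, artist, album, year, comment, genre = struct.unpack(
        "3x30s30s30s4s30sB", block)   # fixed-width text fields plus a genre byte
    decode = lambda b: b.rstrip(b"\x00 ").decode("latin-1", "replace")
    return {"title": decode(title), "artist": decode(artist),
            "album": decode(album), "year": decode(year),
            "comment": decode(comment), "genre_index": genre}

print(read_id3v1("example.mp3"))      # hypothetical file name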
Playing and editing software often contains tag editing functionality, but there are also tag editor applications dedicated to the purpose. Aside from metadata about the audio content, tags may also be used for DRM. ReplayGain is a standard for measuring and storing the loudness of an MP3 file (audio normalization) in its metadata tag, enabling a ReplayGain-compliant player to automatically adjust the overall playback volume for each file. MP3Gain may be used to reversibly modify files based on ReplayGain measurements so that adjusted playback can be achieved on players without ReplayGain capability. Licensing, ownership, and legislation The basic MP3 decoding and encoding technology is patent-free in the European Union, all patents having expired there by 2012 at the latest. In the United States, the technology became substantially patent-free on 16 April 2017 (see below). MP3 patents expired in the US between 2007 and 2017. In the past, many organizations have claimed ownership of patents related to MP3 decoding or encoding. These claims led to several legal threats and actions from a variety of sources. As a result, in countries that allow software patents, uncertainty about which patents must have been licensed to create MP3 products without committing patent infringement was common in the early stages of the technology's adoption. The initial near-complete MPEG-1 standard (parts 1, 2, and 3) was publicly available on 6 December 1991 as ISO CD 11172. In most countries, patents cannot be filed after prior art has been made public, and patents expire 20 years after the initial filing date, which can be up to 12 months later for filings in other countries. As a result, patents required to implement MP3 expired in most countries by December 2012, 21 years after the publication of ISO CD 11172. An exception is the United States, where patents in force but filed before 8 June 1995 expire after the later of 17 years from the issue date or 20 years from the priority date. A lengthy patent prosecution process may result in a patent issued much later than normally expected (see submarine patents). The various MP3-related patents expired on dates ranging from 2007 to 2017 in the United States. Patents for anything disclosed in ISO CD 11172 filed a year or more after its publication are questionable. If only the known MP3 patents filed by December 1992 are considered, then MP3 decoding has been patent-free in the US since 22 September 2015, when , which had a PCT filing in October 1992, expired. If the longest-running patent mentioned in the aforementioned references is taken as a measure, then the MP3 technology became patent-free in the United States on 16 April 2017, when , held and administered by Technicolor, expired. As a result, many free and open-source software projects, such as the Fedora operating system, have decided to start shipping MP3 support by default, and users will no longer have to resort to installing unofficial packages maintained by third party software repositories for MP3 playback or encoding. Technicolor (formerly called Thomson Consumer Electronics) claimed to control MP3 licensing of the Layer 3 patents in many countries, including the United States, Japan, Canada, and EU countries. Technicolor had been actively enforcing these patents. MP3 license revenues from Technicolor's administration generated about €100 million for the Fraunhofer Society in 2005. 
In September 1998, the Fraunhofer Institute sent a letter to several developers of MP3 software stating that a license was required to "distribute and/or sell decoders and/or encoders". The letter claimed that unlicensed products "infringe the patent rights of Fraunhofer and Thomson. To make, sell or distribute products using the [MPEG Layer-3] standard and thus our patents, you need to obtain a license under these patents from us." This led to the situation where the LAME MP3 encoder project could not offer its users official binaries that could run on their computer. The project's position was that as source code, LAME was simply a description of how an MP3 encoder could be implemented. Unofficially, compiled binaries were available from other sources. Sisvel S.p.A., a Luxembourg-based company, administers licenses for patents applying to MPEG Audio. They, along with its United States subsidiary Audio MPEG, Inc. previously sued Thomson for patent infringement on MP3 technology, but those disputes were resolved in November 2005 with Sisvel granting Thomson a license to their patents. Motorola followed soon after and signed with Sisvel to license MP3-related patents in December 2005. Except for three patents, the US patents administered by Sisvel had all expired in 2015. The three exceptions are: , expired February 2017; , expired February 2017; and , expired 9 April 2017. As of around the first quarter of 2023, Sisvel's licensing program has become a legacy. In September 2006, German officials seized MP3 players from SanDisk's booth at the IFA show in Berlin after an Italian patents firm won an injunction on behalf of Sisvel against SanDisk in a dispute over licensing rights. The injunction was later reversed by a Berlin judge, but that reversal was in turn blocked the same day by another judge from the same court, "bringing the Patent Wild West to Germany" in the words of one commentator. In February 2007, Texas MP3 Technologies sued Apple, Samsung Electronics and Sandisk in eastern Texas federal court, claiming infringement of a portable MP3 player patent that Texas MP3 said it had been assigned. Apple, Samsung, and Sandisk all settled the claims against them in January 2009. Alcatel-Lucent has asserted several MP3 coding and compression patents, allegedly inherited from AT&T-Bell Labs, in litigation of its own. In November 2006, before the companies' merger, Alcatel sued Microsoft for allegedly infringing seven patents. On 23 February 2007, a San Diego jury awarded Alcatel-Lucent US $1.52 billion in damages for infringement of two of them. The court subsequently revoked the award, however, finding that one patent had not been infringed and that the other was not owned by Alcatel-Lucent; it was co-owned by AT&T and Fraunhofer, who had licensed it to Microsoft, the judge ruled. That defense judgment was upheld on appeal in 2008. Alternative technologies Other lossy formats exist. Among these, Advanced Audio Coding (AAC) is the most widely used, and was designed to be the successor to MP3. There also exist other lossy formats such as mp3PRO and MP2. They are members of the same technological family as MP3 and depend on roughly similar psychoacoustic models and MDCT algorithms. Whereas MP3 uses a hybrid coding approach that is part MDCT and part FFT, AAC is purely MDCT, significantly improving compression efficiency. 
Many of the basic patents underlying these formats are held by Fraunhofer Society, Alcatel-Lucent, Thomson Consumer Electronics, Bell, Dolby, LG Electronics, NEC, NTT Docomo, Panasonic, Sony Corporation, ETRI, JVC Kenwood, Philips, Microsoft, and NTT. When the digital audio player market was taking off, MP3 was widely adopted as the standard hence the popular name "MP3 player". Sony was an exception and used their own ATRAC codec taken from their MiniDisc format, which Sony claimed was better. Following criticism and lower than expected Walkman sales, in 2004 Sony for the first time introduced native MP3 support to its Walkman players. There are also open compression formats like Opus and Vorbis that are available free of charge and without any known patent restrictions. Some of the newer audio compression formats, such as AAC, WMA Pro, Vorbis, and Opus, are free of some limitations inherent to the MP3 format that cannot be overcome by any MP3 encoder. Besides lossy compression methods, lossless formats are a significant alternative to MP3 because they provide unaltered audio content, though with an increased file size compared to lossy compression. Lossless formats include FLAC (Free Lossless Audio Codec), Apple Lossless and many others.
Machine gun
A machine gun (MG) is a fully automatic and rifled firearm designed for sustained direct fire with rifle cartridges. Other automatic firearms such as automatic shotguns and automatic rifles (including assault rifles and battle rifles) are typically designed more for firing short bursts rather than continuous firepower and are not considered true machine guns. Submachine guns fire handgun cartridges rather than rifle cartridges, therefore they are not considered machine guns, while automatic firearms of caliber or more are classified as autocannons rather than machine guns. As a class of military kinetic projectile weapons, machine guns are designed to be mainly used as infantry support weapons and generally used when attached to a bipod or tripod, a fixed mount or a heavy weapons platform for stability against recoils. Many machine guns also use belt feeding and open bolt operation, features not normally found on other infantry firearms. Machine guns can be further categorized as light machine guns, medium machine guns, heavy machine guns, general-purpose machine guns, and squad automatic weapons. Modern overview Unlike semi-automatic firearms, which require one trigger pull per round fired, a machine gun is designed to continue firing for as long as the trigger is held down. Nowadays, the term is restricted to relatively heavy crew-served weapons, able to provide continuous or frequent bursts of automatic fire for as long as ammunition feeding is replete. Machine guns are used against infantry, low-flying aircraft, small boats and lightly/unarmored land vehicles, and can provide suppressive fire (either directly or indirectly) or enforce area denial over a sector of land with grazing fire. They are commonly mounted on fast attack vehicles such as technicals to provide heavy mobile firepower, armored vehicles such as tanks for engaging targets too small to justify the use of the primary weaponry or too fast to effectively engage with it, and on aircraft as defensive armament or for strafing ground targets, though on fighter aircraft true machine guns have mostly been supplanted by large-caliber rotary guns. Some machine guns have in practice sustained fire almost continuously for hours; other automatic weapons overheat after less than a minute of use. Because they become very hot, the great majority of designs fire from an open bolt, to permit air cooling from the breech between bursts. They also usually have either a barrel cooling system, slow-heating heavyweight barrel, or removable barrels which allow a hot barrel to be replaced. Although subdivided into "light", "medium", "heavy" or "general-purpose", even the lightest machine guns tend to be substantially larger and heavier than standard infantry arms. Medium and heavy machine guns are either mounted on a tripod or on a vehicle; when carried on foot, the machine gun and associated equipment (tripod, ammunition, spare barrels) require additional crew members. Light machine guns are designed to provide mobile fire support to a squad and are typically air-cooled weapons fitted with a box magazine or drum and a bipod; they may use full-size rifle rounds, but modern examples often use intermediate rounds. Medium machine guns use full-sized rifle rounds and are designed to be used from fixed positions mounted on a tripod. 
The term heavy machine gun originated in World War I, describing heavyweight medium machine guns, and persisted into World War II with Japanese Hotchkiss M1914 clones; today, however, it is used to refer to automatic weapons with a caliber of at least , but less than . A general-purpose machine gun is usually a lightweight medium machine gun that can either be used with a bipod and drum in the light machine gun role or a tripod and belt feed in the medium machine gun role.
Machine guns usually have simple iron sights, though the use of optics is becoming more common. A common aiming system for direct fire is to alternate solid ("ball") rounds and tracer ammunition rounds (usually one tracer round for every four ball rounds), so shooters can see the trajectory and "walk" the fire into the target, and direct the fire of other soldiers. Many heavy machine guns, such as the Browning M2 .50 BMG machine gun, are accurate enough to engage targets at great distances. During the Vietnam War, Carlos Hathcock set the record for a long-distance shot at with a .50 caliber heavy machine gun he had equipped with a telescopic sight. This led to the introduction of .50 caliber anti-materiel sniper rifles, such as the Barrett M82.
Other automatic weapons are subdivided into several categories based on the size of the bullet used, whether the cartridge is fired from a closed bolt or an open bolt, and whether the action used is locked or is some form of blowback. Fully automatic firearms using pistol-caliber ammunition are called machine pistols or submachine guns largely on the basis of size; those using shotgun cartridges are almost always referred to as automatic shotguns. The term personal defense weapon (PDW) is sometimes applied to weapons firing dedicated armor-piercing rounds which would otherwise be regarded as machine pistols or SMGs, but it is not particularly strongly defined and has historically been used to describe a range of weapons from ordinary SMGs to compact assault rifles. Selective-fire rifles firing a full-power rifle cartridge from a closed bolt are called automatic rifles or battle rifles, while rifles that fire an intermediate cartridge are called assault rifles. Assault rifles are a compromise between the size and weight of a pistol-caliber submachine gun and a full-size battle rifle, firing intermediate cartridges and allowing semi-automatic and burst or full-automatic fire options (selective fire), sometimes with both of the latter present.
Operation
Many machine guns are of the locked breech type, and follow this cycle:
Pulling (manually or electrically) the bolt assembly/bolt carrier rearward by way of the cocking lever to the point where the bolt carrier engages a sear and stays at the rear position until the trigger is activated, making the bolt carrier move forward.
Loading a fresh round into the chamber and locking the bolt.
Firing the round by way of a firing pin or striker (except for aircraft medium calibre guns using electric ignition primers) hitting the primer, which ignites the powder when the bolt reaches the locked position.
Unlocking and removing the spent case from the chamber and ejecting it out of the weapon as the bolt is moving rearward.
Loading the next round into the firing chamber. Usually, the recoil spring (also known as the main spring) tension pushes the bolt back into battery and a cam strips the new round from a feeding device, belt or box.
The cycle is repeated as long as the trigger is activated by the operator.
Releasing the trigger resets the trigger mechanism by engaging a sear so the weapon stops firing with bolt carrier fully at the rear. The operation is basically the same for all locked breech automatic firearms, regardless of the means of activating these mechanisms. There are also multi-chambered formats, such as revolver cannon, and some types, such as the Schwarzlose machine gun etc., that do not lock the breech but instead use some type of delayed blowback. Design Most modern machine guns are of the locking type, and of these, most utilize the principle of gas-operated reloading, which taps off some of the propellant gas from the fired cartridge, using its mechanical pressure to unlock the bolt and cycle the action. The first of these was invented by the French brothers Claire, who patented a gas operated rifle, which included a gas cylinder, in 1892. The Russian PK machine gun is a more modern example. Another efficient and widely used format is the recoil actuated type, which uses the gun's recoil energy for the same purpose. Machine guns, such as the M2 Browning and MG42, are of this second kind. A cam, lever or actuator absorbs part of the energy of the recoil to operate the gun mechanism. An externally actuated weapon uses an external power source, such as an electric motor or hand crank, to move its mechanism through the firing sequence. Modern weapons of this type are often referred to as Gatling guns, after the original inventor (not only of the well-known hand-cranked 19th century proto-machine gun, but also of the first electrically powered version). They have several barrels each with an associated chamber and action on a rotating carousel and a system of cams that load, cock, and fire each mechanism progressively as it rotates through the sequence; essentially each barrel is a separate bolt-action rifle using a common feed source. The continuous nature of the rotary action and its relative immunity to overheating allow for a very high cyclic rate of fire, often several thousand rounds per minute. Rotary guns are less prone to jamming than a gun operated by gas or recoil, as the external power source will eject misfired rounds with no further trouble; but this is not possible in the rare cases of self-powered rotary guns. Rotary designs are intrinsically comparatively bulky and expensive and are therefore generally used with large rounds, 20 mm in diameter or more, often referred to as rotary cannon – though the rifle-calibre Minigun is an exception to this. Whereas such weapons are highly reliable and formidably effective, one drawback is that the weight and size of the power source and driving mechanism makes them usually impractical for use outside of a vehicle or aircraft mount. Revolver cannons, such as the Mauser MK 213, were developed in World War II by the Germans to provide high-caliber cannons with a reasonable rate of fire and reliability. In contrast to the rotary format, such weapons have a single barrel and a recoil-operated carriage holding a revolving chamber with typically five chambers. As each round is fired, electrically, the carriage moves back rotating the chamber which also ejects the spent case, indexes the next live round to be fired with the barrel and loads the next round into the chamber. The action is very similar to that of the revolver pistols common in the 19th and 20th centuries, giving this type of weapon its name. A chain gun is a specific, patented type of revolver cannon, the name, in this case, deriving from its driving mechanism. 
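Purely as an abstraction, the locked-breech cycle enumerated under Operation above can be summarised as a small state machine; the sketch below models only the ordering of the steps, not the mechanism of any particular firearm.

# Abstract state-machine rendering of the locked-breech firing cycle listed above.
from itertools import cycle

CYCLE = ["bolt held to the rear by sear",
         "bolt released, round chambered and bolt locked",
         "round fired by pin or striker",
         "bolt unlocks, spent case extracted and ejected",
         "next round stripped from belt or magazine"]

def fire(trigger_held_for_steps: int):
    """Step through the cycle while the (simulated) trigger is held."""
    steps = cycle(CYCLE)
    for _ in range(trigger_held_for_steps):
        print(next(steps))
    print("trigger released: sear re-engages, bolt stops at the rear")

fire(7)   # run a little over one full cycle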
As noted above, firing a machine gun for prolonged periods produces large amounts of heat. In a worst-case scenario, this may cause a cartridge to overheat and detonate even when the trigger is not pulled, potentially leading to damage or causing the gun to cycle its action and keep firing until it has exhausted its ammunition supply or jammed; this is known as cooking off (as distinct from runaway fire where the sear fails to re-engage when the trigger is released). To guard against cook-offs occurring, some kind of cooling system or design element is required. Early machine guns were often water-cooled and while this technology was very effective (and was indeed one of the sources of the notorious efficiency of machine guns during World War I), the water jackets also added considerable weight to an already bulky design; they were also vulnerable to the enemies' bullets themselves. Armour could be provided, and in World War I, the Germans in particular often did this; but this added yet more weight to the guns. Air-cooled machine guns often feature quick-change barrels (often carried by a crew member), passive cooling fins, or in some designs forced-air cooling, such as that employed by the Lewis Gun. Advances in metallurgy and the use of special composites in barrel liners have allowed for greater heat absorption and dissipation during firing. The higher the rate of fire, the more often barrels must be changed and allowed to cool. To minimize this, most air-cooled guns are fired only in short bursts or at a reduced rate of fire. Some designs – such as the many variants of the MG42 – are capable of rates of fire in excess of 1,200 rounds per minute. Motorized Gatling guns can achieve the fastest firing rates of all, partly because this format involves extra energy being injected into the system from outside, instead of depending on energy derived from the propellant contained within the cartridges, partly because the next round can be inserted simultaneously with or before the ejection of the previous cartridge case, and partly because this design intrinsically deals with the unwanted heat very efficiently – effectively quick-changing the barrel and chamber after every shot. The multiple guns that comprise a Gatling being a much larger bulk of metal than other, single-barreled guns, they are thus much slower to rise in temperature for a given amount of heat, while at the same time they are also much better at shedding the excess, as the extra barrels provide a larger surface area from which to dissipate the unwanted thermal energy. In addition to that, they are in the nature of the design spun at very high speed during rapid fire, which has the benefit of producing enhanced air-cooling as a side-effect. In weapons where the round seats and fires at the same time, mechanical timing is essential for operator safety, to prevent the round from firing before it is seated properly. Machine guns are controlled by one or more mechanical sears. When a sear is in place, it effectively stops the bolt at some point in its range of motion. Some sears stop the bolt when it is locked to the rear. Other sears stop the firing pin from going forward after the round is locked into the chamber. Almost all machine guns have a "safety" sear, which simply keeps the trigger from engaging. History The first successful machine-gun designs were developed in the mid-19th century. 
The key characteristic of modern machine guns, their relatively high rate of fire and more importantly mechanical loading, first appeared in the Model 1862 Gatling gun, which was adopted by the United States Navy. These weapons were still powered by hand; however, this changed with Hiram Maxim's idea of harnessing recoil energy to power reloading in his Maxim machine gun. Dr. Gatling also experimented with electric-motor-powered models; as discussed above, this externally powered machine reloading has seen use in modern weapons as well.
While technical use of the term "machine gun" has varied, the modern definition used by the Sporting Arms and Ammunition Manufacturers' Institute of America is "a fully automatic firearm that loads, fires and ejects continuously when the trigger is held to the rear until the ammunition is exhausted or pressure on the trigger is released." This definition excludes most early manually operated repeating arms such as the Gatling gun and volley guns like the Nordenfelt gun.
Medieval
The first known ancestors of multi-shot weapons were medieval organ guns. An early example of an attempt at the mechanisation of one of these would be an 'engine of war' produced in the mid-1570s in England, capable of firing from 160 to 320 shots, 4, 8, 12 or 24 bullets at a time, at a rate of fire up to roughly 3 times that of the typical arquebusier of the day. It was also claimed that the gun could be reloaded 'as often as you like' and fired no matter the weather, though the English government never adopted the weapon despite testing being carried out at the Tower of London. The first firearms to have the ability to fire multiple shots from a single barrel without a full manual reload were revolvers made in Europe in the late 1500s. One is a shoulder-gun-length weapon made in Nuremberg, Germany, circa 1580. Another is a revolving arquebus, produced by Hans Stopler of Nuremberg in 1597.
17th century
True repeating long arms were difficult to manufacture prior to the development of the unitary firearm cartridge; nevertheless, lever-action repeating rifles such as the Kalthoff repeater and Cookson repeater were made in small quantities in the 17th century. Perhaps the earliest examples of predecessors to the modern machine gun are to be found in East Asia. According to the Wu-Pei-Chih, a booklet examining Chinese military equipment produced during the first quarter of the 17th century, the Chinese army had in its arsenal the 'Po-Tzu Lien-Chu-P'ao' or 'string-of-100-bullets cannon'. This was a repeating cannon fed by a hopper containing balls which fired its charges sequentially. The way it worked was similar to the Perkins steam gun of 1824 or the Beningfield electrolysis gun of 1845, except that slow-burning gunpowder was used as the propelling force in place of steam or the gases produced by electrolysis. Another repeating gun was produced by a Chinese commoner, Dai Zi, in the late 17th century. This weapon was also hopper-fed and never went into mass production. In 1655, a way of loading, aiming and shooting up to 6 wall muskets 60 times in a minute for a total rate of fire of 360 shots per minute was mentioned in The Century of Inventions by Edward Somerset, 2nd Marquess of Worcester, though, like all the inventions mentioned in the book, it is uncertain if it was ever built. It is sometimes claimed (e.g.
in George Morgan Chinn's the Machine Gun) that in 1663 the first mention of the automatic principle of machine guns was in a paper presented to the Royal Society of England by Palmer, an Englishman who described a volley gun capable of being operated by either recoil or gas. However, no one has been able to find this paper in recent times and all references to a multi-shot weapon by a Palmer during this period appear to be referring to a somewhat more common Kalthoff repeater or Lorenzoni-system gun. Despite this, there is a reference in 1663 to at least the concept of a genuine automatic gun that was presented to Prince Rupert, though its type and method of operation are unknown. 18th century In 1708, it was reported from Constantinople that a French officer had invented a very light cannon that could fire from a single barrel 30 shots in 2 and a half minutes for a total rate of fire of 12 shots a minute. In 1711, a French lawyer called Barbuot presented to the parliament of Dijon a crank-operated 'war machine' made up of 10 carbine barrels and loaded via a 'drum' capable of firing in vollies. It was said to be accurate at 400 to 500 paces and to strike with enough force to pierce 2 or 3 men at a time when close. It was also claimed to be able to shoot 5 or 6 times before infantry came within musket range or cavalry within pistol range and with no more space between each shot than the time needed to prime a pistol, cock it and release the hammer as well as being nearly as manoeuvrable as cavalry. An alternative and heavier version was said to be able to throw grenades and it was also proposed to equip the machine with a bellows for clearing smoke that built up during firing. Another early revolving gun was created by James Puckle, a London lawyer, who patented what he called "The Puckle Gun" on May 15, 1718. It was a design for a manually operated 1.25 in. (32 mm) caliber, flintlock cannon with a revolver cylinder able to fire 6–11 rounds before reloading by swapping out the cylinder, intended for use on ships. It was one of the earliest weapons to be referred to as a machine gun, being called such in 1722, though its operation does not match the modern usage of the term. According to Puckle, it was able to fire round bullets at Christians and square bullets at Turks. However, it was a commercial failure and was not adopted or produced in any meaningful quantity. In 1729, a report was written in France on a machine capable of firing 600 balls in a few minutes. In 1720, a French inventor called Philippe Vayringe invented a small cannon that could fire 16 shots in succession, which he demonstrated before the Duke of Lorraine. In 1737, it was mentioned that Jacob de Weinholtz, a Dane who was serving in the Portuguese army, had invented a cannon capable of firing 20 to 30 shots a minute though requiring 15 people to work it. The cannons were brought along with a Portuguese fleet sent to India to take part in a colonial war in the 1740s. Also in 1737, it was mentioned that a German engineer had invented a 10-pounder cannon capable of firing 20 times in a minute. In 1740, a cannon able to shoot 11 times per minute was developed by a Frenchman called Chevalier de Benac. Meanwhile, not long after in England, in 1747 a cannon able to simultaneously charge and discharge itself 20 times in a minute was invented by James Allis and presented to the Royal Society of England. 
In 1750, in Denmark, a Prussian known as Captain Steuben of the Train of Artillery invented a breech-loading cannon worked by 4 people and fed by paper cartridges capable of firing 24 times in a minute and demonstrated it to the King of Denmark along with some other high-ranking officials in the same year. In 1764, Frenchman Ange Goudar wrote in his work The Chinese Spy that he had assisted in Paris in the proofing of a 'great gun' capable of firing 60 times in a minute. In 1773, another cannon capable of firing 23 or 24 times in a minute and cleaning itself after every shot was invented by Thomas Desaguliers. In 1775, it was mentioned that in England two large cannons invented by an unidentified matross at Woolwich had achieved a rate of fire of 59 shots in 59 and a half seconds. Also in 1775, a breech-loading volley gun, similar to the later mitrailleuse, was invented by a Frenchman called Du Perron which was worked by 3 or 4 men and capable of discharging 24 barrels 10 times a minute for a total rate of fire of 240 shots per minute. In 1776, a gun capable of charging and discharging itself 120 times 'by the motion of one hand only' in a minute was invented in England by an inventor from the county of Westmoreland. In 1777, Philadelphia gunsmith Joseph Belton offered the Continental Congress a "new improved gun", which was capable of firing up to twenty shots in five seconds; unlike older repeaters using complex lever-action mechanisms, it used a simpler system of superposed loads, and was loaded with a single large paper cartridge. Congress requested that Belton modify 100 flintlock muskets to fire eight shots in this manner, but rescinded the order when Belton's price proved too high. In 1779, a machine made up of 21 musket barrels worked by 3 men was produced by a British inventor called William Wilson Wright which he claimed could be fired 3 times quicker than a single man could load and fire a musket 3 times. In 1788, a Swiss soldier invented a machine worked by 10 men capable of discharging 300 balls in 3 minutes. Also in 1788, it was reported that a Prussian officer had invented a gun capable of firing 400 balls one after the other. In 1790, a former officer in the French military known as Joseph-François-Louis Grobert invented a 'ballistic machine' or 'pyroballistic machine' with multiple barrels operated by 4 men and a continuous rotational movement capable of firing 360 rifle shots a minute in a variety of calibers. In 1792, a French artist known as Renard invented a piece of ordnance that could be operated by one man and fired 90 shots a minute. Also in 1792, a French mechanic called Garnier invented a musket battery made up of 15 barrels capable of firing 300 shots in 2 minutes for a total rate of fire of 150 shots a minute or 10 shots per minute per barrel and of being operated by one man. 19th century In the early and mid-19th century, a number of rapid-firing weapons appeared which offered multi-shot fire, mostly volley guns. Volley guns (such as the Mitrailleuse) and double-barreled pistols relied on duplicating all parts of the gun, though the Nock gun used the otherwise-undesirable "chain fire" phenomenon (where multiple chambers are ignited at once) to propagate a spark from a single flintlock mechanism to multiple barrels. Pepperbox pistols also did away with needing multiple hammers but used multiple manually operated barrels. Revolvers further reduced this to only needing a pre-prepared cylinder and linked advancing the cylinder to cocking the hammer. 
However, these were still manually operated. In 1805, a British inventor from Northampton designed a cannon that would prime, load and fire itself 10 times a minute. In 1806, a Viennese copper engraver and mechanic known as Mr Putz invented a machine cannon that could load, fire and clean itself once every second or potentially up to 60 times a minute though the rate of fire was limited by the overheating of the barrel. In 1819, an American inventor from Baltimore designed a gun with 11 barrels that could fire 12 times in a minute for a total rate of fire of 132 shots a minute. In 1821, a muzzle-loading repeating cannon capable of firing 30 shots in 6 minutes or 5 shots per minute was demonstrated in England by the French-American "Fire King" Ivan Ivanitz Chabert. It was worked by a "wheel" fed by paper cartridges from a store attached to the cannon and ignited using a match from a match-holder somewhere else on the cannon. In 1825 an Italian book attempting to catalogue all topographic features of all known countries on Earth mentioned that in France there were 'mechanical rifles' used to defend warehouses that were capable of firing 120 shots without reloading. In 1828, a swivel gun that did not need cleaning or muzzle-loading and was capable of being made to any dimensions and used as an ordinary cannon at a moment's notice and firing 40 shots a minute was invented by a native of Ireland. Also in 1828 a revolver cannon capable of firing 12 shots a minute and worked by 2 artillerymen was invented by a Frenchman called Lesire-Fruyer. In 1854 this cannon would be put on display at the French Museum of the Marine. In France, in 1831, a mechanic from the Vosges department invented a lever-operated cannon that could fire 100 shots a minute. In 1832, a machine capable of firing 500 rifle shots a minute was devised by Hamel, a French mechanic. In the 1830s, General Sir John Scott Lillie, a British veteran of the Peninsula War invented the "Lillie Rifle battery". In the mid-1830s, a machine gun was designed by John Steuble (Swiss), who tried to sell it to the Russian, English and French governments. The English and Russian governments showed interest but the former refused to pay Steuble, who later sued them for this transgression, and the latter tried to imprison him. The French government showed interest at first and while it noted that mechanically there was nothing wrong with Steuble's invention it turned him down, stating that the machine both lacked novelty and could not be usefully employed by the army. The gun was reportedly breech-loading, fed by cartridges from some kind of hopper and could fire 34 barrels of one-inch calibre 4 or 6 times for a total of 136 or 204 shots a minute. A biography of William Lyon Mackenzie mentions that in 1839 a Detroit-based inventor was working on a cannon that could be fired 50 to 60 times in a minute. In 1842, Dr. Thomson or Thompson, an American, invented a cannon fed by pre-loaded breech-pieces with 4 barrels that was operated by means of a revolving cylinder and could be fired 50 times in as many seconds or even up to 500 times in 500 seconds. In 1846, Mr. Francis Dixon, an American, invented a cannon that loaded, primed and discharged itself through the use of a brake at a rate of fire of 30 to 40 shots a minute. A variation of it was worked by clockwork-like machinery and could be made to move by itself a certain distance along rails before firing 10 times and returning to its original position. 
Also in 1846, in Canada, inventor Simeon "Larochelle" Gautron, invented a cannon that was similar to a wooden model of a repeating cannon he constructed in 1836 but for which he had made a number of improvements since then which could be fired 10 or 12 times in a minute when the typical muzzle-loading cannon of the day could be fired at only a fraction of that speed, and an English newspaper reporting on it claimed it could be fired up to 60 times in the same period of time, and clean itself after every shot. It was worked by a crank, could be worked by one man when the typical cannon of the day required twelve or more, was fed by paper cartridges from a revolving cylinder and used separate percussion caps for ignition. Larochelle tried to interest the Canadian military in his invention but was turned down for reasons of complexity and expense which, while it drew some criticism from the French language Canadian press, led to the inventor discontinuing development of it in favour of more profitable activities. A model of Larochelle's cannon is still on display at the Musee National des Beaux-Arts du Quebec. In 1847, a short description of a prototype electrically ignited mechanical machine gun was published in Scientific American by J.R. Nichols. The model described is small in scale and works by rotating a series of barrels vertically so that it is feeding at the top from a "tube" or hopper and could be discharged immediately at any elevation after having received a charge, according to the author. In 1848, the Italian Cesare Rosaglio announced his invention of a machine gun capable of being operated by a single man and firing 300 rifle shots a minute or 12,000 in an hour after taking into account the time needed to reload the "tanks" of ammunition. In June 1851, a model of a 'war engine' allegedly capable of firing 10,000 ball cartridges in 10 minutes was demonstrated by a British inventor called Francis McGetrick. In 1852, a rotary cannon using a unique form of wheellock ignition was demonstrated by Delany, an Irish immigrant to America. In 1854, a British patent for a mechanically operated machine gun was filed by Henry Clarke. This weapon used multiple barrels arranged side by side, fed by a revolving cylinder similar to that used in a turret revolver that was in turn fed by hoppers, similar to the system used by Nichols. The gun could be fired by percussion or electricity, according to the author. In the percussion version of the gun, firing was carried out by separate percussion caps and the breeches could contain either loose powder and balls or paper cartridges. A model of this weapon, said to be capable of firing 1800 shots in a minute with great precision at 2000 yards and drawn by two horses, was constructed and tested though apparently not adopted for the military. In the same year, water cooling was proposed for machine guns by Henry Bessemer, along with a water cleaning system, though he later abandoned this design. In his patent, Bessemer describes a hydropneumatic delayed-blowback-operated, fully automatic cannon. Part of the patent also refers to a steam-operated piston to be used with firearms but the bulk of the patent is spent detailing the former system. In America, a patent for a machine gun-type weapon was filed by John Andrus Reynolds in 1855. Another early American patent for a manually operated machine gun with a blowback-operated cocking mechanism was filed by C. E. Barnes in 1856. 
In France and Britain, a mechanically operated machine gun was patented in 1856 by Frenchman Francois Julien. This weapon was a cannon that fed from a type of open-ended tubular magazine, only using rollers and an endless chain in place of springs. The Agar Gun, otherwise known as a "coffee-mill gun" because of its resemblance to a coffee mill, was invented by Wilson Agar at the beginning of the US Civil War. The weapon featured mechanized loading using a hand crank linked to a hopper above the weapon. The weapon featured a single barrel and fired through the turning of the same crank; it operated using paper cartridges fitted with percussion caps and inserted into metal tubes that acted as chambers; it was therefore functionally similar to a revolver. The weapon was demonstrated to President Lincoln in 1861. He was so impressed with the weapon that he purchased 10 on the spot for $1,500 apiece. The Union Army eventually purchased a total of 70 of the weapons. However, due to antiquated views of the Ordnance Department the weapons, like its more famous counterpart the Gatling Gun, saw only limited use. The Gatling gun, patented in 1861 by Richard Jordan Gatling, was the first to offer controlled, sequential fire with mechanical loading. The design's key features were machine loading of prepared cartridges and a hand-operated crank for sequential high-speed firing. It first saw very limited action in the American Civil War; it was subsequently improved and used in the Franco-Prussian war and North-West Rebellion. Many were sold to other armies in the late 19th century and continued to be used into the early 20th century until they were gradually supplanted by Maxim guns. Early multi-barrel guns were approximately the size and weight of contemporary artillery pieces, and were often perceived as a replacement for cannon firing grapeshot or canister shot. The large wheels required to move these guns around required a high firing position, which increased the vulnerability of their crews. Sustained firing of gunpowder cartridges generated a cloud of smoke, making concealment impossible until smokeless powder became available in the late 19th century. Gatling guns were targeted by artillery they could not reach, and their crews were targeted by snipers they could not see. The Gatling gun was used most successfully to expand European colonial empires, since against poorly equipped indigenous armies it did not face such threats. In 1864, in the aftermath of the Second Schleswig War, Denmark started a program intended to develop a gun that used the recoil of a fired shot to reload the firearm though a working model would not be produced until 1888. In 1870, a Lt. Holsten Friberg of the Swedish army patented a fully automatic recoil-operated firearm action and may have produced firing prototypes of a derived design around 1882: this was the forerunner to the 1907 Kjellman machine gun, though, due to rapid residue buildup from the use of black powder, Friberg's design was not a practical weapon. Also in 1870, the Bavarian regiment of the Prussian army used a unique mitrailleuse-style weapon in the Franco-Prussian war. The weapon was made up of four barrels placed side by side that replaced the manual loading of the French mitrailleuse with a mechanical loading system featuring a hopper containing 41 cartridges at the breech of each barrel. Although it was used effectively at times, mechanical difficulties hindered its operation and it was ultimately abandoned shortly after the war ended (de). 
Maxim and World War I The first practical self-powered machine gun was invented in 1884 by Sir Hiram Maxim. The Maxim machine gun used the recoil power of the previously fired bullet to reload rather than being hand-powered, enabling a much higher rate of fire than was possible using earlier designs such as the Nordenfelt and Gatling weapons. Maxim also introduced the use of water cooling, via a water jacket around the barrel, to reduce overheating. Maxim's gun was widely adopted, and derivative designs were used on all sides during the First World War. The design required fewer crew and was lighter and more usable than the Nordenfelt and Gatling guns. First World War combat experience demonstrated the military importance of the machine gun. The United States Army issued four machine guns per regiment in 1912, but that allowance increased to 336 machine guns per regiment by 1919. Heavy guns based on the Maxim such as the Vickers machine gun were joined by many other machine weapons, which mostly had their start in the early 20th century such as the Hotchkiss machine gun. Submachine guns (e.g., the German MP 18) as well as lighter machine guns (the first light machine gun deployed in any significant number being the Madsen machine gun, with the Chauchat and Lewis gun soon following) saw their first major use in World War I, along with heavy use of large-caliber machine guns. The biggest single cause of casualties in World War I was actually artillery, but combined with wire entanglements, machine guns earned a fearsome reputation. Another fundamental development occurring before and during the war was the incorporation by gun designers of machine gun auto-loading mechanisms into handguns, giving rise to semi-automatic pistols such as the Borchardt (1890s), automatic machine pistols and later submachine guns (such as the Beretta 1918). Aircraft-mounted machine guns were first used in combat in World War I. Immediately this raised a fundamental problem. The most effective position for guns in a single-seater fighter was clearly, for the purpose of aiming, directly in front of the pilot; but this placement would obviously result in bullets striking the moving propeller. Early solutions, aside from simply hoping that luck was on the pilot's side with an unsynchronized forward-firing gun, involved either aircraft with pusher props like the Vickers F.B.5, Royal Aircraft Factory F.E.2 and Airco DH.2, wing mounts like that of the Nieuport 10 and Nieuport 11 which avoided the propeller entirely, or armored propeller blades such as those mounted on the Morane-Saulnier L which would allow the propeller to deflect unsynchronized gunfire. By mid 1915, the introduction of a reliable gun synchronizer by the Imperial German Flying Corps made it possible to fire a closed-bolt machine gun forward through a spinning propeller by timing the firing of the gun to miss the blades. The Allies had no equivalent system until 1916 and their aircraft suffered badly as a result, a period known as the Fokker Scourge, after the Fokker Eindecker, the first German plane to incorporate the new technology. Interwar era and World War II As better materials became available following the First World War, light machine guns became more readily portable; designs such as the Bren light machine gun replaced bulky predecessors like the Lewis gun in the squad support weapon role, while the modern division between medium machine guns like the M1919 Browning machine gun and heavy machine guns like the Browning M2 became clearer. 
New designs largely abandoned water jacket cooling systems as both undesirable, due to a greater emphasis on mobile tactics; and unnecessary, thanks to the alternative and superior technique of preventing overheating by swapping barrels. The interwar years also produced the first widely used and successful general-purpose machine gun, the German MG 34. While this machine gun was equally able in the light and medium roles, it proved difficult to manufacture in quantity, and experts on industrial metalworking were called in to redesign the weapon for modern tooling, creating the MG 42. This weapon was simpler, cheaper to produce, fired faster, and replaced the MG 34 in every application except vehicle mounts since the MG 42's barrel changing system could not be operated when it was mounted. Cold War Experience with the MG 42 led to the US issuing a requirement to replace the aging Browning Automatic Rifle with a similar weapon, which would also replace the M1919; simply using the MG 42 itself was not possible, as the design brief required a weapon which could be fired from the hip or shoulder like the BAR. The resulting design, the M60 machine gun, was issued to troops during the Vietnam War. As it became clear that a high-volume-of-fire weapon would be needed for fast-moving jet aircraft to reliably hit their opponents, Gatling's work with electrically powered weapons was recalled and the 20 mm M61 Vulcan was designed; as well as a miniaturized 7.62 mm version initially known as the "mini-Vulcan" and quickly shortened to "minigun" soon in production for use on helicopters, where the volume of fire could compensate for the instability of the helicopter as a firing platform. Human interface The most common interface on light machine guns is a pistol grip and trigger with a buttstock attached. Vehicle and tripod mounted machine guns usually have spade grips. Earlier machine guns commonly featured hand cranks, and modern externally powered machine guns, such as miniguns, commonly use an electronic button or trigger on a joystick. In the late 20th century, scopes and other complex optics became more common rather than the more basic iron sights. Loading systems in early manual machine guns were often from a hopper of loose (un-linked) cartridges. Manually operated volley guns usually had to be reloaded all at once (each barrel reloaded by hand, or with a set of cartridges affixed to a plate that was inserted into the weapon). With hoppers, the rounds could often be added while the weapon was firing. This gradually changed to belt-fed systems, which were either held by a person (the shooter or a support person), or in a bag or box. Some modern vehicle machine guns use linkless feed systems. Modern machine guns are commonly mounted in one of four ways. The first is a bipod, often integrated with the weapon, common on light and medium machine guns. Another is the tripod, usually found on medium and heavy machine guns. On ships, vehicles, and aircraft, machine guns are usually mounted on a pintle mount, a steel post that is connected to the frame or body of the vehicle. The last common mounting type is as part of a vehicle's armament system, such as a tank coaxial or part of an aircraft's armament. These are usually electrically fired and have complex sighting systems, for example, the US Helicopter Armament Subsystems.
Technology
Projectile weapons
19694
https://en.wikipedia.org/wiki/Mercury%20%28planet%29
Mercury (planet)
Mercury is the first planet from the Sun and the smallest in the Solar System. In English, it is named after the ancient Roman god (Mercury), god of commerce and communication, and the messenger of the gods. Mercury is classified as a terrestrial planet, with roughly the same surface gravity as Mars. The surface of Mercury is heavily cratered, as a result of countless impact events that have accumulated over billions of years. Its largest crater, Caloris Planitia, has a diameter of , which is about one-third the diameter of the planet (). Similarly to the Earth's Moon, Mercury's surface displays an expansive rupes system generated from thrust faults and bright ray systems formed by impact event remnants. Mercury's sidereal year (88.0 Earth days) and sidereal day (58.65 Earth days) are in a 3:2 ratio. This relationship is called spin–orbit resonance, and sidereal here means "relative to the stars". Consequently, one solar day (sunrise to sunrise) on Mercury lasts for around 176 Earth days: twice the planet's sidereal year. This means that one side of Mercury will remain in sunlight for one Mercurian year of 88 Earth days; while during the next orbit, that side will be in darkness all the time until the next sunrise after another 88 Earth days. Combined with its high orbital eccentricity, the planet's surface has widely varying sunlight intensity and temperature, with the equatorial regions ranging from at night to during sunlight. Due to the very small axial tilt, the planet's poles are permanently shadowed. This strongly suggests that water ice could be present in the craters. Above the planet's surface is an extremely tenuous exosphere and a faint magnetic field that is strong enough to deflect solar winds. Mercury has no natural satellites. As of the early 2020s, many broad details of Mercury's geological history are still under investigation or pending data from space probes. Like other planets in the Solar System, Mercury was formed approximately 4.5 billion years ago. Its mantle is highly homogeneous, which suggests that Mercury had a magma ocean early in its history, like the Moon. According to current models, Mercury may have a solid silicate crust and mantle overlying a solid outer core, a deeper liquid core layer, and a solid inner core. There are many competing hypotheses about Mercury's origins and development, some of which incorporate collision with planetesimals and rock vaporization. Nomenclature Historically, humans knew Mercury by different names depending on whether it was an evening star or a morning star. By about 350 BC, the ancient Greeks had realized the two stars were one. They knew the planet as , meaning "twinkling", and , for its fleeting motion, a name that is retained in modern Greek ( ). The Romans named the planet after the swift-footed Roman messenger god, Mercury (Latin ), whom they equated with the Greek Hermes, because it moves across the sky faster than any other planet, though some associated the planet with Apollo instead, as detailed by Pliny the Elder. The astronomical symbol for Mercury is a stylized version of Hermes' caduceus; a Christian cross was added in the 16th century:. Physical characteristics Mercury is one of four terrestrial planets in the Solar System, which means it is a rocky body like Earth. It is the smallest planet in the Solar System, with an equatorial radius of . Mercury is also smaller—albeit more massive—than the largest natural satellites in the Solar System, Ganymede and Titan. 
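The 176-day solar day quoted above follows directly from the two periods given in the text. A minimal sketch of that arithmetic in Python, using only the rounded values from this article rather than a precise ephemeris:

```python
# For a planet that rotates prograde, successive sunrises are separated by
# 1/solar_day = 1/rotation_period - 1/orbital_period (all in the same units).
sidereal_day  = 58.65   # Mercury's rotation period, in Earth days (from the text)
sidereal_year = 88.0    # Mercury's orbital period, in Earth days (from the text)

solar_day = 1 / (1 / sidereal_day - 1 / sidereal_year)

print(round(solar_day))            # ~176 Earth days
print(round(2 * sidereal_year))    # 176: twice the sidereal year, as stated above
```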
Mercury consists of approximately 70% metallic and 30% silicate material. Internal structure Mercury appears to have a solid silicate crust and mantle overlying a solid, metallic outer core layer, a deeper liquid core layer, and a solid inner core. The composition of the iron-rich core remains uncertain, but it likely contains nickel, silicon and perhaps sulfur and carbon, plus trace amounts of other elements. The planet's density is the second highest in the Solar System at 5.427 g/cm3, only slightly less than Earth's density of 5.515 g/cm3. If the effect of gravitational compression were to be factored out from both planets, the materials of which Mercury is made would be denser than those of Earth, with an uncompressed density of 5.3 g/cm3 versus Earth's 4.4 g/cm3. Mercury's density can be used to infer details of its inner structure. Although Earth's high density results appreciably from gravitational compression, particularly at the core, Mercury is much smaller and its inner regions are not as compressed. Therefore, for it to have such a high density, its core must be large and rich in iron. The radius of Mercury's core is estimated to be , based on interior models constrained to be consistent with a moment of inertia factor of . Hence, Mercury's core occupies about 57% of its volume; for Earth this proportion is 17%. Research published in 2007 suggests that Mercury has a molten core. The mantle-crust layer is in total thick. Projections differ as to the size of the crust specifically; data from the Mariner 10 and MESSENGER probes suggests a thickness of , whereas an Airy isostasy model suggests a thickness of . One distinctive feature of Mercury's surface is the presence of numerous narrow ridges, extending up to several hundred kilometers in length. It is thought that these were formed as Mercury's core and mantle cooled and contracted at a time when the crust had already solidified. Mercury's core has a higher iron content than that of any other planet in the Solar System, and several theories have been proposed to explain this. The most widely accepted theory is that Mercury originally had a metal–silicate ratio similar to common chondrite meteorites, thought to be typical of the Solar System's rocky matter, and a mass approximately 2.25 times its current mass. Early in the Solar System's history, Mercury may have been struck by a planetesimal of approximately Mercury's mass and several thousand kilometers across. The impact would have stripped away much of the original crust and mantle, leaving the core behind as a relatively major component. A similar process, known as the giant impact hypothesis, has been proposed to explain the formation of Earth's Moon. Alternatively, Mercury may have formed from the solar nebula before the Sun's energy output had stabilized. It would initially have had twice its present mass, but as the protosun contracted, temperatures near Mercury could have been between 2,500 and 3,500 K and possibly even as high as 10,000 K. Much of Mercury's surface rock could have been vaporized at such temperatures, forming an atmosphere of "rock vapor" that could have been carried away by the solar wind. A third hypothesis proposes that the solar nebula caused drag on the particles from which Mercury was accreting, which meant that lighter particles were lost from the accreting material and not gathered by Mercury. Each hypothesis predicts a different surface composition, and two space missions have been tasked with making observations of this composition. 
The first MESSENGER, which ended in 2015, found higher-than-expected potassium and sulfur levels on the surface, suggesting that the giant impact hypothesis and vaporization of the crust and mantle did not occur because said potassium and sulfur would have been driven off by the extreme heat of these events. BepiColombo, which will arrive at Mercury in 2025, will make observations to test these hypotheses. The findings so far would seem to favor the third hypothesis; however, further analysis of the data is needed. Surface geology Mercury's surface is similar in appearance to that of the Moon, showing extensive mare-like plains and heavy cratering, indicating that it has been geologically inactive for billions of years. It is more heterogeneous than the surface of Mars or the Moon, both of which contain significant stretches of similar geology, such as maria and plateaus. Albedo features are areas of markedly different reflectivity, which include impact craters, the resulting ejecta, and ray systems. Larger albedo features correspond to higher reflectivity plains. Mercury has "wrinkle-ridges" (dorsa), Moon-like highlands, mountains (montes), plains (planitiae), escarpments (rupes), and valleys (valles). The planet's mantle is chemically heterogeneous, suggesting the planet went through a magma ocean phase early in its history. Crystallization of minerals and convective overturn resulted in a layered, chemically heterogeneous crust with large-scale variations in chemical composition observed on the surface. The crust is low in iron but high in sulfur, resulting from the stronger early chemically reducing conditions than is found on other terrestrial planets. The surface is dominated by iron-poor pyroxene and olivine, as represented by enstatite and forsterite, respectively, along with sodium-rich plagioclase and minerals of mixed magnesium, calcium, and iron-sulfide. The less reflective regions of the crust are high in carbon, most likely in the form of graphite. Names for features on Mercury come from a variety of sources and are set according to the IAU planetary nomenclature system. Names coming from people are limited to the deceased. Craters are named for artists, musicians, painters, and authors who have made outstanding or fundamental contributions to their field. Ridges, or dorsa, are named for scientists who have contributed to the study of Mercury. Depressions or fossae are named for works of architecture. Montes are named for the word "hot" in a variety of languages. Plains or planitiae are named for Mercury in various languages. Escarpments or rupēs are named for ships of scientific expeditions. Valleys or valles are named for abandoned cities, towns, or settlements of antiquity. Impact basins and craters Mercury was heavily bombarded by comets and asteroids during and shortly following its formation 4.6 billion years ago, as well as during a possibly separate subsequent episode called the Late Heavy Bombardment that ended 3.8 billion years ago. Mercury received impacts over its entire surface during this period of intense crater formation, facilitated by the lack of any atmosphere to slow impactors down. During this time Mercury was volcanically active; basins were filled by magma, producing smooth plains similar to the maria found on the Moon. One of the most unusual craters is Apollodorus, or "the Spider", which hosts a series of radiating troughs extending outwards from its impact site. 
Craters on Mercury range in diameter from small bowl-shaped cavities to multi-ringed impact basins hundreds of kilometers across. They appear in all states of degradation, from relatively fresh rayed craters to highly degraded crater remnants. Mercurian craters differ subtly from lunar craters in that the area blanketed by their ejecta is much smaller, a consequence of Mercury's stronger surface gravity. According to International Astronomical Union rules, each new crater must be named after an artist who was famous for more than fifty years, and dead for more than three years, before the date the crater is named. The largest known crater is Caloris Planitia, or Caloris Basin, with a diameter of . The impact that created the Caloris Basin was so powerful that it caused lava eruptions and left a concentric mountainous ring ~ tall surrounding the impact crater. The floor of the Caloris Basin is filled by a geologically distinct flat plain, broken up by ridges and fractures in a roughly polygonal pattern. It is not clear whether they were volcanic lava flows induced by the impact or a large sheet of impact melt. At the antipode of the Caloris Basin is a large region of unusual, hilly terrain known as the "Weird Terrain". One hypothesis for its origin is that shock waves generated during the Caloris impact traveled around Mercury, converging at the basin's antipode (180 degrees away). The resulting high stresses fractured the surface. Alternatively, it has been suggested that this terrain formed as a result of the convergence of ejecta at this basin's antipode. Overall, 46 impact basins have been identified. A notable basin is the -wide, multi-ring Tolstoj Basin that has an ejecta blanket extending up to from its rim and a floor that has been filled by smooth plains materials. Beethoven Basin has a similar-sized ejecta blanket and a -diameter rim. Like the Moon, the surface of Mercury has likely incurred the effects of space weathering processes, including solar wind and micrometeorite impacts. Plains There are two geologically distinct plains regions on Mercury. Gently rolling, hilly plains in the regions between craters are Mercury's oldest visible surfaces, predating the heavily cratered terrain. These inter-crater plains appear to have obliterated many earlier craters, and show a general paucity of smaller craters below about in diameter. Smooth plains are widespread flat areas that fill depressions of various sizes and bear a strong resemblance to lunar maria. Unlike lunar maria, the smooth plains of Mercury have the same albedo as the older inter-crater plains. Despite a lack of unequivocally volcanic characteristics, the localization and rounded, lobate shape of these plains strongly support volcanic origins. All the smooth plains of Mercury formed significantly later than the Caloris basin, as evidenced by appreciably smaller crater densities than on the Caloris ejecta blanket. Compressional features An unusual feature of Mercury's surface is the numerous compression folds, or rupes, that crisscross the plains. These exist on the Moon, but are much more prominent on Mercury. As Mercury's interior cooled, it contracted and its surface began to deform, creating wrinkle ridges and lobate scarps associated with thrust faults. The scarps can reach lengths of and heights of . These compressional features can be seen on top of other features, such as craters and smooth plains, indicating they are more recent. 
Mapping of the features has suggested a total shrinkage of Mercury's radius in the range of ~. Most activity along the major thrust systems probably ended about 3.6–3.7 billion years ago. Small-scale thrust fault scarps have been found, tens of meters in height and with lengths in the range of a few kilometers, that appear to be less than 50 million years old, indicating that compression of the interior and consequent surface geological activity continue to the present. Volcanism There is evidence for pyroclastic flows on Mercury from low-profile shield volcanoes. Fifty-one pyroclastic deposits have been identified, of which 90% are found within impact craters. A study of the degradation state of the impact craters that host pyroclastic deposits suggests that pyroclastic activity occurred on Mercury over a prolonged interval. A "rimless depression" inside the southwest rim of the Caloris Basin consists of at least nine overlapping volcanic vents, each individually up to in diameter. It is thus a "compound volcano". The vent floors are at least below their brinks and they bear a closer resemblance to volcanic craters sculpted by explosive eruptions or modified by collapse into void spaces created by magma withdrawal back down into a conduit. Scientists could not quantify the age of the volcanic complex system but reported that it could be on the order of a billion years. Surface conditions and exosphere The surface temperature of Mercury ranges from . It never rises above 180 K at the poles, due to the absence of an atmosphere and a steep temperature gradient between the equator and the poles. At perihelion, the equatorial subsolar point is located at longitude 0°W or 180°W, and it climbs to a temperature of about . During aphelion, this occurs at 90° or 270°W and reaches only . On the dark side of the planet, temperatures average . The intensity of sunlight on Mercury's surface ranges between 4.59 and 10.61 times the solar constant (1,370 W·m−2). Although daylight temperatures at the surface of Mercury are generally extremely high, observations strongly suggest that ice (frozen water) exists on Mercury. The floors of deep craters at the poles are never exposed to direct sunlight, and temperatures there remain below 102 K, far lower than the global average. This creates a cold trap where ice can accumulate. Water ice strongly reflects radar, and observations by the 70-meter Goldstone Solar System Radar and the VLA in the early 1990s revealed that there are patches of high radar reflection near the poles. Although ice was not the only possible cause of these reflective regions, astronomers thought it to be the most likely explanation. The presence of water ice was confirmed using MESSENGER images of craters at the north pole. The icy crater regions are estimated to contain about 10^14–10^15 kg of ice, and may be covered by a layer of regolith that inhibits sublimation. By comparison, the Antarctic ice sheet on Earth has a mass of about 4 kg, and Mars's south polar cap contains about 10^16 kg of water. The origin of the ice on Mercury is not yet known, but the two most likely sources are from outgassing of water from the planet's interior and deposition by impacts of comets. Mercury is too small and hot for its gravity to retain any significant atmosphere over long periods of time; it does have a tenuous surface-bounded exosphere at a surface pressure of less than approximately 0.5 nPa (0.005 picobars). 
It includes hydrogen, helium, oxygen, sodium, calcium, potassium, magnesium, silicon, and hydroxide, among others. This exosphere is not stable—atoms are continuously lost and replenished from a variety of sources. Hydrogen atoms and helium atoms probably come from the solar wind, diffusing into Mercury's magnetosphere before later escaping back into space. The radioactive decay of elements within Mercury's crust is another source of helium, as well as sodium and potassium. Water vapor is present, released by a combination of processes such as comets striking its surface, sputtering creating water out of hydrogen from the solar wind and oxygen from rock, and sublimation from reservoirs of water ice in the permanently shadowed polar craters. The detection of high amounts of water-related ions like O+, OH−, and H3O+ was a surprise. Because of the quantities of these ions that were detected in Mercury's space environment, scientists surmise that these molecules were blasted from the surface or exosphere by the solar wind. Sodium, potassium, and calcium were discovered in the atmosphere during the 1980s–1990s, and are thought to result primarily from the vaporization of surface rock struck by micrometeorite impacts including presently from Comet Encke. In 2008, magnesium was discovered by MESSENGER. Studies indicate that, at times, sodium emissions are localized at points that correspond to the planet's magnetic poles. This would indicate an interaction between the magnetosphere and the planet's surface. According to NASA, Mercury is not a suitable planet for Earth-like life. It has a surface boundary exosphere instead of a layered atmosphere, extreme temperatures, and high solar radiation. It is unlikely that any living beings can withstand those conditions. Some parts of the subsurface of Mercury may have been habitable, and perhaps life forms, albeit likely primitive microorganisms, may have existed on the planet. Magnetic field and magnetosphere Despite its small size and slow 59-day-long rotation, Mercury has a significant, and apparently global, magnetic field. According to measurements taken by , it is about 1.1% the strength of Earth's. The magnetic-field strength at Mercury's equator is about . Like that of Earth, Mercury's magnetic field is dipolar and nearly aligned with the planet's spin axis (10° dipolar tilt, compared to 11° for Earth). Measurements from both the and MESSENGER space probes have indicated that the strength and shape of the magnetic field are stable. It is likely that this magnetic field is generated by a dynamo effect, in a manner similar to the magnetic field of Earth. This dynamo effect would result from the circulation of the planet's iron-rich liquid core. Particularly strong tidal heating effects caused by the planet's high orbital eccentricity would serve to keep part of the core in the liquid state necessary for this dynamo effect. Mercury's magnetic field is strong enough to deflect the solar wind around the planet, creating a magnetosphere. The planet's magnetosphere, though small enough to fit within Earth, is strong enough to trap solar wind plasma. This contributes to the space weathering of the planet's surface. Observations taken by the spacecraft detected this low energy plasma in the magnetosphere of the planet's nightside. Bursts of energetic particles in the planet's magnetotail indicate a dynamic quality to the planet's magnetosphere. 
During its second flyby of the planet on October 6, 2008, MESSENGER discovered that Mercury's magnetic field can be extremely "leaky". The spacecraft encountered magnetic "tornadoes"—twisted bundles of magnetic fields connecting the planetary magnetic field to interplanetary space—that were up to wide or a third of the radius of the planet. These twisted magnetic flux tubes, technically known as flux transfer events, form open windows in the planet's magnetic shield through which the solar wind may enter and directly impact Mercury's surface via magnetic reconnection. This also occurs in Earth's magnetic field. The MESSENGER observations showed the reconnection rate was ten times higher at Mercury, but its proximity to the Sun only accounts for about a third of the reconnection rate observed by MESSENGER. Orbit, rotation, and longitude Mercury has the most eccentric orbit of all the planets in the Solar System; its eccentricity is 0.21 with its distance from the Sun ranging from . It takes 87.969 Earth days to complete an orbit. The diagram illustrates the effects of the eccentricity, showing Mercury's orbit overlaid with a circular orbit having the same semi-major axis. Mercury's higher velocity when it is near perihelion is clear from the greater distance it covers in each 5-day interval. In the diagram, the varying distance of Mercury to the Sun is represented by the size of the planet, which is inversely proportional to Mercury's distance from the Sun. This varying distance to the Sun leads to Mercury's surface being flexed by tidal bulges raised by the Sun that are about 17 times stronger than the Moon's on Earth. Combined with a 3:2 spin–orbit resonance of the planet's rotation around its axis, it also results in complex variations of the surface temperature. The resonance makes a single solar day (the length between two meridian transits of the Sun) on Mercury last exactly two Mercury years, or about 176 Earth days. Mercury's orbit is inclined by 7 degrees to the plane of Earth's orbit (the ecliptic), the largest of all eight known solar planets. As a result, transits of Mercury across the face of the Sun can only occur when the planet is crossing the plane of the ecliptic at the time it lies between Earth and the Sun, which is in May or November. This occurs about every seven years on average. Mercury's axial tilt is almost zero, with the best measured value as low as 0.027 degrees. This is significantly smaller than that of Jupiter, which has the second smallest axial tilt of all planets at 3.1 degrees. This means that to an observer at Mercury's poles, the center of the Sun never rises more than 2.1 arcminutes above the horizon. By comparison, the angular size of the Sun as seen from Mercury ranges from to 2 degrees across. At certain points on Mercury's surface, an observer would be able to see the Sun peek up a little more than two-thirds of the way over the horizon, then reverse and set before rising again, all within the same Mercurian day. This is because approximately four Earth days before perihelion, Mercury's angular orbital velocity equals its angular rotational velocity so that the Sun's apparent motion ceases; closer to perihelion, Mercury's angular orbital velocity then exceeds the angular rotational velocity. Thus, to a hypothetical observer on Mercury, the Sun appears to move in a retrograde direction. Four Earth days after perihelion, the Sun's normal apparent motion resumes. 
A similar effect would have occurred if Mercury had been in synchronous rotation: the alternating gain and loss of rotation over a revolution would have caused a libration of 23.65° in longitude. For the same reason, there are two points on Mercury's equator, 180 degrees apart in longitude, at either of which, around perihelion in alternate Mercurian years (once a Mercurian day), the Sun passes overhead, then reverses its apparent motion and passes overhead again, then reverses a second time and passes overhead a third time, taking a total of about 16 Earth-days for this entire process. In the other alternate Mercurian years, the same thing happens at the other of these two points. The amplitude of the retrograde motion is small, so the overall effect is that, for two or three weeks, the Sun is almost stationary overhead, and is at its most brilliant because Mercury is at perihelion, its closest to the Sun. This prolonged exposure to the Sun at its brightest makes these two points the hottest places on Mercury. Maximum temperature occurs when the Sun is at an angle of about 25 degrees past noon due to diurnal temperature lag, at 0.4 Mercury days and 0.8 Mercury years past sunrise. Conversely, there are two other points on the equator, 90 degrees of longitude apart from the first ones, where the Sun passes overhead only when the planet is at aphelion in alternate years, when the apparent motion of the Sun in Mercury's sky is relatively rapid. These points, which are the ones on the equator where the apparent retrograde motion of the Sun happens when it is crossing the horizon as described in the preceding paragraph, receive much less solar heat than the first ones described above. Mercury attains an inferior conjunction (nearest approach to Earth) every 116 Earth days on average, but this interval can range from 105 days to 129 days due to the planet's eccentric orbit. Mercury can come as near as to Earth, and that is slowly declining: The next approach to within is in 2679, and to within in 4487, but it will not be closer to Earth than until 28,622. Its period of retrograde motion as seen from Earth can vary from 8 to 15 days on either side of an inferior conjunction. This large range arises from the planet's high orbital eccentricity. Essentially, because Mercury is closest to the Sun, when taking an average over time, Mercury is most often the closest planet to the Earth, and—in that measure—it is the closest planet to each of the other planets in the Solar System. Longitude convention The longitude convention for Mercury puts the zero of longitude at one of the two hottest points on the surface, as described above. However, when this area was first visited, by , this zero meridian was in darkness, so it was impossible to select a feature on the surface to define the exact position of the meridian. Therefore, a small crater further west was chosen, called Hun Kal, which provides the exact reference point for measuring longitude. The center of Hun Kal defines the 20° west meridian. A 1970 International Astronomical Union resolution suggests that longitudes be measured positively in the westerly direction on Mercury. The two hottest places on the equator are therefore at longitudes 0° W and 180° W, and the coolest points on the equator are at longitudes 90° W and 270° W. However, the MESSENGER project uses an east-positive convention. 
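Several of the orbital figures quoted above (the 87.969-day period, the roughly 116-day average interval between inferior conjunctions, and, under "Surface conditions and exosphere", the 4.59 to 10.61-fold variation in sunlight) follow from just two orbital elements: the semi-major axis and the eccentricity. A minimal sketch, assuming standard published values for those elements, the astronomical unit, the Sun's gravitational parameter, and Earth's sidereal year; it is a consistency check rather than an ephemeris:

```python
import math

# Assumed standard values (not taken from the article text):
AU         = 1.495978707e11       # astronomical unit, metres
GM_SUN     = 1.32712440018e20     # heliocentric gravitational parameter, m^3 s^-2
EARTH_YEAR = 365.256              # Earth's sidereal year, days
a, e       = 0.387098 * AU, 0.205630   # Mercury's semi-major axis and eccentricity

# Kepler's third law gives the orbital period.
period = 2 * math.pi * math.sqrt(a**3 / GM_SUN) / 86400          # days
# Perihelion and aphelion distances.
r_peri, r_aph = a * (1 - e), a * (1 + e)
# Inverse-square law: sunlight relative to the solar constant at 1 AU.
flux_peri, flux_aph = (AU / r_peri) ** 2, (AU / r_aph) ** 2
# Mean interval between inferior conjunctions (synodic period with Earth).
synodic = 1 / (1 / period - 1 / EARTH_YEAR)

print(f"orbital period ~ {period:.3f} days")                         # ~87.97
print(f"perihelion ~ {r_peri / 1e9:.1f} million km, "
      f"aphelion ~ {r_aph / 1e9:.1f} million km")                    # ~46.0 and ~69.8
print(f"sunlight ~ {flux_aph:.2f} to {flux_peri:.2f} x solar constant")  # ~4.6 to 10.6
print(f"synodic period ~ {synodic:.0f} days")                        # ~116
```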
Spin-orbit resonance For many years it was thought that Mercury was synchronously tidally locked with the Sun, rotating once for each orbit and always keeping the same face directed towards the Sun, in the same way that the same side of the Moon always faces Earth. Radar observations in 1965 proved that the planet has a 3:2 spin-orbit resonance, rotating three times for every two revolutions around the Sun. The eccentricity of Mercury's orbit makes this resonance stable—at perihelion, when the solar tide is strongest, the Sun is nearly stationary in Mercury's sky. The 3:2 resonant tidal locking is stabilized by the variance of the tidal force along Mercury's eccentric orbit, acting on a permanent dipole component of Mercury's mass distribution. In a circular orbit there is no such variance, so the only resonance stabilized in such an orbit is at 1:1 (e.g., Earth–Moon), when the tidal force, stretching a body along the "center-body" line, exerts a torque that aligns the body's axis of least inertia (the "longest" axis, and the axis of the aforementioned dipole) to always point at the center. However, with noticeable eccentricity, like that of Mercury's orbit, the tidal force has a maximum at perihelion and therefore stabilizes resonances, like 3:2, ensuring that the planet points its axis of least inertia roughly at the Sun when passing through perihelion. The original reason astronomers thought it was synchronously locked was that, whenever Mercury was best placed for observation, it was always nearly at the same point in its 3:2 resonance, hence showing the same face. This is because, coincidentally, Mercury's rotation period is almost exactly half of its synodic period with respect to Earth. Due to Mercury's 3:2 spin-orbit resonance, a solar day lasts about 176 Earth days. A sidereal day (the period of rotation) lasts about 58.7 Earth days. Simulations indicate that the orbital eccentricity of Mercury varies chaotically from nearly zero (circular) to more than 0.45 over millions of years due to perturbations from the other planets. This was thought to explain Mercury's 3:2 spin-orbit resonance (rather than the more usual 1:1), because this state is more likely to arise during a period of high eccentricity. However, accurate modeling based on a realistic model of tidal response has demonstrated that Mercury was captured into the 3:2 spin-orbit state at a very early stage of its history, within 20 (more likely, 10) million years after its formation. Numerical simulations show that a future secular orbital resonant interaction with the perihelion of Jupiter may cause the eccentricity of Mercury's orbit to increase to the point where there is a 1% chance that the orbit will be destabilized in the next five billion years. If this happens, Mercury may fall into the Sun, collide with Venus, be ejected from the Solar System, or even disrupt the rest of the inner Solar System. Advance of perihelion In 1859, the French mathematician and astronomer Urbain Le Verrier reported that the slow precession of Mercury's orbit around the Sun could not be completely explained by Newtonian mechanics and perturbations by the known planets. He suggested, among possible explanations, that another planet (or perhaps instead a series of smaller "corpuscules") might exist in an orbit even closer to the Sun than that of Mercury, to account for this perturbation. Other explanations considered included a slight oblateness of the Sun. 
The success of the search for Neptune based on its perturbations of the orbit of Uranus led astronomers to place faith in this possible explanation, and the hypothetical planet was named Vulcan, but no such planet was ever found. The observed perihelion precession of Mercury is 5,600 arcseconds (1.5556°) per century relative to Earth, or per century relative to the inertial ICRF. Newtonian mechanics, taking into account all the effects from the other planets and including 0.0254 arcseconds per century due to the oblateness of the Sun, predicts a precession of 5,557 arcseconds (1.5436°) per century relative to Earth, or per century relative to ICRF. In the early 20th century, Albert Einstein's general theory of relativity provided the explanation for the observed precession, by formalizing gravitation as being mediated by the curvature of spacetime. The effect is small: just per century (or 0.43 arcsecond per year, or 0.1035 arcsecond per orbital period) for Mercury; it therefore requires a little over 12.5 million orbits, or 3 million years, for a full excess turn. Similar, but much smaller, effects exist for other Solar System bodies: 8.6247 arcseconds per century for Venus, 3.8387 for Earth, 1.351 for Mars, and 10.05 for 1566 Icarus. Observation Mercury's apparent magnitude is calculated to vary between −2.48 (brighter than Sirius) around superior conjunction and +7.25 (below the limit of naked-eye visibility) around inferior conjunction. The mean apparent magnitude is 0.23 while the standard deviation of 1.78 is the largest of any planet. The mean apparent magnitude at superior conjunction is −1.89 while that at inferior conjunction is +5.93. Observation of Mercury is complicated by its proximity to the Sun, as it is lost in the Sun's glare for much of the time. Mercury can be observed for only a brief period during either morning or evening twilight. Ground-based telescope observations of Mercury reveal only an illuminated partial disk with limited detail. The Hubble Space Telescope cannot observe Mercury at all, due to safety procedures that prevent its pointing too close to the Sun. Because the shift of 0.15 revolutions of Earth in a Mercurian year makes up a seven-Mercurian-year cycle (0.15 × 7 ≈ 1.0), in the seventh Mercurian year, Mercury follows almost exactly (earlier by 7 days) the sequence of phenomena it showed seven Mercurian years before. Like the Moon and Venus, Mercury exhibits phases as seen from Earth. It is "new" at inferior conjunction and "full" at superior conjunction. The planet is rendered invisible from Earth on both of these occasions because of its being obscured by the Sun, except at its new phase during a transit. Mercury is technically brightest as seen from Earth when it is at a full phase. Although Mercury is farthest from Earth when it is full, the greater illuminated area that is visible and the opposition brightness surge more than compensates for the distance. The opposite is true for Venus, which appears brightest when it is a crescent, because it is much closer to Earth than when gibbous. Mercury is best observed at the first and last quarter, although they are phases of lesser brightness. The first and last quarter phases occur at greatest elongation east and west of the Sun, respectively. At both of these times, Mercury's separation from the Sun ranges anywhere from 17.9° at perihelion to 27.8° at aphelion. 
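Two sets of figures quoted in this part of the article can be reproduced from Mercury's orbital elements: the relativistic perihelion advance (about 0.1035 arcseconds per orbit, roughly 43 per century, and a full excess turn after some 12.5 million orbits) and the 17.9° to 27.8° range of greatest elongation. A minimal sketch, assuming standard values for the constants and orbital elements it uses and treating Earth's orbit as circular for the elongation estimate:

```python
import math

# Assumed standard constants and orbital elements (not from the article text):
AU     = 1.495978707e11              # metres
GM_SUN = 1.32712440018e20            # m^3 s^-2
C      = 2.99792458e8                # speed of light, m/s
a, e   = 0.387098 * AU, 0.205630     # Mercury's semi-major axis and eccentricity
T_DAYS = 87.969                      # orbital period in days (quoted in the text)

# Leading-order general-relativistic perihelion advance per orbit (radians):
#   delta_phi = 6 * pi * G * M_sun / (c^2 * a * (1 - e^2))
delta_phi = 6 * math.pi * GM_SUN / (C**2 * a * (1 - e**2))
arcsec_per_orbit   = math.degrees(delta_phi) * 3600
arcsec_per_century = arcsec_per_orbit * (36525 / T_DAYS)
orbits_per_turn    = 360 * 3600 / arcsec_per_orbit

print(f'{arcsec_per_orbit:.4f}" per orbit')                       # ~0.1035
print(f'{arcsec_per_century:.1f}" per century')                   # ~43.0
print(f'{orbits_per_turn / 1e6:.1f} million orbits per full excess turn')

# Greatest elongation, with Earth's orbit approximated as circular at 1 AU:
for label, r in (("perihelion", a * (1 - e)), ("aphelion", a * (1 + e))):
    # prints ~17.9 deg at perihelion and ~27.8 deg at aphelion
    print(f"greatest elongation at {label} ~ {math.degrees(math.asin(r / AU)):.1f} deg")
```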
At greatest western elongation, Mercury rises at its earliest before sunrise, and at greatest eastern elongation, it sets at its latest after sunset. Mercury is more often and easily visible from the Southern Hemisphere than from the Northern. This is because Mercury's maximum western elongation occurs only during early autumn in the Southern Hemisphere, whereas its greatest eastern elongation happens only during late winter in the Southern Hemisphere. In both of these cases, the angle at which the planet's orbit intersects the horizon is maximized, allowing it to rise several hours before sunrise in the former instance and not set until several hours after sundown in the latter from southern mid-latitudes, such as Argentina and South Africa. An alternate method for viewing Mercury involves observing the planet with a telescope during daylight hours when conditions are clear, ideally when it is at its greatest elongation. This allows the planet to be found easily, even when using telescopes with apertures. However, great care must be taken to obstruct the Sun from sight because of the extreme risk for eye damage. This method bypasses the limitation of twilight observing when the ecliptic is located at a low elevation (e.g. on autumn evenings). The planet is higher in the sky and less atmospheric effects affect the view of the planet. Mercury can be viewed as close as 4° to the Sun near superior conjunction when it is almost at its brightest. Mercury can, like several other planets and the brightest stars, be seen during a total solar eclipse. Observation history Ancient astronomers The earliest known recorded observations of Mercury are from the MUL.APIN tablets. These observations were most likely made by an Assyrian astronomer around the 14th century BC. The cuneiform name used to designate Mercury on the MUL.APIN tablets is transcribed as UDU.IDIM.GU\U4.UD ("the jumping planet"). Babylonian records of Mercury date back to the 1st millennium BC. The Babylonians called the planet Nabu after the messenger to the gods in their mythology. The Greco-Egyptian astronomer Ptolemy wrote about the possibility of planetary transits across the face of the Sun in his work Planetary Hypotheses. He suggested that no transits had been observed either because planets such as Mercury were too small to see, or because transits were too infrequent. In ancient China, Mercury was known as "the Hour Star" (Chen-xing ). It was associated with the direction north and the phase of water in the Five Phases system of metaphysics. Modern Chinese, Korean, Japanese and Vietnamese cultures refer to the planet literally as the "water star" (), based on the Five elements. Hindu mythology used the name Budha for Mercury, and this god was thought to preside over Wednesday. The god Odin (or Woden) of Germanic paganism was associated with the planet Mercury and Wednesday. The Maya may have represented Mercury as an owl (or possibly four owls; two for the morning aspect and two for the evening) that served as a messenger to the underworld. In medieval Islamic astronomy, the Andalusian astronomer Abū Ishāq Ibrāhīm al-Zarqālī in the 11th century described the deferent of Mercury's geocentric orbit as being oval, like an egg or a pignon, although this insight did not influence his astronomical theory or his astronomical calculations. 
In the 12th century, Ibn Bajjah observed "two planets as black spots on the face of the Sun", which was later suggested as the transit of Mercury and/or Venus by the Maragha astronomer Qotb al-Din Shirazi in the 13th century. Most such medieval reports of transits were later taken as observations of sunspots. In India, the Kerala school astronomer Nilakantha Somayaji in the 15th century developed a partially heliocentric planetary model in which Mercury orbits the Sun, which in turn orbits Earth, similar to the Tychonic system later proposed by Tycho Brahe in the late 16th century. Ground-based telescopic research The first telescopic observations of Mercury were made by Thomas Harriot and Galileo from 1610. In 1612, Simon Marius observed the brightness of Mercury varied with the planet's orbital position and concluded it had phases "in the same way as Venus and the Moon". In 1631, Pierre Gassendi made the first telescopic observations of the transit of a planet across the Sun when he saw a transit of Mercury predicted by Johannes Kepler. In 1639, Giovanni Zupi used a telescope to discover that the planet had orbital phases similar to Venus and the Moon. The observation demonstrated conclusively that Mercury orbited the Sun. A rare event in astronomy is the passage of one planet in front of another (occultation), as seen from Earth. Mercury and Venus occult each other every few centuries, and the event of May 28, 1737, is the only one historically observed, having been seen by John Bevis at the Royal Greenwich Observatory. The next occultation of Mercury by Venus will be on December 3, 2133. The difficulties inherent in observing Mercury meant that it was far less studied than the other planets. In 1800, Johann Schröter made observations of surface features, claiming to have observed mountains. Friedrich Bessel used Schröter's drawings to erroneously estimate the rotation period as 24 hours and an axial tilt of 70°. In the 1880s, Giovanni Schiaparelli mapped the planet more accurately, and suggested that Mercury's rotational period was 88 days, the same as its orbital period due to tidal locking. This phenomenon is known as synchronous rotation. The effort to map the surface of Mercury was continued by Eugenios Antoniadi, who published a book in 1934 that included both maps and his own observations. Many of the planet's surface features, particularly the albedo features, take their names from Antoniadi's map. In June 1962, Soviet scientists at the Institute of Radio-engineering and Electronics of the USSR Academy of Sciences, led by Vladimir Kotelnikov, became the first to bounce a radar signal off Mercury and receive it, starting radar observations of the planet. Three years later, radar observations by Americans Gordon H. Pettengill and Rolf B. Dyce, using the Arecibo radio telescope in Puerto Rico, showed conclusively that the planet's rotational period was about 59 days. The theory that Mercury's rotation was synchronous had become widely held, and it was a surprise to astronomers when these radio observations were announced. If Mercury were tidally locked, its dark face would be extremely cold, but measurements of radio emission revealed that it was much hotter than expected. Astronomers were reluctant to drop the synchronous rotation theory and proposed alternative mechanisms such as powerful heat-distributing winds to explain the observations. 
In 1965, Italian astronomer Giuseppe Colombo noted that the rotation value was about two-thirds of Mercury's orbital period, and proposed that the planet's orbital and rotational periods were locked into a 3:2 rather than a 1:1 resonance. Data from subsequently confirmed this view. This means that Schiaparelli's and Antoniadi's maps were not "wrong". Instead, the astronomers saw the same features during every second orbit and recorded them, but disregarded those seen in the meantime, when Mercury's other face was toward the Sun, because the orbital geometry meant that these observations were made under poor viewing conditions. Ground-based optical observations did not shed much further light on Mercury, but radio astronomers using interferometry at microwave wavelengths, a technique that enables removal of the solar radiation, were able to discern physical and chemical characteristics of the subsurface layers to a depth of several meters. Not until the first space probe flew past Mercury did many of its most fundamental morphological properties become known. Moreover, technological advances have led to improved ground-based observations. In 2000, high-resolution lucky imaging observations were conducted by the Mount Wilson Observatory Hale telescope. They provided the first views that resolved surface features on the parts of Mercury that were not imaged in the mission. Most of the planet has been mapped by the Arecibo radar telescope, with resolution, including polar deposits in shadowed craters of what may be water ice. Research with space probes Reaching Mercury from Earth poses significant technical challenges, because it orbits so much closer to the Sun than Earth. A Mercury-bound spacecraft launched from Earth must travel over into the Sun's gravitational potential well. Mercury has an orbital speed of , whereas Earth's orbital speed is . Therefore, the spacecraft must make a larger change in velocity (delta-v) to get to Mercury and then enter orbit, as compared to the delta-v required for, say, Mars planetary missions. The potential energy liberated by moving down the Sun's potential well becomes kinetic energy, requiring a delta-v change to do anything other than pass by Mercury. Some portion of this delta-v budget can be provided from a gravity assist during one or more fly-bys of Venus. To land safely or enter a stable orbit the spacecraft would rely entirely on rocket motors. Aerobraking is ruled out because Mercury has a negligible atmosphere. A trip to Mercury requires more rocket fuel than that required to escape the Solar System completely. As a result, only three space probes have visited it so far. A proposed alternative approach would use a solar sail to attain a Mercury-synchronous orbit around the Sun. Mariner 10 The first spacecraft to visit Mercury was NASA's (1974–1975). The spacecraft used the gravity of Venus to adjust its orbital velocity so that it could approach Mercury, making it both the first spacecraft to use this gravitational "slingshot" effect and the first NASA mission to visit multiple planets. provided the first close-up images of Mercury's surface, which immediately showed its heavily cratered nature, and revealed many other types of geological features, such as the giant scarps that were later ascribed to the effect of the planet shrinking slightly as its iron core cools. Unfortunately, the same face of the planet was lit at each of close approaches. 
This made close observation of both sides of the planet impossible, and resulted in the mapping of less than 45% of the planet's surface. The spacecraft made three close approaches to Mercury, the closest of which took it to within of the surface. At the first close approach, instruments detected a magnetic field, to the great surprise of planetary geologists—Mercury's rotation was expected to be much too slow to generate a significant dynamo effect. The second close approach was primarily used for imaging, but at the third approach, extensive magnetic data were obtained. The data revealed that the planet's magnetic field is much like Earth's, which deflects the solar wind around the planet. For many years after the encounters, the origin of Mercury's magnetic field remained the subject of several competing theories. On March 24, 1975, just eight days after its final close approach, Mariner 10 ran out of fuel. Because its orbit could no longer be accurately controlled, mission controllers instructed the probe to shut down. Mariner 10 is thought to be still orbiting the Sun, passing close to Mercury every few months. MESSENGER A second NASA mission to Mercury, named MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging), was launched on August 3, 2004. It made a fly-by of Earth in August 2005, and of Venus in October 2006 and June 2007 to place it onto the correct trajectory to reach an orbit around Mercury. A first fly-by of Mercury occurred on January 14, 2008, a second on October 6, 2008, and a third on September 29, 2009. Most of the hemisphere not imaged by Mariner 10 was mapped during these fly-bys. The probe successfully entered an elliptical orbit around the planet on March 18, 2011. The first orbital image of Mercury was obtained on March 29, 2011. The probe finished a one-year mapping mission, and then entered a one-year extended mission into 2013. In addition to continued observations and mapping of Mercury, MESSENGER observed the 2012 solar maximum. The mission was designed to clear up six key issues: Mercury's high density, its geological history, the nature of its magnetic field, the structure of its core, whether it has ice at its poles, and where its tenuous atmosphere comes from. To this end, the probe carried imaging devices that gathered much-higher-resolution images of much more of Mercury than Mariner 10, assorted spectrometers to determine the abundances of elements in the crust, and magnetometers and devices to measure velocities of charged particles. Measurements of changes in the probe's orbital velocity were expected to be used to infer details of the planet's interior structure. MESSENGER's final maneuver was on April 24, 2015, and it crashed into Mercury's surface on April 30, 2015. The spacecraft's impact with Mercury occurred at 3:26:01 p.m. EDT on April 30, 2015, leaving a crater estimated to be in diameter. BepiColombo The European Space Agency and the Japanese Space Agency developed and launched a joint mission called BepiColombo, which will orbit Mercury with two probes: one to map the planet and the other to study its magnetosphere. Launched on October 20, 2018, BepiColombo is expected to reach Mercury in 2025. It will release a magnetometer probe into an elliptical orbit, then chemical rockets will fire to deposit the mapper probe into a circular orbit. Both probes will operate for one terrestrial year. 
The mapper probe carries an array of spectrometers similar to those on MESSENGER, and will study the planet at many different wavelengths, including infrared, ultraviolet, X-ray, and gamma ray. BepiColombo conducted the first of its six planned Mercury flybys on October 1, 2021, and the sixth was completed on January 9, 2025. The spacecraft will enter the planet's orbit in 2026. Perseverance rover On March 5, 2024, NASA released images of the Martian moons Deimos and Phobos and the planet Mercury transiting the Sun, as viewed by the Perseverance rover on Mars.
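To make the delta-v considerations discussed above a little more concrete, the short Python sketch below estimates the heliocentric delta-v of an idealized two-impulse Hohmann transfer from Earth's orbit to Mercury's orbit. It is only a back-of-the-envelope illustration: it assumes coplanar, circular orbits, ignores the planets' own gravity wells and capture burns, and does not model the Venus gravity assists that missions such as Mariner 10, MESSENGER, and BepiColombo actually used. The constants are standard published values, and the function names are invented here for illustration.

import math

MU_SUN = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11         # astronomical unit, m

r_earth = 1.000 * AU        # mean orbital radii, treated here as circular
r_mercury = 0.387 * AU      # (a rough assumption; Mercury's real orbit is markedly elliptical)

def circular_speed(r):
    """Orbital speed of a circular orbit of radius r around the Sun."""
    return math.sqrt(MU_SUN / r)

def hohmann_delta_v(r1, r2):
    """Total delta-v for a two-impulse Hohmann transfer between circular orbits r1 and r2."""
    a_transfer = (r1 + r2) / 2.0
    # vis-viva equation: v^2 = mu * (2/r - 1/a)
    v_depart = math.sqrt(MU_SUN * (2.0 / r1 - 1.0 / a_transfer))  # speed on the transfer ellipse at r1
    v_arrive = math.sqrt(MU_SUN * (2.0 / r2 - 1.0 / a_transfer))  # speed on the transfer ellipse at r2
    return abs(v_depart - circular_speed(r1)) + abs(circular_speed(r2) - v_arrive)

print(f"Earth circular speed:   {circular_speed(r_earth) / 1000:.1f} km/s")
print(f"Mercury circular speed: {circular_speed(r_mercury) / 1000:.1f} km/s")
print(f"Hohmann Earth->Mercury delta-v (heliocentric only): {hohmann_delta_v(r_earth, r_mercury) / 1000:.1f} km/s")

Run as written, the sketch prints a circular-orbit speed of roughly 29.8 km/s at Earth's distance and about 48 km/s at Mercury's mean distance, and a total heliocentric delta-v of roughly 17 km/s, which is why gravity assists and long, looping trajectories are so attractive for Mercury missions.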
Physical sciences
Astronomy
null
19702
https://en.wikipedia.org/wiki/Mutation
Mutation
In biology, a mutation is an alteration in the nucleic acid sequence of the genome of an organism, virus, or extrachromosomal DNA. Viral genomes contain either DNA or RNA. Mutations result from errors during DNA or viral replication, mitosis, or meiosis or other types of damage to DNA (such as pyrimidine dimers caused by exposure to ultraviolet radiation), which then may undergo error-prone repair (especially microhomology-mediated end joining), cause an error during other forms of repair, or cause an error during replication (translesion synthesis). Mutations may also result from substitution, insertion or deletion of segments of DNA due to mobile genetic elements. Mutations may or may not produce detectable changes in the observable characteristics (phenotype) of an organism. Mutations play a part in both normal and abnormal biological processes including: evolution, cancer, and the development of the immune system, including junctional diversity. Mutation is the ultimate source of all genetic variation, providing the raw material on which evolutionary forces such as natural selection can act. Mutation can result in many different types of change in sequences. Mutations in genes can have no effect, alter the product of a gene, or prevent the gene from functioning properly or completely. Mutations can also occur in non-genic regions. A 2007 study on genetic variations between different species of Drosophila suggested that, if a mutation changes a protein produced by a gene, the result is likely to be harmful, with an estimated 70% of amino acid polymorphisms that have damaging effects, and the remainder being either neutral or marginally beneficial. Mutation and DNA damage are the two major types of errors that occur in DNA, but they are fundamentally different. DNA damage is a physical alteration in the DNA structure, such as a single or double strand break, a modified guanosine residue in DNA such as 8-hydroxydeoxyguanosine, or a polycyclic aromatic hydrocarbon adduct. DNA damages can be recognized by enzymes, and therefore can be correctly repaired using the complementary undamaged strand in DNA as a template or an undamaged sequence in a homologous chromosome if it is available. If DNA damage remains in a cell, transcription of a gene may be prevented and thus translation into a protein may also be blocked. DNA replication may also be blocked and/or the cell may die. In contrast to a DNA damage, a mutation is an alteration of the base sequence of the DNA. Ordinarily, a mutation cannot be recognized by enzymes once the base change is present in both DNA strands, and thus a mutation is not ordinarily repaired. At the cellular level, mutations can alter protein function and regulation. Unlike DNA damages, mutations are replicated when the cell replicates. At the level of cell populations, cells with mutations will increase or decrease in frequency according to the effects of the mutations on the ability of the cell to survive and reproduce. Although distinctly different from each other, DNA damages and mutations are related because DNA damages often cause errors of DNA synthesis during replication or repair and these errors are a major source of mutation. Overview Mutations can involve the duplication of large sections of DNA, usually through genetic recombination. These duplications are a major source of raw material for evolving new genes, with tens to hundreds of genes duplicated in animal genomes every million years. 
Most genes belong to larger gene families of shared ancestry, detectable by their sequence homology. Novel genes are produced by several methods, commonly through the duplication and mutation of an ancestral gene, or by recombining parts of different genes to form new combinations with new functions. Here, protein domains act as modules, each with a particular and independent function, that can be mixed together to produce genes encoding new proteins with novel properties. For example, the human eye uses four genes to make structures that sense light: three for cone cell or colour vision and one for rod cell or night vision; all four arose from a single ancestral gene. Another advantage of duplicating a gene (or even an entire genome) is that this increases engineering redundancy; this allows one gene in the pair to acquire a new function while the other copy performs the original function. Other types of mutation occasionally create new genes from previously noncoding DNA. Changes in chromosome number may involve even larger mutations, where segments of the DNA within chromosomes break and then rearrange. For example, in the Homininae, two chromosomes fused to produce human chromosome 2; this fusion did not occur in the lineage of the other apes, and they retain these separate chromosomes. In evolution, the most important role of such chromosomal rearrangements may be to accelerate the divergence of a population into new species by making populations less likely to interbreed, thereby preserving genetic differences between these populations. Sequences of DNA that can move about the genome, such as transposons, make up a major fraction of the genetic material of plants and animals, and may have been important in the evolution of genomes. For example, more than a million copies of the Alu sequence are present in the human genome, and these sequences have now been recruited to perform functions such as regulating gene expression. Another effect of these mobile DNA sequences is that when they move within a genome, they can mutate or delete existing genes and thereby produce genetic diversity. Nonlethal mutations accumulate within the gene pool and increase the amount of genetic variation. The abundance of some genetic changes within the gene pool can be reduced by natural selection, while other "more favorable" mutations may accumulate and result in adaptive changes. For example, a butterfly may produce offspring with new mutations. The majority of these mutations will have no effect; but one might change the colour of one of the butterfly's offspring, making it harder (or easier) for predators to see. If this color change is advantageous, the chances of this butterfly's surviving and producing its own offspring are a little better, and over time the number of butterflies with this mutation may form a larger percentage of the population. Neutral mutations are defined as mutations whose effects do not influence the fitness of an individual. These can increase in frequency over time due to genetic drift. It is believed that the overwhelming majority of mutations have no significant effect on an organism's fitness. Also, DNA repair mechanisms are able to mend most changes before they become permanent mutations, and many organisms have mechanisms, such as apoptotic pathways, for eliminating otherwise-permanently mutated somatic cells. Beneficial mutations can improve reproductive success. 
Causes Four classes of mutations are (1) spontaneous mutations (molecular decay), (2) mutations due to error-prone replication bypass of naturally occurring DNA damage (also called error-prone translesion synthesis), (3) errors introduced during DNA repair, and (4) induced mutations caused by mutagens. Scientists may sometimes deliberately introduce mutations into cells or research organisms for the sake of scientific experimentation. One 2017 study claimed that 66% of cancer-causing mutations are random, 29% are due to the environment (the studied population spanned 69 countries), and 5% are inherited. Humans pass on an average of 60 new mutations to their children, but fathers pass on more mutations as they age, with each additional year of paternal age adding about two new mutations to a child. Spontaneous mutation Spontaneous mutations occur with non-zero probability even given a healthy, uncontaminated cell. Naturally occurring oxidative DNA damage is estimated to occur 10,000 times per cell per day in humans and 100,000 times per cell per day in rats. Spontaneous mutations can be characterized by the specific change: Tautomerism – A base is changed by the repositioning of a hydrogen atom, altering the hydrogen bonding pattern of that base, resulting in incorrect base pairing during replication. Theoretical results suggest that proton tunnelling is an important factor in the spontaneous creation of GC tautomers. Depurination – Loss of a purine base (A or G) to form an apurinic site (AP site). Deamination – Hydrolysis changes a normal base to an atypical base containing a keto group in place of the original amine group. Examples include C → U and A → HX (hypoxanthine), which can be corrected by DNA repair mechanisms; and 5MeC (5-methylcytosine) → T, which is less likely to be detected as a mutation because thymine is a normal DNA base. Slipped strand mispairing – Denaturation of the new strand from the template during replication, followed by renaturation in a different spot ("slipping"). This can lead to insertions or deletions. Error-prone replication bypass There is increasing evidence that the majority of spontaneously arising mutations are due to error-prone replication (translesion synthesis) past DNA damage in the template strand. In mice, the majority of mutations are caused by translesion synthesis. Likewise, in yeast, Kunz et al. found that more than 60% of the spontaneous single base pair substitutions and deletions were caused by translesion synthesis. Errors introduced during DNA repair Although naturally occurring double-strand breaks occur at a relatively low frequency in DNA, their repair often causes mutation. Non-homologous end joining (NHEJ) is a major pathway for repairing double-strand breaks. NHEJ involves removal of a few nucleotides to allow somewhat inaccurate alignment of the two ends for rejoining, followed by addition of nucleotides to fill in gaps. As a consequence, NHEJ often introduces mutations. Induced mutation Induced mutations are alterations in the gene after it has come in contact with mutagens and environmental causes. Induced mutations on the molecular level can be caused by: Chemicals, such as hydroxylamine; base analogues (e.g., bromodeoxyuridine (BrdU)); and alkylating agents (e.g., N-ethyl-N-nitrosourea (ENU)). Alkylating agents can mutate both replicating and non-replicating DNA. In contrast, a base analogue can mutate the DNA only when the analogue is incorporated while the DNA is being replicated.
Each of these classes of chemical mutagens has certain effects that then lead to transitions, transversions, or deletions. Other chemical mutagens include agents that form DNA adducts (e.g., ochratoxin A), DNA intercalating agents (e.g., ethidium bromide), DNA crosslinkers, and agents that cause oxidative damage. Nitrous acid converts amine groups on A and C to diazo groups, altering their hydrogen bonding patterns, which leads to incorrect base pairing during replication. Radiation: Ultraviolet light (UV), a form of non-ionizing radiation. Two nucleotide bases in DNA—cytosine and thymine—are most vulnerable to radiation that can change their properties. UV light can induce adjacent pyrimidine bases in a DNA strand to become covalently joined as a pyrimidine dimer. UV radiation, in particular longer-wave UVA, can also cause oxidative damage to DNA. Ionizing radiation: Exposure to ionizing radiation, such as gamma radiation, can result in mutation, possibly resulting in cancer or death. Whereas in former times mutations were assumed to occur by chance, or to be induced by mutagens, molecular mechanisms of mutation have been discovered in bacteria and across the tree of life. As S. Rosenberg states, "These mechanisms reveal a picture of highly regulated mutagenesis, up-regulated temporally by stress responses and activated when cells/organisms are maladapted to their environments—when stressed—potentially accelerating adaptation." Since they are self-induced mutagenic mechanisms that increase the adaptation rate of organisms, they have sometimes been called adaptive mutagenesis mechanisms; they include the SOS response in bacteria, ectopic intrachromosomal recombination, and other chromosomal events such as duplications. Classification of types By effect on structure The sequence of a gene can be altered in a number of ways. Gene mutations have varying effects on health depending on where they occur and whether they alter the function of essential proteins. Mutations in the structure of genes can be classified into several types. Large-scale mutations Large-scale mutations in chromosomal structure include: Amplifications (or gene duplications), repetition of a chromosomal segment, or the presence of an extra piece of a chromosome; a broken piece of a chromosome may become attached to a homologous or non-homologous chromosome so that some of the genes are present in more than two doses, leading to multiple copies of all chromosomal regions and increasing the dosage of the genes located within them. Polyploidy, duplication of entire sets of chromosomes, potentially resulting in a separate breeding population and speciation. Deletions of large chromosomal regions, leading to loss of the genes within those regions. Mutations whose effect is to juxtapose previously separate pieces of DNA, potentially bringing together separate genes to form functionally distinct fusion genes (e.g., bcr-abl). Large-scale changes to the structure of chromosomes, called chromosomal rearrangements, which can lead to a decrease of fitness but also to speciation in isolated, inbred populations. These include: Chromosomal translocations: interchange of genetic parts from nonhomologous chromosomes. Chromosomal inversions: reversing the orientation of a chromosomal segment. Non-homologous chromosomal crossover. Interstitial deletions: an intra-chromosomal deletion that removes a segment of DNA from a single chromosome, thereby apposing previously distant genes.
For example, cells isolated from a human astrocytoma, a type of brain tumour, were found to have a chromosomal deletion removing sequences between the Fused in Glioblastoma (FIG) gene and the receptor tyrosine kinase (ROS), producing a fusion protein (FIG-ROS). The abnormal FIG-ROS fusion protein has constitutively active kinase activity that causes oncogenic transformation (a transformation from normal cells to cancer cells). Loss of heterozygosity: loss of one allele, either by a deletion or a genetic recombination event, in an organism that previously had two different alleles. Small-scale mutations Small-scale mutations affect a gene in one or a few nucleotides. (If only a single nucleotide is affected, they are called point mutations.) Small-scale mutations include: Insertions add one or more extra nucleotides into the DNA. They are usually caused by transposable elements, or errors during replication of repeating elements. Insertions in the coding region of a gene may alter splicing of the mRNA (splice site mutation), or cause a shift in the reading frame (frameshift), both of which can significantly alter the gene product. Insertions can be reversed by excision of the transposable element. Deletions remove one or more nucleotides from the DNA. Like insertions, these mutations can alter the reading frame of the gene. In general, they are irreversible: Though exactly the same sequence might, in theory, be restored by an insertion, transposable elements able to revert a very short deletion (say 1–2 bases) in any location either are highly unlikely to exist or do not exist at all. Substitution mutations, often caused by chemicals or malfunction of DNA replication, exchange a single nucleotide for another. These changes are classified as transitions or transversions. Most common is the transition that exchanges a purine for a purine (A ↔ G) or a pyrimidine for a pyrimidine, (C ↔ T). A transition can be caused by nitrous acid, base mispairing, or mutagenic base analogues such as BrdU. Less common is a transversion, which exchanges a purine for a pyrimidine or a pyrimidine for a purine (C/T ↔ A/G). An example of a transversion is the conversion of adenine (A) into a cytosine (C). Point mutations are modifications of single base pairs of DNA or other small base pairs within a gene. A point mutation can be reversed by another point mutation, in which the nucleotide is changed back to its original state (true reversion) or by second-site reversion (a complementary mutation elsewhere that results in regained gene functionality). As discussed below, point mutations that occur within the protein coding region of a gene may be classified as synonymous or nonsynonymous substitutions, the latter of which in turn can be divided into missense or nonsense mutations. By impact on protein sequence The effect of a mutation on protein sequence depends in part on where in the genome it occurs, especially whether it is in a coding or non-coding region. Mutations in the non-coding regulatory sequences of a gene, such as promoters, enhancers, and silencers, can alter levels of gene expression, but are less likely to alter the protein sequence. Mutations within introns and in regions with no known biological function (e.g. pseudogenes, retrotransposons) are generally neutral, having no effect on phenotype – though intron mutations could alter the protein product if they affect mRNA splicing. 
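As a small, self-contained illustration of the transition/transversion distinction described above, the following Python sketch classifies a single-base substitution by whether the two bases belong to the same chemical class. The function name and interface are invented for illustration and are not part of any standard library.

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def classify_substitution(ref, alt):
    """Classify a point substitution ref -> alt as 'transition' or 'transversion'."""
    ref, alt = ref.upper(), alt.upper()
    if ref == alt or {ref, alt} - (PURINES | PYRIMIDINES):
        raise ValueError("expected two different bases from A, C, G, T")
    same_class = ({ref, alt} <= PURINES) or ({ref, alt} <= PYRIMIDINES)
    return "transition" if same_class else "transversion"

print(classify_substitution("A", "G"))  # transition (purine to purine)
print(classify_substitution("C", "T"))  # transition (pyrimidine to pyrimidine)
print(classify_substitution("A", "C"))  # transversion (purine to pyrimidine)

Transitions keep the base within its chemical class, which is one reason they are observed more often than transversions.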
Mutations that occur in coding regions of the genome are more likely to alter the protein product, and can be categorized by their effect on amino acid sequence: A frameshift mutation is caused by insertion or deletion of a number of nucleotides that is not evenly divisible by three from a DNA sequence. Due to the triplet nature of gene expression by codons, the insertion or deletion can disrupt the reading frame, or the grouping of the codons, resulting in a completely different translation from the original. The earlier in the sequence the deletion or insertion occurs, the more altered the protein produced is. (For example, the code CCU GAC UAC CUA codes for the amino acids proline, aspartic acid, tyrosine, and leucine. If the U in CCU was deleted, the resulting sequence would be CCG ACU ACC UAx, which would instead code for proline, threonine, threonine, and part of another amino acid or perhaps a stop codon (where the x stands for the following nucleotide).) By contrast, any insertion or deletion that is evenly divisible by three is termed an in-frame mutation. A point substitution mutation results in a change in a single nucleotide and can be either synonymous or nonsynonymous. A synonymous substitution replaces a codon with another codon that codes for the same amino acid, so that the produced amino acid sequence is not modified. Synonymous mutations occur due to the degenerate nature of the genetic code. If this mutation does not result in any phenotypic effects, then it is called silent, but not all synonymous substitutions are silent. (There can also be silent mutations in nucleotides outside of the coding regions, such as the introns, because the exact nucleotide sequence is not as crucial as it is in the coding regions, but these are not considered synonymous substitutions.) A nonsynonymous substitution replaces a codon with another codon that codes for a different amino acid, so that the produced amino acid sequence is modified. Nonsynonymous substitutions can be classified as nonsense or missense mutations: A missense mutation changes a nucleotide to cause substitution of a different amino acid. This in turn can render the resulting protein nonfunctional. Such mutations are responsible for diseases such as Epidermolysis bullosa, sickle-cell disease, and SOD1-mediated ALS. On the other hand, if a missense mutation occurs in an amino acid codon that results in the use of a different, but chemically similar, amino acid, then sometimes little or no change is rendered in the protein. For example, a change from AAA to AGA will encode arginine, a chemically similar molecule to the intended lysine. In this latter case the mutation will have little or no effect on phenotype and therefore be neutral. A nonsense mutation is a point mutation in a sequence of DNA that results in a premature stop codon, or a nonsense codon in the transcribed mRNA, and possibly a truncated, and often nonfunctional protein product. This sort of mutation has been linked to different diseases, such as congenital adrenal hyperplasia. (See Stop codon.) By effect on function A mutation becomes an effect on function mutation when the exactitude of functions between a mutated protein and its direct interactor undergoes change. The interactors can be other proteins, molecules, nucleic acids, etc. There are many mutations that fall under the category of by effect on function, but depending on the specificity of the change the mutations listed below will occur. 
Loss-of-function mutations, also called inactivating mutations, result in the gene product having less or no function (being partially or wholly inactivated). When the allele has a complete loss of function (null allele), it is often called an amorph or amorphic mutation in Muller's morphs schema. Phenotypes associated with such mutations are most often recessive. Exceptions are when the organism is haploid, or when the reduced dosage of a normal gene product is not enough for a normal phenotype (this is called haploinsufficiency). Diseases caused by loss-of-function mutations include Gitelman syndrome and cystic fibrosis. Gain-of-function mutations, also called activating mutations, change the gene product such that its effect gets stronger (enhanced activation) or even is superseded by a different and abnormal function. When the new allele is created, a heterozygote containing the newly created allele as well as the original will express the new allele; genetically this defines the mutations as dominant phenotypes. Several of Muller's morphs correspond to gain of function, including hypermorph (increased gene expression) and neomorph (novel function). Dominant negative mutations (also called anti-morphic mutations) have an altered gene product that acts antagonistically to the wild-type allele. These mutations usually result in an altered molecular function (often inactive) and are characterized by a dominant or semi-dominant phenotype. In humans, dominant negative mutations have been implicated in cancer (e.g., mutations in the genes p53, ATM, CEBPA, and PPARgamma). Marfan syndrome is caused by mutations in the FBN1 gene, located on chromosome 15, which encodes fibrillin-1, a glycoprotein component of the extracellular matrix. Marfan syndrome is also an example of dominant negative mutation and haploinsufficiency. Lethal mutations result in rapid organismal death when they occur during development and cause significant reductions of life expectancy in developed organisms. An example of a disease caused by a dominant lethal mutation is Huntington's disease. Null mutations, also known as amorphic mutations, are a form of loss-of-function mutation that completely prohibits the gene's function. The mutation leads to a complete loss of operation at the phenotypic level, with no gene product being formed. Atopic eczema and dermatitis syndrome are common diseases caused by a null mutation of the gene that activates filaggrin. Suppressor mutations suppress the phenotypic effect of another mutation, so that the double mutant appears normal. There are two types: intragenic suppressor mutations occur in the gene where the first mutation occurred, while extragenic suppressor mutations occur in a gene that interacts with the product of the first mutation. A common disease that results from this type of mutation is Alzheimer's disease. Neomorphic mutations are a subset of gain-of-function mutations and are characterized by the synthesis of a new protein product; the gene carrying the mutation typically acquires a novel pattern of expression or a novel molecular function. The result of a neomorphic mutation is that the gene where the mutation occurs undergoes a complete change in function. A back mutation or reversion is a point mutation that restores the original sequence and hence the original phenotype.
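Before turning to fitness effects, the toy Python sketch below illustrates the protein-level classifications from the "By impact on protein sequence" subsection (frameshift, synonymous, missense, and nonsense changes), using the article's example sequence CCU GAC UAC CUA. The codon table is deliberately partial (only the handful of codons needed here), and the helper function translate is an illustrative stand-in rather than a real bioinformatics API.

# Partial RNA codon table; a real implementation would use the full standard genetic code.
CODONS = {
    "CCU": "Pro", "CCG": "Pro", "CCC": "Pro",
    "GAC": "Asp", "UAC": "Tyr", "CUA": "Leu",
    "ACU": "Thr", "ACC": "Thr",
    "AAA": "Lys", "AGA": "Arg",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(rna):
    """Translate an mRNA string codon by codon; '?' marks codons missing from the toy table."""
    return [CODONS.get(rna[i:i + 3], "?") for i in range(0, len(rna) - len(rna) % 3, 3)]

original = "CCUGACUACCUA"
print(translate(original))                 # ['Pro', 'Asp', 'Tyr', 'Leu']

# Frameshift: deleting one nucleotide (not a multiple of three) shifts the reading frame.
frameshifted = original[:2] + original[3:]  # drop the U of the first codon (CCU -> CC)
print(translate(frameshifted))             # ['Pro', 'Thr', 'Thr'] (the trailing incomplete codon 'UA' is dropped)

# Point substitutions: synonymous vs. missense vs. nonsense.
print(CODONS["CCU"], "->", CODONS["CCC"])  # Pro -> Pro : synonymous (silent at the protein level)
print(CODONS["AAA"], "->", CODONS["AGA"])  # Lys -> Arg : missense (a chemically similar residue here)
print(CODONS["UAC"], "->", CODONS["UAA"])  # Tyr -> Stop: nonsense (premature stop codon)

Deleting a single base shifts every downstream codon, which is why frameshifts so often abolish protein function, whereas a synonymous substitution leaves the protein unchanged.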
By effect on fitness (harmful, beneficial, neutral mutations) In genetics, it is sometimes useful to classify mutations as either harmful or beneficial (or neutral): A harmful, or deleterious, mutation decreases the fitness of the organism. Many, but not all, mutations in essential genes are harmful (if a mutation does not change the amino acid sequence in an essential protein, it is harmless in most cases). A beneficial, or advantageous, mutation increases the fitness of the organism. Examples are mutations that lead to antibiotic resistance in bacteria (which are beneficial for the bacteria but usually not for humans). A neutral mutation has no harmful or beneficial effect on the organism. Such mutations occur at a steady rate, forming the basis for the molecular clock. In the neutral theory of molecular evolution, neutral mutations provide genetic drift as the basis for most variation at the molecular level. In animals or plants, most mutations are neutral, given that the vast majority of their genomes is either non-coding or consists of repetitive sequences that have no obvious function ("junk DNA"). Large-scale quantitative mutagenesis screens, in which thousands of millions of mutations are tested, invariably find that the larger fraction of mutations has harmful effects, but a number of beneficial mutations are always found as well. For instance, in a screen of all gene deletions in E. coli, 80% of mutations were negative, but 20% were positive, even though many had only a very small effect on growth (depending on condition). Gene deletions involve removal of whole genes, so point mutations almost always have a much smaller effect. In a similar screen in Streptococcus pneumoniae, but this time with transposon insertions, 76% of insertion mutants were classified as neutral, 16% had a significantly reduced fitness, and 6% were advantageous. This classification is obviously relative and somewhat artificial: a harmful mutation can quickly turn into a beneficial mutation when conditions change. Also, there is a gradient from harmful/beneficial to neutral, as many mutations may have small and mostly negligible effects but become relevant under certain conditions. Also, many traits are determined by hundreds of genes (or loci), so that each locus has only a minor effect. For instance, human height is determined by hundreds of genetic variants ("mutations") but each of them has a very minor effect on height, apart from the impact of nutrition. Height (or size) itself may be more or less beneficial, as the huge range of sizes in animal or plant groups shows. Distribution of fitness effects (DFE) Attempts have been made to infer the distribution of fitness effects (DFE) using mutagenesis experiments and theoretical models applied to molecular sequence data. The DFE, as used to determine the relative abundance of different types of mutations (i.e., strongly deleterious, nearly neutral, or advantageous), is relevant to many evolutionary questions, such as the maintenance of genetic variation, the rate of genomic decay, the maintenance of outcrossing sexual reproduction as opposed to inbreeding, and the evolution of sex and genetic recombination. The DFE can also be assessed by tracking the skewness of the distribution of mutations with putatively severe effects as compared to the distribution of mutations with putatively mild or absent effects. In summary, the DFE plays an important role in predicting evolutionary dynamics.
A variety of approaches have been used to study the DFE, including theoretical, experimental and analytical methods. Mutagenesis experiment: The direct method to investigate the DFE is to induce mutations and then measure the mutational fitness effects, which has already been done in viruses, bacteria, yeast, and Drosophila. For example, most studies of the DFE in viruses used site-directed mutagenesis to create point mutations and measure relative fitness of each mutant. In Escherichia coli, one study used transposon mutagenesis to directly measure the fitness of a random insertion of a derivative of Tn10. In yeast, a combined mutagenesis and deep sequencing approach has been developed to generate high-quality systematic mutant libraries and measure fitness in high throughput. However, given that many mutations have effects too small to be detected and that mutagenesis experiments can detect only mutations of moderately large effect; DNA sequence analysis can provide valuable information about these mutations. Molecular sequence analysis: With rapid development of DNA sequencing technology, an enormous amount of DNA sequence data is available and even more is forthcoming in the future. Various methods have been developed to infer the DFE from DNA sequence data. By examining DNA sequence differences within and between species, we are able to infer various characteristics of the DFE for neutral, deleterious and advantageous mutations. To be specific, the DNA sequence analysis approach allows us to estimate the effects of mutations with very small effects, which are hardly detectable through mutagenesis experiments. One of the earliest theoretical studies of the distribution of fitness effects was done by Motoo Kimura, an influential theoretical population geneticist. His neutral theory of molecular evolution proposes that most novel mutations will be highly deleterious, with a small fraction being neutral. A later proposal by Hiroshi Akashi proposed a bimodal model for the DFE, with modes centered around highly deleterious and neutral mutations. Both theories agree that the vast majority of novel mutations are neutral or deleterious and that advantageous mutations are rare, which has been supported by experimental results. One example is a study done on the DFE of random mutations in vesicular stomatitis virus. Out of all mutations, 39.6% were lethal, 31.2% were non-lethal deleterious, and 27.1% were neutral. Another example comes from a high throughput mutagenesis experiment with yeast. In this experiment it was shown that the overall DFE is bimodal, with a cluster of neutral mutations, and a broad distribution of deleterious mutations. Though relatively few mutations are advantageous, those that are play an important role in evolutionary changes. Like neutral mutations, weakly selected advantageous mutations can be lost due to random genetic drift, but strongly selected advantageous mutations are more likely to be fixed. Knowing the DFE of advantageous mutations may lead to increased ability to predict the evolutionary dynamics. Theoretical work on the DFE for advantageous mutations has been done by John H. Gillespie and H. Allen Orr. They proposed that the distribution for advantageous mutations should be exponential under a wide range of conditions, which, in general, has been supported by experimental studies, at least for strongly selected advantageous mutations. 
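The following Python sketch is a purely illustrative toy model of a distribution of fitness effects in the spirit of the bimodal picture described above: most mutations are drawn as deleterious or effectively neutral, and only a small fraction as beneficial with exponentially distributed effects. The class weights, thresholds, and distribution parameters are arbitrary assumptions chosen for illustration, not estimates from any of the studies cited here.

import random

random.seed(1)

def draw_fitness_effect():
    """Draw one mutational fitness effect s from a toy DFE:
    many strongly deleterious or lethal, many moderately deleterious,
    many effectively neutral, and rare beneficial effects (exponential tail)."""
    r = random.random()
    if r < 0.30:                      # strongly deleterious / lethal
        return -1.0
    elif r < 0.70:                    # moderately deleterious, gamma-distributed magnitude
        return -min(1.0, random.gammavariate(0.3, 0.1))
    elif r < 0.98:                    # effectively neutral
        return random.gauss(0.0, 0.001)
    else:                             # rare beneficial, exponentially distributed effect size
        return random.expovariate(1 / 0.01)

effects = [draw_fitness_effect() for _ in range(100_000)]
n = len(effects)
print("fraction beneficial (s > 0.001):  ", sum(s > 0.001 for s in effects) / n)
print("fraction ~neutral (|s| <= 0.001): ", sum(abs(s) <= 0.001 for s in effects) / n)
print("fraction deleterious (s < -0.001):", sum(s < -0.001 for s in effects) / n)

Sampling a large number of effects and tabulating the classes mimics, very roughly, what a mutagenesis screen or a sequence-based inference method attempts to measure empirically.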
In general, it is accepted that the majority of mutations are neutral or deleterious, with advantageous mutations being rare; however, the proportion of types of mutations varies between species. This indicates two important points: first, the proportion of effectively neutral mutations is likely to vary between species, resulting from dependence on effective population size; second, the average effect of deleterious mutations varies dramatically between species. In addition, the DFE also differs between coding regions and noncoding regions, with the DFE of noncoding DNA containing more weakly selected mutations. By inheritance In multicellular organisms with dedicated reproductive cells, mutations can be subdivided into germline mutations, which can be passed on to descendants through their reproductive cells, and somatic mutations (also called acquired mutations), which involve cells outside the dedicated reproductive group and which are not usually transmitted to descendants. Diploid organisms (e.g., humans) contain two copies of each gene—a paternal and a maternal allele. Based on the occurrence of mutation on each chromosome, we may classify mutations into three types. A wild type or homozygous non-mutated organism is one in which neither allele is mutated. A heterozygous mutation is a mutation of only one allele. A homozygous mutation is an identical mutation of both the paternal and maternal alleles. Compound heterozygous mutations or a genetic compound consists of two different mutations in the paternal and maternal alleles. Germline mutation A germline mutation in the reproductive cells of an individual gives rise to a constitutional mutation in the offspring, that is, a mutation that is present in every cell. A constitutional mutation can also occur very soon after fertilization, or continue from a previous constitutional mutation in a parent. A germline mutation can be passed down through subsequent generations of organisms. The distinction between germline and somatic mutations is important in animals that have a dedicated germline to produce reproductive cells. However, it is of little value in understanding the effects of mutations in plants, which lack a dedicated germline. The distinction is also blurred in those animals that reproduce asexually through mechanisms such as budding, because the cells that give rise to the daughter organisms also give rise to that organism's germline. A new germline mutation not inherited from either parent is called a de novo mutation. Somatic mutation A change in the genetic structure that is not inherited from a parent, and also not passed to offspring, is called a somatic mutation. Somatic mutations are not inherited by an organism's offspring because they do not affect the germline. However, they are passed down to all the progeny of a mutated cell within the same organism during mitosis. A major section of an organism therefore might carry the same mutation. These types of mutations are usually prompted by environmental causes, such as ultraviolet radiation or any exposure to certain harmful chemicals, and can cause diseases including cancer. With plants, some somatic mutations can be propagated without the need for seed production, for example, by grafting and stem cuttings. These type of mutation have led to new types of fruits, such as the "Delicious" apple and the "Washington" navel orange. 
Human and mouse somatic cells have a mutation rate more than ten times higher than the germline mutation rate for both species; mice have a higher rate of both somatic and germline mutations per cell division than humans. The disparity in mutation rate between the germline and somatic tissues likely reflects the greater importance of genome maintenance in the germline than in the soma. Special classes Conditional mutation is a mutation that has wild-type (or less severe) phenotype under certain "permissive" environmental conditions and a mutant phenotype under certain "restrictive" conditions. For example, a temperature-sensitive mutation can cause cell death at high temperature (restrictive condition), but might have no deleterious consequences at a lower temperature (permissive condition). These mutations are non-autonomous, as their manifestation depends upon presence of certain conditions, as opposed to other mutations which appear autonomously. The permissive conditions may be temperature, certain chemicals, light or mutations in other parts of the genome. In vivo mechanisms like transcriptional switches can create conditional mutations. For instance, association of Steroid Binding Domain can create a transcriptional switch that can change the expression of a gene based on the presence of a steroid ligand. Conditional mutations have applications in research as they allow control over gene expression. This is especially useful studying diseases in adults by allowing expression after a certain period of growth, thus eliminating the deleterious effect of gene expression seen during stages of development in model organisms. DNA Recombinase systems like Cre-Lox recombination used in association with promoters that are activated under certain conditions can generate conditional mutations. Dual Recombinase technology can be used to induce multiple conditional mutations to study the diseases which manifest as a result of simultaneous mutations in multiple genes. Certain inteins have been identified which splice only at certain permissive temperatures, leading to improper protein synthesis and thus, loss-of-function mutations at other temperatures. Conditional mutations may also be used in genetic studies associated with ageing, as the expression can be changed after a certain time period in the organism's lifespan. Replication timing quantitative trait loci affects DNA replication. Nomenclature In order to categorize a mutation as such, the "normal" sequence must be obtained from the DNA of a "normal" or "healthy" organism (as opposed to a "mutant" or "sick" one), it should be identified and reported; ideally, it should be made publicly available for a straightforward nucleotide-by-nucleotide comparison, and agreed upon by the scientific community or by a group of expert geneticists and biologists, who have the responsibility of establishing the standard or so-called "consensus" sequence. This step requires a tremendous scientific effort. Once the consensus sequence is known, the mutations in a genome can be pinpointed, described, and classified. The committee of the Human Genome Variation Society (HGVS) has developed the standard human sequence variant nomenclature, which should be used by researchers and DNA diagnostic centers to generate unambiguous mutation descriptions. In principle, this nomenclature can also be used to describe mutations in other organisms. The nomenclature specifies the type of mutation and base or amino acid changes. 
Nucleotide substitution (e.g., 76A>T) – The number is the position of the nucleotide from the 5' end; the first letter represents the wild-type nucleotide, and the second letter represents the nucleotide that replaced the wild type. In the given example, the adenine at the 76th position was replaced by a thymine. If it becomes necessary to differentiate between mutations in genomic DNA, mitochondrial DNA, and RNA, a simple convention is used. For example, if the 100th base of a nucleotide sequence mutated from G to C, then it would be written as g.100G>C if the mutation occurred in genomic DNA, m.100G>C if the mutation occurred in mitochondrial DNA, or r.100g>c if the mutation occurred in RNA. Note that, for mutations in RNA, the nucleotide code is written in lower case. Amino acid substitution (e.g., D111E) – The first letter is the one letter code of the wild-type amino acid, the number is the position of the amino acid from the N-terminus, and the second letter is the one letter code of the amino acid present in the mutation. Nonsense mutations are represented with an X for the second amino acid (e.g. D111X). Amino acid deletion (e.g., ΔF508) – The Greek letter Δ (delta) indicates a deletion. The letter refers to the amino acid present in the wild type and the number is the position, counted from the N-terminus, that the amino acid would occupy if it were present as in the wild type. Mutation rates Mutation rates vary substantially across species, and the evolutionary forces that generally determine mutation rates are the subject of ongoing investigation. In humans, the mutation rate is about 50–90 de novo mutations per genome per generation, that is, each human accumulates about 50–90 novel mutations that were not present in his or her parents. This number has been established by sequencing thousands of human trios, that is, two parents and at least one child. The genomes of RNA viruses are based on RNA rather than DNA. The RNA viral genome can be double-stranded (as in DNA) or single-stranded. In some of these viruses (such as the single-stranded human immunodeficiency virus), replication occurs quickly, and there are no mechanisms to check the genome for accuracy. This error-prone process often results in mutations. The rate of de novo mutations, whether germline or somatic, varies among organisms. Individuals within the same species can even express varying rates of mutation. Overall, rates of de novo mutations are low compared to those of inherited mutations, which categorizes them as rare forms of genetic variation. Many observations have associated higher rates of de novo mutation with increasing paternal age. In sexually reproducing organisms, this is usually attributed to the comparatively higher number of cell divisions in the paternal, sperm-producing germline. Errors during the DNA replication that accompanies gametogenesis, amplified by the rapid production of sperm cells, create more opportunities for de novo mutations to escape the DNA repair machinery; rapid spermatogenesis both increases the probability of mutation and shortens the time between cell divisions available for repair. Rates of de novo mutations that affect an organism during its development can also increase with certain environmental factors.
For example, certain intensities of exposure to radioactive elements can inflict damage to an organism's genome, heightening rates of mutation. In humans, the appearance of skin cancer during one's lifetime is induced by overexposure to UV radiation that causes mutations in the cellular and skin genome. Randomness of mutations There is a widespread assumption that mutations are (entirely) "random" with respect to their consequences (in terms of probability). This was shown to be wrong as mutation frequency can vary across regions of the genome, with such DNA repair- and mutation-biases being associated with various factors. For instance, Monroe and colleagues demonstrated that—in the studied plant (Arabidopsis thaliana)—more important genes mutate less frequently than less important ones. They demonstrated that mutation is "non-random in a way that benefits the plant". Additionally, previous experiments typically used to demonstrate mutations being random with respect to fitness (such as the Fluctuation Test and Replica plating) have been shown to only support the weaker claim that those mutations are random with respect to external selective constraints, not fitness as a whole. Disease causation Changes in DNA caused by mutation in a coding region of DNA can cause errors in protein sequence that may result in partially or completely non-functional proteins. Each cell, in order to function correctly, depends on thousands of proteins to function in the right places at the right times. When a mutation alters a protein that plays a critical role in the body, a medical condition can result. One study on the comparison of genes between different species of Drosophila suggests that if a mutation does change a protein, the mutation will most likely be harmful, with an estimated 70 per cent of amino acid polymorphisms having damaging effects, and the remainder being either neutral or weakly beneficial. Some mutations alter a gene's DNA base sequence but do not change the protein made by the gene. Studies have shown that only 7% of point mutations in noncoding DNA of yeast are deleterious and 12% in coding DNA are deleterious. The rest of the mutations are either neutral or slightly beneficial. Inherited disorders If a mutation is present in a germ cell, it can give rise to offspring that carries the mutation in all of its cells. This is the case in hereditary diseases. In particular, if there is a mutation in a DNA repair gene within a germ cell, humans carrying such germline mutations may have an increased risk of cancer. A list of 34 such germline mutations is given in the article DNA repair-deficiency disorder. An example of one is albinism, a mutation that occurs in the OCA1 or OCA2 gene. Individuals with this disorder are more prone to many types of cancers, other disorders and have impaired vision. DNA damage can cause an error when the DNA is replicated, and this error of replication can cause a gene mutation that, in turn, could cause a genetic disorder. DNA damages are repaired by the DNA repair system of the cell. Each cell has a number of pathways through which enzymes recognize and repair damages in DNA. Because DNA can be damaged in many ways, the process of DNA repair is an important way in which the body protects itself from disease. Once DNA damage has given rise to a mutation, the mutation cannot be repaired. Role in carcinogenesis On the other hand, a mutation may occur in a somatic cell of an organism. 
Such mutations will be present in all descendants of this cell within the same organism. The accumulation of certain mutations over generations of somatic cells is part of cause of malignant transformation, from normal cell to cancer cell. Cells with heterozygous loss-of-function mutations (one good copy of gene and one mutated copy) may function normally with the unmutated copy until the good copy has been spontaneously somatically mutated. This kind of mutation happens often in living organisms, but it is difficult to measure the rate. Measuring this rate is important in predicting the rate at which people may develop cancer. Point mutations may arise from spontaneous mutations that occur during DNA replication. The rate of mutation may be increased by mutagens. Mutagens can be physical, such as radiation from UV rays, X-rays or extreme heat, or chemical (molecules that misplace base pairs or disrupt the helical shape of DNA). Mutagens associated with cancers are often studied to learn about cancer and its prevention. Beneficial and conditional mutations Although mutations that cause changes in protein sequences can be harmful to an organism, on occasions the effect may be positive in a given environment. In this case, the mutation may enable the mutant organism to withstand particular environmental stresses better than wild-type organisms, or reproduce more quickly. In these cases a mutation will tend to become more common in a population through natural selection. That said, the same mutation can be beneficial in one condition and disadvantageous in another condition. Examples include the following: HIV resistance: a specific 32 base pair deletion in human CCR5 (CCR5-Δ32) confers HIV resistance to homozygotes and delays AIDS onset in heterozygotes. One possible explanation of the etiology of the relatively high frequency of CCR5-Δ32 in the European population is that it conferred resistance to the bubonic plague in mid-14th century Europe. People with this mutation were more likely to survive infection; thus its frequency in the population increased. This theory could explain why this mutation is not found in Southern Africa, which remained untouched by bubonic plague. A newer theory suggests that the selective pressure on the CCR5 Delta 32 mutation was caused by smallpox instead of the bubonic plague. Malaria resistance: An example of a harmful mutation is sickle-cell disease, a blood disorder in which the body produces an abnormal type of the oxygen-carrying substance haemoglobin in the red blood cells. One-third of all indigenous inhabitants of Sub-Saharan Africa carry the allele, because, in areas where malaria is common, there is a survival value in carrying only a single sickle-cell allele (sickle cell trait). Those with only one of the two alleles of the sickle-cell disease are more resistant to malaria, since the infestation of the malaria Plasmodium is halted by the sickling of the cells that it infests. Antibiotic resistance: Practically all bacteria develop antibiotic resistance when exposed to antibiotics. In fact, bacterial populations already have such mutations that get selected under antibiotic selection. Obviously, such mutations are only beneficial for the bacteria but not for those infected. Lactase persistence. A mutation allowed humans to express the enzyme lactase after they are naturally weaned from breast milk, allowing adults to digest lactose, which is likely one of the most beneficial mutations in recent human evolution. 
Role in evolution By introducing novel genetic qualities to a population of organisms, de novo mutations play a critical role in the combined forces of evolutionary change. However, the genetic diversity generated by mutation alone is often considered a comparatively "weak" evolutionary force. Although the random emergence of mutations provides the basis for genetic variation across all organic life, this force must be taken into consideration alongside all the other evolutionary forces at play. The role of spontaneous de novo mutations in speciation depends on factors introduced by natural selection, gene flow, and genetic drift. For example, smaller populations with heavy mutational input (high rates of mutation) are prone to increases of genetic variation, which can lead to speciation in future generations. In contrast, larger populations tend to see lesser effects of newly introduced mutated traits; in these conditions, selective forces diminish the frequency of mutated alleles, which are most often deleterious, over time. Compensated pathogenic deviations Compensated pathogenic deviations refer to amino acid residues in a protein sequence that are pathogenic in one species but are wild-type residues in the functionally equivalent protein in another species. Although the amino acid residue is pathogenic in the first species, it is not so in the second species because its pathogenicity is compensated by one or more amino acid substitutions in the second species. The compensatory mutation can occur in the same protein or in another protein with which it interacts. It is critical to understand the effects of compensatory mutations in the context of fixed deleterious mutations, because the fixation of such mutations decreases population fitness. Effective population size refers to the reproducing portion of a population. An increase in this population size has been correlated with a decreased rate of genetic diversity. The position of a population relative to the critical effective population size is essential to determine the effect deleterious alleles will have on fitness: if the population is below the critical effective size, fitness will decrease drastically; however, if the population is above the critical effective size, fitness can increase regardless of deleterious mutations, owing to compensatory alleles. Compensatory mutations in RNA As the function of an RNA molecule is dependent on its structure, the structure of RNA molecules is evolutionarily conserved. Therefore, any mutation that alters the stable structure of an RNA molecule must be compensated by other, compensatory mutations. In the context of RNA, the sequence of the RNA can be considered its 'genotype' and the structure of the RNA can be considered its 'phenotype'. Since RNAs have a relatively simpler composition than proteins, the structure of RNA molecules can be computationally predicted with a high degree of accuracy. Because of this convenience, compensatory mutations have been studied in computational simulations using RNA folding algorithms. Evolutionary mechanism of compensation Compensatory mutations can be explained by the genetic phenomenon of epistasis, whereby the phenotypic effect of one mutation is dependent upon mutation(s) at other loci. While epistasis was originally conceived in the context of interaction between different genes, intragenic epistasis has also been studied recently.
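As a toy illustration of the RNA "sequence as genotype, structure as phenotype" idea above, the Python sketch below checks whether the paired positions of a short helix can still base-pair after mutation. The sequence, the stem coordinates, and the function stem_intact are invented for illustration; real analyses use full RNA folding algorithms rather than a fixed list of pairs.

# Watson-Crick pairs plus the G-U wobble pair commonly tolerated in RNA helices.
WATSON_CRICK = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def stem_intact(seq, pairs):
    """True if every (i, j) position pair in the stem can still base-pair."""
    return all((seq[i], seq[j]) in WATSON_CRICK for i, j in pairs)

pairs = [(0, 11), (1, 10), (2, 9)]         # a 3-bp stem closing a small hairpin (illustrative)
wild_type = "GCGAAAUUUCGC"
print(stem_intact(wild_type, pairs))        # True: G-C, C-G, G-C all pair

single_mutant = "GCAAAAUUUCGC"              # G at position 2 mutated to A: the A-C pair cannot form
print(stem_intact(single_mutant, pairs))    # False: the stem is destabilised

double_mutant = "GCAAAAUUUUGC"              # second mutation, C to U at position 9, gives an A-U pair
print(stem_intact(double_mutant, pairs))    # True: the compensatory mutation rescues the structure

A mutation on one side of the stem breaks the helix, while a second, compensatory mutation at the partner position restores pairing, which is the same logic that underlies compensatory substitutions observed in conserved RNA structures.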
Existence of compensated pathogenic deviations can be explained by 'sign epistasis', in which the effects of a deleterious mutation can be compensated by the presence of an epistatic mutation at another locus. For a given protein, a deleterious mutation (D) and a compensatory mutation (C) can be considered, where C can be in the same protein as D or in a different, interacting protein depending on the context. The fitness effect of C itself could be neutral or somewhat deleterious, such that it can still exist in the population, while the effect of D is deleterious to the extent that it cannot exist in the population. However, when C and D co-occur, the combined fitness effect becomes neutral or positive. Thus, compensatory mutations can bring novelty to proteins by forging new pathways of protein evolution: they allow individuals to travel from one fitness peak to another through the valleys of lower fitness. DePristo et al. 2005 outlined two models to explain the dynamics of compensatory pathogenic deviations (CPDs). In the first hypothesis, P is a pathogenic amino acid mutation and C is a neutral compensatory mutation. Under these conditions, if the pathogenic mutation arises after a compensatory mutation, then P can become fixed in the population. The second model of CPDs states that P and C are both deleterious mutations, resulting in fitness valleys when the mutations occur simultaneously. Using publicly available data, Ferrer-Costa et al. 2007 obtained compensatory mutation and human pathogenic mutation datasets that were characterized to determine what causes CPDs. The results indicate that structural constraints and the location in the protein structure determine whether compensated mutations will occur. Experimental evidence of compensatory mutations Experiment in bacteria Lunzer et al. tested the outcome of swapping divergent amino acids between two orthologous proteins of isopropylmalate dehydrogenase (IMDH). They substituted 168 amino acids in Escherichia coli IMDH that are wild-type residues in Pseudomonas aeruginosa IMDH. They found that over one third of these substitutions compromised IMDH enzymatic activity in the Escherichia coli genetic background. This demonstrated that identical amino acid states can result in different phenotypic states depending on the genetic background. Corrigan et al. 2011 demonstrated how Staphylococcus aureus was able to grow normally without the presence of lipoteichoic acid owing to compensatory mutations. Whole genome sequencing results revealed that when cyclic-di-AMP phosphodiesterase (GdpP) was disrupted in this bacterium, it compensated for the disappearance of the cell wall polymer, resulting in normal cell growth. Research has shown that bacteria can gain drug resistance through compensatory mutations that do not impede, or have only a small effect on, fitness. Previous research from Gagneux et al. 2006 found that laboratory-grown Mycobacterium tuberculosis strains with rifampicin resistance have reduced fitness, whereas drug-resistant clinical strains of this pathogenic bacterium do not have reduced fitness. Comas et al. 2012 used whole genome comparisons between clinical strains and lab-derived mutants to determine the role and contribution of compensatory mutations in drug resistance to rifampicin. Genome analysis revealed that rifampicin-resistant strains carry mutations in rpoA and rpoC. A similar study investigated the bacterial fitness associated with compensatory mutations in rifampin-resistant Escherichia coli.
Results obtained from this study demonstrate that drug resistance is linked to bacterial fitness, as higher fitness costs are linked to greater transcription errors. Experiment in virus Gong et al. collected genotype data for influenza nucleoprotein from different time points and ordered them temporally according to their time of origin. They then isolated 39 amino acid substitutions that occurred at different time points and substituted them into a genetic background that approximated the ancestral genotype. They found that 3 of the 39 substitutions significantly reduced the fitness of the ancestral background. Compensatory mutations are new mutations that arise and have a positive or neutral impact on a population's fitness. Previous research has shown that populations can compensate for detrimental mutations. Burch and Chao tested Fisher's geometric model of adaptive evolution by testing whether bacteriophage φ6 evolves by small steps. Their results showed that bacteriophage φ6 fitness declined rapidly and recovered in small steps. Viral nucleoproteins have been shown to avoid cytotoxic T lymphocytes (CTLs) through arginine-to-glycine substitutions. These substitution mutations impact the fitness of viral nucleoproteins; however, compensatory co-mutations impede fitness declines and help the virus avoid recognition by CTLs. Mutations can thus have three different outcomes: they can be deleterious, they can increase fitness through compensation, or they can be counterbalanced, resulting in compensatory neutral mutations. Application in human evolution and disease In the human genome, the frequency and characteristics of de novo mutations have been studied as important contextual factors in our evolution. Compared to the human reference genome, a typical human genome varies at approximately 4.1 to 5.0 million loci, and the majority of this genetic diversity is shared by nearly 0.5% of the population. The typical human genome also contains 40,000 to 200,000 rare variants observed in less than 0.5% of the population, which can only have arisen from at least one de novo germline mutation in the history of human evolution. De novo mutations have also been studied for the crucial role they play in the persistence of genetic disease in humans. With recent advances in next-generation sequencing (NGS), all types of de novo mutations within the genome can be directly studied, and their detection provides considerable insight into the causes of both rare and common genetic disorders. Currently, the best estimate of the average human germline SNV mutation rate is 1.18 × 10^-8 per site per generation, corresponding to approximately 78 novel mutations per generation. The ability to conduct whole genome sequencing of parents and offspring allows the comparison of mutation rates between generations, narrowing down the possible origins of certain genetic disorders.
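As a quick consistency check of the rate quoted above, the short Python snippet below multiplies the per-site germline rate by an approximate diploid genome size. The genome size is a rough published figure, not taken from this article's references.

# Expected de novo single-nucleotide variants per child (order-of-magnitude check).
snv_rate_per_site = 1.18e-8          # germline SNV mutations per site per generation
haploid_genome_sites = 3.1e9         # approximate haploid human genome size, in base pairs
diploid_sites = 2 * haploid_genome_sites

expected_de_novo_snvs = snv_rate_per_site * diploid_sites
print(f"expected de novo SNVs per child: ~{expected_de_novo_snvs:.0f}")   # ~73

The result, on the order of 70–80 new single-nucleotide variants per child, is consistent with the roughly 78 novel mutations per generation quoted above and with the 50–90 de novo mutations per generation mentioned earlier.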
Biology and health sciences
Evolution
null
19712
https://en.wikipedia.org/wiki/Methanol
Methanol
Methanol (also called methyl alcohol and wood spirit, amongst other names) is an organic chemical compound and the simplest aliphatic alcohol, with the chemical formula (a methyl group linked to a hydroxyl group, often abbreviated as MeOH). It is a light, volatile, colorless and flammable liquid with a distinctive alcoholic odor similar to that of ethanol (potable alcohol), but is more acutely toxic than the latter. Methanol acquired the name wood alcohol because it was once produced chiefly by the destructive distillation of wood. Today, methanol is mainly produced industrially by hydrogenation of carbon monoxide. Methanol consists of a methyl group linked to a polar hydroxyl group. With more than 20 million tons produced annually, it is used as a precursor to other commodity chemicals, including formaldehyde, acetic acid, methyl tert-butyl ether, methyl benzoate, anisole, peroxyacids, as well as a host of more specialised chemicals. Occurrence Small amounts of methanol are present in normal, healthy human individuals. One study found a mean of 4.5 ppm in the exhaled breath of test subjects. The mean endogenous methanol in humans of 0.45 g/d may be metabolized from pectin found in fruit; one kilogram of apple produces up to 1.4 g of pectin (0.6 g of methanol.) Methanol is produced by anaerobic bacteria and phytoplankton. Interstellar medium Methanol is also found in abundant quantities in star-forming regions of space and is used in astronomy as a marker for such regions. It is detected through its spectral emission lines. In 2006, astronomers using the MERLIN array of radio telescopes at Jodrell Bank Observatory discovered a large cloud of methanol in space across. In 2016, astronomers detected methanol in a planet-forming disc around the young star TW Hydrae using the Atacama Large Millimeter Array radio telescope. History In their embalming process, the ancient Egyptians used a mixture of substances, including methanol, which they obtained from the pyrolysis of wood. Pure methanol, however, was first isolated in 1661 by Robert Boyle, when he produced it via the distillation of buxus (boxwood). It later became known as "pyroxylic spirit". In 1834, the French Chemists Jean-Baptiste Dumas and Eugene Peligot determined its elemental composition. They also introduced the word "methylène" to organic chemistry, forming it from Greek methy = "alcoholic liquid" + hȳlē = "forest, wood, timber, material". "Methylène" designated a "radical" that was about 14% hydrogen by weight and contained one carbon atom. This would be , but at the time carbon was thought to have an atomic weight only six times that of hydrogen, so they gave the formula as CH. They then called wood alcohol (l'esprit de bois) "bihydrate de méthylène" (bihydrate because they thought the formula was or ). The term "methyl" was derived in about 1840 by back-formation from "methylene", and was then applied to describe "methyl alcohol". This was shortened to "methanol" in 1892 by the International Conference on Chemical Nomenclature. The suffix -yl, which, in organic chemistry, forms names of carbon groups, is from the word methyl. French chemist Paul Sabatier presented the first process that could be used to produce methanol synthetically in 1905. This process suggested that carbon dioxide and hydrogen could be reacted to produce methanol. 
German chemists Alwin Mittasch and Mathias Pier, working for Badische-Anilin & Soda-Fabrik (BASF), developed a means to convert synthesis gas (a mixture of carbon monoxide, carbon dioxide, and hydrogen) into methanol and received a patent. According to Bozzano and Manenti, BASF's process was first utilized in Leuna, Germany in 1923. Operating conditions consisted of "high" temperatures (between 300 and 400 °C) and pressures (between 250 and 350 atm) with a zinc/chromium oxide catalyst. US patent 1,569,775 () was applied for on 4 September 1924 and issued on 12 January 1926 to BASF; the process used a chromium and manganese oxide catalyst with extremely vigorous conditions: pressures ranging from 50 to 220 atm, and temperatures up to 450 °C. Modern methanol production has been made more efficient through use of catalysts (commonly copper) capable of operating at lower pressures. The modern low pressure methanol (LPM) process was developed by ICI in the late 1960s with the technology patent long since expired. During World War II, methanol was used as a fuel in several German military rocket designs, under the name M-Stoff, and in a roughly 50/50 mixture with hydrazine, known as C-Stoff. The use of methanol as a motor fuel received attention during the oil crises of the 1970s. By the mid-1990s, over 20,000 methanol "flexible fuel vehicles" (FFV) capable of operating on methanol or gasoline were introduced in the US. In addition, low levels of methanol were blended in gasoline fuels sold in Europe during much of the 1980s and early-1990s. Automakers stopped building methanol FFVs by the late-1990s, switching their attention to ethanol-fueled vehicles. While the methanol FFV program was a technical success, rising methanol pricing in the mid- to late-1990s during a period of slumping gasoline pump prices diminished interest in methanol fuels. In the early 1970s, a process was developed by Mobil for producing gasoline fuel from methanol. Between the 1960s and 1980s methanol emerged as a precursor to the feedstock chemicals acetic acid and acetic anhydride. These processes include the Monsanto acetic acid synthesis, Cativa process, and Tennessee Eastman acetic anhydride process. Applications Production of formaldehyde, acetic acid, methyl tert-butyl ether Methanol is primarily converted to formaldehyde, which is widely used in many areas, especially polymers. The conversion entails oxidation: 2 CH3OH + O2 -> 2 CH2O + 2 H2O Acetic acid can be produced from methanol. Methanol and isobutene are combined to give methyl tert-butyl ether (MTBE). MTBE is a major octane booster in gasoline. Methanol to hydrocarbons, olefins, gasoline Condensation of methanol to produce hydrocarbons and even aromatic systems is the basis of several technologies related to gas to liquids. These include methanol-to-hydrocarbons (MtH), methanol to gasoline (MtG), methanol to olefins (MtO), and methanol to propylene (MtP). These conversions are catalyzed by zeolites as heterogeneous catalysts. The MtG process was once commercialized at Motunui in New Zealand. Gasoline additive The European Fuel Quality Directive allows fuel producers to blend up to 3% methanol, with an equal amount of cosolvent, with gasoline sold in Europe. In 2019, it is estimated that China used as much as 7 million tons of methanol as transportation fuels, representing over 5% of their fuel pool. Other chemicals Methanol is the precursor to most simple methylamines, methyl halides, and methyl ethers. 
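Returning to the formaldehyde oxidation shown above (2 CH3OH + O2 -> 2 CH2O + 2 H2O), a minimal stoichiometric sketch gives the theoretical formaldehyde yield per tonne of methanol; it assumes complete conversion and perfect selectivity, which real plants do not achieve.

    # Ideal formaldehyde yield from methanol oxidation (stoichiometry only).
    M_CH3OH = 32.04   # g/mol, methanol
    M_CH2O = 30.03    # g/mol, formaldehyde

    def formaldehyde_tonnes(methanol_tonnes):
        """Tonnes of CH2O per tonnes of CH3OH at 100% conversion and selectivity (1:1 molar ratio)."""
        moles = methanol_tonnes * 1e6 / M_CH3OH
        return moles * M_CH2O / 1e6

    print(round(formaldehyde_tonnes(1.0), 2))   # ~0.94 tonnes of formaldehyde per tonne of methanol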
Methyl esters are produced from methanol, including the transesterification of fats and production of biodiesel via transesterification. Niche and potential uses Energy carrier Methanol is a promising energy carrier because, as a liquid, it is easier to store than hydrogen and natural gas. Its energy density is, however, lower than methane, per kg. Its combustion energy density is 15.6 MJ/L (LHV), whereas that of ethanol is 24 and gasoline is 33 MJ/L. Further advantages for methanol is its ready biodegradability and low environmental toxicity. It does not persist in either aerobic (oxygen-present) or anaerobic (oxygen-absent) environments. The half-life for methanol in groundwater is just one to seven days, while many common gasoline components have half-lives in the hundreds of days (such as benzene at 10–730 days). Since methanol is miscible with water and biodegradable, it is unlikely to accumulate in groundwater, surface water, air or soil. Fuel Methanol is occasionally used to fuel internal combustion engines. It burns forming carbon dioxide and water: 2 CH3OH + 3 O2 -> 2 CO2 + 4 H2O Methanol fuel has been proposed for ground transportation. The chief advantage of a methanol economy is that it could be adapted to gasoline internal combustion engines with minimum modification to the engines and to the infrastructure that delivers and stores liquid fuel. Its energy density, however, is less than gasoline, meaning more frequent fill ups would be required. However, it is equivalent to super high-octane gasoline in horsepower, and most modern computer-controlled fuel injection systems can already use it. Methanol is an alternative fuel for ships that helps the shipping industry meet increasingly strict emissions regulations. It significantly reduces emissions of sulfur oxides (SOx), nitrogen oxides (NOx) and particulate matter. Methanol can be used with high efficiency in marine diesel engines after minor modifications using a small amount of pilot fuel (dual fuel). In China, methanol fuels industrial boilers, which are used extensively to generate heat and steam for various industrial applications and residential heating. Its use is displacing coal, which is under pressure from increasingly stringent environmental regulations. Direct-methanol fuel cells are unique in their low temperature, atmospheric pressure operation, which lets them be greatly miniaturized. This, combined with the relatively easy and safe storage and handling of methanol, may open the possibility of fuel cell-powered consumer electronics, such as laptop computers and mobile phones. Methanol is also a widely used fuel in camping and boating stoves. Methanol burns well in an unpressurized burner, so alcohol stoves are often very simple, sometimes little more than a cup to hold fuel. This lack of complexity makes them a favorite of hikers who spend extended time in the wilderness. Similarly, the alcohol can be gelled to reduce risk of leaking or spilling, as with the brand "Sterno". Methanol is mixed with water and injected into high performance diesel and gasoline engines for an increase of power and a decrease in intake air temperature in a process known as water methanol injection. Other applications Methanol is used as a denaturant for ethanol, the product being known as "denatured alcohol" or "methylated spirit". This was commonly used during the US prohibition to discourage consumption of bootlegged liquor, and ended up causing several deaths. 
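To make the energy-density comparison above concrete, a short sketch converts the quoted lower-heating-value figures (15.6 MJ/L for methanol, 24 MJ/L for ethanol, 33 MJ/L for gasoline) into the tank volume needed to match one litre of gasoline; engine-efficiency differences are ignored.

    # Volume of each fuel needed to carry the energy of one litre of gasoline,
    # using the volumetric LHV figures quoted in the text (MJ/L).
    energy_density = {"methanol": 15.6, "ethanol": 24.0, "gasoline": 33.0}

    for fuel, mj_per_litre in energy_density.items():
        litres_needed = energy_density["gasoline"] / mj_per_litre
        print(f"{fuel}: {litres_needed:.2f} L")
    # methanol: 2.12 L, ethanol: 1.38 L, gasoline: 1.00 L

On this crude measure, roughly twice the tank volume of gasoline is needed to deliver the same energy from methanol, which is the "more frequent fill ups" penalty mentioned above for methanol.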
It is sometimes used as a fuel in alcohol lamps, portable fire pits and camping stoves. Methanol is used as a solvent and as an antifreeze in pipelines and windshield washer fluid. Methanol was used as an automobile coolant antifreeze in the early 1900s. As of May 2018, methanol was banned in the EU for use in windscreen washing or defrosting due to its risk of human consumption as a result of 2012 Czech Republic methanol poisonings. In some wastewater treatment plants, a small amount of methanol is added to wastewater to provide a carbon food source for the denitrifying bacteria, which convert nitrates to nitrogen gas and reduce the nitrification of sensitive aquifers. Methanol is used as a destaining agent in polyacrylamide gel electrophoresis. Production From synthesis gas Carbon monoxide and hydrogen react over a catalyst to produce methanol. Today, the most widely used catalyst is a mixture of copper and zinc oxides, supported on alumina, as first used by ICI in 1966. At 5–10 MPa (50–100 atm) and , the reaction CO + 2 H2 -> CH3OH is characterized by high selectivity (>99.8%). The production of synthesis gas from methane produces three moles of hydrogen for every mole of carbon monoxide, whereas the synthesis consumes only two moles of hydrogen gas per mole of carbon monoxide. One way of dealing with the excess hydrogen is to inject carbon dioxide into the methanol synthesis reactor, where it, too, reacts to form methanol according to the equation CO2 + 3 H2 -> CH3OH + H2O In terms of mechanism, the process occurs via initial conversion of CO into , which is then hydrogenated: CO2 + 3 H2 -> CH3OH + H2O where the byproduct is recycled via the water-gas shift reaction CO + H2O -> CO2 + H2 This gives an overall reaction CO + 2 H2 -> CH3OH which is the same as listed above. In a process closely related to methanol production from synthesis gas, a feed of hydrogen and can be used directly. The main advantage of this process is that captured and hydrogen sourced from electrolysis could be used, removing the dependence on fossil fuels. Biosynthesis The catalytic conversion of methane to methanol is effected by enzymes including methane monooxygenases. These enzymes are mixed-function oxygenases, i.e. oxygenation is coupled with production of water and : CH4 + O2 + NADPH + H+ -> CH3OH + H2O + NAD+ Both Fe- and Cu-dependent enzymes have been characterized. Intense but largely fruitless efforts have been undertaken to emulate this reactivity. Methanol is more easily oxidized than is the feedstock methane, so the reactions tend not to be selective. Some strategies exist to circumvent this problem. Examples include Shilov systems and Fe- and Cu-containing zeolites. These systems do not necessarily mimic the mechanisms employed by metalloenzymes, but draw some inspiration from them. Active sites can vary substantially from those known in the enzymes. For example, a dinuclear active site is proposed in the sMMO enzyme, whereas a mononuclear iron (alpha-oxygen) is proposed in the Fe-zeolite. Global emissions of methanol by plants are estimated at between 180 and 250 million tons per year. This is between two and three times larger than man-made industrial production of methanol. Green methanol As of 2023, 0.2% of global methanol production is produced in ways that have relatively low greenhouse gas emissions; this is known as "green" methanol. Most green methanol is produced from gasification of biomass. Syngas is produced from biomass gasification and further converted into green methanol. 
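A small stoichiometric sketch makes the hydrogen balance described in the synthesis-gas section above explicit: steam reforming of methane delivers three moles of hydrogen per mole of carbon monoxide, methanol synthesis from carbon monoxide consumes only two, and each co-fed mole of carbon dioxide absorbs three. The numbers are ideal reaction stoichiometry, not a process model.

    # Idealized hydrogen balance for methanol synthesis from reformed methane.
    #   Reforming:           CH4 + H2O -> CO + 3 H2
    #   Synthesis from CO:   CO + 2 H2 -> CH3OH
    #   Synthesis from CO2:  CO2 + 3 H2 -> CH3OH + H2O
    H2_FROM_REFORMING_PER_CO = 3
    H2_USED_PER_CO = 2
    H2_USED_PER_CO2 = 3

    surplus_h2_per_co = H2_FROM_REFORMING_PER_CO - H2_USED_PER_CO   # 1 mol H2 left over per mol CO
    co2_per_co = surplus_h2_per_co / H2_USED_PER_CO2                # mol CO2 the surplus can convert

    print(surplus_h2_per_co, round(co2_per_co, 2))   # 1 mol surplus H2, enough for ~0.33 mol CO2 per mol CO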
Another method of producing green methanol involves combining hydrogen, carbon dioxide, and a catalyst under high heat and pressure. To be classified as green methanol, the hydrogen must be green hydrogen, which is produced using renewable electricity. Additionally, the carbon dioxide in this process must be a product of carbon capture and storage or direct air capture or biomass of recent origin. Some definitions of green methanol specify that the carbon dioxide must be captured during the burning of bioenergy. Quality specifications and analysis Methanol is available commercially in various purity grades. Commercial methanol is generally classified according to ASTM purity grades A and AA. Both grade A and grade AA purity are 99.85% methanol by weight. Grade "AA" methanol contains trace amounts of ethanol as well. Methanol for chemical use normally corresponds to Grade AA. In addition to water, typical impurities include acetone and ethanol (which are very difficult to separate by distillation). UV-vis spectroscopy is a convenient method for detecting aromatic impurities. Water content can be determined by the Karl-Fischer titration. Safety Methanol is highly flammable. Its vapours are slightly heavier than air and can travel to a distant ignition source and ignite. Methanol fires should be extinguished with dry chemical, carbon dioxide, water spray or alcohol-resistant foam. Methanol flames are invisible in daylight. Toxicity Ingesting as little as of pure methanol can cause permanent blindness by destruction of the optic nerve. is potentially fatal. The median lethal dose is , i.e., 1–2 mL/kg body weight of pure methanol. The reference dose for methanol is 0.5 mg/kg in a day. Toxic effects begin hours after ingestion, and antidotes can often prevent permanent damage. Because of its similarities in both appearance and odor to ethanol (the alcohol in beverages) or isopropyl alcohol, it is difficult to differentiate between the three. Methanol is toxic by two mechanisms. First, methanol can be fatal due to effects on the central nervous system, acting as a central nervous system depressant in the same manner as ethanol poisoning. Second, in a process of toxication, it is metabolised to formic acid (which is present as the formate ion) via formaldehyde in a process initiated by the enzyme alcohol dehydrogenase in the liver. Methanol is converted to formaldehyde via alcohol dehydrogenase (ADH) and formaldehyde is converted to formic acid (formate) via aldehyde dehydrogenase (ALDH). The conversion to formate via ALDH proceeds completely, with no detectable formaldehyde remaining. Formate is toxic because it inhibits mitochondrial cytochrome c oxidase, causing hypoxia at the cellular level, and metabolic acidosis, among a variety of other metabolic disturbances. Outbreaks of methanol poisoning have occurred primarily due to contamination of drinking alcohol. This is more common in the developing world. In 2013 more than 1700 cases nonetheless occurred in the United States. Those affected are often adult men. Outcomes may be good with early treatment. Toxicity to methanol was described as early as 1856. Because of its toxic properties, methanol is frequently used as a denaturant additive for ethanol manufactured for industrial uses. This addition of methanol exempts industrial ethanol (commonly known as "denatured alcohol" or "methylated spirit") from liquor excise taxation in the US and other countries.
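As a tiny worked example of the reference dose quoted above (0.5 mg per kilogram of body weight per day), the corresponding daily figure for an assumed 70 kg adult is shown below; the body weight is an illustrative assumption only.

    # Daily reference dose of methanol for an assumed body weight (illustrative only).
    reference_dose_mg_per_kg = 0.5   # mg/kg/day, as quoted in the text
    body_weight_kg = 70              # assumed adult body weight

    print(reference_dose_mg_per_kg * body_weight_kg, "mg per day")   # 35.0 mg per day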
Physical sciences
Carbon–oxygen bond
null
19716
https://en.wikipedia.org/wiki/Magnetism
Magnetism
Magnetism is the class of physical attributes that occur through a magnetic field, which allows objects to attract or repel each other. Because both electric currents and magnetic moments of elementary particles give rise to a magnetic field, magnetism is one of two aspects of electromagnetism. The most familiar effects occur in ferromagnetic materials, which are strongly attracted by magnetic fields and can be magnetized to become permanent magnets, producing magnetic fields themselves. Demagnetizing a magnet is also possible. Only a few substances are ferromagnetic; the most common ones are iron, cobalt, nickel, and their alloys. All substances exhibit some type of magnetism. Magnetic materials are classified according to their bulk susceptibility. Ferromagnetism is responsible for most of the effects of magnetism encountered in everyday life, but there are actually several types of magnetism. Paramagnetic substances, such as aluminium and oxygen, are weakly attracted to an applied magnetic field; diamagnetic substances, such as copper and carbon, are weakly repelled; while antiferromagnetic materials, such as chromium, have a more complex relationship with a magnetic field. The force of a magnet on paramagnetic, diamagnetic, and antiferromagnetic materials is usually too weak to be felt and can be detected only by laboratory instruments, so in everyday life, these substances are often described as non-magnetic. The strength of a magnetic field always decreases with distance from the magnetic source, though the exact mathematical relationship between strength and distance varies. Many factors can influence the magnetic field of an object including the magnetic moment of the material, the physical shape of the object, both the magnitude and direction of any electric current present within the object, and the temperature of the object. History Magnetism was first discovered in the ancient world when people noticed that lodestones, naturally magnetized pieces of the mineral magnetite, could attract iron. The word magnet comes from the Greek term μαγνῆτις λίθος magnētis lithos, "the Magnesian stone, lodestone". In ancient Greece, Aristotle attributed the first of what could be called a scientific discussion of magnetism to the philosopher Thales of Miletus, who lived from about 625 BC to about 545 BC. The ancient Indian medical text Sushruta Samhita describes using magnetite to remove arrows embedded in a person's body. In ancient China, the earliest literary reference to magnetism lies in a 4th-century BC book named after its author, Guiguzi. The 2nd-century BC annals, Lüshi Chunqiu, also notes: "The lodestone makes iron approach; some (force) is attracting it." The earliest mention of the attraction of a needle is in a 1st-century work Lunheng (Balanced Inquiries): "A lodestone attracts a needle." The 11th-century Chinese scientist Shen Kuo was the first person to write—in the Dream Pool Essays—of the magnetic needle compass and that it improved the accuracy of navigation by employing the astronomical concept of true north. By the 12th century, the Chinese were known to use the lodestone compass for navigation. They sculpted a directional spoon from lodestone in such a way that the handle of the spoon always pointed south. Alexander Neckam, by 1187, was the first in Europe to describe the compass and its use for navigation. In 1269, Peter Peregrinus de Maricourt wrote the Epistola de magnete, the first extant treatise describing the properties of magnets. 
In 1282, the properties of magnets and the dry compasses were discussed by Al-Ashraf Umar II, a Yemeni physicist, astronomer, and geographer. Leonardo Garzoni's only extant work, the Due trattati sopra la natura, e le qualità della calamita (Two treatises on the nature and qualities of the magnet), is the first known example of a modern treatment of magnetic phenomena. Written in years near 1580 and never published, the treatise had a wide diffusion. In particular, Garzoni is referred to as an expert in magnetism by Niccolò Cabeo, whose Philosophia Magnetica (1629) is just a re-adjustment of Garzoni's work. Garzoni's treatise was known also to Giovanni Battista Della Porta. In 1600, William Gilbert published his De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure (On the Magnet and Magnetic Bodies, and on the Great Magnet the Earth). In this work he describes many of his experiments with his model earth called the terrella. From his experiments, he concluded that the Earth was itself magnetic and that this was the reason compasses pointed north whereas, previously, some believed that it was the pole star Polaris or a large magnetic island on the north pole that attracted the compass. An understanding of the relationship between electricity and magnetism began in 1819 with work by Hans Christian Ørsted, a professor at the University of Copenhagen, who discovered, by the accidental twitching of a compass needle near a wire, that an electric current could create a magnetic field. This landmark experiment is known as Ørsted's Experiment. Jean-Baptiste Biot and Félix Savart, both of whom in 1820 came up with the Biot–Savart law giving an equation for the magnetic field from a current-carrying wire. Around the same time, André-Marie Ampère carried out numerous systematic experiments and discovered that the magnetic force between two DC current loops of any shape is equal to the sum of the individual forces that each current element of one circuit exerts on each other current element of the other circuit. In 1831, Michael Faraday discovered that a time-varying magnetic flux induces a voltage through a wire loop. In 1835, Carl Friedrich Gauss hypothesized, based on Ampère's force law in its original form, that all forms of magnetism arise as a result of elementary point charges moving relative to each other. Wilhelm Eduard Weber advanced Gauss's theory to Weber electrodynamics. From around 1861, James Clerk Maxwell synthesized and expanded many of these insights into Maxwell's equations, unifying electricity, magnetism, and optics into the field of electromagnetism. However, Gauss's interpretation of magnetism is not fully compatible with Maxwell's electrodynamics. In 1905, Albert Einstein used Maxwell's equations in motivating his theory of special relativity, requiring that the laws held true in all inertial reference frames. Gauss's approach of interpreting the magnetic force as a mere effect of relative velocities thus found its way back into electrodynamics to some extent. Electromagnetism has continued to develop into the 21st century, being incorporated into the more fundamental theories of gauge theory, quantum electrodynamics, electroweak theory, and finally the standard model. Sources Magnetism, at its root, arises from three sources: Electric current Spin magnetic moments of elementary particles Changing electric fields The magnetic properties of materials are mainly due to the magnetic moments of their atoms' orbiting electrons. 
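For scale, the natural unit of the electron's magnetic moment is the Bohr magneton and that of nuclear moments is the nuclear magneton; the standard values below are supplied for context and are not taken from the passage:

\mu_B = \frac{e\hbar}{2 m_e} \approx 9.27 \times 10^{-24}\ \mathrm{J/T}, \qquad \mu_N = \frac{e\hbar}{2 m_p} \approx 5.05 \times 10^{-27}\ \mathrm{J/T}, \qquad \frac{\mu_B}{\mu_N} = \frac{m_p}{m_e} \approx 1836,

so nuclear moments are roughly three orders of magnitude smaller than electronic ones.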
The magnetic moments of the nuclei of atoms are typically thousands of times smaller than the electrons' magnetic moments, so they are negligible in the context of the magnetization of materials. Nuclear magnetic moments are nevertheless very important in other contexts, particularly in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). Ordinarily, the enormous number of electrons in a material are arranged such that their magnetic moments (both orbital and intrinsic) cancel out. This is due, to some extent, to electrons combining into pairs with opposite intrinsic magnetic moments as a result of the Pauli exclusion principle (see electron configuration), and combining into filled subshells with zero net orbital motion. In both cases, the electrons preferentially adopt arrangements in which the magnetic moment of each electron is canceled by the opposite moment of another electron. Moreover, even when the electron configuration is such that there are unpaired electrons and/or non-filled subshells, it is often the case that the various electrons in the solid will contribute magnetic moments that point in different, random directions so that the material will not be magnetic. Sometimeseither spontaneously, or owing to an applied external magnetic fieldeach of the electron magnetic moments will be, on average, lined up. A suitable material can then produce a strong net magnetic field. The magnetic behavior of a material depends on its structure, particularly its electron configuration, for the reasons mentioned above, and also on the temperature. At high temperatures, random thermal motion makes it more difficult for the electrons to maintain alignment. Types Diamagnetism Diamagnetism appears in all materials and is the tendency of a material to oppose an applied magnetic field, and therefore, to be repelled by a magnetic field. However, in a material with paramagnetic properties (that is, with a tendency to enhance an external magnetic field), the paramagnetic behavior dominates. Thus, despite its universal occurrence, diamagnetic behavior is observed only in a purely diamagnetic material. In a diamagnetic material, there are no unpaired electrons, so the intrinsic electron magnetic moments cannot produce any bulk effect. In these cases, the magnetization arises from the electrons' orbital motions, which can be understood classically as follows: This description is meant only as a heuristic; the Bohr–Van Leeuwen theorem shows that diamagnetism is impossible according to classical physics, and that a proper understanding requires a quantum-mechanical description. All materials undergo this orbital response. However, in paramagnetic and ferromagnetic substances, the diamagnetic effect is overwhelmed by the much stronger effects caused by the unpaired electrons. Paramagnetism In a paramagnetic material there are unpaired electrons; i.e., atomic or molecular orbitals with exactly one electron in them. While paired electrons are required by the Pauli exclusion principle to have their intrinsic ('spin') magnetic moments pointing in opposite directions, causing their magnetic fields to cancel out, an unpaired electron is free to align its magnetic moment in any direction. When an external magnetic field is applied, these magnetic moments will tend to align themselves in the same direction as the applied field, thus reinforcing it. Ferromagnetism A ferromagnet, like a paramagnetic substance, has unpaired electrons. 
However, in addition to the electrons' intrinsic magnetic moment's tendency to be parallel to an applied field, there is also in these materials a tendency for these magnetic moments to orient parallel to each other to maintain a lowered-energy state. Thus, even in the absence of an applied field, the magnetic moments of the electrons in the material spontaneously line up parallel to one another. Every ferromagnetic substance has its own individual temperature, called the Curie temperature, or Curie point, above which it loses its ferromagnetic properties. This is because the thermal tendency to disorder overwhelms the energy-lowering due to ferromagnetic order. Ferromagnetism only occurs in a few substances; common ones are iron, nickel, cobalt, their alloys, and some alloys of rare-earth metals. Magnetic domains The magnetic moments of atoms in a ferromagnetic material cause them to behave something like tiny permanent magnets. They stick together and align themselves into small regions of more or less uniform alignment called magnetic domains or Weiss domains. Magnetic domains can be observed with a magnetic force microscope to reveal magnetic domain boundaries that resemble white lines in the sketch. There are many scientific experiments that can physically show magnetic fields. When a domain contains too many molecules, it becomes unstable and divides into two domains aligned in opposite directions so that they stick together more stably. When exposed to a magnetic field, the domain boundaries move, so that the domains aligned with the magnetic field grow and dominate the structure (dotted yellow area), as shown at the left. When the magnetizing field is removed, the domains may not return to an unmagnetized state. This results in the ferromagnetic material's being magnetized, forming a permanent magnet. When magnetized strongly enough that the prevailing domain overruns all others to result in only one single domain, the material is magnetically saturated. When a magnetized ferromagnetic material is heated to the Curie point temperature, the molecules are agitated to the point that the magnetic domains lose the organization, and the magnetic properties they cause cease. When the material is cooled, this domain alignment structure spontaneously returns, in a manner roughly analogous to how a liquid can freeze into a crystalline solid. Antiferromagnetism In an antiferromagnet, unlike a ferromagnet, there is a tendency for the intrinsic magnetic moments of neighboring valence electrons to point in opposite directions. When all atoms are arranged in a substance so that each neighbor is anti-parallel, the substance is antiferromagnetic. Antiferromagnets have a zero net magnetic moment because adjacent opposite moment cancels out, meaning that no field is produced by them. Antiferromagnets are less common compared to the other types of behaviors and are mostly observed at low temperatures. In varying temperatures, antiferromagnets can be seen to exhibit diamagnetic and ferromagnetic properties. In some materials, neighboring electrons prefer to point in opposite directions, but there is no geometrical arrangement in which each pair of neighbors is anti-aligned. This is called a canted antiferromagnet or spin ice and is an example of geometrical frustration. Ferrimagnetism Like ferromagnetism, ferrimagnets retain their magnetization in the absence of a field. However, like antiferromagnets, neighboring pairs of electron spins tend to point in opposite directions. 
These two properties are not contradictory, because in the optimal geometrical arrangement, there is more magnetic moment from the sublattice of electrons that point in one direction, than from the sublattice that points in the opposite direction. Most ferrites are ferrimagnetic. The first discovered magnetic substance, magnetite, is a ferrite and was originally believed to be a ferromagnet; Louis Néel disproved this, however, after discovering ferrimagnetism. Superparamagnetism When a ferromagnet or ferrimagnet is sufficiently small, it acts like a single magnetic spin that is subject to Brownian motion. Its response to a magnetic field is qualitatively similar to the response of a paramagnet, but much larger. Nagaoka magnetism Japanese physicist Yosuke Nagaoka conceived of a type of magnetism in a square, two-dimensional lattice where every lattice node had one electron. If one electron was removed under specific conditions, the lattice's energy would be minimal only when all electrons' spins were parallel. A variation on this was achieved experimentally by arranging the atoms in a triangular moiré lattice of molybdenum diselenide and tungsten disulfide monolayers. Applying a weak magnetic field and a voltage led to ferromagnetic behavior when 100–150% more electrons than lattice nodes were present. The extra electrons delocalized and paired with lattice electrons to form doublons. Delocalization was prevented unless the lattice electrons had aligned spins. The doublons thus created localized ferromagnetic regions. The phenomenon took place at 140 millikelvins. Other types of magnetism Metamagnetism Molecule-based magnets Single-molecule magnet Amorphous magnet Electromagnet An electromagnet is a type of magnet in which the magnetic field is produced by an electric current. The magnetic field disappears when the current is turned off. Electromagnets usually consist of a large number of closely spaced turns of wire that create the magnetic field. The wire turns are often wound around a magnetic core made from a ferromagnetic or ferrimagnetic material such as iron; the magnetic core concentrates the magnetic flux and makes a more powerful magnet. The main advantage of an electromagnet over a permanent magnet is that the magnetic field can be quickly changed by controlling the amount of electric current in the winding. However, unlike a permanent magnet that needs no power, an electromagnet requires a continuous supply of current to maintain the magnetic field. Electromagnets are widely used as components of other electrical devices, such as motors, generators, relays, solenoids, loudspeakers, hard disks, MRI machines, scientific instruments, and magnetic separation equipment. Electromagnets are also employed in industry for picking up and moving heavy iron objects such as scrap iron and steel. Electromagnetism was discovered in 1820. Magnetism, electricity, and special relativity As a consequence of Einstein's theory of special relativity, electricity and magnetism are fundamentally interlinked. Both magnetism lacking electricity, and electricity without magnetism, are inconsistent with special relativity, due to such effects as length contraction, time dilation, and the fact that the magnetic force is velocity-dependent. However, when both electricity and magnetism are taken into account, the resulting theory (electromagnetism) is fully consistent with special relativity. 
In particular, a phenomenon that appears purely electric or purely magnetic to one observer may be a mix of both to another, or more generally the relative contributions of electricity and magnetism are dependent on the frame of reference. Thus, special relativity "mixes" electricity and magnetism into a single, inseparable phenomenon called electromagnetism, analogous to how general relativity "mixes" space and time into spacetime. All observations on electromagnetism apply to what might be considered to be primarily magnetism, e.g. perturbations in the magnetic field are necessarily accompanied by a nonzero electric field, and propagate at the speed of light. Magnetic fields in a material In vacuum, where is the vacuum permeability. In a material, The quantity is called magnetic polarization. If the field is small, the response of the magnetization in a diamagnet or paramagnet is approximately linear: the constant of proportionality being called the magnetic susceptibility. If so, In a hard magnet such as a ferromagnet, is not proportional to the field and is generally nonzero even when is zero (see Remanence). Magnetic force The phenomenon of magnetism is "mediated" by the magnetic field. An electric current or magnetic dipole creates a magnetic field, and that field, in turn, imparts magnetic forces on other particles that are in the fields. Maxwell's equations, which simplify to the Biot–Savart law in the case of steady currents, describe the origin and behavior of the fields that govern these forces. Therefore, magnetism is seen whenever electrically charged particles are in motion—for example, from movement of electrons in an electric current, or in certain cases from the orbital motion of electrons around an atom's nucleus. They also arise from "intrinsic" magnetic dipoles arising from quantum-mechanical spin. The same situations that create magnetic fields—charge moving in a current or in an atom, and intrinsic magnetic dipoles—are also the situations in which a magnetic field has an effect, creating a force. Following is the formula for moving charge; for the forces on an intrinsic dipole, see magnetic dipole. When a charged particle moves through a magnetic field B, it feels a Lorentz force F given by the cross product: where is the electric charge of the particle, and v is the velocity vector of the particle Because this is a cross product, the force is perpendicular to both the motion of the particle and the magnetic field. It follows that the magnetic force does no work on the particle; it may change the direction of the particle's movement, but it cannot cause it to speed up or slow down. The magnitude of the force is where is the angle between v and B. One tool for determining the direction of the velocity vector of a moving charge, the magnetic field, and the force exerted is labeling the index finger "V", the middle finger "B", and the thumb "F" with your right hand. When making a gun-like configuration, with the middle finger crossing under the index finger, the fingers represent the velocity vector, magnetic field vector, and force vector, respectively.
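The relations that the two preceding sections rely on can be written compactly in standard notation; the symbols follow the usual convention and are supplied here for clarity rather than quoted from the passage:

\mathbf{B} = \mu_0 \mathbf{H} \quad \text{(vacuum)}, \qquad \mathbf{B} = \mu_0(\mathbf{H} + \mathbf{M}) \quad \text{(in a material)}, \qquad \mathbf{M} = \chi \mathbf{H} \quad \text{(linear response)},

and, for the force on a moving charge,

\mathbf{F} = q\,\mathbf{v} \times \mathbf{B}, \qquad |\mathbf{F}| = |q|\, v\, B \sin\theta,

where \mu_0 is the vacuum permeability, \mu_0 \mathbf{M} is the magnetic polarization, \chi is the magnetic susceptibility, and \theta is the angle between \mathbf{v} and \mathbf{B}.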
Physical sciences
Physics
null
19719
https://en.wikipedia.org/wiki/Filter%20%28mathematics%29
Filter (mathematics)
In mathematics, a filter or order filter is a special subset of a partially ordered set (poset), describing "large" or "eventual" elements. Filters appear in order and lattice theory, but also topology, whence they originate. The notion dual to a filter is an order ideal. Special cases of filters include ultrafilters, which are filters that cannot be enlarged, and describe nonconstructive techniques in mathematical logic. Filters on sets were introduced by Henri Cartan in 1937. Nicolas Bourbaki, in their book Topologie Générale, popularized filters as an alternative to E. H. Moore and Herman L. Smith's 1922 notion of a net; order filters generalize this notion from the specific case of a power set under inclusion to arbitrary partially ordered sets. Nevertheless, the theory of power-set filters retains interest in its own right, in part for substantial applications in topology. Motivation Fix a partially ordered set (poset) . Intuitively, a filter  is a subset of whose members are elements large enough to satisfy some criterion. For instance, if , then the set of elements above is a filter, called the principal filter at . (If and are incomparable elements of , then neither the principal filter at nor is contained in the other.) Similarly, a filter on a set  contains those subsets that are sufficiently large to contain some given . For example, if is the real line and , then the family of sets including in their interior is a filter, called the neighborhood filter at . The in this case is slightly larger than , but it still does not contain any other specific point of the line. The above considerations motivate the upward closure requirement in the definition below: "large enough" objects can always be made larger. To understand the other two conditions, reverse the roles and instead consider as a "locating scheme" to find . In this interpretation, one searches in some space , and expects to describe those subsets of that contain the goal. The goal must be located somewhere; thus the empty set  can never be in . And if two subsets both contain the goal, then should "zoom in" to their common region. An ultrafilter describes a "perfect locating scheme" where each scheme component gives new information (either "look here" or "look elsewhere"). Compactness is the property that "every search is fruitful," or, to put it another way, "every locating scheme ends in a search result." A common use for a filter is to define properties that are satisfied by "generic" elements of some topological space. This application generalizes the "locating scheme" to find points that might be hard to write down explicitly. Definition A subset  of a partially ordered set  is a filter or dual ideal if the following are satisfied: Nontriviality The set is non-empty. Downward directed For every , there is some such that and . Upward closure For every and , the condition implies . If, additionally, , then is said to be a proper filter. Authors in set theory and mathematical logic often require all filters to be proper; this article will eschew that convention. An ultrafilter is a filter contained in no other proper filter. Filter bases A subset  of is a base or basis for if the upper set generated by (i.e., the smallest upwards-closed set containing ) is equal to . Since every filter is upwards-closed, every filter is a base for itself. Moreover, if is nonempty and downward directed, then generates an upper set  that is a filter (for which is a base). 
Such sets are called prefilters, as well as the aforementioned filter base/basis, and is said to be generated or spanned by . A prefilter is proper if and only if it generates a proper filter. Given , the set is the smallest filter containing , and sometimes written . Such a filter is called a principal filter; is said to be the principal element of , or generate . Refinement Suppose and are two prefilters on , and, for each , there is a , such that . Then we say that is than (or refines) ; likewise, is coarser than (or coarsens) . Refinement is a preorder on the set of prefilters. In fact, if also refines , then and are called equivalent, for they generate the same filter. Thus passage from prefilter to filter is an instance of passing from a preordering to associated partial ordering. Special cases Historically, filters generalized to order-theoretic lattices before arbitrary partial orders. In the case of lattices, downward direction can be written as closure under finite meets: for all , one has . Linear filters A linear (ultra)filter is an (ultra)filter on the lattice of vector subspaces of a given vector space, ordered by inclusion. Explicitly, a linear filter on a vector space  is a family  of vector subspaces of such that if and is a vector subspace of that contains , then and . A linear filter is proper if it does not contain . Filters on a set; subbases Given a set , the power set  is partially ordered by set inclusion; filters on this poset are often just called "filters on ," in an abuse of terminology. For such posets, downward direction and upward closure reduce to: Closure under finite intersections If , then so too is . Isotony If and , then . A proper/non-degenerate filter is one that does not contain , and these three conditions (including non-degeneracy) are Henri Cartan's original definition of a filter. It is common — though not universal — to require filters on sets to be proper (whatever one's stance on poset filters); we shall again eschew this convention. Prefilters on a set are proper if and only if they do not contain either. For every subset  of , there is a smallest filter  containing . As with prefilters, is said to generate or span ; a base for is the set  of all finite intersections of . The set is said to be a filter subbase when (and thus ) is proper. Proper filters on sets have the finite intersection property. If , then admits only the improper filter . Free filters A filter is said to be a free if the intersection of its members is empty. A proper principal filter is not free. Since the intersection of any finite number of members of a filter is also a member, no proper filter on a finite set is free, and indeed is the principal filter generated by the common intersection of all of its members. But a nonprincipal filter on an infinite set is not necessarily free: a filter is free if and only if it includes the Fréchet filter (see ). Examples See the image at the top of this article for a simple example of filters on the finite poset . Partially order , the space of real-valued functions on , by pointwise comparison. Then the set of functions "large at infinity,"is a filter on . One can generalize this construction quite far by compactifying the domain and completing the codomain: if is a set with distinguished subset  and is a poset with distinguished element , then is a filter in . The set is a filter in . More generally, if is any directed set, thenis a filter in , called the tail filter. Likewise any net  generates the eventuality filter . 
A tail filter is the eventuality filter for . The Fréchet filter on an infinite set  isIf is a measure space, then the collection is a filter. If , then is also a filter; the Fréchet filter is the case where is counting measure. Given an ordinal , a subset of is called a club if it is closed in the order topology of but has net-theoretic limit . The clubs of form a filter: the club filter, . The previous construction generalizes as follows: any club  is also a collection of dense subsets (in the ordinal topology) of , and meets each element of . Replacing with an arbitrary collection  of dense sets, there "typically" exists a filter meeting each element of , called a generic filter. For countable , the Rasiowa–Sikorski lemma implies that such a filter must exist; for "small" uncountable , the existence of such a filter can be forced through Martin's axiom. Let denote the set of partial orders of limited cardinality, modulo isomorphism. Partially order by: if there exists a strictly increasing . Then the subset of non-atomic partial orders forms a filter. Likewise, if is the set of injective modules over some given commutative ring, of limited cardinality, modulo isomorphism, then a partial order on is: if there exists an injective linear map . Given any infinite cardinal , the modules in that cannot be generated by fewer than elements form a filter. Every uniform structure on a set  is a filter on . Relationship to ideals The dual notion to a filter — that is, the concept obtained by reversing all and exchanging with  — is an order ideal. Because of this duality, any question of filters can be mechanically translated to a question about ideals and vice-versa; in particular, a prime or maximal filter is a filter whose corresponding ideal is (respectively) prime or maximal. A filter is an ultrafilter if and only if the corresponding ideal is minimal. In model theory For every filter  on a set , the set function defined byis finitely additive — a "measure," if that term is construed rather loosely. Moreover, the measures so constructed are defined everywhere if is an ultrafilter. Therefore, the statementcan be considered somewhat analogous to the statement that holds "almost everywhere." That interpretation of membership in a filter is used (for motivation, not actual ) in the theory of ultraproducts in model theory, a branch of mathematical logic. In topology In general topology and analysis, filters are used to define convergence in a manner similar to the role of sequences in a metric space. They unify the concept of a limit across the wide variety of arbitrary topological spaces. To understand the need for filters, begin with the equivalent concept of a net. A sequence is usually indexed by the natural numbers , which are a totally ordered set. Nets generalize the notion of a sequence by replacing with an arbitrary directed set. In certain categories of topological spaces, such as first-countable spaces, sequences characterize most topological properties, but this is not true in general. However, nets — as well as filters — always do characterize those topological properties. Filters do not involve any set external to the topological space , whereas sequences and nets rely on other directed sets. For this reason, the collection of all filters on is always a set, whereas the collection of all -valued nets is a proper class. Neighborhood bases Any point  in the topological space  defines a neighborhood filter or system : namely, the family of all sets containing in their interior. 
A set  of neighborhoods of is a neighborhood base at if generates . Equivalently, is a neighborhood of if and only if there exists such that . Convergent filters and cluster points A prefilter  converges to a point , written , if and only if generates a filter  that contains the neighborhood filter : explicitly, for every neighborhood  of , there is some such that . Less explicitly, if and only if refines , and any neighborhood base at can replace in this condition. Clearly, every neighborhood base at converges to . A filter  (which generates itself) converges to if . The above can also be reversed to characterize the neighborhood filter : is the finest filter coarser than each filter converging to . If , then is called a limit (point) of . The prefilter is said to cluster at (or have as a cluster point) if and only if each element of has non-empty intersection with each neighborhood of . Every limit point is a cluster point but the converse is not true in general. However, every cluster point of an ultrafilter is a limit point.
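The "filter generated by a subbase" construction used throughout this article (close the subbase under finite intersections, then take all supersets within the ambient set) can be carried out directly on a finite set. The sketch below is purely illustrative, with ad hoc names, and simply brute-forces the power set.

    from itertools import combinations

    def generated_filter(universe, subbase):
        """Filter on a finite set generated by a subbase: all supersets (within
        the universe) of finite intersections of subbase members."""
        universe = frozenset(universe)
        # a base for the filter: all finite intersections of subbase members
        base = {universe}                      # the empty intersection is the whole set
        for r in range(1, len(subbase) + 1):
            for combo in combinations(subbase, r):
                meet = universe
                for s in combo:
                    meet = meet & s
                base.add(frozenset(meet))
        # upward closure inside the power set of the universe
        members = []
        elements = list(universe)
        for r in range(len(elements) + 1):
            for combo in combinations(elements, r):
                candidate = frozenset(combo)
                if any(b <= candidate for b in base):
                    members.append(candidate)
        return members

    # Example: subbase {{1,2,3}, {1,2,4}} on {1,2,3,4}.
    for member in generated_filter({1, 2, 3, 4}, [frozenset({1, 2, 3}), frozenset({1, 2, 4})]):
        print(sorted(member))

In this example the two subbase members meet in {1, 2}, so the generated filter is exactly the principal filter at {1, 2}; had the subbase members been disjoint, the construction would have produced the improper filter containing the empty set.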
Mathematics
Order theory
null
19722
https://en.wikipedia.org/wiki/Metallurgy
Metallurgy
Metallurgy is a domain of materials science and engineering that studies the physical and chemical behavior of metallic elements, their inter-metallic compounds, and their mixtures, which are known as alloys. Metallurgy encompasses both the science and the technology of metals, including the production of metals and the engineering of metal components used in products for both consumers and manufacturers. Metallurgy is distinct from the craft of metalworking. Metalworking relies on metallurgy in a similar manner to how medicine relies on medical science for technical advancement. A specialist practitioner of metallurgy is known as a metallurgist. The science of metallurgy is further subdivided into two broad categories: chemical metallurgy and physical metallurgy. Chemical metallurgy is chiefly concerned with the reduction and oxidation of metals, and the chemical performance of metals. Subjects of study in chemical metallurgy include mineral processing, the extraction of metals, thermodynamics, electrochemistry, and chemical degradation (corrosion). In contrast, physical metallurgy focuses on the mechanical properties of metals, the physical properties of metals, and the physical performance of metals. Topics studied in physical metallurgy include crystallography, material characterization, mechanical metallurgy, phase transformations, and failure mechanisms. Historically, metallurgy has predominately focused on the production of metals. Metal production begins with the processing of ores to extract the metal, and includes the mixture of metals to make alloys. Metal alloys are often a blend of at least two different metallic elements. However, non-metallic elements are often added to alloys in order to achieve properties suitable for an application. The study of metal production is subdivided into ferrous metallurgy (also known as black metallurgy) and non-ferrous metallurgy, also known as colored metallurgy. Ferrous metallurgy involves processes and alloys based on iron, while non-ferrous metallurgy involves processes and alloys based on other metals. The production of ferrous metals accounts for 95% of world metal production. Modern metallurgists work in both emerging and traditional areas as part of an interdisciplinary team alongside material scientists and other engineers. Some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals (including welding, brazing, and soldering). Emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials (semiconductors) and surface engineering. Etymology and pronunciation Metallurgy derives from the Ancient Greek , , "worker in metal", from , , "mine, metal" + , , "work" The word was originally an alchemist's term for the extraction of metals from minerals, the ending -urgy signifying a process, especially manufacturing: it was discussed in this sense in the 1797 Encyclopædia Britannica. In the late 19th century, metallurgy's definition was extended to the more general scientific study of metals, alloys, and related processes. In English, the pronunciation is the more common one in the United Kingdom. The pronunciation is the more common one in the United States US and is the first-listed variant in various American dictionaries, including Merriam-Webster Collegiate and American Heritage. History The earliest metal employed by humans appears to be gold, which can be found "native". 
Small amounts of natural gold, dating to the late Paleolithic period, 40,000 BC, have been found in Spanish caves. Silver, copper, tin and meteoric iron can also be found in native form, allowing a limited amount of metalworking in early cultures. Early cold metallurgy, using native copper not melted from mineral has been documented at sites in Anatolia and at the site of Tell Maghzaliyah in Iraq, dating from the 7th/6th millennia BC. The earliest archaeological support of smelting (hot metallurgy) in Eurasia is found in the Balkans and Carpathian Mountains, as evidenced by findings of objects made by metal casting and smelting dated to around 6200–5000 BC, with the invention of copper metallurgy. Certain metals, such as tin, lead, and copper can be recovered from their ores by simply heating the rocks in a fire or blast furnace in a process known as smelting. The first evidence of copper smelting, dating from the 6th millennium BC, has been found at archaeological sites in Majdanpek, Jarmovac and Pločnik, in present-day Serbia. The site of Pločnik has produced a smelted copper axe dating from 5,500 BC, belonging to the Vinča culture. The Balkans and adjacent Carpathian region were the location of major Chalcolithic cultures including Vinča, Varna, Karanovo, Gumelnița and Hamangia, which are often grouped together under the name of 'Old Europe'. With the Carpatho-Balkan region described as the 'earliest metallurgical province in Eurasia', its scale and technical quality of metal production in the 6th–5th millennia BC totally overshadowed that of any other contemporary production centre. The earliest documented use of lead (possibly native or smelted) in the Near East dates from the 6th millennium BC, is from the late Neolithic settlements of Yarim Tepe and Arpachiyah in Iraq. The artifacts suggest that lead smelting may have predated copper smelting. Metallurgy of lead has also been found in the Balkans during the same period. Copper smelting is documented at sites in Anatolia and at the site of Tal-i Iblis in southeastern Iran from . Copper smelting is first documented in the Delta region of northern Egypt in , associated with the Maadi culture. This represents the earliest evidence for smelting in Africa. The Varna Necropolis, Bulgaria, is a burial site located in the western industrial zone of Varna, approximately 4 km from the city centre, internationally considered one of the key archaeological sites in world prehistory. The oldest gold treasure in the world, dating from 4,600 BC to 4,200 BC, was discovered at the site. The gold piece dating from 4,500 BC, found in 2019 in Durankulak, near Varna is another important example. Other signs of early metals are found from the third millennium BC in Palmela, Portugal, Los Millares, Spain, and Stonehenge, United Kingdom. The precise beginnings, however, have not be clearly ascertained and new discoveries are both continuous and ongoing. In approximately 1900 BC, ancient iron smelting sites existed in Tamil Nadu. In the Near East, about 3,500 BC, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. This represented a major technological shift known as the Bronze Age. The extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. The process appears to have been invented by the Hittites in about 1200 BC, beginning the Iron Age. The secret of extracting and working iron was a key factor in the success of the Philistines. 
Historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. This includes the ancient and medieval kingdoms and empires of the Middle East and Near East, ancient Iran, ancient Egypt, ancient Nubia, and Anatolia in present-day Turkey, Ancient Nok, Carthage, the Celts, Greeks and Romans of ancient Europe, medieval Europe, ancient and medieval China, ancient and medieval India, ancient and medieval Japan, amongst others. A 16th century book by Georg Agricola, De re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. Agricola has been described as the "father of metallurgy". Extraction Extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. In order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. Extractive metallurgists are interested in three primary streams: feed, concentrate (metal oxide/sulphide) and tailings (waste). After mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or mostly waste. Concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. Mining may not be necessary, if the ore body and physical environment are conducive to leaching. Leaching dissolves minerals in an ore body and results in an enriched solution. The solution is collected and processed to extract valuable metals. Ore bodies often contain more than one valuable metal. Tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. Additionally, a concentrate may contain more than one valuable metal. That concentrate would then be processed to separate the valuable metals into individual constituents. Metal and its alloys Much effort has been placed on understanding iron–carbon alloy system, which includes steels and cast irons. Plain carbon steels (those that contain essentially only carbon as an alloying element) are used in low-cost, high-strength applications, where neither weight nor corrosion are a major concern. Cast irons, including ductile iron, are also part of the iron-carbon system. Iron-Manganese-Chromium alloys (Hadfield-type steels) are also used in non-magnetic applications such as directional drilling. Other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. These metals are most often used as alloys with the noted exception of silicon, which is not a metal. Other forms include: Stainless steel, particularly Austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. Aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. Copper-nickel alloys (such as Monel) are used in highly corrosive environments and for non-magnetic applications. Nickel-based superalloys like Inconel are used in high-temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. For extremely high temperatures, single crystal alloys are used to minimize creep. 
In modern electronics, high purity single crystal silicon is essential for metal-oxide-silicon transistors (MOS) and integrated circuits. Production In production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. This involves production of alloys, shaping, heat treatment and surface treatment of product. The task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. To achieve this goal, the operating environment must be carefully considered. Determining the hardness of the metal using the Rockwell, Vickers, and Brinell hardness scales is a commonly used practice that helps better understand the metal's elasticity and plasticity for different applications and production processes. In a saltwater environment, most ferrous metals and some non-ferrous alloys corrode quickly. Metals exposed to cold or cryogenic conditions may undergo a ductile to brittle transition and lose their toughness, becoming more brittle and prone to cracking. Metals under continual cyclic loading can suffer from metal fatigue. Metals under constant stress at elevated temperatures can creep. Metalworking processes Casting – molten metal is poured into a shaped mold. Variants of casting include sand casting, investment casting, also called the lost wax process, die casting, centrifugal casting, both vertical and horizontal, and continuous castings. Each of these forms has advantages for certain metals and applications considering factors like magnetism and corrosion. Forging – a red-hot billet is hammered into shape. Rolling – a billet is passed through successively narrower rollers to create a sheet. Extrusion – a hot and malleable metal is forced under pressure through a die, which shapes it before it cools. Machining – lathes, milling machines and drills cut the cold metal to shape. Sintering – a powdered metal is heated in a non-oxidizing environment after being compressed into a die. Fabrication – sheets of metal are cut with guillotines or gas cutters and bent and welded into structural shape. Laser cladding – metallic powder is blown through a movable laser beam (e.g. mounted on a NC 5-axis machine). The resulting melted metal reaches a substrate to form a melt pool. By moving the laser head, it is possible to stack the tracks and build up a three-dimensional piece. 3D printing – Sintering or melting amorphous powder metal in a 3D space to make any object to shape. Cold-working processes, in which the product's shape is altered by rolling, fabrication or other processes, while the product is cold, can increase the strength of the product by a process called work hardening. Work hardening creates microscopic defects in the metal, which resist further changes of shape. Heat treatment Metals can be heat-treated to alter the properties of strength, ductility, toughness, hardness and resistance to corrosion. Common heat treatment processes include annealing, precipitation strengthening, quenching, and tempering: Annealing process softens the metal by heating it and then allowing it to cool very slowly, which gets rid of stresses in the metal and makes the grain structure large and soft-edged so that, when the metal is hit or stressed it dents or perhaps bends, rather than breaking; it is also easier to sand, grind, or cut annealed metal. 
Quenching is the process of cooling metal very quickly after heating, thus "freezing" its microstructure in the very hard martensite form, which makes the metal harder. Tempering relieves stresses in the metal that were caused by the hardening process; tempering makes the metal less hard while making it better able to sustain impacts without breaking. Often, mechanical and thermal treatments are combined in what are known as thermo-mechanical treatments for better properties and more efficient processing of materials. These processes are common to high-alloy special steels, superalloys and titanium alloys. Plating Electroplating is a chemical surface-treatment technique. It involves bonding a thin layer of another metal such as gold, silver, chromium or zinc to the surface of the product. This is done by selecting an electrolyte solution containing the coating material, which is the metal that is going to coat the workpiece (gold, silver, zinc). Two electrodes of different materials are needed: one made of the same material as the coating and one that receives the coating. The electrodes are electrically charged, and the coating material is deposited onto the workpiece. Electroplating is used to reduce corrosion as well as to improve the product's aesthetic appearance. It is also used to make inexpensive metals look like the more expensive ones (gold, silver). Shot peening Shot peening is a cold working process used to finish metal parts. In the process of shot peening, small round shot is blasted against the surface of the part to be finished. This process is used to prolong the product life of the part, prevent stress corrosion failures, and also prevent fatigue. The shot leaves small dimples on the surface, as a peen hammer does, which cause compressive stress under the dimples. As the shot media strikes the material over and over, it forms many overlapping dimples throughout the piece being treated. The compressive stress in the surface of the material strengthens the part and makes it more resistant to fatigue failure, stress failures, corrosion failure, and cracking. Thermal spraying Thermal spraying techniques are another popular finishing option, and often have better high temperature properties than electroplated coatings. Thermal spraying, also known as a spray welding process, is an industrial coating process that consists of a heat source (flame or other) and a coating material in powder or wire form, which is melted and then sprayed at high velocity onto the surface of the material being treated. The spray treating process is known by many different names, such as HVOF (High Velocity Oxygen Fuel), plasma spray, flame spray, arc spray and metalizing. Electroless deposition Electroless deposition (ED) or electroless plating is defined as the autocatalytic process through which metals and metal alloys are deposited onto nonconductive surfaces. These nonconductive surfaces include plastics, ceramics, and glass, which can then become decorative, anti-corrosive, and conductive depending on their final functions. Electroless deposition is a chemical process that creates metal coatings on various materials by autocatalytic chemical reduction of metal cations in a liquid bath. Characterization Metallurgists study the microscopic and macroscopic structure of metals using metallography, a technique invented by Henry Clifton Sorby. In metallography, an alloy of interest is ground flat and polished to a mirror finish. 
The sample can then be etched to reveal the microstructure and macrostructure of the metal. The sample is then examined in an optical or electron microscope, and the image contrast provides details on the composition, mechanical properties, and processing history. Crystallography, often using diffraction of x-rays or electrons, is another valuable tool available to the modern metallurgist. Crystallography allows identification of unknown materials and reveals the crystal structure of the sample. Quantitative crystallography can be used to calculate the amounts of the phases present as well as the degree of strain to which a sample has been subjected. Advanced characterization techniques frequently used in this field include Scanning Electron Microscopy (SEM), Transmission Electron Microscopy (TEM), Electron Backscatter Diffraction (EBSD) and Atom-Probe Tomography (APT).
Technology
Materials
null
19737
https://en.wikipedia.org/wiki/Maxwell%27s%20equations
Maxwell's equations
Maxwell's equations, or Maxwell–Heaviside equations, are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, electric and magnetic circuits. The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. The equations are named after the physicist and mathematician James Clerk Maxwell, who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon. The modern form of the equations in their most common formulation is credited to Oliver Heaviside. Maxwell's equations may be combined to demonstrate how fluctuations in electromagnetic fields (waves) propagate at a constant speed in vacuum, c (). Known as electromagnetic radiation, these waves occur at various wavelengths to produce a spectrum of radiation from radio waves to gamma rays. In partial differential equation form and a coherent system of units, Maxwell's microscopic equations can be written as With the electric field, the magnetic field, the electric charge density and the current density. is the vacuum permittivity and the vacuum permeability. The equations have two major variants: The microscopic equations have universal applicability but are unwieldy for common calculations. They relate the electric and magnetic fields to total charge and total current, including the complicated charges and currents in materials at the atomic scale. The macroscopic equations define two new auxiliary fields that describe the large-scale behaviour of matter without having to consider atomic-scale charges and quantum phenomena like spins. However, their use requires experimentally determined parameters for a phenomenological description of the electromagnetic response of materials. The term "Maxwell's equations" is often also used for equivalent alternative formulations. Versions of Maxwell's equations based on the electric and magnetic scalar potentials are preferred for explicitly solving the equations as a boundary value problem, analytical mechanics, or for use in quantum mechanics. The covariant formulation (on spacetime rather than space and time separately) makes the compatibility of Maxwell's equations with special relativity manifest. Maxwell's equations in curved spacetime, commonly used in high-energy and gravitational physics, are compatible with general relativity. In fact, Albert Einstein developed special and general relativity to accommodate the invariant speed of light, a consequence of Maxwell's equations, with the principle that only relative movement has physical consequences. The publication of the equations marked the unification of a theory for previously separately described phenomena: magnetism, electricity, light, and associated radiation. Since the mid-20th century, it has been understood that Maxwell's equations do not give an exact description of electromagnetic phenomena, but are instead a classical limit of the more precise theory of quantum electrodynamics. 
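For concreteness, the microscopic equations referred to above can be written, in SI units and with the conventional symbols (E the electric field, B the magnetic field, ρ the total charge density, J the total current density, ε0 the vacuum permittivity and μ0 the vacuum permeability), as the following standard set (a reconstruction supplied here for readability, not copied from this article):

\[
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}, &
\nabla \cdot \mathbf{B} &= 0, \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, &
\nabla \times \mathbf{B} &= \mu_0\left(\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right).
\end{aligned}
\]

The four equations are, in order, Gauss's law, Gauss's law for magnetism, the Maxwell–Faraday equation, and the Ampère–Maxwell law, each of which is described in the sections that follow.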
History of the equations Conceptual descriptions Gauss's law Gauss's law describes the relationship between an electric field and electric charges: an electric field points away from positive charges and towards negative charges, and the net outflow of the electric field through a closed surface is proportional to the enclosed charge, including bound charge due to polarization of material. The coefficient of the proportion is the permittivity of free space. Gauss's law for magnetism Gauss's law for magnetism states that electric charges have no magnetic analogues, called magnetic monopoles; no north or south magnetic poles exist in isolation. Instead, the magnetic field of a material is attributed to a dipole, and the net outflow of the magnetic field through a closed surface is zero. Magnetic dipoles may be represented as loops of current or inseparable pairs of equal and opposite "magnetic charges". Precisely, the total magnetic flux through a Gaussian surface is zero, and the magnetic field is a solenoidal vector field. Faraday's law The Maxwell–Faraday version of Faraday's law of induction describes how a time-varying magnetic field corresponds to curl of an electric field. In integral form, it states that the work per unit charge required to move a charge around a closed loop equals the rate of change of the magnetic flux through the enclosed surface. The electromagnetic induction is the operating principle behind many electric generators: for example, a rotating bar magnet creates a changing magnetic field and generates an electric field in a nearby wire. Ampère–Maxwell law The original law of Ampère states that magnetic fields relate to electric current. Maxwell's addition states that magnetic fields also relate to changing electric fields, which Maxwell called displacement current. The integral form states that electric and displacement currents are associated with a proportional magnetic field along any enclosing curve. Maxwell's modification of Ampère's circuital law is important because the laws of Ampère and Gauss must otherwise be adjusted for static fields. As a consequence, it predicts that a rotating magnetic field occurs with a changing electric field. A further consequence is the existence of self-sustaining electromagnetic waves which travel through empty space. The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents, matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics. Formulation in terms of electric and magnetic fields (microscopic or in vacuum version) In the electric and magnetic field formulation there are four equations that determine the fields for given charge and current distribution. A separate law of nature, the Lorentz force law, describes how the electric and magnetic fields act on charged particles and currents. By convention, a version of this law in the original equations by Maxwell is no longer included. The vector calculus formalism below, the work of Oliver Heaviside, has become standard. It is rotationally invariant, and therefore mathematically more transparent than Maxwell's original 20 equations in x, y and z components. The relativistic formulations are more symmetric and Lorentz invariant. For the same equations expressed using tensor calculus or differential forms (see ). 
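To complement the conceptual descriptions above, the same four laws can be stated in integral form over a fixed volume Ω with boundary surface ∂Ω and a fixed surface Σ with boundary curve ∂Σ; the following SI-unit statement is a standard one, given here as an illustration rather than quoted from this article:

\[
\begin{aligned}
\oint_{\partial\Omega} \mathbf{E} \cdot \mathrm{d}\mathbf{S} &= \frac{Q_{\mathrm{enc}}}{\varepsilon_0}, &
\oint_{\partial\Omega} \mathbf{B} \cdot \mathrm{d}\mathbf{S} &= 0, \\
\oint_{\partial\Sigma} \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell} &= -\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Sigma} \mathbf{B} \cdot \mathrm{d}\mathbf{S}, &
\oint_{\partial\Sigma} \mathbf{B} \cdot \mathrm{d}\boldsymbol{\ell} &= \mu_0\left(I_{\mathrm{enc}} + \varepsilon_0 \frac{\mathrm{d}}{\mathrm{d}t}\int_{\Sigma} \mathbf{E} \cdot \mathrm{d}\mathbf{S}\right),
\end{aligned}
\]

where Q_enc is the charge enclosed by Ω and I_enc is the current passing through Σ.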
The differential and integral formulations are mathematically equivalent; both are useful. The integral formulation relates fields within a region of space to fields on the boundary and can often be used to simplify and directly calculate fields from symmetric distributions of charges and currents. On the other hand, the differential equations are purely local and are a more natural starting point for calculating the fields in more complicated (less symmetric) situations, for example using finite element analysis. Key to the notation Symbols in bold represent vector quantities, and symbols in italics represent scalar quantities, unless otherwise indicated. The equations introduce the electric field, , a vector field, and the magnetic field, , a pseudovector field, each generally having a time and location dependence. The sources are the total electric charge density (total charge per unit volume), , and the total electric current density (total current per unit area), . The universal constants appearing in the equations (the first two ones explicitly only in the SI formulation) are: the permittivity of free space, , and the permeability of free space, , and the speed of light, Differential equations In the differential equations, the nabla symbol, , denotes the three-dimensional gradient operator, del, the symbol (pronounced "del dot") denotes the divergence operator, the symbol (pronounced "del cross") denotes the curl operator. Integral equations In the integral equations, is any volume with closed boundary surface , and is any surface with closed boundary curve , The equations are a little easier to interpret with time-independent surfaces and volumes. Time-independent surfaces and volumes are "fixed" and do not change over a given time interval. For example, since the surface is time-independent, we can bring the differentiation under the integral sign in Faraday's law: Maxwell's equations can be formulated with possibly time-dependent surfaces and volumes by using the differential version and using Gauss' and Stokes' theorems as appropriate. is a surface integral over the boundary surface , with the loop indicating the surface is closed is a volume integral over the volume , is a line integral around the boundary curve , with the loop indicating the curve is closed. is a surface integral over the surface , The total electric charge enclosed in is the volume integral over of the charge density (see the "macroscopic formulation" section below): where is the volume element. The net magnetic flux is the surface integral of the magnetic field passing through a fixed surface, : The net electric flux is the surface integral of the electric field passing through : The net electric current is the surface integral of the electric current density passing through : where denotes the differential vector element of surface area , normal to surface . (Vector area is sometimes denoted by rather than , but this conflicts with the notation for magnetic vector potential). Formulation in the SI Formulation in the Gaussian system The definitions of charge, electric field, and magnetic field can be altered to simplify theoretical calculation, by absorbing dimensioned factors of and into the units (and thus redefining these). With a corresponding change in the values of the quantities for the Lorentz force law this yields the same physics, i.e. trajectories of charged particles, or work done by an electric motor. 
These definitions are often preferred in theoretical and high energy physics where it is natural to take the electric and magnetic field with the same units, to simplify the appearance of the electromagnetic tensor: the Lorentz covariant object unifying electric and magnetic field would then contain components with uniform unit and dimension. Such modified definitions are conventionally used with the Gaussian (CGS) units. Using these definitions, colloquially "in Gaussian units", the Maxwell equations become: The equations simplify slightly when a system of quantities is chosen in the speed of light, c, is used for nondimensionalization, so that, for example, seconds and lightseconds are interchangeable, and c = 1. Further changes are possible by absorbing factors of . This process, called rationalization, affects whether Coulomb's law or Gauss's law includes such a factor (see Heaviside–Lorentz units, used mainly in particle physics). Relationship between differential and integral formulations The equivalence of the differential and integral formulations are a consequence of the Gauss divergence theorem and the Kelvin–Stokes theorem. Flux and divergence According to the (purely mathematical) Gauss divergence theorem, the electric flux through the boundary surface can be rewritten as The integral version of Gauss's equation can thus be rewritten as Since is arbitrary (e.g. an arbitrary small ball with arbitrary center), this is satisfied if and only if the integrand is zero everywhere. This is the differential equations formulation of Gauss equation up to a trivial rearrangement. Similarly rewriting the magnetic flux in Gauss's law for magnetism in integral form gives which is satisfied for all if and only if everywhere. Circulation and curl By the Kelvin–Stokes theorem we can rewrite the line integrals of the fields around the closed boundary curve to an integral of the "circulation of the fields" (i.e. their curls) over a surface it bounds, i.e. Hence the Ampère–Maxwell law, the modified version of Ampère's circuital law, in integral form can be rewritten as Since can be chosen arbitrarily, e.g. as an arbitrary small, arbitrary oriented, and arbitrary centered disk, we conclude that the integrand is zero if and only if the Ampère–Maxwell law in differential equations form is satisfied. The equivalence of Faraday's law in differential and integral form follows likewise. The line integrals and curls are analogous to quantities in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field. Charge conservation The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the Ampère–Maxwell law has zero divergence by the div–curl identity. Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives: i.e., By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary: In particular, in an isolated system the total charge is conserved. Vacuum equations, electromagnetic waves and speed of light In a region with no charges () and no currents (), such as in vacuum, Maxwell's equations reduce to: Taking the curl of the curl equations, and using the curl of the curl identity we obtain The quantity has the dimension (T/L)2. 
Defining , the equations above have the form of the standard wave equations Already during Maxwell's lifetime, it was found that the known values for and give , then already known to be the speed of light in free space. This led him to propose that light and radio waves were propagating electromagnetic waves, since amply confirmed. In the old SI system of units, the values of and are defined constants, (which means that by definition ) that define the ampere and the metre. In the new SI system, only c keeps its defined value, and the electron charge gets a defined value. In materials with relative permittivity, , and relative permeability, , the phase velocity of light becomes which is usually less than . In addition, and are perpendicular to each other and to the direction of wave propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. Maxwell's equations explain how these waves can physically propagate through space. The changing magnetic field creates a changing electric field through Faraday's law. In turn, that electric field creates a changing magnetic field through Maxwell's modification of Ampère's circuital law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move through space at velocity . Macroscopic formulation The above equations are the microscopic version of Maxwell's equations, expressing the electric and the magnetic fields in terms of the (possibly atomic-level) charges and currents present. This is sometimes called the "general" form, but the macroscopic version below is equally general, the difference being one of bookkeeping. The microscopic version is sometimes called "Maxwell's equations in vacuum": this refers to the fact that the material medium is not built into the structure of the equations, but appears only in the charge and current terms. The microscopic version was introduced by Lorentz, who tried to use it to derive the macroscopic properties of bulk matter from its microscopic constituents. "Maxwell's macroscopic equations", also known as Maxwell's equations in matter, are more similar to those that Maxwell introduced himself. In the macroscopic equations, the influence of bound charge and bound current is incorporated into the displacement field and the magnetizing field , while the equations depend only on the free charges and free currents . This reflects a splitting of the total electric charge Q and current I (and their densities and J) into free and bound parts: The cost of this splitting is that the additional fields and need to be determined through phenomenological constituent equations relating these fields to the electric field and the magnetic field , together with the bound charge and current. See below for a detailed description of the differences between the microscopic equations, dealing with total charge and current including material contributions, useful in air/vacuum; and the macroscopic equations, dealing with free charge and current, practical to use within materials. Bound charge and current When an electric field is applied to a dielectric material its molecules respond by forming microscopic electric dipoles – their atomic nuclei move a tiny distance in the direction of the field, while their electrons move a tiny distance in the opposite direction. This produces a macroscopic bound charge in the material even though all of the charges involved are bound to individual molecules. 
For example, if every molecule responds the same, similar to that shown in the figure, these tiny movements of charge combine to produce a layer of positive bound charge on one side of the material and a layer of negative charge on the other side. The bound charge is most conveniently described in terms of the polarization of the material, its dipole moment per unit volume. If is uniform, a macroscopic separation of charge is produced only at the surfaces where enters and leaves the material. For non-uniform , a charge is also produced in the bulk. Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are intrinsically linked to the angular momentum of the components of the atoms, most notably their electrons. The connection to angular momentum suggests the picture of an assembly of microscopic current loops. Outside the material, an assembly of such microscopic current loops is not different from a macroscopic current circulating around the material's surface, despite the fact that no individual charge is traveling a large distance. These bound currents can be described using the magnetization . The very complicated and granular bound charges and bound currents, therefore, can be represented on the macroscopic scale in terms of and , which average these charges and currents on a sufficiently large scale so as not to see the granularity of individual atoms, but also sufficiently small that they vary with location in the material. As such, Maxwell's macroscopic equations ignore many details on a fine scale that can be unimportant to understanding matters on a gross scale by calculating fields that are averaged over some suitable volume. Auxiliary fields, polarization and magnetization The definitions of the auxiliary fields are: where is the polarization field and is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. The macroscopic bound charge density and bound current density in terms of polarization and magnetization are then defined as If we define the total, bound, and free charge and current density by and use the defining relations above to eliminate , and , the "macroscopic" Maxwell's equations reproduce the "microscopic" equations. Constitutive relations In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations between displacement field and the electric field , as well as the magnetizing field and the magnetic field . Equivalently, we have to specify the dependence of the polarization (hence the bound charge) and the magnetization (hence the bound current) on the applied electric and magnetic field. The equations specifying this response are called constitutive relations. For real-world materials, the constitutive relations are rarely simple, except approximately, and usually determined by experiment. See the main article on constitutive relations for a fuller description. For materials without polarization and magnetization, the constitutive relations are (by definition) where is the permittivity of free space and the permeability of free space. Since there is no bound charge, the total and the free charge and current are equal. An alternative viewpoint on the microscopic equations is that they are the macroscopic equations together with the statement that vacuum behaves like a perfect linear "material" without additional polarization and magnetization. 
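Collecting the definitions used in this section into formulas (a standard SI-unit statement added here for concreteness, not quoted from this article): the auxiliary fields are built from the polarization P and magnetization M, the bound sources follow from them, and the macroscopic equations involve only the free charge density ρ_f and free current density J_f:

\[
\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P}, \qquad
\mathbf{H} = \frac{1}{\mu_0}\mathbf{B} - \mathbf{M}, \qquad
\rho_{\mathrm{b}} = -\nabla \cdot \mathbf{P}, \qquad
\mathbf{J}_{\mathrm{b}} = \nabla \times \mathbf{M} + \frac{\partial \mathbf{P}}{\partial t},
\]

\[
\nabla \cdot \mathbf{D} = \rho_{\mathrm{f}}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{H} = \mathbf{J}_{\mathrm{f}} + \frac{\partial \mathbf{D}}{\partial t}.
\]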
More generally, for linear materials the constitutive relations are where is the permittivity and the permeability of the material. For the displacement field the linear approximation is usually excellent because for all but the most extreme electric fields or temperatures obtainable in the laboratory (high power pulsed lasers) the interatomic electric fields of materials of the order of 1011 V/m are much higher than the external field. For the magnetizing field , however, the linear approximation can break down in common materials like iron leading to phenomena like hysteresis. Even the linear case can have various complications, however. For homogeneous materials, and are constant throughout the material, while for inhomogeneous materials they depend on location within the material (and perhaps time). For isotropic materials, and are scalars, while for anisotropic materials (e.g. due to crystal structure) they are tensors. Materials are generally dispersive, so and depend on the frequency of any incident EM waves. Even more generally, in the case of non-linear materials (see for example nonlinear optics), and are not necessarily proportional to , similarly or is not necessarily proportional to . In general and depend on both and , on location and time, and possibly other physical quantities. In applications one also has to describe how the free currents and charge density behave in terms of and possibly coupled to other physical quantities like pressure, and the mass, number density, and velocity of charge-carrying particles. E.g., the original equations given by Maxwell (see History of Maxwell's equations) included Ohm's law in the form Alternative formulations Following are some of the several other mathematical formalisms of Maxwell's equations, with the columns separating the two homogeneous Maxwell equations from the two inhomogeneous ones. Each formulation has versions directly in terms of the electric and magnetic fields, and indirectly in terms of the electrical potential and the vector potential . Potentials were introduced as a convenient way to solve the homogeneous equations, but it was thought that all observable physics was contained in the electric and magnetic fields (or relativistically, the Faraday tensor). The potentials play a central role in quantum mechanics, however, and act quantum mechanically with observable consequences even when the electric and magnetic fields vanish (Aharonov–Bohm effect). Each table describes one formalism. See the main article for details of each formulation. The direct spacetime formulations make manifest that the Maxwell equations are relativistically invariant, where space and time are treated on equal footing. Because of this symmetry, the electric and magnetic fields are treated on equal footing and are recognized as components of the Faraday tensor. This reduces the four Maxwell equations to two, which simplifies the equations, although we can no longer use the familiar vector formulation. Maxwell equations in formulation that do not treat space and time manifestly on the same footing have Lorentz invariance as a hidden symmetry. This was a major source of inspiration for the development of relativity theory. Indeed, even the formulation that treats space and time separately is not a non-relativistic approximation and describes the same physics by simply renaming variables. For this reason the relativistic invariant equations are usually called the Maxwell equations as well. Each table below describes one formalism. 
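As one example of the relativistic formalisms described next, in flat spacetime and SI units the tensor calculus formulation packages the two inhomogeneous and the two homogeneous equations into (a standard statement given here for illustration, not quoted from this article):

\[
\partial_\alpha F^{\alpha\beta} = \mu_0 J^{\beta}, \qquad
\partial_{[\alpha} F_{\beta\gamma]} = 0, \qquad
F_{\alpha\beta} = \partial_\alpha A_\beta - \partial_\beta A_\alpha,
\]

where F is the electromagnetic (Faraday) tensor, J the four-current and A the four-potential; the notation is explained in the paragraph that follows.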
In the tensor calculus formulation, the electromagnetic tensor is an antisymmetric covariant order 2 tensor; the four-potential, , is a covariant vector; the current, , is a vector; the square brackets, , denote antisymmetrization of indices; is the partial derivative with respect to the coordinate, . In Minkowski space coordinates are chosen with respect to an inertial frame; , so that the metric tensor used to raise and lower indices is . The d'Alembert operator on Minkowski space is as in the vector formulation. In general spacetimes, the coordinate system is arbitrary, the covariant derivative , the Ricci tensor, and raising and lowering of indices are defined by the Lorentzian metric, and the d'Alembert operator is defined as . The topological restriction is that the second real cohomology group of the space vanishes (see the differential form formulation for an explanation). This is violated for Minkowski space with a line removed, which can model a (flat) spacetime with a point-like monopole on the complement of the line. In the differential form formulation on arbitrary space times, is the electromagnetic tensor considered as a 2-form, is the potential 1-form, is the current 3-form, is the exterior derivative, and is the Hodge star on forms defined (up to its orientation, i.e. its sign) by the Lorentzian metric of spacetime. In the special case of 2-forms such as F, the Hodge star depends on the metric tensor only for its local scale. This means that, as formulated, the differential form field equations are conformally invariant, but the Lorenz gauge condition breaks conformal invariance. The operator is the d'Alembert–Laplace–Beltrami operator on 1-forms on an arbitrary Lorentzian spacetime. The topological condition is again that the second real cohomology group is 'trivial' (meaning that its form follows from a definition). By the isomorphism with the second de Rham cohomology this condition means that every closed 2-form is exact. Other formalisms include the geometric algebra formulation and a matrix representation of Maxwell's equations. Historically, a quaternionic formulation was used. Solutions Maxwell's equations are partial differential equations that relate the electric and magnetic fields to each other and to the electric charges and currents. Often, the charges and currents are themselves dependent on the electric and magnetic fields via the Lorentz force equation and the constitutive relations. These all form a set of coupled partial differential equations which are often very difficult to solve: the solutions encompass all the diverse phenomena of classical electromagnetism. Some general remarks follow. As for any differential equation, boundary conditions and initial conditions are necessary for a unique solution. For example, even with no charges and no currents anywhere in spacetime, there are the obvious solutions for which E and B are zero or constant, but there are also non-trivial solutions corresponding to electromagnetic waves. In some cases, Maxwell's equations are solved over the whole of space, and boundary conditions are given as asymptotic limits at infinity. In other cases, Maxwell's equations are solved in a finite region of space, with appropriate conditions on the boundary of that region, for example an artificial absorbing boundary representing the rest of the universe, or periodic boundary conditions, or walls that isolate a small region from the outside world (as with a waveguide or cavity resonator). 
Jefimenko's equations (or the closely related Liénard–Wiechert potentials) are the explicit solution to Maxwell's equations for the electric and magnetic fields created by any given distribution of charges and currents. It assumes specific initial conditions to obtain the so-called "retarded solution", where the only fields present are the ones created by the charges. However, Jefimenko's equations are unhelpful in situations when the charges and currents are themselves affected by the fields they create. Numerical methods for differential equations can be used to compute approximate solutions of Maxwell's equations when exact solutions are impossible. These include the finite element method and finite-difference time-domain method. For more details, see Computational electromagnetics. Overdetermination of Maxwell's equations Maxwell's equations seem overdetermined, in that they involve six unknowns (the three components of and ) but eight equations (one for each of the two Gauss's laws, three vector components each for Faraday's and Ampère's circuital laws). (The currents and charges are not unknowns, being freely specifiable subject to charge conservation.) This is related to a certain limited kind of redundancy in Maxwell's equations: It can be proven that any system satisfying Faraday's law and Ampère's circuital law automatically also satisfies the two Gauss's laws, as long as the system's initial condition does, and assuming conservation of charge and the nonexistence of magnetic monopoles. This explanation was first introduced by Julius Adams Stratton in 1941. Although it is possible to simply ignore the two Gauss's laws in a numerical algorithm (apart from the initial conditions), the imperfect precision of the calculations can lead to ever-increasing violations of those laws. By introducing dummy variables characterizing these violations, the four equations become not overdetermined after all. The resulting formulation can lead to more accurate algorithms that take all four laws into account. Both identities , which reduce eight equations to six independent ones, are the true reason of overdetermination. Equivalently, the overdetermination can be viewed as implying conservation of electric and magnetic charge, as they are required in the derivation described above but implied by the two Gauss's laws. For linear algebraic equations, one can make 'nice' rules to rewrite the equations and unknowns. The equations can be linearly dependent. But in differential equations, and especially partial differential equations (PDEs), one needs appropriate boundary conditions, which depend in not so obvious ways on the equations. Even more, if one rewrites them in terms of vector and scalar potential, then the equations are underdetermined because of gauge fixing. Maxwell's equations as the classical limit of QED Maxwell's equations and the Lorentz force law (along with the rest of classical electromagnetism) are extraordinarily successful at explaining and predicting a variety of phenomena. However they do not account for quantum effects and so their domain of applicability is limited. Maxwell's equations are thought of as the classical limit of quantum electrodynamics (QED). Some observed electromagnetic phenomena are incompatible with Maxwell's equations. These include photon–photon scattering and many other phenomena related to photons or virtual photons, "nonclassical light" and quantum entanglement of electromagnetic fields (see Quantum optics). E.g. 
quantum cryptography cannot be described by Maxwell theory, not even approximately. The approximate nature of Maxwell's equations becomes more and more apparent when going into the extremely strong field regime (see Euler–Heisenberg Lagrangian) or to extremely small distances. Finally, Maxwell's equations cannot explain any phenomenon involving individual photons interacting with quantum matter, such as the photoelectric effect, Planck's law, the Duane–Hunt law, and single-photon light detectors. However, many such phenomena may be approximated using a halfway theory of quantum matter coupled to a classical electromagnetic field, either as external field or with the expected value of the charge current and density on the right hand side of Maxwell's equations. Variations Popular variations on the Maxwell equations as a classical theory of electromagnetic fields are relatively scarce because the standard equations have stood the test of time remarkably well. Magnetic monopoles Maxwell's equations posit that there is electric charge, but no magnetic charge (also called magnetic monopoles), in the universe. Indeed, magnetic charge has never been observed, despite extensive searches, and may not exist. If they did exist, both Gauss's law for magnetism and Faraday's law would need to be modified, and the resulting four equations would be fully symmetric under the interchange of electric and magnetic fields.
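Referring back to the numerical methods mentioned in the Solutions section above, the finite-difference time-domain (FDTD) approach discretizes Faraday's and the Ampère–Maxwell laws on a staggered grid. The following one-dimensional vacuum example is a minimal illustrative sketch, not a production solver; the grid size, time step and Gaussian source are arbitrary choices made for this example.

import numpy as np

# Minimal 1D FDTD (Yee) sketch in normalized units (c = 1, dx = 1).
# E_z lives on integer grid points, H_y on the half-integer points between them.
nx = 200          # number of spatial cells (arbitrary)
nt = 150          # number of time steps (arbitrary)
courant = 0.5     # c*dt/dx, kept below 1 for stability

ez = np.zeros(nx)
hy = np.zeros(nx - 1)

for n in range(nt):
    # Faraday's law, discretized: dH_y/dt = dE_z/dx
    hy += courant * (ez[1:] - ez[:-1])
    # Ampere-Maxwell law with no currents: dE_z/dt = dH_y/dx
    ez[1:-1] += courant * (hy[1:] - hy[:-1])
    # Additive Gaussian pulse source injected at the centre of the grid
    ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)

# ez and hy now hold a snapshot of the pulse propagating outward from the source.
print("peak |E_z| on the grid:", np.abs(ez).max())

The two update lines mirror the curl equations directly; the leapfrog staggering in space and time is what makes the scheme second-order accurate, and the Courant condition c*dt/dx <= 1 is what keeps it stable.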
Physical sciences
Electrodynamics
null
19769
https://en.wikipedia.org/wiki/Mariculture
Mariculture
Mariculture, sometimes called marine farming or marine aquaculture, is a branch of aquaculture involving the cultivation of marine organisms for food and other animal products, in seawater. Subsets of it include fish farms in the open ocean (offshore mariculture), fish farms built on littoral waters (inshore mariculture), and farms in artificial tanks, ponds or raceways which are filled with seawater (onshore mariculture). An example of the latter is the farming of plankton and seaweed, shellfish like shrimp or oysters, and marine finfish, in saltwater ponds. Non-food products produced by mariculture include: fish meal, nutrient agar, jewellery (e.g. cultured pearls), and cosmetics. Types Onshore Although it sounds like a paradox, mariculture is practiced onshore variously in tanks, ponds or raceways which are supplied with seawater. The distinguishing traits of onshore mariculture are the use of seawater rather than fresh water, and that food and nutrients are provided by the water column rather than added artificially, a great saving in cost that also preserves the species' natural diet. Examples of onshore mariculture include the farming of algae (including plankton and seaweed), marine finfish, and shellfish (like shrimp and oysters), in manmade saltwater ponds. Inshore Inshore mariculture is farming marine species such as algae, fish, and shellfish in waters affected by the tide, which include both littoral waters and their estuarine environments, such as bays, brackish rivers, and naturally fed and flushing saltwater ponds. Popular cultivation techniques for inshore mariculture include creating or utilizing artificial reefs, pens, nets, and long-line arrays of floating cages moored to the bottom. As a result of simultaneous global development and evolution over time, the association of the term "ranch" with inshore mariculture techniques has proved problematic. It is applied without any standardized basis to everything from marine species raised in floating pens, nested within artificial reefs, or tended in cages (by the hundreds and even thousands) in long-lined groups, to the operant conditioning of migratory species to return for harvesting to the waters where they were born (also known as "enhanced stocking"). Open ocean Raising marine organisms under controlled conditions offshore in the "open ocean", in exposed, high-energy marine environments beyond significant coastal influence, is a relatively new approach to mariculture. Open ocean aquaculture (OOA) uses cages, nets, or long-line arrays that are moored or towed. Open ocean mariculture has the potential to be combined with offshore energy installation systems, such as wind-farms, to enable a more effective use of ocean space. Research and commercial open ocean aquaculture facilities are in operation or under development in Panama, Australia, Chile, China, France, Ireland, Italy, Japan, Mexico, and Norway. , two commercial open ocean facilities were operating in U.S. waters, raising threadfin near Hawaii and cobia near Puerto Rico. An operation targeting bigeye tuna recently received final approval. All U.S. commercial facilities are currently sited in waters under state or territorial jurisdiction. The largest deep water open ocean farm in the world is raising cobia 12 km off the northern coast of Panama in highly exposed sites. There has been considerable discussion as to how mariculture of seaweeds can be conducted in the open ocean as a means to regenerate decimated fish populations by providing both habitat and the basis of a trophic pyramid for marine life. 
It has been proposed that natural seaweed ecosystems can be replicated in the open ocean by creating the conditions for their growth through artificial upwelling and through submerged tubing that provide substrate. Proponents and permaculture experts recognise that such approaches correspond to the core principles of permaculture and thereby constitute marine permaculture. The concept envisions using artificial upwelling and floating, submerged platforms as substrate to replicate natural seaweed ecosystems that provide habitat and the basis of a trophic pyramid for marine life. Following the principles of permaculture, seaweeds and fish from marine permaculture arrays can be sustainably harvested with the potential of also sequestering atmospheric carbon, should seaweeds be sunk below a depth of one kilometer. As of 2020, a number of successful trials have taken place in Hawaii, the Philippines, Puerto Rico and Tasmania. The idea has received substantial public attention, notably featuring as a key solution covered by Damon Gameau’s documentary 2040 and in the book Drawdown: The Most Comprehensive Plan Ever Proposed to Reverse Global Warming edited by Paul Hawken. Species Algae Algaculture involves the farming of species of algae, including microalgae (such as phytoplankton) and macroalgae (such as seaweed). Uses of commercial and industrial algae cultivation include production of nutraceuticals such as omega-3 fatty acids (as algal oil) or natural food colorants and dyes, food, fertilizers, bioplastics, chemical feedstock (raw material), protein-rich animal/aquaculture feed, pharmaceuticals, and algal fuel, and can also be used as a means of pollution control and natural carbon sequestration. Shellfish Similarly to algae cultivation, shellfish can be farmed in multiple ways in both onshore and inshore mariculture: on ropes, in bags or cages, or directly on (or within) the bottom. Shellfish mariculture does not require feed or fertilizer inputs, nor insecticides or antibiotics, making shellfish mariculture a self-supporting system. Seed for shellfish cultivation is typically produced in commercial hatcheries, or by the farmers themselves. Among shellfish types raised by mariculture are shrimp, oysters (including artificial pearl cultivation), clams, mussels, abalone. Shellfish can also be used in integrated multi-species cultivation techniques, where shellfish can utilize waste generated by higher trophic-level organisms. The Māori people of New Zealand retain traditions of farming shellfish. Finfish Finfish species raised in mariculture include salmon, cod, scallops, certain species of prawn, European lobsters, abalone and sea cucumbers. Fish species selected to be raised in saltwater pens do not have any additional artificial feed requirements, as they live off of the naturally occurring nutrients within the water column. Typical practice calls for the juveniles to be planted on the bottom of the body of water within the pen, which utilize more of the water column within their sea pen as they grow and develop. Environmental effects Mariculture has rapidly expanded over the last two decades due to new technology, improvements in formulated feeds, greater biological understanding of farmed species, increased water quality within closed farm systems, greater demand for seafood products, site expansion and government interest. As a consequence, mariculture has been subject to some controversy regarding its social and environmental impacts. 
Commonly identified environmental impacts from marine farms are: Wastes from cage cultures; Farm escapees and invasives; Genetic pollution and disease and parasite transfer; Habitat modification. As with most farming practices, the degree of environmental impact depends on the size of the farm, the cultured species, stock density, type of feed, hydrography of the site, and husbandry methods. The adjacent diagram connects these causes and effects. Wastes from cage cultures Mariculture of finfish can require a significant amount of fishmeal or other high protein food sources. Originally, a lot of fishmeal went to waste due to inefficient feeding regimes and the poor digestibility of formulated feeds, which resulted in poor feed conversion ratios. In cage culture, several different methods are used for feeding farmed fish – from simple hand feeding to sophisticated computer-controlled systems with automated food dispensers coupled with in situ uptake sensors that detect consumption rates. In coastal fish farms, overfeeding primarily leads to increased deposition of detritus on the seafloor (potentially smothering seafloor dwelling invertebrates and altering the physical environment), while in hatcheries and land-based farms, excess food goes to waste and can potentially impact the surrounding catchment and local coastal environment. This impact is usually highly local, and depends significantly on the settling velocity of waste feed and the current velocity (which varies both spatially and temporally) and depth. Farm escapees and invasives The impact of escapees from aquaculture operations depends on whether or not there are wild conspecifics or close relatives in the receiving environment, and whether or not the escapee is reproductively capable. Several different mitigation/prevention strategies are currently employed, from the development of infertile triploids to land-based farms which are completely isolated from any marine environment. Escapees can adversely impact local ecosystems through hybridization and loss of genetic diversity in native stocks, increased negative interactions within an ecosystem (such as predation and competition), disease transmission, and habitat changes (from trophic cascades and ecosystem shifts to varying sediment regimes and thus turbidity). The accidental introduction of invasive species is also of concern. Aquaculture is one of the main vectors for invasives following accidental releases of farmed stocks into the wild. One example is the Siberian sturgeon (Acipenser baerii), which accidentally escaped from a fish farm into the Gironde Estuary (Southwest France) following a severe storm in December 1999 (5,000 individual fish escaped into the estuary, which had never hosted this species before). Molluscan farming is another example whereby species can be introduced to new environments by 'hitchhiking' on farmed molluscs. Also, farmed molluscs themselves can become dominant predators and/or competitors, as well as potentially spread pathogens and parasites. Genetic pollution, disease, and parasite transfer One of the primary concerns with mariculture is the potential for disease and parasite transfer. Farmed stocks are often selectively bred to increase disease and parasite resistance, as well as to improve growth rates and product quality. As a consequence, the genetic diversity within reared stocks decreases with every generation – meaning they can potentially reduce the genetic diversity within wild populations if they escape into those wild populations. 
Such genetic pollution from escaped aquaculture stock can reduce the wild population's ability to adjust to the changing natural environment. Species grown by mariculture can also harbour diseases and parasites (e.g., lice) which can be introduced to wild populations upon their escape. An example of this is the parasitic sea lice on wild and farmed Atlantic salmon in Canada. Also, non-indigenous species which are farmed may have resistance to, or carry, particular diseases (which they picked up in their native habitats) which could be spread through wild populations if they escape into those wild populations. Such ‘new’ diseases would be devastating for those wild populations because they would have no immunity to them. Habitat modification With the exception of benthic habitats directly beneath marine farms, most mariculture causes minimal destruction to habitats. However, the destruction of mangrove forests from the farming of shrimps is of concern. Globally, shrimp farming activity is a small contributor to the destruction of mangrove forests; however, locally it can be devastating. Mangrove forests provide rich matrices which support a great deal of biodiversity – predominately juvenile fish and crustaceans. Furthermore, they act as buffering systems whereby they reduce coastal erosion, and improve water quality for in situ animals by processing material and ‘filtering’ sediments. Others In addition, nitrogen and phosphorus compounds from food and waste may lead to blooms of phytoplankton, whose subsequent degradation can drastically reduce oxygen levels. If the algae are toxic, fish are killed and shellfish contaminated. These algal blooms are sometimes referred to as harmful algal blooms, which are caused by a high influx of nutrients, such as nitrogen and phosphorus, into the water due to run-off from land based human operations. Over the course of rearing various species, the sediment on bottom of the specific body of water becomes highly metallic with influx of copper, zinc and lead that is being introduced to the area. This influx of these heavy metals is likely due to the buildup of fish waste, uneaten fish feed, and the paint that comes off the boats and floats that are used in the mariculture operations. Sustainability Mariculture development may be sustained by basic and applied research and development in major fields such as nutrition, genetics, system management, product handling, and socioeconomics. One approach uses closed systems that have no direct interaction with the local environment. However, investment and operational cost are currently significantly higher than with open cages, limiting closed systems to their current role as hatcheries. Many studies have estimated that seafood will run out by 2048. Farmed fish will also become crucial to feeding the growing human population that will potentially reach 9.8 billion by 2050. Benefits Sustainable mariculture promises economic and environmental benefits. Economies of scale imply that ranching can produce fish at lower cost than industrial fishing, leading to better human diets and the gradual elimination of unsustainable fisheries. Consistent supply and quality control has enabled integration in food market channels. 
List of species farmed Fish European sea bass Bigeye tuna Cobia Grouper Snapper Pompano Salmon Pearlspot Yellowtail jack Mullet Pomfret Barramundi Shellfish/Crustaceans Abalone Oysters Prawn Mussels Plants Seaweeds Scientific literature Scientific literature on mariculture can be found in the following journals: Applied and Environmental Microbiology Aquaculture Aquaculture Research Journal of Marine Science Marine Resource Economics Ocean Shoreline Management Journal of Applied Phycology Journal of Experimental Marine Biology and Ecology Journal of Phycology Journal of Shellfish Research Reviews in Fish Biology and Fisheries Reviews in Fisheries Science
Technology
Aquaculture
null
19812
https://en.wikipedia.org/wiki/Project%20Mercury
Project Mercury
Project Mercury was the first human spaceflight program of the United States, running from 1958 through 1963. An early highlight of the Space Race, its goal was to put a man into Earth orbit and return him safely, ideally before the Soviet Union. Taken over from the US Air Force by the newly created civilian space agency NASA, it conducted 20 uncrewed developmental flights (some using animals), and six successful flights by astronauts. The program, which took its name from Roman mythology, cost $ (adjusted for inflation). The astronauts were collectively known as the "Mercury Seven", and each spacecraft was given a name ending with a "7" by its pilot. The Space Race began with the 1957 launch of the Soviet satellite Sputnik 1. This came as a shock to the American public, and led to the creation of NASA to expedite existing US space exploration efforts, and place most of them under civilian control. After the successful launch of the Explorer 1 satellite in 1958, crewed spaceflight became the next goal. The Soviet Union put the first human, cosmonaut Yuri Gagarin, into a single orbit aboard Vostok 1 on April 12, 1961. Shortly after this, on May 5, the US launched its first astronaut, Alan Shepard, on a suborbital flight. Soviet cosmonaut Gherman Titov followed with a day-long orbital flight in August 1961. The US reached its orbital goal on February 20, 1962, when John Glenn made three orbits around the Earth. When Mercury ended in May 1963, both nations had sent six people into space, but the Soviets led the US in total time spent in space. The Mercury space capsule was produced by McDonnell Aircraft, and carried supplies of water, food and oxygen for about one day in a pressurized cabin. Mercury flights were launched from Cape Canaveral Air Force Station in Florida, on launch vehicles modified from the Redstone and Atlas D missiles. The capsule was fitted with a launch escape rocket to carry it safely away from the launch vehicle in case of a failure. The flight was designed to be controlled from the ground via the Manned Space Flight Network, a system of tracking and communications stations; back-up controls were outfitted on board. Small retrorockets were used to bring the spacecraft out of its orbit, after which an ablative heat shield protected it from the heat of atmospheric reentry. Finally, a parachute slowed the craft for a water landing. Both astronaut and capsule were recovered by helicopters deployed from a US Navy ship. The Mercury project gained popularity, and its missions were followed by millions on radio and TV around the world. Its success laid the groundwork for Project Gemini, which carried two astronauts in each capsule and perfected space docking maneuvers essential for crewed lunar landings in the subsequent Apollo program, announced a few weeks after the first crewed Mercury flight. Creation Project Mercury was officially approved on October 7, 1958, and publicly announced on December 17. The program was originally to be called Project Astronaut, but President Dwight Eisenhower felt that name gave too much attention to the pilot. Instead, the name Mercury was chosen from classical mythology, which had already lent names to rockets like the Greek Atlas and Roman Jupiter for the SM-65 and PGM-19 missiles. It absorbed military projects with the same aim, such as the Air Force Man in Space Soonest. Background Following the end of World War II, a nuclear arms race evolved between the US and the Soviet Union (USSR).
Since the USSR did not have bases in the western hemisphere from which to deploy bomber planes, Joseph Stalin decided to develop intercontinental ballistic missiles, which drove a missile race. The rocket technology in turn enabled both sides to develop Earth-orbiting satellites for communications, and gathering weather data and intelligence. Americans were shocked when the Soviet Union placed the first satellite into orbit in October 1957, leading to a growing fear that the US was falling into a "missile gap". A month later, the Soviets launched Sputnik 2, carrying a dog into orbit. Though the animal was not recovered alive, it was obvious their goal was human spaceflight. Unable to disclose details of military space projects, President Eisenhower ordered the creation of a civilian space agency in charge of civilian and scientific space exploration. Based on the federal research agency National Advisory Committee for Aeronautics (NACA), it was named the National Aeronautics and Space Administration (NASA). The agency achieved its first goal of launching a satellite into space, the Pioneer 1, in 1958. The next goal was to put a man there. The limit of space (also known as the Kármán line) was defined at the time as a minimum altitude of , and the only way to reach it was by using rocket-powered boosters. This created risks for the pilot, including explosion, high g-forces and vibrations during lift off through a dense atmosphere, and temperatures of more than from air compression during reentry. In space, pilots would require pressurized chambers or space suits to supply fresh air. While there, they would experience weightlessness, which could potentially cause disorientation. Further potential risks included radiation and micrometeoroid strikes, both of which would normally be absorbed in the atmosphere. All seemed possible to overcome: experience from satellites suggested micrometeoroid risk was negligible, and experiments in the early 1950s with simulated weightlessness, high g-forces on humans, and sending animals to the limit of space, all suggested potential problems could be overcome by known technologies. Finally, reentry was studied using the nuclear warheads of ballistic missiles, which demonstrated a blunt, forward-facing heat shield could solve the problem of heating. Organization T. Keith Glennan had been appointed the first Administrator of NASA, with Hugh L. Dryden (last Director of NACA) as his Deputy, at the creation of the agency on October 1, 1958. Glennan would report to the president through the National Aeronautics and Space Council. The group responsible for Project Mercury was NASA's Space Task Group, and the goals of the program were to orbit a crewed spacecraft around Earth, investigate the pilot's ability to function in space, and to recover both pilot and spacecraft safely. Existing technology and off-the-shelf equipment would be used wherever practical, the simplest and most reliable approach to system design would be followed, and an existing launch vehicle would be employed, together with a progressive test program. Spacecraft requirements included: a launch escape system to separate the spacecraft and its occupant from the launch vehicle in case of impending failure; attitude control for orientation of the spacecraft in orbit; a retrorocket system to bring the spacecraft out of orbit; drag braking blunt body for atmospheric reentry; and landing on water. 
To communicate with the spacecraft during an orbital mission, an extensive communications network had to be built. In keeping with his desire to keep from giving the US space program an overtly military flavor, President Eisenhower at first hesitated to give the project top national priority (DX rating under the Defense Production Act), which meant that Mercury had to wait in line behind military projects for materials; however, this rating was granted in May 1959, a little more than a year and a half after Sputnik was launched. Contractors and facilities Twelve companies bid to build the Mercury spacecraft on a $20 million ($ adjusted for inflation) contract. In January 1959, McDonnell Aircraft Corporation was chosen to be prime contractor for the spacecraft. Two weeks earlier, North American Aviation, based in Los Angeles, was awarded a contract for Little Joe, a small rocket to be used for development of the launch escape system. The World Wide Tracking Network for communication between the ground and spacecraft during a flight was awarded to the Western Electric Company. Redstone rockets for suborbital launches were manufactured in Huntsville, Alabama, by the Chrysler Corporation and Atlas rockets by Convair in San Diego, California. For crewed launches, the Atlantic Missile Range at Cape Canaveral Air Force Station in Florida was made available by the USAF. This was also the site of the Mercury Control Center while the computing center of the communication network was in Goddard Space Center, Maryland. Little Joe rockets were launched from Wallops Island, Virginia. Astronaut training took place at Langley Research Center in Virginia, Lewis Flight Propulsion Laboratory in Cleveland, Ohio, and Naval Air Development Center Johnsville in Warminster, PA. Langley wind tunnels together with a rocket sled track at Holloman Air Force Base at Alamogordo, New Mexico were used for aerodynamic studies. Both Navy and Air Force aircraft were made available for the development of the spacecraft's landing system, and Navy ships and Navy and Marine Corps helicopters were made available for recovery. South of Cape Canaveral the town of Cocoa Beach boomed. From here, 75,000 people watched the first American orbital flight being launched in 1962. Spacecraft The Mercury spacecraft's principal designer was Maxime Faget, who started research for human spaceflight during the time of the NACA. It was long and wide; with the launch escape system added, the overall length was . With of habitable volume, the capsule was just large enough for a single crew member. Inside were 120 controls: 55 electrical switches, 30 fuses and 35 mechanical levers. The heaviest spacecraft, Mercury-Atlas 9, weighed fully loaded. Its outer skin was made of René 41, a nickel alloy able to withstand high temperatures. The spacecraft was cone shaped, with a neck at the narrow end. It had a convex base, which carried a heat shield (Item 2 in the diagram below) consisting of an aluminum honeycomb covered with multiple layers of fiberglass. Strapped to it was a retropack (1) consisting of three rockets deployed to brake the spacecraft during reentry. Between these were three posigrade rockets: minor rockets for separating the spacecraft from the launch vehicle at orbital insertion. The straps that held the package could be severed when it was no longer needed. Next to the heat shield was the pressurized crew compartment (3). 
Inside, an astronaut would be strapped to a form-fitting seat with instruments in front of him and with his back to the heat shield. Underneath the seat was the environmental control system supplying oxygen and heat, scrubbing the air of CO2, vapor and odors, and (on orbital flights) collecting urine. The recovery compartment (4) at the narrow end of the spacecraft contained three parachutes: a drogue to stabilize free fall and two main chutes, a primary and reserve. Between the heat shield and inner wall of the crew compartment was a landing skirt, deployed by letting down the heat shield before landing. On top of the recovery compartment was the antenna section (5) containing both antennas for communication and scanners for guiding spacecraft orientation. Attached was a flap used to ensure the spacecraft faced heat shield first during reentry. A launch escape system (6) was mounted to the narrow end of the spacecraft containing three small solid-fueled rockets which could be fired briefly in a launch failure to separate the capsule safely from its booster. It would then deploy the capsule's parachute for a landing nearby at sea.
Technology
Programs and launch sites
null
19823
https://en.wikipedia.org/wiki/Maya%20numerals
Maya numerals
The Mayan numeral system was the system used to represent numbers and calendar dates in the Maya civilization. It was a vigesimal (base-20) positional numeral system. The numerals are made up of three symbols: zero (a shell), one (a dot) and five (a bar). For example, thirteen is written as three dots in a horizontal row above two horizontal bars; sometimes it is also written as three vertical dots to the left of two vertical bars. With these three symbols, each of the twenty vigesimal digits could be written. Numbers after 19 were written vertically in powers of twenty. The Maya used powers of twenty, just as the Hindu–Arabic numeral system uses powers of ten. For example, thirty-three would be written as one dot, above three dots atop two bars. The first dot represents "one twenty" or "1×20", which is added to three dots and two bars, or thirteen. Therefore, (1×20) + 13 = 33. Upon reaching 20² or 400, another row is started (then 20³ or 8,000, then 20⁴ or 160,000, and so on). The number 429 would be written as one dot above one dot above four dots and a bar, or (1×20²) + (1×20¹) + 9 = 429. Other than the bar and dot notation, Maya numerals were sometimes illustrated by face-type glyphs or pictures. The face glyph for a number represents the deity associated with the number. These face number glyphs were rarely used, and are mostly seen on some of the most elaborate monumental carvings. There are different representations of zero in the Dresden Codex, as can be seen at page 43b (which is concerned with the synodic cycle of Mars). It has been suggested that these pointed, oblong "bread" representations are calligraphic variants of the PET logogram, approximately meaning "circular" or "rounded", and perhaps the basis of a derived noun meaning "totality" or "grouping", such that the representations may be an appropriate marker for a number position which has reached its totality. Addition and subtraction Adding and subtracting numbers below 20 using Maya numerals is very simple. Addition is performed by combining the numeric symbols at each level: If five or more dots result from the combination, five dots are removed and replaced by a bar. If four or more bars result, four bars are removed and a dot is added to the next higher row. This also means that the value of 1 bar is 5. Similarly with subtraction, remove the elements of the subtrahend symbol from the minuend symbol: If there are not enough dots in a minuend position, a bar is replaced by five dots. If there are not enough bars, a dot is removed from the next higher minuend symbol in the column and four bars are added to the minuend symbol which is being worked on. Modified vigesimal system in the Maya calendar The "Long Count" portion of the Maya calendar uses a variation on the strictly vigesimal numerals to show a Long Count date. In the second position, only the digits up to 17 are used, and the place value of the third position is not 20×20 = 400, as would otherwise be expected, but 18×20 = 360, so that one dot over two zeros signifies 360. Presumably, this is because 360 is roughly the number of days in a year.
(The Maya had, however, a quite accurate estimate of 365.2422 days for the solar year since at least the early Classic era.) Subsequent positions use all twenty digits and the place values continue as 18×20×20 = 7,200 and 18×20×20×20 = 144,000, etc. Every known example of large numbers in the Maya system uses this 'modified vigesimal' system, with the third position representing multiples of 18×20. It is reasonable to assume, but not proven by any evidence, that the normal system in use was a pure base-20 system. Origins Several Mesoamerican cultures used similar numerals and base-twenty systems, and the Mesoamerican Long Count calendar required the use of zero as a place-holder. The earliest Long Count date (on Stela 2 at Chiapa de Corzo, Chiapas) is from 36 BC. Since the eight earliest Long Count dates appear outside the Maya homeland, it is assumed that the use of zero and the Long Count calendar predated the Maya, and was possibly the invention of the Olmec. Indeed, many of the earliest Long Count dates were found within the Olmec heartland. However, the Olmec civilization had come to an end by the 4th century BC, several centuries before the earliest known Long Count dates, which suggests that zero was not an Olmec discovery. Unicode Maya numerals are encoded in Unicode in the block U+1D2E0 to U+1D2F3.
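The place-value rules described above are mechanical enough to sketch in a few lines of code. The following Python snippet is an illustration only; the function names and the text rendering convention are invented here. It converts a non-negative integer into Maya-style digits (most significant first), optionally applying the Long Count's modified second place, and renders a single digit with dots and bars.

    # Minimal sketch of the base-20 place-value rules, plus the "modified
    # vigesimal" Long Count variant where the second place rolls over at 18.

    def maya_digits(n, long_count=False):
        """Return the base-20 digits of n, most significant first."""
        if n == 0:
            return [0]
        digits = []
        position = 0
        while n > 0:
            base = 18 if (long_count and position == 1) else 20
            digits.append(n % base)
            n //= base
            position += 1
        return list(reversed(digits))

    def render_digit(d):
        """Render one digit (0-19): dots are ones, bars are fives, 0 is a shell."""
        if d == 0:
            return "(shell)"
        return "." * (d % 5) + " " + "-" * (d // 5)

    print(maya_digits(33))                     # [1, 13]   -> (1 x 20) + 13
    print(maya_digits(429))                    # [1, 1, 9] -> (1 x 400) + (1 x 20) + 9
    print(maya_digits(360, long_count=True))   # [1, 0, 0] -> one dot over two zeros
    for d in maya_digits(429):
        print(render_digit(d))

The Long Count branch reproduces the place values 1, 20, 18×20 = 360, 18×20×20 = 7,200, and so on, because only the second position is divided by 18.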
Mathematics
Basics
null
19830
https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann%20distribution
Maxwell–Boltzmann distribution
In physics (in particular in statistical mechanics), the Maxwell–Boltzmann distribution, or Maxwell(ian) distribution, is a particular probability distribution named after James Clerk Maxwell and Ludwig Boltzmann. It was first defined and used for describing particle speeds in idealized gases, where the particles move freely inside a stationary container without interacting with one another, except for very brief collisions in which they exchange energy and momentum with each other or with their thermal environment. The term "particle" in this context refers to gaseous particles only (atoms or molecules), and the system of particles is assumed to have reached thermodynamic equilibrium. The energies of such particles follow what is known as Maxwell–Boltzmann statistics, and the statistical distribution of speeds is derived by equating particle energies with kinetic energy. Mathematically, the Maxwell–Boltzmann distribution is the chi distribution with three degrees of freedom (the components of the velocity vector in Euclidean space), with a scale parameter measuring speeds in units proportional to the square root of (the ratio of temperature and particle mass). The Maxwell–Boltzmann distribution is a result of the kinetic theory of gases, which provides a simplified explanation of many fundamental gaseous properties, including pressure and diffusion. The Maxwell–Boltzmann distribution applies fundamentally to particle velocities in three dimensions, but turns out to depend only on the speed (the magnitude of the velocity) of the particles. A particle speed probability distribution indicates which speeds are more likely: a randomly chosen particle will have a speed selected randomly from the distribution, and is more likely to be within one range of speeds than another. The kinetic theory of gases applies to the classical ideal gas, which is an idealization of real gases. In real gases, there are various effects (e.g., van der Waals interactions, vortical flow, relativistic speed limits, and quantum exchange interactions) that can make their speed distribution different from the Maxwell–Boltzmann form. However, rarefied gases at ordinary temperatures behave very nearly like an ideal gas and the Maxwell speed distribution is an excellent approximation for such gases. This is also true for ideal plasmas, which are ionized gases of sufficiently low density. The distribution was first derived by Maxwell in 1860 on heuristic grounds. Boltzmann later, in the 1870s, carried out significant investigations into the physical origins of this distribution. The distribution can be derived on the grounds that it maximizes the entropy of the system. It can be obtained, for example, as the maximum entropy probability distribution in phase space, with the constraint of conservation of average energy, or from the canonical ensemble. Distribution function For a system containing a large number of identical non-interacting, non-relativistic classical particles in thermodynamic equilibrium, the fraction of the particles within an infinitesimal element of the three-dimensional velocity space , centered on a velocity vector of magnitude , is given by where: is the particle mass; is the Boltzmann constant; is thermodynamic temperature; is a probability distribution function, properly normalized so that over all velocities is unity. One can write the element of velocity space as , for velocities in a standard Cartesian coordinate system, or as in a standard spherical coordinate system, where is an element of solid angle and .
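For concreteness, in standard notation (particle mass m, Boltzmann constant k_B, thermodynamic temperature T, and speed v = |v|), the velocity-space distribution and the resulting speed distribution take the familiar form

    f(\mathbf{v})\, d^3 v
      = \left(\frac{m}{2\pi k_B T}\right)^{3/2}
        \exp\!\left(-\frac{m |\mathbf{v}|^2}{2 k_B T}\right) d^3 v,
    \qquad
    f(v)
      = \left(\frac{m}{2\pi k_B T}\right)^{3/2}
        4\pi v^2 \exp\!\left(-\frac{m v^2}{2 k_B T}\right),

with scale parameter a = \sqrt{k_B T / m}, consistent with the chi-distribution description given above.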
The Maxwellian distribution function for particles moving in only one direction, if this direction is , is which can be obtained by integrating the three-dimensional form given above over and . Recognizing the symmetry of , one can integrate over solid angle and write a probability distribution of speeds as the function This probability density function gives the probability, per unit speed, of finding the particle with a speed near . This equation is simply the Maxwell–Boltzmann distribution (given in the infobox) with distribution parameter The Maxwell–Boltzmann distribution is equivalent to the chi distribution with three degrees of freedom and scale parameter The simplest ordinary differential equation satisfied by the distribution is: or in unitless presentation: With the Darwin–Fowler method of mean values, the Maxwell–Boltzmann distribution is obtained as an exact result. Relaxation to the 2D Maxwell–Boltzmann distribution For particles confined to move in a plane, the speed distribution is given by This distribution is used for describing systems in equilibrium. However, most systems do not start out in their equilibrium state. The evolution of a system towards its equilibrium state is governed by the Boltzmann equation. The equation predicts that for short range interactions, the equilibrium velocity distribution will follow a Maxwell–Boltzmann distribution. To the right is a molecular dynamics (MD) simulation in which 900 hard sphere particles are constrained to move in a rectangle. They interact via perfectly elastic collisions. The system is initialized out of equilibrium, but the velocity distribution (in blue) quickly converges to the 2D Maxwell–Boltzmann distribution (in orange). Typical speeds The mean speed , most probable speed (mode) , and root-mean-square speed can be obtained from properties of the Maxwell distribution. This works well for nearly ideal, monatomic gases like helium, but also for molecular gases like diatomic oxygen. This is because despite the larger heat capacity (larger internal energy at the same temperature) due to their larger number of degrees of freedom, their translational kinetic energy (and thus their speed) is unchanged. In summary, the typical speeds are related as follows: The root mean square speed is directly related to the speed of sound in the gas, by where is the adiabatic index, is the number of degrees of freedom of the individual gas molecule. For the example above, diatomic nitrogen (approximating air) at , and the true value for air can be approximated by using the average molar weight of air (), yielding at (corrections for variable humidity are of the order of 0.1% to 0.6%). The average relative velocity where the three-dimensional velocity distribution is The integral can easily be done by changing to coordinates and Limitations The Maxwell–Boltzmann distribution assumes that the velocities of individual particles are much less than the speed of light, i.e. that . For electrons, the temperature of electrons must be . Derivation and related distributions Maxwell–Boltzmann statistics The original derivation in 1860 by James Clerk Maxwell was an argument based on molecular collisions of the Kinetic theory of gases as well as certain symmetries in the speed distribution function; Maxwell also gave an early argument that these molecular collisions entail a tendency towards equilibrium. 
After Maxwell, Ludwig Boltzmann in 1872 also derived the distribution on mechanical grounds and argued that gases should over time tend toward this distribution, due to collisions (see H-theorem). He later (1877) derived the distribution again under the framework of statistical thermodynamics. The derivations in this section are along the lines of Boltzmann's 1877 derivation, starting with result known as Maxwell–Boltzmann statistics (from statistical thermodynamics). Maxwell–Boltzmann statistics gives the average number of particles found in a given single-particle microstate. Under certain assumptions, the logarithm of the fraction of particles in a given microstate is linear in the ratio of the energy of that state to the temperature of the system: there are constants and such that, for all , The assumptions of this equation are that the particles do not interact, and that they are classical; this means that each particle's state can be considered independently from the other particles' states. Additionally, the particles are assumed to be in thermal equilibrium. This relation can be written as an equation by introducing a normalizing factor: where: is the expected number of particles in the single-particle microstate , is the total number of particles in the system, is the energy of microstate , the sum over index takes into account all microstates, is the equilibrium temperature of the system, is the Boltzmann constant. The denominator in is a normalizing factor so that the ratios add up to unity — in other words it is a kind of partition function (for the single-particle system, not the usual partition function of the entire system). Because velocity and speed are related to energy, Equation () can be used to derive relationships between temperature and the speeds of gas particles. All that is needed is to discover the density of microstates in energy, which is determined by dividing up momentum space into equal sized regions. Distribution for the momentum vector The potential energy is taken to be zero, so that all energy is in the form of kinetic energy. The relationship between kinetic energy and momentum for massive non-relativistic particles is where is the square of the momentum vector . We may therefore rewrite Equation () as: where: is the partition function, corresponding to the denominator in ; is the molecular mass of the gas; is the thermodynamic temperature; is the Boltzmann constant. This distribution of is proportional to the probability density function for finding a molecule with these values of momentum components, so: The normalizing constant can be determined by recognizing that the probability of a molecule having some momentum must be 1. Integrating the exponential in over all , , and yields a factor of So that the normalized distribution function is: The distribution is seen to be the product of three independent normally distributed variables , , and , with variance . Additionally, it can be seen that the magnitude of momentum will be distributed as a Maxwell–Boltzmann distribution, with . The Maxwell–Boltzmann distribution for the momentum (or equally for the velocities) can be obtained more fundamentally using the H-theorem at equilibrium within the Kinetic theory of gases framework. Distribution for the energy The energy distribution is found imposing where is the infinitesimal phase-space volume of momenta corresponding to the energy interval . 
Making use of the spherical symmetry of the energy-momentum dispersion relation this can be expressed in terms of as Using then () in (), and expressing everything in terms of the energy , we get and finally Since the energy is proportional to the sum of the squares of the three normally distributed momentum components, this energy distribution can be written equivalently as a gamma distribution, using a shape parameter, and a scale parameter, Using the equipartition theorem, given that the energy is evenly distributed among all three degrees of freedom in equilibrium, we can also split into a set of chi-squared distributions, where the energy per degree of freedom, is distributed as a chi-squared distribution with one degree of freedom, At equilibrium, this distribution will hold true for any number of degrees of freedom. For example, if the particles are rigid mass dipoles of fixed dipole moment, they will have three translational degrees of freedom and two additional rotational degrees of freedom. The energy in each degree of freedom will be described according to the above chi-squared distribution with one degree of freedom, and the total energy will be distributed according to a chi-squared distribution with five degrees of freedom. This has implications in the theory of the specific heat of a gas. Distribution for the velocity vector Recognizing that the velocity probability density is proportional to the momentum probability density function by and using we get which is the Maxwell–Boltzmann velocity distribution. The probability of finding a particle with velocity in the infinitesimal element about velocity is Like the momentum, this distribution is seen to be the product of three independent normally distributed variables , , and , but with variance . It can also be seen that the Maxwell–Boltzmann velocity distribution for the vector velocity is the product of the distributions for each of the three directions: where the distribution for a single direction is Each component of the velocity vector has a normal distribution with mean and standard deviation , so the vector has a 3-dimensional normal distribution, a particular kind of multivariate normal distribution, with mean and covariance , where is the identity matrix. Distribution for the speed The Maxwell–Boltzmann distribution for the speed follows immediately from the distribution of the velocity vector, above. Note that the speed is and the volume element in spherical coordinates where and are the spherical coordinate angles of the velocity vector. Integration of the probability density function of the velocity over the solid angles yields an additional factor of . The speed distribution with substitution of the speed for the sum of the squares of the vector components: In n-dimensional space In -dimensional space, Maxwell–Boltzmann distribution becomes: Speed distribution becomes: where is a normalizing constant. The following integral result is useful: where is the Gamma function. This result can be used to calculate the moments of speed distribution function: which is the mean speed itself which gives root-mean-square speed The derivative of speed distribution function: This yields the most probable speed (mode)
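As a small numerical check of the typical-speed relations discussed above, the following Python sketch evaluates the standard formulas v_p = sqrt(2kT/m), v_mean = sqrt(8kT/(pi m)), and v_rms = sqrt(3kT/m). The choice of nitrogen at 300 K is an assumed example, not a value taken from the text.

    import math

    K_B = 1.380649e-23           # Boltzmann constant, J/K
    N_A = 6.02214076e23          # Avogadro constant, 1/mol

    def typical_speeds(molar_mass_kg_per_mol, temperature_k):
        """Most probable, mean, and root-mean-square speeds in m/s."""
        m = molar_mass_kg_per_mol / N_A          # mass of one molecule, kg
        a2 = K_B * temperature_k / m             # square of the scale parameter
        v_p = math.sqrt(2.0 * a2)                # most probable speed (mode)
        v_mean = math.sqrt(8.0 * a2 / math.pi)   # mean speed
        v_rms = math.sqrt(3.0 * a2)              # root-mean-square speed
        return v_p, v_mean, v_rms

    v_p, v_mean, v_rms = typical_speeds(28.0134e-3, 300.0)   # N2 at 300 K
    print(f"v_p ~ {v_p:.0f} m/s, v_mean ~ {v_mean:.0f} m/s, v_rms ~ {v_rms:.0f} m/s")
    # Expected ordering v_p < v_mean < v_rms, roughly 422 < 476 < 517 m/s.

The fixed ratios v_mean/v_p = 2/sqrt(pi) and v_rms/v_p = sqrt(3/2) hold for any gas and temperature, which is why only the ordering, not the absolute numbers, is characteristic of the distribution itself.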
Physical sciences
Statistical mechanics
Physics
19833
https://en.wikipedia.org/wiki/Metastability
Metastability
In chemistry and physics, metastability is an intermediate energetic state within a dynamical system other than the system's state of least energy. A ball resting in a hollow on a slope is a simple example of metastability. If the ball is only slightly pushed, it will settle back into its hollow, but a stronger push may start the ball rolling down the slope. Bowling pins show similar metastability by either merely wobbling for a moment or tipping over completely. A common example of metastability in science is isomerisation. Higher energy isomers are long lived because they are prevented from rearranging to their preferred ground state by (possibly large) barriers in the potential energy. During a metastable state of finite lifetime, all state-describing parameters reach and hold stationary values. In isolation: the state of least energy is the only one the system will inhabit for an indefinite length of time, until more external energy is added to the system (unique "absolutely stable" state); the system will spontaneously leave any other state (of higher energy) to eventually return (after a sequence of transitions) to the least energetic state. The metastability concept originated in the physics of first-order phase transitions. It then acquired new meaning in the study of aggregated subatomic particles (in atomic nuclei or in atoms) or in molecules, macromolecules or clusters of atoms and molecules. Later, it was borrowed for the study of decision-making and information transmission systems. Metastability is common in physics and chemistry – from an atom (many-body assembly) to statistical ensembles of molecules (viscous fluids, amorphous solids, liquid crystals, minerals, etc.) at molecular levels or as a whole (see Metastable states of matter and grain piles below). The abundance of states is more prevalent as the systems grow larger and/or if the forces of their mutual interaction are spatially less uniform or more diverse. In dynamic systems (with feedback) like electronic circuits, signal trafficking, decisional, neural and immune systems, the time-invariance of the active or reactive patterns with respect to the external influences defines stability and metastability (see brain metastability below). In these systems, the equivalent of thermal fluctuations in molecular systems is the "white noise" that affects signal propagation and the decision-making. Statistical physics and thermodynamics Non-equilibrium thermodynamics is a branch of physics that studies the dynamics of statistical ensembles of molecules via unstable states. Being "stuck" in a thermodynamic trough without being at the lowest energy state is known as having kinetic stability or being kinetically persistent. The particular motion or kinetics of the atoms involved has resulted in getting stuck, despite there being preferable (lower-energy) alternatives. States of matter Metastable states of matter (also referred as metastates) range from melting solids (or freezing liquids), boiling liquids (or condensing gases) and sublimating solids to supercooled liquids or superheated liquid-gas mixtures. Extremely pure, supercooled water stays liquid below 0 °C and remains so until applied vibrations or condensing seed doping initiates crystallization centers. This is a common situation for the droplets of atmospheric clouds. Condensed matter and macromolecules Metastable phases are common in condensed matter and crystallography. 
This is the case for anatase, a metastable polymorph of titanium dioxide, which, despite commonly being the first phase to form in many synthesis processes due to its lower surface energy, is always metastable, with rutile being the most stable phase at all temperatures and pressures. As another example, diamond is a stable phase only at very high pressures, but is a metastable form of carbon at standard temperature and pressure. It can be converted to graphite (plus leftover kinetic energy), but only after overcoming an activation energy – an intervening hill. Martensite is a metastable phase used to control the hardness of most steel. Metastable polymorphs of silica are commonly observed. In some cases, such as in the allotropes of solid boron, acquiring a sample of the stable phase is difficult. The bonds between the building blocks of polymers such as DNA, RNA, and proteins are also metastable. Adenosine triphosphate (ATP) is a highly metastable molecule, colloquially described as being "full of energy" that can be used in many ways in biology. Generally speaking, emulsions/colloidal systems and glasses are metastable. The metastability of silica glass, for example, is characterised by lifetimes on the order of 10⁹⁸ years (as compared with the lifetime of the universe, which is thought to be around years). Sandpiles are one system which can exhibit metastability if a steep slope or tunnel is present. Sand grains form a pile due to friction. It is possible for an entire large sand pile to reach a point where it is stable, but the addition of a single grain causes large parts of it to collapse. The avalanche is a well-known problem with large piles of snow and ice crystals on steep slopes. In dry conditions, snow slopes act similarly to sandpiles. An entire mountainside of snow can suddenly slide due to the presence of a skier, or even a loud noise or vibration. Quantum mechanics Aggregated systems of subatomic particles described by quantum mechanics (quarks inside nucleons, nucleons inside atomic nuclei, electrons inside atoms, molecules, or atomic clusters) are found to have many distinguishable states. Of these, one (or a small degenerate set) is indefinitely stable: the ground state or global minimum. All other states besides the ground state (or those degenerate with it) have higher energies. Of all these other states, the metastable states are the ones having lifetimes lasting at least 10² to 10³ times longer than the shortest-lived states of the set. A metastable state is then long-lived (locally stable with respect to configurations of 'neighbouring' energies) but not eternal (as the global minimum is). Being excited – of an energy above the ground state – it will eventually decay to a more stable state, releasing energy. Indeed, above absolute zero, all states of a system have a non-zero probability to decay; that is, to spontaneously fall into another state (usually lower in energy). One mechanism for this to happen is through tunnelling. Nuclear physics Some energetic states of an atomic nucleus (having distinct spatial mass, charge, spin, isospin distributions) are much longer-lived than others (nuclear isomers of the same isotope), e.g. technetium-99m. The isotope tantalum-180m, although being a metastable excited state, is long-lived enough that it has never been observed to decay, with a half-life calculated to be at least years, over 3 million times the current age of the universe. Atomic and molecular physics Some atomic energy levels are metastable.
Rydberg atoms are an example of metastable excited atomic states. Transitions from metastable excited levels are typically those forbidden by electric dipole selection rules. This means that any transitions from this level are relatively unlikely to occur. In a sense, an electron that happens to find itself in a metastable configuration is trapped there. Since transitions from a metastable state are not impossible (merely less likely), the electron will eventually decay to a less energetic state, typically by an electric quadrupole transition, or often by non-radiative de-excitation (e.g., collisional de-excitation). This slow-decay property of a metastable state is apparent in phosphorescence, the kind of photoluminescence seen in glow-in-the-dark toys that can be charged by first being exposed to bright light. Whereas spontaneous emission in atoms has a typical timescale on the order of 10⁻⁸ seconds, the decay of metastable states can typically take milliseconds to minutes, and so light emitted in phosphorescence is usually both weak and long-lasting. Chemistry In chemical systems, a system of atoms or molecules involving a change in chemical bond can be in a metastable state, which lasts for a relatively long period of time. Molecular vibrations and thermal motion make chemical species at the energetic equivalent of the top of a round hill very short-lived. Metastable states that persist for many seconds (or years) are found in energetic valleys which are not the lowest possible valley (point 1 in illustration). A common type of metastability is isomerism. The stability or metastability of a given chemical system depends on its environment, particularly temperature and pressure. The difference between producing a stable vs. metastable entity can have important consequences. For instance, having the wrong crystal polymorph can result in failure of a drug while in storage between manufacture and administration. The map of which state is the most stable as a function of pressure, temperature and/or composition is known as a phase diagram. In regions where a particular state is not the most stable, it may still be metastable. Reaction intermediates are relatively short-lived, and are usually thermodynamically unstable rather than metastable. The IUPAC recommends referring to these as transient rather than metastable. Metastability is also used to refer to specific situations in mass spectrometry and spectrochemistry. Electronic circuits A digital circuit is supposed to be found in a small number of stable digital states within a certain amount of time after an input change. However, if an input changes at the wrong moment, a digital circuit which employs feedback (even a simple circuit such as a flip-flop) can enter a metastable state and take an unbounded length of time to finally settle into a fully stable digital state. Computational neuroscience Metastability in the brain is a phenomenon studied in computational neuroscience to elucidate how the human brain recognizes patterns. Here, the term metastability is used rather loosely. There is no lower-energy state, but there are semi-transient signals in the brain that persist for a while and are different from the usual equilibrium state.
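To put rough numbers on the phosphorescence point made in the atomic and molecular physics discussion above: if a fixed population of N excited atoms decays exponentially with lifetime tau, the initial photon emission rate is N/tau, so stretching tau from nanoseconds to seconds dims the glow by many orders of magnitude while prolonging it. The Python sketch below uses arbitrary illustrative numbers, not values from any measurement.

    import math

    N0 = 1.0e12                        # excited atoms (arbitrary example value)
    for tau in (1.0e-8, 1.0):          # allowed transition vs. metastable state, seconds
        initial_rate = N0 / tau        # photons emitted per second at t = 0
        t_99 = tau * math.log(100.0)   # time until 99% of the population has decayed
        print(f"tau = {tau:g} s: initial rate ~ {initial_rate:.2e} /s, "
              f"99% decayed after ~ {t_99:.2e} s")

With the same stored energy, the metastable case emits about eight orders of magnitude fewer photons per second at the start, but keeps emitting for correspondingly longer.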
In philosophy Gilbert Simondon invokes a notion of metastability in his understanding of systems that, rather than resolving their tensions and potentials for transformation into a single final state, 'conserve the tensions in the equilibrium of metastability instead of nullifying them in the equilibrium of stability', as a critique of cybernetic notions of homeostasis.
Physical sciences
Physics basics: General
Physics
19838
https://en.wikipedia.org/wiki/Metallic%20bonding
Metallic bonding
Metallic bonding is a type of chemical bonding that arises from the electrostatic attractive force between conduction electrons (in the form of an electron cloud of delocalized electrons) and positively charged metal ions. It may be described as the sharing of free electrons among a structure of positively charged ions (cations). Metallic bonding accounts for many physical properties of metals, such as strength, ductility, thermal and electrical resistivity and conductivity, opacity, and lustre. Metallic bonding is not the only type of chemical bonding a metal can exhibit, even as a pure substance. For example, elemental gallium consists of covalently-bound pairs of atoms in both the liquid and the solid state; these pairs form a crystal structure with metallic bonding between them. Another example of a metal–metal covalent bond is the mercurous ion (). History As chemistry developed into a science, it became clear that metals formed the majority of the periodic table of the elements, and great progress was made in the description of the salts that can be formed in reactions with acids. With the advent of electrochemistry, it became clear that metals generally go into solution as positively charged ions, and the oxidation reactions of the metals became well understood in their electrochemical series. A picture emerged of metals as positive ions held together by an ocean of negative electrons. With the advent of quantum mechanics, this picture was given a more formal interpretation in the form of the free electron model and its further extension, the nearly free electron model. In both models, the electrons are seen as a gas traveling through the structure of the solid with an energy that is essentially isotropic, in that it depends on the square of the magnitude, not the direction of the momentum vector k. In three-dimensional k-space, the set of points of the highest filled levels (the Fermi surface) should therefore be a sphere. In the nearly-free model, box-like Brillouin zones are added to k-space by the periodic potential experienced from the (ionic) structure, thus mildly breaking the isotropy. The advent of X-ray diffraction and thermal analysis made it possible to study the structure of crystalline solids, including metals and their alloys; and phase diagrams were developed. Despite all this progress, the nature of intermetallic compounds and alloys largely remained a mystery and their study was often merely empirical. Chemists generally steered away from anything that did not seem to follow Dalton's laws of multiple proportions; and the problem was considered the domain of a different science, metallurgy. The nearly-free electron model was eagerly taken up by some researchers in metallurgy, notably Hume-Rothery, in an attempt to explain why intermetallic alloys with certain compositions would form and others would not. Initially Hume-Rothery's attempts were quite successful. His idea was to add electrons to inflate the spherical Fermi-balloon inside the series of Brillouin-boxes and determine when a certain box would be full. This predicted a fairly large number of alloy compositions that were later observed. As soon as cyclotron resonance became available and the shape of the balloon could be determined, it was found that the balloon was not spherical as Hume-Rothery had assumed, except perhaps in the case of caesium. This revealed how a model can sometimes give a whole series of correct predictions, yet still be wrong in its basic assumptions.
The nearly-free electron debacle compelled researchers to modify the assumption that ions floated in a sea of free electrons. A number of quantum mechanical models were developed, such as band structure calculations based on molecular orbitals, and the density functional theory. These models either start from the atomic orbitals of neutral atoms that share their electrons or (in the case of density functional theory) start from the total electron density. The free-electron picture has, nevertheless, remained a dominant one in introductory courses on metallurgy. The electronic band structure model became a major focus for the study of metals, and even more so for the study of semiconductors. Together with the electronic states, the vibrational states were also shown to form bands. Rudolf Peierls showed that, in the case of a one-dimensional row of metallic atoms (say, hydrogen), an inevitable instability would break such a chain into individual molecules. This sparked an interest in the general question: when is collective metallic bonding stable, and when will a localized bonding take its place? Much research went into the study of clustering of metal atoms. As powerful as the band structure model proved to be in describing metallic bonding, it remains a one-electron approximation of a many-body problem: the energy states of an individual electron are described as if all the other electrons form a homogeneous background. Researchers such as Mott and Hubbard realized that the one-electron treatment was perhaps appropriate for strongly delocalized s- and p-electrons; but for d-electrons, and even more for f-electrons, the interaction with nearby individual electrons (and atomic displacements) may become stronger than the delocalized interaction that leads to broad bands. This gave a better explanation for the transition from localized unpaired electrons to itinerant ones partaking in metallic bonding. The nature of metallic bonding The combination of two phenomena gives rise to metallic bonding: delocalization of electrons and the availability of a far larger number of delocalized energy states than of delocalized electrons. The latter could be called electron deficiency. In 2D Graphene is an example of two-dimensional metallic bonding. Its metallic bonds are similar to aromatic bonding in benzene, naphthalene, anthracene, ovalene, etc. In 3D Metal aromaticity in metal clusters is another example of delocalization, this time often in three-dimensional arrangements. Metals take the delocalization principle to its extreme, and one could say that a crystal of a metal represents a single molecule over which all conduction electrons are delocalized in all three dimensions. This means that inside the metal one can generally not distinguish molecules, so that the metallic bonding is neither intra- nor inter-molecular. 'Nonmolecular' would perhaps be a better term. Metallic bonding is mostly non-polar, because even in alloys there is little difference among the electronegativities of the atoms participating in the bonding interaction (and, in pure elemental metals, none at all). Thus, metallic bonding is an extremely delocalized communal form of covalent bonding. In a sense, metallic bonding is not a 'new' type of bonding at all. It describes the bonding only as present in a chunk of condensed matter: be it crystalline solid, liquid, or even glass. Metallic vapors, in contrast, are often atomic (Hg) or at times contain molecules, such as Na2, held together by a more conventional covalent bond.
This is why it is not correct to speak of a single 'metallic bond'. Delocalization is most pronounced for s- and p-electrons. Delocalization in caesium is so strong that the electrons are virtually freed from the caesium atoms to form a gas constrained only by the surface of the metal. For caesium, therefore, the picture of Cs+ ions held together by a negatively charged electron gas is very close to accurate (though not perfectly so). For other elements the electrons are less free, in that they still experience the potential of the metal atoms, sometimes quite strongly. They require a more intricate quantum mechanical treatment (e.g., tight binding) in which the atoms are viewed as neutral, much like the carbon atoms in benzene. For d- and especially f-electrons the delocalization is not strong at all, and this explains why these electrons are able to continue behaving as unpaired electrons that retain their spin, adding interesting magnetic properties to these metals. Electron deficiency and mobility Metal atoms contain few electrons in their valence shells relative to their periods or energy levels. They are electron-deficient elements and the communal sharing does not change that. There remain far more available energy states than there are shared electrons. Both requirements for conductivity are therefore fulfilled: strong delocalization and partly filled energy bands. Such electrons can therefore easily change from one energy state to a slightly different one. Thus, not only do they become delocalized, forming a sea of electrons permeating the structure, but they are also able to migrate through the structure when an external electrical field is applied, leading to electrical conductivity. Without the field, there are electrons moving equally in all directions. Within such a field, some electrons will adjust their state slightly, adopting a different wave vector. Consequently, there will be more moving one way than another and a net current will result. The freedom of electrons to migrate also gives metal atoms, or layers of them, the capacity to slide past each other. Locally, bonds can easily be broken and replaced by new ones after a deformation. This process does not affect the communal metallic bonding very much, which gives rise to metals' characteristic malleability and ductility. This is particularly true for pure elements. In the presence of dissolved impurities, the normally easily formed cleavages may be blocked and the material becomes harder. Gold, for example, is very soft in pure form (24-karat), which is why alloys are preferred in jewelry. Metals are typically also good conductors of heat, but the conduction electrons only contribute partly to this phenomenon. Collective (i.e., delocalized) vibrations of the atoms, known as phonons, which travel through the solid as waves, are bigger contributors. However, a substance such as diamond, which conducts heat quite well, is not an electrical conductor. This is not a consequence of delocalization being absent in diamond, but simply that carbon is not electron deficient. Electron deficiency is important in distinguishing metallic from more conventional covalent bonding. Thus, we should amend the expression given above to: Metallic bonding is an extremely delocalized communal form of electron-deficient covalent bonding. Metallic radius The metallic radius is defined as one-half of the distance between two adjacent metal ions in the metallic structure.
This radius depends on the nature of the atom as well as its environment; specifically, it depends on the coordination number (CN), which in turn depends on the temperature and applied pressure. When comparing periodic trends in the size of atoms it is often desirable to apply the so-called Goldschmidt correction, which converts atomic radii to the values the atoms would have if they were 12-coordinated. Since metallic radii are largest for the highest coordination number, correction for less dense coordinations involves multiplying by a correction factor x, where 0 < x < 1. Specifically, for CN = 4, x = 0.88; for CN = 6, x = 0.96; and for CN = 8, x = 0.97. The correction is named after Victor Goldschmidt who obtained the numerical values quoted above. The radii follow general periodic trends: they decrease across the period due to the increase in the effective nuclear charge, which is not offset by the increased number of valence electrons; but the radii increase down the group due to an increase in the principal quantum number. Between the 4d and 5d elements, the lanthanide contraction is observed: there is very little increase of the radius down the group due to the presence of poorly shielding f orbitals. Strength of the bond The atoms in metals have a strong attractive force between them. Much energy is required to overcome it. Therefore, metals often have high boiling points, with that of tungsten (5828 K) being extremely high. A remarkable exception is the elements of the zinc group: Zn, Cd, and Hg. Their electron configurations end in ...ns², which comes to resemble a noble gas configuration, like that of helium, more and more when going down the periodic table, because the energy differential to the empty np orbitals becomes larger. These metals are therefore relatively volatile, and are avoided in ultra-high vacuum systems. Otherwise, metallic bonding can be very strong, even in molten metals, such as gallium. Even though gallium will melt from the heat of one's hand just above room temperature, its boiling point is not far from that of copper. Molten gallium is, therefore, a very nonvolatile liquid, thanks to its strong metallic bonding. The strong bonding of metals in liquid form demonstrates that the energy of a metallic bond is not highly dependent on the direction of the bond; this lack of bond directionality is a direct consequence of electron delocalization, and is best understood in contrast to the directional bonding of covalent bonds. The energy of a metallic bond is thus mostly a function of the number of electrons which surround the metallic atom, as exemplified by the embedded atom model. This typically results in metals assuming relatively simple, close-packed crystal structures, such as FCC, BCC, and HCP. Given high enough cooling rates and appropriate alloy composition, metallic bonding can occur even in glasses, which have amorphous structures. Much biochemistry is mediated by the weak interaction of metal ions and biomolecules. Such interactions, and their associated conformational changes, have been measured using dual polarisation interferometry. Solubility and compound formation Metals are insoluble in water or organic solvents, unless they undergo a reaction with them. Typically, this is an oxidation reaction that robs the metal atoms of their itinerant electrons, destroying the metallic bonding. However, metals are often readily soluble in each other while retaining the metallic character of their bonding. Gold, for example, dissolves easily in mercury, even at room temperature.
Even in solid metals, the solubility can be extensive. If the structures of the two metals are the same, there can even be complete solid solubility, as in the case of electrum, an alloy of silver and gold. At times, however, two metals will form alloys with different structures than either of the two parents. One could call these materials metal compounds. But, because materials with metallic bonding are typically not molecular, Dalton's law of integral proportions is not valid; and often a range of stoichiometric ratios can be achieved. It is better to abandon such concepts as 'pure substance' or 'solute' in such cases and speak of phases instead. The study of such phases has traditionally been more the domain of metallurgy than of chemistry, although the two fields overlap considerably. Localization and clustering: from bonding to bonds The metallic bonding in complex compounds does not necessarily involve all constituent elements equally. It is quite possible to have one or more elements that do not partake at all. One could picture the conduction electrons flowing around them like a river around an island or a big rock. It is possible to observe which elements do partake: e.g., by looking at the core levels in an X-ray photoelectron spectroscopy (XPS) spectrum. If an element partakes, its peaks tend to be skewed. Some intermetallic materials, e.g., do exhibit metal clusters reminiscent of molecules; and these compounds are more a topic of chemistry than of metallurgy. The formation of the clusters could be seen as a way to 'condense out' (localize) the electron-deficient bonding into bonds of a more localized nature. Hydrogen is an extreme example of this form of condensation. At high pressures it is a metal. The core of the planet Jupiter could be said to be held together by a combination of metallic bonding and high pressure induced by gravity. At lower pressures, however, the bonding becomes entirely localized into a regular covalent bond. The localization is so complete that the (more familiar) H2 gas results. A similar argument holds for an element such as boron. Though it is electron-deficient compared to carbon, it does not form a metal. Instead it has a number of complex structures in which icosahedral B12 clusters dominate. Charge density waves are a related phenomenon. As these phenomena involve the movement of the atoms toward or away from each other, they can be interpreted as the coupling between the electronic and the vibrational states (i.e. the phonons) of the material. A different such electron-phonon interaction is thought to lead to a very different result at low temperatures, that of superconductivity. Rather than blocking the mobility of the charge carriers by forming electron pairs in localized bonds, Cooper pairs are formed that no longer experience any resistance to their mobility. Optical properties The presence of an ocean of mobile charge carriers has profound effects on the optical properties of metals, which can only be understood by considering the electrons as a collective, rather than considering the states of individual electrons involved in more conventional covalent bonds. Light consists of a combination of an electrical and a magnetic field. The electrical field is usually able to excite an elastic response from the electrons involved in the metallic bonding. The result is that photons cannot penetrate very far into the metal and are typically reflected, although some may also be absorbed. 
This holds equally for all photons in the visible spectrum, which is why metals are often silvery white or grayish with the characteristic specular reflection of metallic lustre. The balance between reflection and absorption determines how white or how gray a metal is, although surface tarnish can obscure the lustre. Silver, a metal with high conductivity, is one of the whitest. Notable exceptions are reddish copper and yellowish gold. The reason for their color is that there is an upper limit to the frequency of the light that metallic electrons can readily respond to: the plasmon frequency. At the plasmon frequency, the frequency-dependent dielectric function of the free electron gas goes from negative (reflecting) to positive (transmitting); higher frequency photons are not reflected at the surface, and do not contribute to the color of the metal. There are some materials, such as indium tin oxide (ITO), that are metallic conductors (actually degenerate semiconductors) for which this threshold is in the infrared, which is why they are transparent in the visible, but good reflectors in the infrared. For silver the limiting frequency is in the far ultraviolet, but for copper and gold it is closer to the visible. This explains the colors of these two metals. At the surface of a metal, resonance effects known as surface plasmons can result. They are collective oscillations of the conduction electrons, like a ripple in the electronic ocean. However, even if photons have enough energy, they usually do not have enough momentum to set the ripple in motion. Therefore, plasmons are hard to excite on a bulk metal. This is why gold and copper look like lustrous metals albeit with a dash of color. However, in colloidal gold the metallic bonding is confined to a tiny metallic particle, which prevents the oscillation wave of the plasmon from 'running away'. The momentum selection rule is therefore broken, and the plasmon resonance causes an extremely intense absorption in the green, with a resulting purple-red color. Such colors are orders of magnitude more intense than ordinary absorptions seen in dyes and the like, which involve individual electrons and their energy states.
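The plasmon-frequency threshold described above can be estimated in the free-electron (Drude) picture as w_p = sqrt(n e^2 / (eps0 m_e)), with a reflectivity cutoff near the wavelength 2 pi c / w_p. The Python sketch below uses an assumed conduction-electron density roughly appropriate for silver; because the estimate ignores interband transitions and bound-electron screening, it only places the threshold in the right part of the spectrum (the ultraviolet), not at the precisely observed edge.

    import math

    E = 1.602176634e-19      # elementary charge, C
    EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
    M_E = 9.1093837015e-31   # electron mass, kg
    C = 2.99792458e8         # speed of light, m/s

    def plasma_cutoff(n_electrons_per_m3):
        """Return (plasma angular frequency in rad/s, cutoff wavelength in m)."""
        w_p = math.sqrt(n_electrons_per_m3 * E**2 / (EPS0 * M_E))
        return w_p, 2.0 * math.pi * C / w_p

    w_p, lam = plasma_cutoff(5.86e28)   # assumed conduction-electron density of silver
    print(f"w_p ~ {w_p:.2e} rad/s, cutoff wavelength ~ {lam * 1e9:.0f} nm (ultraviolet)")

Light with a wavelength shorter than this cutoff sees a positive dielectric function and is transmitted rather than reflected, which is the same mechanism that makes degenerate semiconductors such as indium tin oxide transparent in the visible but reflective in the infrared.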
Physical sciences
Chemical bonds
null