Dataset fields: id, url, title, text, topic, section, sublist
88945
https://en.wikipedia.org/wiki/Square%20mile
Square mile
The square mile (abbreviated as sq mi and sometimes as mi2) is an imperial and US unit of measure for area. One square mile is equal to the area of a square with each side measuring a length of one mile. Equivalents One square mile is equal to: 4,014,489,600 square inches 27,878,400 square feet 3,097,600 square yards One square mile is also equivalent to: 640 acres about 2.59 square kilometres (259 hectares) Similarly named units Miles square Square miles should not be confused with miles square, a square region with each side having a length of the value given. For example, a region which is 20 miles square (20 miles × 20 miles) has an area of 400 square miles; a rectangle measuring 10 miles × 40 miles likewise has an area of 400 square miles, but is not 20 miles square. Section In the United States Public Land Survey System, "square mile" is an informal synonym for section.
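For readers who want to check the arithmetic above, here is a minimal Python sketch (not part of the original article); it assumes only the standard definitions 1 mile = 5,280 feet = 63,360 inches = 1.609344 km and 1 square mile = 640 acres:

```python
# Illustrative sketch: unit arithmetic for the square mile, using the standard
# definitions 1 mile = 5,280 ft = 63,360 in (= 1.609344 km exactly), 1 sq mi = 640 acres.
FEET_PER_MILE = 5_280
INCHES_PER_MILE = 63_360
ACRES_PER_SQUARE_MILE = 640
KM2_PER_SQUARE_MILE = 1.609344 ** 2  # square kilometres in one square mile

def area_of_square(side_miles: float) -> float:
    """Area in square miles of a region that is `side_miles` miles square (side x side)."""
    return side_miles * side_miles

print(INCHES_PER_MILE ** 2)           # 4014489600 -> square inches in one square mile
print(FEET_PER_MILE ** 2)             # 27878400   -> square feet in one square mile
print(round(KM2_PER_SQUARE_MILE, 6))  # 2.589988   -> square kilometres in one square mile
print(area_of_square(20))             # 400 -> "20 miles square" covers 400 square miles
print(10 * 40)                        # 400 -> a 10 mi x 40 mi rectangle has the same area,
                                      #        but is not "20 miles square"
print(area_of_square(20) * ACRES_PER_SQUARE_MILE)  # 256000 acres in that 20-mile square
```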
Physical sciences
Area
Basics and measurement
88988
https://en.wikipedia.org/wiki/Anosmia
Anosmia
Anosmia, also known as smell blindness, is the loss of the ability to detect one or more smells. Anosmia may be temporary or permanent. It differs from hyposmia, which is a decreased sensitivity to some or all smells. Anosmia can be categorized into acquired anosmia and congenital anosmia. Acquired anosmia develops later in life due to various causes, such as upper respiratory infections, head trauma, or neurodegenerative diseases. In contrast, congenital anosmia is present from birth and is typically caused by genetic factors or developmental abnormalities of the olfactory system. While acquired anosmia may have potential treatments depending on the underlying cause, such as medications or surgery, congenital anosmia currently has no known cure, and management focuses on safety precautions and coping strategies. Anosmia can be due to a number of factors, including inflammation of the nasal mucosa, blockage of nasal passages, or destruction of temporal lobular tissue. Anosmia stemming from sinus inflammation is due to chronic mucosal changes in the lining of the paranasal sinus and in the middle and superior turbinates. When anosmia is caused by inflammatory changes in the nasal passageways, it is treated simply by reducing inflammation. It can be caused by chronic meningitis and neurosyphilis that would increase intracranial pressure over a long period of time, and, in some cases, by ciliopathy, including ciliopathy due to primary ciliary dyskinesia. The term derives from the Neo-Latin anosmia, based on Ancient Greek ἀν- (an-) + ὀσμή (osmḗ 'smell'; another related term, hyperosmia, refers to an increased ability to smell). Some people may be anosmic for one particular odor, a condition known as "specific anosmia". The absence of the sense of smell from birth is known as congenital anosmia. In the United States, 3% of people aged over 40 are affected by anosmia. Anosmia is a common symptom of COVID-19 and can persist as long COVID. Definition Anosmia is the inability to smell. It may be partial or total, and can be specific to certain smells. Reduced sensitivity to some or all smells is hyposmia. Signs and symptoms Anosmia can have a number of harmful effects. People with sudden onset anosmia may find food less appetizing, though congenital anosmics rarely complain about this, and none report a loss in weight. Loss of smell can also be dangerous because it hinders the detection of gas leaks, fire, and spoiled food. Misconceptions of anosmia as trivial can make it more difficult for a patient to receive the same types of medical aid as someone who has lost other senses, such as hearing or sight. Many experience one sided loss of smell, often as a result of minor head trauma. This type of anosmia is normally only detected if both of the nostrils are tested separately. Using this method of testing each nostril separately will often show a reduced or even completely absent sense of smell in either one nostril or both, something which is often not revealed if both nostrils are simultaneously tested. Losing an established and sentimental smell memory (e.g. the smell of grass, of the grandparents' attic, of a particular book, of loved ones, or of oneself) has been known to cause feelings of depression. Loss of the ability to smell may lead to the loss of libido, but this usually does not apply to those with olfactory dysfunction at birth. 
People who have had no sense of smell from birth often report that, as children, they pretended to be able to smell because they thought smelling was something that older or more mature people could do, or they did not understand the concept of smelling but did not want to appear different from others. When children get older, they often realize and report to their parents that they do not actually possess a sense of smell, often to the surprise of their parents. Causes A temporary loss of smell can be caused by a blocked nose or infection. In contrast, a permanent loss of smell may be caused by death of olfactory receptor neurons in the nose or by brain injury in which there is damage to the olfactory nerve or damage to brain areas that process smell (see olfactory system). The lack of the sense of smell at birth, usually due to genetic factors, is referred to as congenital anosmia. Family members of patients with congenital anosmia often have similar histories; this suggests that the anosmia may follow an autosomal dominant pattern. Anosmia may very occasionally be an early sign of a degenerative brain disease such as Parkinson's disease or Alzheimer's disease. Another cause of permanent loss is damage to olfactory receptor neurons from the use of certain types of nasal spray, namely those that cause vasoconstriction of the nasal microcirculation. To avoid such damage and the subsequent risk of loss of smell, vasoconstricting nasal sprays should be used only when absolutely necessary and then only for a short amount of time. Non-vasoconstricting sprays, such as those used to treat allergy-related congestion, are safe to use for prescribed periods of time. Anosmia can also be caused by nasal polyps. These polyps are found in people with allergies, a history of sinusitis, or a family history of polyps. Individuals with cystic fibrosis often develop nasal polyps. Amiodarone is a drug used in the treatment of arrhythmias of the heart. A clinical study demonstrated that the use of this drug induced anosmia in some patients. Although rare, there was a case in which a 66-year-old male was treated with amiodarone for ventricular tachycardia. After the use of the drug he began experiencing olfactory disturbance; when the dosage of amiodarone was decreased, the severity of the anosmia decreased accordingly, suggesting a relationship between the use of amiodarone and the development of anosmia. COVID-19-related anosmia Chemosensory disturbances, including loss of smell or taste, are the predominant neurological symptom of COVID-19. As many as 80% of COVID-19 patients exhibit some change in chemesthesis, including smell. Loss of smell has also been found to be more predictive of COVID-19 than all other symptoms, including fever, cough, or fatigue, based on a survey of 2 million participants in the UK and US. Google searches for "smell", "loss of smell", "anosmia", and other similar terms increased from the early months of the pandemic onward, and strongly correlated with increases in daily cases and deaths. Research into the mechanisms underlying these symptoms is ongoing. Many countries list anosmia as an official COVID-19 symptom, and some have developed "smell tests" as potential screening tools. In 2020, the Global Consortium for Chemosensory Research, a collaborative research organization of international smell and taste researchers, formed to investigate loss of smell and related chemosensory symptoms.
Decision-making in COVID-19 patients Studies have indicated that patients who presented with anosmia during the acute phase of COVID-19 are more likely to develop changes in decision-making, exhibiting more impulsive responses, which are associated with functional and structural brain changes. Diagnosis Diagnosis begins with a detailed history, including possible related events such as upper respiratory infections or head injury. The examination may involve nasal endoscopy to look for obstructive factors such as polyps or swelling. A nervous system examination is performed to see if the cranial nerves are affected. Occasionally, after head trauma, people may have unilateral anosmia; the sense of smell should therefore be tested individually in each nostril. Many cases of congenital anosmia remain unreported and undiagnosed. Since the disorder is present from birth, the individual may have little or no understanding of the sense of smell and hence be unaware of the deficit. It may also lead to a reduction of appetite. Treatment Though anosmia caused by brain damage cannot be treated, anosmia caused by inflammatory changes in the mucosa may be treated with glucocorticoids. Reduction of inflammation through the use of oral glucocorticoids such as prednisone, followed by a long-term topical glucocorticoid nasal spray, can treat the anosmia easily and safely. A prednisone regimen is adjusted based on the degree of the thickness of mucosa, the discharge of oedema and the presence or absence of nasal polyps. However, the treatment is not permanent and may have to be repeated after a short while. Together with medication, pressure in the upper area of the nose must be mitigated through aeration and drainage. Anosmia caused by a nasal polyp may be treated by steroidal treatment or removal of the polyp. Although very early in development, gene therapy has restored a sense of smell in mice with congenital anosmia caused by ciliopathy. In this case, a genetic condition had affected the cilia that normally enable the animals to detect airborne chemicals, and an adenovirus was used to implant a working version of the IFT88 gene into defective cells in the nose, which restored the cilia and allowed a sense of smell. Epidemiology In the United States, 3% of people aged over 40 are affected by anosmia. In 2012, smell was assessed in persons aged 40 years and older, with rates of anosmia/severe hyposmia of 0.3% at age 40–49, rising to 14.1% at age 80+. Rates of hyposmia were much higher: 3.7% at age 40–49 and 25.9% at 80+. Famous people with anosmia Kathy Clugston, British radio presenter Ben Cohen, co-founder of Ben & Jerry's Perrie Edwards, singer of Little Mix Lorenzo de' Medici, 15th-century ruler of Florence Bill Pullman, American actor Jason Sudeikis, American actor
Biology and health sciences
Symptoms and signs
Health
89074
https://en.wikipedia.org/wiki/K%C4%81k%C4%81p%C5%8D
Kākāpō
The kākāpō (Strigops habroptilus), sometimes known as the owl parrot or owl-faced parrot, is a species of large, nocturnal, ground-dwelling parrot of the superfamily Strigopoidea. It is endemic to New Zealand. Kākāpō can be up to long. They have a combination of traits unique among parrots: finely blotched yellow-green plumage, a distinct facial disc, owl-style forward-facing eyes with surrounding discs of specially textured feathers, a large grey beak, short legs, large blue feet, relatively short wings and a short tail. It is the world's only flightless parrot and the world's heaviest parrot; it is also nocturnal, herbivorous, visibly sexually dimorphic in body size, has a low basal metabolic rate, and does not have male parental care. It is the only parrot to have a polygynous lek breeding system. It is also possibly one of the world's longest-living birds, with a reported lifespan of up to 100 years. Adult males weigh around ; the equivalent figure for females is . The anatomy of the kākāpō typifies the tendency of bird evolution on oceanic islands. With few predators and abundant food, kākāpō exhibit island syndrome development, having a generally robust torso at the expense of flight abilities, with reduced shoulder and wing muscles and a diminished keel on the sternum. Like many other New Zealand bird species, the kākāpō was historically important to Māori, the indigenous people of New Zealand. It appears in Māori mythology. Heavily hunted in the past, it was used by the Māori both for its meat and for its feathers. The kākāpō is critically endangered; the total known population of living individuals numbers only a few hundred. Known individuals are named, tagged and confined to four small New Zealand islands, all of which are clear of predators; however, in 2023, a reintroduction to mainland New Zealand (Sanctuary Mountain Maungatautari) was accomplished. Introduced mammalian predators, such as cats, rats, ferrets, and stoats, almost wiped out the kākāpō. All conservation efforts were unsuccessful until the Kākāpō Recovery Programme began in 1995. Taxonomy The kākāpō was formally described and illustrated in 1845 by the English ornithologist George Robert Gray. He created a new genus, Strigops, and coined the binomial name Strigops habroptilus. Gray was uncertain about the origin of his specimen and wrote, "This remarkable bird is found in one of the islands of the South Pacific Ocean." The type location has been designated as Dusky Sound on the southwest corner of New Zealand's South Island. The generic name is derived from the Ancient Greek strix, genitive strigos ("owl"), and ops ("face"), while its specific epithet comes from habros ("soft") and ptilon ("feather"). In 1955, the International Commission on Zoological Nomenclature (ICZN) ruled that the genus name was feminine. Based on this ruling, many ornithologists used the form Strigops habroptila, but in 2023 James L. Savage and Andrew Digby argued that under the current ICZN rules the specific epithet should be habroptilus. This view was accepted by ornithologists, and in 2024 the International Ornithological Congress Checklist and the eBird/Clements Checklist changed the spelling of the binomial name back to Strigops habroptilus. The species is monotypic, as no subspecies are recognised. The name is Māori, from kākā ("parrot") + pō ("night"); the name is both singular and plural. "Kākāpō" is increasingly written in New Zealand English with the macrons that indicate long vowels. Alongside the correct Māori pronunciation, other colloquial pronunciations exist.
These include the British English (), as defined in the Chambers Dictionary in 2003. The kākāpō is placed in the family Strigopidae together with the two species in the genus Nestor, the kea () and the kākā (). The birds are endemic to New Zealand. Molecular phylogenetic studies have shown that the family Strigopidae is basal to the other three parrot families in the order Psittaciformes and diverged from them 33–44 million years ago. The common ancestor of the kākāpō and the two Nestor species diverged 27–40 million years ago. Earlier ornithologists felt that the kākāpō might be related to the ground parrots and night parrot of Australia due to their similar colouration, but this is contradicted by molecular studies; rather, the cryptic colour seems to be adaptation to terrestrial habits that evolved twice convergently. Description The kākāpō is a large, rotund parrot. Adults can measure from in length with a wingspan of . Males are significantly heavier than females with an average weight of compared with just for females. Kākāpō are the heaviest living species of parrot and on average weigh about more than the largest flying parrot, the hyacinth macaw. The kākāpō cannot fly, having relatively short wings for its size and lacking the keel on the sternum (breastbone), where the flight muscles of other birds attach. It uses its wings for balance and to break its fall when leaping from trees. Unlike many other land birds, the kākāpō can accumulate large amounts of body fat. The upper parts of the kākāpō have yellowish moss-green feathers barred or mottled with black or dark brownish grey, blending well with native vegetation. Individuals may have strongly varying degrees of mottling and colour tone and intensity – museum specimens show that some birds had completely yellow colouring. The breast and flank are yellowish-green streaked with yellow. The belly, undertail, neck, and face are predominantly yellowish streaked with pale green and weakly mottled with brownish-grey. Because the feathers do not need the strength and stiffness required for flight, they are exceptionally soft, giving rise to the specific epithet habroptila. The kākāpō has a conspicuous facial disc of fine feathers resembling the face of an owl; thus, early European settlers called it the "owl parrot". The beak is surrounded by delicate feathers which resemble vibrissae or "whiskers"; it is possible kākāpō use these to sense the ground as they walk with its head lowered, but there is no evidence for this. The mandible is variable in colour, mostly ivory, with the upper part often bluish-grey. The eyes are dark brown. Kākāpō feet are large, scaly, and, as in all parrots, zygodactyl (two toes face forward and two backward). The pronounced claws are particularly useful for climbing. The ends of the tail feathers often become worn from being continually dragged on the ground. Females are easily distinguished from males as they have a narrower and less domed head, narrower and proportionally longer beak, smaller cere and nostrils, more slender and pinkish grey legs and feet, and proportionally longer tail. While their plumage colour is not very different from that of the male, the toning is more subtle, with less yellow and mottling. Nesting females also have a brood patch of bare skin on the belly. The kākāpō's altricial young are first covered with greyish white down, through which their pink skin can be easily seen. They become fully feathered at approximately 70 days old. 
Juvenile individuals tend to have duller green colouration, more uniform black barring, and less yellow present in their feathers. They are additionally distinguishable because of their shorter tails, wings, and beaks. At this stage, they have a ring of short feathers surrounding their irises that resembles eyelashes. Like many other parrots, kākāpō have a variety of calls. As well as the booms and chings of their mating calls, they will often loudly skraark. The kākāpō has a well-developed sense of smell, which complements its nocturnal lifestyle. It can distinguish between odours while foraging, a behaviour reported in only one other parrot species. The kākāpō has a large olfactory bulb ratio (longest diameter of the olfactory bulb divided by longest diameter of the brain), indicating that it does, indeed, have a more developed sense of smell than other parrots. One of the most striking characteristics of the kākāpō is its distinct musty-sweet odour. The smell often alerts predators to the presence of kākāpō. As a nocturnal species, the kākāpō has adapted its senses to living in darkness. Its optic tectum, nucleus rotundus, and entopallium are smaller in relation to its overall brain size than those of diurnal parrots. Its retina shares some qualities with that of other nocturnal birds but also has some qualities typical of diurnal birds, so it functions best around twilight. These modifications give the kākāpō enhanced light sensitivity but poor visual acuity. Internal anatomy The skeleton of the kākāpō differs from other parrots in several features associated with flightlessness. Firstly, it has the smallest relative wing size of any parrot. Its wing feathers are shorter, more rounded, less asymmetrical, and have fewer distal barbules to lock the feathers together. The sternum is small and has a low, vestigial keel and a shortened spina externa. As in other flightless birds and some flighted parrots, the furcula is not fused but consists of a pair of clavicles lying in contact with each coracoid. As in other flightless birds, the angle between the coracoid and sternum is enlarged. The kākāpō has a larger pelvis than other parrots. The proximal bones of the leg and wing are disproportionately long and the distal elements are disproportionately short. The pectoral musculature of the kākāpō is also modified by flightlessness. The pectoralis and supracoracoideus muscles are greatly reduced. The propatagialis tendo longus has no distinct muscle belly. The sternocoracoideus is tendinous. There is an extensive cucularis capitis clavicularis muscle that is associated with the large crop. Genetics Because kākāpō passed through a genetic bottleneck, in which their world population was reduced to 49 birds, they are extremely inbred and have low genetic diversity. This manifests in lower disease resistance and in fertility problems: 61% of kākāpō eggs fail to hatch. Beginning in 2015, the Kākāpō 125+ project has sequenced the genome of all living kākāpō, as well as some museum specimens. The project is led by Genomics Aotearoa in collaboration with a team of international researchers. A DNA sequence analysis was performed on 35 kākāpō genomes of the surviving descendants of an isolated island population, and on 14 genomes, mainly from museum specimens, of the now extinct mainland population.
An analysis of the long-term genetic impact of small population size indicated that the small island kākāpō population had a reduced number of harmful mutations compared to the number in mainland individuals. It was hypothesized that the reduced mutational load of the island population was due to a combination of genetic drift and the purging of deleterious mutations through increased inbreeding and purifying selection that occurred since the isolation of this population from the mainland about 10,000 years ago. Purging of deleterious mutations occurs when there is selection against recessive or partially recessive detrimental alleles as they are expressed in the homozygous state. Habitat Before the arrival of humans, the kākāpō was distributed throughout both main islands of New Zealand. Although it may have inhabited Stewart Island / Rakiura before human arrival, it has not been found in the extensive fossil collections from there. Kākāpō lived in a variety of habitats, including tussocklands, scrublands and coastal areas. It also inhabited forests dominated by podocarps (rimu, mataī, kahikatea, tōtara), beeches, tawa, and rātā. In Fiordland, areas of avalanche and slip debris with regenerating and heavily fruiting vegetation – such as five finger, wineberry, bush lawyer, tutu, hebes, and coprosmas – became known as "kākāpō gardens". The kākāpō is considered to be a "habitat generalist". Though they are now confined to islands free of predation, they were once able to live in nearly any climate present on the islands of New Zealand. They survived dry, hot summers on the North Island as well as cold winter temperatures in the sub-alpine areas of Fiordland. Kākāpō seem to have preferred broadleaf or mountain beech and Hall's tōtara forest with mild winters and high rainfall, but the species was not exclusively forest-dwelling. Ecology and behaviour The kākāpō is primarily nocturnal; it roosts under cover in trees or on the ground during the day and moves around its territories at night. Though the kākāpō cannot fly, it is an excellent climber, ascending to the crowns of the tallest trees. It can also "parachute" – descending by leaping and spreading its wings. In this way it may travel a few metres at an angle of less than 45 degrees. With only 3.3% of its mass made up of pectoral muscle, it is no surprise that the kākāpō cannot use its wings to lift its heavy body off the ground. Because of its flightlessness, it has very low metabolic demands in comparison to flighted birds. It is able to survive easily on very little or on very low quality food sources. Unlike most other bird species, the kākāpō is entirely herbivorous, feeding on fruits, seeds, leaves, stems, and rhizomes. When foraging, kākāpō tend to leave crescent-shaped wads of fiber in the vegetation behind them, called "browse signs". Having lost the ability to fly, it has developed strong legs. Locomotion is often by way of a rapid "jog-like" gait by which it can move several kilometres. A female has been observed making two return trips each night during nesting from her nest to a food source up to away and the male may walk from its home range to a mating arena up to away during the mating season (October–January). Young birds indulge in play fighting, and one bird will often lock the neck of another under its chin. The kākāpō is curious by nature and has been known to interact with humans. Conservation staff and volunteers have engaged extensively with some kākāpō, which have distinct personalities. 
Despite this, kākāpō are solitary birds. The kākāpō was a very successful species in pre-human New Zealand, and was well adapted to avoid the birds of prey which were their only predators. As well as the New Zealand falcon, there were two other birds of prey in pre-human New Zealand: Haast's eagle and Eyles' harrier. All these raptors soared overhead searching for prey in daylight, and to avoid them the kākāpō evolved camouflaged plumage and became nocturnal. When a kākāpō feels threatened, it freezes, so that it is more effectively camouflaged in the vegetation its plumage resembles. Kākāpō were not entirely safe at night, when the laughing owl was active, and it is apparent from owl nest deposits on Canterbury limestone cliffs that kākāpō were among their prey. Kākāpō defensive adaptations were no use, however, against the mammalian predators introduced to New Zealand by humans. Birds hunt very differently from mammals, relying on their powerful vision to find prey, and thus they usually hunt by day. Mammalian predators, in contrast to birds, often hunt by night, and rely on their sense of smell and hearing to find prey; a common way for humans to hunt kākāpō was by releasing trained dogs. Breeding Kākāpō are the only flightless bird that has a lek breeding system. Males loosely gather in an arena and compete with each other to attract females. Females listen to the males as they display, or "lek". They choose a mate based on the quality of his display; they are not pursued by the males in any overt way. No pair bond is formed; males and females meet only to mate. During the courting season, males leave their home ranges for hilltops and ridges where they establish their own mating courts. These leks can be up to from a kākāpō's usual territory and are an average of apart within the lek arena. Males remain in the region of their court throughout the courting season. At the start of the breeding season, males will fight to try to secure the best courts. They confront each other with raised feathers, spread wings, open beaks, raised claws and loud screeching and growling. Fighting may leave birds with injuries or even kill them. Mating occurs only approximately every five years, with the ripening of the rimu fruit. In mating years, males may make "booming" calls for 6–8 hours every night for more than four months. Each court consists of one or more saucer-shaped depressions or "bowls" dug in the ground by the male, up to deep and long enough to fit the half-metre length of the bird. The kākāpō is one of only a handful of birds in the world which actually constructs its leks. Bowls are often created next to rock faces, banks, or tree trunks to help reflect sound: the bowls themselves function as amplifiers to enhance the projection of the males' booming mating calls. Each male's bowls are connected by a network of trails or tracks which may extend along a ridge or in diameter around a hilltop. Males meticulously clear their bowls and tracks of debris. To attract females, males make loud, low-frequency (below 100Hz) booming calls from their bowls by inflating a thoracic sac. They start with low grunts, which increase in volume as the sac inflates. After a sequence of about 20 loud booms, the male kākāpō emits a high-frequency, metallic "ching" sound. He stands for a short while before again lowering his head, inflating his chest and starting another sequence of booms. The booms can be heard at least away on a still night; wind can carry the sound at least . 
Females are attracted by the booms of the competing males; they too may need to walk several kilometres from their territories to the arena. Once a female enters the court of one of the males, the male performs a display in which he rocks from side to side and makes clicking noises with his beak. He turns his back to the female, spreads his wings in display and walks backwards towards her. He will then attempt copulation for 40 minutes or more. Once the birds have mated, the female returns to her home territory to lay eggs and raise the chicks. The male continues booming in the hope of attracting another female. The female kākāpō lays 1–4 eggs per breeding cycle, with several days between eggs. The nest is placed on the ground under the cover of plants or in cavities such as hollow tree trunks. The female incubates the eggs beginning after the first egg is laid, but is forced to leave the nest every night in search of food. Predators are known to eat the eggs, and the embryos inside can also die of cold in the mother's absence. Kākāpō eggs usually hatch within 30 days, bearing fluffy grey chicks that are quite helpless. The female feeds the chicks for three months, and the chicks remain with the female for some months after fledging. The young chicks are just as vulnerable to predators as the eggs, and young have been killed by many of the same predators that attack adults. Chicks leave the nest at approximately 10 to 12 weeks of age. As they gain greater independence, their mothers may feed the chicks sporadically for up to 3 months. The kākāpō is long-lived, with an average life expectancy of 60 (plus or minus 20) years, and tends to reach adolescence before it starts breeding. Males start booming at about 5 years of age. It was thought that females reached sexual maturity at 9 years of age, but four five-year-old females have now been recorded reproducing. The kākāpō does not breed every year and has one of the lowest rates of reproduction among birds. Breeding occurs only in years when trees mast (fruit heavily), providing a plentiful food supply. Rimu mast occurs only every three to five years, so in rimu-dominant forests, such as those on Whenua Hou, kākāpō breeding occurs as infrequently. Another aspect of the kākāpō's breeding system is that a female can alter the sex ratio of her offspring depending on her condition. A female in good condition produces more male offspring (males have 30%–40% more body weight than females). Females produce offspring biased towards the dispersive sex when competition for resources (such as food) is high and towards the non-dispersive sex when food is plentiful. A female kākāpō will likely be able to produce eggs even when there are few resources, while a male kākāpō will be more capable of perpetuating the species when there are plenty, by mating with several females. This supports the Trivers–Willard hypothesis. The relationship between clutch sex ratio and maternal diet has conservation implications, because a captive population maintained on a high quality diet will produce fewer females and therefore fewer individuals valuable to the recovery of the species. Feeding The beak of the kākāpō is adapted for grinding food finely. For this reason, the kākāpō has a very small gizzard compared to other birds of their size. It is entirely herbivorous, eating native plants, seeds, fruits, pollen, fungi and even the sapwood of trees. A study in 1984 identified 25 plant species as kākāpō food. 
It is specifically fond of the fruit of the rimu tree, and will feed on it exclusively during seasons when it is abundant. The kākāpō strips out the nutritious parts of the plant with its beak, leaving a ball of indigestible fibre. These little clumps of plant fibres are a distinctive sign of the presence of the bird. The kākāpō is believed to employ bacteria in the fore-gut to ferment and help digest plant matter. Kākāpō diet changes according to the season. The plants eaten most frequently during the year include Lycopodium ramulosum, Lycopodium fastigium, Schizaea fistulosa, Blechnum minus, Blechnum procerum, Cyathodes juniperina, Dracophyllum longifolium, Olearia colensoi and Thelymitra venosa. Individual plants of the same species are often treated differently. Kākāpō may forage heavily in certain areas, leaving, on occasion, more than 30 droppings and conspicuous evidence of herbivory. These areas, which are mostly dominated by mānuka and yellow silver pine, range from 100 to 5,000 square metres (1,076 to 53,820 square feet) per individual. Preserved coprolites of kākāpō have been studied to obtain information on the historic diet of the bird. This research has identified 67 native plant genera previously unrecorded as food sources for kākāpō, including native mistletoes as well as Dactylanthus taylorii. Conservation Fossil records indicate that in pre-Polynesian times, the kākāpō was New Zealand's third most common bird and was widespread on all three main islands. However, the kākāpō population in New Zealand has declined massively since human settlement of the country, and its conservation status as ranked by the Department of Conservation continues to be "Nationally Critical". Since the 1890s, conservation efforts have been made to prevent extinction. The most successful scheme has been the Kākāpō Recovery Programme; this was implemented in 1995 and continues to this day. Kākāpō are absolutely protected under New Zealand's Wildlife Act 1953. The species is also listed under Appendix I of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), meaning that international export and import (including of parts and derivatives) is regulated. Human impact The first factor in the decline of the kākāpō was the arrival of humans. Māori folklore suggests that the kākāpō was found throughout the country when the Polynesians first arrived in Aotearoa 700 years ago. Subfossil and midden deposits show that the bird was present throughout the North and South Island before and during early Māori times. Māori hunted the kākāpō for food and for their skins and feathers, which were made into cloaks. Due to its inability to fly, strong scent and habit of freezing when threatened, the kākāpō was easy prey for the Māori and their dogs. Its eggs and chicks were also preyed upon by the Polynesian rat or kiore, which the Māori brought to New Zealand as a stowaway. Furthermore, the deliberate clearing of vegetation by Māori reduced the habitable range for kākāpō. Although the kākāpō was extinct in many parts of the islands by the time Europeans arrived, including the Tararua and Aorangi Ranges, it was locally abundant in parts of New Zealand, such as the central North Island and forested parts of the South Island. Although kākāpō numbers were reduced by Māori settlement, they declined much more rapidly after European colonisation. Beginning in the 1840s, Pākehā settlers cleared vast tracts of land for farming and grazing, further reducing kākāpō habitat.
They brought more dogs and other mammalian predators, including domestic cats, black rats and stoats. In the 1880s, large numbers of mustelids (stoats, ferrets and weasels) were released in New Zealand to reduce rabbit numbers, but they also preyed heavily on many native species including the kākāpō. Other browsing animals, such as introduced deer, competed with the kākāpō for food, and caused the extinction of some of its preferred plant species. The kākāpō was reportedly still present near the head of the Whanganui River as late as 1894, with one of the last records of a kākāpō in the North Island being a single bird caught in the Kaimanawa Ranges by Te Kepa Puawheawhe in 1895. Early protection efforts In 1891, the New Zealand government set aside Resolution Island in Fiordland as a nature reserve. In 1894, the government appointed Richard Henry as caretaker. A keen naturalist, Henry was aware that native birds were declining, and began catching and moving kākāpō and kiwi from the mainland to the predator-free Resolution Island. In six years, he moved more than 200 kākāpō to Resolution Island. By 1900, however, stoats had swum to Resolution Island and colonised it; they wiped out the nascent kākāpō population within 6 years. In 1903, three kākāpō were moved from Resolution Island to the nature reserve of Little Barrier Island (Hauturu-o-Toi) north-east of Auckland, but feral cats were present and the kākāpō were never seen again. In 1912, three kākāpō were moved to another reserve, Kapiti Island, north-west of Wellington. One of them survived until at least 1936, despite the presence of feral cats for part of the intervening period. By the 1920s, the kākāpō was extinct in the North Island and its range and numbers in the South Island were declining. 1950–1989 conservation efforts In the 1950s, the New Zealand Wildlife Service was established and began making regular expeditions to search for the kākāpō, mostly in Fiordland and what is now the Kahurangi National Park in the northwest of the South Island. Seven Fiordland expeditions between 1951 and 1956 found only a few recent signs. Finally, in 1958 a kākāpō was caught and released in the Milford Sound / Piopiotahi catchment area in Fiordland. Six more kākāpō were captured in 1961; one was released and the other five were transferred to the aviaries of the Mount Bruce Bird Reserve near Masterton in the North Island. Within months, four of the birds had died and the fifth died after about four years. In the next 12 years, regular expeditions found few signs of the kākāpō, indicating that numbers were continuing to decline. Only one bird was captured in 1967; it died the following year. By the early 1970s, it was uncertain whether the kākāpō was still an extant species. At the end of 1974, scientists located several more male kākāpō and made the first scientific observations of kākāpō booming. These observations led Don Merton to speculate for the first time that the kākāpō had a lek breeding system. From 1974 to 1978 a total of 18 kākāpō were discovered in Fiordland, but all were males. This raised the possibility that the species would become extinct, because there might be no surviving females. One male bird was captured in the Milford area in 1975, christened "Richard Henry", and transferred to Maud Island. All the birds the Wildlife Service discovered from 1951 to 1976 were in U-shaped glaciated valleys flanked by almost-vertical cliffs and surrounded by high mountains. 
Such extreme terrain had slowed colonisation by browsing mammals, leaving islands of virtually unmodified native vegetation. However, even here, stoats were present and by 1976 the kākāpō was gone from the valley floors and only a few males survived high on the most inaccessible parts of the cliffs. Before 1977, no expedition had been to Stewart Island to search for the bird. In 1977, sightings of kākāpō were reported on the island. An expedition to Rakiura found a track and bowl system on its first day; soon after, it located several dozen kākāpō. The finding in an area of fire-modified scrubland and forest raised hope that the population would include females. The total population was estimated at 100 to 200 birds. Mustelids have never colonised Stewart Island, but feral cats were present. During a survey, it was apparent that cats killed kākāpō at a rate of 56% per year. At this rate, the birds could not survive on the island and therefore an intensive cat control was introduced in 1982, after which no cat-killed kākāpō were found. However, to ensure the survival of the remaining birds, scientists decided later that this population should be transferred to predator-free islands; this operation was carried out between 1982 and 1997. Kākāpō Recovery programme In the 1980s the kākāpō were translocated to islands with no predators to maintain their genetic diversity, to avoid spreading harmful diseases, and to reduce interbreeding. In 1989, a Kākāpō Recovery plan was developed, and a Kākāpō Recovery programme was established in 1995. The New Zealand Department of Conservation replaced the Wildlife Service for this task. The first action of the plan was to relocate all the remaining kākāpō to suitable islands for them to breed. None of the New Zealand islands were ideal to establish kākāpō without rehabilitation by extensive re-vegetation and the eradication of introduced mammalian predators and competitors. Four islands were finally chosen: Maud, Little Barrier, Codfish and Mana. Sixty-five kākāpō (43 males, 22 females) were successfully transferred onto the four islands in five translocations. Some islands had to be rehabilitated several times when feral cats, stoats and weka kept appearing. Little Barrier Island was eventually viewed as unsuitable due to the rugged landscape, the thick forest and the continued presence of rats, and its birds were evacuated in 1998. Along with Mana Island, it was replaced with two new kākāpō sanctuaries: Chalky Island (Te Kākahu-o-Tamatea) and Anchor Island. The entire kākāpō population of Codfish Island was temporarily relocated in 1999 to Pearl Island in Port Pegasus while rats were being eliminated from Codfish. All kākāpō on Pearl and Chalky Islands were moved to Anchor Island in 2005. Supplementary feeding A key part of the Recovery Programme is the supplementary feeding of females. Kākāpō breed only once every two to five years, when certain plant species, primarily Dacrydium cupressinum (rimu), produce protein-rich fruit and seeds. During breeding years when rimu masts supplementary food is provided to kākāpō to increase the likelihood of individuals successfully breeding. In 1989, six preferred foods (apples, sweet potatoes, almonds, Brazil nuts, sunflower seeds and walnuts) were supplied ad libitum each night to 12 feeding stations. Males and females ate the supplied foods, and females nested on Little Barrier Island in the summers of 1989–1991 for the first time since 1982, although nesting success was low. 
Supplementary feeding affects the sex ratio of kākāpō offspring, and can be used to increase the number of female chicks by deliberately manipulating maternal condition. Today commercial parrot food is supplied to all individuals of breeding age on Whenua Hou and Anchor. The amount eaten and individual weights are carefully monitored to ensure that optimum body condition is maintained. Nest management Kākāpō nests are intensively managed by wildlife conservation staff. Before Polynesian rats were removed from Whenua Hou, the rats were a threat to the survival of young kākāpō. Of 21 chicks that hatched between 1981 and 1994, nine were either killed by rats or died and were subsequently eaten by rats. All kākāpō islands are now rat-free, but infrared cameras still allow rangers to remotely monitor the behaviour of females and chicks in nests. Data loggers record when mother kākāpō come and go, allowing rangers to pick a time to check on the health of chicks, and also indicate how hard females are having to work to find food. Because mother kākāpō often struggle to successfully rear multiple chicks, Kākāpō Recovery rangers will move chicks between nests as needed. Eggs are often removed from nests for incubation to reduce the likelihood of accidents, such as lost eggs or crushing. If chicks become ill, are not putting on weight, or there are too many chicks in the nest (and no available nest to move them to) they will be hand-reared by the Kākāpō Recovery team. In the 2019 season, eggs were also removed from nests to encourage females to re-nest. By hand-raising the first group of chicks in captivity and encouraging females to lay more eggs, the Kākāpō Recovery Team hoped that overall chick production would be increased. By the end of February 2020, the bird's summer breeding season, these efforts led to the production of 80 chicks, "a record number." Monitoring To monitor the kākāpō population continuously, each bird is equipped with a radio transmitter. Every known kākāpō, barring some young chicks, has been given a name by Kākāpō Recovery Programme officials, and detailed data is gathered about every individual. GPS transmitters are also being trialled to provide more detailed data about the movement of individual birds and their habitat use. The signals also provide behavioural data, letting rangers gather information about mating and nesting remotely. Reintroduction The Kākāpō Recovery programme has been successful, with the numbers of kākāpō increasing steadily. Adult survival rate and productivity have both improved significantly since the programme's inception. However, the main goal is to establish at least one viable, self-sustaining, unmanaged population of kākāpō as a functional component of the ecosystem in a protected habitat. To help meet this conservation challenge, Resolution Island () in Fiordland has been prepared for kākāpō re-introduction with ecological restoration including the eradication of stoats. Ultimately, the Kākāpō Recovery vision for the species is to restore the (Māori for "life-force") of the kākāpō by breeding 150 adult females. Four males were re-introduced to Sanctuary Mountain Maungatautari in the North Island on 21 July 2023, becoming the first kākāpō living in mainland New Zealand in almost 40 years. Despite extensive improvements to the perimeter fence, in October 2023, one of the kākāpō escaped by using a downed tree to climb out. The bird was located using the signal from its GPS transmitter and returned to the sanctuary. 
A second group of six birds was introduced to the sanctuary in September. However, two further kākāpō found a way over the fence, and in November the Department of Conservation temporarily removed three birds from the sanctuary to a southern predator-free island, leaving the kākāpō population in the sanctuary at seven. The Department commented that "Kākāpō are flightless but are excellent climbers who can use their wings to parachute from treetops". Fatal fungal infection In 2019 an outbreak of the fungal disease aspergillosis among kākāpō on the island of Whenua Hou infected 21 individuals and led to 9 deaths: two adults, five chicks and two juveniles. Over 50 birds were transported to veterinary centres for diagnostics and treatment. Population timeline 1977: Kākāpō rediscovered on Stewart Island / Rakiura 1989: Most kākāpō are removed from Rakiura to Whenua Hou and Hauturu-O-Toi 1995: Kākāpō population consists of 51 individuals; beginning of the Kakapo Recovery Programme 1999: Kākāpō removed from Hauturu 2002: A significant breeding season led to 24 chicks being hatched 2005: 41 females and 45 males, including four fledglings (3 females and 1 male); kākāpō established on Anchor Island 2009: The total kākāpō population rose to over 100 for the first time since monitoring began. Twenty-two of the 34 chicks had to be hand-reared because of a shortage of food on Codfish Island. December 2010: Death of the oldest known kākāpō, "Richard Henry", possibly 80 years old. 2012: Seven kākāpō transferred to Hauturu, in an attempt to establish a successful breeding programme. Kākāpō were last on the island in 1999. March 2014: With the kākāpō population having increased to 126, the bird's recovery was used by Melbourne artist Sayraphim Lothian as a metaphor for the recovery of Christchurch, parallelling the "indomitable spirit of these two communities and their determination to rebuild". 2016: First breeding on Anchor; a significant breeding season, with 32 chicks; kākāpō population grows to over 150 2018: After the death of 3 birds, the population reduced to 149 birds. 2019: An abundance of rimu fruit and the introduction of several new technologies (including artificial insemination and 'smart eggs') helped make 2019 the best breeding season on record, with over 200 eggs laid and 72 chicks fledged. According to the Kākāpō Recovery Team at the New Zealand Department of Conservation, this was the earliest and longest breeding season yet. Population reached 200 juvenile or older birds on 17 August 2019. 2022: The population increased to 252 birds after a productive breeding season and successful artificial insemination. 2023: Birds are reintroduced to the mainland for the first time. In Māori culture The kākāpō is associated with a rich tradition of Māori folklore and beliefs. The bird's irregular breeding cycle was understood to be associated with heavy fruiting or "masting" events of particular plant species such as the rimu, which led Māori to credit the bird with the ability to tell the future. Used to substantiate this claim were reported observations of these birds dropping the berries of the hinau and tawa trees (when they were in season) into secluded pools of water to preserve them as a food supply for the summer ahead; in legend this became the origin of the Māori practice of immersing food in water for the same purpose. Use for food and clothing The Māori considered the meat of the kākāpō a delicacy and, when the bird was widespread, hunted it for food. 
One source states that its flesh "resembles lamb in taste and texture", although European settlers have described the bird as having a "strong and slightly stringent [sic] flavour". In breeding years, the loud booming calls of the males at their mating arenas made it easy for Māori hunting parties to track the kākāpō down, and it was also hunted while feeding or when dust-bathing in dry weather. The bird was caught, generally at night, using snares, pitfall traps, or by groups of domesticated Polynesian dogs which accompanied hunting parties; sometimes hunters would use fire sticks of various sorts to dazzle a bird in the darkness, stopping it in its tracks and making the capture easier. Cooking was done in a hāngī or in gourds of boiling oil. The flesh of the bird could be preserved in its own fat and stored in containers for later consumption; hunters of the Ngāi Tahu would pack the flesh in baskets made from the inner bark of the tōtara tree or in containers constructed from kelp. Bundles of kākāpō tail feathers were attached to the sides of these containers to provide decoration and a way to identify their contents. The Māori also used the bird's eggs for food. As well as eating the meat of the kākāpō, Māori would use kākāpō skins with the feathers still attached or individually weave kākāpō feathers with flax fibre to create cloaks and capes. Each one required up to 11,000 feathers to make. Not only were these garments considered very beautiful, but they also kept the wearer very warm. They were highly valued and considered taonga (treasures), so much so that the old Māori adage "You have a kākāpō cape and you still complain of the cold" was used to describe someone who is never satisfied. Only one cloak fully made of kākāpō feathers is known to still exist. It dates from the 1810s–1820s and is held in the Perth Museum in Scotland; the museum, in collaboration with the British Museum and Māori advisers, has restored the cloak. Kākāpō feathers were also used to decorate the heads of weapons, but were removed before use in combat. Despite this, the kākāpō was also regarded as an affectionate pet by the Māori. This was corroborated by European settlers in New Zealand in the 19th century, among them George Edward Grey, who once wrote in a letter to an associate that his pet kākāpō's behaviour towards him and his friends was "more like that of a dog than a bird".
Footage of a kākāpō named Sirocco attempting to mate with Carwardine's head was viewed by millions worldwide, leading to Sirocco becoming "spokes-bird" for New Zealand wildlife conservation in 2010. Sirocco became the inspiration for the party parrot, a popular animated emoji frequently associated with the workflow application Slack. The kākāpō was featured in the episode "Strange Islands" of the documentary series South Pacific, originally aired on 13 June 2009, in the episode "Worlds Apart" of the series The Living Planet, and in episode 3 of the BBC's New Zealand Earth's Mythical Islands. In a 2019 kākāpō awareness campaign, the Kākāpō Recovery Programme New Zealand National Partner, Meridian Energy, ran a Search for a Saxophonist to provide suitable mood music for encouraging mating to coincide with the 2019 kākāpō breeding season. The search and footage from the islands where breeding was taking place were featured on the Breakfast programme. The kākāpō was featured in the mobile game "Kākāpō Run" developed by a UK conservation charity. This game aimed to raise support for kākāpō conservation by engaging players in fun, educational gameplay. A study found that playing the game helped increase positive attitudes and actions related to kākāpō protection, such as support for managing invasive predators and responsible pet care, though it did not lead to more donations. The bird was voted New Zealand's bird of the year in 2008 and 2020.
Biology and health sciences
Psittaciformes
Animals
89195
https://en.wikipedia.org/wiki/Acetaldehyde
Acetaldehyde
Acetaldehyde (IUPAC systematic name ethanal) is an organic chemical compound with the formula CH3CHO, sometimes abbreviated as MeCHO. It is a colorless liquid or gas, boiling near room temperature. It is one of the most important aldehydes, occurring widely in nature and being produced on a large scale in industry. Acetaldehyde occurs naturally in coffee, bread, and ripe fruit, and is produced by plants. It is also produced by the partial oxidation of ethanol by the liver enzyme alcohol dehydrogenase and is a contributing cause of hangover after alcohol consumption. Pathways of exposure include air, water, land, or groundwater, as well as drink and smoke. Consumption of disulfiram inhibits acetaldehyde dehydrogenase, the enzyme responsible for the metabolism of acetaldehyde, thereby causing it to build up in the body. The International Agency for Research on Cancer (IARC) has listed acetaldehyde as a Group 1 carcinogen. Acetaldehyde is "one of the most frequently found air toxins with cancer risk greater than one in a million". History Acetaldehyde was first observed by the Swedish pharmacist/chemist Carl Wilhelm Scheele (1774); it was then investigated by the French chemists Antoine François, comte de Fourcroy and Louis Nicolas Vauquelin (1800), and the German chemists Johann Wolfgang Döbereiner (1821, 1822, 1832) and Justus von Liebig (1835). In 1835, Liebig named it "aldehyde", and in the middle of the century the name was altered to "acetaldehyde". Production In 2013, global production was about 438 thousand tons. Before 1962, ethanol and acetylene were the major sources of acetaldehyde. Since then, ethylene has been the dominant feedstock. The main method of production is the Wacker process, the oxidation of ethylene (ethene) with a homogeneous palladium/copper catalyst system: 2 CH2=CH2 + O2 → 2 CH3CHO (a simple theoretical-yield calculation for this route is sketched below). In the 1970s, the world capacity of the Wacker-Hoechst direct oxidation process exceeded 2 million tonnes annually. Smaller quantities can be prepared by the partial oxidation of ethanol, an exothermic reaction: 2 CH3CH2OH + O2 → 2 CH3CHO + 2 H2O. This process is typically conducted over a silver catalyst and is one of the oldest routes for the industrial preparation of acetaldehyde. Other methods Hydration of acetylene Prior to the Wacker process and the availability of cheap ethylene, acetaldehyde was produced by the hydration of acetylene, a reaction catalyzed by mercury(II) salts: C2H2 + H2O → CH3CHO. The mechanism involves the intermediacy of vinyl alcohol, which tautomerizes to acetaldehyde. The acetaldehyde formed is separated from water and mercury and then cooled. In the wet oxidation process, iron(III) sulfate is used to reoxidize the mercury back to the mercury(II) salt. The resulting iron(II) sulfate is oxidized in a separate reactor with nitric acid. The enzyme acetylene hydratase, discovered in the strictly anaerobic bacterium Pelobacter acetylenicus, can catalyze an analogous reaction without involving any compounds of mercury; however, it has thus far not been brought to any large-scale or commercial use. Dehydrogenation of ethanol Traditionally, acetaldehyde was produced by the partial dehydrogenation of ethanol: CH3CH2OH → CH3CHO + H2. In this endothermic process, ethanol vapor is passed at 260–290 °C over a copper-based catalyst. The process was once attractive because of the value of the hydrogen coproduct, but in modern times is not economically viable.
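The stoichiometry of the Wacker route above lends itself to a quick check. The following Python sketch is not from the original article: the molar masses are standard values, and the selectivity parameter is purely a hypothetical illustration rather than a reported plant figure.

```python
# Illustrative sketch: theoretical acetaldehyde yield from ethylene for the
# Wacker route, 2 CH2=CH2 + O2 -> 2 CH3CHO (a 1:1 molar ratio of C2H4 to CH3CHO).
M_ETHYLENE = 28.05      # g/mol, C2H4 (approximate molar mass)
M_ACETALDEHYDE = 44.05  # g/mol, CH3CHO (approximate molar mass)

def acetaldehyde_mass(ethylene_mass_g: float, selectivity: float = 1.0) -> float:
    """Mass of CH3CHO obtainable from a given mass of C2H4.

    `selectivity` is a hypothetical fraction used only for illustration
    (1.0 means every mole of ethylene ends up as acetaldehyde).
    """
    moles_ethylene = ethylene_mass_g / M_ETHYLENE
    return moles_ethylene * selectivity * M_ACETALDEHYDE

print(round(acetaldehyde_mass(1000.0), 1))        # ~1570.4 g per kg of ethylene, ideal case
print(round(acetaldehyde_mass(1000.0, 0.95), 1))  # ~1491.9 g with an assumed 95% selectivity
```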
Hydroformylation of methanol The hydroformylation of methanol with catalysts like cobalt, nickel, or iron salts also produces acetaldehyde, although this process is of no industrial importance. Similarly noncompetitive is the formation of acetaldehyde from synthesis gas, which proceeds with modest selectivity. Reactions Tautomerization to vinyl alcohol Like many other carbonyl compounds, acetaldehyde tautomerizes to give an enol (vinyl alcohol; IUPAC name: ethenol): CH3CHO ⇌ CH2=CHOH, ΔH298,g = +42.7 kJ/mol. The equilibrium constant is about 6 × 10−7 at room temperature, so the relative amount of the enol form in a sample of acetaldehyde is very small: at room temperature, acetaldehyde (CH3CHO) is more stable than vinyl alcohol (CH2=CHOH) by 42.7 kJ/mol. Overall, the keto-enol tautomerization occurs slowly but is catalyzed by acids. Photo-induced keto-enol tautomerization is viable under atmospheric or stratospheric conditions. This photo-tautomerization is relevant to the Earth's atmosphere, because vinyl alcohol is thought to be a precursor to carboxylic acids in the atmosphere. Addition and condensation reactions Acetaldehyde is a common electrophile in organic synthesis. In addition reactions, acetaldehyde is prochiral. It is used primarily as a source of the "CH3CH(OH)–" synthon in aldol reactions and related condensation reactions. Grignard reagents and organolithium compounds react with MeCHO to give hydroxyethyl derivatives. In one of the more spectacular addition reactions, formaldehyde in the presence of calcium hydroxide adds to MeCHO to give pentaerythritol and formate. In a Strecker reaction, acetaldehyde condenses with cyanide and ammonia to give, after hydrolysis, the amino acid alanine. Acetaldehyde can condense with amines to yield imines; for example, with cyclohexylamine to give N-ethylidenecyclohexylamine. These imines can be used to direct subsequent reactions like an aldol condensation. It is also a building block in the synthesis of heterocyclic compounds. In one example, it converts, upon treatment with ammonia, to 5-ethyl-2-methylpyridine ("aldehyde-collidine"). Polymeric forms Three molecules of acetaldehyde condense to form "paraldehyde", a cyclic trimer containing C-O single bonds. Similarly, the condensation of four molecules of acetaldehyde gives the cyclic molecule metaldehyde. Paraldehyde can be produced in good yields, using a sulfuric acid catalyst. Metaldehyde is obtained only in a few percent yield and with cooling, often using HBr rather than sulfuric acid as the catalyst. At low temperatures in the presence of acid catalysts, polyacetaldehyde is produced. There are two stereoisomers of paraldehyde and four of metaldehyde. The German chemist Valentin Hermann Weidenbusch (1821–1893) synthesized paraldehyde in 1848 by treating acetaldehyde with acid (either sulfuric or nitric acid) and cooling. He found it quite remarkable that when paraldehyde was heated with a trace of the same acid, the reaction went the other way, recreating acetaldehyde. Although polyvinyl alcohol is formally the polymer of vinyl alcohol (the enol tautomer of acetaldehyde), polyvinyl alcohol cannot be produced from acetaldehyde. Acetal derivatives Acetaldehyde forms a stable acetal upon reaction with ethanol under conditions that favor dehydration. The product, CH3CH(OCH2CH3)2, is formally named 1,1-diethoxyethane but is commonly referred to as "acetal". This can cause confusion as "acetal" is more commonly used to describe compounds with the functional groups RCH(OR')2 or RR'C(OR'')2 rather than referring to this specific compound — in fact, 1,1-diethoxyethane is also described as the diethyl acetal of acetaldehyde.
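To put the tautomerization equilibrium discussed above in perspective, the short sketch below converts the room-temperature equilibrium constant (taken here as roughly 6 × 10−7, the figure cited earlier) into the fraction of molecules present as the enol. The constant is an assumed input, and ideal behavior is assumed throughout.

```python
# Illustrative: equilibrium fraction of the enol (vinyl alcohol) form,
# assuming K = [enol]/[keto] ~ 6e-7 at room temperature and ideal behavior.

K_ENOL = 6e-7  # assumed equilibrium constant for CH3CHO <=> CH2=CHOH

enol_fraction = K_ENOL / (1.0 + K_ENOL)
print(f"Enol fraction at equilibrium: {enol_fraction:.2e}")
# ~6e-7, i.e. roughly one molecule in 1.7 million is present as the enol.
```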
Precursor to vinylphosphonic acid Acetaldehyde is a precursor to vinylphosphonic acid, which is used to make adhesives and ion conductive membranes. The synthesis sequence begins with the reaction of acetaldehyde with phosphorus trichloride. Biochemistry In the liver, the enzyme alcohol dehydrogenase oxidizes ethanol into acetaldehyde, which is then further oxidized into harmless acetic acid by acetaldehyde dehydrogenase. These two oxidation reactions are coupled with the reduction of NAD+ to NADH. In the brain, the enzyme catalase is primarily responsible for oxidizing ethanol to acetaldehyde, and alcohol dehydrogenase plays a minor role. The last steps of alcoholic fermentation in bacteria, plants, and yeast involve the conversion of pyruvate into acetaldehyde and carbon dioxide by the enzyme pyruvate decarboxylase, followed by the conversion of acetaldehyde into ethanol. The latter reaction is again catalyzed by an alcohol dehydrogenase, now operating in the opposite direction. Many East Asian people have an ALDH2 mutation which makes them significantly less efficient at oxidizing acetaldehyde. On consuming alcohol, their bodies tend to accumulate excessive amounts of acetaldehyde, causing the so-called alcohol flush reaction. They develop a characteristic flush on the face and body, along with "nausea, headache and general physical discomfort". Ingestion of the drug disulfiram, which inhibits ALDH2, leads to a similar reaction. Uses Traditionally, acetaldehyde was mainly used as a precursor to acetic acid. This application has declined because acetic acid is produced more efficiently from methanol by the Monsanto and Cativa processes. Acetaldehyde is an important precursor to pyridine derivatives, pentaerythritol, and crotonaldehyde. Urea and acetaldehyde combine to give a useful resin. Acetic anhydride reacts with acetaldehyde to give ethylidene diacetate, a precursor to vinyl acetate, which is used to produce polyvinyl acetate. The global market for acetaldehyde is declining. Demand has been impacted by changes in the production of plasticizer alcohols, which has shifted because n-butyraldehyde is less often produced from acetaldehyde, instead being generated by hydroformylation of propylene. Likewise, acetic acid, once produced from acetaldehyde, is made predominantly by the lower-cost methanol carbonylation process. The impact on demand has led to an increase in prices and thus a slowdown in the market. China is the largest consumer of acetaldehyde in the world, accounting for almost half of global consumption in 2012. The major use has been the production of acetic acid. Other uses such as pyridines and pentaerythritol are expected to grow faster than acetic acid, but the volumes are not large enough to offset the decline in acetic acid. As a consequence, overall acetaldehyde consumption in China may grow slightly at 1.6% per year through 2018. Western Europe is the second-largest consumer of acetaldehyde worldwide, accounting for 20% of world consumption in 2012. As with China, the Western European acetaldehyde market is expected to increase only very slightly at 1% per year during 2012–2018. However, Japan could emerge as a potential consumer of acetaldehyde in the next five years due to a newfound use in the commercial production of butadiene. The supply of butadiene has been volatile in Japan and the rest of Asia. As of 2013, this was expected to provide a much-needed boost to the flat market.
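The two-step oxidation described above (ethanol to acetaldehyde by alcohol dehydrogenase, then acetaldehyde to acetic acid by acetaldehyde dehydrogenase) can be caricatured as a pair of first-order reactions. The toy simulation below, with entirely made-up rate constants, only illustrates why slowing the second step, as disulfiram or a low-activity ALDH2 variant does, causes acetaldehyde to accumulate; it is not a pharmacokinetic model.

```python
# Toy model: ethanol -> acetaldehyde -> acetate as two first-order steps.
# Rate constants are arbitrary illustrative values, not physiological data.

def peak_acetaldehyde(k1: float, k2: float, steps: int = 5000, dt: float = 0.01) -> float:
    """Euler integration of the two-step chain; returns the peak acetaldehyde level."""
    ethanol, acetaldehyde = 1.0, 0.0  # arbitrary starting amounts
    peak = 0.0
    for _ in range(steps):
        d_eth = -k1 * ethanol
        d_ald = k1 * ethanol - k2 * acetaldehyde
        ethanol += d_eth * dt
        acetaldehyde += d_ald * dt
        peak = max(peak, acetaldehyde)
    return peak

normal = peak_acetaldehyde(k1=1.0, k2=5.0)      # efficient second step: acetaldehyde cleared quickly
inhibited = peak_acetaldehyde(k1=1.0, k2=0.5)   # second step inhibited (e.g. by disulfiram)
print(f"peak acetaldehyde, normal:    {normal:.3f}")
print(f"peak acetaldehyde, inhibited: {inhibited:.3f}")  # several-fold higher
```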
Safety Exposure limits The threshold limit value is 25 ppm (STEL/ceiling value) and the MAK (Maximum Workplace Concentration) is 50 ppm. At 50 ppm acetaldehyde, no irritation or local tissue damage in the nasal mucosa is observed. When taken up by the organism, acetaldehyde is metabolized rapidly in the liver to acetic acid. Only a small proportion is exhaled unchanged. After intravenous injection, the half-life in the blood is approximately 90 seconds. Dangers Toxicity Many serious cases of acute intoxication have been recorded. Acetaldehyde naturally breaks down in the human body. Irritation Acetaldehyde is an irritant of the skin, eyes, mucous membranes, throat, and respiratory tract. This occurs at concentrations as low as 1000 ppm. Symptoms of exposure to this compound include nausea, vomiting, and headache. These symptoms may not happen immediately. The perception threshold for acetaldehyde in air is in the range between 0.07 and 0.25 ppm. At such concentrations, the fruity odor of acetaldehyde is apparent. Conjunctival irritations have been observed after a 15-minute exposure to concentrations of 25 and 50 ppm, but transient conjunctivitis and irritation of the respiratory tract have been reported after exposure to 200 ppm acetaldehyde for 15 minutes. Carcinogenicity Acetaldehyde is carcinogenic in humans. In 1988, the International Agency for Research on Cancer stated, "There is sufficient evidence for the carcinogenicity of acetaldehyde (the major metabolite of ethanol) in experimental animals." In October 2009, the International Agency for Research on Cancer updated the classification of acetaldehyde, stating that acetaldehyde included in and generated endogenously from alcoholic beverages is a Group 1 human carcinogen. In addition, acetaldehyde is damaging to DNA and causes abnormal muscle development as it binds to proteins. DNA crosslinks Acetaldehyde induces DNA interstrand crosslinks, a form of DNA damage. These can be repaired by either of two replication-coupled DNA repair pathways. The first is referred to as the FA pathway, because it employs gene products defective in Fanconi's anemia patients. This repair pathway results in increased mutation frequency and altered mutational spectrum. The second repair pathway requires replication fork convergence, breakage of the acetaldehyde crosslink, translesion synthesis by a Y-family DNA polymerase, and homologous recombination. Aggravating factors Alzheimer's disease People with a genetic deficiency for the enzyme responsible for the conversion of acetaldehyde into acetic acid may have a greater risk of Alzheimer's disease. "These results indicate that the ALDH2 deficiency is a risk factor for LOAD [late-onset Alzheimer's disease] ..." Genetic conditions A study of 818 heavy drinkers found that those exposed to more acetaldehyde than normal through a genetic variant of the gene encoding for ADH1C, ADH1C*1, are at greater risk of developing cancers of the upper gastrointestinal tract and liver. Disulfiram The drug disulfiram (Antabuse) inhibits acetaldehyde dehydrogenase, an enzyme that oxidizes the compound into acetic acid. Metabolism of ethanol forms acetaldehyde before acetaldehyde dehydrogenase forms acetic acid, but with the enzyme inhibited, acetaldehyde accumulates. If one consumes ethanol while taking disulfiram, the hangover effect of ethanol is felt more rapidly and intensely (disulfiram-alcohol reaction). As such, disulfiram is sometimes used as a deterrent for alcoholics wishing to stay sober.
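Assuming simple first-order elimination, the roughly 90-second blood half-life quoted above implies that very little injected acetaldehyde remains in circulation after a few minutes; the snippet below just evaluates the standard exponential-decay formula under that assumption.

```python
# Illustrative first-order decay using the ~90 s blood half-life cited above.
import math

HALF_LIFE_S = 90.0
k = math.log(2) / HALF_LIFE_S  # elimination rate constant, 1/s

for minutes in (1, 3, 5, 10):
    remaining = math.exp(-k * minutes * 60)
    print(f"after {minutes:2d} min: {remaining * 100:5.2f}% remaining")
```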
Sources of exposure Indoor air Acetaldehyde is a potential contaminant in workplace, indoor, and ambient environments. Moreover, the majority of humans spend more than 90% of their time in indoor environments, increasing any exposure and the risk to human health. In a study in France, the mean indoor concentration of acetaldehyde measured in 16 homes was approximately seven times higher than the outdoor acetaldehyde concentration. The living room had a mean of 18.1 ± 17.5 μg/m3 and the bedroom 18.2 ± 16.9 μg/m3, whereas the outdoor air had a mean concentration of 2.3 ± 2.6 μg/m3. It has been concluded that volatile organic compounds (VOCs) such as benzene, formaldehyde, acetaldehyde, toluene, and xylenes have to be considered priority pollutants with respect to their health effects. It has been pointed out that in renovated or completely new buildings, VOC concentration levels are often several orders of magnitude higher. The main sources of acetaldehyde in homes include building materials, laminate, PVC flooring, varnished wood flooring, and varnished cork/pine flooring (found in the varnish, not the wood). It is also found in plastics, oil-based and water-based paints, composite wood ceilings, particle-board, plywood, treated pine wood, and laminated chipboard furniture. Outdoor air The use of acetaldehyde is widespread in different industries, and it may be released into waste water or the air during production, use, transportation, and storage. Sources of acetaldehyde include fuel combustion emissions from stationary internal combustion engines and power plants that burn fossil fuels, wood, or trash, oil and gas extraction, refineries, cement kilns, lumber and wood mills, and paper mills. Acetaldehyde is also present in automobile and diesel exhaust. As a result, acetaldehyde is "one of the most frequently found air toxics with cancer risk greater than one in a million". Tobacco smoke Natural tobacco polysaccharides, including cellulose, have been shown to be the primary precursors of acetaldehyde, making it a significant constituent of tobacco smoke. It has been demonstrated to have a synergistic effect with nicotine in rodent studies of addiction. Acetaldehyde is also the most abundant carcinogen in tobacco smoke; it is dissolved into the saliva while smoking. Cannabis smoke Acetaldehyde has been found in cannabis smoke. This finding emerged through the use of new chemical techniques that demonstrated the acetaldehyde present was causing DNA damage in laboratory settings. Alcohol consumption Many microbes produce acetaldehyde from ethanol, but they have a lower capacity to eliminate the acetaldehyde, which can lead to the accumulation of acetaldehyde in saliva, stomach acid, and intestinal contents. Fermented food and many alcoholic beverages can also contain significant amounts of acetaldehyde. Acetaldehyde, derived from mucosal or microbial oxidation of ethanol, tobacco smoke, and diet, appears to act as a cumulative carcinogen in the upper digestive tract of humans. According to the European Commission's Scientific Committee on Consumer Safety (SCCS) "Opinion on Acetaldehyde" (2012), the special risk limit for cosmetic products is 5 mg/L, and acetaldehyde should not be used in mouth-washing products. Plastics Acetaldehyde can be produced by the photo-oxidation of polyethylene terephthalate (PET), via a Type II Norrish reaction.
Although the levels produced by this process are minute, acetaldehyde has an exceedingly low taste/odor threshold of around 20–40 ppb and can cause an off-taste in bottled water. The level at which an average consumer can detect acetaldehyde is still considerably lower than any level associated with toxicity. Candida overgrowth Candida albicans in patients with potentially carcinogenic oral diseases has been shown to produce acetaldehyde in quantities sufficient to cause problems.
Physical sciences
Aldehydes and ketones
Chemistry
89202
https://en.wikipedia.org/wiki/Quilt
Quilt
A quilt is a multi-layered textile, traditionally composed of two or more layers of fabric or fiber. Commonly three layers are used with a filler material. These layers traditionally include a woven cloth top, a layer of batting or wadding, and a woven back, combined using the techniques of quilting. This is the process of sewing on the face of the fabric, and not just the edges, to combine the three layers together to reinforce the material. Stitching patterns can be a decorative element. A single piece of fabric can be used for the top of a quilt (a "whole-cloth quilt"), but in many cases the top is created from smaller fabric pieces joined, or patchwork. The pattern and color of these pieces create the design. Quilts may contain valuable historical information about their creators, "visualizing particular segments of history in tangible, textured ways". In the twenty-first century, quilts are frequently displayed as non-utilitarian works of art, but historically quilts were often used as bedcovers, and this use persists today. (In modern English, the word "quilt" can also be used to refer to an unquilted duvet or comforter.) Uses There are many traditions regarding the uses of quilts. Quilts may be made or given to mark important life events such as marriage, the birth of a child, a family member leaving home, or graduations. Modern quilts are not always intended for use as bedding, and may be used as wall hangings, table runners, or tablecloths. Quilting techniques are often incorporated into garment design as well. Quilt shows and competitions are held locally, regionally, and nationally. There are international competitions as well, particularly in the United States, Japan, and Europe. The following summarizes most of the reasons a person might decide to make a quilt: bedding; decoration; armor (e.g., the garment called a gambeson); commemoration (e.g., the AIDS Memorial Quilt); education (e.g., a "Science" quilt or a "Gardening" quilt); campaigning; documenting events, social history, etc.; artistic expression (e.g., quilt art); gifts; and fundraising. Traditions Quilting traditions are particularly prominent in the United States, where the necessity of creating warm bedding met the paucity of local fabrics in the early days of the colonies. Imported fabric was very expensive, and local homespun fabric was labor-intensive to create and tended to wear out sooner than commercial fabric. It was essential for most families to use and preserve textiles efficiently. Saving or salvaging small scraps of fabric was a part of life for all households. Small pieces of fabric were joined to make larger pieces, in units called "blocks". Creativity could be expressed in the block designs, or simple "utility quilts", with minimal decorative value, could be produced. Crib quilts for infants were needed in the cold of winter, but even early examples of baby quilts indicate the efforts that women made to welcome a new baby. Quilting was often a communal activity, involving all the women and girls in a family or in a larger community. There are also many historical examples of men participating in these quilting traditions. The tops were prepared in advance, and a quilting bee was arranged, during which the actual quilting was completed by multiple people. Quilting frames were often used to stretch the quilt layers and maintain even tension to produce high-quality quilting stitches and to allow many individual quilters to work on a single quilt at one time.
Quilting bees were important social events in many communities, and were typically held between periods of high demand for farm labor. Quilts were frequently made to commemorate major life events, such as marriages. Quilts were often made for other events as well, such as graduations, or when individuals left their homes for other communities. One example of this is the quilts made as farewell gifts for pastors; some of these gifts were subscription quilts. For a subscription quilt, community members would pay to have their names embroidered on the quilt top, and the proceeds would be given to the departing minister. Sometimes the quilts were auctioned off to raise additional money, and the quilt might be donated back to the minister by the winner. A logical extension of this tradition led to quilts being made to raise money for other community projects, such as recovery from a flood or natural disaster, and later, for fundraising for war. Subscription quilts were made for all of America's wars. In a new tradition, quilt makers across the United States have been making quilts for wounded veterans of the Afghanistan and Iraq conflicts. There are many American traditions regarding the number of quilts a young woman (and her family) was expected to have made prior to her wedding for the establishment of her new home. Given the demands on a new wife, and the learning curve in her new role, it was prudent to provide her some reserve time with quilts already completed. Specific wedding quilts continue to be made today. Wedding ring quilts, which have a patchwork design of interlocking rings, have been made since the 1930s. White wholecloth quilts with high-quality, elaborate quilting, and often trapunto decorations as well, are also traditional for weddings. A superstition existed that it was bad luck to incorporate heart motifs in a wedding quilt (the couple’s hearts might be broken if such a design were included), so tulip motifs were often used to symbolize love in wedding quilts. The Museum of the Southern Jewish Experience in New Orleans holds a 19th-century exemplar of a "crazy quilt" (one without a pattern) "that was made by the Jewish Ladies’ Sewing Club of Canton, Miss., in 1885 to be raffled off to help fund the building of a synagogue there". The Museum's director, Kenneth Hoffman, says that this quilt involves "lots of little pieces that come together to make something greater than the sum of its parts, it’s crazy but it’s beautiful, it has a social aspect of ladies sitting together sewing, it has a religious aspect." William Rush Dunton (1868–1966), psychiatrist, collector, and scholar of American quilts, incorporated quilting as part of his occupational therapy treatment. "Dr Dunton, the founder of the American Occupational Therapy Association, encouraged his patients to pursue quilting as a curative activity/therapeutic diversion...." The National Quilt Museum is in Paducah, Kentucky, in the Southern United States. It hosts QuiltWeek, an annual competition and celebration of quilting that attracts artists and hobbyists from around the world. QuiltWeek has been celebrated in a short documentary by Olivia Loomis Merrion called Quilt Fever. It explores what quilting means to its practitioners along with what it means to Paducah, which has earned the nickname "Quilt City, USA".
Among the many television programs as well as YouTube channels devoted to quilting, Love of Quilting, which originates in a magazine of the same name, stands out for being aired on PBS. Techniques Patchwork and piecing One of the primary techniques involved in quilt making is patchwork, sewing together geometric pieces of fabric often to form a design or "block". Also called piecing, this technique can be achieved with hand stitching or with a sewing machine. Appliqué Appliqué is a sewing technique where an upper layer of fabric is sewn onto a ground fabric. The upper, applied fabric shape can be of any shape or contour. There are several different appliqué techniques and styles. In needle-turn appliqué, the raw edges of the appliquéd fabric are tucked beneath the design to minimize raveling or damage, and small hand stitches are made to secure down the design. The stitches are made with a hem stitch, so that the thread securing the fabric is minimally visible from the front of the work. There are other methods to secure the raw edge of the appliquéd fabric, and some people use basting stitches, fabric-safe glue, freezer paper, paper forms, or starching techniques to prepare the fabric that will be applied, prior to sewing it on. Supporting paper or other materials are typically removed after the sewing is complete. The ground fabric is often cut away from behind, after the sewing is complete, to minimize the bulk of the fabric in that region. A special form of appliqué is Broderie perse, which involves the appliqué of specific motifs that have been selected from a printed fabric. For example, a series of flower designs might be cut out of one fabric with a vine design, rearranged, and sewn down on a new fabric to create the image of a rose bush. Reverse appliqué Reverse appliqué is a sewing technique where a ground fabric is cut, another piece of fabric is placed under the ground fabric, the raw edges of the ground fabric are tucked under, and the newly folded edge is sewn down to the lower fabric. Stitches are made as inconspicuous as possible. Reverse appliqué techniques are often used in combination with traditional appliqué techniques, to give a variety of visual effects. Quilting A key component that defines a quilt is the stitches holding the three layers together—the quilting. Quilting, typically a running stitch, can be achieved by hand or by sewing machine. Hand quilting has often been a communally productive act with quilters sitting around a large quilting frame. One can also hand quilt with a hoop or other method. With the development of the sewing machine, some quilters began to use the sewing machine, and in more recent decades machine quilting has become quite commonplace, including with longarm quilting machines. Trapunto Trapunto is a sewing technique where two layers of fabric surrounding a layer of batting are quilted together, and then additional material is added to a portion of the design to increase the profile of relief as compared to the rest of the work. The effect of the elevation of one portion is often heightened by closely quilting the surrounding region, to compress the batting layer in that part of the quilt, thus receding the background even further. Cording techniques may also be used, where a channel is created by quilting, and a cord or yarn is pulled through the batting layer, causing a sharp change in the texture of the quilt. 
For example, several pockets may be quilted in the pattern of a flower, and then extra batting pushed through a slit in the backing fabric (which will later be sewn shut). The stem of the rose might be corded, creating a dimensional effect. The background could be quilted densely in a stipple pattern, causing the space around the rose bush to become less prominent. These techniques are typically executed with wholecloth quilts, and with batting and thread that matches the top fabric. Some artists have used contrasting colored thread, to create an outline effect. Colored batting behind the surface layer can create a shadowed effect. Brightly colored yarn cording behind white cloth can give a pastel effect on the surface. Embellishment Additional decorative elements may be added to the surface of a quilt to create a three-dimensional or whimsical effect. The most common objects sewn on are beads or buttons. Decorative trim, piping, sequins, found objects, or other items can also be secured to the surface. The topic of embellishment is explored further on another page. English paper piecing English paper piecing is a hand-sewing technique used to maximize accuracy when piecing complex angles together. A paper shape is cut with the exact dimensions of the desired piece. Fabric is then basted to the paper shape. Adjacent units are then placed face to face, and the seam is whipstitched together. When a given piece is completely surrounded by all the adjacent shapes, the basting thread is cut, and the basting and the paper shape are removed. Foundation piecing Foundation piecing is a sewing technique that allows maximum stability of the work as the piecing is created, minimizing the distorting effect of working with slender pieces or bias-cut pieces. In the most basic form of foundation piecing, a piece of paper is cut to the size of the desired block. For utility quilts, a sheet of newspaper was used. In modern foundation piecing, there are many commercially available foundation papers. A strip of fabric or a fabric scrap is sewn by machine to the foundation. The fabric is flipped back and pressed. The next piece of fabric is sewn through the initial piece and its foundation paper. Subsequent pieces are added sequentially. The block may be trimmed flush with the border of the foundation. After the blocks are sewn together, the paper is removed, unless the foundation is an acid-free material that will not damage the quilt over time. Intarsia Rarer and less well-known are quilts made by men in a military setting. They are made of broadcloth which is cut into elements abutting each other as intarsia and then over-sewn. Front and back of the work are in principle identical and the quilts reversible, except in cases where elements of appliqué, embroidery or trapunto have been added on the front, which is quite common in more elaborate or illustrative pieces. Quilting styles North America Amish Amish quilts are reflections of the Amish way of life. As a part of their religious commitment, Amish people have chosen to reject "worldly" elements in their dress and lifestyle, and their quilts historically reflected this, although today Amish make and use quilts in a variety of styles. Traditionally, the Amish use only solid colors in their clothing and the quilts they intend for their own use, in community-sanctioned colors and styles. 
In Lancaster, Pennsylvania, early Amish quilts were typically made of solid-colored, lightweight wool fabric, off the same bolts of fabric used for family clothing items, while in many Midwestern communities, cotton predominated. Classic Amish quilts often feature quilting patterns that contrast with the plain background. Antique Amish quilts are among the most highly prized by collectors and quilting enthusiasts. The color combinations used in a quilt can help experts determine the community in which the quilt was produced. Since the 1970s, Amish quiltmakers have made quilts for the consumer market, with quilt cottage industries and retail shops appearing in Amish settlements across North America. Baltimore album Baltimore album quilts originated in the region around Baltimore, Maryland, in the 1840s, where a unique and highly developed style of appliqué quilting briefly flourished. Baltimore album quilts are variations on album quilts, which are collections of appliquéd blocks, each with a different design. These designs often feature floral patterns, but many other motifs are used as well. Baskets of flowers, wreaths, buildings, books, and birds are common motifs. Designs are often highly detailed, and display the quiltmaker's skill. New dyeing techniques became available in this period, allowing the creation of new, bold colors, which the quilters used enthusiastically. New techniques for printing on the fabrics also allowed portions of fabric to be shaded, which heightens the three-dimensional effect of the designs. The background fabric is typically white or off-white, allowing maximal contrast to the delicate designs. India ink allowed handwritten accents and also allowed the blocks to be signed. Some of these quilts were created by professional quilters, and patrons could commission quilts made of new blocks, or select blocks that were already available for sale. There has been a resurgence of quilting in the Baltimore style, with many of the modern quilts experimenting with bending some of the old rules. Crazy quilts Crazy quilts are so named because their pieces are not regular, and they are scattered across the top of the quilt like "crazed" (cracked or crackled) pottery glazing. They were originally very refined, luxury items. Geometric pieces of rich fabrics were sewn together, and highly decorative embroidery was added. Such quilts were often effectively samplers of embroidery stitches and techniques, displaying the development of needle skills of those in the well-to-do late 19th-century home. They were show pieces, not used for warmth, but for display. The luxury fabrics used precluded frequent washing. They often took years to complete. Fabrics used included silks, wools, velvet, linen, and cotton. The mixture of fabric textures, such as a smooth silk next to a textured brocade or velvet, was embraced. Designs were applied to the surface, and other elements such as ribbons, lace, and decorative cording were used exuberantly. Names and dates were often part of the design, added to commemorate important events or associations of the maker. Politics were included in some, with printed campaign handkerchiefs and other preprinted textiles (such as advertising silks) included to declare the maker's sentiments. 
African-American By the time that early African-American quilting became a tradition in and of itself, it was already a combination of textile traditions from four civilizations of Central and West Africa: the Mande-speaking peoples, the Yoruba and Fon peoples, the Ejagham peoples, and the Kongo peoples. As textiles were traded heavily throughout the Caribbean, Central America, and the Southern United States, the traditions of each distinct region became intermixed. Originally, most of the textiles were made by men. Yet when enslaved Africans were brought to the United States, their work was divided according to traditional Western gender roles and women took over the tradition. However, this strong tradition of weaving left a visible mark on African-American quilting. Strips, reminiscent of the strips of reed and fabric used in men's traditional weaving, are used in fabric quilting. A break in a pattern symbolized a rebirth in the ancestral power of the creator or wearer. It also helped keep evil spirits away; evil is believed to travel in straight lines and a break in a pattern or line confuses the spirits and slows them down. This tradition is highly recognizable in African-American improvisations on European-American patterns. The traditions of improvisation and multiple patterning also protect the quilter from anyone copying their quilts. These traditions allow for a strong sense of ownership and creativity. In the 1980s, concurrent with the boom in art quilting in America, new attention was brought to African-American traditions and innovations. This attention came from two opposing points of view, one validating the practices of rural Southern African-American quilters and another asserting that there was no one style but rather the same individualization found among white quilters. John Vlach, in a 1976 exhibition, and Maude Wahlman, co-organizing a 1979 exhibition, both cited the use of strips, high-contrast colors, large design elements, and multiple patterns as characteristic and compared them to rhythms in black music. Building on the relationship between quilting and musical performance, African-American quilter Gwendolyn Ann Magee created a twelve-piece exhibition based on the lyrics of James Weldon Johnson's "Lift Every Voice and Sing", commonly known as the "Negro National Anthem". Cuesta Benberry, a quilt historian with a special interest in African-American works, published Always There: The African-American Presence in American Quilts in 1992 and organized an exhibition documenting the contributions of black quilters to mainstream American quilting. Eli Leon, a collector of African-American quilts, organized a traveling exhibition in 1987 that introduced both historic and current quilters, some loosely following patterns and others improvising, such as Rosie Lee Tompkins. He argued for the creativity of the irregular quilt, saying that these quilters saw the quilt block as "an invitation to variation" and felt that measuring "takes the heart outa things". At the same time, the Williams College Museum of Art was circulating Stitching Memories: African-American Story Quilts, an exhibition featuring a different approach to quilts, including most prominently the quilts of Faith Ringgold.
However, it was not until 2002, when the Museum of Fine Arts, Houston, organized The Quilts of Gee's Bend, an exhibition that appeared in major museums around the country, including the Whitney Museum of American Art in New York, that art critics unknowingly adopted Leon's assertions. Story quilts Story quilts have much in common with pictorial quilts and the tradition of African-American quiltmakers and are often made as a form of quilt art. Usually adorned with extensive text and accompanying imagery, story quilts can contain short stories, poems, or extended essays and can be used as an alternative form of a picture book. Artist Faith Ringgold, known for her large portfolio of story quilts, has said she began making these narrative quilts with extensive text after being unable to find a publisher that would accept her autobiography. She began quilting so that "when my quilts were hung up to look at, or photographed for a book, people could still read my stories". Pictorial quilts Pictorial quilts often contain one-of-a-kind patterns and imagery. Instead of bringing together fabric in an abstract or patterned design, they use pieces of fabric to create objects on the quilt, resulting in a picture-based quilt. They were often made collaboratively as a fundraising effort. However, some pictorial quilts were individually created and tell a narrative through the images on the quilt. Some pictorial quilts consist of many squares, sometimes made by multiple people, while others have imagery that uses the entirety of the quilt. Pictorial quilts were created in the United States, as well as in England and Ireland, beginning as early as 1795. Barn quilts Barn quilts are a type of folk art found in the United States (particularly the South and Midwest) and Canada. They take the patterns of traditional quilt squares, and recreate them either directly on the side of a barn or on a piece of wood or aluminum which is then attached to the side of a barn. Patterns are sometimes modeled on family quilts, loved ones, patriotic themes, or crops important to the farm. The origins of the barn quilt are contested: some claim they date back almost 300 years, while others claim they were invented by Donna Sue Groves of Adams County, Ohio, in 2001. Their origin is likely connected to barn advertisements. Many rural counties will display their barn quilts as part of a quilt trail, creating a route that connects barns with barn quilts to sponsor local tourism. Hawaiian Hawaiian quilts are wholecloth (not pieced) quilts, featuring large-scale symmetrical appliqué in solid colors on a solid color (usually white) background fabric. Traditionally, the quilter would fold a square piece of fabric into quarters or eighths and then cut out a border design, followed by a center design. The cutouts would then be appliquéd onto a contrasting background fabric. The center and border designs were typically inspired by local flora and often had rich personal associations for the creator, with deep cultural resonances. The most common color for the appliquéd design was red, due to the wide availability of Turkey-red fabric. Some of these textiles were not in fact quilted but were used as decorative coverings without the heavier batting, which was not needed in a tropical climate. Multiple colors were added over time as the tradition developed. Echo quilting, where a quilted outline of the appliqué pattern is repeated like ripples out to the edge of the quilt, is the most common quilting pattern employed on Hawaiian-style quilts.
Beautiful examples are held in the collection of the Bernice Pauahi Bishop Museum, Honolulu, Hawaii. Native American star quilts Star Quilts are a Native-American form of quilting that arose among native women in the late 19th century as communities adjusted to the difficulties of reservation life and cultural disruption. They are made by many tribes, but came to be especially associated with Plains tribes, including the Lakota. While star patterns existed in earlier European-American forms of quilting, they came to take on special significance for many native artisans. Star quilts are more than an art form—they express important cultural and spiritual values of the native women who make them and continue to be used in ceremonies and to mark important points in a person's life, including curing or yuwipi ceremonies and memorials. Anthropologists (such as Bea Medicine) have documented important social and cultural connections between quilting and earlier important pre-reservation crafting traditions, such as women's quill-working societies and other crafts that were difficult to sustain after hunting and off-reservation travel was restricted by the US government. Star quilts have also become a source of income for many Native-American women, while retaining spiritual and cultural importance to their makers. Seminole patchwork Created by the Native Americans of southern Florida, Seminole strip piecing is based on a simple form of decorative patchwork. Seminole strip piecing has uses in quilts, wall hangings, and traditional clothing. Seminole patchwork is created by joining a series of horizontal strips to produce repetitive geometric designs. Europe The history of quilting in Europe goes back at least to Medieval times. Quilting was used not only for traditional bedding but also for warm clothing. Clothing quilted with fancy fabrics and threads was often a sign of nobility. British quilts Henry VIII of England's household inventories record dozens of "quyltes" and "coverpointes" among the bed linen, including a green silk one for his first wedding to Catherine of Aragon, quilted with metal threads, linen-backed, and worked with roses and pomegranates. An embroidered yellow silk quilt from Bengal dating from the 1620s, an early example of such fabric use in Britain, now held by the Colonial Williamsburg museum, has an ownership label of Catherine Colepeper, connecting it to Leeds Castle and the Smythe and Colepeper families. Thomas Smythe, a brother of the owner of Leeds Castle, was a founder and governor of the English East India Company. Otherwise known as Durham quilts, North Country quilts have a long history in northeastern England, dating back to the Industrial Revolution and beyond. North Country quilts are often wholecloth quilts, featuring dense quilting. Some are made of sateen fabrics, which further heightens the effect of the quilting. From the late 18th to the early 20th century, the Lancashire cotton industry produced quilts using a mechanized technique of weaving double cloth with an enclosed heavy cording weft, imitating the corded Provençal quilts made in Marseilles. Italian quilts Quilting was particularly common in Italy during the Renaissance. One particularly famous surviving example, now in two parts, is the 1360–1400 Tristan Quilt, a Sicilian-quilted linen textile representing scenes from the story of Tristan and Isolde and housed in the Victoria and Albert Museum and in the Bargello in Florence. 
Provençal quilts Provençal quilts, now often referred to as "boutis" (the Provençal word meaning "stuffing"), are wholecloth quilts traditionally made in the South of France since the 17th century. Two layers of fabric are quilted together with stuffing sandwiched between sections of the design, creating a raised effect. The three main forms of the Provençal quilt are matelassage (a double-layered wholecloth quilt with batting sandwiched between), corded quilting or piqûre de Marseille (also known as Marseilles work or piqué marseillais), and boutis. These terms are often debated and confused, but are all forms of stuffed quilting associated with the region. Asia China Throughout China, a simple method of producing quilts is employed. It involves setting up a temporary site. At the site, a frame is assembled within which a lattice work of cotton thread is made. Cotton batting, either new or retrieved from discarded quilts, is prepared in a mobile carding machine. The mechanism of the carding machine is powered by a small, petrol motor. The batting is then added, layer by layer, to the area within the frame. Between adjacent layers, a new lattice of thread is created with a wooden disk used to tamp down the layer. Japan: Sashiko Sashiko (刺し子, literally "little stabs") is a Japanese tradition that evolved over time from a simple technique for reinforcing fabric made for heavy use in fishing villages. It is a form of decorative stitching, with no overlap of any two stitches. Piecing is not part of the tradition; instead, the focus is on heavy cotton thread work with large, even stitches on the base fabric. Deep blue indigo-dyed fabric with white stitches is the most traditional form, but inverse work with blue on white is also seen. Traditional medallion, tessellated, and geometric designs are the most common. Bangladeshi quilts Bangladeshi quilts, known as Kantha, are not pieced together. Rather, they consist of two to three pieces of cloth sewn together with decorative embroidery stitches. They are made out of worn-out clothes (saris) and are mainly used for bedding, although they may be used as a decorative piece as well. They are made by women mainly in the monsoon season before winter. India: Kantha, Ralli, and Balaposh — the unquilted scented quilt Women in the Indus Region of the Indian subcontinent make beautiful quilts with bright colors and bold patterns. The quilts are called "Ralli" (or rilli, rilly, rallee, or rehli), derived from the local word "ralanna", meaning to mix or connect. Rallis are made in the southern provinces of Pakistan, including Sindh and Baluchistan, and in the Cholistan Desert on the southern border of Punjab, as well as in the adjoining states of Gujarat and Rajasthan in India. In India, Kantha originated from the Sanskrit word kontha, which means rags, as the blankets are made out of rags using different scrap pieces of cloth. Nakshi kantha, which uses a running (embroidery) stitch similar to the Japanese sashiko, is used for decorating and reinforcing the cloth and for sewing patterns. This work is called Katab in Kutch, and is popularly known as Koudhi in Karnataka. Such blankets are given as gifts to newborn babies in many parts of India. Lambani tribes wear skirts with such art. Muslim and Hindu women from a variety of tribes and castes in towns, villages, and also nomadic settings make rallis.
Quiltmaking is an old tradition in the region, perhaps dating back to the fourth millennium BC, judging by similar patterns found on ancient pottery. The Jaipuri razai (quilt) is one of the best-known products of Jaipur because of the traditional art and process used to make it. Jaipuri razais are printed by screen printing or block printing, both handmade processes carried out by local artisans of Jaipur, Sanganer, and Bagru. Jaipuri quilts are designed to provide warmth in winter without irritating the skin, and are valued as a way of bringing traditional Indian craft into modern living spaces. Rallis are commonly used as a covering for wooden sleeping cots, as a floor covering, storage bag, or padding for workers or animals. In the villages, ralli quilts are an important part of a girl's dowry. Owning many ralli quilts is a measure of wealth. Parents present rallis to their daughters on their wedding day as a dowry. Rallis are made from scraps of cotton fabric dyed to the desired color. The most common colors are white, black, red, and yellow or orange with green, dark blue, or purple. For the bottoms of the rallis, the women use old pieces of tie-dye, ajrak, or other shawl fabric. Ralli quilts have a few layers of worn fabric or cotton fibers between the top and bottom layers. The layers are held together by thick colored thread stitched in straight lines. The women sit on the ground and do not use a quilting frame. Another kind of ralli quilt is the sami ralli, used by the samis and jogis. This type of ralli quilt is popular due to the many colors and the extensive hand-stitching employed in its construction. The number of patterns used on ralli quilts seems to be almost endless, as there is much individual expression and spontaneity in color within the traditional patterns. The three basic styles of rallis are: 1) patchwork quilts made from pieces of cloth torn into squares and triangles and then stitched together, 2) appliqué quilts made from intricate cut-out patterns in a variety of shapes, and 3) embroidered quilts where the embroidery stitches form patterns on solid colored fabric. A distinguishing feature of ralli patterning in patchwork and appliqué quilts is the diagonal placement of similar blocks as well as a variety of embellishments including mirrors, tassels, shells, and embroidery. Rural women in the Uttara Kannada region of India carry out traditional quilting practices that are interwoven with rituals around food availability and access. Primarily made in Yadgir, Bagalkot, Gulbarga, Angadibail and Haliyal, Kavudis are handmade patchwork quilts with multiple layers, including the batting or insulation. Africa, Oceania and South America Cook Islands: Tivaevae quilts Tivaevae are quilts made by Cook Island women for ceremonial occasions. Quilting is thought to have been imported to the Islands by missionaries. The quilts are highly prized and are given as gifts with other finely made works on important occasions such as weddings and christenings. Egyptian khayamiya Khayamiya is a form of suspended tent decoration or portable textile screen used across North Africa and the Middle East. It is an art form distinctive to Egypt, where khayamiya are still sewn by hand in the Street of the Tentmakers (Sharia Khayamiya) in Cairo. Whilst khayamiya resemble quilts, they typically possess a heavy back layer and a fine top layer of appliqué, without a central insulating layer.
Kuna: Mola textiles Mola textiles are a distinct tradition created by the Kuna people of Panama and Colombia. They are famous for their bright colors and reverse appliqué techniques, which create designs with strong cultural and spiritual importance within the indigenous culture. Forms of animals, humans, or mythological figures are featured, with strong geometric designs in the voids around the main image. These textiles are not traditionally used as bedding, but use techniques common to the larger international quilting tradition. Molas have been very influential on modern quilting design. Block designs There are many traditional block designs and techniques that have been named. Log cabin quilts are pieced quilts featuring blocks made of strips of fabric, typically encircling a small centered square (traditionally a red square, symbolizing the hearth of the home), with light strips forming half the square and dark strips the other half. Dramatic contrast effects with light and dark fabrics are created by various layouts of the blocks when joined to form a quilt top. These different layout variations are often named; some layouts include Sunshine and Shadow, Straight Furrows, Streak of Lightning, and Barn Raising. Nine-patch blocks are often the first blocks a child is taught to make. The block consists of three rows of three squares. A checkerboard effect with alternating dark and light squares is most commonly used. The double wedding ring pattern first came to prominence during the Great Depression. The design consists of interlocking circles pieced with small arcs of fabric. The finished quilts are often given to commemorate marriages. Cathedral windows is a type of block that features reverse appliqué using large amounts of folded muslin, and consists of modular blocks in an interlocking circular design that frame small squares or diamonds of colorful lightweight cotton. The volume of fabric is high, and the tops are heavy. Because of the weight and the insulating value of the base fabric, these tops often are assembled without batting (and thus need no quilting stitches), and sometimes have no backing. Such a quilt may be called a "counterpane" and may serve mainly as a decorative bedspread. Quilting technique Quilts on display One of the most famous quilts in history is the AIDS Memorial Quilt, which was begun in San Francisco in 1987, and is cared for by The NAMES Project Foundation. Portions of it are periodically displayed in various arranged locations. Panels are made to memorialize a person lost to HIV, and each block is 3 feet by 6 feet. Many of the blocks are not made by traditional quilters, and the amateur creators may lack technical skill, but their blocks speak directly to the love and loss they have experienced. The blocks are not in fact quilted, as there is no stitching holding together batting and backing layers. Exuberant designs, with personal objects applied, are seen next to restrained and elegant designs. Each block is very personal, and they form a deeply moving sight when combined by the dozens and the hundreds. The quilt as a whole is still under construction, although the entire quilt is now so large that it cannot be assembled in complete form in any one location. Beginning with the Whitney Museum of American Art's 1971 exhibit, Abstract Design in American Quilts, quilts have frequently appeared on museum and gallery walls. The exhibit displayed quilts like paintings on its gallery walls, which has since become a standard way to exhibit quilts. 
The Whitney exhibit helped shift the perception of quilts from solely a domestic craft object to art objects, increasing art world interest in them. The National Quilt Museum is located in Paducah, Kentucky. The museum houses a large collection of contemporary quilts, and features approximately a dozen exhibitions each year showcasing the works of "today's quilters" from America and around the world. In 2010, the world-renowned Victoria and Albert Museum put on a comprehensive display of quilts from 1700 to 2010, while in 2009, the American Folk Art Museum in New York put on an exhibition of the work of kaleidoscope quilt maker Paula Nadelstern, marking the first time that museum had offered a solo show to a contemporary quilt artist. Many historic quilts can be seen in Bath at the American Museum in Britain, and Beamish Museum preserves examples of the North East England quiltmaking tradition. The largest known public collection of quilts is housed at the International Quilt Study Center & Museum at the University of Nebraska–Lincoln in Lincoln, Nebraska. In 2018, documentary filmmaker Ken Burns' personal collection of quilts was exhibited there. Examples of Tivaevae and other quilts can be found in the collection of the Museum of New Zealand Te Papa Tongarewa. The San Jose Museum of Quilts and Textiles in California also displays traditional and modern quilts. There is free admission to the museum on the first Friday of every month, as part of the San Jose Art Walk. The New England Quilt Museum is located in Lowell, Massachusetts. The Rocky Mountain Quilt Museum is located in Golden, Colorado. Numerous Hawaiian-style quilts can be seen at Bishop Museum, in Honolulu, Hawaii. In literature Ismat Chughtai wrote an Urdu-language story entitled "Lihaf" ("The Quilt", 1941) that led to scandal and an unsuccessful attempt at legal prosecution of the author because it was about a lesbian relationship. Other literary works featuring quilts include: The Quilter's Apprentice (and many others) by Jennifer Chiaverini; Alias Grace by Margaret Atwood; How to Make an American Quilt by Whitney Otto; A Fine Balance by Rohinton Mistry; "Everyday Use" by Alice Walker; The Keeping Quilt by Patricia Polacco; and The Last Runaway by Tracy Chevalier.
Technology
Other techniques
null
89246
https://en.wikipedia.org/wiki/Curve
Curve
In mathematics, a curve (also called a curved line in older texts) is an object similar to a line, but that does not have to be straight. Intuitively, a curve may be thought of as the trace left by a moving point. This is the definition that appeared more than 2000 years ago in Euclid's Elements: "The [curved] line is […] the first species of quantity, which has only one dimension, namely length, without any width nor depth, and is nothing else than the flow or run of the point which […] will leave from its imaginary moving some vestige in length, exempt of any width." This definition of a curve has been formalized in modern mathematics as: A curve is the image of an interval to a topological space by a continuous function. In some contexts, the function that defines the curve is called a parametrization, and the curve is a parametric curve. In this article, these curves are sometimes called topological curves to distinguish them from more constrained curves such as differentiable curves. This definition encompasses most curves that are studied in mathematics; notable exceptions are level curves (which are unions of curves and isolated points), and algebraic curves (see below). Level curves and algebraic curves are sometimes called implicit curves, since they are generally defined by implicit equations. Nevertheless, the class of topological curves is very broad, and contains some curves that do not look as one may expect for a curve, or even cannot be drawn. This is the case for space-filling curves and fractal curves. To ensure more regularity, the function that defines a curve is often supposed to be differentiable, and the curve is then said to be a differentiable curve. A plane algebraic curve is the zero set of a polynomial in two indeterminates. More generally, an algebraic curve is the zero set of a finite set of polynomials, which satisfies the further condition of being an algebraic variety of dimension one. If the coefficients of the polynomials belong to a field k, the curve is said to be defined over k. In the common case of a real algebraic curve, where k is the field of real numbers, an algebraic curve is a finite union of topological curves. When complex zeros are considered, one has a complex algebraic curve, which, from the topological point of view, is not a curve, but a surface, and is often called a Riemann surface. Although not being curves in the common sense, algebraic curves defined over other fields have been widely studied. In particular, algebraic curves over a finite field are widely used in modern cryptography. History Interest in curves began long before they were the subject of mathematical study. This can be seen in numerous examples of their decorative use in art and on everyday objects dating back to prehistoric times. Curves, or at least their graphical representations, are simple to create, for example with a stick on the sand on a beach. Historically, the term line was used in place of the more modern term curve. Hence the terms straight line and right line were used to distinguish what are today called lines from curved lines. For example, in Book I of Euclid's Elements, a line is defined as a "breadthless length" (Def. 2), while a straight line is defined as "a line that lies evenly with the points on itself" (Def. 4). Euclid's idea of a line is perhaps clarified by the statement "The extremities of a line are points," (Def. 3). Later commentators further classified lines according to various schemes.
For example: Composite lines (lines forming an angle) Incomposite lines Determinate (lines that do not extend indefinitely, such as the circle) Indeterminate (lines that extend indefinitely, such as the straight line and the parabola) The Greek geometers had studied many other kinds of curves. One reason was their interest in solving geometrical problems that could not be solved using standard compass and straightedge construction. These curves include: The conic sections, studied in depth by Apollonius of Perga The cissoid of Diocles, studied by Diocles and used as a method to double the cube. The conchoid of Nicomedes, studied by Nicomedes as a method to both double the cube and to trisect an angle. The Archimedean spiral, studied by Archimedes as a method to trisect an angle and square the circle. The spiric sections, sections of tori studied by Perseus as sections of cones had been studied by Apollonius. A fundamental advance in the theory of curves was the introduction of analytic geometry by René Descartes in the seventeenth century. This enabled a curve to be described using an equation rather than an elaborate geometrical construction. This not only allowed new curves to be defined and studied, but it enabled a formal distinction to be made between algebraic curves that can be defined using polynomial equations, and transcendental curves that cannot. Previously, curves had been described as "geometrical" or "mechanical" according to how they were, or supposedly could be, generated. Conic sections were applied in astronomy by Kepler. Newton also worked on an early example in the calculus of variations. Solutions to variational problems, such as the brachistochrone and tautochrone questions, introduced properties of curves in new ways (in this case, the cycloid). The catenary gets its name as the solution to the problem of a hanging chain, the sort of question that became routinely accessible by means of differential calculus. In the eighteenth century came the beginnings of the theory of plane algebraic curves, in general. Newton had studied the cubic curves, in the general description of the real points into 'ovals'. The statement of Bézout's theorem showed a number of aspects which were not directly accessible to the geometry of the time, to do with singular points and complex solutions. Since the nineteenth century, curve theory is viewed as the special case of dimension one of the theory of manifolds and algebraic varieties. Nevertheless, many questions remain specific to curves, such as space-filling curves, Jordan curve theorem and Hilbert's sixteenth problem. Topological curve A topological curve can be specified by a continuous function from an interval of the real numbers into a topological space . Properly speaking, the curve is the image of However, in some contexts, itself is called a curve, especially when the image does not look like what is generally called a curve and does not characterize sufficiently For example, the image of the Peano curve or, more generally, a space-filling curve completely fills a square, and therefore does not give any information on how is defined. A curve is closed or is a loop if and . A closed curve is thus the image of a continuous mapping of a circle. A non-closed curve may also be called an open curve. If the domain of a topological curve is a closed and bounded interval , the curve is called a path, also known as topological arc (or just ). 
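To make the parametric description above concrete, here is a minimal Python sketch (illustrative only, not part of the original article) of two curves given as images of continuous maps on [0, 2π]: the unit circle, which is a simple closed curve, and a figure-eight, which is closed but not simple because two different parameter values are sent to the origin.

import math

def circle(t):
    # Continuous map of [0, 2*pi] into the plane; circle(0) == circle(2*pi),
    # so the image is a closed curve (a loop), and the map is otherwise injective.
    return (math.cos(t), math.sin(t))

def figure_eight(t):
    # Also closed, but not simple: t = 0 and t = pi (an interior point of the
    # interval) are both sent to the origin, so the curve crosses itself.
    return (math.sin(t), math.sin(t) * math.cos(t))

print(circle(0.0), circle(2 * math.pi))          # numerically the same point: closed
print(figure_eight(0.0), figure_eight(math.pi))  # both numerically the origin: not simple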
A curve is simple if it is the image of an interval or a circle by an injective continuous function. In other words, if a curve is defined by a continuous function with an interval as a domain, the curve is simple if and only if any two different points of the interval have different images, except, possibly, if the points are the endpoints of the interval. Intuitively, a simple curve is a curve that "does not cross itself and has no missing points" (a continuous non-self-intersecting curve). A plane curve is a curve for which is the Euclidean plane—these are the examples first encountered—or in some cases the projective plane. A is a curve for which is at least three-dimensional; a is a space curve which lies in no plane. These definitions of plane, space and skew curves apply also to real algebraic curves, although the above definition of a curve does not apply (a real algebraic curve may be disconnected). A plane simple closed curve is also called a Jordan curve. It is also defined as a non-self-intersecting continuous loop in the plane. The Jordan curve theorem states that the set complement in a plane of a Jordan curve consists of two connected components (that is the curve divides the plane in two non-intersecting regions that are both connected). The bounded region inside a Jordan curve is known as Jordan domain. The definition of a curve includes figures that can hardly be called curves in common usage. For example, the image of a curve can cover a square in the plane (space-filling curve), and a simple curve may have a positive area. Fractal curves can have properties that are strange for the common sense. For example, a fractal curve can have a Hausdorff dimension bigger than one (see Koch snowflake) and even a positive area. An example is the dragon curve, which has many other unusual properties. Differentiable curve Roughly speaking a is a curve that is defined as being locally the image of an injective differentiable function from an interval of the real numbers into a differentiable manifold , often More precisely, a differentiable curve is a subset of where every point of has a neighborhood such that is diffeomorphic to an interval of the real numbers. In other words, a differentiable curve is a differentiable manifold of dimension one. Differentiable arc In Euclidean geometry, an arc (symbol: ⌒) is a connected subset of a differentiable curve. Arcs of lines are called segments, rays, or lines, depending on how they are bounded. A common curved example is an arc of a circle, called a circular arc. In a sphere (or a spheroid), an arc of a great circle (or a great ellipse) is called a great arc. Length of a curve If is the -dimensional Euclidean space, and if is an injective and continuously differentiable function, then the length of is defined as the quantity The length of a curve is independent of the parametrization . In particular, the length of the graph of a continuously differentiable function defined on a closed interval is which can be thought of intuitively as using the Pythagorean theorem at the infinitesimal scale continuously over the full length of the curve. More generally, if is a metric space with metric , then we can define the length of a curve by where the supremum is taken over all and all partitions of . A rectifiable curve is a curve with finite length. A curve is called (or unit-speed or parametrized by arc length) if for any such that , we have If is a Lipschitz-continuous function, then it is automatically rectifiable. 
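The length definition above (whose displayed formulas are missing from this extract) can be illustrated numerically. The following Python sketch is an assumed illustration rather than the article's own formula: it approximates the length of a curve by summing chord lengths over a fine partition, the idea behind the supremum definition, and applied to the unit circle it returns a value close to 2π.

import math

def curve_length(gamma, a, b, n=100_000):
    # Polygonal approximation: sum the straight-line distances between
    # successive sample points gamma(t_i) over a fine partition of [a, b].
    # For a rectifiable curve this converges to the curve's length.
    total = 0.0
    prev = gamma(a)
    for i in range(1, n + 1):
        t = a + (b - a) * i / n
        cur = gamma(t)
        total += math.dist(prev, cur)
        prev = cur
    return total

unit_circle = lambda t: (math.cos(t), math.sin(t))
print(curve_length(unit_circle, 0.0, 2 * math.pi))   # about 6.28319, i.e. 2*pi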
Moreover, in this case, one can define the speed (or metric derivative) of at as and then show that Differential geometry While the first examples of curves that are met are mostly plane curves (that is, in everyday words, curved lines in two-dimensional space), there are obvious examples such as the helix which exist naturally in three dimensions. The needs of geometry, and also for example classical mechanics are to have a notion of curve in space of any number of dimensions. In general relativity, a world line is a curve in spacetime. If is a differentiable manifold, then we can define the notion of differentiable curve in . This general idea is enough to cover many of the applications of curves in mathematics. From a local point of view one can take to be Euclidean space. On the other hand, it is useful to be more general, in that (for example) it is possible to define the tangent vectors to by means of this notion of curve. If is a smooth manifold, a smooth curve in is a smooth map . This is a basic notion. There are less and more restricted ideas, too. If is a manifold (i.e., a manifold whose charts are times continuously differentiable), then a curve in is such a curve which is only assumed to be (i.e. times continuously differentiable). If is an analytic manifold (i.e. infinitely differentiable and charts are expressible as power series), and is an analytic map, then is said to be an analytic curve. A differentiable curve is said to be if its derivative never vanishes. (In words, a regular curve never slows to a stop or backtracks on itself.) Two differentiable curves and are said to be equivalent if there is a bijective map such that the inverse map is also , and for all . The map is called a reparametrization of ; and this makes an equivalence relation on the set of all differentiable curves in . A arc is an equivalence class of curves under the relation of reparametrization. Algebraic curve Algebraic curves are the curves considered in algebraic geometry. A plane algebraic curve is the set of the points of coordinates such that , where is a polynomial in two variables defined over some field . One says that the curve is defined over . Algebraic geometry normally considers not only points with coordinates in but all the points with coordinates in an algebraically closed field . If C is a curve defined by a polynomial f with coefficients in F, the curve is said to be defined over F. In the case of a curve defined over the real numbers, one normally considers points with complex coordinates. In this case, a point with real coordinates is a real point, and the set of all real points is the real part of the curve. It is therefore only the real part of an algebraic curve that can be a topological curve (this is not always the case, as the real part of an algebraic curve may be disconnected and contain isolated points). The whole curve, that is the set of its complex point is, from the topological point of view a surface. In particular, the nonsingular complex projective algebraic curves are called Riemann surfaces. The points of a curve with coordinates in a field are said to be rational over and can be denoted . When is the field of the rational numbers, one simply talks of rational points. For example, Fermat's Last Theorem may be restated as: For , every rational point of the Fermat curve of degree has a zero coordinate. Algebraic curves can also be space curves, or curves in a space of higher dimension, say . They are defined as algebraic varieties of dimension one. 
They may be obtained as the common solutions of at least n − 1 polynomial equations in n variables. If n − 1 polynomials are sufficient to define a curve in a space of dimension n, the curve is said to be a complete intersection. By eliminating variables (by any tool of elimination theory), an algebraic curve may be projected onto a plane algebraic curve, which however may introduce new singularities such as cusps or double points. A plane curve may also be completed to a curve in the projective plane: if a curve is defined by a polynomial f of total degree d, then w^d f(u/w, v/w) simplifies to a homogeneous polynomial g(u, v, w) of degree d. The values of u, v, w such that g(u, v, w) = 0 are the homogeneous coordinates of the points of the completion of the curve in the projective plane, and the points of the initial curve are those such that w is not zero. An example is the Fermat curve u^n + v^n = w^n, which has the affine form x^n + y^n = 1. A similar process of homogenization may be defined for curves in higher dimensional spaces. Except for lines, the simplest examples of algebraic curves are the conics, which are nonsingular curves of degree two and genus zero. Elliptic curves, which are nonsingular curves of genus one, are studied in number theory, and have important applications to cryptography.
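As a small, self-contained illustration of the last point (an assumed example, not taken from the article), the points of a plane algebraic curve that are rational over a finite field can be counted by brute force; curves of this kind, defined over vastly larger fields, are what elliptic-curve cryptography uses.

# Count the affine points of the curve y^2 = x^3 + 2x + 3 over GF(97).
# The coefficients and the prime are arbitrary illustrative choices.
p = 97
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x ** 3 + 2 * x + 3)) % p == 0]
print(len(points))   # number of points of the curve rational over GF(97)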
Mathematics
Geometry
null
89251
https://en.wikipedia.org/wiki/Sweet%20corn
Sweet corn
Sweet corn (Zea mays convar. saccharata var. rugosa), also called sweetcorn, sugar corn and pole corn, is a variety of maize grown for human consumption with a high sugar content. Sweet corn is the result of a naturally occurring recessive mutation in the genes which control conversion of sugar to starch inside the endosperm of the corn kernel. Sweet corn is picked when still immature (the milk stage) and prepared and eaten as a vegetable, unlike field corn, which is harvested when the kernels are dry and mature (dent stage). Since the process of maturation involves converting sugar to starch, sweet corn stores poorly and must be eaten fresh, canned, or frozen, before the kernels become tough and starchy. It is one of the six major types of corn, the others being dent corn, flint corn, pod corn, popcorn, and flour corn. According to the USDA, 100 grams of raw yellow sweet corn contains 3.43 g glucose, 1.94 g fructose, and 0.89 g sucrose. History In 1493, Christopher Columbus returned to Europe with corn seeds, although this revelation did not succeed due to inadequate education of how to produce corn. Sweet corn occurs as a spontaneous mutation in field corn and was grown by several Native American tribes. The European cultivation of sweet corn occurred when the Iroquois tribes grew the first recorded sweet corn (called 'Papoon') for European settlers in 1779. It soon became a popular food in the southern and central regions of the United States. Open pollinated cultivars of white sweet corn started to become widely available in the United States in the 19th century. Two of the most enduring cultivars, still available today, are 'Country Gentleman' (a Shoepeg corn with small kernels in irregular rows) and 'Stowell's Evergreen'. Sweet corn production in the 20th century was influenced by the following key developments: hybridization allowed for more uniform maturity, improved quality and disease resistance In 1933 'Golden Cross Bantam' was released. It is significant for being the first successful single-cross hybrid and the first specifically developed for disease resistance (Stewart's wilt in this case). identification of the separate gene mutations responsible for sweetness in corn and the ability to breed cultivars based on these characteristics: su (normal sugary) se (sugary enhanced, originally called Everlasting Heritage) sh2 (shrunken-2) There are currently hundreds of cultivars, with more constantly being developed. Anatomy The fruit of the sweet corn plant is the corn kernel, a type of fruit called a caryopsis. The ear is a collection of kernels on the cob. Because corn is a monocot, there is always an even number of rows of kernels. The ear is covered by tightly wrapped leaves called the husk. Silk is the name for the pistillate flowers, which emerge from the husk. The husk and silk are removed by hand, before boiling but not necessarily before roasting, in a process called husking or shucking. Consumption In most of Latin America, sweet corn is traditionally eaten with beans. Although both corn and beans contain all 9 essential amino acids, eating a wide variety of foods in one day that includes grains and beans ensures the right balance of essential amino acids. In Brazil, sweet corn cut off from the cobs is generally eaten with peas (where this combination, given the practicality of steamed canned grains in an urban diet, is a frequent addition to diverse meals such as salads, stews, seasoned white rice, risottos, soups, pasta, and whole sausage hot dogs). 
In Malaysia, there exists a variety unique to the Cameron Highlands named "pearl corn". The kernels are glossy white, resembling pearls, and can be eaten raw off the cob, although they are often boiled in water and salt. In the Philippines, boiled sweet corn kernels are served hot with margarine and cheese powder as an inexpensive snack sold by street vendors. Similarly, sweet corn in Indonesia is traditionally ground or soaked with milk, which makes available the B vitamin niacin in the corn, the absence of which would otherwise lead to pellagra. Cheese and condensed milk are added to sweet corn in the snack jasuke, short for jagung susu keju. In Brazil, a combination of ground sweet corn and milk is also the basis of various well-known dishes, such as pamonha and the pudding-like dessert , while sweet corn eaten directly off the cob tends to be served with butter. In Europe and Asia sweet corn is often used as a pizza topping or in salads. Corn on the cob is a sweet corn cob that has been boiled, steamed, or grilled whole; the kernels are then cut off and eaten or eaten directly off the cob. Creamed corn is sweet corn served in a milk or cream sauce. Sweet corn can also be eaten as baby corn. Corn soup can be made adding water, butter and flour, with salt and pepper for seasoning. In the United States, sweet corn is eaten as a steamed vegetable or on the cob, and is usually served with butter and salt. It can be found in Tex-Mex cooking in chili, tacos, and salads. Corn mixed and cooked with lima beans is one form of succotash. Sweet corn is one of the most popular vegetables in the United States, being most popular in the southern and central regions of the country, and can be purchased either fresh, canned, or frozen. Sweet corn ranks among the top ten vegetables in value and per capita consumption. If left to dry on the plant, kernels may be taken off the cob and cooked in oil where, unlike popcorn, they expand to about double the original kernel size and are often called corn nuts. Health benefits Cooking sweet corn increases levels of ferulic acid, which has anti-cancer properties. Cultivars Open pollinated (non-hybrid) corn has largely been replaced in the commercial market by sweeter, earlier hybrids, which also have the advantage of maintaining their sweet flavor longer. su cultivars are best when cooked within 30 minutes of harvest. Despite their short storage life, many open-pollinated cultivars such as 'Golden Bantam' remain popular for home gardeners and specialty markets or are marketed as heirloom seeds. Although less sweet, they are often described as more tender and flavorful than hybrids. Genetics Early cultivars, including those used by Native Americans, were the result of the mutant su ("sugary") or su1 () allele of an isoamylase. They contain about 5–10% sugar by weight. These varieties are juicy due to the phytoglycogen content, but they lose sugar quickly after harvest, with the content halving in 24 hours. Supersweet corn are cultivars of sweet corn which produce higher than normal levels of sugar developed by University of Illinois at Urbana–Champaign professor John Laughnan. He was investigating two specific genes in sweet corn, one of which, the sh2 mutation (, a Glucose-1-phosphate adenylyltransferase), caused the corn to shrivel when dry. After further investigation, Laughnan discovered that the endosperm of sh2 sweet corn kernels store less starch and from 4 to 10 times more sugar than normal su sweet corn. 
He published his findings in 1953, disclosing the advantages of growing supersweet sweet corn, but many corn breeders lacked enthusiasm for the new supersweet corn because the seed shriveling reduced the germination rate. Illinois Foundation Seeds Inc. was the first seed company to release a supersweet corn, called 'Illini Xtra Sweet', but widespread use of supersweet hybrids did not occur until the early 1980s. The popularity of supersweet corn rose due to its long shelf life and large sugar content when compared to conventional sweet corn. This has allowed the long-distance shipping of sweet corn and has enabled manufacturers to can sweet corn without adding extra sugar or salt. Breeding has resolved the germination rate issue, but it is still generally true that sh2 corn is less juicy than its su counterparts. sh2-i ("shrunken2-intermediate") cultivars under development exploit a different mutation on the same gene to try to create varieties that are both juicy and sweet. The third gene mutation to be discovered was the se (or se1) "sugary enhanced" allele, responsible for so-called "Everlasting Heritage" cultivars, such as 'Kandy Korn'. Cultivars with the se allele have a longer storage life and contain 12–20% sugar. The gene for Se1 has been located. All of the alleles responsible for sweet corn are recessive, so sweet corn must be isolated from other corn, such as field corn and popcorn, that releases pollen at the same time; the endosperm develops from genes from both parents, and heterozygous kernels will be tough and starchy. The se and su alleles do not need to be isolated from each other. However, supersweet cultivars containing the sh2 allele must be grown in isolation from other cultivars to avoid cross-pollination and resulting starchiness, either in space (various sources quote minimum quarantine distances from 100 to 400 feet or 30 to 120 m) or in time (i.e., the supersweet corn does not pollinate at the same time as other corn in nearby fields). Modern breeding methods have also introduced cultivars incorporating multiple gene types: sy (for synergistic) adds the sh2 gene to some kernels (usually 25%) on the same cob as a se base (either homozygous or heterozygous), while augmented sh2 adds the se and su genes to a sh2 parent. Often seed producers of the sy and augmented sh2 types will use brand names or trademarks to distinguish these cultivars instead of mentioning the genetics behind them. Generally these brands or trademarks will offer a choice of white, bi-color and yellow cultivars which otherwise have very similar characteristics. Genetically modified corn Genetically modified sweet corn is available to commercial growers to resist certain insects or herbicides, or both. Such transgenic varieties are not available to home or small acreage growers due to protocols that must be followed in their production.
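Because the sweetness alleles are recessive and the endosperm receives genetic material from both parents, a kernel pollinated by starchy field corn loses its sweetness, which is why the isolation distances above matter. The toy Python sketch below is a deliberately simplified, hypothetical model of that single point (real maize endosperm genetics, with its triploid endosperm and multiple loci, is more involved):

def kernel_is_sweet(maternal_allele, paternal_allele):
    # "su" = recessive sugary allele, "Su" = dominant starchy allele.
    # A kernel stays sweet only if no dominant starchy allele is present.
    return maternal_allele == "su" and paternal_allele == "su"

print(kernel_is_sweet("su", "su"))  # True: pollinated by other sweet corn
print(kernel_is_sweet("su", "Su"))  # False: field-corn pollen makes the kernel starchy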
Biology and health sciences
Grains
Plants
89260
https://en.wikipedia.org/wiki/Boeing%20777
Boeing 777
The Boeing 777, commonly referred to as the Triple Seven, is an American long-range wide-body airliner developed and manufactured by Boeing Commercial Airplanes. The 777 is the world's largest twinjet and the most-built wide-body airliner. The jetliner was designed to bridge the gap between Boeing's other wide body airplanes, the twin-engined 767 and quad-engined 747, and to replace aging DC-10 and L-1011 trijets. Developed in consultation with eight major airlines, the 777 program was launched in October 1990, with an order from United Airlines. The prototype aircraft rolled out in April 1994, and first flew in June of that year. The 777 entered service with the launch operator United Airlines in June 1995. Longer-range variants were launched in 2000, and first delivered in 2004. The 777 can accommodate a ten–abreast seating layout and has a typical 3-class capacity of 301 to 368 passengers, with a range of . The jetliner is recognizable for its large-diameter turbofan engines, raked wingtips, six wheels on each main landing gear, fully circular fuselage cross-section, and a blade-shaped tail cone. The 777 became the first Boeing airliner to use fly-by-wire controls and to apply a carbon composite structure in the tailplanes. The original 777 with a maximum takeoff weight (MTOW) of was produced in two fuselage lengths: the initial 777-200 was followed by the extended-range -200ER in 1997; and the longer 777-300 in 1998. These have since been known as 777 Classics and were powered by General Electric GE90, Pratt & Whitney PW4000, or Rolls-Royce Trent 800 engines. The extended-range 777-300ER, with a MTOW of , entered service in 2004, the longer-range 777-200LR in 2006, and the 777F freighter in 2009. These second-generation 777 variants have extended raked wingtips and are powered exclusively by GE90 engines. In November 2013, Boeing announced the development of the third generation 777X (variants include the 777-8, 777-9, and 777-8F), featuring composite wings with folding wingtips and General Electric GE9X engines, and slated for first deliveries in 2026. , Emirates was the largest operator with a fleet of 163 aircraft. , more than 60 customers have placed orders for 2,303 Triple Sevens across all variants, of which 1,741 have been delivered. This makes the 777 the best-selling wide-body airliner, while its best-selling variant is the 777-300ER with 833 delivered. The airliner initially competed with the Airbus A340 and McDonnell Douglas MD-11; since 2015 it has mainly competed with the Airbus A350. First-generation 777-200 variants are to be supplanted by Boeing's 787 Dreamliner. , the 777 has been involved in 31 aviation accidents and incidents, including five hull loss accidents out of eight total hull losses with 542 fatalities including 3 ground casualties. Development Background In the early 1970s, the Boeing 747, McDonnell Douglas DC-10, and the Lockheed L-1011 TriStar became the first generation of wide-body passenger airliners to enter service. In 1978, Boeing unveiled three new models: the twin-engine or twinjet Boeing 7N7 (later named Boeing 757) to replace its 727, the twinjet Boeing 7X7 (later named 767 to challenge the Airbus A300, and a trijet "777" concept to compete with the DC-10 and L-1011. The mid-size 757 and 767 launched to market success, due in part to 1980s' extended-range twin-engine operational performance standards (ETOPS) regulations governing transoceanic twinjet operations. 
These regulations allowed twin-engine airliners to make ocean crossings at up to three hours' distance from emergency diversionary airports. Under ETOPS rules, airlines began operating the 767 on long-distance overseas routes that did not require the capacity of larger airliners. The trijet "777" was later dropped, following marketing studies that favored the 757 and 767 variants. Boeing was left with a size and range gap in its product line between the 767-300ER and the 747-400. By the late 1980s, DC-10 and L-1011 models were expected to be retired in the next decade, prompting manufacturers to develop replacement designs. McDonnell Douglas was working on the MD-11, a stretched successor of the DC-10, while Airbus was developing its A330 and A340 series. In 1986, Boeing unveiled proposals for an enlarged 767, tentatively named 767-X, to target the replacement market for first-generation wide-bodies such as the DC-10, and to complement existing 767 and 747 models in the company lineup. The initial proposal featured a longer fuselage and larger wings than the existing 767, along with winglets. Later plans expanded the fuselage cross-section but retained the existing 767 flight deck, nose, and other elements. However, airline customers were uninterested in the 767-X proposals, and instead wanted an even wider fuselage cross-section, fully flexible interior configurations, short- to intercontinental-range capability, and an operating cost lower than that of any 767 stretch. Airline planners' requirements for larger aircraft had become increasingly specific, adding to the heightened competition among aircraft manufacturers. By 1988, Boeing realized that the only answer was a clean-sheet design, which became the twinjet 777. The company opted for the twin-engine configuration given past design successes, projected engine developments, and reduced-cost benefits. On December 8, 1989, Boeing began issuing offers to airlines for the 777. Design effort Alan Mulally served as the Boeing 777 program's director of engineering, and then was promoted in September 1992 to lead it as vice-president and general manager. The design phase of the all-new twinjet was different from Boeing's previous jetliners; eight major airlines (All Nippon Airways, American Airlines, British Airways, Cathay Pacific, Delta Air Lines, Japan Airlines, Qantas, and United Airlines) played a role in the 777 development. This was a departure from industry practice, where manufacturers typically designed aircraft with minimal customer input. The eight airlines that contributed to the design process became known within Boeing as the "Working Together" group. At the group's first meeting in January 1990, a 23-page questionnaire was distributed to the airlines, asking what each wanted in the design. By March 1990, the group had decided upon a baseline configuration: a cabin cross-section close to the 747's, capacity up to 325 passengers, flexible interiors, a glass cockpit, fly-by-wire controls, and 10 percent better seat-mile costs than the Airbus A330 and McDonnell Douglas MD-11. The development phase of the 777 coincided with United Airlines's replacement program for its aging DC-10s. On October 14, 1990, United became the launch customer with an order for 34 Pratt & Whitney-powered 777s valued at US$11 billion (~$ in ) and options for 34 more. The airline required that the new aircraft be capable of flying three different routes: Chicago to Hawaii, Chicago to Europe, and non-stop from Denver, a hot and high airport, to Hawaii. 
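As a rough idea of the distances behind those route requirements (a back-of-the-envelope sketch using approximate airport coordinates, not figures from the source), the Denver–Honolulu great-circle distance can be estimated with the haversine formula:

import math

def haversine_nmi(lat1, lon1, lat2, lon2):
    # Great-circle distance in nautical miles between two latitude/longitude points.
    r_nmi = 3440.065  # mean Earth radius in nautical miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r_nmi * math.asin(math.sqrt(a))

# Approximate coordinates: Denver (DEN) and Honolulu (HNL)
print(round(haversine_nmi(39.86, -104.67, 21.32, -157.92)))  # roughly 2,900 nmi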
ETOPS certification was also a priority for United, given the overwater portion of United's Hawaii routes. In late 1991, Boeing selected its Everett factory in Washington, home of 747 and 787 production, as the 777's final assembly line (FAL). In January 1993, a team of United developers joined other airline teams and Boeing designers at the Everett factory. The 240 design teams, with up to 40 members each, addressed almost 1,500 design issues with individual aircraft components. The fuselage diameter was increased to suit Cathay Pacific, the baseline model grew longer for All Nippon Airways, and British Airways' input led to added built-in testing and interior flexibility, along with higher operating weight options. The 777 was the first commercial aircraft to be developed using an entirely computer-aided design (CAD) process. Each design drawing was created on a three-dimensional CAD software system known as CATIA, sourced from Dassault Systèmes and IBM. This allowed engineers to virtually assemble the 777 aircraft on a computer system to check for interference and verify that the thousands of parts fit properly before the actual assembly process—thus reducing costly rework. Boeing developed its high-performance visualization system, FlyThru, later called IVT (Integrated Visualization Tool) to support large-scale collaborative engineering design reviews, production illustrations, and other uses of the CAD data outside of engineering. Boeing was initially not convinced of CATIA's abilities and built a physical mock-up of the nose section to verify its results. The test was so successful that additional mock-ups were canceled. The 777 was completed with such precision that it was the first Boeing jetliner that did not require the details to be worked out on an expensive physical aircraft mock-up. This helped the design program to limit costs to a reported $5 billion. Testing and certification Major assembly of the first aircraft began on January 4, 1993. On April 9, 1994, the first 777, number WA001, was rolled out in a series of 15 ceremonies held during the day to accommodate the 100,000 invited guests. The first flight took place on June 12, 1994, under the command of chief test pilot John E. Cashman. This marked the start of an 11-month flight test program that was more extensive than testing for any previous Boeing model. Nine aircraft fitted with General Electric, Pratt & Whitney, and Rolls-Royce engines were flight tested at locations ranging from the desert airfield at Edwards Air Force Base in California to frigid conditions in Alaska, mainly Fairbanks International Airport. To satisfy ETOPS requirements, eight 180-minute single-engine test flights were performed. The first aircraft built was used by Boeing's nondestructive testing campaign from 1994 to 1996, and provided data for the -200ER and -300 programs. At the successful conclusion of flight testing, the 777 was awarded simultaneous airworthiness certification by the US Federal Aviation Administration (FAA) and European Joint Aviation Authorities (JAA) on April 19, 1995. Entry into service Boeing delivered the first 777 to United Airlines on May 15, 1995. The FAA awarded 180-minute ETOPS clearance ("ETOPS-180") for the Pratt & Whitney PW4084-engined aircraft on May 30, 1995, making it the first airliner to carry an ETOPS-180 rating at its entry into service. The first commercial flight took place on June 7, 1995, from London Heathrow Airport to Dulles International Airport near Washington, D.C. 
Longer ETOPS clearance of 207 minutes was approved in October 1996. On November 12, 1995, Boeing delivered the first model with General Electric GE90-77B engines to British Airways, which entered service five days later. Initial service was affected by gearbox bearing wear issues, which caused British Airways to temporarily withdraw its 777 fleet from transatlantic service in 1997, returning to full service later that year. General Electric subsequently announced engine upgrades. The first Rolls-Royce Trent 877-powered aircraft was delivered to Thai Airways International on March 31, 1996, completing the introduction of the three powerplants initially developed for the airliner. Each engine-aircraft combination had secured ETOPS-180 certification from its entry into service. By June 1997, orders for the 777 numbered 323 from 25 airlines, including launch customers that had ordered additional aircraft. Operations performance data established the consistent capabilities of the twinjet over long-haul transoceanic routes, leading to additional sales. By 1998, the 777 fleet had approached 900,000 flight hours. Boeing states that the 777 fleet has a dispatch reliability (rate of departure from the gate with no more than 15 minutes delay due to technical issues) above 99 percent. Improvement and stretching: -200ER/-300 After the baseline model, the 777-200, Boeing developed an increased gross weight variant with greater range and payload capability. Initially named 777-200IGW, the 777-200ER first flew on October 7, 1996, received FAA and JAA certification on January 17, 1997, and entered service with British Airways on February 9, 1997. Offering greater long-haul performance, the variant became the most widely ordered version of the aircraft through the early 2000s. On April 2, 1997, a Malaysia Airlines -200ER named "Super Ranger" broke the great circle "distance without landing" record for an airliner by flying eastward from Boeing Field, Seattle to Kuala Lumpur, a distance of , in 21 hours and 23 minutes. Following the introduction of the -200ER, Boeing turned its attention to a stretched version of the baseline model. On October 16, 1997, the 777-300 made its first flight. At in length, the -300 became the longest airliner yet produced (until the A340-600), and had a 20 percent greater overall capacity than the standard length model. The -300 was awarded type certification simultaneously from the FAA and JAA on May 4, 1998, and entered service with launch customer Cathay Pacific on May 27, 1998. The first generation of Boeing 777 models, the -200, -200ER, and -300 have since been known collectively as Boeing 777 Classics. These three early 777 variants had three engine options ranging from : General Electric GE90, Pratt & Whitney PW4000, or Rolls-Royce Trent 800. Production The production process included substantial international content, an unprecedented level of global subcontracting for a Boeing jetliner, later exceeded by the 787. International contributors included Mitsubishi Heavy Industries and Kawasaki Heavy Industries (fuselage panels), Fuji Heavy Industries, Ltd. (center wing section), Hawker de Havilland (elevators), and Aerospace Technologies of Australia (rudder). An agreement between Boeing and the Japan Aircraft Development Corporation, representing Japanese aerospace contractors, made the latter risk-sharing partners for 20 percent of the entire development program. 
To accommodate production of its new airliner, Boeing doubled the size of the Everett factory at the cost of nearly US$1.5 billion (~$ in ) to provide space for two new assembly lines. New production methods were developed, including a turn machine that could rotate fuselage subassemblies 180 degrees, giving workers access to upper body sections. By the start of production in 1993, the program had amassed 118 firm orders, with options for 95 more from 10 airlines. Total investment in the program was estimated at over $4 billion from Boeing, with an additional $2 billion from suppliers. Initially second to the 747 as Boeing's most profitable jetliner, the 777 became the company's most lucrative model in the 2000s. One analyst estimated that the 777 program, assuming Boeing had fully recouped the plane's development costs, accounted for $400 million of the company's pretax earnings in 2000, $50 million more than the 747. By 2004, the airliner accounted for the bulk of wide-body revenues for Boeing Commercial Airplanes. In 2007, orders for second-generation 777 models approached 350 aircraft, and in November of that year, Boeing announced that all production slots were sold out to 2012. The program backlog of 356 orders was valued at $95 billion at list prices in 2008. In 2010, Boeing announced plans to increase production from 5 aircraft per month to 7 aircraft per month by mid-2011, and 8.3 per month by early 2013. In November 2011, assembly began on the 1,000th 777, a -300ER, at a time when it took 49 days to fully assemble one of these variants. The aircraft in question was built for Emirates airline, and rolled out of the production facility in March 2012. By the mid-2010s, the 777 had become prevalent on the longest international flights and had become the most widely used airliner for transpacific routes, with variants of the type operating over half of all scheduled flights and with the majority of transpacific carriers. By February 2015, the backlog of undelivered 777s totaled 278 aircraft, equivalent to nearly three years at the then production rate of 8.3 aircraft per month, causing Boeing to weigh its production plans for the 2018–2020 time frame. In January 2016, Boeing confirmed plans to reduce the production rate of the 777 family from 8.3 per month to 7 per month in 2017 to help close the production gap between the 777 and 777X due to a lack of new orders. In August 2017, Boeing was scheduled to drop 777 production again to five per month. In 2018, assembling test 777-9 aircraft was expected to lower output to an effective rate of 5.5 per month. In March 2018, as previously predicted, the 777 overtook the 747 as the world's most produced wide-body aircraft. Due to the impact of the COVID-19 pandemic on aviation, demand for new jets fell in 2020 and Boeing further reduced monthly 777 production from five to two aircraft. Second generation (777-X): -300ER/-200LR/F From the program's start, Boeing had considered building ultra-long-range variants. Early plans centered on a 777-100X proposal, a shortened variant of the -200 with reduced weight and increased range, similar to the 747SP. However, the -100X would have carried fewer passengers than the -200 while having similar operating costs, leading to a higher cost per seat.
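Two of the figures above lend themselves to quick arithmetic checks. The short Python sketch below uses only numbers quoted in the text for the backlog calculation, while the cost-per-seat illustration of the -100X problem uses made-up placeholder values:

# Backlog in February 2015: 278 undelivered aircraft at 8.3 built per month.
print(278 / 8.3 / 12)   # about 2.8 years, i.e. "nearly three years"

# Why the shortened 777-100X looked unattractive: a similar trip cost spread
# over fewer seats gives a higher cost per seat (both values are hypothetical).
def cost_per_seat(trip_cost, seats):
    return trip_cost / seats

print(cost_per_seat(100_000, 305))  # baseline -200-sized cabin
print(cost_per_seat(100_000, 250))  # shortened -100X-sized cabin: higher per seat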
By the late 1990s, design plans shifted to longer-range versions of existing models. In March 1997, the Boeing board approved the 777-200X/300X specifications: 298 passengers in three classes over 8,600 nmi (15,900 km; ) for the 200X and (12,200 km; ) with 355 passengers in a tri-class layout for the 300X, with design freeze planned in May 1998, 200X certification in August 2000, and introduction in September and in January 2001 for the 300X. The wider wing was to be strengthened and the fuel capacity enlarged, and it was to be powered by simple derivatives with similar fans. GE was proposing a GE90-102B, while P&W offered its PW4098 and R-R was proposing a Trent 8100. Rolls-Royce was also studying a Trent 8102 over . Boeing was also studying a semi-levered, articulated main gear to help the take-off rotation of the proposed -300X, with its higher MTOW. By January 1999, its MTOW grew to , and thrust requirements increased to . A more powerful engine in the thrust class of was required, leading to talks between Boeing and engine manufacturers. General Electric offered to develop the GE90-115B engine, while Rolls-Royce proposed developing the Trent 8104 engine. In 1999, Boeing announced an agreement with General Electric, beating out rival proposals. Under the deal with General Electric, Boeing agreed to only offer GE90 engines on new 777 versions. On February 29, 2000, Boeing launched its next-generation twinjet program, initially called 777-X, and began issuing offers to airlines. Development was slowed by an industry downturn during the early 2000s. The first model to emerge from the program, the 777-300ER, was launched with an order for ten aircraft from Air France, along with additional commitments. On February 24, 2003, the -300ER made its first flight, and the FAA and EASA (European Aviation Safety Agency, successor to the JAA) certified the model on March 16, 2004. The first delivery to Air France took place on April 29, 2004. The -300ER, which combined the -300's added capacity with the -200ER's range, became the top-selling 777 variant in the late 2000s, benefitting as airlines replaced comparable four-engine models with twinjets for their lower operating costs. The second long-range model, the 777-200LR, rolled out on February 15, 2005, and completed its first flight on March 8, 2005. The -200LR was certified by both the FAA and EASA on February 2, 2006, and the first delivery to Pakistan International Airlines occurred on February 26, 2006. On November 10, 2005, the first -200LR set a record for the longest non-stop flight of a passenger airliner by flying eastward from Hong Kong to London. Lasting 22 hours and 42 minutes, the flight surpassed the -200LR's standard design range and was logged in the Guinness World Records. The production freighter model, the 777F, rolled out on May 23, 2008. The maiden flight of the 777F, which used the structural design and engine specifications of the -200LR along with fuel tanks derived from the -300ER, occurred on July 14, 2008. FAA and EASA type certification for the freighter was received on February 6, 2009, and the first delivery to launch customer Air France took place on February 19, 2009. By the late 2000s, the 777 was facing increased potential competition from Airbus' planned A350 XWB and internally from proposed 787 series, both airliners that offer fuel efficiency improvements. As a consequence, the 777-300ER received engine and aerodynamics improvement packages for reduced drag and weight. 
In 2010, the variant further received a maximum zero-fuel weight increase, equivalent to a higher payload of 20–25 passengers; its GE90-115B1 engines received a 1–2.5 percent thrust enhancement for increased takeoff weights at higher-altitude airports. Through these improvements, the 777 remains the largest twin-engine jetliner in the world. In 2011, the 787 Dreamliner entered service, the completed first stage a.k.a. the Yellowstone-2 (Y2) of a replacement aircraft initiative called the Boeing Yellowstone Project, which would replace large variants of the 767 (300/300ER/400) but also small variants of the 777 (-200/200ER/200LR). While the larger variants of the 777 (-300/300ER) as well as the 747 could eventually be replaced by a new generation aircraft, the Yellowstone-3 (Y3), which would draw upon technologies from the 787 Dreamliner (Y2). More changes were targeted for late 2012, including possible extension of the wingspan, along with other major changes, including a composite wing, a new generation engine, and different fuselage lengths. Emirates was reportedly working closely with Boeing on the project, in conjunction with being a potential launch customer for the new 777 generation. Among customers for the aircraft during this period, China Airlines ordered ten 777-300ER aircraft to replace 747-400s on long-haul transpacific routes (with the first of those aircraft entering service in 2015), noting that the 777-300ER's per seat cost is about 20% lower than the 747's costs (varying due to fuel prices). Improvement packages In tandem with the development of the third generation Boeing 777X, Boeing worked with General Electric to offer a 2% improvement in fuel efficiency to in-production 777-300ER aircraft. General Electric improved the fan module and the high-pressure compressor stage-1 blisk in the GE-90-115 turbofan, as well as reduced clearances between the tips of the turbine blades and the shroud during cruise. These improvements, of which the latter is the most important and was derived from work to develop the 787, were stated by GE to lower fuel burn by 0.5%. Boeing's wing modifications were intended to deliver the remainder. Boeing stated that every 1% improvement in the 777-300ER's fuel burn translates into being able to fly the aircraft another on the same load of fuel, or add ten passengers or of cargo to a "load limited" flight. In March 2015, additional details of the "improvement package" were unveiled. The 777-300ER was to shed by replacing the fuselage crown with tie rods and composite integration panels, similar to those used on the 787. The new flight control software would eliminate the need for the tail skid by keeping the tail off the runway surface regardless of the extent to which pilots command the elevators. Boeing was also redesigning the inboard flap fairings to reduce drag by reducing pressure on the underside of the wing. The outboard raked wingtip was to have a divergent trailing edge, described as a "poor man's airfoil" by Boeing; this was originally developed for the McDonnell Douglas MD-12 project. Another change involved elevator trim bias. These changes were to increase fuel efficiency and allow airlines to add 14 additional seats to the airplane, increasing per seat fuel efficiency by 5%. Mindful of the long time required to bring the 777X to the market, Boeing continued to develop improvement packages which improve fuel efficiency, as well as lower prices for the existing product. 
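How extra seats and a small fuel-burn reduction combine into a per-seat efficiency gain can be sketched with simple arithmetic; in the hedged Python example below, the baseline seat count and the assumed burn reduction are placeholders rather than Boeing figures, chosen only to show that 14 added seats plus a modest burn improvement lands near the quoted 5%:

base_seats = 365          # assumed baseline seat count
extra_seats = 14          # additional seats quoted in the text
burn_reduction = 0.02     # assumed reduction in trip fuel burn

per_seat_before = 1.0 / base_seats
per_seat_after = (1.0 - burn_reduction) / (base_seats + extra_seats)
print(1 - per_seat_after / per_seat_before)   # ~0.056, i.e. roughly a 5% per-seat gain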
In January 2015, United Airlines ordered ten 777-300ERs; the aircraft, which normally sold for around $150 million each, were purchased for $130 million each, a discount intended to bridge the production gap to the 777X. In 2019, the list price for the -200ER was $306.6 million, the -200LR $346.9 million, the -300ER $375.5 million, and the 777F $352.3 million. The -200ER is the only Classic variant listed. Third generation (777X): -8/-8F/-9 In November 2013, with orders and commitments totaling 259 aircraft from Lufthansa, Emirates, Qatar Airways, and Etihad Airways, Boeing formally launched the 777X program, the third generation of the 777, with two models: the 777-8 and 777-9. The 777-9 is a further stretched variant with a capacity of over 400 passengers and a range of over , whereas the 777-8 is slated to seat approximately 350 passengers and have a range of over . Both models are to be equipped with new generation GE9X engines and feature new composite wings with folding wingtips. The first member of the 777X family was projected to enter service in 2020 at the time of the program announcement. The roll-out of the prototype 777X, a 777-9 model, occurred on March 13, 2019. The 777-9 first flew on January 25, 2020, with deliveries initially forecast for 2022 or 2023 and later delayed to 2025. Design Boeing introduced a number of advanced technologies with the 777 design, including fully digital fly-by-wire controls, fully software-configurable avionics, Honeywell LCD glass cockpit flight displays, and the first use of a fiber optic avionics network on a commercial airliner. Boeing made use of work done on the cancelled Boeing 7J7 regional jet, which utilized similar versions of the chosen technologies. In 2003, Boeing began offering the option of cockpit electronic flight bag computer displays. In 2013, Boeing announced that the upgraded 777X models would incorporate airframe, systems, and interior technologies from the 787. Fly-by-wire In designing the 777 as its first fly-by-wire commercial aircraft, Boeing decided to retain conventional control yokes rather than change to sidestick controllers as used in many fly-by-wire fighter aircraft and in many Airbus airliners. Along with traditional yoke and rudder controls, the cockpit features a simplified layout that retains similarities to previous Boeing models. The fly-by-wire system also incorporates flight envelope protection, a system that guides pilot inputs within a computer-calculated framework of operating parameters, acting to prevent stalls, overspeeds, and excessively stressful maneuvers. This system can be overridden by the pilot if deemed necessary. The fly-by-wire system is supplemented by mechanical backup. Airframe and systems The airframe incorporates the use of composite materials, accounting for nine percent of the original structural weight, while the third-generation models, the 777-8 and 777-9, feature more composite parts. Composite components include the cabin floor and rudder, with the 777 being the first Boeing airliner to use composite materials for both the horizontal and vertical stabilizers (empennage). The main fuselage cross-section is fully circular, and tapers rearward into a blade-shaped tail cone with a port-facing auxiliary power unit. The wings on the 777 feature a supercritical airfoil design that is swept back at 31.6 degrees and optimized for cruising at Mach 0.83 (revised after flight tests up to Mach 0.84).
The wings are designed with increased thickness and a longer span than previous airliners, resulting in greater payload and range, improved takeoff performance, and a higher cruising altitude. The wings also serve as fuel storage, with longer-range models able to carry up to of fuel. This capacity allows the 777-200LR to operate ultra-long-distance, trans-polar routes such as Toronto to Hong Kong. In 2013, a new wing made of composite materials was introduced for the upgraded 777X, with a wider span and design features based on the 787's wings. Folding wingtips, long, were offered when the 777 was first launched, to appeal to airlines who might use gates made to accommodate smaller aircraft, but no airline purchased this option. Folding wingtips reemerged as a design feature at the announcement of the upgraded 777X in 2013. Smaller folding wingtips of in length will allow 777X models to use the same airport gates and taxiways as earlier 777s. These smaller folding wingtips are less complex than those proposed for earlier 777s, and internally only affect the wiring needed for wingtip lights. The aircraft features the largest landing gear and the biggest tires ever used in a commercial jetliner. The six-wheel bogies are designed to spread the load of the aircraft over a wide area without requiring an additional centerline gear. This helps reduce weight and simplifies the aircraft's braking and hydraulic systems. Each tire of a 777-300ER six-wheel main landing gear can carry a load of , which is heavier than other wide-bodies such as the 747-400. The aircraft has triple redundant hydraulic systems with only one system required for landing. A ram air turbine—a small retractable device which can provide emergency power—is also fitted in the wing root fairing. Interior The original 777 interior, also known as the Boeing Signature Interior, features curved panels, larger overhead bins, and indirect lighting. Seating options range from four to six–abreast in first class up to ten–abreast in economy. The 777's windows were the largest of any current commercial airliner until the 787, and measure for all models outside the 777-8 and -9. The cabin also features "Flexibility Zones", which entails deliberate placement of water, electrical, pneumatic, and other connection points throughout the interior space, allowing airlines to move seats, galleys, and lavatories quickly and more easily when adjusting cabin arrangements. Several aircraft have also been fitted with VIP interiors for non-airline use. Boeing designed a hydraulically damped toilet seat cover hinge that closes slowly. In February 2003, Boeing introduced overhead crew rests as an option on the 777. Located above the main cabin and connected via staircases, the forward flight crew rest contains two seats and two bunks, while the aft cabin crew rest features multiple bunks. The Signature Interior has since been adapted for other Boeing wide-body and narrow-body aircraft, including 737NG, 747-400, 757-300, and newer 767 models, including all 767-400ER models. The 747-8 and 767-400ER have also adopted the larger, more rounded windows of the original 777. In July 2011, Flight International reported that Boeing was considering replacing the Signature Interior on the 777 with a new interior similar to that on the 787, as part of a move towards a "common cabin experience" across all Boeing platforms. 
With the launch of the 777X in 2013, Boeing confirmed that the aircraft would be receiving a new interior featuring 787 cabin elements and larger windows. Further details released in 2014 included re-sculpted cabin sidewalls for greater interior room, noise-damping technology, and higher cabin humidity. Air France has a 777-300ER sub-fleet with 472 seats each, more than any other international 777, to achieve a cost per available seat kilometer (CASK) around €.05, similar to Level's 314-seat Airbus A330-200, its benchmark for low-cost, long-haul. Competing on similar French overseas departments destinations, Air Caraïbes has 389 seats on the A350-900 and 429 on the -1000. French Bee's is even more dense with its 411 seats A350-900, due to 10-abreast economy seating, reaching a €.04 CASK according to Air France, and lower again with its 480 seats on the -1000. Engines The initial 777 models (consisting of the 777-200, 777-200ER, 777-300) were launched with propulsion options from three manufacturers, GE Aviation, Pratt & Whitney, and Rolls-Royce, giving the airlines their choice of engines from competing firms. Each manufacturer agreed to develop an engine in the of thrust class for the world's largest twinjet, resulting in the General Electric GE90, Pratt & Whitney PW4000, or Rolls-Royce Trent 800 engines. The Trent 800 is the lightest of the three powerplants as it weighs 13,400 lb (6.078 t) dry, while the GE90 is , and the PW4000 is . For Boeing's second-generation 777 variants (777-300ER, 777-200LR, and 777F) greater thrust was needed to meet the aircraft requirements, and GE was selected as the exclusive engine manufacturer. The higher-thrust variants, GE90-110B1 and -115B, have a different architecture from that of the earlier GE90 versions. GE incorporated an advanced larger diameter fan made from composite materials which enhanced thrust at low flight speeds. However, GE also needed to increase core power to improve net thrust at high flight speeds. Consequently, GE elected to increase core capacity, which they achieved by removing one stage from the rear of the HP compressor and adding an additional stage to the LP compressor, which more than compensated for the reduction in HP compressor pressure ratio, resulting in a net increase in core mass flow. The higher-thrust GE90 variants are the first production engines to feature swept rotor blades. The nacelle has a maximum diameter of . Each of the 22 fan blades on the GE90-115B have a length of and a mass of less than . Variants Boeing uses two characteristics – fuselage length and range – to define its 777 models. Passengers and cargo capacity varies by fuselage length: the 777-300 has a stretched fuselage compared to the base 777-200. Three range categories were defined: the A-market would cover domestic and regional operations, the B-market would cover routes from Europe to the US West coast and the C-market the longest transpacific routes. The A-market would be covered by a (7,800 km; ) range, MTOW aircraft for 353 to 374 passengers powered by engines, followed by a (12,200 km; ) B-market range for 286 passengers in three-class, with unit thrust and of MTOW, an A340 competitor, basis of an A-market 409 to 434 passengers stretch, and eventually a (14,000 km; ) C-market with engines. When referring to different variants, the International Air Transport Association (IATA) code collapses the 777 model designator and the -200 or -300 variant designator to "772" or "773". 
The International Civil Aviation Organization (ICAO) aircraft type designator system adds a preceding manufacturer letter, in this case "B" for Boeing, hence "B772" or "B773". Designations may append a range identifier like "B77W" for the 777-300ER by the ICAO, "77W" for the IATA, though the -200ER is a company marketing designation and not certificated as such. Other notations include "773ER" and "773B" for the -300ER. 777-200 The initial 777-200 made its maiden flight on June 12, 1994, and was first delivered to United Airlines on May 15, 1995. With a 545,000 lb (247 t) MTOW and engines, it has a range of with 305 passenger seats in a three-class configuration. The -200 was primarily aimed at US domestic airlines, although several Asian carriers and British Airways have also operated the type. Nine different -200 customers have taken delivery of 88 aircraft, with 55 in airline service . The competing Airbus aircraft was the A330-300. In March 2016, United Airlines shifted operations with all 19 of its -200s to exclusively domestic US routes, including flights to and from Hawaii, and added more economy class seats by shifting to a ten-abreast configuration (a pattern that matched American Airlines' reconfiguration of the type). , Boeing no longer markets the -200, as indicated by its removal from the manufacturer's price listings for 777 variants. 777-200ER The B-market 777-200ER ("ER" for Extended Range), originally known as the 777-200IGW (increased gross weight), has additional fuel capacity and an increased MTOW enabling transoceanic routes. With a 658,000 lb (298 t) MTOW and engines, it has a range with 301 passenger seats in a three-class configuration. It was delivered first to British Airways on February 6, 1997. Thirty-three customers received 422 deliveries, with no unfilled orders . , 338 examples of the -200ER are in airline service. It competed with the A340-300. Boeing proposed the 787-10 to replace it. The value of a new -200ER rose from US$110 million at service entry to US$130 million in 2007; a 2007 model 777 was selling for US$30 million ten years later, while the oldest ones had a value around US$5–6 million, depending on the remaining engine time. The engine can be delivered de-rated with reduced engine thrust for shorter routes to lower the MTOW, reduce purchase price and landing fees (as 777-200 specifications) but can be re-rated to full standard. Singapore Airlines ordered over half of its -200ERs de-rated. 777-200LR Worldliner The 777-200LR Worldliner ("LR" for Long Range), the C-market model, entered service in 2006 as one of the longest-range commercial airliners. Boeing named it Worldliner as it can connect almost any two airports in the world, although it is still subject to ETOPS restrictions. It holds the world record for the longest nonstop flight by a commercial airliner. It has a maximum design range of . The -200LR was intended for ultra long-haul routes such as Los Angeles to Singapore. Developed alongside the -300ER, the -200LR features an increased MTOW and three optional auxiliary fuel tanks in the rear cargo hold. Other new features include extended raked wingtips, redesigned main landing gear, and additional structural strengthening. As with the -300ER and 777F, the -200LR is equipped with wingtip extensions of 12.8 ft (3.90 m). The -200LR is powered by GE90-110B1 or GE90-115B turbofans. The first -200LR was delivered to Pakistan International Airlines on February 26, 2006. Twelve different -200LR customers took delivery of 61 aircraft. 
Airlines operated 50 of the -200LR variant . Emirates is the largest operator of the LR variant with 10 aircraft. The closest competing aircraft from Airbus are the discontinued A340-500HGW and the current A350-900ULR. 777 Freighter The 777 Freighter (777F) is an all-cargo version of the twinjet, and shares features with the -200LR; these include its airframe, engines, and fuel capacity. The 777F is unofficially referred to as 777-200LRF by some cargo airlines. With a maximum payload of (similar to the of the Boeing 747-200F), it has a maximum range of 9,750 nmi (18,057 km; )) or 4,970 nmi (9,200 km; )) at its max structural payload. The 777F also features a new supernumerary area, which includes four business-class seats forward of the rigid cargo barrier, full main deck access, bunks, and a galley. As the aircraft promises improved operating economics compared to older freighters, airlines have viewed the 777F as a replacement for freighters such as the Boeing 747-200F, McDonnell Douglas DC-10, and McDonnell Douglas MD-11F. The first 777F was delivered to Air France on February 19, 2009. , 247 freighters have been ordered by 25 different customers with 45 unfilled orders. Operators had 202 of the 777F in service . 777-300 Launched at the Paris Air Show on June 26, 1995, its major assembly started in March 1997 and its body was joined on July 21, it was rolled-out on September 8 and made its first flight on October 16. The 777-300 was designed to be stretched by 20%: 60 extra seats to 368 in a three-class configuration, 75 more to 451 in two classes, or up to 550 in all-economy like the 747SR. The stretch is done with in ten frames forward and in nine frames aft for a length, longer than the 747-400. It uses the -200ER fuel capacity and engines with a MTOW. It has ground maneuvering cameras for taxiing and a tailskid to rotate, while the proposed MTOW -300X would have needed a semi-levered main gear. Its overwing fuselage section 44 was strengthened, with its skin thickness going from the -200's , and received a new evacuation door pair. Its operating empty weight with Rolls-Royce engines in typical tri-class layout is compared to for a similarly configured -200. Boeing wanted to deliver 170 -300s by 2006 and to produce 28 per year by 2002, to replace early Boeing 747s, burning one-third less fuel with 40% lower maintenance costs. With a 660,000 lb (299 t) MTOW and engines, it has a range of with 368 passengers in three-class. Eight different customers have taken delivery of 60 aircraft of the variant, of which 18 were powered by the PW4000 and 42 by the RR Trent 800 (none were ordered with the GE90, which was never certified on this variant), with 48 in airline service . The last -300 was delivered in 2006 while the longer-range -300ER started deliveries in 2004. 777-300ER The 777-300ER ("ER" for Extended Range) is the B-market version of the -300. Its higher MTOW and increased fuel capacity permits a maximum range of with 392 passengers in a two-class seating arrangement. The 777-300ER features extended raked wingtips, a strengthened fuselage and wings and a modified main landing gear. Its wings have an aspect ratio of 9.0. It is powered by the GE90-115B turbofan, the world's most powerful jet engine with a maximum thrust of . Following flight testing, aerodynamic refinements have reduced fuel burn by an additional 1.4%. At , FL300, -59 °C and at a weight, it burns of fuel per hour. Its operating empty weight is . 
The projected operational empty weight is in airline configuration, at a weight of and FL350, total fuel flow is at , rising to at . Since its launch, the -300ER has been a primary driver of the airplane's sales past the rival A330/340 series. Its direct competitors have included the Airbus A340-600 and the A350-1000. Using two engines produces a typical operating cost advantage of around 8–9% for the -300ER over the A340-600. Several airlines have acquired the -300ER as a 747-400 replacement amid rising fuel prices given its 20% fuel burn advantage. The -300ER has an operating cost of $44 per seat hour, compared to an Airbus A380's roughly $50 per seat hour and $90 per seat hour for a Boeing 747-400 . The first 777-300ER was delivered to Air France on April 29, 2004. The -300ER is the best-selling 777 variant, with Emirates being the largest operator with 123 777-300ERs in service, having surpassed the -200ER in orders in 2010 and deliveries in 2013. , 784 -300ERs were in service; a total of 831 were built, with the last delivered to Aeroflot in 2021. Boeing ended 777-300ER production in 2024, and switched to the new 777X. 777X The third generation of the 777, launched as the 777X, is to feature new GE9X engines and new composite wings with folding wingtips. It was launched in November 2013 with two variants: the 777-8 and the 777-9. The 777-8 provides seating for 395 passengers and has a range of , while the 777-9 has seating for 426 passengers and a range of over . A longer 777-10X, a 777X Freighter, and a 777X BBJ variant have also been proposed. Government and corporate Versions of the 777 have been acquired by government and private customers. The main purpose has been for VIP transport, including as an air transport for heads of state, although the aircraft has also been proposed for other military applications. 777 Business Jet (777 VIP) – the Boeing Business Jet version of the 777 that is sold to corporate customers. Boeing has received orders for 777 VIP aircraft based on the 777-200LR and 777-300ER passenger models. The aircraft are fitted with private jet cabins by third party contractors, and completion may take three years. KC-777 – this was a proposed tanker version of the 777. In September 2006, Boeing announced that it would produce the KC-777 if the United States Air Force (USAF) required a larger tanker than the KC-767, able to transport more cargo or personnel. In April 2007, Boeing offered its 767-based KC-767 Advanced Tanker instead of the KC-777 to replace the smaller Boeing KC-135 Stratotanker under the USAF's KC-X program. Boeing officials have described the KC-777 as suitable for the related KC-Z program to replace the wide-body McDonnell Douglas KC-10 Extender. In 2014, the Japanese government chose to procure two 777-300ERs to serve as the official air transport for the Emperor of Japan and Prime Minister of Japan. The aircraft, operated by the Japan Air Self-Defense Force under the callsign Japanese Air Force One, entered service in 2019 and replaced two 747-400s; the 777-300ER was specifically selected by the Ministry of Defense owing to its similar capabilities to the preceding 747 pair. Besides VIP transport, the 777s are also intended for use in emergency relief missions. 777s are serving or have served as official government transports for nations including Gabon (VIP-configured 777-200ER), Turkmenistan (VIP-configured 777-200LR) and the United Arab Emirates (VIP-configured 777-200ER and 777-300ER operated by Abu Dhabi Amiri Flight). 
Prior to returning to power as Prime Minister of Lebanon, Rafic Hariri acquired a 777-200ER as an official transport. The Indian government purchased two Air India 777-300ERs and converted them for VVIP transport operated by the Indian Air Force under the callsign Air India One; they entered service in 2021 replacing the Air India-owned 747s. In 2014, the USAF examined the possibility of adopting modified 777-300ERs or 777-9Xs to replace the Boeing 747-200 aircraft used as Air Force One. Although the USAF had preferred a four-engine aircraft, this was mainly due to precedent (existing aircraft were purchased when the 767 was just beginning to prove itself with ETOPS; decades later, the 777 and other twin jets established a comparable level of performance to quad-jet aircraft). Ultimately, the Air Force decided against the 777, and selected the Boeing 747-8 to become the next presidential aircraft. Aftermarket freighter conversions In the 2000s, Boeing began studying the conversion of 777-200ER and -200 passenger airliners into freighters, under the name 777 BCF (Boeing Converted Freighter). The company has been in discussion with several airline customers, including FedEx Express, UPS Airlines, and GE Capital Aviation Services, to provide launch orders for a 777 BCF program. 777-300ER Special Freighter (SF) In July 2018, Boeing was studying a 777-300ER freighter conversion, targeted for the volumetric market instead of the density market served by the production 777F. After having considered a -200ER P2F program, Boeing was hoping to conclude its study by the Fall as the 777X replacing aging -300ERs from 2020 will generate feedstock. New-build 777-300ERs may maintain the delivery rate at five per month, to bridge the production gap until the 777X is delivered. Within the 811 777-300ERs delivered and 33 to be delivered by October 2019, GE Capital Aviation Services (GECAS) anticipates up to 150-175 orders through 2030, the four to five month conversion costing around $35 million. In October 2019, Boeing and Israeli Aerospace Industries (IAI) launched the 777-300ERSF passenger to freighter conversion program with GECAS ordering 15 aircraft and 15 options, the first aftermarket 777 freighter conversion program. In June 2020, IAI received the first 777-300ER to be converted, from GECAS. In October 2020, GECAS announced the launch operator from 2023: Michigan-based Kalitta Air, already operating 24 747-400Fs, nine 767-300ERFs and three 777Fs. IAI should receive the first aircraft in December 2020 while certification and service entry was scheduled for late 2022. By March 2023, IAI had completed the first flight of a 777-300ER Special Freighter, converted for AerCap, as it had a backlog over 60 orders. The 777-300ER Special Freighter has a maximum payload of , a range of and shares the door aperture and aft position of the 777F. It has a cargo volume capacity of , 5,800 cu ft (164 m3) greater than the 777F (or % more) and can hold 47 standard 96 x 125 in pallet (P6P) positions, 10 more positions than a 777F or eight more than a 747-400F. With windows plugged, passenger doors deactivated, fuselage and floor reinforced, and a main-deck cargo door installed, the 777-300ERSF has 15% more volume than a 747-400BCF. Experimental Boeing has used 777 aircraft in two research and development programs. 
The first program, the Quiet Technology Demonstrator (QTD) was run in collaboration with Rolls-Royce and General Electric to develop and validate engine intake and exhaust modifications, including the chevrons subsequently used in the 737 MAX, 747-8 and 787 series. The tests were flown in 2001 and 2005. A further program, the ecoDemonstrator series, is intended to test and develop technologies and techniques to reduce aviation's environmental impact. The program started in 2011, with the first ecoDemonstrator aircraft flying in 2012. Various airframes have been used since to test a wide variety of technologies in collaboration with a range of industrial partners. 777s have been used on three occasions as of 2024. The first of these, a 777F in 2018, performed the world's first commercial airliner flights using 100% sustainable aviation fuel (SAF). In 2022-4, the testbed is a 777-200ER. Operators Boeing customers that have received the most 777s are Emirates, Singapore Airlines, United Airlines, ILFC, and American Airlines. Emirates is the largest airline operator , and is the only customer to have operated all 777 variants produced, including the -200, -200ER, -200LR, -300, -300ER, and 777F. The 1,000th 777 off the production line, a -300ER set to be Emirates' 102nd 777, was unveiled at a factory ceremony in March 2012. A total of 1,416 aircraft (all variants) were in airline service , with Emirates (163), United Airlines (91), Air France (70), Cathay Pacific (69), American Airlines (67), Qatar Airways (67), British Airways (58), Korean Air (53), All Nippon Airways (50), Singapore Airlines (46), and other operators with fewer aircraft of the type. In 2017, 777 Classics are reaching the end of their mainline service: with a -200 age ranging from three to 22 years, 43 Classic 777s or 7.5% of the fleet have been retired. Values of 777-200ERs have declined by 45% since January 2014, faster than Airbus A330s and Boeing 767s with 30%, due to the lack of a major secondary market but only a few budget, air charters and ACMI operators. In 2015, Richard H. Anderson, then Delta Air Lines' chairman and chief executive, said he had been offered 777-200s for less than US$10 million. To keep them cost-efficient, operators densify their 777s for about US$10 million each, like Scoot with 402 seats in its dual-class -200s, or Cathay Pacific which switched the 3–3–3 economy layout of 777-300s to 3–4–3 to seat 396 on regional services. Orders and deliveries The 777 surpassed 2,000 orders by the end of 2018. Accidents and incidents , the 777 had been involved in 31 aviation accidents and incidents, including a total of eight hull losses (five in-flight accidents), resulting in 542 fatalities (including three fatalities due to ground casualties), along with three hijackings. The first fatality involving the twinjet occurred in a fire while an aircraft was being refueled at Denver International Airport in the United States on September 5, 2001, during which a ground worker sustained fatal burns. The aircraft, operated by British Airways, sustained fire damage to the lower wing panels and engine housing; it was later repaired and returned to service. The first hull loss occurred on January 17, 2008, when a 777-200ER with Rolls-Royce Trent 895 engines, flying from Beijing to London as British Airways Flight 38, crash-landed approximately short of Heathrow Airport's runway 27L and slid onto the runway's threshold. There were 47 injuries and no fatalities. 
The impact severely damaged the landing gear, wing roots and engines. The accident was attributed to ice crystals suspended in the aircraft's fuel clogging the fuel-oil heat exchanger (FOHE). Two other minor momentary losses of thrust with Trent 895 engines occurred later in 2008. Investigators found these were also caused by ice in the fuel clogging the FOHE. As a result, the heat exchanger was redesigned. The second hull loss occurred on July 29, 2011, when a 777-200ER scheduled to operate as EgyptAir Flight 667 suffered a cockpit fire while parked at the gate at Cairo International Airport before its departure. The aircraft was evacuated with no injuries, and airport fire teams extinguished the fire. The aircraft sustained structural, heat and smoke damage, and was written off. Investigators focused on a possible short circuit between an electrical cable and a supply hose in the cockpit crew oxygen system. The third hull loss occurred on July 6, 2013, when a 777-200ER, operating as Asiana Airlines Flight 214, crashed while landing at San Francisco International Airport after touching down short of the runway. The 307 surviving passengers and crew on board evacuated before fire destroyed the aircraft. Two passengers, who had not been wearing their seatbelts, were ejected from the aircraft during the crash and were killed. A third passenger died six days later as a result of injuries sustained during the crash. These were the first fatalities in a crash involving a 777 since its entry into service in 1995. The official accident investigation concluded in June 2014 that the pilots committed 20 to 30 minor to significant errors in their final approach. Deficiencies in Asiana Airlines' pilot training and in Boeing's documentation of complex flight control systems were also cited as contributory factors. The fourth hull loss occurred on March 8, 2014, when a 777-200ER carrying 227 passengers and 12 crew, en route from Kuala Lumpur to Beijing as Malaysia Airlines Flight 370, was reported missing. Air Traffic Control's last reported coordinates for the aircraft were over the South China Sea. After the search for the aircraft began, Malaysia's prime minister announced on March 24, 2014, that after analysis of new satellite data it was now to be assumed "beyond reasonable doubt" that the aircraft had crashed in the Indian Ocean and there were no survivors. The cause remains unknown, but the Malaysian Government in January 2015, declared it an accident. US officials believe the most likely explanation to be that someone in the cockpit of Flight 370 re-programmed the aircraft's autopilot to travel south across the Indian Ocean. On July 29, 2015, an item later identified as a flaperon from the still missing aircraft was found on the island of Réunion in the western Indian Ocean, consistent with having drifted from the main search area. The fifth hull loss occurred on July 17, 2014, when a 777-200ER, bound for Kuala Lumpur from Amsterdam as Malaysia Airlines Flight 17 (MH17), was shot down by an anti-aircraft missile while flying over eastern Ukraine. All 298 people (283 passengers and 15 crew) on board were killed, making this the deadliest crash involving the Boeing 777. The incident was linked to the ongoing War in Donbas. 
On the basis of the Dutch Safety Board and the Joint Investigation Team official conclusions of May 2018, the governments of the Netherlands and Australia hold Russia responsible for the deployment of the Buk missile system used in shooting down the airliner from territory held by pro-Russian separatists. The sixth hull loss occurred on August 3, 2016, when a 777-300 crashed while landing and caught fire at Dubai Airport at the end of its flight as Emirates Flight 521. The preliminary investigation indicated that the aircraft was attempting a landing during active wind shear conditions. The pilots initiated a go-around procedure shortly after the wheels touched-down onto the runway; however, the aircraft settled back onto the ground, apparently due to late throttle application. As the undercarriage was in the process of being retracted, the aircraft landed on its rear underbody and engine nacelles, resulting in the separation of one engine, loss of control and subsequent crash. There were no passenger casualties of the 300 people on board, but one airport fireman was killed fighting the fire. The aircraft's fuselage and right wing were irreparably damaged by the fire. The seventh hull loss occurred on November 29, 2017, when a Singapore Airlines 777-200ER experienced a fire while being towed at Singapore Changi Airport. An aircraft technician was the only occupant on board and evacuated safely. The aircraft sustained heat damage and was written off. Another fire occurred on July 22, 2020 to an Ethiopian Airlines 777F while at the cargo area of Shanghai Pudong International Airport. The aircraft sustained heat damage and was written off as the eighth hull loss. Media reports on legal proceedings attribute the fire to the ignition of chlorine dioxide disinfection tablets at high temperatures in a humid environment on ground. On February 20, 2021, a 777-200 operating as United Airlines Flight 328 suffered a failure of its starboard engine. The cowling and other engine parts fell over a Denver suburb. The captain declared an emergency and returned to land at the Denver airport. An immediate examination, before any formal investigation, found that two fan blades had broken off. One blade had suffered metal fatigue and may have chipped another blade, which also broke off. Boeing recommended suspending flights of all 128 operational 777s equipped with Pratt & Whitney PW4000 engines until they had been inspected. Several countries also restricted flights of PW4000-equipped 777s in their territory. In 2018, a similar issue occurred on United Airlines Flight 1175 from San Francisco to Hawaii involving another 777-200 equipped with the same engine type. On May 21, 2024, Singapore Airlines Flight 321, operated by a 777-300ER, encountered severe turbulence over Myanmar that injured 104 passengers and crew and led to the death of a passenger, who died of a suspected heart attack. Aircraft on display The first prototype, a Boeing 777-200, B-HNL (ex. N7771), was built in 1994 and originally used by Boeing for flight testing and development. In 2000, it was sold to Cathay Pacific (as no delivery slots were available for newly-built 777s) and refurbished for passenger service. After 18 years of commercial service, B-HNL was retired in mid-2018 amid press reports that it was to be displayed at the Museum of Flight in Seattle. 
On September 18, 2018, Cathay Pacific and Boeing announced that B-HNL would be donated to the Pima Air & Space Museum near Tucson, Arizona, where it would be placed on permanent display. Three retired Saudia 777-200ER aircraft, formerly registered HZ-AKG, HZ-AKK, and HZ-AKP, respectively, were transported by road from Jeddah to Riyadh in September 2024 to be displayed at the Riyadh Season exhibition. The fuselage of each aircraft was to be used as a tourist attraction featuring aviation-themed exhibits and/or dining and retail options. Specifications
Technology
Specific aircraft_2
null
89303
https://en.wikipedia.org/wiki/Lace
Lace
Lace is a delicate fabric made of yarn or thread in an open weblike pattern, made by machine or by hand. Generally, lace is split into two main categories, needlelace and bobbin lace, although there are other types of lace, such as knitted or crocheted lace. Other laces such as these are considered as a category of their specific craft. Knitted lace, therefore, is an example of knitting. This article considers both needle lace and bobbin lace. While some experts say both needle lace and bobbin lace began in Italy in the late 1500s, there are some questions regarding its origins. Originally linen, silk, gold, or silver threads were used. Now lace is often made with cotton thread, although linen and silk threads are still available. Manufactured lace may be made of synthetic fiber. A few modern artists make lace with a fine copper or silver wire instead of thread. Etymology The word lace is from Middle English, from Old French las, noose, string, from Vulgar Latin *laceum, from Latin laqueus, noose; probably akin to lacere, to entice or ensnare. Description The Latin word from which "lace" is derived means "noose," and a noose describes an open space outlined with rope or thread. This description applies to many types of open fabric resulting from "looping, plaiting, twisting, or knotting...threads...by hand or machine." Types There are many types of lace, classified by how they are made. These include: Bobbin lace, as the name suggests, is made with bobbins and a pillow. The bobbins, turned from wood, bone, or plastic, hold threads which are woven together and held in place with pins stuck in the pattern on the pillow. The pillow contains straw, preferably oat straw or other materials such as sawdust, insulation styrofoam, or ethafoam. Also known as bone-lace. Chantilly lace is a type of bobbin lace. Chemical lace: the stitching area is stitched with embroidery threads that form a continuous motif. Afterwards, the stitching areas are removed and only the embroidery remains. The stitching ground is made of a water-soluble or non-heat-resistant material. Crocheted lace includes Irish crochet, pineapple crochet, and filet crochet. Cutwork, or whitework, is lace constructed by removing threads from a woven background, and the remaining threads wrapped or filled with embroidery. Knitted lace includes Shetland lace, such as the "wedding ring shawl", a lace shawl so fine that it can be pulled through a wedding ring. Knotted lace includes macramé and tatting. Machine-made lace is any style of lace created or replicated using mechanical means. Needle lace, such as Venetian Gros Point, is made using a needle and thread. This is the most flexible of the lace-making arts. While some types can be made more quickly than the finest of bobbin laces, others are very time-consuming. Some purists regard needle lace as the height of lace-making. The finest antique needle laces were made from a very fine thread that is not manufactured today. Tape lace makes the tape in the lace as it is worked, or uses a machine- or hand-made textile strip formed into a design, then joined and embellished with needle or bobbin lace. Tatting is a textile craft consisting of a series of knots and loops arranged with a shuttle or needle based process. History: Bobbin and needle lace Origins The origin of lace is disputed by historians. An Italian claim is a will of 1493 by the Milanese Sforza family. A Flemish claim is lace on the alb of a worshiping priest in a painting about 1485 by Hans Memling. 
But since lace evolved from other techniques, it is impossible to say that it originated in any one place. The fragility of lace also means that few exceedingly old specimens are extant. Early history Lace was used by clergy of the Catholic Church as part of vestments in religious ceremonies. When they first started to use lace and through the 16th century, they primarily used cutwork. Much of their lace was made of gold, silver, and silk. Wealthy people began to use such expensive lace in clothing trimmings and furnishings, such as cushion covers. In the 1300s and 1400s in the Italian states, heavy duties were imposed on lace, and strict sumptuary laws were passed. This led to less demand for lace. In the mid-1400s some lacemakers turned to using flax, which cost less, while others migrated, bringing the industry to other countries. However, lace did not come into widespread use until the 16th century in the northwestern part of the European continent. The popularity of lace increased rapidly and the cottage industry of lace making spread throughout Europe. The late 16th century marked the rapid development of lace, both needle lace and bobbin lace became dominant in both fashion as well as home décor. For enhancing the beauty of collars and cuffs, needle lace was embroidered with loops and picots. Sumptuary laws in many countries had a major impact on lace wearing and production throughout its early history, though in some countries they were often ignored or worked around. Italy Bobbin and needle lace were both being made in Italy early in the 1400s. Documenting lace in Italy in the 15th century is a list of fine laces from the inventory of Beatrice d'Este, Duchess of Milan, from 1493. Venice In Venice, lace making was originally the province of leisured noblewomen, using it as a pastime. Some of the wives of doges also supported lacemaking in the Republic. One, Giovanna Malipiero Dandolo, showed support in 1457 for a law protecting lacemakers. In 1476, the lace trade was seriously affected by a law which disallowed "silver and embroidery on any fabric and the Punto in Aria of linen threads made with a needle, or gold and silver threads." In 1595, Morosina Morosini, another doge's wife, founded a lace workshop for 130 women. In the early 1500s, the production of lace became a paid activity, accomplished by young girls working in the houses of noblewomen, creating lace for household use, and in convents. Lace was a popular Venetian export in the 1500s and 1600s, and the demand remained strong in Europe, even when the export of other items exported by Venice during this period slumped. The largest and most intricate pieces of Venetian lace became ruffs and collars for members of the nobility and for aristocrats. Belgium Lace was being made in Brussels in the 1400s, and samples of such lace survive. Belgium and Flanders became a major center for the creation of primarily bobbin lace starting in the 1500s, and some handmade lace is still being produced there today. Belgian-grown flax contributed to the lace industry in the country. It produced extremely fine linen threads that were a critical factor in the superior texture and quality of Belgian lace. Schools were founded to teach lacemaking to the young. The height of the production of lace there was in the 1700s. Brussels was known for Point d'Angleterre, Lierre and Bruges also were known for their own styles of lace. 
Belgian lacemakers either originated or developed laces such as Brussels or Brabant Lace, Lace of Flanders, Mechlin, Valenciennes and Binche. France Lace arrived in France when Catherine de Medici, newly married to King Henry II in 1533, brought Venetian lace-makers to her new homeland. The French royal court and the fashions popular there, influenced the lace that started to be made in France. It was delicate and graceful, compared to the heavier needle or point-laces of Venice. Examples of French lace are Alençon, Argentan, and Chantilly. The 17th century court of King Louis the XIV of France was known for its extravagance, and during his reign lace, particularly the delicate Alençon and Argentan varieties, was extremely popular as court dress. The frontange, a tall lace headdress, became fashionable in France at this time. Louis XIV's finance minister, Jean Baptiste Colbert, strengthened the lace industry by establishing lace schools and workshops in the country. Spain Lacemaking in Spain was established early, as by the 1600s its Point d'Espagne lace, made of gold and silver thread, was very popular. Lace was made for use in churches and for the mantilla. Lacemaking may have come to Spain from Italy in the 1500s, or from Flanders, its province at the time. This lace was much admired, and was made throughout the country. Germany Barbara Uttmann learned how to make bobbin lace as a girl from a Protestant refugee. In 1561 she started a lace-making workshop in Annaberg. By the time of her death in 1575, there were over 30,000 lacemakers in that area of Germany. Following the revocation of the Edict of Nantes in France in 1685, many Huguenot lacemakers moved to Hamburg and Berlin. The earliest known lace pattern book was printed in Cologne in 1527. England The lace that was made in England prior to the introduction of bobbin lace in the mid 1500s was primarily cutwork or drawn thread work. There is a 1554 mention of Sir Thomas Wyatt wearing a ruff trimmed with bone lace (some bobbins at the time were made of bone). The court of Queen Elizabeth of England maintained close ties with the French court, and so French lace began to be seen and appreciated in England. Lace was used on her court gowns, and became fashionable. There are two distinct areas of England where lacemaking was a significant industry: Devon and part of the South Midlands. Belgian lacemakers were encouraged to settle in Honiton in Devon at the end of the 16th century. They continued to make pillow and other lace, as they had in their homeland, but Honiton lace never got the acclaim that lace from France, Italy, and Belgium did. While the lace in Devon stayed stable, in the lace-making areas of the South Midlands there were changes brought by different groups of émigrés: Flemings, French Huguenots, and later, French escaping the Revolution. Catherine of Aragon, while exiled in Ampthill, England, was said to have supported the lace makers there by burning all her lace, and commissioning new pieces. This may be the origin of the lacemaker's holiday, Cattern's Day. On this day (25 or 26 November) lacemakers were given a day off from work, and Cattern cakes - small dough cakes made with caraway seeds, were used to celebrate. The English diarist Samuel Pepys often wrote about the lace used for his, his wife's, and his acquaintances' clothing, and on 10 May 1669, noted that he intended to remove the gold lace from the sleeves of his coat "as it is fit [he] should", possibly in order to avoid charges of ostentatious living. 
In 1840, Britain's Queen Victoria was married in lace, influencing wedding dress styles to this day. The decline of the lace industry in England began about 1780, as was happening elsewhere. Some of the reasons include the increased popularity of clothing in the Classical style, the economic issues connected to war, and the increased production and use of machine-made laces. America American colonists of both British and Dutch origins strove to acquire lace accessories such as caps, ruffs, and other neckwear, and handkerchiefs. American women who could afford lace textiles were also able to afford aprons and dresses trimmed with the technique or made only from lace. Sumptuary laws, such as one passed in Massachusetts in 1634, restricted spending on extravagance and luxury and specified who could own or make lace; under such laws, colonists were not always allowed to own or make lace textiles. The existence of such a law indicates that lace was being made in that colony at the time. Lacemaking was being taught in boarding schools by the mid 1700s, and newspaper advertisements starting in the early 1700s offered to teach the technique. Also in the 18th century, Ipswich, Massachusetts, had become the only place in America known for producing handmade lace. By 1790, women in Ipswich, who were primarily from the British Midlands, were making 42,000 yards of silk bobbin lace intended for trimmings. George Washington reportedly purchased Ipswich Lace on a trip to the region in 1789. Machines to make lace began to be smuggled into the country in the early 1800s, as England did not permit these machines to be exported. The first lacemaking factory opened in Medway, Massachusetts in 1818. Ipswich had its own in 1824. The women there moved from making bobbin lace to decorating the machine-made net lace with darning and tambour stitches, creating what is known as Limerick lace. Lace was still much in demand in the 19th century. Lace trimmings on dresses, at seams, pockets, and collars were very popular. The lace being made in the United States was based on European patterns. By the turn of the 20th century, needlework and other magazines included lace patterns of a range of types. In North America in the 19th century, missionaries spread the knowledge of lace making to the Native American tribes. Sibyl Carter, an Episcopalian missionary, began to teach lacemaking to Ojibwa women in Minnesota in 1890. Classes were being held for members of many tribes throughout the US by the first decade of the 1900s. St. John Francis Regis guided many women out of prostitution by establishing them in the lace making and embroidery trade, which is why he became the patron saint of lace making. Ireland Lace was made in Ireland from the 1730s onwards, with several different lace-making schools founded across the country. Many regions acquired a name for high-quality work and others developed a distinctive style. Lace proved to be an important means of income for many poorer women. Several important schools of lace included: Carrickmacross lace, Kenmare lace, Limerick lace and Youghal lace. 
Patrons, designers, and lace makers Patron saints Some patron saints of lace include: St Anne St Catherine of Alexandria St Crispin St Elizabeth of Hungary St Helena of Constantine St John Regis St Paraskeva of the Balkans St Rose of Lima Historic Giovanna Dandolo (1457–1462) Barbara Uthmann (1514–1575) Morosina Morosini (1545–1614) Federico de Vinciolo (16th century) Caterina Angiola Pieroncini (18th century) Florence Vere O'Brien (1854-1936) Contemporary Rosa Elena Egipciaco Lace in art The earliest portraits showing lace are those of the early Florentine School. Later, in the 17th century, lace was very popular and painting styles were at the time realistic. This allows viewers to see the finery of lace. Painted portraits, primarily those of the wealthy or the nobility, depicted costly laces. This presented a challenge to the painters, who needed to represent not only their sitters accurately, but their intricate lace as well. The portrait of Nicolaes Hasselaer seen here was painted by Frans Hals in about 1627. It depicts a man dressed in a black garment with a lace collar. The collar is detailed enough that those who are expert in lace identification can tell what pattern it is. Hals created the lace effect with dabs of grey and white, using black paint to indicate the spaces between the threads. An image of an anonymous female artisan appears in The Lacemaker, a painting by the Dutch artist Johannes Vermeer (1632–1675), completed around 1669–1670.
Technology
Fabrics and fibers
null
89367
https://en.wikipedia.org/wiki/Emmer
Emmer
Emmer is a hybrid species of wheat, producing edible seeds that have been used as food since ancient times. The domesticated types are Triticum turgidum subsp. dicoccum and T. t. conv. durum. The wild plant is called T. t. subsp. dicoccoides. The seeds have an awned covering, the sharp spikes helping the seeds to become buried in the ground. The principal difference between the wild and the domestic forms is that the ripened seed head of the wild plant shatters and scatters the seed onto the ground, while in the domesticated emmer, the seed head remains intact, thus making it easier for people to harvest the grain. Along with einkorn, emmer was one of the first crops domesticated in the Near East. It was widely cultivated in the ancient world, but is now a relict crop in mountainous regions of Europe and Asia. Emmer is one of the three grains called farro in Italy. Etymology Emmer is first attested in 1908 in English as a loanword from German , variant of , from , 'starch', likely from Latin , itself borrowing from Ancient Greek . Description Like einkorn (T. monococcum) and spelt (T. spelta), emmer is a hulled wheat, meaning it has strong glumes (husks) that enclose the grains, and a semibrittle rachis. On threshing, a hulled wheat spike breaks up into spikelets that require milling or pounding to release the grains from the glumes. Wild emmer spikelets effectively self-cultivate by propelling themselves mechanically into soils with their awns. During a period of increased humidity during the night, the awns of the spikelet become erect and draw together, and in the process push the grain into the soil. During the daytime, the humidity drops and the awns slacken back again; however, fine silica hairs on the awns act as hooks in the soil and prevent the spikelets from backing out. During the course of alternating stages of daytime drying and nighttime humidity, the awns' pumping movements, which resemble a swimming frog kick, will drill the spikelet or more into the soil. Evolution Taxonomy and phylogeny Strong similarities in morphology and genetics show that wild emmer (T. dicoccoides Koern.) is the wild ancestor and a crop wild relative of domesticated emmer. Wild emmer still grows wild in the Near East. It is a tetraploid wheat formed by the hybridization of two diploid wild grasses, wild red einkorn (Triticum urartu), and the goatgrass Aegilops speltoides. The botanists Friedrich August Körnicke and Aaron Aaronsohn in the late 19th-century were the first to describe the wild emmer native to Palestine and adjacent countries. Earlier, in 1864, the Austrian botanist Carl Friedrich Kotschy collected specimens of the same wild emmer, without stating where he had collected them. Although cultivated in ancient Egypt, wild emmer has not been grown for human consumption in recent history, perhaps owing to the difficulty with which the chaff is separated from the seed kernels, formerly requiring the spikes to be pounded with mortar and pestle. Wild emmer is distinguished from common wheat by its tougher ear rachis and the beards releasing the grains easily, by their ear rachis becoming brittle when ripe and their firmly fitting beards. Wild emmer grows to a height of , and bears an elongated spike measuring , with long, protruding awns extending upwards. Avni et al., 2017 provides a complete emmer genome. History of cultivation Wild emmer is native to the Fertile Crescent of the Middle East, growing in the grass and woodland of hill country from modern-day Israel to Iran. 
The origin of wild emmer has been suggested, without universal agreement among scholars, to be the Karaca Dağ mountain region of southeastern Turkey. In 1906, Aaron Aaronsohn's discovery of wild emmer wheat growing in Rosh Pinna (Israel) created a stir in the botanical world. Emmer wheat has been found in archaeological excavations and ancient tombs. Emmer was collected from the wild and eaten by hunter gatherers for thousands of years before its domestication. Grains of wild emmer discovered at Ohalo II had a radiocarbon dating of 17,000 BC and at the Pre-Pottery Neolithic A (PPNA) site of Netiv Hagdud are 10,000–9,400 years old. The location of the earliest site of emmer domestication is still unclear and under debate. Some of the earliest sites with possible indirect evidence for emmer domestication during the Early Pre-Pottery Neolithic B include Tell Aswad, Çayönü, Cafer Höyük, Aşıklı Höyük, and Shillourokambos. Definitive evidence for the full domestication of emmer wheat is not found until the Middle Pre-Pottery Neolithic B (10,200 to 9,500 BP), at sites such as Beidha, Tell Ghoraifé, Tell es-Sultan (Jericho), Abu Hureyra, Tell Halula, Tell Aswad and Cafer Höyük. Emmer is found in a large number of Neolithic sites scattered around the fertile crescent. From its earliest days of cultivation, emmer was a more prominent crop than its cereal contemporaries and competitors, einkorn wheat and barley. Small quantities of emmer are present during Period 1 at Mehrgharh on the Indian subcontinent, showing that emmer was already cultivated there by 7000–5000 BC. In the Near East, in southern Mesopotamia in particular, cultivation of emmer wheat began to decline in the Early Bronze Age, from about 3000 BC, and barley became the standard cereal crop. This has been related to increased salinization of irrigated alluvial soils, of which barley is more tolerant, although this study has been challenged. Emmer had a special place in ancient Egypt, where it was the main wheat cultivated in Pharaonic times, although cultivated einkorn wheat was grown in great abundance during the Third Dynasty, and large quantities of it were found preserved, along with cultivated emmer wheat and barleys, in the subterranean chambers beneath the Step Pyramid at Saqqara. Neighbouring countries also cultivated einkorn, durum and common wheat. In the absence of any obvious functional explanation, the greater prevalence of emmer wheat in the diet of ancient Egypt may simply reflect a marked culinary or cultural preference, or may reflect growing conditions having changed after the Third Dynasty. Emmer and barley were the primary ingredients in ancient Egyptian bread and beer. Emmer recovered from the Phoenician settlement at Volubilis (in present-day Morocco) has been dated to the middle of the first millennium BC. Emmer wheat may be one of the five species of grain which have a special status in Judaism. One of these species may be either emmer or spelt. However, it is fairly certain that spelt did not grow in ancient Israel, and emmer was probably a significant crop until the end of the Iron Age.
Biology and health sciences
Grains
Plants
89425
https://en.wikipedia.org/wiki/Arrow%27s%20impossibility%20theorem
Arrow's impossibility theorem
Arrow's impossibility theorem is a key result in social choice theory, showing that no ranking-based decision rule can satisfy the requirements of rational choice theory. Most notably, Arrow showed that no such rule can satisfy all of a certain set of seemingly simple and reasonable conditions that include independence of irrelevant alternatives, the principle that a choice between two alternatives A and B should not depend on the quality of some third, unrelated option C. The result is most often cited in discussions of voting rules. However, Arrow's theorem is substantially broader, and can be applied to methods of social decision-making other than voting. It therefore generalizes Condorcet's voting paradox, and shows similar problems exist for every collective decision-making procedure based on relative comparisons. Plurality-rule methods like first-past-the-post and ranked-choice (instant-runoff) voting are highly sensitive to spoilers, particularly in situations where they are not forced. By contrast, majority-rule (Condorcet) methods of ranked voting uniquely minimize the number of spoiled elections by restricting them to rare situations called cyclic ties. Under some idealized models of voter behavior (e.g. Black's left-right spectrum), spoiler effects can disappear entirely for these methods. Arrow's theorem does not cover rated voting rules, and thus cannot be used to inform their susceptibility to the spoiler effect. However, Gibbard's theorem shows these methods' susceptibility to strategic voting, and generalizations of Arrow's theorem describe cases where rated methods are susceptible to the spoiler effect. Background When Kenneth Arrow proved his theorem in 1950, it inaugurated the modern field of social choice theory, a branch of welfare economics studying mechanisms to aggregate preferences and beliefs across a society. Such a mechanism of study can be a market, voting system, constitution, or even a moral or ethical framework. Axioms of voting systems Preferences In the context of Arrow's theorem, citizens are assumed to have ordinal preferences, i.e. orderings of candidates. If A and B are different candidates or alternatives, then A ≻ B means A is preferred to B. Individual preferences (or ballots) are required to satisfy intuitive properties of orderings, e.g. they must be transitive—if A ≻ B and B ≻ C, then A ≻ C. The social choice function is then a mathematical function that maps the individual orderings to a new ordering that represents the preferences of all of society. Basic assumptions Arrow's theorem assumes as background that any non-degenerate social choice rule will satisfy: Unrestricted domain — the social choice function is a total function over the domain of all possible orderings of outcomes, not just a partial function. In other words, the system must always make some choice, and cannot simply "give up" when the voters have unusual opinions. Without this assumption, majority rule satisfies Arrow's axioms by "giving up" whenever there is a Condorcet cycle. Non-dictatorship — the system does not depend on only one voter's ballot. This weakens anonymity (one vote, one value) to allow rules that treat voters unequally. It essentially defines social choices as those depending on more than one person's input. Non-imposition — the system does not ignore the voters entirely when choosing between some pairs of candidates. In other words, it is possible for any candidate to defeat any other candidate, given some combination of votes. 
This is often replaced with the stronger Pareto efficiency axiom: if every voter prefers A over B, then A should defeat B. However, the weaker non-imposition condition is sufficient. Arrow's original statement of the theorem included non-negative responsiveness as a condition, i.e., that increasing the rank of an outcome should not make it lose—in other words, that a voting rule shouldn't penalize a candidate for being more popular. However, this assumption is not needed or used in his proof (except to derive the weaker condition of Pareto efficiency), and Arrow later corrected his statement of the theorem to remove the inclusion of this condition. Independence A commonly-considered axiom of rational choice is independence of irrelevant alternatives (IIA), which says that when deciding between A and B, one's opinion about a third option C should not affect their decision. Independence of irrelevant alternatives (IIA) — the social preference between candidate A and candidate B should only depend on the individual preferences between A and B. In other words, the social preference should not change from A ≻ B to B ≻ A if voters change their preference about whether B ≻ C. This is equivalent to the claim about independence of spoiler candidates when using the standard construction of a placement function. IIA is sometimes illustrated with a short joke by philosopher Sidney Morgenbesser: Morgenbesser, ordering dessert, is told by a waitress that he can choose between blueberry or apple pie. He orders apple. Soon the waitress comes back and explains cherry pie is also an option. Morgenbesser replies "In that case, I'll have blueberry." Arrow's theorem shows that if a society wishes to make decisions while always avoiding such self-contradictions, it cannot use ranked information alone. Theorem Intuitive argument Condorcet's example is already enough to see the impossibility of a fair ranked voting system, given stronger conditions for fairness than Arrow's theorem assumes. Suppose we have three candidates (A, B, and C) and three voters whose preferences are as follows: voter 1 ranks A over B over C, voter 2 ranks B over C over A, and voter 3 ranks C over A over B. If C is chosen as the winner, it can be argued any fair voting system would say B should win instead, since two voters (1 and 2) prefer B to C and only one voter (3) prefers C to B. However, by the same argument A is preferred to B, and C is preferred to A, by a margin of two to one on each occasion. Thus, even though each individual voter has consistent preferences, the preferences of society are contradictory: A is preferred over B which is preferred over C which is preferred over A. Because of this example, some authors credit Condorcet with having given an intuitive argument that presents the core of Arrow's theorem. However, Arrow's theorem is substantially more general; it applies to methods of making decisions other than one-man-one-vote elections, such as markets or weighted voting, based on ranked ballots. Formal statement Let A be a set of alternatives. A voter's preferences over A are a complete and transitive binary relation on A (sometimes called a total preorder), that is, a subset R of A × A satisfying: (Transitivity) If (a, b) is in R and (b, c) is in R, then (a, c) is in R, (Completeness) At least one of (a, b) or (b, a) must be in R. The element (a, b) being in R is interpreted to mean that alternative a is preferred to alternative b. This situation is often denoted a ≻ b or aRb. Denote the set of all preferences on A by Π(A). Let N be a positive integer. An ordinal (ranked) social welfare function is a function F : Π(A)^N → Π(A) which aggregates voters' preferences into a single preference on A. An N-tuple (R1, ..., RN) ∈ Π(A)^N of voters' preferences is called a preference profile. 
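The pairwise-majority reasoning behind Condorcet's example above can be made concrete with a short, self-contained sketch; in the notation just introduced, the profile below is a 3-tuple of rankings of the alternatives A, B, and C. The Python snippet is illustrative only and is not part of the article's formal development; the helper name majority_prefers is a hypothetical label.

```python
from itertools import permutations

# Illustrative sketch: a preference profile is a list of rankings,
# one per voter, each ordered from most to least preferred.
profile = [
    ("A", "B", "C"),  # voter 1: A > B > C
    ("B", "C", "A"),  # voter 2: B > C > A
    ("C", "A", "B"),  # voter 3: C > A > B
]

def majority_prefers(profile, x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ranking in profile
               if ranking.index(x) < ranking.index(y))
    return wins > len(profile) / 2

# Pairwise majority comparisons reproduce the cycle described above.
for x, y in permutations("ABC", 2):
    if majority_prefers(profile, x, y):
        print(f"majority prefers {x} over {y}")
```

Running this reports that A beats B, B beats C, and C beats A: pairwise majority rule yields a cyclic, intransitive social preference on this profile.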
Arrow's impossibility theorem: If there are at least three alternatives, then there is no social welfare function F satisfying all three of the conditions listed below: Pareto efficiency If alternative a is preferred to b for all orderings R1, ..., RN, then a is preferred to b by F(R1, ..., RN). Non-dictatorship There is no individual i whose preferences always prevail. That is, there is no i ∈ {1, ..., N} such that for all (R1, ..., RN) ∈ Π(A)^N and all a and b, when a is preferred to b by Ri then a is preferred to b by F(R1, ..., RN). Independence of irrelevant alternatives For two preference profiles (R1, ..., RN) and (S1, ..., SN) such that for all individuals i, alternatives a and b have the same order in Ri as in Si, alternatives a and b have the same order in F(R1, ..., RN) as in F(S1, ..., SN). Formal proof Arrow's proof used the concept of decisive coalitions. Definition: A subset of voters is a coalition. A coalition is decisive over an ordered pair (x, y) if, when everyone in the coalition ranks x above y, society overall will always rank x ≻ y. A coalition is decisive if and only if it is decisive over all ordered pairs. Our goal is to prove that the decisive coalition contains only one voter, who controls the outcome—in other words, a dictator. The following proof is a simplification taken from Amartya Sen and Ariel Rubinstein. The simplified proof uses an additional concept: A coalition is weakly decisive over (x, y) if and only if, when every voter inside the coalition ranks x above y and every voter outside the coalition ranks y above x, society ranks x ≻ y. Thenceforth assume that the social choice system satisfies unrestricted domain, Pareto efficiency, and IIA. Also assume that there are at least 3 distinct outcomes. By Pareto, the entire set of voters is decisive. Thus by the group contraction lemma, there is a size-one decisive coalition—a dictator. Proofs using the concept of the pivotal voter originated from Salvador Barberá in 1980. The proof given here is a simplified version based on two proofs published in Economic Theory. Setup Assume there are n voters. We assign all of these voters an arbitrary ID number, ranging from 1 through n, which we can use to keep track of each voter's identity as we consider what happens when they change their votes. Without loss of generality, we can say there are three candidates who we call A, B, and C. (Because of IIA, including more than 3 candidates does not affect the proof.) We will prove that any social choice rule respecting unanimity and independence of irrelevant alternatives (IIA) is a dictatorship. The proof is in three parts: We identify a pivotal voter for each individual contest (A vs. B, B vs. C, and A vs. C). Their ballot swings the societal outcome. We prove this voter is a partial dictator. In other words, they get to decide whether A or B is ranked higher in the outcome. We prove this voter is the same person, hence this voter is a dictator. Part one: There is a pivotal voter for A vs. B Consider the situation where everyone prefers A to B, and everyone also prefers C to B. By unanimity, society must also prefer both A and C to B. Call this situation profile 0. On the other hand, if everyone preferred B to everything else, then society would have to prefer B to everything else by unanimity. Now arrange all the voters in some arbitrary but fixed order, and for each i let profile i be the same as profile 0, but move B to the top of the ballots for voters 1 through i. So profile 1 has B at the top of the ballot for voter 1, but not for any of the others. Profile 2 has B at the top for voters 1 and 2, but no others, and so on. 
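The sweep over profiles 0 through n just described can be sketched in code. The snippet below is only an illustration, not Arrow's proof: it substitutes a Borda ranking as a stand-in social welfare function (any rule respecting unanimity would serve for the argument), and the helper names borda_rank and move_to_top are hypothetical.

```python
# Illustrative sketch of the profile sweep described above.
def borda_rank(profile):
    """Toy social welfare function: rank candidates by total Borda score."""
    scores = {c: 0 for c in profile[0]}
    for ranking in profile:
        for points, c in enumerate(reversed(ranking)):
            scores[c] += points
    return sorted(scores, key=scores.get, reverse=True)

def move_to_top(ranking, candidate):
    """Return the ranking with the given candidate promoted to first place."""
    return (candidate,) + tuple(c for c in ranking if c != candidate)

# Profile 0: every voter ranks B last (here A > C > B for all voters).
n = 5
profile = [("A", "C", "B")] * n

# Profile i moves B to the top of the ballots of voters 1..i; the pivotal
# voter k is the first voter whose change pushes B above A socially.
for i in range(1, n + 1):
    profile[i - 1] = move_to_top(profile[i - 1], "B")
    social = borda_rank(profile)
    if social.index("B") < social.index("A"):
        print(f"pivotal voter for B over A: voter {i}")
        break
```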
Since B eventually moves to the top of the societal preference as the profile number increases, there must be some profile, number k, for which B first moves above A in the societal rank. We call the voter k whose ballot change causes this to happen the pivotal voter for B over A. Note that the pivotal voter for B over A is not, a priori, the same as the pivotal voter for A over B. In part three of the proof we will show that these do turn out to be the same. Also note that by IIA the same argument applies if profile 0 is any profile in which A is ranked above B by every voter, and the pivotal voter for B over A will still be voter k. We will use this observation below. Part two: The pivotal voter for B over A is a dictator for B over C In this part of the argument we refer to voter k, the pivotal voter for B over A, as the pivotal voter for simplicity. We will show that the pivotal voter dictates society's decision for B over C. That is, we show that no matter how the rest of society votes, if pivotal voter ranks B over C, then that is the societal outcome. Note again that the dictator for B over C is not a priori the same as that for C over B. In part three of the proof we will see that these turn out to be the same too. In the following, we call voters 1 through k − 1, segment one, and voters k + 1 through N, segment two. To begin, suppose that the ballots are as follows: Every voter in segment one ranks B above C and C above A. Pivotal voter ranks A above B and B above C. Every voter in segment two ranks A above B and B above C. Then by the argument in part one (and the last observation in that part), the societal outcome must rank A above B. This is because, except for a repositioning of C, this profile is the same as profile k − 1 from part one. Furthermore, by unanimity the societal outcome must rank B above C. Therefore, we know the outcome in this case completely. Now suppose that pivotal voter moves B above A, but keeps C in the same position and imagine that any number (even all!) of the other voters change their ballots to move B below C, without changing the position of A. Then aside from a repositioning of C this is the same as profile k from part one and hence the societal outcome ranks B above A. Furthermore, by IIA the societal outcome must rank A above C, as in the previous case. In particular, the societal outcome ranks B above C, even though Pivotal Voter may have been the only voter to rank B above C. By IIA, this conclusion holds independently of how A is positioned on the ballots, so pivotal voter is a dictator for B over C. Part three: There exists a dictator In this part of the argument we refer back to the original ordering of voters, and compare the positions of the different pivotal voters (identified by applying parts one and two to the other pairs of candidates). First, the pivotal voter for B over C must appear earlier (or at the same position) in the line than the dictator for B over C: As we consider the argument of part one applied to B and C, successively moving B to the top of voters' ballots, the pivot point where society ranks B above C must come at or before we reach the dictator for B over C. Likewise, reversing the roles of B and C, the pivotal voter for C over B must be at or later in line than the dictator for B over C. In short, if kX/Y denotes the position of the pivotal voter for X over Y (for any two candidates X and Y), then we have shown kB/C ≤ kB/A ≤ kC/B. 
Now repeating the entire argument above with B and C switched, we also have kC/B ≤ kB/C. Therefore, we have kB/C = kB/A = kC/B and the same argument for other pairs shows that all the pivotal voters (and hence all the dictators) occur at the same position in the list of voters. This voter is the dictator for the whole election. Stronger versions Arrow's impossibility theorem still holds if Pareto efficiency is weakened to the following condition: Non-imposition For any two alternatives a and b, there exists some preference profile such that is preferred to by . Interpretation and practical solutions Arrow's theorem establishes that no ranked voting rule can always satisfy independence of irrelevant alternatives, but it says nothing about the frequency of spoilers. This led Arrow to remark that "Most systems are not going to work badly all of the time. All I proved is that all can work badly at times." Attempts at dealing with the effects of Arrow's theorem take one of two approaches: either accepting his rule and searching for the least spoiler-prone methods, or dropping one or more of his assumptions, such as by focusing on rated voting rules. Minimizing IIA failures: Majority-rule methods The first set of methods studied by economists are the majority-rule, or Condorcet, methods. These rules limit spoilers to situations where majority rule is self-contradictory, called Condorcet cycles, and as a result uniquely minimize the possibility of a spoiler effect among ranked rules. (Indeed, many different social welfare functions can meet Arrow's conditions under such restrictions of the domain. It has been proven, however, that under any such restriction, if there exists any social welfare function that adheres to Arrow's criteria, then Condorcet method will adhere to Arrow's criteria.) Condorcet believed voting rules should satisfy both independence of irrelevant alternatives and the majority rule principle, i.e. if most voters rank Alice ahead of Bob, Alice should defeat Bob in the election. Unfortunately, as Condorcet proved, this rule can be intransitive on some preference profiles. Thus, Condorcet proved a weaker form of Arrow's impossibility theorem long before Arrow, under the stronger assumption that a voting system in the two-candidate case will agree with a simple majority vote. Unlike pluralitarian rules such as ranked-choice runoff (RCV) or first-preference plurality, Condorcet methods avoid the spoiler effect in non-cyclic elections, where candidates can be chosen by majority rule. Political scientists have found such cycles to be fairly rare, suggesting they may be of limited practical concern. Spatial voting models also suggest such paradoxes are likely to be infrequent or even non-existent. Left-right spectrum Soon after Arrow published his theorem, Duncan Black showed his own remarkable result, the median voter theorem. The theorem proves that if voters and candidates are arranged on a left-right spectrum, Arrow's conditions are all fully compatible, and all will be met by any rule satisfying Condorcet's majority-rule principle. More formally, Black's theorem assumes preferences are single-peaked: a voter's happiness with a candidate goes up and then down as the candidate moves along some spectrum. For example, in a group of friends choosing a volume setting for music, each friend would likely have their own ideal volume; as the volume gets progressively too loud or too quiet, they would be increasingly dissatisfied. 
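The majority-rule cycles and single-peaked profiles discussed above can be checked directly on small examples. The following sketch is illustrative only (the ballots and candidate names are made up, standard Python): it tallies pairwise majority contests and reports either a Condorcet winner or, as in Condorcet's paradox, none at all; on a single-peaked profile the median voter's peak wins, in line with Black's theorem.

    def pairwise_wins(profile, candidates):
        """Count, for each ordered pair (x, y), how many voters rank x above y."""
        wins = {(x, y): 0 for x in candidates for y in candidates if x != y}
        for ballot in profile:
            for i, x in enumerate(ballot):
                for y in ballot[i + 1:]:
                    wins[(x, y)] += 1
        return wins

    def condorcet_winner(profile, candidates):
        """Return the candidate who beats every rival head-to-head, or None."""
        wins = pairwise_wins(profile, candidates)
        for x in candidates:
            if all(wins[(x, y)] > wins[(y, x)] for y in candidates if y != x):
                return x
        return None

    # Classic cyclic profile (Condorcet's paradox): three voters, three candidates.
    cyclic = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]
    print(condorcet_winner(cyclic, "ABC"))        # None -> majority preferences cycle

    # A single-peaked profile on the axis A-B-C: the median peak, B, is the winner.
    single_peaked = [("A", "B", "C"), ("B", "A", "C"), ("C", "B", "A")]
    print(condorcet_winner(single_peaked, "ABC")) # B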
If the domain is restricted to profiles where every individual has a single-peaked preference with respect to the linear ordering, then social preferences are acyclic. In this situation, Condorcet methods satisfy a wide variety of highly-desirable properties, including being fully spoilerproof. The rule does not fully generalize from the political spectrum to the political compass, a result related to the McKelvey-Schofield chaos theorem. However, a well-defined Condorcet winner does exist if the distribution of voters is rotationally symmetric or otherwise has a uniquely-defined median. In most realistic situations, where voters' opinions follow a roughly-normal distribution or can be accurately summarized by one or two dimensions, Condorcet cycles are rare (though not unheard of). Generalized stability theorems The Campbell-Kelly theorem shows that Condorcet methods are the most spoiler-resistant class of ranked voting systems: whenever it is possible for some ranked voting system to avoid a spoiler effect, a Condorcet method will do so. In other words, replacing a ranked method with its Condorcet variant (i.e. elect a Condorcet winner if they exist, and otherwise run the method) will sometimes prevent a spoiler effect, but can never create a new one. In 1977, Ehud Kalai and Eitan Muller gave a full characterization of domain restrictions admitting a nondictatorial and strategyproof social welfare function. These correspond to preferences for which there is a Condorcet winner. Holliday and Pacuit devised a voting system that provably minimizes the number of candidates who are capable of spoiling an election, albeit at the cost of occasionally failing vote positivity (though at a much lower rate than seen in instant-runoff voting). Going beyond Arrow's theorem: Rated voting As shown above, the proof of Arrow's theorem relies crucially on the assumption of ranked voting, and is not applicable to rated voting systems. This opens up the possibility of passing all of the criteria given by Arrow. These systems ask voters to rate candidates on a numerical scale (e.g. from 0–10), and then elect the candidate with the highest average (for score voting) or median (graduated majority judgment). Because Arrow's theorem no longer applies, other results are required to determine whether rated methods are immune to the spoiler effect, and under what circumstances. Intuitively, cardinal information can only lead to such immunity if it's meaningful; simply providing cardinal data is not enough. Some rated systems, such as range voting and majority judgment, pass independence of irrelevant alternatives when the voters rate the candidates on an absolute scale. However, when they use relative scales, more general impossibility theorems show that the methods (within that context) still fail IIA. As Arrow later suggested, relative ratings may provide more information than pure rankings, but this information does not suffice to render the methods immune to spoilers. While Arrow's theorem does not apply to graded systems, Gibbard's theorem still does: no voting game can be straightforward (i.e. have a single, clear, always-best strategy). Meaningfulness of cardinal information Arrow's framework assumed individual and social preferences are orderings or rankings, i.e. statements about which outcomes are better or worse than others. Taking inspiration from the strict behaviorism popular in psychology, some philosophers and economists rejected the idea of comparing internal human experiences of well-being. 
Such philosophers claimed it was impossible to compare the strength of preferences across people who disagreed; Sen gives as an example that it would be impossible to know whether the Great Fire of Rome was good or bad, because despite killing thousands of Romans, it had the positive effect of letting Nero expand his palace. Arrow originally agreed with these positions and rejected cardinal utility, leading him to focus his theorem on preference rankings. However, he later stated that cardinal methods can provide additional useful information, and that his theorem is not applicable to them. John Harsanyi noted Arrow's theorem could be considered a weaker version of his own theorem and other utility representation theorems like the VNM theorem, which generally show that rational behavior requires consistent cardinal utilities. Nonstandard spoilers Behavioral economists have shown individual irrationality involves violations of IIA (e.g. with decoy effects), suggesting human behavior can cause IIA failures even if the voting method itself does not. However, past research has typically found such effects to be fairly small, and such psychological spoilers can appear regardless of the electoral system. Balinski and Laraki discuss techniques of ballot design derived from psychometrics that minimize these psychological effects, such as asking voters to give each candidate a verbal grade (e.g. "bad", "neutral", "good", "excellent") and issuing instructions to voters that refer to their ballots as judgments of individual candidates. Similar techniques are often discussed in the context of contingent valuation. Esoteric solutions In addition to the above practical resolutions, there exist unusual (less-than-practical) situations where Arrow's requirement of IIA can be satisfied. Supermajority rules Supermajority rules can avoid Arrow's theorem at the cost of being poorly-decisive (i.e. frequently failing to return a result). In this case, a threshold that requires a 2/3 majority for ordering 3 outcomes, a 3/4 majority for 4 outcomes, and so on, does not produce voting paradoxes. In spatial (n-dimensional ideology) models of voting, this can be relaxed to require only 1 − 1/e (roughly 64%) of the vote to prevent cycles, so long as the distribution of voters is well-behaved (quasiconcave). These results provide some justification for the common requirement of a two-thirds majority for constitutional amendments, which is sufficient to prevent cyclic preferences in most situations. Infinite populations Fishburn shows all of Arrow's conditions can be satisfied for uncountably infinite sets of voters given the axiom of choice; however, Kirman and Sondermann demonstrated this requires disenfranchising almost all members of a society (eligible voters form a set of measure 0), leading them to refer to such societies as "invisible dictatorships". Common misconceptions Arrow's theorem is not related to strategic voting, which does not appear in his framework, though the theorem does have important implications for strategic voting (being used as a lemma to prove Gibbard's theorem). The Arrovian framework of social welfare assumes all voter preferences are known and the only issue is in aggregating them. Monotonicity (called positive association by Arrow) is not a condition of Arrow's theorem. This misconception is caused by a mistake by Arrow himself, who included the axiom in his original statement of the theorem but did not use it. Dropping the assumption does not allow for constructing a social welfare function that meets his other conditions. 
Contrary to a common misconception, Arrow's theorem deals with the limited class of ranked-choice voting systems, rather than voting systems as a whole.
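As a concrete illustration of the spoiler effects discussed above, the following sketch (the electorate sizes and candidate names are invented for the example, standard Python) compares first-preference plurality with a Condorcet count on the same ballots: removing a losing candidate flips the plurality winner but leaves the majority-rule winner unchanged.

    def plurality_winner(ballots):
        """First-preference plurality: the candidate with the most top choices wins."""
        tally = {}
        for ballot, count in ballots:
            tally[ballot[0]] = tally.get(ballot[0], 0) + count
        return max(tally, key=tally.get)

    def condorcet_winner(ballots, candidates):
        """Candidate beating every rival in head-to-head majority contests, if any."""
        def prefers(ballot, x, y):
            return ballot.index(x) < ballot.index(y)
        for x in candidates:
            if all(sum(n for b, n in ballots if prefers(b, x, y)) >
                   sum(n for b, n in ballots if prefers(b, y, x))
                   for y in candidates if y != x):
                return x
        return None

    # Illustrative electorate (counts are made up for the example).
    ballots = [(("L", "C", "R"), 34), (("C", "L", "R"), 10),
               (("C", "R", "L"), 21), (("R", "C", "L"), 35)]
    print(plurality_winner(ballots))                   # R
    print(condorcet_winner(ballots, ["L", "C", "R"]))  # C

    # Drop the losing candidate L from every ballot: the plurality winner flips
    # to C while the Condorcet winner is unchanged, so L acted as a spoiler.
    without_L = [(tuple(c for c in b if c != "L"), n) for b, n in ballots]
    print(plurality_winner(without_L))                 # C
    print(condorcet_winner(without_L, ["C", "R"]))     # C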
Mathematics
Game theory
null
89480
https://en.wikipedia.org/wiki/Flywheel
Flywheel
A flywheel is a mechanical device that uses the conservation of angular momentum to store rotational energy, a form of kinetic energy proportional to the product of its moment of inertia and the square of its rotational speed. In particular, assuming the flywheel's moment of inertia is constant (i.e., a flywheel with fixed mass and second moment of area revolving about some fixed axis) then the stored (rotational) energy is directly associated with the square of its rotational speed. Since a flywheel serves to store mechanical energy for later use, it is natural to consider it as a kinetic energy analogue of an electrical capacitor. Once suitably abstracted, this shared principle of energy storage is described in the generalized concept of an accumulator. As with other types of accumulators, a flywheel inherently smooths sufficiently small deviations in the power output of a system, thereby effectively playing the role of a low-pass filter with respect to the mechanical velocity (angular, or otherwise) of the system. More precisely, a flywheel's stored energy will donate a surge in power output upon a drop in power input and will conversely absorb any excess power input (system-generated power) in the form of rotational energy. Common uses of a flywheel include smoothing a power output in reciprocating engines, flywheel energy storage, delivering energy at higher rates than the source, and controlling the orientation of a mechanical system using gyroscope and reaction wheel. Flywheels are typically made of steel and rotate on conventional bearings; these are generally limited to a maximum revolution rate of a few thousand RPM. High energy density flywheels can be made of carbon fiber composites and employ magnetic bearings, enabling them to revolve at speeds up to 60,000 RPM (1 kHz). History The principle of the flywheel is found in the Neolithic spindle and the potter's wheel, as well as circular sharpening stones in antiquity. In the early 11th century, Ibn Bassal pioneered the use of the flywheel in the noria and saqiyah. The use of the flywheel as a general mechanical device to equalize the speed of rotation is, according to the American medievalist Lynn White, recorded in the De diversibus artibus (On various arts) of the German artisan Theophilus Presbyter (ca. 1070–1125) who records applying the device in several of his machines. In the Industrial Revolution, James Watt contributed to the development of the flywheel in the steam engine, and his contemporary James Pickard used a flywheel combined with a crank to transform reciprocating motion into rotary motion. Physics The kinetic energy (or more specifically rotational energy) stored by the flywheel's rotor can be calculated by E = ½Iω², where ω is the angular velocity and I is the moment of inertia of the flywheel about its axis of symmetry. The moment of inertia is a measure of resistance to torque applied on a spinning object (i.e. the higher the moment of inertia, the slower it will accelerate when a given torque is applied). The moment of inertia can be calculated for cylindrical shapes using mass (m) and radius (r). For a solid cylinder it is I = ½mr², for a thin-walled empty cylinder it is approximately I = mr², and for a thick-walled empty cylinder with constant density it is I = ½m(r_external² + r_internal²). For a given flywheel design, the kinetic energy is proportional to the ratio of the hoop stress to the material density and to the mass. The specific tensile strength of a flywheel can be defined as σ_t/ρ, the tensile strength of the material divided by its density. 
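To make the scale of these formulas concrete, here is a small sketch in Python. The disc dimensions and speeds are illustrative assumptions, not figures from the article; it simply evaluates E = ½Iω² for a solid disc at a conventional-bearing speed and at a high-speed figure.

    import math

    def solid_disc_inertia(mass_kg, radius_m):
        """Moment of inertia of a solid cylinder about its axis: I = (1/2) m r^2."""
        return 0.5 * mass_kg * radius_m ** 2

    def stored_energy_joules(inertia, rpm):
        """Rotational kinetic energy E = (1/2) I w^2, with w converted to rad/s."""
        omega = rpm * 2.0 * math.pi / 60.0
        return 0.5 * inertia * omega ** 2

    # Hypothetical 50 kg, 0.3 m radius steel disc (example numbers only).
    I = solid_disc_inertia(50.0, 0.3)

    for rpm in (3_000, 60_000):
        e = stored_energy_joules(I, rpm)
        # The 60,000 RPM case ignores material stress limits, discussed below.
        print(f"{rpm:>6} RPM: {e / 1000:.0f} kJ  ({e / 3.6e6:.3f} kWh)")

Because the energy grows with the square of the speed, the step from 3,000 RPM to 60,000 RPM raises the stored energy by a factor of 400, which is why high-speed rotors and their material limits dominate flywheel energy-storage design.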
The flywheel material with the highest specific tensile strength will yield the highest energy storage per unit mass. This is one reason why carbon fiber is a material of interest. For a given design the stored energy is proportional to the hoop stress and the volume. An electric motor-powered flywheel is common in practice. The output power of the electric motor is approximately equal to the output power of the flywheel. It is proportional to V_r V_s sin δ, where V_r is the voltage of the rotor winding, V_s is the stator voltage, and δ is the angle between the two voltages. Increasing amounts of rotation energy can be stored in the flywheel until the rotor shatters. This happens when the hoop stress within the rotor exceeds the ultimate tensile strength of the rotor material. Tensile stress can be calculated by σ_t = ρr²ω², where ρ is the density of the cylinder, r is the radius of the cylinder, and ω is the angular velocity of the cylinder. Design A rimmed flywheel has a rim, a hub, and spokes. Calculation of the flywheel's moment of inertia can be more easily analysed by applying various simplifications. One method is to assume the spokes, shaft and hub have zero moments of inertia, and the flywheel's moment of inertia is from the rim alone. Another is to lump moments of inertia of spokes, hub and shaft into the rim. These may be estimated as a percentage of the flywheel's moment of inertia, with the majority from the rim, so that the rim's moment of inertia I_rim is a fixed fraction of the flywheel's total moment of inertia. For example, if the moments of inertia of hub, spokes and shaft are deemed negligible, and the rim's thickness is very small compared to its mean radius (r), the radius of rotation of the rim is equal to its mean radius and thus I_rim = m_rim r². A shaftless flywheel eliminates the annulus holes, shaft or hub. It has higher energy density than conventional design but requires a specialized magnetic bearing and control system. The specific energy of a flywheel is determined by E/m = Kσ/ρ, in which K is the shape factor, σ the material's tensile strength and ρ the density. While a typical flywheel has a shape factor of 0.3, the shaftless flywheel has a shape factor close to 0.6, out of a theoretical limit of about 1. A superflywheel consists of a solid core (hub) and multiple thin layers of high-strength flexible materials (such as special steels, carbon fiber composites, glass fiber, or graphene) wound around it. Compared to conventional flywheels, superflywheels can store more energy and are safer to operate. In case of failure, a superflywheel does not explode or burst into large shards like a regular flywheel, but instead splits into layers. The separated layers then slow a superflywheel down by sliding against the inner walls of the enclosure, thus preventing any further destruction. Although the exact value of energy density of a superflywheel would depend on the material used, it could theoretically be as high as 1200 Wh (4.4 MJ) per kg of mass for graphene superflywheels. The first superflywheel was patented in 1964 by the Soviet-Russian scientist Nurbei Guilia. Materials Flywheels are made from many different materials; the application determines the choice of material. Small flywheels made of lead are found in children's toys. Cast iron flywheels are used in old steam engines. Flywheels used in car engines are made of cast or nodular iron, steel or aluminum. Flywheels made from high-strength steel or composites have been proposed for use in vehicle energy storage and braking systems. The efficiency of a flywheel is determined by the maximum amount of energy it can store per unit weight. 
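The hoop-stress and specific-energy relations above can be illustrated numerically as well. The sketch below is an approximation: the material figures are rough ballpark values chosen for illustration, not data from the article, and the thin-rim model σ = ρr²ω² with shape factor K = 0.3 is a simplification.

    import math

    def burst_rpm(strength_pa, density, radius_m):
        """Thin rim: hoop stress = rho * r^2 * w^2, so w_max = sqrt(sigma/rho) / r."""
        omega = math.sqrt(strength_pa / density) / radius_m
        return omega * 60.0 / (2.0 * math.pi)

    def specific_energy_wh_per_kg(shape_factor, strength_pa, density):
        """Ideal specific energy E/m = K * sigma / rho, converted from J/kg to Wh/kg."""
        return shape_factor * strength_pa / density / 3600.0

    # Rough, illustrative material properties (tensile strength in Pa, density in kg/m^3).
    materials = {
        "steel rim":        (1.0e9, 7800.0),
        "carbon-fiber rim": (2.5e9, 1600.0),
    }

    for name, (sigma, rho) in materials.items():
        rpm = burst_rpm(sigma, rho, radius_m=0.3)
        e = specific_energy_wh_per_kg(0.3, sigma, rho)
        print(f"{name}: burst ~{rpm:,.0f} RPM at r = 0.3 m, ideal ~{e:.0f} Wh/kg")

The comparison shows why the strength-to-density ratio, rather than strength alone, drives material choice: the lighter composite rim both survives a higher speed and stores an order of magnitude more energy per kilogram.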
As the flywheel's rotational speed or angular velocity is increased, the stored energy increases; however, the stresses also increase. If the hoop stress surpasses the tensile strength of the material, the flywheel will break apart. Thus, the tensile strength limits the amount of energy that a flywheel can store. In this context, using lead for a flywheel in a child's toy is not efficient; however, the flywheel velocity never approaches its burst velocity because the limit in this case is the pulling-power of the child. In other applications, such as an automobile, the flywheel operates at a specified angular velocity and is constrained by the space it must fit in, so the goal is to maximize the stored energy per unit volume. The material selection therefore depends on the application. Applications Flywheels are often used to provide continuous power output in systems where the energy source is not continuous. For example, a flywheel is used to smooth the fast angular velocity fluctuations of the crankshaft in a reciprocating engine. In this case, a crankshaft flywheel stores energy when torque is exerted on it by a firing piston and then returns that energy to the piston to compress a fresh charge of air and fuel. Another example is the friction motor which powers devices such as toy cars. In unstressed and inexpensive cases, to save on cost, the bulk of the mass of the flywheel is toward the rim of the wheel. Pushing the mass away from the axis of rotation heightens rotational inertia for a given total mass. A flywheel may also be used to supply intermittent pulses of energy at power levels that exceed the abilities of its energy source. This is achieved by accumulating energy in the flywheel over a period of time, at a rate that is compatible with the energy source, and then releasing energy at a much higher rate over a relatively short time when it is needed. For example, flywheels are used in power hammers and riveting machines. Flywheels can be used to control direction and oppose unwanted motions. Flywheels in this context have a wide range of applications: gyroscopes for instrumentation, ship stability, satellite stabilization (reaction wheel), keeping a toy top spinning (friction motor), and stabilizing magnetically-levitated objects (spin-stabilized magnetic levitation). Flywheels may also be used as an electric compensator, like a synchronous compensator, that can either produce or sink reactive power but does not affect the real power. The purposes for that application are to improve the power factor of the system or adjust the grid voltage. Typically, the flywheels used in this field are similar in structure and installation to the synchronous motor (but it is called a synchronous compensator or synchronous condenser in this context). There are also some other kinds of compensator using flywheels, like the single-phase induction machine. But the basic ideas here are the same: the flywheel is controlled to spin at exactly the frequency to be compensated. For a synchronous compensator, the voltage of rotor and stator also must be kept in phase, which is the same as keeping the magnetic field of the rotor and the total magnetic field in phase (in the rotating frame reference).
Technology
Rigid components
null
89489
https://en.wikipedia.org/wiki/Inequality%20%28mathematics%29
Inequality (mathematics)
In mathematics, an inequality is a relation which makes a non-equal comparison between two numbers or other mathematical expressions. It is used most often to compare two numbers on the number line by their size. The main types of inequality are less than (<) and greater than (>). Notation There are several different notations used to represent different kinds of inequalities: The notation a < b means that a is less than b. The notation a > b means that a is greater than b. In either case, a is not equal to b. These relations are known as strict inequalities, meaning that a is strictly less than or strictly greater than b. Equality is excluded. In contrast to strict inequalities, there are two types of inequality relations that are not strict: The notation a ≤ b or a ⩽ b or a ≦ b means that a is less than or equal to b (or, equivalently, at most b, or not greater than b). The notation a ≥ b or a ⩾ b or a ≧ b means that a is greater than or equal to b (or, equivalently, at least b, or not less than b). In the 17th and 18th centuries, personal notations or typewriting signs were used to signal inequalities. For example, In 1670, John Wallis used a single horizontal bar above rather than below the < and >. Later in 1734, ≦ and ≧, known as "less than (greater-than) over equal to" or "less than (greater than) or equal to with double horizontal bars", first appeared in Pierre Bouguer's work . After that, mathematicians simplified Bouguer's symbol to "less than (greater than) or equal to with one horizontal bar" (≤), or "less than (greater than) or slanted equal to" (⩽). The relation not greater than can also be represented by the symbol for "greater than" bisected by a slash, "not". The same is true for not less than, The notation a ≠ b means that a is not equal to b; this inequation sometimes is considered a form of strict inequality. It does not say that one is greater than the other; it does not even require a and b to be member of an ordered set. In engineering sciences, less formal use of the notation is to state that one quantity is "much greater" than another, normally by several orders of magnitude. The notation a ≪ b means that a is much less than b. The notation a ≫ b means that a is much greater than b. This implies that the lesser value can be neglected with little effect on the accuracy of an approximation (such as the case of ultrarelativistic limit in physics). In all of the cases above, any two symbols mirroring each other are symmetrical; a < b and b > a are equivalent, etc. Properties on the number line Inequalities are governed by the following properties. All of these properties also hold if all of the non-strict inequalities (≤ and ≥) are replaced by their corresponding strict inequalities (< and >) and — in the case of applying a function — monotonic functions are limited to strictly monotonic functions. Converse The relations ≤ and ≥ are each other's converse, meaning that for any real numbers a and b: Transitivity The transitive property of inequality states that for any real numbers a, b, c: If either of the premises is a strict inequality, then the conclusion is a strict inequality: Addition and subtraction A common constant c may be added to or subtracted from both sides of an inequality. So, for any real numbers a, b, c: In other words, the inequality relation is preserved under addition (or subtraction) and the real numbers are an ordered group under addition. 
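The transitivity and addition properties just stated are easy to spot-check numerically. The sketch below is a toy verification over random samples, not a proof; the sample size and ranges are arbitrary.

    import random

    random.seed(0)

    for _ in range(10_000):
        a, b, c = (random.uniform(-100, 100) for _ in range(3))

        # Transitivity: if a <= b and b <= c, then a <= c.
        if a <= b and b <= c:
            assert a <= c

        # Adding or subtracting the same constant preserves the inequality.
        if a <= b:
            assert a + c <= b + c
            assert a - c <= b - c

    print("no counterexamples found in 10,000 random trials")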
Multiplication and division The properties that deal with multiplication and division state that for any real numbers, a, b and non-zero c: In other words, the inequality relation is preserved under multiplication and division with positive constant, but is reversed when a negative constant is involved. More generally, this applies for an ordered field. For more information, see § Ordered fields. Additive inverse The property for the additive inverse states that for any real numbers a and b: Multiplicative inverse If both numbers are positive, then the inequality relation between the multiplicative inverses is opposite of that between the original numbers. More specifically, for any non-zero real numbers a and b that are both positive (or both negative): All of the cases for the signs of a and b can also be written in chained notation, as follows: Applying a function to both sides Any monotonically increasing function, by its definition, may be applied to both sides of an inequality without breaking the inequality relation (provided that both expressions are in the domain of that function). However, applying a monotonically decreasing function to both sides of an inequality means the inequality relation would be reversed. The rules for the additive inverse, and the multiplicative inverse for positive numbers, are both examples of applying a monotonically decreasing function. If the inequality is strict (a < b, a > b) and the function is strictly monotonic, then the inequality remains strict. If only one of these conditions is strict, then the resultant inequality is non-strict. In fact, the rules for additive and multiplicative inverses are both examples of applying a strictly monotonically decreasing function. A few examples of this rule are: Raising both sides of an inequality to a power n > 0 (equiv., −n < 0), when a and b are positive real numbers: Taking the natural logarithm on both sides of an inequality, when a and b are positive real numbers: (this is true because the natural logarithm is a strictly increasing function.) Formal definitions and generalizations A (non-strict) partial order is a binary relation ≤ over a set P which is reflexive, antisymmetric, and transitive. That is, for all a, b, and c in P, it must satisfy the three following clauses: a ≤ a (reflexivity) if a ≤ b and b ≤ a, then a = b (antisymmetry) if a ≤ b and b ≤ c, then a ≤ c (transitivity) A set with a partial order is called a partially ordered set. Those are the very basic axioms that every kind of order has to satisfy. A strict partial order is a relation < that satisfies a ≮ a (irreflexivity), if a < b, then b ≮ a (asymmetry), if a < b and b < c, then a < c (transitivity), where means that does not hold. Some types of partial orders are specified by adding further axioms, such as: Total order: For every a and b in P, a ≤ b or b ≤ a . Dense order: For all a and b in P for which a < b, there is a c in P such that a < c < b. Least-upper-bound property: Every non-empty subset of P with an upper bound has a least upper bound (supremum) in P. Ordered fields If (F, +, ×) is a field and ≤ is a total order on F, then (F, +, ×, ≤) is called an ordered field if and only if: a ≤ b implies a + c ≤ b + c; 0 ≤ a and 0 ≤ b implies 0 ≤ a × b. Both and are ordered fields, but cannot be defined in order to make an ordered field, because −1 is the square of i and would therefore be positive. Besides being an ordered field, R also has the Least-upper-bound property. 
In fact, R can be defined as the only ordered field with that quality. Chained notation The notation a < b < c stands for "a < b and b < c", from which, by the transitivity property above, it also follows that a < c. By the above laws, one can add or subtract the same number to all three terms, or multiply or divide all three terms by the same nonzero number and reverse all inequalities if that number is negative. Hence, for example, a < b + e < c is equivalent to a − e < b < c − e. This notation can be generalized to any number of terms: for instance, a1 ≤ a2 ≤ ... ≤ an means that ai ≤ ai+1 for i = 1, 2, ..., n − 1. By transitivity, this condition is equivalent to ai ≤ aj for any 1 ≤ i ≤ j ≤ n. When solving inequalities using chained notation, it is possible and sometimes necessary to evaluate the terms independently. For instance, to solve the inequality 4x < 2x + 1 ≤ 3x + 2, it is not possible to isolate x in any one part of the inequality through addition or subtraction. Instead, the inequalities must be solved independently, yielding x < 1/2 and x ≥ −1 respectively, which can be combined into the final solution −1 ≤ x < 1/2. Occasionally, chained notation is used with inequalities in different directions, in which case the meaning is the logical conjunction of the inequalities between adjacent terms. For example, the defining condition of a zigzag poset is written as a1 < a2 > a3 < a4 > a5 < a6 > ... . Mixed chained notation is used more often with compatible relations, like <, =, ≤. For instance, a < b = c ≤ d means that a < b, b = c, and c ≤ d. This notation exists in a few programming languages such as Python. In contrast, in programming languages that provide an ordering on the type of comparison results, such as C, even homogeneous chains may have a completely different meaning. Sharp inequalities An inequality is said to be sharp if it cannot be relaxed and still be valid in general. Formally, a universally quantified inequality φ is called sharp if, for every valid universally quantified inequality ψ, if ψ ⇒ φ holds, then ψ ⇔ φ also holds. For instance, the inequality ∀a ∈ ℝ: a² ≥ 0 is sharp, whereas the inequality ∀a ∈ ℝ: a² ≥ −1 is not sharp. Inequalities between means There are many inequalities between means. For example, for any positive numbers a1, a2, ..., an we have H ≤ G ≤ A ≤ Q, where these represent the following means of the sequence: Harmonic mean: H = n / (1/a1 + 1/a2 + ... + 1/an). Geometric mean: G = (a1 a2 ··· an)^(1/n). Arithmetic mean: A = (a1 + a2 + ... + an) / n. Quadratic mean: Q = √((a1² + a2² + ... + an²) / n). Cauchy–Schwarz inequality The Cauchy–Schwarz inequality states that for all vectors u and v of an inner product space it is true that |⟨u, v⟩|² ≤ ⟨u, u⟩ · ⟨v, v⟩, where ⟨·,·⟩ is the inner product. Examples of inner products include the real and complex dot product. In Euclidean space Rn with the standard inner product, the Cauchy–Schwarz inequality is (Σ ui vi)² ≤ (Σ ui²)(Σ vi²). Power inequalities A power inequality is an inequality containing terms of the form ab, where a and b are real positive numbers or variable expressions. They often appear in mathematical olympiads exercises. Examples: For any real x, If x > 0 and p > 0, then In the limit of p → 0, the upper and lower bounds converge to ln(x). If x > 0, then If x > 0, then If x, y, z > 0, then For any real distinct numbers a and b, If x, y > 0 and 0 < p < 1, then If x, y, z > 0, then If a, b > 0, then If a, b > 0, then If a, b, c > 0, then If a, b > 0, then Well-known inequalities Mathematicians often use inequalities to bound quantities for which exact formulas cannot be computed easily. 
Some inequalities are used so often that they have names: Azuma's inequality Bernoulli's inequality Bell's inequality Boole's inequality Cauchy–Schwarz inequality Chebyshev's inequality Chernoff's inequality Cramér–Rao inequality Hoeffding's inequality Hölder's inequality Inequality of arithmetic and geometric means Jensen's inequality Kolmogorov's inequality Markov's inequality Minkowski inequality Nesbitt's inequality Pedoe's inequality Poincaré inequality Samuelson's inequality Sobolev inequality Triangle inequality Complex numbers and inequalities The set of complex numbers C with its operations of addition and multiplication is a field, but it is impossible to define any relation ≤ so that (C, +, ×, ≤) becomes an ordered field. To make (C, +, ×, ≤) an ordered field, it would have to satisfy the following two properties: if a ≤ b, then a + c ≤ b + c; if 0 ≤ a and 0 ≤ b, then 0 ≤ ab. Because ≤ is a total order, for any number a, either 0 ≤ a or a ≤ 0 (in which case the first property above implies that 0 ≤ −a). In either case 0 ≤ a²; this means that 0 ≤ i² = −1 and 0 ≤ 1² = 1; adding 1 to both sides of 0 ≤ −1 gives 1 ≤ 0, which together with 0 ≤ 1 forces 0 = 1; contradiction. However, an operation ≤ can be defined so as to satisfy only the first property (namely, "if a ≤ b, then a + c ≤ b + c"). Sometimes the lexicographical order definition is used: a ≤ b, if Re(a) < Re(b), or Re(a) = Re(b) and Im(a) ≤ Im(b). It can easily be proven that for this definition a ≤ b implies a + c ≤ b + c. Systems of inequalities Systems of linear inequalities can be simplified by Fourier–Motzkin elimination. The cylindrical algebraic decomposition is an algorithm that allows testing whether a system of polynomial equations and inequalities has solutions, and, if solutions exist, describing them. The complexity of this algorithm is doubly exponential in the number of variables. It is an active research domain to design algorithms that are more efficient in specific cases.
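The mean inequalities and the Cauchy–Schwarz inequality stated above lend themselves to a numerical spot-check. The sketch below is illustrative only (random data, small tolerances for floating-point error), not a proof.

    import math
    import random

    random.seed(1)

    def means(xs):
        """Harmonic, geometric, arithmetic and quadratic means of positive numbers."""
        n = len(xs)
        harmonic = n / sum(1.0 / x for x in xs)
        geometric = math.exp(sum(math.log(x) for x in xs) / n)
        arithmetic = sum(xs) / n
        quadratic = math.sqrt(sum(x * x for x in xs) / n)
        return harmonic, geometric, arithmetic, quadratic

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    for _ in range(1_000):
        xs = [random.uniform(0.1, 10.0) for _ in range(5)]
        h, g, a, q = means(xs)
        # HM <= GM <= AM <= QM
        assert h <= g + 1e-12 and g <= a + 1e-12 and a <= q + 1e-12

        # Cauchy-Schwarz in R^n with the dot product: (u.v)^2 <= (u.u)(v.v)
        u = [random.uniform(-5, 5) for _ in range(5)]
        v = [random.uniform(-5, 5) for _ in range(5)]
        assert dot(u, v) ** 2 <= dot(u, u) * dot(v, v) + 1e-9

    print("mean chain and Cauchy-Schwarz hold on all sampled cases")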
Mathematics
Algebra
null
89529
https://en.wikipedia.org/wiki/Wallaby
Wallaby
A wallaby () is a small or middle-sized macropod native to Australia and New Guinea, with introduced populations in New Zealand, Hawaii, the United Kingdom and other countries. They belong to the same taxonomic family as kangaroos and sometimes the same genus, but kangaroos are specifically categorised into the four largest species of the family. The term "wallaby" is an informal designation generally used for any macropod that is smaller than a kangaroo or a wallaroo that has not been designated otherwise. There are nine species (eight extant and one extinct) of the brush wallaby (genus Notamacropus). Their head and body length is and the tail is long. The 19 known species of rock-wallabies (genus Petrogale) live among rocks, usually near water; two species in this genus are endangered. The two living species of hare-wallabies (genus Lagorchestes; two other species in this genus are extinct) are small animals that have the movements and some of the habits of hares. The three species (two extant and one extinct) of nail-tail wallabies (genus Onychogalea) have one notable feature: a horny spur at the tip of the tail; its function is unknown. The seven species of pademelons or scrub wallabies (genus Thylogale) of New Guinea, the Bismarck Archipelago, and Tasmania are small and stocky, with short hind limbs and pointed noses. The swamp wallaby (genus Wallabia) is the only species in its genus. Another wallaby that is monotypic is the quokka or short-tailed scrub wallaby (genus Setonix); this species is now restricted to two offshore islands of Western Australia which are free of introduced predators. The seven species of dorcopsises or forest wallabies (genera Dorcopsis (four species, with a fifth as yet undescribed) and Dorcopsulus (two species)) are all native to the island of New Guinea. One of the brush wallaby species, the dwarf wallaby (Notamacropus dorcopsulus), also native to New Guinea, is the smallest known wallaby species and one of the smallest known macropods. Its length is about from the nose to the end of the tail, and it weighs about . Wallabies are hunted for meat and fur. Etymology and terminology The name wallaby comes from Dharug walabi or waliba. Another early name for the wallaby, in use from at least 1802, was the brush-kangaroo. Young wallabies are referred to as "joeys", like many other marsupials. Adult male wallabies are referred to as "bucks", "boomers", or "jacks". Adult female wallabies are referred to as "does", "flyers", or "jills". A group of wallabies is called a "mob", "court", or "troupe". Scrub-dwelling and forest-dwelling wallabies are known as "pademelons" (genus Thylogale) and "dorcopsises" (genera Dorcopsis and Dorcopsulus), respectively. General description Although members of most wallaby species are small, some can grow up to approximately two metres in length (from the head to the end of the tail). Their powerful hind legs are not only used for bounding at high speeds and jumping great heights, but also to administer vigorous kicks to fend off potential predators. The tammar wallaby (Notamacropus eugenii) has elastic storage in the ankle extensor tendons, without which the animal's metabolic rate might be 30–50% greater. It has also been found that the design of spring-like tendon energy savings and economical muscle force generation is key for the two distal muscle–tendon units of the tammar wallaby (Macropus-Eugenii). Wallabies also have a powerful tail that is used mostly for balance and support. 
Diet Wallabies are herbivores whose diet consists of a wide range of grasses, vegetables, leaves and other foliage. Due to recent urbanization, many wallabies now feed in rural and urban areas. Wallabies cover vast distances for food and water, which is often scarce in their environment. Mobs of wallabies often congregate around the same water hole during the dry season. Threats Wallabies face several threats. Dingoes, domestic and feral dogs, feral cats, and red foxes are among their predators. Humans also pose a significant threat to wallabies due to increased interaction (wallabies can defend themselves with hard kicks and biting). Many wallabies have been involved in vehicular accidents, as they often feed near roads and urban areas. Classification Wallabies are not a distinct genetic group. Nevertheless, they fall into several broad categories. Brush wallabies of the genus Notamacropus, like the agile wallaby (Notamacropus agilis) and the red-necked wallaby (Notamacropus rufogriseus), are most closely related to the kangaroos and wallaroos and, aside from their size, look very similar. These are the ones most frequently seen, particularly in the southern states. Rock-wallabies (genus Petrogale), rather like the goats of the Northern Hemisphere, specialise in rugged terrain and have modified feet adapted to grip rock with skin friction rather than dig into soil with large claws. There are at least 19 species and the relationship between several of them is still poorly understood. Several species are endangered. Captive rock-wallaby breeding programs, like the one at Healesville Sanctuary, have had some success and a small number have recently been released into the wild. The banded hare-wallaby (Lagostrophus fasciatus) is thought to be the last remaining member of the once numerous subfamily Sthenurinae, and although once common across southern Australia, it is now restricted to two islands off the Western Australian coast which are free of introduced predators. It is not as closely related to the other hare-wallabies (genus Lagorchestes) as the hare-wallabies are to the other wallabies. New Guinea, which was, until fairly recent geological times, part of mainland Australia, has at least five species of wallabies. Natural range and habitat Wallabies are widely distributed across Australia, particularly in more remote, heavily timbered, or rugged areas, less so on the great semi-arid plains that are better suited to the larger, leaner, and more fleet-footed kangaroos. They also can be found on the island of New Guinea. Introduced populations Wallabies of several species have been introduced to other parts of the world, and there are a number of successfully breeding introduced populations, including: Kawau Island in New Zealand is home to large numbers of tammar, Parma, swamp and brush-tailed rock-wallabies from introductions made around 1870. They are considered pests on the island, but a programme to re-introduce them to Australia has met with only limited success. The Lake Tarawera area of New Zealand has a large tammar wallaby population. The South Canterbury district of New Zealand has a large population of Bennett's wallabies. On the Isle of Man in the Ballaugh Curraghs area, there is a population of around 560 red-necked wallabies, descended from a pair that escaped from the nearby Curraghs Wildlife Park in 1970. 
Hawaii has a small non-native population of wallabies in the upper regions of Kalihi Valley on the island of Oahu arising from an escape of zoo specimens of the brush-tailed rock-wallaby (Petrogale penicillata) in 1916. In the Peak District of England, a population was established around 1940 by five escapees from a local zoo, and as of September 2017, sightings were still being made in the area. At its peak in 1975, the population numbered around 60 individuals. The island of Inchconnachan in Loch Lomond, Scotland, has a population of around 28 red-necked wallabies introduced by Lady Colquhoun in the 1920s. Eradication to protect the native capercaillie has been proposed. There is also a small population on Lambay Island off the eastern coast of Ireland. Initially introduced in the 1950s and 1960s, more were introduced in the 1980s after a sudden population explosion at the Dublin Zoo. Populations in the United Kingdom that, for some periods, bred successfully included one near Teignmouth, Devon, another in the Ashdown Forest, East Sussex, Cornwall and one on the islands of Bute and Lundy. It has recently been reported by walkers in the Lickey Hills Country Park area of Birmingham that a pair of wallabies have been released or are loose there (East Tunnock Rambling Club Meeting, December 2010). In France, in the southern part of the Forest of Rambouillet, about west of Paris, there is a wild group of around 30 Bennett's wallabies. This population has been present since the 1970s, when some individuals escaped from the zoological park of Émancé after a storm. Species The term "wallaby" is not well defined and can mean any macropod of moderate or small size. Therefore, the listing below is arbitrary and taken from the complete list of macropods. Genus Notamacropus Agile wallaby (Notamacropus agilis) Black-striped wallaby (Notamacropus dorsalis) Parma wallaby (Notamacropus parma) (rediscovered, thought to have been extinct for 100 years) Red-necked wallaby or Bennett's wallaby (Notamacropus rufogriseus) Tammar wallaby (Notamacropus eugenii) Toolache wallaby (Notamacropus greyi) †(extinct) Western brush wallaby (Notamacropus irma) Whiptail wallaby (Notamacropus parryi) Genus Wallabia Swamp wallaby or black wallaby (Wallabia bicolor) Genus Petrogale Allied rock-wallaby (Petrogale assimilis) Black-flanked rock-wallaby (Petrogale lateralis) Brush-tailed rock-wallaby (Petrogale penicillata) Cape York rock-wallaby (Petrogale coenensis) Eastern short-eared rock-wallaby (Petrogale wilkinsi) Godman's rock-wallaby (Petrogale godmani) Herbert's rock-wallaby (Petrogale herberti) Mareeba rock-wallaby (Petrogale mareeba) Monjon (Petrogale burbidgei) Mount Claro rock-wallaby (Petrogale sharmani) Nabarlek (Petrogale concinna) Proserpine rock-wallaby (Petrogale persephone) Purple-necked rock-wallaby (Petrogale purpureicollis) Rothschild's rock-wallaby (Petrogale rothschildi) Short-eared rock-wallaby (Petrogale brachyotis) Unadorned rock-wallaby (Petrogale inornata) Yellow-footed rock-wallaby (Petrogale xanthopus) Genus Lagostrophus Banded hare-wallaby (Lagostrophus fasciatus) Genus Lagorchestes Eastern hare-wallaby (Lagorchestes leporides) †(extinct) Lake Mackay hare-wallaby (Lagorchestes asomatus) †(extinct) Rufous hare-wallaby (Lagorchestes hirsutus) Spectacled hare-wallaby (Lagorchestes conspicillatus)) Genus Onychogalea Bridled nail-tail wallaby (Onychogalea fraenata) Crescent nail-tail wallaby (Onychogalea lunata) † (extinct) Northern nail-tail wallaby (Onychogalea unguifera) Genus Dorcopsis Black 
dorcopsis (Dorcopsis atrata) Brown dorcopsis (Dorcopsis muelleri) Gray dorcopsis (Dorcopsis luctuosa) White-striped dorcopsis (Dorcopsis hageni) Genus Dorcopsulus Macleay's dorcopsis (Dorcopsulus macleayi) Small dorcopsis (Dorcopsulus vanhuemi) Genus Thylogale Brown's pademelon (Thylogale browni) Calaby's pademelon (Thylogale calabyi) Dusky pademelon (Thylogale brunii) Mountain pademelon (Thylogale lanatus) Red-legged pademelon (Thylogale stigmatica) Red-necked pademelon (Thylogale thetis) Tasmanian pademelon (Thylogale billardierii) Genus Setonix Quokka or short-tailed scrub wallaby (Setonix brachyurus)
Biology and health sciences
Diprotodontia
Animals
89547
https://en.wikipedia.org/wiki/Water%20vapor
Water vapor
Water vapor, water vapour or aqueous vapor is the gaseous phase of water. It is one state of water within the hydrosphere. Water vapor can be produced from the evaporation or boiling of liquid water or from the sublimation of ice. Water vapor is transparent, like most constituents of the atmosphere. Under typical atmospheric conditions, water vapor is continuously generated by evaporation and removed by condensation. It is less dense than most of the other constituents of air and triggers convection currents that can lead to clouds and fog. Being a component of Earth's hydrosphere and hydrologic cycle, it is particularly abundant in Earth's atmosphere, where it acts as a greenhouse gas and warming feedback, contributing more to total greenhouse effect than non-condensable gases such as carbon dioxide and methane. Use of water vapor, as steam, has been important for cooking, and as a major component in energy production and transport systems since the industrial revolution. Water vapor is a relatively common atmospheric constituent, present even in the solar atmosphere as well as every planet in the Solar System and many astronomical objects including natural satellites, comets and even large asteroids. Likewise the detection of extrasolar water vapor would indicate a similar distribution in other planetary systems. Water vapor can also be indirect evidence supporting the presence of extraterrestrial liquid water in the case of some planetary mass objects. Water vapor, which reacts to temperature changes, is referred to as a 'feedback', because it amplifies the effect of forces that initially cause the warming. Therefore, it is a greenhouse gas. Properties Evaporation Whenever a water molecule leaves a surface and diffuses into a surrounding gas, it is said to have evaporated. Each individual water molecule which transitions between a more associated (liquid) and a less associated (vapor/gas) state does so through the absorption or release of kinetic energy. The aggregate measurement of this kinetic energy transfer is defined as thermal energy and occurs only when there is differential in the temperature of the water molecules. Liquid water that becomes water vapor takes a parcel of heat with it, in a process called evaporative cooling. The amount of water vapor in the air determines how frequently molecules will return to the surface. When a net evaporation occurs, the body of water will undergo a net cooling directly related to the loss of water. In the US, the National Weather Service measures the actual rate of evaporation from a standardized "pan" open water surface outdoors, at various locations nationwide. Others do likewise around the world. The US data is collected and compiled into an annual evaporation map. The measurements range from under 30 to over 120 inches per year. Formulas can be used for calculating the rate of evaporation from a water surface such as a swimming pool. In some countries, the evaporation rate far exceeds the precipitation rate. Evaporative cooling is restricted by atmospheric conditions. Humidity is the amount of water vapor in the air. The vapor content of air is measured with devices known as hygrometers. The measurements are usually expressed as specific humidity or percent relative humidity. The temperatures of the atmosphere and the water surface determine the equilibrium vapor pressure; 100% relative humidity occurs when the partial pressure of water vapor is equal to the equilibrium vapor pressure. 
This condition is often referred to as complete saturation. Humidity ranges from 0 grams per cubic metre in dry air to 30 grams per cubic metre (0.03 ounce per cubic foot) when the vapor is saturated at 30 °C. Sublimation Sublimation is the process by which water molecules directly leave the surface of ice without first becoming liquid water. Sublimation accounts for the slow mid-winter disappearance of ice and snow at temperatures too low to cause melting. Antarctica shows this effect to a unique degree because it is by far the continent with the lowest rate of precipitation on Earth. As a result, there are large areas where millennial layers of snow have sublimed, leaving behind whatever non-volatile materials they had contained. This is extremely valuable to certain scientific disciplines, a dramatic example being the collection of meteorites that are left exposed in unparalleled numbers and excellent states of preservation. Sublimation is important in the preparation of certain classes of biological specimens for scanning electron microscopy. Typically the specimens are prepared by cryofixation and freeze-fracture, after which the broken surface is freeze-etched, being eroded by exposure to vacuum until it shows the required level of detail. This technique can display protein molecules, organelle structures and lipid bilayers with very low degrees of distortion. Condensation Water vapor will only condense onto another surface when that surface is cooler than the dew point temperature, or when the water vapor equilibrium in air has been exceeded. When water vapor condenses onto a surface, a net warming occurs on that surface. The water molecule brings heat energy with it. In turn, the temperature of the atmosphere drops slightly. In the atmosphere, condensation produces clouds, fog and precipitation (usually only when facilitated by cloud condensation nuclei). The dew point of an air parcel is the temperature to which it must cool before water vapor in the air begins to condense. Condensation in the atmosphere forms cloud droplets. Also, a net condensation of water vapor occurs on surfaces when the temperature of the surface is at or below the dew point temperature of the atmosphere. Deposition is a phase transition separate from condensation which leads to the direct formation of ice from water vapor. Frost and snow are examples of deposition. There are several mechanisms of cooling by which condensation occurs: 1) Direct loss of heat by conduction or radiation. 2) Cooling from the drop in air pressure which occurs with uplift of air, also known as adiabatic cooling. Air can be lifted by mountains, which deflect the air upward, by convection, and by cold and warm fronts. 3) Advective cooling - cooling due to horizontal movement of air. Importance and Uses Provides water for plants and animals: Water vapour gets converted to rain and snow that serve as a natural source of water for plants and animals. Controls evaporation: Excess water vapor in the air decreases the rate of evaporation. Determines climatic conditions: Excess water vapor in the air produces rain, fog, snow etc. Hence, it determines climatic conditions. Chemical reactions A number of chemical reactions have water as a product. If the reactions take place at temperatures higher than the dew point of the surrounding air the water will be formed as vapor and increase the local humidity, if below the dew point local condensation will occur. 
Typical reactions that result in water formation are the burning of hydrogen or hydrocarbons in air or other oxygen-containing gas mixtures, or as a result of reactions with oxidizers. In a similar fashion other chemical or physical reactions can take place in the presence of water vapor resulting in new chemicals forming such as rust on iron or steel, polymerization occurring (certain polyurethane foams and cyanoacrylate glues cure with exposure to atmospheric humidity) or forms changing such as where anhydrous chemicals may absorb enough vapor to form a crystalline structure or alter an existing one, sometimes resulting in characteristic color changes that can be used for measurement. Measurement Measuring the quantity of water vapor in a medium can be done directly or remotely with varying degrees of accuracy. Remote methods such as electromagnetic absorption are possible from satellites above planetary atmospheres. Direct methods may use electronic transducers, moistened thermometers or hygroscopic materials measuring changes in physical properties or dimensions. Impact on air density Water vapor is lighter or less dense than dry air. At equivalent temperatures it is buoyant with respect to dry air, whereby the density of dry air at standard temperature and pressure (273.15 K, 101.325 kPa) is 1.27 g/L and water vapor at standard temperature has a vapor pressure of 0.6 kPa and the much lower density of 0.0048 g/L. Calculations Water vapor and dry air density calculations at 0 °C: The molar mass of water is 18.02 g/mol, as calculated from the sum of the atomic masses of its constituent atoms. The average molar mass of air (approx. 78% nitrogen, N2; 21% oxygen, O2; 1% other gases) is 28.57 g/mol at standard temperature and pressure (STP). Obeying Avogadro's Law and the ideal gas law, moist air will have a lower density than dry air. At max. saturation (i.e. rel. humidity = 100% at 0 °C) the average molar mass will go down to 28.51 g/mol. STP conditions imply a temperature of 0 °C, at which the ability of water to become vapor is very restricted. Its concentration in air is very low at 0 °C. The red line on the chart to the right is the maximum concentration of water vapor expected for a given temperature. The water vapor concentration increases significantly as the temperature rises, approaching 100% (steam, pure water vapor) at 100 °C. However the difference in densities between air and water vapor would still exist (0.598 vs. 1.27 g/L). At equal temperatures At the same temperature, a column of dry air will be denser or heavier than a column of air containing any water vapor, the molar mass of diatomic nitrogen and diatomic oxygen both being greater than the molar mass of water. Thus, any volume of dry air will sink if placed in a larger volume of moist air. Also, a volume of moist air will rise or be buoyant if placed in a larger region of dry air. As the temperature rises the proportion of water vapor in the air increases, and its buoyancy will increase. The increase in buoyancy can have a significant atmospheric impact, giving rise to powerful, moisture rich, upward air currents when the air temperature and sea temperature reaches 25 °C or above. This phenomenon provides a significant driving force for cyclonic and anticyclonic weather systems (typhoons and hurricanes). Respiration and breathing Water vapor is a by-product of respiration in plants and animals. Its contribution to the pressure increases as its concentration increases. 
Its partial pressure contribution to air pressure increases, lowering the partial pressure contribution of the other atmospheric gases (Dalton's Law). The total air pressure must remain constant. The presence of water vapor in the air naturally dilutes or displaces the other air components as its concentration increases. This can have an effect on respiration. In very warm air (35 °C) the proportion of water vapor is large enough to give rise to the stuffiness that can be experienced in humid jungle conditions or in poorly ventilated buildings. Lifting gas Water vapor has lower density than that of air and is therefore buoyant in air but has lower vapor pressure than that of air. When water vapor is used as a lifting gas by a thermal airship the water vapor is heated to form steam so that its vapor pressure is greater than the surrounding air pressure in order to maintain the shape of a theoretical "steam balloon", which yields approximately 60% the lift of helium and twice that of hot air. General discussion The amount of water vapor in an atmosphere is constrained by the restrictions of partial pressures and temperature. Dew point temperature and relative humidity act as guidelines for the process of water vapor in the water cycle. Energy input, such as sunlight, can trigger more evaporation on an ocean surface or more sublimation on a chunk of ice on top of a mountain. The balance between condensation and evaporation gives the quantity called vapor partial pressure. The maximum partial pressure (saturation pressure) of water vapor in air varies with temperature of the air and water vapor mixture. A variety of empirical formulas exist for this quantity; the most used reference formula is the Goff-Gratch equation for the SVP over liquid water below zero degrees Celsius, in which the temperature of the moist air is given in units of kelvins and the resulting saturation vapor pressure is given in units of millibars (hectopascals). The formula is valid from about −50 to 102 °C; however there are a very limited number of measurements of the vapor pressure of water over supercooled liquid water. There are a number of other formulae which can be used. Under certain conditions, such as when the boiling temperature of water is reached, a net evaporation will always occur during standard atmospheric conditions regardless of the percent of relative humidity. This immediate process will dispel massive amounts of water vapor into a cooler atmosphere. Exhaled air is almost fully at equilibrium with water vapor at the body temperature. In the cold air the exhaled vapor quickly condenses, thus showing up as a fog or mist of water droplets and as condensation or frost on surfaces. Forcibly condensing these water droplets from exhaled breath is the basis of exhaled breath condensate, an evolving medical diagnostic test. Controlling water vapor in air is a key concern in the heating, ventilating, and air-conditioning (HVAC) industry. Thermal comfort depends on the moist air conditions. Non-human comfort situations are called refrigeration, and also are affected by water vapor. For example, many food stores, like supermarkets, utilize open chiller cabinets, or food cases, which can significantly lower the water vapor pressure (lowering humidity). This practice delivers several benefits as well as problems. 
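The effect of humidity on air density described in the preceding sections can be estimated with a short calculation. The sketch below is an approximation: it uses the Magnus formula for saturation vapour pressure rather than the Goff–Gratch equation cited above, treats both gases as ideal, and the temperatures, pressure, and constants are illustrative rounded values.

    import math

    R = 8.314462          # J/(mol K), universal gas constant
    M_DRY = 0.0289647     # kg/mol, dry air (approximate)
    M_H2O = 0.0180153     # kg/mol, water

    def saturation_vapor_pressure_pa(t_celsius):
        """Magnus approximation for saturation vapour pressure over liquid water (Pa)."""
        return 610.94 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

    def moist_air_density(t_celsius, rel_humidity, pressure_pa=101_325.0):
        """Ideal-gas density of moist air (kg/m^3) at relative humidity 0-1."""
        t_kelvin = t_celsius + 273.15
        p_vapor = rel_humidity * saturation_vapor_pressure_pa(t_celsius)
        p_dry = pressure_pa - p_vapor
        return (p_dry * M_DRY + p_vapor * M_H2O) / (R * t_kelvin)

    for t in (0, 15, 30):
        dry = moist_air_density(t, 0.0)
        sat = moist_air_density(t, 1.0)
        print(f"{t:>2} C: dry {dry:.4f} kg/m^3, saturated {sat:.4f} kg/m^3")

The output illustrates the point made above: at any fixed temperature the saturated column is less dense than the dry one, and the gap widens as the temperature rises, because warmer air can hold a larger fraction of the lighter water molecules.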
Over 99% of atmospheric water is in the form of vapor, rather than liquid water or ice, and approximately 99.13% of the water vapor is contained in the troposphere. The condensation of water vapor to the liquid or ice phase is responsible for clouds, rain, snow, and other precipitation, all of which count among the most significant elements of what we experience as weather. Less obviously, the latent heat of vaporization, which is released to the atmosphere whenever condensation occurs, is one of the most important terms in the atmospheric energy budget on both local and global scales. For example, latent heat release in atmospheric convection is directly responsible for powering destructive storms such as tropical cyclones and severe thunderstorms. Water vapor is an important greenhouse gas owing to the presence of the hydroxyl bond, which strongly absorbs in the infrared.
Water vapor is the "working medium" of the atmospheric thermodynamic engine which transforms heat energy from solar irradiation into mechanical energy in the form of winds. Transforming thermal energy into mechanical energy requires an upper and a lower temperature level, as well as a working medium which shuttles back and forth between both. The upper temperature level is given by the soil or water surface of the Earth, which absorbs the incoming solar radiation and warms up, evaporating water. The moist and warm air at the ground is lighter than its surroundings and rises up to the upper limit of the troposphere. There the water molecules radiate their thermal energy into outer space, cooling down the surrounding air. The upper atmosphere constitutes the lower temperature level of the atmospheric thermodynamic engine. The water vapor in the now cold air condenses out and falls down to the ground in the form of rain or snow. The now heavier cold and dry air sinks down to the ground as well; the atmospheric thermodynamic engine thus establishes a vertical convection, which transports heat from the ground into the upper atmosphere, where the water molecules can radiate it to outer space. Due to the Earth's rotation and the resulting Coriolis forces, this vertical atmospheric convection is also converted into a horizontal convection, in the form of cyclones and anticyclones, which transport the water evaporated over the oceans into the interior of the continents, enabling vegetation to grow.
Water in Earth's atmosphere is not merely below its boiling point (100 °C); at altitude it also goes below its freezing point (0 °C), due to water's highly polar attraction. When combined with its quantity, water vapor then has a relevant dew point and frost point, unlike, for example, carbon dioxide and methane. Water vapor thus has a scale height a fraction of that of the bulk atmosphere, as the water condenses and exits, primarily in the troposphere, the lowest layer of the atmosphere. Carbon dioxide (CO2) and methane, being well mixed in the atmosphere, tend to rise above water vapor. The absorption and emission of both compounds contribute to Earth's emission to space, and thus to the planetary greenhouse effect. This greenhouse forcing is directly observable, via distinct spectral features versus water vapor, and is observed to be rising with rising CO2 levels. Conversely, adding water vapor at high altitudes has a disproportionate impact, which is why jet traffic has a disproportionately high warming effect. Oxidation of methane is also a major source of water vapor in the stratosphere, and adds about 15% to methane's global warming effect.
In the absence of other greenhouse gases, Earth's water vapor would condense to the surface; this has likely happened, possibly more than once. Scientists thus distinguish between non-condensable (driving) and condensable (driven) greenhouse gases, i.e., the water vapor feedback described above.
Fog and clouds form through condensation around cloud condensation nuclei. In the absence of nuclei, condensation will only occur at much lower temperatures. Under persistent condensation or deposition, cloud droplets or snowflakes form, which precipitate when they reach a critical mass.
Atmospheric concentration of water vapor is highly variable between locations and times, from 10 ppmv in the coldest air to 5% (50,000 ppmv) in humid tropical air, and can be measured with a combination of land observations, weather balloons and satellites. The water content of the atmosphere as a whole is constantly depleted by precipitation. At the same time it is constantly replenished by evaporation, most prominently from oceans, lakes, rivers, and moist earth. Other sources of atmospheric water include combustion, respiration, volcanic eruptions, the transpiration of plants, and various other biological and geological processes. At any given time there is about 1.29 × 10¹⁶ litres (3.4 × 10¹⁵ gal) of water in the atmosphere. The atmosphere holds 1 part in 2,500 of the fresh water, and 1 part in 100,000 of the total water on Earth. The mean global content of water vapor in the atmosphere is roughly sufficient to cover the surface of the planet with a layer of liquid water about 25 mm deep. The mean annual precipitation for the planet is about 1 metre, a comparison which implies a rapid turnover of water in the air – on average, the residence time of a water molecule in the troposphere is about 9 to 10 days.
Global mean water vapor is about 0.25% of the atmosphere by mass and also varies seasonally, in terms of its contribution to atmospheric pressure, between 2.62 hPa in July and 2.33 hPa in December. IPCC AR6 expresses medium confidence in an increase of total water vapor of about 1–2% per decade; it is expected to increase by around 7% per °C of warming.
Episodes of surface geothermal activity, such as volcanic eruptions and geysers, release variable amounts of water vapor into the atmosphere. Such eruptions may be large in human terms, and major explosive eruptions may inject exceptionally large masses of water exceptionally high into the atmosphere, but as a percentage of total atmospheric water, the role of such processes is trivial. The relative concentrations of the various gases emitted by volcanoes vary considerably according to the site and according to the particular event at any one site. However, water vapor is consistently the commonest volcanic gas; as a rule, it comprises more than 60% of total emissions during a subaerial eruption.
Atmospheric water vapor content is expressed using various measures. These include vapor pressure, specific humidity, mixing ratio, dew point temperature, and relative humidity.
Radar and satellite imaging
Because water molecules absorb microwaves and other radio wave frequencies, water in the atmosphere attenuates radar signals. In addition, atmospheric water will reflect and refract signals to an extent that depends on whether it is vapor, liquid or solid. Generally, radar signals lose strength progressively the farther they travel through the troposphere.
Different frequencies attenuate at different rates, such that some components of air are opaque to some frequencies and transparent to others. Radio waves used for broadcasting and other communication experience the same effect. Water vapor reflects radar to a lesser extent than do water's other two phases. In the form of drops and ice crystals, water acts as a prism, which it does not do as an individual molecule; however, the existence of water vapor in the atmosphere causes the atmosphere to act as a giant prism.
A comparison of GOES-12 satellite images shows the distribution of atmospheric water vapor relative to the oceans, clouds and continents of the Earth. Vapor surrounds the planet but is unevenly distributed. Satellite image loops of the monthly average water vapor content give the values in centimeters of precipitable water, the equivalent amount of liquid water that would be produced if all the water vapor in the column were to condense. In such maps the lowest amounts of water vapor (0 centimeters) appear in yellow, the highest amounts (6 centimeters) appear in dark blue, and areas of missing data appear in shades of gray. The maps are based on data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor on NASA's Aqua satellite. The most noticeable pattern in the time series is the influence of seasonal temperature changes and incoming sunlight on water vapor. In the tropics, a band of extremely humid air wobbles north and south of the equator as the seasons change. This band of humidity is part of the Intertropical Convergence Zone, where the easterly trade winds from each hemisphere converge and produce near-daily thunderstorms and clouds. Farther from the equator, water vapor concentrations are high in the hemisphere experiencing summer and low in the one experiencing winter. Another pattern that shows up in the time series is that water vapor amounts over land areas decrease more in winter months than adjacent ocean areas do. This is largely because air temperatures over land drop more in the winter than temperatures over the ocean. Water vapor condenses more rapidly in colder air.
As water vapor absorbs light in the visible spectral range, its absorption can be used in spectroscopic applications (such as DOAS) to determine the amount of water vapor in the atmosphere. This is done operationally, e.g. by the Global Ozone Monitoring Experiment (GOME) spectrometers on ERS (GOME) and MetOp (GOME-2). The weaker water vapor absorption lines in the blue spectral range and further into the UV, up to its dissociation limit around 243 nm, are mostly based on quantum mechanical calculations and are only partly confirmed by experiments.
Lightning generation
Water vapor plays a key role in lightning production in the atmosphere. From cloud physics, clouds are usually the real generators of static charge found in Earth's atmosphere. The ability of clouds to hold massive amounts of electrical energy is directly related to the amount of water vapor present in the local system. The amount of water vapor directly controls the permittivity of the air. During times of low humidity, static discharge is quick and easy. During times of higher humidity, fewer static discharges occur. Permittivity and capacitance work hand in hand to produce the megawatt outputs of lightning.
After a cloud, for instance, has started its way to becoming a lightning generator, atmospheric water vapor acts as a substance (or insulator) that decreases the ability of the cloud to discharge its electrical energy. Over a certain amount of time, if the cloud continues to generate and store more static electricity, the barrier that was created by the atmospheric water vapor will ultimately break down from the stored electrical potential energy. This energy will be released to a local oppositely charged region, in the form of lightning. The strength of each discharge is directly related to the atmospheric permittivity, capacitance, and the source's charge-generating ability.
Extraterrestrial
Water vapor is common in the Solar System and, by extension, other planetary systems. Its signature has been detected in the atmosphere of the Sun, occurring in sunspots. The presence of water vapor has been detected in the atmospheres of all seven extraterrestrial planets in the Solar System, the Earth's Moon, and the moons of other planets, although typically in only trace amounts. Geological formations such as cryogeysers are thought to exist on the surface of several icy moons, ejecting water vapor due to tidal heating, and may indicate the presence of substantial quantities of subsurface water. Plumes of water vapor have been detected on Jupiter's moon Europa and are similar to plumes of water vapor detected on Saturn's moon Enceladus. Traces of water vapor have also been detected in the stratosphere of Titan. Water vapor has been found to be a major constituent of the atmosphere of the dwarf planet Ceres, the largest object in the asteroid belt. The detection was made using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes." According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." Scientists studying Mars hypothesize that if water moves about the planet, it does so as vapor.
The brilliance of comet tails comes largely from water vapor. On approach to the Sun, the ice many comets carry sublimes to vapor. Knowing a comet's distance from the Sun, astronomers may deduce the comet's water content from its brilliance.
Water vapor has also been confirmed outside the Solar System. Spectroscopic analysis of HD 209458 b, an extrasolar planet in the constellation Pegasus, provided the first evidence of atmospheric water vapor beyond the Solar System. A star called CW Leonis was found to have a ring of vast quantities of water vapor circling the aging, massive star. A NASA satellite designed to study chemicals in interstellar gas clouds made the discovery with an onboard spectrometer. Most likely, "the water vapor was vaporized from the surfaces of orbiting comets." Other exoplanets with evidence of water vapor include HAT-P-11b and K2-18b.
Physical sciences
Inorganic compounds
null
89732
https://en.wikipedia.org/wiki/Antikythera%20mechanism
Antikythera mechanism
The Antikythera mechanism is an Ancient Greek hand-powered orrery (model of the Solar System). It is the oldest known example of an analogue computer. It could be used to predict astronomical positions and eclipses decades in advance. It could also be used to track the four-year cycle of athletic games similar to an Olympiad, the cycle of the ancient Olympic Games. The artefact was among wreckage retrieved from a shipwreck off the coast of the Greek island Antikythera in 1901. In 1902, it was identified by archaeologist Valerios Stais as containing a gear. The device, housed in the remains of a wooden-framed case of uncertain overall size, was found as one lump, later separated into three main fragments which are now divided into 82 separate fragments after conservation efforts. Four of these fragments contain gears, while inscriptions are found on many others. The largest gear is about 13 centimetres in diameter and originally had 223 teeth. All these fragments of the mechanism are kept at the National Archaeological Museum, Athens, along with reconstructions and replicas, to demonstrate how it may have looked and worked.
In 2005, a team from Cardiff University used computer X-ray tomography and high resolution scanning to image inside fragments of the crust-encased mechanism and read the faintest inscriptions that once covered the outer casing. These scans suggest that the mechanism had 37 meshing bronze gears enabling it to follow the movements of the Moon and the Sun through the zodiac, to predict eclipses and to model the irregular orbit of the Moon, where the Moon's velocity is higher in its perigee than in its apogee. This motion was studied in the 2nd century BC by astronomer Hipparchus of Rhodes, and he may have been consulted in the machine's construction. There is speculation that a portion of the mechanism is missing and that it calculated the positions of the five classical planets. The inscriptions were further deciphered in 2016, revealing numbers connected with the synodic cycles of Venus and Saturn.
The instrument is believed to have been designed and constructed by Hellenistic scientists and has been variously dated to about 87 BC, between 150 and 100 BC, or 205 BC. It must have been constructed before the shipwreck, which has been dated by multiple lines of evidence to approximately 70–60 BC. In 2022 researchers proposed that its initial calibration date, not construction date, could have been 23 December 178 BC. Other experts propose 204 BC as a more likely calibration date. Machines with similar complexity did not appear again until the 14th century in western Europe.
History
Discovery
Captain Dimitrios Kontos and a crew of sponge divers from Symi island discovered the Antikythera wreck in early 1900, and recovered artefacts during the first expedition with the Hellenic Royal Navy, in 1900–01. This wreck of a Roman cargo ship was found at a depth of about 45 metres off Point Glyphadia on the Greek island of Antikythera. The team retrieved numerous large objects, including bronze and marble statues, pottery, unique glassware, jewellery, coins, and the mechanism. The mechanism was retrieved from the wreckage in 1901, probably in July. It is unknown how the mechanism came to be on the cargo ship. All of the items retrieved from the wreckage were transferred to the National Museum of Archaeology in Athens for storage and analysis.
The mechanism appeared to be a lump of corroded bronze and wood; it went unnoticed for two years, while museum staff worked on piecing together more obvious treasures, such as the statues. Upon removal from seawater, the mechanism was not treated, resulting in deformational changes. On 17 May 1902, archaeologist Valerios Stais found one of the pieces of rock had a gear wheel embedded in it. He initially believed that it was an astronomical clock, but most scholars considered the device to be prochronistic, too complex to have been constructed during the same period as the other pieces that had been discovered. The German philologist Albert Rehm became interested in the device, and first proposed that it was an astronomical calculator. Investigations into the object lapsed until British science historian and Yale University professor Derek J. de Solla Price became interested in 1951. In 1971, Price and Greek nuclear physicist Charalampos Karakalos made X-ray and gamma-ray images of the 82 fragments. Price published a paper on their findings in 1974. Two other searches for items at the Antikythera wreck site in 2012 and 2015 yielded art objects and a second ship which may, or may not, be connected with the treasure ship on which the mechanism was found. Also found was a bronze disc, embellished with the image of a bull. The disc has four "ears" which have holes in them, and it was thought it may have been part of the Antikythera mechanism, as a "cog wheel". There appears to be little evidence that it was part of the mechanism; it is more likely the disc was a bronze decoration on a piece of furniture. Origin The Antikythera mechanism is generally referred to as the first known analogue computer. The quality and complexity of the mechanism's manufacture suggests it must have had undiscovered predecessors during the Hellenistic period. Its construction relied on theories of astronomy and mathematics developed by Greek astronomers during the second century BC, and it is estimated to have been built in the late second century BC or the early first century BC. In 2008, research by the Antikythera Mechanism Research Project suggested the concept for the mechanism may have originated in the colonies of Corinth, since they identified the calendar on the Metonic Spiral as coming from Corinth, or one of its colonies in northwest Greece or Sicily. Syracuse was a colony of Corinth and the home of Archimedes, and the Antikythera Mechanism Research Project argued in 2008 that it might imply a connection with the school of Archimedes. It was demonstrated in 2017 that the calendar on the Metonic Spiral is of the Corinthian type, but cannot be that of Syracuse. Another theory suggests that coins found by Jacques Cousteau at the wreck site in the 1970s date to the time of the device's construction, and posits that its origin may have been from the ancient Greek city of Pergamon, home of the Library of Pergamum. With its many scrolls of art and science, it was second in importance only to the Library of Alexandria during the Hellenistic period. The ship carrying the device contained vases in the Rhodian style, leading to a hypothesis that it was constructed at an academy founded by Stoic philosopher Posidonius on that Greek island. Rhodes was a busy trading port and centre of astronomy and mechanical engineering, home to astronomer Hipparchus, who was active from about 140–120 BC. The mechanism uses Hipparchus' theory for the motion of the Moon, which suggests he may have designed or at least worked on it. 
It has been argued the astronomical events on the Parapegma of the mechanism work best for latitudes in the range of 33.3–37.0 degrees north; the island of Rhodes is located between the latitudes of 35.85 and 36.50 degrees north. In 2014, a study argued for a new dating of approximately 200 BC, based on identifying the start-up date on the Saros Dial, as the astronomical lunar month that began shortly after the new moon of 28 April 205 BC. According to this theory the Babylonian arithmetic style of prediction fits much better with the device's predictive models than the traditional Greek trigonometric style. A study by Iversen in 2017 reasons that the prototype for the device was from Rhodes, but that this particular model was modified for a client from Epirus in northwestern Greece; Iversen argues it was probably constructed no earlier than a generation before the shipwreck, a date supported by Jones in 2017. Further dives were undertaken in 2014 and 2015, in the hope of discovering more of the mechanism. A five-year programme of investigations began in 2014 and ended in October 2019, with a new five-year session starting in May 2020. In 2022 researchers proposed the mechanism's initial calibration date, not construction date, could have been 23 December 178 BC. Other experts propose 204 BC as a more likely calibration date. Machines with similar complexity did not appear again until the fourteenth century, with early examples being astronomical clocks of Richard of Wallingford and Giovanni de' Dondi. Design The original mechanism apparently came out of the Mediterranean as a single encrusted piece. Soon afterwards it fractured into three major pieces. Other small pieces have broken off in the interim from cleaning and handling, and others were found on the sea floor by the Cousteau expedition. Other fragments may still be in storage, undiscovered since their initial recovery; Fragment F was discovered in that way in 2005. Of the 82 known fragments, seven are mechanically significant and contain the majority of the mechanism and inscriptions. Another 16 smaller parts contain fractional and incomplete inscriptions. Many of the smaller fragments that have been found contain nothing of apparent value, but a few have inscriptions on them. Fragment 19 contains significant back door inscriptions including one reading "... 76 years ..." which refers to the Callippic cycle. Other inscriptions seem to describe the function of the back dials. In addition to this important minor fragment, 15 further minor fragments have remnants of inscriptions on them. Mechanics Information on the specific data obtained from the fragments is detailed in the supplement to the 2006 Nature article from Freeth et al. Operation On the front face of the mechanism, there is a fixed ring dial representing the ecliptic, the twelve zodiacal signs marked off with equal 30-degree sectors. This matched with the Babylonian custom of assigning one twelfth of the ecliptic to each zodiac sign equally, even though the constellation boundaries were variable. Outside that dial is another ring which is rotatable, marked off with the months and days of the Sothic Egyptian calendar, twelve months of 30 days plus five intercalary days. The months are marked with the Egyptian names for the months transcribed into the Greek alphabet. The first task is to rotate the Egyptian calendar ring to match the current zodiac points. The Egyptian calendar ignored leap days, so it advanced through a full zodiac sign in about 120 years. 
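A quick back-of-the-envelope check of the ~120-year figure just mentioned (an illustration added here, using the modern value of the solar year, not part of the original description):

```python
# Drift of the 365-day Egyptian civil calendar against the solar year, which is
# what the movable calendar ring has to compensate for.

solar_year_days = 365.25      # approximate length of the solar year
egyptian_year_days = 365.0    # Egyptian civil year: 12 x 30 days + 5 epagomenal days

drift_per_year = solar_year_days - egyptian_year_days   # ~0.25 day per year
days_per_zodiac_sign = 30.0                             # one 30-degree sign ~ 30 days of solar motion

print(f"full sign in ~{days_per_zodiac_sign / drift_per_year:.0f} years")   # ~120 years
print(f"one-day correction every {1 / drift_per_year:.0f} years")           # the four-yearly ring adjustment
```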
The mechanism was operated by turning a small hand crank (now lost) which was linked via a crown gear to the largest gear, the four-spoked gear visible on the front of fragment A, gear b1. This moved the date pointer on the front dial, which would be set to the correct Egyptian calendar day. The year is not selectable, so it is necessary to know the year currently set, or to look up the cycles indicated by the various calendar cycle indicators on the back in the Babylonian ephemeris tables for the day of the year currently set, since most of the calendar cycles are not synchronous with the year. The crank moves the date pointer about 78 days per full rotation, so hitting a particular day on the dial would be easily possible if the mechanism were in good working condition. The action of turning the hand crank would also cause all interlocked gears within the mechanism to rotate, resulting in the simultaneous calculation of the position of the Sun and Moon, the moon phase, eclipse and calendar cycles, and perhaps the locations of planets.
The operator also had to be aware of the position of the spiral dial pointers on the two large dials on the back. The pointer had a "follower" that tracked the spiral incisions in the metal, as the dials incorporated four and five full rotations of the pointers. When a pointer reached the terminal month location at either end of the spiral, the pointer's follower had to be manually moved to the other end of the spiral before proceeding further.
Faces
Front face
The front dial has two concentric circular scales. The inner scale marks the Greek signs of the zodiac, with division in degrees. The outer scale, which is a movable ring that sits flush with the surface and runs in a channel, is marked off with what appear to be days and has a series of corresponding holes beneath the ring in the channel. Since the discovery of the mechanism more than a century ago, this outer ring has been presumed to represent a 365-day Egyptian solar calendar, but research (Budiselic et al., 2020) challenged this presumption and provided direct statistical evidence that there are 354 intervals, suggesting a lunar calendar. Since this initial discovery, two research teams, using different methods, have independently calculated the interval count. Woan and Bayley calculate 354–355 intervals using two different methods, confirming with higher accuracy the Budiselic et al. findings and noting that "365 holes is not plausible". Malin and Dickens' best estimate is 352.3±1.5, and they concluded that the number of holes (N) "has to be integral and the SE (standard error) of 1.5 indicates that there is less than a 5% probability that N is not one of the six values in the range 350 to 355. The chances of N being as high as 365 are less than 1 in 10,000. While other contenders cannot be ruled out, of the two values that have been proposed for N on astronomical grounds, that of Budiselic et al. (354) is by far the more likely."
If one supports the 365-day presumption, it is recognized that the mechanism predates the Julian calendar reform, but the Sothic and Callippic cycles had already pointed to a 365¼-day solar year, as seen in Ptolemy III's attempted calendar reform of 238 BC. The dials are not believed to reflect his proposed leap day (Epag. 6), but the outer calendar dial may be moved against the inner dial to compensate for the effect of the extra quarter-day in the solar year by turning the scale backward one day every four years.
If one is in favour of the 354-day evidence, the most likely interpretation is that the ring is a manifestation of a 354-day lunar calendar. Given the era of the mechanism's presumed construction and the presence of Egyptian month names, it is possibly the first example of the Egyptian civil-based lunar calendar proposed by Richard Anthony Parker in 1950. The lunar calendar's purpose was to serve as a day-to-day indicator of successive lunations, and it would also have assisted with the interpretation of the lunar phase pointer and the Metonic and Saros dials. Undiscovered gearing, synchronous with the rest of the Metonic gearing of the mechanism, is implied to drive a pointer around this scale. Movement and registration of the ring relative to the underlying holes served to facilitate both a 1-in-76-year Callippic cycle correction and convenient lunisolar intercalation.
The dial also marks the position of the Sun on the ecliptic, corresponding to the current date in the year. The orbits of the Moon and the five planets known to the Greeks are close enough to the ecliptic to make it a convenient reference for defining their positions as well.
Three Egyptian months are inscribed in Greek letters on the surviving pieces of the outer ring: Pachon, Payni, and Epiphi. The other months have been reconstructed; some reconstructions of the mechanism omit the five days of the Egyptian intercalary month. The Zodiac dial contains Greek inscriptions of the members of the zodiac, believed to be adapted to the tropical month version rather than the sidereal:
ΚΡΙΟΣ (Krios [Ram], Aries)
ΤΑΥΡΟΣ (Tauros [Bull], Taurus)
ΔΙΔΥΜΟΙ (Didymoi [Twins], Gemini)
ΚΑΡΚΙΝΟΣ (Karkinos [Crab], Cancer)
ΛΕΩΝ (Leon [Lion], Leo)
ΠΑΡΘΕΝΟΣ (Parthenos [Maiden], Virgo)
ΧΗΛΑΙ (Chelai [Scorpio's Claw or Zygos], Libra)
ΣΚΟΡΠΙΟΣ (Skorpios [Scorpion], Scorpio)
ΤΟΞΟΤΗΣ (Toxotes [Archer], Sagittarius)
ΑΙΓΟΚΕΡΩΣ (Aigokeros [Goat-horned], Capricorn)
ΥΔΡΟΧΟΟΣ (Hydrokhoos [Water carrier], Aquarius)
ΙΧΘΥΕΣ (Ichthyes [Fish], Pisces)
Also on the zodiac dial are single characters at specific points, visible in published reconstructions. They are keyed to a parapegma, a precursor of the modern-day almanac, inscribed on the front face above and beneath the dials. They mark the locations of longitudes on the ecliptic for specific stars.
At least two pointers indicated positions of bodies upon the ecliptic. A lunar pointer indicated the position of the Moon, and a mean Sun pointer was shown, perhaps doubling as the current date pointer. The Moon position was not a simple mean Moon indicator which would indicate movement uniformly around a circular orbit; rather, it approximated the acceleration and deceleration of the Moon's elliptical orbit, through the earliest extant use of epicyclic gearing. It also tracked the precession of the Moon's elliptical orbit around the ecliptic in an 8.88-year cycle. The mean Sun position is, by definition, the current date. It is speculated that since significant effort was taken to ensure the position of the Moon was correct, there was likely to have also been a "true sun" pointer in addition to the mean Sun pointer, to track the elliptical anomaly of the Sun (the orbit of Earth around the Sun), but there is no evidence of it among the fragments found. Similarly, neither is there evidence of planetary orbit pointers for the five planets known to the Greeks among the fragments.
But see Proposed gear schemes below.
Mechanical engineer Michael Wright demonstrated that there was a mechanism to supply the lunar phase in addition to the position. The indicator was a small ball embedded in the lunar pointer, half white and half black, which rotated to show the phase (new, first quarter, half, third quarter, full, and back). The data to support this function are available given the Sun and Moon positions as angular rotations; essentially, it is the angle between the two, translated into the rotation of the ball. It requires a differential gear, a gearing arrangement that sums or differences two angular inputs.
Rear face
In 2008, scientists reported new findings in Nature showing that the mechanism not only tracked the Metonic calendar and predicted solar eclipses, but also calculated the timing of panhellenic athletic games, such as the ancient Olympic Games. Inscriptions on the instrument closely match the names of the months that are used on calendars from Epirus in northwestern Greece and from the island of Corfu, which in antiquity was known as Corcyra.
On the back of the mechanism, there are five dials: the two large displays, the Metonic and the Saros, and three smaller indicators, the so-called Olympiad Dial, which has been renamed the Games dial as it did not track Olympiad years (the four-year cycle it tracks most closely is the Halieiad), the Callippic, and the exeligmos.
The Metonic dial is the main upper dial on the rear of the mechanism. The Metonic cycle, defined in several physical units, is 235 synodic months, which is very close (to within less than 13 one-millionths) to 19 tropical years; a short numerical check of this relation, and of the related Saros relation used by the lower dial, is sketched below. It is therefore a convenient interval over which to convert between lunar and solar calendars. The Metonic dial covers 235 months in five rotations of the dial, following a spiral track with a follower on the pointer that keeps track of the layer of the spiral. The pointer points to the synodic month, counted from new moon to new moon, and the cell contains the Corinthian month names:
ΦΟΙΝΙΚΑΙΟΣ (Phoinikaios)
ΚΡΑΝΕΙΟΣ (Kraneios)
ΛΑΝΟΤΡΟΠΙΟΣ (Lanotropios)
ΜΑΧΑΝΕΥΣ (Machaneus, "mechanic", referring to Zeus the inventor)
ΔΩΔΕΚΑΤΕΥΣ (Dodekateus)
ΕΥΚΛΕΙΟΣ (Eukleios)
ΑΡΤΕΜΙΣΙΟΣ (Artemisios)
ΨΥΔΡΕΥΣ (Psydreus)
ΓΑΜΕΙΛΙΟΣ (Gameilios)
ΑΓΡΙΑΝΙΟΣ (Agrianios)
ΠΑΝΑΜΟΣ (Panamos)
ΑΠΕΛΛΑΙΟΣ (Apellaios)
Thus, setting the correct solar time (in days) on the front panel indicates the current lunar month on the back panel, with resolution to within a week or so. Based on the fact that the calendar month names are consistent with all the evidence of the Epirote calendar and that the Games dial mentions the very minor Naa games of Dodona (in Epirus), it has been argued that the calendar on the mechanism is likely to be the Epirote calendar, and that this calendar was probably adopted from a Corinthian colony in Epirus, possibly Ambracia. It has been argued that the first month of the calendar, Phoinikaios, was ideally the month in which the autumn equinox fell, and that the start-up date of the calendar began shortly after the astronomical new moon of 23 August 205 BC.
The Games dial is the right secondary upper dial; it is the only pointer on the instrument that travels in an anticlockwise direction as time advances. The dial is divided into four sectors, each of which is inscribed with a year indicator and the name of two Panhellenic Games: the "crown" games of Isthmia, Olympia, Nemea, and Pythia; and two lesser games: Naa (held at Dodona) and the Halieia of Rhodes.
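As the quick check referred to above, here is a short sketch (added here, using modern mean values for the synodic month and tropical year) of the Metonic identity behind the upper dial and of the Saros/exeligmos identity behind the lower dials described in the next paragraphs.

```python
# Numerical check of the Metonic and Saros relations exploited by the back dials,
# using modern mean values for the synodic month and tropical year.

synodic_month = 29.530589   # days, mean lunar (synodic) month
tropical_year = 365.2422    # days, mean tropical year

# Metonic dial: 235 synodic months vs 19 tropical years
metonic_months = 235 * synodic_month        # ~6939.69 days
metonic_years = 19 * tropical_year          # ~6939.60 days
mismatch = abs(metonic_months - metonic_years) / metonic_years
print(f"Metonic: {metonic_months:.2f} vs {metonic_years:.2f} days "
      f"(mismatch ~{mismatch:.1e}, i.e. roughly 12 parts per million)")

# Saros dial: 223 synodic months, about 8 hours more than a whole number of days
saros = 223 * synodic_month                 # ~6585.32 days
print(f"Saros:   {saros:.3f} days (+~{(saros % 1) * 24:.1f} h over whole days)")

# Exeligmos dial: three Saros cycles bring eclipse times back to nearly the same hour
print(f"Exeligmos: {3 * saros:.2f} days")   # ~19755.96 days, almost a whole number
```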
The inscriptions on each one of the four divisions pair a year indicator with the names of two sets of games.
The Saros dial is the main lower spiral dial on the rear of the mechanism. The Saros cycle is 18 years and about 11⅓ days long (6585.333... days), which is very close to 223 synodic months (6585.3211 days). It is defined as the cycle of repetition of the positions required to cause solar and lunar eclipses, and therefore it could be used to predict them—not only the month, but the day and time of day. The cycle is approximately 8 hours longer than an integer number of days. Translated into global spin, that means an eclipse occurs not only eight hours later, but one-third of a rotation farther to the west.
Glyphs in 51 of the 223 synodic month cells of the dial specify the occurrence of 38 lunar and 27 solar eclipses. Some of the abbreviations in the glyphs read:
Σ = ΣΕΛΗΝΗ ("Selene", Moon)
Η = ΗΛΙΟΣ ("Helios", Sun)
H\M = ΗΜΕΡΑΣ ("Hemeras", of the day)
ω\ρ = ωρα ("hora", hour)
N\Y = ΝΥΚΤΟΣ ("Nuktos", of the night)
The glyphs show whether the designated eclipse is solar or lunar, and give the day of the month and hour. Solar eclipses may not be visible at any given point, and lunar eclipses are visible only if the Moon is above the horizon at the appointed hour. In addition, the inner lines at the cardinal points of the Saros dial indicate the start of a new full moon cycle. Based on the distribution of the times of the eclipses, it has been argued that the start-up date of the Saros dial was shortly after the astronomical new moon of 28 April 205 BC.
The Exeligmos dial is the secondary lower dial on the rear of the mechanism. The exeligmos cycle is a 54-year triple Saros cycle that is 19,756 days long. Since the length of the Saros cycle exceeds a whole number of days by about a third of a day (namely, 6,585 days plus 8 hours), a full exeligmos cycle returns the counting to an integral number of days, as reflected in the inscriptions. The labels on its three divisions are:
Blank or o ? (representing the number zero, assumed, not yet observed)
H (number 8) means add 8 hours to the time mentioned in the display
Iϛ (number 16) means add 16 hours to the time mentioned in the display
Thus the dial pointer indicates how many hours must be added to the glyph times of the Saros dial in order to calculate the exact eclipse times.
Doors
The mechanism has a wooden casing with a front and a back door, both containing inscriptions. The back door appears to be the "instruction manual". On one of its fragments is written "76 years, 19 years", representing the Callippic and Metonic cycles. Also written is "223" for the Saros cycle. On another of its fragments is written "on the spiral subdivisions 235", referring to the Metonic dial.
Gearing
The mechanism is remarkable for the level of miniaturisation and the complexity of its parts, which is comparable to that of 14th-century astronomical clocks. It has at least 30 gears, although mechanism expert Michael Wright has suggested that the Greeks of this period were capable of implementing a system with many more gears. There is debate as to whether the mechanism had indicators for all five of the planets known to the ancient Greeks. No gearing for such a planetary display survives and all gears are accounted for—with the exception of one 63-toothed gear (r1) otherwise unaccounted for in fragment D. Fragment D is a small quasi-circular constriction that, according to Xenophon Moussas, has a gear inside a somewhat larger hollow gear.
The inner gear moves inside the outer gear, reproducing an epicyclic motion that, with a pointer, gives the position of the planet Jupiter. The inner gear is numbered 45, "ME" in Greek, and the same number is written on two surfaces of this small cylindrical box. The purpose of the front face was to position astronomical bodies with respect to the celestial sphere along the ecliptic, in reference to the observer's position on the Earth. That is irrelevant to the question of whether that position was computed using a heliocentric or geocentric view of the Solar System; either computational method should, and does, result in the same position (ignoring ellipticity), within the error factors of the mechanism. The epicyclic solar system of Ptolemy (c. 100 – c. 170 AD)—hundreds of years after the apparent construction date of the mechanism—carried forward with more epicycles, and was more accurate in predicting the positions of planets than the view of Copernicus (1473–1543), until Kepler (1571–1630) introduced the possibility that orbits are ellipses.
Evans et al. suggest that to display the mean positions of the five classical planets would require only 17 further gears that could be positioned in front of the large driving gear and indicated using individual circular dials on the face. Freeth and Jones modelled and published details of a version using gear trains mechanically similar to the lunar anomaly system, allowing for indication of the positions of the planets, as well as synthesis of the Sun anomaly. Their system, they claim, is more authentic than Wright's model, as it uses the known skills of the Greeks and does not add excessive complexity or internal stresses to the machine.
The gear teeth were in the form of equilateral triangles with an average circular pitch of 1.6 mm, an average wheel thickness of 1.4 mm and an average air gap between gears of 1.2 mm. The teeth were probably created from a blank bronze round using hand tools; this is evident because not all of them are even. Due to advances in imaging and X-ray technology, it is now possible to know the precise number of teeth and size of the gears within the located fragments. Thus the basic operation of the device is no longer a mystery and has been replicated accurately. The major unknown remains the question of the presence and nature of any planet indicators.
A table of the gears, their teeth, and the expected and computed rotations of important gears follows. The gear functions come from Freeth et al. (2008) and, for the lower half of the table, from Freeth et al. (2012). The computed values start with 1 year per revolution for the b1 gear, and the remainder are computed directly from gear teeth ratios. The gears marked with an asterisk (*) are missing, or have predecessors missing, from the known mechanism; these gears have been calculated with reasonable gear teeth counts. (Lengths in days are calculated assuming the year to be 365.2425 days.)
Table notes: There are several gear ratios for each planet that result in close matches to the correct values for synodic periods of the planets and the Sun. Those chosen above seem accurate, with reasonable tooth counts, but the specific gears actually used are unknown.
Known gear scheme
It is very probable there were planetary dials, as the complicated motions and periodicities of all planets are mentioned in the manual of the mechanism. The exact position and mechanisms for the gears of the planets are unknown. There is no coaxial system except for the Moon.
Fragment D, which is an epicycloidal system, is considered a planetary gear for Jupiter (Moussas, 2011, 2012, 2014) or a gear for the motion of the Sun (University of Thessaloniki group).
The Sun gear is operated from the hand-operated crank (connected to gear a1, driving the large four-spoked mean Sun gear, b1) and in turn drives the rest of the gear sets. The Sun gear is b1/b2 and b2 has 64 teeth. It directly drives the date/mean sun pointer (there may have been a second, "true sun" pointer that displayed the Sun's elliptical anomaly; it is discussed below in the Freeth reconstruction). In this discussion, reference is to the modelled rotational period of the various pointers and indicators; they all assume an input rotation of the b1 gear of 360 degrees, corresponding with one tropical year, and are computed solely on the basis of the gear ratios of the gears named.
The Moon train starts with gear b1 and proceeds through c1, c2, d1, d2, e2, e5, k1, k2, e6, e1, and b3 to the Moon pointer on the front face. The gears k1 and k2 form an epicyclic gear system; they are an identical pair of gears that do not mesh, but rather operate face-to-face, with a short pin on k1 inserted into a slot in k2. The two gears have different centres of rotation, so the pin must move back and forth in the slot. That increases and decreases the radius at which k2 is driven, also necessarily varying its angular velocity (presuming the velocity of k1 is uniform): faster in some parts of the rotation than others. Over an entire revolution the average velocities are the same, but the fast–slow variation models the effects of the elliptical orbit of the Moon, in consequence of Kepler's second and third laws. The modelled rotational period of the Moon pointer (averaged over a year) is 27.321 days, compared to the modern length of a lunar sidereal month of 27.321661 days. The pin/slot driving of the k1/k2 gears varies the displacement over a year's time, and the mounting of those two gears on the e3 gear supplies a precessional advancement to the ellipticity modelling with a period of 8.8826 years, compared with the current value of the precession period of the Moon of 8.85 years.
The system also models the phases of the Moon. The Moon pointer holds a shaft along its length, on which is mounted a small gear named r, which meshes with the Sun pointer at B0 (the connection between B0 and the rest of B is not visible in the original mechanism, so whether b0 is the current date/mean Sun pointer or a hypothetical true Sun pointer is unknown). The gear rides around the dial with the Moon, but is also geared to the Sun—the effect is to perform a differential gear operation, so the gear turns at the synodic month period, measuring, in effect, the angle of the difference between the Sun and Moon pointers. The gear drives a small ball that appears through an opening in the Moon pointer's face, painted longitudinally half white and half black, displaying the phases pictorially. It turns with a modelled rotational period of 29.53 days; the modern value for the synodic month is 29.530589 days.
The Metonic train is driven by the drive train b1, b2, l1, l2, m1, m2, and n1, which is connected to the pointer. The modelled rotational period of the pointer is 6939.5 days (over the whole five-rotation spiral), while the modern value for the Metonic cycle is 6939.69 days.
The Olympiad train is driven by b1, b2, l1, l2, m1, m2, n1, n2, and o1, which mounts the pointer.
It has a computed modelled rotational period of exactly four years, as expected. It is the only pointer on the mechanism that rotates anticlockwise; all of the others rotate clockwise. The Callippic train is driven by b1, b2, l1, l2, m1, m2, n1, n3, p1, p2, and q1, which mounts the pointer. It has a computed modelled rotational period of 27758 days, while the modern value is 27758.8 days. The Saros train is driven by b1, b2, l1, l2, m1, m3, e3, e4, f1, f2, and g1, which mounts the pointer. The modelled rotational period of the Saros pointer is 1646.3 days (in four rotations along the spiral pointer track); the modern value is 1646.33 days. The Exeligmos train is driven by b1, b2, l1, l2, m1, m3, e3, e4, f1, f2, g1, g2, h1, h2, and i1, which mounts the pointer. The modelled rotational period of the exeligmos pointer is 19,756 days; the modern value is 19755.96 days. It appears gears m3, n1-3, p1-2, and q1 did not survive in the wreckage. The functions of the pointers were deduced from the remains of the dials on the back face, and reasonable, appropriate gearage to fulfill the functions was proposed and is generally accepted. Reconstruction efforts Proposed gear schemes Because of the large space between the mean Sun gear and the front of the case and the size of and mechanical features on the mean Sun gear, it is very likely that the mechanism contained further gearing that either has been lost in or subsequent to the shipwreck, or was removed before being loaded onto the ship. This lack of evidence and nature of the front part of the mechanism has led to attempts to emulate what the Ancient Greeks would have done and because of the lack of evidence, many solutions have been put forward over the years. But as progress has been made on analyzing the internal structures and deciphering the inscriptions, earlier models have been ruled out and better models developed. Derek J. de Solla Price built a simple model in the 1970s. In 2002 Michael Wright designed and built the first workable model with the known mechanism and his emulation of a potential planetarium system. He suggested that along with the lunar anomaly, adjustments would have been made for the deeper, more basic solar anomaly (known as the "first anomaly"). He included pointers for this "true sun", Mercury, Venus, Mars, Jupiter, and Saturn, in addition to the known "mean sun" (current time) and lunar pointers. Evans, Carman, and Thorndike published a solution in 2010 with significant differences from Wright's. Their proposal centred on what they observed as irregular spacing of the inscriptions on the front dial face, which to them seemed to indicate an off-centre sun indicator arrangement; this would simplify the mechanism by removing the need to simulate the solar anomaly. They suggested that rather than accurate planetary indication (rendered impossible by the offset inscriptions) there would be simple dials for each individual planet, showing information such as key events in the cycle of planet, initial and final appearances in the night sky, and apparent direction changes. This system would lead to a much simplified gear system, with much reduced forces and complexity, as compared to Wright's model. Their proposal used simple meshed gear trains and accounted for the previously unexplained 63 toothed gear in fragment D. They proposed two face plate layouts, one with evenly spaced dials, and another with a gap in the top of the face, to account for criticism that they did not use the apparent fixtures on the b1 gear. 
They proposed that rather than bearings and pillars for gears and axles, they simply held weather and seasonal icons to be displayed through a window. In a paper published in 2012, Carman, Thorndike, and Evans also proposed a system of epicyclic gearing with pin and slot followers. Freeth and Jones published a proposal in 2012. They proposed a compact and feasible solution to the question of planetary indication. They also propose indicating the solar anomaly (that is, the sun's apparent position in the zodiac dial) on a separate pointer from the date pointer, which indicates the mean position of the Sun, as well as the date on the month dial. If the two dials are synchronised correctly, their front panel display is essentially the same as Wright's. Unlike Wright's model however, this model has not been built physically, and is only a 3-D computer model. The system to synthesise the solar anomaly is very similar to that used in Wright's proposal: three gears, one fixed in the centre of the b1 gear and attached to the Sun spindle, the second fixed on one of the spokes (in their proposal the one on the bottom left) acting as an idle gear, and the final positioned next to that one; the final gear is fitted with an offset pin and, over said pin, an arm with a slot that in turn, is attached to the sun spindle, inducing anomaly as the mean Sun wheel turns. The inferior planet mechanism includes the Sun (treated as a planet in this context), Mercury, and Venus. For each of the three systems, there is an epicyclic gear whose axis is mounted on b1, thus the basic frequency is the Earth year (as it is, in truth, for epicyclic motion in the Sun and all the planets—excepting only the Moon). Each meshes with a gear grounded to the mechanism frame. Each has a pin mounted, potentially on an extension of one side of the gear that enlarges the gear, but doesn't interfere with the teeth; in some cases, the needed distance between the gear's centre and the pin is farther than the radius of the gear itself. A bar with a slot along its length extends from the pin toward the appropriate coaxial tube, at whose other end is the object pointer, out in front of the front dials. The bars could have been full gears, although there is no need for the waste of metal, since the only working part is the slot. Also, using the bars avoids interference between the three mechanisms, each of which are set on one of the four spokes of b1. Thus there is one new grounded gear (one was identified in the wreckage, and the second is shared by two of the planets), one gear used to reverse the direction of the sun anomaly, three epicyclic gears and three bars/coaxial tubes/pointers, which would qualify as another gear each: five gears and three slotted bars in all. The superior planet systems—Mars, Jupiter, and Saturn—all follow the same general principle of the lunar anomaly mechanism. Similar to the inferior systems, each has a gear whose centre pivot is on an extension of b1, and which meshes with a grounded gear. It presents a pin and a centre pivot for the epicyclic gear which has a slot for the pin, and which meshes with a gear fixed to a coaxial tube and thence to the pointer. Each of the three mechanisms can fit within a quadrant of the b1 extension, and they are thus all on a single plane parallel with the front dial plate. Each one uses a ground gear, a driving gear, a driven gear, and a gear/coaxial tube/pointer, thus, twelve gears additional in all. 
In total, there are eight coaxial spindles of various nested sizes to transfer the rotations in the mechanism to the eight pointers. So in all, there are 30 original gears, seven gears added to complete calendar functionality, 17 gears and three slotted bars to support the six new pointers, for a grand total of 54 gears, three bars, and eight pointers in Freeth and Jones' design. On the visual representation Freeth provides, the pointers on the front zodiac dial have small, round identifying stones. He refers to a quote from an ancient papyrus describing such an arrangement.
However, more recent discoveries and research have shown that the above models are not correct. In 2016, the numbers 462 and 442 were found in computed tomography scans of the inscriptions dealing with Venus and Saturn, respectively. These relate to the synodic cycles of these planets, and indicated that the mechanism was more accurate than previously thought. In 2018, based on the CT scans, the Antikythera Mechanism Research Project proposed changes in gearing and produced mechanical parts based on this.
In March 2021, the Antikythera Research Team at University College London, led by Freeth, published a new proposed reconstruction of the entire Antikythera Mechanism. They were able to find gears that could be shared among the gear-trains for the different planets, by using rational approximations for the synodic cycles which have small prime factors, with the factors 7 and 17 being used for more than one planet. They conclude that none of the previous models "are at all compatible with all the currently known data", but their model is compatible with it. Freeth has directed a video explaining the discovery of the synodic cycle periods and the conclusions about how the mechanism worked.
Accuracy
Investigations by Freeth and Jones reveal that their simulated mechanism is not particularly accurate: the Mars pointer is up to 38° wrong in some instances (these inaccuracies occur at the nodal points of Mars' retrograde motion, and the error recedes at other locations in the orbit). This is not due to inaccuracies in the gearing ratios of the mechanism, but to inadequacies in the Greek theory of planetary movements. The accuracy could not have been improved until Ptolemy published his Almagest (particularly by adding the concept of the equant to his theory), and then, much later, with the introduction of Kepler's laws of planetary motion in 1609 and 1619.
In addition to theoretical accuracy, there is the issue of mechanical accuracy. Freeth and Jones note that the inevitable "looseness" in the mechanism due to the hand-built gears, with their triangular teeth and the friction between gears and in bearing surfaces, probably would have swamped the finer solar and lunar correction mechanisms built into it.
While the device may have struggled with inaccuracies due to its triangular teeth being hand-made, the calculations used and the technology implemented to create the elliptical paths of the planets and the retrograde motion of the Moon and Mars, by using a clockwork-type gear train with the addition of a pin-and-slot epicyclic mechanism, predated the first known mechanical clocks of medieval Europe by more than 1000 years. Archimedes' development of the approximate value of pi and his theory of centres of gravity, along with the steps he made towards developing the calculus, suggest that the Greeks had enough mathematical knowledge, beyond that of Babylonian algebra, to model the elliptical nature of planetary motion.
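To illustrate how tightly the surviving tooth counts encode the astronomical periods, in contrast to the planetary-theory limitations discussed above, here is a short sketch of the mean Moon-train ratio. The tooth counts are those commonly reported in the literature and are treated here as assumptions rather than measurements.

```python
from fractions import Fraction

# Mean ratio of the lunar train described in the Gearing section, using tooth
# counts commonly reported for the surviving gears (b2=64, c1=38, c2=48,
# d1=24, d2=127, e2=32 -- assumed values, not independently verified here).
# The k1/k2 pin-and-slot pair only modulates speed within a revolution, so it
# does not affect the mean ratio.

ratio = Fraction(64, 38) * Fraction(48, 24) * Fraction(127, 32)
print(ratio)   # 254/19 : sidereal lunar revolutions per tropical year

tropical_year_days = 365.2422
sidereal_month_days = tropical_year_days / float(ratio)
print(f"modelled sidereal month: {sidereal_month_days:.3f} days")  # ~27.321, vs modern 27.321661
```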
Similar devices in ancient literature
The level of refinement of the mechanism indicates that the device was not unique, and possibly required expertise built over several generations. However, such artefacts were commonly melted down for the value of the bronze and rarely survive to the present day.
Roman world
Cicero's De re publica (54–51 BC), a first-century BC philosophical dialogue, mentions two machines that some modern authors consider as some kind of planetarium or orrery, predicting the movements of the Sun, the Moon, and the five planets known at that time. They were both built by Archimedes and brought to Rome by the Roman general Marcus Claudius Marcellus after the death of Archimedes at the siege of Syracuse in 212 BC. Marcellus had great respect for Archimedes and one of these machines was the only item he kept from the siege (the second was placed in the Temple of Virtue). The device was kept as a family heirloom, and Cicero has Philus (one of the participants in a conversation that Cicero imagined had taken place in a villa belonging to Scipio Aemilianus in the year 129 BC) saying that Gaius Sulpicius Gallus (consul with Marcellus's nephew in 166 BC, and credited by Pliny the Elder as the first Roman to have written a book explaining solar and lunar eclipses) gave both a "learned explanation" and a working demonstration of the device.
Pappus of Alexandria (c. 290 – c. 350 AD) stated that Archimedes had written a now-lost manuscript on the construction of these devices titled On Sphere-Making. The surviving texts from ancient times describe many of his creations, some even containing simple drawings. One such device is his odometer, the exact model later used by the Romans to place their mile markers (described by Vitruvius, by Heron of Alexandria and in the time of Emperor Commodus). The drawings in the text appeared functional, but attempts to build them as pictured had failed. When the gears pictured, which had square teeth, were replaced with gears of the type in the Antikythera mechanism, which were angled, the device was perfectly functional. If Cicero's account is correct, then this technology existed as early as the third century BC. Archimedes' device is also mentioned by later Roman-era writers such as Lactantius (Divinarum Institutionum Libri VII), Claudian (In sphaeram Archimedes), and Proclus (Commentary on the first book of Euclid's Elements of Geometry) in the fourth and fifth centuries.
Cicero also said that another such device was built "recently" by his friend Posidonius, "... each one of the revolutions of which brings about the same movement in the Sun and Moon and five wandering stars [planets] as is brought about each day and night in the heavens ...". It is unlikely that any one of these machines was the Antikythera mechanism found in the shipwreck, since both the devices fabricated by Archimedes and mentioned by Cicero were located in Rome at least 30 years later than the estimated date of the shipwreck, and the third device was almost certainly in the hands of Posidonius by that date. The scientists who have reconstructed the Antikythera mechanism also agree that it was too sophisticated to have been a unique device.
Eastern Mediterranean and others This evidence that the Antikythera mechanism was not unique adds support to the idea that there was an ancient Greek tradition of complex mechanical technology that was later, at least in part, transmitted to the Byzantine and Islamic worlds, where mechanical devices which were complex, albeit simpler than the Antikythera mechanism, were built during the Middle Ages. Fragments of a geared calendar attached to a sundial, from the fifth or sixth century Byzantine Empire, have been found; the calendar may have been used to assist in telling time. In the Islamic world, Banū Mūsā's Kitab al-Hiyal, or Book of Ingenious Devices, was commissioned by the Caliph of Baghdad in the early 9th century AD. This text described over a hundred mechanical devices, some of which may date back to ancient Greek texts preserved in monasteries. A geared calendar similar to the Byzantine device was described by the scientist al-Biruni around 1000, and a surviving 13th-century astrolabe also contains a similar clockwork device. It is possible that this medieval technology may have been transmitted to Europe and contributed to the development of mechanical clocks there. In the 11th century, Chinese polymath Su Song constructed a mechanical clock tower that told (among other measurements) the position of some stars and planets, which were shown on a mechanically rotated armillary sphere. Popular culture and museum replicas Several exhibitions have been staged worldwide, leading to the main "Antikythera shipwreck" exhibition at the National Archaeological Museum in Athens. The Antikythera mechanism was displayed there as part of a temporary exhibition about the Antikythera shipwreck, accompanied by reconstructions made by Ioannis Theofanidis, Derek de Solla Price, Michael Wright, the Thessaloniki University and Dionysios Kriaris. Other reconstructions are on display at the American Computer Museum in Bozeman, Montana, at the Children's Museum of Manhattan in New York, at Astronomisch-Physikalisches Kabinett in Kassel, Germany, at the Archimedes Museum in Olympia, Greece, and at the Musée des Arts et Métiers in Paris. The National Geographic documentary series Naked Science dedicated an episode to the Antikythera Mechanism entitled "Star Clock BC" that aired on 20 January 2011. A documentary, The World's First Computer, was produced in 2012 by the Antikythera mechanism researcher and film-maker Tony Freeth. In 2012, BBC Four aired The Two-Thousand-Year-Old Computer; it was also aired on 3 April 2013 in the United States on NOVA, the PBS science series, under the name Ancient Computer. It documents the discovery and 2005 investigation of the mechanism by the Antikythera Mechanism Research Project. A functioning Lego reconstruction of the Antikythera mechanism was built in 2010 by hobbyist Andy Carol, and featured in a short film produced by Small Mammal in 2011. On 17 May 2017, Google marked the 115th anniversary of the discovery with a Google Doodle. The YouTube channel Clickspring documents the creation of an Antikythera mechanism replica using the tools, techniques of machining and metallurgy, and materials that would have been available in ancient Greece, along with investigations into the possible technologies of the era. The film Indiana Jones and the Dial of Destiny (2023) features a plot around a fictionalized version of the mechanism (also referred to as Archimedes' Dial, the titular Dial of Destiny). 
In the film, the device was built by Archimedes as a temporal mapping system, and sought by a former Nazi scientist as a way to detect time portals in order to travel back in time and help Germany win World War II. A major plot point revolves around the fact that the device did not take continental drift into account, as the theory was unknown in Archimedes' time. On 8 February 2024, a 10X scale replica of the mechanism was built, installed, and inaugurated at the University of Sonora in Hermosillo, Sonora, Mexico. Named the Monumental Antikythera Mechanism for Hermosillo (MAMH), it was inaugurated by Dr. Alfonso Durazo Montaño, Governor of Sonora, with Dr. Maria Rita Plancarte Martinez, Chancellor of the Universidad de Sonora, the Ambassador of Greece, Nikolaos Koutrokois, and a delegation from the Embassy in attendance.
Technology
Early computers
null
89792
https://en.wikipedia.org/wiki/Mitochondrial%20Eve
Mitochondrial Eve
In human genetics, the Mitochondrial Eve (more technically known as the Mitochondrial-Most Recent Common Ancestor, shortened to mt-Eve or mt-MRCA) is the matrilineal most recent common ancestor (MRCA) of all living humans. In other words, she is defined as the most recent woman from whom all living humans descend in an unbroken line purely through their mothers and through the mothers of those mothers, back until all lines converge on one woman. In terms of mitochondrial haplogroups, the mt-MRCA is situated at the divergence of macro-haplogroup L into L0 and L1–6. As of 2013, estimates on the age of this split ranged at around 155,000 years ago, consistent with a date later than the speciation of Homo sapiens but earlier than the recent out-of-Africa dispersal. The male analog to the "Mitochondrial Eve" is the "Y-chromosomal Adam" (or Y-MRCA), the individual from whom all living humans are patrilineally descended. As the identity of both matrilineal and patrilineal MRCAs is dependent on genealogical history (pedigree collapse), they need not have lived at the same time. As of 2015, estimates of the age of the Y-MRCA range around 200,000 to 300,000 years ago, roughly consistent with the emergence of anatomically modern humans. The name "Mitochondrial Eve" alludes to the biblical Eve, which has led to repeated misrepresentations or misconceptions in journalistic accounts on the topic. Popular science presentations of the topic usually point out such possible misconceptions by emphasizing the fact that the position of mt-MRCA is neither fixed in time (as the position of mt-MRCA moves forward in time as mitochondrial DNA (mtDNA) lineages become extinct), nor does it refer to a "first woman", nor the only living female of her time, nor the first member of a "new species". History Early research Early research using molecular clock methods was done during the late 1970s to early 1980s. Allan Wilson, Mark Stoneking, Rebecca L. Cann and Wesley Brown found that mutation in human mtDNA was unexpectedly fast, at 0.02 substitution per base (1%) in a million years, which is 5–10 times faster than in nuclear DNA. Related work allowed for an analysis of the evolutionary relationships among gorillas, chimpanzees (common chimpanzee and bonobo) and humans. With data from 21 human individuals, Brown published the first estimate on the age of the mt-MRCA at 180,000 years ago in 1980. A statistical analysis published in 1982 was taken as evidence for recent African origin (a hypothesis which at the time was competing with Asian origin of H. sapiens). 1987 publication By 1985, data from the mtDNA of 145 women of different populations, and of two cell lines, HeLa and GM 3043, derived from an African American and a ǃKung respectively, were available. After more than 40 revisions of the draft, the manuscript was submitted to Nature in late 1985 or early 1986 and published on 1 January 1987. The published conclusion was that all current human mtDNA originated from a single population from Africa, at the time dated to between 140,000 and 200,000 years ago. The dating for "Eve" was a blow to the multiregional hypothesis, which was debated at the time, and a boost to the theory of the recent origin model. Cann, Stoneking and Wilson did not use the term "Mitochondrial Eve" or even the name "Eve" in their original paper. It is however used by Cann in an article entitled "In Search of Eve" in the September–October 1987 issue of The Sciences. 
It also appears in the October 1987 article in Science by Roger Lewin, headlined "The Unmasking of Mitochondrial Eve". The biblical connotation was very clear from the start. The accompanying research news in Nature had the title "Out of the garden of Eden". Wilson himself preferred the term "Lucky Mother" and thought the use of the name Eve "regrettable". But the concept of Eve caught on with the public and was repeated in a Newsweek cover story (11 January 1988 issue featured a depiction of Adam and Eve on the cover, with the title "The Search for Adam and Eve"), and a cover story in Time on 26 January 1987. Criticism and later research Shortly after the 1987 publication, criticism of its methodology and secondary conclusions was published. Both the dating of mt-Eve and the relevance of the age of the purely matrilineal descent for population replacement were subjects of controversy during the 1990s; Alan Templeton (1997) asserted that the study did "not support the hypothesis of a recent African origin for all of humanity following a split between Africans and non-Africans 100,000 years ago" and also did "not support the hypothesis of a recent global replacement of humans coming out of Africa." The placement of a relatively small population of humans in sub-Saharan Africa was consistent with the hypothesis of Cann (1982) and lent considerable support for the "recent out-of-Africa" scenario. In 1999, Krings et al. eliminated problems in molecular clocking postulated by Nei (1992) when it was found that the mtDNA sequence for the same region was substantially different from the MRCA relative to any human sequence. In 1997, a study of mtDNA mutation rates in a single, well-documented family (the Romanov family of Russian royalty) was published. This study calculated a mutation rate upwards of twenty times higher than previous results. Although the original research did have analytical limitations, the estimate on the age of the mt-MRCA has proven robust. More recent age estimates have remained consistent with the 140–200 kya estimate published in 1987: A 2013 estimate dated Mitochondrial Eve to about 160 kya (within the reserved estimate of the original research) and Out of Africa II to about 95 kya. Another 2013 study (based on genome sequencing of 69 people from 9 different populations) reported the age of Mitochondrial Eve between 99 and 148 kya and that of the Y-MRCA between 120 and 156 kya. Female and mitochondrial ancestry Without a DNA sample, it is not possible to reconstruct the complete genetic makeup (genome) of any individual who died very long ago. By analysing descendants' DNA, however, parts of ancestral genomes are estimated by scientists. Mitochondrial DNA (mtDNA, the DNA located in mitochondria, different from the DNA in the nucleus of a cell) and Y-chromosome DNA are commonly used to trace ancestry in this manner. mtDNA is generally passed un-mixed from mothers to children of both sexes, along the maternal line, or matrilineally. Matrilineal descent goes back through mothers, to their mothers, until all female lineages converge. Branches are identified by one or more unique markers which give a mitochondrial "DNA signature" or "haplotype" (e.g. the CRS is a haplotype). Each marker is a DNA base-pair that has resulted from an SNP mutation. Scientists sort mitochondrial DNA results into more or less related groups, with more or less recent common ancestors. 
This leads to the construction of a DNA family tree where the branches are in biological terms clades, and the common ancestors such as Mitochondrial Eve sit at branching points in this tree. Major branches are said to define a haplogroup (e.g. CRS belongs to haplogroup H), and large branches containing several haplogroups are called "macro-haplogroups". The mitochondrial clade which Mitochondrial Eve defines is the species Homo sapiens sapiens itself, or at least the current population or "chronospecies" as it exists today. In principle, earlier Eves can also be defined going beyond the species, for example one who is ancestral to both modern humanity and Neanderthals, or, further back, an "Eve" ancestral to all members of genus Homo and chimpanzees in genus Pan. According to current nomenclature, Mitochondrial Eve's haplogroup was within mitochondrial haplogroup L because this macro-haplogroup contains all surviving human mitochondrial lineages today, and she must predate the emergence of L0. The variation of mitochondrial DNA between different people can be used to estimate the time back to a common ancestor, such as Mitochondrial Eve. This works because, along any particular line of descent, mitochondrial DNA accumulates mutations at the rate of approximately one every 3,500 years per nucleotide. A certain number of these new variants will survive into modern times and be identifiable as distinct lineages. At the same time some branches, including even very old ones, come to an end when the last family in a distinct branch has no daughters. Mitochondrial Eve is the most recent common matrilineal ancestor for all modern humans. Whenever one of the two most ancient branch lines dies out (by producing only non-matrilinear descendants at that time), the MRCA will move to a more recent female ancestor, always the most recent mother to have more than one daughter with living maternal line descendants alive today. The number of mutations that can be found distinguishing modern people is determined by two criteria: first and most obviously, the time back to her, but second and less obviously by the varying rates at which new branches have come into existence and old branches have become extinct. By looking at the number of mutations which have been accumulated in different branches of this family tree, and looking at which geographical regions have the widest range of least related branches, the region where Eve lived can be proposed. Popular reception and misconceptions Newsweek reported on Mitochondrial Eve based on the Cann et al. study in January 1988, under a heading of "Scientists Explore a Controversial Theory About Man's Origins". The edition sold a record number of copies. The popular name "mitochondrial Eve", of 1980s coinage, has contributed to a number of popular misconceptions. At first, the announcement of a "mitochondrial Eve" was even greeted with endorsement from young earth creationists, who viewed the theory as a validation of the biblical creation story. Due to such misunderstandings, authors of popular science publications since the 1990s have been emphatic in pointing out that the name is merely a popular convention, and that the mt-MRCA was not in any way the "first woman". Her position is purely the result of genealogical history of human populations later, and as matrilineal lineages die out, the position of mt-MRCA keeps moving forward to younger individuals over time. 
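The time depth described above is, at root, a molecular-clock calculation: the more mutations separating two present-day mitochondrial lineages, the further back their shared ancestor lies. A minimal sketch of that arithmetic follows; the mutation rate and the number of observed differences are illustrative assumptions, not values from any particular study, and real estimates rely on calibrated substitution models and coalescent corrections rather than this back-of-the-envelope division.

```python
def tmrca_years(pairwise_differences, mutations_per_lineage_per_year):
    """Naive molecular-clock estimate of the time to the most recent common ancestor.

    Both lineages accumulate mutations independently after they split, so the
    observed differences are divided between the two branches.
    """
    return pairwise_differences / (2.0 * mutations_per_lineage_per_year)

# Illustrative numbers only: assume one mutation per lineage every 3,500 years
# somewhere in the mitochondrial genome, and 40 observed differences between
# two present-day mtDNA sequences.
rate = 1.0 / 3500.0      # mutations per lineage per year (assumed)
diffs = 40               # pairwise nucleotide differences (assumed)

print(f"TMRCA ≈ {tmrca_years(diffs, rate):,.0f} years")   # ≈ 70,000 years
```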
In River Out of Eden (1995), Richard Dawkins discussed human ancestry in the context of a "river of genes", including an explanation of the concept of Mitochondrial Eve. The Seven Daughters of Eve (2002) presented the topic of human mitochondrial genetics to a general audience. The Real Eve: Modern Man's Journey Out of Africa by Stephen Oppenheimer (2003) was adapted into a 2002 Discovery Channel documentary. Not the only woman One common misconception surrounding Mitochondrial Eve is that since all women alive today descended in a direct unbroken female line from her, she must have been the only woman alive at the time. However, nuclear DNA studies indicate that the effective population size of ancient humans never dropped below tens of thousands. Other women living during Eve's time may have descendants alive today but not in a direct female line. Not a fixed individual over time The definition of Mitochondrial Eve is fixed, but the woman in prehistory who fits this definition can change. That is, not only can our knowledge of when and where Mitochondrial Eve lived change due to new discoveries, but the actual Mitochondrial Eve can change. The Mitochondrial Eve can change, when a mother-daughter line comes to an end. It follows from the definition of Mitochondrial Eve that she had at least two daughters who both have unbroken female lineages that have survived to the present day. In every generation mitochondrial lineages end – when a woman with unique mtDNA dies with no daughters. When the mitochondrial lineages of daughters of Mitochondrial Eve die out, then the title of "Mitochondrial Eve" shifts forward from the remaining daughter through her matrilineal descendants, until the first descendant is reached who had two or more daughters who together have all living humans as their matrilineal descendants. Once a lineage has died out it is irretrievably lost and this mechanism can thus only shift the title of "Mitochondrial Eve" forward in time. Because mtDNA mapping of humans is very incomplete, the discovery of living mtDNA lines which predate our current concept of "Mitochondrial Eve" could result in the title moving to an earlier woman. This happened to her male counterpart, "Y-chromosomal Adam", when an older Y line, haplogroup A-00, was discovered. Not necessarily a contemporary of "Y-chromosomal Adam" Sometimes Mitochondrial Eve is assumed to have lived at the same time as Y-chromosomal Adam (from whom all living males are descended patrilineally), and perhaps even met and mated with him. Even if this were true, which is currently regarded as highly unlikely, this would only be a coincidence. Like Mitochondrial "Eve", Y-chromosomal "Adam" probably lived in Africa. A recent study (March 2013) concluded however that "Eve" lived much later than "Adam" – some 140,000 years later. (Earlier studies considered, conversely, that "Eve" lived earlier than "Adam".) More recent studies indicate that it is not impossible that Mitochondrial Eve and Y-chromosomal Adam might have lived around the same time. Not the most recent ancestor shared by all humans Mitochondrial Eve is the most recent common matrilineal ancestor, not the most recent common ancestor. Since the mtDNA is inherited maternally and recombination is either rare or absent, it is relatively easy to track the ancestry of the lineages back to a MRCA; however, this MRCA is valid only when discussing mitochondrial DNA. 
An approximate sequence from newest to oldest can list various important points in the ancestry of modern human populations: The human MRCA. The time period that human MRCA lived is unknown. Rohde et al put forth a "rough guess" that the MRCA could have existed 5000 years ago; however, the authors state that this estimate is "extremely tentative, and the model contains several obvious sources of error, as it was motivated more by considerations of theoretical insight and tractability than by realism." Just a few thousand years before the most recent single ancestor shared by all living humans was the time at which all humans who were then alive either left no descendants alive today or were common ancestors of all humans alive today. However, such a late date is difficult to reconcile with the geographical spread of our species and the consequent isolation of different groups from each other. For example, it is generally accepted that the indigenous population of Tasmania was isolated from all other humans between the rise in sea level after the last ice age some 8000 years ago and the arrival of Europeans. Estimates of the MRCA of even closely related human populations have been much more than 5000 years ago. The identical ancestors point. In other words, "each present-day human has exactly the same set of genealogical ancestors" alive at the "identical ancestors point" in time. This is far more recent than when Mitochondrial Eve was proposed to have lived. Mitochondrial Eve, the most recent female-line common ancestor of all living people. "Y-chromosomal Adam", the most recent male-line common ancestor of all living people.
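The distinctions listed above can be made concrete with a toy simulation. The sketch below uses a Wright–Fisher-style model of matrilineal descent (the population size and the uniform random choice of mothers are arbitrary modelling assumptions, not a demographic reconstruction): every present-day individual traces her line back through a randomly chosen mother each generation, and the generation at which all the lines first merge is this toy population's mitochondrial Eve. She is neither the first woman in the model nor the only one of her generation, and rerunning with a different seed moves her, mirroring the points made above.

```python
import random

def matrilineal_mrca_generations_back(num_women=300, seed=1):
    """Toy Wright–Fisher-style model of matrilineal descent.

    Start from every woman in the present generation, follow each line back
    through a randomly chosen mother per generation, and return how many
    generations back all the lines first merge into one ancestor.
    """
    rng = random.Random(seed)
    lineages = set(range(num_women))   # distinct ancestral lines still being traced
    generations_back = 0
    while len(lineages) > 1:
        # each distinct line independently picks a mother in the previous generation
        lineages = {rng.randrange(num_women) for _ in lineages}
        generations_back += 1
    return generations_back

# Typically on the order of several hundred generations back for 300 matrilines,
# even though all 300 founding women are ordinary members of the starting population.
print(matrilineal_mrca_generations_back())
```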
Biology and health sciences
Homo
Biology
89796
https://en.wikipedia.org/wiki/Mitochondrial%20DNA
Mitochondrial DNA
Mitochondrial DNA (mtDNA and mDNA) is the DNA located in the mitochondria, the organelles in a eukaryotic cell that convert chemical energy from food into adenosine triphosphate (ATP). Mitochondrial DNA is a small portion of the DNA contained in a eukaryotic cell; most of the DNA is in the cell nucleus, and, in plants and algae, the DNA also is found in plastids, such as chloroplasts. Human mitochondrial DNA was the first significant part of the human genome to be sequenced. This sequencing revealed that human mtDNA has 16,569 base pairs and encodes 13 proteins. As in other vertebrates, the human mitochondrial genetic code differs slightly from that of nuclear DNA. Since animal mtDNA evolves faster than nuclear genetic markers, it represents a mainstay of phylogenetics and evolutionary biology. It also permits tracing the relationships of populations, and so has become important in anthropology and biogeography. Origin Nuclear and mitochondrial DNA are thought to have separate evolutionary origins, with the mtDNA derived from the circular genomes of bacteria engulfed by the ancestors of modern eukaryotic cells. This theory is called the endosymbiotic theory. In the cells of extant organisms, the vast majority of the proteins in the mitochondria (numbering approximately 1500 different types in mammals) are coded by nuclear DNA, but the genes for some, if not most, of them are thought to be of bacterial origin, having been transferred to the eukaryotic nucleus during evolution. The reasons mitochondria have retained some genes are debated. The existence in some species of mitochondrion-derived organelles lacking a genome suggests that complete gene loss is possible, and transferring mitochondrial genes to the nucleus has several advantages. The difficulty of targeting remotely-produced hydrophobic protein products to the mitochondrion is one hypothesis for why some genes are retained in mtDNA; colocalisation for redox regulation is another, citing the desirability of localised control over mitochondrial machinery. Recent analysis of a wide range of mtDNA genomes suggests that both these features may dictate mitochondrial gene retention. Genome structure and diversity Across all organisms, there are six main mitochondrial genome types, classified by structure (i.e. circular versus linear), size, presence of introns or plasmid-like structures, and whether the genetic material is a singular molecule or collection of homogeneous or heterogeneous molecules. In many unicellular organisms (e.g., the ciliate Tetrahymena and the green alga Chlamydomonas reinhardtii), and in rare cases also in multicellular organisms (e.g. in some species of Cnidaria), the mtDNA is linear DNA. Most of these linear mtDNAs possess telomerase-independent telomeres (i.e., the ends of the linear DNA) with different modes of replication, which have made them interesting objects of research because many of these unicellular organisms with linear mtDNA are known pathogens. Animals Most (bilaterian) animals have a circular mitochondrial genome. The Medusozoa and Calcarea clades, however, include species with linear mitochondrial chromosomes. With a few exceptions, animals have 37 genes in their mitochondrial DNA: 13 for proteins, 22 for tRNAs, and 2 for rRNAs. Mitochondrial genomes for animals average about 16,000 base pairs in length. The anemone Isarachnanthus nocturnus has the largest mitochondrial genome of any animal at 80,923 bp. 
The smallest known mitochondrial genome in animals belongs to the comb jelly Vallicula multiformis, which consists of 9,961 bp. In February 2020, a jellyfish-related parasite – Henneguya salminicola – was discovered that lacks a mitochondrial genome but retains structures deemed mitochondrion-related organelles. Moreover, nuclear DNA genes involved in aerobic respiration and in mitochondrial DNA replication and transcription were either absent or present only as pseudogenes. This is the first multicellular organism known to have this absence of aerobic respiration and to live completely free of oxygen dependency. Plants and fungi There are three different mitochondrial genome types in plants and fungi. The first type is a circular genome that has introns (type 2) and may range from 19 to 1000 kbp in length. The second genome type is a circular genome (about 20–1000 kbp) that also has a plasmid-like structure (1 kb) (type 3). The final genome type found in plants and fungi is a linear genome made up of homogeneous DNA molecules (type 5). Great variation in mtDNA gene content and size exists among fungi and plants, although there appears to be a core subset of genes present in all eukaryotes (except for the few that have no mitochondria at all). In Fungi, however, there is no single gene shared among all mitogenomes. Some plant species have enormous mitochondrial genomes, with Silene conica mtDNA containing as many as 11,300,000 base pairs. Surprisingly, even those huge mtDNAs contain the same number and kinds of genes as related plants with much smaller mtDNAs. The genome of the mitochondrion of the cucumber (Cucumis sativus) consists of three circular chromosomes (lengths 1556, 84 and 45 kilobases), which are entirely or largely autonomous with regard to their replication. Protists Protists contain the most diverse mitochondrial genomes, with five different types found in this kingdom. Type 2, type 3 and type 5 of the plant and fungal genomes also exist in some protists, as do two unique genome types. One of these unique types is a heterogeneous collection of circular DNA molecules (type 4) while the other is a heterogeneous collection of linear molecules (type 6). Genome types 4 and 6 each range from 1–200 kbp in size. The smallest mitochondrial genome sequenced to date is the 5,967 bp mtDNA of the parasite Plasmodium falciparum. Endosymbiotic gene transfer, the process by which genes that were coded in the mitochondrial genome are transferred to the cell's main genome, likely explains why more complex organisms such as humans have smaller mitochondrial genomes than simpler organisms such as protists. Replication Mitochondrial DNA is replicated by the DNA polymerase gamma complex which is composed of a 140 kDa catalytic DNA polymerase encoded by the POLG gene and two 55 kDa accessory subunits encoded by the POLG2 gene. The replisome machinery is formed by DNA polymerase, TWINKLE and mitochondrial SSB proteins. TWINKLE is a helicase, which unwinds short stretches of dsDNA in the 5' to 3' direction. All these polypeptides are encoded in the nuclear genome. During embryogenesis, replication of mtDNA is strictly down-regulated from the fertilized oocyte through the preimplantation embryo. The resulting reduction in per-cell copy number of mtDNA plays a role in the mitochondrial bottleneck, exploiting cell-to-cell variability to ameliorate the inheritance of damaging mutations. According to Justin St. 
John and colleagues, "At the blastocyst stage, the onset of mtDNA replication is specific to the cells of the trophectoderm. In contrast, the cells of the inner cell mass restrict mtDNA replication until they receive the signals to differentiate to specific cell types." Genes on the human mtDNA and their transcription The two strands of the human mitochondrial DNA are distinguished as the heavy strand and the light strand. The heavy strand is rich in guanine and encodes 12 subunits of the oxidative phosphorylation system, two ribosomal RNAs (12S and 16S), and 14 transfer RNAs (tRNAs). The light strand encodes one subunit, and 8 tRNAs. So, altogether mtDNA encodes two rRNAs, 22 tRNAs, and 13 protein subunits, all of which are involved in the oxidative phosphorylation process. Between most (but not all) protein-coding regions, tRNAs are present (see the human mitochondrial genome map). During transcription, the tRNAs acquire their characteristic L-shape that gets recognized and cleaved by specific enzymes. With the mitochondrial RNA processing, individual mRNA, rRNA, and tRNA sequences are released from the primary transcript. Folded tRNAs therefore act as secondary-structure punctuation marks. Regulation of transcription The promoters for the initiation of the transcription of the heavy and light strands are located in the main non-coding region of the mtDNA called the displacement loop, the D-loop. There is evidence that the transcription of the mitochondrial rRNAs is regulated by the heavy-strand promoter 1 (HSP1), and the transcription of the polycistronic transcripts coding for the protein subunits is regulated by HSP2. Measurement of the levels of the mtDNA-encoded RNAs in bovine tissues has shown that there are major differences in the expression of the mitochondrial RNAs relative to total tissue RNA. Among the 12 tissues examined the highest level of expression was observed in heart, followed by brain and steroidogenic tissue samples. As demonstrated by the effect of the trophic hormone ACTH on adrenal cortex cells, the expression of the mitochondrial genes may be strongly regulated by external factors, apparently to enhance the synthesis of mitochondrial proteins necessary for energy production. Interestingly, while the expression of protein-encoding genes was stimulated by ACTH, the levels of the mitochondrial 16S rRNA showed no significant change. Mitochondrial inheritance In most multicellular organisms, mtDNA is inherited from the mother (maternally inherited). Mechanisms for this include simple dilution (an egg contains on average 200,000 mtDNA molecules, whereas a healthy human sperm has been reported to contain on average 5 molecules), degradation of sperm mtDNA in the male genital tract and in the fertilized egg; and, at least in a few organisms, failure of sperm mtDNA to enter the egg. Whatever the mechanism, this single parent (uniparental inheritance) pattern of mtDNA inheritance is found in most animals, most plants and also in fungi. In a study published in 2018, human babies were reported to inherit mtDNA from both their fathers and their mothers resulting in mtDNA heteroplasmy, a finding that has been rejected by other scientists. Female inheritance In sexual reproduction, mitochondria are normally inherited exclusively from the mother; the mitochondria in mammalian sperm are usually destroyed by the egg cell after fertilization. 
Also, mitochondria are present solely in the midpiece, which is used for propelling the sperm cells, and sometimes the midpiece, along with the tail, is lost during fertilization. In 1999 it was reported that paternal sperm mitochondria (containing mtDNA) are marked with ubiquitin to select them for later destruction inside the embryo. Some in vitro fertilization techniques, particularly injecting a sperm into an oocyte, may interfere with this. The fact that mitochondrial DNA is mostly maternally inherited enables genealogical researchers to trace maternal lineage far back in time. (Y-chromosomal DNA, paternally inherited, is used in an analogous way to determine the patrilineal history.) This is usually accomplished on human mitochondrial DNA by sequencing the hypervariable control regions (HVR1 or HVR2), and sometimes the complete molecule of the mitochondrial DNA, as a genealogical DNA test. HVR1, for example, consists of about 440 base pairs. These 440 base pairs are compared to the same regions of other individuals (either specific people or subjects in a database) to determine maternal lineage. Most often, the comparison is made with the revised Cambridge Reference Sequence. Vilà et al. have published studies tracing the matrilineal descent of domestic dogs from wolves. The concept of the Mitochondrial Eve is based on the same type of analysis, attempting to discover the origin of humanity by tracking the lineage back in time. The mitochondrial bottleneck Entities subject to uniparental inheritance and with little to no recombination may be expected to be subject to Muller's ratchet, the accumulation of deleterious mutations until functionality is lost. Animal populations of mitochondria avoid this through a developmental process known as the mtDNA bottleneck. The bottleneck exploits random processes in the cell to increase the cell-to-cell variability in mutant load as an organism develops: a single egg cell with some proportion of mutant mtDNA thus produces an embryo in which different cells have different mutant loads. Cell-level selection may then act to remove those cells with more mutant mtDNA, leading to a stabilisation or reduction in mutant load between generations. The mechanism underlying the bottleneck is debated, with a recent mathematical and experimental metastudy providing evidence for a combination of the random partitioning of mtDNAs at cell divisions and the random turnover of mtDNA molecules within the cell. Male inheritance Male mitochondrial DNA inheritance has been discovered in Plymouth Rock chickens. Evidence supports rare instances of male mitochondrial inheritance in some mammals as well. Specifically, documented occurrences exist for mice, where the male-inherited mitochondria were subsequently rejected. It has also been found in sheep, and in cloned cattle. Rare cases of male mitochondrial inheritance have been documented in humans. Although many of these cases involve cloned embryos or subsequent rejection of the paternal mitochondria, others document in vivo inheritance and persistence under lab conditions. Doubly uniparental inheritance of mtDNA is observed in bivalve mollusks. In those species, females have only one type of mtDNA (F), whereas males have F type mtDNA in their somatic cells, but M type of mtDNA (which can be as much as 30% divergent) in germline cells. Paternally inherited mitochondria have additionally been reported in some insects such as fruit flies, honeybees, and periodical cicadas. 
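The mitochondrial bottleneck discussed under female inheritance is essentially a statement about sampling variance: when a heteroplasmic cell divides, each daughter receives a random subset of the mtDNA molecules, so the spread of mutant load across cells grows with every division even though the average stays put. The sketch below illustrates just the binomial-partitioning component of that picture; the copy number, starting heteroplasmy and number of divisions are arbitrary assumptions, and the within-cell turnover of molecules mentioned above is ignored.

```python
import random
import statistics

def bottleneck(cells=1000, copies=200, start_mutant_fraction=0.3,
               divisions=10, seed=0):
    """Simulate random partitioning of mutant vs wild-type mtDNA at cell division.

    Each cell carries `copies` mtDNA molecules.  At every division the daughter
    inherits a binomial sample of the mother's molecules and then re-amplifies
    back to `copies`, so the mean mutant load is preserved while the
    cell-to-cell variance increases.
    """
    rng = random.Random(seed)
    loads = [start_mutant_fraction] * cells
    for _ in range(divisions):
        new_loads = []
        for p in loads:
            # number of mutant molecules drawn into the daughter's `copies`
            mutant = sum(1 for _ in range(copies) if rng.random() < p)
            new_loads.append(mutant / copies)
        loads = new_loads
        print(f"mean={statistics.mean(loads):.3f}  stdev={statistics.pstdev(loads):.3f}")
    return loads

bottleneck()
```

The printed mean stays near the starting heteroplasmy while the standard deviation grows division by division, which is the variability that cell-level selection can then act on.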
Mitochondrial donation An IVF technique known as mitochondrial donation or mitochondrial replacement therapy (MRT) results in offspring containing mtDNA from a donor female, and nuclear DNA from the mother and father. In the spindle transfer procedure, the nucleus of an egg is inserted into the cytoplasm of an egg from a donor female which has had its nucleus removed, but still contains the donor female's mtDNA. The composite egg is then fertilized with the male's sperm. The procedure is used when a woman with genetically defective mitochondria wishes to procreate and produce offspring with healthy mitochondria. The first known child to be born as a result of mitochondrial donation was a boy born to a Jordanian couple in Mexico on 6 April 2016. Mutations and disease Susceptibility The concept that mtDNA is particularly susceptible to reactive oxygen species generated by the respiratory chain due to its proximity remains controversial. mtDNA does not accumulate any more oxidative base damage than nuclear DNA. It has been reported that at least some types of oxidative DNA damage are repaired more efficiently in mitochondria than they are in the nucleus. mtDNA is packaged with proteins which appear to be as protective as proteins of the nuclear chromatin. Moreover, mitochondria evolved a unique mechanism which maintains mtDNA integrity through degradation of excessively damaged genomes followed by replication of intact/repaired mtDNA. This mechanism is not present in the nucleus and is enabled by multiple copies of mtDNA present in mitochondria. The outcome of mutation in mtDNA may be an alteration in the coding instructions for some proteins, which may have an effect on organism metabolism and/or fitness. Genetic illness Mutations of mitochondrial DNA can lead to a number of illnesses including exercise intolerance and Kearns–Sayre syndrome (KSS), which causes a person to lose full function of heart, eye, and muscle movements. Some evidence suggests that they might be major contributors to the aging process and age-associated pathologies. Particularly in the context of disease, the proportion of mutant mtDNA molecules in a cell is termed heteroplasmy. The within-cell and between-cell distributions of heteroplasmy dictate the onset and severity of disease and are influenced by complicated stochastic processes within the cell and during development. Mutations in mitochondrial tRNAs can be responsible for severe diseases like the MELAS and MERRF syndromes. Mutations in nuclear genes that encode proteins that mitochondria use can also contribute to mitochondrial diseases. These diseases do not follow mitochondrial inheritance patterns, but instead follow Mendelian inheritance patterns. Use in disease diagnosis Recently, a mutation in mtDNA has been used to help diagnose prostate cancer in patients with negative prostate biopsy. mtDNA alterations can be detected in the bio-fluids of patients with cancer. mtDNA is characterized by a high rate of polymorphisms and mutations, some of which are increasingly recognized as important causes of human pathology, such as oxidative phosphorylation (OXPHOS) disorders, maternally inherited diabetes and deafness (MIDD), type 2 diabetes mellitus, neurodegenerative disease, heart failure and cancer. Relationship with ageing Though the idea is controversial, some evidence suggests a link between aging and mitochondrial genome dysfunction. 
In essence, mutations in mtDNA upset a careful balance of reactive oxygen species (ROS) production and enzymatic ROS scavenging (by enzymes like superoxide dismutase, catalase, glutathione peroxidase and others). However, some mutations that increase ROS production (e.g., by reducing antioxidant defenses) in worms increase, rather than decrease, their longevity. Also, naked mole rats, rodents about the size of mice, live about eight times longer than mice despite having reduced antioxidant defenses and increased oxidative damage to biomolecules compared to mice. Once, there was thought to be a positive feedback loop at work (a 'Vicious Cycle'); as mitochondrial DNA accumulates genetic damage caused by free radicals, the mitochondria lose function and leak free radicals into the cytosol. A decrease in mitochondrial function reduces overall metabolic efficiency. However, this concept was conclusively disproved when it was demonstrated that mice which were genetically altered to accumulate mtDNA mutations at an accelerated rate do age prematurely, but that their tissues do not produce more ROS as predicted by the 'Vicious Cycle' hypothesis. Supporting a link between longevity and mitochondrial DNA, some studies have found correlations between biochemical properties of the mitochondrial DNA and the longevity of species. The application of a mitochondria-specific ROS scavenger, which led to a significant increase in the longevity of the mice studied, suggests that mitochondria may still be implicated in ageing. Extensive research is being conducted to further investigate this link and methods to combat ageing. Presently, gene therapy and nutraceutical supplementation are popular areas of ongoing research. Bjelakovic et al. analyzed the results of 78 studies between 1977 and 2012, involving a total of 296,707 participants, and concluded that antioxidant supplements do not reduce all-cause mortality nor extend lifespan, while some of them, such as beta carotene, vitamin E, and higher doses of vitamin A, may actually increase mortality. In a recent study, it was shown that dietary restriction can reverse ageing alterations by affecting the accumulation of mtDNA damage in several organs of rats. For example, dietary restriction prevented age-related accumulation of mtDNA damage in the cortex and decreased it in the lung and testis. Neurodegenerative diseases Increased mtDNA damage is a feature of several neurodegenerative diseases. The brains of individuals with Alzheimer's disease have elevated levels of oxidative DNA damage in both nuclear DNA and mtDNA, but the mtDNA has approximately 10-fold higher levels than nuclear DNA. It has been proposed that aged mitochondria are the critical factor in the origin of neurodegeneration in Alzheimer's disease. Analysis of the brains of AD patients suggested an impaired function of the DNA repair pathway, which would reduce the overall quality of mtDNA. In Huntington's disease, mutant huntingtin protein causes mitochondrial dysfunction involving inhibition of mitochondrial electron transport, higher levels of reactive oxygen species and increased oxidative stress. Mutant huntingtin protein promotes oxidative damage to mtDNA, as well as nuclear DNA, which may contribute to Huntington's disease pathology. The DNA oxidation product 8-oxoguanine (8-oxoG) is a well-established marker of oxidative DNA damage. In persons with amyotrophic lateral sclerosis (ALS), the enzymes that normally repair 8-oxoG DNA damage in the mtDNA of spinal motor neurons are impaired. 
Thus oxidative damage to mtDNA of motor neurons may be a significant factor in the etiology of ALS. Correlation of the mtDNA base composition with animal life spans Over the past decade, an Israeli research group led by Professor Vadim Fraifeld has shown that strong and significant correlations exist between the mtDNA base composition and animal species-specific maximum life spans. As demonstrated in their work, higher mtDNA guanine + cytosine content (GC%) strongly associates with longer maximum life spans across animal species. An additional observation is that the mtDNA GC% correlation with the maximum life spans is independent of the well-known correlation between animal species metabolic rate and maximum life spans. The mtDNA GC% and resting metabolic rate explain the differences in animal species maximum life spans in a multiplicative manner (i.e., species maximum life span = their mtDNA GC% * metabolic rate). To support the scientific community in carrying out comparative analyses between mtDNA features and longevity across animals, a dedicated database named MitoAge was built. mtDNA mutational spectrum is sensitive to species-specific life-history traits De novo mutations arise either due to mistakes during DNA replication or due to unrepaired damage caused in turn by endogenous and exogenous mutagens. It has long been believed that mtDNA can be particularly sensitive to damage caused by reactive oxygen species (ROS); however, G>T substitutions, the hallmark of oxidative damage in the nuclear genome, are very rare in mtDNA and do not increase with age. Comparing the mtDNA mutational spectra of hundreds of mammalian species, it has been recently demonstrated that species with extended lifespans have an increased rate of A>G substitutions on the single-stranded heavy strand. This discovery led to the hypothesis that A>G is a mitochondria-specific marker of age-associated oxidative damage. This finding provides a mutational (as opposed to a selective) explanation for the observation that long-lived species have GC-rich mtDNA: long-lived species become GC-rich simply because of their biased process of mutagenesis. An association between mtDNA mutational spectrum and species-specific life-history traits in mammals opens up the possibility of linking these factors together and discovering new life-history-specific mutagens in different groups of organisms. Relationship with non-B (non-canonical) DNA structures Deletion breakpoints frequently occur within or near regions showing non-canonical (non-B) conformations, namely hairpins, cruciforms and cloverleaf-like elements. Moreover, there is data supporting the involvement of helix-distorting intrinsically curved regions and long G-tetrads in eliciting instability events. In addition, higher breakpoint densities were consistently observed within GC-skewed regions and in the close vicinity of the degenerate sequence motif YMMYMNNMMHM. Use in forensics Unlike nuclear DNA, which is inherited from both parents and in which genes are rearranged in the process of recombination, there is usually no change in mtDNA from parent to offspring. Although mtDNA also recombines, it does so with copies of itself within the same mitochondrion. Because of this and because the mutation rate of animal mtDNA is higher than that of nuclear DNA, mtDNA is a powerful tool for tracking ancestry through females (matrilineage) and has been used in this role to track the ancestry of many species back hundreds of generations. 
mtDNA testing can be used by forensic scientists in cases where nuclear DNA is severely degraded. Autosomal cells only have two copies of nuclear DNA, but can have hundreds of copies of mtDNA due to the multiple mitochondria present in each cell. This means highly degraded evidence that would not be beneficial for STR analysis could be used in mtDNA analysis. mtDNA may be present in bones, teeth, or hair, which could be the only remains left in the case of severe degradation. In contrast to STR analysis, mtDNA sequencing uses Sanger sequencing. The known sequence and questioned sequence are both compared to the Revised Cambridge Reference Sequence to generate their respective haplotypes. If the known sample sequence and questioned sequence originated from the same matriline, one would expect to see identical sequences and identical differences from the rCRS. Cases arise where there are no known samples to collect and the unknown sequence can be searched in a database such as EMPOP. The Scientific Working Group on DNA Analysis Methods recommends three conclusions for describing the differences between a known mtDNA sequence and a questioned mtDNA sequence: exclusion for two or more differences between the sequences, inconclusive if there is one nucleotide difference, or cannot exclude if there are no nucleotide differences between the two sequences. The rapid mutation rate (in animals) makes mtDNA useful for assessing genetic relationships of individuals or groups within a species and also for identifying and quantifying the phylogeny (evolutionary relationships; see phylogenetics) among different species. To do this, biologists determine and then compare the mtDNA sequences from different individuals or species. Data from the comparisons is used to construct a network of relationships among the sequences, which provides an estimate of the relationships among the individuals or species from which the mtDNAs were taken. mtDNA can be used to estimate the relationship between both closely related and distantly related species. Due to the high mutation rate of mtDNA in animals, the 3rd positions of the codons change relatively rapidly, and thus provide information about the genetic distances among closely related individuals or species. On the other hand, the substitution rate of mt-proteins is very low, thus amino acid changes accumulate slowly (with corresponding slow changes at 1st and 2nd codon positions) and thus they provide information about the genetic distances of distantly related species. Statistical models that treat substitution rates among codon positions separately can thus be used to simultaneously estimate phylogenies that contain both closely and distantly related species. Mitochondrial DNA was admitted into evidence for the first time ever in a United States courtroom in 1996 during State of Tennessee v. Paul Ware. In the 1998 United States court case of Commonwealth of Pennsylvania v. Patricia Lynne Rorrer, mitochondrial DNA was admitted into evidence in the State of Pennsylvania for the first time. The case was featured in episode 55 of season 5 of the true crime drama series Forensic Files. Mitochondrial DNA was first admitted into evidence in California, United States, in the successful prosecution of David Westerfield for the 2002 kidnapping and murder of 7-year-old Danielle van Dam in San Diego: it was used for both human and dog identification. This was the first trial in the U.S. to admit canine DNA. 
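Once the known and questioned sequences have been aligned against the rCRS, the SWGDAM guidelines above reduce to a simple counting rule. The sketch below is a toy illustration of that rule; the sequences are hypothetical, and it assumes alignment, heteroplasmy and sequence quality have already been handled, as they must be in real casework.

```python
def compare_mtdna(known: str, questioned: str) -> str:
    """Apply the SWGDAM-style counting rule to two aligned mtDNA fragments.

    Assumes the sequences are already aligned to the same reference coordinates
    and of equal length; real work also handles heteroplasmy and indels.
    """
    if len(known) != len(questioned):
        raise ValueError("sequences must be aligned to the same length")
    differences = sum(1 for a, b in zip(known, questioned) if a != b)
    if differences >= 2:
        return f"exclusion ({differences} differences)"
    if differences == 1:
        return "inconclusive (1 difference)"
    return "cannot exclude (0 differences)"

# Hypothetical 40-bp HVR1 fragments for illustration only.
known      = "CACCATTAGCACCCAAAGCTAAGATTCTAATTTAAACTAT"
questioned = "CACCATTAGCACCCAAAGCTAAGACTCTAATTTAAACTAT"
print(compare_mtdna(known, questioned))   # -> inconclusive (1 difference)
```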
The remains of King Richard III, who died in 1485, were identified by comparing his mtDNA with that of two matrilineal descendants of his sister who were alive in 2013, 527 years after he died. Use in evolutionary biology and systematic biology mtDNA is conserved across eukaryotic organisms, given the critical role of mitochondria in cellular respiration. However, due to less efficient DNA repair (compared to nuclear DNA) it has a relatively high mutation rate (but slow compared to other DNA regions such as microsatellites), which makes it useful for studying the evolutionary relationships—phylogeny—of organisms. Biologists can determine and then compare mtDNA sequences among different species and use the comparisons to build an evolutionary tree for the species examined. For instance, while most nuclear genes are nearly identical between humans and chimpanzees, their mitochondrial genomes are 9.8% different. Human and gorilla mitochondrial genomes are 11.8% different, suggesting that humans may be more closely related to chimpanzees than to gorillas. mtDNA in nuclear DNA Whole genome sequences of more than 66,000 people revealed that most of them had some mitochondrial DNA inserted into their nuclear genomes. More than 90% of these nuclear-mitochondrial segments (NUMTs) were inserted after humans diverged from the other apes. Results indicate such transfers currently occur as frequently as once in every ≈4,000 human births. It appears that organellar DNA is much more often transferred to nuclear DNA than previously thought. This observation also supports the endosymbiotic theory, the idea that eukaryotes evolved from endosymbionts which turned into organelles while transferring most of their DNA to the nucleus, so that the organellar genome shrank in the process. History Mitochondrial DNA was discovered in the 1960s by Margit M. K. Nass and Sylvan Nass by electron microscopy as DNase-sensitive threads inside mitochondria, and by Ellen Haslbrunner, Hans Tuppy and Gottfried Schatz by biochemical assays on highly purified mitochondrial fractions. Mitochondrial sequence databases Several specialized databases have been founded to collect mitochondrial genome sequences and other information. Although most of them focus on sequence data, some of them include phylogenetic or functional information. AmtDB: a database of ancient human mitochondrial genomes. InterMitoBase: an annotated database and analysis platform of protein-protein interactions for human mitochondria. (apparently last updated in 2010, but still available) MitoBreak: the mitochondrial DNA breakpoints database. MitoFish and MitoAnnotator: a mitochondrial genome database of fish.
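The comparative use of mtDNA described under evolutionary and systematic biology comes down to computing pairwise sequence distances and grouping the least-distant sequences first. The snippet below illustrates that logic with three short made-up fragments; the sequences and the simple p-distance measure are illustrative assumptions, whereas real phylogenetics uses substitution models, proper alignment and far longer sequences.

```python
from itertools import combinations

def p_distance(a: str, b: str) -> float:
    """Fraction of aligned sites that differ between two equal-length sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Made-up aligned mtDNA fragments for illustration only.
seqs = {
    "species_A": "ATGCGTACCTTAGCAATACC",
    "species_B": "ATGCGTACCTTGGCAATACC",   # one difference from species_A
    "species_C": "ATACGTTCCTAGGCTATACC",   # several differences from both
}

distances = {(x, y): p_distance(seqs[x], seqs[y]) for x, y in combinations(seqs, 2)}
for pair, d in sorted(distances.items(), key=lambda kv: kv[1]):
    print(f"{pair[0]} vs {pair[1]}: p-distance = {d:.2f}")
# The smallest distance identifies the pair expected to share the most recent
# common ancestor (here species_A and species_B).
```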
Biology and health sciences
Organelles
Biology
89830
https://en.wikipedia.org/wiki/Seismic%20wave
Seismic wave
A seismic wave is a mechanical wave of acoustic energy that travels through the Earth or another planetary body. It can result from an earthquake (or generally, a quake), volcanic eruption, magma movement, a large landslide and a large man-made explosion that produces low-frequency acoustic energy. Seismic waves are studied by seismologists, who record the waves using seismometers, hydrophones (in water), or accelerometers. Seismic waves are distinguished from seismic noise (ambient vibration), which is persistent low-amplitude vibration arising from a variety of natural and anthropogenic sources. The propagation velocity of a seismic wave depends on density and elasticity of the medium as well as the type of wave. Velocity tends to increase with depth through Earth's crust and mantle, but drops sharply going from the mantle to Earth's outer core. Earthquakes create distinct types of waves with different velocities. When recorded by a seismic observatory, their different travel times help scientists locate the quake's hypocenter. In geophysics, the refraction or reflection of seismic waves is used for research into Earth's internal structure. Scientists sometimes generate and measure vibrations to investigate shallow, subsurface structure. Types Among the many types of seismic waves, one can make a broad distinction between body waves, which travel through the Earth, and surface waves, which travel at the Earth's surface. Other modes of wave propagation exist than those described in this article; though of comparatively minor importance for earth-borne waves, they are important in the case of asteroseismology. Body waves travel through the interior of the Earth. Surface waves travel across the surface. Surface waves decay more slowly with distance than body waves which travel in three dimensions. Particle motion of surface waves is larger than that of body waves, so surface waves tend to cause more damage. Body waves Body waves travel through the interior of the Earth along paths controlled by the material properties in terms of density and modulus (stiffness). The density and modulus, in turn, vary according to temperature, composition, and material phase. This effect resembles the refraction of light waves. Two types of particle motion result in two types of body waves: Primary and Secondary waves. This distinction was recognized in 1830 by the French mathematician Siméon Denis Poisson. Primary waves Primary waves (P waves) are compressional waves that are longitudinal in nature. P waves are pressure waves that travel faster than other waves through the earth to arrive at seismograph stations first, hence the name "Primary". These waves can travel through any type of material, including fluids, and can travel nearly 1.7 times faster than the S waves. In air, they take the form of sound waves, hence they travel at the speed of sound. Typical speeds are 330 m/s in air, 1450 m/s in water and about 5000 m/s in granite. Secondary waves Secondary waves (S waves) are shear waves that are transverse in nature. Following an earthquake event, S waves arrive at seismograph stations after the faster-moving P waves and displace the ground perpendicular to the direction of propagation. Depending on the propagational direction, the wave can take on different surface characteristics; for example, in the case of horizontally polarized S waves, the ground moves alternately to one side and then the other. S waves can travel only through solids, as fluids (liquids and gases) do not support shear stresses. 
S waves are slower than P waves, and speeds are typically around 60% of that of P waves in any given material. Shear waves can not travel through any liquid medium, so the absence of S waves in earth's outer core suggests a liquid state. Surface waves Seismic surface waves travel along the Earth's surface. They can be classified as a form of mechanical surface wave. Surface waves diminish in amplitude as they get farther from the surface and propagate more slowly than seismic body waves (P and S). Surface waves from very large earthquakes can have globally observable amplitude of several centimeters. Rayleigh waves Rayleigh waves, also called ground roll, are surface waves that propagate with motions that are similar to those of waves on the surface of water (note, however, that the associated seismic particle motion at shallow depths is typically retrograde, and that the restoring force in Rayleigh and in other seismic waves is elastic, not gravitational as for water waves). The existence of these waves was predicted by John William Strutt, Lord Rayleigh, in 1885. They are slower than body waves, e.g., at roughly 90% of the velocity of S waves for typical homogeneous elastic media. In a layered medium (e.g., the crust and upper mantle) the velocity of the Rayleigh waves depends on their frequency and wavelength.
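Because P and S waves leave the source together but travel at different speeds, the lag between their arrivals at a station grows with distance, which is the basis of the hypocentre location mentioned earlier. The sketch below turns that idea into arithmetic under deliberately crude assumptions (straight ray paths and constant velocities, with typical crustal values chosen purely for illustration), so it gives only an order-of-magnitude distance rather than a real location method.

```python
def distance_from_sp_lag(sp_lag_seconds: float,
                         vp_km_s: float = 6.0,
                         vs_km_s: float = 3.5) -> float:
    """Estimate source distance from the S-minus-P arrival-time lag.

    Assumes straight ray paths and uniform velocities (illustrative crustal
    defaults).  The lag equals d/vs - d/vp, so d = lag / (1/vs - 1/vp).
    """
    return sp_lag_seconds / (1.0 / vs_km_s - 1.0 / vp_km_s)

# A station recording the S wave 30 s after the P wave would, under these
# assumptions, be roughly 250 km from the source.
print(f"{distance_from_sp_lag(30.0):.0f} km")
```

With lags measured at three or more stations, intersecting the corresponding distance circles gives the classical graphical estimate of the epicentre.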
Physical sciences
Seismology
Earth science
89847
https://en.wikipedia.org/wiki/IPod
IPod
The iPod is a discontinued series of portable media players and multi-purpose mobile devices that were designed and marketed by Apple Inc. from 2001 to 2022. The first version was released on November 10, 2001, several months after the Macintosh version of iTunes was released. Apple sold an estimated 450 million iPod products as of 2022. Apple discontinued the iPod product line on May 10, 2022. At over 20 years, the iPod brand is the longest-running brand to be discontinued by Apple. Some versions of the iPod can serve as external data storage devices, like other digital music players. Prior to macOS 10.15, Apple's iTunes software (and other alternative software) could be used to transfer music, photos, videos, games, contact information, e-mail settings, Web bookmarks, and calendars to the devices supporting these features from computers using certain versions of Apple macOS and Microsoft Windows operating systems. Before the release of iOS 5, the iPod branding was used for the media player included with the iPhone and iPad, which was separated into apps named "Music" and "Videos" on the iPod Touch. As of iOS 5, separate Music and Videos apps are standardized across all iOS-powered products. While the iPhone and iPad have essentially the same media player capabilities as the iPod line, they are generally treated as separate products. During the middle of 2010, iPhone sales overtook those of the iPod. History Portable MP3 players had existed since the mid-1990s; however, Apple found existing digital music players "big and clunky or small and useless" with user interfaces that were "unbelievably awful". They also identified weaknesses in existing models' attempt to negotiate the trade-off between capacity and portability: flash memory-based players held too few songs, while the hard drive based models were too big and heavy. To address these deficits, the company decided to develop its own MP3 player. At Apple CEO Steve Jobs’ direction, hardware engineering chief Jon Rubinstein recruited Tony Fadell, a former employee of General Magic and Philips, who had a business idea to invent a better MP3 player and build a complementary music sales store. Fadell had previously developed the Philips Velo and Nino PDA before starting a company called Fuse Systems to build the new MP3 player, but RealNetworks, Sony and Philips had already passed on the project. Rubinstein had already discovered the Toshiba hard disk drive while meeting with an Apple supplier in Japan, ultimately purchasing the rights to it for Apple. Rubinstein had also already made substantial progress on development of other key hardware elements, including the device's screen and battery. Fadell found support for his project with Apple Computer and was hired by Apple in 2001 as an independent contractor to work on the iPod project, then code-named project P-68. Because most of Apple's engineering manpower and resources were already dedicated to the iMac line, Fadell hired engineers from his startup company, Fuse, and veteran engineers from General Magic and Philips to build the core iPod development team. Time constraints forced Fadell to develop various components of the iPod outside Apple. Fadell partnered with a company called PortalPlayer to design software for the device; this work eventually took shape as the iPod OS. Within eight months, Tony Fadell's team and PortalPlayer had completed a prototype. The power supply was then designed by Michael Dhuey, while the display was designed in-house by Apple design engineer Jonathan Ive. 
The original iPod's physical appearance was inspired by the 1958 Braun T3 transistor radio designed by Dieter Rams, while the wheel-based user interface drew on Bang & Olufsen's BeoCom 6000 telephone. Apple CEO Steve Jobs set an exacting standard for the device's physical design; one anecdote relates an occasion on which Jobs dropped a prototype into an aquarium in front of engineers to demonstrate from bubbles leaving its housing that the current design contained unused internal space. Apple contracted another company, Pixo, to help design and implement the user interface (as well as Unicode, memory management, and event processing) under Jobs' direct supervision. The name iPod was proposed by Vinnie Chieco, a freelance copywriter, who (with others) was contracted by Apple to determine how to introduce the new player to the public. After Chieco saw a prototype, he was reminded of the phrase "Open the pod bay doors, Hal" from the classic sci-fi film 2001: A Space Odyssey, referring to the white EVA Pods of the Discovery One spaceship. Chieco's proposal drew an analogy between the relationship of the spaceship to the smaller independent pods and that of a personal computer to its companion music player. The product (which Fortune called "Apple's 21st-Century Walkman") was developed in less than one year and unveiled on October 23, 2001. Jobs announced it as a Mac-compatible product with a 5 GB hard drive that put "1,000 songs in your pocket." Apple researched the trademark and found that it was already in use. Joseph N. Grasso of New Jersey had originally listed an "iPod" trademark with the U.S. Patent and Trademark Office (USPTO) in July 2000 for Internet kiosks. The first iPod kiosks had been demonstrated to the public in New Jersey in March 1998, and commercial use began in January 2000, but the venture had apparently been discontinued by 2001. The trademark was registered by the USPTO in November 2003, and Grasso assigned it to Apple Computer, Inc. in 2005. Separately, the earliest recorded use in commerce of an "iPod" trademark was in 1991 by Chrysalis Corp. of Sturgis, Michigan, styled "iPOD", for office furniture. As development of the iPod progressed, Apple continued to refine the software's look and feel, rewriting much of the code. Starting with the iPod Mini, the Chicago font was replaced with Espy Sans. Later iPods switched fonts again to Podium Sans—a font similar to Apple's corporate font, Myriad. Color display iPods then adopted some Mac OS X themes like Aqua progress bars, and brushed metal meant to evoke a combination lock. On January 8, 2004, Hewlett-Packard (HP) announced that they would sell HP-branded iPods under a license agreement from Apple. Several new retail channels were used—including Walmart—and these iPods eventually made up 5% of all iPod sales. In July 2005, HP stopped selling iPods due to unfavorable terms and conditions imposed by Apple. In 2006, Apple partnered with Irish rock band U2 to present a special edition of the 5th-generation iPod. Like its predecessor, this iPod has the signatures of the four members of the band engraved on its back, but this one was the first time the company changed the color of the stainless steel back from a silver chrome to black. This iPod was only available with 30 GB of storage capacity. The special edition entitled purchasers to an exclusive video with 33 minutes of interviews and performance by U2, downloadable from the iTunes Store. 
In 2007, Apple modified the iPod interface again with the introduction of the sixth-generation iPod Classic and third-generation iPod Nano by changing the font to Helvetica and, in most cases, splitting the screen in half, displaying the menus on the left and album artwork, photos, or videos on the right. In mid-2015, several new color schemes for all of the current iPod models were spotted in the iTunes 12.2 update. Belgian website Belgium iPhone originally found the images after plugging in an iPod for the first time, and subsequent photos were discovered by Pierre Dandumont before being leaked. On July 27, 2017, Apple removed the iPod Nano and Shuffle from its stores, marking the end of Apple's production of standalone music players. On May 10, 2022, Apple discontinued the iPod Touch, the last remaining product in the iPod line. iOS 15 was the last iOS release the 7th generation iPod touch received, as future versions from iOS 16 onward no longer support the device. Hardware Audio Audio tests showed that the third-generation iPod has a weak bass response. The combination of the undersized DC-blocking capacitors and the typical low impedance of most consumer headphones form a high-pass filter, which attenuates the low-frequency bass output. Similar capacitors were used in the fourth-generation iPods. The problem is reduced when using high-impedance headphones and is completely masked when driving high-impedance (line level) loads, such as when using an external headphone amplifier. The first-generation iPod Shuffle uses a dual-transistor output stage, rather than a single capacitor-coupled output, and does not exhibit reduced bass response for any load. For all iPods released in 2006 and earlier, some equalizer (EQ) sound settings can easily distort the bass sound, even on undemanding tracks. This occurs when using EQ settings such as R&B, Rock, Acoustic, and Bass Booster, because the equalizer amplifies the digital audio level beyond the software's limit, causing distortion (clipping) on bass instruments. From the fifth-generation iPod on, Apple introduced a user-configurable volume limit in response to concerns about hearing loss. Users report that in the sixth-generation iPod, the maximum volume output level is limited to 100 dB in EU markets. Apple previously had to remove iPods from shelves in France for exceeding this legal limit. However, users who bought new sixth-generation iPods in late 2013 reported a new option that allowed them to disable the EU volume limit. Some have attributed this change to a software update that shipped with these devices. Older sixth-generation iPods, however, are unable to update to this software version. Connectivity Originally, a FireWire connection to the host computer was used to update songs or recharge the battery. The battery could also be charged with a power adapter that was included with the first four generations. The third generation began including a 30-pin dock connector, allowing for FireWire or USB connectivity. This provided better compatibility with non-Apple machines, as most of them did not have FireWire ports at the time. Eventually, Apple began shipping iPods with USB cables instead of FireWire, although the latter was available separately. As of the first-generation iPod Nano and the fifth-generation iPod Classic, Apple discontinued using FireWire for data transfer (while still allowing for use of FireWire to charge the device) in an attempt to reduce cost and form factor. 
As of the second-generation iPod Touch and the fourth-generation iPod Nano, FireWire charging ability has been removed. The second-, third-, and fourth-generation iPod Shuffle uses a single 3.5 mm minijack phone connector which acts as both a headphone jack or a USB data and charging port for the dock/cable. The dock connector also allowed the iPod to connect to accessories, which often supplement the iPod's music, video, and photo playback. Apple sold a few accessories, such as the now-discontinued iPod Hi-Fi, but most are manufactured by third parties such as Belkin and Griffin. Some peripherals use their own interface, while others use the iPod's own screen. Because the dock connector is a proprietary interface, the implementation of the interface requires paying royalties to Apple. Apple introduced a new 8-pin dock connector, named Lightning, on September 12, 2012 with their announcement of the iPhone 5, the fifth-generation iPod Touch, and the seventh-generation iPod Nano, which all feature it. The new connector replaces the older 30-pin dock connector used by older iPods, iPhones, and iPads. Apple Lightning cables have pins on both sides of the plug so it can be inserted with either side facing up. Bluetooth connectivity was added to the last model of the iPod Nano, and Wi-Fi to the iPod Touch. Accessories Many accessories have been made for the iPod line. A large number have been made by third-party companies, although many, such as the iPod Hi-Fi and iPod Socks, have been made by Apple. Some accessories added extra features that other music players have, such as sound recorders, FM radio tuners, wired remote controls, and composite video cables for TV connections. Other accessories offered unique features like the Nike+iPod pedometer and the iPod Camera Connector. Other notable accessories included external speakers, wireless remote controls, protective case, screen films, and wireless earphones. Among the first accessory manufacturers were Griffin Technology, Belkin, JBL, Bose, Monster Cable, and SendStation. BMW released the first iPod automobile interface, allowing drivers of newer BMW vehicles to control an iPod using either the built-in steering wheel controls or the radio head-unit buttons. Apple announced in 2005 that similar systems would be available for other vehicle brands, including Mercedes-Benz, Volvo, Nissan, Toyota, Alfa Romeo, Ferrari, Acura, Audi, Honda, Renault, Infiniti and Volkswagen. Scion offered standard iPod connectivity on all their cars. Some independent stereo manufacturers including JVC, Pioneer, Kenwood, Alpine, Sony, and Harman Kardon also had iPod-specific integration solutions. Alternative connection methods included adapter kits (that use the cassette deck or the CD changer port), audio input jacks, and FM transmitters such as the iTrip—although personal FM transmitters are illegal in some countries. Many car manufacturers have added audio input jacks as standard. Beginning in mid-2007, four major airlines, United, Continental, Delta, and Emirates, reached agreements to install iPod seat connections. The free service allowed passengers to power and charge an iPod, and view video and music libraries on individual seat-back displays. Originally KLM and Air France were reported to be part of the deal with Apple, but they later released statements explaining that they were only contemplating the possibility of incorporating such systems. 
Software The iPod line can play several audio file formats including MP3, AAC/M4A, Protected AAC, AIFF, WAV, Audible audiobook, and Apple Lossless. The iPod Photo introduced the ability to display JPEG, BMP, GIF, TIFF, and PNG image file formats. Fifth- and sixth-generation iPod Classic models, as well as third-generation iPod Nano models, can also play MPEG-4 (H.264/MPEG-4 AVC) and QuickTime video formats, with restrictions on video dimensions, encoding techniques and data rates. Originally, iPod software only worked with Classic Mac OS and macOS; iPod software for Microsoft Windows was launched with the second-generation model. Unlike most other media players, Apple does not support Microsoft's WMA audio format—but a converter for WMA files without digital rights management (DRM) is provided with the Windows version of iTunes. MIDI files also cannot be played, but can be converted to audio files using the "Advanced" menu in iTunes. Alternative open-source audio formats, such as Ogg Vorbis and FLAC, are not supported without installing custom firmware onto an iPod (e.g., Rockbox). During installation, an iPod is associated with one host computer. Each time an iPod connects to its host computer, iTunes can synchronize entire music libraries or music playlists either automatically or manually. Song ratings can be set on an iPod and synchronized later to the iTunes library, and vice versa. A user can access, play, and add music on a second computer if an iPod is set to manual and not automatic sync, but anything added or edited will be reversed upon connecting and syncing with the main computer and its library. If a user wishes to automatically sync music with another computer, an iPod's library will be entirely wiped and replaced with the other computer's library. Interface iPods with color displays use anti-aliased graphics and text, with sliding animations. All iPods (except the 3rd-generation iPod Shuffle, the 6th & 7th generation iPod Nano, and iPod Touch) have five buttons and the later generations have the buttons integrated into the click wheel – an innovation that gives an uncluttered, minimalist interface. The buttons perform basic functions such as menu, play, pause, next track, and previous track. Other operations, such as scrolling through menu items and controlling the volume, are performed by using the click wheel in a rotational manner. The 3rd-generation iPod Shuffle does not have any controls on the actual player; instead, it has a small control on the earphone cable, with volume-up and -down buttons and a single button for play and pause, next track, etc. The iPod Touch has no click-wheel; instead, it uses a touch screen along with a home button, sleep/wake button, and (on the second and third generations of the iPod Touch) volume-up and -down buttons. The user interface for the iPod Touch is identical to that of the iPhone. Differences include the lack of a phone application and the lack of a SIM card to connect to cellular data. Both devices use iOS. iTunes Store The iTunes Store (introduced April 28, 2003) is an online media store run by Apple and accessed through iTunes. The store became the market leader soon after its launch and Apple announced the sale of videos through the store on October 12, 2005. Full-length movies became available on September 12, 2006. At the time the store was introduced, purchased audio files used the AAC format with added encryption, based on the FairPlay DRM system. 
Up to five authorized computers and an unlimited number of iPods could play the files. Burning the files with iTunes as an audio CD, then re-importing would create music files without the DRM. The DRM could also be removed using third-party software. However, in a deal with Apple, EMI began selling DRM-free, higher-quality songs on the iTunes Stores, in a category called "iTunes Plus." While individual songs were made available at a cost of , 30¢ more than the cost of a regular DRM song, entire albums were available for the same price, , as DRM encoded albums. On October 17, 2007, Apple lowered the cost of individual iTunes Plus songs to per song, the same as DRM encoded tracks. On January 6, 2009, Apple announced that DRM has been removed from 80% of the music catalog and that it would be removed from all music by April 2009. iPods cannot play music files from competing music stores that use rival-DRM technologies like Microsoft's protected WMA or RealNetworks' Helix DRM. Example stores include Napster and MSN Music. RealNetworks claims that Apple is creating problems for itself by using FairPlay to lock users into using the iTunes Store. Steve Jobs stated that Apple makes little profit from song sales, although Apple uses the store to promote iPod sales. However, iPods can also play music files from online stores that do not use DRM, such as eMusic or Amie Street. Universal Music Group decided not to renew their contract with the iTunes Store on July 3, 2007. Universal will now supply iTunes in an 'at will' capacity. Apple debuted the iTunes Wi-Fi Music Store on September 5, 2007, in its Media Event entitled "The Beat Goes On...". This service allows users to access the Music Store from either an iPhone or an iPod Touch and download songs directly to the device that can be synced to the user's iTunes Library over a WiFi connection, or, in the case of an iPhone, the cellular network. Games Video games are playable on various versions of iPods. The original iPod had the game Brick (originally invented by Apple's co-founder Steve Wozniak) included as an easter egg hidden feature; later firmware versions added it as a menu option. Later revisions of the iPod added three more games: Parachute, Solitaire, and Music Quiz. In September 2006, the iTunes Store began to offer additional games for purchase with the launch of iTunes 7, compatible with the fifth generation iPod with iPod software 1.2 or later. Those games were: Bejeweled, Cubis 2, Mahjong, Mini Golf, Pac-Man, Tetris, Texas Hold 'Em, Vortex, Asphalt 4: Elite Racing and Zuma. Additional games have since been added. These games work on the 6th and 5th generation iPod Classic and the 5th and 4th generation iPod Nano. With third parties like Namco, Square Enix, Electronic Arts, Sega, and Hudson Soft all making games for the iPod, Apple's MP3 player has taken steps towards entering the video game handheld console market. Even video game magazines like GamePro and EGM have reviewed and rated most of their games as of late. The games are in the form of .ipg files, which are actually .zip archives in disguise. When unzipped, they reveal executable files along with common audio and image files, leading to the possibility of third party games. Apple has not publicly released a software development kit (SDK) for iPod-specific development. Apps produced with the iPhone SDK are compatible only with the iOS on the iPod Touch and iPhone, which cannot run click wheel-based games. 
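Because the .ipg bundles described above are reportedly just renamed .zip archives, their contents can be listed with any standard zip tool. The following is a minimal illustrative sketch in Python; the file name is hypothetical, and the internal layout of real iPod game archives is not documented in this article:

import zipfile

# Hypothetical file name used purely for illustration.
IPG_PATH = "example_game.ipg"

# An .ipg file is described as an ordinary zip archive, so the standard
# zipfile module can open it directly and enumerate its members.
with zipfile.ZipFile(IPG_PATH) as archive:
    for info in archive.infolist():
        print(f"{info.filename}\t{info.file_size} bytes")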
File storage and transfer All iPods except for the iPod Touch can function in "disk mode" as mass storage devices to store data files but this has to be manually activated. If an iPod is formatted on a Mac OS computer, it uses the HFS+ file system format, which allows it to serve as a boot disk for a Mac computer. If it is formatted on Windows, the FAT32 format is used. With the release of the Windows-compatible iPod, the default file system used on the iPod line switched from HFS+ to FAT32, although it can be reformatted to either file system (excluding the iPod Shuffle which is strictly FAT32). Generally, if a new iPod (excluding the iPod Shuffle) is initially plugged into a computer running Windows, it will be formatted with FAT32, and if initially plugged into a Mac running Mac OS it will be formatted with HFS+. Unlike many other MP3 players, simply copying audio or video files to the drive with a typical file management application will not allow an iPod to properly access them. The user must use software that has been specifically designed to transfer media files to iPods so that the files are playable and viewable. Usually iTunes is used to transfer media to an iPod, though several alternative third-party applications are available on a number of different platforms. iTunes 7 and above can transfer purchased media of the iTunes Store from an iPod to a computer, provided that computer containing the DRM protected media is authorized to play it. Media files are stored on an iPod in a hidden folder, along with a proprietary database file. The hidden content can be accessed on the host operating system by enabling hidden files to be shown. The media files can then be recovered manually by copying the files or folders off the iPod. Many third-party applications also allow easy copying of media files from an iPod. Models and features While the suffix "Classic" was not introduced until the sixth generation, it has been applied here retroactively to all non-suffixed iPods for clarity. Patent disputes In 2005, Apple faced two lawsuits claiming patent infringement by the iPod line and its associated technologies: Advanced Audio Devices claimed the iPod line breached its patent on a "music jukebox", while a Hong Kong-based IP portfolio company called Pat-rights filed a suit claiming that Apple's FairPlay technology breached a patent issued to inventor Ho Keung Tse. The latter case also includes the online music stores of Sony, RealNetworks, Napster, and Musicmatch as defendants. Apple's application to the United States Patent and Trademark Office for a patent on "rotational user inputs", as used on the iPod interface, received a third "non-final rejection" (NFR) in August 2005. Also in August 2005, Creative Technology, one of Apple's main rivals in the MP3 player market, announced that it held a patent on part of the music selection interface used by the iPod line, which Creative Technology dubbed the "Zen Patent", granted on August 9, 2005. On May 15, 2006, Creative filed another suit against Apple with the United States District Court for the Northern District of California. Creative also asked the United States International Trade Commission to investigate whether Apple was breaching U.S. trade laws by importing iPods into the United States. On August 24, 2006, Apple and Creative announced a broad settlement to end their legal disputes. Apple will pay Creative US$100 million for a paid-up license, to use Creative's awarded patent in all Apple products. 
As part of the agreement, Apple will recoup part of its payment, if Creative is successful in licensing the patent. Creative then announced its intention to produce iPod accessories by joining the Made for iPod program. Sales Sales of iPods peaked in 2008, following rapid growth in the period of 2005 to 2007. In January 2007, Apple reported record quarterly revenue of US$7.1 billion, of which 48% was made from iPod sales. On April 9, 2007, it was announced that Apple had sold its one-hundred millionth iPod, making it the best-selling digital music player of all time. Its second-quarter revenue of US$5.2 billion, of which 32% was made from iPod sales. Apple and several industry analysts suggest that iPod users are likely to purchase other Apple products such as Mac computers. 42% of Apple's revenue for the First fiscal quarter of 2008 came from iPod sales (followed by 21% from notebook sales and 16% from desktop sales). On October 21, 2008, Apple reported that only 14.21% of total revenue for fiscal quarter 4 of the year 2008 came from iPods. At the September 9, 2009 keynote presentation at the Apple Event, Phil Schiller announced total cumulative sales of iPods exceeded 220 million. The continual decline of iPod sales since 2009 has not been a surprising trend for the Apple corporation, as Apple CFO Peter Oppenheimer explained in June 2009: "We expect our traditional MP3 players to decline over time as we cannibalize ourselves with the iPod Touch and the iPhone." Since 2009, the company's iPod sales have continually decreased every financial quarter and in 2013 a new model was not introduced onto the market. , Apple reported that total number of iPods sold worldwide was 350 million. Market share Since October 2004, the iPod line has dominated digital music player sales in the United States, with over 90% of the market for hard drive-based players and over 70% of the market for all types of players. During the year from January 2004 to January 2005, the high rate of sales caused its U.S. market share to increase from 31% to 65%, and in July 2005, this market share was measured at 74%. In January 2007, according to Bloomberg Online, the iPod market share reached 72.7%. In the Japanese market, iPod market share was 36% in 2005; nonetheless, it was still a market leader in the country. In Europe, Apple also led the market (especially the UK); however, local brands such as Archos managed to outsell Apple in certain categories. One of the reasons for the iPod's early success, having been released three years after the very first digital audio player (namely the MPMan), was its seamless integration with the company's iTunes software, and the ecosystem built around it such as the iTunes Music Store, as well as a competitive price. As a result, Apple achieved a dominance in the MP3 player market as Sony's Walkman did with personal cassette players two decades earlier. The software similarity between computer and player made it easy to transfer music over and synchronize it, tasks that were considered difficult on pre-iPod MP3 players such as those from Rio and Creative. Some of the iPod's chief competitors during its pinnacle include Creative's Zen, SanDisk's Sansa, Sony's Walkman, iriver, and Samsung's Yepp. 
The iPod's dominance was challenged numerous times: in 2004 Sony's first hard disk Walkman was designed to take on the iPod, accompanied by its own music store Sony Connect; Microsoft initially attempted to compete using a software platform called Portable Media Center, and in later years designed the Zune line; the most vocal rival was Creative, whose CEO in November 2004 "declared war" on the iPod. Samsung declared that they would take the top spot from Apple by 2007, while SanDisk ran a specific anti-iPod marketing campaign called iDon't. These competitors failed to make major dents, and Apple remained dominant in the fast-growing digital audio player market during the decade. Mobile phone manufacturers Nokia and Sony Ericsson also made "music phones" to rival iPod. A suggested factor of iPod's popularity has been cited to be Apple's popular iTunes Store catalog, playing a part in keeping Apple firmly market leader, while also helped by the mismanagement of others, such as Sony's unpopular SonicStage software. One notable exception where iPod was not faring well was in South Korea. As of 2005, Apple held a market share of less than 2%, compared to market leaders iriver, Samsung and Cowon. As of 2011, iPod held a 70% market share in global MP3 players. Its closest competitor was noted to be the Sansa line from SanDisk. Industry impact iPods often receive favorable reviews; scoring on looks, clean design, and ease of use. PC World wrote that iPod line has "altered the landscape for portable audio players". The iPod has also been credited with accelerating shifts within the music industry. The iPod's popularization of digital music storage allows users to abandon listening to entire albums and instead be able to choose specific singles which hastened the end of the album era in popular music. Criticism Battery problems The advertised battery life on most models is different from the real-world achievable life. For example, the fifth-generation iPod Classic was advertised as having up to 14 hours of music playback. However, an MP3.com report stated that this was virtually unachievable under real-life usage conditions, with a writer for the site getting, on average, less than 8 hours from an iPod. In 2003, class action lawsuits were brought against Apple complaining that the battery charges lasted for shorter lengths of time than stated and that the battery degraded over time. The lawsuits were settled by offering individuals with first- or second-generation iPods either store credit or a free battery replacement, and offering individuals with third-generation iPods an extended warranty that would allow them to get a replacement iPod if they experienced battery problems. As an instance of planned obsolescence, iPod batteries are not designed to be removed or replaced by the user, although some users have been able to open the case themselves, usually following instructions from third-party vendors of iPod replacement batteries. Compounding the problem, Apple initially would not replace worn-out batteries. The official policy was that the customer should buy a refurbished replacement iPod, at a cost almost equivalent to a brand new one. All lithium-ion batteries lose capacity during their lifetime even when not in use (guidelines are available for prolonging life-span) and this situation led to a market for third-party battery replacement kits. Apple announced a battery replacement program on November 14, 2003, a week before a high publicity stunt and website by the Neistat Brothers. 
The initial cost was , and it was lowered to in 2005. One week later, Apple offered an extended iPod warranty for . For the iPod Nano, soldering tools are needed because the battery is soldered onto the main board. Fifth generation iPods have their battery attached to the backplate with adhesive. The first generation iPod Nano may overheat and pose a health and safety risk. Affected iPod Nanos were sold between September 2005 and December 2006. This is due to a flawed battery used by Apple from a single battery manufacturer. Apple recommended that owners of affected iPod Nanos stop using them. Under an Apple product replacement program, affected Nanos were replaced with current generation Nanos free of charge. Reliability and durability iPods have been criticized for alleged short lifespan and fragile hard drives. A 2005 survey conducted on the MacInTouch website found that the iPod line had an average failure rate of 13.7% (although they note that comments from respondents indicate that "the true iPod failure rate may be lower than it appears"). It concluded that some models were more durable than others. In particular, failure rates for iPods employing hard drives were usually above 20% while those with flash memory had a failure rate below 10%. In late 2005, many users complained that the surface of the first-generation iPod Nano can become scratched easily, rendering the screen unusable. A class-action lawsuit was also filed. Apple initially considered the issue a minor defect, but later began shipping these iPods with protective sleeves. Labor disputes On June 11, 2006, the British tabloid The Mail on Sunday reported that iPods are mainly manufactured by workers who earn no more than US$50 per month and work 15-hour shifts. Apple investigated the case with independent auditors and found that, while some of the plant's labor practices met Apple's Code of Conduct, others did not: employees worked over 60 hours a week for 35% of the time and worked more than six consecutive days for 25% of the time. Foxconn, Apple's manufacturer, initially denied the abuses, but when an auditing team from Apple found that workers had been working longer hours than were allowed under Chinese law, they promised to prevent workers working more hours than the code allowed. Apple hired a workplace standards auditing company, Verité, and joined the Electronic Industry Code of Conduct Implementation Group to oversee the measures. On December 31, 2006, workers at the Foxconn factory in Longhua, Shenzhen formed a union affiliated with the All-China Federation of Trade Unions, the Chinese government-approved union umbrella organization. In 2010, a number of workers committed suicide at a Foxconn operations in China. Apple, HP, and others stated that they were investigating the situation. Foxconn guards have been videotaped beating employees. Another employee killed himself in 2009 when an Apple prototype went missing, and claimed in messages to friends, that he had been beaten and interrogated. As of 2006, the iPod was produced by about 14,000 workers in the U.S. and 27,000 overseas. Further, the salaries attributed to this product were overwhelmingly distributed to highly skilled U.S. professionals, as opposed to lower-skilled U.S. retail employees or overseas manufacturing labor. One interpretation of this result is that U.S. innovation can create more jobs overseas than domestically. Timeline of models
Technology
Specific hardware
null
89997
https://en.wikipedia.org/wiki/Bytecode
Bytecode
Bytecode (also called portable code or p-code) is a form of instruction set designed for efficient execution by a software interpreter. Unlike human-readable source code, bytecodes are compact numeric codes, constants, and references (normally numeric addresses) that encode the result of compiler parsing and performing semantic analysis of things like type, scope, and nesting depths of program objects. The name bytecode stems from instruction sets that have one-byte opcodes followed by optional parameters. Intermediate representations such as bytecode may be output by programming language implementations to ease interpretation, or it may be used to reduce hardware and operating system dependence by allowing the same code to run cross-platform, on different devices. Bytecode may often be either directly executed on a virtual machine (a p-code machine, i.e., interpreter), or it may be further compiled into machine code for better performance. Since bytecode instructions are processed by software, they may be arbitrarily complex, but are nonetheless often akin to traditional hardware instructions: virtual stack machines are the most common, but virtual register machines have been built also. Different parts may often be stored in separate files, similar to object modules, but dynamically loaded during execution. Execution A bytecode program may be executed by parsing and directly executing the instructions, one at a time. This kind of bytecode interpreter is very portable. Some systems, called dynamic translators, or just-in-time (JIT) compilers, translate bytecode into machine code as necessary at runtime. This makes the virtual machine hardware-specific but does not lose the portability of the bytecode. For example, Java and Smalltalk code is typically stored in bytecode format, which is typically then JIT compiled to translate the bytecode to machine code before execution. This introduces a delay before a program is run, when the bytecode is compiled to native machine code, but improves execution speed considerably compared to interpreting source code directly, normally by around an order of magnitude (10x). Because of its performance advantage, today many language implementations execute a program in two phases, first compiling the source code into bytecode, and then passing the bytecode to the virtual machine. There are bytecode based virtual machines of this sort for Java, Raku, Python, PHP, Tcl, mawk and Forth (however, Forth is seldom compiled via bytecodes in this way, and its virtual machine is more generic instead). The implementation of Perl and Ruby 1.8 instead work by walking an abstract syntax tree representation derived from the source code. More recently, the authors of V8 and Dart have challenged the notion that intermediate bytecode is needed for fast and efficient VM implementation. Both of these language implementations currently do direct JIT compiling from source code to machine code with no bytecode intermediary. Examples ActionScript executes in the ActionScript Virtual Machine (AVM), which is part of Flash Player and AIR. ActionScript code is typically transformed into bytecode format by a compiler. Examples of compilers include one built into Adobe Flash Professional and one built into Adobe Flash Builder and available in the Adobe Flex SDK. 
Adobe Flash objects
BANCStar, originally bytecode for an interface-building tool but used also as a language
Berkeley Packet Filter
EBPF
Berkeley Pascal
Byte Code Engineering Library
C to Java virtual machine compilers
CLISP implementation of Common Lisp used to compile only to bytecode for many years; however, now it also supports compiling to native code with the help of GNU lightning
CMUCL and Scieneer Common Lisp implementations of Common Lisp can compile either to native code or to bytecode, which is far more compact
Common Intermediate Language executed by Common Language Runtime, used by .NET languages such as C#
Dalvik bytecode, designed for the Android platform, is executed by the Dalvik virtual machine
Dis bytecode, designed for the Inferno (operating system), is executed by the Dis virtual machine
EiffelStudio for the Eiffel programming language
EM, the Amsterdam Compiler Kit virtual machine used as an intermediate compiling language and as a modern bytecode language
Emacs is a text editor with most of its functions implemented by Emacs Lisp, its built-in dialect of Lisp. These features are compiled into bytecode. This architecture allows users to customize the editor with a high level language, which after compiling into bytecode yields reasonable performance.
Embeddable Common Lisp implementation of Common Lisp can compile to bytecode or C code
Common Lisp provides a disassemble function which prints to the standard output the underlying code of a specified function. The result is implementation-dependent and may or may not resolve to bytecode. Its inspection can be utilized for debugging and optimization purposes. Steel Bank Common Lisp, for instance, produces:
(disassemble '(lambda (x) (print x)))
; disassembly for (LAMBDA (X))
; 2436F6DF: 850500000F22     TEST EAX, [#x220F0000]     ; no-arg-parsing entry point
;       E5: 8BD6             MOV EDX, ESI
;       E7: 8B05A8F63624     MOV EAX, [#x2436F6A8]      ; #<FDEFINITION object for PRINT>
;       ED: B904000000       MOV ECX, 4
;       F2: FF7504           PUSH DWORD PTR [EBP+4]
;       F5: FF6005           JMP DWORD PTR [EAX+5]
;       F8: CC0A             BREAK 10                   ; error trap
;       FA: 02               BYTE #X02
;       FB: 18               BYTE #X18                  ; INVALID-ARG-COUNT-ERROR
;       FC: 4F               BYTE #X4F                  ; ECX
Ericsson implementation of Erlang uses BEAM bytecodes
Ethereum's Virtual Machine (EVM) is the runtime environment, using its own bytecode, for transaction execution in Ethereum (smart contracts).
Icon and Unicon programming languages
Infocom used the Z-machine to make its software applications more portable
Java bytecode, which is executed by the Java virtual machine
ASM
BCEL
Javassist
Keiko bytecode used by the Oberon-2 programming language to make it and the Oberon operating system more portable.
KEYB, the MS-DOS/PC DOS keyboard driver with its resource file KEYBOARD.SYS containing layout information and short p-code sequences executed by an interpreter inside the resident driver.
LLVM IR
LSL, a scripting language used in virtual worlds compiles into bytecode running on a virtual machine. Second Life has the original Mono version, Inworldz developed the Phlox version.
Lua language uses a register-based bytecode virtual machine
m-code of the MATLAB language
Malbolge is an esoteric machine language for a ternary virtual machine. 
Microsoft P-code used in Visual C++ and Visual Basic
Multiplan
O-code of the BCPL programming language
OCaml language optionally compiles to a compact bytecode form
p-code of UCSD Pascal implementation of the Pascal language
Parrot virtual machine
Pick BASIC also referred to as Data BASIC or MultiValue BASIC
The R environment for statistical computing offers a bytecode compiler through the compiler package, now standard with R version 2.13.0. It is possible to compile this version of R so that the base and recommended packages exploit this.
Pyramid 2000 adventure game
Python scripts are being compiled on execution to Python's bytecode language, and the compiled files (.pyc) are cached inside the script's folder. Compiled code can be analysed and investigated using a built-in tool for debugging the low-level bytecode. The tool can be initialized from the shell, for example:
>>> import dis  # "dis" - Disassembler of Python byte code into mnemonics.
>>> dis.dis('print("Hello, World!")')
  1           0 LOAD_NAME                0 (print)
              2 LOAD_CONST               0 ('Hello, World!')
              4 CALL_FUNCTION            1
              6 RETURN_VALUE
Scheme 48 implementation of Scheme using bytecode interpreter
Bytecodes of many implementations of the Smalltalk language
The Spin interpreter built into the Parallax Propeller microcontroller
The SQLite database engine translates SQL statements into a bespoke byte-code format.
Apple SWEET16
Tcl
TIMI is used by compilers on the IBM i platform.
Tiny BASIC
Visual FoxPro compiles to bytecode
WebAssembly
YARV and Rubinius for Ruby
ZCODE
Zend Engine opcodes for PHP
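To make the direct-execution model described in the Execution section concrete, the following is a minimal sketch of a bytecode-style interpreter in Python. The instruction set, opcode names, and example program are invented for illustration and do not correspond to any real virtual machine; a production bytecode would encode instructions as compact numeric codes rather than strings.

# Minimal stack-based "bytecode" interpreter (illustrative only).
def run(program):
    stack = []
    for opcode, operand in program:
        if opcode == "PUSH":        # push a constant onto the stack
            stack.append(operand)
        elif opcode == "ADD":       # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif opcode == "MUL":       # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif opcode == "PRINT":     # pop and print the top of the stack
            print(stack.pop())
        else:
            raise ValueError(f"unknown opcode: {opcode}")

# Computes and prints (2 + 3) * 4.
run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
     ("PUSH", 4), ("MUL", None), ("PRINT", None)])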
Technology
Software development: General
null
90117
https://en.wikipedia.org/wiki/Ring%20Nebula
Ring Nebula
The Ring Nebula (also catalogued as Messier 57, M57 and NGC 6720) is a planetary nebula in the northern constellation of Lyra. Such a nebula is formed when a star, during the last stages of its evolution before becoming a white dwarf, expels a vast luminous envelope of ionized gas into the surrounding interstellar space. History This nebula was discovered by the French astronomer Charles Messier while searching for comets in late January 1779. Messier's report of his independent discovery of Comet Bode reached fellow French astronomer Antoine Darquier de Pellepoix two weeks later, who then independently rediscovered the nebula while following the comet. Darquier later reported that it was "...as large as Jupiter and resembles a planet which is fading" (which may have contributed to the use of the persistent "planetary nebula" terminology). It would be entered into Messier's catalogue as the 57th object. Messier and German-born astronomer William Herschel speculated that the nebula was formed by multiple faint stars that were unresolvable with his telescope. In 1800, German Count Friedrich von Hahn announced that he had discovered the faint central star at the heart of the nebula a few years earlier. He also noted that the interior of the ring had undergone changes, and said he could no longer find the central star. In 1864, English amateur astronomer William Huggins examined the spectra of multiple nebulae, discovering that some of these objects, including M57, displayed the spectra of bright emission lines characteristic of fluorescing glowing gases. Huggins concluded that most planetary nebulae were not composed of unresolved stars, as had been previously suspected, but were nebulosities. The nebula was first photographed by the Hungarian astronomer Eugene von Gothard in 1886. Observation M57 is found south of the bright star Vega, which forms the northwestern vertex of the Summer Triangle asterism. The nebula lies about 40% of the distance from Beta (β) to Gamma (γ) Lyrae, making it an easy target for amateur astronomers to find. The nebula disk has an angular size of , making it too small to be resolved with 10×50 binoculars. It is best observed using a telescope with an aperture of at least , but even a telescope will reveal its elliptical ring shape. Using a UHC or OIII filter greatly enhances visual observation, particularly in light polluted areas. The interior hole can be resolved by a instrument at a magnification of 100×. Larger instruments will show a few darker zones on the eastern and western edges of the ring and some faint nebulosity inside the disk. The central star, at magnitude 14.8, is difficult to spot. Properties M57 is from Earth. It has a visual magnitude of 8.8 and a dimmer photographic magnitude, of 9.7. Photographs taken over a period of 50 years show the rate of nebula expansion is roughly 1 arcsecond per century, which corresponds to spectroscopic observations as 20–. M57 is illuminated by a central white dwarf of 15.75v visual magnitude. All the interior parts of this nebula have a blue-green tinge that is caused by the doubly ionized oxygen emission lines at 495.7 and 500.7 nm. These observed so-called "forbidden lines" occur only in conditions of very low density containing a few atoms per cubic centimeter. In the outer region of the ring, part of the reddish hue is caused by hydrogen emission at 656.3 nm, forming part of the Balmer series of lines. Forbidden lines of ionized nitrogen or N II contribute to the reddishness at 654.8 and 658.3 nm. 
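The quoted angular expansion rate is what makes an expansion-parallax distance estimate possible: the distance is roughly the expansion velocity measured spectroscopically divided by the angular expansion rate. Below is a rough sketch in Python, where the 25 km/s expansion velocity is an assumed illustrative value rather than a figure from this article, and no correction is made for the nebula's true three-dimensional geometry:

# Expansion-parallax estimate: distance ≈ v_expansion / angular_rate.
arcsec_per_century = 1.0      # angular expansion rate quoted above
v_expansion_km_s = 25.0       # assumed illustrative expansion velocity

seconds_per_century = 100 * 365.25 * 24 * 3600
arcsec_per_radian = 206265.0
omega = (arcsec_per_century / arcsec_per_radian) / seconds_per_century  # rad/s

distance_km = v_expansion_km_s / omega
km_per_light_year = 9.4607e12
print(f"estimated distance ≈ {distance_km / km_per_light_year:.0f} light-years")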
Nebula structure M57 is an example of the class of planetary nebulae known as bipolar nebulae, whose thick equatorial rings visibly extend the structure through its main axis of symmetry. It appears to be a prolate spheroid with strong concentrations of material along its equator. From Earth, the symmetrical axis is viewed at about 30°. Overall, the observed nebulosity is currently estimated to have been expanding for approximately 1,610 ± 240 years. Structural studies find this planetary nebula exhibits knots characterized by well-developed symmetry. However, these are only silhouettes visible against the background emission of the nebula's equatorial ring. M57 may include internal N II emission lines located at the knots' tips that face the central star; however, most of these knots are neutral and appear only in extinction lines. Their existence indicates that they are probably located closer to the ionization front than those found in the Lupus planetary nebula IC 4406. Some of the knots do exhibit well-developed tails, which are often detectable in optical thickness from the visual spectrum. Central star The central star was discovered by Hungarian astronomer Jenő Gothard on September 1, 1886, from images taken at his observatory in Herény, near Szombathely. Within the last two thousand years, the central star of the Ring Nebula has left the asymptotic giant branch after exhausting its supply of hydrogen fuel. Thus it no longer produces its energy through nuclear fusion and, in evolutionary terms, it is now becoming a compact white dwarf star. The central star now consists primarily of carbon and oxygen with a thin outer envelope composed of lighter elements. Its mass is about , with a surface temperature of . Currently it is 200 times more luminous than the Sun, but its apparent magnitude is only +15.75.
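As a small worked example of how the quoted apparent magnitude relates to intrinsic brightness, the distance modulus can be applied. The distance used below is an assumed illustrative value, not one stated in this article, and the result is a visual (not bolometric) absolute magnitude:

import math

m_apparent = 15.75       # visual magnitude of the central star quoted above
d_light_years = 2500.0   # assumed illustrative distance
d_parsec = d_light_years / 3.2616

# Distance modulus: M = m - 5 * log10(d / 10 pc)
M_visual = m_apparent - 5 * math.log10(d_parsec / 10)
print(f"visual absolute magnitude ≈ {M_visual:.1f}")

# The star looks unremarkable in visual light, but its very high surface
# temperature pushes most of its output into the ultraviolet, which is why its
# total (bolometric) luminosity can still greatly exceed the Sun's.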
Physical sciences
Notable nebulae
null
90446
https://en.wikipedia.org/wiki/Equality%20%28mathematics%29
Equality (mathematics)
In mathematics, equality is a relationship between two quantities or expressions, stating that they have the same value, or represent the same mathematical object. Equality between and is written , and pronounced " equals ". In this equality, and are distinguished by calling them left-hand side (LHS), and right-hand side (RHS). Two objects that are not equal are said to be distinct. Equality is often considered a kind of primitive notion, meaning, it's not formally defined, but rather informally said to be "a relation each thing bears to itself and nothing else". This characterization is notably circular ("nothing else"). This makes equality a somewhat slippery idea to pin down. Basic properties about equality like reflexivity, symmetry, and transitivity have been understood intuitively since at least the ancient Greeks, but weren't symbolically stated as general properties of relations until the late 19th century by Giuseppe Peano. Other properties like substitution and function application weren't formally stated until the development of symbolic logic. There are generally two ways that equality is formalized in mathematics: through logic or through set theory. In logic, equality is a primitive predicate (a statement that may have free variables) with the reflexive property (called the Law of identity), and the substitution property. From those, one can derive the rest of the properties usually needed for equality. Logic also defines the principle of extensionality, which defines two objects of a certain kind to be equal if they satisfy the same external property (See the example of sets below). After the foundational crisis in mathematics at the turn of the 20th century, set theory (specifically Zermelo–Fraenkel set theory) became the most common foundation of mathematics in order to resolve the crisis. In set theory, any two sets are defined to be equal if they have all the same members. This is called the Axiom of extensionality. Usually set theory is defined within logic, and therefore uses the equality described above, however, if a logic system does not have equality, it is possible to define equality within set theory. Etymology The word equal is derived from the Latin ('like', 'comparable', 'similar'), which itself stems from ('level', 'just'). The word entered Middle English around the 14th century, borrowed from Old French (modern ). The equals sign , now universally accepted in mathematics for equality, was first recorded by Welsh mathematician Robert Recorde in The Whetstone of Witte (1557). The original form of the symbol was much wider than the present form. In his book, Recorde explains his design of the "Gemowe lines", from the Latin ('twin'), using two parallel lines to represent equality because he believed that "no two things could be more equal." Later, a vertical version was also used by some but never overtook Recorde's version. It was common into the 18th century to use an abbreviation of the word equals as the symbol for equality; examples included and , from the Latin . Diophantus's use of , short for ( 'equals'), in Arithmetica () is considered one of the first uses of an equals sign. Basic properties Reflexivity For every , one has . Symmetry For every and , if , then . Transitivity For every , , and , if and , then . Substitution Informally, this just means that if , then can replace in any mathematical expression or formula without changing its meaning. 
(For a formal explanation, see ) For example: Operation application For every and , with some operation , if , then . For example: The first three properties are generally attributed to Giuseppe Peano for being the first to explicitly state these as fundamental properties of equality in his (1889). However, the basic notions have always existed; for example, in Euclid's Elements (), he includes 'common notions': "Things that are equal to the same thing are also equal to one another" (transitivity), "Things that coincide with one another are equal to one another" (reflexivity), along with some operation-application properties for addition and subtraction. The operation-application property was also stated in Peano's ; however, it had been common practice in algebra since at least Diophantus (). The substitution property is generally attributed to Gottfried Leibniz (). Equations An equation is a symbolic equality of two mathematical expressions connected with an equals sign (=). Equation solving is the problem of finding values of some variable, called the unknown, for which the specified equality is true. Each value of the unknown for which the equation holds is called a solution of the given equation; such a value is also said to satisfy the equation. For example, the equation has the values and as its only solutions. The terminology is used similarly for equations with several unknowns. In mathematical logic and computer science, an equation may be described as a binary formula or Boolean-valued expression, which may be true for some values of the variables (if any) and false for other values. More specifically, an equation represents a binary relation (i.e., a two-argument predicate) which may produce a truth value (true or false) from its arguments. In computer programming, the computation from the two expressions is known as comparison. An equation can be used to define a set, called its solution set. For example, the set of all solution pairs of the equation forms the unit circle in analytic geometry; therefore, this equation is called the equation of the unit circle. Identities An identity is an equality that is true for all values of its variables in a given domain. An "equation" may sometimes mean an identity, but more often than not, it specifies a subset of the variable space to be the subset where the equation is true. An example is , which is true for each real number . There is no standard notation that distinguishes an equation from an identity, or other use of the equality relation: one has to guess an appropriate interpretation from the semantics of expressions and the context. Sometimes, but not always, an identity is written with a triple bar: Definitions Equations are often used to introduce new terms or symbols for constants, assert equalities, and introduce shorthand for complex expressions, which is called "equal by definition", and often denoted with (). It is similar to the concept of assignment of a variable in computer science. For example, defines Euler's number, and is the defining property of the imaginary number . In mathematical logic, this is called an extension by definition (by equality) which is a conservative extension to a formal system. This is done by taking the equation defining the new constant symbol as a new axiom of the theory. The first recorded symbolic use of "Equal by definition" appeared in Logica Matematica (1894) by Cesare Burali-Forti, an Italian mathematician. Burali-Forti, in his book, used the notation (). 
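Before turning to the logical treatment below, the basic properties introduced above can be restated compactly in symbols; this is a standard formulation using generic variables a, b, c, an arbitrary formula φ, and an arbitrary operation f, not notation taken from any particular source:

\begin{align*}
  &\text{Reflexivity:}            && \forall a,\; a = a \\
  &\text{Symmetry:}               && \forall a, b,\; a = b \implies b = a \\
  &\text{Transitivity:}           && \forall a, b, c,\; (a = b \land b = c) \implies a = c \\
  &\text{Substitution:}           && a = b \implies \bigl(\varphi(a) \implies \varphi(b)\bigr) \\
  &\text{Operation application:}  && a = b \implies f(a) = f(b)
\end{align*}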
In logic History Equality (or identity) is often considered a primitive notion, informally said to be "a relation each thing bears to itself and to no other thing". This tradition can be traced back to Aristotle around 350 BC: in his Categories, he defines the notion of quantity in terms of a more primitive equality, stating: "The most distinctive mark of quantity is that equality and inequality are predicated of it. Each of the aforesaid quantities is said to be equal or unequal. For instance, one solid is said to be equal or unequal to another; number, too, and time can have these terms applied to them, indeed can all those kinds of quantity that have been mentioned. That which is not a quantity can by no means, it would seem, be termed equal or unequal to anything else. One particular disposition or one particular quality, such as whiteness, is by no means compared with another in terms of equality and inequality but rather in terms of similarity. Thus it is the distinctive mark of quantity that it can be called equal and unequal." - (Translated by E. M. Edghill) The characterization as "a relation each thing bears to itself and to no other thing" is often criticized as circular ("no other thing"), and around the 17th century, with the growth of modern logic, it became necessary to have a more concrete description of equality. In foundations of mathematics, especially mathematical logic and analytic philosophy, equality is often axiomatized through the law of identity and the substitution property. The precursor to the substitution property of equality was first formulated by Gottfried Leibniz in his Discourse on Metaphysics (1686), stating, roughly, that "No two distinct things can have all properties in common." This has since been broken into two principles, the substitution property (if a = b, then any property of a is a property of b), and its converse, the identity of indiscernibles (if a and b have all properties in common, then a = b). Its introduction to logic, and its first symbolic formulation, are due to Bertrand Russell and Alfred Whitehead in their Principia Mathematica (1910), who credit Leibniz for the idea. Axioms Law of identity: Stating that each thing is identical with itself, without restriction. That is, for every a, a = a. It is the first of the traditional three laws of thought. Stated symbolically as: ∀a (a = a). Substitution property: Sometimes referred to as Leibniz's law, generally states that if two things are equal, then any property of one must be a property of the other. It can be stated formally as: for every a and b, and any formula φ(x) (with a free variable x), if a = b, then φ(a) implies φ(b). Stated symbolically as: ∀a ∀b (a = b ⇒ (φ(a) ⇒ φ(b))). Function application is also sometimes included in the axioms of equality, but is not necessary as it can be deduced from the other two axioms, and similarly for symmetry and transitivity. (See ) In first-order logic, these are axiom schemas, each of which specifies an infinite set of axioms. If a theory has a predicate that satisfies the Law of Identity and Substitution property, it is common to say that it "has equality," or is "a theory with equality." The use of "equality" here is a misnomer in that an arbitrary binary predicate that satisfies those properties may not be true equality, and there is no property or list of properties one could add to correct for this. If, however, one is given that a predicate is true equality, then those properties are enough, since if b has all the same properties as a, and a has the property of being equal to a, then b has the property of being equal to a. 
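For instance, symmetry already follows from these two axioms; a standard derivation, restated here for illustration, runs as follows:

\begin{align*}
  &\text{Assume } a = b \text{ and take } \varphi(x) :\equiv (x = a).\\
  &\text{Substitution gives } a = b \implies \bigl(\varphi(a) \implies \varphi(b)\bigr),
   \text{ i.e. } a = b \implies \bigl(a = a \implies b = a\bigr).\\
  &\text{By the law of identity } a = a \text{ holds, so } b = a.
\end{align*}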
Objections As mentioned above, these axioms do not explicitly define equality, in the sense that they do not determine whether two objects are equal; they only state that if two objects are equal, then they have the same properties. If these axioms were to give a complete axiomatization of equality, meaning, if they were to define equality, then the converse of the second statement would have to be true. This is because any reflexive relation satisfying the substitution property within a given theory would be considered an "equality" for that theory. The converse of the Substitution property is the identity of indiscernibles, which states that two distinct things cannot have all their properties in common. Stated symbolically as: ∀a ∀b ((∀φ (φ(a) ⇔ φ(b))) ⇒ a = b). In mathematics, the identity of indiscernibles is usually rejected since indiscernibles in mathematical logic are not necessarily forbidden. Outside of pure math, the identity of indiscernibles has attracted much controversy and criticism, especially from corpuscular philosophy and quantum mechanics. Derivations of basic properties Reflexivity of Equality: Given some set S with a relation R induced by equality (x R y ⇔ x = y), assume a ∈ S. Then a = a by the Law of identity, thus a R a. Symmetry of Equality: Given some set S with a relation R induced by equality, assume there are elements a, b ∈ S such that a R b. Then, take the formula φ(x): x R a. So we have (a R b) ⇒ (a R a ⇒ b R a). Since a R b by assumption, and a R a by Reflexivity, we have that b R a. Transitivity of Equality: Given some set S with a relation R induced by equality, assume there are elements a, b, c ∈ S such that a R b and b R c. Then take the formula φ(x): x R c. So we have (b R a) ⇒ (b R c ⇒ a R c). Since b R a by symmetry, and b R c by assumption, we have that a R c. Function application: Given some function f(x), assume there are elements a and b from its domain such that a = b, then take the formula φ(x): f(a) = f(x). So we have (a = b) ⇒ (f(a) = f(a) ⇒ f(a) = f(b)). Since a = b by assumption, and f(a) = f(a) by reflexivity, we have that f(a) = f(b). In set theory Set theory is the branch of mathematics that studies sets, which can be informally described as "collections of objects." Although objects of any kind can be collected into a set, set theory – as a branch of mathematics – is mostly concerned with those that are relevant to mathematics as a whole. Sets are uniquely characterized by their elements; this means that two sets that have precisely the same elements are equal (they are the same set). In a formalized set theory, this is usually defined by an axiom called the Axiom of extensionality. For example, using set-builder notation, {x ∈ ℤ : 0 < x ≤ 3} = {1, 2, 3}, which states that "The set of all integers greater than 0 but not more than 3 is equal to the set containing only 1, 2, and 3", despite the differences in notation. José Ferreirós credits Richard Dedekind for being the first to explicitly state the principle (although he does not assert it as a definition): "It very frequently happens that different things a, b, c ... considered for any reason under a common point of view, are collected together in the mind, and one then says that they form a system S; one calls the things a, b, c ... the elements of the system S, they are contained in S; conversely, S consists of these elements. Such a system S (or a collection, a manifold, a totality), as an object of our thought, is likewise a thing; it is completely determined when, for every thing, it is determined whether it is an element of S or not." - Richard Dedekind, 1888 (Translated by José Ferreirós) Background Around the turn of the 20th century, mathematics faced several paradoxes and counter-intuitive results. 
For example, Russell's paradox showed a contradiction in naive set theory, it was shown that the parallel postulate cannot be proved, that there exist mathematical objects which cannot be computed or explicitly described, and that there exist theorems of arithmetic which cannot be proved with Peano arithmetic. The result was a foundational crisis of mathematics. The resolution of this crisis involved the rise of a new mathematical discipline called mathematical logic, which studies formal logic within mathematics. Subsequent discoveries in the 20th century then stabilized the foundations of mathematics into a coherent framework valid for all mathematics. This framework is based on a systematic use of the axiomatic method and on set theory, specifically Zermelo–Fraenkel set theory, developed by Ernst Zermelo and Abraham Fraenkel. This set theory (and set theory in general) is now considered the most common foundation of mathematics. Extensionality The term extensionality, as used in "Axiom of Extensionality", has its roots in logic. An intensional definition describes the necessary and sufficient conditions for a term to apply to an object. For example: "An even number is an integer which is divisible by 2." An extensional definition instead lists all objects to which the term applies. For example: "An even number is any one of the following integers: 0, 2, 4, 6, 8..., -2, -4, -8..." In logic, the extension of a predicate is the set of all things for which the predicate is true. The logical term was introduced to set theory in 1893, when Gottlob Frege attempted to use this idea of an extension formally in his Basic Laws of Arithmetic, where, if F is a predicate, its extension, ext(F), is the set of all objects satisfying F. For example, if F is "x is even", then ext(F) is the set of all even numbers. In his work, he defined his infamous Basic Law V as: ext(F) = ext(G) ↔ ∀x (F(x) ↔ G(x)), stating that if two predicates have the same extensions (they are satisfied by the same set of objects) then they are logically equivalent; however, it was determined later that this axiom led to Russell's paradox. The first explicit statement of the modern Axiom of Extensionality was in 1908 by Ernst Zermelo in a paper on the well-ordering theorem, where he presented the first axiomatic set theory, now called Zermelo set theory, which became the basis of modern set theories. The specific term for "Extensionality" used by Zermelo was "Bestimmtheit". The specific English term "extensionality" only became common in mathematical and logical texts in the 1920s and 1930s, particularly with the formalization of logic and set theory by figures like Alfred Tarski and John von Neumann. Set equality based on first-order logic with equality In first-order logic with equality (see the axioms above), the axiom of extensionality states that two sets that contain the same elements are the same set. Logic axiom: x = y → ∀z (z ∈ x ↔ z ∈ y). Logic axiom: x = y → ∀z (x ∈ z ↔ y ∈ z). Set theory axiom: ∀z (z ∈ x ↔ z ∈ y) → x = y. The first two are given by the substitution property of equality from first-order logic; the last is a new axiom of the theory. Incorporating half of the work into the first-order logic may be regarded as a mere matter of convenience, as noted by Azriel Lévy. "The reason why we take up first-order predicate calculus with equality is a matter of convenience; by this, we save the labor of defining equality and proving all its properties; this burden is now assumed by the logic." Set equality based on first-order logic without equality In first-order logic without equality, two sets are defined to be equal if they contain the same elements.
Then the axiom of extensionality states that two equal sets are contained in the same sets. Set theory definition: x = y means ∀z (z ∈ x ↔ z ∈ y). Set theory axiom: x = y → ∀z (x ∈ z ↔ y ∈ z). Or, equivalently, one may choose to define equality in a way that mimics the substitution property explicitly, as the conjunction of all atomic formulas: Set theory definition: x = y means ∀z (z ∈ x ↔ z ∈ y) ∧ ∀z (x ∈ z ↔ y ∈ z). Set theory axiom: ∀z (z ∈ x ↔ z ∈ y) → x = y. In either case, the Axiom of Extensionality based on first-order logic without equality states: ∀z (z ∈ x ↔ z ∈ y) → ∀z (x ∈ z ↔ y ∈ z). Proof of basic properties Reflexivity: Given a set A, assume z ∈ A; it follows trivially that z ∈ A, and the same follows in reverse, therefore ∀z (z ∈ A ↔ z ∈ A), thus A = A. Symmetry: Given sets A, B such that A = B, then ∀z (z ∈ A ↔ z ∈ B), which implies ∀z (z ∈ B ↔ z ∈ A), thus B = A. Transitivity: Given sets A, B, C such that (1) A = B and (2) B = C, assume z ∈ A, then z ∈ B by (1), which implies z ∈ C by (2), and similarly for the reverse, therefore ∀z (z ∈ A ↔ z ∈ C), thus A = C. Similar relations Approximate equality Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis. Calculations are likely to involve rounding errors and other approximation errors. Log tables, slide rules, and calculators produce approximate answers to all but the simplest calculations. The results of computer calculations are normally an approximation, expressed in a limited number of significant digits, although they can be programmed to produce more precise results. If viewed as a binary relation (denoted by the symbol ≈) between real numbers or other things, approximate equality, even if precisely defined, is not an equivalence relation, since it is not transitive, even if modeled as a fuzzy relation. In computer science, equality is given by some relational operator. Real numbers are often approximated by floating-point numbers (a sequence of some fixed number of digits of a given base, scaled by an integer exponent of that base), thus it is common to store an expression that denotes the real number so as not to lose precision. However, the equality of two real numbers given by an expression is known to be undecidable (specifically, real numbers defined by expressions involving the integers, the basic arithmetic operations, the logarithm and the exponential function). In other words, there cannot exist any algorithm for deciding such an equality (see Richardson's theorem). A questionable equality under test may be denoted using the ≟ symbol. Equivalence relation An equivalence relation is a mathematical relation that generalizes the idea of similarity or sameness. It is defined on a set X as a binary relation ~ that satisfies the three properties: reflexivity, symmetry, and transitivity. Reflexivity means that every element in X is equivalent to itself (a ~ a for all a in X). Symmetry requires that if one element is equivalent to another, the reverse also holds (a ~ b implies b ~ a). Transitivity ensures that if one element is equivalent to a second, and the second to a third, then the first is equivalent to the third (a ~ b and b ~ c imply a ~ c). These properties are enough to partition a set into disjoint equivalence classes. Conversely, every partition defines an equivalence relation. The equivalence relation of equality is a special case, as, if restricted to a given set S, it is the strictest possible equivalence relation on S; specifically, equality partitions the set into equivalence classes consisting of all singleton sets. Other equivalence relations, while less restrictive, often generalize equality by identifying elements based on shared properties or transformations, such as congruence in modular arithmetic or similarity in geometry.
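The failure of transitivity for approximate equality, and the contrast with exact floating-point comparison, can be seen in a few lines of Python (a minimal sketch added for illustration; the tolerance and the sample values are arbitrary choices, not taken from the text):

import math

# Exact floating-point equality is brittle: 0.1 + 0.2 is not stored exactly.
print(0.1 + 0.2 == 0.3)                    # False
print(math.isclose(0.1 + 0.2, 0.3))        # True, within the default tolerance

# Approximate equality is not transitive: a ~ b and b ~ c, yet not a ~ c.
a, b, c = 1.0, 1.0 + 8e-10, 1.0 + 1.6e-9
tol = 1e-9
print(math.isclose(a, b, rel_tol=tol))     # True
print(math.isclose(b, c, rel_tol=tol))     # True
print(math.isclose(a, c, rel_tol=tol))     # False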
Congruence relation In abstract algebra, a congruence relation extends the idea of an equivalence relation to include the operation-application property. That is, given a set X, and a set of operations on X, then a congruence relation ~ has the property that a ~ b implies f(a) ~ f(b) for all operations f (here, written as unary to avoid cumbersome notation, but f may be of any arity). A congruence relation on an algebraic structure such as a group, ring, or module is an equivalence relation that respects the operations defined on that structure. Isomorphism In mathematics, especially in abstract algebra and category theory, it is common to deal with objects that already have some internal structure. An isomorphism describes a kind of structure-preserving correspondence between two objects, establishing them as essentially identical in their structure or properties. More formally, an isomorphism is a bijective mapping (or morphism) f between two sets or structures A and B such that f and its inverse preserve the operations, relations, or functions defined on those structures. This means that any operation or relation valid in A corresponds precisely to the operation or relation in B under the mapping. For example, in group theory, a group isomorphism f satisfies f(a ∗ b) = f(a) ∗ f(b) for all elements a and b, where ∗ denotes the group operation. When two objects or systems are isomorphic, they are considered indistinguishable in terms of their internal structure, even though their elements or representations may differ. For instance, all cyclic groups of infinite order are isomorphic to the integers, ℤ, with addition. Similarly, in linear algebra, two vector spaces are isomorphic if they have the same dimension, as there exists a linear bijection between their elements. The concept of isomorphism extends to numerous branches of mathematics, including graph theory (graph isomorphism), topology (homeomorphism), and algebra (group and ring isomorphisms), among others. Isomorphisms facilitate the classification of mathematical entities and enable the transfer of results and techniques between similar systems. Bridging the gap between isomorphism and equality was one motivation for the development of category theory, as well as for homotopy type theory and univalent foundations.
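A small, checkable instance of the group-isomorphism property described above can be written in Python (an illustrative sketch; the choice of the cyclic group of order 6 and the product of the cyclic groups of orders 2 and 3 is an assumption made for the example, not something stated in the text):

# The map k -> (k mod 2, k mod 3) from Z_6 (addition mod 6) to
# Z_2 x Z_3 (componentwise modular addition) is a group isomorphism.
def f(k):
    return (k % 2, k % 3)

Z6 = range(6)

# Bijective: the six images are pairwise distinct.
assert len({f(k) for k in Z6}) == 6

# Structure-preserving: f(a + b) equals the componentwise sum of f(a) and f(b).
assert all(
    f((a + b) % 6) == ((f(a)[0] + f(b)[0]) % 2, (f(a)[1] + f(b)[1]) % 3)
    for a in Z6 for b in Z6
)
print("k -> (k mod 2, k mod 3) is an isomorphism from Z_6 to Z_2 x Z_3")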
Mathematics
Other algebra topics
null
90655
https://en.wikipedia.org/wiki/Throat
Throat
In vertebrate anatomy, the throat is the front part of the neck, internally positioned in front of the vertebrae. It contains the pharynx and larynx. An important section of it is the epiglottis, separating the esophagus from the trachea (windpipe) and preventing food and drink from being inhaled into the lungs. The throat contains various blood vessels, pharyngeal muscles, the nasopharyngeal tonsil, the tonsils, the palatine uvula, the trachea, the esophagus, and the vocal cords. Mammal throats consist of two bones, the hyoid bone and the clavicle. The "throat" is sometimes thought to be synonymous with the fauces. It works with the mouth, ears and nose, as well as a number of other parts of the body. Its pharynx is connected to the mouth, allowing speech to occur, and food and liquid to pass down the throat. It is joined to the nose by the nasopharynx at the top of the throat, and to the ear by its Eustachian tube. The throat's trachea carries inhaled air to the bronchi of the lungs. The esophagus carries food through the throat to the stomach. Adenoids and tonsils help prevent infection and are composed of lymph tissue. The larynx contains the vocal cords, the epiglottis (preventing food/liquid inhalation), and an area known as the subglottic larynx, which in children is the narrowest section of the upper part of the throat. The jugulum is a low part of the throat, located slightly above the breast. The term is reflected in both the internal and external jugular veins, which pass through the jugulum.
Biology and health sciences
Gastrointestinal tract
Biology
90815
https://en.wikipedia.org/wiki/Military%20technology
Military technology
Military technology is the application of technology for use in warfare. It comprises the kinds of technology that are distinctly military in nature and not civilian in application, usually because they lack useful or legal civilian applications, or are dangerous to use without appropriate military training. The line is porous; military inventions have been brought into civilian use throughout history, with sometimes minor modification if any, and civilian innovations have similarly been put to military use. Military technology is usually researched and developed by scientists and engineers specifically for use in battle by the armed forces. Many new technologies came as a result of the military funding of science. On the other hand, the theories, strategies, concepts and doctrines of warfare are studied under the academic discipline of military science. Armament engineering is the design, development, testing and lifecycle management of military weapons and systems. It draws on the knowledge of several traditional engineering disciplines, including mechanical engineering, electrical engineering, mechatronics, electro-optics, aerospace engineering, materials engineering, and chemical engineering. History This section is divided into the broad cultural developments that affected military technology. Ancient technology The first use of stone tools may have begun during the Paleolithic Period. The earliest stone tools are from the site of Lomekwi, Turkana, dating from 3.3 million years ago. Stone tools diversified through the Pleistocene Period, which ended ~12,000 years ago. The earliest evidence of warfare between two groups is recorded at the site of Nataruk in Turkana, Kenya, where human skeletons with major traumatic injuries to the head, neck, ribs, knees and hands, including an embedded obsidian bladelet on a skull, are evidence of inter-group conflict between groups of nomadic hunter-gatherers 10,000 years ago. Humans entered the Bronze Age as they learned to smelt copper into an alloy with tin to make weapons. In Asia where copper-tin ores are rare, this development was delayed until trading in bronze began in the third millennium BCE. In the Middle East and Southern European regions, the Bronze Age follows the Neolithic period, but in other parts of the world, the Copper Age is a transition from Neolithic to the Bronze Age. Although the Iron Age generally follows the Bronze Age, in some areas the Iron Age intrudes directly on the Neolithic from outside the region, with the exception of Sub-Saharan Africa where it was developed independently. The first large-scale use of iron weapons began in Asia Minor around the 14th century BCE and in Central Europe around the 11th century BCE followed by the Middle East (about 1000 BCE) and India and China. The Assyrians are credited with the introduction of horse cavalry in warfare and the extensive use of iron weapons by 1100 BCE. Assyrians were also the first to use iron-tipped arrows. Post-classical technology The Wujing Zongyao (Essentials of the Military Arts), written by Zeng Gongliang, Ding Du, and others at the order of Emperor Renzong around 1043 during the Song dynasty illustrate the eras focus on advancing intellectual issues and military technology due to the significance of warfare between the Song and the Liao, Jin, and Yuan to their north. The book covers topics of military strategy, training, and the production and employment of advanced weaponry. 
Advances in military technology aided the Song dynasty in its defense against hostile neighbors to the north. The flamethrower found its origins in Byzantine-era Greece, employing Greek fire (a chemically complex, highly flammable petrol fluid) in a device with a siphon hose by the 7th century. The earliest reference to Greek Fire in China was made in 917, written by Wu Renchen in his Spring and Autumn Annals of the Ten Kingdoms. In 919, the siphon projector-pump was used to spread the 'fierce fire oil' that could not be doused with water, as recorded by Lin Yu in his , hence the first credible Chinese reference to the flamethrower employing the chemical solution of Greek fire (see also Pen Huo Qi). Lin Yu mentioned also that the 'fierce fire oil' derived ultimately from one of China's maritime contacts in the 'southern seas', Arabia . In the Battle of Langshan Jiang in 919, the naval fleet of the Wenmu King from Wuyue defeated a Huainan army from the Wu state; Wenmu's success was facilitated by the use of 'fire oil' ('huoyou') to burn their fleet, signifying the first Chinese use of gunpowder in a battle. The Chinese applied the use of double-piston bellows to pump petrol out of a single cylinder (with an upstroke and downstroke), lit at the end by a slow-burning gunpowder match to fire a continuous stream of flame. This device was featured in description and illustration of the Wujing Zongyao military manuscript of 1044. In the suppression of the Southern Tang state by 976, early Song naval forces confronted them on the Yangtze River in 975. Southern Tang forces attempted to use flamethrowers against the Song navy, but were accidentally consumed by their own fire when violent winds swept in their direction. Although the destructive effects of gunpowder were described in the earlier Tang dynasty by a Daoist alchemist, the earliest developments of the gun barrel and the projectile-fire cannon were found in late Song China. The first art depiction of the Chinese 'fire lance' (a combination of a temporary-fire flamethrower and gun) was from a Buddhist mural painting of Dunhuang, dated circa 950. These 'fire-lances' were widespread in use by the early 12th century, featuring hollowed bamboo poles as tubes to fire sand particles (to blind and choke), lead pellets, bits of sharp metal and pottery shards, and finally large gunpowder-propelled arrows and rocket weaponry. Eventually, perishable bamboo was replaced with hollow tubes of cast iron, and so too did the terminology of this new weapon change, from 'fire-spear' to 'fire-tube' . This ancestor to the gun was complemented by the ancestor to the cannon, what the Chinese referred to since the 13th century as the 'multiple bullets magazine erupter' , a tube of bronze or cast iron that was filled with about 100 lead balls. The earliest known depiction of a gun is a sculpture from a cave in Sichuan, dating to 1128, that portrays a figure carrying a vase-shaped bombard, firing flames and a cannonball. However, the oldest existent archaeological discovery of a metal barrel handgun is from the Chinese Heilongjiang excavation, dated to 1288. The Chinese also discovered the explosive potential of packing hollowed cannonball shells with gunpowder. Written later by Jiao Yu in his Huolongjing (mid-14th century), this manuscript recorded an earlier Song-era cast-iron cannon known as the 'flying-cloud thunderclap eruptor' (fei yun pi-li pao). 
The manuscript stated that: As noted before, the change in terminology for these new weapons during the Song period was gradual. The early Song cannons were at first termed the same way as the Chinese trebuchet catapult. A later Ming dynasty scholar known as Mao Yuanyi would explain this use of terminology and the true origins of the cannon in his text of the Wubei Zhi, written in 1628: The 14th-century Huolongjing was also one of the first Chinese texts to carefully describe the use of explosive land mines, which had been used by the late Song Chinese against the Mongols in 1277, and employed by the Yuan dynasty afterwards. The innovation of the detonated land mine was credited to one Luo Qianxia in the campaign of defense against the Mongol invasion by Kublai Khan. Later Chinese texts revealed that the Chinese land mine employed either a rip cord or a motion booby trap of a pin releasing falling weights that rotated a steel flint wheel, which in turn created sparks that ignited the train of fuses for the land mines. Furthermore, the Song employed the earliest known gunpowder-propelled rockets in warfare during the late 13th century, its earliest form being the archaic Fire Arrow. When the Northern Song capital of Kaifeng fell to the Jurchens in 1126, it was written by Xia Shaozeng that 20,000 fire arrows were handed over to the Jurchens in their conquest. An even earlier Chinese text, the Wujing Zongyao ("Collection of the Most Important Military Techniques"), written in 1044 by the Song scholars Zeng Gongliang and Yang Weide, described the use of three-spring or triple-bow arcuballista that fired arrow bolts holding gunpowder packets near the head of the arrow. Going back yet even farther, the (1630, second edition 1664) of Fang Yizhi stated that fire arrows were presented to Emperor Taizu of Song (r. 960–976) in 960. Modern technology Armies The Islamic gunpowder empires introduced numerous developed firearms, cannon and small arms. During the period of proto-industrialization, newly invented weapons were seen to be used in Mughal India. Rapid development in military technology had a dramatic impact on armies and navies in the industrialized world in 1740–1914. For land warfare, cavalry faded in importance, while infantry became transformed by the use of highly accurate, more rapidly loading rifles, and the use of smokeless powder. Machine guns were developed in the 1860s in Europe. Rocket artillery and the Mysorean rockets were pioneered by the Indian Muslim ruler Tipu Sultan, and the French introduced much more accurate rapid-fire field artillery. Logistics and communications support for land warfare dramatically improved with the use of railways and telegraphs. Industrialization provided a base of factories that could be converted to produce munitions, as well as uniforms, tents, wagons and essential supplies. Medical facilities were enlarged and reorganized based on improved hospitals and the creation of modern nursing, typified by Florence Nightingale in Britain during the Crimean War of 1854–56. Naval Naval warfare was transformed by many innovations, most notably the coal-based steam engine, highly accurate long-range naval guns, heavy steel armour for battleships, mines, and the introduction of the torpedo, followed by the torpedo boat and the destroyer. Coal after 1900 was eventually displaced by more efficient oil, but meanwhile navies with an international scope had to depend on a network of coaling stations to refuel.
The British Empire provided them in abundance, as did the French Empire to a lesser extent. War colleges developed, as military theory became a specialty; cadets and senior commanders were taught the theories of Jomini, Clausewitz and Mahan, and engaged in tabletop war games. Around 1900, entirely new innovations such as submarines and airplanes appeared, and were quickly adapted to warfare by 1914. The British HMS Dreadnought (1906) incorporated so much of the latest technology in weapons, propulsion and armour that it at a stroke made all other battleships obsolescent. Organization and finance New financial tools were developed to fund the rapidly increasing costs of warfare, such as popular bond sales and income taxes, and the funding of permanent research centers. Many 19th-century innovations were largely invented and promoted by lone individuals with small teams of assistants, such as David Bushnell and the submarine, John Ericsson and the battleship, Hiram Maxim and the machine gun, and Alfred Nobel and high explosives. By 1900 the military began to realize that they needed to rely much more heavily on large-scale research centers, which needed government funding. They brought in leaders of organized innovation such as Thomas Edison in the U.S. and chemist Fritz Haber of the Kaiser Wilhelm Institute in Germany. Postmodern technology The postmodern stage of military technology emerged in the 1940s, gaining recognition thanks to the high priority given during the war to scientific and engineering research and development regarding nuclear weapons, radar, jet engines, proximity fuses, advanced submarines, aircraft carriers, and other weapons. This high priority continues into the 21st century. It involves the military application of advanced scientific research regarding nuclear weapons, jet engines, ballistic and guided missiles, radar, biological warfare, and the use of electronics, computers and software. Space During the Cold War, the world's two great superpowers – the Soviet Union and the United States of America – spent large proportions of their GDP on developing military technologies. The drive to place objects in orbit stimulated space research and started the Space Race. In 1957, the USSR launched the first artificial satellite, Sputnik 1. By the end of the 1960s, both countries regularly deployed satellites. Spy satellites were used by militaries to take accurate pictures of their rivals' military installations. As time passed, the resolution and accuracy of orbital reconnaissance alarmed both sides of the Iron Curtain. Both the United States and the Soviet Union began to develop anti-satellite weapons to blind or destroy each other's satellites. Laser weapons, kamikaze-style satellites, and orbital nuclear explosions were researched with varying levels of success. Spy satellites were, and continue to be, used to monitor the dismantling of military assets in accordance with arms control treaties signed between the two superpowers. To use spy satellites in such a manner is often referred to in treaties as "national technical means of verification". The superpowers developed ballistic missiles to enable them to use nuclear weaponry across great distances. As rocket science developed, the range of missiles increased and intercontinental ballistic missiles (ICBMs) were created, which could strike virtually any target on Earth in a timeframe measured in minutes rather than hours or days.
To cover large distances ballistic missiles are usually launched into sub-orbital spaceflight. As soon as intercontinental missiles were developed, military planners began programmes and strategies to counter their effectiveness. Mobilization A significant portion of military technology is about transportation, allowing troops and weaponry to be moved from their origins to the front. Land transport has historically been mainly by foot, land vehicles have usually been used as well, from chariots to tanks. When conducting a battle over a body of water, ships are used. There are historically two main categories of ships: those for transporting troops, and those for attacking other ships. Soon after the invention of aeroplanes, military aviation became a significant component of warfare, though usually as a supplementary role. The two main types of military aircraft are bombers, which attack land- or sea-based targets, and fighters, which attack other aircraft. Military vehicles are land combat or transportation vehicles, excluding rail-based, which are designed for or in significant use by military forces. List of military vehicles List of armoured fighting vehicles List of tanks Military aircraft includes any use of aircraft by a country's military, including such areas as transport, training, disaster relief, border patrol, search and rescue, surveillance, surveying, peacekeeping, and (very rarely) aerial warfare. List of aircraft List of aircraft weapons Warships are watercraft for combat and transportation in and on seas and oceans. Submarines Complex masting and sail systems found on warships during the Age of Sail List of historical ship and boat types List of aircraft carriers List of submarine classes Defence Fortifications are military constructions and buildings designed for defence in warfare. They range in size and age from the Great Wall of China to a Sangar. List of fortifications List of forts Sensors and communication Sensors and communication systems are used to detect enemies, coordinate movements of armed forces and guide weaponry. Early systems included flag signaling, telegraph and heliographs. Laser guidance Missile guidance Norden Bombsight Proximity fuse Radar Satellite guidance in guidance weapons Future technology The Defense Advanced Research Projects Agency is an agency of the United States Department of Defense responsible for the development of new technologies for use by the military. DARPA leads the development of military technology in the United States and today, has dozens of ongoing projects; everything from humanoid robots to bullets that can change path before reaching their target. China has a similar agency. Emerging territory Current militaries continue to invest in new technologies for the future. Such technologies include cognitive radar, 5G cellular networks, microchips, semiconductors, and large scale analytic engines. Additionally, many militaries seek to improve current laser technology. For example, Israeli Defense Forces utilize laser technology to disable small enemy machinery, but seek to move to more large scale capabilities in the coming years. Militaries across the world continue to perform research on autonomous technologies which allow for increased troop mobility or replacement of live soldiers. Autonomous vehicles and robots are expected to play a role in future conflicts; this has the potential to decrease loss of life in future warfare. 
Observers of transhumanism note high rates of technological terms in military literature, but low rates for explicitly transhuman-related terms. Today's hybrid style of warfare also calls for investments in information technologies. Increased reliance on computer systems has incentivized nations to push for increased efforts at managing large-scale networks and having access to large-scale data. New strategies of cyber and hybrid warfare include network attacks, media analysis, and media and grass-roots campaigns on platforms such as blog posts. Cyberspace In 2011, the US Defense Department declared cyberspace a new domain of warfare; since then DARPA has begun a research project known as "Project X" with the goal of creating new technologies that will enable the government to better understand and map the cyber territory, ultimately giving the Department of Defense the ability to plan and manage large-scale cyber missions across dynamic network environments.
Technology
Military technology: General
null
1728510
https://en.wikipedia.org/wiki/Permanganate
Permanganate
A permanganate is a chemical compound with the manganate(VII) ion, MnO4−, the conjugate base of permanganic acid. Because the manganese atom has a +7 oxidation state, the permanganate(VII) ion is a strong oxidising agent. The ion is a transition metal ion with a tetrahedral structure. Permanganate solutions are purple in colour and are stable in neutral or slightly alkaline media. The exact chemical reaction depends on the carbon-containing reactants present and the oxidant used. For example, trichloroethane (C2H3Cl3) is oxidised by permanganate ions to form carbon dioxide (CO2), manganese dioxide (MnO2), hydrogen ions (H+), and chloride ions (Cl−). 8 MnO4− + 3 C2H3Cl3 → 6 CO2 + 8 MnO2 + H+ + 4 H2O + 9 Cl− In an acidic solution, permanganate(VII) is reduced to the pale pink manganese(II) ion (Mn2+) with an oxidation state of +2. 8 H+ + MnO4− + 5 e− → Mn2+ + 4 H2O In a strongly basic or alkaline solution, permanganate(VII) is reduced to the green manganate ion, MnO42−, with an oxidation state of +6. MnO4− + e− → MnO42− In a neutral solution, however, it gets reduced to the brown manganese dioxide MnO2 with an oxidation state of +4. 2 H2O + MnO4− + 3 e− → MnO2 + 4 OH− Production Permanganates can be produced by oxidation of manganese compounds such as manganese chloride or manganese sulfate by strong oxidizing agents, for instance, sodium hypochlorite or lead dioxide: 2 MnCl2 + 5 NaClO + 6 NaOH → 2 NaMnO4 + 9 NaCl + 3 H2O 2 MnSO4 + 5 PbO2 + 3 H2SO4 → 2 HMnO4 + 5 PbSO4 + 2 H2O It may also be produced by the disproportionation of manganates, with manganese dioxide as a side-product: 3 Na2MnO4 + 2 H2O → 2 NaMnO4 + MnO2 + 4 NaOH They are produced commercially by electrolysis or air oxidation of alkaline solutions of manganate salts (MnO42−). Usage Permanganate is a common and strong disinfectant, used regularly to sanitize baths, toilets, and wash basins. It is a cheap and extremely effective compound for the task. Properties Permanganates(VII) are salts of permanganic acid. They have a deep purple colour, due to a charge transfer transition from oxo ligand p orbitals to empty orbitals derived from manganese(VII) d orbitals. Permanganate(VII) is a strong oxidizer, similar in this respect to perchlorate. It is therefore in common use in quantitative analysis that involves redox reactions (permanganometry). According to theory, permanganate is strong enough to oxidize water, but this does not actually happen to any extent. Besides this, it is stable. It is a useful reagent, but it is not very selective with organic compounds. Potassium permanganate is used as a disinfectant and water treatment additive in aquaculture. Manganates(VII) are not very stable thermally. For instance, potassium permanganate decomposes at 230 °C to potassium manganate and manganese dioxide, releasing oxygen gas: 2 KMnO4 → K2MnO4 + MnO2 + O2 A permanganate can oxidize an amine to a nitro compound, an alcohol to a ketone, an aldehyde to a carboxylic acid, a terminal alkene to a carboxylic acid, oxalic acid to carbon dioxide, and an alkene to a diol. This list is not exhaustive. In alkene oxidations, one intermediate is a cyclic Mn(V) species. Compounds Ammonium permanganate, NH4MnO4 Barium permanganate, Ba(MnO4)2 Calcium permanganate, Ca(MnO4)2 Lithium permanganate, LiMnO4 Potassium permanganate, KMnO4 Sodium permanganate, NaMnO4 Silver permanganate, AgMnO4 Safety The fatal dose of permanganate is about 10 g, and several fatal intoxications have occurred. The strong oxidative effect leads to necrosis of the mucous membrane. For example, the esophagus is affected if the permanganate is swallowed.
Only a limited amount is absorbed by the intestines, but this small amount shows severe effects on the kidneys and on the liver.
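As a quick consistency check on the half-reaction in acidic solution given above, the atom and charge bookkeeping can be verified in a few lines of Python (a minimal sketch added for illustration; the counts are transcribed by hand from the equation rather than parsed from a formula string):

from collections import Counter

# 8 H+ + MnO4− + 5 e−  →  Mn2+ + 4 H2O
left_atoms = Counter({"Mn": 1, "O": 4, "H": 8})
right_atoms = Counter({"Mn": 1, "O": 4, "H": 8})   # Mn2+ plus 4 H2O
left_charge = (-1) + 8 * (+1) + 5 * (-1)           # permanganate, protons, electrons
right_charge = (+2) + 4 * 0                        # manganese(II), water

assert left_atoms == right_atoms and left_charge == right_charge
print("Atoms and charge balance on both sides")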
Physical sciences
Metallic oxyanions
Chemistry
1728672
https://en.wikipedia.org/wiki/Human%20impact%20on%20the%20environment
Human impact on the environment
Human impact on the environment (or anthropogenic environmental impact) refers to changes to biophysical environments and to ecosystems, biodiversity, and natural resources caused directly or indirectly by humans. Modifying the environment to fit the needs of society (as in the built environment) is causing severe effects including global warming, environmental degradation (such as ocean acidification), mass extinction and biodiversity loss, ecological crisis, and ecological collapse. Some human activities that cause damage (either directly or indirectly) to the environment on a global scale include population growth, neoliberal economic policies and rapid economic growth, overconsumption, overexploitation, pollution, and deforestation. Some of the problems, including global warming and biodiversity loss, have been proposed as representing catastrophic risks to the survival of the human species. The term anthropogenic designates an effect or object resulting from human activity. The term was first used in the technical sense by Russian geologist Alexey Pavlov, and it was first used in English by British ecologist Arthur Tansley in reference to human influences on climax plant communities. The atmospheric scientist Paul Crutzen introduced the term "Anthropocene" in the mid-1970s. The term is sometimes used in the context of pollution produced from human activity since the start of the Agricultural Revolution but also applies broadly to all major human impacts on the environment. Many of the actions taken by humans that contribute to a heated environment stem from the burning of fossil fuel from a variety of sources, such as: electricity, cars, planes, space heating, manufacturing, or the destruction of forests. Human overshoot Overconsumption Overconsumption is a situation where resource use has outpaced the sustainable capacity of the ecosystem. It can be measured by the ecological footprint, a resource accounting approach which compares human demand on ecosystems with the amount of planet matter ecosystems can renew. Estimates by the Global Footprint Network indicate that humanity's current demand is 70% higher than the regeneration rate of all of the planet's ecosystems combined. A prolonged pattern of overconsumption leads to environmental degradation and the eventual loss of resource bases. Humanity's overall impact on the planet is affected by many factors, not just the raw number of people. Their lifestyle (including overall affluence and resource use) and the pollution they generate (including carbon footprint) are equally important. In 2008, The New York Times stated that the inhabitants of the developed nations of the world consume resources like oil and metals at a rate almost 32 times greater than those of the developing world, who make up the majority of the human population. Human civilization has caused the loss of 83% of all wild mammals and half of plants. The world's chickens are triple the weight of all the wild birds, while domesticated cattle and pigs outweigh all wild mammals by 14 to 1. Global meat consumption is projected to more than double by 2050, perhaps as much as 76%, as the global population rises to more than 9 billion, which will be a significant driver of further biodiversity loss and increased Greenhouse gas emissions. Population growth and size Some scholars, environmentalists and advocates have linked human population growth or population size as a driver of environmental issues, including some suggesting this indicates an overpopulation scenario. 
In 2017, over 15,000 scientists around the world issued a second warning to humanity which asserted that rapid human population growth is the "primary driver behind many ecological and even societal threats." According to the Global Assessment Report on Biodiversity and Ecosystem Services, released by the United Nations' Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services in 2019, human population growth is a significant factor in contemporary biodiversity loss. A 2021 report in Frontiers in Conservation Science proposed that population size and growth are significant factors in biodiversity loss, soil degradation and pollution. Some scientists and environmentalists, including Pentti Linkola, Jared Diamond and E. O. Wilson, posit that human population growth is devastating to biodiversity. Wilson for example, has expressed concern that when Homo sapiens reached a population of six billion their biomass exceeded that of any other large land dwelling animal species that had ever existed by over 100 times. However, attributing overpopulation as a cause of environmental issues is controversial. Demographic projections indicate that population growth is slowing and world population will peak in the 21st century, and many experts believe that global resources can meet this increased demand, suggesting a global overpopulation scenario is unlikely. Other projections have the population continuing to grow into the next century. While some studies, including the British government's 2021 Economics of Biodiversity review, posit that population growth and overconsumption are interdependent, critics suggest blaming overpopulation for environmental issues can unduly blame poor populations in the Global South or oversimplify more complex drivers, leading some to treat overconsumption as a separate issue. Advocates for further reducing fertility rates, among them Rodolfo Dirzo and Paul R. Ehrlich, argue that this reduction should primarily affect the "overconsuming wealthy and middle classes," with the ultimate goal being to shrink "the scale of the human enterprise" and reverse the "growthmania" which they say threatens biodiversity and the "life-support systems of humanity." Fishing and farming The environmental impact of agriculture varies based on the wide variety of agricultural practices employed around the world. Ultimately, the environmental impact depends on the production practices of the system used by farmers. The connection between emissions into the environment and the farming system is indirect, as it also depends on other climate variables such as rainfall and temperature. There are two types of indicators of environmental impact: "means-based", which is based on the farmer's production methods, and "effect-based", which is the impact that farming methods have on the farming system or on emissions to the environment. An example of a means-based indicator would be the quality of groundwater that is affected by the amount of nitrogen applied to the soil. An indicator reflecting the loss of nitrate to groundwater would be effect-based. The environmental impact of agriculture involves a variety of factors from the soil, to water, the air, animal and soil diversity, plants, and the food itself. Some of the environmental issues that are related to agriculture are climate change, deforestation, genetic engineering, irrigation problems, pollutants, soil degradation, and waste. 
Fishing The environmental impact of fishing can be divided into issues that involve the availability of fish to be caught, such as overfishing, sustainable fisheries, and fisheries management; and issues that involve the impact of fishing on other elements of the environment, such as by-catch and destruction of habitat such as coral reefs. According to the 2019 Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services report, overfishing is the main driver of mass species extinction in the oceans. These conservation issues are part of marine conservation, and are addressed in fisheries science programs. There is a growing gap between how many fish are available to be caught and humanity's desire to catch them, a problem that gets worse as the world population grows. Similar to other environmental issues, there can be conflict between the fishermen who depend on fishing for their livelihoods and fishery scientists who realize that if future fish populations are to be sustainable then some fisheries must reduce or even close. The journal Science published a four-year study in November 2006, which predicted that, at prevailing trends, the world would run out of wild-caught seafood in 2048. The scientists stated that the decline was a result of overfishing, pollution and other environmental factors that were reducing the population of fisheries at the same time as their ecosystems were being degraded. Yet again the analysis has met criticism as being fundamentally flawed, and many fishery management officials, industry representatives and scientists challenge the findings, although the debate continues. Many countries, such as Tonga, the United States, Australia and New Zealand, and international management bodies have taken steps to appropriately manage marine resources. The UN's Food and Agriculture Organization (FAO) released their biennial State of World Fisheries and Aquaculture in 2018 noting that capture fishery production has remained constant for the last two decades but unsustainable overfishing has increased to 33% of the world's fisheries. They also noted that aquaculture, the production of farmed fish, has increased from 120 million tonnes per year in 1990 to over 170 million tonnes in 2018. Populations of oceanic sharks and rays have been reduced by 71% since 1970, largely due to overfishing. More than three-quarters of the species comprising this group are now threatened with extinction. Irrigation The environmental impact of irrigation includes the changes in quantity and quality of soil and water as a result of irrigation and the ensuing effects on natural and social conditions at the tail-end and downstream of the irrigation scheme. The impacts stem from the changed hydrological conditions owing to the installation and operation of the scheme. An irrigation scheme often draws water from the river and distributes it over the irrigated area. As a hydrological result it is found that: the downstream river discharge is reduced the evaporation in the scheme is increased the groundwater recharge in the scheme is increased the level of the water table rises the drainage flow is increased. These may be called direct effects. Effects on soil and water quality are indirect and complex, and subsequent impacts on natural, ecological and socio-economic conditions are intricate. In some, but not all instances, water logging and soil salinization can result. 
However, irrigation can also be used, together with soil drainage, to overcome soil salinization by leaching excess salts from the vicinity of the root zone. Irrigation can also be done extracting groundwater by (tube)wells. As a hydrological result it is found that the level of the water descends. The effects may be water mining, land/soil subsidence, and, along the coast, saltwater intrusion. Irrigation projects can have large benefits, but the negative side effects are often overlooked. Agricultural irrigation technologies such as high powered water pumps, dams, and pipelines are responsible for the large-scale depletion of fresh water resources such as aquifers, lakes, and rivers. As a result of this massive diversion of freshwater, lakes, rivers, and creeks are running dry, severely altering or stressing surrounding ecosystems, and contributing to the extinction of many aquatic species. Agricultural land loss Lal and Stewart estimated global loss of agricultural land by degradation and abandonment at 12 million hectares per year. In contrast, according to Scherr, GLASOD (Global Assessment of Human-Induced Soil Degradation, under the UN Environment Programme) estimated that 6 million hectares of agricultural land per year had been lost to soil degradation since the mid-1940s, and she noted that this magnitude is similar to earlier estimates by Dudal and by Rozanov et al. Such losses are attributable not only to soil erosion, but also to salinization, loss of nutrients and organic matter, acidification, compaction, water logging and subsidence. Human-induced land degradation tends to be particularly serious in dry regions. Focusing on soil properties, Oldeman estimated that about 19 million square kilometers of global land area had been degraded; Dregne and Chou, who included degradation of vegetation cover as well as soil, estimated about 36 million square kilometers degraded in the world's dry regions. Despite estimated losses of agricultural land, the amount of arable land used in crop production globally increased by about 9% from 1961 to 2012, and is estimated to have been 1.396 billion hectares in 2012. Global average soil erosion rates are thought to be high, and erosion rates on conventional cropland generally exceed estimates of soil production rates, usually by more than an order of magnitude. In the US, sampling for erosion estimates by the US NRCS (Natural Resources Conservation Service) is statistically based, and estimation uses the Universal Soil Loss Equation and Wind Erosion Equation. For 2010, annual average soil loss by sheet, rill and wind erosion on non-federal US land was estimated to be 10.7 t/ha on cropland and 1.9 t/ha on pasture land; the average soil erosion rate on US cropland had been reduced by about 34% since 1982. No-till and low-till practices have become increasingly common on North American cropland used for production of grains such as wheat and barley. On uncultivated cropland, the recent average total soil loss has been 2.2 t/ha per year. In comparison with agriculture using conventional cultivation, it has been suggested that, because no-till agriculture produces erosion rates much closer to soil production rates, it could provide a foundation for sustainable agriculture. Land degradation is a process in which the value of the biophysical environment is affected by a combination of human-induced processes acting upon the land. It is viewed as any change or disturbance to the land perceived to be deleterious or undesirable. 
Natural hazards are excluded as a cause; however human activities can indirectly affect phenomena such as floods and bush fires. This is considered to be an important topic of the 21st century due to the implications land degradation has upon agronomic productivity, the environment, and its effects on food security. It is estimated that up to 40% of the world's agricultural land is seriously degraded. Meat production Environmental impacts associated with meat production include use of fossil energy, water and land resources, greenhouse gas emissions, and in some instances, rainforest clearing, water pollution and species endangerment, among other adverse effects. Steinfeld et al. of the FAO estimated that 18% of global anthropogenic GHG (greenhouse gas) emissions (estimated as 100-year carbon dioxide equivalents) are associated in some way with livestock production. FAO data indicate that meat accounted for 26% of global livestock product tonnage in 2011. Globally, enteric fermentation (mostly in ruminant livestock) accounts for about 27% of anthropogenic methane emissions, Despite methane's 100-year global warming potential, recently estimated at 28 without and 34 with climate-carbon feedbacks, methane emission is currently contributing relatively little to global warming. Although reduction of methane emissions would have a rapid effect on warming, the expected effect would be small. Other anthropogenic GHG emissions associated with livestock production include carbon dioxide from fossil fuel consumption (mostly for production, harvesting and transport of feed), and nitrous oxide emissions associated with the use of nitrogenous fertilizers, growing of nitrogen-fixing legume vegetation and manure management. Management practices that can mitigate GHG emissions from production of livestock and feed have been identified. Considerable water use is associated with meat production, mostly because of water used in production of vegetation that provides feed. There are several published estimates of water use associated with livestock and meat production, but the amount of water use assignable to such production is seldom estimated. For example, "green water" use is evapotranspirational use of soil water that has been provided directly by precipitation; and "green water" has been estimated to account for 94% of global beef cattle production's "water footprint", and on rangeland, as much as 99.5% of the water use associated with beef production is "green water". Impairment of water quality by manure and other substances in runoff and infiltrating water is a concern, especially where intensive livestock production is carried out. In the US, in a comparison of 32 industries, the livestock industry was found to have a relatively good record of compliance with environmental regulations pursuant to the Clean Water Act and Clean Air Act, but pollution issues from large livestock operations can sometimes be serious where violations occur. Various measures have been suggested by the US Environmental Protection Agency, among others, which can help reduce livestock damage to streamwater quality and riparian environments. Changes in livestock production practices influence the environmental impact of meat production, as illustrated by some beef data. 
In the US beef production system, practices prevailing in 2007 are estimated to have involved 8.6% less fossil fuel use, 16% less greenhouse gas emissions (estimated as 100-year carbon dioxide equivalents), 12% less withdrawn water use and 33% less land use, per unit mass of beef produced, than in 1977. From 1980 to 2012 in the US, while population increased by 38%, the small ruminant inventory decreased by 42%, the cattle-and-calves inventory decreased by 17%, and methane emissions from livestock decreased by 18%; yet despite the reduction in cattle numbers, US beef production increased over that period. Some impacts of meat-producing livestock may be considered environmentally beneficial. These include waste reduction by conversion of human-inedible crop residues to food, use of livestock as an alternative to herbicides for control of invasive and noxious weeds and other vegetation management, use of animal manure as fertilizer as a substitute for those synthetic fertilizers that require considerable fossil fuel use for manufacture, grazing use for wildlife habitat enhancement, and carbon sequestration in response to grazing practices, among others. Conversely, according to some studies appearing in peer-reviewed journals, the growing demand for meat is contributing to significant biodiversity loss as it is a significant driver of deforestation and habitat destruction. Moreover, the 2019 Global Assessment Report on Biodiversity and Ecosystem Services by IPBES also warns that ever increasing land use for meat production plays a significant role in biodiversity loss. A 2006 Food and Agriculture Organization report, Livestock's Long Shadow, found that around 26% of the planet's terrestrial surface is devoted to livestock grazing. Palm oil Palm oil is a type of vegetable oil, found in oil palm trees, which are native to West and Central Africa. Initially used in foods in developing countries, palm oil is now also used in food, cosmetic and other types of products in other nations as well. Over one-third of vegetable oil consumed globally is palm oil. Habitat loss The consumption of palm oil in food, domestic and cosmetic products all over the world means there is a high demand for it. To meet this, oil palm plantations are created, which means removing natural forests to clear space. This deforestation has taken place in Asia, Latin America and West Africa, with Malaysia and Indonesia holding 90% of global oil palm trees. These forests are home to a wide range of species, including many endangered animals, ranging from birds to rhinos and tigers. Since 2000, 47% of deforestation has been for the purpose of growing oil palm plantations, with around 877,000 acres being affected per year. Impact on biodiversity Natural forests are extremely biodiverse, with a wide range of organisms using them as their habitat. But oil palm plantations are the opposite. Studies have shown that oil palm plantations have less than 1% of the plant diversity seen in natural forests, and 47–90% less mammal diversity. This is not because of the oil palm itself, but rather because the oil palm is the only habitat provided in the plantations. The plantations are therefore known as a monoculture, whereas natural forests contain a wide variety of flora and fauna, making them highly biodiverse. One of the ways palm oil could be made more sustainable (although it is still not the best option) is through agroforestry, whereby the plantations are made up of multiple types of plants used in trade – such as coffee or cocoa. 
While these are more biodiverse than monoculture plantations, they are still not as effective as natural forests. In addition to this, agroforestry does not bring as many economic benefits to workers, their families and the surrounding areas. Roundtable on Sustainable Palm Oil (RSPO) The RSPO is a non-profit organisation that has developed criteria that its members (of which, as of 2018, there are over 4,000) must follow to produce, source and use sustainable palm oil (Certified Sustainable Palm Oil; CSPO). Currently, 19% of global palm oil is certified by the RSPO as sustainable. The CSPO criteria states that oil palm plantations cannot be grown in the place of forests or other areas with endangered species, fragile ecosystems, or those that facilitate the needs of local communities. It also calls for a reduction in pesticides and fires, along with several rules for ensuring the social wellbeing of workers and the local communities. Ecosystem impacts Environmental degradation Human activity is causing environmental degradation, which is the deterioration of the environment through depletion of resources such as air, water and soil; the destruction of ecosystems; habitat destruction; the extinction of wildlife; and pollution. It is defined as any change or disturbance to the environment perceived to be deleterious or undesirable. As indicated by the I=PAT equation, environmental impact (I) or degradation is caused by the combination of an already very large and increasing human population (P), continually increasing economic growth or per capita affluence (A), and the application of resource-depleting and polluting technology (T). According to a 2021 study published in Frontiers in Forests and Global Change, roughly 3% of the planet's terrestrial surface is ecologically and faunally intact, meaning areas with healthy populations of native animal species and little to no human footprint. Many of these intact ecosystems were in areas inhabited by indigenous peoples. Habitat fragmentation According to a 2018 study in Nature, 87% of the oceans and 77% of land (excluding Antarctica) have been altered by anthropogenic activity, and 23% of the planet's landmass remains as wilderness. Habitat fragmentation is the reduction of large tracts of habitat leading to habitat loss. Habitat fragmentation and loss are considered as being the main cause of the loss of biodiversity and degradation of the ecosystem all over the world. Human actions are greatly responsible for habitat fragmentation, and loss as these actions alter the connectivity and quality of habitats. Understanding the consequences of habitat fragmentation is important for the preservation of biodiversity and enhancing the functioning of the ecosystem. Both agricultural plants and animals depend on pollination for reproduction. Vegetables and fruits are an important diet for human beings and depend on pollination. Whenever there is habitat destruction, pollination is reduced and crop yield as well. Many plants also rely on animals and most especially those that eat fruit for seed dispersal. Therefore, the destruction of habitat for animal severely affects all the plant species that depend on them. Mass extinction Biodiversity generally refers to the variety and variability of life on Earth, and is represented by the number of different species there are on the planet. 
Since its introduction, Homo sapiens (the human species) has been killing off entire species either directly (such as through hunting) or indirectly (such as by destroying habitats), causing the extinction of species at an alarming rate. Humans are the cause of the current mass extinction, called the Holocene extinction, driving extinctions to 100 to 1000 times the normal background rate. Though most experts agree that human beings have accelerated the rate of species extinction, some scholars have postulated without humans, the biodiversity of the Earth would grow at an exponential rate rather than decline. The Holocene extinction continues, with meat consumption, overfishing, ocean acidification and the amphibian crisis being a few broader examples of an almost universal, cosmopolitan decline in biodiversity. Human overpopulation (and continued population growth) along with overconsumption, especially by the super-affluent, are considered to be the primary drivers of this rapid decline. The 2017 World Scientists' Warning to Humanity stated that, among other things, this sixth extinction event unleashed by humanity could annihilate many current life forms and consign them to extinction by the end of this century. A 2022 scientific review published in Biological Reviews confirms that a biodiversity loss crisis caused by human activity, which the researchers describe as a sixth mass extinction event, is currently underway. A June 2020 study published in PNAS argues that the contemporary extinction crisis "may be the most serious environmental threat to the persistence of civilization, because it is irreversible" and that its acceleration "is certain because of the still fast growth in human numbers and consumption rates." Biodiversity loss It has been estimated that from 1970 to 2016, 68% of the world's wildlife has been destroyed due to human activity. In South America, there is believed to be a 70 percent loss. A May 2018 study published in PNAS found that 83% of wild mammals, 80% of marine mammals, 50% of plants and 15% of fish have been lost since the dawn of human civilization. Currently, livestock make up 60% of the biomass of all mammals on earth, followed by humans (36%) and wild mammals (4%). According to the 2019 global biodiversity assessment by IPBES, human civilization has pushed one million species of plants and animals to the brink of extinction, with many of these projected to vanish over the next few decades. When plant biodiversity declines, the remaining plants face diminishing productivity. Biodiversity loss threatens ecosystem productivity and services such as food, fresh water, raw materials and medicinal resources. A 2019 report that assessed a total of 28,000 plant species concluded that close to half of them were facing a threat of extinction. The failure of noticing and appreciating plants is regarded as "plant blindness", and this is a worrying trend as it puts more plants at the threat of extinction than animals. Our increased farming has come at a higher cost to plant biodiversity as half of the habitable land on Earth is used for agriculture, and this is one of the major reasons behind the plant extinction crisis. Defaunation is the loss of animals from ecological communities. Invasive species Invasive species are defined by the U.S. Department of Agriculture as non-native to the specific ecosystem, and whose presence is likely to harm the health of humans or the animals in said system. 
Introductions of non-native species into new areas have brought about major and permanent changes to the environment over large areas. Examples include the introduction of Caulerpa taxifolia into the Mediterranean, the introduction of oat species into the California grasslands, and the introduction of privet, kudzu, and purple loosestrife to North America. Rats, cats, and goats have radically altered biodiversity in many islands. Additionally, introductions have resulted in genetic changes to native fauna where interbreeding has taken place, as with buffalo with domestic cattle, and wolves with domestic dogs. Human-introduced invasive species Cats Domestic and feral cats globally are particularly notorious for their destruction of native birds and other animal species. This is especially true for Australia, which attributes over two-thirds of its mammal extinctions to domestic and feral cats, along with over 1.5 billion native animal deaths each year. Because domesticated outdoor cats are fed by their owners, they can continue to hunt even when prey populations decline and they would otherwise move elsewhere. This is a major problem for places with highly diverse and dense populations of lizards, birds, snakes, and mice. Roaming outdoor cats also contribute to the transmission of harmful diseases like rabies and toxoplasmosis to native wildlife. Burmese python Another example of a destructive introduced invasive species is the Burmese python. Originating from parts of Southeast Asia, the Burmese python has made its most notable impact in the southern Florida Everglades of the United States. After a breeding facility breach in 1992 due to flooding, and snake owners releasing unwanted pythons back into the wild, the population of the Burmese python boomed in the warm climate of Florida in the following years. This impact has been felt most significantly in the southernmost regions of the Everglades. A 2012 study compared native species population counts in Florida with those from 1997 and found that raccoon populations had declined by 99.3% and opossums by 98.9%, while rabbit and fox populations had effectively disappeared. Hybrid boars In the 1980s, Canadian pig farmers introduced wild boars from the United Kingdom into their breeding programs, producing a hybrid that yields more meat. However, when the pork market collapsed in 2001, many of these hybrids were released into the wild. These hybrids, now numbering around 62,000, are thriving in the Canadian prairies due to their adaptation to harsh winters, with thick fur, long legs, and tusks sharp enough to dig through soil for food. They cause significant agricultural damage and have grown to a point where even substantial culling efforts are insufficient. This issue has escalated to the extent that these boars are starting to migrate into northern US states, raising concerns about potential crop damage and the spread of diseases like African swine fever, which could severely impact the pork industry. Coral reef decline Water pollution Domestic, industrial and agricultural wastewater can be processed in wastewater treatment plants before being released into aquatic ecosystems. Treated wastewater still contains a range of different chemical and biological contaminants which may influence surrounding ecosystems. 
Climate change Contemporary climate change is the result of increasing atmospheric greenhouse gas concentrations, which is caused primarily by combustion of fossil fuel (coal, oil, natural gas), and by deforestation, land use changes, and cement production. Such massive alteration of the global carbon cycle has only been possible because of the availability and deployment of advanced technologies, ranging in application from fossil fuel exploration, extraction, distribution, refining, and combustion in power plants and automobile engines and advanced farming practices. Livestock contributes to climate change both through the production of greenhouse gases and through destruction of carbon sinks such as rain-forests. According to the 2006 United Nations/FAO report, 18% of all greenhouse gas emissions found in the atmosphere are due to livestock. The raising of livestock and the land needed to feed them has resulted in the destruction of millions of acres of rainforest and as global demand for meat rises, so too will the demand for land. Ninety-one percent of all rainforest land deforested since 1970 is now used for livestock. Impacts through the atmosphere Acid deposition The air pollutants released from the burning of fossil fuels usually comes back to earth in the form of acid rain. Acid rain is a form of precipitation which has high sulfuric and nitric acids, which can also occur in the form of a fog or snow. Acid rain has numerous ecological impacts on streams, lakes, wetlands and other aquatic environments. It damages forests, robs the soil of its essential nutrients, and releases aluminium in the soil, which creates difficulties in the absorption of water for local plant life. Researchers have discovered that kelp, eelgrass and other aquatic vegetation absorbs carbon dioxide and hence reduces ocean acidity. Scientists, therefore, say that growing these plants could help in mitigating the damaging effects of acidification on marine life. Ozone depletion Disruption of the nitrogen cycle Of particular concern is N2O, which has an average atmospheric lifetime of 114–120 years, and is 300 times more effective than CO2 as a greenhouse gas. NOx produced by industrial processes, automobiles and agricultural fertilization and NH3 emitted from soils (i.e., as an additional byproduct of nitrification) and livestock operations are transported to downwind ecosystems, influencing N cycling and nutrient losses. Six major effects of NOx and NH3 emissions have been identified: decreased atmospheric visibility due to ammonium aerosols (fine particulate matter [PM]) elevated ozone concentrations ozone and PM affects human health (e.g. respiratory diseases, cancer) increases in radiative forcing and global warming decreased agricultural productivity due to ozone deposition ecosystem acidification and eutrophication. Technology impacts The applications of technology often result in unavoidable and unexpected environmental impacts, which according to the I = PAT equation is measured as resource use or pollution generated per unit GDP. Environmental impacts caused by the application of technology are often perceived as unavoidable for several reasons. 
First, given that the purpose of many technologies is to exploit, control, or otherwise "improve" upon nature for the perceived benefit of humanity while at the same time, the myriad of processes in nature have been optimized and are continually adjusted by evolution, any disturbance of these natural processes by technology is likely to result in negative environmental consequences. Second, the conservation of mass principle and the first law of thermodynamics (i.e., conservation of energy) dictate that whenever material resources or energy are moved around or manipulated by technology, environmental consequences are inescapable. Third, according to the second law of thermodynamics, order can be increased within a system (such as the human economy) only by increasing disorder or entropy outside the system (i.e., the environment). Thus, technologies can create "order" in the human economy (i.e., order as manifested in buildings, factories, transportation networks, communication systems, etc.) only at the expense of increasing "disorder" in the environment. According to several studies, increased entropy is likely to correlate to negative environmental impacts. Mining industry The environmental impact of mining includes erosion, formation of sinkholes, loss of biodiversity, and contamination of soil, groundwater and surface water by chemicals from mining processes. In some cases, additional forest logging is done in the vicinity of mines to increase the available room for the storage of the created debris and soil. Even though plants need some heavy metals for their growth, excess of these metals is usually toxic to them. Plants that are polluted with heavy metals usually depict reduced growth, yield and performance. Pollution by heavy metals decreases the soil organic matter composition resulting in a decline in soil nutrients which then leads to a decline in the growth of plants or even death. Besides creating environmental damage, the contamination resulting from leakage of chemicals also affect the health of the local population. Mining companies in some countries are required to follow environmental and rehabilitation codes, ensuring the area mined is returned to close to its original state. Some mining methods may have significant environmental and public health effects. Heavy metals usually exhibit toxic effects towards the soil biota, and this is through the affection of the microbial processes and decreases the number as well as activity of soil microorganisms. Low concentration of heavy metals also has high chances of inhibiting the plant's physiological metabolism. Energy industry The environmental impact of energy harvesting and consumption is diverse. In recent years there has been a trend towards the increased commercialization of various renewable energy sources. In the real world, consumption of fossil fuel resources leads to global warming and climate change. However, little change is being made in many parts of the world. If the peak oil theory proves true, more explorations of viable alternative energy sources, could be more friendly to the environment. Rapidly advancing technologies can achieve a transition of energy generation, water and waste management, and food production towards better environmental and energy usage practices using methods of systems ecology and industrial ecology. Biodiesel The environmental impact of biodiesel includes energy use, greenhouse gas emissions and some other kinds of pollution. 
A joint life cycle analysis by the US Department of Agriculture and the US Department of Energy found that substituting 100% biodiesel for petroleum diesel in buses reduced life cycle consumption of petroleum by 95%. Biodiesel reduced net emissions of carbon dioxide by 78.45%, compared with petroleum diesel. In urban buses, biodiesel reduced particulate emissions 32 percent, carbon monoxide emissions 35 percent, and emissions of sulfur oxides 8%, relative to life cycle emissions associated with use of petroleum diesel. Life cycle emissions of hydrocarbons were 35% higher and emission of various nitrogen oxides (NOx) were 13.5% higher with biodiesel. Life cycle analyses by the Argonne National Laboratory have indicated reduced fossil energy use and reduced greenhouse gas emissions with biodiesel, compared with petroleum diesel use. Biodiesel derived from various vegetable oils (e.g. canola or soybean oil), is readily biodegradable in the environment compared with petroleum diesel. Coal mining and burning The environmental impact of coal mining and -burning is diverse. Legislation passed by the US Congress in 1990 required the United States Environmental Protection Agency (EPA) to issue a plan to alleviate toxic air pollution from coal-fired power plants. After delay and litigation, the EPA now has a court-imposed deadline of 16 March 2011, to issue its report. Surface coal mining has the greatest impact on the environment due to its unique extraction process requiring drilling and blasting, which releases macro amounts of airborne particles into the air. This airborne particulate matter releases harmful toxins into the atmosphere such as ammonia, carbon monoxide, and nitrogen oxides. These toxins then lead to many detrimental health effects such as respiratory illnesses and cardiovascular disease. Although coal is the most widely utilized source of energy around the world, the burning of coal emits poisonous toxins into the air, leading to various health ailments of the skin, blood and lung diseases, and various forms of cancer, while also contributing to global warming by the emission of these toxins into the environment. The technology for mining activity has advanced over the years, leading to an increase in mine waste leading to more pollution problems, according to the Safe Drinking Water Foundation Studies that have been conducted in various countries like India, have proven that coal mining has a detrimental effect on other biotic and abiotic factors including vegetation and soil, leading to a decrease in plant populations in mining sites Electricity generation Nuclear power The environmental impact of nuclear power results from the nuclear fuel cycle processes including mining, processing, transporting and storing fuel and radioactive fuel waste. Released radioisotopes pose a health danger to human populations, animals and plants as radioactive particles enter organisms through various transmission routes. Radiation is a carcinogen and causes numerous effects on living organisms and systems. The environmental impacts of nuclear power plant disasters such as the Chernobyl disaster, the Fukushima Daiichi nuclear disaster and the Three Mile Island accident, among others, persist indefinitely, though several other factors contributed to these events including improper management of fail safe systems and natural disasters putting uncommon stress on the generators. The radioactive decay rate of particles varies greatly, dependent upon the nuclear properties of a particular isotope. 
Radioactive Plutonium-244 has a half-life of 80.8 million years, which indicates the time duration required for half of a given sample to decay, though very little plutonium-244 is produced in the nuclear fuel cycle and lower half-life materials have lower activity thus giving off less dangerous radiation. Oil shale industry The environmental impact of the oil shale industry includes the consideration of issues such as land use, waste management, water and air pollution caused by the extraction and processing of oil shale. Surface mining of oil shale deposits causes the usual environmental impacts of open-pit mining. In addition, the combustion and thermal processing generate waste material, which must be disposed of, and harmful atmospheric emissions, including carbon dioxide, a major greenhouse gas. Experimental in-situ conversion processes and carbon capture and storage technologies may reduce some of these concerns in future, but may raise others, such as the pollution of groundwater. Petroleum The environmental impact of petroleum is often negative because it is toxic to almost all forms of life. Petroleum, a common word for oil or natural gas, is closely linked to virtually all aspects of present society, especially for transportation and heating for both homes and for commercial activities. Reservoirs The environmental impact of reservoirs is coming under ever increasing scrutiny as the world demand for water and energy increases and the number and size of reservoirs increases. Dams and the reservoirs can be used to supply drinking water, generate hydroelectric power, increasing the water supply for irrigation, provide recreational opportunities and flood control. However, adverse environmental and sociological impacts have also been identified during and after many reservoir constructions. Although the impact varies greatly between different dams and reservoirs, common criticisms include preventing sea-run fish from reaching their historical mating grounds, less access to water downstream, and a smaller catch for fishing communities in the area. Advances in technology have provided solutions to many negative impacts of dams but these advances are often not viewed as worth investing in if not required by law or under the threat of fines. Whether reservoir projects are ultimately beneficial or detrimental—to both the environment and surrounding human populations— has been debated since the 1960s and probably long before that. In 1960 the construction of Llyn Celyn and the flooding of Capel Celyn provoked political uproar which continues to this day. More recently, the construction of Three Gorges Dam and other similar projects throughout Asia, Africa and Latin America have generated considerable environmental and political debate. Wind power Manufacturing Cleaning agents The environmental impact of cleaning agents is diverse. In recent years, measures have been taken to reduce these effects. Nanotechnology Nanotechnology's environmental impact can be split into two aspects: the potential for nanotechnological innovations to help improve the environment, and the possibly novel type of pollution that nanotechnological materials might cause if released into the environment. As nanotechnology is an emerging field, there is great debate regarding to what extent industrial and commercial use of nanomaterials will affect organisms and ecosystems. Paint The environmental impact of paint is diverse. 
Traditional painting materials and processes can have harmful effects on the environment, including those from the use of lead and other additives. Measures can be taken to reduce environmental impact, including accurately estimating paint quantities so that wastage is minimized, use of paints, coatings, painting accessories and techniques that are environmentally preferred. The United States Environmental Protection Agency guidelines and Green Star ratings are some of the standards that can be applied. Paper Plastics Some scientists suggest that by 2050 there could be more plastic than fish in the oceans. A December 2020 study published in Nature found that human-made materials, or anthropogenic mass, exceeds all living biomass on Earth, with plastic alone outweighing the mass of all terrestrial and marine animals combined. Pesticides The environmental impact of pesticides is often greater than what is intended by those who use them. Over 98% of sprayed insecticides and 95% of herbicides reach a destination other than their target species, including nontarget species, air, water, bottom sediments, and food. Pesticide contaminates land and water when it escapes from production sites and storage tanks, when it runs off from fields, when it is discarded, when it is sprayed aerially, and when it is sprayed into water to kill algae. The amount of pesticide that migrates from the intended application area is influenced by the particular chemical's properties: its propensity for binding to soil, its vapor pressure, its water solubility, and its resistance to being broken down over time. Factors in the soil, such as its texture, its ability to retain water, and the amount of organic matter contained in it, also affect the amount of pesticide that will leave the area. Some pesticides contribute to global warming and the depletion of the ozone layer. Pharmaceuticals and personal care Transport The environmental impact of transport is significant because it is a major user of energy, and burns most of the world's petroleum. This creates air pollution, including nitrous oxides and particulates, and is a significant contributor to global warming through emission of carbon dioxide, for which transport is the fastest-growing emission sector. By subsector, road transport is the largest contributor to global warming. Environmental regulations in developed countries have reduced the individual vehicles emission; however, this has been offset by an increase in the number of vehicles, and more use of each vehicle. Some pathways to reduce the carbon emissions of road vehicles considerably have been studied. Energy use and emissions vary largely between modes, causing environmentalists to call for a transition from air and road to rail and human-powered transport, and increase transport electrification and energy efficiency. Other environmental impacts of transport systems include traffic congestion and automobile-oriented urban sprawl, which can consume natural habitat and agricultural lands. By reducing transportation emissions globally, it is predicted that there will be significant positive effects on Earth's air quality, acid rain, smog and climate change. The health impact of transport emissions is also of concern. A recent survey of the studies on the effect of traffic emissions on pregnancy outcomes has linked exposure to emissions to adverse effects on gestational duration and possibly also intrauterine growth. 
Aviation The environmental impact of aviation occurs because aircraft engines emit noise, particulates, and gases which contribute to climate change and global dimming. Despite emission reductions from aircraft engines and more fuel-efficient and less polluting turbofan and turboprop engines, the rapid growth of air travel in recent years contributes to an increase in total pollution attributable to aviation. In the EU, greenhouse gas emissions from aviation increased by 87% between 1990 and 2006. Among other factors leading to this phenomenon are the increasing number of hypermobile travellers and social factors that are making air travel commonplace, such as frequent flyer programs. There is an ongoing debate about possible taxation of air travel and the inclusion of aviation in an emissions trading scheme, with a view to ensuring that the total external costs of aviation are taken into account. Roads The environmental impact of roads includes the local effects of highways (public roads) such as on noise pollution, light pollution, water pollution, habitat destruction/disturbance and local air quality; and the wider effects including climate change from vehicle emissions. The design, construction and management of roads, parking and other related facilities as well as the design and regulation of vehicles can change the impacts to varying degrees. Shipping The environmental impact of shipping includes greenhouse gas emissions and oil pollution. In 2007, carbon dioxide emissions from shipping were estimated at 4 to 5% of the global total, and estimated by the International Maritime Organization (IMO) to rise by up to 72% by 2020 if no action is taken. There is also a potential for introducing invasive species into new areas through shipping, usually by attaching themselves to the ship's hull. The First Intersessional Meeting of the IMO Working Group on Greenhouse Gas Emissions from Ships took place in Oslo, Norway on 23–27 June 2008. It was tasked with developing the technical basis for the reduction mechanisms that may form part of a future IMO regime to control greenhouse gas emissions from international shipping, and a draft of the actual reduction mechanisms themselves, for further consideration by IMO's Marine Environment Protection Committee (MEPC). Military General military spending and military activities have marked environmental effects. The United States military is considered one of the worst polluters in the world, responsible for over 39,000 sites contaminated with hazardous materials. Several studies have also found a strong positive correlation between higher military spending and higher carbon emissions where increased military spending has a larger effect on increasing carbon emissions in the Global North than in the Global South. Military activities also affect land use and are extremely resource-intensive. The military does not solely have negative effects on the environment. There are several examples of militaries aiding in land management, conservation, and greening of an area. Additionally, certain military technologies have proven extremely helpful for conservationists and environmental scientists. As well as the cost to human life and society, there is a significant environmental impact of war. Scorched earth methods during, or after war have been in use for much of recorded history but with modern technology war can cause a far greater devastation on the environment. 
Unexploded ordnance can render land unusable for further use or make access across it dangerous or fatal. Light pollution Artificial light at night is one of the most obvious physical changes that humans have made to the biosphere, and is the easiest form of pollution to observe from space. The main environmental impacts of artificial light are due to light's use as an information source (rather than an energy source). The hunting efficiency of visual predators generally increases under artificial light, changing predator prey interactions. Artificial light also affects dispersal, orientation, migration, and hormone levels, resulting in disrupted circadian rhythms. Fast fashion Fast fashion has become one of the most successful industries in many capitalist societies with the increase in globalisation. Fast fashion is the cheap mass production of clothing, which is then sold on at very low prices to consumers. Today, the industry is worth £2 trillion. Environmental impacts In terms of carbon dioxide emissions, the fast fashion industry contributes between 4–5 billion tonnes per year, equating to 8–10% of total global emissions. Carbon dioxide is a greenhouse gas, meaning it causes heat to get trapped in the atmosphere, rather than being released into space, raising the Earth's temperature – known as global warming. Alongside greenhouse gas emissions the industry is also responsible for almost 35% of microplastic pollution in the oceans. Scientists have estimated that there are approximately 12–125 trillion tonnes of microplastic particles in the Earth's oceans. These particles are ingested by marine organisms, including fish later eaten by humans. The study states that many of the fibres found are likely to have come from clothing and other textiles, either from washing, or degradation. Textile waste is a huge issue for the environment, with around 2.1 billion tonnes of unsold or faulty clothing being disposed per year. Much of this is taken to landfill, but the majority of materials used to make clothes are not biodegradable, resulting in them breaking down and contaminating soil and water. Fashion, much like most other industries such as agriculture, requires a large volume of water for production. The rate and quantity at which clothing is produced in fast fashion means the industry uses 79 trillion litres of water every year. Water consumption has proven to be very detrimental to the environment and its ecosystems, leading to water depletion and water scarcity. Not only do these affect marine organisms, but also human's food sources, such as crops. The industry is culpable for roughly one-fifth of all industrial water pollution. Society and culture Warnings by the scientific community There are many publications from the scientific community to warn everyone about growing threats to sustainability, in particular threats to "environmental sustainability". The World Scientists' Warning to Humanity in 1992 begins with: "Human beings and the natural world are on a collision course". About 1,700 of the world's leading scientists, including most Nobel Prize laureates in the sciences, signed this warning letter. The letter mentions severe damage to the atmosphere, oceans, ecosystems, soil productivity, and more. It said that if humanity wants to prevent the damage, steps need to be taken: better use of resources, abandonment of fossil fuels, stabilization of human population, elimination of poverty and more. 
Further warning letters were signed in 2017 and 2019 by thousands of scientists from over 150 countries; they again called for reducing overconsumption (including eating less meat), cutting the use of fossil fuels and other resources, and similar measures.
Physical sciences
Earth science basics: General
Earth science
1729337
https://en.wikipedia.org/wiki/Planckian%20locus
Planckian locus
In physics and color science, the Planckian locus or black body locus is the path or locus that the color of an incandescent black body would take in a particular chromaticity space as the blackbody temperature changes. It goes from deep red at low temperatures through orange, yellowish, white, and finally bluish white at very high temperatures. A color space is a three-dimensional space; that is, a color is specified by a set of three numbers (the CIE coordinates X, Y, and Z, for example, or other values such as hue, colorfulness, and luminance) which specify the color and brightness of a particular homogeneous visual stimulus. A chromaticity is a color projected into a two-dimensional space that ignores brightness. For example, the standard CIE XYZ color space projects directly to the corresponding chromaticity space specified by the two chromaticity coordinates known as x and y, making the familiar chromaticity diagram shown in the figure. The Planckian locus, the path that the color of a black body takes as the blackbody temperature changes, is often shown in this standard chromaticity space. Planckian locus in the XYZ color space In the CIE XYZ color space, the three coordinates defining a color are given by X = ∫ M(λ,T) X(λ) dλ, Y = ∫ M(λ,T) Y(λ) dλ and Z = ∫ M(λ,T) Z(λ) dλ, integrated over the visible wavelengths, where M(λ,T) is the spectral radiant exitance of the light being viewed, X(λ), Y(λ) and Z(λ) are the color matching functions of the CIE standard colorimetric observer, shown in the diagram on the right, and λ is the wavelength. The Planckian locus is determined by substituting into the above equations the black body spectral radiant exitance, which is given by Planck's law: M(λ,T) = c1 / (λ^5 (exp(c2/(λT)) − 1)), where c1 = 2πhc^2 is the first radiation constant, c2 = hc/k is the second radiation constant, M is the black body spectral radiant exitance (power per unit area per unit wavelength: watt per square meter per meter (W/m3)), T is the temperature of the black body, h is the Planck constant, c is the speed of light, and k is the Boltzmann constant. This will give the Planckian locus in CIE XYZ color space. If these coordinates are XT, YT, ZT where T is the temperature, then the CIE chromaticity coordinates will be xT = XT / (XT + YT + ZT) and yT = YT / (XT + YT + ZT). Note that in the above formula for Planck's law, one might as well use c1L = 2hc^2 (the first radiation constant for spectral radiance) instead of c1 (the "regular" first radiation constant), in which case the formula would give the spectral radiance L(λ,T) of the black body instead of the spectral radiant exitance M(λ,T). However, this change only affects the absolute values of XT, YT and ZT, not the values relative to each other. Since XT, YT and ZT are usually normalized to YT = 1 (or YT = 100) and are normalized when xT and yT are calculated, the absolute values of XT, YT and ZT do not matter. For practical reasons, c1 might therefore simply be replaced by 1. Approximation The Planckian locus in xy space is depicted as a curve in the chromaticity diagram above. While it is possible to compute the CIE xy co-ordinates exactly given the above formulas, it is faster to use approximations. Since the mired scale changes more evenly along the locus than the temperature itself, it is common for such approximations to be functions of the reciprocal temperature. Kim et al. use a cubic spline fit expressed as a function of the reciprocal temperature. The Planckian locus can also be approximated in the CIE 1960 color space, which is used to compute CCT and CRI, using closed-form expressions in the reciprocal temperature; this approximation is accurate to within chromaticity errors on the order of 10^−4 in (u, v) for temperatures from roughly 1000 K to 15000 K. 
Alternatively, one can use the chromaticity (x, y) coordinates estimated from above to derive the corresponding (u, v), if a larger range of temperatures is required. The inverse calculation, from chromaticity co-ordinates (x, y) on or near the Planckian locus to correlated color temperature, is discussed in . Correlated color temperature The mathematical procedure for determining the correlated color temperature involves finding the closest point to the light source's white point on the Planckian locus. Since the CIE's 1959 meeting in Brussels, the Planckian locus has been computed using the CIE 1960 color space, also known as MacAdam's (u,v) diagram. Today, the CIE 1960 color space is deprecated for other purposes: Owing to the perceptual inaccuracy inherent to the concept, it suffices to calculate to within 2 K at lower CCTs and 10 K at higher CCTs to reach the threshold of imperceptibility. International Temperature Scale The Planckian locus is derived by the determining the chromaticity values of a Planckian radiator using the standard colorimetric observer. The relative spectral power distribution (SPD) of a Planckian radiator follows Planck's law, and depends on the second radiation constant, . As measuring techniques have improved, the General Conference on Weights and Measures has revised its estimate of this constant, with the International Temperature Scale (and briefly, the International Practical Temperature Scale). These successive revisions caused a shift in the Planckian locus and, as a result, the correlated color temperature scale. Before ceasing publication of standard illuminants, the CIE worked around this problem by explicitly specifying the form of the SPD, rather than making references to black bodies and a color temperature. Nevertheless, it is useful to be aware of previous revisions in order to be able to verify calculations made in older texts: = (ITS-27). Note: Was in effect during the standardization of Illuminants A, B, C (1931), however the CIE used the value recommended by the U.S. National Bureau of Standards, 1.435 × 10−2 = (IPTS-48). In effect for Illuminant series D (formalized in 1967). = (ITS-68), (ITS-90). Often used in recent papers. = (CODATA 2010) = (CODATA 2014) = (CODATA 2018). Current value, as of 2020. The 2019 revision of the SI fixed the Boltzmann constant to an exact value. Since the Planck constant and the speed of light were already fixed to exact values, that means that c2 is now an exact value as well. Note that ... doesn't indicate a repeating fraction; it merely means that of this exact value only the first ten digits are shown.
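As an illustration of the integration described above, the following sketch computes a point on the Planckian locus by evaluating Planck's law and integrating it against the CIE 1931 color matching functions, then converting the resulting (x, y) chromaticity to the CIE 1960 (u, v) coordinates used for correlated color temperature. It is only a minimal sketch: the caller is assumed to supply a sampled table of the color matching functions, the radiation constants are derived from the current exact SI values, and the function names are illustrative rather than part of any standard library.

```python
import numpy as np

# 2018 SI values; h, c and k are exact, so c1 and c2 are exact as well.
H = 6.62607015e-34         # Planck constant, J*s
C = 2.99792458e8           # speed of light, m/s
K = 1.380649e-23           # Boltzmann constant, J/K
C1 = 2 * np.pi * H * C**2  # first radiation constant (exitance form), W*m^2
C2 = H * C / K             # second radiation constant, m*K

def planck_exitance(lam_m, temp_k):
    """Black-body spectral radiant exitance M(lambda, T) in W/m^3."""
    return C1 / (lam_m**5 * (np.exp(C2 / (lam_m * temp_k)) - 1.0))

def planckian_xy(temp_k, lam_nm, xbar, ybar, zbar):
    """Chromaticity (x, y) of a Planckian radiator at temperature temp_k.

    lam_nm, xbar, ybar, zbar are sampled CIE 1931 color matching
    functions (e.g. 1 nm steps from 360 nm to 830 nm), supplied by the
    caller; the absolute scale of M cancels when x and y are formed.
    """
    m = planck_exitance(lam_nm * 1e-9, temp_k)
    x_t = np.trapz(m * xbar, lam_nm)
    y_t = np.trapz(m * ybar, lam_nm)
    z_t = np.trapz(m * zbar, lam_nm)
    total = x_t + y_t + z_t
    return x_t / total, y_t / total

def xy_to_uv(x, y):
    """CIE 1931 (x, y) -> CIE 1960 (u, v), the space used for CCT work."""
    denom = 12.0 * y - 2.0 * x + 3.0
    return 4.0 * x / denom, 6.0 * y / denom
```

With a standard 1 nm color matching function table, planckian_xy(6500, ...) yields a chromaticity in the neighbourhood of (0.31, 0.32), i.e. the whitish region of the diagram, as expected from the qualitative description above.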
Physical sciences
Thermodynamics
Physics
1729542
https://en.wikipedia.org/wiki/Neural%20network%20%28biology%29
Neural network (biology)
A neural network, also called a neuronal network, is an interconnected population of neurons (typically containing multiple neural circuits). Biological neural networks are studied to understand the organization and functioning of nervous systems. Closely related are artificial neural networks, machine learning models inspired by biological neural networks. They consist of artificial neurons, which are mathematical functions that are designed to be analogous to the mechanisms used by neural circuits. Overview A biological neural network is composed of a group of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic synapses and other connections are possible. Apart from electrical signalling, there are other forms of signalling that arise from neurotransmitter diffusion. Artificial intelligence, cognitive modelling, and artificial neural networks are information processing paradigms inspired by how biological neural systems process data. Artificial intelligence and cognitive modelling try to simulate some properties of biological neural networks. In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots. Neural network theory has served to identify better how the neurons in the brain function and provide the basis for efforts to create artificial intelligence. History The preliminary theoretical base for contemporary neural networks was independently proposed by Alexander Bain (1873) and William James (1890). In their work, both thoughts and body activity resulted from interactions among neurons within the brain. For Bain, every activity led to the firing of a certain set of neurons. When activities were repeated, the connections between those neurons strengthened. According to his theory, this repetition was what led to the formation of memory. The general scientific community at the time was skeptical of Bain's theory because it required what appeared to be an inordinate number of neural connections within the brain. It is now apparent that the brain is exceedingly complex and that the same brain “wiring” can handle multiple problems and inputs. James' theory was similar to Bain's; however, he suggested that memories and actions resulted from electrical currents flowing among the neurons in the brain. His model, by focusing on the flow of electrical currents, did not require individual neural connections for each memory or action. C. S. Sherrington (1898) conducted experiments to test James' theory. He ran electrical currents down the spinal cords of rats. However, instead of demonstrating an increase in electrical current as projected by James, Sherrington found that the electrical current strength decreased as the testing continued over time. Importantly, this work led to the discovery of the concept of habituation. McCulloch and Pitts (1943) also created a computational model for neural networks based on mathematics and algorithms. They called this model threshold logic. These early models paved the way for neural network research to split into two distinct approaches. 
One approach focused on biological processes in the brain and the other focused on the application of neural networks to artificial intelligence. The parallel distributed processing of the mid-1980s became popular under the name connectionism. The text by Rumelhart and McClelland (1986) provided a full exposition on the use of connectionism in computers to simulate neural processes. Artificial neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and brain biological architecture is debated, as it is not clear to what degree artificial neural networks mirror brain function. Neuroscience Theoretical and computational neuroscience is the field concerned with the analysis and computational modeling of biological neural systems. Since neural systems are intimately related to cognitive processes and behaviour, the field is closely related to cognitive and behavioural modeling. The aim of the field is to create models of biological neural systems in order to understand how biological systems work. To gain this understanding, neuroscientists strive to make a link between observed biological processes (data), biologically plausible mechanisms for neural processing and learning (neural network models) and theory (statistical learning theory and information theory). Types of models Many models are used; defined at different levels of abstraction, and modeling different aspects of neural systems. They range from models of the short-term behaviour of individual neurons, through models of the dynamics of neural circuitry arising from interactions between individual neurons, to models of behaviour arising from abstract neural modules that represent complete subsystems. These include models of the long-term and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level. Connectivity In August 2020 scientists reported that bi-directional connections, or added appropriate feedback connections, can accelerate and improve communication between and in modular neural networks of the brain's cerebral cortex and lower the threshold for their successful communication. They showed that adding feedback connections between a resonance pair can support successful propagation of a single pulse packet throughout the entire network. The connectivity of a neural network stems from its biological structures and is usually challenging to map out experimentally. Scientists used a variety of statistical tools to infer the connectivity of a network based on the observed neuronal activities, i.e., spike trains. Recent research has shown that statistically inferred neuronal connections in subsampled neural networks strongly correlate with spike train covariances, providing deeper insights into the structure of neural circuits and their computational properties. Recent improvements While initially research had been concerned mostly with the electrical characteristics of neurons, a particularly important part of the investigation in recent years has been the exploration of the role of neuromodulators such as dopamine, acetylcholine, and serotonin on behaviour and learning. Biophysical models, such as BCM theory, have been important in understanding mechanisms for synaptic plasticity, and have had applications in both computer science and neuroscience.
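The McCulloch–Pitts "threshold logic" unit mentioned above can be illustrated in a few lines of code. This is a minimal sketch, not a biologically detailed model: the weights and thresholds below are illustrative choices showing how such a unit reproduces simple logic functions.

```python
def threshold_neuron(inputs, weights, threshold):
    """McCulloch-Pitts style unit: fire (1) if and only if the weighted
    sum of binary inputs reaches the threshold, otherwise stay silent (0)."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Illustrative example: a two-input unit with unit weights behaves like
# logical AND with threshold 2, and like logical OR with threshold 1.
AND = lambda a, b: threshold_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: threshold_neuron([a, b], [1, 1], threshold=1)

assert [AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
assert [OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 1]
```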
Biology and health sciences
Biology basics
Biology
1730534
https://en.wikipedia.org/wiki/September%20equinox
September equinox
The September equinox (or southward equinox) is the moment when the Sun appears to cross the celestial equator, heading southward. Because of differences between the calendar year and the tropical year, the September equinox may occur from September 21 to 24. At the equinox, the Sun as viewed from the equator rises due east and sets due west. Before the southward equinox, the Sun rises and sets more northerly, and afterwards, it rises and sets more southerly. The equinox may be taken to mark the end of astronomical summer and the beginning of astronomical autumn (autumnal equinox) in the Northern Hemisphere, while marking the end of astronomical winter and the start of astronomical spring (vernal equinox) in the Southern Hemisphere. Occurrences The September equinox is one point in time commonly used to determine the length of the tropical year. The dates and times of the September equinoxes from 2018 to 2028 (UTC) all fall on 22 or 23 September. Constellation The point where the Sun crosses the celestial equator southwards is called the First Point of Libra. However, because of the precession of the equinoxes, this point is no longer in the constellation Libra, but rather in Virgo. The solar point of the September equinox passed from Libra into Virgo in −729 (730 BCE) and will enter Leo in 2439. Apparent movement of the Sun in relation to the horizon At the equinox, the Sun rises directly in the east and sets directly in the west. However, because of refraction it will usually appear slightly above the horizon at the moment when its "true" middle is rising or setting. For viewers at the north or south poles, it moves virtually horizontally on or above the horizon, not obviously rising or setting apart from the movement in "declination" (and hence altitude) of a little under half a degree (0.39°) per day. For observers in either hemisphere not at the poles, the Sun rises and sets more and more to the south during the 3 months following the September equinox. This period is the second half of a 6-month-long southerly movement, beginning with the June solstice when the Sun rises and sets at its most northern point. Culture Calendars The September equinox marked the first day of the French Republican Calendar. Commemorations West Asia The southward equinox marks the first day of Mehr or Libra in the Iranian calendar. It is one of the Iranian festivals called Jashne Mehregan, or the festival of sharing or love in Zoroastrianism. East Asia In Japan, Autumnal Equinox Day (秋分の日, Shūbun no hi) is a public holiday. Higan (お彼岸) is a Buddhist holiday exclusively celebrated by Japanese sects during both the spring and autumnal equinoxes. In Korea, Chuseok is a major harvest festival and a three-day holiday celebrated around the autumn equinox. The Mid-Autumn Festival is celebrated on the 15th day of the 8th lunar month, often near the autumnal equinox day, and is an official holiday in mainland China, Hong Kong, Taiwan and in many countries with a significant Chinese minority. As the lunar calendar is not synchronous with the Gregorian calendar, this date could be anywhere from mid-September to early October. The traditional East Asian calendars divide a year into 24 solar terms (节气, literally "climatic segments"), and the autumnal equinox (Qiūfēn, 秋分) marks the middle of the autumn season. In this context, the Chinese character 分 means "(equal) division" (within a season). 
Judaism The Jewish Sukkot usually falls on the first full moon after the northern hemisphere autumnal equinox, although occasionally (In the modern Jewish calendar, three times every 19 years) it will occur on the second full moon. Rosh Hashanah falls on a new moon close to this equinox. Europe Dożynki is a Slavic harvest festival. In pre-Christian times the feast usually fell on the autumn equinox. The Southward equinox was "New Year's Day" in the French Republican Calendar, which was in use from 1793 to 1805. The French First Republic was proclaimed and the French monarchy was abolished on September 21, 1792, making the following day (the equinox day that year) the first day of the "Republican Era" in France. The start of every year was to be determined by astronomical calculations following the real Sun and not the mean Sun. The traditional harvest festival in the United Kingdom was celebrated on the Sunday of the full moon closest to the September equinox. Neopaganism Neopagans observe the September equinox as a cardinal point on the Wheel of the Year. In the Northern Hemisphere some varieties of paganism adapt Autumn Equinox traditions. In the Southern Hemisphere, the vernal equinox corresponds with Ostara. Americas The reconstructed Cahokia Woodhenge, a large timber circle located at the Mississippian culture Cahokia archaeological site near Collinsville, Illinois, is the site of annual equinox and solstice sunrise observances. An announcement for the 2017 observance said "Out of respect for Native American beliefs, no rituals or ceremonies will be held at the free event. But visitors will stand in the same place where the Mississippian people once gathered to watch the sun rise."
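The change in solar declination of a little under half a degree per day around the equinox, noted above for polar observers, can be reproduced with a common textbook cosine approximation of the declination. The formula below is an assumption of this sketch rather than anything taken from the article, and it is only suitable for illustration, not for ephemeris work.

```python
import math

def solar_declination_deg(day_of_year):
    """Approximate solar declination in degrees (simple cosine model)."""
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

# Near the September equinox (around day 265 of the year) the declination
# crosses zero, dropping by a little under half a degree per day.
for day in (263, 264, 265, 266):
    daily_change = solar_declination_deg(day + 1) - solar_declination_deg(day)
    print(day, round(solar_declination_deg(day), 2), round(daily_change, 2))
```

The printed daily changes come out at roughly −0.4° per day, in line with the figure quoted above.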
Physical sciences
Celestial sphere: General
Astronomy
1730537
https://en.wikipedia.org/wiki/March%20equinox
March equinox
The March equinox or northward equinox is the equinox on the Earth when the subsolar point appears to leave the Southern Hemisphere and cross the celestial equator, heading northward as seen from Earth. The March equinox is known as the vernal equinox (spring equinox) in the Northern Hemisphere and as the autumnal equinox (autumn equinox or fall equinox) in the Southern Hemisphere. On the Gregorian calendar at 0° longitude, the northward equinox can occur as early as 19 March (which happened most recently in 1796, and will happen next in 2044). And it can occur as late as 21 March (which happened most recently in 2007, and will happen next in 2102). For a common year the computed time slippage is about 5 hours 49 minutes later than the previous year, and for a leap year about 18 hours 11 minutes earlier than the previous year. Balancing the increases of the common years against the losses of the leap years keeps the calendar date of the March equinox from drifting more than one day from 20 March each year. The March equinox may be taken to mark the beginning of astronomical spring and the end of astronomical winter in the Northern Hemisphere but marks the beginning of astronomical autumn and the end of astronomical summer in the Southern Hemisphere. In astronomy, the March equinox is the zero point of sidereal time and, consequently, the right ascension and ecliptic longitude. It also serves as a reference for calendars and celebrations in many cultures and religions. Constellation The point where the Sun crosses the celestial equator northwards is called the First Point of Aries. However, due to the precession of the equinoxes, this point is no longer in the constellation Aries, but rather in Pisces. By the year 2600 it will be in Aquarius. The Earth's axis causes the First Point of Aries to travel westwards across the sky at a rate of roughly one degree every 72 years. Based on the modern constellation boundaries, the northward equinox passed from Taurus into Aries in the year −1865 (1866 BC), passed into Pisces in the year −67 (68 BC), will pass into Aquarius in the year 2597, and will pass into Capricornus in the year 4312. It passed by (but not into) a 'corner' of Cetus at 0°10′ distance in the year 1489. Apparent movement of the Sun In its apparent motion on the day of an equinox, the Sun's disk crosses the Earth's horizon directly to the east at sunrise; and again, some 12 hours later, directly to the west at sunset. The March equinox, like all equinoxes, is characterized by having an almost exactly equal amount of daylight and night across most latitudes on Earth. Culture Calendars The Babylonian calendar began with the first new moon after the March equinox, the day after the return of the Sumerian goddess Inanna (later known as Ishtar) from the underworld, in the Akitu ceremony, with parades through the Ishtar Gate to the Eanna temple and the ritual re-enactment of the marriage to Tammuz, or Sumerian Dummuzi. The Persian calendar begins each year at the northward equinox, observationally determined at Tehran. The Indian national calendar starts the year on the day next to the vernal equinox on 22 March (21 March in leap years) with a 30-day month (31 days in leap years), then has 5 months of 31 days followed by 6 months of 30 days. Julian calendar The Julian calendar reform lengthened seven months and replaced the intercalary month with an intercalary day to be added every four years to February. 
It was based on a length for the year of 365 days and 6 hours (365.25 d), while the mean tropical year is about 11 minutes and 15 seconds less than that. This had the effect of adding about three quarters of an hour every four years. The effect accumulated from inception in 45 BC until the 16th century, when the northern vernal equinox fell on 10 or 11 March. The date in 1452 was 11 March, 11:52 (Julian). In 2547 it will be 20 March, 21:18 (Gregorian) and 3 March, 21:18 (Julian). Commemorations Abrahamic tradition The Jewish Passover usually falls on the first full moon after the Northern Hemisphere vernal equinox, although occasionally (currently three times every 19 years) it will occur on the second full moon. The Christian Churches calculate Easter as the first Sunday after the first full moon on or after the March equinox. The official church definition for the equinox is 21 March. The Eastern Orthodox Churches use the older Julian calendar, while the western churches use the Gregorian calendar, and the western full moons currently fall four, five or 34 days before the eastern ones. The result is that the two Easters generally fall on different days but they sometimes coincide. The earliest possible western Easter date in any year is 22 March on each calendar. The latest possible western Easter date in any year is 25 April. Iranian tradition The northward equinox marks the first day of various calendars including the Iranian calendar. The ancient Iranian peoples' new year's festival of Nowruz can be celebrated 20 March or 21 March. According to the ancient Persian mythology Jamshid, the mythological king of Persia, ascended to the throne on this day and each year this is commemorated with festivities for two weeks. Along with Iranian peoples, it is also a holiday celebrated by Turkic people, the North Caucasus and in Albania. It is also a holiday for Zoroastrians, adherents of the Baháʼí Faith and Nizari Ismaili Muslims irrespective of ethnicity. West Asia and North Africa In many Arab countries, Mother's Day is celebrated on the northward equinox. Sham el-Nessim is a modern celebration which is claimed by some to have been celebrated in ancient Egypt but with little evidence. It is one of the public holidays in Egypt. It is assumed by some that sometime during Egypt's Christian period (–639) the date moved to Easter Monday, but before then it coincided with the vernal equinox. South and Southeast Asia According to the sidereal solar calendar, celebrations which originally coincided with the March equinox now take place throughout South Asia and parts of Southeast Asia on the day when the Sun enters the sidereal Aries, generally around 14 April. In Cambodia, the Angkor Wat Equinox is a solar phenomenon which dates back to the reign of Suryavarman II. East Asia The traditional East Asian calendars divide a year into 24 solar terms (节气, literally "climatic segments"), and the vernal equinox (Chūnfēn, ) marks the middle of the spring. In this context, the Chinese character 分 means "(equal) division" (within a season). In Japan, Vernal Equinox Day (春分の日 Shunbun no hi) is an official national holiday, and is spent visiting family graves and holding family reunions. Higan (お彼岸) is a Buddhist holiday exclusively celebrated by Japanese sects during both the Spring and Autumnal Equinox. Europe Dita e Verës or Verëza is the Albanian pagan feast that celebrates the spring equinox: the beginning of the spring-summer period. 
It is traditionally celebrated throughout Albanian-inhabited territories, and officially in Albania. Hilaria was an ancient Roman festival commemorating the death and resurrection of Attis. Lieldienas is the traditional Latvian spring festival. In Norse paganism, a Dísablót was celebrated on the vernal equinox. The Drowning of Marzanna is a Slavic custom marking the end of winter. The Americas Spring equinox in Teotihuacán The reconstructed Cahokia Woodhenge, a large timber circle located at the Mississippian culture Cahokia archaeological site near Collinsville, Illinois, is the site of annual equinox and solstice sunrise observances. Out of respect for Native American beliefs these events do not feature ceremonies or rituals of any kind. Modern culture World Storytelling Day is a global celebration of the art of oral storytelling, celebrated every year on the day of the northward equinox. World Citizen Day occurs on the northward equinox. The Baháʼí calendar year starts at the sunset preceding the March equinox calculated for Tehran. In Annapolis, Maryland, United States, boatyard employees and sailboat owners celebrate the spring equinox with the "Burning of the Socks" festival. Traditionally, the boating community wears socks only during the winter. These are burned at the approach of warmer weather, which brings more customers and work to the area. Officially, nobody then wears socks until the next equinox. Neopagans observe the March equinox (referred to as Ostara) as a cardinal point on the Wheel of the Year. In the northern hemisphere some varieties of paganism adapt vernal equinox celebrations, while in the southern hemisphere pagans adapt autumnal traditions. International Astrology Day is observed around the northward equinox. On 20 March 2014 and 20 March 2018, the March equinox was commemorated by an animated Google Doodle.
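The calendar arithmetic described earlier in this article can be made concrete with a short sketch. It simply re-uses the figures quoted above (an equinox about 5 h 49 min later after a common year, about 18 h 11 min earlier after a leap year, and a Julian mean year about 11 min 15 s longer than the tropical year); the variable names and the 1,600-year span are illustrative assumptions.

```python
# Gregorian calendar: the equinox slips later in common years and jumps
# earlier after a leap day, using the figures quoted above.
COMMON_SHIFT_MIN = 5 * 60 + 49      # ~5 h 49 min later per common year
LEAP_SHIFT_MIN = -(18 * 60 + 11)    # ~18 h 11 min earlier per leap year

net_per_4_years = 3 * COMMON_SHIFT_MIN + LEAP_SHIFT_MIN
print(f"Net Gregorian shift per 4-year cycle: {net_per_4_years} min")  # -44 min

# Julian calendar: a mean year of 365.25 days overruns the tropical year
# by about 11 min 15 s, i.e. roughly three quarters of an hour every
# four years, so the equinox date creeps steadily earlier.
overrun_s_per_year = 11 * 60 + 15
drift_days = 1600 * overrun_s_per_year / 86400   # ~1,600 years from 45 BC
print(f"Julian drift after about 1,600 years: {drift_days:.1f} days")
```

The small net Gregorian shift per four-year cycle is what keeps the equinox date from drifting far from 20 March, while the roughly twelve days of accumulated Julian slippage is consistent with the equinox having moved back to 10 or 11 March by the 16th century, as noted above.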
Physical sciences
Celestial sphere: General
Astronomy
1730553
https://en.wikipedia.org/wiki/Milliradian
Milliradian
A milliradian (SI symbol mrad, sometimes also abbreviated mil) is an SI derived unit for angular measurement which is defined as a thousandth of a radian (0.001 radian). Milliradians are used in adjustment of firearm sights by adjusting the angle of the sight compared to the barrel (up, down, left, or right). Milliradians are also used for comparing shot groupings, or to compare the difficulty of hitting different sized shooting targets at different distances. When using a scope with both mrad adjustment and a reticle with mrad markings (called an "mrad/mrad scope"), the shooter can use the reticle as a ruler to count the number of mrads a shot was off-target, which directly translates to the sight adjustment needed to hit the target with a follow-up shot. Optics with mrad markings in the reticle can also be used to make a range estimation of a known size target, or vice versa, to determine a target size if the distance is known, a practice called "milling". Because milliradians correspond to very small angles, very accurate small-angle approximations can be used, so that the angular separation observed in an optic, the linear subtension on the target, and the range can be converted back and forth with simple direct proportions. In such applications it is useful to use a unit for target size that is a thousandth of the unit for range, for instance by using the metric units millimeters for target size and meters for range. This coincides with the definition of the milliradian, where the arc length is defined as 1/1,000 of the radius. A common adjustment value in firearm sights is 1 cm at 100 meters, which equals 10 mm / 100 m = 0.1 mrad. The true definition of a milliradian is based on a unit circle with a radius of one and an arc divided into 1,000 mrad per radian, hence 2,000π or approximately 6,283.185 milliradians in one turn, and rifle scope adjustments and reticles are calibrated to this definition. There are also other definitions used for land mapping and artillery which are rounded to more easily be divided into smaller parts for use with compasses, which are then often referred to as "mils", "lines", or similar. For instance there are artillery sights and compasses with 6,400 NATO mils, 6,000 Warsaw Pact mils or 6,300 Swedish "streck" per turn instead of 360° or 2π radians, achieving higher resolution than a 360° compass while also being easier to divide into parts than if true milliradians were used. History The milliradian (approximately 6,283.185 in a circle) was first used in the mid-19th century by Charles-Marc Dapples (1837–1920), a Swiss engineer and professor at the University of Lausanne. Degrees and minutes were the usual units of angular measurement, but others were being proposed, with "grads" (400 gradians in a circle) under various names having considerable popularity in much of northern Europe. However, Imperial Russia used a different approach, dividing a circle into equilateral triangles (60° per triangle, 6 triangles in a circle) and hence 600 units to a circle. Around the time of the start of World War I, France was experimenting with the use of millièmes or angular mils (6,400 in a circle) for use with artillery sights instead of decigrades (4,000 in a circle). The United Kingdom was also trialing them to replace degrees and minutes. They were adopted by France, although decigrades also remained in use throughout World War I. Other nations also used decigrades. The United States, which copied many French artillery practices, adopted angular mils, later known as NATO mils. 
Before 2007 the Swedish defence forces used "streck" (6,300 in a circle, streck meaning lines or marks) (together with degrees for some navigation), which is closer to the true milliradian, but then changed to NATO mils. After the Bolshevik Revolution and the adoption of the metric system of measurement (e.g. artillery replaced "units of base" with meters) the Red Army expanded the 600 unit circle into a 6,000 mil circle. Hence the Russian mil has a somewhat different origin than those derived from French artillery practices. In the 1950s, NATO adopted metric units of measurement for land and general use. NATO mils, meters, and kilograms became standard, although degrees remained in use for naval and air purposes, reflecting civil practices. Mathematical principle Use of the milliradian is practical because it is concerned with small angles, and when using radians the small angle approximation shows that the angle approximates to the sine of the angle, that is sin θ ≈ θ. This allows a user to dispense with trigonometry and use simple ratios to determine size and distance with high accuracy for rifle and short distance artillery calculations by using the handy property of subtension: one mrad approximately subtends one meter at a distance of one thousand meters. More in detail, because tan θ ≈ θ for small angles, instead of finding the angular distance denoted by θ (Greek letter theta) by using the tangent function tan θ = subtension / range, one can instead make a good approximation by using the definition of a radian and the simplified formula: θ (in mrad) ≈ 1,000 × subtension / range, i.e. the subtension in millimeters divided by the range in meters. Since a radian is mathematically defined as the angle formed when the length of a circular arc equals the radius of the circle, a milliradian is the angle formed when the length of a circular arc equals 1/1,000 of the radius of the circle. Just like the radian, the milliradian is dimensionless, but unlike the radian, where the same unit must be used for radius and arc length, the milliradian needs a ratio between the units where the subtension is a thousandth of the radius when using the simplified formula. Approximation error The approximation error from using the simplified linear formula increases as the angle increases. For example, there is only about a 0.0000003% (roughly 3 parts per billion) error for an angle of 0.1 mrad, for instance by assuming 0.1 mrad equals 1 cm at 100 m; a 0.03% error for 30 mrad, i.e. assuming 30 mrad equals 30 m at 1 km; and a 2.9% error for 300 mrad, i.e. assuming 300 mrad equals 300 m at 1 km. The approximation using mrad is more precise than using another common system where 1′ (minute of arc) is approximated as 1 inch at 100 yards, where comparably there is a 4.72% error by assuming that an angle of 1′ equals 1 inch at 100 yd; a 4.75% error for 100′, i.e. assuming 100′ equals 100 in at 100 yd; and a 7.36% error for 1000′, i.e. assuming 1000′ equals 1000 inches at 100 yd. Sight adjustment Milliradian adjustment is commonly used as a unit for clicks in the mechanical adjustment knobs (turrets) of iron and scope sights both in the military and civilian shooting sports. New shooters are often taught the principle of subtension in order to understand that a milliradian is an angular measurement. Subtension is the physical amount of space covered by an angle and varies with distance. Thus, the subtension corresponding to a mrad (either in an mrad reticle or in mrad adjustments) varies with range. Knowing subtensions at different ranges can be useful for sighting in a firearm if there is no optic with an mrad reticle available, but involves mathematical calculations, and is therefore not used very much in practical applications. 
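As a rough illustration of the proportions described above, the following sketch (Python, with purely illustrative numbers and function names) computes the linear subtension of an angle given in milliradians at a given range, and compares the simplified linear formula with the exact tangent to show how the approximation error grows with the angle.

```python
import math

def subtension_mm(angle_mrad: float, range_m: float) -> float:
    """Simplified linear formula: 1 mrad subtends as many millimeters
    as there are meters of range."""
    return angle_mrad * range_m

def relative_error(angle_mrad: float) -> float:
    """Relative error of the linear formula compared with the exact tangent."""
    angle_rad = angle_mrad / 1000.0
    exact = math.tan(angle_rad)
    return (exact - angle_rad) / exact

# 0.1 mrad subtends 10 mm at 100 m and 20 mm at 200 m, as stated above.
print(subtension_mm(0.1, 100))   # -> 10.0
print(subtension_mm(0.1, 200))   # -> 20.0

# The error of the linear approximation grows with the angle:
for a in (0.1, 30, 300):         # mrad
    print(f"{a} mrad: relative error {relative_error(a):.2e}")
# roughly 3e-09 (parts per billion), 3e-04 (0.03 %) and 3e-02 (about 3 %)
```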
Subtensions always change with distance, but an mrad (as observed through an optic) is always an mrad regardless of distance. Therefore, ballistic tables and shot corrections are given in mrads, thereby avoiding the need for mathematical calculations. If a rifle scope has mrad markings in the reticle (or there is a spotting scope with an mrad reticle available), the reticle can be used to measure how many mrads to correct a shot even without knowing the shooting distance. For instance, assuming a precise shot fired by an experienced shooter missed the target by 0.8 mrad as seen through an optic, and the firearm sight has 0.1 mrad adjustments, the shooter must then dial 8 clicks on the scope to hit the same target under the same conditions. Common click values General purpose scopes Gradations (clicks) of 1/4′, 0.1 mrad and 1/8′ are used in general purpose sights for hunting, target and long range shooting at varied distances. The click values are fine enough to get dialed in for most target shooting and coarse enough to keep the number of clicks down when dialing. Speciality scopes Finer gradations such as 0.05 mrad and 0.025 mrad are used in speciality scope sights for extreme precision at fixed target ranges such as benchrest shooting. Some specialty iron sights used in ISSF 10 m, 50 m and 300 meter rifle come with adjustments of 0.05 mrad or 0.025 mrad. The small adjustment value means these sights can be adjusted in very small increments. These fine adjustments are however not very well suited for dialing between varied distances such as in field shooting because of the high number of clicks that will be required to move the line of sight, making it easier to lose track of the number of clicks than in scopes with larger click adjustments. For instance to move the line of sight 0.4 mrad, a 0.1 mrad scope must be adjusted 4 clicks, while comparably a 0.05 mrad and 0.025 mrad scope must be adjusted 8 and 16 clicks respectively. Others A few other click values can be found in some short range sights, mostly with capped turrets, but these are not very widely used. Subtensions at different distances Subtension refers to the length between two points on a target, and is usually given in either centimeters, millimeters or inches. Since an mrad is an angular measurement, the subtension covered by a given angle (angular distance or angular diameter) increases with viewing distance to the target. For instance the same angle of 0.1 mrad will subtend 10 mm at 100 meters, 20 mm at 200 meters, etc., or similarly 0.39 inches at 100 m, 0.78 inches at 200 m, etc. Subtensions in mrad based optics are particularly useful together with target sizes and shooting distances in metric units. The most common scope adjustment increment in mrad based rifle scopes is 0.1 mrad, which are sometimes called "one centimeter clicks" since 0.1 mrad equals exactly 1 cm at 100 meters, 2 cm at 200 meters, etc. Similarly, an adjustment click on a scope with 0.2 mrad adjustment will move the point of bullet impact 2 cm at 100 m and 4 cm at 200 m, etc. When using a scope with both mrad adjustment and a reticle with mrad markings (called a mrad/mrad scope), the shooter can spot his own bullet impact and easily correct the sight if needed. If the shot was a miss, the mrad reticle can simply be used as a "ruler" to count the number of milliradians the shot was off target. The number of milliradians to correct is then multiplied by ten if the scope has 0.1 mrad adjustments. If for instance the shot was 0.6 mrad to the right of the target, 6 clicks will be needed to adjust the sight. 
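A minimal sketch of the click arithmetic just described, assuming a turret with a known click value (0.1 mrad being the most common); the function name and rounding behaviour are illustrative assumptions rather than any manufacturer's specification.

```python
def clicks_to_dial(miss_mrad: float, click_value_mrad: float = 0.1) -> int:
    """Turret clicks needed to move the point of impact by the observed miss,
    measured with the mrad reticle; assumes the stated click value per click."""
    return round(miss_mrad / click_value_mrad)

print(clicks_to_dial(0.8))   # -> 8 clicks, as in the 0.8 mrad example above
print(clicks_to_dial(0.6))   # -> 6 clicks, as in the 0.6 mrad example above
```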
With this reticle-as-ruler method there is no need for math, conversions, or knowledge of target size or distance. This is true for a first focal plane scope at all magnifications, but a variable second focal plane scope must be set to a given magnification (usually its maximum magnification) for any mrad scales to be correct. When using a scope with mrad adjustments, but without mrad markings in the reticle (i.e. a standard duplex cross-hair on a hunting or benchrest scope), sight correction for a known target subtension and known range can be calculated by the following formula, which utilizes the fact that an adjustment of 1 mrad changes the impact as many millimeters as there are meters: correction (in mrad) = subtension (in mm) / range (in m). For instance, a correction of 0.4 mrad corresponds to 4 clicks with a 0.1 mrad adjustment scope, and a correction of 0.05 mrad corresponds to 1 click with a 0.05 mrad adjustment scope. In firearm optics, where 0.1 mrad per click is the most common mrad based adjustment value, another common rule of thumb is that an adjustment of 0.1 mrad changes the impact as many centimeters as there are hundreds of meters. In other words, 1 cm at 100 meters, 2.25 cm at 225 meters, 0.5 cm at 50 meters, etc. See the table below. Adjustment range and base tilt The horizontal and vertical adjustment range of a firearm sight is often advertised by the manufacturer using mrads. For instance a rifle scope may be advertised as having a vertical adjustment range of 20 mrad, which means that by turning the turret the bullet impact can be moved a total of 20 meters at 1000 meters (or 2 m at 100 m, 4 m at 200 m, 6 m at 300 m etc.). The horizontal and vertical adjustment ranges can be different for a particular sight, for instance a scope may have 20 mrad vertical and 10 mrad horizontal adjustment. Elevation ranges differ between models, but about 10–11 mrad are common in hunting scopes, while scopes made for long range shooting usually have an adjustment range of 20–30 mrad (70–100 moa). Sights can either be mounted in neutral or tilted mounts. In a neutral mount (also known as "flat base" or non-tilted mount) the sight will point reasonably parallel to the barrel, and be close to a zero at 100 meters (about 1 mrad low depending on rifle and caliber). After zeroing at 100 meters the sight will thereafter always have to be adjusted upwards to compensate for bullet drop at longer ranges, and therefore the adjustment below zero will never be used. This means that when using a neutral mount only about half of the scope's total elevation will be usable for shooting at longer ranges: usable elevation ≈ total elevation / 2. In most regular sport and hunting rifles (except for long range shooting), sights are usually mounted in neutral mounts. This is done because the optical quality of the scope is best in the middle of its adjustment range, and only being able to use half of the adjustment range to compensate for bullet drop is seldom a problem at short and medium range shooting. However, in long range shooting tilted scope mounts are common since it is very important to have enough vertical adjustment to compensate for the bullet drop at longer distances. For this purpose scope mounts are sold with varying degrees of tilt, but some common values are: 3 mrad, which equals 3 m at 1000 m (or 0.3 m at 100 m); 6 mrad, which equals 6 m at 1000 m (or 0.6 m at 100 m); and 9 mrad, which equals 9 m at 1000 m (or 0.9 m at 100 m). With a tilted mount the maximum usable scope elevation can be found by: usable elevation ≈ total elevation / 2 + mount tilt. 
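The usable-elevation arithmetic above can be summarised in a short sketch; it simply assumes, as stated, that a neutrally mounted scope sits near the middle of its adjustment range when zeroed at 100 meters, and that mount tilt adds directly to the elevation available for dialing up.

```python
def usable_elevation_mrad(total_adjustment_mrad: float, mount_tilt_mrad: float = 0.0) -> float:
    """Approximate elevation available above a 100 m zero: about half the total
    adjustment range, plus any forward tilt built into the scope mount."""
    return total_adjustment_mrad / 2 + mount_tilt_mrad

print(usable_elevation_mrad(20))      # neutral mount: -> 10.0 mrad usable
print(usable_elevation_mrad(26))      # neutral mount: -> 13.0 mrad usable
print(usable_elevation_mrad(14, 6))   # 6 mrad tilted mount: -> 13.0 mrad usable
```

The last two calls correspond to the .308 example that follows.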
The adjustment range needed to shoot at a certain distance varies with firearm, caliber and load. For example, with a certain .308 load and firearm combination, the bullet may drop 13 mrad at 1000 meters (13 meters). To be able to reach out, one could either: use a scope with 26 mrad of adjustment in a neutral mount, to get a usable adjustment of 26 / 2 = 13 mrad; or use a scope with 14 mrad of adjustment and a 6 mrad tilted mount to achieve a maximum adjustment of 14 / 2 + 6 = 13 mrad. Shot groupings A shot grouping is the spread of multiple shots on a target, taken in one shooting session. The group size on target in milliradians can be obtained by measuring the spread of the rounds on target in millimeters with a caliper and dividing by the shooting distance in meters. This way, using milliradians, one can easily compare shot groupings or target difficulties at different shooting distances. If the firearm is attached in a fixed mount and aimed at a target, the shot grouping measures the firearm's mechanical precision and the uniformity of the ammunition. When the firearm also is held by a shooter, the shot grouping partly measures the precision of the firearm and ammunition, and partly the shooter's consistency and skill. Often the shooter's skill is the most important element towards achieving a tight shot grouping, especially when competitors are using the same match grade firearms and ammunition. Range estimation with mrad reticles Many telescopic sights used on rifles have reticles that are marked in mrad. This can either be accomplished with lines or dots, and the latter is generally called mil-dots. The mrad reticle serves two purposes, range estimation and trajectory correction. With a mrad reticle-equipped scope the distance to an object can be estimated with a fair degree of accuracy by a trained user by determining how many milliradians an object of known size subtends. Once the distance is known, the drop of the bullet at that range (see external ballistics), converted back into milliradians, can be used to adjust the aiming point. Generally mrad-reticle scopes have both horizontal and vertical crosshairs marked; the horizontal and vertical marks are used for range estimation and the vertical marks for bullet drop compensation. Trained users, however, can also use the horizontal dots to compensate for bullet drift due to wind. Milliradian-reticle-equipped scopes are well suited for long shots under uncertain conditions, such as those encountered by military and law enforcement snipers, varmint hunters and other field shooters. These riflemen must be able to aim at varying targets at unknown (sometimes long) distances, so accurate compensation for bullet drop is required. Angle can be used for calculating either target size or range if one of them is known: where the range is known the angle gives the size, and where the size is known the angle gives the range. When out in the field, angle can be measured approximately by using calibrated optics or roughly using one's fingers and hands. With an outstretched arm one finger is approximately 30 mrad wide, a fist 150 mrad and a spread hand 300 mrad. Milliradian reticles often have dots or marks with a spacing of 1 mrad in between, but graduations can also be finer and coarser (e.g. 0.8 or 1.2 mrad). Units for target size and range While a radian is defined as an angle on the unit circle where the arc and radius have equal length, a milliradian is defined as the angle where the arc length is one thousandth of the radius. 
Therefore, when using milliradians for range estimation, the unit used for target distance needs to be a thousand times as large as the unit used for target size. Metric units are particularly useful in conjunction with a mrad reticle because the mental arithmetic is much simpler with decimal units, thereby requiring less mental calculation in the field. Using the range estimation formula range (in m) = target size (in mm) / angle (in mrad), it is just a matter of moving decimals and doing the division, without the need of multiplication with additional constants, thus producing fewer rounding errors. The same holds true for calculating target distance in kilometers using target size in meters. Also, in general the same unit can be used for subtension and range if multiplied by a factor of a thousand, i.e. range = 1,000 × target size / angle (in mrad). If using the imperial units yards for distance and inches for target size, one has to multiply by a factor of 1,000/36 ≈ 27.78, since there are 36 inches in one yard. If using the metric unit meters for distance and the imperial unit inches for target size, one has to multiply by a factor of 25.4, since one inch is defined as 25.4 millimeters. Practical examples Land Rovers are about 3 to 4 m long, a "smaller tank" or APC/MICV about 6 m (e.g. T-34 or BMP), and a "big tank" about 10 m. From the front a Land Rover is about 1.5 m wide, most tanks around 3–3.5 m. So a SWB Land Rover from the side is one finger wide at about 100 m. A modern tank would have to be at a bit over 300 m. If, for instance, a target known to be 1.5 m in height (1,500 mm) is measured to 2.8 mrad in the reticle, the range can be estimated to 1,500 / 2.8 ≈ 536 m. So if the above-mentioned 6 m long BMP (6,000 mm) is viewed at 6 mrad its distance is 1,000 m, and if the angle of view is twice as large (12 mrad) the distance is half as much, 500 m. When used with some riflescopes of variable objective magnification and fixed reticle magnification (where the reticle is in the second focal plane), the formula can be modified to: range (in m) = target size (in mm) × mag / (10 × angle (in mrad)), where mag is the scope magnification and the reticle is assumed to be calibrated at 10× magnification. However, a user should verify this with their individual scope since some are not calibrated at 10×. As above, target distance and target size can be given in any two units of length with a ratio of 1000:1. Mixing mrad and minutes of arc It is possible to purchase rifle scopes with a mrad reticle and minute-of-arc turrets, but the general consensus is that such mixing should be avoided. It is preferred to either have both a mrad reticle and mrad adjustment (mrad/mrad), or a minute-of-arc reticle and minute-of-arc adjustment, to utilize the strength of each system. Then the shooter can know exactly how many clicks to correct based on what he sees in the reticle. If using a mixed system scope that has a mrad reticle and arcminute adjustment, one way to make use of the reticle for shot corrections is to exploit the fact that 14′ approximately equals 4 mrad, and thereby multiply an observed correction in mrad by 14/4 = 3.5 when adjusting the turrets. Conversion table for firearms In the table below conversions from mrad to metric values are exact (e.g. 0.1 mrad equals exactly 1 cm at 100 meters), while conversions of minutes of arc to both metric and imperial values are approximate. 0.1 mrad equals exactly 1 cm at 100 m; 1 mrad ≈ 3.44′, so 0.1 mrad ≈ 0.344′; 1′ ≈ 0.291 mrad (or 2.91 cm at 100 m, approximately 3 cm at 100 m). 
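The metric range-estimation ("milling") relationships above lend themselves to a short sketch. The functions below (illustrative only) implement range in meters as target size in millimeters divided by the reticle reading in mrad, plus the second-focal-plane correction described above, which assumes the reticle is only true at 10× magnification.

```python
def range_m(target_size_mm: float, reading_mrad: float) -> float:
    """Range in meters to a target of known size (mm) measured in the reticle (mrad)."""
    return target_size_mm / reading_mrad

def range_m_sfp(target_size_mm: float, reading_mrad: float,
                magnification: float, calibrated_at: float = 10.0) -> float:
    """Same estimate for a second focal plane scope whose mrad reticle is only
    true at `calibrated_at` magnification (assumed 10x, as noted above)."""
    true_mrad = reading_mrad * calibrated_at / magnification
    return target_size_mm / true_mrad

# A 1.5 m (1,500 mm) tall target measured at 2.8 mrad is roughly 536 m away,
# and a 6 m (6,000 mm) long vehicle seen at 6 mrad is 1,000 m away, as above.
print(round(range_m(1500, 2.8)))   # -> 536
print(range_m(6000, 6.0))          # -> 1000.0
```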
Definitions for maps and artillery Because of the definition of pi, in a circle with a diameter of one there are 2,000π milliradians (≈ 6,283.185) per full turn. In other words, one real milliradian covers just under 1/6,283 of the circumference of a circle, which is the definition used by telescopic rifle sight manufacturers in reticles for stadiametric rangefinding. For maps and artillery, three rounded definitions are used which are close to the real definition, but can more easily be divided into parts. The different map and artillery definitions are sometimes referred to as "angular mils", and are: 1/6,400 of a circle in NATO countries; 1/6,000 of a circle in the former Soviet Union and Finland (Finland is phasing out the standard in favour of the NATO standard); and 1/6,300 of a circle in Sweden. The Swedish term for this is streck, literally "line". Reticles in some artillery sights are calibrated to the relevant artillery definition for that military, e.g. the Carl Zeiss OEM-2 artillery sight made in East Germany from 1969 to 1976 is calibrated for the eastern bloc 6,000 mil circle. Various symbols have been used to represent angular mils for compass use: mil, MIL and similar abbreviations are often used by militaries in the English speaking part of the world. ‰, called "artillery per milles" (German: Artilleriepromille), is a symbol used by the Swiss Army. ¯, called "artillery line" (German: artilleristische Strich), is a symbol used by the German Army (not to be confused with the compass point (German: Nautischer Strich, 32 "nautical lines" per circle), which sometimes uses the same symbol; however, the DIN standard (DIN 1301 part 3) is to use ¯ for artillery lines, and " for nautical lines). ₥, called "thousandths" (French: millièmes), is a symbol used on some older French compasses. A superscript v, ᵛ (Finnish: piiru, Swedish: delstreck), is a symbol used by the Finnish Defence Forces for the standard Warsaw Pact mil; it is sometimes just written as a plain v if superscript is not available. Conversion table for compasses Use in artillery sights Artillery uses angular measurement in gun laying, the azimuth between the gun and its target many kilometers away and the elevation angle of the barrel. This means that artillery uses mils to graduate indirect fire azimuth sights (called dial sights or panoramic telescopes), their associated instruments (directors or aiming circles), their elevation sights (clinometers or quadrants), together with their manual plotting devices, firing tables and fire control computers. Artillery spotters typically use their calibrated binoculars to move fired projectiles' impact onto a target. Here they know the approximate range to the target and so can read off the angle (plus a quick calculation) to give the left/right corrections in meters. A mil is a meter at a range of one thousand meters (for example, to move the impact of an artillery round 100 meters by a gun firing from 3 km away, it is necessary to shift the direction by 100/3 = 33.3 mils). Other scientific and technological uses The milliradian (and other SI multiples) is also used in other fields of science and technology for describing small angles, e.g. measuring alignment, collimation, and beam divergence in optics, and accelerometers and gyroscopes in inertial navigation systems.
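Since the map and artillery "mils" above are simply different roundings of the full circle, converting between them and true milliradians is a matter of rescaling by the number of units per turn. A small sketch under that assumption (the dictionary keys are illustrative names, not official designations):

```python
import math

# Units per full turn for the systems described above.
UNITS_PER_TURN = {
    "true_mrad": 2000 * math.pi,   # ~6,283.185, the true milliradian
    "nato_mil": 6400,
    "warsaw_pact_mil": 6000,
    "swedish_streck": 6300,
}

def convert(value: float, from_unit: str, to_unit: str) -> float:
    """Convert an angle between mil-like systems by rescaling units per turn."""
    return value / UNITS_PER_TURN[from_unit] * UNITS_PER_TURN[to_unit]

print(convert(1, "nato_mil", "true_mrad"))           # ~0.98, slightly smaller than a true mrad
print(convert(1600, "nato_mil", "warsaw_pact_mil"))  # a quarter turn: -> 1500.0
```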
Physical sciences
Angle
Basics and measurement
1731484
https://en.wikipedia.org/wiki/Amusia
Amusia
Amusia is a musical disorder that appears mainly as a defect in processing pitch but also encompasses musical memory and recognition. Two main classifications of amusia exist: acquired amusia, which occurs as a result of brain damage, and congenital amusia, which results from a music-processing anomaly present since birth. Studies have shown that congenital amusia is a deficit in fine-grained pitch discrimination and that 4% of the population has this disorder. Acquired amusia may take several forms. Patients with brain damage may experience the loss of ability to produce musical sounds while sparing speech, much like aphasics lose speech selectively but can sometimes still sing. Other forms of amusia may affect specific sub-processes of music processing. Current research has demonstrated dissociations between rhythm, melody, and emotional processing of music. Amusia may include impairment of any combination of these skill sets. Signs and symptoms Symptoms of amusia are generally categorized as receptive, clinical, or mixed. Symptoms of receptive amusia, sometimes referred to as "musical deafness" or "tone deafness", include the inability to recognize familiar melodies, the loss of ability to read musical notation, and the inability to detect wrong or out-of-tune notes. Clinical, or expressive, symptoms include the loss of ability to sing, write musical notation, and/or play an instrument. A mixed disorder is a combination of expressive and receptive impairment. Clinical symptoms of acquired amusia are much more variable than those of congenital amusia and are determined by the location and nature of the lesion. Brain injuries may affect motor or expressive functioning, including the ability to sing, whistle, or hum a tune (oral-expressive amusia), the ability to play an instrument (instrumental amusia or musical apraxia), and the ability to write music (musical agraphia). Additionally, brain damage to the receptive dimension affects the faculty to discriminate tunes (receptive or sensorial amusia), the ability to read music (musical alexia), and the ability to identify songs that were familiar prior to the brain damage (amnesic amusia). Those with congenital amusia show impaired performance on discrimination, identification and imitation of sentences with intonational differences in pitch direction in their final word. This suggests that amusia can in subtle ways impair language processing. Social and emotional Amusic individuals show a remarkable sparing of emotional responses to music in the context of severe and lifelong deficits in processing music. Some individuals with amusia describe music as unpleasant. Others simply refer to it as noise and find it annoying. This can have social implications because amusics often try to avoid music, which in many social situations is not an option. In China and other countries where tonal languages are spoken, amusia may have the more pronounced social and emotional impact of experiencing difficulty in speaking and understanding the language. However, context clues are often strong enough to determine the correct meaning, similarly to how homophones can be understood. Related diseases Amusia has been classified as a learning disability that affects musical abilities. Research suggests that in congenital amusia, younger subjects can be taught tone differentiation techniques. This finding leads researchers to believe that amusia is related to dyslexia and other similar disorders. 
Research has shown that amusia may be related to an increase in size of the cerebral cortex, which may be a result of a malformation in cortical development. Conditions such as dyslexia and epilepsy are due to a malformation in cortical development and also lead to an increase in cortical thickness, which leads researchers to believe that congenital amusia may be caused by the identical phenomenon in a different area of the brain. Amusia is also similar to aphasia in that they affect similar areas of the brain near the temporal lobe. Most people with amusia do not show any symptoms of aphasia. However, a number of cases have shown that those who have aphasia can exhibit symptoms of amusia, especially in acquired aphasia. The two are not mutually exclusive, and having one does not imply possession of the other. In acquired amusia, inability to perceive music correlates with an inability to perform other higher-level functions. In this case, as musical ability improves, so too do the higher cognitive functions, which suggests that musical ability is closely related to these higher-level functions, such as memory and learning, mental flexibility, and semantic fluency. Amusia can also be related to aprosody, a disorder in which the person's speech is affected, becoming extremely monotonous. It has been found that both amusia and aprosody can arise from seizures occurring in the non-dominant hemisphere. They can also both arise from lesions to the brain, and Broca's aphasia can occur simultaneously with amusia following injury. There is a relation between musical abilities and the components of speech; however, it is not understood very well. Diagnosis The diagnosis of amusia requires multiple investigative tools, all described in the Montreal Protocol for Identification of Amusia. This protocol has at its center the Montreal Battery of Evaluation of Amusia (MBEA), which involves a series of tests that evaluate the use of musical characteristics known to contribute to the memory and perception of conventional music, but the protocol also allows for the ruling out of other conditions that can explain the clinical signs observed. The battery comprises six subtests which assess the ability to discriminate pitch contour, musical scales, pitch intervals, rhythm, meter, and memory. An individual is considered amusic if they perform two standard deviations below the mean obtained by musically competent controls. This musical pitch disorder represents a phenotype that serves to identify the associated neuro-genetic factors. Both MRI-based brain structural analyses and electroencephalography (EEG) are common methods employed to uncover brain anomalies associated with amusia (see Neuroanatomy). Additionally, voxel-based morphometry (VBM) is used to detect anatomical differences between the MRIs of amusic brains and musically intact brains, specifically with respect to increased and/or decreased amounts of white and grey matter. Classifications There are two general classifications of amusia: congenital amusia and acquired amusia. Congenital amusia Congenital amusia, commonly known as tone deafness or a tin ear, refers to a musical disability that cannot be explained by prior brain lesion, hearing loss, cognitive defects, or lack of environmental stimulation, and it affects about 4% of the population. Individuals with congenital amusia seem to lack the musical predispositions with which most people are born. 
They are unable to recognize or hum familiar tunes even if they have normal audiometry and above-average intellectual and memory skills. Also, they do not show sensitivity to dissonant chords in a melodic context, which, as discussed earlier, is one of the musical predispositions exhibited by infants. The hallmark of congenital amusia is a deficit in fine-grained pitch discrimination, and this deficit is most apparent when congenital amusics are asked to pick out a wrong note in a given melody. If the distance between two successive pitches is small, congenital amusics are not able to detect a pitch change. As a result of this defect in pitch perception, a lifelong musical impairment may emerge due to a failure to internalize musical scales. A lack of fine-grained pitch discrimination makes it extremely difficult for amusics to enjoy and appreciate music, which consists largely of small pitch changes. Tone-deaf people seem to be disabled only when it comes to music as they can fully interpret the prosody or intonation of human speech. Tone deafness has a strong negative correlation with belonging to societies with tonal languages. This could be evidence that the ability to reproduce and distinguish between notes may be a learned skill; conversely, it may suggest that the genetic predisposition towards accurate pitch discrimination may influence the linguistic development of a population towards tonality. A correlation between allele frequencies and linguistic typological features has been recently discovered, supporting the latter hypothesis. Tone deafness is also associated with other musical-specific impairments such as the inability to keep time with music (beat deafness, or the lack of rhythm), or the inability to remember or recognize a song. These disabilities can appear separately, but some research shows that they are more likely to appear in tone-deaf people. Experienced musicians, such as W. A. Mathieu, have addressed tone deafness in adults as correctable with training. Acquired amusia Acquired amusia is a musical disability that shares the same characteristics as congenital amusia, but rather than being inherited, it is the result of brain damage. It is also more common than congenital amusia. While it has been suggested that music is processed by music-specific neural networks in the brain, this view has been broadened to show that music processing also encompasses generic cognitive functions, such as memory, attention, and executive processes. A study was published in 2009 which investigated the neural and cognitive mechanisms that underlie acquired amusia and contribute to its recovery. The study was performed on 53 stroke patients with a left or right hemisphere middle cerebral artery (MCA) infarction one week, three months, and six months after the stroke occurred. Amusic subjects were identified one week following their stroke, and over the course of the study, amusics and non-amusics were compared in both brain lesion location and their performances on neuropsychological tests. Results showed that there was no significant difference in the distribution of left and right hemisphere lesions between amusic and non-amusic groups, but that the amusic group had a significantly higher number of lesions to the frontal lobe and auditory cortex. Temporal lobe lesions were also observed in patients with amusia. Amusia is a common occurrence following an ischemic MCA stroke, as evidenced by the 60% of patients who were found to be amusic at the one-week post-stroke stage. 
While significant recovery takes place over time, amusia can persist for long periods of time. Test results suggest that acquired amusia and its recovery in the post-stroke stage are associated with a variety of cognitive functions, particularly attention, executive functioning and working memory. Neuroanatomy Neurologically intact individuals appear to be born musical. Even before they are able to talk, infants show remarkable musical abilities that are similar to those of adults in that they are sensitive to musical scales and a regular tempo. Also, infants are able to differentiate between consonant and dissonant intervals. These perceptual skills indicate that music-specific predispositions exist. Prolonged exposure to music develops and refines these skills. Extensive musical training does not seem to be necessary in the processing of chords and keys. The development of musical competence most likely depends on the encoding of pitch along musical scales and maintaining a regular pulse, both of which are key components in the structure of music and aid in perception, memory, and performance. Also, the encoding of pitch and temporal regularity are both likely to be specialized for music processing. Pitch perception is absolutely crucial to processing music. The use of scales and the organization of scale tones around a central tone (called the tonic) assign particular importance to notes in the scale and cause non-scale notes to sound out of place. This enables the listener to ascertain when a wrong note is played. However, in individuals with amusia, this ability is either compromised or lost entirely. Music-specific neural networks exist in the brain for a variety of music-related tasks. It has been shown that Broca's area is involved in the processing of musical syntax. Furthermore, brain damage can disrupt an individual's ability to tell the difference between tonal and atonal music and detect the presence of wrong notes, but can preserve the individual's ability to assess the distance between pitches and the direction of the pitch. The opposite scenario can also occur, in which the individual loses pitch discrimination capabilities, but can sense and appreciate the tonal context of the work. Distinct neural networks also exist for music memories, singing, and music recognition. Neural networks for music recognition are particularly intriguing. A patient can undergo brain damage that renders them unable to recognize familiar melodies that are presented without words. However, the patient maintains the ability to recognize spoken lyrics or words, familiar voices, and environmental sounds. The reverse case is also possible, in which the patient cannot recognize spoken words, but can still recognize familiar melodies. These situations overturn previous claims that speech recognition and music recognition share a single processing system. Instead, it is clear that there are at least two distinct processing modules: one for speech and one for music. Many research studies of individuals with amusia show that a number of cortical regions appear to be involved in processing music. Some report that the primary auditory cortex, secondary auditory cortex, and limbic system are responsible for this faculty, while more recent studies suggest that lesions in other cortical areas, abnormalities in cortical thickness, and deficiency in neural connectivity and brain plasticity may contribute to amusia. 
While various causes of amusia exist, some general findings that provide insight to the brain mechanisms involved in music processing are discussed below. Pitch relations Studies suggest that the analysis of pitch is primarily controlled by the right temporal region of the brain. The right secondary auditory cortex processes pitch change and manipulation of fine tunes; specifically, this region distinguishes the multiple pitches that characterize melodic tunes as contour (pitch direction) and interval (frequency ratio between successive notes) information. The right superior temporal gyrus recruits and evaluates contour information, while both right and left temporal regions recruit and evaluate interval information. In addition, the right anterolateral part of Heschl's gyrus (primary auditory cortex) is also concerned with processing pitch information. Temporal relations The brain analyzes the temporal (rhythmic) components of music in two ways: (1) it segments the ongoing sequences of music into temporal events based on duration, and (2) it groups those temporal events to understand the underlying beat to music. Studies on rhythmic discrimination reveal that the right temporal auditory cortex is responsible for temporal segmenting, and the left temporal auditory cortex is responsible for temporal grouping. Other studies suggest the participation of motor cortical areas in rhythm perception and production. Therefore, a lack of involvement and networking between bilateral temporal cortices and neural motor centers may contribute to both congenital and acquired amusia. Memory Memory is required in order to process and integrate both melodic and rhythmic aspects of music. Studies suggest that there is a rich interconnection between the right temporal gyrus and frontal cortical areas for working memory in music appreciation. This connection between the temporal and frontal regions of the brain is extremely important since these regions play critical roles in music processing. Changes in the temporal areas of the amusic brain are most likely associated with deficits in pitch perception and other musical characteristics, while changes in the frontal areas are potentially related to deficits in cognitive processing aspects, such as memory, that are needed for musical discrimination tasks. Memory is also concerned with the recognition and internal representation of tunes, which help to identify familiar songs and confer the ability to sing tunes in one's head. The activation of the superior temporal region and left inferior temporal and frontal areas is responsible for the recognition of familiar songs, and the right auditory cortex (a perceptual mechanism) is involved in the internal representation of tunes. These findings suggest that any abnormalities and/or injuries to these regions of the brain could facilitate amusia. Other regions of the brain possibly linked to amusia Lesions in (or the absence of) associations between the right temporal lobe and inferior frontal lobe. In nine of ten tone-deaf people, the superior arcuate fasciculus in the right hemisphere could not be detected, suggesting a disconnection between the posterior superior temporal gyrus and the posterior inferior frontal gyrus. Researchers suggested the posterior superior temporal gyrus was the origin of the disorder. 
Cortical thickness and reduced white matter – in a recent study, voxel-based morphometry, an imaging technique used to explore structural differences in the brain, revealed a decrease in white matter concentration in the right inferior frontal gyrus of amusic individuals as compared to controls. Lack of extensive exposure to music could be a contributing factor to this white matter reduction. For example, amusic individuals may be less inclined to listen to music than others, which could ultimately cause reduced myelination of connections to the frontal areas of the brain. Involvement of the parahippocampal gyrus (responsible for the emotional reaction to music) Treatment Currently, no forms of treatment have proven effective in treating amusia. One study has shown tone differentiation techniques to have some success; however, future research on treatment of this disorder will be necessary to verify this technique as an appropriate treatment. History In 1825, Franz Joseph Gall mentioned a "musical organ" in a specific region of the human brain that could be spared or disrupted after a traumatic event resulting in brain damage. In 1865, Jean-Baptiste Bouillaud described the first series of cases that involved the loss of music abilities that were due to brain injury. In 1878, Grant Allen was the first to describe in the medical literature what would later be termed congenital amusia, calling it "note-deafness". Later, during the late nineteenth century, several influential neurologists studied language in an attempt to construct a theory of cognition. While not studied as thoroughly as language, music and visual processing were also studied. In 1888–1890, August Knoblauch produced a cognitive model for music processing and termed it amusia. This model for music processing was the earliest produced. While the possibility that certain individuals may be born with musical deficits is not a new notion, the first documented case of congenital amusia was published only in 2002. The study was conducted with a female volunteer, referred to as Monica, who declared herself to be musically impaired in response to an advertisement in the newspaper. Monica had no psychiatric or neurological history, nor did she have any hearing loss. MRI scans showed no abnormalities. Monica also scored above average on a standard intelligence test, and her working memory was evaluated and found to be normal. However, Monica had a lifelong inability to recognize or perceive music, which had persisted even after involvement with music through church choir and band during her childhood and teenage years. Monica said that she does not enjoy listening to music because, to her, it sounded like noise and evoked a stressful response. In order to determine if Monica's disorder was amusia, she was subjected to the MBEA series of tests. One of the tests dealt with Monica's difficulties in discriminating pitch variations in sequential notes. In this test, a pair of melodies was played, and Monica was asked if the second melody in the pair contained a wrong note. Monica's score on this test was well below the average score generated by the control group. Further tests showed that Monica struggled with recognizing highly familiar melodies, but that she had no problems in recognizing the voices of well-known speakers. Thus, it was concluded that Monica's deficit seemed limited to music. 
A later study showed that not only do amusics experience difficulty in discriminating variations in pitch, but they also exhibit deficits in perceiving patterns in pitch. This finding led to another test that was designed to assess the presence of a deficiency in pitch perception. In this test, Monica heard a sequence of five piano tones of constant pitch followed by a comparison sequence of five piano tones in which the fourth tone could be the same pitch as the other notes in the sequence or a completely different pitch altogether. Monica was asked to respond "yes" if she detected a pitch change on the fourth tone or respond "no" if she could not detect a pitch change. Results showed that Monica could barely detect a pitch change as large as two semitones (half steps), i.e. a whole tone. While this pitch-processing deficit is extremely severe, it does not seem to include speech intonation. This is because pitch variations in speech are very coarse compared with those used in music. In conclusion, Monica's learning disability arose from a basic problem in pitch discrimination, which is viewed as the origin of congenital amusia. Research Over the past decade, much has been discovered about amusia. However, there remains a great deal more to learn. While a method of treatment for people with amusia has not been defined, tone differentiation techniques have been used on amusic patients with some success. It was found with this research that children reacted positively to these tone differentiation techniques, while adults found the training annoying. However, further research in this direction would aid in determining if this would be a viable treatment option for people with amusia. Additional research can also serve to indicate which processing component in the brain is essential for normal music development. Also, it would be extremely beneficial to investigate musical learning in relation to amusia, since this could provide valuable insights into other forms of learning disabilities such as dysphasia and dyslexia. Notable cases In fiction: Horatio Hornblower; Trilby O'Ferrall from Trilby; Grace from Home on the Range; James Fraser from Outlander by Diana Gabaldon; Rodrigo De Souza from Mozart in the Jungle
Biology and health sciences
Disabilities
Health
1731616
https://en.wikipedia.org/wiki/Rotary%20cannon
Rotary cannon
A rotary cannon, rotary autocannon, rotary gun or Gatling cannon is any large-caliber multiple-barreled automatic firearm that uses a Gatling-type rotating barrel assembly to deliver sustained saturational direct fire at much greater rates of fire than single-barreled autocannons of the same caliber. The loading, firing and ejection functions are performed simultaneously in different barrels as the whole assembly rotates, and the rotation also permits the barrels some time to cool. Rotary cannons, whether externally powered or self-driven, are used in aircraft in preference to reciprocating-bolt autocannons, which are more prone to jamming in high-g environments. The rotating barrels on nearly all modern Gatling-type guns are powered by an external force such as an electric motor, although internally powered gas-operated versions have also been developed. The cyclic multi-barrel design synchronizes the firing/reloading sequence. Each barrel fires a single cartridge when it reaches a certain position in the rotation, after which the spent casing is ejected at a different position and then a new round is loaded at another position. During the cycle, the barrel has more time to dissipate some heat away to the surrounding air. Due to the usually cumbersome size and weight of rotary cannons, they are typically mounted on weapons platforms such as vehicles, aircraft, or ships, where they are often used in close-in weapon systems. History In 1852 a revolving barrel gun with a unique method of ignition was proposed by an Irish immigrant to America by the name of Delany. The Gatling gun was another early gun to use rotating barrels. It was designed by the American inventor Dr. Richard J. Gatling in 1861 and patented in 1862. Hand cranked and hopper fed, it could fire at a rate of 200 rounds per minute. The Gatling gun was a field weapon, first used in warfare during the American Civil War and subsequently by European and Russian armies. The design was steadily improved; by 1876 the Gatling gun had a theoretical rate of fire of 1,200 rounds per minute, although 400 rounds per minute was more readily achievable in combat. By 1893, the M1893 Gatling gun could fire 800 to 900 rounds per minute. Gatling also developed examples of the M1893 powered by an electric motor driving the crank with a belt. Tests demonstrated the electric Gatling could fire up to 1,500 rpm in bursts. Ultimately, the Gatling's weight and cumbersome artillery carriage hindered its ability to keep up with infantry forces over difficult ground. It was eventually superseded by lighter and more mobile machine guns such as the Maxim gun. All models of Gatling guns were declared obsolete by the U.S. Army in 1911, after 45 years of service. Development of modern Gatling-type guns After the Gatling gun was replaced in service by newer non-rotating, recoil- or gas-operated machine guns, the approach of using multiple rotating barrels fell into disuse for many decades. Some examples were developed during the interwar years but only existed as prototypes or were rarely used. During World War I, Imperial Germany worked on the Fokker-Leimberger, an externally powered 12-barrel Gatling gun that could fire more than 7,200 7.92×57mm rounds per minute. After World War II, the U.S. Army Air Force determined that an automatic cannon of improved design with an extremely high rate of fire was required to achieve a sufficient number of large-caliber hits on fast-moving enemy jet aircraft. 
A larger-caliber cannon shell was deemed desirable as it could contain more explosive than the .30 and .50 caliber ammunition previously used, and would thus be able to destroy aircraft with only a few hits on target. However, autocannons suffered from a lower rate of fire than machine guns; a possible solution, the M39 revolver cannon, had problems with overheating and excessive barrel wear. In June 1946, the General Electric Company was awarded a U.S. military defense contract to develop an aircraft gun with a high rate of fire, a project which GE termed Project Vulcan. While researching prior work, ordnance engineers recalled the experimental electrically driven Gatling weapons of the turn of the 20th century. In 1946, a Model 1903 Gatling gun was borrowed from a museum and set up with an electric motor drive by General Electric engineers. During test firing, the 40-year-old design briefly managed a rate of fire of 5,000 rounds per minute. In 1949 General Electric began testing the first model of its modified Gatling design, now called the Vulcan Gun. The first prototype was designated the T45 (Model A). It fired ammunition at about 2,500 rounds per minute from six barrels driven by an electric motor. In 1950, GE delivered ten initial model A T45 guns for evaluation. Thirty-three model C T45 guns were delivered in 1952 in three calibers: .60 cal., 20mm, and 27mm, for additional testing. After extensive testing, the T171 20mm gun was selected for further development. In 1956, the T171 20mm gun was standardized by the U.S. Army and U.S. Air Force as the M61 20mm Vulcan aircraft gun. One of the main reasons for the resurgence of the electrically or hydraulically powered multiple-barrel design is the weapon's tolerance for continuously high rates of fire. For example, 1000 rounds per minute of continuous fire from a conventional single-barreled weapon ordinarily results in rapid barrel heating followed by stoppages caused by overheating. In contrast, a five-barreled rotary gun firing 1000 rounds per minute fires only 200 rounds per barrel per minute, an acceptable rate of fire for continuous use. A multiple-barrel design also overcomes the limiting factor of the loading and extraction sequence. In a single-barrel design, these tasks must alternate, limiting the rate of fire. A multiple-barrel design allows loading and extraction to occur simultaneously on different barrels as they rotate. The design is also resistant to defective ammunition, which can cause normal machine guns to malfunction when a cartridge fails to load, fire, or eject from the weapon. Since the power source of a multiple-barrel design is external, it can simply extract defective rounds as it would a regular, spent cartridge. Models M61 Vulcan and other designs The M61 Vulcan 20 mm autocannon is the best-known of a family of weapons designed by General Electric and currently manufactured by General Dynamics. The M61 is a six-barreled 20mm rotary cannon that fires at up to 6,600 rounds per minute. Similar systems are available in calibers ranging from 5.56 mm to 30 mm (the prototype T249 Vigilante AA platform featured a 37 mm chambering). Another multi-barrel design is the hydraulically driven GAU-8 Avenger 30 mm autocannon, carried on the A-10 Thunderbolt II (Warthog), a heavily armored close air-support attack aircraft. It is a seven-barreled cannon designed for tank-killing and is currently the largest bore multi-barrel weapon active in the U.S. 
arsenal, and the heaviest autocannon ever mounted in an aircraft, outweighing the World War II German Bordkanone BK 7,5, a 75 mm aircraft-mounted, tank-killing, single-barrel autocannon, by some 630 kg (1,389 lb) with ammunition. The Gryazev-Shipunov GSh-6-23 and GSh-6-30 are Russian gas-powered rotary cannons with maximum cyclic rates of 9,000 to 10,000 rounds per minute. Self-driven examples While electric motors were used to rotate the Vulcan barrels, a few examples of self-operated Gatling-derived weapons use the blow-forward, recoil or gas impulse from their ammunition. The Bangerter machine gun uses a blow-forward operation and is the most complex example. The Slostin machine gun uses a similar operation but with gas pistons on each barrel. The GShG-7.62 machine gun and GSh-6-23 both use a more effective, simpler gas piston drive in the center of the barrel cluster. Minigun The Minigun is a type of rotary machine gun. The 7.62 mm caliber M134 Minigun was originally created during the Vietnam War to arm rotary-wing aircraft, and could be fitted to various helicopters as either a crew-served or a remotely operated weapon. It has a rate of fire from 2,000 to 6,000 rounds per minute from a 4,000-round linked belt. As the GAU-2B/A, the Minigun was also used on the U.S. Air Force AC-47, AC-119 and Lockheed AC-130 gunships. The AC-47 was known during the Vietnam War as "Puff the Magic Dragon" and was said to be "the only thing that scared the VC". This weapon was also used on selected USAF helicopters. With sophisticated navigation and target identification tools, Miniguns can be used effectively even against concealed targets. The crew's ability to concentrate the Minigun's fire very tightly produces the appearance of the 'Red Tornado' from the light of the tracers, as the gun platform circles a target at night.
Technology
Firearms
null
16861908
https://en.wikipedia.org/wiki/Paper
Paper
Paper is a thin sheet material produced by mechanically or chemically processing cellulose fibres derived from wood, rags, grasses, herbivore dung, or other vegetable sources in water. Once the water is drained through a fine mesh leaving the fibre evenly distributed on the surface, it can be pressed and dried. The papermaking process developed in East Asia, probably China, at least as early as 105 CE, by the Han court eunuch Cai Lun, although the earliest archaeological fragments of paper derive from the 2nd century BCE in China. Although paper was originally made in single sheets by hand, today it is mass-produced on large machines, some making reels 10 metres wide, running at 2,000 metres per minute and up to 600,000 tonnes a year. It is a versatile material with many uses, including printing, painting, graphics, signage, design, packaging, decorating, writing, and cleaning. It may also be used as filter paper, wallpaper, book endpaper, conservation paper, laminated worktops, toilet tissue, currency, and security paper, or in a number of industrial and construction processes. History The oldest known archaeological fragments of the immediate precursor to modern paper date to the 2nd century BCE in China. The pulp papermaking process is ascribed to Cai Lun, a 2nd-century CE Han court eunuch. It has been said that knowledge of papermaking was passed to the Islamic world after the Battle of Talas in 751 CE, when two Chinese papermakers were captured as prisoners and used to extract 'the secrets' of papermaking. Although the veracity of this story is uncertain, paper started to be made in Samarkand soon after. In the 13th century, the knowledge and uses of paper spread from the Middle East to medieval Europe, where the first water-powered paper mills were built. Because paper was introduced to the West through the city of Baghdad, it was first called bagdatikos. In the 19th century, industrialization greatly reduced the cost of manufacturing paper. In 1844, the Canadian inventor Charles Fenerty and the German inventor Friedrich Gottlob Keller independently developed processes for pulping wood fibres. Early sources of fibre Before the industrialisation of paper production the most common fibre source was recycled fibres from used textiles, called rags. The rags were from hemp, linen and cotton. A process for removing printing inks from recycled paper was invented by German jurist Justus Claproth in 1774. Today this method is called deinking. It was not until the introduction of wood pulp in 1843 that paper production ceased to depend on recycled materials from ragpickers. Etymology The word paper is etymologically derived from Latin papyrus, which comes from the Greek πάπυρος (pápuros), the word for the papyrus plant. Papyrus is a thick, paper-like material produced from the pith of the papyrus plant, which was used in ancient Egypt and other Mediterranean cultures for writing before the introduction of paper. Although the word paper is etymologically derived from papyrus, the two are produced very differently and the development of the first is distinct from the development of the second. Papyrus is a lamination of natural plant fibre, while paper is manufactured from fibres whose properties have been changed by maceration. Papermaking Chemical pulping To make pulp from wood, a chemical pulping process separates lignin from cellulose fibre. A cooking liquor is used to dissolve the lignin, which is then washed from the cellulose; this preserves the length of the cellulose fibres. 
Paper made from chemical pulps is also known as wood-free paper (not to be confused with tree-free paper); this is because it does not contain lignin, which deteriorates over time. The pulp can also be bleached to produce white paper, but this consumes 5% of the fibres. Chemical pulping processes are not used to make paper made from cotton, which is already 90% cellulose. There are three main chemical pulping processes: the sulfite process dates back to the 1840s and was the dominant method before the Second World War. The kraft process, invented in the 1870s and first used in the 1890s, is now the most commonly practised strategy; one of its advantages is that the chemical reaction with lignin produces heat, which can be used to run a generator. Most pulping operations using the kraft process are net contributors to the electricity grid or use the electricity to run an adjacent paper mill. Another advantage is that this process recovers and reuses all inorganic chemical reagents. Soda pulping is another specialty process used to pulp straws, bagasse and hardwoods with high silicate content. Mechanical pulping There are two major mechanical pulps: thermomechanical pulp (TMP) and groundwood pulp (GW). In the TMP process, wood is chipped and then fed into steam-heated refiners, where the chips are squeezed and converted to fibres between two steel discs. In the groundwood process, debarked logs are fed into grinders where they are pressed against rotating stones to be made into fibres. Mechanical pulping does not remove the lignin, so the yield is very high, > 95%; however, lignin causes the paper thus produced to turn yellow and become brittle over time. Mechanical pulps have rather short fibres, thus producing weak paper. Although large amounts of electrical energy are required to produce mechanical pulp, it costs less than the chemical kind. De-inked pulp Paper recycling processes can use either chemically or mechanically produced pulp; by mixing it with water and applying mechanical action, the hydrogen bonds in the paper can be broken and the fibres separated again. Most recycled paper contains a proportion of virgin fibre for the sake of quality; generally speaking, de-inked pulp is of the same quality as or lower than the collected paper it was made from. There are three main classifications of recycled fibre: Mill broke or internal mill waste – This incorporates any substandard or grade-change paper made within the paper mill itself, which then goes back into the manufacturing system to be re-pulped back into paper. Such out-of-specification paper is not sold and is therefore often not classified as genuine reclaimed recycled fibre; however, most paper mills have been reusing their own waste fibre for many years, long before recycling became popular. Preconsumer waste – This is offcut and processing waste, such as guillotine trims and envelope blank waste; it is generated outside the paper mill and could potentially go to landfill, and is a genuine recycled fibre source; it includes de-inked preconsumer waste (recycled material that has been printed but did not reach its intended end use, such as waste from printers and unsold publications). Postconsumer waste – This is fibre from paper that has been used for its intended end use and includes office waste, magazine papers and newsprint.
As the vast majority of this material has been printed – either digitally or by more conventional means such as lithography or rotogravure – it will either be recycled as printed paper or go through a de-inking process first. Recycled papers can be made from 100% recycled materials or blended with virgin pulp, although they are (generally) neither as strong nor as bright as papers made from the latter. Additives Besides the fibres, pulps may contain fillers such as chalk or china clay, which improve their characteristics for printing or writing. Additives for sizing purposes may be mixed with the pulp or applied to the paper web later in the manufacturing process; the purpose of such sizing is to establish the correct level of surface absorbency to suit ink or paint. Producing paper The pulp is fed to a paper machine, where it is formed as a paper web and the water is removed from it by pressing and drying. Pressing the sheet removes the water by force. Once the water is forced from the sheet, a special kind of felt, which is not to be confused with the traditional one, is used to collect the water. When making paper by hand, a blotter sheet is used instead. Drying involves using air or heat to remove water from the paper sheets. In the earliest days of papermaking, this was done by hanging the sheets like laundry; in more modern times, various forms of heated drying mechanisms are used. On the paper machine, the most common is the steam-heated can dryer. These can reach temperatures above and are used in long sequences of more than forty cans where the heat produced by these can easily dry the paper to less than six percent moisture. Finishing The paper may then undergo sizing to alter its physical properties for use in various applications. Paper at this point is uncoated. Coated paper has a thin layer of material such as calcium carbonate or china clay applied to one or both sides in order to create a surface more suitable for high-resolution halftone screens. (Uncoated papers are rarely suitable for screens above 150 lpi.) Coated or uncoated papers may have their surfaces polished by calendering. Coated papers are divided into matte, semi-matte or silk, and gloss. Gloss papers give the highest optical density in the printed image. The paper is then fed onto reels if it is to be used on web printing presses, or cut into sheets for other printing processes or other purposes. The fibres in the paper basically run in the machine direction. Sheets are usually cut "long-grain", i.e. with the grain parallel to the longer dimension of the sheet. Continuous form paper (or continuous stationery) is cut to width with holes punched at the edges, and folded into stacks. Paper grain All paper produced by paper machines such as the Fourdrinier machine is wove paper, i.e. the wire mesh that transports the web leaves a pattern that has the same density along the paper grain and across the grain. Textured finishes, watermarks and wire patterns imitating hand-made laid paper can be created by the use of appropriate rollers in the later stages of the machine. Wove paper does not exhibit "laidlines", which are small regular lines left behind on paper when it was handmade in a mould made from rows of metal wires or bamboo. Laidlines are very close together. They run perpendicular to the "chainlines", which are further apart. Handmade paper similarly exhibits "deckle edges", or rough and feathery borders. Applications Paper can be produced with a wide variety of properties, depending on its intended use.
Published, written, or informational items For representing value: paper money, bank note, cheque, security (see security paper), voucher, ticket For storing information: book, notebook, graph paper, punched card, photographic paper For published materials, publications, and reading materials: books, newspapers, magazines, posters, pamphlets, maps, signs, labels, advertisements, billboards. For individual use: diary, notebooks, writing pads, memo pads, journals, planners, notes to remind oneself, etc.; for temporary personal use: scratch paper For business and professional use: copier paper, ledger paper, typing paper, computer printer paper. Specialized paper for forms and documents such as invoices, receipts, tickets, vouchers, bills, contracts, official forms, agreements. For communication between individuals and/or groups of people: letter, post cards, airmail, telegrams, newsprint, card stock For organizing and sending documents: envelopes, file folders, packaging, pocket folders, partition folders. For artistic works and uses: drawing paper, pastels, watercolor paintings, sketch pads, charcoal drawings. For special printed items using more elegant forms of paper: stationery, parchment. Packaging and industrial uses For packaging: corrugated box, paper bag, envelope, wrapping paper, paper string For cleaning: toilet paper, paper towels, facial tissue. For food utensils and containers: wax paper, paper plates and paper cups, beverage cartons, tea bags, condiments, food packaging, coffee filters, cupcake cups. For construction: papier-mâché, origami paper, paper planes, quilling, paper honeycomb, sandpaper, used as a core material in composite materials, paper engineering, construction paper, paper yarn, and paper clothing For other uses: emery paper, blotting paper, litmus paper, universal indicator paper, paper chromatography, electrical insulation paper (see also fishpaper), filter paper, wallpaper It is estimated that paper-based storage solutions captured 0.33% of the world's total information storage capacity in 1986 and only 0.007% in 2007, even though in absolute terms the world's capacity to store information on paper increased from 8.7 to 19.4 petabytes. It is estimated that in 1986 paper-based postal letters represented less than 0.05% of the world's telecommunication capacity, with a sharply decreasing tendency after the massive introduction of digital technologies. Paper has a major role in the visual arts. It is used by itself to form two- and three-dimensional shapes and collages. It has also evolved into a structural material used in furniture design. Watercolor paper has a long history of production and use. Types, thickness and weight The thickness of paper is often measured by caliper, which is typically given in thousandths of an inch in the United States and in micrometres (μm) in the rest of the world. Paper may be between thick. Paper is often characterized by weight. In the United States, the weight is the weight of a ream (bundle of 500 sheets) of varying "basic sizes" before the paper is cut into the size in which it is sold to end customers. For example, a ream of 20 lb paper weighs 5 pounds, because it has been cut from larger sheets into four pieces. In the United States, printing paper is generally 20 lb, 24 lb, 28 lb, or 32 lb at most. Cover stock is generally 68 lb, and 110 lb or more is considered card stock. In Europe and other regions using the ISO 216 paper-sizing system, the weight is expressed in grams per square metre (g/m2 or usually gsm) of the paper.
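To make the ream-weight arithmetic above concrete, the following short Python sketch (not part of the original article) converts a US basis weight into the weight of a cut ream and into an approximate grammage. The 17 in × 22 in basic size assumed for bond/writing paper and the letter-size sheet are illustrative assumptions, since the text leaves the exact basic size unspecified.

# Minimal sketch of US basis weight vs. ream weight and approximate gsm.
# The 17 in x 22 in "basic size" for bond/writing paper and the letter-size
# sheet are assumptions used only for illustration.

GRAMS_PER_POUND = 453.59237
SQ_M_PER_SQ_IN = 0.00064516

def ream_weight_lb(basis_weight_lb: float, sheet_area_sq_in: float,
                   basic_area_sq_in: float) -> float:
    """Weight of a 500-sheet ream cut down from the basic size."""
    return basis_weight_lb * sheet_area_sq_in / basic_area_sq_in

def grammage_gsm(basis_weight_lb: float, basic_area_sq_in: float) -> float:
    """Approximate grams per square metre for a given US basis weight."""
    grams = basis_weight_lb * GRAMS_PER_POUND
    area_m2 = 500 * basic_area_sq_in * SQ_M_PER_SQ_IN
    return grams / area_m2

if __name__ == "__main__":
    bond_basic = 17 * 22   # assumed basic size for bond paper, in square inches
    letter = 8.5 * 11      # letter-size sheet, one quarter of the basic size
    print(ream_weight_lb(20, letter, bond_basic))   # -> 5.0 lb, as in the text
    print(round(grammage_gsm(20, bond_basic), 1))   # -> roughly 75 gsm

Under these assumptions, a 20 lb ream of letter-size sheets works out to 5 pounds, matching the example above, and corresponds to roughly 75 gsm in the ISO convention described next.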
Printing paper is generally between 60 gsm and 120 gsm. Anything heavier than 160 gsm is considered card. The weight of a ream therefore depends on the dimensions of the paper and its thickness. Most commercial paper sold in North America is cut to standard paper sizes based on customary units and is defined by the length and width of a sheet of paper. The ISO 216 system used in most other countries is based on the surface area of a sheet of paper, not on a sheet's width and length. It was first adopted in Germany in 1922 and generally spread as nations adopted the metric system. The largest standard size paper is A0 (A zero), measuring one square metre (approx. 1189 × 841 mm). A1 is half the size of a sheet of A0 (i.e., 594 mm × 841 mm), such that two sheets of A1 placed side by side are equal to one sheet of A0. A2 is half the size of a sheet of A1, and so forth. Common sizes used in the office and the home are A4 and A3 (A3 is the size of two A4 sheets). The density of paper ranges from for tissue paper to for some specialty paper. Printing paper is about . Paper may be classified into seven categories: Printing papers of wide variety. Wrapping papers for the protection of goods and merchandise. This includes wax and kraft papers. Writing paper suitable for stationery requirements. This includes ledger, bank, and bond paper. Blotting papers containing little or no size. Drawing papers usually with rough surfaces used by artists and designers, including cartridge paper. Handmade papers including most decorative papers, Ingres papers, Japanese paper and tissues, all characterized by lack of grain direction. Specialty papers including cigarette paper, toilet tissue, and other industrial papers. Some paper types include: Bank paper Banana paper Bond paper Book paper Coated paper: glossy and matte surface Construction paper/sugar paper Cotton paper Fish paper (vulcanized fibres for electrical insulation) Inkjet paper Kraft paper Laid paper Leather paper Mummy paper Oak tag paper Sandpaper Troublewit, specially pleated paper Tyvek paper Wallpaper Washi Waterproof paper Wax paper Wove paper Xuan paper Paper stability Much of the early paper made from wood pulp contained significant amounts of alum, a variety of aluminium sulfate salt that is significantly acidic. Alum was added to paper to assist in sizing, making it somewhat water resistant so that inks did not "run" or spread uncontrollably. Early papermakers did not realize that the alum they added liberally to cure almost every problem encountered in making their product would be eventually detrimental. The cellulose fibres that make up paper are hydrolyzed by acid, and the presence of alum eventually degrades the fibres until the acidic paper disintegrates in a process known as "slow fire". Documents written on rag paper are significantly more stable. The use of non-acidic additives to make paper is becoming more prevalent, and the stability of these papers is less of an issue. Paper made from mechanical pulp contains significant amounts of lignin, a major component in wood. In the presence of light and oxygen, lignin reacts to give yellow materials, which is why newsprint and other mechanical paper yellows with age. Paper made from bleached kraft or sulfite pulps does not contain significant amounts of lignin and is therefore better suited for books, documents and other applications where whiteness of the paper is essential. Paper made from wood pulp is not necessarily less durable than a rag paper. 
The aging behaviour of a paper is determined by its manufacture, not the original source of the fibres. Furthermore, tests sponsored by the Library of Congress prove that all paper is at risk of acid decay, because cellulose itself produces formic, acetic, lactic and oxalic acids. Mechanical pulping yields almost a tonne of pulp per tonne of dry wood used, which is why mechanical pulps are sometimes referred to as "high yield" pulps. With almost twice the yield of chemical pulping, mechanical pulp is often cheaper. Mass-market paperback books and newspapers tend to use mechanical papers. Book publishers tend to use acid-free paper, made from fully bleached chemical pulps, for hardback and trade paperback books. Environmental impact The production and use of paper have a number of adverse effects on the environment. Worldwide consumption of paper has risen by 400% in the past 40 years, leading to an increase in deforestation, with 35% of harvested trees being used for paper manufacture. Most paper companies also plant trees to help regrow forests. Logging of old-growth forests accounts for less than 10% of wood pulp, but is one of the most controversial issues. Paper waste accounts for up to 40% of total waste produced in the United States each year, which adds up to 71.6 million tons of paper waste per year in the United States alone. The average office worker in the US prints 31 pages every day. Americans also use on the order of 16 billion paper cups per year. Conventional bleaching of wood pulp using elemental chlorine produces and releases into the environment large amounts of chlorinated organic compounds, including chlorinated dioxins. Dioxins are recognized as persistent environmental pollutants, regulated internationally by the Stockholm Convention on Persistent Organic Pollutants. Dioxins are highly toxic, and health effects on humans include reproductive, developmental, immune and hormonal problems. They are known to be carcinogenic. Over 90% of human exposure is through food, primarily meat, dairy, fish and shellfish, as dioxins accumulate in the food chain in the fatty tissue of animals. The paper pulp and print industries emitted together about 1% of world greenhouse-gas emissions in 2010 and about 0.9% in 2012. Current production and use In the 2022−2024 edition of the annual "Pulp and paper capacities survey", the Food and Agriculture Organization of the United Nations (FAO) reports that Asia has superseded North America as the top pulp- and paper-producing continent. FAO figures for 2021 show the production of graphic papers continuing its decline from a mid-2000s peak to hover below 100 million tonnes a year. By contrast, the production of other papers and paperboard – which includes cardboard and sanitary products – has continued to soar, exceeding 320 million tonnes. FAO has documented the expanding production of cardboard within the paper and paperboard category, which has been increasing in response to the spread of e-commerce since the 2010s. Data from FAO suggest that it has been even further boosted by COVID-19-related lockdowns. Future Some manufacturers have started using a new, significantly more environmentally friendly alternative to expanded plastic packaging. Made out of paper, and known commercially as PaperFoam, the new packaging has mechanical properties very similar to those of some expanded plastic packaging, but is biodegradable and can also be recycled with ordinary paper.
With increasing environmental concerns about synthetic coatings (such as PFOA) and the higher prices of hydrocarbon-based petrochemicals, there is a focus on zein (corn protein) as a coating for paper in high-grease applications such as popcorn bags. Also, synthetics such as Tyvek and Teslin have been introduced as printing media that are more durable than paper.
Technology
Materials
null
16862071
https://en.wikipedia.org/wiki/Pipe%20flow
Pipe flow
In fluid mechanics, pipe flow is a type of fluid flow within a closed conduit, such as a pipe, duct or tube. It is also called internal flow. The other type of flow within a conduit is open channel flow. These two types of flow are similar in many ways, but differ in one important aspect: pipe flow does not have a free surface, which is found in open-channel flow. Pipe flow, being confined within a closed conduit, does not exert direct atmospheric pressure, but does exert hydraulic pressure on the conduit. Not all flow within a closed conduit is considered pipe flow. Storm sewers are closed conduits but usually maintain a free surface and therefore are considered open-channel flow. The exception is when a storm sewer operates at full capacity, in which case the flow can become pipe flow. Energy in pipe flow is expressed as head and is defined by the Bernoulli equation. In order to conceptualize head along the course of flow within a pipe, diagrams often contain a hydraulic grade line (HGL). Pipe flow is subject to frictional losses as defined by the Darcy-Weisbach formula. Laminar-turbulence transition The behavior of pipe flow is governed mainly by the effects of viscosity and gravity relative to the inertial forces of the flow. Depending on the effect of viscosity relative to inertia, as represented by the Reynolds number, the flow can be either laminar or turbulent. For circular pipes of different surface roughness, at a Reynolds number below the critical value of approximately 2000 pipe flow will ultimately be laminar, whereas above the critical value turbulent flow can persist, as shown in the Moody chart. For non-circular pipes, such as rectangular ducts, the critical Reynolds number is shifted, but still depends on the aspect ratio. Earlier transition to turbulence, at Reynolds numbers one order of magnitude smaller, can happen in channels with special geometrical shapes, such as the Tesla valve. Flow through pipes can roughly be divided into two regimes: laminar flow (see Hagen-Poiseuille flow) and turbulent flow (see the Moody diagram).
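As an illustration of the quantities just mentioned, the short Python sketch below computes a Reynolds number, classifies the regime against the ~2000 critical value, and evaluates the Darcy-Weisbach head loss. The Haaland approximation used for the turbulent friction factor is a common stand-in for reading the Moody chart and is an assumption of this sketch, not something specified in the text.

# Minimal sketch: Reynolds number, flow regime, and Darcy-Weisbach head loss.
# The critical Reynolds number of ~2000 and the Haaland approximation to the
# Moody chart are conventional engineering choices assumed for illustration.
import math

G = 9.81  # gravitational acceleration, m/s^2

def reynolds_number(velocity: float, diameter: float,
                    density: float, viscosity: float) -> float:
    """Re = rho * V * D / mu for a circular pipe."""
    return density * velocity * diameter / viscosity

def friction_factor(re: float, rel_roughness: float = 0.0) -> float:
    """Darcy friction factor: 64/Re when laminar, Haaland formula otherwise."""
    if re < 2000:                      # below the critical value: laminar
        return 64.0 / re
    # Haaland approximation to the Colebrook/Moody relationship
    return (-1.8 * math.log10((rel_roughness / 3.7) ** 1.11 + 6.9 / re)) ** -2

def head_loss(velocity: float, diameter: float, length: float,
              density: float, viscosity: float,
              roughness: float = 0.0) -> float:
    """Darcy-Weisbach: h_f = f * (L/D) * V^2 / (2g), in metres of head."""
    re = reynolds_number(velocity, diameter, density, viscosity)
    f = friction_factor(re, roughness / diameter)
    return f * (length / diameter) * velocity ** 2 / (2 * G)

# Example: water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa.s) at 1 m/s in a 50 mm pipe
re = reynolds_number(1.0, 0.05, 1000.0, 1e-3)   # ~50,000 -> turbulent
print(re, head_loss(1.0, 0.05, 10.0, 1000.0, 1e-3, roughness=1.5e-4))

With these illustrative values the flow is well above the laminar threshold, and the head loss over 10 m of pipe comes out to a few tenths of a metre, the kind of frictional loss the Darcy-Weisbach formula is used to estimate.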
Physical sciences
Fluid mechanics
Physics
16866635
https://en.wikipedia.org/wiki/Evolution%20of%20fungi
Evolution of fungi
Fungi diverged from other life around 1.5 billion years ago, with the glomaleans branching from the "higher fungi" (dikaryans) at ~, according to DNA analysis. (Schüssler et al., 2001; Tehler et al., 2000) Fungi probably colonized the land during the Cambrian, over , (Taylor & Osborn, 1996), and possibly 635 million years ago during the Ediacaran, but terrestrial fossils only become uncontroversial and common during the Devonian, . Early evolution Evidence from DNA analysis suggests that all fungi are descended from a most recent common ancestor that lived at least 1.2 to 1.5 billion years ago. It is probable that these earliest fungi lived in water, and had flagella. However, a 2.4-billion-year-old basalt from the Palaeoproterozoic Ongeluk Formation in South Africa, containing filamentous fossils in vesicles and fractures that form mycelium-like structures, may push the origin of the kingdom back by more than one billion years. The earliest terrestrial fungus fossils, or at least fungus-like fossils, have been found in South China from around 635 million years ago. The researchers who reported on these fossils suggested that these fungus-like organisms may have played a role in oxygenating Earth's atmosphere in the aftermath of the Cryogenian glaciations. About 250 million years ago, fungi became abundant in many areas, based on the fossil record, and could even have been the dominant form of life on the earth at that time. Fossil record A rich diversity of fungi is known from the lower Devonian Rhynie chert; an earlier record is absent. Since fungi do not biomineralise, they do not readily enter the fossil record; there are only three claims of early fungi. One from the Ordovician has been dismissed on the grounds that it lacks any distinctly fungal features, and is held by many to be contamination; the position of a "probable" Proterozoic fungus is still not established, and it may represent a stem group fungus. There is also a case for a fungal affinity for the enigmatic microfossil Ornatifilum. Since the fungi form a sister group to the animals, the two lineages must have diverged before the first animal lineages, which are known from fossils as early as the Ediacaran. In contrast to plants and animals, the early fossil record of the fungi is meager. Factors that likely contribute to the under-representation of fungal species among fossils include the nature of fungal fruiting bodies, which are soft, fleshy, and easily degradable tissues, and the microscopic dimensions of most fungal structures, which therefore are not readily evident. Fungal fossils are difficult to distinguish from those of other microbes, and are most easily identified when they resemble extant fungi. Often recovered from a permineralized plant or animal host, these samples are typically studied by making thin-section preparations that can be examined with light microscopy or transmission electron microscopy. Compression fossils are studied by dissolving the surrounding matrix with acid and then using light or scanning electron microscopy to examine surface details. The earliest fossils possessing features typical of fungi date to the Paleoproterozoic era, some (Ma); these multicellular benthic organisms had filamentous structures capable of anastomosis, in which hyphal branches recombine. Other recent studies (2009) estimate the arrival of fungal organisms at about 760–1060 Ma on the basis of comparisons of the rate of evolution in closely related groups.
For much of the Paleozoic Era (542–251 Ma), the fungi appear to have been aquatic and consisted of organisms similar to the extant Chytrids in having flagellum-bearing spores. Phylogenetic analyses suggest that the flagellum was lost early in the evolutionary history of the fungi, and consequently, the majority of fungal species lack a flagellum. The evolutionary adaptation from an aquatic to a terrestrial lifestyle necessitated a diversification of ecological strategies for obtaining nutrients, including parasitism, saprobism, and the development of mutualistic relationships such as mycorrhiza and lichenization. Recent (2009) studies suggest that the ancestral ecological state of the Ascomycota was saprobism, and that independent lichenization events have occurred multiple times. In May 2019, scientists reported the discovery of a fossilized fungus, named Ourasphaira giraldae, in the Canadian Arctic, that may have grown on land a billion years ago, well before plants were living on land. Earlier, it had been presumed that the fungi colonized the land during the Cambrian (542–488.3 Ma), also long before land plants. Fossilized hyphae and spores recovered from the Ordovician of Wisconsin (460 Ma) resemble modern-day Glomerales, and existed at a time when the land flora likely consisted of only non-vascular bryophyte-like plants; but these have been dismissed as contamination. Prototaxites, which was probably a fungus or lichen, would have been the tallest organism of the late Silurian. Fungal fossils do not become common and uncontroversial until the early Devonian (416–359.2 Ma), when they are abundant in the Rhynie chert, mostly as Zygomycota and Chytridiomycota. At about this same time, approximately 400 Ma, the Ascomycota and Basidiomycota diverged, and all modern classes of fungi were present by the Late Carboniferous (Pennsylvanian, 318.1–299 Ma). Lichen-like fossils have been found in the Doushantuo Formation in southern China dating back to 635–551 Ma. Lichens were a component of the early terrestrial ecosystems, and the estimated age of the oldest terrestrial lichen fossil is 400 Ma; this date corresponds to the age of the oldest known sporocarp fossil, a Paleopyrenomycites species found in the Rhynie Chert. The oldest fossil with microscopic features resembling modern-day basidiomycetes is Palaeoancistrus, found permineralized with a fern from the Pennsylvanian. Rare in the fossil record are the homobasidiomycetes (a taxon roughly equivalent to the mushroom-producing species of the agaricomycetes). Two amber-preserved specimens provide evidence that the earliest known mushroom-forming fungi (the extinct species Archaeomarasmius legletti) appeared during the mid-Cretaceous, 90 Ma. Some time after the Permian-Triassic extinction event (251.4 Ma), a fungal spike (originally thought to be an extraordinary abundance of fungal spores in sediments) formed, suggesting that fungi were the dominant life form at this time, representing nearly 100% of the available fossil record for this period. However, the proportion of fungal spores relative to spores formed by algal species is difficult to assess, the spike did not appear worldwide, and in many places it did not fall on the Permian-Triassic boundary. Approximately 66 million years ago, immediately after the Cretaceous-Tertiary (K-T) extinction, there was a dramatic increase in evidence of fungi. 
Fungi appear to have had the chance to flourish due to the extinction of most plant and animal species, and the resultant fungal bloom has been described as resembling "a massive compost heap". The lack of a K-T extinction signal in fungal evolution is also supported by molecular data. Phylogenetic comparative analyses of a tree consisting of 5,284 agaricomycete species do not show a signal of a mass extinction event around the Cretaceous-Tertiary boundary.
Biology and health sciences
Basics_4
Biology
3303072
https://en.wikipedia.org/wiki/String%20%28physics%29
String (physics)
In physics, a string is a physical entity postulated in string theory and related subjects. Unlike elementary particles, which are zero-dimensional or point-like by definition, strings are one-dimensional extended entities. Researchers often have an interest in string theories because theories in which the fundamental entities are strings rather than point particles automatically have many properties that some physicists expect to hold in a fundamental theory of physics. Most notably, a theory of strings that evolve and interact according to the rules of quantum mechanics will automatically describe quantum gravity. Overview In string theory, the strings may be open (forming a segment with two endpoints) or closed (forming a loop like a circle) and may have other special properties. Prior to 1995, there were five known versions of string theory incorporating the idea of supersymmetry (these five are known as superstring theories) and two versions without supersymmetry known as bosonic string theories, which differed in the type of strings and in other aspects. Today these different superstring theories are thought to arise as different limiting cases of a single theory called M-theory. In string theories of particle physics, the strings are very tiny; much smaller than can be observed in today's particle accelerators. The characteristic length scale of strings is typically on the order of the Planck length, about 10−35 meter, the scale at which the effects of quantum gravity are believed to become significant. Therefore on much larger length scales, such as the scales visible in physics laboratories, such entities would appear to be zero-dimensional point particles. Strings are able to vibrate as harmonic oscillators, and different vibrational states of the same string are interpreted as different types of particles. In string theories, strings vibrating at different frequencies constitute the multiple fundamental particles found in the current Standard Model of particle physics. Strings are also sometimes studied in nuclear physics where they are used to model flux tubes. As the string propagates through spacetime, a string sweeps out a two-dimensional surface called its worldsheet. This is analogous to the one-dimensional worldline traced out by a point particle. The physics of a string is described by means of a two-dimensional conformal field theory associated with the worldsheet. The formalism of two-dimensional conformal field theory also has many applications outside of string theory, for example in condensed matter physics and parts of pure mathematics. Types of strings Closed and open strings Strings can be either open or closed. A closed string is a string that has no end-points, and therefore is topologically equivalent to a circle. An open string, on the other hand, has two end-points and is topologically equivalent to a line interval. Not all string theories contain open strings, but every theory must contain closed strings, as interactions between open strings can always result in closed strings. The oldest superstring theory containing open strings was type I string theory. However, the developments in string theory in the 1990s have shown that the open strings should always be thought of as ending on a new physical degree of freedom called D-branes, and the spectrum of possibilities for open strings has significantly increased. Open and closed strings are generally associated with characteristic vibrational modes. 
One of the vibration modes of a closed string can be identified as the graviton. In certain string theories, the lowest-energy vibration of an open string is a tachyon and can undergo tachyon condensation. Other vibrational modes of open strings exhibit the properties of photons and gluons. Orientation Strings can also possess an orientation, which can be thought of as an internal "arrow" that distinguishes the string from one with the opposite orientation. By contrast, an unoriented string is one with no such arrow on it.
Physical sciences
Particle physics: General
Physics
3304705
https://en.wikipedia.org/wiki/HPV%20vaccine
HPV vaccine
Human papillomavirus (HPV) vaccines are vaccines intended to provide acquired immunity against infection by certain types of human papillomavirus (HPV). The first HPV vaccine became available in 2006. Currently there are six licensed HPV vaccines: three bivalent (protecting against two types of HPV), two quadrivalent (against four), and one nonavalent vaccine (against nine). All have excellent safety profiles and are highly efficacious, or have met immunobridging standards. All of them protect against HPV types 16 and 18, which are together responsible for approximately 70% of cervical cancer cases globally. The quadrivalent vaccines provide additional protection against HPV types 6 and 11. The nonavalent vaccine provides additional protection against HPV types 31, 33, 45, 52 and 58. It is estimated that HPV vaccines may prevent 70% of cervical cancer, 80% of anal cancer, 60% of vaginal cancer, 40% of vulvar cancer, and show more than 90% effectiveness in preventing HPV-positive oropharyngeal cancers. They also protect against penile cancer. They additionally prevent genital warts (also known as anogenital warts), with the quadrivalent and nonavalent vaccines providing virtually complete protection. The WHO recommends a one or two-dose schedule for girls aged 9–14 years, the same for girls and women aged 15–20 years, and two doses with a 6-month interval for women older than 21 years. The vaccines provide protection for at least five to ten years. The primary target group in most of the countries recommending HPV vaccination is young adolescent girls, aged 9–14. The vaccination schedule depends on the age of the vaccine recipient. As of 2023, 27% of girls aged 9–14 years worldwide received at least one dose (37 countries were implementing the single-dose schedule; 45% of girls aged 9–14 years were vaccinated in that year). As of September 2024, 57 countries are implementing the single-dose schedule. At least 144 countries (at least 74% of WHO member states) provided the HPV vaccine in their national immunization schedule for girls, as of November 2024. As of 2022, 47 countries (24% of WHO member states) also did so for boys. Vaccinating a large portion of the population may also benefit the unvaccinated by way of herd immunity. The HPV vaccine is on the World Health Organization's List of Essential Medicines. The World Health Organization (WHO) recommends HPV vaccines as part of routine vaccinations in all countries, along with other prevention measures. The WHO's priority purpose of HPV immunization is the prevention of cervical cancer, which accounts for 82% of all HPV-related cancers; more than 95% of cervical cancers are caused by HPV. 88% of cervical cancers (2020 figure) and 90% of deaths occur in low- and middle-income countries, and 2% (2020 figure) in high-income countries. The WHO-recommended primary target population for HPV vaccination is girls aged 9–14 years before they become sexually active. It aims for the introduction of the HPV vaccine in all countries and has set a target of 90% of girls being fully vaccinated with the HPV vaccine by age 15 years. Females aged ≥15 years, boys, older males or men who have sex with men (MSM) are secondary target populations. HPV vaccination is the most cost-effective public health measure against cervical cancer, particularly in resource-constrained settings. Cervical cancer screening is still required following vaccination.
Preventive vaccines A growing number of vaccine products initially prequalified for use in a 2-dose schedule can now be used in a single-dose schedule. Cecolin (a WHO-prequalified HPV vaccine product, confirmed for use in a single-dose schedule in the second edition of WHO's technical document on considerations for HPV vaccine product choice) Cervarix (bivalent) Gardasil (quadrivalent) and Gardasil 9 (nonavalent vaccine) Walrinvax (WHO prequalified with a two-dose schedule on 2 August 2024) Medical uses HPV vaccines are used to prevent HPV infection and therefore in particular cervical cancer. Vaccinating females between the ages of nine and thirteen is typically recommended, with many countries also vaccinating males in that age range. In the United States, the Centers for Disease Control and Prevention (CDC) recommends that all 11- to 12-year-olds receive two doses of HPV vaccine, administered 6 to 12 months apart. The vaccines require three doses for those ages 15 and above. Gardasil is a three-dose (injection) vaccine. HPV vaccines are recommended in the United States for women and men who are 9–26 years of age and are also approved for those who are 27–45 years of age. HPV vaccination of a large percentage of people within a population has been shown to decrease rates of HPV infections, with part of the benefit from herd immunity. Since the vaccines only cover some high-risk types of HPV, cervical cancer screening is recommended even after vaccination. In the US, the recommendation is for women to receive routine Pap smears beginning at age 21. In Australia, the national screening program has changed from two-yearly cytology (Pap smears) to being based on tests for HPV DNA, following work by Karen Canfell and others. As of 2021, the World Health Organization recommends HPV DNA testing as the preferred screening method. Efficacy The HPV vaccine has been shown to prevent cervical dysplasia from the high-risk HPV types 16 and 18 and to provide some protection against a few closely related high-risk HPV types. However, other high-risk HPV types are not affected by the vaccine. The protection against HPV 16 and 18 has lasted at least eight years after vaccination for Gardasil and more than nine years for Cervarix. It is thought that booster vaccines will not be necessary. As of September 2024, 57 countries are implementing the single-dose schedule. A growing number of vaccine products initially prequalified for use in a 2-dose schedule can now be used in a single-dose schedule. Previously, it was unclear whether two doses of the vaccine would work as well as three. The US Centers for Disease Control and Prevention (CDC) recommends two doses in those less than 15 years and three doses in those over 15 years. A single dose might be effective. A study with 9vHPV, a 9-valent HPV vaccine that protects against HPV types 6, 11, 16, 18, 31, 33, 45, 52, and 58, found that the rate of high-grade cervical, vulvar, or vaginal disease was the same as when using a quadrivalent HPV vaccine. The lack of a difference may have been caused by the study design, which included women 16 to 26 years of age, who may largely already have been infected with the five additional HPV types that are additionally covered by the 9-valent vaccine. Neither Cervarix nor Gardasil prevents other sexually transmitted infections, and they do not treat existing HPV infections or cervical cancer.
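The age-based dosing logic quoted above for the US CDC schedule can be summarised as a one-line decision rule. The Python sketch below is purely illustrative and is not clinical guidance; the function name and the simplification to age alone are assumptions made for the example.

# Illustrative sketch only (not clinical guidance) of the CDC age-based
# dose logic described above: two doses when the series starts before
# age 15, three doses when it starts at 15 or older.

def cdc_recommended_doses(age_at_first_dose: int) -> int:
    """Number of doses in the series under the CDC schedule quoted above."""
    return 2 if age_at_first_dose < 15 else 3

print(cdc_recommended_doses(12))  # -> 2, given 6 to 12 months apart
print(cdc_recommended_doses(18))  # -> 3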
Gardasil When Gardasil was first introduced, it was recommended as a prevention for cervical cancer for women 25 years old or younger. Evidence suggests that HPV vaccines are effective in preventing cervical cancer for women up to 45 years of age. Gardasil and Gardasil 9 protect against HPV types 6 and 11 which can cause genital warts, with the quadrivalent and nonavalent vaccines providing virtually complete protection. Adenocarcinoma HPV types 16, 18, and 45 contribute to 94% of cervical adenocarcinoma (cancers originating in the glandular cells of the cervix). While most cervical cancer arises in the squamous cells, adenocarcinomas make up a sizable minority of cancers. Further, Pap smears are not as effective at detecting adenocarcinomas, so where Pap screening programs are in place, a larger proportion of the remaining cancers are adenocarcinomas. Trials suggest that HPV vaccines may also reduce the incidence of adenocarcinoma. Males As of 2022, 47 countries (24% of WHO member states) have introduced HPV vaccine in their national immunization programme for boys. For instance, it is the case in Switzerland, Portugal, Canada, Australia, Ireland, South Korea, Hong Kong, the United Kingdom, New Zealand, the Netherlands, and the United States. In males also, Gardasil and Gardasil 9 protect against HPV types 6 and 11 which can cause genital warts, with the quadrivalent and nonavalent vaccines providing virtually complete protection. They reduce their risk of precancerous lesions caused by HPV. This reduction in precancerous lesions is predicted to reduce the rates of penile and anal cancer in men. Gardasil has been shown to also be effective in preventing high-risk HPV types 16 and 18 in males. While Gardasil and the Gardasil 9 vaccines have been approved for males, a third HPV vaccine, Cervarix, has not. Unlike the Gardasil-based vaccines, Cervarix does not protect against genital warts. Since penile and anal cancers are much less common than cervical cancer, HPV vaccination of young men is likely to be much less cost-effective than for young women. Gardasil is also used among men who have sex with men (MSM), who are at higher risk for genital warts, penile cancer, and anal cancer. Recommendations by national bodies Australia Australia introduced HPV vaccination for boys in 2013. Ireland Ireland introduced HPV vaccination for boys aged 13 as part of their National Immunization Plan in 2019. UK UK introduced HPV vaccination for boys aged 12 as part of their National Immunization Plan in 2019. Portugal Portugal introduced universal HPV vaccination for boys aged 10 years and above as part of its National Immunization Plan in 2020. United States On 9 September 2009, an advisory panel recommended that the Food and Drug Administration (FDA) of the USA license Gardasil in the United States for boys and men ages 9–26 for the prevention of genital warts. Soon after that, the vaccine was approved by the FDA for use in males aged 9 to 26 for prevention of genital warts and anal cancer. In 2011, an advisory panel for the US Centers for Disease Control and Prevention (CDC) recommended the vaccine for boys ages 11–12. This was intended to prevent genital warts and anal cancers in males, and possibly prevent head and neck cancer (though the vaccine's effectiveness against head and neck cancers has not yet been proven). The committee also made the vaccination recommendation for males 13 to 21 years who have not been vaccinated previously or who have not completed the three-dose series. 
For those under the age of 27 who have not been fully vaccinated, the CDC recommends vaccination. Also in 2011, Harald zur Hausen's support for vaccinating boys (so that they will be protected, and thereby so will women) was joined by professors Harald Moi and Ole-Erik Iversen. In 2018, the US Food and Drug Administration (FDA) released a summary basis for regulatory action and approval for expansion of usage and indication for Gardasil 9, the 9-valent HPV vaccine, to include men and women 27 to 45 years of age. Public health World Health Organization (WHO) The HPV vaccine is on the WHO Model List of Essential Medicines. The WHO recommends HPV vaccines as part of routine vaccinations in all countries, along with other prevention measures. The WHO's priority purpose of HPV immunization is the prevention of cervical cancer, which accounts for 82% of all HPV-related cancers; more than 95% of cervical cancers are caused by HPV. The WHO has a global strategy for cervical cancer elimination. Its first pillar is having 90% of girls fully vaccinated with the HPV vaccine by 15 years of age. The WHO-recommended primary target population for HPV vaccination is girls aged 9–14 years before they become sexually active. Females aged ≥15 years, boys, older males or MSM are secondary target populations. Cervical cancer screening is still required following vaccination. Global Cervical cancer The large majority of cervical cancer cases in 2020 (88%) occurred in low- and middle-income countries (LMICs), where they account for 17% of all cancers in women, compared with only 2% in high-income countries (HICs). In sub-Saharan Africa, the region with the highest rates of young women living with HIV (WLWH), approximately 20% of cervical cancer cases occur in WLWH. HPV infection is more likely to persist and to progress to cancer in WLWH. Mortality rates vary 50-fold between countries, ranging from <2 per 100 000 women in some HICs to >40 per 100 000 in some countries of sub-Saharan Africa. Of the 20 countries hardest hit by cervical cancer, 19 are in Africa. The US National Cancer Institute states "Widespread vaccination has the potential to reduce cervical cancer deaths around the world by as much as two-thirds if all women were to take the vaccine and if protection turns out to be long-term. In addition, the vaccines can reduce the need for medical care, biopsies, and invasive procedures associated with the follow-up from abnormal Pap tests, thus helping to reduce health care costs and anxieties related to abnormal Pap tests and follow-up procedures." In 2004, preventive vaccines already protected against the two HPV types (16 and 18) that cause about 70% of cervical cancers worldwide. Because of the distribution of HPV types associated with cervical cancer, the vaccines were likely to be most effective in Asia, Europe, and North America. Some other high-risk types cause a larger percentage of cancers in other parts of the world. Vaccines that protect against more of the types common in cancers would prevent more cancers, and be less subject to regional variation. For instance, a vaccine against the seven types most common in cervical cancers (16, 18, 45, 31, 33, 52, 58) would prevent an estimated 87% of cervical cancers worldwide. In 2008, only 41% of women with cervical cancer in the developing world got medical treatment. Therefore, prevention of HPV by vaccination may be a more effective way of lowering the disease burden in developing countries than cervical screening.
The European Society of Gynecological Oncology sees the developing world as most likely to benefit from HPV vaccination. However, individuals in many resource-limited nations, Kenya for example, are unable to afford the vaccine. In more developed countries, populations that do not receive adequate medical care, such as the poor or minorities in the United States or parts of Europe, also have less access to cervical screening and appropriate treatment, and are similarly more likely to benefit. In 2009, Dr. Diane Harper, a researcher for the HPV vaccines, questioned whether the benefits of the vaccine outweigh its risks in countries where Pap smear screening is common. She has also encouraged women to continue Pap screening after they are vaccinated and to be aware of potential adverse effects. United States In 2012, according to the CDC, the use of the HPV vaccine had cut rates of infection with HPV-6, -11, -16, and -18 in half in American teenagers (from 11.5% to 4.3%) and by one-third in American women in their early twenties (from 18.5% to 12.1%). Side effects HPV vaccines are safe and well tolerated and can be used in persons who are immunocompromised or HIV-infected. Pain at the site of injection occurs in between 35% and 88% of people. Redness and swelling at the site and fever may also occur. No link to Guillain–Barré syndrome has been found. There is no increased risk of serious adverse effects. Extensive clinical trial and post-marketing safety surveillance data indicate that both Gardasil and Cervarix are well tolerated and safe. When comparing the HPV vaccine to a placebo (control) vaccine taken by women, there is no difference in the risk of severe adverse events. United States , there were more than 57 million doses of Gardasil vaccine distributed in the United States, though it is unknown how many were administered. There have been 22,000 Vaccine Adverse Event Reporting System (VAERS) reports following the vaccination. 92% were reports of events considered to be non-serious (e.g., fainting, pain, and swelling at the injection site (arm), headache, nausea, and fever), and the rest were considered to be serious (death, permanent disability, life-threatening illness, and hospitalization). However, VAERS reports include any reported effects, whether coincidental or causal. In response to concerns regarding the rates of adverse events associated with the vaccine, the CDC stated: "When evaluating data from VAERS, it is important to note that for any reported event, no cause-and-effect relationship has been established. VAERS receives reports on all potential associations between vaccines and adverse events." , in the US there were 44 reports of death in females after receiving the vaccine. None of the 27 confirmed deaths of women and girls who had taken the vaccine were linked to the vaccine. There is no evidence suggesting that Gardasil causes or raises the risk of Guillain–Barré syndrome. Additionally, there have been rare reports of blood clots forming in the heart, lungs, and legs. A 2015 review conducted by the European Medicines Agency's Pharmacovigilance Risk Assessment Committee concluded that evidence does not support the idea that HPV vaccination causes complex regional pain syndrome or postural orthostatic tachycardia syndrome. , the CDC continued to recommend Gardasil vaccination for the prevention of four types of HPV. The manufacturer of Gardasil has committed to ongoing research assessing the vaccine's safety.
According to the Centers for Disease Control and Prevention (CDC) and the FDA, the rate of adverse side effects related to Gardasil immunization in the safety review was consistent with what has been seen in the safety studies carried out before the vaccine was approved and were similar to those seen with other vaccines. However, a higher proportion of syncope (fainting) was seen with Gardasil than is usually seen with other vaccines. The FDA and CDC have reminded healthcare providers that, to prevent falls and injuries, all vaccine recipients should remain seated or lying down and be closely observed for 15 minutes after vaccination. The HPV vaccination does not appear to reduce the willingness of women to undergo pap tests. Contraindications While the use of HPV vaccines can help reduce cervical cancer deaths by two-thirds around the world, not everyone is eligible for vaccination. Some factors exclude people from receiving HPV vaccines. These factors include: People with history of immediate hypersensitivity to vaccine components. Patients with a hypersensitivity to yeast should not receive Gardasil since yeast is used in its production. People with moderate or severe acute illnesses. This does not completely exclude patients from vaccination but postpones the time of vaccination until the illness has improved. Pregnancy In the Gardasil clinical trials, 1,115 pregnant women received the HPV vaccine. Overall, the proportions of pregnancies with an adverse outcome were comparable in subjects who received Gardasil and subjects who received a placebo. However, the clinical trials had a relatively small sample size. , the vaccine is not recommended for pregnant women. The FDA has classified the HPV vaccine as a pregnancy Category B, meaning there is no apparent harm to the fetus in animal studies. HPV vaccines have not been causally related to adverse pregnancy outcomes or adverse effects on the fetus. However, data on vaccination during pregnancy is very limited, and vaccination during the pregnancy term should be delayed until more information is available. If a woman is found to be pregnant during the three-dose series of vaccination, the series should be postponed until pregnancy has been completed. While there is no indication for intervention for vaccine dosages administered during pregnancy, patients and healthcare providers are encouraged to report exposure to vaccines to the appropriate HPV vaccine pregnancy registry. Mechanism of action The HPV vaccines are based on hollow virus-like particles (VLPs) assembled from recombinant HPV coat proteins. The natural virus capsid is composed of two proteins, L1 and L2, but vaccines only contain L1. Gardasil contains inactive L1 proteins from four different HPV strains: 6, 11, 16, and 18, synthesized in the yeast Saccharomyces cerevisiae. Each vaccine dose contains 225 μg of aluminum, 9.56 mg of sodium chloride, 0.78 mg of L-histidine, 50 μg of polysorbate 80, 35 μg of sodium borate, and water. The combination of ingredients totals 0.5 mL. HPV types 16 and 18 cause about 70% of all cervical cancer. Gardasil also targets HPV types 6 and 11, which together cause about 90 percent of all cases of genital warts. Gardasil and Cervarix are designed to elicit virus-neutralizing antibody responses that prevent initial infection with the HPV types represented in the vaccine. 
The vaccines have been shown to offer 100 percent protection against the development of cervical pre-cancers and genital warts caused by the HPV types in the vaccine, with few or no side effects. The protective effects of the vaccine are expected to last a minimum of 4.5 years after the initial vaccination. While the study period was not long enough for cervical cancer to develop, the prevention of these cervical precancerous lesions (or dysplasias) is believed highly likely to result in the prevention of those cancers. History In 1983, Harald zur Hausen culminated decades of research with the discovery that certain variants of human papillomaviruses (HPVs) could be found in a majority of tested cervical cancer specimens. This provided strong scientific evidence for a link between the viral infection and cervical cancer, and provided strong motivations for further research into HPVs. In 1990, Ian Frazer partnered with Jian Zhou and Xiao-Yi Sun at the University of Queensland in Australia to create synthetic HPVs for study in the lab. While working towards this goal, they were able to synthetically produce some of the capsid proteins of the HPVs, L1 and L2. Recognizing the potential of these proteins to form the basis of a vaccine, they filed a provisional patent on their production process in Australia in 1991. Further development then stalled while developers were being convinced of the market for the vaccine, and while patent offices determined to whom the discovery belonged. Three other organizations, the US National Cancer Institute, Georgetown University, and University of Rochester, were also vying for the patent as a result of their contributions in the field. After the University of Queensland provided evidence of the correctness of its L1 sequencing in 2004, the US patent court of appeals accorded it priority in 2009. As a result, the University of Queensland receives royalty payments from the sale of these vaccines even today. By the early 2000s, developers, convinced of the market for the vaccine, had begun refining, researching, and trialing L1-based HPV vaccines. In 2006, the FDA approved the first preventive HPV vaccine, marketed by Merck & Co. under the trade name Gardasil. According to a Merck press release, by the second quarter of 2007 it had been approved in 80 countries, many under fast-track or expedited review. Early in 2007, GlaxoSmithKline filed for approval in the United States for a similar preventive HPV vaccine, known as Cervarix. In June 2007, this vaccine was licensed in Australia, and it was approved in the European Union in September 2007. Cervarix was approved for use in the US in October 2009. Harald zur Hausen was awarded half of the $1.4 million Nobel Prize in Medicine in 2008 for his work showing that cervical cancer is caused by certain types of HPVs. In December 2014, the US Food and Drug Administration (FDA) approved a vaccine called Gardasil 9 to protect females between the ages of 9 and 26 and males between the ages of 9 and 15 against nine strains of HPV. Gardasil 9 protects against infection from the strains covered by the first generation of Gardasil (HPV-6, HPV-11, HPV-16, and HPV-18) and protects against five other HPV strains responsible for 20% of cervical cancers (HPV-31, HPV-33, HPV-45, HPV-52, and HPV-58). Society and culture Economics , vaccinating girls and young women was estimated to be cost-effective in low- and middle-income countries, especially in places without organized programs for screening cervical cancer.
When the cost of the vaccine itself, or the cost of administering it to individuals, were higher, or if cervical cancer screening were readily available, then vaccination was less likely to be cost-effective. From a public health point of view, vaccinating men as well as women decreases the virus pool within the population but is only cost-effective to vaccinate men when the uptake in the female population is extremely low. In the United States, the cost per quality-adjusted life year is greater than US$100,000 for vaccinating the male population, compared to less than US$50,000 for vaccinating the female population. This assumes a 75% vaccination rate. In 2013, the two companies that sell the most common vaccines announced a price cut to less than US$5 per dose to poor countries, as opposed to US$130 per dose in the US. Brand names The vaccine is sold under various brand names including Gardasil, Cervarix, Cecolin, and Walrinvax. Vaccine implementation The primary target group in most of the countries recommending HPV vaccination is young adolescent girls, aged 9–14. It's particularly cost-effective in resource-constrained settings. The vaccination schedule depends on the age of the vaccine recipient. As of 2023, 27% of girls aged 9–14 years worldwide received at least one dose (37 countries were implementing the single-dose schedule). Global coverage for the first dose of HPV vaccine in girls grew from 20% in 2022 to 27% in 2023. As of 10 September 2024, 57 countries are implementing the single-dose schedule. Vaccinating a large portion of the population may also benefit the unvaccinated by way of herd immunity. HPV vaccine introductions have been hampered by global supply shortages since 2018. Between 2019 and 2021, due to the COVID-19 pandemic, HPV vaccination programs have been significantly affected in the United States, low-income and lower-middle-income countries. In developed countries, the widespread use of cervical "Pap smear" screening programs has reduced the incidence of invasive cervical cancer by 50% or more. Preventive vaccines reduce but do not eliminate the chance of getting cervical cancer. Therefore, experts recommend that women combine the benefits of both programs by seeking regular Pap smear screening, even after vaccination. School-entry vaccination requirements were found to increase the use of the HPV vaccine. HPV vaccine included in national immunization program At least 144 countries (at least 74% of WHO member states) provided the HPV vaccine in their national immunization schedule for girls, as of November 2024. As of 2022, 47 countries (24% of WHO member states) also did it for boys. Africa Of the 20 hardest hit countries by cervical cancer, 19 are in Africa. In 2013, with support from Gavi, the Vaccine Alliance, eight low-income countries, mainly in sub-Saharan Africa, began the rollout of the HPV vaccine. Algeria No Angola No Chad No Central African Republic No Democratic Republic of Congo No Ghana No (GAVI support in 2013) Guinea-Bissau No Kenya Both Cervarix and Gardasil are approved for use within Kenya by the Pharmacy and Poisons Board. However, at a cost of 20,000 Kenyan shillings, which is more than the average annual income for a family, the director of health promotion in the Ministry of Health, Nicholas Muraguri, states that many Kenyans are unable to afford the vaccine. It has received GAVI support in 2013. 
Madagascar No (GAVI support in 2013) Malawi Yes (GAVI support in 2013) Mozambique Yes (GAVI support for HPV demonstration projects in 2014) Niger No (GAVI support in 2013) Nigeria Yes Rwanda Yes (GAVI support in 2014) Senegal Yes Sierra Leone Yes (GAVI support in 2013) South Africa Cervical cancer represents the most common cause of cancer-related deaths—more than 3,000 deaths per year—among women in South Africa because of high HIV prevalence, making the introduction of the vaccine highly desirable. A Papanicolaou test program was established in 2000 to help screen for cervical cancer, but since this program has not been implemented widely, vaccination would offer more efficient form of prevention. In May 2013 the Minister of Health of South Africa, Aaron Motsoaledi, announced the government would provide free HPV vaccines for girls aged 9 and 10 in the poorest 80% of schools starting in February 2014 and the fifth quintile later on. South Africa will be the first African country with an immunisation schedule that includes vaccines to protect people from HPV infection, but because the effectiveness of the vaccines in women who later become infected with HIV is not yet fully understood, it is difficult to assess how cost-effective the vaccine will be. Negotiations are currently underway for more affordable HPV vaccines since they are up to 10 times more expensive than others already included in the immunization schedule. United Republic of Tanzania Yes (GAVI support in 2013) Zimbabwe Yes (GAVI support for HPV demonstration projects in 2014) Australia In April 2007, Australia became the second country—after Austria—to introduce a government-funded National Human Papillomavirus (HPV) Vaccination Program to protect young women against HPV infections that can lead to cancers and disease. The National HPV Vaccination Program is listed on the National Immunisation Program (NIP) Schedule and funded under the Immunise Australia Program. The Immunise Australia Program is a joint Federal, State, and Territory Government initiative to increase immunisation rates for vaccine-preventable diseases. The National HPV Vaccination Program for females was made up of two components: an ongoing school-based program for 12- and 13-year-old girls; and a time-limited catch-up program (females aged 14–26 years) delivered through schools, general practices, and community immunization services, which ceased on 31 December 2009. During 2007–2009, an estimated 83% of females aged 12–17 years received at least one dose of HPV vaccine and 70% completed the 3-dose HPV vaccination course. By 2017, HPV coverage data on the Immunise Australia website show that by 15 years of age, over 82% of Australian females had received all three doses. Since the National HPV Vaccination Program commenced in 2007, there has been a reduction in HPV-related infections in young women. A study published in The Journal of Infectious Diseases in October 2012 found the prevalence of vaccine-preventable HPV types (6, 11, 16, and 18) in Papanicolaou test results of women aged 18–24 years has significantly decreased from 28.7% to 6.7% four years after the introduction of the National HPV Vaccination Program. A 2011 report published found the diagnosis of genital warts (caused by HPV types 6 and 11) had also decreased in young women and men. 
In October 2010, the Australian regulatory agency, the Therapeutic Goods Administration, extended the registration of the quadrivalent vaccine (Gardasil) to include use in males aged 9 through 26 years, for the prevention of external genital lesions and infection with HPV types 6, 11, 16 and 18. In November 2011, the Pharmaceutical Benefits Advisory Committee (PBAC) recommended the extension of the National HPV Vaccination Program to include males. The PBAC based its recommendation on the preventive health benefits that can be achieved, such as a reduction in the incidence of anal and penile cancers and other HPV-related diseases. In addition to the direct benefit to males, it was estimated that routine HPV vaccination of adolescent males would contribute to the reduction of vaccine HPV-type infection and associated disease in women through herd immunity. In 2012, the Australian Government announced it would be extending the National HPV Vaccination Program to include males, through the National Immunisation Program Schedule. Updated results were reported in 2014. Since February 2013, free HPV vaccine has been provided through school-based programs for: males and females aged 12–13 years (ongoing program); and males aged between 14 and 15 years – until the end of the school year in 2014 (catch-up program). Canada HPV vaccines were first approved in Canada in July 2006 for use in females, and in February 2010 for use in males. The vaccines Cervarix, Gardasil, and Gardasil 9 are authorized for use in Canada, with Gardasil 9 being the primary vaccine used. All provinces and territories (except Quebec) administer Gardasil 9 on a two- or three-dose schedule: individuals under age 15 are given two doses, while individuals who are immunocompromised, living with HIV, or aged 15+ are given three doses. Quebec provides two doses to individuals under 18 years (the first dose is Gardasil 9, and the second dose is Cervarix) and three doses of Gardasil 9 to people aged 18+. The administration of free vaccination programs is provided by individual province and territory governments. All provincial and territorial governments offer free vaccination for school-aged children, irrespective of gender. The school grades in which the vaccine is provided vary by province and territory: grade 4 and secondary 3 (Quebec); grade 6 (British Columbia, Manitoba, Newfoundland and Labrador, Nunavut, Prince Edward Island, Saskatchewan, Yukon); grades 6 and 9 (Alberta); grades 4–6 (Northwest Territories); or grade 7 (New Brunswick, Nova Scotia, Ontario). Publicly funded HPV vaccines are also provided in certain provinces and territories for other groups of people, such as men who have sex with men, individuals living with HIV, and individuals who identify as transgender. Individuals who do not qualify for any of the publicly funded programs can privately purchase the three-dose HPV vaccine series for $510 to $630. China GlaxoSmithKline China announced in 2016 that Cervarix (HPV vaccine 16 and 18) had been approved by the China Food and Drug Administration (CFDA). Cervarix is registered in China for girls aged 9 to 45, following a 3-dose program administered within 6 months. Cervarix was launched in China in 2017, and it was the first approved HPV vaccine in China. Colombia The vaccine was introduced in 2012, approved for girls aged 9. The HPV vaccine was initially offered to girls aged 9 and older who were attending the fourth grade of school. 
Since 2013, coverage has been extended to girls in school from grade four (who have reached the age of 9) to grade eleven (independent of age), and to girls not in school aged from 9 years to 17 years, 11 months and 29 days. Costa Rica Since June 2019, the vaccine has been administered compulsorily by the state, free of charge, to girls at ten years of age. Europe As of 2020, the European Centre for Disease Prevention and Control (ECDC) reports that the vaccine uptake among females is the following: Finland, Hungary, Iceland, Malta, Norway, Portugal, Spain, Sweden, and the UK have reported national coverage above 70%. In some countries, including France and Germany, coverage has been consistently below 50%, though recently increasing in France. Hong Kong HPV vaccines are approved for use in Hong Kong. As part of the Hong Kong Childhood Immunisation Programme, HPV vaccines became mandatory for students in the 2019/2020 school year, exclusively for females at primary 5 and 6 levels. India The HPV vaccine (both Gardasil and Cervarix) was introduced in Indian markets in 2008, but it is yet to be included in the country's universal immunization programme. In Punjab and Sikkim (states of India), it is included in the state immunization program and the coverage is up to 97% of targeted girls. HPV vaccination has been recommended by the National Technical Advisory Group on Immunization, but has not been implemented in India as of 2018. In 2023, the Serum Institute of India (SII) developed a new vaccine, Cervavax, targeting HPV types 6, 11, 16, and 18. The newly developed vaccine shows capability equal to Merck's Gardasil 9. The Cervavax vaccine is not commercially available yet. In 2024, an HPV vaccine drive was announced by Finance Minister Nirmala Sitharaman as part of the Nari Shakti ("Women Power") campaign but has not been implemented yet. The vaccine is commercially available in the market at a price between ₹ 3,000 ($35) and ₹ 15,000 ($180). Ireland The HPV vaccination programme in Ireland is part of the national strategy to protect females from cervical cancer. Since 2009, the Health Service Executive has offered the HPV vaccine, free of charge, to all girls from the first year onwards (ages 12–13). Secondary schools began implementing the vaccine program on an annual basis from September 2010 onwards. The programme was expanded to include males in 2019. Two HPV vaccines are licensed for use in Ireland: Cervarix and Gardasil. To ensure high uptake, the vaccine is administered to teenagers aged 12–13 in their first year of secondary school, with the first dose administered between September and October and the final dose in April of the following year. Males and females aged 12–13 who are outside of the traditional school setting (home school, etc.) are invited to Health Service Executive clinics for their vaccines. HPV vaccination in Ireland is not mandatory and consent is obtained before vaccination. For males and females aged 16 and under, consent is granted by a parent or guardian unless it is explicitly refused by the child. Any male or female aged 16 and over may provide their own consent if they want to be vaccinated. HIQA has stated the vaccine will provide further protection, particularly to men who have sex with men. The vaccine has been extended following evidence that 25% of HPV cancers occur in men. Additionally, HIQA is aiming to replace the current vaccination, which covers 4 major HPV strains, with an updated vaccine protecting against nine strains. 
The cost of the "gender-neutral nine-valent" vaccine is estimated to be nearly €11.66 million over the next five years. Israel Introduced in 2012. Target age group 13–14. Fully financed by national health authorities only for this age group. For the year 2013–2014, girls in the eighth grade may get the vaccine free of charge only in school, and not in Ministry of Health offices or clinics. Girls in the ninth grade may receive the vaccine free of charge only at Ministry of Health offices, and not in schools or clinics. Religious and conservative groups are expected to refuse the vaccination. Japan The quadrivalent vaccine has been approved for males and the 9-valent one for females. Since 2010, young women in Japan have been eligible to receive the cervical cancer vaccination for free. In June 2013, the Japanese Ministry of Health, Labour and Welfare mandated that, before administering the vaccine, medical institutions must inform women that the ministry does not recommend it. However, the vaccine is still available at no cost to Japanese women who choose to accept the vaccination. It has been widely available only since April 2013, fully financed by national health authorities for females aged 11 to 16 years. In June 2013, however, Japan's Vaccine Adverse Reactions Review Committee (VARRC) suspended the recommendation of the vaccine due to fears of adverse events. This directive has been criticized by researchers at the University of Tokyo as a failure of governance, since the decision was taken without the presentation of adequate scientific evidence. At the time, Ministry spokespeople emphasized that "The decision does not mean that the vaccine itself is problematic from the viewpoint of safety," but that they wanted time to conduct analyses on possible adverse effects, "to offer information that can make the people feel more at ease." However, the suspension of the Ministry's endorsement was still in place as of February 2019, by which time the HPV vaccination rate among younger women had fallen from approximately 70% in 2013 to 1% or less. Over an overlapping time period (2009–2019), the age-adjusted mortality rate from cervical cancer increased by 9.6%. Japan resumed active promotion of HPV vaccination in April 2022. In December 2021, a panel of the Ministry of Health, Labour and Welfare had decided, after an eight-year hiatus, to offer free vaccination to women born between fiscal 1997 and fiscal 2005 who had missed the country's free vaccination program. In 2022, 225,993 girls received a first dose of routine vaccination, and the vaccination rate was 42.2%. The Osaka University Graduate School of Medicine and Faculty of Medicine reported the first vaccination rate and cumulative first vaccination rate for each year of birth in 2022 at a meeting of the Ministry of Health, Labour and Welfare. For 12-year-old girls born in 2010, the rate was 2.8%. Laos In 2013, Laos began implementation of the HPV vaccine, with the assistance of Gavi, the Vaccine Alliance. Malaysia In 2010, Malaysia launched a national vaccination program to provide three doses of HPV vaccines to all 13-year-old girls. In 2015, the program transitioned to a two-dose regimen. 
High rates of school enrolment for 13-year-olds (96.0%) and retention of female students in secondary schools have made it possible for the HPV vaccination to be integrated into the School Health Service Program and to ensure equal access to the HPV vaccine between urban and rural areas. Mexico The vaccine was introduced in 2008 to 5% of the population. This segment of the population had the lowest development index, which correlates with the highest incidence of cervical cancer. The HPV vaccine is delivered to girls 12–16 years old following the 0-2-6 dosing schedule. By 2009, Mexico had expanded vaccine use to girls 9–12 years of age; the dosing schedule in this group was different, with six months between the first and second doses and the third dose given 60 months later. In 2011, Mexico approved nationwide use of the HPV vaccination program, including vaccination of all 9-year-old girls. New Zealand Immunization as of 2017 is free for males and females aged 9 to 26 years. The public funding began on 1 September 2008. The vaccine was initially offered only to girls, usually through a school-based program in Year 8 (approximately age 12), but also through general practices and some family planning clinics. Over 200,000 New Zealand girls and young women have received HPV immunization. Panama The vaccine was added to the national immunization program in 2008, to target 10-year-old girls. South Korea On 27 July 2007, the South Korean government approved Gardasil for use in girls and women aged 9 to 26 and boys aged 9 to 15. Approval for use in boys was based on safety and immunogenicity but not efficacy. Since 2016, HPV vaccination has been part of the National Immunization Program, offered free of charge to all children under 12 in South Korea, with costs fully covered by the Korean government. For 2016 only, Korean girls born between 1 January 2003 and 31 December 2004 were also eligible to receive the free vaccinations as a limited-time offer. From 2017, the free vaccines are available only to those under 12. Trinidad and Tobago Introduced in 2013. Target group 9–26. Fully financed by national health authorities. However, the program was suspended later that year owing to objections and concerns raised by the Catholic Board, though the vaccine remains fully available in local health centers. United Arab Emirates The World Health Organization ranks cervical cancer as the fourth most frequent cancer among women in the UAE, at 7.4 per 100,000 women, and according to the Abu Dhabi Health Authority, the cancer is also the seventh highest cause of death of women in the U.A.E. In 2007, the HPV vaccine was approved for girls and young women, 15 to 26 years of age, and offered optionally at hospitals and clinics. Moreover, starting 1 June 2013, the vaccine was offered free of charge for women between the ages of 18 and 26 in Abu Dhabi. However, on 14 September 2018, the U.A.E.'s Ministry of Health and Community Protection announced that the HPV vaccine had become a mandatory part of the routine vaccinations for all girls in the U.A.E. The vaccine is to be administered to all schoolgirls in the 8th grade, aged 13. United Kingdom In the UK the vaccine is licensed for females aged 9–26, for males aged 9–15, and for men who have sex with men aged 18–45. HPV vaccination was introduced into the national immunisation programme in September 2008, for girls aged 12–13 across the UK. A two-year catch-up campaign started in autumn 2009 to vaccinate all girls up to 18 years of age. 
Catch-up vaccination was offered to girls aged between 16 and 18 from autumn 2009, and to girls aged between 15 and 17 from autumn 2010. It will be many years before the vaccination programme affects cervical cancer incidence, so women are advised to continue accepting their invitations for cervical screening. Men who have sex with men up to and including the age of 45 became eligible for free HPV vaccination on the NHS in April 2018. They get the vaccine by visiting sexual health clinics and HIV clinics in England. A meta-analysis of vaccinations for men who have sex with men showed that this strategy is most effective when combined with gender-neutral vaccination of all boys, regardless of their sexual orientation. From the 2019/2020 school year, it is expected that 12- to 13-year-old boys will also become eligible for the HPV vaccine as part of the national immunisation programme. This follows a statement by the Joint Committee on Vaccination and Immunisation. The first dose of the HPV vaccine will be offered routinely to boys aged 12 and 13 in school year 8, in the same way that it is currently (May 2018) offered to girls. Boots UK opened a private HPV vaccination service for boys and men aged 12–44 years in April 2017 at a cost of £150 per vaccination. In children aged 12–14 years two doses are recommended, while for those aged 15–44 years a course of three is recommended. Cervarix was the HPV vaccine offered from its introduction in September 2008 to August 2012, with Gardasil being offered from September 2012. The change was motivated by Gardasil's added protection against genital warts. United States Adoption On 30 August 2021, fifteen leading academic and freestanding cancer centers with membership in the Association of American Cancer Institutes (AACI), all National Cancer Institute (NCI)-designated cancer centers, the American Cancer Society, the American Society of Clinical Oncology, the American Association for Cancer Research, and the St. Jude Children's Research Hospital issued a joint statement urging US health care systems, physicians, parents, children, and young adults to get HPV vaccination and other recommended vaccinations back on track during National Immunization Awareness Month. Early on, about one-quarter of US females aged 13–17 years had received at least one of the three HPV shots; the proportion of such females receiving an HPV vaccination later rose to 38%. The government began recommending vaccination for boys in 2011; the vaccination rate among boys (at least one dose) subsequently reached 35%. According to the US Centers for Disease Control and Prevention (CDC), getting as many girls vaccinated as early and as quickly as possible will reduce the cases of cervical cancer among middle-aged women in 30 to 40 years and reduce the transmission of this highly communicable infection. Barriers include the limited understanding by many people that HPV causes cervical cancer, the difficulty of getting pre-teens and teens into the doctor's office to get a shot, and the high cost of the vaccine ($120/dose, $360 total for the three required doses, plus the cost of doctor visits). Community-based interventions can increase the uptake of HPV vaccination among adolescents. A survey was conducted in 2009 to gather information about knowledge and adoption of the HPV vaccine. Thirty percent of 13- to 17-year-olds and 9% of 18- to 26-year-olds out of the total 1,011 young women surveyed reported receipt of at least one HPV injection. 
Knowledge about HPV varied; however, 5% or fewer subjects believed that the HPV vaccine precluded the need for regular cervical cancer screening or safe-sex practices. Few girls and young women overestimated the protection provided by the vaccine. Despite moderate uptake, many females at risk of acquiring HPV have not yet received the vaccine. For example, young black women are less likely to receive HPV vaccines compared to young white women. Additionally, young women of all races and ethnicities without health insurance are less likely to get vaccinated. As of 2017, Gardasil 9 is the only HPV vaccine available in the United States, as it provides protection against more HPV types than the earlier approved vaccines (the original Gardasil and Cervarix). Since the approval of Gardasil in 2006, and despite low vaccine uptake, the prevalence of HPV among teenagers aged 14–19 has been cut in half, with an 88% reduction among vaccinated women. No decline in prevalence was observed in other age groups, indicating that the vaccine was responsible for the sharp decline in cases. The drop in the number of infections is expected, in turn, to lead to a decline in cervical and other HPV-related cancers in the future. Legislation Four jurisdictions have laws that require HPV vaccination for school students: Hawaii, Rhode Island, Virginia, and Washington, D.C. Students in those jurisdictions must have started HPV vaccination before entering the 7th grade. All school immunization laws grant exemptions to children for medical reasons, with other "opt-out" policies varying by state. Shortly after the first HPV vaccine was approved, bills to make the vaccine mandatory for school attendance were introduced in many states. Only two such bills passed (in Virginia and Washington DC) during the first four years after vaccine introduction. Mandates have been effective at increasing uptake of other vaccines, such as mumps, measles, rubella, and hepatitis B (which is also sexually transmitted). However, for most of those vaccines, mandates were considered only five or more years after release, once financing and supply had been arranged, further safety data had been gathered, and education efforts had increased understanding. Most public policies, including school mandates, have not been effective in promoting HPV vaccination, while receiving a recommendation from a physician increased the probability of vaccination. In July 2015, Rhode Island added an HPV vaccine requirement for admittance into public schools. This mandate requires all students entering the seventh grade to receive at least one dose of the HPV vaccine starting in August 2015, all students entering the eighth grade to receive at least two doses of the HPV vaccine starting in August 2016, and all students entering the ninth grade to receive at least three doses of the HPV vaccine starting in August 2017. No legislative action is required for the Rhode Island Department of Health to add new vaccine mandates. Rhode Island is the only state that requires the vaccine for both male and female 7th graders. Immigrants Between July 2008 and December 2009, proof of the first of three doses of HPV Gardasil vaccine was required for women ages 11–26 intending to legally enter the United States. This requirement stirred controversy because of the cost of the vaccine, and because all the other vaccines so required prevent diseases that are spread by the respiratory route and are considered highly contagious. 
The Centers for Disease Control and Prevention repealed all HPV vaccination directives for immigrants effective 14 December 2009. Uptake in the United States appears to vary by ethnicity and whether someone was born outside the United States. Coverage Measures have been considered, including requiring insurers to cover HPV vaccination and funding HPV vaccines for those without insurance. The cost of the HPV vaccines for females under 18 who are uninsured is covered under the federal Vaccines for Children Program. As of 23 September 2010, vaccines are required to be covered by insurers under the Patient Protection and Affordable Care Act. HPV vaccines specifically are to be covered at no charge for women, including those who are pregnant or nursing. Medicaid covers HPV vaccination in accordance with the ACIP recommendations, and immunizations are a mandatory service under Medicaid for eligible individuals under age 21. In addition, Medicaid includes the Vaccines for Children Program. This program provides immunization services for children 18 and under who are Medicaid eligible, uninsured, underinsured, receiving immunizations through a Federally Qualified Health Center or Rural Health Clinic, or are Native American or Alaska Native. The vaccine manufacturers also offer help for people who cannot afford HPV vaccination. GlaxoSmithKline's Vaccines Access Program (1-877-VACC-911) provides Cervarix free of charge to low-income women, ages 19 to 25, who do not have insurance. Merck's Vaccine Patient Assistance Program (1-800-293-3881) provides Gardasil free to low-income women and men, ages 19 to 26, who do not have insurance, including immigrants who are legal residents. Opposition in the United States The idea that the HPV vaccine is linked to increased sexual behavior is not supported by scientific evidence. A review of nearly 1,400 adolescent girls found no difference in teen pregnancy, incidence of sexually transmitted infection, or contraceptive counseling regardless of whether they received the HPV vaccine. Thousands of Americans die each year from cancers preventable by the vaccine. A disproportionate rate of HPV-related cancers exists amongst LatinX populations, leading researchers to explore how communication and messaging can be adjusted to address vaccine hesitancy. Insurance companies There has been significant opposition from health insurance companies to covering the cost of the vaccine ($360). Religious and conservative groups Opposition based on concerns about the safety of the vaccine has been addressed through studies, but there is still some opposition focused on the sexual implications of the vaccine. Conservative groups in the US have opposed the concept of making HPV vaccination mandatory for pre-adolescent girls, claiming that making the vaccine mandatory is a violation of parental rights and that it will give a false sense of immunity to sexually transmitted infection, leading to early sexual activity. (See Peltzman effect.) Both the Family Research Council and the group Focus on the Family support widespread (universal) availability of HPV vaccines but oppose mandatory HPV vaccinations for entry to public school. Parents also express confusion over recent mandates for entry to public school, pointing out that HPV is transmitted through sexual contact, not through attending school with other children. Conservative groups are concerned that children will see the vaccine as a safeguard against STIs and will have sex sooner than they would without the vaccine, while failing to use contraceptives. 
However, the American Academy of Pediatrics disagreed with the argument that the vaccine increases sexual activity among teens. Christine Peterson, director of the University of Virginia's Gynecology Clinic, said "The presence of seat belts in cars doesn't cause people to drive less safely. The presence of a vaccine in a person's body doesn't cause them to engage in risk-taking behavior they would not otherwise engage in." A 2018 study of college-aged students found that HPV vaccination did not increase sexual activity. Parental opposition Many parents opposed to providing the HPV vaccine to their pre-teens agree the vaccine is safe and effective, but find talking to their children about sex uncomfortable. Elizabeth Lange, of Waterman Pediatrics in Providence, RI, addresses this concern by emphasizing what the vaccine is doing for the child. Lange suggests parents should focus on the cancer prevention aspect without being distracted by words like 'sexually transmitted'. Everyone wants cancer prevention, yet here parents are denying their children a form of protection due to the nature of the cancer—Lange suggests that this much controversy would not surround a breast cancer or colon cancer vaccine. The HPV vaccine is suggested for 11-year-olds because it should be administered before possible exposure to HPV, but also because the immune system has the highest response for creating antibodies around this age. Lange also emphasized the studies showing that the HPV vaccine does not cause children to be more promiscuous than they would be without the vaccine. Controversy over the HPV vaccine remains present in the media. Parents in Rhode Island have created a Facebook group called "Rhode Islanders Against Mandated HPV Vaccinations" in response to Rhode Island's mandate that males and females entering the 7th grade, as of September 2015, be vaccinated for HPV before attending public school. Physician impact The effectiveness of a physician's recommendation for the HPV vaccine also contributes to low vaccination rates and controversy surrounding the vaccine. A 2015 study of national physician communication and support for the HPV vaccine found physicians routinely recommend HPV vaccines less strongly than they recommend Tdap or meningitis vaccines, find the discussion about HPV to be long and burdensome, and discuss the HPV vaccine last, after all other vaccines. Researchers suggest these factors discourage patients and parents from setting up timely HPV vaccines. To increase vaccination rates, this issue must be addressed and physicians should be better trained to handle discussing the importance of the HPV vaccine with patients and their families. Ethics Some researchers have compared the need for adolescent HPV vaccination to that of other childhood diseases such as chicken pox, measles, and mumps. This is because vaccination before infection decreases the risk of several forms of cancer. There has been some controversy around the HPV vaccine's rollout and distribution. Countries have taken different routes based on economics and social climate leading to issues of forced vaccination and marginalization of segments of the population in some cases. The rollout of a country's vaccination program is more divisive, compared to the act of providing vaccination against HPV. In more affluent countries, arguments have been made for publicly funded programs aimed at vaccinating all adolescents voluntarily. 
These arguments are supported by World Health Organization (WHO) surveys showing the effectiveness of cervical cancer prevention with HPV vaccination. In developing countries, the cost of the vaccine, dosing schedule, and other factors have led to suboptimal levels of vaccination. Future research is focused on low-cost generics and single-dose vaccination in efforts to make the vaccine more accessible. Research There are high-risk HPV types that are not affected by available vaccines. Ongoing research is focused on the development of HPV vaccines that will offer protection against a broader range of HPV types. One such method is a vaccine based on the minor capsid protein L2, which is highly conserved across HPV genotypes. Efforts for this have included boosting the immunogenicity of L2 by linking together short amino acid sequences of L2 from different oncogenic HPV types or by displaying L2 peptides on a more immunogenic carrier. There is also substantial research interest in the development of therapeutic vaccines, which seek to elicit immune responses against established HPV infections and HPV-induced cancers. After exposure Although HPV vaccination is most encouraged before any exposure to the target strains, its use is still beneficial in women who have contracted some of the target types, because it is unlikely for a person to have been exposed to all target types. According to a 2008 article by the editor-in-chief of Harvard Women's Health Watch, the quadrivalent vaccine is able to reduce the occurrence of warts and precancerous lesions in HPV-positive women, and also appeared to reduce the chance of infection by non-targeted types. A 2023 review article finds that vaccination reduces the chance of further HPV-associated diseases even in those already showing HPV-related precancers and diseases. At this point the standard vaccine is not believed to be therapeutic, so this effect is attributed to the vaccine preventing the establishment of new infections. Therapeutic vaccines In addition to preventive vaccines, laboratory research and several human clinical trials are focused on the development of therapeutic HPV vaccines. In general, these vaccines focus on the main HPV oncogenes, E6 and E7. Since expression of E6 and E7 is required for promoting the growth of cervical cancer cells (and cells within warts), it is hoped that immune responses against the two oncogenes might eradicate established tumors. There is a working therapeutic HPV vaccine that has gone through three clinical trials; it is officially called the MEL-1 vaccine but is also known as the MVA-E2 vaccine. One study has suggested an immunogenic peptide pool containing epitopes that can be effective against all the high-risk HPV strains circulating globally, built from 14 conserved immunogenic peptide fragments from four early proteins (E1, E2, E6 and E7) of 16 high-risk HPV types that provide CD8+ responses. The therapeutic DNA vaccine VGX-3100, which consists of plasmids pGX3001 and pGX3002, has been granted a waiver by the European Medicines Agency for pediatric treatment of squamous intraepithelial lesions of the cervix caused by HPV types 16 and 18. 
According to an article published 16 September 2015 in The Lancet, which reviewed the safety, efficacy, and immunogenicity of VGX-3100 in a double-blind, randomized controlled trial (phase 2b) targeting HPV-16 and HPV-18 E6 and E7 proteins for cervical intraepithelial neoplasia 2/3, it is the first therapeutic vaccine to show efficacy against CIN 2/3 associated with HPV-16 and HPV-18. In June 2017, VGX-3100 entered a phase III clinical trial called REVEAL-1 for the treatment of HPV-induced high-grade squamous intraepithelial lesions. The estimated completion time for collecting primary clinical endpoint data is August 2019. As of October 2020, there are multiple therapeutic HPV vaccines in active development and in clinical trials, based on diverse vaccine platforms (protein-based, viral vector, bacterial vector, lipid encapsulated mRNA). Awards In 2009, as part of the Q150 celebrations, the cervical cancer vaccine was announced as one of the Q150 Icons of Queensland for its role in "innovation and invention". In 2017, National Cancer Institute scientists Douglas R. Lowy and John T. Schiller received the Lasker-DeBakey Clinical Medical Research Award for their contributions leading to the development of HPV vaccines.
Biology and health sciences
Vaccines
Health
11469677
https://en.wikipedia.org/wiki/Intensive%20animal%20farming
Intensive animal farming
Intensive animal farming, industrial livestock production, and macro-farms, also known as factory farming, is a type of intensive agriculture, specifically an approach to animal husbandry designed to maximize production while minimizing costs. To achieve this, agribusinesses keep livestock such as cattle, poultry, and fish at high stocking densities, at large scale, and using modern machinery, biotechnology, and global trade. The main products of this industry are meat, milk and eggs for human consumption. While intensive animal farming can produce large amounts of meat at low cost with reduced human labor, it is controversial as it raises several ethical concerns, including animal welfare issues (confinement, mutilations, stress-induced aggression, breeding complications), harm to the environment and wildlife (greenhouse gases, deforestation, eutrophication), public health risks (zoonotic diseases, pandemic risks, antibiotic resistance), and worker exploitation, particularly of undocumented workers. History Intensive animal farming is a relatively recent development in the history of agriculture, utilizing scientific discoveries and technological advances to enable changes in agricultural methods that increase production. Innovations from the late 19th century generally parallel developments in mass production in other industries in the latter part of the Industrial Revolution. The discovery of vitamins and their role in animal nutrition, in the first two decades of the 20th century, led to vitamin supplements, which allowed chickens to be raised indoors. The discovery of antibiotics and vaccines facilitated raising livestock in larger numbers by reducing disease. Chemicals developed for use in World War II gave rise to synthetic pesticides. Developments in shipping networks and technology have made long-distance distribution of agricultural produce feasible. Agricultural production across the world doubled four times between 1820 and 1975 (1820 to 1920; 1920 to 1950; 1950 to 1965; and 1965 to 1975) to feed a global population of one billion human beings in 1800 and 6.5 billion in 2002. During the same period, the number of people involved in farming dropped as the process became more automated. In the 1930s, 24 percent of the American population worked in agriculture compared to 1.5 percent in 2002; in 1940, each farm worker supplied 11 consumers, whereas in 2002, each worker supplied 90 consumers. The era of factory farming in Britain began in 1947 when a new Agriculture Act granted subsidies to farmers to encourage greater output by introducing new technology, in order to reduce Britain's reliance on imported meat. The United Nations writes that "intensification of animal production was seen as a way of providing food security." In 1966, the United States, United Kingdom and other industrialized nations, commenced factory farming of beef and dairy cattle and domestic pigs. As a result, farming became concentrated on fewer larger farms. For example, in 1967, there were one million pig farms in America; as of 2002, there were 114,000. In 1992, 28% of American pigs were raised on farms selling >5,000 pigs per year; as of 2022 this grew to 94.5%. From its American and West European heartland, intensive animal farming became globalized in the later years of the 20th century and is still expanding and replacing traditional practices of stock rearing in an increasing number of countries. 
In 1990 intensive animal farming accounted for 30% of world meat production and by 2005, this had risen to 40%. Process The aim is to produce large quantities of meat, eggs, or milk at the lowest possible cost. Food is supplied in place. Methods employed to maintain health and improve production may include the use of disinfectants, antimicrobial agents, anthelmintics, hormones and vaccines; protein, mineral and vitamin supplements; frequent health inspections; biosecurity; and climate-controlled facilities. Physical restraints, for example, fences or creeps, are used to control movement or actions regarded as undesirable. Breeding programs are used to produce animals more suited to the confined conditions and able to provide a consistent food product. Industrial production was estimated to account for 39 percent of the sum of global production of these meats and 50 percent of total egg production. In the US, according to its National Pork Producers Council, 80 million of its 95 million pigs slaughtered each year are reared in industrial settings. The major concentration of the industry occurs at the slaughter and meat processing phase, with only four companies slaughtering and processing 81 percent of cows, 73 percent of sheep, 57 percent of pigs and 50 percent of chickens. This concentration at the slaughter phase may be in large part due to regulatory barriers that may make it financially difficult for small slaughter plants to be built, maintained or remain in business. Factory farming may be no more beneficial to livestock producers than traditional farming because it appears to contribute to overproduction that drives down prices. Through "forward contracts" and "marketing agreements", meatpackers are able to set the price of livestock long before they are ready for production. These strategies often cause farmers to lose money, as half of all U.S. family farming operations did in 2007. Many of the nation's livestock producers would like to market livestock directly to consumers but with limited USDA inspected slaughter facilities, livestock grown locally can not typically be slaughtered and processed locally. Small farmers are often absorbed into factory farm operations, acting as contract growers for the industrial facilities. In the case of poultry contract growers, farmers are required to make costly investments in construction of sheds to house the birds, buy required feed and drugs – often settling for slim profit margins, or even losses. Research has shown that many immigrant workers in concentrated animal farming operations (CAFOs) in the United States receive little to no job-specific training or safety and health information regarding the hazards associated with these jobs. Workers with limited English proficiency are significantly less likely to receive any work-related training, since it is often only provided in English. As a result, many workers do not perceive their jobs as dangerous. This causes inconsistent personal protective equipment (PPE) use, and can lead to workplace accidents and injuries. Immigrant workers are also less likely to report any workplace hazards and injuries. Types Intensive farms hold large numbers of animals, typically cows, pigs, turkeys, geese, or chickens, often indoors, typically at high densities. Intensive production of livestock and poultry is widespread in developed nations. 
For 2002–2003, the United Nations' Food and Agriculture Organization (FAO) estimates of industrial production as a percentage of global production were 7 percent for beef and veal, 0.8 percent for sheep and goat meat, 42 percent for pork, and 67 percent for poultry meat. Chickens The major milestone in 20th-century poultry production was the discovery of vitamin D, which made it possible to keep chickens in confinement year-round. Before this, chickens did not thrive during the winter (due to lack of sunlight), and egg production, incubation, and meat production in the off-season were all very difficult, making poultry a seasonal and expensive proposition. Year-round production lowered costs, especially for broilers. At the same time, egg production was increased by scientific breeding. After a few false starts, (such as the Maine Experiment Station's failure at improving egg production) success was shown by Professor Dryden at the Oregon Experiment Station. Improvements in production and quality were accompanied by lower labor requirements. In the 1930s through the early 1950s, 1,500 hens provided a full-time job for a farm family in America. In the late 1950s, egg prices had fallen so dramatically that farmers typically tripled the number of hens they kept, putting three hens into what had been a single-bird cage or converting their floor-confinement houses from a single deck of roosts to triple-decker roosts. Not long after this, prices fell still further and large numbers of egg farmers left the business. This fall in profitability was accompanied by a general fall in prices to the consumer, allowing poultry and eggs to lose their status as luxury foods. Robert Plamondon reports that the last family chicken farm in his part of Oregon, Rex Farms, had 30,000 layers and survived into the 1990s. However, the standard laying house of the current operators is around 125,000 hens. The vertical integration of the egg and poultry industries was a late development, occurring after all the major technological changes had been in place for years (including the development of modern broiler rearing techniques, the adoption of the Cornish Cross broiler, the use of laying cages, etc.). By the late 1950s, poultry production had changed dramatically. Large farms and packing plants could grow birds by the tens of thousands. Chickens could be sent to slaughterhouses for butchering and processing into prepackaged commercial products to be frozen or shipped fresh to markets or wholesalers. Meat-type chickens currently grow to market weight in six to seven weeks, whereas only fifty years ago it took three times as long. This is due to genetic selection and nutritional modifications (but not the use of growth hormones, which are illegal for use in poultry in the US and many other countries, and have no effect). Once a meat consumed only occasionally, the common availability and lower cost has made chicken a common meat product within developed nations. Growing concerns over the cholesterol content of red meat in the 1980s and 1990s further resulted in increased consumption of chicken. Today, eggs are produced on large egg ranches on which environmental parameters are well controlled. Chickens are exposed to artificial light cycles to stimulate egg production year-round. In addition, forced molting is commonly practiced in the US, in which manipulation of light and food access triggers molting, in order to increase egg size and production. Forced molting is controversial, and is prohibited in the EU. 
On average, a chicken lays one egg a day, but not on every day of the year. This varies with the breed and time of year. In 1900, average egg production was 83 eggs per hen per year. In 2000, it was well over 300. In the United States, laying hens are butchered after their second egg laying season. In Europe, they are generally butchered after a single season. The laying period begins when the hen is about 18–20 weeks old (depending on breed and season). Males of the egg-type breeds have little commercial value at any age, and all those not used for breeding (roughly fifty percent of all egg-type chickens) are killed soon after hatching. The old hens also have little commercial value. Thus, the main sources of poultry meat 100 years ago (spring chickens and stewing hens) have both been entirely supplanted by meat-type broiler chickens. Pigs In America, intensive piggeries (or hog lots) are a type of concentrated animal feeding operation (CAFO), specialized for the raising of domestic pigs up to slaughter weight. In this system, grower pigs are housed indoors in group-housing or straw-lined sheds, whilst pregnant sows are confined in sow stalls (gestation crates) and give birth in farrowing crates. The use of sow stalls has resulted in lower production costs and concomitant animal welfare concerns. Many of the world's largest producers of pigs (such as U.S. and Canada) use sow stalls, but some nations (such as the UK) and U.S. states (such as Florida and Arizona) have banned them. Intensive piggeries are generally large warehouse-like buildings. Indoor pig systems allow the pig's condition to be monitored, ensuring minimum fatalities and increased productivity. Buildings are ventilated and their temperature regulated. Most domestic pig varieties are susceptible to heat stress, and all pigs lack sweat glands and cannot cool themselves. Pigs have a limited tolerance to high temperatures and heat stress can lead to death. Maintaining a more specific temperature within the pig-tolerance range also maximizes growth and growth to feed ratio. In an intensive operation pigs will lack access to a wallow (mud), which is their natural cooling mechanism. Intensive piggeries control temperature through ventilation or drip water systems (dropping water to cool the system). Pigs are naturally omnivorous and are generally fed a combination of grains and protein sources (soybeans, or meat and bone meal). Larger intensive pig farms may be surrounded by farmland where feed-grain crops are grown. Alternatively, piggeries are reliant on the grains industry. Pig feed may be bought packaged or mixed on-site. The intensive piggery system, where pigs are confined in individual stalls, allows each pig to be allotted a portion of feed. The individual feeding system also facilitates individual medication of pigs through feed. This has more significance to intensive farming methods, as the close proximity to other animals enables diseases to spread more rapidly. To prevent disease spreading and encourage growth, drug programs such as antibiotics, vitamins, hormones and other supplements are pre-emptively administered. Indoor systems, especially stalls and pens (i.e. 'dry', not straw-lined systems) allow for the easy collection of waste. In an indoor intensive pig farm, manure can be managed through a lagoon system or other waste-management system. However, odor remains a problem which is difficult to manage. The way animals are housed in intensive systems varies. 
Breeding sows spend the bulk of their time in sow stalls during pregnancy, or in farrowing crates with their litters, until they are sent for market. Piglets often receive a range of treatments, including castration; tail docking to reduce tail biting; teeth clipping to reduce injury to their mother's nipples, gum disease, and later tusk growth; and ear notching to assist identification. These treatments are usually carried out without painkillers. Weak runts may be killed shortly after birth. Piglets also may be weaned and removed from the sows at between two and five weeks old and placed in sheds. However, grower pigs – which comprise the bulk of the herd – are usually housed in alternative indoor housing, such as batch pens. During pregnancy, the use of a stall may be preferred as it facilitates feed management and growth control. It also prevents pig aggression (e.g. tail biting, ear biting, vulva biting, food stealing). Group pens generally require higher stockmanship skills. Such pens will usually not contain straw or other material. Alternatively, a straw-lined shed may house a larger group (i.e. not batched) in age groups. Cattle Cattle are domesticated ungulates, a member of the family Bovidae, in the subfamily Bovinae, and descended from the aurochs (Bos primigenius). They are raised as livestock for their flesh (called beef and veal), dairy products (milk), and leather, and as draught animals. As of 2009–2010 it is estimated that there are 1.3–1.4 billion head of cattle in the world. The most common interactions with cattle involve daily feeding, cleaning and milking. Many routine husbandry practices involve ear tagging, dehorning, loading, medical operations, vaccinations and hoof care, as well as training and sorting for agricultural shows and sales. Once cattle obtain an entry-level weight, they are transferred from the range to a feedlot to be fed a specialized animal feed which consists of corn byproducts (derived from ethanol production), barley, and other grains as well as alfalfa and cottonseed meal. The feed also contains premixes composed of microingredients such as vitamins, minerals, chemical preservatives, antibiotics, fermentation products, and other essential ingredients that are purchased from premix companies, usually in sacked form, for blending into commercial rations. Because of the availability of these products, farmers using their own grain can formulate their own rations and be assured the animals are getting the recommended levels of minerals and vitamins. There are many potential impacts on human health due to the modern cattle industrial agriculture system. There are concerns surrounding the antibiotics and growth hormones used, increased E. coli contamination, higher saturated fat contents in the meat because of the feed, and also environmental concerns. As of 2010, 766,350 producers in the U.S. participated in raising beef. The beef industry is segmented, with the bulk of the producers participating in raising beef calves. Beef calves are generally raised in small herds, with over 90% of the herds having fewer than 100 head of cattle. Fewer producers participate in the finishing phase, which often occurs in a feedlot, but nonetheless there are 82,170 feedlots in the United States. Aquaculture Integrated multi-trophic aquaculture (IMTA), also called integrated aquaculture, is a practice in which the by-products (wastes) from one species are recycled to become inputs (fertilizers, food) for another, making aquaculture intensive. Fed aquaculture (e.g. 
fish and shrimp) is combined with inorganic extractive (e.g. seaweed) and organic extractive (e.g. shellfish) aquaculture to create balanced systems for environmental sustainability (biomitigation), economic stability (product diversification and risk reduction) and social acceptability (better management practices). The system is multi-trophic as it makes use of species from different trophic or nutritional level, unlike traditional aquaculture. Ideally, the biological and chemical processes in such a system should balance. This is achieved through the appropriate selection and proportions of different species providing different ecosystem functions. The co-cultured species should not just be biofilters, but harvestable crops of commercial value. A working IMTA system should result in greater production for the overall system, based on mutual benefits to the co-cultured species and improved ecosystem health, even if the individual production of some of the species is lower compared to what could be reached in monoculture practices over a short-term period. Regulation In various jurisdictions, intensive animal production of some kinds is subject to regulation for environmental protection. In the United States, a Concentrated Animal Feeding Operation (CAFO) that discharges or proposes to discharge waste requires a permit and implementation of a plan for management of manure nutrients, contaminants, wastewater, etc., as applicable, to meet requirements pursuant to the federal Clean Water Act. Some data on regulatory compliance and enforcement are available. In 2000, the US Environmental Protection Agency published 5-year and 1-year data on environmental performance of 32 industries, with data for the livestock industry being derived mostly from inspections of CAFOs. The data pertain to inspections and enforcement mostly under the Clean Water Act, but also under the Clean Air Act and Resource Conservation and Recovery Act. Of the 32 industries, livestock production was among the top seven for environmental performance over the 5-year period, and was one of the top two in the final year of that period, where good environmental performance is indicated by a low ratio of enforcement orders to inspections. The five-year and final-year ratios of enforcement/inspections for the livestock industry were 0.05 and 0.01, respectively. Also in the final year, the livestock industry was one of the two leaders among the 32 industries in terms of having the lowest percentage of facilities with violations. In Canada, intensive livestock operations are subject to provincial regulation, with definitions of regulated entities varying among provinces. Examples include Intensive Livestock Operations (Saskatchewan), Confined Feeding Operations (Alberta), Feedlots (British Columbia), High-density Permanent Outdoor Confinement Areas (Ontario) and Feedlots or Parcs d'Engraissement (Manitoba). In Canada, intensive animal production, like other agricultural sectors, is also subject to various other federal and provincial requirements. In the United States, farmed animals are excluded by half of all state animal cruelty laws including the federal Animal Welfare Act. The 28-hour law, enacted in 1873 and amended in 1994 states that when animals are being transported for slaughter, the vehicle must stop every 28 hours and the animals must be let out for exercise, food, and water. The United States Department of Agriculture claims that the law does not apply to birds. The Humane Slaughter Act is similarly limited. 
Originally passed in 1958, the Act requires that livestock be stunned into unconsciousness prior to slaughter. This Act also excludes birds, which make up more than 90 percent of the animals slaughtered for food, as well as rabbits and fish. Individual states all have their own animal cruelty statutes; however, many states have right-to-farm laws that serve as a provision to exempt standard agricultural practices. In the United States there is an attempt to regulate farms in the most realistic way possible: the easiest way to effectively regulate the most animals with a limited amount of resources and time is to regulate the large farms. In New York State, many Animal Feeding Operations are not considered CAFOs since they have fewer than 300 cows. These farms are not regulated to the level that CAFOs are, which may lead to unchecked pollution and nutrient leaching. The EPA website illustrates the scale of this problem by noting that in New York State's Bay watershed there are 247 animal feeding operations and only 68 of them are State Pollutant Discharge Elimination System (SPDES) permitted CAFOs. In Ohio, animal welfare organizations reached a negotiated settlement with farm organizations, while in California, Proposition 2 (Standards for Confining Farm Animals), an initiated law, was approved by voters in 2008. Regulations have been enacted in other states, and plans are underway for referendum and lobbying campaigns in still others. An action plan, called the Utilization of Manure and Other Agricultural and Industrial Byproducts, was proposed by the USDA in February 2009. This program's goal is to protect the environment and human and animal health by using manure in a safe and effective manner. In order for this to happen, several actions need to be taken, and the plan's four components are: improving the usability of manure nutrients through more effective animal nutrition and management; maximizing the value of manure through improved collection, storage, and treatment options; utilizing manure in integrated farming systems to improve profitability and protect soil, water, and air quality; and using manure and other agricultural byproducts as a renewable energy source. In 2012, Australia's largest supermarket chain, Coles, announced that as of January 1, 2013, it would stop selling company-branded pork and eggs from animals kept in factory farms. The nation's other dominant supermarket chain, Woolworths, had already begun phasing out factory farmed animal products. All of Woolworths' house brand eggs are now cage-free, and by mid-2013 all of its pork was to come from farmers who operate stall-free farms. In June 2021, the European Commission announced a planned ban on cages for a number of animals, including egg-laying hens, female breeding pigs, calves raised for veal, rabbits, ducks, and geese, by 2027. Animal welfare In the UK, the Farm Animal Welfare Council was set up by the government in 1979 to act as an independent advisor on animal welfare, and it expresses its policy as five freedoms: from hunger and thirst; from discomfort; from pain, injury or disease; to express normal behavior; and from fear and distress. There are differences around the world as to which practices are accepted, and there continue to be changes in regulations, with animal welfare being a strong driver for increased regulation. 
For example, the EU is bringing in further regulation to set maximum stocking densities for meat chickens by 2010, where the UK Animal Welfare Minister commented, "The welfare of meat chickens is a major concern to people throughout the European Union. This agreement sends a strong message to the rest of the world that we care about animal welfare." Factory farming is greatly debated throughout Australia, with many people disagreeing with the ways in which animals in factory farms are treated. Animals are often under stress from being kept in confined spaces and will attack each other. In an effort to prevent injury leading to infection, their beaks, tails and teeth are removed. Many piglets will die of shock after having their teeth and tails removed, because painkilling medicines are not used in these operations. Factory farms are a popular way to save space, with animals such as chickens being kept in spaces smaller than an A4 page. For example, in the UK, debeaking of chickens is deprecated, but it is recognized as a method of last resort, seen as better than allowing vicious fighting and ultimately cannibalism. Between 60 and 70 percent of the six million breeding sows in the U.S. are confined during pregnancy, and for most of their adult lives, in gestation crates. According to pork producers and many veterinarians, sows will fight if housed in pens. The largest pork producer in the U.S. said in January 2007 that it would phase out gestation crates by 2017. They are being phased out in the European Union, with a ban effective in 2013 after the fourth week of pregnancy. With the evolution of factory farming, there has been a growing awareness of the issues amongst the wider public, not least due to the efforts of animal rights and welfare campaigners. As a result, gestation crates, one of the more contentious practices, are the subject of laws in the U.S., Europe and elsewhere to phase out their use under pressure to adopt less confined practices. Death rates for sows have been increasing in the US from prolapse, which has been attributed to intensive breeding practices. Sows produce on average 23 piglets a year. In the United States alone, over 20 million chickens, 330,000 pigs and 166,000 cattle die during transport to slaughterhouses annually, and some 800,000 pigs are incapable of walking upon arrival. This is often due to exposure to extreme temperatures and to trauma. Demonstrations From 2011 to 2014, between 15,000 and 30,000 people gathered each year in Berlin under the theme We are fed up! to protest against industrial livestock production. Human health impact According to the U.S. Centers for Disease Control and Prevention (CDC), farms on which animals are intensively reared can cause adverse health reactions in farm workers. Workers may develop acute and chronic lung disease, musculoskeletal injuries, and may catch infections that transmit from animals to human beings (such as tuberculosis). Pesticides are used to control organisms considered harmful, and they save farmers money by preventing product losses to pests. In the US, about a quarter of pesticides are used in houses, yards, parks, golf courses, and swimming pools, and about 70% are used in agriculture. However, pesticides can make their way into consumers' bodies, which can cause health problems. One route is bioaccumulation in animals raised on factory farms. 
"Studies have discovered an increase in respiratory, neurobehavioral, and mental illnesses among the residents of communities next to factory farms." The CDC writes that chemical, bacterial, and viral compounds from animal waste may travel in the soil and water. Residents near such farms report problems such as unpleasant smells, flies and adverse health effects. The CDC has identified a number of pollutants associated with the discharge of animal waste into rivers and lakes, and into the air. Antibiotic use in livestock may create antibiotic-resistant pathogens; parasites, bacteria, and viruses may be spread; ammonia, nitrogen, and phosphorus can reduce oxygen in surface waters and contaminate drinking water; pesticides and hormones may cause hormone-related changes in fish; animal feed and feathers may stunt the growth of desirable plants in surface waters and provide nutrients to disease-causing micro-organisms; trace elements such as arsenic and copper, which are harmful to human health, may contaminate surface waters. Zoonotic diseases such as coronavirus disease 2019 (COVID-19), which caused the COVID-19 pandemic, are increasingly linked to environmental changes associated with intensive animal farming. The disruption of pristine forests driven by logging, mining, road building through remote places, rapid urbanisation and population growth is bringing people into closer contact with animal species they may never have been near before. According to Kate Jones, chair of ecology and biodiversity at University College London, the resulting transmission of disease from wildlife to humans is now "a hidden cost of human economic development". Intensive farming may make the evolution and spread of harmful diseases easier. Many communicable animal diseases spread rapidly through densely spaced populations of animals, and crowding makes genetic reassortment more likely. However, small family farms are more likely to bring bird diseases, and more frequent contact between animals and people, into the mix, as happened in the 2009 flu pandemic. In the European Union, growth hormones are banned on the basis that there is no way of determining a safe level. The UK has stated that, should the EU lift the ban at some future date, it would, in line with a precautionary approach, only consider the introduction of specific hormones, proven on a case-by-case basis. In 1998, the EU banned feeding animals antibiotics that were found to be valuable for human health. Furthermore, in 2006 the EU banned all drugs for livestock that were used for growth promotion purposes. As a result of these bans, the levels of antibiotic resistance in animal products and within the human population decreased. The international trade in animal products increases the risk of global transmission of virulent diseases such as swine fever, BSE, foot-and-mouth disease and bird flu. In the United States, the use of antibiotics in livestock is still prevalent. The FDA reports that 80 percent of all antibiotics sold in 2009 were administered to livestock animals, and that many of these antibiotics are identical or closely related to drugs used for treating illnesses in humans. Consequently, many of these drugs are losing their effectiveness on humans, and the total healthcare costs associated with drug-resistant bacterial infections in the United States are between $16.6 billion and $26 billion annually. 
Methicillin-resistant Staphylococcus aureus (MRSA) has been identified in pigs and humans, raising concerns about the role of pigs as reservoirs of MRSA for human infection. One study found that 20% of pig farmers in the United States and Canada in 2007 harbored MRSA. A second study revealed that 81% of Dutch pig farms had pigs with MRSA and that 39% of animals at slaughter carried the bacterium; all of the infections were resistant to tetracycline and many were resistant to other antimicrobials. A more recent study found that MRSA ST398 isolates were less susceptible to tiamulin, an antimicrobial used in agriculture, than other MRSA or methicillin-susceptible S. aureus. Cases of MRSA have increased in livestock animals. CC398 is a new clone of MRSA that has emerged in animals and is found in intensively reared production animals (primarily pigs, but also cattle and poultry), where it can be transmitted to humans. Although dangerous to humans, CC398 is often asymptomatic in food-producing animals. A 2011 nationwide study reported nearly half of the meat and poultry sold in U.S. grocery stores – 47 percent – was contaminated with S. aureus, and more than half of those bacteria – 52 percent – were resistant to at least three classes of antibiotics. Although Staph should be killed with proper cooking, it may still pose a risk to consumers through improper food handling and cross-contamination in the kitchen. The senior author of the study said, "The fact that drug-resistant S. aureus was so prevalent, and likely came from the food animals themselves, is troubling, and demands attention to how antibiotics are used in food-animal production today." In April 2009, lawmakers in the Mexican state of Veracruz accused large-scale hog and poultry operations of being breeding grounds of a pandemic swine flu, although they did not present scientific evidence to support their claim. The swine flu, which quickly killed more than 100 infected people in that area, appears to have begun in the vicinity of a pig CAFO (concentrated animal feeding operation) run by a Smithfield subsidiary. Environmental impact Intensive factory farming has grown to become the biggest threat to the global environment through the loss of ecosystem services and global warming. It is a major driver of global environmental degradation and biodiversity loss. Feed grown solely for animal use is often produced using intensive methods that involve significant amounts of fertiliser and pesticides. This sometimes results in the pollution of water, soil and air by agrochemicals and manure waste, and in the use of limited resources such as water and energy at unsustainable rates. Industrial production of pigs and poultry is an important source of greenhouse gas emissions and is predicted to become more so. On intensive pig farms, the animals are generally kept on concrete with slats or grates for the manure to drain through. The manure is usually stored in slurry form (slurry is a liquid mixture of urine and feces). During storage on the farm, slurry emits methane, and when manure is spread on fields it emits nitrous oxide and causes nitrogen pollution of land and water. Poultry manure from factory farms emits high levels of nitrous oxide and ammonia. Large quantities and concentrations of waste are produced. Air quality and groundwater are at risk when animal waste is improperly recycled. 
Environmental impacts of factory farming include: Deforestation for animal feed production Unsustainable pressure on land for production of high-protein/high-energy animal feed Pesticide, herbicide and fertilizer manufacture and use for feed production Unsustainable use of water for feed-crops, including groundwater extraction Pollution of soil, water and air by nitrogen and phosphorus from fertiliser used for feed-crops and from manure Land degradation (reduced fertility, soil compaction, increased salinity, desertification) Loss of biodiversity due to eutrophication, acidification, pesticides and herbicides Worldwide reduction of genetic diversity of livestock and loss of traditional breeds Species extinctions due to livestock-related habitat destruction (especially feed-cropping) Insect farming is a potential alternative to traditional livestock that causes less environmental damage. However, it remains less resource-efficient than direct plant consumption, as insects necessarily consume more food than they produce. Additionally, farmed insects are often used to feed livestock, rather than humans directly.
Technology
Agriculture_2
null
12421431
https://en.wikipedia.org/wiki/Oriental%20pied%20hornbill
Oriental pied hornbill
The oriental pied hornbill (Anthracoceros albirostris) is an Indo-Malayan pied hornbill, a large canopy-dwelling bird belonging to the family Bucerotidae. Two other common names for this species are Sunda pied hornbill (convexus) and Malaysian pied hornbill. The oriental pied hornbill is considered to be among the smallest and most common of the Asian hornbills. It has the largest distribution in the genus and occurs in the Indian Subcontinent and throughout Southeast Asia. Its natural habitat is subtropical or tropical moist lowland forests. Its diet includes fruit, insects, shellfish, small reptiles, and small mammals and birds, including their eggs. Taxonomy The oriental pied hornbill, of the family Bucerotidae, belongs to the genus Anthracoceros, which consists of five species. Species in this genus are divided into two groups, Indo-Malayan pied hornbills and black hornbills. A. albirostris is grouped under the Indo-Malayan pied hornbills, based on plumage similarities, along with the Indian pied hornbill (A. coronatus) and the Palawan hornbill (A. marchei). The black hornbills include A. malayanus and A. montani. A. albirostris can be further categorized into two subspecies, A. a. albirostris and A. a. convexus. Description The oriental pied hornbill is a medium-sized frugivore with a head-to-tail length of and a wingspan of . The bill measures for males and for females. It can weigh between , averaging for males and for females. The plumage of the head, neck, back, wings and upper breast is black with a slight green sheen. The tail is black with white tips on all the feathers except the central feathers (rectrices). The plumage of their lower breast, lower abdomen, thighs, under-wing and all the tips of the wings except the three basal secondaries and two outer primaries is white, as is the circumorbital skin around the eyes and the skin on the throat. A blue tinge can sometimes be noticed on the throat of adults. Casques of mature oriental pied hornbills are laterally flattened “cylinders”, which may form a protruding horn. Males and females are similar in coloration. Males can be distinguished from females by their larger body size, yellow bill, which has a black base, and bright red eyes. Females have a slightly smaller body size, a yellow bill and casque with a partly black, brown-patched mandible, and grayish-brown eyes. Juvenile oriental pied hornbills resemble the adults, but have an undeveloped casque and a smaller bill. Their black plumage lacks the green gloss found on adults. The calls of the oriental pied hornbill have been described as crowlike sounds, braying sounds or harsh crackles and screeches. Distribution and habitat The oriental pied hornbill has the largest distribution of the genus and occurs in the Indian Subcontinent and throughout Southeast Asia. Its range encompasses eastern and northern India, Nepal, Bangladesh, Bhutan, Tibet, Myanmar, Thailand, Laos, Cambodia, Vietnam, Peninsular Malaysia, Singapore, Indonesia, Brunei and the Sunda shelf islands. Its natural habitat is subtropical or tropical moist lowland forests including dry and semi-evergreen forests, dry and moist deciduous forests, subtropical broadleaf forests, secondary forests, plantations and woodlands. Behaviour and ecology Feeding Hornbills are predominantly frugivores. The oriental pied hornbill's diet consists of wild fruits such as figs (Ficus spp.), melanoxylon berries, rambutans, palm fruit, papaya and fruits of liana plants. 
It will also take large insects (grasshoppers), small birds (finches), small reptiles (lizards and snakes), amphibians such as frogs, fish, and bats. Its diet differs slightly between the breeding and non-breeding season. During the non-breeding season, oriental pied hornbills feed more on non-fig fruit such as small-sized berries, drupes, arillate capsules and lianas (woody vines); however, the availability of these food items is lower in the breeding season, which suggests that the species increases its habitat range during that time. They also tend to feed in flocks during the non-breeding season. When foraging for food, they tend to select a few common species of fruit trees. They show a preference towards trees belonging to the families Annonaceae, Meliaceae and Myristicaceae. Other target species include Rourea minor, Polyalthia viridis, Cinnamomum subavenium, Trichosanthes tricuspidata, and many others. Feeding on a diversity of fruits ensures that nutritional requirements are met. In the non-breeding season, the fruits selected are generally sugar-rich, while lipid-rich fruits and invertebrates are strongly selected during the breeding season. Oriental pied hornbills are important large-seed dispersers, promoting seedling recruitment by translocating the seeds of the fruits they feed on. Few other bird species outside the hornbill family have large enough gape widths to allow them to disperse large seeds to special microsites or open habitats. Seed dispersal behavior of hornbills thus helps shape forest communities, and disruption of this animal-plant interaction may significantly reshape forest communities. Reproduction Hornbills are generally monogamous and breed between January and June; oriental pied hornbills typically commence breeding in February. This coincides, depending on geographic location, with the onset of rain and with peak fruit abundance. Hornbills are secondary cavity nesters, meaning that they typically do not excavate their own nesting sites but use those created by other birds or by branches breaking off. Because hornbills rely on pre-excavated cavities, selection of suitable nest-sites within their environment has major impacts on breeding success. When females have selected and entered their nest, they seal the cavity with a mixture of saliva, mud, fruit, droppings and tree bark, leaving only a small opening through which food may be passed. The male forages for the female and chicks, and the female feeds the nestlings. Chicks remain inside the nest with the female for several months until they are ready to fledge. Oriental pied hornbills have been shown to return to their previous nests in subsequent nesting seasons. Nest selection Hornbills select nest sites based on the availability and type of fruiting trees, as well as on the availability and quality of nest site cavities in their particular habitat. Some oriental pied hornbills have demonstrated tree species preferences for nest site selection. In Rajaji National Park in India, oriental pied hornbills nest in a variety of tree species such as Bombax ceiba, Careya arborea, Cordia myxa, Lagerstroemia parviflora, Mitragyma parviflora, Terminalia belerica, Shorea robusta, and Syzigium cumini. The main difference in the structural characteristics of nest cavities between hornbill species is cavity size, which is highly correlated with body size. Cavities preferred by the oriental pied hornbill are elongate and may be located at heights of 1–18 m or more. 
Cavity entrance shape is rounder than in other hornbills. Oriental pied hornbills tend to select nesting sites in close proximity to rivers or other bodies of water. Compared to other hornbill species such as the great pied hornbill and wreathed hornbill, the oriental pied hornbill demonstrates tolerance to disturbed habitats. Nests have been found in disturbed, secondary forest areas such as plantations, degraded forests and logging sites, while other hornbill species tend to avoid such sites. Nests found in human-disturbed areas are, however, often unsuccessful or abandoned; in general, hornbills prefer undisturbed forest areas. Because oriental pied hornbills inhabit various habitats, nest structural characteristics may vary from one habitat to another, and may also vary between hornbill species with overlapping habitats. Habitat overlap among hornbill species may lead to intra- and interspecific competition, whereby hornbills compete for limited nest-sites. Competition for nest-sites with other species such as squirrels, lizards and other cavity-nesting birds can also have critical impacts on breeding success. Conservation The species has an extremely wide range and appears to be the hornbill species most adaptable to habitat alterations; it is thus not currently considered to be threatened. The declines in oriental pied hornbill populations that have been reported are mainly caused by legal and illegal logging, which decreases the availability of suitable nesting and fruiting trees. A. albirostris is subject to some hunting pressure (casques are sold as souvenirs) and is popular as a pet in some areas. It has also been noted that the species has been almost completely extirpated from southern China. In Singapore, the local population went extinct in the 1960s but recovered in the 1990s, and hornbills are now widespread around the island. Conservation efforts such as captive breeding and reintroduction are currently in practice. Breeding in captivity has so far shown a low success rate. In some areas, such as Cambodia, artificial nests made from iron tanks are installed at nesting sites to provide alternative nest sites for hornbills when natural nest-site availability is low and to aid reintroduction.
Biology and health sciences
Coraciiformes
Animals
409825
https://en.wikipedia.org/wiki/Troodontidae
Troodontidae
Troodontidae is a clade of bird-like theropod dinosaurs from the Late Jurassic to Late Cretaceous. During most of the 20th century, troodontid fossils were few and incomplete and they have therefore been allied, at various times, with many dinosaurian lineages. More recent fossil discoveries of complete and articulated specimens (including specimens which preserve feathers, eggs, embryos, and complete juveniles), have helped to increase understanding about this group. Anatomical studies, particularly studies of the most primitive troodontids, like Sinovenator, demonstrate striking anatomical similarities with Archaeopteryx and primitive dromaeosaurids, and demonstrate that they are relatives comprising a clade called Paraves. Evolution The oldest definitive troodontid known is Hesperornithoides from the Late Jurassic of Wyoming. The slightly older Koparion of Utah is only represented by a single tooth, and small maniraptoran teeth from the Middle Jurassic of England were identified as those of indeterminate troodontids in 2023. Over the Cretaceous, troodontids radiated throughout western North America, Asia, and Europe, suggesting a mostly Laurasian distribution for the group. However, in 2013, a single diagnostic tooth from the latest Cretaceous (Maastrichtian) Kallamedu Formation of southern India was identified as a troodontid, suggesting that troodontids either also inhabited Gondwana or managed to disperse to India from elsewhere prior to its separation as an island continent. The potential Gondwanan occurrence of troodontids is supported by the existence of Middle Jurassic remains, which suggest that they originated prior to the breakup of Pangaea. However, due to the lack of other remains from the region, it has been suggested that the existence of Gondwanan troodontids should be regarded as provisional. Description Troodontids are a group of small, bird-like, gracile maniraptorans. All troodontids have unique features of the skull, such as large numbers of closely spaced teeth in the lower jaw. Troodontids have sickle-claws and raptorial hands, and some of the highest non-avian encephalization quotients, suggesting that they were behaviourally advanced and had keen senses. They had unusually long legs compared to other theropods, with a large, curved claw on their retractable second toes, similar to the "sickle-claw" of the dromaeosaurids. However, the sickle-claws of troodontids were not as large or recurved as in dromaeosaurids, and in some instances could not be held off the ground and "retracted" to the same degree. In at least one troodontid, Borogovia, the second toe could not be held far off the ground at all and the claw was straight, not curved or sickle-like. Troodontids had unusually large brains among dinosaurs, comparable to those of living flightless birds. Their eyes were also large, and pointed forward, indicating that they had good binocular vision. The ears of troodontids were also unusual among theropods, having enlarged middle ear cavities, indicating acute hearing ability. The placement of this cavity near the eardrum may have aided in the detection of low-frequency sounds. In some troodontids, ears were also asymmetrical, with one ear placed higher on the skull than the other, a feature shared only with some owls. The specialization of the ears may indicate that troodontids hunted in a manner similar to owls, using their hearing to locate small prey. 
Diet Although most paleontologists believe that they were predatory carnivores, the many small, coarsely serrated teeth, large denticle size, and U-shaped jaws of some species (particularly Troodon) suggest that some species may have been omnivorous or herbivorous. Some suggest that the large denticle size is reminiscent of the teeth of extant iguanine lizards. In contrast, a few species, such as Byronosaurus, had large numbers of needle-like teeth, which seem best-suited for picking up small prey, such as birds, lizards and small mammals. Other morphological characteristics of the teeth, such as the detailed form of the denticles and the presence of blood grooves, also seem to indicate carnivory. Analyses of barium/calcium and strontium/calcium ratios, which are higher in carnivores due to bioaccumulation, found low ratios in teeth of Stenonychosaurus, suggesting that it had a diet ranging from mixed to plant-dominant omnivory. Though little is known directly about the predatory behavior of troodontids, Fowler and colleagues theorize that the longer legs and smaller sickle claws (as compared to dromaeosaurids) indicate a more cursorial lifestyle, though the study indicates that troodontids were still likely to have used the unguals for prey manipulation. The proportions of the metatarsals, tarsals and unguals of troodontids appear indicative of their having nimbler, but weaker feet, perhaps better adapted for capturing and subduing smaller prey. This suggests an ecological separation from the slower but more powerful Dromaeosauridae. Classification Troodontid fossils were among the first dinosaur remains described. Initially, Leidy (1856) assumed they were lacertilian (lizards), but, by 1924, they were referred to Dinosauria by Gilmore, who suggested that they were ornithischians and allied them with the pachycephalosaurian Stegoceras in a Troodontidae. It was not until 1945 that C.M. Sternberg recognized Troodontidae as a theropod family. Since 1969, Troodontidae has typically been allied with Dromaeosauridae, in a clade (natural group) known as Deinonychosauria, but this was by no means a consensus. Holtz (in 1994) erected the clade Bullatosauria, uniting Ornithomimosauria (the "ostrich-dinosaurs") and Troodontidae, on the basis of characteristics including, among others, an inflated braincase (parabasisphenoid) and a long, low opening in the upper jaw (the maxillary fenestra). Features of the pelvis also suggested they were less advanced than dromaeosaurids. New discoveries of primitive troodontids from China (such as Sinovenator and Mei), however, display strong similarities between Troodontidae, Dromaeosauridae and the primitive bird Archaeopteryx, and most paleontologists, including Holtz, now consider troodontids to be much more closely related to birds than they are to ornithomimosaurs, causing the clade Bullatosauria to be abandoned. One study of theropod systematics by members of the has uncovered striking similarities among the most basal dromaeosaurids, troodontids, and Archaeopteryx. This clade is together called Paraves by Novas and Pol. The extensive cladistic analysis conducted by Turner et al., (2012) supported the monophyly of Troodontidae. Taxonomy Family Troodontidae Albertavenator Almas Geminiraptor Jianianhualong Liaoningvenator Sinornithoides Talos Tochisaurus Xixiasaurus Subfamily Jinfengopteryginae Jinfengopteryx IGM 100/1128 Subfamily Sinovenatorinae Daliansaurus Mei Sinovenator Sinusonasus Subfamily Troodontinae Borogovia Byronosaurus? 
Gobivenator Latenivenatrix Linhevenator Pectinodon Philovenator Saurornithoides Stenonychosaurus Troodon Urbacodon Zanabazar Phylogeny There are multiple hypotheses as to which genera are included in Troodontidae and how they are related. Very primitive species, such as Anchiornis huxleyi, have alternately been found to be early troodontids, early members of the closely related group Avialae, or more primitive paravians by various studies. The cladogram below follows the results of a study by Lefèvre et al. (2017). Shen et al. (2017a) explored troodontid phylogeny using a modified version of the Tsuihiji et al. (2014) analysis. It was in turn based on data published by Gao et al. (2012), a slightly modified version of the Xu et al. (2011) analysis, focusing on advanced troodontids. A simplified version is shown below. In 2014, Brusatte, Lloyd, Wang and Norell published an analysis on Coelurosauria, based on data from Turner et al. (2012), who named a third subfamily of troodontids, Jinfengopteryginae. Their analysis included more basal troodontid species but failed to resolve many of their interrelationships, resulting in large "polytomies" (sets of species where the branching order in the family tree is uncertain). An updated version of the Brusatte et al. analysis was provided by Shen et al. (2017b), who included more taxa and recovered greater resolution. Shen et al. named a fourth subfamily of troodontids, the Sinovenatorinae. A simplified version of their analysis is shown below. Troodontinae is a subfamily of troodontid dinosaurs. The subfamily was first used in 2017 for the group of troodontids descended from the last common ancestor of Gobivenator mongoliensis and Zanabazar junior, but has been redefined to be the least inclusive clade containing Saurornithoides mongoliensis and Troodon formosus, utilizing the type species of the clade. Below is a cladogram of the Troodontinae as published by Aaron van der Reest and Phil Currie in 2017. Paleobiology Many troodontid nests, including eggs that contain fossilized embryos, have been described. Hypotheses about troodontid reproduction have been developed from this evidence (see Troodon). A few troodont fossils, including specimens of Mei and Sinornithoides, demonstrate that these animals roosted like birds, with their heads tucked under their forelimbs. These fossils, as well as numerous skeletal similarities to birds and related feathered dinosaurs, support the idea that troodontids probably bore a bird-like feathered coat. The discovery of fully feathered, primitive troodontids, such as Jianianhualong, lends further support to this. In 2004, Mark Norell and colleagues described two partial troodontid skulls (specimen numbers IGM 100/972 and IGM 100/974) found in a nest of oviraptorid eggs in the Djadokhta Formation of Mongolia. The nest is quite certainly that of an oviraptorosaur, since an oviraptorid embryo is still preserved inside one of the eggs. The two partial troodontid skulls were first described by Norell et al. (1994) as dromaeosaurids, but reassigned to the troodontid Byronosaurus after further study. The troodontids were either hatchlings or embryos, and fragments of eggshell adhere to them, although the eggshell appears to be oviraptorid. The presence of tiny troodontids in an oviraptorid nest is an enigma. Hypotheses explaining how they came to be there include that they were the prey of the adult oviraptorid, that they were there to prey on oviraptorid hatchlings, or that some troodontids may have been nest parasites. 
Feeding Troodontid feeding was discovered to be typical of coelurosaurian theropods, with a characteristic "puncture and pull" feeding method seen also in theropods such as dromaeosaurids and tyrannosaurids. Studies of wear patterns on the teeth of dromaeosaurids by Angelica Torices et al. indicate that dromaeosaurid teeth share similar wear patterns to those seen in the aforementioned groups. However, microwear on the teeth indicated that dromaeosaurids likely preferred larger prey items than the troodontids with which they often shared their environment. Such differences in dietary preferences likely allowed them to inhabit the same ecosystems. The same study also indicated that dromaeosaurids such as Dromaeosaurus and Saurornitholestes (two dromaeosaurids analyzed in the study) likely included bone in their diet and were better adapted to handling struggling prey, while troodontids, equipped with weaker jaws, preyed on softer-bodied animals and on prey items such as invertebrates and carrion that either were immobile or could be swallowed whole. Flight Compared to most other paravians, troodontids are unspecialised for aerial locomotion. However, Jinfengopteryx ranks closely with non-avian theropods known to engage in powered flight, such as Microraptor and Rahonavis. Bird evolution Troodontids are important in research into the origin of birds because they share many anatomical characters with early birds. Crucially, the substantially complete Hesperornithoides ("Lori") is a troodontid from the Late Jurassic Morrison Formation, close to the time of Archaeopteryx. The discovery of Jurassic troodonts is positive physical evidence that derived deinonychosaurs were present before the time that avians arose. This fact strongly invalidates the "temporal paradox" cited by the few remaining opponents of the idea that birds are closely related to dinosaurs.
Biology and health sciences
Theropods
Animals
409951
https://en.wikipedia.org/wiki/Isothermal%20process
Isothermal process
An isothermal process is a type of thermodynamic process in which the temperature T of a system remains constant: ΔT = 0. This typically occurs when a system is in contact with an outside thermal reservoir, and a change in the system occurs slowly enough to allow the system to be continuously adjusted to the temperature of the reservoir through heat exchange (see quasi-equilibrium). In contrast, an adiabatic process is where a system exchanges no heat with its surroundings (Q = 0). Simply, we can say that in an isothermal process T = constant (ΔT = 0); for ideal gases only, the internal energy is then also constant (ΔU = 0), while in adiabatic processes Q = 0. Etymology The noun isotherm is derived from the Ancient Greek words ἴσος (isos), meaning "equal", and θέρμη (thermē), meaning "heat". Examples Isothermal processes can occur in any kind of system that has some means of regulating the temperature, including highly structured machines, and even living cells. Some parts of the cycles of some heat engines are carried out isothermally (for example, in the Carnot cycle). In the thermodynamic analysis of chemical reactions, it is usual to first analyze what happens under isothermal conditions and then consider the effect of temperature. Phase changes, such as melting or evaporation, are also isothermal processes when, as is usually the case, they occur at constant pressure. Isothermal processes are often used as a starting point in analyzing more complex, non-isothermal processes. Isothermal processes are of special interest for ideal gases. This is a consequence of Joule's second law, which states that the internal energy of a fixed amount of an ideal gas depends only on its temperature. Thus, in an isothermal process the internal energy of an ideal gas is constant. This is a result of the fact that in an ideal gas there are no intermolecular forces. Note that this is true only for ideal gases; the internal energy depends on pressure as well as on temperature for liquids, solids, and real gases. In the isothermal compression of a gas there is work done on the system to decrease the volume and increase the pressure. Doing work on the gas increases the internal energy and will tend to increase the temperature. To maintain the constant temperature, energy must leave the system as heat and enter the environment. If the gas is ideal, the amount of energy entering the environment is equal to the work done on the gas, because internal energy does not change. For isothermal expansion, the energy supplied to the system does work on the surroundings. In either case, with the aid of a suitable linkage the change in gas volume can perform useful mechanical work. For details of the calculations, see calculation of work. For an adiabatic process, in which no heat flows into or out of the gas because its container is well insulated, Q = 0. If there is also no work done, i.e. a free expansion, there is no change in internal energy. For an ideal gas, this means that the process is also isothermal. Thus, specifying that a process is isothermal is not sufficient to specify a unique process. Details for an ideal gas For the special case of a gas to which Boyle's law applies, the product pV (p for gas pressure and V for gas volume) is a constant if the gas is kept at isothermal conditions. The value of the constant is nRT, where n is the number of moles of gas present and R is the ideal gas constant. In other words, the ideal gas law pV = nRT applies; therefore pV = nRT = constant holds throughout the process. The family of curves generated by this equation is shown in the graph in Figure 1. 
Each curve is called an isotherm, meaning a curve at a single temperature T. Such graphs are termed indicator diagrams and were first used by James Watt and others to monitor the efficiency of engines. The temperature corresponding to each curve in the figure increases from the lower left to the upper right. Calculation of work In thermodynamics, the reversible work involved when a gas changes from state A to state B is W = −∫ p dV, taken from VA to VB (in the IUPAC convention of work done on the system), where p is the gas pressure and V the gas volume. For an isothermal (constant temperature T), reversible process, this integral equals the area under the relevant pV (pressure-volume) isotherm, and is indicated in purple in Figure 2 for an ideal gas. Again, p = nRT/V applies and, with T being constant (as this is an isothermal process), the expression for work becomes W = −nRT ln(VB/VA). In IUPAC convention, work is defined as work on a system by its surroundings. If, for example, the system is compressed, then the work is done on the system by the surroundings, so the work is positive and the internal energy of the system increases. Conversely, if the system expands against its surroundings (so this is not a free expansion), then the work is negative, as the system does work on the surroundings and the internal energy of the system decreases. It is also worth noting that for ideal gases, if the temperature is held constant, the internal energy of the system U also is constant, and so ΔU = 0. Since the first law of thermodynamics states that ΔU = Q + W in IUPAC convention, it follows that Q = −W for the isothermal compression or expansion of ideal gases. Example of an isothermal process The reversible expansion of an ideal gas can be used as an example of work produced by an isothermal process. Of particular interest is the extent to which heat is converted to usable work, and the relationship between the confining force and the extent of expansion. During isothermal expansion of an ideal gas, both p and V change along an isotherm with a constant product pV (i.e., constant T). Consider a working gas in a cylindrical chamber 1 m high and 1 m2 in area (so 1 m3 in volume) at 400 K in static equilibrium. The surroundings consist of air at 300 K and 1 atm pressure. The working gas is confined by a piston connected to a mechanical device that exerts a force sufficient to create a working gas pressure of 2 atm (state A). For any change in state that causes a force decrease, the gas will expand and perform work on the surroundings. Isothermal expansion continues as long as the applied force decreases and appropriate heat is added to keep pV = 2 atm·m3 (= 2 atm × 1 m3). The expansion is said to be internally reversible if the piston motion is sufficiently slow such that at each instant during the expansion the gas temperature and pressure are uniform and conform to the ideal gas law. Figure 3 shows the pV relationship for pV = 2 atm·m3, for isothermal expansion from 2 atm (state A) to 1 atm (state B). The work done has two components: first, expansion work against the surrounding atmospheric pressure, and second, usable mechanical work. The output here could be movement of the piston used to turn a crank-arm, which would then turn a pulley capable of lifting water out of flooded salt mines. The system attains state B (pV = 2 atm·m3, with p = 1 atm and V = 2 m3) when the applied force reaches zero. At that point, the total work done on the gas equals –140.5 kJ, of which –101.3 kJ is expended against the atmosphere. By difference, the usable mechanical work is –39.1 kJ, which is 27.9% of the heat supplied to the process (–39.1 kJ / –140.5 kJ). 
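The figures just quoted follow directly from the ideal gas law. The short Python sketch below is illustrative only: it assumes the stated conditions (pV held at 2 atm·m3, expansion against a 1 atm atmosphere from 1 m3 to 2 m3), uses the IUPAC sign convention of work done on the gas, and the variable names are the author's own labels rather than symbols from the text.

```python
import math

atm = 101_325.0                 # Pa per standard atmosphere
pV = 2 * atm * 1.0              # constant product p*V of the working gas: 2 atm*m^3
p_surroundings = 1 * atm        # constant external (atmospheric) pressure
V_initial, V_final = 1.0, 2.0   # m^3: expansion from 2 atm down to 1 atm

# IUPAC convention: work done ON the gas is W = -integral(p dV) = -pV * ln(V_final/V_initial)
W_total = -pV * math.log(V_final / V_initial)

# Part of that work merely pushes back the 1 atm surroundings
W_atmosphere = -p_surroundings * (V_final - V_initial)

# The remainder is available to the mechanical linkage (crank-arm, pulley, ...)
W_usable = W_total - W_atmosphere

print(round(W_total / 1e3, 1))       # -140.5 kJ
print(round(W_atmosphere / 1e3, 1))  # -101.3 kJ
print(round(W_usable / 1e3, 1))      # -39.1 kJ, about 28% of the 140.5 kJ of heat supplied
```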
This is the maximum amount of usable mechanical work obtainable from the process at the stated conditions. The fraction of the work that is usable depends on the working-gas and surrounding pressures, and approaches 100% as the surrounding pressure approaches zero. To pursue the nature of isothermal expansion further, note the red line on Figure 3. The fixed value of pV means that equal decreases in pressure produce increasingly large piston rises as the pressure falls. For example, a pressure decrease from 2 to 1.9 atm causes a piston rise of 0.0526 m. In comparison, a pressure decrease from 1.1 to 1 atm causes a piston rise of 0.1818 m. Entropy changes Isothermal processes are especially convenient for calculating changes in entropy since, in this case, the formula for the entropy change, ΔS, is simply ΔS = Qrev/T, where Qrev is the heat transferred (internally reversible) to the system and T is the absolute temperature. This formula is valid only for a hypothetical reversible process; that is, a process in which equilibrium is maintained at all times. A simple example is an equilibrium phase transition (such as melting or evaporation) taking place at constant temperature and pressure. For a phase transition at constant pressure, the heat transferred to the system is equal to the enthalpy of transformation, ΔHtr, thus Q = ΔHtr. At any given pressure, there will be a transition temperature, Ttr, for which the two phases are in equilibrium (for example, the normal boiling point for vaporization of a liquid at one atmosphere pressure). If the transition takes place under such equilibrium conditions, the formula above may be used to directly calculate the entropy change ΔS = ΔHtr/Ttr. Another example is the reversible isothermal expansion (or compression) of an ideal gas from an initial volume VA and pressure PA to a final volume VB and pressure PB. As shown in Calculation of work, the heat transferred to the gas is Q = nRT ln(VB/VA). This result is for a reversible process, so it may be substituted in the formula for the entropy change to obtain ΔS = nR ln(VB/VA). Since an ideal gas obeys Boyle's law, this can be rewritten, if desired, as ΔS = nR ln(PA/PB). Once obtained, these formulas can be applied to an irreversible process, such as the free expansion of an ideal gas. Such an expansion is also isothermal and may have the same initial and final states as in the reversible expansion. Since entropy is a state function (one that depends on an equilibrium state, not on the path that the system takes to reach that state), the change in entropy of the system is the same as in the reversible process and is given by the formulas above. Note that the result Q = 0 for the free expansion cannot be used in the formula for the entropy change since the process is not reversible. The difference between the reversible and irreversible cases is found in the entropy of the surroundings. In both cases, the surroundings are at a constant temperature, T, so that ΔSsur = −Q/T; the minus sign is used since the heat transferred to the surroundings is equal in magnitude and opposite in sign to the heat Q transferred to the system. In the reversible case, the change in entropy of the surroundings is equal and opposite to the change in the system, so the change in entropy of the universe is zero. In the irreversible case, Q = 0, so the entropy of the surroundings does not change and the change in entropy of the universe is equal to ΔS for the system.
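The entropy formulas above can likewise be checked with a few lines of arithmetic. The sketch below is a minimal illustration, assuming one mole of ideal gas and a doubling of volume; it contrasts the reversible isothermal expansion with a free expansion between the same two states.

```python
import math

R = 8.314            # J/(mol*K), ideal gas constant
n = 1.0              # mol
V_A, V_B = 1.0, 2.0  # arbitrary units; only the ratio matters

# Entropy change of the gas: the same for both paths, because entropy is a state function
dS_system = n * R * math.log(V_B / V_A)          # about +5.76 J/K

# Reversible expansion: heat Q = nRT ln(V_B/V_A) flows in from the reservoir,
# so the surroundings lose exactly as much entropy as the gas gains.
dS_surr_reversible = -dS_system
print(dS_system, dS_system + dS_surr_reversible)  # 5.76..., 0.0 (universe unchanged)

# Free (irreversible) expansion: Q = 0, the surroundings are unaffected,
# and the entropy of the universe increases by dS_system.
dS_surr_free = 0.0
print(dS_system + dS_surr_free)                   # 5.76... J/K created irreversibly
```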
Physical sciences
Thermodynamics
Physics
410009
https://en.wikipedia.org/wiki/Rotation%20%28mathematics%29
Rotation (mathematics)
Rotation in mathematics is a concept originating in geometry. Any rotation is a motion of a certain space that preserves at least one point. It can describe, for example, the motion of a rigid body around a fixed point. Rotation can have a sign (as in the sign of an angle): a clockwise rotation is a negative magnitude so a counterclockwise turn has a positive magnitude. A rotation is different from other types of motions: translations, which have no fixed points, and (hyperplane) reflections, each of them having an entire -dimensional flat of fixed points in a -dimensional space. Mathematically, a rotation is a map. All rotations about a fixed point form a group under composition called the rotation group (of a particular space). But in mechanics and, more generally, in physics, this concept is frequently understood as a coordinate transformation (importantly, a transformation of an orthonormal basis), because for any motion of a body there is an inverse transformation which if applied to the frame of reference results in the body being at the same coordinates. For example, in two dimensions rotating a body clockwise about a point keeping the axes fixed is equivalent to rotating the axes counterclockwise about the same point while the body is kept fixed. These two types of rotation are called active and passive transformations. Related definitions and terminology The rotation group is a Lie group of rotations about a fixed point. This (common) fixed point or center is called the center of rotation and is usually identified with the origin. The rotation group is a point stabilizer in a broader group of (orientation-preserving) motions. For a particular rotation: The axis of rotation is a line of its fixed points. They exist only in . The plane of rotation is a plane that is invariant under the rotation. Unlike the axis, its points are not fixed themselves. The axis (where present) and the plane of a rotation are orthogonal. A representation of rotations is a particular formalism, either algebraic or geometric, used to parametrize a rotation map. This meaning is somehow inverse to the meaning in the group theory. Rotations of (affine) spaces of points and of respective vector spaces are not always clearly distinguished. The former are sometimes referred to as affine rotations (although the term is misleading), whereas the latter are vector rotations. See the article below for details. Definitions and representations In Euclidean geometry A motion of a Euclidean space is the same as its isometry: it leaves the distance between any two points unchanged after the transformation. But a (proper) rotation also has to preserve the orientation structure. The "improper rotation" term refers to isometries that reverse (flip) the orientation. In the language of group theory the distinction is expressed as direct vs indirect isometries in the Euclidean group, where the former comprise the identity component. Any direct Euclidean motion can be represented as a composition of a rotation about the fixed point and a translation. In one-dimensional space, there are only trivial rotations. In two dimensions, only a single angle is needed to specify a rotation about the origin – the angle of rotation that specifies an element of the circle group (also known as ). The rotation is acting to rotate an object counterclockwise through an angle about the origin; see below for details. 
Composition of rotations sums their angles modulo 1 turn, which implies that all two-dimensional rotations about the same point commute. Rotations about different points, in general, do not commute. Any two-dimensional direct motion is either a translation or a rotation; see Euclidean plane isometry for details. Rotations in three-dimensional space differ from those in two dimensions in a number of important ways. Rotations in three dimensions are generally not commutative, so the order in which rotations are applied is important even about the same point. Also, unlike the two-dimensional case, a three-dimensional direct motion, in general position, is not a rotation but a screw operation. Rotations about the origin have three degrees of freedom (see rotation formalisms in three dimensions for details), the same as the number of dimensions. A three-dimensional rotation can be specified in a number of ways. The most usual methods are: Euler angles (pictured at the left). Any rotation about the origin can be represented as the composition of three rotations defined as the motion obtained by changing one of the Euler angles while leaving the other two constant. They constitute a mixed axes of rotation system because angles are measured with respect to a mix of different reference frames, rather than a single frame that is purely external or purely intrinsic. Specifically, the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes and the third is an intrinsic rotation (a spin) around an axis fixed in the body that moves. Euler angles are typically denoted as α, β, γ, or φ, θ, ψ. This presentation is convenient only for rotations about a fixed point. Axis–angle representation (pictured at the right) specifies an angle with the axis about which the rotation takes place. It can be easily visualised. There are two variants to represent it: as a pair consisting of the angle and a unit vector for the axis, or as a Euclidean vector obtained by multiplying the angle with this unit vector, called the rotation vector (although, strictly speaking, it is a pseudovector). Matrices, versors (quaternions), and other algebraic things: see the section Linear and Multilinear Algebra Formalism for details. A general rotation in four dimensions has only one fixed point, the centre of rotation, and no axis of rotation; see rotations in 4-dimensional Euclidean space for details. Instead the rotation has two mutually orthogonal planes of rotation, each of which is fixed in the sense that points in each plane stay within the planes. The rotation has two angles of rotation, one for each plane of rotation, through which points in the planes rotate. If these are and then all points not in the planes rotate through an angle between and . Rotations in four dimensions about a fixed point have six degrees of freedom. A four-dimensional direct motion in general position is a rotation about certain point (as in all even Euclidean dimensions), but screw operations exist also. Linear and multilinear algebra formalism When one considers motions of the Euclidean space that preserve the origin, the distinction between points and vectors, important in pure mathematics, can be erased because there is a canonical one-to-one correspondence between points and position vectors. The same is true for geometries other than Euclidean, but whose space is an affine space with a supplementary structure; see an example below. 
Alternatively, the vector description of rotations can be understood as a parametrization of geometric rotations up to their composition with translations. In other words, one vector rotation presents many equivalent rotations about all points in the space. A motion that preserves the origin is the same as a linear operator on vectors that preserves the same geometric structure but expressed in terms of vectors. For Euclidean vectors, this expression is their magnitude (Euclidean norm). In components, such an operator is expressed with an orthogonal matrix that is multiplied to column vectors. As it was already stated, a (proper) rotation is different from an arbitrary fixed-point motion in its preservation of the orientation of the vector space. Thus, the determinant of a rotation orthogonal matrix must be 1. The only other possibility for the determinant of an orthogonal matrix is −1, and this result means the transformation is a hyperplane reflection, a point reflection (for odd n), or another kind of improper rotation. Matrices of all proper rotations form the special orthogonal group. Two dimensions In two dimensions, to carry out a rotation using a matrix, the point (x, y) to be rotated counterclockwise is written as a column vector, then multiplied by a rotation matrix calculated from the angle θ, whose rows are (cos θ, −sin θ) and (sin θ, cos θ). The coordinates of the point after rotation are (x′, y′), and the formulae for x′ and y′ are x′ = x cos θ − y sin θ and y′ = x sin θ + y cos θ. The vectors (x, y) and (x′, y′) have the same magnitude and are separated by an angle θ as expected. Points on the plane can be also presented as complex numbers: the point (x, y) in the plane is represented by the complex number z = x + iy. This can be rotated through an angle θ by multiplying it by e^(iθ), then expanding the product using Euler's formula as follows: z e^(iθ) = (x + iy)(cos θ + i sin θ) = (x cos θ − y sin θ) + i(x sin θ + y cos θ), and equating real and imaginary parts gives the same result as a two-dimensional matrix: x′ = x cos θ − y sin θ and y′ = x sin θ + y cos θ. Since complex numbers form a commutative ring, vector rotations in two dimensions are commutative, unlike in higher dimensions. They have only one degree of freedom, as such rotations are entirely determined by the angle of rotation. Three dimensions As in two dimensions, a matrix can be used to rotate a point (x, y, z) to a point (x′, y′, z′). The matrix used is a 3 × 3 matrix A. This is multiplied by a column vector representing the point to give the result. The set of all appropriate matrices together with the operation of matrix multiplication is the rotation group SO(3). The matrix A is a member of the three-dimensional special orthogonal group, SO(3), that is, it is an orthogonal matrix with determinant 1. That it is an orthogonal matrix means that its rows are a set of orthogonal unit vectors (so they are an orthonormal basis) as are its columns, making it simple to spot and check if a matrix is a valid rotation matrix. Above-mentioned Euler angles and axis–angle representations can be easily converted to a rotation matrix. Another possibility to represent a rotation of three-dimensional Euclidean vectors are the quaternions described below. Quaternions Unit quaternions, or versors, are in some ways the least intuitive representation of three-dimensional rotations. They are not the three-dimensional instance of a general approach. They are more compact than matrices and easier to work with than all other methods, so are often preferred in real-world applications. A versor (also called a rotation quaternion) consists of four real numbers, constrained so the norm of the quaternion is 1. This constraint limits the degrees of freedom of the quaternion to three, as required. 
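As a quick numerical check of the two-dimensional formulas above (before turning to how versors are multiplied), the sketch below rotates one point with the matrix and with the equivalent complex multiplication. It is an illustration only; NumPy is assumed, and the choice of point and angle is arbitrary.

```python
import numpy as np

theta = np.pi / 3                           # counterclockwise rotation by 60 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

p = np.array([1.0, 2.0])
print(R @ p)                                 # matrix form: (x cos t - y sin t, x sin t + y cos t)

z = complex(p[0], p[1]) * np.exp(1j * theta) # same rotation: multiply x + iy by e^(i theta)
print(z.real, z.imag)

print(np.linalg.norm(p), np.linalg.norm(R @ p))  # magnitudes agree, as expected
```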
Unlike matrices and complex numbers, two multiplications are needed: v′ = q v q⁻¹, where q is the versor, q⁻¹ is its inverse, and v is the vector treated as a quaternion with zero scalar part. The quaternion can be related to the rotation vector form of the axis–angle rotation by the exponential map over the quaternions, q = exp(v/2), where v is the rotation vector treated as a quaternion. A single multiplication by a versor, either left or right, is itself a rotation, but in four dimensions. Any four-dimensional rotation about the origin can be represented with two quaternion multiplications: one left and one right, by two different unit quaternions. Further notes More generally, coordinate rotations in any dimension are represented by orthogonal matrices. The set of all orthogonal matrices in n dimensions which describe proper rotations (determinant = +1), together with the operation of matrix multiplication, forms the special orthogonal group SO(n). Matrices are often used for doing transformations, especially when a large number of points are being transformed, as they are a direct representation of the linear operator. Rotations represented in other ways are often converted to matrices before being used. They can be extended to represent rotations and transformations at the same time using homogeneous coordinates. Projective transformations are represented by matrices. They are not rotation matrices, but a transformation that represents a Euclidean rotation has a rotation matrix in the upper left corner. The main disadvantage of matrices is that they are more expensive to calculate and do calculations with. Also, in calculations where numerical instability is a concern, matrices can be more prone to it, so calculations to restore orthonormality, which are expensive to do for matrices, need to be done more often. More alternatives to the matrix formalism As was demonstrated above, there exist three multilinear algebra rotation formalisms: one with U(1), or complex numbers, for two dimensions, and two others with versors, or quaternions, for three and four dimensions. In general (even for vectors equipped with a non-Euclidean Minkowski quadratic form) the rotation of a vector space can be expressed as a bivector. This formalism is used in geometric algebra and, more generally, in the Clifford algebra representation of Lie groups. In the case of a positive-definite Euclidean quadratic form, the double covering group of the isometry group is known as the Spin group, Spin(n). It can be conveniently described in terms of a Clifford algebra. Unit quaternions give the group Spin(3) ≅ SU(2). In non-Euclidean geometries In spherical geometry, a direct motion of the n-sphere (an example of the elliptic geometry) is the same as a rotation of (n + 1)-dimensional Euclidean space about the origin (SO(n + 1)). For odd n, most of these motions do not have fixed points on the n-sphere and, strictly speaking, are not rotations of the sphere; such motions are sometimes referred to as Clifford translations. Rotations about a fixed point in elliptic and hyperbolic geometries are not different from Euclidean ones. Affine geometry and projective geometry do not have a distinct notion of rotation. In relativity A generalization of a rotation applies in special relativity, where it can be considered to operate on a four-dimensional space, spacetime, spanned by three space dimensions and one of time. In special relativity, this space is called Minkowski space, and the four-dimensional rotations, called Lorentz transformations, have a physical interpretation. 
These transformations preserve a quadratic form called the spacetime interval. If a rotation of Minkowski space is in a space-like plane, then this rotation is the same as a spatial rotation in Euclidean space. By contrast, a rotation in a plane spanned by a space-like dimension and a time-like dimension is a hyperbolic rotation, and if this plane contains the time axis of the reference frame, it is called a "Lorentz boost". These transformations demonstrate the pseudo-Euclidean nature of Minkowski space. Hyperbolic rotations are sometimes described as "squeeze mappings" and frequently appear on Minkowski diagrams that visualize (1 + 1)-dimensional pseudo-Euclidean geometry on planar drawings. The study of relativity deals with the Lorentz group generated by the space rotations and hyperbolic rotations. Whereas rotations, in physics and astronomy, correspond to rotations of the celestial sphere as a 2-sphere in Euclidean 3-space, Lorentz transformations from the restricted Lorentz group SO+(3, 1) induce conformal transformations of the celestial sphere. These form the broader class of sphere transformations known as Möbius transformations. Discrete rotations Importance Rotations define important classes of symmetry: rotational symmetry is an invariance with respect to a particular rotation. Circular symmetry is an invariance with respect to all rotations about a fixed axis. As was stated above, Euclidean rotations are applied to rigid body dynamics. Moreover, most of the mathematical formalism in physics (such as vector calculus) is rotation-invariant; see rotation for more physical aspects. Euclidean rotations and, more generally, the Lorentz symmetry described above are thought to be symmetry laws of nature. In contrast, reflectional symmetry is not a precise symmetry law of nature. Generalizations The complex-valued matrices analogous to real orthogonal matrices are the unitary matrices U(n), which represent rotations in complex space. The set of all unitary matrices in a given dimension n forms a unitary group of degree n, and its subgroup representing proper rotations (those that preserve the orientation of space) is the special unitary group of degree n. These complex rotations are important in the context of spinors. The elements of SU(2) are used to parametrize three-dimensional Euclidean rotations (see above), as well as respective transformations of the spin (see representation theory of SU(2)).
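Returning to the quaternion description above, the sandwich product v′ = q v q⁻¹ can be illustrated with a minimal sketch. The Python fragment below is an illustrative addition (the helper names are not from the article): it builds a versor from an axis and an angle, applies the two quaternion multiplications, and rotates a test vector.

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, axis, theta):
    """Rotate 3-vector v about a unit axis by angle theta via the sandwich product q v q^-1."""
    s = math.sin(theta / 2)
    q = (math.cos(theta / 2), axis[0]*s, axis[1]*s, axis[2]*s)  # the versor
    q_inv = (q[0], -q[1], -q[2], -q[3])                         # conjugate = inverse for a unit quaternion
    w, x, y, z = qmul(qmul(q, (0.0, *v)), q_inv)                # v treated as a quaternion with zero scalar part
    return (x, y, z)

# Rotating (1, 0, 0) by 90 degrees about the z-axis gives approximately (0, 1, 0).
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```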
Mathematics
Geometry: General
null
410401
https://en.wikipedia.org/wiki/Cramp
Cramp
A cramp is a sudden, involuntary, painful skeletal muscle contraction or overshortening associated with electrical activity; while generally temporary and non-damaging, they can cause significant pain and a paralysis-like immobility of the affected muscle. A cramp usually goes away on its own over a period of several seconds or (sometimes) minutes. Cramps are common and tend to occur at rest, usually at night (nocturnal leg cramps). They are also often associated with pregnancy, physical exercise or overexertion, and age (common in older adults); in such cases, cramps are called idiopathic, because there is no underlying pathology. In addition to those benign conditions cramps are also associated with many pathological conditions. Cramp definition is narrower than the definition of muscle spasm: spasms include any involuntary abnormal muscle contractions, while cramps are sustained and painful. True cramps can be distinguished from other cramp-like conditions. Cramps are different from muscle contracture, which is also painful and involuntary, but which is electrically silent. The main distinguishing features of cramps from dystonia are suddenness with acute onset of pain, involvement of only one muscle and spontaneous resolution of cramps or their resolution after stretching the affected muscle. Restless leg syndrome is not considered the same as muscle cramps and should not be confused with rest cramps. Causes Skeletal muscle cramps may be caused by muscle fatigue or a lack of electrolytes such as sodium (a condition called hyponatremia), potassium (called hypokalemia), or magnesium (called hypomagnesemia). Some skeletal muscle cramps do not have a known cause. Motor neuron disorders (e.g., amyotrophic lateral sclerosis), metabolic disorders (e.g., liver failure), some medications (e.g., diuretics and inhaled beta‐agonists), and haemodialysis may also cause muscle cramps. Causes of cramping include hyperflexion, hypoxia, exposure to large changes in temperature, dehydration, or low blood salt. Muscle cramps can also be a symptom or complication of pregnancy; kidney disease; thyroid disease; hypokalemia, hypomagnesemia, or hypocalcaemia (as conditions); restless legs syndrome; varicose veins; and multiple sclerosis. As early as 1965, researchers observed that leg cramps and restless legs syndrome can result from excess insulin, sometimes called hyperinsulinemia. Skeletal muscle cramps Under normal circumstances, skeletal muscles can be voluntarily controlled. Skeletal muscles that cramp the most often are the calves, thighs, and arches of the foot, and in North America are sometimes called a "Charley horse" or a "corky". Such cramping is associated with strenuous physical activity and can be intensely painful; however, they can even occur while inactive and relaxed. Around 40% of people who experience skeletal cramps are likely to endure extreme muscle pain, and may be unable to use the entire limb that contains the "locked-up" muscle group. It may take up to a week for the muscle to return to a pain-free state, depending on the person's fitness level, age, and several other factors. Nocturnal leg cramps Nocturnal leg cramps are involuntary muscle contractions that occur in the calves, soles of the feet, or other muscles in the body during the night or (less commonly) while resting. The duration of nocturnal leg cramps is variable, with cramps lasting anywhere from a few seconds to several minutes. Muscle soreness may remain after the cramp itself ends. 
These cramps are more common in older people. They happen quite frequently in teenagers and in some people while exercising at night. Besides being painful, a nocturnal leg cramp can cause much distress and anxiety. The precise cause of these cramps is unclear. Potential contributing factors include dehydration, low levels of certain minerals (magnesium, potassium, calcium, and sodium, although the evidence has been mixed), and reduced blood flow through muscles attendant in prolonged sitting or lying down. Nocturnal leg cramps (almost exclusively calf cramps) are considered "normal" during the late stages of pregnancy. They can, however, vary in intensity from mild to extremely painful. A lactic acid buildup around muscles can trigger cramps; however, they happen during anaerobic respiration when a person is exercising or engaging in an activity where the heartbeat rises. Medical conditions associated with leg cramps are cardiovascular disease, hemodialysis, cirrhosis, pregnancy, and lumbar canal stenosis. Differential diagnoses include restless legs syndrome, claudication, myositis, and peripheral neuropathy. All of them can be differentiated through careful history and physical examination. Gentle stretching and massage, putting some pressure on the affected leg by walking or standing, or taking a warm bath or shower may help to end the cramp. If the cramp is in the calf muscle, dorsiflexing the foot (lifting the toes back toward the shins) will stretch the muscle and provide almost immediate relief. There is limited evidence supporting the use of magnesium, calcium channel blockers, carisoprodol, and vitamin B12. Quinine is no longer recommended for treatment of nocturnal leg cramps due to potential fatal hypersensitivity reactions and thrombocytopenia. Arrhythmias, cinchonism, and hemolytic uremic syndrome can also occur at higher dosages. Cramps caused by treatments Various medications may cause nocturnal leg cramps: Diuretics, especially potassium sparing Intravenous (IV) iron sucrose Conjugated estrogens Teriparatide Naproxen Raloxifene Long acting adrenergic beta-agonists (LABAs) Hydroxymethylglutaryl-coenzyme A reductase inhibitors (HMG-CoA inhibitors or statins) Statins may sometimes cause myalgia and cramps among other possible side effects. Raloxifene (Evista) is a medication associated with a high incidence of leg cramps. Additional factors, which increase the probability for these side effects, are physical exercise, age, history of cramps, and hypothyroidism. Up to 80% of athletes using statins experience significant adverse muscular effects, including cramps; the rate appears to be approximately 10–25% in a typical statin-using population. In some cases, adverse effects disappear after switching to a different statin; however, they should not be ignored if they persist, as they can, in rare cases, develop into more serious problems. Coenzyme Q10 supplementation can be helpful to avoid some statin-related adverse effects, but currently there is not enough evidence to prove the effectiveness in avoiding myopathy or myalgia. Treatment Stretching, massage, and drinking plenty of liquids may be helpful in treating simple muscle cramps. Medication The antimalarial drug quinine is a traditional treatment that may be slightly effective for reducing the number of cramps, the intensity of cramps, and the number of days a person experiences cramps. Quinine has not been shown to reduce the duration (length) of a muscle cramp. 
Quinine treatment may lead to haematologic and cardiac toxicity. Due to its low effectiveness and negative side effects, its use as a medication for treating muscle cramps is not recommended by the FDA. Magnesium is commonly used to treat muscle cramps. Moderate quality evidence indicates that magnesium is not effective for treating or preventing cramps in older adults. It is not known if magnesium helps cramps due to pregnancy, liver cirrhosis, other medical conditions, or exercising. Oral magnesium treatment does not appear to have significant major side effects, however, it may be associated with diarrhea and nausea in 11–37% of people who use this medicine. With exertional heat cramps due to electrolyte abnormalities (primarily potassium loss and not calcium, magnesium, and sodium), appropriate fluids and sufficient potassium improves symptoms. Vitamin B complex, naftidrofuryl, lidocaine, and calcium channel blockers may be effective for muscle cramps. Prevention Adequate conditioning, stretching, mental preparation, hydration, and electrolyte balance are likely helpful in preventing muscle cramps.
Biology and health sciences
Symptoms and signs
Health
410530
https://en.wikipedia.org/wiki/Sanitary%20sewer
Sanitary sewer
A sanitary sewer is an underground pipe or tunnel system for transporting sewage from houses and commercial buildings (but not stormwater) to a sewage treatment plant or disposal. Sanitary sewers are a type of gravity sewer and are part of an overall system called a "sewage system" or sewerage. Sanitary sewers serving industrial areas may also carry industrial wastewater. In municipalities served by sanitary sewers, separate storm drains may convey surface runoff directly to surface waters. An advantage of sanitary sewer systems is that they avoid combined sewer overflows. Sanitary sewers are typically much smaller in diameter than combined sewers which also transport urban runoff. Backups of raw sewage can occur if excessive stormwater inflow or groundwater infiltration occurs due to leaking joints, defective pipes etc. in aging infrastructure. Purpose Sewage treatment is less effective when sanitary waste is diluted with stormwater, and combined sewer overflows occur when runoff from heavy rainfall or snowmelt exceeds the hydraulic capacity of sewage treatment plants. To overcome these disadvantages, some cities built separate sanitary sewers to collect only municipal wastewater and exclude stormwater runoff, which is collected in separate storm drains. The decision to build a combined sewer system or two separate systems is mainly based on the need for sewage treatment and the cost of providing treatment during heavy rain events. Many cities with combined sewer systems built their systems prior to installing sewage treatment plants, and have not subsequently replaced those sewer systems. Types Conventional gravity sewers In the developed world, sewers are pipes from buildings to one or more levels of larger underground trunk mains, which transport the sewage to sewage treatment facilities. Vertical pipes, usually made of precast concrete, called manholes, connect the mains to the surface. Depending upon site application and use, these vertical pipes can be cylindrical, eccentric, or concentric. The manholes are used for access to the sewer pipes for inspection and maintenance, and as a means to vent sewer gases. They also facilitate vertical and horizontal angles in otherwise straight pipelines. Pipes conveying sewage from an individual building to a common gravity sewer line are called laterals. Branch sewers typically run under streets receiving laterals from buildings along that street and discharge by gravity into trunk sewers at manholes. Larger cities may have sewers called interceptors, receiving flow from multiple trunk sewers. Design and sizing of sanitary sewers considers the population to be served over the anticipated life of the sewer, per capita wastewater production, and flow peaking from timing of daily routines. Minimum sewer diameters are often specified to prevent blockage by solid materials flushed down toilets; gradients may be selected to maintain flow velocities generating sufficient turbulence to minimize solids deposition within the sewer. Commercial and industrial wastewater flows are also considered, but diversion of surface runoff to storm drains eliminates wet weather flow peaks of inefficient combined sewers. Force mains A force main or rising main is a pumped sewer that may be necessary where gravity sewers serve areas at lower elevations than the sewage treatment plant, or distant areas at similar elevations. A lift station is a sewer sump that lifts accumulated sewage to a higher elevation. 
They may also be used to prime an inverted siphon used to cross underneath rivers or other obstructions. The pump may discharge to another gravity sewer or directly to a treatment plant. Force mains are typically constructed of welded steel or HDPE jointed to resist pressures within the pipe. Force mains are substantially different from pressure sewers which serve individual properties or groups of properties and provide a means of injecting sewage into a local gravity main. Effluent sewer Effluent sewer systems, also called septic tank effluent drainage (STED) or solids-free sewer (SFS) systems, have septic tanks that collect sewage from residences and businesses, and the effluent that comes out of the tank is sent to either a centralized sewage treatment plant or a distributed treatment system for further treatment. Most of the solids are removed by the septic tanks, so the treatment plant can be much smaller than a typical plant. In addition, because of the vast reduction in solid waste, a pumping system, rather than a gravity system, can be used to move the wastewater. The pipes have small diameters, typically . Because the waste stream is pressurized, they can be laid just below the ground surface along the land's contour. Pressure sewer Where it is impossible or impractical to discharge sewage from a property into a gravity sanitary sewer, a pressure sewer may provide an alternative means of connection. A macerator pump in a pumping well close to the property ejects sewage through a small diameter high pressure pipe into the nearest gravity sewer. Simplified sewer Simplified sanitary sewers consist of small-diameter pipes, typically around , often laid at fairly flat gradients (1 in 200). Although the investment cost for simplified sanitary sewers can be about half the cost of conventional sewers, the requirements for operation and maintenance are usually higher. Simplified sewers are most common in Brazil and other developing countries. Vacuum sewer In low-lying communities, wastewater is often conveyed by vacuum sewer. Pipelines range in size from pipes of in diameter up to in diameter. Vacuum sewer systems use differential atmospheric pressure to move the liquid to a central vacuum station. Maintenance Sanitary sewer overflow can occur due to blocked or broken sewer lines, infiltration of excessive stormwater or malfunction of pumps. In these cases untreated sewage is discharged from a sanitary sewer into the environment prior to reaching sewage treatment facilities. To avoid such overflows, maintenance is required. Blockage prevention campaigns or regulations (e.g. requiring the use of grease interceptors by some customers) may also be necessary. The maintenance requirements vary with the type of sanitary sewer. In general, all sewers deteriorate with age, but infiltration and inflow are problems unique to sanitary sewers, since both combined sewers and storm drains are sized to carry these contributions. Holding infiltration to acceptable levels requires a higher standard of maintenance than necessary for structural integrity considerations of combined sewers. A comprehensive construction inspection program is required to prevent inappropriate connection of cellar, yard, and roof drains to sanitary sewers. The probability of inappropriate connections is higher where combined sewers and sanitary sewers are found in close proximity, because construction personnel may not recognize the difference. 
Many older cities still use combined sewers while adjacent suburbs were built with separate sanitary sewers. For decades, when sanitary sewer pipes cracked or experienced other damage, the only option was an expensive excavation, removal and replacement of the damaged pipe, typically requiring street repavement afterwards. In the mid-1950s a unit was invented where two units at each end with a special cement mixture in between was pulled from one manhole cover to the next, coating the pipe with the cement under high pressure, which then cured rapidly, sealing all cracks and breaks in the pipe. Today, a similar method using epoxy resin is used by some municipalities to re-line aging or damaged pipes, effectively creating a "pipe in a pipe". These methods may be unsuitable for locations where the full diameter of the original pipe is required to carry expected flows, and may be an unwise investment if greater wastewater flows may be anticipated from population growth, increased water use, or new service connections within the expected service life of the repair. Another popular method for replacing aged or damaged lines is called pipe bursting, where a new pipe, typically PVC or ABS plastic, is drawn through the old pipe behind an "expander head" that breaks apart the old pipe as the new one is drawn through behind it. These methods are most suitable for trunk sewers, since repair of lines with lateral connections is complicated by making provisions to receive lateral flows without accepting undesirable infiltration from inadequately sealed junctions. Ventilation Some sewers have tall vent pipes to release foul gases well up away from people. Common names for these pipes are stink pipe, stink pole, stench pipe and sewer ventilation pipe. History Sanitary sewers evolved from combined sewers built where water was plentiful. Animal feces accumulated on city streets while animal-powered transport moved people and goods. Accumulations of animal feces encouraged dumping chamber pots into streets where night soil collection was impractical. Combined sewers were built to use surface runoff to flush waste off streets and move it underground to places distant from populated areas. Sewage treatment became necessary as population expanded, but increased volumes and pumping capacity required for treatment of diluted waste from combined sewers is more expensive than treating undiluted sewage. Communities that have urbanized in the mid-20th century or later generally have built separate systems for sewage (sanitary sewers) and stormwater, because precipitation causes widely varying flows, reducing sewage treatment plant efficiency. In the UK, the term "foul sewer" was also in use for a sanitary sewer.
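The gravity-sewer sizing considerations mentioned earlier (population served, per-capita wastewater production, flow peaking, and maintaining self-cleansing velocities) can be illustrated with a rough back-of-the-envelope sketch. The Python fragment below is an illustrative assumption, not design guidance and not drawn from the article: the population, per-capita figure, peaking factor, pipe size, gradient, and roughness coefficient are all hypothetical, and the capacity check uses the standard Manning equation for a circular pipe flowing full.

```python
import math

def peak_design_flow(population, per_capita_lpd, peaking_factor):
    """Peak sewage flow in m^3/s from population, per-capita production (L/day), and a peaking factor."""
    average_m3_per_s = population * per_capita_lpd / 1000.0 / 86400.0
    return average_m3_per_s * peaking_factor

def full_pipe_velocity(diameter_m, slope, n=0.013):
    """Manning equation for a circular pipe flowing full: V = (1/n) * R^(2/3) * S^(1/2), with R = D/4."""
    hydraulic_radius = diameter_m / 4.0
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)

# Illustrative assumptions only: 5,000 people, 300 L/person/day, peaking factor 3,
# a 300 mm pipe laid at a 0.5% gradient, and a 0.6 m/s self-cleansing minimum.
q_peak = peak_design_flow(population=5000, per_capita_lpd=300, peaking_factor=3.0)
d, s = 0.30, 0.005
v = full_pipe_velocity(d, s)
q_capacity = v * math.pi * d**2 / 4.0  # full-flow capacity of the pipe
print(f"peak flow {q_peak*1000:.1f} L/s, capacity {q_capacity*1000:.1f} L/s, velocity {v:.2f} m/s")
print("meets self-cleansing minimum (>= 0.6 m/s):", v >= 0.6)
```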
Technology
Food, water and health
null
410663
https://en.wikipedia.org/wiki/Whip
Whip
A whip is a blunt weapon or implement used in a striking motion to create sound or pain. Whips can be used for flagellation against humans or animals to exert control through pain compliance or fear of pain, or be used as an audible cue through the distinct whipcrack effect. The portion used for striking is generally either a firm rod designed for direct contact, or a flexible line requiring a specialized swing. The former is easier and more precise, the latter offers longer reach and greater force. Some varieties, such as a hunting whip or lunge whip, have an extended stock section in addition to the line. Whips such as the "cat o' nine tails" and knout are specifically developed for corporal punishment or torture on human targets. Certain religious practices and BDSM activities involve the self-use of whips or the use of whips between consenting partners. Misuse on animals may be considered animal cruelty, and misuse on humans may be viewed as assault. Use Whips are generally used on animals to provide directional guidance or to encourage movement. Some whips are designed to control animals by imparting discomfort by tapping or pain by a full-force strike that produces pain compliance. Some whips provide guidance by the use of sound, such as cracking of a bullwhip. Other uses of whips are to provide a visual directional cue by extending the reach and visibility of the human arm. In modern times, the pain stimulus is still used in some animal training, and is permitted in many fields, including most equestrianism disciplines, some of which mandate carrying a whip. The whip can be a vital tool to back up riding aids when applied correctly, particularly when initial commands are ignored. However, many competition governing bodies limit the use of whips, and severe penalties may be in place for over-use of the whip, including disqualification and fines. Improper overuse of whips may be considered animal cruelty in some jurisdictions. Whip use by sound never or rarely strikes the animal; instead, a long, flexible whip is cracked to produce a very sharp, loud sound. This usage also functions as a form of operant conditioning: most animals will flinch away from the sound instinctively, making it effective for driving sled dogs, livestock and teams of harnessed animals like oxen and mules. The sound is loud enough to affect multiple animals at once, making whip-cracking more efficient under some circumstances. This technique can be used as part of an escalation response, with sound being used first prior to a pain stimulus being applied, again as part of operant conditioning. Whips used without painful stimulus, as an extension of the human hand or arm, are a visual command, or to tap an animal, or to exert pressure. Such use may be related to operant conditioning where the subject is conditioned to associate the whip with irritation, discomfort or pain, but in other cases, a whip can be used as a simple tool to provide a cue connected to positive reinforcement for compliant behavior. In the light of modern attitudes towards the potential for cruelty in whips, other names have gained currency among practitioners such as whips called a "wand" or a "stick," calling the lash a "string" or a "popper". Cracking The loud sound of a whip-crack is produced by a ripple in the material of whip travelling towards the tip, rapidly escalating in speed until it breaches the speed of sound, more than 30 times the speed of the initial movement in the handle. The crack is thus a small sonic boom. 
Whips were the first man-made objects to break the sound barrier. Most stick type whips cannot make a crack by themselves, unless they either have a very long lash, such as a longe whip, or are very flexible with a moderately long lash, like certain styles of buggy whip. But any design can be banged against another object, such as leather boot, to make a loud noise. Short, stiff crops often have a wide leather "popper" at the end which makes a particularly loud noise when slapped against an animal, boot, or other object. Types Stockwhip Stockwhips (or stock whips), including bullwhips and the Australian stockwhip, are a type of single-tailed leather whip with a very long lash but a short handle. Stockwhips are primarily used to make a loud cracking sound via special techniques that break the sound barrier, to move livestock (cattle, sheep, horses, etc.) away from the sound. It is generally not used to actually strike an animal, as it would inflict excessive pain and is difficult to apply with precision. Australian The Australian stockwhip is often said to have originated in the English hunting whip, but it has since become a distinct type of whip. Today, it is used primarily by stockmen. Unlike the short, embedded handle of a bullwhip, the stockwhip handle is not fitted inside the lash and is usually longer. A stockwhip's handle is connected to the thong by a joint typically made of a few strands of thick leather (which is called a keeper). This allows the whip to hang across a stockman's arm when not being used. The handles are normally longer than those of a bullwhip, being between . The thong can be from long. Stockwhips are also almost exclusively made from tanned kangaroo hide. The Australian stockwhip was shown internationally when lone rider Steve Jefferys reared his Australian Stock Horse and cracked the stockwhip to commence the 2000 Sydney Olympic Games Opening Ceremony. Bullwhip A bullwhip consists of a handle between in length, and a lash composed of a braided thong between long. Some whips have an exposed wooden grip, others have an intricately braided leather covered handle. Unlike the Australian stock whip, the thong connects in line with the handle (rather than with a joint), or even engulfs the handle entirely. At the end of the lash is the "fall" and cracker or popper. The fall is a single piece of leather between in length. During trick shots or target work, the fall is usually the portion of the whip used to cut, strike, or wrap around the target. The cracker is the portion of the whip that makes the loud "sonic boom" sound, but a whip without a cracker will still make a sonic boom, simply not as loud. Other There are other variations and lengths of stock whips. The yard whip is a type of smaller stockwhip. The yard whip is used on ground in cattle yards and other small areas where speed and precision is needed. The yard whip is also used by younger children that are not strong enough to handle a large stock whip. The cattle drafter (or drafting whip) is a cane or fibreglass rod with a handgrip, knob and wrist strap. The cane length is about and the flapper length is about long. These whips are used in cattle yards and also when moving pigs. The bullock-whip was used by an Australian bullock team driver (bullocky). The thong was long, or more, and often made of greenhide. A long handle was cut from spotted gum or another native tree and was frequently taller than the bullock driver's shoulder. 
The bullocky walked beside the team and kept the bullocks moving with taps from the long handle as well as using the thong as needed. The Rose whip is another variation of the stockwhip that was pioneered in Canada in the early 19th century, though it largely fell out of use by the 1880s. The Rose whips were effective in animal yards and other small areas. It was pioneered by an American farmer, Jack Liao. The Raman whip is a similar variation of the stockwhip which closely relates to the Rose whip. This variation was pioneered in the small Ontario city of Hamilton in the early 20th century, though it largely fell out of use by the 1920s. Raman whips were effective on horse farms, horse derbies, and in other rural areas. It was pioneered by the South African inventor, Delaware Kumar. Cow whip The Florida cow whip used by Floridian cowboys is a two-piece unit like the stockwhip and is connected to the handle by threading two strands of the thong through a hollow part of a wooden handle before being tied off. The cowwhip is heavier than the Australian stockwhip. Early cowwhips were made mostly of cowhide or buckskin. Modern cow whips are made of flat nylon parachute cord, which, unlike those made from leather, are still effective when wet. Most cowwhips have handles that average , and thongs that average . A good cowwhip can produce a loud crack by a simple push of the handle. This can make it more convenient to use than a bullwhip in a thick vegetated environment with less swinging room. The Tampa Bay Whip Enthusiasts give demonstrations of the Florida Cracker Cowboy in costume at the annual Heritage Village Civil War Days festival, located in Largo, Florida every year in May. Signalwhip Signal whips (or signalwhips) are a type of single-tailed whip, originally designed to control dog teams. A signal whip usually measures between in length. Signal whips and snake whips are similar. What distinguishes a signal whip from a snake whip is the absence of a "fall". A fall is a piece of leather attached to the end of the body of the whip. In a snake whip, the "cracker" attaches to the fall. In a signal whip, the cracker attaches directly to the body of the whip. Snakewhip Snake whips (or snakewhips) are a type of single-tailed whip. The name snake whip is derived from the fact that this type of whip has no handle inside and so can be curled up into a small circle which resembles a coiled snake. They were once commonly carried in the saddlebag by cowboys of the old west. A full sized snake whip is usually at least in length (excluding the fall and cracker at the tip of the whip) and around one inch in diameter at the butt of the whip. A pocket snake whip can be curled up small enough to fit into a large pocket, and ranges in size from in length. The pocket snake whip is primarily a whip for occasional use, such as in loading cattle. Both of these types of snake whips are made with a leather shot bag running approximately three quarters of the length of the whip. Blacksnakes are the traditional whips used in Montana and Wyoming. The blacksnake has a heavy shot load extending from the butt well down the thong, and the whip is flexible right to the butt. They range in size from in length. Some types concentrate a load in the butt (often a lead ball or steel ball-bearing) to facilitate its use as improvised blackjack. Equestrian Horse whips or riding whips are artificial aids used by equestrians while riding, driving, or handling horses from the ground. 
There are many different kinds, but all feature a handle, a long, semi-flexible shaft, and either a popper or lash at the end, depending on use. Riding whips rarely exceed 48" from handle to popper, horse whips used for ground training and carriage driving are sometimes longer. The term "whip" is the generic word for riding whips, the term "crop" is more specific, referring to a short, stiff whip used primarily in English riding disciplines such as show jumping or hunt seat. Some of the more common types of horse whips include: Dressage whips are up to long, including lash or popper, and are used to refine the aids of the rider, not to hurt the horse. They generally ask for more impulsion, and are long enough that they can reach behind the rider's leg to tap the horse while the rider still holds the reins with both hands. The shaft is slightly flexible and tapers to a fine point at the tip. A similar, but slightly longer whip is used in saddle seat style English riding. Longe whips have a shaft about long and a lash of equal or greater length. They are used to direct the horse as it is 'moved on a circle around the person standing in the centre, a process known as "longeing" (pronounced ) The whip is used to guide and signal direction and pace, and is not used with force against the horse. Taking the place of the rider's leg aids, the positioning of the longe whip in relation to the horse gives the horse signals. Occasionally, due to the long lash, it may be cracked to enforce a command. Driving whips for carts, carriages, and coaches have a stock about the same length as a longe whips. The lash should be long enough to reach the shoulder of the forward-most horse from the driver's seat. A crop or "bat" has a fairly stiff stock, and is only in length, with a "popper" - a looped flap of leather - at the end. Because it is too short to reach behind the riders leg while still holding the reins, it is most often used by taking the reins in one hand and hitting the horse behind the rider's leg, using the crop, held in the other hand. Less often, it may be used to tap the horse on the shoulder as a simple reminder to the animal that the rider is carrying it. It is to back up the leg aids, when the horse is not moving forward, or occasionally as a disciplinary measure (such as when a horse refuses or runs out on a jump). Crops or bats are most commonly seen in sports such as show jumping, hunt seat style English riding, horse racing, and in rodeo speed sports such as barrel racing. A hunting whip is not precisely a horse whip, though it is carried by a mounted rider. It has a stock about the same length as a crop, except its "stock" is stiff, not flexible. On one end of the stock it has a lash that is ~1 m in length, on the other end it has a hook, which is used to help the rider open and close gates while out fox hunting. The hunting whip is not intended to be used on the horse, but rather the lash is there to remind the hounds to stay away from the horse's hooves, and it can also be used as a communication device to the hounds. A quirt is a short, flexible piece of thickly braided leather with two wide pieces of leather at the end, which makes a loud crack when it strikes an animal or object. They inflict more noise than pain. 
Quirts are occasionally carried on horses used in western riding disciplines, but because the action of a quirt is slow, they are not used to correct or guide the horse, but are more apt to be used by a rider to reach out and strike at animals, such as cattle that are being herded from horseback. A show cane is a short, stiff cane that may be plain, leather covered, or covered with braided leather. Traditional canes are made from a stick of holly, cherry or birch wood, which is dressed and polished. They are rarely used now except in formal show hacking events. Rudyard Kipling's short story Garm - a Hostage mentions a long whip used by a horseback rider in India to defend an accompanying pet dog from risk of attack by native pariah dogs. This probably was a hunting whip. In Victorian literature cads and bounders are depicted as being horsewhipped or threatened with horsewhipping for seduction of young women or breach of promise (to marry), usually by her brothers or father. Examples are found in the works of Benjamin Disraeli and Anthony Trollope who includes such a scene in Doctor Thorne. It is also mentioned, though not depicted, in comic novels by Evelyn Waugh and P.G. Wodehouse. As late as the 1970s the historian Desmond Seward was reported by the Daily Telegraph to have been threatened with horsewhipping for besmirching the reputation of Richard III in a biography. Cat o' nine tails The cat o' nine tails is a type of multi-tailed whip that originated as an implement for severe physical punishment, notably in the Royal Navy and Army of the United Kingdom, and also as a judicial punishment in Britain and some other countries. The cat is made up of nine knotted thongs of cotton cord, about long, designed to lacerate the skin and cause intense pain. It traditionally has nine thongs as a result of the manner in which rope is plaited. Thinner rope is made from three strands of yarn plaited together, and thicker rope from three strands of thinner rope plaited together. To make a cat o' nine tails, a rope is unravelled into three small ropes, each of which is unravelled again. Weapons The Bian (), also known as Chinese whip or hard whip, is a type of tubular-shaped club or rod weapon designed to inflict blunt damage with a whipping motion. A typical hard whip is made with metal and has a length of around 90 centimetres. Bamboo node-like protrusions are attached to the weapon body at regular intervals to reduce the contact surface and enhance the striking effect. The whip is stiff and does not bend. It weighs 7 or 8 kilograms. The weapon is used mainly on horseback with one hand, sometimes with two whips in both hands. The chain whip, also known as the soft whip, is a weapon used in some Chinese martial arts, particularly traditional Chinese disciplines, in addition to modern and traditional wushu. It consists of several metal rods, which are joined end-to-end by rings to form a flexible chain. Generally, the whip has a handle at one end and a metal dart, used for slashing or piercing an opponent, at the other. A cloth flag is often attached at or near the dart end of the whip and a second flag may cover the whip's handle. According to the book The Chain Whip, a whip in Chinese historical text may refer to either the soft whip or the hard whip due to an ambiguity in the Chinese language. "Both the hard whip and the soft whip can both be referred to simply as whip (鞭) in Chinese." Qilinbian Qilinbian (麒麟鞭, literally meaning "unicorn whip") is a metal whip invented in China in the late 1900s. 
The 15 cm handle is made from a steel chain wrapped with leather. The lash is made of steel rods decreasing in size linked by progressively smaller steel rings. Lash varies between 150 cm and 180 cm and is attached to a fall and a cracker. Total weight is 1–2 kg. It is used for physical exercise and in performances. Animal morphology Some organisms exhibit whip-like appendages in their physiology: Many unicellular organisms and spermatozoa have one or two whip-like flagella, which are used for propulsion. "Flagellum" is Latin for "whip". The tails of some large lizards (e.g. iguanas and monitor lizards) are used and optimized for whipping, and larger lizards can seriously injure a human with a well-placed strike. The biological names of some lizards reference this with the terms Mastigo- or -mastix, which derive from the Greek term for "whip". The whip snakes are so-called from their physical resemblance, and were associated with myths that they could whip with their body in self-defense, since proven false. Uropygi arachnids are also known as "whip scorpions" due to the shape of their tails. It has been proposed that some sauropod dinosaurs could crack the ends of their tails like coach whips as a sound signal, as well as a form of defense against any attackers. Newer, more sophisticated models, however, suggest that while the tails of some Diplodocid Dinosaurs could be used as whips, they likely would not have been able to break the sound barrier. In popular culture The whip is widely—if only situationally—portrayed across many avenues of popular culture. Whips have appeared in many cartoons, television shows, video games (including a central role in the Castlevania franchise), and numerous feature films, ranging from the original Zorro (1919) to Raiders of the Lost Ark (1981) and Catwoman (2004). The popular investigative-entertainment program MythBusters tested the various capabilities of whips shown in the film Raiders of the Lost Ark during "The Busters of the Lost Myths" episode. With exact trained usage, the show demonstrated that it is possible to disarm a pistol-wielding opponent with a long whip strike. The episode also demonstrates that a wood log, with sufficient friction, could be used as an overhang to grapple with a whip, swing across a chasm and neatly disengage. Using a high-speed camera they were also able to verify that the tip of a whip can break the speed of sound. In the Sherlock Holmes series of stories by Sir Arthur Conan Doyle, Holmes occasionally carries a loaded hunting crop as his favorite weapon. (For example, see "The Adventure of the Six Napoleons".) Such crops were sold at one time. Loading refers to the practice of filling the shaft and head with heavy metal (e.g., steel, lead) to provide some heft.
Technology
Melee weapons
null
410720
https://en.wikipedia.org/wiki/Brussels%20sprout
Brussels sprout
The Brussels sprout is a member of the Gemmifera cultivar group of cabbages (Brassica oleracea), grown for its edible buds. Etymology Though native to the Mediterranean region with other cabbage species, Brussels sprouts first appeared in northern Europe during the 5th century; they were later cultivated in the 13th century near Brussels, Belgium, from which their name derives. The group name Gemmifera (or lowercase and italicized gemmifera as a variety name) means "bud-bearing". Description The leaf vegetables are typically in diameter and resemble miniature cabbages. Cultivation History Predecessors to modern Brussels sprouts were probably cultivated in Ancient Rome. Brussels sprouts as they are now known were grown possibly as early as the 13th century in what is now Belgium. The first written reference dates to 1587. During the 16th century, they enjoyed a popularity in the southern Netherlands that eventually spread throughout the cooler parts of Northern Europe, reaching Britain by the 17th century. Brussels sprouts grow in temperature ranges of , with highest yields at . Fields are ready for harvest 90 to 180 days after planting. The edible sprouts grow like buds in helical patterns along the side of long, thick stalks of about in height, maturing over several weeks from the lower to the upper part of the stalk. Sprouts may be picked by hand into baskets, in which case several harvests are made of five to 15 sprouts at a time, or by cutting the entire stalk at once for processing, or by mechanical harvester, depending on variety. Each stalk can produce , although the commercial yield is about per stalk. Harvest season in temperate zones of the northern latitudes is September to March, making Brussels sprouts a traditional winter-stock vegetable. In the home garden, harvest can be delayed as quality does not suffer from freezing. Sprouts are considered to be sweetest after a frost. Brussels sprouts are a cultivar group of the same species as broccoli, cabbage, collard greens, kale, and kohlrabi; they are cruciferous (they belong to the family Brassicaceae; old name Cruciferae). Many cultivars are available; some are purple in color, such as 'Ruby Crunch' or 'Red Bull'. The purple varieties are hybrids between purple cabbage and regular green Brussels sprouts developed by a Dutch botanist in the 1940s, yielding a variety with some of the red cabbage's purple colors and greater sweetness. Contemporary Brussels sprouts In the 1990s, Dutch scientist Hans van Doorn identified the chemicals that make Brussels sprouts bitter: sinigrin and progoitrin. This enabled Dutch seed companies to cross-breed archived low-bitterness varieties with modern high-yield varieties, over time producing a significant increase in the popularity of the vegetable. Europe In Continental Europe, the largest producers are the Netherlands, at 82,000 metric tons, and Germany, at 10,000 tons. The United Kingdom has production comparable to that of the Netherlands, but its crop is generally not exported. Mexico Second to the Netherlands in export volume is Mexico, where the climate allows nearly year-round production. The Baja region is the main supplier to the US market, but produce also comes from the Mexicali, San Luis and coastal areas. United States It is unclear when Brussels sprouts were introduced to the United States, but French settlers in Louisiana are known to have grown them. 
The first commercial plantings began in the Louisiana delta in 1925, and much of these plantings would move to the Californian Central Coast by 1939. Currently, several thousand acres are planted in coastal areas of San Mateo, Santa Cruz, and Monterey counties of California, which offer an ideal combination of coastal fog and cool temperatures year-round. The harvest season lasts from June through January. Most U.S. production is in California, with a smaller percentage of the crop grown in Skagit Valley, Washington, where cool springs, mild summers, and rich soil abounds, and to a lesser degree on Long Island, New York. Total US production is around 32,000 tons, with a value of $27 million. About 80 to 85% of U.S. production is for the frozen food market, with the remainder for fresh consumption. Once harvested, sprouts last 3–5 weeks under ideal near-freezing conditions before wilting and discoloring, and about half as long at refrigerator temperature. North American varieties are generally in diameter. Uses Nutrition Raw Brussels sprouts are 86% water, 9% carbohydrates, 3% protein, and negligible fat. In a 100 gram reference amount, they supply high levels (20% or more of the Daily Value, DV) of vitamin C (102% DV) and vitamin K (169% DV), with more moderate amounts of B vitamins, such as vitamin B6, as well as folate; essential minerals and dietary fiber exist in moderate to low amounts (table). Culinary The most common method of preparing Brussels sprouts for cooking begins with cutting the buds off the stalk. Any surplus stem is cut away, and any loose surface leaves are peeled and discarded. Once cut and cleaned, the buds are typically cooked by boiling, steaming, stir frying, grilling, slow cooking, or roasting. Some cooks make a single cut or a cross in the center of the stem to aid the penetration of heat. The cross cut may, however, be ineffective, with it being commonly believed to cause the sprouts to be waterlogged when boiled. Overcooking renders the buds gray and soft, and they then develop a strong flavor and odor that some dislike for its garlic- or onion-odor properties. The odor is associated with the glucosinolate sinigrin, a sulfur compound having characteristic pungency. For taste, roasting Brussels sprouts is a common way to cook them to enhance flavor. Common toppings or additions include Parmesan cheese and butter, balsamic vinegar, brown sugar, chestnuts, or pepper. Gallery
Biology and health sciences
Leafy vegetables
Plants
410768
https://en.wikipedia.org/wiki/Sea-Monkeys
Sea-Monkeys
Sea-Monkeys is a marketing term for brine shrimp (Artemia) sold as novelty aquarium pets. Developed in the United States in 1957 by Harold von Braunhut, they are sold as eggs intended to be added to water, and most often come bundled in a kit of three pouches and instructions. Sometimes a small tank and additional pouches are included. The product was marketed in the 1960s and 70s, especially in comic books, and remains a presence in popular culture. History Ant farms had been popularized in 1956 by Milton Levine. Harold von Braunhut invented a brine-shrimp-based product the next year, 1957. Von Braunhut collaborated with a marine biologist, Anthony D’Agostino, to develop the proper mix of nutrients and chemicals in dry form that could be added to plain tap water to create a suitable habitat for the shrimp to thrive. Von Braunhut was granted a patent for this process on July 4, 1972. They were initially called "Instant Life" and sold for $0.49, but von Braunhut changed the name to "Sea-Monkeys" in 1962. The new name was based on their salt-water habitat, together with the supposed resemblance of the animals' tails to those of monkeys. Sea-Monkeys were intensely marketed in comic books throughout the 1960s and early 1970s using illustrations by the comic-book illustrator Joe Orlando. These showed humanoid animals that bore no resemblance to the crustaceans. Many purchasers were disappointed by the dissimilarity and by the short lifespan of the animals. Von Braunhut is quoted as stating: "I think I bought something like 3.2 million pages of comic book advertising a year. It worked beautifully." Use A colony is started by adding the contents of a packet labeled "Water Purifier" to a tank of water. This packet contains salt, water conditioner, and brine shrimp eggs. After 24 hours, this is augmented with the contents of a packet labeled "Instant Life Eggs", containing more eggs, yeast, borax, soda, salt, some food, and sometimes a dye. Shortly after that, Sea-Monkeys hatch from the eggs that were in the "Water Purifier" packet. "Growth Food" containing yeast and spirulina is then added every seven days. The best temperature for hatching is 24-27°C. Additional pouches can be purchased on the official website, though these are not required for the well-being of the Sea-Monkeys. Artemia usually has a lifespan of two to three months. Under ideal home conditions, pet sea-monkeys have been observed to live for up to five years. As they are easy to breed and care for, brine shrimp are also often used as a model organism in scientific research to study developmental biology, genetics, and toxicology. Biology The animals sold as Sea-Monkeys are claimed to be an artificial breed known as Artemia NYOS, formed by hybridizing different species of Artemia. The manufacturer also claims that they live longer and grow bigger than ordinary brine shrimp. They undergo cryptobiosis or anhydrobiosis, a condition of apparent lifelessness which allows them to survive the desiccation of the temporary pools in which they live. Sea-monkeys are known for their unique life-cycle. They hatch from eggs that can remain dormant for years until they are exposed to water. Once the eggs are in water, they hatch into nauplius larvae, which eventually develop into adult Sea-Monkeys. The entire life cycle takes around 8-10 weeks. Astronaut John Glenn took Sea-Monkeys into space on October 29, 1998, aboard Space Shuttle Discovery during mission STS-95. 
After nine days in space, they were returned to Earth and hatched eight weeks later, apparently unaffected by their travels. However, earlier experiments on Apollo 16 and Apollo 17, where the eggs (along with other biological systems in a state of rest, such as spores, seeds, and cysts) traveled to the Moon and back and were exposed to significant cosmic rays, observed a high sensitivity to cosmic radiation in the Artemia salina eggs; only 10% of the embryos which were induced to develop from eggs survived to adulthood. The most-common mutations found during the developmental stages of the irradiated eggs were deformations of the abdomen or deformations on the swimming-appendages and naupliar eye of the nauplius.
Biology and health sciences
Crustaceans
Animals
410778
https://en.wikipedia.org/wiki/Micromachinery
Micromachinery
Micromachines are mechanical objects that are fabricated in the same general manner as integrated circuits. They are generally considered to be between 100 nanometres and 100 micrometres in size, although that is debatable. The applications of micromachines include accelerometers that detect when a car has hit an object and trigger an airbag. Complex systems of gears and levers are another application. Fabrication The fabrication of these devices is usually done by two techniques, surface micromachining and bulk micromachining. To do bulk micromachining, the region needed is highly doped with boron and the unwanted silicon is etched away in liquid silicon etchants. This technique is termed an etch stop, as the boron doping produces a layer or pattern that resists the etch. Transducers Most micromachines act as transducers; in other words, they are either sensors or actuators. Sensors convert information from the environment into interpretable electrical signals. One example of a micromachine sensor is a resonant chemical sensor. A lightly damped mechanical object vibrates much more at one frequency than any other, and this frequency is called its resonance frequency. A chemical sensor is coated with a special polymer that attracts certain molecules, such as those found in anthrax, and when those molecules attach to the sensor, its mass increases. The increased mass alters the resonance frequency of the mechanical object, which is detected with circuitry. Actuators convert electrical signals and energy into motion of some kind. The three most common types of actuators are electrostatic, thermal, and magnetic. Electrostatic actuators use the force of electrostatic energy to move objects. Two mechanical elements, one that is stationary (the stator) and one that is movable (the rotor), have two different voltages applied to them, which creates an electric field. The field competes with a restoring force on the rotor (normally a spring force produced by the bending or stretching of the rotor) to move it. The greater the electric field, the further the rotor will move. Thermal actuators use the force of thermal expansion to move objects. When a material is heated, it expands by an amount that depends on its material properties. Two objects can be connected in such a way that one object is heated more than the other and expands more, and this imbalance creates motion. The direction of motion depends on the connection between the objects. This is seen in a "heatuator", which is a U-shaped beam with one wide arm and one narrow arm. When a current is passed through the object, heat is created. The narrow arm is heated more than the wide arm because the same current flows through both arms, giving the narrow arm a higher current density. Since the two arms are connected at the top, the stretching hot arm pushes in the direction of the cold arm. Magnetic actuators use fabricated magnetic layers to create forces.
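The resonant chemical sensor described above can be illustrated numerically. In the minimal sketch below (Python, an illustrative addition rather than anything from the article), the sensor is modelled as an ideal mass-spring resonator with an assumed stiffness and effective mass; adsorbing extra mass lowers the resonance frequency f = (1/2π)·sqrt(k/m), and the resulting frequency shift is the signal the read-out circuitry would detect.

```python
import math

def resonance_frequency(stiffness, mass):
    """Resonance frequency in Hz of an ideal mass-spring resonator: f = (1/(2*pi)) * sqrt(k/m)."""
    return math.sqrt(stiffness / mass) / (2.0 * math.pi)

# Assumed, purely illustrative values for a micromachined resonator.
k = 50.0       # stiffness in N/m
m = 2.0e-12    # effective mass in kg (a few picograms)
dm = 1.0e-15   # adsorbed mass in kg (captured molecules)

f_clean = resonance_frequency(k, m)
f_loaded = resonance_frequency(k, m + dm)
print(f"clean:  {f_clean:,.0f} Hz")
print(f"loaded: {f_loaded:,.0f} Hz")
print(f"shift:  {f_clean - f_loaded:,.1f} Hz")  # the measurable signature of the added mass
```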
Technology
Machinery and tools: General
null
410783
https://en.wikipedia.org/wiki/Cover%20crop
Cover crop
In agriculture, cover crops are plants that are planted to cover the soil rather than for the purpose of being harvested. Cover crops manage soil erosion, soil fertility, soil quality, water, weeds, pests, diseases, biodiversity and wildlife in an agroecosystem, an ecological system managed and shaped by humans. Cover crops can increase microbial activity in the soil, which has a positive effect on nitrogen availability, nitrogen uptake in target crops, and crop yields. Cover crops reduce water pollution risks and remove CO2 from the atmosphere. Cover crops may be an off-season crop planted after harvesting the cash crop. Cover crops can serve as nurse crops, in that they increase the survival of the main crop being harvested, and are often grown over the winter. In the United States, cover cropping may cost as much as $35 per acre. Soil erosion Although cover crops can perform multiple functions in an agroecosystem simultaneously, they are often grown for the sole purpose of preventing soil erosion. Soil erosion is a process that can irreparably reduce the productive capacity of an agroecosystem. Cover crops reduce soil loss by improving soil structure and increasing infiltration, protecting the soil surface, scattering raindrop energy, and reducing the velocity of the movement of water over the soil surface. Dense cover crop stands physically slow down the velocity of rainfall before it contacts the soil surface, preventing soil splashing and erosive surface runoff. Additionally, vast cover crop root networks help anchor the soil in place and increase soil porosity, producing suitable habitat networks for soil macrofauna. These effects help keep the soil enriched for the next few years. Soil fertility management One of the primary uses of cover crops is to increase soil fertility. These types of cover crops are referred to as "green manure". They are used to manage a range of soil macronutrients and micronutrients. Of the various nutrients, the impact that cover crops have on nitrogen management has received the most attention from researchers and farmers because nitrogen is often the most limiting nutrient in crop production. Often, green manure crops are grown for a specific period, and then plowed under before reaching full maturity to improve soil fertility and quality. The stalks left behind protect the soil from erosion. Green manure crops are commonly leguminous, meaning they are part of the pea family, Fabaceae. This family is unique in that all of the species in it set pods, such as beans, lentils, lupins and alfalfa. Leguminous cover crops are typically high in nitrogen and can often provide the required quantity of nitrogen for crop production. In conventional farming, this nitrogen is typically applied in chemical fertilizer form. In organic farming, nitrogen inputs may take the form of organic fertilizers, compost, cover crop seed, and fixation by legume cover crops. This quality of cover crops is called fertilizer replacement value. Another quality unique to leguminous cover crops is that they form symbiotic relationships with the rhizobial bacteria that reside in legume root nodules. Lupins are nodulated by the soil microorganism Bradyrhizobium sp. (Lupinus). Bradyrhizobia are encountered as microsymbionts in other leguminous crops (Argyrolobium, Lotus, Ornithopus, Acacia, Lupinus) of Mediterranean origin. These bacteria convert biologically unavailable atmospheric nitrogen gas (N2) to biologically available ammonium (NH4+) through the process of biological nitrogen fixation.
In general, cover crops increase soil microbial activity, which has a positive effect on nitrogen availability in the soil, nitrogen uptake in target crops, and crop yields. Prior to the advent of the Haber–Bosch process, an energy-intensive method developed to carry out industrial nitrogen fixation and create chemical nitrogen fertilizer, most nitrogen introduced to ecosystems arose through biological nitrogen fixation. Some scientists believe that widespread biological nitrogen fixation, achieved mainly through the use of cover crops, is the only alternative to industrial nitrogen fixation in the effort to maintain or increase future food production levels. Industrial nitrogen fixation has been criticized as an unsustainable source of nitrogen for food production due to its reliance on fossil fuel energy and the environmental impacts associated with chemical nitrogen fertilizer use in agriculture. Such widespread environmental impacts include nitrogen fertilizer losses into waterways, which can lead to eutrophication (nutrient loading) and ensuing hypoxia (oxygen depletion) of large bodies of water. An example of this is in the Mississippi River basin, where years of fertilizer nitrogen loading into the watershed from agricultural production have resulted in an annual summer hypoxic "dead zone" in the Gulf of Mexico that reached an area of over 22,000 square kilometers in 2017. The ecological complexity of marine life in this zone has been diminishing as a consequence. As well as bringing nitrogen into agroecosystems through biological nitrogen fixation, types of cover crops known as "catch crops" are used to retain and recycle soil nitrogen already present. The catch crops take up surplus nitrogen remaining from fertilization of the previous crop, preventing it from being lost through leaching, gaseous denitrification, or volatilization. Catch crops are typically fast-growing annual cereal species adapted to scavenge available nitrogen efficiently from the soil. The nitrogen captured in catch crop biomass is released back into the soil once the catch crop is incorporated as a green manure or otherwise begins to decompose. An example of green manure use comes from Nigeria, where the cover crop Mucuna pruriens (velvet bean) has been found to increase the availability of phosphorus in soil after a farmer applies rock phosphate.
Soil quality management
Cover crops can also improve soil quality by increasing soil organic matter levels through the input of cover crop biomass over time. Increased soil organic matter enhances soil structure as well as the water and nutrient holding and buffering capacities of the soil. It can also lead to increased soil carbon sequestration, which has been promoted as a strategy to help offset the rise in atmospheric carbon dioxide levels. Soil quality is managed to produce optimum conditions for crops to flourish. The principal factors affecting soil quality are soil salination, pH, microorganism balance, and the prevention of soil contamination. If soil quality is properly managed and maintained, it forms the foundation for a healthy and productive agroecosystem that can remain so for many years.
Water management
By reducing soil erosion, cover crops often also reduce both the rate and quantity of water that drains off the field, which would normally pose environmental risks to waterways and ecosystems downstream.
Cover crop biomass acts as a physical barrier between rainfall and the soil surface, allowing raindrops to trickle steadily down through the soil profile. Also, as stated above, cover crop root growth results in the formation of soil pores, which, in addition to enhancing soil macrofauna habitat, provide pathways for water to filter through the soil profile rather than draining off the field as surface flow. With increased water infiltration, the potential for soil water storage and the recharge of aquifers can be improved. Just before cover crops are killed (by practices such as mowing, tilling, discing, rolling, or herbicide application) they contain a large amount of moisture. When the cover crop is incorporated into the soil, or left on the soil surface, it often increases soil moisture. In agroecosystems where water for crop production is in short supply, cover crops can be used as a mulch to conserve water by shading and cooling the soil surface. This reduces the evaporation of soil moisture and helps preserve soil nutrients.
Weed management
Thick cover crop stands often compete well with weeds during the cover crop growth period, and can prevent most germinated weed seeds from completing their life cycle and reproducing. If the cover crop is flattened down on the soil surface rather than incorporated into the soil as a green manure after its growth is terminated, it can form a nearly impenetrable mat. This drastically reduces light transmittance to weed seeds, which in many cases reduces weed seed germination rates. Furthermore, even when weed seeds germinate, they often run out of stored energy for growth before building the necessary structural capacity to break through the cover crop mulch layer. This is often termed the cover crop smother effect. Some cover crops suppress weeds both during growth and after death. During growth these cover crops compete vigorously with weeds for available space, light, and nutrients, and after death they smother the next flush of weeds by forming a mulch layer on the soil surface. For example, researchers found that when using Melilotus officinalis (yellow sweetclover) as a cover crop in an improved fallow system (where a fallow period is intentionally improved by any number of different management practices, including the planting of cover crops), weed biomass constituted only 1–12% of total standing biomass at the end of the cover crop growing season. Furthermore, after cover crop termination, the yellow sweetclover residues suppressed weeds to levels 75–97% lower than in fallow (no yellow sweetclover) systems. In addition to competition-based or physical weed suppression, certain cover crops are known to suppress weeds through allelopathy. This occurs when the degradation of certain biochemical compounds in the cover crop releases substances that are toxic to, or inhibit seed germination of, other plant species. Some well-known examples of allelopathic cover crops are Secale cereale (rye), Vicia villosa (hairy vetch), Trifolium pratense (red clover), Sorghum bicolor (sorghum-sudangrass), and species in the family Brassicaceae, particularly mustards. In one study, rye cover crop residues were found to have provided between 80% and 95% control of early season broadleaf weeds when used as a mulch during the production of different cash crops such as soybean, tobacco, corn, and sunflower. In general, cover crops need not compete with cash crops, as they can be grown and terminated early in the season before other crops are established.
In a 2010 study released by the Agricultural Research Service (ARS), scientists examined how rye seeding rates and planting patterns affected cover crop production. The results show that planting more pounds per acre of rye increased the cover crop's production and decreased the amount of weeds. The same was true when scientists tested seeding rates on legumes and oats; a higher density of seeds planted per acre decreased the amount of weeds and increased the yield of legume and oat production. The planting patterns, which consisted of either traditional rows or grid patterns, did not seem to have a significant impact on the cover crop's production or on the weed production in either cover crop. The ARS scientists concluded that increased seeding rates could be an effective method of weed control. Cornell University's Sustainable Cropping Systems Lab released a study in May 2023 investigating the effectiveness of time-sensitive planting and strategic coupling of cover crop variants with phylogenetically similar cash crops. The primary researcher, Uriel Menalled, found that if cover and cash crops are planted in accordance with the study's findings, farmers can decrease weed growth by up to 99%. The study provides farmers with a comprehensive framework to identify cover crops that would best suit their existing cropping rotations. In sum, the results from this study support an understanding that phylogenetic relatedness can be harnessed to significantly suppress weed growth.
Disease management
In the same way that allelopathic properties of cover crops can suppress weeds, they can also break disease cycles and reduce populations of bacterial and fungal diseases, and parasitic nematodes. Species in the family Brassicaceae, such as mustards, have been widely shown to suppress fungal disease populations through the release of naturally occurring toxic chemicals during the degradation of glucosinolate compounds in their plant cell tissues.
Pest management
Some cover crops are used as so-called "trap crops", to attract pests away from the crop of value and toward what the pest sees as a more favorable habitat. Trap crop areas can be established within crops, within farms, or within landscapes. In many cases, the trap crop is grown during the same season as the food crop being produced. The limited area occupied by these trap crops can be treated with a pesticide once pests are drawn to the trap in large enough numbers to reduce pest populations. In some organic systems, farmers drive over the trap crop with a large vacuum-based implement to physically pull the pests off the plants and out of the field. This system has been recommended for use to help control lygus bugs in organic strawberry production. Other examples of trap crops are nematode-resistant white mustard (Sinapis alba) and radish (Raphanus sativus). They can be grown after a main (cereal) crop and trap nematodes, for example the beet cyst nematode and the Columbian root knot nematode. When these crops are grown, nematodes hatch and are attracted to the roots. After entering the roots, they cannot reproduce there due to a hypersensitive resistance reaction of the plant. Hence the nematode population is greatly reduced, by 70–99%, depending on species and cultivation time. Other cover crops are used to attract natural predators of pests by imitating elements of their habitat. This is a form of biological control known as habitat augmentation, but achieved with the use of cover crops.
Findings on the relationship between cover crop presence and predator–pest population dynamics have been mixed, suggesting the need for detailed information on specific cover crop types and management practices to best complement a given integrated pest management strategy. For example, the predator mite Euseius tularensis (Congdon) is known to help control the pest citrus thrips in Central California citrus orchards. Researchers found that the planting of several different leguminous cover crops (such as bell bean, woollypod vetch, New Zealand white clover, and Austrian winter pea) provided sufficient pollen as a feeding source to cause a seasonal increase in E. tularensis populations, which, with good timing, could potentially introduce enough predatory pressure to reduce pest populations of citrus thrips.
Biodiversity and wildlife
Although cover crops are normally used to serve one of the above discussed purposes, they often also serve as habitat for wildlife. The use of cover crops adds at least one more dimension of plant diversity to a cash crop rotation. Since the cover crop is typically not a crop of value, its management is usually less intensive, providing a window of "soft" human influence on the farm. This relatively "hands-off" management, combined with the increased on-farm heterogeneity produced by the establishment of cover crops, increases the likelihood that a more complex trophic structure will develop to support a higher level of wildlife diversity. In one study, researchers compared arthropod and songbird species composition and field use between conventionally managed and cover cropped cotton fields in the Southern United States. The cover cropped cotton fields were planted with clover, which was left to grow in between cotton rows throughout the early cotton growing season (strip cover cropping). During the migration and breeding season, they found that songbird densities were 7–20 times higher in the cotton fields with an integrated clover cover crop than in the conventional cotton fields. Arthropod abundance and biomass were also higher in the clover cover-cropped fields throughout much of the songbird breeding season, which was attributed to an increased supply of flower nectar from the clover. The clover cover crop enhanced songbird habitat by providing cover and an increased food source from higher arthropod populations.
Technology
Soil and soil management
null
410923
https://en.wikipedia.org/wiki/Neutron%20radiation
Neutron radiation
Neutron radiation is a form of ionizing radiation that presents as free neutrons. Typical phenomena are nuclear fission or nuclear fusion causing the release of free neutrons, which then react with nuclei of other atoms to form new nuclides—which, in turn, may trigger further neutron radiation. Free neutrons are unstable, decaying into a proton, an electron, and an electron antineutrino. Free neutrons have a mean lifetime of 887 seconds (14 minutes, 47 seconds). Neutron radiation is distinct from alpha, beta and gamma radiation.
Sources
Neutrons may be emitted from nuclear fusion or nuclear fission, or from other nuclear reactions such as radioactive decay or particle interactions with cosmic rays or within particle accelerators. Large neutron sources are rare, and usually limited to large-sized devices such as nuclear reactors or particle accelerators, including the Spallation Neutron Source. Neutron radiation was discovered from observing an alpha particle colliding with a beryllium nucleus, which was transformed into a carbon nucleus while emitting a neutron, 9Be(α, n)12C. The combination of an alpha particle emitter and an isotope with a large (α, n) nuclear reaction probability is still a common neutron source.
Neutron radiation from fission
The neutrons in nuclear reactors are generally categorized as slow (thermal) neutrons or fast neutrons depending on their energy. Thermal neutrons are similar in energy distribution (the Maxwell–Boltzmann distribution) to a gas in thermodynamic equilibrium, but are easily captured by atomic nuclei and are the primary means by which elements undergo nuclear transmutation. To achieve an effective fission chain reaction, neutrons produced during fission must be captured by fissionable nuclei, which then split, releasing more neutrons. In most fission reactor designs, the nuclear fuel is not sufficiently refined to absorb enough fast neutrons to carry on the chain reaction, due to the lower cross section for higher-energy neutrons, so a neutron moderator must be introduced to slow the fast neutrons down to thermal velocities to permit sufficient absorption. Common neutron moderators include graphite, ordinary (light) water and heavy water. A few reactors (fast neutron reactors) and all nuclear weapons rely on fast neutrons.
Cosmogenic neutrons
Cosmogenic neutrons are produced from cosmic radiation in the Earth's atmosphere or surface, as well as in particle accelerators. They often possess higher energy levels compared to neutrons found in reactors. Many of these neutrons activate atomic nuclei before reaching the Earth's surface, while a smaller fraction interact with nuclei in the air. When these neutrons interact with nitrogen-14 atoms, they can transform them into carbon-14 (14C), which is extensively utilized in radiocarbon dating.
Uses
Cold, thermal and hot neutron radiation is most commonly used in scattering and diffraction experiments, to assess the properties and the structure of materials in crystallography, condensed matter physics, biology, solid state chemistry, materials science, geology, mineralogy, and related sciences. Neutron radiation is also used in boron neutron capture therapy to treat cancerous tumors due to its highly penetrating and damaging nature to cellular structure. Neutrons can also be used for imaging of industrial parts, termed neutron radiography when using film, neutron radioscopy when taking a digital image, such as through image plates, and neutron tomography for three-dimensional images.
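As a quick numerical illustration of the free-neutron decay figures quoted above, the surviving fraction of a population of free neutrons follows a simple exponential law. The short Python sketch below is illustrative only (the function names are invented for this example); it converts the 887-second mean lifetime into a half-life and prints the fraction of neutrons still undecayed after a few flight times.

```python
import math

MEAN_LIFETIME_S = 887.0  # mean lifetime of a free neutron, as quoted in the text


def half_life(mean_lifetime: float) -> float:
    """Half-life of an exponentially decaying population: t_1/2 = tau * ln(2)."""
    return mean_lifetime * math.log(2)


def surviving_fraction(t_seconds: float, mean_lifetime: float = MEAN_LIFETIME_S) -> float:
    """Fraction of free neutrons that have not yet decayed after t_seconds."""
    return math.exp(-t_seconds / mean_lifetime)


if __name__ == "__main__":
    print(f"half-life ~ {half_life(MEAN_LIFETIME_S):.0f} s")  # about 615 s (~10 min)
    for t in (1.0, 60.0, 887.0, 3600.0):
        print(f"after {t:6.0f} s: {surviving_fraction(t):.1%} still undecayed")
```

In practice, neutrons from reactors or accelerators are absorbed or moderated long before free decay matters, so the calculation mainly shows why neutron beams cannot be stored the way charged-particle beams can.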
Neutron imaging is commonly used in the nuclear industry, the space and aerospace industry, as well as the high-reliability explosives industry.
Ionization mechanisms and properties
Neutron radiation is often called indirectly ionizing radiation. It does not ionize atoms in the same way that charged particles such as protons and electrons do (exciting an electron), because neutrons have no charge. However, neutron interactions are largely ionizing, for example when neutron absorption results in gamma emission and the gamma ray (photon) subsequently removes an electron from an atom, or when a nucleus recoiling from a neutron interaction is ionized and causes more traditional subsequent ionization in other atoms. Because neutrons are uncharged, they are more penetrating than alpha radiation or beta radiation. In some cases they are more penetrating than gamma radiation, which is impeded in materials of high atomic number. In materials of low atomic number such as hydrogen, a low energy gamma ray may be more penetrating than a high energy neutron.
Health hazards and protection
In health physics, neutron radiation is a type of radiation hazard. Another, more severe, hazard of neutron radiation is neutron activation, the ability of neutron radiation to induce radioactivity in most substances it encounters, including bodily tissues. This occurs through the capture of neutrons by atomic nuclei, which are transformed to another nuclide, frequently a radionuclide. This process accounts for much of the radioactive material released by the detonation of a nuclear weapon. It is also a problem in nuclear fission and nuclear fusion installations as it gradually renders the equipment radioactive such that eventually it must be replaced and disposed of as low-level radioactive waste. Neutron radiation protection relies on radiation shielding. Due to their high kinetic energy, neutrons are considered the most severe and dangerous type of radiation when the whole body is exposed to external radiation sources. In comparison to conventional ionizing radiation based on photons or charged particles, neutrons are repeatedly bounced and slowed (absorbed) by light nuclei, so hydrogen-rich materials are more effective at shielding than materials made of heavier nuclei such as iron. The light atoms serve to slow down the neutrons by elastic scattering so they can then be absorbed by nuclear reactions. However, gamma radiation is often produced in such reactions, so additional shielding must be provided to absorb it. Care must be taken to avoid using materials whose nuclei undergo fission or neutron capture that causes radioactive decay of nuclei, producing gamma rays. Neutrons readily pass through most material, and hence the absorbed dose (measured in grays) from a given amount of radiation is low, but they interact enough to cause biological damage. The most effective shielding materials are water, or hydrocarbons like polyethylene or paraffin wax. Water-extended polyester (WEP) is effective as a shielding wall in harsh environments due to its high hydrogen content and resistance to fire, allowing it to be used in a range of nuclear, health physics, and defense industries. Hydrogen-rich materials are suitable for shielding because they efficiently slow and absorb neutrons. Concrete (where a considerable number of water molecules chemically bind to the cement) and gravel provide a cheap solution due to their combined shielding of both gamma rays and neutrons. Boron is also an excellent neutron absorber (and also undergoes some neutron scattering).
After absorbing a neutron, boron decays into carbon or helium and produces virtually no gamma radiation; boron carbide is a shield commonly used where concrete would be cost prohibitive. Commercially, tanks of water or fuel oil, concrete, gravel, and B4C are common shields that surround areas of large amounts of neutron flux, e.g., nuclear reactors. Boron-impregnated silica glass, standard borosilicate glass, high-boron steel, paraffin, and Plexiglas have niche uses. Neutrons that strike a hydrogen nucleus (a proton or deuteron) impart energy to that nucleus, which in turn breaks from its chemical bonds and travels a short distance before stopping. Such hydrogen nuclei are high linear energy transfer particles, and are in turn stopped by ionization of the material they travel through. Consequently, in living tissue, neutrons have a relatively high relative biological effectiveness, and are roughly ten times more effective at causing biological damage compared to gamma or beta radiation of equivalent energy exposure. These neutrons can cause cells to change in their functionality or to stop replicating altogether, causing damage to the body over time. Neutrons are particularly damaging to soft tissues like the cornea of the eye.
Effects on materials
High-energy neutrons damage and degrade materials over time; bombardment of materials with neutrons creates collision cascades that can produce point defects and dislocations in the material, the creation of which is the primary driver behind microstructural changes occurring over time in materials exposed to radiation. At high neutron fluences this can lead to embrittlement of metals and other materials, and to neutron-induced swelling in some of them. This poses a problem for nuclear reactor vessels and significantly limits their lifetime (which can be somewhat prolonged by controlled annealing of the vessel, reducing the number of built-up dislocations). Graphite neutron moderator blocks are especially susceptible to this effect, known as the Wigner effect, and must be annealed periodically. The Windscale fire was caused by a mishap during such an annealing operation. Radiation damage to materials occurs as a result of the interaction of an energetic incident particle (a neutron, or otherwise) with a lattice atom in the material. The collision causes a large transfer of kinetic energy to the lattice atom, which is displaced from its lattice site, becoming what is known as the primary knock-on atom (PKA). Because the PKA is surrounded by other lattice atoms, its displacement and passage through the lattice results in many subsequent collisions and the creation of additional knock-on atoms, producing what is known as the collision cascade or displacement cascade. The knock-on atoms lose energy with each collision, and terminate as interstitials, effectively creating a series of Frenkel defects in the lattice. Heat is also created as a result of the collisions (from electronic energy loss), as are possibly transmuted atoms. The magnitude of the damage is such that a single 1 MeV neutron creating a PKA in an iron lattice produces approximately 1,100 Frenkel pairs. The entire cascade event occurs over a timescale of 1 × 10⁻¹³ seconds, and therefore can only be "observed" in computer simulations of the event. The knock-on atoms terminate in non-equilibrium interstitial lattice positions, many of which annihilate themselves by diffusing back into neighboring vacant lattice sites, restoring the ordered lattice.
Those that do not or cannot return leave vacancies behind, which causes a local rise in the vacancy concentration far above the equilibrium concentration. These vacancies tend to migrate as a result of thermal diffusion towards vacancy sinks (i.e., grain boundaries, dislocations) but exist for significant amounts of time, during which additional high-energy particles bombard the lattice, creating collision cascades and additional vacancies, which migrate towards sinks. The main effect of irradiation in a lattice is the significant and persistent flux of defects to sinks in what is known as the defect wind. Vacancies can also annihilate by combining with one another to form dislocation loops and, later, lattice voids. The collision cascade creates many more vacancies and interstitials in the material than equilibrium for a given temperature, and diffusivity in the material is dramatically increased as a result. This leads to an effect called radiation-enhanced diffusion, which leads to microstructural evolution of the material over time. The mechanisms leading to the evolution of the microstructure are many, may vary with temperature, flux, and fluence, and are a subject of extensive study. Radiation-induced segregation results from the aforementioned flux of vacancies to sinks, implying a flux of lattice atoms away from sinks, but not necessarily in the same proportion as the alloy composition in the case of an alloyed material. These fluxes may therefore lead to depletion of alloying elements in the vicinity of sinks. For the flux of interstitials introduced by the cascade, the effect is reversed: the interstitials diffuse toward sinks, resulting in alloy enrichment near the sink. Dislocation loops are formed if vacancies form clusters on a lattice plane. If these vacancy clusters expand in three dimensions, a void forms. By definition, voids are under vacuum, but may become gas-filled in the case of alpha-particle radiation (helium) or if the gas is produced as a result of transmutation reactions. The void is then called a bubble, and leads to dimensional instability (neutron-induced swelling) of parts subject to radiation. Swelling presents a major long-term design problem, especially in reactor components made out of stainless steel. Alloys with crystallographic anisotropy, such as Zircaloys, are subject to the creation of dislocation loops, but do not exhibit void formation. Instead, the loops form on particular lattice planes, and can lead to irradiation-induced growth, a phenomenon distinct from swelling, but that can also produce significant dimensional changes in an alloy. Irradiation of materials can also induce phase transformations in the material: in the case of a solid solution, the solute enrichment or depletion at sinks (radiation-induced segregation) can lead to the precipitation of new phases in the material. The mechanical effects of these mechanisms include irradiation hardening, embrittlement, creep, and environmentally-assisted cracking. The defect clusters, dislocation loops, voids, bubbles, and precipitates produced as a result of radiation in a material all contribute to the strengthening and embrittlement (loss of ductility) of the material. Embrittlement is of particular concern for the material comprising the reactor pressure vessel, where as a result the energy required to fracture the vessel decreases significantly. It is possible to restore ductility by annealing the defects out, and much of the life-extension of nuclear reactors depends on the ability to safely do so.
Creep is also greatly accelerated in irradiated materials, though not as a result of the enhanced diffusivities, but rather as a result of the interaction between lattice stress and the developing microstructure. Environmentally assisted cracking, or more specifically irradiation-assisted stress corrosion cracking (IASCC), is observed especially in alloys subject to neutron radiation and in contact with water. It is caused by hydrogen absorption at crack tips resulting from radiolysis of the water, which reduces the energy required to propagate the crack.
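The Frenkel-pair figure quoted earlier (roughly 1,100 pairs from a single 1 MeV neutron in iron) is the kind of number usually obtained from the Norgett–Robinson–Torrens (NRT) displacement model, which the article does not name but which is the conventional way such estimates are produced. The sketch below is a rough illustration under stated assumptions: the damage energy assigned to the primary knock-on atom and the 40 eV displacement threshold for iron are assumed, textbook-style values, so the output is an order-of-magnitude figure rather than a reproduction of the article's number.

```python
def nrt_frenkel_pairs(damage_energy_ev: float, threshold_ev: float = 40.0) -> float:
    """Norgett-Robinson-Torrens estimate of stable Frenkel pairs from one PKA.

    damage_energy_ev -- PKA kinetic energy available for atomic displacements
                        (electronic losses already subtracted); assumed input.
    threshold_ev     -- displacement threshold energy; ~40 eV is a typical value for iron.
    """
    if damage_energy_ev < threshold_ev:
        return 0.0  # too little energy to displace any atom
    if damage_energy_ev < 2.5 * threshold_ev:
        return 1.0  # a single displacement
    return 0.8 * damage_energy_ev / (2.0 * threshold_ev)


if __name__ == "__main__":
    # Illustrative assumption: ~60 keV of a fast-neutron PKA's energy goes into atomic motion.
    print(f"NRT estimate: ~{nrt_frenkel_pairs(60_000.0):.0f} Frenkel pairs per cascade")  # ~600
```

The result is very sensitive to the assumed damage energy and threshold, which is why estimates in the literature (including the approximately 1,100 pairs quoted above) can differ by a factor of a few.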
Physical sciences
Nuclear physics
Physics
411018
https://en.wikipedia.org/wiki/Hepatitis%20E
Hepatitis E
Hepatitis E is inflammation of the liver caused by infection with the hepatitis E virus (HEV); it is a type of viral hepatitis. Hepatitis E has mainly a fecal-oral transmission route that is similar to hepatitis A, although the viruses are unrelated. HEV is a positive-sense, single-stranded, nonenveloped, icosahedral RNA virus and one of five known human hepatitis viruses: A, B, C, D, and E. Like hepatitis A, hepatitis E usually follows an acute and self-limiting course of illness (the condition is temporary and the individual recovers) with low death rates in resource-rich areas; however, it can be more severe in pregnant women and people with a weakened immune system, with substantially higher death rates. In pregnant women, especially in the third trimester, the disease is more often severe and is associated with a clinical syndrome called fulminant liver failure, with death rates around 20%. Whereas pregnant women may have a rapid and severe course, organ transplant recipients who receive medications to weaken the immune system and prevent organ rejection can develop a slower and more persistent form called chronic hepatitis E, which is diagnosed after 3 months of continuous viremia. HEV can be clustered genetically into 8 genotypes; genotypes 3 and 4 tend to be the ones that cause chronic hepatitis in the immunosuppressed. In 2017, hepatitis E was estimated to affect more than 19 million people. Those most commonly at risk of HEV are men aged 15 to 35 years. A preventive vaccine (HEV 239) is approved for use in China. The virus was discovered in 1983 by researchers investigating an outbreak of unexplained hepatitis among Soviet soldiers serving in Afghanistan. The earliest well-documented epidemic of hepatitis E occurred in 1955 in New Delhi and affected tens of thousands of people (hepatitis E virus was identified retrospectively as the etiological agent through testing of stored samples).
Signs and symptoms
Acute infection
The average incubation period of hepatitis E is 40 days, ranging from 2 to 8 weeks. After a short prodromal phase, symptoms may include jaundice, fatigue, and nausea, though most HEV infections are asymptomatic. The symptomatic phase coincides with elevated hepatic aminotransferase levels. Viral RNA becomes detectable in stool and blood serum during the incubation period. Serum IgM and IgG antibodies against HEV appear just before the onset of clinical symptoms. Recovery leads to virus clearance from the blood, while the virus may persist in stool for much longer. Recovery is also marked by disappearance of IgM antibodies and an increase in levels of IgG antibodies.
Chronic infection
While usually lasting weeks and then resolving, in people with weakened immune systems—particularly in people who have had a solid organ transplant—hepatitis E may cause a chronic infection. Occasionally this may result in a life-threatening illness such as fulminant liver failure or liver cirrhosis.
Other organs
Infection with hepatitis E virus can also lead to problems in other organs. For some of these reported conditions, such as musculoskeletal or immune-mediated manifestations, the relationship is not entirely clear, but for several neurological and blood conditions the relationship appears more consistent:
Acute pancreatitis (HEV genotype 1)
Neurological complications (though the mechanism of neurological damage is unknown at this point), which
include: Guillain-Barré syndrome (acute limb weakness due to nerve involvement), neuralgic amyotrophy (arm and shoulder weakness, also known as Parsonage-Turner syndrome), acute transverse myelitis and acute meningoencephalitis
Glomerulonephritis with nephrotic syndrome and/or cryoglobulinemia
Mixed cryoglobulinemia, where antibodies in the bloodstream react inappropriately at low temperatures
Severe thrombocytopenia (low platelet count in the blood), which confers an increased risk of dangerous bleeding
Infection in pregnancy
Pregnant women show a more severe course of infection than other populations. Liver failure with mortality rates of 20% to 25% has been reported from outbreaks of genotype 1 and 2 HEV in developing countries. Besides signs of an acute infection, adverse effects on the mother and fetus may include preterm delivery, abortion, stillbirth, and neonatal death. The pathological and biological mechanisms behind the adverse outcomes of pregnancy infections remain largely unclear. Increased viral replication and the influence of hormonal changes on the immune system are currently thought to contribute to worsening the course of infection. Furthermore, studies showing evidence for viral replication in the placenta or reporting the full viral life cycle in placental-derived cells in vitro suggest that the human placenta may be a site of viral replication outside the liver. The primary reason for HEV severity in pregnancy remains enigmatic.
Virology
Classification
HEV is classified into the family Hepeviridae, which is divided into two genera, Orthohepevirus (all mammalian and avian HEV isolates) and Piscihepevirus (cutthroat trout HEV). Only one serotype of the human virus is known, and classification is based on the nucleotide sequences of the genome. Genotype 1 can be further subclassified into five subtypes, genotype 2 into two subtypes, and genotypes 3 and 4 have been divided into ten and seven subtypes, respectively. Additionally there are genotypes 5, 6, 7 and 8. Rat HEV was first isolated from Norway rats in Germany, and a 2018 CDC article indicated the detection of rat HEV RNA in a transplant recipient.
Distribution
Genotype 1 has been isolated from tropical and several subtropical countries in Asia and Africa. Genotype 2 has been isolated from Mexico, Nigeria, and Chad. Genotype 3 has been isolated almost worldwide, including Asia, Europe, Oceania, and North and South America. Genotype 4 appears to be limited to Asia, with some indigenous cases reported from Europe. Genotypes 1 and 2 are restricted to humans and often associated with large outbreaks and epidemics in developing countries with poor sanitation conditions. Genotypes 3 and 4 infect humans, pigs, and other animal species and have been responsible for sporadic cases of hepatitis E in both developing and industrialized countries.
Transmission
Hepatitis E (genotype 1 and, to a lesser extent, genotype 2) is endemic and can cause outbreaks in Southeast Asia, northern and central Africa, India, and Central America. It is spread mainly by the fecal–oral route due to contamination of water supplies or food; direct person-to-person transmission is uncommon. In contrast to genotypes 1 and 2, genotypes 3 and 4 cause sporadic cases thought to be contracted zoonotically, from direct contact with animals or indirectly from contaminated water or undercooked meat.
Outbreaks of epidemic hepatitis E most commonly occur after heavy rainfalls, especially monsoons, because of their disruption of water supplies; heavy flooding can cause sewage to contaminate water supplies. The World Health Organization recommendation for inactivating HEV with chlorine is to maintain a free chlorine residual for 30 minutes (at pH below 8.0). Major outbreaks have occurred in New Delhi, India (30,000 cases in 1955–1956), Burma (20,000 cases in 1976–1977), Kashmir, India (52,000 cases in 1978), Kanpur, India (79,000 cases in 1991), and China (100,000 cases between 1986 and 1988). According to Rein et al., HEV genotypes 1 and 2 caused some 20.1 million hepatitis E infections, along with 3.4 million cases of symptomatic disease and 70,000 deaths, in 2005; however, the aforementioned paper did not estimate the burden of genotypes 3 and 4. According to the Department for Environment, Food and Rural Affairs, evidence indicated the increase in hepatitis E in the U.K. was due to food-borne zoonoses, citing a study that found that 10% of pork sausages in the U.K. contained the hepatitis E virus. Some research suggests that food must reach a sufficiently high temperature for 20 minutes to eliminate the risk of infection. The Animal Health and Veterinary Laboratories Agency discovered hepatitis E in almost half of all pigs in Scotland. Hepatitis E infection appeared to be more common in people on hemodialysis, although the specific risk factors for transmission are not clear.
Animal reservoir
Hepatitis E due to genotypes other than 1 and 2 is thought to be a zoonosis, in that animals are thought to be the primary reservoir; deer and swine have frequently been implicated. Domestic animals have been reported as a reservoir for the hepatitis E virus, with some surveys showing infection rates exceeding 95% among domestic pigs. Replicative virus has been found in the small intestine, lymph nodes, colon, and liver of experimentally infected pigs. Transmission after consumption of wild boar meat and uncooked deer meat has been reported as well. The rate of transmission to humans by this route and its public health importance are, however, still unclear. Other animal reservoirs are possible but unknown at this time. A number of other small mammals have been identified as potential reservoirs: the lesser bandicoot rat (Bandicota bengalensis), the black rat (Rattus rattus brunneusculus) and the Asian house shrew (Suncus murinus). A new virus designated rat hepatitis E virus has been isolated.
Genomics
HEV has three open reading frames (ORFs) encoding two polyproteins (the O1 and O2 proteins). ORF2 encodes three capsid proteins, whereas O1 encodes seven fragments involved in viral replication, among others. The smallest ORF of the HEV genome, ORF3, is translated from a subgenomic RNA into O3, a protein of 113–115 amino acids. ORF3 is proposed to play critical roles in immune evasion by HEV. Previous studies showed that ORF3 is bound to viral particles found in patient sera and produced in cell culture. Although in cultured cells ORF3 has not appeared essential for HEV RNA replication, viral assembly, or infection, it is required for particle release.
Virus lifecycle
The lifecycle of hepatitis E virus is incompletely understood; the capsid protein mediates viral entry by binding to a cellular receptor. ORF2 (c-terminal) moderates viral entry by binding to HSC70. Geldanamycin blocks the transport of the HEV239 capsid protein, but not the binding/entry of the truncated capsid protein, which indicates that Hsp90 plays an important part in HEV transport.
Diagnosis
In terms of the diagnosis of hepatitis E, only a laboratory blood test that confirms the presence of HEV RNA or IgM antibodies to HEV can be trusted. In the United States, no serologic tests for diagnosis of HEV infection have ever been authorized by the Food and Drug Administration. The World Health Organization has developed an international standard strain for detection and quantification of HEV RNA. In acute infection, the viremic window for detection of HEV RNA closes 3 weeks after symptoms begin.
Virological markers
Assuming that vaccination has not occurred, tests may show the following.
If the person's immune system is normal:
IgM anti-HEV negative: there is no evidence of recent HEV infection
IgM anti-HEV positive: the person is likely to have a recent or current HEV infection
If the person's immune system is weakened by disease or medical treatment (for example, a solid organ transplant recipient):
IgM anti-HEV negative: additional HEV RNA testing is needed; positive HEV RNA means the person has an HEV infection, while negative HEV RNA means there is no evidence of current or recent infection
IgM anti-HEV positive: the person is likely to have a recent or current HEV infection, and HEV RNA may be useful to track resolution
Prevention
Sanitation
Sanitation is the most important measure in prevention of hepatitis E; this consists of proper treatment and disposal of human waste, higher standards for public water supplies, improved personal hygiene procedures, and sanitary food preparation. Thus, prevention strategies for this disease are similar to those for many other diseases that plague developing nations. Cooking meat at a sufficiently high temperature for five minutes kills the hepatitis E virus; different temperatures require different times to inactivate the virus.
Blood products
The amount of virus present in blood products required to cause transfusion-transmitted infection (TTI) appears variable. Transfusion transmission of hepatitis E virus can be screened for via minipool HEV nucleic acid testing (NAT). NAT is a molecular technique used to screen donated blood for TTIs.
Vaccines
A vaccine based on recombinant viral proteins was developed in the 1990s and tested in a high-risk population (in Nepal) in 2001. The vaccine appeared to be effective and safe, but development was stopped for lack of profitability, since hepatitis E is rare in developed countries. No hepatitis E vaccine is licensed for use in the United States. China is an exception: after more than a year of scrutiny and inspection by China's State Food and Drug Administration (SFDA), a hepatitis E vaccine developed by Chinese scientists became available at the end of 2012. The vaccine—called HEV 239 by its developer Xiamen Innovax Biotech—was approved for prevention of hepatitis E in 2012 by the Chinese Ministry of Science and Technology, following a controlled trial on 100,000+ people from Jiangsu Province where none of those vaccinated became infected during a 12-month period, compared to 15 in the group given placebo. The first vaccine batches came out of Innovax's factory in late October 2012, to be sold to Chinese distributors. Due to lack of evidence, the World Health Organization had not made a recommendation regarding routine use of the HEV 239 vaccine as of 2015. Its 2015 position was that national authorities may decide to use the vaccine based on their local epidemiology.
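The marker-interpretation rules listed above amount to a small decision tree. The Python sketch below simply encodes those rules as written; the function and parameter names are invented for illustration, and real-world interpretation of serology involves more clinical context than this.

```python
from typing import Optional


def interpret_hev_markers(igm_positive: bool,
                          immunocompromised: bool,
                          hev_rna_positive: Optional[bool] = None) -> str:
    """Encode the virological-marker rules described above (illustrative only)."""
    if not immunocompromised:
        # Normal immune system: IgM anti-HEV alone is informative.
        return ("likely recent or current HEV infection" if igm_positive
                else "no evidence of recent HEV infection")
    if igm_positive:
        return "likely recent or current HEV infection (HEV RNA may track resolution)"
    # Weakened immune system and IgM-negative: HEV RNA testing is decisive.
    if hev_rna_positive is None:
        return "indeterminate: additional HEV RNA testing needed"
    return ("HEV infection" if hev_rna_positive
            else "no evidence of current or recent infection")


# Example: an IgM-negative solid-organ transplant recipient with detectable HEV RNA.
print(interpret_hev_markers(igm_positive=False, immunocompromised=True, hev_rna_positive=True))
```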
Treatment
There is no drug with established safety and effectiveness for hepatitis E, and there have been no large randomized clinical trials of antiviral drugs. Reviews of existing small studies suggest that ribavirin can be considered effective in immunocompromised people who have developed chronic infection. Chronic HEV infection is associated with immunosuppressive therapies; in individuals with solid-organ transplants, reducing immunosuppressive medications can result in clearance of HEV in about one third of patients.
Epidemiology
The hepatitis E virus causes around 20 million infections a year. These result in around three million acute illnesses and caused 44,000 deaths during 2015. Pregnant women, who can develop an acute form of the disease that is fatal in 30% of cases or more, are particularly at risk of complications due to HEV infection. HEV is a major cause of illness and of death in the developing world and a disproportionate cause of deaths among pregnant women. Hepatitis E is endemic in Central Asia, while Central America and the Middle East have reported outbreaks. Increasingly, hepatitis E is being seen in developed nations, with reports in 2015 of 848 cases of hepatitis E virus infection in England and Wales.
Recent outbreaks
In October 2007, an epidemic of hepatitis E occurred in Kitgum District of northern Uganda. This outbreak progressed to become one of the largest known hepatitis E outbreaks in the world. By June 2009, it had resulted in illness in 10,196 persons and 160 deaths. The outbreak occurred despite no previous epidemics having been documented in the country; women were the most affected by HEV. In July 2012, an outbreak was reported in South Sudanese refugee camps in Maban County near the Sudan border. South Sudan's Ministry of Health reported over 400 cases and 16 fatalities as of 13 September 2012. By 2 February 2013, 88 people had died in the outbreak. The medical charity Médecins Sans Frontières said it treated almost 4,000 people. In April 2014, an outbreak in the Biratnagar Municipality of Nepal resulted in infection of over 6,000 locals and at least 9 deaths. During an outbreak in Namibia, the number of affected people rose from 490 in January 2018, to 5,014 (with 42 deaths) by April 2019, to 6,151 cases (with 56 deaths) by August 2019; the WHO estimated that the case fatality rate was 0.9%. In Hong Kong in May 2020, there were at least 10 cases of hepatitis E that were transmitted by rats, and possibly hundreds of cases with a transmission mechanism that is not fully understood. In 2024, a record number of hepatitis E cases were diagnosed in Finland, according to figures released by the public health authority THL. Data from the authority's Infectious Diseases Register showed that a total of 92 laboratory-confirmed infections were recorded between the beginning of January and 12 March 2024, with 42 people requiring hospital treatment. The outbreak was suspected to have been caused by a batch of mettwurst that was recalled.
Evolution
The strains of HEV that exist today may have arisen from a shared ancestor virus 536 to 1344 years ago. Another analysis has dated the origin of hepatitis E to about 6,000 years ago, with a suggestion that this was associated with the domestication of pigs.
At some point, two clades may have diverged — an anthropotropic form and an enzootic form — which subsequently evolved into genotypes 1 and 2 and genotypes 3 and 4, respectively. Whereas genotype 2 remains less commonly detected than other genotypes, genetic evolutionary analyses suggest that genotypes 1, 3, and 4 have spread substantially during the past 100 years.
Biology and health sciences
Viral diseases
Health
411021
https://en.wikipedia.org/wiki/Chlamydia%20trachomatis
Chlamydia trachomatis
Chlamydia trachomatis is a Gram-negative, anaerobic bacterium responsible for chlamydia and trachoma. C. trachomatis exists in two forms, an extracellular infectious elementary body (EB) and an intracellular non-infectious reticulate body (RB). The EB attaches to host cells and enters the cell using effector proteins, where it transforms into the metabolically active RB. Inside the cell, RBs rapidly replicate before transitioning back to EBs, which are then released to infect new host cells. The earliest description of C. trachomatis was in 1907 by Stanislaus von Prowazek and Ludwig Halberstädter, who described it as a protozoan. It was later thought to be a virus due to its small size and inability to grow in laboratories. It was not until 1966 that it was recognized as a bacterium, after electron microscopy made its internal structures visible. There are currently 18 serovars of C. trachomatis, each associated with specific diseases affecting mucosal cells in the lungs, genital tracts, and ocular systems. Infections are often asymptomatic, but can lead to severe complications such as pelvic inflammatory disease in women and epididymitis in men. The bacterium also causes urethritis, conjunctivitis, and lymphogranuloma venereum in both sexes. C. trachomatis genitourinary infections are diagnosed more frequently in women than in men, with the highest prevalence occurring in females aged 15 to 19 years. Infants born to mothers with active chlamydia infections have a pulmonary infection rate of less than 10%. Globally, approximately 84 million people are affected by C. trachomatis eye infections, with 8 million cases resulting in blindness. C. trachomatis is the leading infectious cause of blindness and the most common sexually transmitted bacterium. The impact of C. trachomatis on human health has been driving vaccine research since its discovery. Currently, no vaccines are available, largely due to the complexity of the immunological pathways involved in C. trachomatis, which remain poorly understood. However, C. trachomatis infections may be treated with several antibiotics, with tetracycline being the preferred option.
Description
Chlamydia trachomatis is a gram-negative bacterium that can replicate exclusively within a host cell, making it an obligate intracellular pathogen. Over the course of its life cycle, C. trachomatis takes on two distinct forms to facilitate infection and replication. Elementary bodies (EBs) are 200 to 400 nanometers across and are surrounded by a rigid cell wall that enables them to survive in an extracellular form. When an EB encounters a susceptible host cell, it binds to the cell surface and is internalized. The second form, the reticulate body (RB), is 600 to 1500 nanometers across and is found only within host cells. RBs have increased metabolic activity and are adapted for replication. Neither form is motile. The evolution of C. trachomatis has produced a reduced genome of approximately 1.04 megabases, encoding approximately 900 genes. In addition to the chromosome that contains most of the genome, nearly all C. trachomatis strains carry a 7.5 kilobase plasmid that contains 8 genes. The role of this plasmid is unknown, although strains without the plasmid have been isolated, suggesting it is not essential for bacterial survival. Several important metabolic functions are not encoded in the C. trachomatis genome and are instead scavenged from the host cell.
Carbohydrate metabolism
C. trachomatis has a reduced metabolic capacity due to its smaller genome, which lacks genes for many biosynthetic pathways, including those required for complete carbohydrate metabolism. The bacterium is largely dependent on the host cell for metabolic intermediates and energy, particularly in the form of adenosine triphosphate (ATP). C. trachomatis lacks several enzymes necessary for independent glucose metabolism, and instead utilizes two ATP/ADP translocases (Npt1 and Npt2) to import ATP from the host cell. Other metabolites, including amino acids, nucleotides, and lipids, are also transported from the host. A critical enzyme involved in glycolysis, hexokinase, is absent in C. trachomatis, preventing the production of glucose-6-phosphate (G6P). Instead, G6P from the host cell is taken up by the metabolically active reticulate bodies (RBs) through a G6P transporter (the UhpC antiporter). Although C. trachomatis lacks a complete independent glycolysis pathway, it has genes encoding all the enzymes required for the pentose phosphate pathway (PPP), gluconeogenesis, and glycogen synthesis and degradation. A suppressor of glycolysis, p53, is expressed at lower levels in C. trachomatis-infected cells, increasing the rate at which glycolysis occurs, even in the presence of oxygen. As a result, C. trachomatis infection is associated with increased production of pyruvate, lactate, and glutamate by the bacterium, because activity of the pyruvate dehydrogenase kinase 2 enzyme limits the conversion of pyruvate to acetyl-coenzyme A. The pyruvate is instead turned into lactate, which, owing to its acidic properties, allows the cell to grow almost unobstructed by the immune response. Excess glycolytic products are, in turn, brought into the PPP to create nucleotides and for biosynthesis. This type of growth is very similar to the Warburg effect observed in cancer cells.
Life cycle
Like other Chlamydia species, C. trachomatis has a life cycle consisting of two morphologically distinct forms. First, C. trachomatis attaches to a new host cell as a small spore-like form called the elementary body. The elementary body enters the host cell, surrounded by a host vacuole, called an inclusion. Within the inclusion, C. trachomatis transforms into a larger, more metabolically active form called the reticulate body. The reticulate body substantially modifies the inclusion, making it a more hospitable environment for rapid replication of the bacteria, which occurs over the following 30 to 72 hours. The massive number of intracellular bacteria then transition back to resistant elementary bodies before causing the cell to rupture and being released into the environment. These new elementary bodies are then shed in the semen or released from epithelial cells of the female genital tract and attach to new host cells.
Classification
C. trachomatis are bacteria in the genus Chlamydia, a group of obligate intracellular parasites of eukaryotic cells. Chlamydial cells cannot carry out energy metabolism and lack many biosynthetic pathways. C. trachomatis strains are generally divided into three biovars based on the type of disease they cause. These are further subdivided into several serovars based on the surface antigens recognized by the immune system. Serovars A through C cause trachoma, which is the world's leading cause of preventable infectious blindness. Serovars D through K infect the genital tract, causing pelvic inflammatory disease, ectopic pregnancies, and infertility.
Serovars L1 through L3 cause an invasive infection of the lymph nodes near the genitals, called lymphogranuloma venereum. C. trachomatis is thought to have diverged from other Chlamydia species around 6 million years ago. This genus contains a total of nine species: C. trachomatis, C. muridarum, C. pneumoniae, C. pecorum, C. suis, C. abortus, C. felis, C. caviae, and C. psittaci. The closest relative of C. trachomatis is C. muridarum, which infects mice. C. trachomatis and C. pneumoniae are the species most often found infecting humans. C. trachomatis exclusively infects humans, whereas C. pneumoniae also infects horses, marsupials, and frogs. Some of the other species can have a considerable impact on human health due to their known zoonotic transmission.
Role in disease
Clinical signs and symptoms of C. trachomatis infection in the genitalia present as the chlamydia infection, which may be asymptomatic or may resemble a gonorrhea infection. Both are common causes of multiple other conditions, including pelvic inflammatory disease and urethritis. C. trachomatis is the single most important infectious agent associated with blindness (trachoma); it also affects the eyes in the form of inclusion conjunctivitis and is responsible for about 19% of adult cases of conjunctivitis. C. trachomatis in the lungs presents as a chlamydial respiratory infection (pneumonia) and can affect all ages.
Pathogenesis
Elementary bodies are generally present in the semen of infected men and vaginal secretions of infected women. When they come into contact with a new host cell, the elementary bodies bind to the cell via interaction between adhesins on their surface and several host receptor proteins and heparan sulfate proteoglycans. Once attached, the bacteria inject various effector proteins into the host cell using a type three secretion system. These effectors trigger the host cell to take up the elementary bodies and prevent the cell from triggering apoptosis. Within 6 to 8 hours after infection, the elementary bodies transition to reticulate bodies and a number of new effectors are synthesized. These effectors include a number of proteins that modify the inclusion membrane (Inc proteins), as well as proteins that redirect host vesicles to the inclusion. 8 to 16 hours after infection, another set of effectors is synthesized, driving acquisition of nutrients from the host cell. At this stage, the reticulate bodies begin to divide, coinciding with the expansion of the inclusion. If several elementary bodies have infected a single cell, their inclusions will fuse at this point to create a single large inclusion in the host cell. From 24 to 72 hours after infection, reticulate bodies transition to elementary bodies, which are released either by lysis of the host cell or by extrusion of the entire inclusion into the host genital tract.
Virulence factors
The chlamydial plasmid, a DNA molecule existing separately from the genome of C. trachomatis, functions to enhance genetic diversity via the genes it encodes. The plasmid gene protein 3 (pgp3) has been linked to the establishment of persistent infection within the genital tract by suppressing the host immune response. Polymorphic outer membrane proteins (Pmp proteins) on the surface of C. trachomatis mediate tropism by binding specific host cell receptors, which in turn initiates infection. Pmp proteins B, D, and H have been most associated with eliciting a pro-inflammatory response through the release of cytokines.
CPAF (chlamydial protease-like activity factor) functions by preventing the host from triggering the proper immune response. C. trachomatis uses CPAF to target and cleave proteins that restructure the Golgi apparatus and activate DNA repair, so that C. trachomatis is able to use the host cell machinery and proteins to its advantage.
Presentation
Most people infected with C. trachomatis are asymptomatic. However, the bacteria can present in one of three ways: genitourinary (genitals), pulmonary (lungs), and ocular (eyes). Genitourinary cases can include genital discharge, vaginal bleeding, itchiness (pruritus), and painful urination (dysuria), among other symptoms. Often, symptoms are similar to those of a urinary tract infection. When C. trachomatis presents in the eye in the form of trachoma, it begins by gradually thickening the eyelids and eventually pulls the eyelashes into the eyelid. In the form of inclusion conjunctivitis, the infection presents with redness, swelling, mucopurulent discharge from the eye, and most other symptoms associated with adult conjunctivitis. C. trachomatis may latently infect the chorionic villi tissues of pregnant women, thereby impacting pregnancy outcome.
Prevalence
Three times as many women are diagnosed with genitourinary C. trachomatis infections as men. Women aged 15–19 have the highest prevalence, followed by women aged 20–24, although the rate of increase of diagnosis is greater for men than for women. Risk factors for genitourinary infections include unprotected sex with multiple partners, lack of condom use, and low socioeconomic status in urban areas. Pulmonary infections can occur in infants born to women with active chlamydia infections, although the rate of infection is less than 10%. Ocular infections take the form of inclusion conjunctivitis or trachoma, in both adults and children. About 84 million people worldwide develop C. trachomatis eye infections and 8 million are blinded as a result of the infection. Trachoma is the primary source of infectious blindness in some parts of rural Africa and Asia and is a neglected tropical disease that has been targeted by the World Health Organization for elimination by 2020. Inclusion conjunctivitis from C. trachomatis is responsible for about 19% of adult cases of conjunctivitis.
Treatment
Treatment depends on the infection site, age of the patient, and whether another infection is present. Having C. trachomatis and one or more other sexually transmitted infections at the same time is possible. Both partners are often treated simultaneously to prevent reinfection. C. trachomatis may be treated with several antibiotic medications, including azithromycin, erythromycin, ofloxacin, and tetracycline. Tetracycline is the preferred antibiotic to treat C. trachomatis and has the highest success rate. Azithromycin and doxycycline have equal efficacy in treating C. trachomatis, with 97 and 98 percent success rates, respectively. Azithromycin is dosed as a 1 gram tablet taken by mouth as a single dose, primarily to help with concerns of non-adherence. Treatment with generic doxycycline 100 mg twice a day for 7 days has equal success to the more expensive delayed-release doxycycline 200 mg once a day for 7 days. Erythromycin is less preferred as it may cause gastrointestinal side effects, which can lead to non-adherence. Levofloxacin and ofloxacin are generally no better than azithromycin or doxycycline and are more expensive.
If treatment is necessary during pregnancy, levofloxacin, ofloxacin, tetracycline, and doxycycline are not prescribed. In the case of a patient who is pregnant, the medications typically prescribed are azithromycin, amoxicillin, and erythromycin. Azithromycin is the recommended medication and is taken as a 1 gram tablet taken by mouth as a single dose. Despite amoxicillin having fewer side effects than the other medications for treating antenatal C. trachomatis infection, there have been concerns that pregnant women who take penicillin-class antibiotics can develop a chronic persistent chlamydia infection. Tetracycline is not used because some children and even adults can not withstand the drug, causing harm to the mother and fetus. Retesting during pregnancy can be performed three weeks after treatment. If the risk of reinfection is high, screening can be repeated throughout pregnancy. If the infection has progressed, ascending the reproductive tract and pelvic inflammatory disease develops, damage to the fallopian tubes may have already occurred. In most cases, the C. trachomatis infection is then treated on an outpatient basis with azithromycin or doxycycline. Treating the mother of an infant with C. trachomatis of the eye, which can evolve into pneumonia, is recommended. The recommended treatment consists of oral erythromycin base or ethylsuccinate 50 mg/kg/day divided into four doses daily for two weeks while monitoring for symptoms of infantile hypertrophic pyloric stenosis (IHPS) in infants less than 6 weeks old. There have been a few reported cases of C.trachomatis strains that were resistant to multiple antibiotic treatments. However, as of 2018, this is not a major cause of concern as antibiotic resistance is rare in C.trachomatis compared to other infectious bacteria. Laboratory tests Chlamydia species are readily identified and distinguished from other Chlamydia species using DNA-based tests. Tests for Chlamydia can be ordered from a doctor, a lab or online. Most strains of C. trachomatis are recognized by monoclonal antibodies (mAbs) to epitopes in the VS4 region of MOMP. However, these mAbs may also cross-react with two other Chlamydia species, C. suis and C. muridarum. Nucleic acid amplification tests (NAATs) tests find the genetic material (DNA) of Chlamydia bacteria. These tests are the most sensitive tests available, meaning they are very accurate and are unlikely to have false-negative test results. A polymerase chain reaction (PCR) test is an example of a nucleic acid amplification test. This test can also be done on a urine sample, urethral swabs in men, or cervical or vaginal swabs in women. Nucleic acid hybridization tests (DNA probe test) also find Chlamydia DNA. A probe test is very accurate but is not as sensitive as NAATs. Enzyme-linked immunosorbent assay (ELISA, EIA) finds substances (Chlamydia antigens) that trigger the immune system to fight Chlamydia infection. Chlamydia Elementary body (EB)-ELISA could be used to stratify different stages of infection based upon Immunoglobulin-γ status of the infected individuals Direct fluorescent antibody test also finds Chlamydia antigens. Chlamydia cell culture is a test in which the suspected Chlamydia sample is grown in a vial of cells. The pathogen infects the cells, and after a set incubation time (48 hours), the vials are stained and viewed on a fluorescent light microscope. Cell culture is more expensive and takes longer (two days) than the other tests. The culture must be grown in a laboratory. 
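How far such sensitivity and specificity figures can be trusted for an individual result also depends on how common infection is in the population being tested. The short sketch below is a generic Bayes'-rule illustration of that relationship; the function and the example numbers are hypothetical placeholders, not measured performance data for any particular C. trachomatis test.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (PPV, NPV) for a binary diagnostic test.

    sensitivity: P(positive result | infected)
    specificity: P(negative result | not infected)
    prevalence:  P(infected) in the tested population
    """
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence

    ppv = true_pos / (true_pos + false_pos)   # P(infected | positive result)
    npv = true_neg / (true_neg + false_neg)   # P(not infected | negative result)
    return ppv, npv

# Hypothetical example: a highly sensitive, highly specific NAAT-like test
# used in a screening population where 5% of those tested are infected.
if __name__ == "__main__":
    ppv, npv = predictive_values(sensitivity=0.95, specificity=0.98, prevalence=0.05)
    print(f"PPV: {ppv:.3f}, NPV: {npv:.3f}")
```

With numbers of this kind the negative predictive value stays very high, which is the practical meaning of a test being unlikely to return false-negative results, while the positive predictive value falls as prevalence drops.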
Research Studies have revealed antibiotic resistance in Chlamydia trachomatis. Mutations in the 23S rRNA gene, including A2057G and A2059G, have been identified as significant contributors to resistance against azithromycin, a commonly used treatment. This resistance is linked to treatment failures and persistent infections, necessitating ongoing research into alternative antibiotics, such as moxifloxacin, as well as non-antibiotic approaches like bacteriophage therapy. These innovations aim to combat resistance while reducing the overall burden of antibiotic misuse, which has been closely associated with the rise of resistant strains in C. trachomatis populations. Additionally, diagnostic improvements have played a vital role in identifying C. trachomatis infections more efficiently. Nucleic acid amplification tests (NAATs), such as DNA- and RNA-based tests, have shown high sensitivity and specificity, making them the gold standard for detecting asymptomatic infections. NAATs have facilitated broader screening programs, particularly in high-risk populations, and are integral to public health initiatives aimed at controlling the spread of C. trachomatis. Research continues into point-of-care diagnostic tools, which promise faster results and greater accessibility, especially in low-resource settings. In the area of vaccine development, creating an effective vaccine for C. trachomatis has proven challenging due to the complex immune responses the bacterium elicits. Subunit vaccines, which target outer membrane proteins like MOMP (Major Outer Membrane Protein) and polymorphic membrane proteins (Pmp), are being explored in both animal models and early human trials. While these vaccines show promise in inducing partial immunity in murine models, further research is needed to evaluate their efficacy in humans. The goal is to develop a vaccine that can prevent reinfection without causing harmful inflammatory responses. History C. trachomatis was first described in 1907 by Stanislaus von Prowazek and Ludwig Halberstädter in scrapings from trachoma cases. Thinking they had discovered a "mantled protozoan", they named the organism "Chlamydozoa" from the Greek "Chlamys" meaning mantle. Over the next several decades, "Chlamydozoa" was thought to be a virus as it was small enough to pass through bacterial filters and unable to grow on known laboratory media. However, in 1966, electron microscopy studies showed C. trachomatis to be a bacterium. This is essentially due to the fact that they were found to possess DNA, RNA, and ribosomes like other bacteria. It was originally believed that Chlamydia lacked peptidoglycan because researchers were unable to detect muramic acid in cell extracts. Subsequent studies determined that C. trachomatis synthesizes both muramic acid and peptidoglycan, but relegates it to the microbe's division septum and does not utilize it for construction of a cell wall. The bacterium is still classified as gram-negative. C. trachomatis agent was first cultured and isolated in the yolk sacs of eggs by Tang Fei-fan et al. in 1957. This was a significant milestone because it became possible to preserve these agents, which could then be used for future genomic and phylogenetic studies. The isolation of C. trachomatis coined the term isolate to describe how C. trachomatis has been isolated from an in vivo setting into a "strain" in cell culture. Only a few "isolates" have been studied in detail, limiting the information that can be found on the evolutionary history of C. trachomatis. 
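As a concrete illustration of the resistance markers described in the Research paragraph above, the sketch below reports which base a 23S rRNA sequence carries at the two positions named there. It is only a toy: the coordinates are assumed to be 1-based indices directly into the supplied sequence (published numbering conventions may differ), the sequence is made up, and real pipelines align reads to a reference before calling variants.

```python
# Positions named in reports of azithromycin resistance in the 23S rRNA gene
# (A2057G, A2059G). Assumed here to be 1-based indices into the input string.
RESISTANCE_SITES = {2057: ("A", "G"), 2059: ("A", "G")}

def check_23s_resistance(seq: str) -> dict:
    """Report wild-type versus mutant base at each known resistance site."""
    report = {}
    for pos, (wild_type, mutant) in RESISTANCE_SITES.items():
        if pos > len(seq):
            report[pos] = "position not covered"
            continue
        base = seq[pos - 1].upper()
        if base == mutant:
            report[pos] = f"{wild_type}{pos}{mutant} mutation present"
        elif base == wild_type:
            report[pos] = "wild type"
        else:
            report[pos] = f"unexpected base {base}"
    return report

# Toy usage with an invented sequence carrying G at both positions.
if __name__ == "__main__":
    toy_seq = "A" * 2056 + "G" + "A" + "G" + "A" * 10
    print(check_23s_resistance(toy_seq))
```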
Evolution In the 1990s it was shown that there are several species of Chlamydia. Chlamydia trachomatis was first described in historical records in the Ebers papyrus, written between 1553 and 1550 BC. In the ancient world, it was known as the blinding disease trachoma. The disease may have been closely linked with humans and likely predated civilization. It is now known that C. trachomatis comprises 19 serovars, which are identified by monoclonal antibodies that react to epitopes on the major outer-membrane protein (MOMP). Comparison of amino acid sequences reveals that MOMP contains four variable segments: VS1, VS2, VS3, and VS4. Different variants of the gene that encodes MOMP differentiate the genotypes of the different serovars. The antigenic relatedness of the serovars reflects the level of DNA homology between MOMP genes, especially within these variable segments. Furthermore, more than 220 Chlamydia vaccine trials targeting C. muridarum and C. trachomatis strains have been conducted in mice and other non-human hosts. However, it has been difficult to translate these results to humans because of physiological and anatomical differences. Future trials are focusing on species more closely related to humans.
Biology and health sciences
Gram-negative bacteria
Plants
411065
https://en.wikipedia.org/wiki/Elapidae
Elapidae
Elapidae (, commonly known as elapids , from , variant of "sea-fish") is a family of snakes characterized by their permanently erect fangs at the front of the mouth. Most elapids are venomous, with the exception of the genus Emydocephalus. Many members of this family exhibit a threat display of rearing upwards while spreading out a neck flap. Elapids are endemic to tropical and subtropical regions around the world, with terrestrial forms in Asia, Australia, Africa, and the Americas and marine forms in the Pacific and Indian Oceans. Members of the family have a wide range of sizes, from the white-lipped snake to the king cobra. Most species have neurotoxic venom that is channeled by their hollow fangs, and some may contain other toxic components in varying proportions. The family includes 55 genera with around 360 species and over 170 subspecies. Description Terrestrial elapids look similar to the Colubridae; almost all have long, slender bodies with smooth scales, a head covered with large shields (and not always distinct from the neck), and eyes with rounded pupils. Also like colubrids, their behavior is usually quite active and fast, with most of the females being oviparous (egg-layers). Exceptions to these generalizations occur; for example, certain adders (Acanthophis) have commonalities with the Viperidae family, such as shorter, stout bodies, rough/keeled scales, broad heads, cat-like pupils and ovoviviparous (internal hatchings with live births). Furthermore, they can also be sluggish, ambush predators with partially fragmented head shields, similar to rattlesnakes or Gaboon vipers. Sea snakes (the Hydrophiinae), sometimes considered to be a separate family, have adapted to a marine way of life in different ways and to various degrees. All have evolved paddle-like tails for swimming and the ability to excrete salt. Most also have laterally compressed bodies, their ventral scales are much reduced in size, their nostrils are located dorsally (no internasal scales), and they give birth to live young (viviparity). The reduction in ventral scaling has greatly diminished their terrestrial mobility, but aids in swimming. Members of this family have a wide range of sizes. Drysdalia species are small serpents typically and down to in length. Cobras, mambas, and taipans are mid- to large sized snakes which can reach or above. The king cobra is the world's longest venomous snake with a maximum length of and an average mass of . Dentition All elapids have a pair of proteroglyphous fangs to inject venom from glands located towards the rear of the upper jaw (except for the genus Emydocephalus, in which fangs are present as a vestigial feature but without venom production, as they have specialized toward a fish egg diet, making them the only non-venomous elapids). The fangs, which are enlarged and hollow, are the first two teeth on each maxillary bone. Usually only one fang is in place on each side at any time. The maxilla is intermediate in both length and mobility between typical colubrids (long, less mobile) and viperids (very short, highly mobile). When the mouth is closed, the fangs fit into grooved slots in the buccal floor and usually below the front edge of the eye and are angled backwards; some elapids (Acanthophis, taipan, mamba, and king cobra) have long fangs on quite mobile maxillae and can make fast strikes. A few species are capable of spraying their venom from forward-facing holes in their fangs for defense, as exemplified by spitting cobras. 
Behavior Most elapids are terrestrial, while some are strongly arboreal (African Pseudohaje and Dendroaspis, Australian Hoplocephalus). Many species are more or less specialized burrowers (e.g. Ogmodon, Parapistocalamus, Simoselaps, Toxicocalamus, and Vermicella) in either humid or arid environments. Some species have very generalised diets (euryphagy), but many taxa have narrow prey preferences (stenophagy) and correlated morphological specializations, for example feeding almost exclusively on other serpents (especially the king cobra and kraits). Elapids may display a series of warning signs if provoked, either obviously or subtly. Cobras and mambas lift the front parts of their bodies, expand their hoods, and hiss if threatened; kraits often curl up and hide their heads beneath their bodies. In general, sea snakes are able to respire through their skin. Experiments with the yellow-bellied sea snake, Hydrophis platurus, have shown that this species can satisfy about 20% of its oxygen requirements in this manner, allowing for prolonged dives. The sea kraits (Laticauda spp.) are the sea snakes least adapted to aquatic life. Their bodies are less compressed laterally, and they have thicker bodies and ventral scaling. Because of this, they are capable of some land movement. They spend much of their time on land, where they lay their eggs and digest prey. Distribution Terrestrial elapids are found worldwide in tropical and subtropical regions, mostly in the Southern Hemisphere. Most prefer humid tropical environments, though many can also be found in arid environments. Sea snakes occur mainly in the Indian Ocean and the south-west Pacific. They occupy coastal waters and shallows, and are common in coral reefs. However, the range of Hydrophis platurus extends across the Pacific to the coasts of Central and South America. Venom Venoms of species in the Elapidae are mainly neurotoxic, serving to immobilize prey and for defense. The main groups of toxins are PLA2 and three-finger toxins (3FTx). Other toxic components in some species comprise cardiotoxins and cytotoxins, which cause heart dysfunction and cellular damage, respectively. Cobra venom also contains hemotoxins that clot or solidify blood. Most members are venomous to varying extents, and some are considered among the world's most venomous snakes based upon their murine LD50 (median lethal dose) values, such as the taipans. Large species, mambas and cobras included, are dangerous for their ability to inject large quantities of venom in a single envenomation and/or to strike at a high position, close to the victim's brain, which is vulnerable to neurotoxicity. Antivenom must be administered promptly after a bite from any elapid; specific antivenoms are the only definitive treatment for elapid bites. There are commercial monovalent and polyvalent antivenoms for cobras, mambas, and some other important elapids. Recently, experimental antivenoms based on recombinant toxins have shown that it is feasible to create antivenoms with a wide spectrum of coverage. The venom of spitting cobras is more cytotoxic than neurotoxic. It damages local cells, especially those in the eyes, which are deliberately targeted by the snakes. The venom may cause intense pain on contact with the eye and may lead to blindness. It is not lethal on the skin if no wound provides a route for the toxins to enter the bloodstream. Taxonomy The table below lists all of the elapid genera and recognizes no subfamilies. 
In the past, many subfamilies were recognized, or have been suggested for the Elapidae, including the Elapinae, Hydrophiinae (sea snakes), Micrurinae (coral snakes), Acanthophiinae (Australian elapids), and the Laticaudinae (sea kraits). Currently, none are universally recognized. Molecular evidence from techniques such as karyotyping, protein electrophoretic analyses, immunological distance, and DNA sequencing suggests reciprocal monophyly of two groups: African, Asian, and New World Elapinae versus Australasian and marine Hydrophiinae. The Australian terrestrial elapids are technically 'hydrophiines', although they are not sea snakes. It is believed that the Laticauda and the 'true sea snakes' evolved separately from Australasian land snakes. Asian cobras, coral snakes, and American coral snakes also appear to be monophyletic, while African cobras do not. The type genus for the Elapidae was originally Elaps, but the group was moved to another family. In contrast to what is typical of botany, the family Elapidae was not renamed. In the meantime, Elaps was renamed Homoroselaps and moved back to the Elapidae. However, Nagy et al. (2005) regard it as a sister taxon to Atractaspis, which should have been assigned to the Atractaspididae. Conservation Given the dangers these venomous taxa present, it is very difficult for activists and conservationists alike to get species onto protection lists such as the IUCN Red List and the CITES Appendices. Some of the protected species are:
Vulnerable:
Ophiophagus hannah (King cobra)
Austrelaps labialis (Pygmy copperhead)
Denisonia maculata (Ornamental snake)
Echiopsis atriceps (Lake Cronin snake)
E. curta (Bardick)
Furina dunmalli (Dunmall's snake)
Hoplocephalus bungaroides (Broad-headed snake)
Ogmodon vitianus (Fiji snake)
Lower Risk/Near threatened:
Elapognathus minor (Short-nosed snake)
Simoselaps calonotus (Black-striped snake)
These listings, however, do not reflect the full number of elapids under threat; for instance, 9% of elapid sea snakes are threatened and another 6% are near-threatened. A major roadblock to getting more species under protection is lack of knowledge of the taxa; many known species, such as the sea snakes, have had little research done on their behavior or actual population because they live in very remote areas or in habitats so vast that it is nearly impossible to conduct population studies.
Biology and health sciences
Snakes
Animals
411174
https://en.wikipedia.org/wiki/Brine%20shrimp
Brine shrimp
Artemia is a genus of aquatic crustaceans also known as brine shrimp or sea monkeys. It is the only genus in the family Artemiidae. The first historical record of the existence of Artemia dates back to the first half of the 10th century AD from Lake Urmia, Iran, with an example called by an Iranian geographer an "aquatic dog", although the first unambiguous record is the report and drawings made by Schlösser in 1757 of animals from Lymington, England. Artemia populations are found worldwide, typically in inland saltwater lakes, but occasionally in oceans. Artemia are able to avoid cohabiting with most types of predators, such as fish, by their ability to live in waters of very high salinity (up to 25%). The ability of the Artemia to produce dormant eggs, known as cysts, has led to extensive use of Artemia in aquaculture. The cysts may be stored indefinitely and hatched on demand to provide a convenient form of live feed for larval fish and crustaceans. Nauplii of the brine shrimp Artemia constitute the most widely used food item, and over of dry Artemia cysts are marketed worldwide annually with most of the cysts being harvested from the Great Salt Lake in Utah. In addition, the resilience of Artemia makes them ideal animals for running biological toxicity assays and it has become a model organism used to test the toxicity of chemicals. Breeds of Artemia are sold as novelty gifts under the marketing name Sea-Monkeys. Description The brine shrimp Artemia comprises a group of seven to nine species very likely to have diverged from an ancestral form living in the Mediterranean area about , around the time of the Messinian salinity crisis. The Laboratory of Aquaculture & Artemia Reference Center at Ghent University possesses the largest known Artemia cyst collection, a cyst bank containing over 1,700 Artemia population samples collected from different locations around the world. Artemia is a typical primitive arthropod with a segmented body to which is attached broad leaf-like appendages. The body usually consists of 19 segments, the first 11 of which have pairs of appendages, the next two which are often fused together carry the reproductive organs, and the last segments lead to the tail. The total length is usually about for the adult male and for the female, but the width of both sexes, including the legs, is about . The body of Artemia is divided into head, thorax, and abdomen. The entire body is covered with a thin, flexible exoskeleton of chitin to which muscles are attached internally and which is shed periodically. In female Artemia, a moult precedes every ovulation. For brine shrimp, many functions, including swimming, digestion and reproduction are not controlled through the brain; instead, local nervous system ganglia may control some regulation or synchronisation of these functions. Autotomy, the voluntary shedding or dropping of parts of the body for defence, is also controlled locally along the nervous system. Artemia have two types of eyes. They have two widely separated compound eyes mounted on flexible stalks. These compound eyes are the main optical sense organ in adult brine shrimps. The median eye, or the naupliar eye, is situated anteriorly in the centre of the head and is the only functional optical sense organ in the nauplii, which is functional until the adult stage. Ecology and behavior Brine shrimp can tolerate any levels of salinity from 25‰ to 250‰ (25–250 g/L), with an optimal range of 60‰–100‰, and occupy the ecological niche that can protect them from predators. 
Physiologically, optimal levels of salinity are about 30–35‰, but due to predators at these salt levels, brine shrimp seldom occur in natural habitats at salinities of less than 60–80‰. Locomotion is achieved by the rhythmic beating of the appendages acting in pairs. Respiration occurs on the surface of the legs through fibrous, feather-like plates (lamellar epipodites). Reproduction Males differ from females by having the second antennae markedly enlarged, and modified into clasping organs used in mating. Adult female brine shrimp ovulate approximately every 140 hours. In favourable conditions, the female brine shrimp can produce eggs that almost immediately hatch. While in extreme conditions, such as low oxygen level or salinity above 150‰, female brine shrimp produce eggs with a chorion coating which has a brown colour. These eggs, also known as cysts, are metabolically inactive and can remain in total stasis for two years while in dry oxygen-free conditions, even at temperatures below freezing. This characteristic is called cryptobiosis, meaning "hidden life". While in cryptobiosis, brine shrimp eggs can survive temperatures of liquid air () and a small percentage can survive above boiling temperature () for up to two hours. Once placed in briny (salt) water, the eggs hatch within a few hours. The nauplius larvae are less than 0.4 mm in length when they first hatch. Parthenogenesis Parthenogenesis is a natural form of reproduction in which growth and development of embryos occur without fertilisation. Thelytoky is a particular form of parthenogenesis in which the development of a female individual occurs from an unfertilised egg. Automixis is a form of thelytoky, but there are different kinds of automixis. The kind of automixis relevant here is one in which two haploid products from the same meiosis combine to form a diploid zygote. Diploid Artemia parthenogenetica reproduce by automictic parthenogenesis with central fusion (see diagram) and low but nonzero recombination. Central fusion of two of the haploid products of meiosis (see diagram) tends to maintain heterozygosity in transmission of the genome from mother to offspring, and to minimise inbreeding depression. Low crossover recombination during meiosis likely restrains the transition from heterozygosity to homozygosity over successive generations. Diet In their first stage of development, Artemia do not feed but consume their own energy reserves stored in the cyst. Wild brine shrimp eat microscopic planktonic algae. Cultured brine shrimp can also be fed particulate foods including yeast, wheat flour, soybean powder or egg yolk. Genetics, genomics and transcriptomics Artemia comprises sexually reproducing, diploid species and several obligate parthenogenetic Artemia populations consisting of different clones and ploidies (2n->5n). Several genetic maps have been published for Artemia. The past years, different transcriptomic studies have been performed to elucidate biological responses in Artemia, such as its response to salt stress, toxins, infection and diapause termination. These studies also led to various fully assembled Artemia transcriptomes. Recently, the Artemia genome was assembled and annotated, revealing a genome containing an unequaled 58% of repeats, genes with unusually long introns and adaptations unique to the extremophilic nature of Artemia in high salt and low oxygen environments. 
These adaptations include a unique energy-intensive endocytosis-based salt excretion strategy resembling salt excretion strategies of plants, as well as several survival strategies for extreme environments it has in common with the extremophilic tardigrade. Aquaculture Fish farm owners search for a cost-effective, easy to use, and available food that is preferred by the fish. From cysts, brine shrimp nauplii can readily be used to feed fish and crustacean larvae just after a one-day incubation. Instar I (the nauplii that just hatched and with large yolk reserves in their body) and instar II nauplii (the nauplii after first moult and with functional digestive tracts) are more widely used in aquaculture, because they are easy for operation, rich in nutrients, and small, which makes them suitable for feeding fish and crustacean larvae live or after drying. Toxicity test Artemia found favor as a model organism for use in toxicological assays, despite the recognition that it is too robust an organism to be a sensitive indicator species. In pollution research Artemia, the brine shrimp, has had extensive use as a test organism and in some circumstances is an acceptable alternative to the toxicity testing of mammals in the laboratory. The fact that millions of brine shrimp are so easily reared has been an important help in assessing the effects of a large number of environmental pollutants on the shrimps under well controlled experimental conditions. Conservation Overall, brine shrimp are abundant, but some populations and localized species do face threats, especially from habitat loss to introduced species. For example, A. franciscana of the Americas has been widely introduced to places outside its native range and is often able to outcompete local species, such as A. salina in the Mediterranean region. Among the highly localized species are A. urmiana from Lake Urmia in Iran. Once abundant, the species has drastically declined due to drought, leading to fears that it was almost extinct. However, a second population of this species has recently been discovered in the Koyashskoye Salt Lake, Ukraine. A. monica, the species commonly known as Mono Lake brine shrimp, can be found in Mono Lake, Mono County, California. In 1987, Dennis D. Murphy from Stanford University petitioned the United States Fish and Wildlife Service to add A. monica to the endangered species list under the Endangered Species Act (1973). The diversion of water by the Los Angeles Department of Water and Power resulted in rising salinity and concentration of sodium hydroxide in Mono Lake. Despite the presence of trillions of brine shrimp in the lake, the petition contended that the increase in pH would endanger them. The threat to the lake's water levels was addressed by a revision to California State Water Resources Control Board's policy, and the US Fish and Wildlife Service found on 7 September 1995 that the Mono Lake brine shrimp did not warrant listing. Space experiment Scientists have taken the eggs of brine shrimp to outer space to test the impact of radiation on life. Brine shrimp cysts were flown on the U.S. Biosatellite 2, Apollo 16, and Apollo 17 missions, and on the Russian Bion-3 (Cosmos 782), Bion-5 (Cosmos 1129), Foton 10, and Foton 11 flights. Some of the Russian flights carried European Space Agency experiments. On Apollo 16 and Apollo 17, the cysts traveled to the Moon and back. Cosmic rays that passed through an egg would be detected on the photographic film in its container. 
Some eggs were kept on Earth as experimental controls as part of the tests. Also, as the take-off in a spacecraft involves a lot of shaking and acceleration, one control group of egg cysts was accelerated to seven times the force of gravity and vibrated mechanically from side to side for several minutes so that they could experience the same violence of a rocket take-off. About 400 eggs were in each experimental group. All the egg cysts from the experiment were then placed in salt water to hatch under optimum conditions. The results showed A. salina eggs are highly sensitive to cosmic radiation; 90% of the embryos induced to develop from hit eggs died at different developmental stages.
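Acute Artemia toxicity assays of the kind described in the Toxicity test section above are usually summarized by an LC50, the concentration that kills half of the test animals over a fixed exposure time. The sketch below shows one simple way to estimate that value from dose-response data by log-linear interpolation; the assay numbers are invented for illustration and do not describe any real pollutant.

```python
import math

def interpolate_lc50(concentrations, mortality_fractions):
    """Estimate the LC50 by log-linear interpolation between the two tested
    concentrations that bracket 50% mortality.

    concentrations: ascending test concentrations (e.g. mg/L)
    mortality_fractions: observed fraction of nauplii dead at each concentration
    """
    points = list(zip(concentrations, mortality_fractions))
    for (c_lo, m_lo), (c_hi, m_hi) in zip(points, points[1:]):
        if m_lo <= 0.5 <= m_hi:
            # Dose-response curves are roughly linear on a log-concentration
            # scale near the midpoint, so interpolate on log10(concentration).
            frac = (0.5 - m_lo) / (m_hi - m_lo)
            log_lc50 = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_lc50
    raise ValueError("50% mortality is not bracketed by the tested concentrations")

# Hypothetical 24-hour assay: concentrations in mg/L and the dead fraction observed.
if __name__ == "__main__":
    conc = [1, 3.2, 10, 32, 100]
    dead = [0.05, 0.15, 0.40, 0.75, 0.95]
    print(f"Estimated LC50 = {interpolate_lc50(conc, dead):.1f} mg/L")
```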
Biology and health sciences
Crustaceans
Animals
411441
https://en.wikipedia.org/wiki/Brayton%20cycle
Brayton cycle
The Brayton cycle, also known as the Joule cycle, is a thermodynamic cycle that describes the operation of certain heat engines that have air or some other gas as their working fluid. It is characterized by isentropic compression and expansion, and isobaric heat addition and rejection, though practical engines have adiabatic rather than isentropic steps. The most common current application is in airbreathing jet engines and gas turbine engines. The engine cycle is named after George Brayton (1830–1892), the American engineer, who developed the Brayton Ready Motor in 1872, using a piston compressor and piston expander. An engine using the cycle was originally proposed and patented by Englishman John Barber in 1791, using a reciprocating compressor and a turbine expander. There are two main types of Brayton cycles: closed and open. In a closed cycle, the working gas stays inside the engine. Heat is introduced with a heat exchanger or external combustion and expelled with a heat exchanger. With the open cycle, air from the atmosphere is drawn in, goes through three steps of the cycle, and is expelled again to the atmosphere. Open cycles allow for internal combustion. Although the cycle is open, it is conventionally assumed for the purposes of thermodynamic analysis that the exhaust gases are reused in the intake, enabling analysis as a closed cycle. History In 1872, George Brayton applied for a patent for his "Ready Motor", a reciprocating heat engine operating on a gas power cycle. The engine was a two-stroke and produced power on every revolution. Brayton engines used a separate piston compressor and piston expander, with compressed air heated by internal fire as it entered the expander cylinder. The first versions of the Brayton engine were vapor engines which mixed fuel with air as it entered the compressor; town gas was used or a surface carburetor was also used for mobile operation . The fuel / air was contained in a reservoir / tank and then it was admitted to the expansion cylinder and burned. As the fuel/air mixture entered the expansion cylinder, it was ignited by a pilot flame. A screen was used to prevent the fire from entering or returning to the reservoir. In early versions of the engine, this screen sometimes failed and an explosion would occur. In 1874, Brayton solved the explosion problem by adding the fuel just prior to the expander cylinder. The engine now used heavier fuels such as kerosene and fuel oil. Ignition remained a pilot flame. Brayton produced and sold "Ready Motors" to perform a variety of tasks like water pumping, mill operation, running generators, and marine propulsion. The "Ready Motors" were produced from 1872 to sometime in the 1880s; several hundred such motors were likely produced during this time period. Brayton licensed the design to Simone in the UK. Many variations of the layout were used; some were single-acting and some were double-acting. Some had under walking beams; others had overhead walking beams. Both horizontal and vertical models were built. Sizes ranged from less than one to over 40 horsepower. Critics of the time claimed the engines ran smoothly and had a reasonable efficiency. Brayton-cycle engines were some of the first internal combustion engines used for motive power. In 1875, John Holland used a Brayton engine to power the world's first self-propelled submarine (Holland boat #1). In 1879, a Brayton engine was used to power a second submarine, the Fenian Ram. 
John Philip Holland's submarines are preserved in the Paterson Museum in the Old Great Falls Historic District of Paterson, New Jersey. In 1878, George B. Selden patented the first internal combustion automobile. Inspired by the internal combustion engine invented by Brayton displayed at the Centennial Exposition in Philadelphia in 1876, Selden patented a four-wheel car working on a smaller, lighter, multicylinder version. He then filed a series of amendments to his application which stretched out the legal process, resulting in a delay of 16 years before the patent was granted on November 5, 1895. In 1903, Selden sued Ford for patent infringement and Henry Ford fought the Selden patent until 1911. Selden had never actually produced a working car, so during the trial, two machines were constructed according to the patent drawings. Ford argued his cars used the four-stroke Alphonse Beau de Rochas cycle or Otto cycle and not the Brayton-cycle engine used in the Selden auto. Ford won the appeal of the original case. In 1887, Brayton developed and patented a four-stroke direct-injection oil engine. The fuel system used a variable-quantity pump and liquid-fuel, high-pressure, spray-type injection. The liquid was forced through a spring-loaded, relief-type valve (injector) which caused the fuel to become divided into small droplets. Injection was timed to occur at or near the peak of the compression stroke. A platinum igniter provided the source of ignition. Brayton describes the invention as: “I have discovered that heavy oils can be mechanically converted into a finely divided condition within a firing portion of the cylinder, or in a communicating firing chamber.” Another part reads, “I have for the first time, so far as my knowledge extends, regulated speed by variably controlling the direct discharge of liquid fuel into the combustion chamber or cylinder into a finely divided condition highly favorable to immediate combustion.” This was likely the first engine to use a lean-burn system to regulate engine speed and output. In this manner, the engine fired on every power stroke and speed and output were controlled solely by the quantity of fuel injected. In 1890, Brayton developed and patented a four-stroke, air-blast oil engine. The fuel system delivered a variable quantity of vaporized fuel to the center of the cylinder under pressure at or near the peak of the compression stroke. The ignition source was an igniter made from platinum wire. A variable-quantity injection pump provided the fuel to an injector where it was mixed with air as it entered the cylinder. A small crank-driven compressor provided the source for air. This engine also used the lean-burn system. Rudolf Diesel originally proposed a very high compression, constant-temperature cycle where the heat of compression would exceed the heat of combustion, but after several years of experiments, he realized that the constant-temperature cycle would not work in a piston engine. Early Diesel engines use an air blast system which was pioneered by Brayton in 1890. Consequently, these early engines use the constant-pressure cycle. 
Early gas turbine history
1791 First patent for a gas turbine (John Barber, United Kingdom)
1904 Unsuccessful gas turbine project by Franz Stolze in Berlin (first axial compressor)
1906 Armengaud-Lemale gas turbine in France (centrifugal compressor, no useful power)
1910 First gas turbine featuring intermittent combustion (Holzwarth, 150 kW, constant-volume combustion)
1923 First exhaust-gas turbocharger to increase the power of diesel engines
1939 Turbojet-powered Heinkel He 178, world's first jet aircraft, makes its first flight. World's first gas turbine for power generation by Brown-Boveri, Neuchâtel, Switzerland (velox burner, aerodynamics by Stodola)
Models A Brayton-type engine consists of three components: a compressor, a mixing chamber, and an expander. Modern Brayton engines are almost always a turbine type, although Brayton only made piston engines. In the original 19th-century Brayton engine, ambient air is drawn into a piston compressor, where it is compressed; ideally an isentropic process. The compressed air then passes through a mixing chamber where fuel is added, an isobaric process. The pressurized air and fuel mixture is then ignited in an expansion cylinder and energy is released, causing the heated air and combustion products to expand through a piston/cylinder, another ideally isentropic process. Some of the work extracted by the piston/cylinder is used to drive the compressor through a crankshaft arrangement. Gas turbine engines are also Brayton engines, with three components: an air compressor, a combustion chamber, and a gas turbine. Ideal Brayton cycle:
isentropic process – ambient air is drawn into the compressor, where it is pressurized.
isobaric process – the compressed air then passes through a combustion chamber, where fuel is burned, heating that air—a constant-pressure process, since the chamber is open to flow in and out.
isentropic process – the heated, pressurized air then gives up its energy, expanding through a turbine (or series of turbines). Some of the work extracted by the turbine is used to drive the compressor.
isobaric process – heat rejection (in the atmosphere).
Actual Brayton cycle:
adiabatic process – compression
isobaric process – heat addition
adiabatic process – expansion
isobaric process – heat rejection
Since neither the compression nor the expansion can be truly isentropic, losses through the compressor and the expander represent sources of inescapable working inefficiencies. In general, increasing the compression ratio is the most direct way to increase the overall power output of a Brayton system. The efficiency of the ideal Brayton cycle is η = 1 − r^(−(γ−1)/γ), where r is the pressure ratio and γ is the heat capacity ratio. Figure 1 indicates how the cycle efficiency changes with an increase in pressure ratio. Figure 2 indicates how the specific power output changes with an increase in the gas turbine inlet temperature for two different pressure ratio values. The highest gas temperature in the cycle occurs where work transfer to the high-pressure turbine (rotor inlet) takes place. This is lower than the highest gas temperature in the engine (combustion zone). The maximum cycle temperature is limited by the turbine materials and the required turbine life. This also limits the pressure ratios that can be used in the cycle. For a fixed turbine inlet temperature, raising the pressure ratio increases the thermal efficiency, but beyond an optimum value the net work output per cycle begins to fall.
With less work output per cycle, a larger mass flow rate (thus a larger system) is needed to maintain the same power output, which may not be economical. In most common designs, the pressure ratio of a gas turbine ranges from about 11 to 16. Methods to increase power The power output of a Brayton engine can be improved by: Reheat, wherein the working fluid—in most cases air—expands through a series of turbines, then is passed through a second combustion chamber before expanding to ambient pressure through a final set of turbines, has the advantage of increasing the power output possible for a given compression ratio without exceeding any metallurgical constraints (typically about 1000 °C). The use of an afterburner for jet aircraft engines can also be referred to as "reheat"; it is a different process in that the reheated air is expanded through a thrust nozzle rather than a turbine. The metallurgical constraints are somewhat alleviated, enabling much higher reheat temperatures (about 2000 °C). Reheat is most often used to improve the specific power, and is usually associated with a drop in efficiency; this effect is especially pronounced in afterburners due to the extreme amounts of extra fuel used. In overspray, after the first compressor stage, water is injected into the compressor, thus increasing the mass-flow inside the compressor, increasing the turbine output power significantly and reducing compressor outlet temperatures. In the second compressor stage, the water is completely converted to a gas form, offering some intercooling via its latent heat of vaporization. Methods to improve efficiency The efficiency of a Brayton engine can be improved by: Increasing pressure ratio, as Figure 1 above shows, increasing the pressure ratio increases the efficiency of the Brayton cycle. This is analogous to the increase of efficiency seen in the Otto cycle when the compression ratio is increased. However, practical limits occur when it comes to increasing the pressure ratio. First of all, increasing the pressure ratio increases the compressor discharge temperature. Since the turbine temperature has a limit determined by metallurgical and life constraints the allowable rise in temperature (fuel added) in the combustor becomes smaller. Also, because the length of the compressor blades becomes progressively smaller in the higher pressure stages a constant running gap, through the compressor, between the blade tips and the engine casing becomes a bigger percentage of the compressor blade height increasing air leakage past the tips. This causes a drop in compressor efficiency, and is most likely to occur in smaller gas turbines (since blades are inherently smaller to begin with). Finally, as can be seen in Figure 1, the efficiency levels off as pressure ratio increases. Hence, little gain is expected by increasing the pressure ratio further if it is already at a high level. Recuperator – If the Brayton cycle is run at a low pressure ratio and a high temperature increase in the combustion chamber, the exhaust gas (after the last turbine stage) might still be hotter than the compressed inlet gas (after the last compression stage but before the combustor). In that case, a heat exchanger can be used to transfer thermal energy from the exhaust to the already compressed gas, before it enters the combustion chamber. The thermal energy transferred is effectively reused, thus increasing efficiency. 
However, this form of heat recycling is only possible if the engine is run in a low-efficiency mode with a low pressure ratio in the first place. Transferring heat from the outlet (after the last turbine) to the inlet (before the first compressor stage) would reduce efficiency, as hotter inlet air means more volume, thus more work for the compressor. For engines with liquid cryogenic fuels, namely hydrogen, it might be feasible, though, to use the fuel to cool the inlet air before compression to increase efficiency. This concept is extensively studied for the SABRE engine. A Brayton engine also forms half of the combined cycle system, which combines with a Rankine engine to further increase overall efficiency. However, although this increases overall efficiency, it does not actually increase the efficiency of the Brayton cycle itself. Cogeneration systems make use of the waste heat from Brayton engines, typically for hot water production or space heating. Variants Closed Brayton cycle A closed Brayton cycle recirculates the working fluid; the air expelled from the turbine is reintroduced into the compressor. This cycle uses a heat exchanger to heat the working fluid instead of an internal combustion chamber. The closed Brayton cycle is used, for example, in closed-cycle gas turbines. Solar Brayton cycle In 2002, a hybrid open solar Brayton cycle was operated consistently and effectively for the first time, with relevant papers published, within the framework of the EU SOLGATE program. The air was heated from 570 K to over 1000 K in the combustion chamber. Further hybridization was achieved during the EU Solhyco project, which ran a hybridized Brayton cycle with solar energy and biodiesel only. This technology was scaled up to 4.6 MW within the Solugas project located near Seville, where it is currently demonstrated at precommercial scale. Reverse Brayton cycle A Brayton cycle that is driven in reverse uses work to move heat. This makes it a form of gas refrigeration cycle. When air is the working fluid, it is known as the Bell Coleman cycle. It is also used in the LNG industry to subcool LNG using power from a gas turbine to drive the compressor. Inverted Brayton cycle This is an open Brayton cycle which also generates work from heat, but with a different order of the stages. Incoming air is first heated at atmospheric pressure, and then passes through the turbine, generating work. The gas, now at a pressure lower than atmospheric, is cooled in a heat exchanger. The compressor raises the pressure again so the gas can be expelled to the atmosphere.
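As a numerical companion to the efficiency and work-output relationships discussed above, the sketch below evaluates the ideal air-standard Brayton cycle over a range of pressure ratios: thermal efficiency, net specific work for a fixed turbine inlet temperature, and whether the turbine exhaust remains hotter than the compressor discharge (the condition for a recuperator to be useful). It is a textbook idealization assuming constant specific heats and loss-free components, with illustrative temperatures, not a model of any particular engine.

```python
# Ideal air-standard Brayton cycle sketch: isentropic compressor and turbine,
# constant-pressure heat addition and rejection, constant specific heats.
GAMMA = 1.4      # heat capacity ratio of air
CP = 1.005       # kJ/(kg*K), specific heat of air at constant pressure
T1 = 300.0       # K, compressor inlet (ambient), illustrative value
T3 = 1400.0      # K, turbine inlet, limited in practice by turbine materials

def ideal_cycle(pressure_ratio: float):
    exponent = (GAMMA - 1.0) / GAMMA
    t2 = T1 * pressure_ratio ** exponent      # compressor discharge temperature
    t4 = T3 / pressure_ratio ** exponent      # turbine exhaust temperature
    w_compressor = CP * (t2 - T1)             # kJ/kg consumed by compression
    w_turbine = CP * (T3 - t4)                # kJ/kg produced by expansion
    w_net = w_turbine - w_compressor          # net specific work per kg of air
    efficiency = 1.0 - pressure_ratio ** (-exponent)
    recuperator_useful = t4 > t2              # exhaust hotter than compressed air
    return efficiency, w_net, recuperator_useful

if __name__ == "__main__":
    for r in (4, 8, 11, 16, 30, 50):
        eff, w_net, recup = ideal_cycle(r)
        print(f"r = {r:2d}: efficiency = {eff:.2f}, "
              f"net work = {w_net:6.1f} kJ/kg, recuperator useful: {recup}")
```

With these illustrative numbers, efficiency keeps rising with pressure ratio while net specific work peaks and then falls, and the recuperation condition only holds at the lower pressure ratios, matching the qualitative points made above.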
Physical sciences
Thermodynamics
Physics
411673
https://en.wikipedia.org/wiki/Candida%20albicans
Candida albicans
Candida albicans is an opportunistic pathogenic yeast that is a common member of the human gut flora. It can also survive outside the human body. It is detected in the gastrointestinal tract and mouth in 40–60% of healthy adults. It is usually a commensal organism, but it can become pathogenic in immunocompromised individuals under a variety of conditions. It is one of the few species of the genus Candida that cause the human infection candidiasis, which results from an overgrowth of the fungus. Candidiasis is, for example, often observed in HIV-infected patients. C. albicans is the most common fungal species isolated from biofilms either formed on (permanent) implanted medical devices or on human tissue. C. albicans, C. tropicalis, C. parapsilosis, and C. glabrata are together responsible for 50–90% of all cases of candidiasis in humans. A mortality rate of 40% has been reported for patients with systemic candidiasis due to C. albicans. By one estimate, invasive candidiasis contracted in a hospital causes 2,800 to 11,200 deaths yearly in the US. Nevertheless, these numbers may not truly reflect the true extent of damage this organism causes, given studies indicating that C. albicans can cross the blood–brain barrier in mice. C. albicans is commonly used as a model organism for fungal pathogens. It is generally referred to as a dimorphic fungus since it grows both as yeast and filamentous cells. However, it has several different morphological phenotypes including opaque, GUT, and pseudohyphal forms. C. albicans was for a long time considered an obligate diploid organism without a haploid stage. This is, however, not the case. Next to a haploid stage C. albicans can also exist in a tetraploid stage. The latter is formed when diploid C. albicans cells mate when they are in the opaque form. The diploid genome size is approximately 29 Mb, and up to 70% of the protein coding genes have not yet been characterized. C. albicans is easily cultured in the lab and can be studied both in vivo and in vitro. Depending on the media different studies can be done as the media influences the morphological state of C. albicans. A special type of medium is CHROMagar Candida, which can be used to identify different Candida species. Etymology "Candida albicans" can be read as tautological. "Candida" comes from the Latin word "candidus", meaning "shining white". "Albicans itself is the present participle of the Latin word "albicō", meaning "becoming white". This leads to one possible interpretation as the redundant phrase "pure white becoming white". It is often shortly referred to as thrush, candidiasis, or candida. More than a hundred synonyms have been used to describe C. albicans. Over 200 species have been described within the candida genus. The oldest reference to thrush, most likely caused by C. albicans, dates back to 400 BC in Hippocrates' work Of the Epidemics describing oral candidiasis. Genome The genome of C. albicans is almost 16Mb for the haploid size (28Mb for the diploid stage) and consists of 8 sets of chromosome pairs called chr1A, chr2A, chr3A, chr4A, chr5A, chr6A, chr7A and . The second set (C. albicans is diploid) has similar names but with a B at the end. Chr1B, chr2B, ... and chrRB. The whole genome contains 6,198 open reading frames (ORFs). Seventy percent of these ORFs have not yet been characterized. The whole genome has been sequenced making it one of the first fungi to be completely sequenced (next to Saccharomyces cerevisiae and Schizosaccharomyces pombe). 
All open reading frames (ORFs) are also available in Gateway-adapted vectors. Next to this ORFeome there is also the availability of a GRACE (gene replacement and conditional expression) library to study essential genes in the genome of C. albicans. The most commonly used strains to study C. albicans are the WO-1 and SC5314 strains. The WO-1 strain is known to switch between white-opaque form with higher frequency while the SC5314 strain is the strain used for gene sequence reference. One of the most important features of the C. albicans genome is the high heterozygosity. At the base of this heterozygosity lies the occurrence of numeric and structural chromosomal rearrangements and changes as means of generating genetic diversity by chromosome length polymorphisms (contraction/expansion of repeats), reciprocal translocations, chromosome deletions, Nonsynonymous single-nucleotide polymorphisms and trisomy of individual chromosomes. These karyotypic alterations lead to changes in the phenotype, which is an adaptation strategy of this fungus. These mechanisms are further being explored with the availability of the complete analysis of the C. albicans genome. An unusual feature of the genus Candida is that in many of its species (including C. albicans and C. tropicalis, but not, for instance, C. glabrata) the CUG codon, which normally specifies leucine, specifies serine in these species. This is an unusual example of a departure from the standard genetic code, and most such departures are in start codons or, for eukaryotes, mitochondrial genetic codes. This alteration may, in some environments, help these Candida species by inducing a permanent stress response, a more generalized form of the heat shock response. However, this different codon usage makes it more difficult to study C. albicans protein-protein interactions in the model organism S. cerevisiae. To overcome this problem a C. albicans specific two-hybrid system was developed. The genome of C. albicans is highly dynamic, contributed by the different CUG translation, and this variability has been used advantageously for molecular epidemiological studies and population studies in this species. The genome sequence has allowed for identifying the presence of a parasexual cycle (no detected meiotic division) in C. albicans. This study of the evolution of sexual reproduction in six Candida species found recent losses in components of the major meiotic crossover-formation pathway, but retention of a minor pathway. The authors suggested that if Candida species undergo meiosis it is with reduced machinery, or different machinery, and indicated that unrecognized meiotic cycles may exist in many species. In another evolutionary study, introduction of partial CUG identity redefinition (from Candida species) into Saccharomyces cerevisiae clones caused a stress response that negatively affected sexual reproduction. This CUG identity redefinition, occurring in ancestors of Candida species, was thought to lock these species into a diploid or polyploid state with possible blockage of sexual reproduction. Morphology C. albicans exhibits a wide range of morphological phenotypes due to phenotypic switching and bud to hypha transition. The yeast-to-hyphae transition (filamentation) is a rapid process and induced by environmental factors. Phenotypic switching is spontaneous, happens at lower rates and in certain strains up to seven different phenotypes are known. The best studied switching mechanism is the white to opaque switching (an epigenetic process). 
Other systems have been described as well. Two systems (the high-frequency switching system and white to opaque switching) were discover by David R. Soll and colleagues. Switching in C. albicans is often, but not always, influenced by environmental conditions such as the level of CO2, anaerobic conditions, medium used and temperature. In its yeast form C. albicans ranges from 10 to 12 microns. Spores can form on the pseudohyphae called chlamydospores which survive when put in unfavorable conditions such as dry or hot seasons. Yeast-to-hypha switching Although often referred to as dimorphic, C. albicans is, in fact, polyphenic (often also referred to as pleomorphic). When cultured in standard yeast laboratory medium, C. albicans grows as ovoid "yeast" cells. However, mild environmental changes in temperature, CO2, nutrients and pH can result in a morphological shift to filamentous growth. Filamentous cells share many similarities with yeast cells. Both cell types seem to play a specific, distinctive role in the survival and pathogenicity of C. albicans. Yeast cells seem to be better suited for the dissemination in the bloodstream while hyphal cells have been proposed as a virulence factor. Hyphal cells are invasive and speculated to be important for tissue penetration, colonization of organs and surviving plus escaping macrophages. The transition from yeast to hyphal cells is termed to be one of the key factors in the virulence of C. albicans; however, it is not deemed necessary. When C. albicans cells are grown in a medium that mimics the physiological environment of a human host, they grow as filamentous cells (both true hyphae and pseudohyphae). C. albicans can also form chlamydospores, the function of which remains unknown, but it is speculated they play a role in surviving harsh environments as they are most often formed under unfavorable conditions. The cAMP-PKA signaling cascade is crucial for the morphogenesis and an important transcriptional regulator for the switch from yeast like cells to filamentous cells is EFG1. High-frequency switching Besides the well-studied yeast-to-hyphae transition other switching systems have been described. One such system is the "high-frequency switching" system. During this switching different cellular morphologies (phenotypes) are generated spontaneously. This type of switching does not occur en masse, represents a variability system and it happens independently from environmental conditions. The strain 3153A produces at least seven different colony morphologies. In many strains the different phases convert spontaneously to the other(s) at a low frequency. The switching is reversible, and colony type can be inherited from one generation to another. Being able to switch through so many different (morphological) phenotypes makes C. albicans able to grow in different environments, both as a commensal and as a pathogen. In the 3153A strain, a gene called SIR2 (for silent information regulator), which seems to be important for phenotypic switching, has been found. SIR2 was originally found in Saccharomyces cerevisiae (brewer's yeast), where it is involved in chromosomal silencing—a form of transcriptional regulation, in which regions of the genome are reversibly inactivated by changes in chromatin structure (chromatin is the complex of DNA and proteins that make chromosomes). 
In yeast, genes involved in the control of mating type are found in these silent regions, and SIR2 represses their expression by maintaining a silent-competent chromatin structure in this region. The discovery of a C. albicans SIR2 implicated in phenotypic switching suggests it, too, has silent regions controlled by SIR2, in which the phenotype-specific genes may reside. How SIR2 itself is regulated in S. cerevisiae may yet provide more clues as to the switching mechanisms of C. albicans. White-opaque switching Next to the dimorphism and the first described high-frequency switching system C. albicans undergoes another high-frequency switching process called white-opaque switching, which is another phenotypic switching process in C. albicans. It was the second high-frequency switching system discovered in C. albicans. The white-opaque switch is an epigenetic switching system. Phenotypic switching is often used to refer to white-opaque switching, which consists of two phases: one that grows as round cells in smooth, white colonies (referred to as white form) and one that is rod-like and grows as flat, gray colonies (called opaque form). This switch between white cells and opaque cells is important for the virulence and the mating process of C. albicans as the opaque form is the mating competent form, being a million times more efficient in mating compared to the white type. This switching between white and opaque form is regulated by the WOR1 regulator (White to Opaque Regulator 1) which is controlled by the mating type locus (MTL) repressor (a1-α2) that inhibits the expression of WOR1. Besides the white and opaque phase there is also a third one: the gray phenotype. This phenotype shows the highest ability to cause cutaneous infections. The white, opaque, and gray phenotypes form a phenotypic switching system where white cells switch to and from the opaque phase, white cells can irreversibly switch to the gray phase, and both white and gray cells can switch to and from the opaque/an opaque-like phase, respectively. Since it is often difficult to differentiate between white, opaque and gray cells phloxine B, a dye, can be added to the medium. A potential regulatory molecule in the white to opaque switching is Efg1p, a transcription factor found in the WO-1 strain that regulates dimorphism, and more recently has been suggested to help regulate phenotypic switching. Efg1p is expressed only in the white and not in the gray cell-type, and overexpression of Efg1p in the gray form causes a rapid conversion to the white form. Environmental stress Glucose starvation is a likely common environmental stress encountered by C. albicans in its natural habitat. Glucose starvation causes an increase in intracellular reactive oxygen. This stress can lead to mating between two individuals of the same mating type, an interaction that may be frequent in nature under stressful conditions. White-GUT switch A very special type of phenotypic switch is the white-GUT switch (Gastrointestinally-IndUced Transition). GUT cells are extremely adapted to survival in the digestive tract by metabolic adaptations to available nutrients in the digestive tract. The GUT cells live as commensal organisms and outcompete other phenotypes. The transition from white to GUT cells is driven by passage through the gut where environmental parameters trigger this transition by increasing the WOR1 expression. 
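The white-opaque system described above behaves like a pair of heritable states with rare stochastic transitions between them. The toy simulation below illustrates how such a low-frequency, heritable switch maintains a small opaque subpopulation in a growing culture; the per-generation switching probabilities are invented placeholders, not measured rates for any C. albicans strain.

```python
import random

# Invented per-generation switching probabilities (placeholders only):
# white -> opaque is rare, opaque -> white is more frequent.
P_WHITE_TO_OPAQUE = 1e-3
P_OPAQUE_TO_WHITE = 5e-2

def simulate_population(generations: int, start_white: int = 1000, seed: int = 1):
    """Track white and opaque cell counts over discrete generations.

    Each cell produces two daughters per generation; daughters inherit the
    parental state unless a rare switching event occurs, mimicking an
    epigenetically heritable switch.
    """
    rng = random.Random(seed)
    white, opaque = start_white, 0
    history = [(white, opaque)]
    for _ in range(generations):
        new_white = new_opaque = 0
        for state, count in (("white", white), ("opaque", opaque)):
            for _ in range(count * 2):
                if state == "white":
                    if rng.random() < P_WHITE_TO_OPAQUE:
                        new_opaque += 1
                    else:
                        new_white += 1
                else:
                    if rng.random() < P_OPAQUE_TO_WHITE:
                        new_white += 1
                    else:
                        new_opaque += 1
        white, opaque = new_white, new_opaque
        history.append((white, opaque))
    return history

if __name__ == "__main__":
    for gen, (w, o) in enumerate(simulate_population(10)):
        print(f"generation {gen:2d}: white = {w:7d}, opaque = {o:5d}")
```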
Role in disease Candida is found worldwide but most commonly affects immunocompromised individuals diagnosed with serious diseases such as HIV and cancer. Candida species are ranked as one of the most common groups of organisms that cause hospital-acquired infections. Patients who have recently undergone surgery or a transplant, or who are in the intensive care unit (ICU), are at especially high risk; C. albicans infection is the top source of fungal infections in critically ill or otherwise immunocompromised patients. These patients predominantly develop oropharyngeal or thrush candidiasis, which can lead to malnutrition and interfere with the absorption of medication. Methods of transmission include mother-to-infant transmission during childbirth and person-to-person acquired infections, which most commonly occur in hospital settings where immunocompromised patients acquire the yeast from healthcare workers; the latter has a 40% incidence rate. People can become infected after having sex with a woman who has an existing vaginal yeast infection. Parts of the body that are commonly infected include the skin, genitals, throat, mouth, and blood. Distinguishing features of vaginal infection include discharge and a dry, red appearance of the vaginal mucosa or skin. Candida continues to be the fourth most commonly isolated organism in bloodstream infections. Healthy people usually do not suffer (severely) from superficial infections caused by a local alteration in cellular immunity, as seen in asthma patients who use oral corticosteroids. Superficial and local infections C. albicans commonly occurs as a superficial infection on mucous membranes in the mouth or vagina. Around 75% of women will suffer from vulvovaginal candidiasis (VVC) at least once in their lives, and about 90% of these infections are caused by C. albicans. It may also affect a number of other regions. For example, a higher prevalence of C. albicans colonization was reported in young individuals with tongue piercings, in comparison to unpierced matched individuals, but not in healthy young individuals who use intraoral orthodontic acrylic appliances. To infect host tissue, the usual unicellular yeast-like form of C. albicans reacts to environmental cues and switches into an invasive, multicellular filamentous form, a phenomenon called dimorphism. In addition, an overgrowth infection is considered a superinfection, the term usually applied when an infection becomes opportunistic and very resistant to antifungals. It then becomes suppressible by antibiotics. The infection is prolonged when the original sensitive strain is replaced by the antifungal-resistant strain. Candidiasis is known to cause gastrointestinal (GI) symptoms, particularly in immunocompromised patients or those receiving steroids (e.g. to treat asthma) or antibiotics. Recently, an emerging body of literature has suggested that an overgrowth of fungus in the small intestine of non-immunocompromised subjects may cause unexplained GI symptoms. Small intestinal fungal overgrowth (SIFO) is characterized by the presence of an excessive number of fungal organisms in the small intestine associated with gastrointestinal symptoms. The most common symptoms observed in these patients were belching, bloating, indigestion, nausea, diarrhea, and gas. The underlying mechanism(s) that predisposes to SIFO is unclear. Further studies are needed, both to confirm these observations and to examine the clinical relevance of fungal overgrowth. Systemic infections Systemic fungal infections (fungemias), including those by C. 
albicans have emerged as important causes of morbidity and mortality in immunocompromised patients (e.g., AIDS, cancer chemotherapy, organ or bone marrow transplantation). C. albicans often forms biofilms inside the body. Such C. albicans biofilms may form on the surface of implantable medical devices or organs. In these biofilms it is often found together with Staphylococcus aureus. Such multispecies infections lead to higher mortality. In addition, hospital-acquired infections by C. albicans have become a cause of major health concern. Once Candida cells are introduced into the bloodstream, mortality is high, up to 40–60%. Although Candida albicans is the most common cause of candidemia, there has been a decrease in its incidence and an increased isolation of non-albicans species of Candida in recent years. Preventive measures include maintaining good oral hygiene, keeping a healthy lifestyle including good nutrition, the careful use of antibiotics, treatment of infected areas, and keeping the skin clean, dry, and free from open wounds. Role of C. albicans in Crohn's disease The link between C. albicans and Crohn's disease has been investigated in a large cohort. This study demonstrated that members of families with multiple cases of Crohn's disease were more likely to be colonized by C. albicans than members of control families. Experimental studies show that chemically induced colitis promotes C. albicans colonization. In turn, C. albicans colonization generates anti-Saccharomyces cerevisiae antibodies (ASCA) and increases inflammation, histological scores, and pro-inflammatory cytokine expression. Diagnosis A United States study in 2022 showed that most cases of candidiasis are treated empirically (without culture, pending culture, or by symptoms in cases where culture did not show Candida), so it is not known whether the subtype is Candida albicans or another Candida species. For subtyping of candidiasis, a fungal culture can be performed, followed by a germ tube test in which a sample of fungal spores is suspended in animal serum and examined by microscopy for the detection of any germ tubes. Colonies of white or cream color on fungal culture with a positive germ tube test are strongly indicative of Candida albicans. Treatment There are relatively few drugs that can successfully treat candidiasis. Treatment commonly includes: amphotericin B, echinocandin, or fluconazole for systemic infections; nystatin for oral and esophageal infections; and clotrimazole for skin and genital yeast infections. Similarly to antibiotic resistance, resistance to many antifungals is becoming a problem. New antifungals have to be developed to cope with this problem, since only a limited number of antifungals are available. A general problem is that, in contrast to bacteria, fungi are often overlooked as a potential health problem. Economic implications Given that candidiasis is the fourth (to third) most frequent hospital-acquired infection worldwide, it has immense financial implications. Approximately 60,000 cases of systemic candidiasis each year in the USA alone lead to costs of between $2 billion and $4 billion. The total costs for candidiasis are among the highest of any fungal infection due to its high prevalence. The immense costs are partly explained by a longer stay in the intensive care unit or hospital in general. An extended stay of up to 21 additional days compared to non-infected patients is not uncommon. 
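As a rough back-of-envelope check, and not a figure stated in the sources, dividing the quoted annual cost by the quoted case count gives an implied average cost per case of systemic candidiasis in the USA of roughly

\[
\frac{\$2\text{--}4 \times 10^{9}}{60{,}000\ \text{cases}} \approx \$33{,}000\ \text{to}\ \$67{,}000\ \text{per case},
\]

which is consistent with the extended hospital and intensive care stays described above.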
Role of GSDMD in C. albicans infection Gasdermin D (GSDMD) is a protein that in humans is encoded by the GSDMD gene; it is a known target of the inflammasome and acts as an effector molecule of the programmed cell death known as pyroptosis. This protein mediates cell lysis to prevent pathogen replication and results in the release of the inflammatory cytokine interleukin-1β (IL-1β) into the extracellular space to recruit and activate immune cells at the site of infection. Inflammasome activation due to C. albicans infection triggers the release of cytokines necessary to fight the pathogen. Excessive release of these pro-inflammatory mediators has been shown to exaggerate systemic inflammation, leading to vascular injury and damage to vital organs. Unfortunately, Candida albicans therapy is often ineffective despite the availability of many antifungal drugs, mainly because of resistance phenomena. Conventional pyroptosis controlled by the inflammasome-GSDMD axis is hijacked by C. albicans to facilitate escape from macrophages through the unfolding of hyphae and candidalysin, a fungal toxin released from hyphae. It has been shown that disruption of GSDMD in macrophages infected with Candida albicans reduces the fungal load. In addition, the presence of hyphae and candidalysin are key factors in the activation of GSDMD and the release of Candida from macrophages. In Candida-infected mice, inhibition of GSDMD has also been shown to paradoxically improve prognosis and survival, indicating that this protein may be a potential therapeutic target in C. albicans-induced sepsis. Biofilm development Biofilm formation steps The biofilm of C. albicans is formed in four steps. First, there is the initial adherence step, in which the yeast-form cells adhere to the substrate. The second step is the intermediate step, in which the cells propagate to form microcolonies, and germ tubes form to yield hyphae. In the maturation step, the biofilm biomass expands, the extracellular matrix accumulates, and drug resistance increases. In the last step of biofilm formation, the yeast-form cells are released to colonize the surrounding environment (dispersion). Yeast cells released from a biofilm have novel properties, including increased virulence and drug tolerance. Zap1 Zap1, also known as Csr1 and Sur1 (zinc-responsive activator protein), is a transcription factor required for hypha formation in C. albicans biofilms. Zap1 controls the equilibrium of yeast and hyphal cells, the zinc transporters, and zinc-regulated genes in biofilms of C. albicans. Zinc Zinc (Zn2+) is important for the cell function of C. albicans, and Zap1 controls zinc levels in the cells through the zinc transporters Zrt1 and Zrt2. The regulation of zinc concentration in the cells is important for cell viability; if zinc levels get too high, zinc becomes toxic to the cells. Zrt1 transports zinc ions with high affinity, and Zrt2 transports zinc ions with low affinity. Mechanisms and proteins important for pathogenesis Filamentation The ability to switch between yeast cells and hyphal cells is an important virulence factor. Many proteins play a role in this very complex process. The formation of hyphae can, for example, help Candida albicans escape from macrophages in the human body. Moreover, C. albicans undergoes the yeast-to-hyphal transition within the acidic macrophage phagosome. 
This initially causes phagosome membrane distension, which eventually leads to phagosomal alkalinization by physical rupture, followed by escape. Hwp1 Hwp1 stands for Hyphal wall protein 1. Hwp1 is a mannoprotein located on the surface of the hyphae in the hyphal form of C. albicans. Hwp1 is a mammalian transglutaminase substrate; this host enzyme allows Candida albicans to attach stably to host epithelial cells. Adhesion of C. albicans to host cells is an essential first step in the infection process for colonization and subsequent induction of mucosal infection. Slr1 The RNA-binding protein Slr1 plays a role in instigating hyphal formation and virulence in C. albicans. Candidalysin Candidalysin is a cytolytic 31-amino acid α-helical peptide toxin that is released by C. albicans during hyphal formation. It contributes to virulence during mucosal infections. PRA1 During vaginal infections, the PRA1 (pH-regulated antigen) gene is up-regulated. Its expression correlates with the concentration of proinflammatory cytokines. Genetic and genomic tools Because C. albicans is a model organism, is an important human pathogen, and has an alternative codon usage (CUG is translated into serine rather than leucine), several specific projects and tools have been created to study it. The diploid nature and the absence of a sexual cycle make the organism difficult to study, but in the last 20 years, many systems have been developed to observe its genetics. Selection markers The selection markers most used in C. albicans are the CaNAT1 resistance marker (which confers resistance against nourseothricin) and MPAr or IMH3r (which confers resistance to mycophenolic acid). Next to the above-mentioned selection markers, a few auxotrophic strains were generated to work with auxotrophic markers. The URA3 marker (URA3 blaster method) is an often-used strategy in uridine-auxotrophic strains; however, studies have shown that differences in the position of URA3 in the genome can be involved in the pathogenicity of C. albicans. Besides URA3 selection, one can also use histidine, leucine, and arginine auxotrophy. The advantage of using these auxotrophies lies in the fact that they exhibit wild-type or nearly wild-type virulence in a mouse model compared to the URA3 system. One application of the leucine, arginine, and histidine auxotrophies is, for example, the Candida two-hybrid system. Full sequence genome The full genome of C. albicans has been sequenced and made publicly available in a Candida database. The heterozygous diploid strain used for this full genome sequence project is the laboratory strain SC5314. The sequencing was done using a whole-genome shotgun approach. ORFeome project Every predicted ORF has been created in a Gateway-adapted vector (pDONR207) and made publicly available. The vectors (plasmids) can be propagated in E. coli and grown on LB+gentamicin medium. This way, every ORF is readily available in an easy-to-use vector. Using the Gateway system, it is possible to transfer the ORF of interest to any other Gateway-adapted vector for further studies of the specific ORF. CIp10 integrative plasmid In contrast to the yeast S. cerevisiae, episomal plasmids do not remain stable in C. albicans. In order to work with plasmids in C. albicans, an integrative approach (plasmid integration into the genome) thus has to be used. A second problem is that most plasmid transformations are rather inefficient in C. albicans; however, the CIp10 plasmid overcomes these problems and can be used with ease to transform C. albicans very efficiently. 
The plasmid integrates into the RP10 locus, as disruption of one RP10 allele does not seem to affect the viability and growth of C. albicans. Several adaptations of this plasmid have been made since the original became available. Candida two-hybrid (C2H) system Due to the aberrant codon usage of C. albicans, it is less feasible to use the common host organism (Saccharomyces cerevisiae) for two-hybrid studies. To overcome this problem, a C. albicans two-hybrid (C2H) system was created. The strain SN152, which is auxotrophic for leucine, arginine, and histidine, was used to create this C2H system. It was adapted by integrating a HIS1 reporter gene preceded by five LexAOp sequences. In the C2H system, the bait plasmid (pC2HB) contains the Staphylococcus aureus LexA BD, while the prey plasmid (pC2HP) harbors the viral AD VP16. Both plasmids are integrative plasmids, since episomal plasmids do not stay stable in C. albicans. The reporter gene used in the system is the HIS1 gene. When proteins interact, the cells are able to grow on medium lacking histidine due to the activation of the HIS1 reporter gene. Several interactions have thus far been detected using this system in a small-scale setup. A first high-throughput screening has also been performed. Interacting proteins can be found at the BioGRID database. Bimolecular fluorescence complementation (BiFC) Besides the C2H system, a BiFC system has been developed to study protein-protein interactions in C. albicans. With this system, protein interactions can be studied in their native subcellular location, in contrast to a C2H system, in which the proteins are forced into the nucleus. With BiFC, one can study, for example, protein interactions that take place at the cell membrane or vacuolar membrane. Microarrays Both DNA and protein microarrays have been designed to study gene expression profiles and antibody production in patients against C. albicans cell wall proteins. GRACE library Using a tetracycline-regulatable promoter system, a gene replacement and conditional expression (GRACE) library was created for 1,152 genes. By using the regulatable promoter and having deleted one of the alleles of the specific gene, it was possible to discriminate between non-essential and essential genes. Of the 1,152 genes tested, 567 were shown to be essential. Knowledge of essential genes can be used to discover novel antifungals. CRISPR/Cas9 CRISPR/Cas9 has been adapted for use in C. albicans. Several studies have been performed using this system. Application in engineering C. albicans has been used in combination with carbon nanotubes (CNT) to produce stable, electrically conductive bio-nano-composite tissue materials that have been used as temperature-sensing elements. Notable C. albicans researchers Neil A. R. Gow, Alexander D. Johnson, Frank C. Odds, Charles Philippe Robin, Fred Sherman, David R. Soll
Biology and health sciences
Basics
Plants
411727
https://en.wikipedia.org/wiki/Toilet%20seat
Toilet seat
A toilet seat is a hinged unit consisting of a round or oval open seat, and usually a lid, which is bolted onto the bowl of a toilet used in a sitting position (as opposed to a squat toilet). The seat can be either for a flush toilet or a dry toilet. A toilet seat consists of the seat itself, which may be contoured for the user to sit on, and the lid, which covers the toilet when it is not in use; the lid may be absent in some cases, particularly in public restrooms. Usage Toilet seats often have a lid. This lid is frequently left open. The combined toilet seat and lid may be kept in a closed position when a toilet is not in use, so that, at a minimum, the lid must be raised prior to use. It can be closed to prevent small items from falling in, reduce odors, or provide a chair in the toilet room for aesthetic purposes. Some studies show that closing the lid prevents the spread of aerosols on flushing ("toilet plume"), which might be a source of disease transmission. Depending on the sex of the user and the type of use (urination or defecation), the seat itself may be left either up or down. The issue of whether the seat and lid should be placed in the closed position after use is a perennial topic of discussion and light humor (usually across gender lines), with it often being argued that leaving the toilet seat up is more efficient for men, while putting it down is more considerate for women. The "right answer" seems to depend on factors ranging from the location of the toilet (public or private) and the population of the users (e.g. a sorority house vs a frat house) to personal or family values, opinions, preferences, agreements, or toiletry habits. Toilet seats often rest not directly on the porcelain or metal body of the toilet itself but upon the hinges and upon tabs/spacers affixed at a few spots. Similarly, lids do not rest directly in uniform contact with the seat but are elevated slightly above it by the hinges and the tabs/spacers affixed at a few spots. These gaps are a possible route by which effluent aerosols can spread even when the lid is shut. Variations Toilet seats are manufactured in a range of different styles and colors, and they may be furnished to match the style of the toilet itself. They are usually built to fit the shape of the toilet bowl, two examples being the elongated bowl and the regular bowl. Some toilet seats are fitted with slow-closing hinges to reduce noise by preventing them from slamming against the bowl. Some seats are made of various types of wood, like oak or walnut, and others are made soft for added comfort. Seats with printed multi-colored designs, such as floral or newsprint, have been fashionable at times. Other designs are made of transparent plastic encapsulating small decorative items such as seashells or coins. The price of toilet seats varies quite considerably. Decorative textile covers for the toilet seat lid have gone in and out of fashion. Advocates claim that they allow the toilet to be used as a more comfortable seat and provide another way of decorating a bathroom. At the same time, critics view them as a sanitation problem which creates unnecessary work. Some metal toilets, such as those in many jails and prisons, have built-in toilet seats that cannot be removed, so that an inmate cannot fashion one into a weapon, shield, or escape tool. 
Open front toilet seats The International Association of Plumbing and Mechanical Officials' Uniform Plumbing Code, section 409.2.2, requires that "all water closet seats, except those within dwelling units or for private use, shall be of the open front type". There is an exception for toilets with an automatic toilet-seat cover dispenser. The code is followed by most public authorities, so many public toilets feature open front toilet seats (also called "split seats"). The purpose of this seat design is to prevent the genitals from contacting the seat. It also omits an area of the seat that could be contaminated with urine and avoids contact for easier wiping. Modern design, electronic integration, and function Slow-close A slow-close seat uses special hinges to prevent the seat from slamming down. These hinges provide resistance, allowing the seat to lower slowly. Warming High-tech toilet seats may include many features, including a heated seat, a bidet, and a blow drier. High-tech seats are most common in Japan, where a seat with integrated bidets is colloquially called a Washlet, after a leading brand. Electrically heated toilet seats have been popular in Japan since the 1970s. Since Japanese bathrooms are often unheated, the toilet seat sometimes doubles as a space heater. Integrated bidets date from around 1980, have since become very popular in Japan, and are becoming more common in most other developed countries. Water-heated seats were in use in royal homes in Britain in the twentieth century. The first electrically heated toilet seat was manufactured by Cyril Reginald Clayton at St Leonard's on Sea in Sussex. A UK patent was applied for on 5 January 1959, filed on 4 January 1960, and granted in August 1963 (UK patent no. 934209). The first model, the 'Deluxete', was made of fiberglass with a heating element in the lid triggered by a mercury switch that warmed the seat when the lid was down. Subsequent improvements were made and another UK patent was applied for on 20 May 1970, this time for a deodorizing model with an integral fan. It was granted on 17 May 1972 (UK patent no. 1260402). At first marketed as the 'Deodar', this model was later sold as the 'Readywarm'. Among the early users of the 'Deluxete' was racing driver Stirling Moss. With the permission of Reginald Clayton, the electrically heated seat was further developed by the Japanese firm Matsushita. In 1993, Matt DiRoberto of Worcester, Massachusetts, invented the padded toilet seat, an early 1990s fad. Seatless toilet A seatless toilet has no toilet seat. It may be much cleaner and easier to clean than toilet seats, while the structurally sound and hard rim of a porcelain toilet bowl still allows sitting. Users not aware that this type of toilet can be sat on may hover over it. Disposable covers A disposable piece of paper, shaped like the toilet seat itself and known as a disposable toilet seat cover or toilet sheet, can be placed on the seat. Its purpose is to make the toilet user feel more reassured that they are protected from germs. The first known patented model of the toilet seat cover dispenser dates back to 1942 and was invented by J.C. Thomasa. While toilet seat covers give public toilet users a sense of cleanliness, studies have shown they are not needed, as there are few germs on a toilet seat, and infections such as salmonella are spread via the hands, not the buttocks. Society and culture Humor The toilet seat functions as a comic standby for sight gags relating to toilet humor. 
The most common is someone staggering out of a toilet room after an explosion with a toilet seat around his neck. In the television show Dead Like Me, George Lass, the main character, is killed when a zero-G toilet seat from the space station Mir re-enters the atmosphere. US Navy's "$600 Toilet Seat" The P-3C Orion antisubmarine aircraft went into service in 1962. Twenty-five years later, in 1987, it was determined that the toilet shroud, the cover that fits over the toilet, needed replacement. Since the airplane was out of production, this would require new tooling to produce. These on-board toilets required a uniquely shaped, molded fiberglass shroud that had to satisfy specifications for vibration resistance, weight, and durability. The molds had to be specially made, as it had been decades since their original production. The price reflected the design work and the cost of the equipment to manufacture them. Lockheed Corporation charged $34,560 for 54 toilet covers, or $640 each. President Ronald Reagan held a televised news conference in 1987, where he held up one of these shrouds and stated: "We didn't buy any $600 toilet seat. We bought a $600 molded plastic cover for the entire toilet system." A Pentagon spokesman, Glenn Flood, stated, "The original price we were charged was $640, not just for a toilet seat, but for the large molded plastic assembly covering the entire seat, tank and full toilet assembly. The seat itself cost $9 and some cents.… The supplier charged too much, and we had the amount corrected." The president of Lockheed at the time, Lawrence Kitchen, adjusted the price to $100 each and returned $29,165. "This action is intended to put to rest an artificial issue," Kitchen stated.
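As a quick arithmetic check of the figures quoted above (a reader's calculation, not one stated in the sources), the billed amount and the refund are consistent with the per-unit prices:

\[
54 \times \$640 = \$34{,}560, \qquad 54 \times (\$640 - \$100) = \$29{,}160 \approx \$29{,}165,
\]

i.e. repricing the 54 covers from $640 to $100 each accounts for essentially all of the $29,165 that Lockheed returned.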
Technology
Hydraulics and pneumatics
null
411782
https://en.wikipedia.org/wiki/Staining
Staining
Staining is a technique used to enhance contrast in samples, generally at the microscopic level. Stains and dyes are frequently used in histology (microscopic study of biological tissues), in cytology (microscopic study of cells), and in the medical fields of histopathology, hematology, and cytopathology that focus on the study and diagnosis of diseases at the microscopic level. Stains may be used to define biological tissues (highlighting, for example, muscle fibers or connective tissue), cell populations (classifying different blood cells), or organelles within individual cells. In biochemistry, it involves adding a class-specific (DNA, proteins, lipids, carbohydrates) dye to a substrate to qualify or quantify the presence of a specific compound. Staining and fluorescent tagging can serve similar purposes. Biological staining is also used to mark cells in flow cytometry, and to flag proteins or nucleic acids in gel electrophoresis. Light microscopes are used for viewing stained samples at high magnification, typically using bright-field or epi-fluorescence illumination. Staining is not limited to biological materials, since it can also be used to study the structure of other materials; for example, the lamellar structures of semi-crystalline polymers or the domain structures of block copolymers. In vivo vs In vitro In vivo staining (also called vital staining or intravital staining) is the process of dyeing living tissues. By causing certain cells or structures to take on contrasting colours, their form (morphology) or position within a cell or tissue can be readily seen and studied. The usual purpose is to reveal cytological details that might otherwise not be apparent; however, staining can also reveal where certain chemicals or specific chemical reactions are taking place within cells or tissues. In vitro staining involves colouring cells or structures that have been removed from their biological context. Certain stains are often combined to reveal more details and features than a single stain alone. Combined with specific protocols for fixation and sample preparation, scientists and physicians can use these standard techniques as consistent, repeatable diagnostic tools. A counterstain is a stain that makes cells or structures more visible when they are not completely visible with the principal stain. Crystal violet stains both Gram-positive and Gram-negative organisms. Treatment with alcohol removes the crystal violet colour from Gram-negative organisms only. Safranin is then used as a counterstain to colour the Gram-negative organisms that were decolourised by the alcohol. While ex vivo, many cells continue to live and metabolize until they are "fixed". Some staining methods are based on this property. Those stains excluded by the living cells but taken up by the already dead cells are called vital stains (e.g. trypan blue or propidium iodide for eukaryotic cells). Those that enter and stain living cells are called supravital stains (e.g. New Methylene Blue and brilliant cresyl blue for reticulocyte staining). However, these stains are eventually toxic to the organism, some more so than others. Partly due to their toxic interaction inside a living cell, when supravital stains enter a living cell, they might produce a characteristic pattern of staining different from the staining of an already fixed cell (e.g. "reticulocyte" look versus diffuse "polychromasia"). To achieve desired effects, the stains are used in very dilute solutions ranging from to (Howey, 2000). 
Note that many stains may be used in both living and fixed cells. Preparation The preparatory steps involved depend on the type of analysis planned. Some or all of the following procedures may be required. Wet mounts are used to view live organisms and can be made using water and certain stains. The liquid is added to the slide before the addition of the organism, and a coverslip is placed over the specimen in the water and stain to help contain it within the field of view. Fixation, which may itself consist of several steps, aims to preserve the shape of the cells or tissue involved as much as possible. Sometimes heat fixation is used to kill, adhere, and alter the specimen so it accepts stains. Most chemical fixatives (chemicals causing fixation) generate chemical bonds between proteins and other substances within the sample, increasing their rigidity. Common fixatives include formaldehyde, ethanol, methanol, and/or picric acid. Pieces of tissue may be embedded in paraffin wax to increase their mechanical strength and stability and to make them easier to cut into thin slices. Mordants are chemical agents which have the power of making dyes stain materials which are otherwise unstainable. Mordants are classified into two categories: a) Basic mordants: react with acidic dyes, e.g. alum, ferrous sulfate, cetylpyridinium chloride, etc. b) Acidic mordants: react with basic dyes, e.g. picric acid, tannic acid, etc. Direct Staining: Carried out without a mordant. Indirect Staining: Staining with the aid of a mordant. Permeabilization involves treatment of cells with (usually) a mild surfactant. This treatment dissolves cell membranes, and allows larger dye molecules into the cell's interior. Mounting usually involves attaching the samples to a glass microscope slide for observation and analysis. In some cases, cells may be grown directly on a slide. For samples of loose cells (as with a blood smear or a pap smear) the sample can be directly applied to a slide. For larger pieces of tissue, thin sections (slices) are made using a microtome; these slices can then be mounted and inspected. Standardization Most of the dyes commonly used in microscopy are available as BSC-certified stains. This means that samples of the manufacturer's batch have been tested by an independent body, the Biological Stain Commission (BSC), and found to meet or exceed certain standards of purity, dye content, and performance in staining techniques, ensuring more accurately performed experiments and more reliable results. These standards are published in the commission's journal Biotechnic & Histochemistry. Many dyes are inconsistent in composition from one supplier to another. The use of BSC-certified stains eliminates a source of unexpected results. Some vendors sell stains "certified" by themselves rather than by the Biological Stain Commission. Such products may or may not be suitable for diagnostic and other applications. Negative staining A simple staining method for bacteria that is usually successful, even when the positive staining methods fail, is to use a negative stain. This can be achieved by smearing the sample onto the slide and then applying nigrosin (a black synthetic dye) or India ink (an aqueous suspension of carbon particles). After drying, the microorganisms may be viewed in bright field microscopy as lighter inclusions well-contrasted against the dark environment surrounding them. 
Negative staining is able to stain the background instead of the organisms because the cell wall of microorganisms typically has a negative charge which repels the negatively charged stain. The dyes used in negative staining are acidic. Note: negative staining is a mild technique that may not destroy the microorganisms, and is therefore unsuitable for studying pathogens. Positive staining Unlike negative staining, positive staining uses basic dyes to color the specimen against a bright background. While a chromophore is used in both negative and positive staining alike, the type of chromophore used in this technique is a positively charged ion instead of a negative one. The negatively charged cell wall of many microorganisms attracts the positively charged chromophore, which causes the specimen to absorb the stain, giving it the color of the stain being used. Positive staining is more commonly used than negative staining in microbiology. The different types of positive staining are listed below. Simple versus differential Simple staining is a technique that only uses one type of stain on a slide at a time. Because only one stain is being used, the specimens (for positive stains) or background (for negative stains) will be one color. Therefore, simple stains are typically used for viewing only one organism per slide. Differential staining uses multiple stains per slide. Based on the stains being used, organisms with different properties will appear different colors, allowing for categorization of multiple specimens. Differential staining can also be used to color different organelles within one organism, as can be seen in endospore staining. Types Techniques Gram Gram staining is used to determine Gram status, classifying bacteria broadly based on the composition of their cell wall. Gram staining uses crystal violet to stain cell walls, iodine (as a mordant), and a fuchsin or safranin counterstain to mark all bacteria. Gram status helps divide specimens of bacteria into two groups, generally representative of their underlying phylogeny. This characteristic, in combination with other techniques, makes it a useful tool in clinical microbiology laboratories, where it can be important in early selection of appropriate antibiotics. On most Gram-stained preparations, Gram-negative organisms appear red or pink due to their counterstain. Due to their higher lipid content, after alcohol treatment the porosity of the cell wall increases, hence the CVI complex (crystal violet - iodine) can pass through. Thus, the primary stain is not retained. In addition, in contrast to most Gram-positive bacteria, Gram-negative bacteria have only a few layers of peptidoglycan and a secondary cell membrane made primarily of lipopolysaccharide. Endospore Endospore staining is used to identify the presence or absence of endospores, which make bacteria very difficult to kill. Bacterial spores have proven difficult to stain, as they are not permeable to aqueous dye reagents. Endospore staining is particularly useful for identifying endospore-forming bacterial pathogens such as Clostridioides difficile. Prior to the development of more efficient methods, this stain was performed using the Wirtz method with heat fixation and a counterstain. Through the use of malachite green and a dilution of carbol fuchsin, fixing bacteria in osmic acid was a reliable way to ensure that the dyes did not blend. However, newly revised staining methods have significantly decreased the time it takes to create these stains. 
This revision included the substitution of carbol fuchsin with aqueous safranin, paired with a newly diluted 5% formulation of malachite green. This new and improved composition of stains was performed in the same way as before, with the use of heat fixation, rinsing, and blotting dry for later examination. Upon examination, all endospore-forming bacteria are stained green, while all other cells appear red. Ziehl-Neelsen A Ziehl-Neelsen stain is an acid-fast stain used to stain species such as Mycobacterium tuberculosis that do not stain with standard laboratory staining procedures such as Gram staining. This stain is performed through the use of both red-coloured carbol fuchsin, which stains the bacteria, and a counterstain such as methylene blue. Haematoxylin and eosin (H&E) Haematoxylin and eosin staining is frequently used in histology to examine thin tissue sections. Haematoxylin stains cell nuclei blue, while eosin stains cytoplasm, connective tissue and other extracellular substances pink or red. Eosin is strongly absorbed by red blood cells, colouring them bright red. In a skillfully made H&E preparation the red blood cells are almost orange, and collagen and cytoplasm (especially muscle) acquire different shades of pink. Papanicolaou Papanicolaou staining, or PAP staining, was developed to replace fine needle aspiration cytology (FNAC) in hopes of decreasing staining times and cost without compromising quality. This stain is a frequently used method for examining cell samples from a variety of tissue types in various organs. PAP staining has undergone several modifications in order to become a "suitable alternative" for FNAC. This transition stemmed from scientists' appreciation of wet-fixed smears, which preserve the structures of the nuclei, as opposed to the opaque appearance of air-dried Romanowsky smears. This led to the creation of a hybrid of wet-fixed and air-dried staining known as the ultrafast Papanicolaou stain. This modification includes the use of nasal saline to rehydrate cells to increase cell transparency and is paired with the use of alcoholic formalin to enhance the colors of the nuclei. The Papanicolaou stain is now used in place of cytological staining in all organ types due to its increased morphological quality, decreased staining time, and decreased cost. It is frequently used to stain Pap smear specimens. It uses a combination of haematoxylin, Orange G, eosin Y, Light Green SF yellowish, and sometimes Bismarck Brown Y. PAS Periodic acid-Schiff is a histology special stain used to mark carbohydrates (glycogen, glycoprotein, proteoglycans). PAS is commonly used on liver tissue, where glycogen deposits are found, in an effort to distinguish different types of glycogen storage disease. PAS is important because it can detect glycogen granules found in tumors of the ovaries and pancreas of the endocrine system, as well as in the bladder and kidneys of the renal system. Basement membranes can also show up in a PAS stain and can be important when diagnosing renal disease. Due to the high volume of carbohydrates within the cell wall of hyphae and yeast forms of fungi, the periodic acid-Schiff stain can help locate these species inside tissue samples of the human body. Masson Masson's trichrome is (as the name implies) a three-colour staining protocol. The recipe has evolved from Masson's original technique for different specific applications, but all are well-suited to distinguish cells from surrounding connective tissue. 
Most recipes produce red keratin and muscle fibers, blue or green staining of collagen and bone, light red or pink staining of cytoplasm, and black cell nuclei. Romanowsky The Romanowsky stains produce a polychrome staining effect and are based on a combination of eosin (chemically reduced eosin) plus demethylated methylene blue (containing its oxidation products azure A and azure B). This stain develops varying colors for all cell structures (the "Romanowsky-Giemsa effect") and thus was used in staining neutrophil polymorphs and cell nuclei. Common variants include Wright's stain, Jenner's stain, May-Grunwald stain, Leishman stain and Giemsa stain. All are used to examine blood or bone marrow samples. They are preferred over H&E for inspection of blood cells because different types of leukocytes (white blood cells) can be readily distinguished. All are also suited to examination of blood to detect blood-borne parasites such as malaria. Silver Silver staining is the use of silver to stain histologic sections. This kind of staining is important in the demonstration of proteins (for example type III collagen) and DNA. It is used to show both substances inside and outside cells. Silver staining is also used in temperature gradient gel electrophoresis. Argentaffin cells reduce silver solution to metallic silver after formalin fixation. This method was discovered by the Italian Camillo Golgi, using a reaction between silver nitrate and potassium dichromate, thus precipitating silver chromate in some cells (see Golgi's method). Argyrophilic cells reduce silver solution to metallic silver after being exposed to a stain that contains a reductant; examples include hydroquinone and formalin. Sudan Sudan staining utilizes Sudan dyes to stain sudanophilic substances, often including lipids. Sudan III, Sudan IV, Oil Red O, Osmium tetroxide, and Sudan Black B are often used. Sudan staining is often used to determine the level of fecal fat in diagnosing steatorrhea. Wirtz-Conklin The Wirtz-Conklin stain is a special technique designed for staining true endospores with the use of malachite green dye as the primary stain and safranin as the counterstain. Once stained, they do not decolourize. The addition of heat during the staining process is a key contributing factor. Heat helps open the spore's membrane so the dye can enter. The main purpose of this stain is to show germination of bacterial spores. If the process of germination is taking place, then the spore will turn green in color due to malachite green and the surrounding cell will be red from the safranin. This stain can also help determine the orientation of the spore within the bacterial cell, whether it is terminal (at the tip), subterminal (within the cell), or central (completely in the middle of the cell). Collagen hybridizing peptide Collagen hybridizing peptide (CHP) staining allows for an easy, direct way to stain denatured collagens of any type (Type I, II, IV, etc.) regardless of whether they were damaged or degraded via enzymatic, mechanical, chemical, or thermal means. They work by refolding into the collagen triple helix with the available single strands in the tissue. CHPs can be visualized by a simple fluorescence microscope. Common biological stains Different stains react or concentrate in different parts of a cell or tissue, and these properties are used to advantage to reveal specific parts or areas. Some of the most common biological stains are listed below. 
Unless otherwise marked, all of these dyes may be used with fixed cells and tissues; vital dyes (suitable for use with living organisms) are noted. Acridine orange Acridine orange (AO) is a nucleic acid selective fluorescent cationic dye useful for cell cycle determination. It is cell-permeable, and interacts with DNA and RNA by intercalation or electrostatic attractions. When bound to DNA, it is very similar spectrally to fluorescein. Like fluorescein, it is also useful as a non-specific stain for backlighting conventionally stained cells on the surface of a solid sample of tissue (fluorescence backlighted staining). Bismarck brown Bismarck brown (also Bismarck brown Y or Manchester brown) imparts a yellow colour to acid mucins and an intense brown color to mast cells. One drawback of this stain is that it blots out any other structure surrounding it and lowers the quality of the contrast. It has to be paired with other stains in order to be useful. Some complementing stains used alongside Bismarck brown are haematoxylin and toluidine blue, which provide better contrast within the histology sample. Carmine Carmine is an intensely red dye used to stain glycogen, while Carmine alum is a nuclear stain. Carmine stains require the use of a mordant, usually aluminum. Coomassie blue Coomassie brilliant blue nonspecifically stains proteins a strong blue colour. It is often used in gel electrophoresis. Cresyl violet Cresyl violet stains the acidic components of the neuronal cytoplasm a violet colour, specifically Nissl bodies. Often used in brain research. Crystal violet Crystal violet, when combined with a suitable mordant, stains cell walls purple. Crystal violet is the stain used in Gram staining. DAPI DAPI is a fluorescent nuclear stain, excited by ultraviolet light and showing strong blue fluorescence when bound to DNA. DAPI binds with A=T rich repeats of chromosomes. DAPI is also not visible with regular transmission microscopy. It may be used in living or fixed cells. DAPI-stained cells are especially appropriate for cell counting. Eosin Eosin is most often used as a counterstain to haematoxylin, imparting a pink or red colour to cytoplasmic material, cell membranes, and some extracellular structures. It also imparts a strong red colour to red blood cells. Eosin may also be used as a counterstain in some variants of Gram staining, and in many other protocols. There are actually two very closely related compounds commonly referred to as eosin. Most often used is eosin Y (also known as eosin Y ws or eosin yellowish); it has a very slightly yellowish cast. The other eosin compound is eosin B (eosin bluish or imperial red); it has a very faint bluish cast. The two dyes are interchangeable, and the use of one or the other is more a matter of preference and tradition. Ethidium bromide Ethidium bromide intercalates and stains DNA, providing a fluorescent red-orange stain. Although it will not stain healthy cells, it can be used to identify cells that are in the final stages of apoptosis – such cells have much more permeable membranes. Consequently, ethidium bromide is often used as a marker for apoptosis in cell populations and to locate bands of DNA in gel electrophoresis. The stain may also be used in conjunction with acridine orange (AO) in viable cell counting. This EB/AO combined stain causes live cells to fluoresce green whilst apoptotic cells retain the distinctive red-orange fluorescence. Acid fuchsin Acid fuchsine may be used to stain collagen, smooth muscle, or mitochondria. 
Acid fuchsin is used as the nuclear and cytoplasmic stain in Mallory's trichrome method. Acid fuchsin stains cytoplasm in some variants of Masson's trichrome. In Van Gieson's picro-fuchsine, acid fuchsin imparts its red colour to collagen fibres. Acid fuchsin is also a traditional stain for mitochondria (Altmann's method). Haematoxylin Haematoxylin (hematoxylin in North America) is a nuclear stain. Used with a mordant, haematoxylin stains nuclei blue-violet or brown. It is most often used with eosin in the H&E stain (haematoxylin and eosin) staining, one of the most common procedures in histology. Hoechst stains Hoechst is a bis-benzimidazole derivative compound that binds to the minor groove of DNA. Often used in fluorescence microscopy for DNA staining, Hoechst stains appear yellow when dissolved in aqueous solutions and emit blue light under UV excitation. There are two major types of Hoechst: Hoechst 33258 and Hoechst 33342. The two compounds are functionally similar, but with a small difference in structure. Hoechst 33258 contains a terminal hydroxyl group and is thus more soluble in aqueous solution; however, this characteristic reduces its ability to penetrate the plasma membrane. Hoechst 33342 contains an ethyl substitution on the terminal hydroxyl group (i.e. an ethyl ether group), making it more hydrophobic for easier plasma membrane passage. Iodine Iodine is used in chemistry as an indicator for starch. When starch is mixed with iodine in solution, an intensely dark blue colour develops, representing a starch/iodine complex. Starch is a substance common to most plant cells, and so a weak iodine solution will stain starch present in the cells. Iodine is one component in the staining technique known as Gram staining, used in microbiology. Used as a mordant in Gram's staining, iodine enhances the entrance of the dye through the pores present in the cell wall/membrane. Lugol's solution or Lugol's iodine (IKI) is a brown solution that turns black in the presence of starches and can be used as a cell stain, making the cell nuclei more visible. Used with common vinegar (acetic acid), Lugol's solution is used to identify pre-cancerous and cancerous changes in cervical and vaginal tissues during "Pap smear" follow-up examinations in preparation for biopsy. The acetic acid causes the abnormal cells to blanch white, while the normal tissues stain a mahogany brown from the iodine. Malachite green Malachite green (also known as diamond green B or Victoria green B) can be used as a blue-green counterstain to safranin in the Gimenez staining technique for bacteria. It can also be used to directly stain spores. Methyl green Methyl green is used commonly with bright-field as well as fluorescence microscopes to dye the chromatin of cells so that they are more easily viewed. Methylene blue Methylene blue is used to stain animal cells, such as human cheek cells, to make their nuclei more observable. It is also used to stain blood films in cytology. Neutral red Neutral red (or toluylene red) stains Nissl substance red. It is usually used as a counterstain in combination with other dyes. Nile blue Nile blue (or Nile blue A) stains nuclei blue. It may be used with living cells. Nile red Nile red (also known as Nile blue oxazone) is formed by boiling Nile blue with sulfuric acid. This produces a mix of Nile red and Nile blue. Nile red is a lipophilic stain; it will accumulate in lipid globules inside cells, staining them red. Nile red can be used with living cells. 
It fluoresces strongly when partitioned into lipids, but practically not at all in aqueous solution. Osmium tetroxide (formal name: osmium tetraoxide) Osmium tetraoxide is used in optical microscopy to stain lipids. It dissolves in fats, and is reduced by organic materials to elemental osmium, an easily visible black substance. Propidium iodide Propidium iodide is a fluorescent intercalating agent that can be used to stain cells. Propidium iodide is used as a DNA stain in flow cytometry to evaluate cell viability or DNA content in cell cycle analysis, or in microscopy to visualise the nucleus and other DNA-containing organelles. Propidium iodide cannot cross the membrane of live cells, making it useful to differentiate necrotic, apoptotic and healthy cells. PI also binds to RNA, necessitating treatment with nucleases to distinguish between RNA and DNA staining. Rhodamine Rhodamine is a protein-specific fluorescent stain commonly used in fluorescence microscopy. Safranine Safranine (or Safranine O) is a red cationic dye. It binds to nuclei (DNA) and other tissue polyanions, including glycosaminoglycans in cartilage and mast cells, and components of lignin and plastids in plant tissues. Safranine should not be confused with saffron, an expensive natural dye that is used in some methods to impart a yellow colour to collagen, to contrast with blue and red colours imparted by other dyes to nuclei and cytoplasm in animal (including human) tissues. The incorrect spelling "safranin" is in common use. The -ine ending is appropriate for safranine O because this dye is an amine. Stainability of tissues Tissues which take up stains are called chromatic. Chromosomes were so named because of their ability to absorb a violet stain. Positive affinity for a specific stain may be designated by the suffix -philic. For example, tissues that stain with an azure stain may be referred to as azurophilic. This may also be used for more generalized staining properties, such as acidophilic for tissues that stain by acidic stains (most notably eosin), basophilic when staining in basic dyes, and amphophilic when staining with either acid or basic dyes. In contrast, chromophobic tissues do not take up coloured dye readily. Electron microscopy As in light microscopy, stains can be used to enhance contrast in transmission electron microscopy. Electron-dense compounds of heavy metals are typically used. Phosphotungstic acid Phosphotungstic acid is a common negative stain for viruses, nerves, polysaccharides, and other biological tissue materials. It is mostly used as a 0.5-2% aqueous solution at neutral pH. Phosphotungstic acid is electron dense and stains the background surrounding the specimen dark while the specimen itself remains light. This process is not the normal positive technique for staining, in which the specimen is dark and the background remains light. Osmium tetroxide Osmium tetroxide is used in optical microscopy to stain lipids. It dissolves in fats, and is reduced by organic materials to elemental osmium, an easily visible black substance. Because it is a heavy metal that absorbs electrons, it is perhaps the most common stain used for morphology in biological electron microscopy. It is also used for the staining of various polymers for the study of their morphology by TEM. It is very volatile and extremely toxic. It is a strong oxidizing agent, as the osmium has an oxidation number of +8. 
It aggressively oxidizes many materials, leaving behind a deposit of non-volatile osmium in a lower oxidation state. Ruthenium tetroxide Ruthenium tetroxide is equally volatile and even more aggressive than osmium tetraoxide, and is able to stain even materials that resist the osmium stain, e.g. polyethylene. Other chemicals used in electron microscopy staining include: ammonium molybdate, cadmium iodide, carbohydrazide, ferric chloride, hexamine, indium trichloride, lanthanum(III) nitrate, lead acetate, lead citrate, lead(II) nitrate, periodic acid, phosphomolybdic acid, potassium ferricyanide, potassium ferrocyanide, ruthenium red, silver nitrate, silver proteinate, sodium chloroaurate, thallium nitrate, thiosemicarbazide, uranyl acetate, uranyl nitrate, and vanadyl sulfate.
Biology and health sciences
Basics_3
Biology
411795
https://en.wikipedia.org/wiki/Soil%20type
Soil type
A soil type is a taxonomic unit in soil science. All soils that share a certain set of well-defined properties form a distinctive soil type. Soil type is a technical term of soil classification, the science that deals with the systematic categorization of soils. Every soil of the world belongs to a certain soil type. Soil type is an abstract term: in nature, one does not find soil types as such, but rather soils that belong to a certain soil type. In hierarchical soil classification systems, soil types mostly belong to the higher or intermediate levels. A soil type can normally be subdivided into subtypes, and in many systems several soil types can be combined into entities of a higher category. However, in the first classification system of the United States (Whitney, 1909), the soil type was the lowest level and the mapping unit. For the definition of soil types, some systems primarily use characteristics that are the result of soil-forming processes (pedogenesis). An example is the German soil systematics. Other systems combine characteristics resulting from soil-forming processes with characteristics inherited from the parent material. Examples are the World Reference Base for Soil Resources (WRB) and the USDA soil taxonomy. Other systems do not ask whether the properties are the result of soil formation or not. An example is the Australian Soil Classification. A convenient way to define a soil type is by referring to soil horizons. However, this is not always possible, because some soils in a very initial stage of development may not even have a clear development of horizons. For other soils, it may be more convenient to define the soil type just by referring to some properties common to the whole soil profile. For example, the WRB defines the Arenosols by their sand content. Many soils are more or less strongly influenced by human activities. This is reflected in the definition of many soil types in various classification systems. Because soil type is a very general and widely used term, many soil classification systems do not use it for their definitions. The USDA soil taxonomy has six hierarchical levels, named order, suborder, great group, subgroup, family, and series. The WRB calls the first level the Reference Soil Group. The second level in the WRB is constructed by adding qualifiers, and for the result (the Reference Soil Group plus the qualifiers), no taxonomic term is used.
Physical sciences
Soil science
Earth science
411851
https://en.wikipedia.org/wiki/Sexual%20dysfunction
Sexual dysfunction
Sexual dysfunction is difficulty experienced by an individual or partners during any stage of normal sexual activity, including physical pleasure, desire, preference, arousal, or orgasm. The World Health Organization defines sexual dysfunction as a "person's inability to participate in a sexual relationship as they would wish". This definition is broad and is subject to many interpretations. A diagnosis of sexual dysfunction under the DSM-5 requires a person to feel extreme distress and interpersonal strain for a minimum of six months (except for substance- or medication-induced sexual dysfunction). Sexual dysfunction can have a profound impact on an individual's perceived quality of sexual life. The term sexual disorder may not only refer to physical sexual dysfunction, but to paraphilias as well; this is sometimes termed disorder of sexual preference. A thorough sexual history and assessment of general health and other sexual problems (if any) are important when assessing sexual dysfunction, because it is usually correlated with other psychiatric issues, such as mood disorders, eating and anxiety disorders, and schizophrenia. Assessing performance anxiety, guilt, stress, and worry is integral to the optimal management of sexual dysfunction. Many of the sexual dysfunctions that are defined are based on the human sexual response cycle proposed by William H. Masters and Virginia E. Johnson, and modified by Helen Singer Kaplan. Types Sexual dysfunction can be classified into four categories: sexual desire disorders, arousal disorders, orgasm disorders, and pain disorders. Dysfunction in men and in women is studied in the fields of andrology and gynecology, respectively. Sexual desire disorders Sexual desire disorders, or decreased libido, are characterized by a lack of sexual desire for sexual activity, or of sexual fantasies, over a period of time. The condition ranges from a general lack of sexual desire to a lack of sexual desire for the current partner. The condition may start after a period of normal sexual functioning, or the person may always have had an absence or a lesser intensity of sexual desire. The causes vary considerably but include a decrease in the production of normal estrogen in women, or testosterone in both men and women. Other causes may be aging, fatigue, pregnancy, medications (such as SSRIs), or psychiatric conditions, such as depression and anxiety. While many causes of low sexual desire are cited, only a few of these have ever been the object of empirical research. Sexual arousal disorders Sexual arousal disorders were previously known as frigidity in women and impotence in men, though these have now been replaced with less judgmental terms. Impotence is now known as erectile dysfunction, and frigidity has been replaced with a number of terms describing specific problems that can be broken down into four categories as described by the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders: lack of desire, lack of arousal, pain during intercourse, and lack of orgasm. For both men and women, these conditions can manifest themselves as an aversion to and avoidance of sexual contact with a partner. In men, there may be partial or complete failure to attain or maintain an erection, or a lack of sexual excitement and pleasure in sexual activity. There may be physiological origins to these disorders, such as decreased blood flow or lack of vaginal lubrication. Chronic disease and the partners' relationship can also contribute to dysfunction.
Additionally, postorgasmic illness syndrome (POIS) may cause symptoms upon arousal, including an adrenergic-type presentation: rapid breathing, paresthesia, palpitations, headaches, aphasia, nausea, itchy eyes, fever, muscle pain and weakness, and fatigue. From the onset of arousal, symptoms can persist for up to a week in patients. The cause of this condition is unknown; however, it is believed to be a pathology of either the immune system or the autonomic nervous system. It is defined as a rare disease by the National Institutes of Health, but the prevalence is unknown. It is not thought to be psychiatric in nature, but it may present as anxiety relating to coital activities and may be incorrectly diagnosed as such. There is no known cure or treatment. Erectile dysfunction Erectile dysfunction (ED), or impotence, is a sexual dysfunction characterized by the inability to develop or maintain an erection of the penis. There are various underlying causes of ED, including damage to anatomical structures, psychological causes, medical disease, and drug use. Many of these causes are medically treatable. Psychological ED can often be treated by almost anything that the patient believes in; there is a very strong placebo effect. Physical damage can be more difficult to treat. One leading physical cause of ED is continual or severe damage to the nervi erigentes, which can prevent or delay erection. These nerves arise from the sacral plexus, course beside the prostate, and can be damaged in prostatic and colorectal surgeries. Diseases are also common causes of erectile dysfunction; examples include cardiovascular disease, multiple sclerosis, kidney failure, vascular disease, and spinal cord injury. Cardiovascular disease can decrease blood flow to penile tissues, making it difficult to develop or maintain an erection. Due to the shame and embarrassment felt by some with erectile dysfunction, the subject was taboo for a long time and is the focus of many urban legends. Folk remedies have long been advocated, with some being advertised widely since the 1930s. The introduction of perhaps the first pharmacologically effective remedy for impotence, sildenafil (trade name Viagra), in the 1990s caused a wave of public attention, propelled in part by the newsworthiness of stories about it and heavy advertising. It is estimated that around 30 million men in the United States and 152 million men worldwide have erectile dysfunction. However, social stigma, low health literacy, and social taboos lead to underreporting, which makes an accurate prevalence rate hard to determine. The Latin term impotentia coeundi describes the inability to insert the penis into the vagina, and has been mostly replaced by more precise terms. ED from vascular disease is seen mainly amongst older individuals who have atherosclerosis. Vascular disease is common in individuals who smoke or have diabetes, peripheral vascular disease, or hypertension. Any time blood flow to the penis is impaired, ED can occur. Drugs are also a cause of erectile dysfunction. Individuals who take drugs that lower blood pressure, antipsychotics, antidepressants, sedatives, narcotics, antacids, or alcohol can have problems with sexual function and loss of libido. Hormone deficiency is a relatively rare cause of erectile dysfunction.
In individuals with testicular failure, as in Klinefelter syndrome, or those who have had radiation therapy, chemotherapy, or childhood exposure to the mumps virus, the testes may fail to produce testosterone. Other hormonal causes of erectile failure include brain tumors, hyperthyroidism, hypothyroidism, or adrenal gland disorders. Orgasm disorders Anorgasmia Anorgasmia is classified as persistent delays or absence of orgasm following a normal sexual excitement phase in at least 75% of sexual encounters. The disorder can have physical, psychological, or pharmacological origins. SSRI antidepressants are a common pharmaceutical culprit, as they can delay orgasm or eliminate it entirely. A common physiological cause of anorgasmia is menopause; one in three women report problems obtaining an orgasm during sexual stimulation following menopause. Premature ejaculation Premature ejaculation is when ejaculation occurs before the partner achieves orgasm, or a mutually satisfactory length of time has passed during intercourse. There is no correct length of time for intercourse to last, but generally, premature ejaculation is thought to occur when ejaculation occurs in under two minutes from the time of the insertion of the penis. For a diagnosis, the patient must have a chronic history of premature ejaculation, poor ejaculatory control, and the problem must cause feelings of dissatisfaction as well as distress for the patient, the partner, or both. Premature ejaculation has historically been attributed to psychological causes, but newer theories suggest that premature ejaculation may have an underlying neurobiological cause that may lead to rapid ejaculation. Post-orgasmic disorders Post-orgasmic disorders cause symptoms shortly after orgasm or ejaculation. Post-coital tristesse (PCT) is a feeling of melancholy and anxiety after sexual intercourse that lasts for up to two hours. Sexual headaches occur in the skull and neck during sexual activity, including masturbation, arousal or orgasm. In men, post orgasmic illness syndrome (POIS) causes severe muscle pain throughout the body and other symptoms immediately following ejaculation. These symptoms last for up to a week. Some doctors speculate that the frequency of POIS "in the population may be greater than has been reported in the academic literature", and that many with POIS are undiagnosed. POIS may involve adrenergic symptoms: rapid breathing, paresthesia, palpitations, headaches, aphasia, nausea, itchy eyes, fever, muscle pain and weakness, and fatigue. The etiology of this condition is unknown; however, it is believed to be a pathology of either the immune system or autonomic nervous systems. It is defined as a rare disease by the NIH, but the prevalence is unknown. It is not thought to be psychiatric in nature, but it may present as anxiety relating to coital activities and thus may be incorrectly diagnosed as such. There is no known cure or treatment. Dhat syndrome is another condition which occurs in men: it is a culture-bound syndrome which causes anxious and dysphoric mood after sex. It is distinct from the low-mood and concentration problems (acute aphasia) seen in POIS. Sexual pain disorders Sexual pain disorders in women include dyspareunia (painful intercourse) and vaginismus (an involuntary spasm of the muscles of the vaginal wall that interferes with intercourse). Dyspareunia may be caused by vaginal dryness. 
Poor lubrication may result from insufficient excitement and stimulation, or from hormonal changes caused by menopause, pregnancy, or breastfeeding. Irritation from contraceptive creams and foams can also cause dryness, as can fear and anxiety about sex. It is unclear exactly what causes vaginismus, but it is thought that past sexual trauma (such as rape or abuse) may play a role. Another female sexual pain disorder is vulvodynia, or vulvar vestibulitis when localized to the vulval vestibule. In this condition, women experience burning pain during sex, which seems to be related to problems with the skin in the vulvar and vaginal areas. Its cause is unknown. In men, structural abnormalities of the penis like Peyronie's disease can make sexual intercourse difficult and/or painful. The disease is characterized by thick fibrous bands in the penis that lead to excessive curvature during erection. It has an incidence estimated at 0.4–3% or more, is most common in men 40–70, and has no certain cause. Risk factors include genetics, minor trauma (potentially during cystoscopy or transurethral resection of the prostate), chronic systemic vascular diseases, smoking, and alcohol consumption. Priapism is a painful erection that occurs for several hours and occurs in the absence of sexual stimulation. This condition develops when blood is trapped in the penis and is unable to drain. If the condition is not promptly treated, it can lead to severe scarring and permanent loss of erectile function. The disorder is most common in young men and children. Individuals with sickle-cell disease and those who use certain medications can often develop this disorder. Causes There are many factors which may result in a person experiencing a sexual dysfunction. These may result from emotional or physical causes. Emotional factors include interpersonal or psychological problems, which include depression, sexual fears or guilt, past sexual trauma, and sexual disorders. Sexual dysfunction is especially common among people who have anxiety disorders. Ordinary anxiety can cause erectile dysfunction in men without psychiatric problems, but clinically diagnosable disorders such as panic disorder commonly cause avoidance of intercourse and premature ejaculation. Pain during intercourse is often a comorbidity of anxiety disorders among women. Physical factors that can lead to sexual dysfunctions include the use of drugs, such as alcohol, nicotine, narcotics, stimulants, antihypertensives, antihistamines, and some psychotherapeutic drugs. For women, almost any physiological change that affects the reproductive system—premenstrual syndrome, pregnancy and the postpartum period, and menopause—can have an adverse effect on libido. Back injuries may also impact sexual activity, as can problems with an enlarged prostate gland, problems with blood supply, or nerve damage (as in sexual dysfunction after spinal cord injuries). Diseases such as diabetic neuropathy, multiple sclerosis, tumors, and, rarely, tertiary syphilis may also impact activity, as can the failure of various organ systems (such as the heart and lungs), endocrine disorders (thyroid, pituitary, or adrenal gland problems), hormonal deficiencies (low testosterone, other androgens, or estrogen), and some birth defects. In the context of heterosexual relationships, one of the main reasons for the decline in sexual activity among these couples is the male partner experiencing erectile dysfunction. 
This can be very distressing for the male partner, causing poor body image, and it can also be a major source of low desire for these men. In aging women, it is natural for the vagina to narrow and atrophy. If a woman does not participate in sexual activity regularly (in particular, activities involving vaginal penetration), she will not be able to immediately accommodate a penis without risking pain or injury if she decides to engage in penetrative intercourse. This can turn into a vicious cycle that often leads to female sexual dysfunction. According to Emily Wentzell, American culture has anti-aging sentiments that have caused sexual dysfunction to become "an illness that needs treatment" instead of viewing it as a natural part of the aging process. Not all cultures seek treatment; for example, a population of men living in Mexico often accept ED as a normal part of their maturing sexuality. With SSRI medication Sexual problems are common with SSRIs, which can cause anorgasmia, erectile dysfunction, diminished libido, genital numbness, and sexual anhedonia (pleasureless orgasm). Poor sexual function is also one of the most common reasons people stop the medication. In some cases, symptoms of sexual dysfunction may persist after discontinuation of SSRIs. This combination of symptoms is sometimes referred to as post-SSRI sexual dysfunction. Pelvic floor dysfunction Pelvic floor dysfunction can be an underlying cause of sexual dysfunction in both women and men, and is treatable by pelvic floor physical therapy, a type of physical therapy designed to restore the health and function of the pelvic floor and surrounding areas. Female sexual dysfunction Several theories have looked at female sexual dysfunction, from medical to psychological perspectives. Three social psychological theories include: the self-perception theory, the overjustification hypothesis, and the insufficient justification hypothesis: Self-perception theory: people make attributions about their own attitudes, feelings, and behaviours by relying on their observations of external behaviours and the circumstances in which those behaviours occur Overjustification hypothesis: when an external reward is given to a person for performing an intrinsically rewarding activity, the person's intrinsic interest will decrease Insufficient justification: based on the classic cognitive dissonance theory (inconsistency between two cognitions or between a cognition and a behavior will create discomfort), this theory states that people will alter one of the cognitions or behaviours to restore consistency and reduce distress The prevalence of sexual dysfunction in women is not well known due to a paucity of epidemiological studies, inconsistent criteria for sexual dysfunction across different studies and incomplete recruitment, with studies often excluding women who were without a partner or who were sexually inactive. However, based on incomplete population based studies from the United States, Europe and Australia, unspecified arousal dysfunction (in which a woman is unable to achieve desirable genital or non-genital sexual arousal despite adequate stimulation and desire) was present in 3-9% of women aged 18–44, 5-7.5% aged 45–64 and 3-6% in women older than 65. Anorgasmia with distress (in which women were unable to achieve an orgasm) was present in 7-8% of women younger than 40, 5-7% aged 40–64 and 3-6% of those older than 65. 
Poor sexual self-image leading to distress was seen in 13.4% of women younger than 40 in an Australian population-based study. The importance of how a woman perceives her behavior should not be underestimated. Many women perceive sex as a chore as opposed to a pleasurable experience, and they tend to consider themselves sexually inadequate, which in turn does not motivate them to engage in sexual activity. Several factors influence a woman's perception of her sexual life. These can include race, gender, ethnicity, educational background, socioeconomic status, sexual orientation, financial resources, culture, and religion. Cultural differences are also present in how women view menopause and its impact on health, self-image, and sexuality. A study found that African American women are the most optimistic about menopausal life, Caucasian women the most anxious, Asian women the most inhibited about their symptoms, and Hispanic women the most stoic. Since these women have sexual problems, their sexual lives with their partners can become a burden without pleasure, and they may eventually lose interest completely in sexual activity. Some of the women found it hard to be aroused mentally, while others had physical problems. Several factors can affect female dysfunction, such as situations in which women do not trust their sex partners, the environment where sex occurs being uncomfortable, or an inability to concentrate on the sexual activity due to a bad mood or burdens from work. Other factors include physical discomfort or difficulty in achieving arousal, which could be caused by aging or changes in the body's condition. Sexual assault has been associated with excessive menstrual bleeding, genital burning, and painful intercourse (attributable to disease, injury, or otherwise), medically unexplained dysmenorrhea, menstrual irregularity, and lack of sexual pleasure. Physically violent assaults and those committed by strangers were most strongly related to reproductive symptoms. Multiple assaults, assaults accomplished by persuasion, spousal assault, and completed intercourse were most strongly related to sexual symptoms. Assault was occasionally associated more strongly with reproductive symptoms among women with lower income or less education, possibly because of economic stress or differences in assault circumstances. Associations with unexplained menstrual irregularity were strongest among African American women; ethnic differences in reported circumstances of assault appeared to account for these differences. Assault was associated with sexual indifference only among Latinas. Menopause The most prevalent female sexual dysfunctions linked to menopause include lack of desire and libido; these are predominantly associated with hormonal physiology. Specifically, the decline in serum estrogens causes these changes in sexual functioning. Androgen depletion may also play a role, but current knowledge about this is less clear. The hormonal changes that take place during the menopausal transition have been suggested to affect women's sexual response through several mechanisms, some more conclusive than others. Aging in women Whether or not aging directly affects women's sexual functioning during menopause is controversial. However, many studies have demonstrated that aging has a powerful impact on sexual function and dysfunction in women, specifically in the areas of desire, sexual interest, and frequency of orgasm.
The primary predictor of sexual response throughout menopause is prior sexual functioning, which means that it is important to understand how the physiological changes in men and women can affect sexual desire. Despite the apparent negative impact that menopause can have on sexuality and sexual functioning, sexual confidence and well-being can improve with age and menopausal status. Testosterone, along with its metabolite dihydrotestosterone, is important to normal sexual function in men and women. Dihydrotestosterone is the most prevalent androgen in both men and women. Testosterone levels in women at age 60 are on average about half of what they were before the women were 40. Although this decline is gradual for most women, those who have undergone bilateral oophorectomy experience a sudden drop in testosterone levels, as the ovaries produce 40% of the body's circulating testosterone. Sexual desire has been related to three separate components: drive, beliefs and values, and motivation. Particularly in postmenopausal women, drive fades and is no longer the initial step in a woman's sexual response. Diagnosis List of disorders DSM The fourth edition of the Diagnostic and Statistical Manual of Mental Disorders lists the following sexual dysfunctions: Hypoactive sexual desire disorder (see also asexuality, which is not classified as a disorder) Sexual aversion disorder (avoidance of or lack of desire for sexual intercourse) Female sexual arousal disorder (failure of normal lubricating arousal response) Male erectile disorder Female orgasmic disorder (see anorgasmia) Male orgasmic disorder (see anorgasmia) Premature ejaculation Dyspareunia Vaginismus Additional DSM sexual disorders that are not sexual dysfunctions include: Paraphilias PTSD due to genital mutilation or childhood sexual abuse Other sexual problems Sexual dissatisfaction (non-specific) Lack of sexual desire Anorgasmia Impotence Sexually transmitted infections Delay or absence of ejaculation, despite adequate stimulation Inability to control timing of ejaculation Inability to relax vaginal muscles enough to allow intercourse Inadequate vaginal lubrication preceding and during intercourse Burning pain on the vulva or in the vagina with contact to those areas Unhappiness or confusion related to sexual orientation Transsexual and transgender people may have sexual problems before or after surgery. Persistent sexual arousal syndrome Sexual addiction Hypersexuality All forms of female genital cutting Post-orgasmic diseases, such as Dhat syndrome, PCT, POIS, and sexual headaches. Hard flaccid syndrome Treatment Males Several decades ago, the medical community believed most sexual dysfunction cases were related to psychological issues. Although this may be true for a portion of men, the vast majority of cases have now been identified to have a physical cause or correlation. If the sexual dysfunction is deemed to have a psychological component or cause, psychotherapy can help. Situational anxiety arises from an earlier bad incident or lack of experience, and often leads to development of fear towards sexual activity and avoidance which enters a cycle of increased anxiety and desensitization of the penis. In some cases, erectile dysfunction may be due to marital disharmony. Marriage counseling sessions are recommended in this situation. Lifestyle changes such as discontinuing tobacco smoking or substance use can also treat some types of ED. 
Several oral medications like Viagra, Cialis, and Levitra have become available to alleviate ED and have become first-line therapy. These medications provide an easy, safe, and effective treatment solution for approximately 60% of men. In the remainder, the medications may not work because of an incorrect diagnosis or because the condition is long-standing. Another type of medication that is effective in roughly 85% of men is called intracavernous pharmacotherapy, which involves injecting a vasodilator drug directly into the penis to stimulate an erection. This method carries an increased risk of priapism if used in conjunction with other treatments, as well as localized pain. Premature ejaculation is treated with behavioural techniques such as the squeeze technique and the stop-start technique. In the squeeze technique, the area between the head and the shaft of the penis is pressed with the index finger and thumb just before ejaculation. In the stop-start technique, the male partner stops sexual intercourse just before ejaculation and waits for the sensation of impending ejaculation to pass. Both techniques are repeated many times. When conservative therapies fail, are an unsatisfactory treatment option, or are contraindicated for use, the insertion of a penile implant may be selected by the patient. Technological advances have made the insertion of a penile implant a safe option for the treatment of ED, which provides the highest patient and partner satisfaction rates of all available ED treatment options. Pelvic floor physical therapy has been shown to be a valid treatment for men with sexual problems and pelvic pain. The 2020 guidelines from the American College of Physicians support the discussion of testosterone treatment in adult men with age-related low levels of testosterone who have sexual dysfunction. They recommend yearly evaluation regarding possible improvement and, if none, discontinuing testosterone; intramuscular treatments should be considered rather than transdermal treatments due to costs and because the effectiveness and harms of the two methods are similar. Testosterone treatment for reasons other than possible improvement of sexual dysfunction may not be recommended. Females In 2015, flibanserin was approved in the US to treat decreased sexual desire in women. While it is effective for some women, it has been criticized for its limited efficacy, and has many warnings and contraindications that limit its use. Flibanserin was found to increase pleasurable sexual experiences by 0.5 events per month in trials. Possible side effects include dizziness, drowsiness, nausea and fatigue. Flibanserin should not be taken with alcohol. Bremelanotide has been shown to modestly increase sexual desire in women, but it has not shown evidence of increasing the number of satisfactory sexual experiences per month. Possible side effects include nausea, flushing and headaches. Women experiencing pain with intercourse are often prescribed pain relievers or desensitizing agents; others are prescribed vaginal lubricants. Many women with sexual dysfunction are also referred to a counselor or sex therapist. Counselling for female sexual dysfunction, including sexual counselling, cognitive behavioral therapy, body awareness counselling, and couples counselling, has been found to be helpful. Estrogen replacement therapy, outside of the indicated use for menopausal symptoms, is not recommended for the treatment of sexual dysfunction in women.
Menopause Estrogens are responsible for the maintenance of collagen, elastic fibers, and vasculature of the urogenital tract, all of which are important in maintaining vaginal structure and functional integrity; they are also important for maintaining vaginal pH and moisture levels, both of which help to keep the tissues lubricated and protected. Prolonged estrogen deficiency leads to atrophy, fibrosis, and reduced blood flow to the urogenital tract, which cause menopausal symptoms such as vaginal dryness and pain related to sexual activity and/or intercourse. Women experiencing vaginal dryness who cannot use commercial lubricants may be able to use coconut oil as an alternative. Androgen therapy for hypoactive sexual desire disorder has a small benefit but its safety is not known. It is not approved as a treatment in the United States. It is more commonly used among women who have had an oophorectomy or are in a postmenopausal state. However, like most treatments, this is also controversial. One study found that after a 24-week trial, women taking androgens had higher scores of sexual desire compared to a placebo group. As with all pharmacological drugs, there are side effects in using androgens, which include hirsutism, acne, polycythaemia, increased high-density lipoproteins, cardiovascular risks, and endometrial hyperplasia. Alternative treatments include topical estrogen creams and gels that can be applied to the vulva or vagina area to treat vaginal dryness and atrophy. Research In modern times, clinical study of sexual problems is usually dated back no earlier than 1970 when Masters and Johnson's Human Sexual Inadequacy was published. It was the result of over a decade of work at the Reproductive Biology Research Foundation in St. Louis, involving 790 cases. The work grew from Masters and Johnson's earlier Human Sexual Response (1966). Prior to Masters and Johnson, the clinical approach to sexual problems was largely derived from Sigmund Freud. It was held to be psychopathology and approached with a certain pessimism regarding the chance of help or improvement. Sexual problems were merely symptoms of a deeper malaise, and the diagnostic approach was from the psychopathological viewpoint. There was little distinction between difficulties in function and variations nor between perversion and problems. Despite work by psychotherapists such as Balint sexual difficulties were crudely split into frigidity or impotence, terms which acquired negative connotations in popular culture. Human Sexual Inadequacy moved thinking from psychopathology to learning; psychopathological problems would only be considered if a problem did not respond to educative treatment. Treatment was directed at couples, whereas before partners would be seen individually. Masters and Johnson believed that sex was a joint act, and that sexual communication was the key issue to sexual problems, not the specifics of an individual problem. They also proposed co-therapy, with a pair of therapists to match the clients, arguing that a lone male therapist could not fully comprehend female difficulties. The basic Masters and Johnson treatment program was an intensive two-week program to develop efficient sexual communication. The program is couple-based and therapist-led, and began with discussion and sensate focus between the couple to develop shared experiences. From the experiences, specific difficulties could be determined and approached with a specific therapy. 
In a limited number of male-only cases (41) Masters and Johnson developed the use of a female surrogate, which was abandoned over the ethical, legal, and other problems it raised. In defining the range of sexual problems, Masters and Johnson defined a boundary between dysfunction and deviations. Dysfunctions were transitory and experienced by most people, and included male primary or secondary impotence, premature ejaculation, and ejaculatory incompetence; female primary orgasmic dysfunction and situational orgasmic dysfunction; pain during intercourse (dyspareunia) and vaginismus. According to Masters and Johnson, sexual arousal and climax are a normal physiological process of every functionally intact adult, but they can be inhibited despite being autonomic responses. Masters and Johnson's treatment program for dysfunction was 81.1% successful. Despite Masters and Johnson's work, sexual therapy in the US was overrun by enthusiastic rather than systematic approaches, blurring the space between "enrichment" and therapy.
Biology and health sciences
Specific diseases
Health
411879
https://en.wikipedia.org/wiki/Dysmenorrhea
Dysmenorrhea
Dysmenorrhea, also known as period pain, painful periods or menstrual cramps, is pain during menstruation. Its usual onset occurs around the time that menstruation begins. Symptoms typically last less than three days. The pain is usually in the pelvis or lower abdomen. Other symptoms may include back pain, diarrhea or nausea. Dysmenorrhea can occur without an underlying problem. Underlying issues that can cause dysmenorrhea include uterine fibroids, adenomyosis, and most commonly, endometriosis. It is more common among those with heavy periods, irregular periods, those whose periods started before twelve years of age and those who have a low body weight. A pelvic exam and ultrasound in individuals who are sexually active may be useful for diagnosis. Conditions that should be ruled out include ectopic pregnancy, pelvic inflammatory disease, interstitial cystitis and chronic pelvic pain. Dysmenorrhea occurs less often in those who exercise regularly and those who have children early in life. Treatment may include the use of a heating pad. Medications that may help include NSAIDs such as ibuprofen, hormonal birth control and the IUD with progestogen. Taking vitamin B1 or magnesium may help. Evidence for yoga, acupuncture and massage is insufficient. Surgery may be useful if certain underlying problems are present. Estimates of the percentage of female adolescents and women of reproductive age affected are between 50% and 90%. It is the most common menstrual disorder. Typically, it starts within a year of the first menstrual period. When there is no underlying cause, often the pain improves with age or following having a child. Signs and symptoms The main symptom of dysmenorrhea is pain concentrated in the lower abdomen or pelvis. It is also commonly felt in the right or left side of the abdomen. It may radiate to the thighs and lower back. Symptoms often co-occurring with menstrual pain include nausea and vomiting, diarrhea, headache, dizziness, disorientation, fainting and fatigue. Symptoms of dysmenorrhea often begin immediately after ovulation and can last until the end of menstruation. This is because dysmenorrhea is often associated with changes in hormonal levels in the body that occur with ovulation. In particular, prostaglandins induce abdominal contractions that can cause pain and gastrointestinal symptoms. The use of certain types of birth control pills can prevent the symptoms of dysmenorrhea because they stop ovulation from occurring. Dysmenorrhea is associated with increased pain sensitivity and heavy menstrual bleeding. For many, primary dysmenorrhea symptoms gradually subside after their mid-20s. Pregnancy has also been demonstrated to lessen the severity of dysmenorrhea, when menstruation resumes. However, dysmenorrhea can continue until menopause. 5–15% of women with dysmenorrhea experience symptoms severe enough to interfere with daily activities. Causes There are two types of dysmenorrhea, primary and secondary, based on the absence or presence of an underlying cause. Primary dysmenorrhea occurs without an associated underlying condition, while secondary dysmenorrhea has a specific underlying cause, typically a condition that affects the uterus or other reproductive organs. Painful menstrual cramps can result from an excess of prostaglandins released from the uterus. Prostaglandins cause the uterine muscles to tighten and relax causing the menstrual cramps. This type of dysmenorrhea is called primary dysmenorrhea. 
Primary dysmenorrhea usually begins in the teens soon after the first period. Secondary dysmenorrhea is the type of dysmenorrhea caused by another condition such as endometriosis, uterine fibroids, uterine adenomyosis, and polycystic ovary syndrome. Rarely, birth defects, intrauterine devices, certain cancers, and pelvic infections cause secondary dysmenorrhea. If the pain occurs between menstrual periods, lasts longer than the first few days of the period, or is not adequately relieved by the use of nonsteroidal anti-inflammatory drugs (NSAIDs) or hormonal contraceptives, this could indicate another condition causing secondary dysmenorrhea. Membranous dysmenorrhea is a type of secondary dysmenorrhea in which the entire lining of the uterus is shed all at once rather than over the course of several days as is typical. Signs and symptoms include spotting, bleeding, abdominal pain, and menstrual cramps. The resulting uterine tissue is called a decidual cast and must be passed through the cervix and vagina. It typically takes the shape of the uterus itself. Membranous dysmenorrhea is extremely rare and there are very few reported cases. The underlying cause is unknown, though some evidence suggests it may be associated with ectopic pregnancy or the use of hormonal contraception. When laparoscopy is used for diagnosis, the most common cause of dysmenorrhea is endometriosis, in approximately 70% of adolescents. Other causes of secondary dysmenorrhea include leiomyoma, adenomyosis, ovarian cysts, pelvic congestion, and cavitated and accessory uterine mass. Risk factors Genetic factors, stress and depression are risk factors for dysmenorrhea. Risk factors for primary dysmenorrhea include: early age at menarche, long or heavy menstrual periods, smoking, and a family history of dysmenorrhea. Dysmenorrhea is a highly polygenic and heritable condition. There is strong evidence of familial predisposition and genetic factors increasing susceptibility to dysmenorrhea. Multiple polymorphisms and genetic variants, both in metabolic genes and in genes responsible for immunity, have been associated with the disorder. Three distinct possible phenotypes have been identified for dysmenorrhea: "multiple severe symptoms", "mild localized pain", and "severe localized pain". While there are likely differences in genotypes underlying each phenotype, the specific correlating genotypes have not yet been identified. These phenotypes are prevalent at different levels in different population demographics, suggesting different allelic frequencies across populations (in terms of race, ethnicity, and nationality). Polymorphisms in the ESR1 gene have been commonly associated with severe dysmenorrhea. Variant genotypes in metabolic genes such as CYP2D6 and GSTM1 have similarly been correlated with an increased risk of severe menstrual pain, but not with moderate or occasional phenotypes. The occurrence and frequency of secondary dysmenorrhea (SD) have been associated with different alleles and genotypes of those with underlying pathologies, which can affect the pelvic region or other areas of the body. Individuals with disorders may have genetic mutations related to their diagnoses which produce dysmenorrhea as a symptom of their primary diagnosis. It has been found that those with fibromyalgia who have the ESR1 XbaI gene variation and possess the XbaI AA genotype are more susceptible to experiencing mild to severe menstrual pain resulting from their primary pathology.
Commonly, genetic mutations which are a hallmark of or associated with specific disorders can produce dysmenorrhea as a symptom which accompanies the primary disorder. In contrast with secondary dysmenorrhea, primary dysmenorrhea (PD) has no underlying pathology. Genetic mutations and variations have therefore been thought to underlie this disorder and contribute to the pathogenesis of PD. There are multiple single-nucleotide polymorphisms (SNPs) associated with PD. Two of the most well-studied are an SNP in the promoter of MIF and an SNP in the tumor necrosis factor (TNF-α) gene. When a cytosine 173 base pairs upstream of the macrophage migration inhibitory factor (MIF) promoter was replaced by a guanine, there was an associated increase in the likelihood of the individual experiencing PD. While a CC/GG genotype led to an increase in the likelihood of the individual experiencing severe menstrual pain, a CC/GC genotype led to a more significant likelihood of the disorder impacting the individual overall, increasing the likelihood of any of the three phenotypes. A second associated SNP was located 308 base pairs upstream from the start codon of the TNF-α gene, in which guanine was substituted for adenine. A GG genotype at this locus is associated with the disorder and has been proposed as a possible genetic marker to predict PD. There has also been an association between mutations in the MEFV gene and dysmenorrhea, and these mutations are considered to be causative. The phenotypes associated with these mutations in the MEFV gene have been better studied; individuals who are heterozygous for these mutations are more likely to be affected by PD, which presents as a severe pain phenotype. Genes related to immunity have been identified as playing a significant role in PD as well. IL1A was found to be the gene most associated with primary dysmenorrhea in terms of its phenotypic impact. This gene encodes a protein essential for the regulation of immunity and inflammation. While the mechanism of how it influences PD has yet to be discovered, it is assumed that possible mutations in IL1A, or in genes which interact with it, affect the regulation of inflammation during menstruation. These mutations may therefore affect pain responses during menstruation, leading to the differing phenotypes associated with dysmenorrhea. Two additional well-studied SNPs suspected to contribute to PD were found in ZMIZ1 (the mutant allele called rs76518691) and NGF (the mutant allele called rs7523831). Both ZMIZ1 and NGF are associated with autoimmune responses and diseases, as well as with pain response. The implication of these genes in dysmenorrhea is significant, as it suggests that mutations affecting the immune system (specifically the inflammatory response) and the pain response may also be a cause of primary dysmenorrhea. Mechanism The underlying mechanism of primary dysmenorrhea is contraction of the muscles of the uterus, which induces local ischemia. During an individual's menstrual cycle, the endometrium thickens in preparation for potential pregnancy. After ovulation, if the ovum is not fertilized and there is no pregnancy, the built-up uterine tissue is not needed and is thus shed. Prostaglandins and leukotrienes are released during menstruation, due to the build-up of omega-6 fatty acids. The release of prostaglandins and other inflammatory mediators in the uterus causes the uterus to contract and can result in systemic symptoms such as nausea, vomiting, bloating and headaches or migraines.
Prostaglandins are thought to be a major factor in primary dysmenorrhea. When the uterine muscles contract, they constrict the blood supply to the tissue of the endometrium, which, in turn, breaks down and dies. These uterine contractions continue as they squeeze the old, dead endometrial tissue through the cervix and out of the body through the vagina. These contractions, and the resulting temporary oxygen deprivation to nearby tissues, are thought to be responsible for the pain or cramps experienced during menstruation. Compared with non-dysmenorrheic individuals, those with primary dysmenorrhea have increased activity of the uterine muscle, with increased contractility and increased frequency of contractions. Diagnosis The diagnosis of dysmenorrhea is usually made simply on a medical history of menstrual pain that interferes with daily activities. However, there is no universally accepted standard technique for quantifying the severity of menstrual pain. There are various quantification models, called menstrual symptometrics, that can be used to estimate the severity of menstrual pain as well as correlate it with pain in other parts of the body, menstrual bleeding and degree of interference with daily activities. Further work-up Once a diagnosis of dysmenorrhea is made, further workup is required to search for any secondary underlying cause, in order to treat it specifically and to avoid aggravating a potentially serious underlying condition. Further work-up includes a specific medical history of symptoms and menstrual cycles and a pelvic examination. Based on results from these, additional exams and tests may be warranted, such as: Gynecologic ultrasonography Laparoscopy Management Treatments that target the mechanism of pain include non-steroidal anti-inflammatory drugs (NSAIDs) and hormonal contraceptives. NSAIDs inhibit prostaglandin production. With long-term treatment, hormonal birth control reduces the amount of uterine fluid/tissue expelled from the uterus, resulting in shorter, less painful menstruation. These drugs are typically more effective than treatments that do not target the source of the pain (e.g. acetaminophen). Regular physical activity may limit the severity of uterine cramps. NSAIDs Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen and naproxen are effective in relieving the pain of primary dysmenorrhea. They can have side effects of nausea, dyspepsia, peptic ulcer, and diarrhea. Hormonal birth control Use of hormonal birth control may improve symptoms of primary dysmenorrhea. A 2009 systematic review (updated in 2023) found evidence that the low or medium doses of estrogen contained in the birth control pill reduce pain associated with dysmenorrhea. In addition, no differences between different birth control pill preparations were found. The review did not determine if the estrogen in birth control pills was more effective than NSAIDs. Norplant and Depo-Provera are also effective, since these methods often induce amenorrhea. The intrauterine system (Mirena IUD) may be useful in reducing symptoms. Other A review indicated the effectiveness of transdermal nitroglycerin. Reviews indicated that magnesium supplementation seemed to be effective. A review indicated the usefulness of calcium channel blockers. Heat is effective compared to NSAIDs and is a preferred option for many patients, as it is easy to access and has no known side effects.
Tamoxifen has been used effectively to reduce uterine contractility and pain in dysmenorrhea patients. There is some evidence that exercises performed 3 times a week for about 45 to 60 minutes, without particular intensity, reduces menstrual pain. Alternative medicine There is insufficient evidence to recommend the use of many herbal or dietary supplements for treating dysmenorrhea, including melatonin, vitamin E, fennel, dill, chamomile, cinnamon, damask rose, rhubarb, guava, and uzara. Further research is recommended to follow up on weak evidence of benefit for: fenugreek, ginger, valerian, zataria, zinc sulphate, fish oil, and vitamin B1. A 2016 review found that evidence of safety is insufficient for most dietary supplements. There is some evidence for the use of fenugreek. One review found thiamine and vitamin E to be likely effective. It found the effects of fish oil and vitamin B12 to be unknown. Reviews found tentative evidence that ginger powder may be effective for primary dysmenorrhea. Reviews have found promising evidence for Chinese herbal medicine for primary dysmenorrhea, but that the evidence was limited by its poor methodological quality. A 2016 Cochrane review of acupuncture for dysmenorrhea concluded that it is unknown if acupuncture or acupressure is effective. There were also concerns of bias in study design and in publication, insufficient reporting (few looked at adverse effects), and that they were inconsistent. There are conflicting reports in the literature, including one review which found that acupressure, topical heat, and behavioral interventions are likely effective. It found the effect of acupuncture and magnets to be unknown. A 2007 systematic review found some scientific evidence that behavioral interventions may be effective, but that the results should be viewed with caution due to poor quality of the data. Spinal manipulation does not appear to be helpful. Although claims have been made for chiropractic care, under the theory that treating subluxations in the spine may decrease symptoms, a 2006 systematic review found that overall no evidence suggests that spinal manipulation is effective for treatment of primary and secondary dysmenorrhea. Valerian, Humulus lupulus and Passiflora incarnata may be safe and effective in the treatment of dysmenorrhea. TENS A 2011 review stated that high-frequency transcutaneous electrical nerve stimulation may reduce pain compared with sham TENS, but seems to be less effective than ibuprofen. Surgery One treatment of last resort is presacral neurectomy. Epidemiology Dysmenorrhea is one of the most common gynecological problems, regardless of age or race. It is one of the most frequently identified causes of pelvic pain in those who menstruate. Dysmenorrhea is estimated to affect between 50% and 90% of female adolescents and women of reproductive age. Another report states that estimates can vary between 16% and 91% of surveyed individuals, with severe pain observed in 2% to 29% of menstruating individuals. Reports of dysmenorrhea are greatest among individuals in their late teens and 20s, with reports usually declining with age. The prevalence in adolescent females has been reported to be 67.2% by one study and 90% by another. It has been stated that there is no significant difference in prevalence or incidence between races, although one study of Hispanic adolescent females indicated an elevated prevalence and impact in this group. 
Another study indicated that dysmenorrhea was present in 36.4% of participants, and was significantly associated with lower age and lower parity. Childbearing is said to relieve dysmenorrhea, but this does not always occur. One study indicated that in nulliparous individuals with primary dysmenorrhea, the severity of menstrual pain decreased significantly after age 40. A survey in Norway showed that 14 percent of females between the ages of 20 and 35 experience symptoms so severe that they stay home from school or work. Among adolescent girls, dysmenorrhea is the leading cause of recurrent short-term school absence. A study from India conducted by Dr. RimJhim Kumari found that painful menstruation affected 66.7% of the girls surveyed, of whom only 27% sought medical advice from a doctor.
Biology and health sciences
Specific diseases
Health
411951
https://en.wikipedia.org/wiki/Human%20biology
Human biology
Human biology is an interdisciplinary area of academic study that examines humans through the influences and interplay of many diverse fields such as genetics, evolution, physiology, anatomy, epidemiology, anthropology, ecology, nutrition, population genetics, and sociocultural influences. It is closely related to the biomedical sciences, biological anthropology and other biological fields tying in various aspects of human functionality. It was not until the 20th century that the biogerontologist Raymond Pearl, founder of the journal Human Biology, used the term "human biology" to describe a distinct subfield within biology. It is also an umbrella term that describes all biological aspects of the human body, typically using the human body as a type organism for Mammalia, and in that context it is the basis for many undergraduate university degrees and modules. Most aspects of human biology are identical or very similar to general mammalian biology. In particular, and as examples, humans: maintain their body temperature; have an internal skeleton; have a circulatory system; have a nervous system to provide sensory information and to operate and coordinate muscular activity; have a reproductive system in which they bear live young and produce milk; have an endocrine system that produces and eliminates hormones and other biochemical signalling agents; have a respiratory system in which air is inhaled into the lungs and oxygen is used to produce energy; have an immune system to protect against disease; and excrete waste as urine and feces. History The study of integrated human biology started in the 1920s, sparked by Charles Darwin's theories, which were re-conceptualized by many scientists. Human attributes such as child growth and genetics were brought into question, and thus human biology emerged as a distinct field of study. Typical human attributes The key aspects of human biology are those ways in which humans are substantially different from other mammals. Humans have a very large brain in a head that is very large for the size of the animal. This large brain has enabled a range of unique attributes, including the development of complex languages and the ability to make and use a complex range of tools. The upright stance and bipedal locomotion are not unique to humans, but humans are the only species to rely almost exclusively on this mode of locomotion. This has resulted in significant changes in the structure of the skeleton, including the articulation of the pelvis and the femur and the articulation of the head. In comparison with most other mammals, humans are very long-lived, with an average age at death in the developed world of nearly 80 years. Humans also have the longest childhood of any mammal, with sexual maturity taking 12 to 16 years on average to be reached. Humans lack fur. Although there is a residual covering of fine hair, which may be more developed in some people, and localised hair covering on the head, axillary and pubic regions, in terms of protection from cold, humans are almost naked. The reason for this development is still much debated. The human eye can see objects in colour but is not well adapted to low-light conditions. The senses of smell and taste are present but are relatively inferior to those of a wide range of other mammals. Human hearing is efficient but lacks the acuity of some other mammals.
Similarly, the human sense of touch is well developed, especially in the hands, where dextrous tasks are performed, but the sensitivity is still significantly less than in other animals, particularly those equipped with sensory bristles such as cats. Scientific investigation As a scientific discipline, human biology tries to understand, and promotes research on, humans as living beings. It makes use of various scientific methods, such as experiments and observations, to detail the biochemical and biophysical foundations of human life and to describe and formulate the underlying processes using models. As a basic science, it provides the knowledge base for medicine. A number of sub-disciplines include anatomy, cytology, histology and morphology. Medicine The capabilities of the human brain and human dexterity in making and using tools have enabled humans to understand their own biology through scientific experiment, including dissection, autopsy and prophylactic medicine, which has, in turn, enabled humans to extend their life-span by understanding and mitigating the effects of diseases. Understanding human biology has enabled and fostered a wider understanding of mammalian biology and, by extension, the biology of all living organisms. Nutrition Human nutrition is typical of mammalian omnivorous nutrition, requiring a balanced input of carbohydrates, fats, proteins, vitamins, and minerals. However, the human diet has a few very specific requirements. These include two specific essential fatty acids, alpha-linolenic acid and linoleic acid, without which life is not sustainable in the medium to long term. All other fatty acids can be synthesized from dietary fats. Similarly, human life requires a range of vitamins to be present in food, and if these are missing or are supplied at unacceptably low levels, metabolic disorders result, which can end in death. The human metabolism is similar to that of most other mammals, except for the need for an intake of vitamin C to prevent scurvy and other deficiency diseases. Unusually amongst mammals, a human can synthesize vitamin D3 using natural UV light from the sun on the skin. This capability may be widespread in the mammalian world, but few other mammals share the almost naked skin of humans. The darker the human's skin, the less vitamin D3 it can manufacture. Other organisms Human biology also encompasses all those organisms that live on or in the human body. Such organisms range from parasitic arthropods such as fleas and ticks, and parasitic helminths such as liver flukes, through to bacterial and viral pathogens. Many of the organisms associated with human biology are the specialised microbiome of the large intestine and the biotic flora of the skin and the pharyngeal and nasal regions. Many of these biotic assemblages help protect humans from harm and assist in digestion, and are now known to have complex effects on mood and well-being. Social behaviour Humans in all civilizations are social animals and use their language skills and tool-making skills to communicate. These communication skills enable civilizations to grow and allow for the production of art, literature and music, and for the development of technology. All of these are wholly dependent on the human biological specialisms. The deployment of these skills has allowed the human race to dominate the terrestrial biome, to the detriment of most of the other species.
Biology and health sciences
Biology basics
Biology
411991
https://en.wikipedia.org/wiki/Giardiasis
Giardiasis
Giardiasis is a parasitic disease caused by Giardia duodenalis (also known as G. lamblia and G. intestinalis). Infected individuals who experience symptoms (about 10% have no symptoms) may have diarrhea, abdominal pain, and weight loss. Less common symptoms include vomiting and blood in the stool. Symptoms usually begin one to three weeks after exposure and, without treatment, may last two to six weeks or longer. Giardiasis usually spreads when Giardia duodenalis cysts within faeces contaminate food or water that is later consumed orally. The disease can also spread between people and through other animals. Cysts may survive for nearly three months in cold water. Giardiasis is diagnosed via stool tests. Prevention may be improved through proper hygiene practices. Asymptomatic cases often do not need treatment. When symptoms are present, treatment is typically provided with either tinidazole or metronidazole. Infection may cause a person to become lactose intolerant, so it is recommended to temporarily avoid lactose following an infection. Resistance to treatment may occur in some patients. Giardiasis occurs worldwide. It is one of the most common parasitic human diseases. Infection rates are as high as 7% in the developed world and 30% in the developing world. In 2013, there were approximately 280 million people worldwide with symptomatic cases of giardiasis. The World Health Organization classifies giardiasis as a neglected disease. It is popularly known as beaver fever in North America. Signs and symptoms Symptoms vary from none to severe diarrhoea with poor absorption of nutrients. The cause of this wide range in severity of symptoms is not fully known but the intestinal flora of the infected host may play a role. Diarrhoea is less likely to occur in people from developing countries. Symptoms typically develop 9–15 days after exposure, but may occur as early as one day. The most common and prominent symptom is chronic diarrhoea, which can occur for weeks or months if untreated. Diarrhoea is often greasy and foul-smelling, with a tendency to float. This characteristic diarrhoea is often accompanied by several other symptoms, including gas, abdominal cramps, and nausea or vomiting. Some people also experience symptoms outside of the gastrointestinal tract, such as itchy skin, hives, and swelling of the eyes and joints, although these are less common. Fever occurs in only about 15% of infected people, despite the nickname "beaver fever". Prolonged disease is often characterised by diarrhoea and malabsorption of nutrients in the intestine. This malabsorption causes fatty stools, substantial weight loss, and fatigue. Additionally, those with giardiasis often have difficulty absorbing lactose, vitamin A, folate, and vitamin B12. In children, prolonged giardiasis can cause failure to thrive and may impair mental development. Symptomatic infections are well recognised as causing lactose intolerance, which, though usually temporary, may become permanent. Cause Giardiasis is caused by the protozoan Giardia duodenalis. The infection occurs in many animals, including beavers, other rodents, cows, and sheep. Animals are believed to play a role in keeping infections present in an environment. G. duodenalis has been sub-classified into eight genetic assemblages (designated A–H). Genotyping of G. duodenalis isolated from various hosts has shown that assemblages A and B infect the largest range of host species, and appear to be the main and possibly only G. duodenalis assemblages that infect humans. 
Risk factors According to the United States Centers for Disease Control and Prevention (CDC), people at greatest risk of infection are: People in childcare settings People who are in close contact with someone who has the disease Travellers within areas that have poor sanitation People who have contact with faeces, such as during sexual activity Backpackers or campers who drink untreated water from springs, lakes, or rivers Swimmers who swallow water from swimming pools, hot tubs, interactive fountains, or untreated recreational water from springs, lakes, or rivers People who get their household water from a shallow well People with weakened immune systems People who have contact with infected animals or animal environments contaminated with faeces Factors that increase infection risk for people from developed countries include changing nappies/diapers, consuming raw food, owning a dog, and travelling in the developing world. However, 75% of infections in the United Kingdom are acquired in the UK, not through travel elsewhere. In the United States, giardiasis occurs more often in summer, which is believed to be due to a greater amount of time spent on outdoor activities and travelling in the wilderness. Transmission Giardiasis is transmitted via the faecal-oral route with the ingestion of cysts. Primary routes are personal contact and contaminated water and food. The cysts can stay infectious for up to three months in cold water. Many people with Giardia infections have no or few symptoms. They may, however, still spread the disease. Pathophysiology The life cycle of Giardia consists of a cyst form and a trophozoite form. The cyst form is infectious and once it has found a host, transforms into the trophozoite form. This trophozoite attaches to the intestinal wall and replicates within the gut. As trophozoites continue along the gastrointestinal tract, they convert back to their cyst form which is then excreted with faeces. Ingestion of only a few of these cysts is needed to generate infection in another host. Infection with Giardia results in decreased expression of brush border enzymes, morphological changes to the microvillus, increased intestinal permeability, and programmed cell death of small intestinal epithelial cells. Both trophozoites and cysts are contained within the gastrointestinal tract and do not invade beyond it. The attachment of trophozoites causes villous flattening and inhibition of enzymes that break down disaccharide sugars in the intestines. Ultimately, the community of microorganisms that lives in the intestine may overgrow and may be the cause of further symptoms, though this idea has not been fully investigated. The alteration of the villi leads to an inability of nutrient and water absorption from the intestine, resulting in diarrhoea, one of the predominant symptoms. In the case of asymptomatic giardiasis, there can be malabsorption with or without histological changes to the small intestine. The degree to which malabsorption occurs in symptomatic and asymptomatic cases is highly varied. The species Giardia intestinalis uses enzymes that break down proteins to attack the villi of the brush border and appears to increase crypt cell proliferation and crypt length of crypt cells existing on the sides of the villi. On an immunological level, activated host T lymphocytes attack endothelial cells that have been injured to remove the cell. This occurs after the disruption of proteins that connect brush border endothelial cells to one another. 
The result is increased intestinal permeability. There appears to be a further increase in programmed enterocyte cell death by Giardia intestinalis, which further damages the intestinal barrier and increases permeability. There is significant upregulation of the programmed cell death cascade by the parasite, and substantial downregulation of the anti-apoptotic protein Bcl-2 and upregulation of the proapoptotic protein Bax. These connections suggest a role of caspase-dependent apoptosis in the pathogenesis of giardiasis. Giardia protects its growth by reducing the formation of the gas nitric oxide by consuming all local arginine, which is the amino acid necessary to make nitric oxide. Arginine starvation is known to be a cause of programmed cell death, and local removal is a strong apoptotic agent. Host defence Host defence against Giardia consists of natural barriers, production of nitric oxide, and activation of the innate and adaptive immune systems. Natural barriers Natural barriers defend against the parasite entering the host's body. Natural barriers consist of mucus layers, bile salt, proteases, and lipases. Additionally, peristalsis and the renewal of enterocytes provide further protection against parasites. Nitric oxide production Nitric oxide does not kill the parasite, but it inhibits the growth of trophozoites as well as excystation and encystation. Innate immune system Lectin pathway of complement The lectin pathway of complement is activated by mannose-binding lectin (MBL) which binds to N-acetylglucosamine. N-acetylglucosamine is a ligand for MBL and is present on the surface of Giardia. The classical pathway of complement The classical pathway of complement is activated by antibodies specific against Giardia. Adaptive immune system Antibodies Antibodies inhibit parasite replication and also induce parasite death via the classical pathway of complement. Infection with Giardia typically results in a strong antibody response against the parasite. While IgG is made in significant amounts, IgA is believed to be more important in parasite control. IgA is the most abundant isotype in intestinal secretions, and it is also the dominant isotype in a mother's milk. Antibodies in a mother's milk protect children against giardiasis (passive immunisation). T-cells The major aspect of adaptive immune responses is the T-cell response. Giardia is an extracellular pathogen. Therefore CD4+ helper T-cells are primarily responsible for this protective effect. One role of helper T-cells is to promote antibody production and isotype switching. Other roles include cytokine production (Il-4, IL-9) to help recruit other effector cells of the immune response. Diagnosis According to the CDC, the detection of antigens on the surface of organisms in stool specimens is the current test of choice for diagnosis of giardiasis and provides increased sensitivity over more common microscopy techniques. A trichrome stain of preserved stool is another method used to detect Giardia. Microscopic examination of the stool can be performed for diagnosis. This method is not preferred, however, due to inconsistent shedding of trophozoites and cysts in infected hosts. Multiple samples over some time, typically one week, must be examined. The Entero-Test uses a gelatin capsule with an attached thread. One end is attached to the inner aspect of the host's cheek, and the capsule is swallowed. Later, the thread is withdrawn and shaken in saline to release trophozoites which can be detected with a microscope. 
The sensitivity of this test is low, however, and is not routinely used for diagnosis. Immunologic enzyme-linked immunosorbent assay (ELISA) testing may be used for diagnosis. These tests are capable of a 90% detection rate or more. Although hydrogen breath tests indicate poorer rates of carbohydrate absorption in those asymptomatically infected, such tests are not diagnostic of infection. Serological tests are not helpful in diagnosis. Prevention The CDC recommends hand-washing and avoiding potentially contaminated food and untreated water. Boiling water contaminated with Giardia effectively kills infectious cysts. Chemical disinfectants or filters may be used. Iodine-based disinfectants are preferred over chlorination as the latter is ineffective at destroying cysts. Although the evidence linking the drinking of water in the North American wilderness and giardiasis has been questioned, a number of studies raise concern. Most if not all CDC verified backcountry giardiasis outbreaks have been attributed to water. Surveillance data (for 2013 and 2014) reports six outbreaks (96 cases) of waterborne giardiasis contracted from rivers, streams or springs and less than 1% of reported giardiasis cases are associated with outbreaks. Person-to-person transmission accounts for the majority of Giardia infections and is usually associated with poor hygiene and sanitation. Giardia is often found on the surface of the ground, in the soil, in undercooked foods, and in water, and on hands that have not been properly cleaned after handling infected faeces. Water-borne transmission is associated with the ingestion of contaminated water. In the U.S., outbreaks typically occur in small water systems using inadequately treated surface water. Venereal transmission happens through faecal-oral contamination. Additionally, nappy/diaper changing and inadequate handwashing are risk factors for transmission from infected children. Lastly, food-borne epidemics of Giardia have developed through the contamination of food by infected food-handlers. Vaccine There are no vaccines for humans yet, however, there are several vaccine candidates in development. They are targeting: recombinant proteins, DNA vaccines, variant-specific surface proteins (VSP), cyst wall proteins (CWP), giadins, and enzymes. Researchers at CONICET have produced an oral vaccine after engineering customised proteins mimicking those expressed on the surface of Giardia trophozoites. The vaccine has proven effective in mice. At present, one commercially available vaccine exists – GiardiaVax, made from G. lamblia whole trophozoite lysate. It is a vaccine for veterinary use only in dogs and cats. GiardiaVax should promote the production of specific antibodies. Treatment Treatment is not always necessary as the infection usually resolves on its own. However, if the illness is acute or symptoms persist and medications are needed to treat it, a nitroimidazole medication is used such as metronidazole, tinidazole, secnidazole or ornidazole. The World Health Organisation and Infectious Disease Society of America recommend metronidazole as first-line therapy. The US CDC lists metronidazole, tinidazole, and nitazoxanide as effective first-line therapies; of these three, only nitazoxanide and tinidazole are approved for the treatment of giardiasis by the US FDA. 
A meta-analysis published by the Cochrane Collaboration in 2012 found that compared to the standard of metronidazole, albendazole had equivalent efficacy while having fewer side effects, such as gastrointestinal or neurologic issues. Other meta-analyses have reached similar conclusions. Both medications need a five to ten-day-long course; albendazole is taken once a day, while metronidazole needs to be taken three times a day. The evidence for comparing metronidazole to other alternatives such as mebendazole, tinidazole, or nitazoxanide was felt to be of very low quality. While tinidazole has side effects and efficacy similar to those of metronidazole, it is administered with a single dose. Resistance has been seen clinically to both nitroimidazoles and albendazole, but not nitazoxanide, though nitazoxanide resistance has been induced in research laboratories. The exact mechanism of resistance to all of these medications is not well understood. In the case of nitroimidazole-resistant strains of Giardia, other drugs are available which have shown efficacy in treatment including quinacrine, nitazoxanide, bacitracin zinc, furazolidone and paromomycin. Mepacrine may also be used for refractory cases. Probiotics, when given in combination with the standard treatment, have been shown to assist with clearance of Giardia. During pregnancy, paromomycin is the preferred treatment drug because of its poor intestinal absorption, resulting in less exposure to the foetus. Alternatively, metronidazole can be used after the first trimester as there has been wide experience in its use for trichomonas in pregnancy. Prognosis In people with a properly functioning immune system, infection may resolve without medication. A small portion, however, develop a chronic infection. People with an impaired immune system are at higher risk of chronic infection. Medication is an effective cure for nearly all people although there is growing drug-resistance. Children with chronic giardiasis are at risk for failure to thrive as well as more long-lasting sequelae such as growth stunting. Up to half of infected people develop a temporary lactose intolerance leading to symptoms that may mimic a chronic infection. Some people experience post-infectious irritable bowel syndrome after the infection has cleared. Giardiasis has also been implicated in the development of food allergies. This is thought to be due to its effect on intestinal permeability. Epidemiology In some developing countries Giardia is present in 30% of the population. In the United States it is estimated that it is present in 3–7% of the population. Giardiasis is associated with impaired growth and development in children, particularly influencing a country's economic growth by affecting Disability Adjusted Life Year (DALY) rates. The number of reported cases in the United States in 2018 was 15,584. All states that classify giardiasis as a notifiable disease had cases of giardiasis. The states of Illinois, Kentucky, Mississippi, North Carolina, Oklahoma, Tennessee, Texas, and Vermont did not notify the Centers for Disease Control and Prevention regarding cases in 2018. There are seasonal trends associated with giardiasis. July, August, and September are the months with the highest incidence of giardiasis in the United States. In the ECDC's (European Centre for Disease Prevention and Control) annual epidemiological report containing 2014 data, 17,278 confirmed giardiasis cases were reported by 23 of the 31 countries that are members of the EU/EEA. 
Germany reported the highest number at 4,011 cases. Following Germany, the UK reported 3,628 confirmed giardiasis cases. Together, these two countries accounted for 44% of total reported cases. Research Some intestinal parasitic infections may play a role in irritable bowel syndrome and other long-term sequelae such as chronic fatigue. The mechanism of transformation from cyst to trophozoite has not been characterised, but understanding it may help identify drug targets for treatment-resistant Giardia. The interaction between Giardia and host immunity, internal flora, and other pathogens is not well understood. In vitro cell cultures have been widely used to study host-parasite interactions, and human enteroids are now being used as non-transformed intestinal epithelial cell infection models for G. intestinalis and other pathogens. The main conference on giardiasis is the "International Giardia and Cryptosporidium Conference" (IGCC). A summary of results presented at the most recent edition (2019, in Rouen, France) is available. Other animals In both cats and dogs, giardiasis usually responds to metronidazole and fenbendazole. Metronidazole in pregnant cats can cause developmental malformations. Many cats dislike the taste of fenbendazole. Giardiasis has been shown to decrease weight in livestock.
Biology and health sciences
Protozoan infections
Health
412096
https://en.wikipedia.org/wiki/Diphenhydramine
Diphenhydramine
Diphenhydramine, sold under the brand name Benadryl among others, is an antihistamine and sedative. It is a first-generation H1-antihistamine and it works by blocking certain effects of histamine, which produces its antihistamine and sedative effects. Diphenhydramine is also a potent anticholinergic. It is mainly used to treat allergies, insomnia, and symptoms of the common cold. It is also less commonly used for tremors in parkinsonism and for nausea. It is taken by mouth, injected into a vein, injected into a muscle, or applied to the skin. Maximal effect is typically around two hours after a dose, and effects can last for up to seven hours. Common side effects include sleepiness, poor coordination, and upset stomach. There is no clear risk of harm when used during pregnancy; however, use during breastfeeding is not recommended. It was developed by George Rieveschl and put into commercial use in 1946. It is available as a generic medication. In 2022, it was the 258th most commonly prescribed medication in the United States, with more than 1 million prescriptions. Its sedative and deliriant effects have led to some cases of recreational use. Medical uses Diphenhydramine is a first-generation antihistamine used to treat several conditions, including allergic symptoms and itchiness, the common cold, insomnia, motion sickness, and extrapyramidal symptoms. Diphenhydramine also has local anesthetic properties, and has been used as such in people allergic to common local anesthetics such as lidocaine. Allergies Diphenhydramine is effective in the treatment of allergies. It was the most commonly used antihistamine for acute allergic reactions in the emergency department. By injection, it is often used in addition to epinephrine for anaphylaxis, although its use for this purpose has not been properly studied. Its use is only recommended once acute symptoms have improved. Topical formulations of diphenhydramine are available, including creams, lotions, gels, sprays, and eye drops. These are used to relieve itching and have the advantage of causing fewer systemic effects (e.g., drowsiness) than oral forms. Movement disorders Diphenhydramine is used to treat akathisia and parkinsonism caused by antipsychotics. It is also used to treat acute dystonia, including torticollis and oculogyric crisis, caused by typical antipsychotics. Sleep Because of its sedative properties, diphenhydramine is widely used in nonprescription sleep aids for insomnia. The drug is an ingredient in several products sold as sleep aids, either alone or in combination with other ingredients such as acetaminophen (paracetamol) in Tylenol PM and ibuprofen in Advil PM. Diphenhydramine can cause minor psychological dependence. Diphenhydramine has also been used as an anxiolytic. Diphenhydramine has also been used off-prescription by parents in an attempt to make their children sleep and to sedate them on long-distance flights. This has been met with criticism, both by doctors and by members of the airline industry, because sedating passengers may put them at risk if they cannot react efficiently to emergencies, and because the drug's side effects, especially the chance of a paradoxical reaction, may make some users hyperactive. Addressing such use, the Seattle Children's Hospital argued, in a 2009 article, "Using a medication for your convenience is never an indication for medication in a child." 
The American Academy of Sleep Medicine's 2017 clinical practice guidelines recommended against the use of diphenhydramine in the treatment of insomnia, because of poor effectiveness and low quality of evidence. A major systematic review and network meta-analysis of medications for the treatment of insomnia published in 2022 found little evidence to inform the use of diphenhydramine for insomnia. Nausea Diphenhydramine also has antiemetic properties, which make it useful in treating the nausea that occurs in vertigo and motion sickness. However, when taken above recommended doses, it can cause nausea (especially above 200 mg). Special populations Diphenhydramine is secreted in breast milk. It is expected that low doses of diphenhydramine taken occasionally will cause no adverse effects in breastfed infants. Large doses and long-term use may affect the baby or reduce breast milk supply, especially when combined with sympathomimetic drugs, such as pseudoephedrine, or before the establishment of lactation. A single bedtime dose after the last feeding of the day may minimize the harmful effects of the medication on the baby and the milk supply. Still, non-sedating antihistamines are preferred. Paradoxical reactions to diphenhydramine have been documented, particularly in children, and it may cause excitation instead of sedation. Topical diphenhydramine is sometimes used especially for people in hospice. This use is without indication and topical diphenhydramine should not be used as treatment for nausea because research has not shown that this therapy is more effective than others. Anxiety Diphenhydramine (as Benadryl) is not typically used to treat anxiety because its long-term use may cause adverse effects, such as memory loss, especially in the elderly. Benadryl is not approved by the US Food and Drug Administration for treating anxiety. On the other hand, hydroxyzine, a first generation antihistamine that lacks significant anticholinergic effects may be used to treat anxiety, although benzodiazepines and antidepressants are considered more effective by most clinicians. The mild anxiolytic effects of hydroxyzine are mostly due to its weak but significant activity as an antagonist of the 5-HT2A receptor, a common target of most antidepressant drugs. Adverse effects The most prominent side effects are dizziness and sleepiness. Diphenhydramine is a potent anticholinergic agent and potential deliriant in higher doses. This activity is responsible for the side effects of dry mouth and throat, increased heart rate, pupil dilation, urinary retention, constipation, and, at high doses, hallucinations or delirium. Other side effects include motor impairment (ataxia), flushed skin, blurred vision at nearpoint owing to lack of accommodation (cycloplegia), abnormal sensitivity to bright light (photophobia), sedation, difficulty concentrating, short-term memory loss, visual disturbances, irregular breathing, dizziness, irritability, itchy skin, confusion, increased body temperature (in general, in the hands and/or feet), temporary erectile dysfunction, and excitability, and although it can be used to treat nausea, higher doses may cause vomiting. Diphenhydramine in overdose may occasionally result in QT prolongation. Some individuals experience an allergic reaction to diphenhydramine in the form of hives. Conditions such as restlessness or akathisia can worsen from increased levels of diphenhydramine, especially with recreational dosages. 
Normal doses of diphenhydramine, like other first generation antihistamines, can also make symptoms of restless legs syndrome worse. As diphenhydramine is extensively metabolized by the liver, caution should be exercised when giving the drug to individuals with hepatic impairment. Anticholinergic use later in life is associated with an increased risk for cognitive decline and dementia among older people. Drowsiness, memory loss, confusion, dry mouth or constipation may also occur in elderly people. Contraindications Diphenhydramine is contraindicated in premature infants and neonates, as well as people who are breastfeeding. It is a pregnancy Category B drug. Diphenhydramine has additive effects with alcohol and other CNS depressants. Monoamine oxidase inhibitors prolong and intensify the anticholinergic effect of antihistamines. Overdose Diphenhydramine is one of the most commonly misused over-the-counter drugs in the United States. Overdose symptoms may include Abdominal pain Abnormal speech (inaudibility, forced speech, etc.) Acute megacolon Anxiety/nervousness Coma Delirium Disorientation Dissociation Euphoria or dysphoria Extreme drowsiness Flushed skin Hallucinations (auditory, visual, tactile, etc.) Heart palpitations Inability to urinate Motor disturbances Muscle spasms Seizures Severe dizziness Severe mouth and throat dryness Tremors Vomiting Acute poisoning can be fatal, leading to cardiovascular collapse and death in 2–18 hours, and in general, is treated using a symptomatic and supportive approach. Diagnosis of toxicity is based on history and clinical presentation, and in general precise plasma levels do not appear to provide useful relevant clinical information. Several levels of evidence strongly indicate diphenhydramine (similar to chlorpheniramine) can block the delayed rectifier potassium channel and, as a consequence, prolong the QT interval, leading to cardiac arrhythmias such as torsades de pointes. No specific antidote for diphenhydramine toxicity is known, but the anticholinergic syndrome has been treated with physostigmine for severe delirium or tachycardia. Benzodiazepines may be administered to decrease the likelihood of psychosis, agitation, and seizures in people who are prone to these symptoms. Interactions Alcohol may increase the drowsiness caused by diphenhydramine. Pharmacology Pharmacodynamics Diphenhydramine, while traditionally known as an antagonist, acts primarily as an inverse agonist of the histamine H1 receptor. It is a member of the ethanolamine class of antihistaminergic agents. By reversing the effects of histamine on the capillaries, it can reduce the intensity of allergic symptoms. It also crosses the blood–brain barrier and inversely agonizes the H1 receptors centrally. Its effects on central H1 receptors cause drowsiness. Diphenhydramine is a potent antimuscarinic (a competitive antagonist of muscarinic acetylcholine receptors) and, as such, at high doses can cause anticholinergic syndrome. The utility of diphenhydramine as an antiparkinson agent is the result of its blocking properties on the muscarinic acetylcholine receptors in the brain. Diphenhydramine also acts as an intracellular sodium channel blocker, which is responsible for its actions as a local anesthetic. Diphenhydramine has also been shown to inhibit the reuptake of serotonin. It has been shown to be a potentiator of analgesia induced by morphine, but not by endogenous opioids, in rats. The drug has also been found to act as an inhibitor of histamine N-methyltransferase (HNMT). 
Pharmacokinetics Oral bioavailability of diphenhydramine is in the range of 40% to 60%, and peak plasma concentration occurs about 2 to 3 hours after administration. Diphenhydramine, available in various salt forms, such as citrate, hydrochloride, and salicylate, exhibits distinct molecular weights and pharmacokinetic properties. Specifically, diphenhydramine hydrochloride and diphenhydramine citrate possess molecular weights of and , respectively. These variations in molecular weight influence the dissolution rates and absorption characteristics of each salt form. The primary route of metabolism is two successive demethylations of the tertiary amine. The resulting primary amine is further oxidized to the carboxylic acid. Diphenhydramine is metabolized by the cytochrome P450 enzymes CYP2D6, CYP1A2, CYP2C9, and CYP2C19. The elimination half-life of diphenhydramine has not been fully elucidated, but appears to range between 2.4 and 9.3 hours in healthy adults. A 1985 review of antihistamine pharmacokinetics found that the elimination half-life of diphenhydramine ranged between 3.4 and 9.3 hours across five studies, with a median elimination half-life of 4.3 hours. A subsequent 1990 study found that the elimination half-life of diphenhydramine was 5.4 hours in children, 9.2 hours in young adults, and 13.5 hours in the elderly. A 1998 study found a half-life of 4.1 ± 0.3 hours in young men, 7.4 ± 3.0 hours in elderly men, 4.4 ± 0.3 hours in young women, and 4.9 ± 0.6 hours in elderly women. In a 2018 study in children and adolescents, the half-life of diphenhydramine was 8 to 9 hours. Chemistry Detection in body fluids Diphenhydramine can be quantified in blood, plasma, or serum. Gas chromatography with mass spectrometry (GC-MS) can be used with electron ionization on full scan mode as a screening test. GC-MS or GC-NDP can be used for quantification. Rapid urine drug screens using immunoassays based on the principle of competitive binding may show false-positive methadone results for people having ingested diphenhydramine. Quantification can be used to monitor therapy, confirm a diagnosis of poisoning in people who are hospitalized, provide evidence in an impaired driving arrest, or assist in a death investigation. History In 1943, diphenhydramine was discovered by chemist George Rieveschl and one of his students, Fred Huber, while they were conducting research into muscle relaxants at the University of Cincinnati. Huber first synthesized diphenhydramine. Rieveschl then worked with Parke-Davis to test the compound, and the company licensed the patent from him. In 1946, it became the first prescription antihistamine approved by the US FDA. In the 1960s, diphenhydramine was found to weakly inhibit reuptake of the neurotransmitter serotonin. This discovery led to a search for viable antidepressants with similar structures and fewer side effects, culminating in the invention of fluoxetine (Prozac), a selective serotonin reuptake inhibitor (SSRI). A similar search had previously led to the synthesis of the first SSRI, zimelidine, from brompheniramine, also an antihistamine. In 1975, diphenhydramine was still available only by prescription in the US and required medical supervision. Society and culture Marketing Diphenhydramine is sold under the brand name Benadryl by McNeil Consumer Healthcare in the US, UK, Canada, and South Africa. Trade names in other countries include Dimedrol, Daedalon, Nytol, and Vivinox. It is also available as a generic medication. 
Procter & Gamble markets an over-the-counter formulation of diphenhydramine as a sleep aid under the brand ZzzQuil. Prestige Brands markets an over-the-counter formulation of diphenhydramine as a sleep aid in the US under the name Sominex. Cultural impact Diphenhydramine is deemed to have limited abuse potential in the United States owing to its potentially serious side-effect profile and limited euphoric effects and is not a controlled substance. Since 2002, the US FDA has required special labeling warning against the use of multiple products that contain diphenhydramine. In some jurisdictions, diphenhydramine is often present in postmortem specimens collected during investigation of sudden infant deaths; the drug may play a role in these events. Diphenhydramine is among prohibited and controlled substances in the Republic of Zambia, and travelers are advised not to bring the drug into the country. Several Americans have been detained by the Zambian Drug Enforcement Commission for possession of Benadryl and other over-the-counter medications containing diphenhydramine. Recreational use Although diphenhydramine is widely used and generally considered to be safe for occasional usage, multiple cases of abuse and addiction have been documented. Because the drug is cheap and sold over the counter in most countries, adolescents without access to more sought-after illicit drugs are particularly at risk. People with mental health problems—especially those with schizophrenia—are also prone to abuse the drug, which is self-administered in large doses to treat extrapyramidal symptoms caused by the use of antipsychotics. Recreational users report calming effects, mild euphoria, and hallucinations as the desired effects of the drug. Research has shown that antimuscarinic agents, including diphenhydramine, "may have antidepressant and mood-elevating properties". A study conducted on adult males with a history of sedative abuse found that subjects who were administered a high dose (400 mg) of diphenhydramine reported a desire to take the drug again, despite also reporting negative effects, such as difficulty concentrating, confusion, tremors, and blurred vision. In 2020, an Internet challenge emerged on the social media platform TikTok involving deliberately overdosing on diphenhydramine; dubbed the Benadryl challenge, the challenge encourages participants to consume dangerous amounts of Benadryl to film the resultant psychoactive effects and has been implicated in several hospitalisations and at least two deaths.
Biology and health sciences
Antihistamines
Health
2409948
https://en.wikipedia.org/wiki/Namma%20Metro
Namma Metro
(), also known as Bengaluru Metro, is a rapid transit system serving the city of Bengaluru, the capital city of the state of Karnataka, India. It is the second-longest operational metro network in India with an operational length of 76.95 kilometers, just behind Delhi Metro. Upon its inauguration in 2011, it became the first underground metro system in South India. Namma Metro has a mix of underground, at grade, and elevated stations. Out of the 68 operational metro stations of Namma Metro as of November 2024, there are 59 elevated stations, eight underground stations and one at-grade station. The system runs on standard-gauge tracks. Bengaluru Metro Rail Corporation Limited (BMRCL), a joint venture of the Government of India and the State Government of Karnataka, is the agency for building, operating and expanding the Namma Metro network. Services operate daily between 05:00 and 24:00 running with a headway varying between 3–15 minutes. The trains initially began with three coaches but later grew to six coaches as ridership increased. Power is supplied by 750V direct current through third rail. As of March 2024, the metro system had an average daily ridership of about 636,000 passengers. On 6 December 2024, the ridership was 9.2 lakh, the highest recorded so far in the history of Namma Metro. History The State Town Planning Department had recommended looking into a mass rapid transit project, i.e. a metro for Bengaluru city, back in 1977. A high-level Committee had also agreed that a metro study was warranted and a team from Southern Railway (SR) was commissioned to do this in 1981. The Southern Railway team recommended a two-corridor metro, in length, in addition to commuter rail lines and a ring railway. In 1993, the State of Karnataka established another committee to look into mass rapid transit. This committee had again recommended the same metro project put forward by SR in 1983 and the same circular railway. The state created Bangalore Mass Rapid Transit Ltd (BMRTL) in 1994, with terms of reference to seek a public/private partnership for a mass rapid transit project. The government immediately introduced a special city cess dedicated to the anticipated mass rapid transit project. BMRTL commissioned a feasibility study which came up with an elevated, LRT-based, long network on 6 routes. A private consortium led by United Breweries Group undertook further development of the project on BOT basis. However, the project hadn't taken off. In 2003, the Government of Karnataka commissioned the Delhi Metro Rail Corporation (DMRC), which had successfully developed the Delhi Metro, to carry out a detailed preparation study for a metro in Bengaluru, to be done emulating the technical and financial aspects of the approach used in Delhi. The study recommended a 2-line metro, and in length, cross shaped. The middle of the cross was to be at the Central Railway Station in Bengaluru, completely underground. The economic rate-of-return was forecast at 22.3%. The financial forecast assumes a government subsidy for interest payments and some depreciation, i.e. fare revenue will cover somewhat more than direct operating costs. The Government accepted this option. BMRTL ceased to exist and was replaced by Bengaluru Metro Rail Corporation Ltd (BMRCL). Construction Phase 1 Delhi Metro Rail Corporation Limited (DMRC) prepared and submitted the detailed project for the first phase of Namma Metro in May 2003. The DPR was for a network with 32 stations for Phase 1 of the project, using standard gauge. 
The project was approved by the Union Cabinet on 25 April 2006. The foundation stone for construction of Phase 1 was laid by then Prime Minister Dr. Manmohan Singh on 24 June 2006. Navayuga Engineering Company Limited was awarded the first contract to construct Reach 1 of the east–west corridor in 2006. Civil construction on the first section (Reach-1) of the Purple Line, between Baiyyappanahalli and Mahatma Gandhi Road, commenced on 15 April 2007. DPRs for a northern extension (from Yeshwanthapura to Nagasandra) and a southern extension (from Rashtreeya Vidyalaya Road to Yelachenahalli) were submitted in October 2007 and June 2008, respectively. With these extensions, the total route length for Phase 1 became . The objective was to connect the metro to the Outer Ring Road at the northern and southern ends, and also to cover the industrial areas of Peenya in the north-west, thereby providing better connectivity and increasing ridership. In October 2008, the Government of Karnataka approved this extension, which would cost an additional ₹1,763 crore (US$250 million). Underground section Both lines in Phase 1 have tunnel sections in the city center, which were the first metro tunnels built in South India. Construction of underground sections in Phase 1 commenced in late 2012. The delay was due to cancellation of the initial tenders called in early 2008, as the entire DPR had to be revised and the bids received were too high. A second round of tendering was done in late 2009, with the large Majestic interchange station as a separate package. Bids were awarded for the tunnel sections in 2011 and construction began in 2012. The tunnels, bored with tunnel boring machines (TBMs), located approximately below ground level, have a diameter of and are apart. The TBMs were nicknamed Helen, Margarita, Kaveri, Krishna and Godavari. Tunnel boring of underground section UG1 (on the east–west corridor) was completed on 17 March 2014. Track work and third-rail electrification work were completed on the east–west tunnel of the Purple Line between Cubbon Park and Magadi Road, and Bangalore Metro Rail Corporation Ltd (BMRCL) began trials on 23 November 2015. The entire stretch of the Purple Line was opened on 29 April 2016. Tunnel boring of underground section UG2 (on the north–south corridor) was completed on 23 September 2016. The tunneling faced a major delay when the cutter head of TBM Godavari broke and spares had to be awaited. Trial runs on the north–south tunnel of the Green Line between Sampige Road and National College commenced on 31 March 2017. The entire stretch of the Green Line was opened on 19 June 2017, thus completing Phase 1 of the project. Opening After missing deadlines, Namma Metro's first section (Reach-1) finally opened to the public on 20 October 2011. There was an overwhelming response from the public at the commencement of operations. As per BMRCL sources, 169,019 people rode the metro within the first three days of operations. By the end of the fourth day, about 200,000 passengers had already commuted on Namma Metro. Namma Metro's first 12-day cumulative revenue was ₹1 crore (US$100,000). The northern section of the Green Line (Reach 3, 3A, 3B) was initially scheduled to be opened by the end of 2012. However, it was delayed and finally opened on 1 March 2014. BMRCL MD Pradeep Singh Kharola stated that about 25,000 passengers travelled on the line on the opening day. In the first month of operations, 7.62 lakh people used the line, at an average of 24,605 daily, generating revenues of ₹1.5 crore (US$210,000). 
The first underground section (on the Purple Line) commenced operations on 30 April 2016, providing connectivity between the east and west of the city. The second underground section, along with the southern reaches (viz. Reach 4 and 4A), was opened on 18 June 2017. Once the east and west reaches were interconnected with the opening of the underground section of the Purple Line, ridership surged. After the north–south underground section was opened (simultaneously with the elevated Reach 4 and 4A in the south), the network provided connectivity in all four directions with interchange between the lines, and this further increased ridership. Ridership kept increasing and was around 450,000 daily (September 2019). Phase 1 lines and sections Phase 1 comprises two lines spanning a length of , of which about is underground and about is elevated. There are 40 stations in Phase 1, of which seven stations are underground, one is at grade and 32 are elevated. The first phase of the project was initially budgeted at ₹6,395 crore (US$875 million). With route extensions and cost escalation, this was later revised to ₹11,609 crore (US$1.6 billion). There were many delays during construction and, as a result, several postponements. The difficult geological conditions below ground, with a mix of soft soil, high groundwater levels, hard granite and large boulders, were a major impediment to tunnel boring. A major delay was due to a broken cutter head of TBM Godavari, for which spares had to be ordered from Italy. Phase 1, containing two lines aggregating was completed and services opened to the public on 19 June 2017. The final cost for Phase 1 was ₹14,405 crore. Phase 2 The State Government accorded approval for preparation of the detailed project report (DPR) for Phase 2 by DMRC on 4 January 2011. The high-power committee (HPC) gave in-principle clearance to proceed with Phase 2 in July 2011. The Karnataka government gave in-principle approval to Phase 2 of the Namma Metro project on 3 January 2012. However, there were delays in DPR preparation and, hence, in approval from the Central Government. Phase 2 was cleared by the expenditure finance committee (EFC) in August 2013. The Union Cabinet finally announced that it had approved plans for Phase 2 on 30 January 2014. The estimated total cost for Phase 2 was around ₹26,405 crore (US$3.7 billion) at 2011–12 price levels. The State Government would contribute ₹9,000 crore (US$1.3 billion). The project cost of ₹26,405 crore at 2011–12 price levels is set to escalate about 5 per cent every year with the increasing cost of inputs. The Union government will share the part of the cost escalation due to increases in central levies, while the State and BMRCL will have to bear any other escalation. The total project cost for Phase 2 is estimated to reach at least ₹30,000 crore (US$4.2 billion) at the start of construction itself. In October 2018, Deputy Chief Minister G Parameshwara stated that the cost of Phase 2 would be around ₹32,000 crore (US$4.5 billion). Phase 2 spans a length of underground, at grade and elevated, and adds 62 stations to the network, of which 12 are underground. Phase 2 includes extension of the two Phase 1 lines in both directions, as well as construction of two new lines. The south end of the Green Line will be extended from Yelachenahalli to Silk Institute (previously named Anjanapura) along Kanakapura Road, and the north end from Nagasandra to Madavara (previously named BIEC) on Tumkur Road (NH-4). 
The east end of the Purple Line will be extended from Baiyappanahalli to Whitefield and the west end from Mysore Road to Kengeri (later extended to Challaghatta). A new, long, fully elevated line from RV Road to Bommasandra will be constructed, passing through Electronic City. The second new line will be from Kalena Agrahara (previously Gottigere) to Nagawara. The line is mostly underground (13.79 km or 8.57 mi), but also has a elevated section and a at-grade section. There are 18 stations on the line, of which 12 are underground and six are elevated. Unlike Phase 1, all stations being built in Phase 2 will have bus bays and/or parking facilities. Karnataka Industrial Areas Development Board (KIADB) is responsible for acquiring land for Phase 2. It was estimated that of land would be required for Phase 2 (including Phase 2A). By April 2017, BMRCL had already spent ₹5000 crore on land acquisition. Construction work began on the south extension of the Green Line from Yelachenahalli to Silk Institute (6.29 km or 3.91 mi) and the west extension of the Purple Line from Mysore Road to Kengeri (8.81 km or 5.47 mi) by October 2016. Construction work on the west extension of Purple Line was awarded in two packages for ₹660 crore, while the Green Line south extension was awarded in a single package for ₹508.86 crore. Construction work for the north extension of the Green Line (from Nagasandra to Madavara) and the east extension of the Purple Line (from Baiyyappanahalli to Whitefield) began in July 2017. The north extension is estimated to cost ₹247.41 crore while the east extension (15.5 km) was awarded for ₹1,300 crore (US$180 million). Construction of the yellow line, a new line from Rashtreeya Vidyalaya Road to Bommasandra in 3 packages began post awarding of tenders in November 2017. The first package () was a stretch from Bommasandra to Hosa Road station, including depot entry line to Hebbagodi depot and five Metro stations. The second package was for a stretch from Hosa Road to Bommanahalli (previously HSR Layout). Both packages were awarded for ₹1,750 crore (US$250 million) to Thailand-based ITD Cementation India. The third package was for a stretch from RV Road to Bommanahalli (previously HSR Layout), 6.34 km long elevated section and five stations, for ₹797.29 crores (US$110 million) and includes the construction of a road-cum-rail flyover, road widening and allied works. Construction of the 7.5 km elevated section of a new pink line between Kalena Agrahara and Tavarekere (previously Swagat Road Cross) stations, estimated to cost ₹575.52 crore (US$81 million) began post tendering in February 2018. The tender includes the construction of the elevated viaduct, 5 stations and car depot line to Kothanur depot. Underground section BMRC initially floated tenders for the construction of the 13.9 km underground section in four packages during June 2017. However, the tenders were cancelled as all the bids received were far too high (higher by nearly 70%) at Rs.8553.45 crores (US$1.28 billion) when compared to BMRC's estimated total of Rs 5047.56 crores (about US$760 million). The second round of tendering resulted in tenders being awarded to three firms during the March–June period, 2019. One of the firms (L&T) won two bids. The total awarded tunneling tenders for the underground sections was ₹5,925.95 crores (appx US$812 million). Pre-tunneling construction work and piling for stations began in May 2019 by L&T. Tunnel boring using TBMs began in July 2020 by L&T. 
Tunnel boring work by the other two contractors (viz. Afcons & ITD-Cem) was commenced in 2021. Phase 2A In September 2016, the government announced that a new line connecting Silk Board with K.R. Pura would be included in Phase 2 as Phase 2A of the project. The line would be along the eastern half of Outer Ring Road and is proposed to have 13 stations – Silk Board, HSR Layout, Agara, Ibbalur, Bellandur, Kadubeesanahalli, Kodibisanahalli, Marathahalli, ISRO, Doddanekundi, DRDO Sports Complex, Sarasvathi Nagara (previously Mahadevapura) and K.R. Pura. The cost was estimated to be ₹4202 crores. BMRCL prepared the detailed project report for the proposed line and submitted the DPR to the state government on 28 October 2016. Phase 2A was approved by the State Cabinet on 1 March 2017. The ORR Metro line (Blue line) will have interchange stations with the extended Purple Line at K.R. Pura and with the proposed R V Road – Bommasandra line (Yellow Line) at Silk Board. Tenders for ORR Metro line (east) were called in February 2018 and IL&FS emerged the lowest bidder for all packages. However, the tenders were quashed due to cash flow problems and bankruptcy proceedings by the firm. A second round of tendering was done in December 2019 and bids were received by multiple firms in March 2020. There are two packages. The first package included 2.84 km of ramps for a flyover at Central Silk board junction in addition to 9.859 km of viaduct with six elevated stations. The second package was for 8.377 km viaduct with seven elevated stations, 1.097 km depot line and a 0.30 km pocket track. Phase 2B There had been a proposal to build a high speed rail line from MG Road to Kempegowda International Airport (KIA), at a cost of ₹5,767 crore (US$810 million). This was to be executed by an independent SPV, but later it was decided that BMRC would manage this project and a regular metro line with fewer halts would be built instead of a high speed rail, thus travel time between city to airport would be less. As early as in February 2012, the Central Government had also requested BMRC to start work on the airport link during Phase 2 itself. Following this, in September 2016, suggestions were invited from public to choose any one of nine possible extension routes from existing and proposed metro lines to the airport. The proposed extension routes had an average length of , and cost estimates ranged between ₹4,500 crore and ₹7,000 crore. A extension from Nagawara via Kannur and Bagaluru was the shortest, while the extension from Yeshwanthpur via Yelahanka, Kannur and Bagaluru was the longest of the proposed routes. BMRC received 1,300 responses from the public. A extension of the Kalena Agrahara (previously Gottigere) – Nagawara line via Kannur and Bagaluru to the airport emerged as the most popular choice. Since Bengaluru International Airport Limited (BIAL) forbade underground construction from the southern side of the airport (due to security as it would have to pass beneath the airport's second runway), the shortest route options (i.e. extending the Pink line from Nagawara directly north) were eliminated. An alternate route proceeding north till RK Hegde nagar and then turning west to Jakkur was then explored. However, even this had an obstacle as a high-pressure petroleum pipeline was passing through the originally proposed route. Bangalore Development Minister K. J. George announced on 12 May 2017 that the government had finalized the Nagawara-Ramakrishna Hegde Nagar-Jakkur-Yelahanka route to the airport. 
On 10 January 2019, the State Cabinet approved a change in alignment for the proposed metro line to the airport. The new line would begin at Krishnarajapura (K.R. Pura) and be aligned along the northern part of ORR (Outer Ring Road), passing Nagawara, Hebbal, and Jakkur before heading towards the airport along Ballari road. The line would be long, about longer than the route previously proposed. It is estimated to cost ₹10,584 crore (US$1.5 billion) almost twice as much as the previous route's estimate of ₹5,950 crore (US$830 million). The Union Cabinet cleared two much awaited lines of the Bengaluru Metro's Phase 2A and 2B on 20 April 2021. Phase 2A and Phase 2B lines total a distance of 58.19 km and were approved at a cost of Rs.14,788 crore. The projected ridership on both these lines in 2026 is estimated at 7.7 lakhs. Construction of the airport line is expected to begin by October 2021. In January 2023, there was an accident when a reinforcement cage of the under-construction metro pier fell on a woman software engineer, Tejaswini Sulakhe and her son Vihan in HBR Layout, which led to their eventual death. BMRCL announced a financial assistance of 20 lakh. Chief Minister Basavaraj Bommai also announced a separate compensation of Rs 10 lakh for each of the deaths from the Chief Ministers' Relief Fund. Phase 2, 2A, and 2B lines and sections Phase 2 originally involved extending four reaches of the two lines in all directions and two new lines (Yellow Line and Pink Line). The ORR-East Line was later included as Phase 2A followed by the Airport Line (as a continuation of the ORR-East Line) as Phase 2B. The line was later named the Blue Line. Construction of Phase 2A has been divided into two (elevated section) packages. Construction of Phase 2B has been divided into three (elevated section) packages. Airport stations have not been included with tenders as they will likely be built by BIAL. Future expansion Phase 3 A Phase 3 was initially proposed in May 2016. Certain significant sections that were proposed for Phase 3 were included in Phase 2 as Phase 2A and Phase 2B. On 7 March 2020 it was announced that two corridors totaling 44.65 km would be built as fully elevated lines under PPP mode (Kempapura-Jayaprakash Nagara 4th Ph (along ORR-West) and Hosahalli-Kadabagere). On 4 March 2022, during a budget speech by the state government, another new corridor spanning 35 km from Hebbal to Sarjapura was announced (to be taken up as Phase 3B). The Chief Minister stated that the Detailed Project Report (DPR) would be prepared for the proposed corridor, estimated to cost around ₹15,000 crores. The two corridors announced previously were estimated to cost ₹13,500 crores. On 16 August 2024, the Union Cabinet approved Phase 3, which will add two lines and 31 stations, with an estimated cost of ₹15,611 crores. SECON Pvt. Ltd. has officially started geotechnical soil investigation work on Magadi Road for the upcoming 44.65 km Bangalore Metro Phase 3 project. On 11 November 2024, The 35 km stretch between Hebbal and Sarjapura, now known as the Red Line, was approved by the state finance department of Karnataka. Other proposals On 10 March 2023, the Government of Karnataka announced further extensions and routes. The new corridors costing ₹27,000 crores would add 59 km of length to the system. There has also been a proposal for an Inner Ring Metro Line, as recommended by the Indian Institute of Science (IISc) and included in Comprehensive Mobility Plan (CMP-2019). 
The Government of Tamil Nadu had sought an extension of the Yellow line from Bommasandra to Hosur Railway Station, located in Hosur, a city near the Karnataka-Tamil Nadu border. The Karnataka Government has advised the Tamil Nadu Government to conduct a feasibility study at its own cost. Progress on construction Operationalisation timeline Lines Overview Purple Line The Purple Line is aligned east to southwest in Namma Metro and connects Whitefield (Kadugodi) in the east with Challaghatta in the southwest. The line is long and has 37 stations. It is elevated on both the east and west sides and has a underground section in the middle. The line passes through prime activity centers of the city (Whitefield, ITPL, Krishnarajapura, MG Road, Majestic, Railway Station, Vidhana Soudha, Mysuru Road, and Kengeri). The first , 6-station stretch (Reach 1) of the Purple Line between Baiyappanahalli in the east and Mahatma Gandhi Road opened on 20 October 2011. This was the first and inaugural section of Namma Metro. The second , 6-station stretch (Reach 2) between Mysore Road and Magadi Road opened on 16 November 2015. The underground section, a stretch from Cubbon Park to Bengaluru City (KSR) Railway Station opened on 29 April 2016, thus linking the east and west sections that were already opened. Opening the underground section completed the entire Purple Line in Phase 1. Under Phase 2, the southwest extension of opened on 30 August 2021. The eastern extension of to Whitefield was under construction. of the eastern extension was opened for service on 26 March 2023 but remained disconnected from the network as a small section was not yet ready (Baiyyappanhalli-KR Puram). On 9 October 2023, Baiyyappanhalli-KR Pura and Kengeri-Challaghatta were also open to the public making the whole line operational from Whitefield to Challaghatta. It connects the industrial and suburban areas of south-west with the CBD (MG Road, Trinity) and the IT areas of the east (Baiyappanahalli, Whitefield). 37 new feeder routes of BMTC were introduced in October 2023 from KR Puram station to ensure last mile connectivity. Green Line The Green Line is aligned north to south and connects Madavara in the north-west to Silk Institute in the south-west, covering a distance of and has 30 stations. It is elevated on both north and south sides and has a underground section in the middle. The Line passes through industrial areas (Peenya, Yeshwanthapur) in the north and also through commercial hubs (Majestic, Chikpete, City Market) and connects large residential catchments in the south (Basavanagudi, Jayanagar, Banashankari, and Thalaghattapura). It is currently being expanded under Phase 2. The first , 10-station stretch (Reach 3/3A) of the Green Line opened 1 March 2014. The stretch connected Sampige Road to Peenya Industry. The second , 3-station stretch (Reach 3B) of the Green Line from Peenya Industry to Nagasandra, opened on 1 May 2015. The third stretch connecting Sampige Road to Yelachenahalli was inaugurated on 17 June 2017 and opened the next day, thereby completing the entire Phase 1. These stretches (including the underground section) were inaugurated by then President late Shri Pranab Mukherjee on 17 March 2017. Under Phase 2, a stretch from Yelachenahalli to Silk Institute was inaugurated on 14 January 2021, making it the first section of Phase 2 to be opened for service. 
The northern extension of to Madavara was opened to the public on 6 November 2024 without an inauguration ceremony; a formal inauguration is planned for a later date. Lines under construction in Phase 2/2A/2B More lines and extensions are under construction. When completed, the 2nd phase (currently underway) will provide connectivity to the city's tech hubs of Electronic City and Whitefield, besides covering the eastern half of the Outer Ring Road (ORR) and providing service to Kempegowda International Airport in the north of the city. Yellow Line The Yellow Line connects Rashtreeya Vidyalaya Road to Bommasandra, covering a distance of and has 16 stations. This line is completely elevated and runs along the ORR and Hosur Road. It will begin at an interchange station (Rashtreeya Vidyalaya Road) and pass through two other interchange stations: Jayadeva Hospital station, where it crosses the Pink Line; and Central Silk Board station, where the Blue Line will terminate. Thus, the line connects Electronic City and the Bommasandra industrial area with the metro network. The line is scheduled to become operational by the end of 2024. On 9 June 2022, the Karnataka Government approved the preparation of a DPR by the Tamil Nadu Government for an extension of the Yellow Line from Bommasandra to Hosur in Tamil Nadu. Pink Line The Pink Line is aligned from south to north and connects Kalena Agrahara in the south with Nagawara in the north, covering a distance of and having 18 stations. It is elevated on the southern side till Tavarakere; the rest is underground up to the northern end at Nagawara station. The line will pass through Jayadeva Hospital station (interchange with the Yellow Line) and Mahatma Gandhi Road (interchange with the Purple Line). It passes through Cantonment and ends at Nagawara, which is planned as an interchange station with the Blue Line. The line is under construction and is expected to open in 2025–26. Blue Line The Blue Line is aligned along the eastern and northern sections of the ORR. It deviates from the ORR at Hebbal and proceeds along Airport Road to Kempegowda International Airport. Construction is being handled in two sections: Phase 2A covers Central Silk Board to Krishnarajapura and Phase 2B covers Krishnarajapura to the airport. In total, the line will be long and have 32 stations. It is elevated almost throughout, with a small underground section near the Yelahanka Air Force base. The line also has a section at grade level closer to the airport. The airport station will be an underground station. This line begins at Central Silk Board (interchange with the Yellow Line), and passes through Krishnarajapura (interchange with the Purple Line) and Nagawara (interchange with the Pink Line). It also passes through Hebbal and Yelahanka. The under-construction line is expected to open in June 2026. In June 2022, BMRCL launched the first-ever U-girder span on the ORR-Airport metro line. Phase 2A and Phase 2B (Krishnarajapura – Yelahanka – Bangalore Airport) will be funded through a $500 million loan from the Asian Development Bank (ADB), which was approved by its board in December 2020. The Japan International Cooperation Agency (JICA) will provide a $318 million loan as well. A formal agreement for this was signed in March 2021. Finances Funding Phase 1 Phase 1 missed nine deadlines, and its cost was revised four times. The initial cost estimate for Phase 1 when it was approved in 2006 was . The increase in length from increased the total cost to . Delays caused further escalations. The cost rose to in 2011 and in 2015. 
The final cost to build Phase 1 was estimated at . Land acquisition for Phase 1 accounted for . The Central and State Governments funded 58.91% of the cost. The remaining 41.09% was secured through loans from domestic and foreign financial institutions. BMRCL secured through long-term loans and by selling bonds, while the remaining cost was funded by the Central Government and the State Government. BMRCL secured loans from several agencies – from the Japan International Cooperation Agency (JICA), from the Housing and Urban Development Corporation Limited (HUDCO), from the Asian Development Bank (ADB), and the rest from the French Development Agency. Approximately 10% of the ₹6,500 crore had to be paid as interest by BMRCL each year. The Federation of Karnataka Chambers of Commerce and Industry (FKCCI) estimated that this amounted to an interest payment of per day. BMRCL stated that the interest component was not that high, but that it was "definitely more than per day". BMRCL announced plans on 13 June 2013 to issue 10-year bonds. The proposed bonds received a credit rating of "IND AA" from India Ratings & Research (Ind-Ra). Namma Metro MD N. Sivasailam announced on 3 August 2013 that the issue of bonds would be postponed as the market was volatile. He stated that the metro would "be in the market soon when it is stable". Phase 2/2A/2B On 3 January 2012, the Karnataka government approved a budget of for Phase 2 of the Namma Metro project. Phase 2 is estimated to cost . Land acquisition is expected to account for . The Central and State Governments will fund around ₹15,000 crore. The State and Central Governments will bear 30% and 20% of the project cost respectively. The remainder will be obtained through senior term loans. BMRCL is permitted to raise up to ₹9,000 crore through loans. On 27 March 2012, the Asian Development Bank (ADB) signed an agreement to lend $250 million to BMRC to part-finance Phase 2. The loan marked the multilateral lending agency's foray into the urban transport sector in South Asia, the ADB said in a press release. The loan, approved by the ADB Board in March 2011, is the first ADB loan to the urban transport sector without recourse to sovereign guarantees. In 2016, the Agence Française de Développement (AFD) sanctioned a ₹1,600 crore loan for the project. The rate of interest on the loan is linked to the Euro Interbank Offered Rate (Euribor) + 130 basis points. In early 2017, the European Investment Bank agreed to lend ₹3,700 crore to BMRCL, with a repayment period of 20 years at a rate of interest lower than the one on the AFD loan. In May 2017, BMRC received in-principle approval from the European Investment Bank (EIB) to fund construction of the Gottigere-Nagawara line through a ₹3,700 crore loan. The line is being co-financed by the Asian Infrastructure Investment Bank and the Central and State Governments. Indian firms Biocon and Infosys announced that they would provide funding for the construction of the Hebbagodi and Electronics City metro stations respectively on the RV Road-Bommasandra metro line. BMRC expects that each firm will contribute towards the project. Biocon CMD Kiran Mazumdar-Shaw stated that the company wanted to fund the project because it would help decongest the city. Both Biocon and Infosys have offices located near the stations. BMRCL secured a $318 million loan from JICA in March 2021, and a $500 million loan from the ADB in August 2021, to fund construction of the ORR-Airport metro. 
The State and Union governments will contribute , and the Karnataka Government will pay an additional for land acquisition. Revenues During the first month after the opening of Reach I, of just 6.7 km, about 1,325,000 people travelled by metro. On average, 41,390 people took the train every day, while the average daily revenue was . The BMRC earned a revenue of in its first month of operation. However, in the first six months of operation, average ridership went down to 24,968. The BMRC earned a total of in the same period. Namma Metro posted a profit of after about one year of operations of Reach I. BMRCL estimates that nearly 80 lakh passengers travelled on the system in its first year of operations. Namma Metro is also pursuing the sale of retail space within metro stations to generate non-fare revenue. The following table shows annual ridership and farebox revenue of Namma Metro since inception. Infrastructure Rolling stock BMRC procured 150 metro coaches, for 50 three-car train sets in DMC-TC-DMC formation, for Phase 1 of Namma Metro from BEML – Hyundai Rotem, at a cost of Rs 1,672.50 crore (Rs 16.72 billion). Coach specifications were as follows: length 20.8 m, width 2.88 m, and height 3.8 m. Each coach had a seating capacity of about 50 and a standing capacity of 306 (on the basis of 8 passengers per square metre). Thus, each train had a capacity of about 1,000. Traction is through four 180 kW motors in each motor coach. The trains have a maximum speed of 80 km/h and an axle load of 15 tonnes. The trains operate on 750 V DC with a bottom-contact third-rail power collection system. Features include stainless-steel-bodied, fully air-conditioned coaches; longitudinal banks of wide seats; wide vestibules between coaches; non-skid and non-slip floor surfaces; Wi-Fi; four wide passenger access doors on each side; wide windows; an automatic voice announcement system; and an electronic information and destination display system. Initial operations began with three-coach trains, each with a capacity of about 1,000 passengers. As ridership increased, all trains were converted to six coaches. The first train set made a trial run in December 2010. In early 2017, BMRC floated tenders for an additional 150 coaches to convert all trains on the two Phase 1 routes to six-coach trains. On 27 March 2017, BEML announced that it had won a ₹1,421 crore contract to supply the coaches. The first six-car train was introduced on the Purple Line on 23 June 2018. By January 2020, all trains had been converted to six coaches. Free Wi-Fi service was made available to commuters on 31 July 2013. Passengers also have emergency voice communication with train staff through a speaker system. Passengers are provided with a call button to communicate with the driver or control center during an emergency. Phase 2/2A/2B In August 2023, BMRCL awarded a contract valued at ₹3,177 crore to BEML to supply 318 coaches (53 trainsets with six coaches each). 96 coaches (16 trainsets) will be deployed on the Pink Line, while 222 coaches (37 trainsets) will be deployed on the Blue Line. On 31 August 2024, BEML revealed the design of the upcoming rolling stock for the Pink Line and Blue Line. The first prototype is expected to be rolled out by early 2025. The coaches for the Blue Line will have specially designed luggage racks for passengers, since the Blue Line connects to Kempegowda International Airport. Power supply In December 2009, the ABB Group was awarded the contract to provide power solutions for the first phase of the planned metro network. 
ABB designed, supplied, installed, and commissioned four substations that receive and distribute electricity, each rated at 66/33 kV, as well as the auxiliary and traction substations. ABB also provided an integrated network management system, or SCADA (supervisory control and data acquisition), to monitor and control the installations. BMRC currently pays BESCOM ₹5.7 per unit of electricity. In 2016, BMRCL signed an agreement with CleanMax Solar to set up solar installations at Baiyapanahalli and Peenya stations. After Phase 2 of the metro is complete, CleanMax Solar will set up similar installations at the metro depots in Challaghatta, Whitefield, Kothanur and Hebbagodi. As per the agreement, CleanMax Solar will bear the cost of installation and the BMRC will pay CleanMax a rate of ₹5.5 per unit of electricity for three years. Following the three-year period, all six installations will be transferred to the BMRC. According to the BMRC director of operations NM Dhoke, "The 1.4 MW installation can generate up to 10,000 units, which help power the depot facilities. However, it is not sufficient to power the trains, but it will help us save 51 crore over 25 years on energy". In February 2019, Alstom was awarded a GBP 62 million contract to provide electrification systems for Phase 2 of the Namma Metro. The company will construct 56 substations to supply power for Phase 2 of the system. Signaling In September 2009, a consortium led by Alstom Projects India Limited was awarded a contract worth to supply the control and signalling system for the first phase of the project. The consortium is led by Alstom and composed of Alstom Transport SA, Thales Group Portugal S A, and Sumitomo Corporation. Alstom will provide the design, manufacture, supply, installation, testing, and commissioning of the train control and signalling system, and Thales will provide the design, installation, testing, and commissioning of the telecommunication system for Phase 1 of the metro system. It includes the Urbalis 200 Automatic Train Control system, which will ensure optimal safety, flexible operations and heightened passenger comfort. The integrated control center at Baiyyappanahalli has direct communication with trains, and stations are fitted with CCTV and visual and audio service information. Passengers have emergency voice communication with train staff. Stations There are 66 stations on the Namma Metro network. Majestic station is the largest, with a total floor area of . Initially, there were no toilets at Namma Metro stations. BMRCL eventually heeded public demand, and the metro's first toilets were opened at Baiyappanahalli and Indiranagar stations on 21 June 2013. As of February 2017, there were 33 ATMs at Namma Metro stations. The 12 underground stations built during Phase 2 will be smaller in size than the underground stations built in Phase I, to minimise land acquisition costs. All Phase I underground stations were 272 meters long and 24 meters wide, except for Chikkapete and K.R. Market stations, which had the same width but were 240 meters long. In contrast, Phase 2's underground stations are shorter at 210 meters but retain the same width (a rough plan-area comparison is sketched below). On 17 February 2017, Uber announced that it would open booking counters at 12 metro stations by the end of March 2017. The counters enable commuters to book an Uber, and are aimed at commuters who do not have access to the internet or do not have the Uber mobile app installed on their phone. Ola Cabs announced a similar arrangement on 22 February 2017. 
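As a rough, back-of-the-envelope illustration of the land-footprint saving implied by the shorter Phase 2 underground station boxes described above (this uses only the plan dimensions quoted in this article; real land acquisition also covers entrances, vent shafts and construction staging, which are not included):

# Approximate plan-area saving from shortening an underground station box
# from the Phase I dimensions (272 m x 24 m) to the Phase 2 dimensions (210 m x 24 m).
PHASE1_LENGTH_M = 272
PHASE2_LENGTH_M = 210
WIDTH_M = 24
NUM_PHASE2_UG_STATIONS = 12  # number of Phase 2 underground stations quoted above

phase1_area = PHASE1_LENGTH_M * WIDTH_M   # 6,528 sq m
phase2_area = PHASE2_LENGTH_M * WIDTH_M   # 5,040 sq m
saving_per_station = phase1_area - phase2_area

print(f"Plan area saved per station: {saving_per_station} sq m")
print(f"Across {NUM_PHASE2_UG_STATIONS} stations: {saving_per_station * NUM_PHASE2_UG_STATIONS} sq m")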
Depots Under Phase 1, two depots were built, at Baiyyappanahalli and Peenya. For the line extensions in Phase 2, BMRC is building additional depots at Silk Institute on the Green Line and at Challaghatta and Kadugodi for the Purple Line. For the new lines in Phase 2, depots are being built at Kothanur (Pink Line) and at Hebbagodi (Yellow Line). For the Blue Line (airport line, Phase 2A), the Baiyyappanahalli depot is planned to be used, since two depots are already being built for the Purple Line at its ends. In addition, a depot is planned at Doddajala, near the trumpet interchange. The depot at Baiyappanahalli has an operations control center for managing the metro network and also a training center. Vertical gardens The BMRC granted permission to Hydrobloom, a start-up company, to grow hydroponic plants on the pillars of the Namma Metro. Pillars covered with plants are referred to as vertical gardens. Hydrobloom had previously developed a vertical garden on a metro pillar near Rangoli Art Centre, next to the MG Road metro station. The gardens are intended for beautification and to reduce air pollution. Safety BMRCL has two road-cum-rail rescue vehicles that can be used to perform evacuations or to load derailed trains back onto the track. The trains are equipped with derailment prevention equipment, and the tracks are equipped with concrete barriers to prevent trains from leaving the viaduct. The support pillars are earthquake-proof and are designed to have a lifespan of at least 100 years. Trains are equipped with sensors to detect impending collisions, and have automatic braking systems to prevent speed limits from being exceeded. In case of train stoppages midway between stations during emergencies, pavements beside the tracks are provided on elevated viaducts as well as in underground sections. Accessibility Yellow tactile tiles are used at all stations to guide the visually impaired. The tiles start at the ramp and lead to the staircases and lifts. Disabled and elderly passengers can avail of a wheelchair at all metro stations. The wheelchair can be used by the passenger to board a train and then dropped off at the destination station. In February 2017, BBMP and BMRC began a project to upgrade all footpaths along metro routes. The project is estimated to cost ₹40 crore and was scheduled to be completed in 18 months. Rainwater harvesting BMRCL, in a public-private partnership, harvests rainwater from the viaducts on the rail system. The private partner, Karnataka Rural Infrastructure Development Limited (KRIDL), collects the water at multiple points, treats it, and sells it in bulk as potable water. Pipes inside each metro viaduct pillar carry the rainwater from the viaduct down to underground tanks located beneath the median. When these tanks overflow, the water is diverted to 5-metre-deep rainwater harvesting pits. Two rainwater harvesting pits are installed between each pair of pillars, and the average distance between two pillars is 28 meters (a rough pits-per-kilometre count based on these figures is sketched below). As of March 2017, a of the elevated metro is covered by the rainwater harvesting system. With the completion of Phase 2 of the metro, the BMRCL will cover a total of with rainwater harvesting systems. Around 8 crore litres of water are expected to be collected annually. BMRCL also harvests rainwater from the 140-acre depot facility at Peenya. Water will be collected from the sq foot roof and stored in two tanks with a capacity of 50,000 litres each. Rainwater harvesting is also planned in existing and under-construction stations. 
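As a rough illustration of the harvesting layout described above (using only the 28 m average pillar spacing and the two-pits-per-span figure quoted in this article; actual spans and pit counts vary along the alignment):

# Back-of-the-envelope estimate of rainwater harvesting pits per kilometre of viaduct,
# assuming the average figures quoted above: one span ~28 m, two pits per span.
SPAN_LENGTH_M = 28          # average distance between adjacent pillars
PITS_PER_SPAN = 2           # pits installed between each pair of pillars

spans_per_km = 1000 / SPAN_LENGTH_M          # about 35.7 spans per kilometre
pits_per_km = spans_per_km * PITS_PER_SPAN   # about 71 pits per kilometre

print(f"Approximate pits per km of elevated viaduct: {pits_per_km:.0f}")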
The water harvested will be supplied to places where needed, and any excess will be used for groundwater recharge. BMRC has installed a water harvesting system along Reach 1 and will be doing the same for Reaches 3 and 4. BMRC also planned to set up flower beds on Reach 1 with assistance from the horticulture department; however, this work slowed down because garbage contractors were dumping garbage along the median, owing to the lack of a waste management plan in the city. BMRC also planned to rejuvenate Kengeri and Veerasandra lakes using water collected from a nearby corridor. Operations Fare collection The MIFARE DESFire platform, developed by NXP Semiconductors, was selected to manage Automated Fare Collection (AFC) for Namma Metro. The system uses contactless smart tokens, QR-based tickets and contactless smart cards. Tokens are available only for a single journey and are captured by the gates on exit. The QR tickets can be purchased online through the Namma Metro app, Paytm, Yatra or WhatsApp Messenger (through the number +918105556677). Namma Metro has also offered group tickets, paper tickets carrying a higher discount for groups of people, which are manually verified to let the group pass through the gates. Smart cards can be used for multiple journeys. There is currently one type of smart card available on the metro. The BMRC smart card, or Varshik, is priced at ₹50, with ₹50 as a user deposit. The card is rechargeable; however, for online recharges, the card must be presented at the gates between one hour and seven days after the recharge, or at the card top-up terminals installed at all metro stations within 15 days of the recharge, for the recharged amount to be updated on the card. Failing this, the amount is refunded automatically within 30 days, less a 2.5% service fee. The card can be recharged with anywhere from ₹50 to ₹2,500 in increments of ₹50. Initially planned to be valid for one year after the last recharge, the validity has been changed to 10 years. It provides a 5% (earlier 15%) discount on fares (a simple illustration of the recharge and discount rules is sketched below). Saral was available for ₹70. It permitted one day's travel on BMTC non-air-conditioned buses and on the metro. Saral is no longer available. Saraag was available for ₹110. It permitted one day's travel on BMTC air-conditioned buses and on the metro. Saraag is no longer available. Sanchar was available in denominations of ₹10, ₹40, ₹50 and ₹100. Sanchar was withdrawn from 1 March 2017. The National Common Mobility Card (NCMC) can be obtained from Namma Metro stations free of charge by surrendering a Varshik card, and from select bank branches from 30 March. Initially, the NCMC card was also expected to be usable for shopping, fuel and travel anywhere within India, including BMTC and KSRTC ticketing. However, as of September 2023, BMTC and KSRTC do not support NCMC cards. BMRCL began selling tokens through automatic ticket vending machines (ATVMs) on 4 December 2012 at MG Road, Indiranagar and Baiyyappanahalli stations. The service will eventually be expanded to all metro stations. The touchscreen-enabled ATVMs are available in three languages – English, Kannada, and Hindi. Commuters can purchase a single journey token by selecting the destination station or the amount in the ATVM. They can also add value or add trips to the contactless smart card. Commuters can purchase up to eight tickets at a time and can get a printed receipt for card recharges. 
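As an illustrative sketch of the smart-card recharge and discount rules described above (a minimal model using only the figures quoted in this article, namely the ₹50 to ₹2,500 recharge range in ₹50 steps and the 5% card discount; the actual BMRCL fare engine is not public and may differ):

# Minimal sketch of the Varshik smart-card rules quoted above.
# Assumptions: recharge must be Rs 50..2,500 in Rs 50 steps; card fares get a 5% discount.
RECHARGE_MIN = 50
RECHARGE_MAX = 2500
RECHARGE_STEP = 50
CARD_DISCOUNT = 0.05  # 5% (earlier 15%)

def is_valid_recharge(amount: int) -> bool:
    """Check a recharge amount against the range and step rules."""
    return RECHARGE_MIN <= amount <= RECHARGE_MAX and amount % RECHARGE_STEP == 0

def card_fare(token_fare: float) -> float:
    """Fare charged to a smart card, given the equivalent single-journey token fare."""
    return round(token_fare * (1 - CARD_DISCOUNT), 2)

print(is_valid_recharge(500))   # True
print(is_valid_recharge(575))   # False: not a multiple of Rs 50
print(card_fare(30))            # 28.5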
ATVMs accept coins of ₹5 and ₹10 and currency notes of ₹10, ₹20, ₹50, ₹100 and ₹500. However, the ATVMs cannot differentiate between ₹1 and ₹2 coins. In November 2016, BMRC began accepting online payments to recharge smart cards. Approximately 68% of passengers on the metro use smart tokens and 32% use smart cards. The metro system in Bengaluru charges full fare for children above three years of age, which is against the Indian government circular dated 6 March 2020, No. TC-II/2910/2016/child fare/VIP. Circular number 12 states that "Children under five years of age shall be carried free and purchase of any ticket is not required". "In case of children of age 5 years to under 12 years of age (in case of no berth) only half of the applicable fare shall be charged and in this case, a minimum distance of charge shall not be applicable" (CC30 of 2017 dated 24 April 2017). Similarly, the Karnataka government's public transport system does not collect fares from children under the age of 6 and collects only half fares from children under 12 years of age. The Bengaluru metro decided to use a height of 3 ft as the criterion for classifying a passenger as a child, although the average height of Indian children at 3 years is much more than 90 cm. Since BMRCL is a joint venture of the Government of India and the Government of Karnataka, it should be expected to follow Government of India/Karnataka rules. On 26 March 2023, Prime Minister Narendra Modi symbolically launched the long-awaited National Common Mobility Card (NCMC) in Bengaluru when he travelled on the newly inaugurated Whitefield stretch of the Purple Line using the NCMC. Frequency The metro service runs between 05:00 and 23:00 hours. The service starts at 07:00 hours on Sundays. There are trains every 8 minutes between 08:00 and 20:00, and every 10 minutes at other times. On weekdays, trains operate at 4-minute intervals between 09:00 and 10:00. The headway is slated to decrease to once every three minutes after completion of Phase I. The end-to-end travel time on the Purple Line is 35 minutes, and on the Green Line 45 minutes. Metro services have occasionally operated beyond 23:00 hours. Services are usually extended on festival days or when an international cricket match is held in the city. Ridership The Purple Line's Reach-1 of 6.7 km was the first to open, in October 2011. On the first three days of operations of Reach-1, 169,019 people rode the metro. By the end of the fourth day, about 200,000 passengers had commuted on Namma Metro. In the first month since the opening of Reach-1, about 1,325,000 people had travelled by metro. Thus, on average, 41,390 people took the train each day during the first month. However, the average ridership in the first six months of operation dropped to 24,968. The northern section of the Green Line (Reach 3, 3A – 9.9 km) opened in March 2014. About 25,000 passengers travelled on the northern section on the opening day. In the first month of operations of the Green Line stretch, 762,000 people used the line, at a daily average of 24,605. Reach-1, Reach-3/3A and Reach-2 (opened in November 2015) operated independently until the east–west underground section of the Purple Line (connecting Reaches 1 and 2) was opened in April 2016. Once the east and west reaches were inter-connected by the intermediate underground section of the Purple Line, ridership surged to nearly 100,000 a day on the line within the first few days. 
After the north–south underground section was opened (simultaneously with the elevated Reaches 4 and 4A in the south), the network provided connectivity in all four directions with interchange between the lines, and this increased ridership further, reaching between 350,000 and 360,000 on average by December 2017. As ridership kept growing, overcrowding became a serious concern. Hence, 3-coach trains were converted to 6 coaches. The first six-car train was introduced on the Purple Line on 23 June 2018. By January 2020, all trains had been converted to six coaches. During 2019–20, the annual ridership was 174.22 million (a daily average of 477,315). The maximum daily ridership of 601,164 was recorded on 25 October 2019, while the highest daily revenue of ₹1.67 crore was recorded on 2 March 2020. In January 2020, ridership averaged 518,074. Due to the COVID-19 pandemic, services were halted from 25 March 2020 onwards. Operations resumed on 7 September 2020. In the second wave, operations were again halted from 27 April 2021 to 20 June 2021. Services remained limited throughout the pandemic period, and ridership was low due to enforcement of constrained capacity and social distancing norms, as stipulated by the state government. Namma Metro recorded its highest daily ridership of 920,562 passengers on 6 December 2024, due to Diljit Dosanjh's music concert at the Bangalore International Exhibition Centre (BIEC). Speed The system is designed for a maximum train speed of . However, the Research Designs and Standards Organisation (RDSO) fixed the speed at which trains are allowed to operate commercially as on straight sections, on curves, and in stations. Security The Karnataka State Industrial Security Force (KISF) is responsible for security on the Namma Metro. Bangalore Police conducted detailed mock drills at metro stations for the first time on 25 March 2017. Police officials stated that the drill was to assess readiness to deal with situations such as hoax bomb calls, terrorist infiltration, or attacks. Sniffer dogs, a bomb detection and disposal squad, an anti-sabotage squad, and Quick Reaction Teams (QRTs) were involved in the drill. Police had previously conducted basic mock drills, but this was the first one aimed at dealing with specific threats. The chosen day was a Saturday, when crowds are usually larger, in order to make the drill more challenging for officers. BMRCL had not been informed of the drill in advance. Laws The Bangalore Metro Rail (Carriage and Ticket) Rules 2011 limit the weight of personal baggage to 15 kg. Rule 3 says: "No person shall, while traveling in metro railway, carry with him any goods other than small baggage containing personal belongings not exceeding 60cm x 45cm x 25cm in size and 15kg in weight, except with the prior approval of the metro railway administration." The rules also prohibit carrying explosive, inflammable, and poisonous substances. The Metro Railway (Operations and Maintenance) Act, 2002, imposes fines and, in some cases, jail sentences for offences committed on the metro. Anyone sabotaging a train or maliciously hurting or attempting to hurt other passengers while travelling in the metro can face imprisonment for up to 10 years. Pasting posters or drawing graffiti on the walls of stations or trains is punishable by a fine of ₹1,000 or imprisonment for up to six months. Travelling in an inebriated state or creating a nuisance in the train is punishable by a ₹500 fine. 
Passengers are monitored at security checkpoints, and those who are causing trouble, are heavily drunk, or are carrying forbidden items are not permitted to board. Spitting on the metro premises is punishable by a fine of ₹100. Mobile app The BMRCL launched a Namma Metro app for Android devices in 2013. However, it had limited features. The app was officially re-launched on 4 November 2016 with additional features. The app was developed by the Centre for Development of Advanced Computing (CDAC). The app allows users to purchase and recharge smart cards, locate nearby metro stations, and view information related to parking, train frequency, the route map, and fare details. WhatsApp chatbot BMRCL introduced a WhatsApp chatbot named Bhagya, which provides easy recharge of smart cards, locations of nearby metro stations, and information regarding train frequency and fare details. Fire safety Namma Metro has a dedicated fire team to take care of operations and maintenance of the firefighting systems installed in metro stations. The team conducts regular mock exercises and liaises with the State Fire Department for any assistance in case of a fire emergency. In popular culture Several films and commercials have been shot on the Namma Metro premises. The BMRCL charges for shooting inside a metro station, and for shoots inside a metro train during peak hours between 6 am and 10 pm. During non-peak hours, from 10 pm to 4 am, the agency charges for shoots inside metro stations and inside a train. The film producers must also make a security deposit of , as insurance for the metro property. Film shoots are permitted during three slots – 6 to 8 am, 12 pm to 2 pm, and 9 to 11 pm. Kannada films can avail of lower rates. The first film to shoot on Namma Metro was the Kannada blockbuster Ugramm in 2013. Later, many other Kannada movies, including the 2015 film Rana Vikrama starring Puneeth Rajkumar and Adah Sharma, were shot using Namma Metro's facilities. The 2012 Malayalam film 22 Female Kottayam, starring Fahadh Faasil and Rima Kallingal, has a song sequence shot in a Namma Metro coach. In December 2016, the Tamil film Imaikkaa Nodigal, starring Nayanthara, became the first film to shoot scenes inside the tunnel stretch of the Purple Line. According to BMRC officials, prior to this film, one Kannada film, one Telugu film and several commercials had been shot on the Namma Metro premises. In February 2017, a mobile game based on Namma Metro was released. It is called Bangalore Metro Simulator 2017, and allows users to drive on both the Purple Line and Green Line. Controversies In August 2021, the Karnataka government requested large corporations with offices on the Outer Ring Road (ORR) to consider extending remote work policies until the end of 2022 to reduce traffic congestion during the soon-to-begin metro rail construction on the stretch from Silk Board to KR Puram. After questions were raised by IT companies and employees awaiting a return to their offices, the government issued a clarification that this was not mandatory. On 10 January 2023, the rebar cage of an under-construction metro pillar on the ORR stretch of the Blue Line fell sideways onto a passing vehicle, leading to the deaths of a woman and her child who were riding a scooter. Some officials of the BMRCL and of the Nagarjuna Construction Company (with whom the BMRCL had a contract) have been booked. This has led to allegations in the media of corruption and negligence on the part of the civic bodies. 
On 22 September 2023, Fidias Panayiotou, a Cypriot YouTuber who initially gained fame for hugging Elon Musk, sneaked onto a train by jumping over the ticket gates. The incident was recorded and posted to his YouTube channel. Many people criticized the YouTuber for unethical behavior and for stealing money from the city, while some criticized BMRCL for the absence of security guards at the stations. After the opening of the Krishnarajapuram-Whitefield stretch of the Purple Line, metro trains were packed due to the spike in ridership. Many people online compared the crowding to that on Mumbai local trains. The Yellow Line, despite having most of its civil work completed, is still not operational due to a lack of rolling stock, which was supposed to be delivered by the Chinese rolling stock manufacturer CRRC. The tender was awarded to CRRC in February 2020 to deliver 216 new coaches within 173 weeks. Only one completed train set has been received so far, and more trains were expected to arrive in August 2024. However, the trains are yet to be received, and the opening date for the much-anticipated Yellow Line has been postponed many times. The public has reacted to the delay by questioning the government's decision to partner with a Chinese manufacturer for the coaches. In October 2023, a proposal was made to rename the Namma Metro after the 12th-century Kannada philosopher Basaveshwara. Many citizens urged that the original name of the metro be kept, with some telling the government to focus on other infrastructure projects in Bengaluru. Several instances of individuals jumping in front of trains in successful or unsuccessful suicide attempts have been recorded, with the latest case on 3 August 2024. More often than not, these cases have resulted in multiple train delays and inconvenience on the entire line. Network map
Technology
India
null
2410184
https://en.wikipedia.org/wiki/Tau%20neutrino
Tau neutrino
The tau neutrino or tauon neutrino is an elementary particle which has the symbol and zero electric charge. Together with the tau (), it forms the third generation of leptons, hence the name tau neutrino. Its existence was immediately implied after the tau particle was detected in a series of experiments between 1974 and 1977 by Martin Lewis Perl and his colleagues at the SLAC–LBL group. The discovery of the tau neutrino was announced in July 2000 by the DONUT collaboration (Direct Observation of the Nu Tau). In 2024, the IceCube Neutrino Observatory published findings of seven astrophysical tau neutrino candidates. Discovery The DONUT experiment at Fermilab was built during the 1990s specifically to detect the tau neutrino. These efforts came to fruition in July 2000, when the DONUT collaboration reported its detection. The tau neutrino is the last of the leptons to have been discovered, and is the second most recently discovered particle of the Standard Model (i.e., it was observed 12 years before the discovery of the Higgs boson in 2012).
Physical sciences
Fermions
Physics
2410571
https://en.wikipedia.org/wiki/Kepler%20problem
Kepler problem
In classical mechanics, the Kepler problem is a special case of the two-body problem, in which the two bodies interact by a central force that varies in strength as the inverse square of the distance between them. The force may be either attractive or repulsive. The problem is to find the position or speed of the two bodies over time given their masses, positions, and velocities. Using classical mechanics, the solution can be expressed as a Kepler orbit using six orbital elements. The Kepler problem is named after Johannes Kepler, who proposed Kepler's laws of planetary motion (which are part of classical mechanics and solved the problem for the orbits of the planets) and investigated the types of forces that would result in orbits obeying those laws (called Kepler's inverse problem). For a discussion of the Kepler problem specific to radial orbits, see Radial trajectory. General relativity provides more accurate solutions to the two-body problem, especially in strong gravitational fields. Applications The inverse square law behind the Kepler problem is the most important central force law. The Kepler problem is important in celestial mechanics, since Newtonian gravity obeys an inverse square law. Examples include a satellite moving about a planet, a planet about its sun, or two binary stars about each other. The Kepler problem is also important in the motion of two charged particles, since Coulomb's law of electrostatics also obeys an inverse square law. The Kepler problem and the simple harmonic oscillator problem are the two most fundamental problems in classical mechanics. They are the only two problems that have closed orbits for every possible set of initial conditions, i.e., return to their starting point with the same velocity (Bertrand's theorem). The Kepler problem also conserves the Laplace–Runge–Lenz vector, which has since been generalized to include other interactions. The solution of the Kepler problem allowed scientists to show that planetary motion could be explained entirely by classical mechanics and Newton's law of gravity; the scientific explanation of planetary motion played an important role in ushering in the Enlightenment. History The Kepler problem begins with the empirical results of Johannes Kepler, arduously derived by analysis of the astronomical observations of Tycho Brahe. After some 70 attempts to match the data to circular orbits, Kepler hit upon the idea of the elliptic orbit. He eventually summarized his results in the form of three laws of planetary motion. What is now called the Kepler problem was first discussed by Isaac Newton as a major part of his Principia. His "Theorema I" begins with the first two of his three axioms or laws of motion and results in Kepler's second law of planetary motion. Next Newton proves his "Theorema II", which shows that if Kepler's second law holds, then the force involved must be along the line between the two bodies. In other words, Newton proves what today might be called the "inverse Kepler problem": the orbit characteristics require the force to depend on the inverse square of the distance. Mathematical definition The central force F between two objects varies in strength as the inverse square of the distance r between them: where k is a constant and represents the unit vector along the line between them. The force may be either attractive (k < 0) or repulsive (k > 0). 
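The displayed formula for the force appears to have been lost in extraction; as a hedged reconstruction consistent with the sign convention stated above (attractive for k < 0, repulsive for k > 0), the standard form is:

\[
\mathbf{F}(\mathbf{r}) = \frac{k}{r^{2}}\,\hat{\mathbf{r}}, \qquad \hat{\mathbf{r}} = \frac{\mathbf{r}}{r},
\]

where \(\mathbf{r}\) is the separation vector between the two bodies and \(r\) its magnitude; for Newtonian gravity \(k = -G m_{1} m_{2}\), which is negative and hence attractive.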
The corresponding scalar potential is: Solution of the Kepler problem The equation of motion for the radius of a particle of mass moving in a central potential is given by Lagrange's equations and the angular momentum is conserved. For illustration, the first term on the left-hand side is zero for circular orbits, and the applied inwards force equals the centripetal force requirement , as expected. If L is not zero the definition of angular momentum allows a change of independent variable from to giving the new equation of motion that is independent of time The expansion of the first term is This equation becomes quasilinear on making the change of variables and multiplying both sides by After substitution and rearrangement: For an inverse-square force law such as the gravitational or electrostatic potential, the scalar potential can be written The orbit can be derived from the general equation whose solution is the constant plus a simple sinusoid where (the eccentricity) and (the phase offset) are constants of integration. This is the general formula for a conic section that has one focus at the origin; corresponds to a circle, corresponds to an ellipse, corresponds to a parabola, and corresponds to a hyperbola. The eccentricity is related to the total energy (cf. the Laplace–Runge–Lenz vector) Comparing these formulae shows that corresponds to an ellipse (all solutions which are closed orbits are ellipses), corresponds to a parabola, and corresponds to a hyperbola. In particular, for perfectly circular orbits (the central force exactly equals the centripetal force requirement, which determines the required angular velocity for a given circular radius). For a repulsive force (k > 0) only e > 1 applies.
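Most of the displayed equations in the derivation above appear to have dropped out during extraction. Without reproducing the full derivation, a hedged sketch of the standard results the text refers to (writing m for the mass of the orbiting particle, strictly the reduced mass of the two-body system, L for its angular momentum, E for the total energy, e for the eccentricity and \(\theta_{0}\) for the phase offset) is:

\[
V(r) = \frac{k}{r}, \qquad \frac{1}{r} = -\frac{m k}{L^{2}}\bigl(1 + e\cos(\theta-\theta_{0})\bigr), \qquad e = \sqrt{1 + \frac{2 E L^{2}}{m k^{2}}},
\]

so that e = 0 (a circular orbit) corresponds to \(E = -m k^{2}/(2 L^{2})\), bound elliptical orbits to E < 0, the parabola to E = 0, and hyperbolic orbits to E > 0, matching the classification given in the text.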
Physical sciences
Classical mechanics
Physics
2410595
https://en.wikipedia.org/wiki/Tigecycline
Tigecycline
Tigecycline, sold under the brand name Tygacil, is a tetracycline antibiotic medication for a number of bacterial infections. It is a glycylcycline-class drug that is administered intravenously. It was developed in response to the growing rate of antibiotic-resistant bacteria such as Staphylococcus aureus, Acinetobacter baumannii, and E. coli. As a tetracycline derivative antibiotic, its structural modifications have expanded its therapeutic activity to include Gram-positive and Gram-negative organisms, including those with multi-drug resistance. It was given a U.S. Food and Drug Administration (FDA) fast-track approval and was approved on 17 June 2005. It was approved for medical use in the European Union in April 2006. It was removed from the World Health Organization's List of Essential Medicines in 2019. The World Health Organization classifies tigecycline as critically important for human medicine. Medical uses Antibacterial use Tigecycline is used to treat different kinds of bacterial infections, including complicated skin and skin structure infections, complicated intra-abdominal infections and community-acquired bacterial pneumonia. Tigecycline is a glycylcycline antibiotic that covers MRSA and Gram-negative organisms: Tigecycline can treat complicated skin and skin structure infections caused by Escherichia coli, vancomycin-susceptible Enterococcus faecalis, methicillin-resistant Staphylococcus aureus (MRSA), Streptococcus agalactiae, Streptococcus anginosus grp., Streptococcus pyogenes, Enterobacter cloacae, Klebsiella pneumoniae, and Bacteroides fragilis. Tigecycline is indicated for the treatment of complicated intra-abdominal infections caused by Citrobacter freundii, Enterobacter cloacae, Escherichia coli, Klebsiella oxytoca, Klebsiella pneumoniae, vancomycin-susceptible Enterococcus faecalis, methicillin-resistant Staphylococcus aureus (MRSA), Streptococcus anginosus grp., Bacteroides fragilis, Bacteroides thetaiotaomicron, Bacteroides uniformis, Bacteroides vulgatus, Clostridium perfringens, and Peptostreptococcus micros. Tigecycline may be used for treatment of community-acquired bacterial pneumonia caused by penicillin-susceptible Streptococcus pneumoniae, Haemophilus influenzae that does not produce beta-lactamase, and Legionella pneumophila. Tigecycline is given intravenously and has activity against a variety of Gram-positive and Gram-negative bacterial pathogens, many of which are resistant to existing antibiotics. Tigecycline successfully completed phase III trials in which it was at least equal to intravenous vancomycin and aztreonam for treating complicated skin and skin structure infections, and to intravenous imipenem and cilastatin for treating complicated intra-abdominal infections. Tigecycline is active against many Gram-positive bacteria, Gram-negative bacteria and anaerobes – including activity against methicillin-resistant Staphylococcus aureus (MRSA), Stenotrophomonas maltophilia, Haemophilus influenzae, and Neisseria gonorrhoeae (with MIC values reported at 2 μg/mL) and multi-drug resistant strains of Acinetobacter baumannii. It has no activity against Pseudomonas spp. or Proteus spp. The drug is licensed for the treatment of skin and soft tissue infections as well as intra-abdominal infections. Tigecycline is also active against Clostridioides difficile strains. Most C. 
difficile isolates have MICs <0.25 for tigecycline. The European Society of Clinical Microbiology and Infectious Diseases recommends tigecycline as a potential salvage therapy for severe and/or complicated or refractory Clostridioides difficile infection. Tigecycline can also be used in vulnerable populations such as immunocompromised patients or patients with cancer. Non-antibacterial use It is well established that tigecycline works as an effective antibiotic; however, it may have other properties that are not yet fully understood. Minocycline has been shown to have anti-inflammatory and anti-apoptotic activities, inhibition of proteolysis, and suppression of angiogenesis and tumor metastasis. This is a feature not unique to minocycline, with many tetracyclines exhibiting non-antibiotic clinical benefits. Tigecycline has shown in vitro and in vivo activity against acute myeloid leukemia. The antileukemic activity of tigecycline can be attributed to the inhibition of mitochondrial protein translation in eukaryotic cells. Leukemic cells have an increased dependence on mitochondrial function, causing a heightened sensitivity to tigecycline. Tigecycline has also shown anti-cancer properties against several other kinds of tumors, including non-small cell lung cancer, gastric cancer, hepatocellular carcinoma, and glioblastoma. It also shows good activity against the causative agent of pythiosis. Susceptibility data Tigecycline targets both Gram-positive and Gram-negative bacteria, including a few key multi-drug resistant pathogens. The following represents MIC susceptibility data for a few medically significant bacterial pathogens. Escherichia coli: 0.015 μg/mL – 4 μg/mL Klebsiella pneumoniae: 0.06 μg/mL – 16 μg/mL Staphylococcus aureus (methicillin-resistant): 0.03 μg/mL – 2 μg/mL Tigecycline generally has poor activity against most strains of Pseudomonas. Liver or kidney problems Tigecycline does not require dose adjustment for people with mild to moderate liver problems. However, in people with severe liver problems, dosing should be decreased and closely monitored. Tigecycline does not require dose changes in people with poor kidney function or people on hemodialysis. Resistance mechanisms Bacterial resistance to tigecycline in Enterobacteriaceae (such as E. coli) is often caused by genetic mutations leading to an up-regulation of bacterial efflux pumps, such as the RND-type efflux pump AcrAB. Some bacterial species, such as Pseudomonas spp., can be naturally resistant to tigecycline through the constant over-expression of such efflux pumps. In some Enterobacteriaceae species, mutations in ribosomal genes such as rpsJ have been found to cause resistance to tigecycline. Side effects As a tetracycline derivative, tigecycline exhibits side effects similar to those of the class. Gastrointestinal (GI) symptoms are the most commonly reported side effect. Common side effects of tigecycline include nausea and vomiting. Nausea (26%) and vomiting (18%) tend to be mild or moderate and usually occur during the first two days of therapy. Rare adverse effects (<2%) include: swelling, pain, and irritation at the injection site, anorexia, jaundice, hepatic dysfunction, pruritus, acute pancreatitis, and increased prothrombin time. Precautions Precaution is needed when tigecycline is given to individuals with tetracycline hypersensitivity, pregnant women, and children. It has been found to cause fetal harm when administered during pregnancy and is therefore classified as pregnancy category D. 
In rats and rabbits, tigecycline crossed the placenta and was found in fetal tissues, and it is associated with slightly lower birth weights as well as slower bone ossification. Even though it was not considered teratogenic, tigecycline should be avoided unless the benefits outweigh the risks. In addition, its use during childhood can cause yellow-grey-brown discoloration of the teeth, and it should not be used in children unless necessary. Moreover, there are clinical reports of tigecycline-induced acute pancreatitis, with particular relevance to patients also diagnosed with cystic fibrosis. Tigecycline showed an increased mortality in patients treated for hospital-acquired pneumonia, especially ventilator-associated pneumonia (a non-approved use), but also in patients with complicated skin and skin structure infections, complicated intra-abdominal infections and diabetic foot infection. The increased mortality was in comparison with other treatments for the same types of infections. The difference was not statistically significant for any infection type, but mortality was numerically greater for every infection type with tigecycline treatment, and this prompted a black box warning by the FDA. Black box warning The FDA issued a black box warning in September 2010 for tigecycline regarding an increased risk of death compared to other appropriate treatments. As a result of the increase in the total death rate (the cause of which is unknown) in individuals taking this drug, tigecycline is reserved for situations in which alternative treatment is not suitable. The FDA updated the black box warning in 2013. Drug interactions Tigecycline has been found to interact with medications such as: Warfarin: Since both tigecycline and warfarin bind to serum or plasma proteins, there is potential for protein-binding interactions, such that one drug will have more effect than the other. Although dose adjustment is not necessary, INR and prothrombin time should be monitored if the two are given concurrently. Oral contraceptives: The effectiveness of oral contraceptives is decreased with concurrent use due to a reduction in the concentration levels of the oral contraceptives. However, the mechanism behind these drug interactions has not been fully analyzed. History Minocycline was a commonly used tetracycline synthesized at Lederle Laboratories in 1970, but antibiotic resistance to the drug grew in prevalence throughout the 1970s and 1980s. While the problem of antibiotic resistance was known to scientists during the 1980s, apathy meant that little federal attention was given to the emerging crisis. However, by the late 1980s the worldwide threat began to be treated more seriously, which led to renewed funding of antibiotic research. In 1993, researchers in the same laboratories that first synthesized minocycline created a new generation of tetracycline antibacterial agents, known as the glycylcyclines. These antibiotics were the first new drugs of the tetracycline class to be reported since the discovery of minocycline in 1970. The glycylcyclines were found to be active against a broad spectrum of tetracycline-susceptible and tetracycline-resistant Gram-negative and Gram-positive aerobic and anaerobic bacteria. This initial research resulted in numerous studies on the antibacterial activity of various glycylcyclines, with extra focus put on N,N-dimethylglycyl-amino derivatives due to their reported potency. The aforementioned research culminated in a 1999 paper describing the discovery of a compound known as GAR-936, which would later be known as tigecycline. 
Mechanism of action Tigecycline is a broad-spectrum antibiotic that acts as a protein synthesis inhibitor. It exhibits bacteriostatic activity by binding to the 30S ribosomal subunit of bacteria and thereby blocking the interaction of aminoacyl-tRNA with the A site of the ribosome. In addition, tigecycline has demonstrated bactericidal activity against isolates of S. pneumoniae and L. pneumophila. Studies have shown that tigecycline binds to the 70S ribosome with 5-fold and >100-fold greater affinity than minocycline and tetracycline, respectively. As previously mentioned, tigecycline still binds to the A site of the 30S ribosomal subunit; however, the binding of the novel antibiotic involves substantial interactions with residues of helix H34 of that same subunit. These interactions are not observed in the binding of tetracycline. The findings indicate that tigecycline likely has a unique mechanism of action that allows it to evade ribosomal protection mechanisms. It is a third-generation tetracycline derivative within a class called glycylcyclines, which carry an N,N-dimethylglycylamido (DMG) moiety attached to the 9-position of tetracycline ring D. With structural modifications as a 9-DMG derivative of minocycline, tigecycline has been found to improve minimal inhibitory concentrations against Gram-negative and Gram-positive organisms when compared to tetracyclines. Pharmacokinetics Tigecycline is metabolized through glucuronidation into glucuronide conjugates and an N-acetyl-9-aminominocycline metabolite. Therefore, dose adjustments are needed for patients with severe hepatic impairment. Moreover, it is primarily eliminated unchanged in the feces and secondarily eliminated by the kidneys. No renal adjustments are necessary. Society and culture Approval It is approved to treat complicated skin and soft tissue infections (cSSTI), complicated intra-abdominal infections (cIAI), and community-acquired bacterial pneumonia (CAP) in individuals 18 years and older. In the United Kingdom it is approved in adults and in children from the age of eight years for the treatment of complicated skin and soft tissue infections (excluding diabetic foot infections) and complicated intra-abdominal infections in situations where other alternative antibiotics are not suitable. Other names GAR-936 Tygacil Tigeplug (marketed by Biocon, India) Tigilyn (marketed by Real Value Therapy Pharmaceuticals in Myanmar, manufactured by Lyka) TIGILITE (marketed in India by Scutonix Lifesciences, Bombay)
Biology and health sciences
Antibiotics
Health
2411002
https://en.wikipedia.org/wiki/Controlled-access%20highway
Controlled-access highway
A controlled-access highway is a type of highway that has been designed for high-speed vehicular traffic, with all traffic flow—ingress and egress—regulated. Common English terms are freeway, motorway, and expressway. Other similar terms include throughway or thruway and parkway. Some of these may be limited-access highways, although this term can also refer to a class of highways with somewhat less isolation from other traffic. In countries following the Vienna convention, the motorway qualification implies that walking and parking are forbidden. A fully controlled-access highway provides an unhindered flow of traffic, with no traffic signals, intersections or property access. They are free of any at-grade crossings with other roads, railways, or pedestrian paths, which are instead carried by overpasses and underpasses. Entrances and exits to the highway are provided at interchanges by slip roads (ramps), which allow for speed changes between the highway and arterials and collector roads. On the controlled-access highway, opposing directions of travel are generally separated by a median strip or central reservation containing a traffic barrier or grass. Elimination of conflicts with other directions of traffic dramatically improves safety, while increasing traffic capacity and speed. Controlled-access highways evolved during the first half of the 20th century. Italy was the first country in the world to build controlled-access highways reserved for fast traffic and for motor vehicles only. Italy opened its first autostrada in 1924, A8, connecting Milan to Varese. Germany began to build its first controlled-access autobahn without speed limits ( on what is now A555, then referred to as a dual highway) in 1932 between Cologne and Bonn. It then rapidly constructed the first nationwide system of such roads. The first North American freeways (known as parkways) opened in the New York City area in the 1920s. Britain, heavily influenced by the railways, did not build its first motorway, the Preston By-pass (M6), until 1958. Most technologically advanced nations feature an extensive network of freeways or motorways to provide high-capacity urban travel, or high-speed rural travel, or both. Many have a national-level or even international-level (e.g. European E route) system of route numbering. Definition standards There are several international standards that give some definitions of words such as motorways, but there is no formal definition of the English language words such as freeway, motorway, and expressway, or of the equivalent words in other languages such as , , , , that are accepted worldwide—in most cases these words are defined by local statute or design standards or regional international treaties. Descriptions that are widely used include: Vienna Convention on Road Signs and Signals "Motorway" means a road specially designed and built for motor traffic that does not serve properties bordering on it, and that: Is provided, except at special points or temporarily, with separate carriageways for the two directions of traffic, separated from each other either by a dividing strip not intended for traffic or, exceptionally, by other means; Does not cross at level with any road, railway or tramway track, or footpath; and, Is specially sign-posted as a motorway; One green or blue symbol (like ) appears at motorway entry in countries that follow the Vienna Convention. Exits are marked with another symbol: . The definitions of "motorway" from the OECD and PIARC are almost identical. 
British Standards Motorway: Limited-access dual carriageway road, not crossed on the same level by other traffic lanes, for the exclusive use of certain classes of motor vehicle. ITE (including CITE) Freeway: A divided major roadway with full control of access and with no crossings at grade. This definition applies to toll as well as toll-free roads. Freeway A: This designates roadways with greater visual complexity and high traffic volumes. Usually this type of freeway will be found in metropolitan areas in or near the central core and will operate through much of the early evening hours of darkness at or near design capacity. Freeway B: This designates all other divided roadways with full control of access where lighting is needed. In the European Union, for statistical and safety purposes, some distinction might be made between motorway and expressway. For instance a principal arterial might be considered as: Roads serving long distance and mainly interurban movements. Includes motorways (urban or rural) and expressways (road which does not serve properties bordering on it and which is provided with separate carriageways for the two directions of traffic). Principal arterials may cross through urban areas, serving suburban movements. The traffic is characterized by high speeds and full or partial access control (interchanges or junctions controlled by traffic lights). Other roads leading to a principal arterial are connected to it through side collector roads. In this view, CARE's definition stands that a motorway is understood as a public road with dual carriageways and at least two lanes each way. All entrances and exits are signposted and all interchanges are grade separated. Central barrier or median present throughout the road. No crossing is permitted, while stopping is permitted only in an emergency. Restricted access to motor vehicles, prohibited to pedestrians, animals, pedal cycles, mopeds, agricultural vehicles. The minimum speed is not lower than and the maximum speed is not higher than (except Germany where no speed limit is defined). Motorways are designed to carry heavy traffic at high speed with the lowest possible number of accidents. They are also designed to collect long-distance traffic from other roads, so that conflicts between long-distance traffic and local traffic are avoided. According to the common European definition, a motorway is defined as "a road, specially designed and built for motor traffic, which does not serve properties bordering on it, and which: (a) is provided, except at special points or temporarily, with separate carriageways for the two directions of traffic, separated from each other, either by a dividing strip not intended for traffic, or exceptionally by other means; (b) does not cross at level with any road, railway or tramway track, or footpath; (c) is specially sign-posted as a motorway and is reserved for specific categories of road motor vehicles." Urban motorways are also included in this definition. However, the respective national definitions and the type of roads covered may present slight differences in different EU countries. History The first version of modern controlled-access highways evolved during the first half of the 20th century. The Long Island Motor Parkway on Long Island, New York, opened in 1908 as a private venture, was the world's first limited-access roadway. It included many modern features, including banked turns, guard rails and reinforced concrete tarmac. 
Traffic could turn left between the parkway and connectors, crossing oncoming traffic, so it was not a controlled-access highway (or "freeway" as later defined by the federal government's Manual on Uniform Traffic Control Devices). Modern controlled-access highways originated in the early 1920s in response to the rapidly increasing use of the automobile, the demand for faster movement between cities and as a consequence of improvements in paving processes, techniques and materials. These original high-speed roads were referred to as "dual highways" and have been modernized and are still in use today. Italy was the first country in the world to build controlled-access highways reserved for fast traffic and for motor vehicles only. The Autostrada dei Laghi ("Lakes Motorway") connected Milan to Lake Como and Lake Maggiore, and is now parts of the A8 and A9 motorways. It was devised by Piero Puricelli and was inaugurated in 1924. This motorway, called autostrada, contained only one lane in each direction and no interchanges. The Bronx River Parkway was the first road in North America to utilize a median strip to separate the opposing lanes, to be constructed through a park and where intersecting streets crossed over bridges. The Southern State Parkway opened in 1927, while the Long Island Motor Parkway was closed in 1937 and replaced by the Northern State Parkway (opened 1931) and the contiguous Grand Central Parkway (opened 1936). In Germany, construction of the Bonn-Cologne Autobahn began in 1929 and was opened in 1932 by Konrad Adenauer, then the mayor of Cologne. The German Autobahn became the first nationwide highway system. In Canada, the first precursor with semi-controlled access was The Middle Road between Hamilton and Toronto, which featured a median divider between opposing traffic flow, as well as the nation's first cloverleaf interchange. This highway developed into the Queen Elizabeth Way, which featured a cloverleaf and trumpet interchange when it opened in 1937, and until the Second World War, boasted the longest illuminated stretch of roadway built. A decade later, the first section of Highway 401 was opened, based on earlier designs. It has since gone on to become the busiest highway in the world. The word freeway was first used in February 1930 by Edward M. Bassett. Bassett argued that roads should be classified into three basic types: highways, parkways, and freeways. In Bassett's zoning and property law-based system, abutting property owners have the rights of light, air and access to highways, but not parkways and freeways; the latter two are distinguished in that the purpose of a parkway is recreation, while the purpose of a freeway is movement. Thus, as originally conceived, a freeway is simply a strip of public land devoted to movement to which abutting property owners do not have rights of light, air or access. Design Freeways, by definition, have no at-grade intersections with other roads, railroads or multi-use trails. Therefore, no traffic signals are needed and through traffic on freeways does not normally need to stop at traffic signals. Some countries, such as the United States, allow for limited exceptions: some movable bridges, for instance the Interstate Bridge on Interstate 5 between Oregon and Washington, do require drivers to stop for ship traffic. The crossing of freeways by other routes is typically achieved with grade separation either in the form of underpasses or overpasses. 
In addition to sidewalks (pavements) attached to roads that cross a freeway, specialized pedestrian footbridges or tunnels may also be provided. These structures enable pedestrians and cyclists to cross the freeway at that point without a detour to the nearest road crossing.

Access to freeways is typically provided only at grade-separated interchanges, though lower-standard right-in/right-out (left-in/left-out in countries that drive on the left) access can be used for direct connections to side roads. In many cases, sophisticated interchanges allow for smooth, uninterrupted transitions between intersecting freeways and busy arterial roads. However, sometimes it is necessary to exit onto a surface road to transfer from one freeway to another. One example in the United States (notorious for the resulting congestion) is the connection from Interstate 70 to the Pennsylvania Turnpike (Interstate 70 and Interstate 76) through the town of Breezewood, Pennsylvania.

Speed limits are generally higher on freeways and are occasionally nonexistent (as on much of Germany's Autobahn network). Because higher speeds reduce decision time, freeways are usually equipped with a larger number of guide signs than other roads, and the signs themselves are physically larger. Guide signs are often mounted on overpasses or overhead gantries so that drivers can see where each lane goes. Exit numbers are commonly derived from the exit's distance in miles or kilometers from the start of the freeway. In some areas, there are public rest areas or service areas on freeways, as well as emergency phones on the shoulder at regular intervals.

In the United States, mileposts usually start at the southern or westernmost point on the freeway (either its terminus or the state line). California, Ohio and Nevada use postmile systems in which the markers indicate mileage through the state's individual counties. However, Nevada and Ohio also use the standard milepost system concurrently with their respective postmile systems. California numbers its exits off its freeways according to a milepost system but does not use milepost markers.

In Europe and some other countries, motorways typically have similar characteristics, such as:
A typical design speed in the range of .
Minimum values for horizontal curve radii around .
Maximum longitudinal gradients typically not exceeding 4% to 5%.
Cross sections incorporating a minimum of two through-traffic lanes for each direction of travel, with a typical width of each, separated by a central median.
An obstacle-free zone varying from , or alternatively installation of appropriate vehicle restraint systems.
Proper design of grade-separated interchanges to provide for the movement of traffic between two or more roadways on different levels.
More frequent (compared to other road types) construction of tunnels and overpasses, requiring complex equipment and methods of operation.
Installation of highly efficient road equipment and traffic control devices.

Cross sections

A controlled-access highway may have only two lanes, in which case it is known as a two-lane expressway. Such roads are often undivided and are sometimes built when traffic volumes are low, right-of-way is limited or funding is low; they may be upgraded later. They may be designed for easy conversion to one side of a four-lane freeway, with the right of way for the second carriageway sometimes already acquired. (For example, most of the Mountain Parkway in eastern Kentucky is two lanes, but work has begun to make all of it four-lane.) These are often called Super two roads.
Several such roads are infamous for a high rate of lethal crashes; an outcome because they were designed for short sight distances (sufficient for freeways without oncoming traffic, but insufficient for the years in service as two-lane road with oncoming traffic). An example of such a "Highway to Hell" was European route E4 from Gävle to Axmartavlan, Sweden. The high rate of crashes with severe personal injuries on that (and similar) roads did not cease until a median crash barrier was installed, transforming the fatal crashes into non-fatal crashes. Otherwise, freeways typically have at least two lanes in each direction; some busy ones can have as many as 16 or more lanes in total. In San Diego, California, Interstate 5 has a similar system of express and local lanes for a maximum width of 21 lanes on a segment between Interstate 805 and California State Route 56. In Mississauga, Ontario, Highway 401 uses collector-express lanes for a total of 18 lanes through its intersection with Highway 403/Highway 410 and Highway 427. These wide freeways may use separate collector and express lanes to separate through traffic from local traffic, or special high-occupancy vehicle lanes, either as a special restriction on the innermost lane or a separate roadway, to encourage carpooling. These HOV lanes, or roadways open to all traffic, can be reversible lanes, providing more capacity in the direction of heavy traffic, and reversing direction before traffic switches. Sometimes a collector/distributor road, a shorter version of a local lane, shifts weaving between closely spaced interchanges to a separate roadway or altogether eliminates it. In some parts of the world, notably parts of the US, frontage roads form an integral part of the freeway system. These parallel surface roads provide a transition between high-speed "through" traffic and local traffic. Frequent slip-ramps provide access between the freeway and the frontage road, which in turn provides direct access to local roads and businesses. Except on some two-lane freeways (and very rarely on wider freeways), a median separates the opposite directions of traffic. This strip may be as simple as a grassy area, or may include a crash barrier such as a "Jersey barrier" or an "Ontario Tall Wall" to prevent head-on collisions. On some freeways, the two carriageways are built on different alignments; this may be done to make use of available corridors in a mountainous area or to provide narrower corridors through dense urban areas. Control of access Control of access relates to a legal status which limits the types of vehicles that can use a highway, as well as a road design that limits the points at which they can access it. Major arterial roads will often have partial access control, meaning that side roads will intersect the main road at grade, instead of using interchanges, but driveways may not connect directly to the main road, and drivers must use intersecting roads to access adjacent land. At arterial junctions with relatively quiet side roads, traffic is controlled mainly by two-way stop signs which do not impose significant interruptions on traffic using the main highway. Roundabouts are often used at busier intersections in Europe because they help minimize interruptions in flow, while traffic signals that create greater interference with traffic are still preferred in North America. There may be occasional interchanges with other major arterial roads. 
Examples include US 23 between SR 15's eastern terminus and Delaware, Ohio, along with SR 15 between its eastern terminus and I-75, US 30, SR 29/US 33, and US 35 in western and central Ohio. This type of road is sometimes called an expressway. Non-motorized access on freeways Freeways are usually limited to motor vehicles of a minimum power or weight; signs may prohibit cyclists, pedestrians and equestrians and impose a minimum speed. It is possible for non-motorized traffic to use facilities within the same right-of-way, such as sidewalks constructed along freeway-standard bridges and multi-use paths next to freeways such as the Suncoast Trail along the Suncoast Parkway in Florida. In some US jurisdictions, especially where freeways replace existing roads, non-motorized access on freeways is permitted. Different states of the United States have different laws. Cycling on freeways in Arizona may be prohibited only where there is an alternative route judged equal or better for cycling. Wyoming, the second least densely populated state, allows cycling on all freeways. Oregon allows bicycles except on specific urban freeways in Portland and Medford. In countries such as the United Kingdom new motorways require an Act of Parliament to ensure restricted right of way. Since upgrading an existing road (the "King's Highway") to a full motorway will result in extinguishing the right of access of certain groups such as pedestrians, cyclists and slow-moving traffic, many controlled access roads are not full motorways. In some cases motorways are linked by short stretches of road where alternative rights of way are not practicable such as the Dartford Crossing (the furthest downstream public crossing of the River Thames) or where it was not economic to build a motorway alongside the existing road such as the former Cumberland Gap. The A1 is a good example of piece-wise upgrading to motorway standard—as of January 2013, the route had five stretches of motorway (designated as A1(M)), reducing to four stretches in March 2018 with completion of the A1(M) through North Yorkshire. Construction techniques The most frequent way freeways are laid out is by building them from the ground up after obstructions such as forestry or buildings are cleared away. Sometimes they deplete farmland, but other methods have been developed for economic, social and even environmental reasons. Full freeways are sometimes made by converting at-grade expressways or by replacing at-grade intersections with overpasses; however, in the US, any at-grade intersection that ends a freeway often remains an at-grade intersection. Often, when there is a two-lane undivided freeway or expressway, it is converted by constructing a parallel twin corridor, and leaving a median between the two travel directions. The median-side travel lane of the old two-way corridor becomes a passing lane. Other techniques involve building a new carriageway on the side of a divided highway that has a lot of private access on one side and sometimes has long driveways on the other side since an easement for widening comes into place, especially in rural areas. When a third carriageway is added, sometimes it can shift a directional carriageway by (or maybe more depending on land availability) as a way to retain private access on one side that favors over the other. Other methods involve constructing a service drive that shortens the long driveways (typically by less than ). 
Interchanges and access points

An interchange or a junction is a highway layout that permits traffic from one controlled-access highway to access another and vice versa, whereas an access point is a highway layout where traffic from a distributor or local road can join a controlled-access highway. Some countries, such as the United Kingdom, do not distinguish between the two, but others make a distinction; for example, Germany uses the words Kreuz ("cross") or Dreieck ("triangle") for the former and Ausfahrt ("exit") for the latter. In all cases one road crosses the other via a bridge or a tunnel, as opposed to an at-grade crossing. The inter-connecting roads, or slip-roads, which link the two roads, can follow any one of a number of patterns. The actual pattern is determined by a number of factors including local topology, traffic density, land cost, building costs, type of road, etc. In some jurisdictions feeder/distributor lanes are common, especially for cloverleaf interchanges; in others, such as the United Kingdom, where the roundabout interchange is common, feeder/distributor lanes are seldom seen.

Motorway networks in Europe typically distinguish between exits and junctions. An exit leads out of the motorway system, whilst a junction is a crossing between motorways or a split/merge of two motorways. The motorway rules end at exits, but not at junctions. However, on some bridges, motorways, without changing appearance, temporarily end between the two exits closest to the bridge (or tunnel), and continue as dual carriageways. This is done to allow slower vehicles to use the bridge. The Queen Elizabeth II Bridge / Dartford Tunnel on the London Orbital is an example of this. The London Orbital, or the M25, is a motorway surrounding London, but at the last River Thames crossing before its mouth, motorway rules do not apply. (At this crossing the London Orbital is labeled A282 instead.) A few of the more common types of junction are shown below.

Safety

There are many differences between countries in their geography, economy, traffic growth, highway system size, degree of urbanization and motorization, etc.; all of which need to be taken into consideration when comparisons are made. According to some EU papers, safety progress on motorways is the result of several changes, including infrastructure safety and road user behavior (speed or seat belt use), while other matters such as vehicle safety and mobility patterns have an impact that has not been quantified.

Motorways compared with other roads

Motorways are the safest roads by design. While accounting for more than one quarter of all kilometres driven, they contributed only 8% of the total number of European road deaths in 2006. Germany's Federal Highway Research Institute provided International Road Traffic and Accident Database (IRTAD) statistics for the year 2010, comparing overall fatality rates with motorway rates (regardless of traffic intensity).

The German autobahn network illustrates the safety trade-offs of controlled-access highways. The injury crash rate is very low on autobahns, while 22 people died per 1,000 injury crashes; although autobahns have a lower rate than the 29 deaths per 1,000 injury accidents on conventional rural roads, the rate is higher than the risk on urban roads. Speeds are higher on rural roads and autobahns than on urban roads, increasing the severity potential of a crash.
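The severity figures above are simple rates. As an illustration only (the counts below are hypothetical, not the Federal Highway Research Institute's data), deaths per 1,000 injury crashes are obtained as

\[
\text{deaths per 1{,}000 injury crashes} = \frac{\text{number of fatalities}}{\text{number of injury crashes}} \times 1000 ,
\]

so a network recording 660 deaths across 30,000 injury crashes would have a severity of 22 deaths per 1,000 injury crashes, the order of magnitude quoted for autobahns.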
According to ETSC, German motorways without a speed limit, but with a speed recommendation, are 25% more deadly than motorways with a speed limit. Germany also introduced some speed limits on various motorway sections that were previously unrestricted. This generated a reduction in deaths in a range from 20% to 50% on those sections.

Causes of accidents

Speed, in Europe, is considered to be one of the main contributory factors to collisions. Some countries, such as France and Switzerland, have achieved a reduction in deaths through better monitoring of speed. Factors and tools associated with lower speeds include an increase in traffic density; improved speed enforcement and stricter regulation leading to driving-licence withdrawal; safety cameras; penalty points; and higher fines. Some other countries use automatic time-over-distance cameras (also known as section controls) to manage speed.

Fatigue is considered a risk factor more specific to monotonous roads such as motorways, although such data are not monitored or recorded in many countries. According to Vinci Autoroutes, one third of accidents on French motorways are due to drowsy driving. 23% of people killed on French motorways were not wearing seat belts, while 98% of front-seat passengers and 87% of rear-seat passengers wear seat belts.

Fatalities trends

Although safety results do not change much from year to year, some changes have been observed in Europe: motorway fatalities decreased by 41% during the 2006–2015 decade, but increased by 10% between 2014 and 2015. However, taking motorway network length into account to reflect exposure, the data show that fatalities per thousand kilometres of network halved between 2006 and 2015.

Toll effect

A University of Barcelona study suggests that if tolls are implemented on a controlled-access highway, drivers may seek alternative routes to avoid paying the tolls. This may result in a decrease in safety on roads which are not designed for heavy traffic.

Safety in urban areas

In the United Kingdom, there are very few studies regarding the impact of existing and new urban motorways on road traffic accidents. In particular, new urban motorways do not guarantee a reduction in traffic accidents. In Italy, a study performed on the urban motorway A56 Tangenziale di Napoli showed that a reduction of speed leads to a decrease in accidents. In Marseille, France, from June 2009 to May 2010, CEREMA, the French centre for studies on risk, mobility and environment, performed a study on Marius, a network of urban motorways. This study established a link between accidents and traffic variables: for single-vehicle accidents, the 6-minute average speed on the fast lane and the time headway (on every lane); for multiple-vehicle accidents, the occupancy and the time headway (for the middle lane). The Marius network counts 292 injury accidents or fatalities over 1.5 billion vehicle-kilometres, that is, 189 injury accidents or fatalities per billion vehicle-kilometres.
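For readers unfamiliar with exposure-adjusted casualty rates, the figure quoted for the Marius network follows the usual convention of dividing the casualty count by the traffic volume expressed in billions of vehicle-kilometres; the symbols below are generic and are not taken from the CEREMA study itself:

\[
\text{rate per }10^{9}\text{ vehicle-km} = \frac{\text{injury accidents or fatalities}}{\text{vehicle-kilometres travelled}/10^{9}} .
\]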
Some European countries have improved the safety of urban motorways with a set of measures to dynamically manage traffic flow in response to changing volumes, speeds, and incidents, including:
Variable speed limits, lane control, and speed harmonization
Shoulder running with emergency refuge areas
Queue warning and variable messaging
24/7 monitoring of traffic with cameras and/or in-pavement sensors (both to detect incidents and to identify when to reduce speed limits)
Incident management
Automated enforcement
Specialized algorithms for temporary shoulder running, variable speed limits, and/or incident detection and management
Ramp metering (coordinated or independent function)

In 1994, it was assumed that lit urban motorways would be safer than unlit ones. In California, in 2001, a study of urban freeways established some relationships among freeway accidents, traffic flow, weather, and lighting conditions:
it establishes a difference between dry freeways in daylight and wet freeways in darkness;
it establishes that left-lane collisions are more likely induced by volume effects, while right-lane collisions are more closely tied to speed variances in adjacent lanes (in California, people drive in the right lane except when passing).

Environmental effects

Controlled-access highways have been constructed both between major cities as well as within them, leading to the sprawling suburban development found near most modern cities. Highways have been heavily criticized by environmentalists, urbanists, and preservationists for the noise, pollution, and economic shifts they bring. Additionally, they have been criticized by the driving public for the inefficiency with which they handle peak-hour traffic.

Often, rural highways open up vast areas to economic development and municipal services, generally raising property values. In contrast to this, above-grade highways in urban areas are often a source of lowered property values, contributing to urban decay. Even with overpasses and underpasses, neighbourhoods are divided, especially impoverished ones where residents are less likely to own a car, or to have the political and economic influence to resist construction efforts.

Beginning in the early 1970s, the US Congress identified freeways and other urban highways as responsible for most of the noise exposure of the US population. Subsequently, computer models were developed to analyze freeway noise and aid in their design to help minimize noise exposure.

Some cities have implemented freeway removal policies, under which freeways have been demolished and reclaimed as boulevards or parks, notably in Seoul (Cheonggyecheon), Portland (Harbor Drive), New York City (West Side Elevated Highway), Boston (Central Artery), San Francisco (Embarcadero Freeway), Seattle (Alaskan Way Viaduct), and Milwaukee (Park East Freeway).

An alternative to surface or above-ground freeway construction has been the construction of underground urban freeways using tunnelling technologies. This has been employed in Madrid and Prague, as well as the Australian cities of Sydney (which has five such freeways), Brisbane (which has three), and Melbourne (which has two). This has had the benefit of not creating heavily trafficked surface roads and, in the case of Melbourne's EastLink freeway, prevented the destruction of an ecologically sensitive area. Other Australian cities face similar problems (lack of available land, cost of home acquisition, aesthetic problems and community opposition).
Brisbane, which also has to contend with physical boundaries (the Brisbane River) and rapid population increases, has embraced underground freeways. There are currently three open to traffic (Clem Jones Tunnel (CLEM7), Airport Link and Legacy Way) and one (East-West Link) is currently in planning. All of the tunnels are designed to act as an inner-city ring road or bypass system and include provision for public transport, whether underground or in reclaimed space on the surface. However, freeways are not beneficial for road-based public transport services, because the restricted access to the roadway means that it is awkward for passengers to get to the limited number of boarding points unless they drive to them, largely defeating the purpose. In Canada, the extension of Highway 401 toward Detroit, known as the Herb Gray Parkway, has been designed with numerous tunnels and underpasses that provide land for parks and recreational uses. Freeway opponents have found that freeway expansion is often self-defeating: expansion simply generates more traffic. That is, even if traffic congestion is initially shifted from local streets to a new or widened freeway, people will begin to use their cars more and commute from more remote locations. Over time, the freeway and its environs become congested again as both the average number and distance of trips increases. This phenomenon is known as induced demand. Urban planning experts such as Drusilla Van Hengel, Joseph DiMento and Sherry Ryan argue that although properly designed and maintained freeways may be convenient and safe, at least in comparison to uncontrolled roads, they may not expand recreation, employment and education opportunities equally for different ethnic groups, or for people located in certain neighborhoods of any given city. Still, they may open new markets to some small businesses. Construction of urban freeways for the US Interstate Highway System, which began in the late 1950s, led to the demolition of thousands of city blocks and the dislocation of many more thousands of people. The citizens of many inner city areas responded with the freeway and expressway revolts. Through the study of Washington's response, it can be shown that the most effective changes came not from executive or legislative action, but instead from policy implementation. One of the foremost rationales for the creation of the United States Department of Transportation (USDOT) was that an agency was needed to mediate between the conflicting interests of interstates and cities. Initially, these policies came as regulation of the state highway departments. Over time, USDOT officials re-focused highway building from a national level to the local scale. With this shift of perspective came an encouragement for alternative transportation, and locally based planning agencies. 
At present, freeway expansion has largely stalled in the United States, due to a multitude of factors that converged in the 1970s: higher due process requirements prior to taking of private property, increasing land values, increasing costs for construction materials, local opposition to new freeways in urban cores, the passage of the National Environmental Policy Act (which imposed the requirement that each new federally funded project must have an environmental impact statement or report), falling gas tax revenues as a result of the nature of the flat-cent tax (it is not automatically adjusted for inflation), the tax revolt movement, and growing popular support for high-speed mass transit in lieu of new freeways.

Route numbering

United Kingdom

Great Britain

England and Wales

In England and Wales, the numbers of major motorways followed a numbering system separate from that of the A-road network, though based on the same principle of zones. Running clockwise from the M1, the zones were defined for Zones 1 to 4 based on the proposed M2, M3 and M4 motorways. The M5 and M6 numbers were reserved for the other two planned long-distance motorways. The Preston Bypass, the UK's first motorway, should have been numbered A6(M) under the scheme decided upon, but it was decided to keep the number M6 as it had already been applied.

A map showing the "Future Pattern of Principal National Routes" was issued by the Ministry of War Transport in 1946, shortly before the law that allowed roads to be restricted to specified classes of vehicle (the Special Roads Act 1949) was passed. The first section of motorway, the M6 Preston Bypass, opened in 1958, followed by the first major section of motorway (the M1 between Crick and Berrygrove in Watford), which opened in 1959. From then until the 1980s, motorways opened at frequent intervals; by 1972 the first of motorway had been built. Whilst roads outside of urban areas continued to be built throughout the 1970s, opposition to urban routes became more pronounced. Most notably, plans by the Greater London Council for a series of ringways were cancelled following extensive road protests and a rise in costs. In 1986 the single-ring M25 motorway was completed as a compromise. In 1996 the total length of motorways reached .

Motorways in Great Britain, as in numerous European countries, will nearly always have the following characteristics:
No traffic lights (except occasionally on slip roads before reaching the main carriageway).
Exit is nearly always via a numbered junction and slip road, with rare minor exceptions.
Pedestrians, cyclists and vehicles below a specified engine size are banned.
There is a central reservation separating traffic flowing in opposing directions. (The only exception to this is the A38(M) in Birmingham, where the central reservation is replaced by another lane in which the direction of traffic changes depending on the time of day. There was another small spur motorway near Manchester with no solid central reservation, but this was declassified as a motorway in the 2000s.)
No roundabouts on the main carriageway. This is only the case on motorways beginning with M (so-called 'M' class). In the case of upgraded A roads with numbers ending with M (i.e. Ax(M)), roundabouts may exist on the main carriageway where they intersect 'M' class motorways. In all 'M' class motorways bar two, there are no roundabouts except at the point at which the motorway ends or the motorway designation ends.
The only exceptions to this in Great Britain are:
the M271 in Southampton, which has a roundabout on the main carriageway where it meets the M27, but then continues as the M271 after the junction;
on the M60. This came about as a result of renumbering sections of the M62 and M66 motorways near Manchester as the M60, to form a ring around the city. What was formerly the junction between the M62 and M66 now involves the clockwise M60 negotiating a roundabout, while traffic for the eastbound M62 and northbound M66 carries straight on from the M60. This junction, known as Simister Island, has been criticised for the presence of a roundabout and the numbered route turning off.

The A1(M) between the M62 in North Yorkshire and Washington in Tyne and Wear is built to full 'M' class standards without any roundabouts. It has been suggested that this section of the A1(M) should be reclassified as the northern extension of the M1.

It was proposed in 2013 that the Ax(M) format number would be used for the highest standard of a new classification of road referred to in England as "expressways", which would be roads without normal roundabouts or right turns across the central reservation, and with graded junctions. Such roads would have motorway-style restrictions but emergency reservations rather than motorway-standard hard shoulders.

Scotland

In Scotland, where the Scottish Office (superseded by the Scottish Executive in 1999) rather than the Ministry of Transport and Civil Aviation made these decisions, there is no zonal pattern; rather, the A-road rule is strictly enforced. It was decided to reserve the numbers 7, 8 and 9 for Scotland. The M8 follows the route of the A8, and the A90 became part of the M90 when the A90 was re-routed along the path of the A85. Motorways follow an "M"-format, with two exceptions: the A823(M) near Rosyth joining the A823 to the M90, and the A74(M) between the English M6 at Gretna and the M74 at Abington.

Northern Ireland

In Northern Ireland a distinct numbering system is used, which is separate from the rest of the United Kingdom, though the classification of roads along the lines of A, B and C is universal throughout the UK and the Isle of Man. According to a written answer to a parliamentary question to the Northern Ireland Minister for Regional Development, there is no known reason as to how Northern Ireland's road numbering system was devised. However motorways, as in the rest of the UK, are numbered M, with the two major motorways coming from Belfast being numbered M1 and M2. The M12 is a short spur of the M1, with the M22 being a short continuation (originally intended to be a spur) of the M2. There are two other motorways, the short M3 and the M5, as well as a motorway section of the A8 road, known as the A8(M).

Republic of Ireland

In the Republic of Ireland, motorway and national road numbering is quite different from the UK convention. Since the passage of the Roads Act 1993, all motorways are part of, or form, national primary roads. These routes are numbered in series (usually radiating anti-clockwise from Dublin, starting with the N1/M1), using numbers from 1 to 33 (and, separately from the series, 50). Motorways use the number of the route of which they form part, with an M prefix rather than N for national road (or in theory, rather than R for regional road). In most cases, the motorway has been built as a bypass of a road previously forming the national road (e.g.
the M7 bypassing roads previously forming the N7)—the bypassed roads are reclassified as regional roads, although updated signposting may not be provided for some time, and adherence to signage colour conventions is lax (regional roads have black-on-white directional signage, national routes use white-on-green). Under the previous legislation, the Local Government (Roads and Motorways) Act 1974, motorways theoretically existed independently to national roads. However, the short sections of motorway opened during this act, except for the M50, always took their number from the national road that they were bypassing. The older road was not downgraded at this point (indeed, regional roads were not legislated for at this stage). Older signage at certain junctions on the M7 and M11 can be seen reflecting this earlier scheme, where for example N11 and M11 can be seen coexisting. The M50, an entirely new national road, is an exception to the normal inheritance process, as it does not replace a road previously carrying an N number. The M50 was nevertheless legislated in 1994 as the N50 route (it had only a short section of non-motorway section from the Junction 11 Tallaght to Junction 12 Firhouse until its extension as the Southern Cross Motorway). The M50's designation was chosen as a recognisable number. As of 2010, the N34 is the next unused national primary road designation. In theory, a motorway in Ireland could form part of a regional road. Australia In Australia, highway and motorway (also called freeway or expressway) numbers either use alphanumeric route markers with M prefix for motorways, freeways and expressways or national/state route markers. Before the implementation of alpha-numeric route markers, controlled-access highways were marked with a Metroad, National Highway, National Route or State Route Marker. In Sydney, Metroad route markers were used for motorways and freeways, except the Pacific Motorway (then F3 Freeway), which was marked with a National Highway marker. In Brisbane, Metroad, State routes were used for motorways and freeways. In Melbourne, all motorways and freeways used State Route markers. In Western Australia, National Route markers are used for expressways and freeways. After the implementation of alphanumeric route markers, all route markers being used for motorways and freeways in Brisbane, Melbourne and Sydney were replaced with a M marker. In Western Australia, they have not implemented the new system yet. Elsewhere In Hungary, similar to Ireland, motorway numbers can be derived from the original national highway numbers (1–7), with an M prefix attached, e.g. M7 is on the route of the old Highway 7 from Budapest towards Lake Balaton and Croatia. New motorways not following the original Budapest-centred radial highway system get numbers M8, M9, etc., or M0 in the case of the ring road around Budapest. In the Netherlands, motorway numbers can be derived from the original national highway numbers, but with an A (Autosnelweg) prefix attached, like A9. In Germany federal motorways have the prefix A (Autobahn). If the following number is odd, the motorway generally follows a north–south direction, while even-numbered motorways generally follow an east–west direction. Other controlled-access roads (dual carriageways) in Germany can be federal highways (Bundesstraßen), state highways (Landesstraßen), district highways (Kreisstraßen) and city highways (Stadtstraßen), each with their own numbering system. 
In Italy, motorways follow a single numbering, even if managed by different concessionaire companies: they are all marked with the letter "A" (for autostrada; "RA" in the case of motorway junctions, with the exception of the Bereguardo-Pavia junction numbered on the signs as Autostrada A53, and "T" for the international Alpine tunnels) followed by a number. Therefore a motorway with the same numbering can be managed by different concessionaire companies (for example the Autostrada A23 is managed for a stretch by and for the remaining stretch by Autostrade per l'Italia). In New Zealand, as well as in Brazil, Russia, Finland, and the Scandinavian countries, motorway numbers are derived from the state highway route that they form a part of, but unlike Hungary and Ireland, they are not distinguished from non-motorway sections of the same state highway route. In the cases where a new motorway acts as a bypass of a state highway route, the original state highway is either stripped of that status or renumbered. A low road number means a road suitable for long-distance driving. In Belgium, motorways but also some dual carriageways have numbers preceded by an A. However, those that also have an E-number are generally referenced with that one. City rings and bypasses have numbers preceded by an R; these also can be either motorways or dual carriageways. In Croatia, motorway numbering is independent of state route numbering. Motorways are prefixed by an A (for autocesta), as in many other European countries. Some motorways are the result of an upgrade of an older two-lane road, and carry concurrencies with state routes. In some other cases, such as with the A2, following the upgrade, the state route was rerouted onto the frontage road. By country While the design characteristics listed above are generally applicable around the globe, every jurisdiction provides its own specifications and design criteria for controlled-access highways. Africa Algeria In Algeria, the motorway network has about in 2x3 lanes. The network is expanding increasingly, along with other kinds of infrastructure, though this is only true for the northern region of the country, where most of its population lives. And this infrastructure is pretty well developed for North African standards. For the moment, the entire Algerian motorway network is toll-free. The toll stations are being finalized and the launch of the motorway toll is scheduled for early 2021. The maximum speed authorized on the entire network is . Egypt Egypt has many multiple-lane, high-speed motorways. Two routes in the Trans-African Highway network originate in Cairo. Egypt also has multiple highway links with Asia through the Arab Mashreq International Road Network. Egypt has a developing motorway network, connecting Cairo with Alexandria and other cities. Though most of the transport in the country is still done on the national highways, motorways are becoming increasingly an option in road transport within the country. The existing motorways in the country are: Cairo–Alexandria Desert Road: Running between Cairo and Alexandria, with an extension of , it is the main motorway in Egypt. International Coastal Road: It runs from Alexandria to Port Said, along the northern Nile Delta. It has a length of . Also, amongst other cities, it connects Damietta and Baltim. Geish Road: It runs between Helwan and Asyut, along the Nile, also connecting Beni Suef and Minya. Its length is . Ring Road: It serves as an inner ring-road for Cairo. It has a length of . 
Regional Ring Road: It serves as an outer ring road for Cairo, also connecting its suburbs like Helwan and 10th of Ramadan City. Its length is .

Ethiopia

Much of Ethiopia's highway network is developing. Road projects now represent around a quarter of the annual infrastructure budget of the Ethiopian government. Additionally, through the Road Sector Development Program (RSDP), the government has earmarked $4 billion to construct, repair and upgrade roads over the next decade. Ethiopia has over of roads. In 2014, the Addis Ababa–Adama Expressway opened, becoming the first expressway in Ethiopia.

Kenya

The Kenya National Highways Authority is responsible for the maintenance, management, development, and rehabilitation of highways. According to the Kenya Roads Board, Kenya has of roads. Two routes of the Trans-African Highway network cross Kenya: the Cairo–Cape Town Highway and the Lagos–Mombasa Highway. Roads in Kenya are divided into classes:
Class S: "A Highway that connects two or more cities and carries safely a large volume of traffic at the highest speed of operation."
Class A: "A Highway that forms a strategic route and corridor connecting international boundaries at identified immigration entry and exit points and international terminals such as international air or sea ports."
Class B: "A Highway that forms an important national route linking national trading or economic hubs, County Headquarters and other nationally important centers to each other and to the National Capital or to Class A roads."

Morocco

The motorways and expressways of Morocco are a network of multiple-lane, high-speed, controlled-access highways. As of November 2016 the total length of Morocco's motorways was , plus of expressways. Morocco plans to expand the road network: a further of motorways and of expressways are currently under construction in different parts of the country. By 2035 the network is planned to comprise of motorways and of expressways. According to the Moroccan minister, the plan also includes a programme specific to rural roads, for the construction of of roads at an investment of 30 billion dirhams.

Mozambique

Mozambique's highways are classified as a national or primary road (estrada nacional or estrada primária), or as regional – secondary or tertiary – roads (estradas secundárias and estradas terciárias). National roads are given the prefix "N" or "EN" followed by a one- or two-digit number. The numbers generally increase from the south of the country to the north. Regional roads are given the prefix "R", followed by a three-digit number. Mozambique has over of paved roads.

Nigeria

Nigeria has the largest highway network in West Africa. Although many of its highways are poorly maintained, the Federal Roads Maintenance Agency has drastically improved them. Due to Nigeria's strategic location, four routes of the Trans-African Highway network are situated in the country: the Trans-Sahara Highway to Algeria; the Trans-Sahelian Highway to Dakar, Senegal; the Trans–West African Coastal Highway; and the Lagos–Mombasa Highway.

South Africa

In South Africa, the term freeway differs from most other parts of the world. A freeway is a road where certain restrictions apply.
The following are forbidden from using a freeway: a vehicle drawn by an animal; a pedal cycle (such as a bicycle); a motorcycle having an engine with a cylinder capacity not exceeding 50 cm3 or that is propelled by electrical power; a motor tricycle or motor quadricycle; pedestrians Drivers may not use hand signals on a freeway (except in emergencies) and the minimum speed on a freeway is . Drivers in the rightmost lane of multi-carriageway freeways must move to the left if a faster vehicle approaches from behind to overtake. Despite popular opinion that "freeway" means a road with at least two carriageways, single carriageway freeways exist, as is evidenced by the statement that "[South Africa's] roads include of dual carriageway freeway, of single carriageway freeway and of single carriage main road with unlimited access." Americas Argentina Argentina has a national route system. It is connected to the Pan-American Highway. Argentina has a total of over of paved roads. Brazil Although some of Brazilian highway is built to freeway-standard, there is no distinct designation for controlled-access highways in the Brazilian federal and state highway systems. The term autoestrada (Portuguese for "freeway" or "motorway") is not commonly used in Brazil; the terms estrada ("road") and especially rodovia ("highway") are instead preferred. Nevertheless, the most technically advanced freeways in Brazil are defined Class 0 freeways by the National Department of Transport Infrastructure (DNIT). These freeways are built to safely allow for vehicular speeds of up to . In mountainous terrain, the maximum allowable gradient is 5%, and the minimum allowable radius of curvature is (with 12% super-elevation). São Paulo state, with of freeway, has the most in the country. It is also the state with more highways conceded to the private sector. Brazil's first freeway, the Rodovia Anhanguera in São Paulo state, was completed in 1953 as an upgrade of the earlier undivided highway. That same year, construction of the second highway, Rodovia Anchieta, between São Paulo and the Atlantic coast, began. Freeway construction, most of them upgrades of older undivided highways, quickened in the following decades. The current Class 0 freeways include: Rodovia dos Bandeirantes, Rodovia dos Imigrantes, Rodovia Castelo Branco, Rodovia Ayrton Senna/Carvalho Pinto, Rodovia Osvaldo Aranha (also known as "Free-way") and São Paulo's Metropolitan Beltway Rodoanel Mário Covas – all modern, post-1970s highways meeting modern European standards. Other stretches of highway such as the under-construction south BR-101 and Rodovia Régis Bittencourt are of older design standards. British overseas territories A number of the United Kingdom's overseas territories have controlled-access highways, including the Turks and Caicos Islands and Cayman Islands. Canada Canada has no current national system for controlled-access highways. All controlled-access freeways, including sections that form part of the Trans-Canada Highway, are under provincial jurisdiction, and have no numeric continuation across provincial boundaries. The largest networks in the country are in Ontario (400-series highways) and Quebec (Autoroutes). Speed limits are not federally set, since provincial governments set speed limits for their respective regions. These roads are influenced by, and have influenced, US standards, but have design innovations and differences. 
The total length of dual carriageways with controlled access in Canada is , of which are in British Columbia, in Alberta, in Saskatchewan, in Ontario, in Quebec, and in the Maritimes.

El Salvador

The RN-21 (East–West, Boulevard Monseñor Romero) is the very first freeway to be built in El Salvador and in Central America. The freeway passes the northern area of the city of Santa Tecla, La Libertad. It has a small portion serving Antiguo Cuscatlán, La Libertad, and merges with the RN-5 (East–West, Boulevard de Los Proceres/Autopista del Aeropuerto) in San Salvador. The total length of the RN-21 is , and it currently functions as a traffic reliever in the metropolitan area. Although the RN-21 was to be named in honor of the first mayor of San Salvador, Diego de Holguín, due to political reasons it was renamed Boulevard Monseñor Romero, in honor of Óscar Romero. The first phase of the highway was completed in 2009, and the second phase was completed and opened in November 2012.

Mexico

In Mexico, federal highways () are a series of highways that connect with roads from foreign countries or that link two or more states of the Federation.

United States

In the United States, a freeway is defined by the US government's Manual on Uniform Traffic Control Devices as a divided highway with full control of access. This means two things. First, adjoining property owners do not have a legal right of access, meaning that all existing driveways must be removed and access to adjacent private lands must be blocked with fences or walls; instead, frontage roads provide access to properties adjacent to a freeway in many places. Second, traffic on a freeway is "free-flowing": all cross-traffic (and left-turning traffic) is relegated to overpasses or underpasses, so that there are no traffic conflicts on the main line of the highway that would otherwise need to be regulated by traffic lights, stop signs, or other traffic control devices. Achieving such free flow requires the construction of many overpasses, underpasses, and ramp systems. The advantage of grade-separated interchanges is that freeway drivers can almost always maintain their speed at junctions since they do not need to yield to vehicles crossing perpendicular to mainline traffic.

In contrast, an expressway is defined as a divided highway with partial control of access. Expressways may have driveways and at-grade intersections, though these are usually less numerous than on ordinary arterial roads.

This distinction was first developed in 1949 by the Special Committee on Nomenclature of what is now the American Association of State Highway and Transportation Officials. Prior to that distinction, the first freeways had already been completed in 1940: the Pennsylvania Turnpike and the Arroyo Seco Parkway (Pasadena Freeway). In turn, the definitions were incorporated into AASHTO's official standards book, the Manual on Uniform Traffic Control Devices, which would become the national standards book of USDOT under a 1966 federal statute. The same distinction has also been codified into the statutory law of eight states: California, Minnesota, Mississippi, Missouri, Nebraska, North Dakota, Ohio, and Wisconsin. However, each state codified the federal distinction slightly differently. California expressways do not necessarily have to be divided, though they must have at least partial access control.
For both terms to apply, in Wisconsin, a divided highway must be at least four lanes wide; and in Missouri, both terms apply only to divided highways at least long that are not part of the Interstate Highway System. In North Dakota and Mississippi, expressways may have "full or partial" access control and "generally" have grade separations at intersections; a freeway is then defined as an expressway with full access control. Ohio's statute is similar, but instead of the vague word "generally", it imposes a requirement that 50% of an expressway's intersections must be grade-separated for the term to apply. Only Minnesota enacted the exact MUTCD definitions, in May 2008. The term "expressway" is also used in some areas of the country for what the federal government calls "freeways". Where the terms are distinguished, freeways can be characterized as expressways upgraded to full access control, while not all expressways are freeways. Examples in the United States of roads that are technically expressways (under the federal definition), but contain the word "freeway" in their names: State Fair Freeway in Kansas, Chino Valley Freeway in California, Rockaway Freeway in New York, and Shenango Valley Freeway (a portion of US 62) in Pennsylvania. Unlike in some jurisdictions, not all freeways in the US are part of a single national freeway network (although together with non-freeways, they form the National Highway System). For example, many state highways such as California State Route 99 have significant freeway sections. Many sections of the older United States Numbered Highway System have been upgraded to freeways but have kept their existing US Highway numbers. In Puerto Rico, controlled access highways are named autopista. Autopistas are tolled roads in the island, but toll cabins do exist on other types of roads as well. One of the best known autopistas in Puerto Rico is the Autopista Luis A. Ferré (Luis A. Ferré Expressway), which goes from San Juan, the capital to the north, to Ponce, the island's second largest city, to the south. Asia Afghanistan Many highways of Afghanistan were built in the 1960s with American and Soviet assistance. The Soviets built a road and tunnel through the Salang pass in 1964, connecting northern and eastern Afghanistan. A highway connecting the principal cities of Herat, Kandahar, Ghazni, and Kabul with links to highways in neighboring Pakistan formed the primary highway. The historical Highway 1 currently connects the major cities. Afghanistan has over of roads, with being paved. The highway infrastructure is currently going through reconstruction and can often be risky due to the instability of the country. Armenia Armenia has about of paved roads, of which 96% are asphalted. Armenia is connected to Europe through the International E-road network and Asia through the Asian Highway Network. Armenia is a member of the International Road Transport Union and the TIR Convention. Azerbaijan Azerbaijan has about of paved roads; the first paved roads were built during the Russian Empire. The road network, from rural roads to motorways, is today undergoing a rapid modernization with rehabilitations and extensions. For every of national territory, there are of roads. Azerbaijan is connected to Europe through the International E-road network and Asia through the Asian Highway Network. 
China

The expressway network of China, with the national-level expressway system officially known as the National Trunk Highway System (; abbreviated as NTHS), is an integrated system of national and provincial-level expressways in China. By the end of 2019, the total length of China's expressway network reached , the world's largest expressway system by length, having surpassed the overall length of the American Interstate Highway System in 2011. The planned length was by 2020. Expressways in China are a fairly recent addition to a complicated network of roads. According to Chinese government sources, China did not have any expressways before 1988. One of the earliest expressways nationwide was the Jingshi Expressway between Beijing and Shijiazhuang in Hebei province. This expressway now forms part of the Jingzhu Expressway, currently one of the longest expressways nationwide at over .

Georgia

The road network in Georgia consists of of main or international highways in good condition, of which by 2021 roughly are controlled-access highway, while further expansion is ongoing. The of domestic main roads are of mixed quality, although the conditions are improving. Some of local roads are generally in poor condition. Georgia is connected to Europe via the International E-road network and Asia through the Asian Highway Network.

Hong Kong

In Hong Kong, major motorways are numbered from 1 to 10 in addition to their names. Speed limits on expressways typically range from .

India

Expressways (known as "Gatimarg/गतिमार्ग", or "Speedways", in Hindi and other Indian languages) are the highest class of roads in India's road network and currently make up around of the National Highway System, with an additional under various phases of implementation. They are controlled-access highways with a minimum of six or eight lanes, where entry and exit are controlled by the use of slip roads. The expressways are operated and maintained by the Union, through the National Highways Authority of India.

Indonesia

In Indonesia, all expressways (, "obstacle-free road") are tolled, so they are better known as toll roads (Jalan Tol). Indonesia has of expressway so far; almost 70% of its expressways are on the island of Java. In 2009, the Indonesian government planned to expand the expressway network on Java by connecting Merak to Banyuwangi; the total length of this Trans-Java toll road, including expressways in large Javanese cities such as Jakarta, Surabaya and Bandung and their complements, is more than . The Indonesian government has also planned to build the Trans-Sumatra toll road, which connects Banda Aceh to Bakauheni, spanning . In 2012, the government allocated 150 trillion rupiah for the construction of the toll roads. There are three stages of construction of the Trans-Sumatra toll road, which is expected to be connected together by 2025. Other islands in Indonesia, such as Kalimantan and Sulawesi, have also begun constructing expressways, including routes connecting Manado to Makassar in Sulawesi and Pontianak to Balikpapan in Kalimantan. However, there are still no plans to build an expressway in Western New Guinea due to its slow population growth. Indonesia is expected to have at least of expressway in 2030.

Iran

The history of freeways in Iran goes back to before the Iranian Revolution. The first freeway in Iran, between Tehran and Karaj, was built at that time, and construction and studies of many other freeways started as well. Today Iran has about of freeway.
Iraq Iraq's network of highways connects it from the inside to neighboring countries such as Syria, Turkey, Kuwait, Saudi Arabia, Jordan and Iran. When Saddam Hussein visited the United States, he was impressed at the highway style and ordered the highways to be built in American form. Freeway 1 is the longest freeway in the country, connecting from Umm Qasr Port in Basra to Ar Rutba in Anbar, spreading to a new freeway connecting it to Syria and Jordan. Iraq has about of highways, with of them paved. Israel Controlled-access highways in Israel are designated by a blue color. Blue highways are completely grade-separated but may include bus stops and other elements that may slow down traffic on the right lane. Highway 6 is Israel's longest freeway. It will extend to in length, from Shlomi in the north to the Negev Junction in the south. Japan , generally known as , make up the majority of controlled-access highways in Japan. The network boasts an uninterrupted link between Aomori Prefecture at the northern part of Honshū and Kagoshima Prefecture at the southern part of Kyūshū, linking Shikoku as well. Additional expressways serve travellers in Hokkaidō and on Okinawa Island, although those are not connected to the Honshū-Kyūshū-Shikoku grid. Expressways have a combined length of . Lebanon Lebanon has an extensive network of highways that are in varying condition throughout the country. Many highways are part of the Arab Mashreq International Road Network. Some highways have been upgraded to four-lane motorways, including the Beirut–Tripoli highway. Malaysia Controlled-access highways in Malaysia are known as ( – this is also the name for highways). However, some expressways, particularly bridges and tunnels such as the Penang Bridge, do not formally use the expressway name; a small number confusingly use the term highway, which is normally the designation for limited-access roads. Route numbers of designated expressways begin with the letter E. All expressways (excluding a section of the South Klang Valley Expressway, which is a two-lane expressway) are built with dual carriageways and at least two lanes in each direction; urban expressways generally have three or more lanes in each direction. While all expressways are grade separated at major roadways, many urban expressways in the Greater Kuala Lumpur region often have at-grade intersections, including with residential roads and shopfronts, thus do not meet the strict definition of a controlled-access highway. These expressways were previously normal arterial or collector roads that had such intersections, and were not removed when the roads were converted to expressways due to the resulting accessibility and sometimes political issues. Despite this, no expressway allows traffic to cross the median strip (apart from U-turns on a limited number of expressways) and expressways do not have at-grade traffic signals or roundabouts. Expressways have a maximum speed limit of , while speed limits of or lower are typical in built-up areas. As of 2017, expressways have only been designated in Peninsular Malaysia. There are 34 fully or partially open expressways with an approximate total length of . The vast majority of expressways are tolled; the North–South Expressway network, East Coast Expressway and West Coast Expressway predominantly use the ticket system of toll collection, while all other expressways use the barrier system. 
The construction and operation of expressways in Malaysia are usually privatised via concession agreements with the federal government, using the build–operate–transfer system. Pakistan The motorways of Pakistan and expressways of Pakistan are a network of multiple-lane, high-speed, limited-access or controlled-access highways in Pakistan, which are owned, maintained and operated federally by Pakistan's National Highway Authority. The total length of Pakistan's motorways and expressways is as of November 2016. Around of motorways are currently under construction in different parts of the country. Most of these motorway projects will be complete between 2018 and 2020. Pakistan's motorways are part of Pakistan's National Trade Corridor project that aims to link Pakistan's three Arabian Sea ports of Karachi, Port Qasim and Gwadar to the rest of the country. These would further link with Central Asia and China, as proposed in the China Pakistan Economic Corridor. Pakistan's first motorway, the M-2, was inaugurated in November 1997; it is a , six-lane motorway that links Pakistan's federal capital, Islamabad, with Punjab's provincial capital, Lahore. It is ranked among the world's top five speed highways/motorways. Other completed motorways and expressways are M1 Peshawar–Islamabad Motorway, M4 PindiBhattian–Faisalabad-Multan Motorway, E75 Islamabad-Murree–Kashmir Expressway, M3 Lahore–Multan Motorway, M8 Ratadero–Gawader Motorway, E8 Islamabad Expressway, M5 Multan-Sukkur Motorway, M9 Karachi-Hyderabad, Sindh and few others. Philippines Full control-access highways in the Philippines are referred to as expressways, which are usually toll roads. The expressway network is concentrated in Luzon, with the North Luzon Expressway and South Luzon Expressway being the most important ones. The expressway network in Luzon do not form an integrated network, but there are ongoing construction projects to interconnect those highways as well as to decongest the existing roads in the areas they serve. Expressways are being introduced to Visayas and Mindanao through the construction of the Cebu–Cordova Link Expressway in Metro Cebu and Davao City Expressway in Davao City. Saudi Arabia Highways in Saudi Arabia vary from eight-laned roads to small two-lane roads in rural areas. The city highways and other major highways are well maintained, especially the roads in the capital Riyadh. The roads have been constructed to resist the consistently high temperatures and do not reflect the strong sunshine. The other city highways such as the one linking coast to coast are not as great as the inner-city highways but the government is now working on rebuilding those roads. Saudi Arabia is part of the Arab-Mashreq Highway Network and connects to the rest of Asia through the Asian Highway Network. Singapore The expressways of Singapore are special roads that allow motorists to travel quickly from one urban area to another. All of them are dual carriageways with grade-separated access. They usually have three to four lanes in each direction, although there are two-lane carriageways at many expressway—expressway intersections and five-lane carriageways in some places. There are ten expressways, including the new Marina Coastal Expressway. Studies about the feasibility of additional expressways are ongoing. Construction on the first expressway, the Pan Island Expressway, started in 1966. , there are of expressways in Singapore. 
The Singaporean expressway network is connected with the Malaysian expressway network via the Ayer Rajah Expressway (which connects with the Second Link Expressway via the Malaysia–Singapore Second Link) and the Bukit Timah Expressway (which connects with the Eastern Dispersal Link via the Johor–Singapore Causeway). South Korea Since the Gyeongin Expressway linking Seoul and Incheon opened in 1968, the national expressway system in South Korea has expanded to 36 routes, with a total length of as of 2017. Most expressways are four-lane roads, while (26%) have six to ten lanes. The speed limit is typically for routes with four or more lanes, while some sections with fewer curves have a limit of . Expressways in South Korea were originally numbered in order of construction. Since 24 August 2001, they have been numbered in a scheme somewhat similar to that of the Interstate Highway System in the United States. Furthermore, the route markers of South Korean expressways use red, white and blue, similar to those in the US. Arterial routes are designated by two-digit numbers, with north–south routes having odd numbers, and east–west routes having even numbers. Primary routes (i.e. major thoroughfares) have 5 or 0 as their last digit, while secondary routes end in other digits. Branch routes have three-digit route numbers, where the first two digits match the route number of an arterial route. This differs from the American system, whose last two digits match the primary route. Belt lines have three-digit route numbers where the first digit matches the respective city's postal code. This also differs from American numbering. Route numbers in the range 70–99 are not used in South Korea; they are reserved for designations in the event of Korean reunification. The Gyeongbu Expressway kept its Route 1 designation, as it is South Korea's first and most important expressway. Sri Lanka Sri Lanka currently has over of designated expressways serving the southern part of the country. The first stage of the E01 Expressway (Southern Expressway), which opened in 2011, was Sri Lanka's first expressway, spanning a distance of . The second stage of the Southern Expressway opened in 2014 and extends to Matara. The E03 Expressway (Colombo–Katunayake Expressway) opened in 2013 and connects Sri Lanka's largest city, Colombo, with Bandaranaike International Airport, covering a distance of . All E-grade highways in Sri Lanka are access-controlled toll roads with speed limits in the range of . The network is to be expanded to by 2024. Operational (fully or partially): Kottawa-Hambantota, Kottawa-Kerawalapitiya, Colombo-Katunayake, Enderamulla-Kurunegala-Kandy and Kahatuduwa-Pelmadulla. Planned: the Colombo Metropolitan Expressway (Colombo Fort to Peliyagoda, connecting Colombo with the E03 Colombo-Katunayake Expressway). Syria Syria has a well-developed system of motorways in the western half of the country. As the eastern part is underpopulated, roads there have only two lanes. Highways have been important for transport during the ongoing civil war. The main motorways are: M1 - Runs from Homs to Latakia. It also connects Tartus, Baniyas and Jableh. Its length is . M2 - Runs from Damascus to Jdeidat Yabous, on the border with Lebanon. It also connects Al-Sabboura. Its length is . M4 - Runs from Latakia to Saraqib. It also connects Arihah and Jisr al-Shughur. Its length is . It continues to the Iraqi border all the way to Mosul. 
M5 - Often described as the most important highway, it runs through many of the major cities in Syria and continues to the Jordanian border. Taiwan (Republic of China) Taiwan has an extensive road network that includes two types of controlled-access highway: freeways and expressways. Only cars and trucks are allowed onto freeways, the first of which, Freeway 1, was completed in 1974. Expressways allow car and truck traffic as well as motorcycles with engines of 250cc or more. Expressways in Taiwan may be controlled-access highways similar to national freeways or limited-access roads. Most have the status of urban roads or intra-city expressways (as opposed to the highway system), although some are built and maintained by cities. Thailand Controlled-access highways in Thailand are separated into urban expressways, called expressways, which are operated by the Expressway Authority of Thailand and BEM (except the Don Mueang Tollway, which is operated by Don Muang Tollway Public Company Limited) and have a span of , and intercity expressways, called motorways, which have a span of . The network is to be extended to according to the master plan. Uzbekistan Uzbekistan has of roads, about of which were paved. Many of the highways are in need of repair, although their condition has been improving. In 2017, the governments of Kazakhstan and Uzbekistan agreed to open a section of the M39 Highway by the Kazakh border. Vietnam At present, the expressway system of Vietnam is long. Under the government's plan, the national expressway system will have a total length of . The expressway system in Vietnam is separate from the national highway system. Most of the expressways are located in the North, especially around Hanoi. Of the 21 expressways in Vietnam, 8 emanate from Hanoi and 14 are in the north, with a length of . The first expressway in Vietnam was the Ho Chi Minh City - Trung Luong Expressway, which was inaugurated and opened to traffic on February 3, 2010. Currently, most of the expressways in Vietnam are four-lane highways, with some routes, like Ha Noi - Haiphong and Phap Van - Cau Gie, being six-lane. The only elevated expressway in Vietnam is Mai Dich - Thanh Tri Bridge (also known as the third beltway in Hanoi). The cost of building Vietnam's highways is among the highest in the world, with an average cost of $12 million per kilometer. In China, where conditions are similar, highways cost only $5 million per kilometer, while in the US and European countries costs are $3–4 million per kilometer. According to the road traffic laws of Vietnam, an expressway is a road for motor vehicles, with a divider separating opposing traffic directions, no at-grade crossings with intersecting roads, fully equipped facilities to ensure continuous traffic flow, safety and short journey times, and access allowed only at interchanges. Europe Regarding road function, motorways serve exclusively the function of flow. They allow for efficient throughput of usually long-distance motorized traffic, with unhindered flow of traffic, no traffic signals, at-grade intersections or property access, and elimination of conflicts with other directions of traffic, thus dramatically improving both safety and capacity. 
Although roads are under the responsibility of each individual state, including within the European Union, there are some legal conventions (international treaties) and some European directives which give a legal framework for roads of European importance, with the goal of introducing some degree of homogenization between the various members. They basically consider, at European level, three types of roads: motorways, express roads, and ordinary roads. Some European treaties also define aspects such as the range of speed limits and some geometric aspects of roads, in particular for the International E-road network. According to Eurostat: A motorway is a road specially designed and built for motor vehicle traffic, which does not directly provide access to the properties bordering on it. Other characteristics of motorways include: two separated carriageways for the opposing directions of traffic, except at special points or, temporarily, due to carriageway repairs etc.; carriageways that are not crossed at the level of the carriageway by any other road, railway or tramway track, or footpath; and the use of special signposting to indicate the road as a motorway and to exclude specific categories of road vehicles and/or road users. In determining the extent of a motorway, its entry and exit lanes are included irrespective of the location of the motorway signposts. Urban motorways are also included in this term. Most of the European countries use the above motorway definition, but different national definitions of motorways can be found in some countries. It is usually considered that: motorways serve exclusively motorised traffic; motorways have separate carriageways for the two directions of traffic; motorways are not crossed at the same level by other roads, footpaths or railways; traffic entrance and exit is performed at interchanges only; motorways have no access for traffic between interchanges and do not provide access to adjacent land; and motorways are specially signposted. Motorway status is signalled at the entry and exit of the motorway by a symbol conforming to international agreements, but specific to each country. The peripheral northern and eastern regions of the EU have a lower-density motorway network. Within the European Union, there were 26 regions (NUTS level 2) with no motorway network in 2013. Those regions are islands or remote regions, for instance four overseas French regions and Corsica. The Baltic member state of Latvia, as well as four regions in Poland and two regions in each of Bulgaria and Romania, also reported no motorway network; several of these regions bordered non-member neighbouring countries to the east of the EU. European motorways provide reduced accident risks: 50% to 90% lower compared to standard roads, while new motorways reduce injuries by only 7%. Some of the features considered to provide safety on European motorways are central medians, grade-separated interchanges, and access restrictions. Nonetheless, some specific conditions present a higher risk of more severe accidents, such as improper use of the emergency lane, cross-median head-on collisions, and wrong-way accidents. Albania Highways in Albania form part of the recent Albanian road system. Following the collapse of communism in 1991, the first highways in Albania started being constructed. The first was the SH2, connecting Tirana with Durrës via Vora. Since the 2000s, main roadways have drastically improved, though they lack standards in design and road safety. 
This involved the construction of new roadways and the putting of contemporary signs. However, some state roads continue to deteriorate from lack of maintenance while others remain unfinished. Austria The Austrian autobahns (German: Autobahnen) are controlled-access highways in Austria. They are officially called Bundesstraßen A (Bundesautobahnen) under the authority of the federal government according to the Austrian Federal Road Act (Bundesstraßengesetz), not to be confused with the former Bundesstraßen highways maintained by the Austrian states since 2002. Austria currently has 18 Autobahnen, since 1982 built and maintained by the self-financed ASFiNAG stock company in Vienna, which is wholly owned by the Austrian republic and earns revenue from road-user charges and tolls. Each route bears a number as well as an official name with local reference, which however is not displayed on road signs. Unusually for European countries, interchanges (between motorways called Knoten, "knots") are numbered by distance in kilometres starting from where the route begins; this arrangement is also used in the Czech Republic, Slovakia, Hungary, Spain, and most provinces of Canada (and in most American states, albeit in miles). The current Austrian Autobahn network has a total length of . Belgium In 1937, the first motorway between Brussels and Ostend was completed, following the example of neighboring countries such as Germany. It mainly served local industries and tourism as a connection between the capital city and a coastal region. However, the Second World War and the reparation of the complete road network after the war caused a serious delay in the creation of other motorways. In 1949, the first plans were made to build a complete motorway network of that would be integrated with the neighboring networks. Although the plans were ready, the construction of the motorway network was much slower than in neighboring countries because the project was deemed not to be urgent. Because of economic growth in the 1960s, more citizens could afford cars, and the call for good-quality roads was higher than ever before. In each year between 1965 and 1973, over of motorway were built. At the end of the 1970s, the construction of motorways slowed down again due to costs, combined with an economic crisis, more expensive fuel and changing public opinion. In the following years, the only investments done were to complete already started motorway constructions. But most important cities were already connected. In 1981, the responsibilities for construction and maintenance of the motorways shifted from the federal to the regional governments. This sometimes caused tensions between the governments. For example, the part of the ring road around Brussels that crosses Wallonian territory has never been finished, since only Flanders suffers from the unfinished ring. Belgium today has the longest total motorway length per area unit of any country in the world. Most motorway systems in Belgium have at least three lanes in each direction. Nearly all motorways have overhead lighting including those in rural areas. The dense population of Belgium and the still unfinished state of some motorways, such as the ring roads around Brussels and Antwerp cause major traffic congestion on motorways. On an average Monday morning in 2012, there was a total of of traffic jams and the longest traffic jam of the year was , purely on the motorways. 
Bosnia and Herzegovina Bosnia and Herzegovina has more than of highway, which connects Kakanj and Sarajevo. There is a plan to build a highway on Corridor Vc, which will run from the Sava river through Doboj, Sarajevo and Mostar to the Adriatic Sea. The next sections are Kakanj-Drivuša , Zenica Sjever-Drivuša , Svilaj-Odžak , Vlakovo-Tarčin , and Počitelj-Bijača . The speed limit is or in tunnels. Bulgaria Legislation in Bulgaria defines two types of highways: motorways (, ) and expressways (, ). The main differences are that motorways have emergency lanes and the maximum allowed speed limit is , while expressways do not have emergency lanes and the speed limit is . , of motorways are in service, with another under various stages of construction. More than are planned. Also, several expressways are planned. Croatia The primary high-speed motorways in Croatia are called autoceste (singular: autocesta; ), and they are defined as roads with at least two lanes in each direction (including hard shoulder) and a speed limit of not less than . The typical speed limit is . As of 2017, there are of motorways in Croatia. There is also a category known as brza cesta, meaning "expressway". These roads have a speed limit of up to and are not legally required to be grade-separated, but nearly all are. Cyprus Motorways (Greek: αυτοκινητόδρομος, Turkish: Otoyol) connect all cities in Cyprus, although in the territory under de facto Turkish control these do not meet international standards for the definition of motorways. In the areas administered by the Republic of Cyprus, motorway numbers are prefaced with the letter A, and run from A1 to A6, to distinguish them from all other roads, designated B roads. Of the A roads, all are designated motorways, except for the A4, linking Larnaca with Larnaca Airport. Motorways are also distinguishable by the use of green-backed road signs, with standard international graphics, and text in yellow in Greek and white in English, distinguishable from B road signage, which has signs with blue backgrounds. Motorway junctions are theoretically designated with junction numbers, but signage is not consistent in indicating the exit numbers. Czech Republic The Czech Republic currently (2023) has of motorways (dálnice), whose speed limit is (or within a town). The total length should be reached around 2030. The number of a motorway (in red) copies the number of the national route (in blue) which has been replaced by the motorway. There are also roads for motorcars (silnice pro motorová vozidla). Those common roads are not subject to a fee (in the form of a vignette) for vehicles with a total weight up to and their speed limit is , partially up to . Denmark Denmark has a well-covered motorway system today, which has been difficult to build due to the country's geography with many islands. The longest bridges are the Great Belt and the Øresund bridges to Skåne (Scania) in southern Sweden. Both are motorways with dual electrified train tracks added. Finland Finland has of motorway, which is only a small proportion of the whole highway network. More than half of the length of the motorway network consists of six radial motorways originating in Helsinki, to Kirkkonummi (Länsiväylä), Turku (Vt1/E18), Tampere (Vt3/E12), Tuusula (Kt45), Heinola (Vt4/E75) and Vaalimaa (Vt7/E18). These roads have a total length of . The other motorways are rather short sections close to the biggest cities, often designed to be bypasses. 
The motorway section on national roads 4 and 29, between Simo and Tornio, is said to be the northernmost motorway in the world. Finnish motorways do not have a separate road numbering scheme. Instead, they carry national highway numbers. In addition to signposted motorways, there are also some limited-access two-lane expressways, and other grade-separated four-lane expressways (perhaps the most significant example being Ring III near Helsinki). France The autoroute system in France consists largely of toll roads, except around large cities and in parts of the north. It is a network of worth of motorways. Autoroute destinations are shown in blue, while destinations reached through a combination of autoroutes are shown with an added autoroute logo. Toll autoroutes are signalled with the word péage (toll). Germany Germany's network of controlled-access expressways includes all federal Autobahnen and some parts of Bundesstraßen and usually no Landesstraßen (state highways), Kreisstraßen (district highways) nor Gemeindestraßen (municipal highways). The federal Autobahn network has a total length of in 2020, making it one of the densest networks in the world. The German autobahns have no general speed limit for some classes of vehicles (though nearly 30% of the total autobahn network is subject to local and/or conditional limits), but the advisory speed limit (Richtgeschwindigkeit) is . The lower class expressways usually have speed limits of or lower. Greece Greece's motorway network has been extensively modernised throughout the 1980s, 1990s and especially the 2000s, while part of it is still under construction. Most of it was completed by mid 2017 numbering around of motorways, making it the biggest highway network in Southeastern Europe and the Balkans and one of the most advanced in Europe. There are a total of 10 main routes throughout the Greek mainland and Crete, from which some feature numerous branches and auxiliary routes. Most important motorways are the A1 Motorway connecting Greece's two largest cities (Athens and Thessaloniki), the A2 motorway (Egnatia Odos), also known as the "horizontal road axis" of Greece, connecting almost all of Northern Greece from west to east and the A8 motorway (Olympia Odos) connecting Athens and Patras. Another important motorway is the A6 motorway (Attiki Odos), the main beltway of the Athens Metropolitan area. Hungary In Hungary, a controlled-access highway is called an (plural ). Ireland In Ireland the Local Government (Roads and Motorways) Act 1974 made motorways possible, although the first section, the M7 Naas Bypass, did not open until 1983. The first section of the M50 opened in 1990, a part of which was Ireland's first toll motorway, the West-Link. However it would be the 1990s before substantial sections of motorway were opened in Ireland, with the first completed motorway—the M1 motorway—being finished in 2005. Under the Transport 21 infrastructural plan, motorways or high quality dual carriageways were built between Dublin and the major cities of Cork, Galway, Limerick and Waterford by the end of 2010. Other shorter sections of motorway either have been or will be built on some other main routes. In 2007 legislation (the Roads Bill 2007) was created to allow existing roads be designated motorways by order because previously legislation allowed only for newly built roads to be designated motorways. 
As a result, most HQDCs nationwide (other than some sections near Dublin on the N4 and N7, which did not fully meet motorway standards) were reclassified as motorways. The first stage in this process occurred when all the HQDC schemes open or under construction on the N7 and N8, and between Kinnegad and Athlone on the N6 and Kilcullen and south of Carlow on the N9, were reclassified motorway on 24 September 2008. Further sections of dual carriageway were reclassified in 2009. As of December 2011, the Republic of Ireland has around of motorways. Italy The world's first motorway was the Autostrada dei laghi, inaugurated on 21 September 1924 in Italy. It linked Milan to Varese; it was then extended to Como, near the border with Switzerland, inaugurated on 28 June 1925. Piero Puricelli, the engineer who designed this new type of road, decided to cover the expenses by introducing a toll. Other motorways (or autostrade) built before World War II in Italy were Naples-Pompeii, Florence-Pisa, Padua-Venice, Milan-Turin, Milan-Bergamo-Brescia and Rome-Ostia. The total length of the Italian motorway system is about , as of 30 July 2022. To these data are added 13 motorway spur routes, which extend for . The density is of motorway for every of Italian territory. Italian motorways (or autostrade) are mostly managed by concessionaire companies. From 1 October 2012 the granting body is the Ministry of Infrastructure and Transport and no longer Anas and the majority ( in 2009) are subject to toll payments. On Italian motorways, the toll applies to almost all motorways not managed by Anas. The collection of motorway tolls, from a tariff point of view, is managed mainly in two ways: either through the "closed motorway system" (km travelled) or through the "open motorway system" (flat-rate toll). Italy's motorways (or autostrade) have a standard speed limit of for cars. Limits for other vehicles (or when visibility is poor due to weather) are lower. Legal provisions allow operators to set the limit to on their concessions on a voluntary basis if there are three lanes in each direction and a working SICVE, or Safety Tutor, which is a speed-camera system that measures the average speed over a given distance. Type B highway (), commonly but unofficially known as superstrada (Italian equivalent for expressway), is a divided highway with at least two lanes in each direction, paved shoulder on the right, no cross-traffic and no at-grade intersections. Access restrictions on such highways are exactly the same as motorways (or autostrade). The signage at the beginning and the end of the strade extraurbane principali is the same, except the background colour is blue instead of green. The general speed limit on strade extraurbane principali is , unless otherwise indicated. Strade extraurbane principali are not tolled. Latvia There is currently one category of controlled-access highways in Latvia, which are expressways (Latvian:ātrgaitas ceļš) with maximum speed . The first expressway in Latvia opened in October 2023, the Ķekava Bypass, which is a part of the A7. Current length of the expressway network is . Lithuania There are two categories of controlled-access highways in Lithuania: expressways (Lithuanian: greitkeliai) with maximum speed and motorways (Lithuanian: automagistralės) with maximum speed . The first section Vilnius–Kaunas of A1 highway was completed in 1970. Kaunas–Klaipėda section of A1 was completed in 1987. 
Vilnius-Panevėžys (A2 highway) was completed in stages during the 1980s and finished in the 1990s. Complete length of the motorway network is . Expressway network length - . Motorway section between Kaunas and the Polish border is planned to be completed in the 2020s. Netherlands Roads in the Netherlands include at least of motorways and expressways, and with a motorway density of 64 kilometres per 1,000 km2 (103 mi/1,000 mi2), the country has one of the densest motorway networks in the world. About are fully constructed to motorway standards, These are called Autosnelweg or simply snelweg, and numbered and signposted with an A and up to three digits, like A12. They are consistently built with at least two carriageways, guard rails and interchanges with grade separation. Since September 2012, the nationwide maximum speed has been raised to , but on many stretches speed is still limited to . Dutch motorways may only be used by motor vehicles both capable and legally allowed to go at least . In March 2020, the general speed limit on Dutch motorways was lowered to during the day (6 am until 7 pm). At night, the maximum speed is different per stretch, but remained the upper limit. Dutch roads are used with a very high intensity in relation to the network length and traffic congestion is common, due to the country's high population density. Therefore, since 1979 large portions of the motorway network have been equipped with variable message signs and dynamic electronic displays, both of which are aspects of intelligent transportation systems. These signs can show a lower speed limit, as low as , to optimize the flow of heavy traffic, and a variety of other communications. Additionally there are peak, rushhour or plus lanes, which allow motorists to use the hard shoulder as an extra traffic lane in case of congestion. These extra lanes are observed by CCTV cameras from a traffic control center. Less common, but increasingly, separate roadways are created for local/regional traffic and long-distance traffic. This way the number of weaving motions across lanes is reduced, and the traffic capacity per lane of the road is optimised. A special feature of Dutch motorways is the use of Porous Asphalt Concrete, which allows water to drain efficiently, and even in heavy rain no water will splash up, in contrast to concrete or other road surfaces. The Netherlands is the only country that uses PAC this extensively, and the goal is to cover 100% of the motorways with PAC, in spite of the high costs of construction and maintenance. North Macedonia The total motorway network length in North Macedonia is as of Spring 2019. Another are under construction, (Ohrid to Kicevo) and (Skopje to Kosovo border). The stretch from Gostivar to Kicevo is planned to start with construction in 2021. The three motorway routes are A1, which is part of the European corridor E-75, A2 (part of E-65) and the recently built A4 corridor that connects Skopje to Stip. A1 connects the northern border (Serbia) with the southern one (Greece), while A2 traverses the country from the East (Bulgaria border) to West (Albania border), but only the stretch from Kumanovo to Gostivar is a divided motorway, while the rest of the length is either an undivided two-way road or in the process of turning into a motorway. Norway Norway has (2022) of motorways, in addition to of limited-access roads (in Norwegian motortrafikkvei) where pedestrians, bicycles, etc. are forbidden, though with a bit lower standard than true motorway. 
Most of the network serves the big cities, chiefly Oslo, Stavanger and Bergen. The northernmost motorway is, as of 2022, on the E6 just south of Trondheim; see also the E6, E18 and the E39. Most motorways use four-ramp dumbbell interchanges, but roundabout interchanges can also be found. The first motorway was built in 1964, just outside Oslo. The motorways' road marking layout is similar to that in the United States and Canada, featuring a yellow stripe towards the median, and white stripes between the lanes and on the edge. The speed limits are . Poland The highways in Poland are divided into motorways and expressways, both types featuring grade-separated interchanges with all other roads, emergency lanes, feeder lanes, wildlife protection measures and dedicated roadside rest areas. Motorways can only be dual carriageways, while expressways can be dual or, rarely, single carriageways. The start of an expressway in Poland is marked with a sign of a white car on a blue background, while the number sign for an expressway has a red background and white letters, with the letter S preceding a number. Speed limits in Poland are on motorways and on dual-carriageway expressways. The Regulation of the Council of Ministers defines the network of motorways and expressways in Poland totalling about (including about of motorways). As of July 2022, there are of motorways and expressways in operation (58% of the intended network), while contracts for construction of further of motorways and expressways (15% of the intended network) are ongoing. Portugal Portugal was the third country in Europe, after Italy and Germany, to build a motorway (, plural: ), opening, in 1944, the Lisbon-Estádio Nacional section of the present A5 (Autoestrada da Costa do Estoril). Additional motorway sections were built in the 1960s, 1970s and early 1980s. However, the large-scale building of motorways started only in the late 1980s. Currently, Portugal has a very well-developed network of motorways, with an extension of about , that connects all the highly populated coastal regions of the country and the main cities of the less populous interior. This means that 87% of the Portuguese population lives less than 15 minutes' driving time from a motorway access. Unlike the neighbouring Spanish network, most Portuguese motorways are tolled, although there are also some non-tolled highways, mostly in urban areas, like those of Greater Lisbon and Greater Oporto. In the late 1990s and early 2000s, the Government of Portugal created seven shadow toll concessions, the SCUT toll (, no costs for the user). Those concessions included more than of motorways and highways, some of them already built and others built in the following years. However, due to economic and political reasons, the shadow toll concept was abolished between 2010 and 2011, with electronic toll equipment being installed on these motorways to charge their users. Having only electronic tolls, former SCUT motorways can now only be used by vehicles equipped with electronic payment devices or vehicles registered in the system. Portuguese motorways form an independent network (, National Motorway Network) that overlaps with the Fundamental and Complementary subnetworks of the National Highway Network (). Each motorway section overlapping with the Fundamental subnetwork is part of an IP (, Principal route) and each motorway section overlapping with the Complementary subnetwork is part of an IC (, Complementary route). 
Thus, a motorway can overlap with sections of different IP or IC routes and, on the other hand, an IP or IC route can overlap with sections of different motorways. An example is the A22 motorway, which overlaps with sections of the IP1 and IC4 routes; another example is the IP1 route, which overlaps with sections of the A22, A2, A12, A1 and A3 motorways. The National Motorway Network has its own numbering system in which each motorway has a number prefixed by the letter "A". In most cases, motorway signage indicates only its A number. The number of the IP or IC of which a motorway section is a part is not signed, except on some short motorways which lack their own A number. Romania As of 23 December 2024, Romania has of highways in use, with more under construction. The first motorway in Romania was completed in 1972, linking Bucharest and Pitești. The Romanian Government has adopted a General Master Plan for Transport that was approved by the European Union in July 2015, containing the strategy for expanding the road (including motorway) network until 2040, using EU funding. Russia By October 2024, Russia will have a nationwide motorway network with a length of and an expressway network of . The motorways and expressways have the numbering of the Russian federal highway network or their own name, as there is no separate numbering system for motorways and expressways and their sections are mostly part of the Russian federal highway network. The legal speed limit on motorways and expressways is 110 km/h, and 130 km/h on some newly upgraded sections of motorway. Sections of Russian federal highway that have been upgraded to motorway status are marked with green signs. Federal highways that have been upgraded to expressways, or to dual or single carriageways with road junctions, are marked with blue signs. In the classification of Russian federal highway roads, motorways are assigned to technical category IA and expressways to technical category IB. Serbia Motorways () and expressways () are the backbone of the road system in Serbia. There are around of motorways in total; the plan is by the end of 2018. Motorways in Serbia have three lanes (including the emergency lane) in each direction, signs are white-on-green, as in the rest of the former Yugoslavia, and the normal speed limit is . Expressways, unlike motorways, do not have emergency lanes; their signs are white-on-blue and the normal speed limit is . As the Serbian word for motorway is autoput, the "A1", "A2" or "A3" road designations have been used since November 2013. All state roads categorized as class I that are motorways, currently or in the future, are marked with one-digit numbers and known as class Ia. All other roads, which belong to class I, are marked with two-digit numbers and known as class Ib. Expressways belong to class Ib, too. E-numeration is also widely used on motorways. The core of the motorway network is what was, during the Yugoslav period, called the Brotherhood and Unity Highway, which was opened in 1950 and goes from the border with Croatia, through Belgrade, Central Serbia and Niš, to the border with North Macedonia. It was one of the first modern highways in Central-Eastern Europe. It is the most direct link between Central and Western Europe and Greece and Turkey, and subsequently the Middle East. Slovakia Slovakia currently (2022) has of motorways (, D) and expressways (, R), whose speed limit is . 
They are split into expressways and motorways, as in Poland, with expressway designations starting with an R, short for "rýchlostná cesta", but from April 2020 all the expressways in Slovakia have been known as motorways because the expressways are very similar to the motorways in Slovakia. An e-vignette must be paid to use the motorways in Slovakia; previously a sticker vignette was used, but since 2016 payment has been made electronically through a website. Slovenia The highways in Slovenia are the central state roads in Slovenia and are divided into motorways (, AC) and expressways (, HC). Motorways are dual carriageways with a speed limit of . They have white-on-green road signs as in Italy, Croatia and other countries nearby. Expressways are secondary roads, also dual carriageways, but without an emergency lane. They have a speed limit of and have white-on-blue road signs. Spain The Spanish network of autopistas and autovias has a length of , making it the largest in Europe and the third largest in the world. Autopistas are specifically reserved for automobile travel, so all vehicles not able to sustain at least are banned from them. General speed limits are mandated by the Spanish Traffic Law as . Specific limits may be imposed based on road, meteorological or traffic conditions. Spanish legislation requires an alternate route to be provided for slower vehicles. Many, but not all, autopistas are toll roads, which also mandates an alternate toll-free route under the Spanish laws. Sweden Sweden has the largest motorway network in Scandinavia (). It is, however, unevenly distributed. Most motorways are located in the south of the country, where the population density is the highest. The first motorway in Sweden opened in 1953, between Lund and Malmö. Four-lane expressways had been built before; an early example is the E20 between Gothenburg and Alingsås, built in the early 1940s. Most of the current network was built in the 1970s and 1990s. The E6 starts in Trelleborg in southern Sweden and then continues along the Swedish west coast up to the Svinesund Bridge, where Sweden borders Norway. Its length is close to on Swedish territory alone, and it connects four of Scandinavia's six largest cities, Copenhagen, Malmö, Gothenburg and Oslo, as well as around 20 other more or less notable towns and cities. A Swedish route (rather than road) that is partly motorway and accounts for a significant portion of the Swedish motorway network is European route E4, which runs from the border city of Tornio in northern Finland to Helsingborg in southern Sweden. The E4 is the main route that connects the capital, Stockholm, with Scania. All of the E4 south of the city of Gävle is of motorway standard. The part of the E4 that runs through western Stockholm is called Essingeleden and is the busiest road in Sweden. Other highways that have a significant portion of motorway standard are the E20, E18 and E22. Motorways in Sweden are, however, not restricted to European routes; so-called Riksvägar and other regional road types can also be of motorway standard. An example of this is Riksväg 40, the main link between the largest cities in the country, Stockholm and Gothenburg. Notably, not even the majority of the European route network in Sweden is of motorway or even expressway standard. All of this is because road numbering and road standard are separate in Sweden, as in the rest of Scandinavia. 
Switzerland Switzerland has a two-class highway system: motorways, with separated carriageways for oncoming traffic and a standard maximum speed limit of , and expressways, often with oncoming traffic and a standard maximum speed limit of . In Switzerland, as of April 2011, of a planned of motorway had been completed. The country is mountainous with a high proportion of tunnels: there are 220 tunnels totaling , which is over 12% of the total motorway length. Turkey The motorways () of Turkey are a network in constant development. All motorways (O coded), except beltways, are toll roads (using only RFID methods for the roads operated by KGM; cash and credit card payment is also possible for the roads operated by private companies), mostly six lanes wide, illuminated and with a speed limit of . As of 2024, the total length of the motorways is . United Kingdom Great Britain A map showing the Future Pattern of Principal National Routes was issued by the Ministry of War Transport in 1946, shortly before the law that allowed roads to be restricted to specified classes of vehicle (the Special Roads Act 1949) was passed. The first section of motorway, the M6 Preston Bypass, opened in 1958, followed by the first major section of motorway (the M1 between Crick and Berrygrove in Watford), which opened in 1959. From then until the 1980s, motorways opened at frequent intervals; by 1972 the first of motorway had been built. Whilst roads outside of urban areas continued to be built throughout the 1970s, opposition to urban routes became more pronounced. Most notably, plans by the Greater London Council for a series of ringways were cancelled following extensive road protests and a rise in costs. In 1986 the single-ring M25 motorway was completed as a compromise. In 1996 the total length of motorways reached . Motorways in Great Britain, as in numerous European countries, will nearly always have the following characteristics: No traffic lights (except occasionally on slip roads before reaching the main carriageway). Exit is nearly always via a numbered junction and slip road, with rare minor exceptions. Pedestrians, cyclists and vehicles below a specified engine size are banned. There is a central reservation separating traffic flowing in opposing directions (the only exception to this is the A38(M) in Birmingham, where the central reservation is replaced by another lane in which the direction of traffic changes depending on the time of day. There was another small spur motorway near Manchester with no solid central reservation, but this was declassified as a motorway in the 2000s.) No roundabouts on the main carriageway. (This is only the case on motorways beginning with M (so-called M class)). In the case of upgraded A roads with numbers ending with M (i.e. Ax(M)), roundabouts may exist on the main carriageway where they intersect 'M' class motorways. On all M class motorways bar two, there are no roundabouts except at the point at which the motorway ends or the motorway designation ends. The only exceptions to this in Great Britain are: the M271 in Southampton, which has a roundabout on the main carriageway where it meets the M27 but then continues as the M271 after the junction; and a junction on the M60, which came about as a result of renumbering sections of the M62 and M66 motorways near Manchester as the M60, to form a ring around the city. 
What was formerly the junction between the M62 and M66 now involves the clockwise M60 negotiating a roundabout, while traffic for the eastbound M62 and northbound M66 carries straight on from the M60. This junction, known as Simister Island, has also been criticised for the presence of a roundabout and the numbered route turning off. The A1(M) between the M62 in North Yorkshire and Washington in Tyne and Wear is built to full 'M' class standards without any roundabouts. The A74(M) between Gretna and Abington in Scotland is similarly built to full 'M' class standards with no roundabouts. On motorways in Great Britain there were 99 fatalities in 2017 for 69 billion vehicle miles travelled, a reduction from 183 fatalities in 2007; this is equivalent to 1.43 fatalities per billion vehicle miles travelled. Northern Ireland Legal authority existed in the Special Roads Act (Northern Ireland) 1963, similar to that in the 1949 Act. The first motorway to open was the M1 motorway, though it did so under temporary powers until the Special Roads Act had been passed. Work on the motorways continued until the 1970s, when the oil crisis and The Troubles both intervened, causing the abandonment of many schemes. Oceania Australia Australia's major cities, Sydney, Melbourne, Brisbane and Perth, feature a network of freeways within their urban areas, while Canberra, Adelaide, Hobart and the regional centres of Newcastle, Geelong, Gold Coast and Wollongong feature a selection of limited-access routes. Outside these areas traffic volumes do not generally demand freeway-standard access, although heavily trafficked regional corridors such as Sydney–Newcastle (M1 Pacific Motorway (F3)), Sydney–Wollongong (M1 Princes Motorway (F6)), Brisbane–Gold Coast (M1 Pacific Motorway), Melbourne–Geelong (M1 Princes Freeway), Perth-Mandurah (SR2 Kwinana Freeway) and others that form part of major long-distance routes feature high-standard freeway links. The M31 Hume Highway/Freeway/Motorway connecting Sydney and Melbourne, the M23 Federal Highway spur route that connects Canberra with Sydney and the A1/M1 Pacific Highway/Motorway connecting Sydney and Brisbane are the only major interstate highways that are completed to a continuous dual carriageway standard. There are also plans to upgrade the A25 Barton Highway, another spur off the M31 that connects Canberra with Melbourne, to a dual carriageway highway. Although these inter-city highways are dual carriageways, they are not all controlled-access highways. Some of these inter-city highways have driveways to adjacent property and at-grade junctions with smaller roads. Unlike in many other countries, some of Australia's freeways are being opened to cyclists. As the respective state governments upgrade their state's freeways, bicycle lanes are being added and/or shoulders widened alongside the freeways. The state of Queensland is an exception, however, as cyclists are banned from all freeways, including the breakdown lane. Motorways referred to as expressways in Australia include the Hunter Expressway, which connects the Hunter Valley with Newcastle, and the Southern Expressway, which connects Adelaide's outer southern suburbs to the southwestern suburbs. New Zealand The term motorway in New Zealand encompasses multilane divided freeways as well as narrower two- to four-lane undivided expressways with varying degrees of grade separation; the term motorway describes the legal traffic restrictions rather than the type of road. 
New Zealand's motorway network is small due to the nation's low population density and low traffic volumes making it uneconomical to build controlled-access highways outside the major urban centres. New Zealand's first motorway opened in December 1950 near Wellington, running from Johnsonville to Tawa. This motorway now forms the southern part of the Johnsonville-Porirua Motorway and part of State Highway 1. Auckland's first stretch of motorway was opened in 1953 between Ellerslie and Mount Wellington (between present-day exit 435 and exit 438), and now forms part of the Southern Motorway. Most major urban areas in New Zealand feature limited-access highways. Auckland, Wellington, Christchurch, Hamilton, Tauranga, and Dunedin contain motorways, with only Auckland having a substantial motorway network.
Technology
Road transport
null
2413401
https://en.wikipedia.org/wiki/Dissociative%20disorder
Dissociative disorder
Dissociative disorders (DDs) are a range of conditions characterized by significant disruptions or fragmentation "in the normal integration of consciousness, memory, identity, emotion, perception, body representation, motor control, and behavior." Dissociative disorders involve involuntary dissociation as an unconscious defense mechanism, wherein the individual with a dissociative disorder experiences separation in these areas as a means to protect against traumatic stress. Some dissociative disorders are caused by major psychological trauma, though the onset of depersonalization-derealization disorder may be preceded by less severe stress, by the influence of psychoactive substances, or may occur without any discernible trigger. The dissociative disorders listed in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) are as follows: Dissociative identity disorder (DID, formerly multiple personality disorder): the alternation of two or more distinct personality states with impaired recall among personality states. In extreme cases, the host personality is unaware of the other, alternating personalities; however, the alternate personalities can be aware of all the existing personalities. Dissociative amnesia (formerly psychogenic amnesia): the loss of recall memory, specifically episodic memory, typically of or as a reaction to traumatic or stressful events. It is considered the most common dissociative disorder amongst those documented. This disorder can occur abruptly or gradually and may last minutes to years. Dissociative fugue was previously a separate category but is now treated as a specifier for dissociative amnesia, though many patients with dissociative fugue are ultimately diagnosed with dissociative identity disorder. Depersonalization-derealization disorder (DpDr): periods of detachment from self or surroundings which may be experienced as "unreal" (lacking in control of or "outside" self) while retaining awareness that this is a feeling and not reality. Individuals often show little emotion, report "out of body" experiences, distorted perceptions of their environment (fuzziness, blurriness, flatness, cloudiness), difficulty feeling emotions, and difficulty recognizing familiar things, including one's own reflection in a mirror. They may see objects as larger or smaller than their actual size. They may lose certain bodily sensations like hunger and/or thirst. Many patients experience these symptoms continuously every day, while others experience them in discrete episodes lasting one or more hours. The DSM-IV category of dissociative disorder not otherwise specified was split into two diagnoses: other specified dissociative disorder and unspecified dissociative disorder. These categories are used for forms of pathological dissociation that do not fully meet the criteria of the other specified dissociative disorders, or if the correct category has not been determined, or if the disorder is transient. Other specified dissociative disorder (OSDD) has multiple types, with OSDD-1 falling on the spectrum of dissociative identity disorder; it is known as partial DID in the International Classification of Diseases (see below). 
The ICD-11 lists dissociative disorders as: Dissociative neurological symptom disorder Dissociative amnesia Dissociative amnesia with dissociative fugue Trance disorder Possession trance disorder Dissociative identity disorder [complete] Partial dissociative identity disorder Depersonalization-derealization disorder Causes and treatment Dissociative disorders most often develop as a way to cope with psychological trauma. People with dissociative disorders were commonly subjected to chronic physical, sexual, or emotional abuse as children (or, less frequently, an otherwise frightening or highly unpredictable home environment). Some categories of DD, however, can form due to trauma that occurs later in life and is unrelated to abuse, such as war or the death of a loved one. Dissociative disorders, especially dissociative identity disorder (DID), should not be treated with an extraordinary or supernatural status. DDs would be better examined and treated through the lens of any other psychological disorder. Dissociative identity disorder Cause: The cause of dissociative identity disorder is contentious; it is most often considered to be caused either by ongoing childhood trauma that occurs before the ages of six to nine, or as an unintentional product of therapy, fantasy, or other sociogenic factors. Treatment: Long-term psychotherapy to improve the patient's quality of life. Psychotherapy often involves hypnosis (to help a patient remember and work through the trauma), creative art therapy (using creative process to help a person who cannot express their thoughts), cognitive therapy (talk therapy to identify unhealthy and negative beliefs or behaviors), and medications (antidepressants, anti-anxiety medications, or sedatives). These medications can help control the symptoms associated with DID and other DD, but there are no medications yet that specifically treat dissociative disorders. Dissociative amnesia Cause: Psychological trauma. While a history of child abuse is common in patients, it is not a necessary factor in determining if a person will develop dissociative amnesia. Treatment: Psychotherapy counseling or psychosocial therapy which involves talking about the disorder and related issues with a mental health provider. The medication pentothal can sometimes help to restore the memories. The length of an event of dissociative amnesia may be a few minutes or several years. If an episode is associated with a traumatic event, the amnesia may clear up when the person is removed from the traumatic situation. Depersonalization-derealization disorder Cause: While not as strongly linked as other dissociative disorders, there is a correlation between depersonalization-derealization disorder and childhood trauma, especially emotional abuse or neglect. It can also be caused by other forms of stress such as sudden death of a loved one. Treatment: Same treatment as dissociative amnesia. An episode of depersonalization-derealization disorder can be as brief as a few seconds or continue for several years. Neuroscience Differences in brain activity Dissociative disorders are characterized by distinct brain differences in the activation of various brain regions including the inferior parietal lobe, prefrontal cortex, and limbic system. Those with dissociative disorders have higher activity levels in the prefrontal lobe and a more inhibited limbic system on average than healthy controls. 
Heightened corticolimbic inhibition is associated with distinctly dissociative symptoms such as depersonalization and derealization. The function of these symptoms is thought to be a coping mechanism employed in extremely threatening or traumatic events. By inhibiting structures in the limbic system, such as the amygdala, the brain is able to reduce extreme levels of arousal. In the dissociative subtype of PTSD, there is both excessive control of emotions through suppressed limbic structures and insufficient control of emotions in the hyperactivity of the medial prefrontal cortex. Increased activity in the medial prefrontal cortex is associated with non-dissociative symptoms such as re-experiencing and hyperarousal. Differences in volume of brain structures There are notable differences in the volume of certain areas of the brain such as reduced cortical and subcortical volumes in the hippocampus and amygdala. Reduced volume of the amygdala may account for the lessened emotional reactivity observed during dissociation. The hippocampus is associated with learning and the formation of memory, and its reduced volume is associated with impairments in memory for those with DID and PTSD. Brain-imaging studies demonstrating the link between reduced hippocampal volume and DID as well as PTSD have added to empirical support for the existence of the disorder, as additional brain-imaging studies have demonstrated a negative correlation between hippocampal volume and early childhood trauma (which is hypothesized to be a potential etiological factor for dissociative symptoms). Medications There are no medications to cure or completely treat dissociative disorders, however, drugs to treat anxiety and depression that may accompany the disorders can be given. Diagnosis and prevalence The lifetime prevalence of dissociative disorders varies from 10% in the general population to 46% in psychiatric inpatients. Diagnosis can be made with the help of structured clinical interviews such as the Dissociative Disorders Interview Schedule (DDIS) and the Structured Clinical Interview for DSM-IV Dissociative Disorders (SCID-D-R), and behavioral observation of dissociative signs during the interview. Additional information can be helpful in diagnosis, including the Dissociative Experiences Scale or other questionnaires, performance-based measures, records from doctors or academic records, and information from partners, parents, or friends. A dissociative disorder cannot be ruled out in a single session and it is common for patients diagnosed with a dissociative disorder to not have a previous dissociative disorder diagnosis due to a lack of clinician training. Some diagnostic tests have also been adapted or developed for use with children and adolescents such as the Adolescent Dissociative Experiences Scale, Children's Version of the Response Evaluation Measure (REM-Y-71), Child Interview for Subjective Dissociative Experiences, Child Dissociative Checklist (CDC), Child Behavior Checklist (CBCL) Dissociation Subscale, and the Trauma Symptom Checklist for Children Dissociation Subscale. Dissociative disorders have been found to be quite prevalent in outpatient populations, as well as within low-income communities. One study found that in a population of poor inner-city outpatients, there was a 29% prevalence of dissociative disorders. There are problems with classification, diagnosis and therapeutic strategies of dissociative and conversion disorders which can be understood by the historic context of hysteria. 
Even current systems used to diagnose DD, such as the DSM-IV and ICD-10, differ in the way the classification is determined. In most cases, mental health professionals are still hesitant to diagnose patients with a dissociative disorder, because before such a diagnosis is considered these patients have more than likely been diagnosed with major depressive disorder, anxiety disorder, and, most often, post-traumatic stress disorder. Interviews with those who may be affected by dissociative disorders have been found to be more effective at producing an accurate diagnosis than self-scored assessments and scales. The prevalence of dissociative disorders is not completely understood due to the many difficulties in diagnosing them. Many of these difficulties stem from a misunderstanding of dissociative disorders, ranging from unfamiliarity with the diagnosis or its symptoms to disbelief in some dissociative disorders entirely. Because of this, it has been found that only 28% to 48% of people diagnosed with a dissociative disorder receive treatment for their mental health. Patients who are misdiagnosed are often more likely to be hospitalised repeatedly, and lack of treatment can result in intensive outpatient treatment and higher rates of disability. An important concern in the diagnosis of dissociative disorders in forensic interviews is the possibility that the patient may be feigning symptoms in order to escape negative consequences. Young criminal offenders report much higher levels of dissociative disorders, such as amnesia. In one study it was found that 1% of young offenders reported complete amnesia for a violent crime, while 19% claimed partial amnesia. There have also been cases in which people with dissociative identity disorder provide conflicting testimonies in court, depending on the personality that is present. The worldwide prevalence of dissociative disorders is not well understood due to different cultural beliefs surrounding human emotions and the human brain. Children and adolescents Dissociative disorders (DD) are widely believed to have roots in adverse childhood experiences including abuse and loss, but the symptoms often go unrecognized or are misdiagnosed in children and adolescents. However, a recent study from western China showed an increase in awareness of dissociative disorders present in children. Such studies show that DDs have an intricate relationship with the patient's mental, physical and socio-cultural environments. This study suggested that dissociative disorders are more common in Western or developing countries; however, some cases have been seen in both clinical and non-clinical Chinese populations. There are several reasons why recognizing symptoms of dissociation in children is challenging: it may be difficult for children to describe their internal experiences; caregivers may miss signals or attempt to conceal their own abusive or neglectful behaviors; symptoms can be subtle or fleeting; and disturbances of memory, mood, or concentration associated with dissociation may be misinterpreted as symptoms of other disorders. Another resource, Beacon House, describes dissociation in children as a survival mechanism that often goes unnoticed in children who have been traumatised. Dr. Shoshanah Lyons suggests that traumatised children often continue to dissociate even when they are not in any danger, and that they are often unaware that they are dissociating. 
In addition to developing diagnostic tests for children and adolescents (see above), a number of approaches have been developed to improve recognition and understanding of dissociation in children. Recent research has focused on clarifying the neurological basis of symptoms associated with dissociation by studying neurochemical, functional and structural brain abnormalities that can result from childhood trauma. Others in the field have argued that recognizing disorganized attachment (DA) in children can help alert clinicians to the possibility of dissociative disorders. In their 2008 article, Rebecca Seligman and Laurence Kirmayer suggest the existence of evidence of linkages between trauma experienced in childhood and the capacity for dissociation or depersonalisation. They also suggest that individuals who are able to utilise dissociative techniques are able to keep this as an extended strategy to cope with stressful situations. Clinicians and researchers stress the importance of using a developmental model to understand both symptoms and the future course of DDs. In other words, symptoms of dissociation may manifest differently at different stages of child and adolescent development and individuals may be more or less susceptible to developing dissociative symptoms at different ages. Further research into the manifestation of dissociative symptoms and vulnerability throughout development is needed. Related to this developmental approach, more research is required to establish whether a young patient's recovery will remain stable over time. Current debates and the DSM-5 A number of aspects of dissociative disorders are currently in active debate. First, there is ongoing debate surrounding the etiology of dissociative identity disorder (DID). The crux of this debate is if DID is the result of childhood trauma or disorganized attachment. A proposed view is that dissociation has a physiological basis, in that it involves automatically triggered mechanisms such as increased blood pressure and alertness, that would, as Lynn contends, imply its existence as a cross-species disorder. A second area of discussion surrounds the question of whether there is a qualitative or quantitative difference between dissociation as a defense versus pathological dissociation. Experiences and symptoms of dissociation can range from the more mundane to those associated with post traumatic stress disorder (PTSD) or acute stress disorder (ASD) to dissociative disorders. Mirroring this complexity, the DSM-5 workgroup considered grouping dissociative disorders with other trauma/stress disorders, but instead decided to put them in the following chapter to emphasize the close relationship. The DSM-5 also introduced a dissociative subtype of PTSD. A 2012 review article supports the hypothesis that current or recent trauma may affect an individual's assessment of the more distant past, changing the experience of the past and resulting in dissociative states. However, experimental research in cognitive science continues to challenge claims concerning the validity of the dissociation construct, which is still based on Janetian notions of structural dissociation. Even the claimed etiological link between trauma/abuse and dissociation has been questioned. Links observed between trauma/abuse and DD are largely only present from a Western cultural context. For non-Western cultures dissociation "may constitute a "normal" psychological capacity". 
An alternative model proposes a perspective on dissociation based on a recently established link between a labile sleep–wake cycle and memory errors, cognitive failures, problems in attentional control, and difficulties in distinguishing fantasy from reality. Debates around DD also stem from Western versus non-Western ways of viewing the disorder, and the associated views of its causes. DID was initially believed to be specific to the West, until cross-cultural studies indicated its occurrence worldwide. Conversely, anthropologists have done relatively little work on DD in the West compared with their attention to the possession syndromes present in non-Western societies. While dissociation has been viewed and catalogued differently by anthropologists in Western and non-Western societies, there are aspects of each that show DD has universal characteristics. For example, while the shamanic rituals of non-Western societies may have dissociative aspects, these are not exclusive to them, as practices in many Christian sects, such as "possession by the Holy Ghost", share similar qualities with non-Western trances.
Biology and health sciences
Mental disorders
Health
4478594
https://en.wikipedia.org/wiki/Trapezium%20Cluster
Trapezium Cluster
The Trapezium or Orion Trapezium Cluster, also known by its Bayer designation of Theta1 Orionis (θ1 Orionis), is a tight open cluster of stars in the heart of the Orion Nebula, in the constellation of Orion. It was discovered by Galileo Galilei. On 4 February 1617 he sketched three of the stars (A, C and D), but missed the surrounding nebulosity. A fourth component (B) was identified by several observers in 1673, and further components, such as E, were discovered later, for a total of eight by 1888. Subsequently, several of the stars were determined to be binaries. Amateur telescopes of sufficient aperture can resolve six stars under good seeing conditions. The Trapezium is a relatively young cluster that has formed directly out of the parent nebula. The five brightest stars each have on the order of 15 to 30 solar masses. They lie within a region about 1.5 light-years in diameter and are responsible for much of the illumination of the surrounding nebula. The Trapezium may be a sub-component of the larger Orion Nebula Cluster, a grouping of about 2,000 stars within a diameter of 20 light-years. Identification The Trapezium is most readily identifiable by the asterism of four relatively bright stars for which it is named. The four are often identified as A, B, C and D in order of increasing right ascension. The brightest of the four stars is C, or Theta1 Orionis C, with an apparent magnitude of 5.13. Both A and B have been identified as eclipsing binaries. Infrared images of the Trapezium are better able to penetrate the surrounding clouds of dust, and have located many more stellar components. About half the stars within the cluster exhibit circumstellar disks that are dwindling, a likely precursor to planetary formation. In addition, brown dwarfs and low-mass runaway stars have been identified. Possible black hole A 2012 paper suggests an intermediate-mass black hole with a mass more than 100 times that of the Sun may be present within the Trapezium, something that could explain the large velocity dispersion of the stars of the cluster. List of stars
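A rough way to see why a large velocity dispersion can point to unseen mass is the virial estimate M ≈ η σ² R / G. The sketch below is purely illustrative and not taken from the 2012 paper; the dispersion, radius, and structure factor η are placeholder values.

```python
# Illustrative only: order-of-magnitude virial (dynamical) mass estimate of the
# kind used to argue that a cluster's velocity dispersion implies unseen mass.
# The dispersion, radius, and eta below are placeholders, not measured values.
G = 4.30091e-3  # gravitational constant in pc * (km/s)^2 / solar mass

def virial_mass(sigma_kms, radius_pc, eta=5.0):
    """Rough dynamical mass M ~ eta * sigma^2 * R / G, in solar masses."""
    return eta * sigma_kms**2 * radius_pc / G

# Example: a one-dimensional dispersion of a few km/s within ~0.5 pc
print(f"{virial_mass(sigma_kms=3.0, radius_pc=0.5):.0f} solar masses")
```

Even modest dispersions over sub-parsec scales imply dynamical masses of thousands of solar masses; comparing such an estimate with the mass visible in stars is the kind of argument that motivates proposing additional unseen mass.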
Physical sciences
Notable star clusters
Astronomy
4479734
https://en.wikipedia.org/wiki/Lake%20ecosystem
Lake ecosystem
A lake ecosystem or lacustrine ecosystem includes biotic (living) plants, animals and micro-organisms, as well as abiotic (non-living) physical and chemical interactions. Lake ecosystems are a prime example of lentic ecosystems (lentic refers to stationary or relatively still freshwater, from the Latin lentus, which means "sluggish"), which include ponds, lakes and wetlands, and much of this article applies to lentic ecosystems in general. Lentic ecosystems can be compared with lotic ecosystems, which involve flowing terrestrial waters such as rivers and streams. Together, these two ecosystems are examples of freshwater ecosystems. Lentic systems are diverse, ranging from a small, temporary rainwater pool a few inches deep to Lake Baikal, which has a maximum depth of 1642 m. The general distinction between pools/ponds and lakes is vague, but Brown states that ponds and pools have their entire bottom surfaces exposed to light, while lakes do not. In addition, some lakes become seasonally stratified. Ponds and pools have two regions: the pelagic open water zone, and the benthic zone, which comprises the bottom and shore regions. Since lakes have deep bottom regions not exposed to light, these systems have an additional zone, the profundal. These three areas can have very different abiotic conditions and, hence, host species that are specifically adapted to live there. Two important subclasses of lakes are ponds, which typically are small lakes that intergrade with wetlands, and water reservoirs. Over long periods of time, lakes, or bays within them, may gradually become enriched by nutrients and slowly fill in with organic sediments, a process called succession. When humans use the drainage basin, the volumes of sediment entering the lake can accelerate this process. The addition of sediments and nutrients to a lake is known as eutrophication. Zones Lake ecosystems can be divided into zones. One common system divides lakes into three zones. The first, the littoral zone, is the shallow zone near the shore. This is where rooted wetland plants occur. The offshore is divided into two further zones, an open water zone and a deep water zone. In the open water zone (or photic zone) sunlight supports photosynthetic algae and the species that feed upon them. In the deep water zone, sunlight is not available and the food web is based on detritus entering from the littoral and photic zones. Some systems use other names. The off shore areas may be called the pelagic zone, the photic zone may be called the limnetic zone and the aphotic zone may be called the profundal zone. Inland from the littoral zone, one can also frequently identify a riparian zone which has plants still affected by the presence of the lake—this can include effects from windfalls, spring flooding, and winter ice damage. The production of the lake as a whole is the result of production from plants growing in the littoral zone, combined with production from plankton growing in the open water. Wetlands can be part of the lentic system, as they form naturally along most lake shores, the width of the wetland and littoral zone being dependent upon the slope of the shoreline and the amount of natural change in water levels, within and among years. Often dead trees accumulate in this zone, either from windfalls on the shore or logs transported to the site during floods. This woody debris provides important habitat for fish and nesting birds, as well as protecting shorelines from erosion. 
Abiotic components Light Light provides the solar energy required to drive the process of photosynthesis, the major energy source of lentic systems. The amount of light received depends upon a combination of several factors. Small ponds may experience shading by surrounding trees, while cloud cover may affect light availability in all systems, regardless of size. Seasonal and diurnal considerations also play a role in light availability because the shallower the angle at which light strikes water, the more light is lost by reflection. Once light has penetrated the surface, it may also be absorbed and scattered by particles suspended in the water column, so the amount of light decreases approximately exponentially with depth; this attenuation is described by Beer's law (the Beer–Lambert law). Lakes are divided into photic and aphotic regions, the former receiving sunlight and the latter lying below the depth of light penetration, leaving it devoid of photosynthetic activity. In relation to lake zonation, the pelagic and benthic zones are considered to lie within the photic region, while the profundal zone is in the aphotic region. Temperature Temperature is an important abiotic factor in lentic ecosystems because most of the biota are poikilothermic, with internal body temperatures set by the surrounding environment. Water can be heated or cooled through radiation at the surface and conduction to or from the air and surrounding substrate. Shallow ponds often have a continuous temperature gradient from warmer waters at the surface to cooler waters at the bottom. In addition, temperature fluctuations can vary greatly in these systems, both diurnally and seasonally. Temperature regimes are very different in large lakes. In temperate regions, for example, as air temperatures increase, the icy layer formed on the surface of the lake breaks up, leaving the water at approximately 4 °C. This is the temperature at which water has the highest density. As the season progresses, the warmer air temperatures heat the surface waters, making them less dense. The deeper waters remain cool and dense due to reduced light penetration. As the summer begins, two distinct layers become established, with such a large temperature difference between them that they remain stratified. The lowest zone in the lake is the coldest and is called the hypolimnion. The upper warm zone is called the epilimnion. Between these zones is a band of rapid temperature change called the thermocline. During the colder fall season, heat is lost at the surface and the epilimnion cools. When the temperatures of the two zones are close enough, the waters begin to mix again to create a uniform temperature, an event termed lake turnover. In the winter, inverse stratification occurs as water near the surface cools and freezes, while warmer but denser water remains near the bottom. A thermocline is established, and the cycle repeats. Wind In exposed systems, wind can create turbulent, spiral-formed surface currents called Langmuir circulations. Exactly how these currents become established is still not well understood, but it is evident that it involves some interaction between horizontal surface currents and surface gravity waves. The visible results of these rotations, which can be seen in any lake, are the surface foamlines that run parallel to the wind direction. Positively buoyant particles and small organisms concentrate in the foamline at the surface and negatively buoyant objects are found in the upwelling current between the two rotations. 
Objects with neutral buoyancy tend to be evenly distributed in the water column. This turbulence circulates nutrients in the water column, making it crucial for many pelagic species; its effect on benthic organisms, however, is minimal, and on profundal organisms essentially non-existent. The degree of nutrient circulation is system specific, as it depends upon such factors as wind strength and duration, as well as lake or pool depth and productivity. Chemistry Oxygen is essential for organismal respiration. The amount of oxygen present in standing waters depends upon: 1) the area of transparent water exposed to the air, 2) the circulation of water within the system and 3) the amount of oxygen generated and used by organisms present. In shallow, plant-rich pools there may be great fluctuations of oxygen, with extremely high concentrations occurring during the day due to photosynthesis and very low values at night when respiration is the dominant process of primary producers. Thermal stratification in larger systems can also affect the amount of oxygen present in different zones. The epilimnion is oxygen rich because it circulates quickly, gaining oxygen via contact with the air. The hypolimnion, however, circulates very slowly and has no atmospheric contact. Additionally, fewer green plants exist in the hypolimnion, so there is less oxygen released from photosynthesis. In spring and fall when the epilimnion and hypolimnion mix, oxygen becomes more evenly distributed in the system. Low oxygen levels are characteristic of the profundal zone due to the accumulation of decaying vegetation and animal matter that “rains” down from the pelagic and benthic zones and the inability to support primary producers. Phosphorus is important for all organisms because it is a component of DNA and RNA and is involved in cell metabolism as a component of ATP and ADP. Phosphorus is also not found in large quantities in freshwater systems; it limits photosynthesis in primary producers, making it the main determinant of lentic system production. The phosphorus cycle is complex, but the model outlined below describes the basic pathways. Phosphorus mainly enters a pond or lake through runoff from the watershed or by atmospheric deposition. Upon entering the system, a reactive form of phosphorus is usually taken up by algae and macrophytes, which release a non-reactive phosphorus compound as a byproduct of photosynthesis. This phosphorus can drift downwards and become part of the benthic or profundal sediment, or it can be remineralized to the reactive form by microbes in the water column. Similarly, non-reactive phosphorus in the sediment can be remineralized into the reactive form. Sediments are generally richer in phosphorus than lake water, however, indicating that this nutrient may have a long residence time there before it is remineralized and re-introduced to the system. Biotic components Bacteria Bacteria are present in all regions of lentic waters. Free-living forms are associated with decomposing organic material, biofilm on the surfaces of rocks and plants, suspended in the water column, and in the sediments of the benthic and profundal zones. Other forms are also associated with the guts of lentic animals as parasites or in commensal relationships. Bacteria play an important role in system metabolism through nutrient recycling, which is discussed in the Trophic Relationships section. Primary producers Algae, including both phytoplankton and periphyton, are the principal photosynthesizers in ponds and lakes. 
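The phosphorus pathways described in the Chemistry subsection above (loading from the watershed and atmosphere, uptake by algae and macrophytes, settling into the sediments, and microbial remineralization) can be summarized as a simple box model. The sketch below is a minimal illustration under invented rate constants, not a calibrated limnological model.

```python
# Minimal, uncalibrated box model of the phosphorus pathways described above.
# Pools: reactive P in the water column, P bound in algae/macrophytes, and
# non-reactive P in the sediments. All rate constants are purely illustrative.
def step(reactive, algal, sediment, dt=1.0,
         inflow=0.02,             # loading from the watershed / atmosphere
         uptake=0.10,             # uptake of reactive P by algae and macrophytes
         settling=0.05,           # algal P settling into the sediments
         remineralization=0.01):  # microbial return of sediment P to reactive P
    d_reactive = inflow - uptake * reactive + remineralization * sediment
    d_algal = uptake * reactive - settling * algal
    d_sediment = settling * algal - remineralization * sediment
    return (reactive + dt * d_reactive,
            algal + dt * d_algal,
            sediment + dt * d_sediment)

pools = (1.0, 0.5, 5.0)  # arbitrary starting amounts
for _ in range(100):
    pools = step(*pools)
print("reactive, algal, sediment:", [round(p, 2) for p in pools])
```

With a small remineralization constant, phosphorus accumulates in the sediment pool, mirroring the long residence time in the sediments noted above.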
Phytoplankton are found drifting in the water column of the pelagic zone. Many species have a higher density than water, which on its own would cause them to sink down into the benthos. To combat this, phytoplankton have developed density-changing mechanisms, by forming vacuoles and gas vesicles, or by changing their shapes to induce drag, thus slowing their descent. A very sophisticated adaptation utilized by a small number of species is a tail-like flagellum that can adjust vertical position and allow movement in any direction. Phytoplankton can also maintain their presence in the water column by being circulated in Langmuir rotations. Periphytic algae, on the other hand, are attached to a substrate. In lakes and ponds, they can cover all benthic surfaces. Both groups of algae are important as food sources and as oxygen providers. Aquatic plants live in both the benthic and pelagic zones, and can be grouped according to their manner of growth: (1) emergent = rooted in the substrate, but with leaves and flowers extending into the air; (2) floating-leaved = rooted in the substrate, but with floating leaves; (3) submersed = growing beneath the surface; (4) free-floating macrophytes = not rooted in the substrate, and floating on the surface. These various forms of macrophytes generally occur in different areas of the benthic zone, with emergent vegetation nearest the shoreline, then floating-leaved macrophytes, followed by submersed vegetation. Free-floating macrophytes can occur anywhere on the system's surface. Aquatic plants are more buoyant than their terrestrial counterparts because freshwater has a higher density than air. This makes structural rigidity unimportant in lakes and ponds (except in the aerial stems and leaves). Thus, the leaves and stems of most aquatic plants use less energy to construct and maintain woody tissue, investing that energy into fast growth instead. In order to contend with stresses induced by the wind and waves, plants must be both flexible and tough. Light, water depth, and substrate types are the most important factors controlling the distribution of submerged aquatic plants. Macrophytes are sources of food, oxygen, and habitat structure in the benthic zone, but cannot grow below the depth of the euphotic zone, and hence are not found deeper than that. Invertebrates Zooplankton are tiny animals suspended in the water column. Like phytoplankton, these species have developed mechanisms that keep them from sinking to deeper waters, including drag-inducing body forms, and the active flicking of appendages (such as antennae or spines). Remaining in the water column may have its advantages in terms of feeding, but this zone's lack of refugia leaves zooplankton vulnerable to predation. In response, some species, especially Daphnia sp., make daily vertical migrations in the water column by passively sinking to the darker lower depths during the day, and actively moving towards the surface during the night. Also, because conditions in a lentic system can be quite variable across seasons, zooplankton have the ability to switch from laying regular eggs to resting eggs when there is a lack of food, when temperatures fall below 2 °C, or when predator abundance is high. These resting eggs have a diapause, or dormancy period, that should allow the zooplankton to encounter conditions that are more favorable to survival when they finally hatch. 
The invertebrates that inhabit the benthic zone are numerically dominated by small species, and are species-rich compared to the zooplankton of the open water. They include: Crustaceans (e.g. crabs, crayfish, and shrimp), molluscs (e.g. clams and snails), and numerous types of insects. These organisms are mostly found in the areas of macrophyte growth, where the richest resources, highly-oxygenated water, and warmest portion of the ecosystem are found. The structurally diverse macrophyte beds are important sites for the accumulation of organic matter, and provide an ideal area for colonization. The sediments and plants also offer a great deal of protection from predatory fishes. Very few invertebrates are able to inhabit the cold, dark, and oxygen-poor profundal zone. Those that can are often red in color, due to the presence of large amounts of hemoglobin, which greatly increases the amount of oxygen carried to cells. Because the concentration of oxygen within this zone is low, most species construct tunnels or burrows in which they can hide, and utilize the minimum amount of movements necessary to circulate water through, drawing oxygen to them without expending too much energy. Fish and other vertebrates Fish have a range of physiological tolerances that are dependent upon which species they belong to. They have different lethal temperatures, dissolved oxygen requirements, and spawning needs that are based on their activity levels and behaviors. Because fish are highly mobile, they are able to deal with unsuitable abiotic factors in one zone by simply moving to another. A detrital feeder in the profundal zone, for example, that finds the oxygen concentration has dropped too low may feed closer to the benthic zone. A fish might also alter its residence during different parts of its life history: hatching in a sediment nest, then moving to the weedy benthic zone to develop in a protected environment with food resources, and finally into the pelagic zone as an adult. Other vertebrate taxa inhabit lentic systems as well. These include amphibians (e.g. salamanders and frogs), reptiles (e.g. snakes, turtles, and alligators), and a large number of waterfowl species. Most of these vertebrates spend part of their time in terrestrial habitats, and thus, are not directly affected by abiotic factors in the lake or pond. Many fish species are important both as consumers and as prey species to the larger vertebrates mentioned above. Trophic relationships Primary producers Lentic systems gain most of their energy from photosynthesis performed by aquatic plants and algae. This autochthonous process involves the combination of carbon dioxide, water, and solar energy to produce carbohydrates and dissolved oxygen. Within a lake or pond, the potential rate of photosynthesis generally decreases with depth due to light attenuation. Photosynthesis, however, is often low at the top few millimeters of the surface, likely due to inhibition by ultraviolet light. The exact depth and photosynthetic rate measurements of this curve are system-specific and depend upon: 1) the total biomass of photosynthesizing cells, 2) the amount of light attenuating materials, and 3) the abundance and frequency range of light absorbing pigments (i.e. chlorophylls) inside of photosynthesizing cells. The energy created by these primary producers is important for the community because it is transferred to higher trophic levels via consumption. 
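The depth dependence of photosynthesis described above can be illustrated by combining exponential light attenuation with depth (the Beer–Lambert behaviour noted in the Light subsection) with a photosynthesis–irradiance curve that includes surface photoinhibition. A Steele-style formulation is used here only as one common parameterization; all parameter values are invented for illustration.

```python
import math

# Illustrative sketch of why the photosynthesis-depth curve peaks below the surface:
# light decays roughly exponentially with depth (Beer-Lambert attenuation), while
# very high irradiance near the surface inhibits photosynthesis. All values invented.
I0 = 1500.0    # surface irradiance (arbitrary units)
k = 0.5        # extinction coefficient (1/m); depends on dissolved and suspended matter
I_opt = 400.0  # irradiance at which photosynthesis is maximal
P_max = 1.0    # maximum photosynthetic rate (normalized)

def irradiance(depth_m):
    return I0 * math.exp(-k * depth_m)

def photosynthesis(I):
    # Steele-style curve: rises with light, then declines (photoinhibition)
    return P_max * (I / I_opt) * math.exp(1.0 - I / I_opt)

for z in [0, 1, 2, 4, 8, 12]:
    I_z = irradiance(z)
    print(f"depth {z:2d} m: I = {I_z:7.1f}, P = {photosynthesis(I_z):.2f}")
```

Running the sketch shows the photosynthetic rate peaking a short distance below the surface and declining with depth, the shape of the curve described in the text.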
Bacteria The vast majority of bacteria in lakes and ponds obtain their energy by decomposing vegetation and animal matter. In the pelagic zone, dead fish and the occasional allochthonous input of litterfall are examples of coarse particulate organic matter (CPOM>1 mm). Bacteria degrade these into fine particulate organic matter (FPOM<1 mm) and then further into usable nutrients. Small organisms such as plankton are also characterized as FPOM. Very low concentrations of nutrients are released during decomposition because the bacteria are utilizing them to build their own biomass. Bacteria, however, are consumed by protozoa, which are in turn consumed by zooplankton, and then further up the trophic levels. Elements other than carbon, particularly phosphorus and nitrogen, are regenerated when protozoa feed on bacterial prey and this way, nutrients become once more available for use in the water column. This regeneration cycle is known as the microbial loop and is a key component of lentic food webs. The decomposition of organic materials can continue in the benthic and profundal zones if the matter falls through the water column before being completely digested by the pelagic bacteria. Bacteria are found in the greatest abundance here in sediments, where they are typically 2-1000 times more prevalent than in the water column. Benthic invertebrates Benthic invertebrates, due to their high level of species richness, have many methods of prey capture. Filter feeders create currents via siphons or beating cilia, to pull water and its nutritional contents, towards themselves for straining. Grazers use scraping, rasping, and shredding adaptations to feed on periphytic algae and macrophytes. Members of the collector guild browse the sediments, picking out specific particles with raptorial appendages. Deposit feeding invertebrates indiscriminately consume sediment, digesting any organic material it contains. Finally, some invertebrates belong to the predator guild, capturing and consuming living animals. The profundal zone is home to a unique group of filter feeders that use small body movements to draw a current through burrows that they have created in the sediment. This mode of feeding requires the least amount of motion, allowing these species to conserve energy. A small number of invertebrate taxa are predators in the profundal zone. These species are likely from other regions and only come to these depths to feed. The vast majority of invertebrates in this zone are deposit feeders, getting their energy from the surrounding sediments. Fish Fish size, mobility, and sensory capabilities allow them to exploit a broad prey base, covering multiple zonation regions. Like invertebrates, fish feeding habits can be categorized into guilds. In the pelagic zone, herbivores graze on periphyton and macrophytes or pick phytoplankton out of the water column. Carnivores include fishes that feed on zooplankton in the water column (zooplanktivores), insects at the water's surface, on benthic structures, or in the sediment (insectivores), and those that feed on other fish (piscivores). Fish that consume detritus and gain energy by processing its organic material are called detritivores. Omnivores ingest a wide variety of prey, encompassing floral, faunal, and detrital material. Finally, members of the parasitic guild acquire nutrition from a host species, usually another fish or large vertebrate. Fish taxa are flexible in their feeding roles, varying their diets with environmental conditions and prey availability. 
Many species also undergo a diet shift as they develop. Therefore, it is likely that any single fish occupies multiple feeding guilds within its lifetime. Lentic food webs As noted in the previous sections, the lentic biota are linked in a complex web of trophic relationships. These organisms can be considered to be loosely associated with specific trophic groups (e.g. primary producers, herbivores, primary carnivores, secondary carnivores, etc.). Scientists have developed several theories in order to understand the mechanisms that control the abundance and diversity within these groups. Very generally, top-down processes dictate that the abundance of prey taxa is dependent upon the actions of consumers from higher trophic levels. Typically, these processes operate only between two trophic levels, with no effect on the others. In some cases, however, aquatic systems experience a trophic cascade; for example, this might occur if primary producers experience less grazing by herbivores because these herbivores are suppressed by carnivores. Bottom-up processes are functioning when the abundance or diversity of members of higher trophic levels is dependent upon the availability or quality of resources from lower levels. Finally, a combined regulating theory, bottom-up:top-down, combines the predicted influences of consumers and resource availability. It predicts that trophic levels close to the lowest trophic levels will be most influenced by bottom-up forces, while top-down effects should be strongest at top levels. Community patterns and diversity Local species richness The biodiversity of a lentic system increases with the surface area of the lake or pond. This is attributable to the greater likelihood that partly terrestrial species will find a larger system. Also, because larger systems typically have larger populations, the chance of extinction is decreased. Additional factors, including temperature regime, pH, nutrient availability, habitat complexity, speciation rates, competition, and predation, have been linked to the number of species present within systems. Succession patterns in plankton communities – the PEG model Phytoplankton and zooplankton communities in lake systems undergo seasonal succession in relation to nutrient availability, predation, and competition. Sommer et al. described these patterns as part of the Plankton Ecology Group (PEG) model, with 24 statements constructed from the analysis of numerous systems. The following includes a subset of these statements, as explained by Brönmark and Hansson, illustrating succession through a single seasonal cycle: Winter 1. Increased nutrient and light availability result in rapid phytoplankton growth towards the end of winter. The dominant species, such as diatoms, are small and have quick growth capabilities. 2. These plankton are consumed by zooplankton, which become the dominant plankton taxa. Spring 3. A clear water phase occurs, as phytoplankton populations become depleted due to increased predation by growing numbers of zooplankton. Summer 4. Zooplankton abundance declines as a result of decreased phytoplankton prey and increased predation by juvenile fishes. 5. With increased nutrient availability and decreased predation from zooplankton, a diverse phytoplankton community develops. 6. As the summer continues, nutrients become depleted in a predictable order: phosphorus, silica, and then nitrogen. The abundance of various phytoplankton species varies in relation to their biological need for these nutrients. 7. 
Small-sized zooplankton become the dominant type of zooplankton because they are less vulnerable to fish predation. Fall 8. Predation by fishes is reduced due to lower temperatures and zooplankton of all sizes increase in number. Winter 9. Cold temperatures and decreased light availability result in lower rates of primary production and decreased phytoplankton populations. 10. Reproduction in zooplankton decreases due to lower temperatures and less prey. The PEG model presents an idealized version of this succession pattern, while natural systems are known for their variation. Latitudinal patterns There is a well-documented global pattern that correlates decreasing plant and animal diversity with increasing latitude, that is to say, there are fewer species as one moves towards the poles. The cause of this pattern is one of the greatest puzzles for ecologists today. Theories for its explanation include energy availability, climatic variability, disturbance, competition, etc. Despite this global diversity gradient, this pattern can be weak for freshwater systems compared to global marine and terrestrial systems. This may be related to size, as Hillebrand and Azovsky found that smaller organisms (protozoa and plankton) did not follow the expected trend strongly, while larger species (vertebrates) did. They attributed this to better dispersal ability by smaller organisms, which may result in high distributions globally. Natural lake lifecycles Lake creation Lakes can be formed in a variety of ways, but the most common are discussed briefly below. The oldest and largest systems are the result of tectonic activities. The rift lakes in Africa, for example are the result of seismic activity along the site of separation of two tectonic plates. Ice-formed lakes are created when glaciers recede, leaving behind abnormalities in the landscape shape that are then filled with water. Finally, oxbow lakes are fluvial in origin, resulting when a meandering river bend is pinched off from the main channel. Natural extinction All lakes and ponds receive sediment inputs. Since these systems are not really expanding, it is logical to assume that they will become increasingly shallower in depth, eventually becoming wetlands or terrestrial vegetation. The length of this process should depend upon a combination of depth and sedimentation rate. Moss gives the example of Lake Tanganyika, which reaches a depth of 1500 m and has a sedimentation rate of 0.5 mm/yr. Assuming that sedimentation is not influenced by anthropogenic factors, this system should go extinct in approximately 3 million years. Shallow lentic systems might also fill in as swamps encroach inward from the edges. These processes operate on a much shorter timescale, taking hundreds to thousands of years to complete the extinction process. Human impacts Acidification Sulfur dioxide and nitrogen oxides are naturally released from volcanoes, organic compounds in the soil, wetlands, and marine systems, but the majority of these compounds come from the combustion of coal, oil, gasoline, and the smelting of ores containing sulfur. These substances dissolve in atmospheric moisture and enter lentic systems as acid rain. Lakes and ponds that contain bedrock that is rich in carbonates have a natural buffer, resulting in no alteration of pH. Systems without this bedrock, however, are very sensitive to acid inputs because they have a low neutralizing capacity, resulting in pH declines even with only small inputs of acid. 
At a pH of 5–6 algal species diversity and biomass decrease considerably, leading to an increase in water transparency – a characteristic feature of acidified lakes. As the pH falls further, all fauna become less diverse. The most significant feature is the disruption of fish reproduction. Thus, the population eventually consists of a few old individuals that ultimately die, leaving the system without fish. Acid rain has been especially harmful to lakes in Scandinavia, western Scotland, west Wales and the northeastern United States. Eutrophication Eutrophic systems contain a high concentration of phosphorus (~30 μg/L), nitrogen (~1500 μg/L), or both. Phosphorus enters lentic waters from sewage treatment effluents, discharges of raw sewage, or runoff from farmland. Nitrogen mostly comes from agricultural fertilizers, via runoff or leaching and subsequent groundwater flow. This increase in nutrients required for primary producers results in a massive increase of phytoplankton growth, termed a "plankton bloom." This bloom decreases water transparency, leading to the loss of submerged plants. The resultant reduction in habitat structure has negative impacts on the species that utilize it for spawning, maturation, and general survival. Additionally, the large numbers of short-lived phytoplankton result in a massive amount of dead biomass settling into the sediment. Bacteria need large amounts of oxygen to decompose this material, thus reducing the oxygen concentration of the water. This is especially pronounced in stratified lakes, where the thermocline prevents oxygen-rich surface water from mixing with lower levels. Low or anoxic conditions preclude the existence of many taxa that are not physiologically tolerant of these conditions. Invasive species Invasive species have been introduced to lentic systems through both purposeful events (e.g. stocking game and food species) and unintentional events (e.g. in ballast water). These organisms can affect natives via competition for prey or habitat, predation, habitat alteration, hybridization, or the introduction of harmful diseases and parasites. With regard to native species, invaders may cause changes in size and age structure, distribution, density, population growth, and may even drive populations to extinction. Examples of prominent invaders of lentic systems include the zebra mussel and sea lamprey in the Great Lakes.
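The nutrient concentrations quoted above for eutrophic systems (~30 μg/L phosphorus, ~1500 μg/L nitrogen) can be applied as a simple screening check. The sketch below is illustrative only; treating each number as a single hard cutoff is a simplification, not a formal trophic-state index.

```python
# Simple screening check against the eutrophication thresholds quoted above
# (~30 ug/L total phosphorus, ~1500 ug/L total nitrogen). Using single hard
# cutoffs is a simplification for illustration, not a formal trophic-state index.
P_EUTROPHIC_UG_L = 30.0
N_EUTROPHIC_UG_L = 1500.0

def screen(total_p_ug_l, total_n_ug_l):
    flags = []
    if total_p_ug_l >= P_EUTROPHIC_UG_L:
        flags.append("phosphorus at or above the eutrophic threshold")
    if total_n_ug_l >= N_EUTROPHIC_UG_L:
        flags.append("nitrogen at or above the eutrophic threshold")
    return flags or ["below both thresholds"]

print(screen(total_p_ug_l=45.0, total_n_ug_l=900.0))
```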
Physical sciences
Hydrology
Earth science
4480666
https://en.wikipedia.org/wiki/Ceramic%20engineering
Ceramic engineering
Ceramic engineering is the science and technology of creating objects from inorganic, non-metallic materials. This is done either by the action of heat, or at lower temperatures using precipitation reactions from high-purity chemical solutions. The term includes the purification of raw materials, the study and production of the chemical compounds concerned, their formation into components and the study of their structure, composition and properties. Ceramic materials may have a crystalline or partly crystalline structure, with long-range order on atomic scale. Glass-ceramics may have an amorphous or glassy structure, with limited or short-range atomic order. They are either formed from a molten mass that solidifies on cooling, formed and matured by the action of heat, or chemically synthesized at low temperatures using, for example, hydrothermal or sol-gel synthesis. The special character of ceramic materials gives rise to many applications in materials engineering, electrical engineering, chemical engineering and mechanical engineering. As ceramics are heat resistant, they can be used for many tasks for which materials like metal and polymers are unsuitable. Ceramic materials are used in a wide range of industries, including mining, aerospace, medicine, refinery, food and chemical industries, packaging science, electronics, industrial and transmission electricity, and guided lightwave transmission. History The word "ceramic" is derived from the Greek word () meaning pottery. It is related to the older Indo-European language root "to burn". "Ceramic" may be used as a noun in the singular to refer to a ceramic material or the product of ceramic manufacture, or as an adjective. Ceramics is the making of things out of ceramic materials. Ceramic engineering, like many sciences, evolved from a different discipline by today's standards. Materials science engineering is grouped with ceramics engineering to this day. Abraham Darby first used coke in 1709 in Shropshire, England, to improve the yield of a smelting process. Coke is now widely used to produce carbide ceramics. Potter Josiah Wedgwood opened the first modern ceramics factory in Stoke-on-Trent, England, in 1759. Austrian chemist Carl Josef Bayer, working for the textile industry in Russia, developed a process to separate alumina from bauxite ore in 1888. The Bayer process is still used to purify alumina for the ceramic and aluminium industries. Brothers Pierre and Jacques Curie discovered piezoelectricity in Rochelle salt . Piezoelectricity is one of the key properties of electroceramics. E.G. Acheson heated a mixture of coke and clay in 1893, and invented carborundum, or synthetic silicon carbide. Henri Moissan also synthesized SiC and tungsten carbide in his electric arc furnace in Paris about the same time as Acheson. Karl Schröter used liquid-phase sintering to bond or "cement" Moissan's tungsten carbide particles with cobalt in 1923 in Germany. Cemented (metal-bonded) carbide edges greatly increase the durability of hardened steel cutting tools. W.H. Nernst developed cubic-stabilized zirconia in the 1920s in Berlin. This material is used as an oxygen sensor in exhaust systems. The main limitation on the use of ceramics in engineering is brittleness. Military The military requirements of World War II encouraged developments, which created a need for high-performance materials and helped speed the development of ceramic science and engineering. 
Throughout the 1960s and 1970s, new types of ceramics were developed in response to advances in atomic energy, electronics, communications, and space travel. The discovery of ceramic superconductors in 1986 has spurred intense research to develop superconducting ceramic parts for electronic devices, electric motors, and transportation equipment. There is an increasing need in the military sector for high-strength, robust materials which have the capability to transmit light around the visible (0.4–0.7 micrometers) and mid-infrared (1–5 micrometers) regions of the spectrum. These materials are needed for applications requiring transparent armour. Transparent armour is a material or system of materials designed to be optically transparent, yet protect from fragmentation or ballistic impacts. The primary requirement for a transparent armour system is to not only defeat the designated threat but also provide a multi-hit capability with minimized distortion of surrounding areas. Transparent armour windows must also be compatible with night vision equipment. New materials that are thinner, lightweight, and offer better ballistic performance are being sought. Such solid-state components have found widespread use for various applications in the electro-optical field including: optical fibres for guided lightwave transmission, optical switches, laser amplifiers and lenses, hosts for solid-state lasers and optical window materials for gas lasers, and infrared (IR) heat seeking devices for missile guidance systems and IR night vision. Modern industry Now a multibillion-dollar a year industry, ceramic engineering and research has established itself as an important field of science. Applications continue to expand as researchers develop new kinds of ceramics to serve different purposes. Zirconium dioxide ceramics are used in the manufacture of knives. The blade of the ceramic knife will stay sharp for much longer than that of a steel knife, although it is more brittle and can be snapped by dropping it on a hard surface. Ceramics such as alumina, boron carbide and silicon carbide have been used in bulletproof vests to repel small arms rifle fire. Such plates are known commonly as ballistic plates. Similar material is used to protect cockpits of some military aircraft, because of the low weight of the material. Silicon nitride parts are used in ceramic ball bearings. Their higher hardness means that they are much less susceptible to wear and can offer more than triple lifetimes. They also deform less under load meaning they have less contact with the bearing retainer walls and can roll faster. In very high speed applications, heat from friction during rolling can cause problems for metal bearings; problems which are reduced by the use of ceramics. Ceramics are also more chemically resistant and can be used in wet environments where steel bearings would rust. The major drawback to using ceramics is a significantly higher cost. In many cases their electrically insulating properties may also be valuable in bearings. In the early 1980s, Toyota researched production of an adiabatic ceramic engine which can run at a temperature of over 6000 °F (3300 °C). Ceramic engines do not require a cooling system and hence allow a major weight reduction and therefore greater fuel efficiency. Fuel efficiency of the engine is also higher at high temperature, as shown by Carnot's theorem. 
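Carnot's theorem, cited above, caps the efficiency of any heat engine at 1 − Tc/Th (absolute temperatures), which is why a hotter, uncooled ceramic combustion chamber raises the theoretical ceiling. The temperatures in the sketch below are illustrative assumptions, not Toyota's figures.

```python
# Carnot bound on heat-engine efficiency: eta = 1 - T_cold / T_hot (absolute temps).
# Illustrates why running hotter, as an uncooled ceramic engine could, raises the
# theoretical efficiency ceiling. The temperatures below are illustrative only.
def carnot_efficiency(t_hot_k, t_cold_k):
    return 1.0 - t_cold_k / t_hot_k

ambient_k = 300.0                  # exhaust/ambient sink temperature (assumed)
metal_limit_k = 1200.0             # rough temperature at which cooled metal parts operate (assumed)
ceramic_limit_k = 3300.0 + 273.15  # the ~3300 C figure quoted above

print(f"metal-limited:   {carnot_efficiency(metal_limit_k, ambient_k):.0%}")
print(f"ceramic-limited: {carnot_efficiency(ceramic_limit_k, ambient_k):.0%}")
```

Real engines fall well short of the Carnot bound, but the comparison shows why higher operating temperatures translate into higher attainable fuel efficiency.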
In a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. Despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. Imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. Such engines are possible in laboratory settings, but mass-production is not feasible with current technology. Work is being done in developing ceramic parts for gas turbine engines. Currently, even blades made of advanced metal alloys used in the engines' hot section require cooling and careful limiting of operating temperatures. Turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. Recently, there have been advances in ceramics which include bio-ceramics, such as dental implants and synthetic bones. Hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. Orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. Because of this, they are of great interest for gene delivery and tissue engineering scaffolds. Most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. They are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. Work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials with a synthetic, but naturally occurring, bone mineral. Ultimately these ceramic materials may be used as bone replacements or with the incorporation of protein collagens, synthetic bones. Durable actinide-containing ceramic materials have many applications such as in nuclear fuels for burning excess Pu and in chemically-inert sources of alpha irradiation for power supply of unmanned space vehicles or to produce electricity for microelectronic devices. Both use and disposal of radioactive actinides require their immobilization in a durable host material. Nuclear waste long-lived radionuclides such as actinides are immobilized using chemically-durable crystalline materials based on polycrystalline ceramics and large single crystals. Alumina ceramics are widely utilized in the chemical industry due to their excellent chemical stability and high resistance to corrosion. It is used as acid-resistant pump impellers and pump bodies, ensuring long-lasting performance in transferring aggressive fluids. They are also used in acid-carrying pipe linings to prevent contamination and maintain fluid purity, which is crucial in industries like pharmaceuticals and food processing. Valves made from alumina ceramics demonstrate exceptional durability and resistance to chemical attack, making them reliable for controlling the flow of corrosive liquids. Glass-ceramics Glass-ceramic materials share many properties with both glasses and ceramics. 
Glass-ceramics have an amorphous phase and one or more crystalline phases and are produced by a so-called "controlled crystallization", which is typically avoided in glass manufacturing. Glass-ceramics often contain a crystalline phase which constitutes anywhere from 30% [m/m] to 90% [m/m] of its composition by volume, yielding an array of materials with interesting thermomechanical properties. In the processing of glass-ceramics, molten glass is cooled down gradually before reheating and annealing. In this heat treatment the glass partly crystallizes. In many cases, so-called 'nucleation agents' are added in order to regulate and control the crystallization process. Because there is usually no pressing and sintering, glass-ceramics do not contain the volume fraction of porosity typically present in sintered ceramics. The term mainly refers to a mix of lithium and aluminosilicates. The most commercially important of these have the distinction of being impervious to thermal shock. Thus, glass-ceramics have become extremely useful for countertop cooking. The negative thermal expansion coefficient (TEC) of the crystalline ceramic phase can be balanced with the positive TEC of the glassy phase. At a certain point (~70% crystalline) the glass-ceramic has a net TEC near zero. This type of glass-ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C. Processing steps The traditional ceramic process generally follows this sequence: Milling → Batching → Mixing → Forming → Drying → Firing → Assembly. Milling is the process by which materials are reduced from a large size to a smaller size. Milling may involve breaking up cemented material (in which case individual particles retain their shape) or pulverization (which involves grinding the particles themselves to a smaller size). Milling is generally done by mechanical means, including attrition (which is particle-to-particle collision that results in agglomerate break-up or particle shearing), compression (which applies forces that result in fracturing), and impact (which employs a milling medium or the particles themselves to cause fracturing). Attrition milling equipment includes the wet scrubber (also called the planetary mill or wet attrition mill), which has paddles in water creating vortexes in which the material collides and breaks up. Compression mills include the jaw crusher, roller crusher and cone crusher. Impact mills include the ball mill, which has media that tumble and fracture the material, and the ResonantAcoustic mixer. Shaft impactors cause particle-to-particle attrition and compression. Batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. Mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers (a type of cement mixer), ResonantAcoustic mixers, Mueller mixers, and pug mills. Wet mixing generally involves the same equipment. Forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. Forming can involve: (1) Extrusion, such as extruding "slugs" to make bricks, (2) Pressing to make shaped parts, (3) Slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. Forming produces a "green" part, ready for drying. Green parts are soft, pliable, and over time will lose shape. Handling the green product will change its shape. 
For example, a green brick can be "squeezed", and after squeezing it will stay that way. Drying is removing the water or binder from the formed material. Spray drying is widely used to prepare powder for pressing operations. Other dryers are tunnel dryers and periodic dryers. Controlled heat is applied in this two-stage process. First, heat removes water. This step needs careful control, as rapid heating causes cracks and surface defects. The dried part is smaller than the green part, and is brittle, necessitating careful handling, since a small impact will cause crumbling and breaking. Sintering is where the dried parts pass through a controlled heating process, and the oxides are chemically changed to cause bonding and densification. The fired part will be smaller than the dried part. Forming methods Ceramic forming techniques include throwing, slipcasting, tape casting, freeze-casting, injection molding, dry pressing, isostatic pressing, hot isostatic pressing (HIP), 3D printing and others. Methods for forming ceramic powders into complex shapes are desirable in many areas of technology. Such methods are required for producing advanced, high-temperature structural parts such as heat engine components and turbines. Materials other than ceramics which are used in these processes may include: wood, metal, water, plaster and epoxy—most of which will be eliminated upon firing. A ceramic-filled epoxy, such as Martyte, is sometimes used to protect structural steel under conditions of rocket exhaust impingement. These forming techniques are well known for providing tools and other components with dimensional stability, surface quality, high (near theoretical) density and microstructural uniformity. The increasing use and diversity of specialty forms of ceramics adds to the diversity of process technologies to be used. Thus, reinforcing fibers and filaments are mainly made by polymer, sol-gel, or CVD processes, but melt processing also has applicability. The most widely used specialty form is layered structures, with tape casting for electronic substrates and packages being pre-eminent. Photo-lithography is of increasing interest for precise patterning of conductors and other components for such packaging. Tape casting or forming processes are also of increasing interest for other applications, ranging from open structures such as fuel cells to ceramic composites. The other major layer structure is coating, where thermal spraying is very important, but chemical and physical vapor deposition and chemical (e.g., sol-gel and polymer pyrolysis) methods are all seeing increased use. Besides open structures from formed tape, extruded structures, such as honeycomb catalyst supports, and highly porous structures, including various foams, for example, reticulated foam, are of increasing use. Densification of consolidated powder bodies continues to be achieved predominantly by (pressureless) sintering. However, the use of pressure sintering by hot pressing is increasing, especially for non-oxides and parts of simple shapes where higher quality (mainly microstructural homogeneity) is needed, and larger size or multiple parts per pressing can be an advantage. The sintering process The principles of sintering-based methods are simple ("sinter" has roots in the English "cinder"). The firing is done at a temperature below the melting point of the ceramic. 
Once a roughly-held-together object called a "green body" is made, it is fired in a kiln, where atomic and molecular diffusion processes give rise to significant changes in the primary microstructural features. This includes the gradual elimination of porosity, which is typically accompanied by a net shrinkage and overall densification of the component. Thus, the pores in the object may close up, resulting in a denser product of significantly greater strength and fracture toughness. Another major change in the body during the firing or sintering process will be the establishment of the polycrystalline nature of the solid. Significant grain growth tends to occur during sintering, with this growth depending on temperature and duration of the sintering process. The growth of grains will result in some form of grain size distribution, which will have a significant impact on the ultimate physical properties of the material. In particular, abnormal grain growth in which certain grains grow very large in a matrix of finer grains will significantly alter the physical and mechanical properties of the obtained ceramic. In the sintered body, grain sizes are a product of the thermal processing parameters as well as the initial particle size, or possibly the sizes of aggregates or particle clusters which arise during the initial stages of processing. The ultimate microstructure (and thus the physical properties) of the final product will be limited by and subject to the form of the structural template or precursor which is created in the initial stages of chemical synthesis and physical forming. Hence the importance of chemical powder and polymer processing as it pertains to the synthesis of industrial ceramics, glasses and glass-ceramics. There are numerous possible refinements of the sintering process. Some of the most common involve pressing the green body to give the densification a head start and reduce the sintering time needed. Sometimes organic binders such as polyvinyl alcohol are added to hold the green body together; these burn out during the firing (at 200–350 °C). Sometimes organic lubricants are added during pressing to increase densification. It is common to combine these, and add binders and lubricants to a powder, then press. (The formulation of these organic chemical additives is an art in itself. This is particularly important in the manufacture of high performance ceramics such as those used by the billions for electronics, in capacitors, inductors, sensors, etc.) A slurry can be used in place of a powder, and then cast into a desired shape, dried and then sintered. Indeed, traditional pottery is done with this type of method, using a plastic mixture worked with the hands. If a mixture of different materials is used together in a ceramic, the sintering temperature is sometimes above the melting point of one minor component – so-called liquid phase sintering. This results in shorter sintering times compared to solid state sintering. Such liquid phase sintering involves faster diffusion processes and may result in abnormal grain growth. Strength of ceramics A material's strength is dependent on its microstructure. The engineering processes to which a material is subjected can alter its microstructure. Among the strengthening mechanisms that can alter the strength of a material is grain boundary strengthening. Thus, although yield strength is maximized with decreasing grain size, ultimately, very small grain sizes make the material brittle.
Considered in tandem with the fact that the yield strength is the parameter that predicts plastic deformation in the material, one can make informed decisions on how to increase the strength of a material depending on its microstructural properties and the desired end effect. The relation between yield stress and grain size is described mathematically by the Hall-Petch equation, σy = σo + ky/√d, where ky is the strengthening coefficient (a constant unique to each material), σo is a materials constant for the starting stress for dislocation movement (or the resistance of the lattice to dislocation motion), d is the grain diameter, and σy is the yield stress. Theoretically, a material could be made infinitely strong if the grains are made infinitely small. This is, unfortunately, impossible because the lower limit of grain size is a single unit cell of the material. Even then, if the grains of a material are the size of a single unit cell, then the material is in fact amorphous, not crystalline, since there is no long range order, and dislocations cannot be defined in an amorphous material. It has been observed experimentally that the microstructure with the highest yield strength is a grain size of about 10 nanometers, because grains smaller than this undergo another yielding mechanism, grain boundary sliding. Producing engineering materials with this ideal grain size is difficult because of the limitations of initial particle sizes inherent to nanomaterials and nanotechnology. Faber-Evans model The Faber-Evans model was developed by Katherine Faber and Anthony G. Evans to predict the increase in fracture toughness in ceramics due to crack deflection around second-phase particles that are prone to microcracking in a matrix. The model considers particle morphology, aspect ratio, spacing, and volume fraction of the second phase, as well as the reduction in local stress intensity at the crack tip when the crack is deflected or the crack plane bows. Actual crack tortuosity is obtained through imaging techniques, which allows for the direct input of deflection and bowing angles into the model. The model calculates the average strain energy release rate and compares the resulting increase in fracture toughness to that of a flat crack through the plain matrix. The magnitude of the toughening is determined by the mismatch strain caused by thermal contraction incompatibility and the microfracture resistance of the particle/matrix interface. The toughening becomes noticeable with a narrow size distribution of appropriately sized particles, and researchers typically accept that deflection effects in materials with roughly equiaxed grains may increase the fracture toughness by about twice the grain boundary value. The model reveals that the increase in toughness is dependent on particle shape and the volume fraction of the second phase, with the most effective morphology being the rod of high aspect ratio, which can account for a fourfold increase in fracture toughness. The toughening arises primarily from the twist of the crack front between particles, as indicated by deflection profiles. Disc-shaped particles and spheres are less effective in toughening. Fracture toughness, regardless of morphology, is determined by the twist of the crack front at its most severe configuration, rather than the initial tilt of the crack front.
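As a concrete illustration of the Hall-Petch relation given above, the short sketch below evaluates σy = σo + ky/√d for a few grain sizes. The constants σo and ky are hypothetical placeholder values rather than data for any particular ceramic, and, as noted above, the relation breaks down below roughly 10 nm, where grain boundary sliding takes over.

```python
import math

def hall_petch_yield_stress(sigma_o, k_y, grain_diameter_m):
    """Hall-Petch relation: sigma_y = sigma_o + k_y / sqrt(d)."""
    return sigma_o + k_y / math.sqrt(grain_diameter_m)

# Hypothetical material constants, for illustration only:
sigma_o = 150e6    # Pa, lattice resistance to dislocation motion
k_y = 0.30e6       # Pa*m^0.5, strengthening coefficient

for d in (10e-6, 1e-6, 100e-9, 10e-9):    # grain diameters in metres
    sigma_y = hall_petch_yield_stress(sigma_o, k_y, d)
    print(f"d = {d*1e9:7.0f} nm -> predicted yield stress ~ {sigma_y/1e6:6.0f} MPa")
```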
Only for disc-shaped particles does the initial tilting of the crack front provide significant toughening; however, the twist component still overrides the tilt-derived toughening. Additional important features of the deflection analysis include the appearance of asymptotic toughening for the three morphologies at volume fractions in excess of 0.2. It is also noted that a significant influence on the toughening by spherical particles is exerted by the interparticle spacing distribution; greater toughening is afforded when spheres are nearly contacting such that twist angles approach π/2. These predictions provide the basis for the design of high-toughness two-phase ceramic materials. The ideal second phase, in addition to maintaining chemical compatibility, should be present in amounts of 10 to 20 volume percent. Greater amounts may diminish the toughness increase due to overlapping particles. Particles with high aspect ratios, especially those with rod-shaped morphologies, are most suitable for maximum toughening. This model is often used to determine the factors that contribute to the increase in fracture toughness in ceramics, which is ultimately useful in the development of advanced ceramic materials with improved performance. Theory of chemical processing Microstructural uniformity In the processing of fine ceramics, the irregular particle sizes and shapes in a typical powder often lead to non-uniform packing morphologies that result in packing density variations in the powder compact. Uncontrolled agglomeration of powders due to attractive van der Waals forces can also give rise to microstructural inhomogeneities. Differential stresses that develop as a result of non-uniform drying shrinkage are directly related to the rate at which the solvent can be removed, and thus highly dependent upon the distribution of porosity. Such stresses have been associated with a plastic-to-brittle transition in consolidated bodies, and can lead to crack propagation in the unfired body if not relieved. In addition, any fluctuations in packing density in the compact as it is prepared for the kiln are often amplified during the sintering process, yielding inhomogeneous densification. Some pores and other structural defects associated with density variations have been shown to play a detrimental role in the sintering process by growing and thus limiting end-point densities. Differential stresses arising from inhomogeneous densification have also been shown to result in the propagation of internal cracks, thus becoming the strength-controlling flaws. It would therefore appear desirable to process a material in such a way that it is physically uniform with regard to the distribution of components and porosity, rather than using particle size distributions which will maximize the green density. The containment of a uniformly dispersed assembly of strongly interacting particles in suspension requires total control over particle-particle interactions. Monodisperse colloids provide this potential. Monodisperse powders of colloidal silica, for example, may therefore be stabilized sufficiently to ensure a high degree of order in the colloidal crystal or polycrystalline colloidal solid which results from aggregation. The degree of order appears to be limited by the time and space allowed for longer-range correlations to be established.
Such defective polycrystalline colloidal structures would appear to be the basic elements of sub-micrometer colloidal materials science, and, therefore, provide the first step in developing a more rigorous understanding of the mechanisms involved in microstructural evolution in inorganic systems such as polycrystalline ceramics. Self-assembly Self-assembly is the most common term in use in the modern scientific community to describe the spontaneous aggregation of particles (atoms, molecules, colloids, micelles, etc.) without the influence of any external forces. Large groups of such particles are known to assemble themselves into thermodynamically stable, structurally well-defined arrays, quite reminiscent of one of the 7 crystal systems found in metallurgy and mineralogy (e.g. face-centered cubic, body-centered cubic, etc.). The fundamental difference in equilibrium structure is in the spatial scale of the unit cell (or lattice parameter) in each particular case. Thus, self-assembly is emerging as a new strategy in chemical synthesis and nanotechnology. Molecular self-assembly has been observed in various biological systems and underlies the formation of a wide variety of complex biological structures. Molecular crystals, liquid crystals, colloids, micelles, emulsions, phase-separated polymers, thin films and self-assembled monolayers all represent examples of the types of highly ordered structures which are obtained using these techniques. The distinguishing feature of these methods is self-organization in the absence of any external forces. In addition, the principal mechanical characteristics and structures of biological ceramics, polymer composites, elastomers, and cellular materials are being re-evaluated, with an emphasis on bioinspired materials and structures. Traditional approaches focus on design methods of biological materials using conventional synthetic materials. This includes an emerging class of mechanically superior biomaterials based on microstructural features and designs found in nature. New horizons have been identified in the synthesis of bioinspired materials through processes that are characteristic of biological systems in nature. This includes the nanoscale self-assembly of the components and the development of hierarchical structures. Ceramic composites Substantial interest has arisen in recent years in fabricating ceramic composites. While there is considerable interest in composites with one or more non-ceramic constituents, the greatest attention is on composites in which all constituents are ceramic. These typically comprise two ceramic constituents: a continuous matrix, and a dispersed phase of ceramic particles, whiskers, or short (chopped) or continuous ceramic fibers. The challenge, as in wet chemical processing, is to obtain a uniform or homogeneous distribution of the dispersed particle or fiber phase. Consider first the processing of particulate composites. The particulate phase of greatest interest is tetragonal zirconia because of the toughening that can be achieved from the phase transformation from the metastable tetragonal to the monoclinic crystalline phase, also known as transformation toughening. There is also substantial interest in dispersing hard, non-oxide phases such as SiC, TiB, TiC, boron, and carbon, especially in oxide matrices like alumina and mullite. There is also interest in incorporating other ceramic particulates, especially those of highly anisotropic thermal expansion. Examples include Al2O3, TiO2, graphite, and boron nitride.
In processing particulate composites, the issue is not only homogeneity of the size and spatial distribution of the dispersed and matrix phases, but also control of the matrix grain size. However, there is some built-in self-control due to inhibition of matrix grain growth by the dispersed phase. Particulate composites, though generally offer increased resistance to damage, failure, or both, are still quite sensitive to inhomogeneities of composition as well as other processing defects such as pores. Thus they need good processing to be effective. Particulate composites have been made on a commercial basis by simply mixing powders of the two constituents. Although this approach is inherently limited in the homogeneity that can be achieved, it is the most readily adaptable for existing ceramic production technology. However, other approaches are of interest. From the technological standpoint, a particularly desirable approach to fabricating particulate composites is to coat the matrix or its precursor onto fine particles of the dispersed phase with good control of the starting dispersed particle size and the resultant matrix coating thickness. One should in principle be able to achieve the ultimate in homogeneity of distribution and thereby optimize composite performance. This can also have other ramifications, such as allowing more useful composite performance to be achieved in a body having porosity, which might be desired for other factors, such as limiting thermal conductivity. There are also some opportunities to utilize melt processing for fabrication of ceramic, particulate, whisker and short-fiber, and continuous-fiber composites. Clearly, both particulate and whisker composites are conceivable by solid-state precipitation after solidification of the melt. This can also be obtained in some cases by sintering, as for precipitation-toughened, partially stabilized zirconia. Similarly, it is known that one can directionally solidify ceramic eutectic mixtures and hence obtain uniaxially aligned fiber composites. Such composite processing has typically been limited to very simple shapes and thus suffers from serious economic problems due to high machining costs. Clearly, there are possibilities of using melt casting for many of these approaches. Potentially even more desirable is using melt-derived particles. In this method, quenching is done in a solid solution or in a fine eutectic structure, in which the particles are then processed by more typical ceramic powder processing methods into a useful body. There have also been preliminary attempts to use melt spraying as a means of forming composites by introducing the dispersed particulate, whisker, or fiber phase in conjunction with the melt spraying process. Other methods besides melt infiltration to manufacture ceramic composites with long fiber reinforcement are chemical vapor infiltration and the infiltration of fiber preforms with organic precursor, which after pyrolysis yield an amorphous ceramic matrix, initially with a low density. With repeated cycles of infiltration and pyrolysis one of those types of ceramic matrix composites is produced. Chemical vapor infiltration is used to manufacture carbon/carbon and silicon carbide reinforced with carbon or silicon carbide fibers. Besides many process improvements, the first of two major needs for fiber composites is lower fiber costs. 
The second major need is fiber compositions or coatings, or composite processing, to reduce degradation that results from high-temperature composite exposure under oxidizing conditions. Applications The products of technical ceramics include tiles used in the Space Shuttle program, gas burner nozzles, ballistic protection, nuclear fuel uranium oxide pellets, bio-medical implants, jet engine turbine blades, and missile nose cones. Its products are often made from materials other than clay, chosen for their particular physical properties. These may be classified as follows: Oxides: silica, alumina, zirconia Non-oxides: carbides, borides, nitrides, silicides Composites: particulate or whisker reinforced matrices, combinations of oxides and non-oxides (e.g. polymers). Ceramics can be used in many technological industries. One application is the ceramic tiles on NASA's Space Shuttle, used to protect it and the future supersonic space planes from the searing heat of re-entry into the Earth's atmosphere. They are also used widely in electronics and optics. In addition to the applications listed here, ceramics are also used as a coating in various engineering cases. An example would be a ceramic bearing coating over a titanium frame used for an aircraft. Recently the field has come to include the studies of single crystals or glass fibers, in addition to traditional polycrystalline materials, and the applications of these have been overlapping and changing rapidly. Aerospace Engines: shielding a hot running aircraft engine from damaging other components. Airframes: used as a high-stress, high-temp and lightweight bearing and structural component. Missile nose-cones: shielding the missile internals from heat. Space Shuttle tiles Space-debris ballistic shields: ceramic fiber woven shields offer better protection to hypervelocity (~7 km/s) particles than aluminum shields of equal weight. Rocket nozzles: focusing high-temperature exhaust gases from the rocket booster. Unmanned Air Vehicles: ceramic engine utilization in aeronautical applications (such as Unmanned Air Vehicles) may result in enhanced performance characteristics and less operational costs. Biomedical Artificial bone; Dentistry applications, teeth. Biodegradable splints; Reinforcing bones recovering from osteoporosis Implant material Electronics Capacitors Integrated circuit packages Transducers Insulators Optical Optical fibers, guided light wave transmission Switches Laser amplifiers Lenses Infrared heat-seeking devices Automotive Heat shield Exhaust heat management Biomaterials Silicification is quite common in the biological world and occurs in bacteria, single-celled organisms, plants, and animals (invertebrates and vertebrates). Crystalline minerals formed in such environment often show exceptional physical properties (e.g. strength, hardness, fracture toughness) and tend to form hierarchical structures that exhibit microstructural order over a range of length or spatial scales. The minerals are crystallized from an environment that is undersaturated with respect to silicon, and under conditions of neutral pH and low temperature (0–40 °C). Formation of the mineral may occur either within or outside of the cell wall of an organism, and specific biochemical reactions for mineral deposition exist that include lipids, proteins and carbohydrates. Most natural (or biological) materials are complex composites whose mechanical properties are often outstanding, considering the weak constituents from which they are assembled. 
These complex structures, which have risen from hundreds of million years of evolution, are inspiring the design of novel materials with exceptional physical properties for high performance in adverse conditions. Their defining characteristics such as hierarchy, multifunctionality, and the capacity for self-healing, are currently being investigated. The basic building blocks begin with the 20 amino acids and proceed to polypeptides, polysaccharides, and polypeptides–saccharides. These, in turn, compose the basic proteins, which are the primary constituents of the 'soft tissues' common to most biominerals. With well over 1000 proteins possible, current research emphasizes the use of collagen, chitin, keratin, and elastin. The 'hard' phases are often strengthened by crystalline minerals, which nucleate and grow in a bio-mediated environment that determines the size, shape and distribution of individual crystals. The most important mineral phases have been identified as hydroxyapatite, silica, and aragonite. Using the classification of Wegst and Ashby, the principal mechanical characteristics and structures of biological ceramics, polymer composites, elastomers, and cellular materials have been presented. Selected systems in each class are being investigated with emphasis on the relationship between their microstructure over a range of length scales and their mechanical response. Thus, the crystallization of inorganic materials in nature generally occurs at ambient temperature and pressure. Yet the vital organisms through which these minerals form are capable of consistently producing extremely precise and complex structures. Understanding the processes in which living organisms control the growth of crystalline minerals such as silica could lead to significant advances in the field of materials science, and open the door to novel synthesis techniques for nanoscale composite materials, or nanocomposites. High-resolution scanning electron microscope (SEM) observations were performed of the microstructure of the mother-of-pearl (or nacre) portion of the abalone shell. Those shells exhibit the highest mechanical strength and fracture toughness of any non-metallic substance known. The nacre from the shell of the abalone has become one of the more intensively studied biological structures in materials science. Clearly visible in these images are the neatly stacked (or ordered) mineral tiles separated by thin organic sheets along with a macrostructure of larger periodic growth bands which collectively form what scientists are currently referring to as a hierarchical composite structure. (The term hierarchy simply implies that there are a range of structural features which exist over a wide range of length scales). Future developments reside in the synthesis of bio-inspired materials through processing methods and strategies that are characteristic of biological systems. These involve nanoscale self-assembly of the components and the development of hierarchical structures.
Technology
Disciplines
null
24944
https://en.wikipedia.org/wiki/Plate%20tectonics
Plate tectonics
Plate tectonics is the scientific theory that Earth's lithosphere comprises a number of large tectonic plates, which have been slowly moving since 3–4 billion years ago. The model builds on the concept of continental drift, an idea developed during the first decades of the 20th century. Plate tectonics came to be accepted by geoscientists after seafloor spreading was validated in the mid-to-late 1960s. The processes that result in plates and shape Earth's crust are called tectonics. Tectonic plates also occur on other planets and moons. Earth's lithosphere, the rigid outer shell of the planet including the crust and upper mantle, is fractured into seven or eight major plates (depending on how they are defined) and many minor plates or "platelets". Where the plates meet, their relative motion determines the type of plate boundary (or fault): convergent, divergent, or transform. The relative movement of the plates typically ranges from zero to 10 cm annually. Faults tend to be geologically active, experiencing earthquakes, volcanic activity, mountain-building, and oceanic trench formation. Tectonic plates are composed of the oceanic lithosphere and the thicker continental lithosphere, each topped by its own kind of crust. Along convergent plate boundaries, the process of subduction carries the edge of one plate down under the other plate and into the mantle. This process reduces the total surface area (crust) of the Earth. The lost surface is balanced by the formation of new oceanic crust along divergent margins by seafloor spreading, keeping the total surface area constant in a tectonic "conveyor belt". Tectonic plates are relatively rigid and float across the ductile asthenosphere beneath. Lateral density variations in the mantle result in convection currents, the slow creeping motion of Earth's solid mantle. At a seafloor spreading ridge, plates move away from the ridge, which is a topographic high, and the newly formed crust cools as it moves away, increasing its density and contributing to the motion. At a subduction zone the relatively cold, dense oceanic crust sinks down into the mantle, forming the downward convecting limb of a mantle cell, which is the strongest driver of plate motion. The relative importance and interaction of other proposed factors such as active convection, upwelling inside the mantle, and tidal drag of the Moon is still the subject of debate. Key principles The outer layers of Earth are divided into the lithosphere and asthenosphere. The division is based on differences in mechanical properties and in the method for the transfer of heat. The lithosphere is cooler and more rigid, while the asthenosphere is hotter and flows more easily. In terms of heat transfer, the lithosphere loses heat by conduction, whereas the asthenosphere also transfers heat by convection and has a nearly adiabatic temperature gradient. This division should not be confused with the chemical subdivision of these same layers into the mantle (comprising both the asthenosphere and the mantle portion of the lithosphere) and the crust: a given piece of mantle may be part of the lithosphere or the asthenosphere at different times depending on its temperature and pressure. The key principle of plate tectonics is that the lithosphere exists as separate and distinct tectonic plates, which ride on the fluid-like solid asthenosphere. Plate motion speeds range from that of the slow-spreading Mid-Atlantic Ridge (about as fast as fingernails grow) to that of the faster-moving Nazca plate (about as fast as hair grows).
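As a rough sense of scale for the rates quoted above (zero to about 10 cm per year), the following back-of-the-envelope sketch converts steady plate motion into distance over geological time; the rates and time spans used are round illustrative numbers, not measurements tied to any particular plate.

```python
# Distance covered by steady plate motion; rates span the article's
# "zero to 10 cm annually" range, time spans are arbitrary round numbers.

def distance_km(rate_cm_per_year, years):
    return rate_cm_per_year * years / 1.0e5   # 100,000 cm per km

for rate_cm_per_year in (2.5, 5.0, 10.0):
    for million_years in (10, 100):
        km = distance_km(rate_cm_per_year, million_years * 1.0e6)
        print(f"{rate_cm_per_year:4.1f} cm/yr over {million_years:3d} Myr -> {km:6.0f} km")
```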
Tectonic lithosphere plates consist of lithospheric mantle overlain by one or two types of crustal material: oceanic crust (in older texts called sima from silicon and magnesium) and continental crust (sial from silicon and aluminium). The distinction between oceanic crust and continental crust is based on their modes of formation. Oceanic crust is formed at sea-floor spreading centers. Continental crust is formed through arc volcanism and accretion of terranes through plate tectonic processes. Oceanic crust is denser than continental crust because it has less silicon and more of the heavier elements. As a result of this density difference, oceanic crust generally lies below sea level, while continental crust buoyantly projects above sea level. The thickness of oceanic lithosphere is a function of its age. As time passes, it cools by conducting heat from below and releasing it radiatively into space. The adjacent mantle below is cooled by this process and added to its base. Because it is formed at mid-ocean ridges and spreads outwards, its thickness is therefore also a function of its distance from the mid-ocean ridge where it was formed. For a typical distance that oceanic lithosphere must travel before being subducted, the thickness increases from a minimum at mid-ocean ridges to a maximum at subduction zones. For shorter or longer travel distances, the thickness at the subduction zone (and therefore also the mean thickness) becomes smaller or larger, respectively. Continental lithosphere is typically thicker, though its thickness varies considerably between basins, mountain ranges, and stable cratonic interiors of continents. The location where two plates meet is called a plate boundary. Plate boundaries are where geological events occur, such as earthquakes and the creation of topographic features such as mountains, volcanoes, mid-ocean ridges, and oceanic trenches. The vast majority of the world's active volcanoes occur along plate boundaries, with the Pacific plate's Ring of Fire being the most active and widely known. Some volcanoes occur in the interiors of plates, and these have been variously attributed to internal plate deformation and to mantle plumes. Tectonic plates may include continental crust or oceanic crust, or both. For example, the African plate includes the continent and parts of the floor of the Atlantic and Indian Oceans. Some pieces of oceanic crust, known as ophiolites, failed to be subducted under continental crust at destructive plate boundaries; instead these oceanic crustal fragments were pushed upward and were preserved within continental crust. Types of plate boundaries Three types of plate boundaries exist, characterized by the way the plates move relative to each other. They are associated with different types of surface phenomena. The different types of plate boundaries are: Divergent boundaries (constructive boundaries or extensional boundaries). These are where two plates slide apart from each other. At zones of ocean-to-ocean rifting, divergent boundaries form by seafloor spreading, allowing for the formation of a new ocean basin, e.g. the Mid-Atlantic Ridge and East Pacific Rise. As the ocean plate splits, the ridge forms at the spreading center, the ocean basin expands, and finally, the plate area increases, causing many small volcanoes and/or shallow earthquakes.
At zones of continent-to-continent rifting, divergent boundaries may cause a new ocean basin to form as the continent splits, spreads, the central rift collapses, and ocean fills the basin, e.g., the East African Rift, the Baikal Rift, the West Antarctic Rift, the Rio Grande Rift. Convergent boundaries (destructive boundaries or active margins) occur where two plates slide toward each other to form either a subduction zone (one plate moving underneath the other) or a continental collision. Subduction zones are of two types: ocean-to-continent subduction, where the dense oceanic lithosphere plunges beneath the less dense continent, or ocean-to-ocean subduction, where older, cooler, denser oceanic crust slips beneath less dense oceanic crust. Deep marine trenches are typically associated with subduction zones, and the basins that develop along the active boundary are often called "foreland basins". Earthquakes trace the path of the downward-moving plate as it descends into the asthenosphere, a trench forms, and as the subducted plate is heated it releases volatiles, mostly water from hydrous minerals, into the surrounding mantle. The addition of water lowers the melting point of the mantle material above the subducting slab, causing it to melt. The magma that results typically leads to volcanism. At zones of ocean-to-ocean subduction a deep trench forms in an arc shape. The mantle above the subducted plate then heats, and magma rises to form curving chains of volcanic islands, e.g. the Aleutian Islands, the Mariana Islands, the Japanese island arc. At zones of ocean-to-continent subduction, mountain ranges form, e.g. the Andes, the Cascade Range. At continental collision zones, two masses of continental lithosphere converge. Since they are of similar density, neither is subducted. The plate edges are compressed, folded, and uplifted, forming mountain ranges, e.g. the Himalayas and the Alps. Closure of ocean basins can occur at continent-to-continent boundaries. Transform boundaries (conservative boundaries or strike-slip boundaries) occur where plates are neither created nor destroyed. Instead, two plates slide, or perhaps more accurately grind, past each other along transform faults. The relative motion of the two plates is either sinistral (left side toward the observer) or dextral (right side toward the observer). Transform faults occur across a spreading center. Strong earthquakes can occur along a fault. The San Andreas Fault in California is an example of a transform boundary exhibiting dextral motion. Other plate boundary zones occur where the effects of the interactions are unclear, and the boundaries, usually occurring along a broad belt, are not well defined and may show various types of movements in different episodes. Driving forces of plate motion Tectonic plates are able to move because of the relative density of oceanic lithosphere and the relative weakness of the asthenosphere. Dissipation of heat from the mantle is the original source of the energy required to drive plate tectonics through convection or large scale upwelling and doming. As a consequence, a powerful source generating plate motion is the excess density of the oceanic lithosphere sinking in subduction zones. When the new crust forms at mid-ocean ridges, this oceanic lithosphere is initially less dense than the underlying asthenosphere, but it becomes denser with age as it conductively cools and thickens.
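The conductive cooling and thickening with age described above is often approximated, in textbook treatments outside this article, by a half-space cooling model in which lithospheric thickness grows roughly with the square root of seafloor age. The sketch below uses that standard approximation with an assumed thermal diffusivity and prefactor, so the numbers are only indicative.

```python
import math

# Half-space cooling sketch: lithosphere thickness grows roughly with the
# square root of seafloor age. Standard textbook idealization, not from this
# article; kappa and the prefactor (tied to the isotherm chosen to define the
# base of the lithosphere) are assumed values, so results are only indicative.

kappa = 1.0e-6                                 # m^2/s, assumed thermal diffusivity
seconds_per_myr = 1.0e6 * 365.25 * 24 * 3600   # seconds in one million years

def lithosphere_thickness_km(age_myr, prefactor=2.32):
    age_s = age_myr * seconds_per_myr
    return prefactor * math.sqrt(kappa * age_s) / 1000.0

for age in (1, 10, 50, 100):                   # seafloor age in million years
    print(f"{age:3d} Myr -> lithosphere ~ {lithosphere_thickness_km(age):4.0f} km thick")
```

Under these assumptions the predicted thickness grows from roughly a dozen kilometres for very young seafloor to on the order of a hundred kilometres for old seafloor approaching a subduction zone, which is consistent with the qualitative age-thickness relationship described earlier.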
The greater density of old lithosphere relative to the underlying asthenosphere allows it to sink into the deep mantle at subduction zones, providing most of the driving force for plate movement. The weakness of the asthenosphere allows the tectonic plates to move easily towards a subduction zone. Driving forces related to mantle dynamics For much of the last quarter of the 20th century, the leading theory of the driving force behind tectonic plate motions envisaged large scale convection currents in the upper mantle, which can be transmitted through the asthenosphere. This theory was launched by Arthur Holmes and some forerunners in the 1930s and was immediately recognized as the solution for the acceptance of the theory as originally discussed in the papers of Alfred Wegener in the early years of the 20th century. However, despite its acceptance, it was long debated in the scientific community because the leading theory still envisaged a static Earth without moving continents up until the major breakthroughs of the early sixties. Two- and three-dimensional imaging of Earth's interior (seismic tomography) shows a varying lateral density distribution throughout the mantle. Such density variations can be material (from rock chemistry), mineral (from variations in mineral structures), or thermal (through thermal expansion and contraction from heat energy). The manifestation of this varying lateral density is mantle convection from buoyancy forces. How mantle convection directly and indirectly relates to plate motion is a matter of ongoing study and discussion in geodynamics. Somehow, this energy must be transferred to the lithosphere for tectonic plates to move. Essentially, two main types of mantle-dynamics mechanisms are thought to influence plate motion: primary (through the large scale convection cells) and secondary. The secondary mechanisms attribute plate motion to friction between the convection currents in the asthenosphere and the more rigid overlying lithosphere. This is due to the inflow of mantle material related to the downward pull on plates in subduction zones at ocean trenches. Slab pull may occur in a geodynamic setting where basal tractions continue to act on the plate as it dives into the mantle (although perhaps to a greater extent acting on both the under and upper side of the slab). Furthermore, slabs that are broken off and sink into the mantle can cause viscous mantle forces driving plates through slab suction. Plume tectonics In the theory of plume tectonics followed by numerous researchers during the 1990s, a modified concept of mantle convection currents is used. It asserts that super plumes rise from the deeper mantle and are the drivers or substitutes of the major convection cells. These ideas find their roots in the early 1930s in the works of Beloussov and van Bemmelen, which were initially opposed to plate tectonics and placed the mechanism in a fixed frame of vertical movements. Van Bemmelen later modified the concept in his "Undation Models" and used "Mantle Blisters" as the driving force for horizontal movements, invoking gravitational forces away from the regional crustal doming.
The theories find resonance in the modern theories which envisage hot spots or mantle plumes which remain fixed and are overridden by oceanic and continental lithosphere plates over time and leave their traces in the geological record (though these phenomena are not invoked as real driving mechanisms, but rather as modulators). The mechanism is still advocated to explain the break-up of supercontinents during specific geological epochs. It has followers amongst the scientists involved in the theory of Earth expansion. Surge tectonics Another theory is that the mantle flows neither in cells nor large plumes but rather as a series of channels just below Earth's crust, which then provide basal friction to the lithosphere. This theory, called "surge tectonics", was popularized during the 1980s and 1990s. Recent research, based on three-dimensional computer modelling, suggests that plate geometry is governed by a feedback between mantle convection patterns and the strength of the lithosphere. Driving forces related to gravity Forces related to gravity are invoked as secondary phenomena within the framework of a more general driving mechanism such as the various forms of mantle dynamics described above. In modern views, gravity is invoked as the major driving force, through slab pull along subduction zones. Gravitational sliding away from a spreading ridge is one of the proposed driving forces, it proposes plate motion is driven by the higher elevation of plates at ocean ridges. As oceanic lithosphere is formed at spreading ridges from hot mantle material, it gradually cools and thickens with age (and thus adds distance from the ridge). Cool oceanic lithosphere is significantly denser than the hot mantle material from which it is derived and so with increasing thickness it gradually subsides into the mantle to compensate the greater load. The result is a slight lateral incline with increased distance from the ridge axis. This force is regarded as a secondary force and is often referred to as "ridge push". This is a misnomer as there is no force "pushing" horizontally, indeed tensional features are dominant along ridges. It is more accurate to refer to this mechanism as "gravitational sliding", since the topography across the whole plate can vary considerably and spreading ridges are only the most prominent feature. Other mechanisms generating this gravitational secondary force include flexural bulging of the lithosphere before it dives underneath an adjacent plate, producing a clear topographical feature that can offset, or at least affect, the influence of topographical ocean ridges. Mantle plumes and hot spots are also postulated to impinge on the underside of tectonic plates. Slab pull: Scientific opinion is that the asthenosphere is insufficiently competent or rigid to directly cause motion by friction along the base of the lithosphere. Slab pull is therefore most widely thought to be the greatest force acting on the plates. In this understanding, plate motion is mostly driven by the weight of cold, dense plates sinking into the mantle at trenches. Recent models indicate that trench suction plays an important role as well. However, the fact that the North American plate is nowhere being subducted, although it is in motion, presents a problem. The same holds for the African, Eurasian, and Antarctic plates. 
Gravitational sliding away from mantle doming: According to older theories, one of the driving mechanisms of the plates is the existence of large scale asthenosphere/mantle domes which cause the gravitational sliding of lithosphere plates away from them (see the paragraph on Mantle Mechanisms). This gravitational sliding represents a secondary phenomenon of this basically vertically oriented mechanism. It finds its roots in the Undation Model of van Bemmelen. This can act on various scales, from the small scale of one island arc up to the larger scale of an entire ocean basin. Driving forces related to Earth rotation Alfred Wegener, being a meteorologist, had proposed tidal forces and centrifugal forces as the main driving mechanisms behind continental drift; however, these forces were considered far too small to cause continental motion as the concept was of continents plowing through oceanic crust. Therefore, in the last edition of his book in 1929, Wegener changed his position and asserted that convection currents are the main driving force of plate tectonics. However, in the plate tectonics context (accepted since the seafloor spreading proposals of Heezen, Hess, Dietz, Morley, Vine, and Matthews (see below) during the early 1960s), the oceanic crust is suggested to be in motion with the continents, which caused the proposals related to Earth rotation to be reconsidered. In more recent literature, these driving forces are: tidal drag due to the gravitational force the Moon (and the Sun) exerts on the crust of Earth; global deformation of the geoid due to small displacements of the rotational pole with respect to Earth's crust; and other smaller deformation effects of the crust due to wobbles and spin movements of Earth's rotation on a smaller timescale. Forces that are small and generally negligible are: the Coriolis force and the centrifugal force, which is treated as a slight modification of gravity. For these mechanisms to be overall valid, systematic relationships should exist all over the globe between the orientation and kinematics of deformation and the geographical latitudinal and longitudinal grid of Earth itself. Studies of these systematic relations in the second half of the nineteenth century and the first half of the twentieth century underlined exactly the opposite: that the plates had not moved in time, that the deformation grid was fixed with respect to Earth's equator and axis, and that gravitational driving forces were generally acting vertically and caused only local horizontal movements (the so-called pre-plate tectonic, "fixist theories"). Later studies (discussed below on this page), therefore, invoked many of the relationships recognized during this pre-plate tectonics period to support their theories (see reviews of these various mechanisms related to Earth rotation in the work of van Dijk and collaborators). Possible tidal effect on plate tectonics Of the many forces discussed above, tidal force is still highly debated and defended as a possible principal driving force of plate tectonics. The other forces are only used in global geodynamic models not using plate tectonics concepts (therefore beyond the discussions treated in this section) or proposed as minor modulations within the overall plate tectonics model. In 1973, George W. Moore of the USGS and R. C. 
Bostrom presented evidence for a general westward drift of Earth's lithosphere with respect to the mantle, based on the steepness of the subduction zones (shallow dipping towards the east, steeply dipping towards the west). They concluded that tidal forces (the tidal lag or "friction") caused by Earth's rotation and the forces acting upon it by the Moon are a driving force for plate tectonics. As Earth spins eastward beneath the Moon, the Moon's gravity ever so slightly pulls Earth's surface layer back westward, just as proposed by Alfred Wegener (see above). Since 1990 this theory has mainly been advocated by Doglioni and co-workers, such as in a more recent 2006 study, where scientists reviewed and advocated these ideas. It has also been suggested that this observation may explain why Venus and Mars have no plate tectonics, as Venus has no moon and Mars' moons are too small to have significant tidal effects on the planet. In another paper it was suggested that, on the other hand, it can easily be observed that many plates are moving north and eastward, and that the dominantly westward motion of the Pacific Ocean basins derives simply from the eastward bias of the Pacific spreading center (which is not a predicted manifestation of such lunar forces). In the same paper the authors admit, however, that relative to the lower mantle, there is a slight westward component in the motions of all the plates. They demonstrated, though, that the westward drift, seen only for the past 30 Ma, can be attributed to the increased dominance of the steadily growing and accelerating Pacific plate. The debate is still open, and a recent paper by Hofmeister et al. (2022) revived the idea, again advocating the interaction between Earth's rotation and the Moon as a main driving force for the plates. Role of water The role of water has been proposed to be crucial in plate tectonics on Earth. Relative significance of each driving force mechanism The vector of a plate's motion is a function of all the forces acting on the plate; however, therein lies the problem regarding the degree to which each process contributes to the overall motion of each tectonic plate. The diversity of geodynamic settings and the properties of each plate result from the impact of the various processes actively driving each individual plate. One method of dealing with this problem is to consider the relative rate at which each plate is moving as well as the evidence related to the significance of each process to the overall driving force on the plate. One of the most significant correlations discovered to date is that lithospheric plates attached to downgoing (subducting) plates move much faster than other types of plates. The Pacific plate, for instance, is essentially surrounded by zones of subduction (the so-called Ring of Fire) and moves much faster than the plates of the Atlantic basin, which are attached (perhaps one could say 'welded') to adjacent continents instead of subducting plates. It is thus thought that forces associated with the downgoing plate (slab pull and slab suction) are the driving forces which determine the motion of plates, except for those plates which are not being subducted. 
This view however has been contradicted by a recent study which found that the actual motions of the Pacific plate and other plates associated with the East Pacific Rise do not correlate mainly with either slab pull or slab push, but rather with a mantle convection upwelling whose horizontal spreading along the bases of the various plates drives them along via viscosity-related traction forces. The driving forces of plate motion continue to be active subjects of on-going research within geophysics and tectonophysics. History of the theory Summary The development of the theory of plate tectonics was the scientific and cultural change which occurred during a period of 50 years of scientific debate. The event of the acceptance itself was a paradigm shift and can therefore be classified as a scientific revolution, now described as the Plate Tectonics Revolution. Around the start of the twentieth century, various theorists unsuccessfully attempted to explain the many geographical, geological, and biological continuities between continents. In 1912, the meteorologist Alfred Wegener described what he called continental drift, an idea that culminated fifty years later in the modern theory of plate tectonics. Wegener expanded his theory in his 1915 book The Origin of Continents and Oceans. Starting from the idea (also expressed by his forerunners) that the present continents once formed a single land mass (later called Pangaea), Wegener suggested that these separated and drifted apart, likening them to "icebergs" of low density sial floating on a sea of denser sima. Supporting evidence for the idea came from the dove-tailing outlines of South America's east coast and Africa's west coast Antonio Snider-Pellegrini had drawn on his maps, and from the matching of the rock formations along these edges. Confirmation of their previous contiguous nature also came from the fossil plants Glossopteris and Gangamopteris, and the therapsid or mammal-like reptile Lystrosaurus, all widely distributed over South America, Africa, Antarctica, India, and Australia. The evidence for such an erstwhile joining of these continents was patent to field geologists working in the southern hemisphere. The South African Alex du Toit put together a mass of such information in his 1937 publication Our Wandering Continents, and went further than Wegener in recognising the strong links between the Gondwana fragments. Wegener's work was initially not widely accepted, in part due to a lack of detailed evidence but mostly because of the lack of a reasonable physically supported mechanism. Earth might have a solid crust and mantle and a liquid core, but there seemed to be no way that portions of the crust could move around. Many distinguished scientists of the time, such as Harold Jeffreys and Charles Schuchert, were outspoken critics of continental drift. Despite much opposition, the view of continental drift gained support and a lively debate started between "drifters" or "mobilists" (proponents of the theory) and "fixists" (opponents). During the 1920s, 1930s and 1940s, the former reached important milestones proposing that convection currents might have driven the plate movements, and that spreading may have occurred below the sea within the oceanic crust. Concepts close to the elements of plate tectonics were proposed by geophysicists and geologists (both fixists and mobilists) like Vening-Meinesz, Holmes, and Umbgrove. 
In 1941, Otto Ampferer described, in his publication "Thoughts on the motion picture of the Atlantic region", processes that anticipated seafloor spreading and subduction. One of the first pieces of geophysical evidence that was used to support the movement of lithospheric plates came from paleomagnetism. This is based on the fact that rocks of different ages show a variable magnetic field direction, evidenced by studies since the mid–nineteenth century. The magnetic north and south poles reverse through time, and, especially important in paleotectonic studies, the relative position of the magnetic north pole varies through time. Initially, during the first half of the twentieth century, the latter phenomenon was explained by introducing what was called "polar wander" (see apparent polar wander) (i.e., it was assumed that the north pole location had been shifting through time). An alternative explanation, though, was that the continents had moved (shifted and rotated) relative to the north pole, and each continent, in fact, shows its own "polar wander path". During the late 1950s, it was successfully shown on two occasions that these data could show the validity of continental drift: by Keith Runcorn in a paper in 1956, and by Warren Carey in a symposium held in March 1956. The second piece of evidence in support of continental drift came during the late 1950s and early 60s from data on the bathymetry of the deep ocean floors and the nature of the oceanic crust such as magnetic properties and, more generally, with the development of marine geology which gave evidence for the association of seafloor spreading along the mid-oceanic ridges and magnetic field reversals, published between 1959 and 1963 by Heezen, Dietz, Hess, Mason, Vine & Matthews, and Morley. Simultaneous advances in early seismic imaging techniques in and around Wadati–Benioff zones along the trenches bounding many continental margins, together with many other geophysical (e.g., gravimetric) and geological observations, showed how the oceanic crust could disappear into the mantle, providing the mechanism to balance the extension of the ocean basins with shortening along its margins. All this evidence, both from the ocean floor and from the continental margins, made it clear around 1965 that continental drift was feasible. The theory of plate tectonics was defined in a series of papers between 1965 and 1967. The theory revolutionized the Earth sciences, explaining a diverse range of geological phenomena and their implications in other studies such as paleogeography and paleobiology. Continental drift In the late 19th and early 20th centuries, geologists assumed that Earth's major features were fixed, and that most geologic features such as basin development and mountain ranges could be explained by vertical crustal movement, described in what is called the geosynclinal theory. Generally, this was placed in the context of a contracting planet Earth due to heat loss in the course of a relatively short geological time. It was observed as early as 1596 that the opposite coasts of the Atlantic Ocean—or, more precisely, the edges of the continental shelves—have similar shapes and seem to have once fitted together. Since that time many theories were proposed to explain this apparent complementarity, but the assumption of a solid Earth made these various proposals difficult to accept. The discovery of radioactivity and its associated heating properties in 1895 prompted a re-examination of the apparent age of Earth. 
This had previously been estimated by its cooling rate under the assumption that Earth's surface radiated like a black body. Those calculations had implied that, even if it started at red heat, Earth would have dropped to its present temperature in a few tens of millions of years. Armed with the knowledge of a new heat source, scientists realized that Earth would be much older, and that its core was still sufficiently hot to be liquid.

By 1915, after having published a first article in 1912, Alfred Wegener was making serious arguments for the idea of continental drift in the first edition of The Origin of Continents and Oceans. In that book (re-issued in four successive editions up to the final one in 1936), he noted how the east coast of South America and the west coast of Africa looked as if they were once attached. Wegener was not the first to note this (Abraham Ortelius, Antonio Snider-Pellegrini, Eduard Suess, Roberto Mantovani and Frank Bursley Taylor preceded him, to mention just a few), but he was the first to marshal significant fossil and paleo-topographical and climatological evidence to support this simple observation (and was supported in this by researchers such as Alex du Toit). Furthermore, when the rock strata of the margins of separate continents are very similar, it suggests that these rocks were formed in the same way, implying that they were joined initially. For instance, parts of Scotland and Ireland contain rocks very similar to those found in Newfoundland and New Brunswick. Similarly, the Caledonian Mountains of Europe and parts of the Appalachian Mountains of North America are very similar in structure and lithology.

However, his ideas were not taken seriously by many geologists, who pointed out that there was no apparent mechanism for continental drift. Specifically, they did not see how continental rock could plow through the much denser rock that makes up oceanic crust. Wegener could not explain the force that drove continental drift, and his vindication did not come until after his death in 1930.

Floating continents, paleomagnetism, and seismicity zones

Since it had been observed early on that, although granite existed on continents, the seafloor seemed to be composed of denser basalt, the prevailing concept during the first half of the twentieth century was that there were two types of crust, named "sial" (continental type crust) and "sima" (oceanic type crust). Furthermore, it was supposed that a static shell of strata was present under the continents. It therefore appeared that a layer of basalt (sima) underlies the continental rocks.

However, based on abnormalities in plumb line deflection by the Andes in Peru, Pierre Bouguer had deduced that less-dense mountains must have a downward projection into the denser layer underneath. The concept that mountains had "roots" was confirmed by George B. Airy a hundred years later, during a study of Himalayan gravitation, and seismic studies detected corresponding density variations. Therefore, by the mid-1950s, the question remained unresolved as to whether mountain roots were clenched in surrounding basalt or were floating on it like an iceberg.

During the 20th century, improvements in and greater use of seismic instruments such as seismographs enabled scientists to learn that earthquakes tend to be concentrated in specific areas, most notably along the oceanic trenches and spreading ridges.
By the late 1920s, seismologists were beginning to identify several prominent earthquake zones parallel to the trenches that typically were inclined 40–60° from the horizontal and extended several hundred kilometers into Earth. These zones later became known as Wadati–Benioff zones, or simply Benioff zones, in honor of the seismologists who first recognized them, Kiyoo Wadati of Japan and Hugo Benioff of the United States. The study of global seismicity greatly advanced in the 1960s with the establishment of the Worldwide Standardized Seismograph Network (WWSSN) to monitor the compliance of the 1963 treaty banning above-ground testing of nuclear weapons. The much improved data from the WWSSN instruments allowed seismologists to map precisely the zones of earthquake concentration worldwide. Meanwhile, debates developed around the phenomenon of polar wander. Since the early debates of continental drift, scientists had discussed and used evidence that polar drift had occurred because continents seemed to have moved through different climatic zones during the past. Furthermore, paleomagnetic data had shown that the magnetic pole had also shifted during time. Reasoning in an opposite way, the continents might have shifted and rotated, while the pole remained relatively fixed. The first time the evidence of magnetic polar wander was used to support the movements of continents was in a paper by Keith Runcorn in 1956, and successive papers by him and his students Ted Irving (who was actually the first to be convinced of the fact that paleomagnetism supported continental drift) and Ken Creer. This was immediately followed by a symposium on continental drift in Tasmania in March 1956 organised by S. Warren Carey who had been one of the supporters and promotors of Continental Drift since the thirties During this symposium, some of the participants used the evidence in the theory of an expansion of the global crust, a theory which had been proposed by other workers decades earlier. In this hypothesis, the shifting of the continents is explained by a large increase in the size of Earth since its formation. However, although the theory still has supporters in science, this is generally regarded as unsatisfactory because there is no convincing mechanism to produce a significant expansion of Earth. Other work during the following years would soon show that the evidence was equally in support of continental drift on a globe with a stable radius. During the 1930s up to the late 1950s, works by Vening-Meinesz, Holmes, Umbgrove, and numerous others outlined concepts that were close or nearly identical to modern plate tectonics theory. In particular, the English geologist Arthur Holmes proposed in 1920 that plate junctions might lie beneath the sea, and in 1928 that convection currents within the mantle might be the driving force. Often, these contributions are forgotten because: At the time, continental drift was not accepted. Some of these ideas were discussed in the context of abandoned fixist ideas of a deforming globe without continental drift or an expanding Earth. They were published during an episode of extreme political and economic instability that hampered scientific communication. Many were published by European scientists and at first not mentioned or given little credit in the papers on sea floor spreading published by the American researchers in the 1960s. 
Mid-oceanic ridge spreading and convection In 1947, a team of scientists led by Maurice Ewing utilizing the Woods Hole Oceanographic Institution's research vessel Atlantis and an array of instruments, confirmed the existence of a rise in the central Atlantic Ocean, and found that the floor of the seabed beneath the layer of sediments consisted of basalt, not the granite which is the main constituent of continents. They also found that the oceanic crust was much thinner than continental crust. All these new findings raised important and intriguing questions. The new data that had been collected on the ocean basins also showed particular characteristics regarding the bathymetry. One of the major outcomes of these datasets was that all along the globe, a system of mid-oceanic ridges was detected. An important conclusion was that along this system, new ocean floor was being created, which led to the concept of the "Great Global Rift". This was described in the crucial paper of Bruce Heezen (1960) based on his work with Marie Tharp, which would trigger a real revolution in thinking. A profound consequence of seafloor spreading is that new crust was, and still is, being continually created along the oceanic ridges. For this reason, Heezen initially advocated the so-called "expanding Earth" hypothesis of S. Warren Carey (see above). Therefore, the question remained as to how new crust could continuously be added along the oceanic ridges without increasing the size of Earth. In reality, this question had been solved already by numerous scientists during the 1940s and the 1950s, like Arthur Holmes, Vening-Meinesz, Coates and many others: The crust in excess disappeared along what were called the oceanic trenches, where so-called "subduction" occurred. Therefore, when various scientists during the early 1960s started to reason on the data at their disposal regarding the ocean floor, the pieces of the theory quickly fell into place. The question particularly intrigued Harry Hammond Hess, a Princeton University geologist and a Naval Reserve Rear Admiral, and Robert S. Dietz, a scientist with the United States Coast and Geodetic Survey who coined the term seafloor spreading. Dietz and Hess (the former published the same idea one year earlier in Nature, but priority belongs to Hess who had already distributed an unpublished manuscript of his 1962 article by 1960) were among the small number who really understood the broad implications of sea floor spreading and how it would eventually agree with the, at that time, unconventional and unaccepted ideas of continental drift and the elegant and mobilistic models proposed by previous workers like Holmes. In the same year, Robert R. Coats of the U.S. Geological Survey described the main features of island arc subduction in the Aleutian Islands. His paper, though little noted (and sometimes even ridiculed) at the time, has since been called "seminal" and "prescient". In reality, it shows that the work by the European scientists on island arcs and mountain belts performed and published during the 1930s up until the 1950s was applied and appreciated also in the United States. If Earth's crust was expanding along the oceanic ridges, Hess and Dietz reasoned like Holmes and others before them, it must be shrinking elsewhere. Hess followed Heezen, suggesting that new oceanic crust continuously spreads away from the ridges in a conveyor belt–like motion. 
And, using the mobilistic concepts developed before, he correctly concluded that many millions of years later, the oceanic crust eventually descends along the continental margins where oceanic trenches—very deep, narrow canyons—are formed, e.g. along the rim of the Pacific Ocean basin. The important step Hess made was that convection currents would be the driving force in this process, arriving at the same conclusions as Holmes had decades before with the only difference that the thinning of the ocean crust was performed using Heezen's mechanism of spreading along the ridges. Hess therefore concluded that the Atlantic Ocean was expanding while the Pacific Ocean was shrinking. As old oceanic crust is "consumed" in the trenches (like Holmes and others, he thought this was done by thickening of the continental lithosphere, not, as later understood, by underthrusting at a larger scale of the oceanic crust itself into the mantle), new magma rises and erupts along the spreading ridges to form new crust. In effect, the ocean basins are perpetually being "recycled", with the forming of new crust and the destruction of old oceanic lithosphere occurring simultaneously. Thus, the new mobilistic concepts neatly explained why Earth does not get bigger with sea floor spreading, why there is so little sediment accumulation on the ocean floor, and why oceanic rocks are much younger than continental rocks. Magnetic striping Beginning in the 1950s, scientists like Victor Vacquier, using magnetic instruments (magnetometers) adapted from airborne devices developed during World War II to detect submarines, began recognizing odd magnetic variations across the ocean floor. This finding, though unexpected, was not entirely surprising because it was known that basalt—the iron-rich, volcanic rock making up the ocean floor—contains a strongly magnetic mineral (magnetite) and can locally distort compass readings. This distortion was recognized by Icelandic mariners as early as the late 18th century. More importantly, because the presence of magnetite gives the basalt measurable magnetic properties, these newly discovered magnetic variations provided another means to study the deep ocean floor. When newly formed rock cools, such magnetic materials recorded Earth's magnetic field at the time. As more and more of the seafloor was mapped during the 1950s, the magnetic variations turned out not to be random or isolated occurrences, but instead revealed recognizable patterns. When these magnetic patterns were mapped over a wide region, the ocean floor showed a zebra-like pattern: one stripe with normal polarity and the adjoining stripe with reversed polarity. The overall pattern, defined by these alternating bands of normally and reversely polarized rock, became known as magnetic striping, and was published by Ron G. Mason and co-workers in 1961, who did not find, though, an explanation for these data in terms of sea floor spreading, like Vine, Matthews and Morley a few years later. The discovery of magnetic striping called for an explanation. In the early 1960s scientists such as Heezen, Hess and Dietz had begun to theorise that mid-ocean ridges mark structurally weak zones where the ocean floor was being ripped in two lengthwise along the ridge crest (see the previous paragraph). New magma from deep within Earth rises easily through these weak zones and eventually erupts along the crest of the ridges to create new oceanic crust. 
This process, at first denominated the "conveyer belt hypothesis" and later called seafloor spreading, operating over many millions of years continues to form new ocean floor all across the 50,000 km-long system of mid-ocean ridges. Only four years after the maps with the "zebra pattern" of magnetic stripes were published, the link between sea floor spreading and these patterns was recognized independently by Lawrence Morley, and by Fred Vine and Drummond Matthews, in 1963, (the Vine–Matthews–Morley hypothesis). This hypothesis linked these patterns to geomagnetic reversals and was supported by several lines of evidence: the stripes are symmetrical around the crests of the mid-ocean ridges; at or near the crest of the ridge, the rocks are very young, and they become progressively older away from the ridge crest; the youngest rocks at the ridge crest always have modern (normal) polarity; stripes of rock parallel to the ridge crest alternate in magnetic polarity (normal-reversed-normal, etc.), suggesting that they were formed during different epochs documenting the (already known from independent studies) normal and reversal episodes of Earth's magnetic field. By explaining both the zebra-like magnetic striping and the construction of the mid-ocean ridge system, the seafloor spreading hypothesis (SFS) quickly gained converts and represented another major advance in the development of the plate-tectonics theory. Furthermore, the oceanic crust came to be appreciated as a natural "tape recording" of the history of the geomagnetic field reversals (GMFR) of Earth's magnetic field. Extensive studies were dedicated to the calibration of the normal-reversal patterns in the oceanic crust on one hand and known timescales derived from the dating of basalt layers in sedimentary sequences (magnetostratigraphy) on the other, to arrive at estimates of past spreading rates and plate reconstructions. Definition and refining of the theory After all these considerations, plate tectonics (or, as it was initially called "New Global Tectonics") became quickly accepted and numerous papers followed that defined the concepts: In 1965, Tuzo Wilson who had been a promoter of the sea floor spreading hypothesis and continental drift from the very beginning added the concept of transform faults to the model, completing the classes of fault types necessary to make the mobility of the plates on the globe work out. A symposium on continental drift was held at the Royal Society of London in 1965 which must be regarded as the official start of the acceptance of plate tectonics by the scientific community, and which abstracts are issued as . In this symposium, Edward Bullard and co-workers showed with a computer calculation how the continents along both sides of the Atlantic would best fit to close the ocean, which became known as the famous "Bullard's Fit". In 1966 Wilson published the paper that referred to previous plate tectonic reconstructions, introducing the concept of what became known as the "Wilson Cycle". In 1967, at the American Geophysical Union's meeting, W. Jason Morgan proposed that Earth's surface consists of 12 rigid plates that move relative to each other. Two months later, Xavier Le Pichon published a complete model based on six major plates with their relative motions, which marked the final acceptance by the scientific community of plate tectonics. 
In the same year, McKenzie and Parker independently presented a model similar to Morgan's using translations and rotations on a sphere to define the plate motions. From that moment onwards, discussions have been focusing on the relative role of the forces driving plate tectonics, in order to evolve from a kinematic concept into a dynamic theory. Initially these concepts were focused on mantle convection, in the footsteps of A. Holmes, and also introduced the importance of the gravitational pull of subducted slabs through the works of Elsasser, Solomon, Sleep, Uyeda and Turcotte. Other authors evoked external driving forces due to the tidal drag of the Moon and other celestial bodies, and, especially since 2000, with the emergence of computational models reproducing Earth's mantle behaviour to first order, following upon the older unifying concepts of van Bemmelen, authors re-evaluated the important role of mantle dynamics. Implications for life According to a hypothesis proposed by Robert Stern and Taras Gerya, plate tectonics are a necessary criterion for a planet to be able to sustain complex life because of the role plate tectonics plays in regulating the carbon cycle. Continental drift theory helps biogeographers to explain the disjunct biogeographic distribution of present-day life found on different continents but having similar ancestors. Plate reconstruction Reconstruction is used to establish past (and future) plate configurations, helping determine the shape and make-up of ancient supercontinents and providing a basis for paleogeography. Defining plate boundaries Active plate boundaries are defined by their seismicity. Past plate boundaries within existing plates are identified from a variety of evidence, such as the presence of ophiolites that are indicative of vanished oceans. Emergence of plate tectonics and past plate motions The timing of the emergence of plate tectonics on Earth has been the subject of considerable controversy, with the estimated time varying wildly between researchers, spanning 85% of Earth's history. Some authors have suggested that during at least part of the Archean period (~4-2.5 billion years ago) the mantle was between 100 and 250 °C warmer than at present, which is thought to be incompatible with modern-style plate tectonics, and that Earth may have had a stagnant lid or other kinds of regimes. The increasingly felsic nature of preserved rocks between 3 and 2.5 billion years ago implies that subduction zones had emerged by this time, with preserved zircons suggesting that subduction may have begun as early as 3.8 billion years ago. Early subduction zones appear to have been temporary and localized, though to what degree is controversial. Modern plate tectonics are suggested to have emerged by at least 2.2 billion years ago with the formation of the first recognised supercontinent Columbia, though some authors have suggested that modern-style plate tectonics did not appear until 800 million years ago based on the late appearance of rock types like blueschist which require cold subducted material. Other authors have suggested that plate tectonics were already functional in the Hadean, over 4 billion years ago. Various types of quantitative and semi-quantitative information are available to constrain past plate motions. The geometric fit between continents, such as between west Africa and South America is still an important part of plate reconstruction. 
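The kinematic core of these models treats each rigid plate as rotating about an Euler pole, so the surface velocity of any point on the plate follows from v = ω × r. The sketch below illustrates that relation only; the pole position and rotation rate are hypothetical placeholder values, not taken from any published plate model or reconstruction.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0
DEG = np.pi / 180.0

def unit_vector(lat_deg, lon_deg):
    """Cartesian unit vector of a point given in geographic coordinates."""
    lat, lon = lat_deg * DEG, lon_deg * DEG
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def surface_velocity_cm_per_yr(pole_lat, pole_lon, omega_deg_per_myr,
                               site_lat, site_lon):
    """Speed (cm/yr) of a site on a rigid plate rotating about an Euler pole."""
    # Angular velocity vector in radians per million years.
    w = omega_deg_per_myr * DEG * unit_vector(pole_lat, pole_lon)
    # Position vector of the site in kilometres.
    r = EARTH_RADIUS_KM * unit_vector(site_lat, site_lon)
    v_km_per_myr = np.cross(w, r)                 # v = w x r
    speed_km_per_myr = np.linalg.norm(v_km_per_myr)
    return speed_km_per_myr * 1.0e5 / 1.0e6       # km/Myr -> cm/yr

# Hypothetical Euler pole at (60 N, 90 W) rotating at 0.5 deg/Myr,
# evaluated for a site on the equator at 0 E: roughly 5-6 cm/yr,
# the same order as observed plate speeds.
print(round(surface_velocity_cm_per_yr(60.0, -90.0, 0.5, 0.0, 0.0), 2), "cm/yr")
```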
Magnetic stripe patterns provide a reliable guide to relative plate motions going back into the Jurassic period. The tracks of hotspots give absolute reconstructions, but these are only available back to the Cretaceous. Older reconstructions rely mainly on paleomagnetic pole data, although these only constrain the latitude and rotation, but not the longitude. Combining poles of different ages in a particular plate to produce apparent polar wander paths provides a method for comparing the motions of different plates through time. Additional evidence comes from the distribution of certain sedimentary rock types, faunal provinces shown by particular fossil groups, and the position of orogenic belts. Formation and break-up of continents The movement of plates has caused the formation and break-up of continents over time, including occasional formation of a supercontinent that contains most or all of the continents. The supercontinent Columbia or Nuna formed during a period of and broke up about . The supercontinent Rodinia is thought to have formed about 1billion years ago and to have embodied most or all of Earth's continents, and broken up into eight continents around . The eight continents later re-assembled into another supercontinent called Pangaea; Pangaea broke up into Laurasia (which became North America and Eurasia) and Gondwana (which became the remaining continents). The Himalayas, the world's tallest mountain range, are assumed to have been formed by the collision of two major plates. Before uplift, the area where they stand was covered by the Tethys Ocean. Modern plates Depending on how they are defined, there are usually seven or eight "major" plates: African, Antarctic, Eurasian, North American, South American, Pacific, and Indo-Australian. The latter is sometimes subdivided into the Indian and Australian plates. There are dozens of smaller plates, the eight largest of which are the Arabian, Caribbean, Juan de Fuca, Cocos, Nazca, Philippine Sea, Scotia and Somali. During the 2020s, new proposals have come forward that divide the Earth's crust into many smaller plates, called terranes, which reflects the fact that Plate reconstructions show that the larger plates have been internally deformed and oceanic and continental plates have been fragmented over time. This has resulted in the definition of roughly 1200 terranes inside the oceanic plates, continental blocks and the mobile zones (mountainous belts) that separate them. The motion of the tectonic plates is determined by remote sensing satellite data sets, calibrated with ground station measurements. Other celestial bodies The appearance of plate tectonics on terrestrial planets is related to planetary mass, with more massive planets than Earth expected to exhibit plate tectonics. Earth may be a borderline case, owing its tectonic activity to abundant water (silica and water form a deep eutectic). Venus Venus shows no evidence of active plate tectonics. There is debatable evidence of active tectonics in the planet's distant past; however, events taking place since then (such as the plausible and generally accepted hypothesis that the Venusian lithosphere has thickened greatly over the course of several hundred million years) has made constraining the course of its geologic record difficult. However, the numerous well-preserved impact craters have been used as a dating method to approximately date the Venusian surface (since there are thus far no known samples of Venusian rock to be dated by more reliable methods). 
Dates derived are dominantly in the range , although ages of up to have been calculated. This research has led to the fairly well accepted hypothesis that Venus has undergone an essentially complete volcanic resurfacing at least once in its distant past, with the last event taking place approximately within the range of estimated surface ages. While the mechanism of such an impressive thermal event remains a debated issue in Venusian geosciences, some scientists are advocates of processes involving plate motion to some extent. One explanation for Venus's lack of plate tectonics is that on Venus temperatures are too high for significant water to be present. Earth's crust is soaked with water, and water plays an important role in the development of shear zones. Plate tectonics requires weak surfaces in the crust along which crustal slices can move, and it may well be that such weakening never took place on Venus because of the absence of water. However, some researchers remain convinced that plate tectonics is or was once active on this planet.

Mars

Mars is considerably smaller than Earth and Venus, and there is evidence for ice on its surface and in its crust. In the 1990s, it was proposed that the Martian crustal dichotomy was created by plate tectonic processes. Scientists have since determined that it was created either by upwelling within the Martian mantle that thickened the crust of the Southern Highlands and formed Tharsis, or by a giant impact that excavated the Northern Lowlands. Valles Marineris may be a tectonic boundary. Observations of the magnetic field of Mars by the Mars Global Surveyor spacecraft in 1999 showed patterns of magnetic striping on this planet. Some scientists interpreted these as requiring plate tectonic processes, such as seafloor spreading. However, their data failed a "magnetic reversal test", which is used to see if they were formed by flipping polarities of a global magnetic field.

Icy moons

Exoplanets

On Earth-sized planets, plate tectonics is more likely if there are oceans of water. However, in 2007, two independent teams of researchers came to opposing conclusions about the likelihood of plate tectonics on larger super-Earths, with one team saying that plate tectonics would be episodic or stagnant and the other team saying that plate tectonics is very likely on super-Earths even if the planet is dry. Consideration of plate tectonics is a part of the search for extraterrestrial intelligence and extraterrestrial life.
Physical sciences
Earth science
null
24975
https://en.wikipedia.org/wiki/Piezoelectricity
Piezoelectricity
Piezoelectricity is the electric charge that accumulates in certain solid materials—such as crystals, certain ceramics, and biological matter such as bone, DNA, and various proteins—in response to applied mechanical stress. The word piezoelectricity means electricity resulting from pressure and latent heat. It is derived from the Greek πιέζειν (piezein, "to squeeze or press") and ἤλεκτρον (ēlektron, "amber", an ancient source of static electricity). The German form of the word (Piezoelektricität) was coined in 1881 by the German physicist Wilhelm Gottlieb Hankel; the English word was coined in 1883.

The piezoelectric effect results from the linear electromechanical interaction between the mechanical and electrical states in crystalline materials with no inversion symmetry. The piezoelectric effect is a reversible process: materials exhibiting the piezoelectric effect also exhibit the reverse piezoelectric effect, the internal generation of a mechanical strain resulting from an applied electric field. For example, lead zirconate titanate crystals will generate measurable piezoelectricity when their static structure is deformed by about 0.1% of the original dimension. Conversely, those same crystals will change about 0.1% of their static dimension when an external electric field is applied. The inverse piezoelectric effect is used in the production of ultrasound waves.

French physicists Jacques and Pierre Curie discovered piezoelectricity in 1880. The piezoelectric effect has been exploited in many useful applications, including the production and detection of sound, piezoelectric inkjet printing, generation of high voltage electricity, as a clock generator in electronic devices, in microbalances, to drive an ultrasonic nozzle, and in ultrafine focusing of optical assemblies. It forms the basis for scanning probe microscopes that resolve images at the scale of atoms. It is used in the pickups of some electronically amplified guitars and as triggers in most modern electronic drums. The piezoelectric effect also finds everyday uses, such as generating sparks to ignite gas cooking and heating devices, torches, and cigarette lighters.

History

Discovery and early research

The pyroelectric effect, by which a material generates an electric potential in response to a temperature change, was studied by Carl Linnaeus and Franz Aepinus in the mid-18th century. Drawing on this knowledge, both René Just Haüy and Antoine César Becquerel posited a relationship between mechanical stress and electric charge; however, experiments by both proved inconclusive. The first demonstration of the direct piezoelectric effect was in 1880 by the brothers Pierre Curie and Jacques Curie. They combined their knowledge of pyroelectricity with their understanding of the underlying crystal structures that gave rise to pyroelectricity to predict crystal behavior, and demonstrated the effect using crystals of tourmaline, quartz, topaz, cane sugar, and Rochelle salt (sodium potassium tartrate tetrahydrate). Quartz and Rochelle salt exhibited the most piezoelectricity. The Curies, however, did not predict the converse piezoelectric effect. The converse effect was mathematically deduced from fundamental thermodynamic principles by Gabriel Lippmann in 1881. The Curies immediately confirmed the existence of the converse effect, and went on to obtain quantitative proof of the complete reversibility of electro-elasto-mechanical deformations in piezoelectric crystals.
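The roughly 0.1% deformation quoted above for lead zirconate titanate can be checked from the small-signal linear relation of the converse effect, strain S = d·E. The sketch below is only an order-of-magnitude verification; the coefficient and field are assumed values typical of soft PZT, not figures stated in the text.

```python
# Converse piezoelectric effect, small-signal estimate: strain S3 = d33 * E3.
d33 = 500e-12   # m/V, assumed coefficient of the order reported for soft PZT ceramics
E3 = 2.0e6      # V/m, assumed field (e.g. 2 kV across a 1 mm thick element)

strain = d33 * E3
print(f"strain = {strain:.1e} ({strain * 100:.2f} %)")  # about 1e-3, i.e. ~0.1 %
```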
For the next few decades, piezoelectricity remained something of a laboratory curiosity, though it was a vital tool in the discovery of polonium and radium by Pierre and Marie Curie in 1898. More work was done to explore and define the crystal structures that exhibited piezoelectricity. This culminated in 1910 with the publication of Woldemar Voigt's Lehrbuch der Kristallphysik (Textbook on Crystal Physics), which described the 20 natural crystal classes capable of piezoelectricity, and rigorously defined the piezoelectric constants using tensor analysis. World War I and inter-war years The first practical application for piezoelectric devices was sonar, first developed during World War I. The superior performance of piezoelectric devices, operating at ultrasonic frequencies, superseded the earlier Fessenden oscillator. In France in 1917, Paul Langevin and his coworkers developed an ultrasonic submarine detector. The detector consisted of a transducer, made of thin quartz crystals carefully glued between two steel plates, and a hydrophone to detect the returned echo. By emitting a high-frequency pulse from the transducer, and measuring the amount of time it takes to hear an echo from the sound waves bouncing off an object, one can calculate the distance to that object. Piezoelectric devices found homes in many fields. Ceramic phonograph cartridges simplified player design, were cheap and accurate, and made record players cheaper to maintain and easier to build. The development of the ultrasonic transducer allowed for easy measurement of viscosity and elasticity in fluids and solids, resulting in huge advances in materials research. Ultrasonic time-domain reflectometers (which send an ultrasonic pulse through a material and measure reflections from discontinuities) could find flaws inside cast metal and stone objects, improving structural safety. World War II and post-war During World War II, independent research groups in the United States, USSR, and Japan discovered a new class of synthetic materials, called ferroelectrics, which exhibited piezoelectric constants many times higher than natural materials. This led to intense research to develop barium titanate and later lead zirconate titanate materials with specific properties for particular applications. One significant example of the use of piezoelectric crystals was developed by Bell Telephone Laboratories. Following World War I, Frederick R. Lack, working in radio telephony in the engineering department, developed the "AT cut" crystal, a crystal that operated through a wide range of temperatures. Lack's crystal did not need the heavy accessories previous crystal used, facilitating its use on the aircraft. This development allowed Allied air forces to engage in coordinated mass attacks through the use of aviation radio. Development of piezoelectric devices and materials in the United States was kept within the companies doing the development, mostly due to the wartime beginnings of the field, and in the interests of securing profitable patents. New materials were the first to be developed—quartz crystals were the first commercially exploited piezoelectric material, but scientists searched for higher-performance materials. Despite the advances in materials and the maturation of manufacturing processes, the United States market did not grow as quickly as Japan's did. Without many new applications, the growth of the United States' piezoelectric industry suffered. 
In contrast, Japanese manufacturers shared their information, quickly overcoming technical and manufacturing challenges and creating new markets. In Japan, a temperature-stable crystal cut was developed by Issac Koga. Japanese efforts in materials research created piezoceramic materials competitive with United States materials but free of expensive patent restrictions. Major Japanese piezoelectric developments included new designs of piezoceramic filters for radios and televisions, piezo buzzers and audio transducers that can connect directly to electronic circuits, and the piezoelectric igniter, which generates sparks for small engine ignition systems and gas-grill lighters by compressing a ceramic disc. Ultrasonic transducers that transmit sound waves through air had existed for quite some time but first saw major commercial use in early television remote controls. These transducers are now mounted on several car models as an echolocation device, helping the driver determine the distance from the car to any objects that may be in its path.

Mechanism

The nature of the piezoelectric effect is closely related to the occurrence of electric dipole moments in solids. The latter may either be induced for ions on crystal lattice sites with asymmetric charge surroundings (as in BaTiO3 and PZTs) or may directly be carried by molecular groups (as in cane sugar). The dipole density or polarization (dimensionality [C·m/m3]) may easily be calculated for crystals by summing up the dipole moments per volume of the crystallographic unit cell. As every dipole is a vector, the dipole density P is a vector field. Dipoles near each other tend to be aligned in regions called Weiss domains. The domains are usually randomly oriented, but can be aligned using the process of poling (not the same as magnetic poling), a process by which a strong electric field is applied across the material, usually at elevated temperatures. Not all piezoelectric materials can be poled.

Of decisive importance for the piezoelectric effect is the change of polarization P when applying a mechanical stress. This might either be caused by a reconfiguration of the dipole-inducing surroundings or by re-orientation of molecular dipole moments under the influence of the external stress. Piezoelectricity may then manifest in a variation of the polarization strength, its direction or both, with the details depending on: 1. the orientation of P within the crystal; 2. crystal symmetry; and 3. the applied mechanical stress. The change in P appears as a variation of surface charge density upon the crystal faces, i.e. as a variation of the electric field extending between the faces caused by a change in dipole density in the bulk. For example, a 1 cm3 cube of quartz with 2 kN (500 lbf) of correctly applied force can produce a voltage of 12,500 V.

Piezoelectric materials also show the opposite effect, called the converse piezoelectric effect, where the application of an electrical field creates mechanical deformation in the crystal.

Mathematical description

Linear piezoelectricity is the combined effect of:

The linear electrical behavior of the material:
$$\mathbf{D} = \boldsymbol{\varepsilon}\,\mathbf{E}, \qquad D_i = \varepsilon_{ij} E_j ,$$
where D is the electric flux density (electric displacement), ε is the permittivity (free-body dielectric constant), and E is the electric field strength.

Hooke's law for linear elastic materials:
$$\mathbf{S} = s\,\mathbf{T}, \qquad S_{ij} = s_{ijkl} T_{kl} ,$$
where S is the linearized strain, s is compliance under short-circuit conditions, T is stress, and $S_{ij} = \tfrac{1}{2}\left(\partial u_i/\partial x_j + \partial u_j/\partial x_i\right)$, where u is the displacement vector.
These may be combined into so-called coupled equations, of which the strain-charge form is:
$$\mathbf{S} = s^{E}\,\mathbf{T} + d^{\,t}\,\mathbf{E}$$
$$\mathbf{D} = d\,\mathbf{T} + \boldsymbol{\varepsilon}^{T}\,\mathbf{E}$$
where d is the piezoelectric tensor and the superscript t stands for its transpose. Due to the symmetry of d, $d^{\,t}_{ijk} = d_{kij}$.

In matrix form,
$$\{S\} = \left[s^{E}\right]\{T\} + \left[d^{\,t}\right]\{E\}$$
$$\{D\} = \left[d\right]\{T\} + \left[\varepsilon^{T}\right]\{E\}$$
where [d] is the matrix for the direct piezoelectric effect and [d^t] is the matrix for the converse piezoelectric effect. The superscript E indicates a zero, or constant, electric field; the superscript T indicates a zero, or constant, stress field; and the superscript t stands for transposition of a matrix.

Notice that the third-order tensor d maps vectors into symmetric matrices. There are no non-trivial rotation-invariant tensors that have this property, which is why there are no isotropic piezoelectric materials.

The strain-charge for a material of the 4mm (C4v) crystal class (such as a poled piezoelectric ceramic such as tetragonal PZT or BaTiO3) as well as the 6mm crystal class may also be written as (ANSI IEEE 176):
$$\begin{bmatrix} S_1 \\ S_2 \\ S_3 \\ S_4 \\ S_5 \\ S_6 \end{bmatrix} =
\begin{bmatrix}
s_{11}^{E} & s_{12}^{E} & s_{13}^{E} & 0 & 0 & 0 \\
s_{12}^{E} & s_{11}^{E} & s_{13}^{E} & 0 & 0 & 0 \\
s_{13}^{E} & s_{13}^{E} & s_{33}^{E} & 0 & 0 & 0 \\
0 & 0 & 0 & s_{44}^{E} & 0 & 0 \\
0 & 0 & 0 & 0 & s_{44}^{E} & 0 \\
0 & 0 & 0 & 0 & 0 & 2\left(s_{11}^{E}-s_{12}^{E}\right)
\end{bmatrix}
\begin{bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \\ T_5 \\ T_6 \end{bmatrix}
+
\begin{bmatrix}
0 & 0 & d_{31} \\
0 & 0 & d_{31} \\
0 & 0 & d_{33} \\
0 & d_{15} & 0 \\
d_{15} & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} E_1 \\ E_2 \\ E_3 \end{bmatrix}$$
$$\begin{bmatrix} D_1 \\ D_2 \\ D_3 \end{bmatrix} =
\begin{bmatrix}
0 & 0 & 0 & 0 & d_{15} & 0 \\
0 & 0 & 0 & d_{15} & 0 & 0 \\
d_{31} & d_{31} & d_{33} & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \\ T_5 \\ T_6 \end{bmatrix}
+
\begin{bmatrix}
\varepsilon_{11}^{T} & 0 & 0 \\
0 & \varepsilon_{11}^{T} & 0 \\
0 & 0 & \varepsilon_{33}^{T}
\end{bmatrix}
\begin{bmatrix} E_1 \\ E_2 \\ E_3 \end{bmatrix}$$
where the first equation represents the relationship for the converse piezoelectric effect and the latter for the direct piezoelectric effect.

Although the above equations are the most used form in the literature, some comments about the notation are necessary. Generally, D and E are vectors, that is, Cartesian tensors of rank 1; and permittivity ε is a Cartesian tensor of rank 2. Strain and stress are, in principle, also rank-2 tensors. But conventionally, because strain and stress are all symmetric tensors, the subscripts of strain and stress can be relabeled in the following fashion: 11 → 1; 22 → 2; 33 → 3; 23 → 4; 13 → 5; 12 → 6. (Different conventions may be used by different authors in the literature. For example, some use 12 → 4; 23 → 5; 31 → 6 instead.) That is why S and T appear to have the "vector form" of six components. Consequently, s appears to be a 6-by-6 matrix instead of a rank-4 tensor. Such a relabeled notation is often called Voigt notation.

Whether the shear strain components S4, S5, S6 are tensor components or engineering strains is another question. In the equation above, they must be engineering strains for the 6,6 coefficient of the compliance matrix to be written as shown, i.e., $2\left(s_{11}^{E}-s_{12}^{E}\right)$. Engineering shear strains are double the value of the corresponding tensor shear, such as S6 = 2S12 and so on. This also means that $s_{66} = 1/G_{12}$, where G12 is the shear modulus.

In total, there are four piezoelectric coefficients, dij, eij, gij, and hij, defined as follows:
$$d_{ij} = \left(\frac{\partial D_i}{\partial T_j}\right)^{\!E}, \quad e_{ij} = \left(\frac{\partial D_i}{\partial S_j}\right)^{\!E}, \quad g_{ij} = -\left(\frac{\partial E_i}{\partial T_j}\right)^{\!D}, \quad h_{ij} = -\left(\frac{\partial E_i}{\partial S_j}\right)^{\!D}$$
$$d_{ij} = \left(\frac{\partial S_j}{\partial E_i}\right)^{\!T}, \quad e_{ij} = -\left(\frac{\partial T_j}{\partial E_i}\right)^{\!S}, \quad g_{ij} = \left(\frac{\partial S_j}{\partial D_i}\right)^{\!T}, \quad h_{ij} = -\left(\frac{\partial T_j}{\partial D_i}\right)^{\!S}$$
where the first set of four terms corresponds to the direct piezoelectric effect and the second set of four terms corresponds to the converse piezoelectric effect. The equality between the direct piezoelectric tensor and the transpose of the converse piezoelectric tensor originates from the Maxwell relations of thermodynamics.

For those piezoelectric crystals for which the polarization is of the crystal-field induced type, a formalism has been worked out that allows for the calculation of piezoelectrical coefficients dij from electrostatic lattice constants or higher-order Madelung constants.

Crystal classes

Of the 32 crystal classes, 21 are non-centrosymmetric (not having a centre of symmetry), and of these, 20 exhibit direct piezoelectricity (the 21st is the cubic class 432). Ten of these represent the polar crystal classes, which show a spontaneous polarization without mechanical stress due to a non-vanishing electric dipole moment associated with their unit cell, and which exhibit pyroelectricity.
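As a concrete illustration of the strain-charge form and the 4mm/6mm matrices above, the short sketch below evaluates both equations for a poled ceramic under a compressive stress along the poling axis. All coefficient values are assumed, order-of-magnitude placeholders of the kind quoted for soft PZT in the literature, not data from this article.

```python
import numpy as np

# Assumed, order-of-magnitude constants for a soft PZT-like ceramic (placeholders).
d33, d31, d15 = 400e-12, -170e-12, 580e-12                          # m/V (= C/N)
s11, s12, s13, s33, s44 = 16e-12, -5e-12, -7e-12, 19e-12, 45e-12    # 1/Pa, at constant E
eps0 = 8.854e-12
eps11, eps33 = 1700 * eps0, 1800 * eps0                             # F/m, at constant T

# Voigt-notation matrices of the 4mm / 6mm strain-charge form.
sE = np.array([[s11, s12, s13, 0, 0, 0],
               [s12, s11, s13, 0, 0, 0],
               [s13, s13, s33, 0, 0, 0],
               [0, 0, 0, s44, 0, 0],
               [0, 0, 0, 0, s44, 0],
               [0, 0, 0, 0, 0, 2 * (s11 - s12)]])
d = np.array([[0, 0, 0, 0, d15, 0],
              [0, 0, 0, d15, 0, 0],
              [d31, d31, d33, 0, 0, 0]])
epsT = np.diag([eps11, eps11, eps33])

T = np.array([0, 0, -1e6, 0, 0, 0])   # 1 MPa compressive stress along the poling axis
E = np.array([0, 0, 0])               # short-circuited electrodes

S = sE @ T + d.T @ E                  # elastic + converse response (strain)
D = d @ T + epsT @ E                  # direct response (surface charge density)
print("S3 strain:", S[2])                     # about -1.9e-5
print("D3 charge density (C/m^2):", D[2])     # about -4e-4
```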
If the dipole moment can be reversed by applying an external electric field, the material is said to be ferroelectric. The 10 polar (pyroelectric) crystal classes: 1, 2, m, mm2, 4, , 3, 3m, 6, . The other 10 piezoelectric crystal classes: 222, , 422, 2m, 32, , 622, 2m, 23, 3m. For polar crystals, for which P ≠ 0 holds without applying a mechanical load, the piezoelectric effect manifests itself by changing the magnitude or the direction of P or both. For the nonpolar but piezoelectric crystals, on the other hand, a polarization P different from zero is only elicited by applying a mechanical load. For them the stress can be imagined to transform the material from a nonpolar crystal class (P = 0) to a polar one, having P ≠ 0. Materials Many materials exhibit piezoelectricity. Crystalline materials Langasite (La3Ga5SiO14) – a quartz-analogous crystal Gallium orthophosphate (GaPO4) – a quartz-analogous crystal Lithium niobate (LiNbO3) Lithium tantalate (LiTaO3) Quartz Berlinite (AlPO4) – a rare phosphate mineral that is structurally identical to quartz Rochelle salt Topaz – piezoelectricity in topaz can probably be attributed to ordering of the (F,OH) in its lattice, which is otherwise centrosymmetric: orthorhombic bipyramidal (mmm). Topaz has anomalous optical properties, which are attributed to such ordering. Tourmaline-group minerals Lead titanate (PbTiO3) – although it occurs in nature as mineral macedonite, it is synthesized for research and applications. Ceramics Ceramics with randomly oriented grains must be ferroelectric to exhibit piezoelectricity. The occurrence of abnormal grain growth (AGG) in sintered polycrystalline piezoelectric ceramics has detrimental effects on the piezoelectric performance in such systems and should be avoided, as the microstructure in piezoceramics exhibiting AGG tends to consist of few abnormally large elongated grains in a matrix of randomly oriented finer grains. Macroscopic piezoelectricity is possible in textured polycrystalline non-ferroelectric piezoelectric materials, such as AlN and ZnO. The families of ceramics with perovskite, tungsten-bronze, and related structures exhibit piezoelectricity: Lead zirconate titanate ( with 0 ≤ x ≤ 1) – more commonly known as PZT, the most common piezoelectric ceramic in use today. Potassium niobate (KNbO3) Sodium tungstate (Na2WO3) Ba2NaNb5O5 Pb2KNb5O15 Zinc oxide (ZnO) – Wurtzite structure. While single crystals of ZnO are piezoelectric and pyroelectric, polycrystalline (ceramic) ZnO with randomly oriented grains exhibits neither piezoelectric nor pyroelectric effect. Not being ferroelectric, polycrystalline ZnO cannot be poled like barium titanate or PZT. Ceramics and polycrystalline thin films of ZnO may exhibit macroscopic piezoelectricity and pyroelectricity only if they are textured (grains are preferentially oriented), such that the piezoelectric and pyroelectric responses of all individual grains do not cancel. This is readily accomplished in polycrystalline thin films. Lead-free piezoceramics Sodium potassium niobate ((K,Na)NbO3). This material is also known as NKN or KNN. In 2004, a group of Japanese researchers led by Yasuyoshi Saito discovered a sodium potassium niobate composition with properties close to those of PZT, including a high TC. Certain compositions of this material have been shown to retain a high mechanical quality factor (Qm ≈ 900) with increasing vibration levels, whereas the mechanical quality factor of hard PZT degrades in such conditions. 
This fact makes NKN a promising replacement for high power resonance applications, such as piezoelectric transformers. Bismuth ferrite (BiFeO3)  – a promising candidate for the replacement of lead-based ceramics. Sodium niobate (NaNbO3) Barium titanate (BaTiO3) – Barium titanate was the first piezoelectric ceramic discovered. Bismuth titanate (Bi4Ti3O12) Sodium bismuth titanate (NaBi(TiO3)2) The fabrication of lead-free piezoceramics pose multiple challenges, from an environmental standpoint and their ability to replicate the properties of their lead-based counterparts. By removing the lead component of the piezoceramic, the risk of toxicity to humans decreases, but the mining and extraction of the materials can be harmful to the environment. Analysis of the environmental profile of PZT versus sodium potassium niobate (NKN or KNN) shows that across the four indicators considered (primary energy consumption, toxicological footprint, eco-indicator 99, and input-output upstream greenhouse gas emissions), KNN is actually more harmful to the environment. Most of the concerns with KNN, specifically its Nb2O5 component, are in the early phase of its life cycle before it reaches manufacturers. Since the harmful impacts are focused on these early phases, some actions can be taken to minimize the effects. Returning the land as close to its original form after Nb2O5 mining via dam deconstruction or replacing a stockpile of utilizable soil are known aids for any extraction event. For minimizing air quality effects, modeling and simulation still needs to occur to fully understand what mitigation methods are required. The extraction of lead-free piezoceramic components has not grown to a significant scale at this time, but from early analysis, experts encourage caution when it comes to environmental effects. Fabricating lead-free piezoceramics faces the challenge of maintaining the performance and stability of their lead-based counterparts. In general, the main fabrication challenge is creating the "morphotropic phase boundaries (MPBs)" that provide the materials with their stable piezoelectric properties without introducing the "polymorphic phase boundaries (PPBs)" that decrease the temperature stability of the material. New phase boundaries are created by varying additive concentrations so that the phase transition temperatures converge at room temperature. The introduction of the MPB improves piezoelectric properties, but if a PPB is introduced, the material becomes negatively affected by temperature. Research is ongoing to control the type of phase boundaries that are introduced through phase engineering, diffusing phase transitions, domain engineering, and chemical modification. III–V and II–VI semiconductors A piezoelectric potential can be created in any bulk or nanostructured semiconductor crystal having non central symmetry, such as the Group III–V and II–VI materials, due to polarization of ions under applied stress and strain. This property is common to both the zincblende and wurtzite crystal structures. To first order, there is only one independent piezoelectric coefficient in zincblende, called e14, coupled to shear components of the strain. In wurtzite, there are instead three independent piezoelectric coefficients: e31, e33 and e15. The semiconductors where the strongest piezoelectricity is observed are those commonly found in the wurtzite structure, i.e. GaN, InN, AlN and ZnO (see piezotronics). 
Since 2006, there have also been a number of reports of strong non linear piezoelectric effects in polar semiconductors. Such effects are generally recognized to be at least important if not of the same order of magnitude as the first order approximation. Polymers The piezo-response of polymers is not as high as the response for ceramics; however, polymers hold properties that ceramics do not. Over the last few decades, non-toxic, piezoelectric polymers have been studied and applied due to their flexibility and smaller acoustical impedance. Other properties that make these materials significant include their biocompatibility, biodegradability, low cost, and low power consumption compared to other piezo-materials (ceramics, etc.). Piezoelectric polymers and non-toxic polymer composites can be used given their different physical properties. Piezoelectric polymers can be classified by bulk polymers, voided charged polymers ("piezoelectrets"), and polymer composites. A piezo-response observed by bulk polymers is mostly due to its molecular structure. There are two types of bulk polymers: amorphous and semi-crystalline. Examples of semi-crystalline polymers are polyvinylidene fluoride (PVDF) and its copolymers, polyamides, and parylene-C. Non-crystalline polymers, such as polyimide and polyvinylidene chloride (PVDC), fall under amorphous bulk polymers. Voided charged polymers exhibit the piezoelectric effect due to charge induced by poling of a porous polymeric film. Under an electric field, charges form on the surface of the voids forming dipoles. Electric responses can be caused by any deformation of these voids. The piezoelectric effect can also be observed in polymer composites by integrating piezoelectric ceramic particles into a polymer film. A polymer does not have to be piezo-active to be an effective material for a polymer composite. In this case, a material could be made up of an inert matrix with a separate piezo-active component. PVDF exhibits piezoelectricity several times greater than quartz. The piezo-response observed from PVDF is about 20–30 pC/N. That is an order of 5–50 times less than that of piezoelectric ceramic lead zirconate titanate (PZT). The thermal stability of the piezoelectric effect of polymers in the PVDF family (i.e. vinylidene fluoride co-poly trifluoroethylene) goes up to 125 °C. Some applications of PVDF are pressure sensors, hydrophones, and shock wave sensors. Due to their flexibility, piezoelectric composites have been proposed as energy harvesters and nanogenerators. In 2018, it was reported by Zhu et al. that a piezoelectric response of about 17 pC/N could be obtained from PDMS/PZT nanocomposite at 60% porosity. Another PDMS nanocomposite was reported in 2017, in which BaTiO3 was integrated into PDMS to make a stretchable, transparent nanogenerator for self-powered physiological monitoring. In 2016, polar molecules were introduced into a polyurethane foam in which high responses of up to 244 pC/N were reported. Other materials Most materials exhibit at least weak piezoelectric responses. Trivial examples include sucrose (table sugar), DNA, viral proteins, including those from bacteriophage. An actuator based on wood fibers, called cellulose fibers, has been reported. D33 responses for cellular polypropylene are around 200 pC/N. Some applications of cellular polypropylene are musical key pads, microphones, and ultrasound-based echolocation systems. 
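The pC/N figures quoted for PVDF translate directly into the charge and open-circuit voltage a simple film sensor would deliver. The arithmetic below is only a sketch with assumed film dimensions, a mid-range coefficient, and a nominal permittivity; none of these numbers come from the text.

```python
# Charge and open-circuit voltage of a hypothetical PVDF film loaded in thickness mode.
d33 = 25e-12          # C/N, mid-range of the 20-30 pC/N quoted for PVDF
force = 10.0          # N, assumed applied force
area = 1.0e-4         # m^2 (assumed 1 cm x 1 cm film)
thickness = 50e-6     # m, assumed film thickness
eps_r = 12.0          # assumed relative permittivity of PVDF
eps0 = 8.854e-12

charge = d33 * force                            # Q = d * F  -> 250 pC
capacitance = eps_r * eps0 * area / thickness   # film treated as a parallel-plate capacitor
voltage = charge / capacitance                  # open-circuit output
print(f"Q = {charge * 1e12:.0f} pC, C = {capacitance * 1e12:.0f} pF, V = {voltage:.2f} V")
```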
Recently, single amino acid such as β-glycine also displayed high piezoelectric (178 pmV−1) as compared to other biological materials. Ionic liquids were recently identified as the first piezoelectric liquid. Application High voltage and power sources Direct piezoelectricity of some substances, like quartz, can generate potential differences of thousands of volts. The best-known application is the electric cigarette lighter: pressing the button causes a spring-loaded hammer to hit a piezoelectric crystal, producing a sufficiently high-voltage electric current that flows across a small spark gap, thus heating and igniting the gas. The portable sparkers used to ignite gas stoves work the same way, and many types of gas burners now have built-in piezo-based ignition systems. A similar idea is being researched by DARPA in the United States in a project called energy harvesting, which includes an attempt to power battlefield equipment by piezoelectric generators embedded in soldiers' boots. However, these energy harvesting sources by association affect the body. DARPA's effort to harness 1–2 watts from continuous shoe impact while walking were abandoned due to the impracticality and the discomfort from the additional energy expended by a person wearing the shoes. Other energy harvesting ideas include Crowd Farm, harvesting the energy from human movements in train stations or other public places and converting a dance floor to generate electricity. Vibrations from industrial machinery can also be harvested by piezoelectric materials to charge batteries for backup supplies or to power low-power microprocessors and wireless radios. A piezoelectric transformer is a type of AC voltage multiplier. Unlike a conventional transformer, which uses magnetic coupling between input and output, the piezoelectric transformer uses acoustic coupling. An input voltage is applied across a short length of a bar of piezoceramic material such as PZT, creating an alternating stress in the bar by the inverse piezoelectric effect and causing the whole bar to vibrate. The vibration frequency is chosen to be the resonant frequency of the block, typically in the 100 kilohertz to 1 megahertz range. A higher output voltage is then generated across another section of the bar by the piezoelectric effect. Step-up ratios of more than 1,000:1 have been demonstrated. An extra feature of this transformer is that, by operating it above its resonant frequency, it can be made to appear as an inductive load, which is useful in circuits that require a controlled soft start. These devices can be used in DC–AC inverters to drive cold cathode fluorescent lamps. Piezo transformers are some of the most compact high voltage sources. Sensors The principle of operation of a piezoelectric sensor is that a physical dimension, transformed into a force, acts on two opposing faces of the sensing element. Depending on the design of a sensor, different "modes" to load the piezoelectric element can be used: longitudinal, transversal and shear. Detection of pressure variations in the form of sound is the most common sensor application, e.g. piezoelectric microphones (sound waves bend the piezoelectric material, creating a changing voltage) and piezoelectric pickups for acoustic-electric guitars. A piezo sensor attached to the body of an instrument is known as a contact microphone. Piezoelectric sensors especially are used with high frequency sound in ultrasonic transducers for medical imaging and also industrial nondestructive testing (NDT). 
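Whether in sonar or in medical and NDT transducers, the ranging arithmetic behind these pulse-echo sensors is the same time-of-flight relation, distance = c·t/2. The snippet below illustrates it with assumed nominal sound speeds (textbook values, not calibrated data).

```python
# Pulse-echo ranging with a piezoelectric transducer: distance = c * t / 2.
def echo_distance_m(round_trip_time_s, sound_speed_m_s):
    """Distance to a reflector from the round-trip time of an ultrasonic pulse."""
    return sound_speed_m_s * round_trip_time_s / 2.0

# Assumed nominal sound speeds for two typical media.
print(echo_distance_m(0.20, 1500.0))    # sonar in seawater, 0.2 s echo -> 150 m
print(echo_distance_m(65e-6, 1540.0))   # soft tissue, 65 microsecond echo -> ~5 cm
```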
For many sensing techniques, the sensor can act as both a sensor and an actuator—often the term transducer is preferred when the device acts in this dual capacity, but most piezo devices have this property of reversibility whether it is used or not. Ultrasonic transducers, for example, can inject ultrasound waves into the body, receive the returned wave, and convert it to an electrical signal (a voltage). Most medical ultrasound transducers are piezoelectric. In addition to those mentioned above, various sensor and transducer applications include: Piezoelectric elements are also used in the detection and generation of sonar waves. Piezoelectric materials are used in single-axis and dual-axis tilt sensing. Power monitoring in high power applications (e.g. medical treatment, sonochemistry and industrial processing). Piezoelectric microbalances are used as very sensitive chemical and biological sensors. Piezoelectrics are sometimes used in strain gauges. More commonly however, a Piezoresistive effect element is used. A piezoelectric transducer was used in the penetrometer instrument on the Huygens Probe. Piezoelectric transducers are used in electronic drum pads to detect the impact of the drummer's sticks, and to detect muscle movements in medical acceleromyography. Automotive engine management systems use piezoelectric transducers to detect Engine knock (Knock Sensor, KS), also known as detonation, at certain hertz frequencies. A piezoelectric transducer is also used in fuel injection systems to measure manifold absolute pressure (MAP sensor) to determine engine load, and ultimately the fuel injectors milliseconds of on time. Ultrasonic piezo sensors are used in the detection of acoustic emissions in acoustic emission testing. Piezoelectric transducers can be used in transit-time ultrasonic flow meters. Actuators As very high electric fields correspond to only tiny changes in the width of the crystal, this width can be changed with better-than-μm precision, making piezo crystals the most important tool for positioning objects with extreme accuracy—thus their use in actuators. Multilayer ceramics, using layers thinner than , allow reaching high electric fields with voltage lower than . These ceramics are used within two kinds of actuators: direct piezo actuators and amplified piezoelectric actuators. While direct actuator's stroke is generally lower than , amplified piezo actuators can reach millimeter strokes. Loudspeakers: Voltage is converted to mechanical movement of a metallic diaphragm. Ultrasonic cleaning usually uses piezoelectric elements to produce intense sound waves in liquid. Piezoelectric motors: Piezoelectric elements apply a directional force to an axle, causing it to rotate. Due to the extremely small distances involved, the piezo motor is viewed as a high-precision replacement for the stepper motor. Piezoelectric elements can be used in laser mirror alignment, where their ability to move a large mass (the mirror mount) over microscopic distances is exploited to electronically align some laser mirrors. By precisely controlling the distance between mirrors, the laser electronics can accurately maintain optical conditions inside the laser cavity to optimize the beam output. A related application is the acousto-optic modulator, a device that scatters light off soundwaves in a crystal, generated by piezoelectric elements. This is useful for fine-tuning a laser's frequency. 
Atomic force microscopes and scanning tunneling microscopes employ converse piezoelectricity to keep the sensing needle close to the specimen. Inkjet printers: On many inkjet printers, piezoelectric crystals are used to drive the ejection of ink from the inkjet print head towards the paper. Diesel engines: High-performance common rail diesel engines use piezoelectric fuel injectors, first developed by Robert Bosch GmbH, instead of the more common solenoid valve devices. Active vibration control using amplified actuators. X-ray shutters. XY stages for micro scanning used in infrared cameras. Moving the patient precisely inside active CT and MRI scanners where the strong radiation or magnetism precludes electric motors. Crystal earpieces are sometimes used in old or low power radios. High-intensity focused ultrasound for localized heating or creating a localized cavitation can be achieved, for example, in patient's body or in an industrial chemical process. Refreshable braille display. A small crystal is expanded by applying a current that moves a lever to raise individual braille cells. Piezoelectric actuator. A single crystal or a number of crystals are expanded by applying a voltage for moving and controlling a mechanism or system. Piezoelectric actuators are used for fine servo positioning in hard disc drives. Frequency standard The piezoelectrical properties of quartz are useful as a standard of frequency. Quartz clocks employ a crystal oscillator made from a quartz crystal that uses a combination of both direct and converse piezoelectricity to generate a regularly timed series of electrical pulses that is used to mark time. The quartz crystal (like any elastic material) has a precisely defined natural frequency (caused by its shape and size) at which it prefers to oscillate, and this is used to stabilize the frequency of a periodic voltage applied to the crystal. The same principle is used in some radio transmitters and receivers, and in computers where it creates a clock pulse. Both of these usually use a frequency multiplier to reach gigahertz ranges. Piezoelectric motors Types of piezoelectric motor include: The ultrasonic motor used for auto-focus in reflex cameras Inchworm motors for linear motion Rectangular four-quadrant motors with high power density (2.5 W/cm3) and speed ranging from 10 nm/s to 800 mm/s. Stepping piezo motor, using stick-slip effect. Aside from the stepping stick-slip motor, all these motors work on the same principle. Driven by dual orthogonal vibration modes with a phase difference of 90°, the contact point between two surfaces vibrates in an elliptical path, producing a frictional force between the surfaces. Usually, one surface is fixed, causing the other to move. In most piezoelectric motors, the piezoelectric crystal is excited by a sine wave signal at the resonant frequency of the motor. Using the resonance effect, a much lower voltage can be used to produce a high vibration amplitude. A stick-slip motor works using the inertia of a mass and the friction of a clamp. Such motors can be very small. Some are used for camera sensor displacement, thus allowing an anti-shake function. Reduction of vibrations and noise Different teams of researchers have been investigating ways to reduce vibrations in materials by attaching piezo elements to the material. When the material is bent by a vibration in one direction, the vibration-reduction system responds to the bend and sends electric power to the piezo element to bend in the other direction. 
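A minimal sketch of this feedback idea follows, with the vibrating panel modelled as a lightly damped mass-spring system and the piezo element commanded to push against the measured velocity. All masses, stiffnesses, and gains below are illustrative assumptions, not values from any of the systems mentioned.

```python
# Minimal active-damping sketch: a vibrating panel modelled as a mass-spring-damper,
# with a piezo element commanded to push against the measured velocity.
# All parameters are illustrative assumptions.

def simulate(active, m=0.5, k=2000.0, c=0.2, gain=20.0,
             x0=1e-3, dt=1e-4, steps=20000):
    x, v = x0, 0.0                              # initial deflection (m), velocity (m/s)
    late_peak = 0.0
    for i in range(steps):
        u = -gain * v if active else 0.0        # piezo force opposing the motion
        a = (-k * x - c * v + u) / m            # Newton's second law
        v += a * dt
        x += v * dt
        if i > steps // 2:                      # track amplitude in the second half
            late_peak = max(late_peak, abs(x))
    return late_peak

if __name__ == "__main__":
    print("late amplitude, passive:", simulate(active=False))
    print("late amplitude, active: ", simulate(active=True))
```

With the feedback active, the late-time amplitude in this toy model collapses to a tiny fraction of the passive case; in practice a single bonded piezo patch can serve as both the velocity sensor and the counteracting actuator in such a loop.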
Future applications of this technology are expected in cars and houses to reduce noise. Further applications to flexible structures, such as shells and plates, have also been studied for nearly three decades. In a demonstration at the Material Vision Fair in Frankfurt in November 2005, a team from TU Darmstadt in Germany showed several panels that were hit with a rubber mallet; the panel with the piezo element immediately stopped swinging. Piezoelectric ceramic fiber technology is being used as an electronic damping system on some HEAD tennis rackets. All piezo transducers have a fundamental resonant frequency and many harmonic frequencies. Piezo-driven drop-on-demand fluid systems are sensitive to extra vibrations in the piezo structure, which must be reduced or eliminated. One inkjet company, Howtek, Inc., solved this problem by replacing rigid glass inkjet nozzles with soft Tefzel inkjet nozzles. This idea popularized single-nozzle inkjets, which are now used in 3D inkjet printers that run for years if kept clean inside and not overheated (Tefzel creeps under pressure at very high temperatures). Infertility treatment In people with previous total fertilization failure, piezoelectric activation of oocytes together with intracytoplasmic sperm injection (ICSI) seems to improve fertilization outcomes. Surgery Piezosurgery is a minimally invasive technique that aims to cut a target tissue with little damage to neighboring tissues. For example, Hoigne et al. use frequencies in the range 25–29 kHz, causing microvibrations of 60–210 μm. It has the ability to cut mineralized tissue without cutting neurovascular tissue and other soft tissue, thereby maintaining a blood-free operating area, better visibility, and greater precision. Potential applications In 2015, Cambridge University researchers, working in conjunction with researchers from the National Physical Laboratory and the Cambridge-based dielectric antenna company Antenova Ltd and using thin films of piezoelectric materials, found that at a certain frequency these materials become not only efficient resonators but efficient radiators as well, meaning that they can potentially be used as antennas. The researchers found that by subjecting the piezoelectric thin films to an asymmetric excitation, the symmetry of the system is similarly broken, resulting in a corresponding symmetry breaking of the electric field and the generation of electromagnetic radiation. Several attempts to apply piezoelectric technology at the macro scale have emerged to harvest kinetic energy from walking pedestrians. In this case, locating high-traffic areas is critical for optimizing the energy-harvesting efficiency, and the orientation of the tile pavement also significantly affects the total amount of energy harvested. A pedestrian flow-density evaluation is recommended to qualitatively assess the piezoelectric power-harvesting potential of the area under consideration, based on the number of pedestrian crossings per unit time. In X. Li's study, the potential application of a commercial piezoelectric energy harvester in a central hub building at Macquarie University in Sydney, Australia, is examined and discussed. Optimization of the piezoelectric tile deployment is presented according to the frequency of pedestrian mobility, and a model is developed in which 3.1% of the total floor area with the highest pedestrian mobility is paved with piezoelectric tiles.
The modelling results indicate that the total annual energy-harvesting potential for the proposed optimized tile pavement model is estimated at 1.1 MWh/year, which would be sufficient to meet close to 0.5% of the annual energy needs of the building. In Israel, a company has installed piezoelectric materials under a busy highway; the energy generated is enough to power street lights, billboards, and signs. The tire company Goodyear plans to develop an electricity-generating tire lined on the inside with piezoelectric material. As the tire moves, it deforms, and electricity is generated. The efficiency of a hybrid photovoltaic cell that contains piezoelectric materials can be increased simply by placing it near a source of ambient noise or vibration. The effect was demonstrated with organic cells using zinc oxide nanotubes. The electricity generated by the piezoelectric effect itself is a negligible percentage of the overall output. Sound levels as low as 75 decibels improved efficiency by up to 50%. Efficiency peaked at 10 kHz, the resonant frequency of the nanotubes. The electrical field set up by the vibrating nanotubes interacts with electrons migrating from the organic polymer layer. This process decreases the likelihood of recombination, in which electrons are energized but settle back into a hole instead of migrating to the electron-accepting ZnO layer.
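Returning to the pedestrian-tile estimates above, the sketch below shows how such an annual figure is typically composed from a footfall rate, an electrical yield per step, and a tile count. Every number in it is a placeholder assumption, not data from the cited study.

```python
# Back-of-envelope pedestrian energy-harvesting estimate.
# Every figure below is an illustrative assumption, not data from the cited study.

def annual_energy_kwh(steps_per_tile_per_day, joules_per_step, n_tiles,
                      days_per_year=365):
    """Total electrical energy harvested per year, in kilowatt-hours."""
    joules = steps_per_tile_per_day * joules_per_step * n_tiles * days_per_year
    return joules / 3.6e6                      # 1 kWh = 3.6 MJ

if __name__ == "__main__":
    # e.g. 5,000 footfalls per tile per day, 2 J of electricity per footfall,
    # and a few hundred tiles covering the busiest part of the floor area
    e = annual_energy_kwh(5000, 2.0, 500)
    print(f"approximately {e:,.0f} kWh/year")
```

Because the yield per step and the realistic footfall count each vary by an order of magnitude between products and sites, the pedestrian-density survey recommended above is what turns this kind of estimate into a usable figure.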
Physical sciences
Solid mechanics
Physics
24977
https://en.wikipedia.org/wiki/Product%20%28mathematics%29
Product (mathematics)
In mathematics, a product is the result of multiplication, or an expression that identifies objects (numbers or variables) to be multiplied, called factors. For example, 21 is the product of 3 and 7 (the result of multiplication), and is the product of and (indicating that the two factors should be multiplied together). When one factor is an integer, the product is called a multiple. The order in which real or complex numbers are multiplied has no bearing on the product; this is known as the commutative law of multiplication. When matrices or members of various other associative algebras are multiplied, the product usually depends on the order of the factors. Matrix multiplication, for example, is non-commutative, and so is multiplication in other algebras in general as well. There are many different kinds of products in mathematics: besides being able to multiply just numbers, polynomials or matrices, one can also define products on many different algebraic structures. Product of two numbers Originally, a product was, and still is, the result of the multiplication of two or more numbers. For example, is the product of and . The fundamental theorem of arithmetic states that every composite number is a product of prime numbers that is unique up to the order of the factors. With the introduction of mathematical notation and variables at the end of the 15th century, it became common to consider the multiplication of numbers that are either unspecified (coefficients and parameters) or to be found (unknowns). Such multiplications, which cannot be carried out explicitly, are also called products. For example, in the linear equation the term denotes the product of the coefficient and the unknown. Later, and essentially from the 19th century on, new binary operations have been introduced which do not involve numbers at all and have been called products; for example, the dot product. Most of this article is devoted to such non-numerical products. Product of a sequence The product operator for the product of a sequence is denoted by the capital Greek letter pi Π (in analogy to the use of the capital Sigma Σ as summation symbol). For example, the expression is another way of writing . The product of a sequence consisting of only one number is just that number itself; the product of no factors at all is known as the empty product, and is equal to 1. Commutative rings Commutative rings have a product operation. Residue classes of integers Residue classes in the rings can be added: and multiplied: Convolution Two functions from the reals to themselves can be multiplied in another way, called the convolution. If then the integral is well defined and is called the convolution. Under the Fourier transform, convolution becomes point-wise function multiplication. Polynomial rings The product of two polynomials is given by the following: with Products in linear algebra There are many different kinds of products in linear algebra. Some of these have confusingly similar names (outer product, exterior product) with very different meanings, while others have very different names (outer product, tensor product, Kronecker product) and yet convey essentially the same idea. A brief overview of these is given in the following sections. Scalar multiplication By the very definition of a vector space, one can form the product of any scalar with any vector, giving a map . Scalar product A scalar product is a bilinear map: with the following conditions, that for all . From the scalar product, one can define a norm by letting .
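A quick numerical check of this construction (a sketch using NumPy, with the standard dot product playing the role of the scalar product): the norm obtained as the square root of the scalar product of a vector with itself agrees with the library's built-in Euclidean norm.

```python
# Norm induced by a scalar product: ||v|| = sqrt(<v, v>),
# illustrated with the standard dot product on R^3.
import numpy as np

def induced_norm(v):
    """Norm defined from the scalar product of v with itself."""
    return np.sqrt(np.dot(v, v))

v = np.array([3.0, 4.0, 12.0])
print(induced_norm(v))              # 13.0
print(np.linalg.norm(v))            # same value, NumPy's Euclidean norm
```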
The scalar product also allows one to define an angle between two vectors: In -dimensional Euclidean space, the standard scalar product (called the dot product) is given by: Cross product in 3-dimensional space The cross product of two vectors in 3-dimensions is a vector perpendicular to the two factors, with length equal to the area of the parallelogram spanned by the two factors. The cross product can also be expressed as the formal determinant: Composition of linear mappings A linear mapping can be defined as a function f between two vector spaces V and W with underlying field F, satisfying If one only considers finite dimensional vector spaces, then in which bV and bW denote the bases of V and W, and vi denotes the component of v on bVi, and Einstein summation convention is applied. Now we consider the composition of two linear mappings between finite dimensional vector spaces. Let the linear mapping f map V to W, and let the linear mapping g map W to U. Then one can get Or in matrix form: in which the i-row, j-column element of F, denoted by Fij, is fji, and Gij=gji. The composition of more than two linear mappings can be similarly represented by a chain of matrix multiplication. Product of two matrices Given two matrices and their product is given by Composition of linear functions as matrix product There is a relationship between the composition of linear functions and the product of two matrices. To see this, let r = dim(U), s = dim(V) and t = dim(W) be the (finite) dimensions of vector spaces U, V and W. Let be a basis of U, be a basis of V and be a basis of W. In terms of this basis, let be the matrix representing f : U → V and be the matrix representing g : V → W. Then is the matrix representing . In other words: the matrix product is the description in coordinates of the composition of linear functions. Tensor product of vector spaces Given two finite dimensional vector spaces V and W, the tensor product of them can be defined as a (2,0)-tensor satisfying: where V* and W* denote the dual spaces of V and W. For infinite-dimensional vector spaces, one also has the: Tensor product of Hilbert spaces Topological tensor product. The tensor product, outer product and Kronecker product all convey the same general idea. The differences between these are that the Kronecker product is just a tensor product of matrices, with respect to a previously-fixed basis, whereas the tensor product is usually given in its intrinsic definition. The outer product is simply the Kronecker product, limited to vectors (instead of matrices). The class of all objects with a tensor product In general, whenever one has two mathematical objects that can be combined in a way that behaves like a linear algebra tensor product, then this can be most generally understood as the internal product of a monoidal category. That is, the monoidal category captures precisely the meaning of a tensor product; it captures exactly the notion of why it is that tensor products behave the way they do. More precisely, a monoidal category is the class of all things (of a given type) that have a tensor product. Other products in linear algebra Other kinds of products in linear algebra include: Hadamard product Kronecker product The product of tensors: Wedge product or exterior product Interior product Outer product Tensor product Cartesian product In set theory, a Cartesian product is a mathematical operation which returns a set (or product set) from multiple sets. 
That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b), where a ∈ A and b ∈ B. The class of all things (of a given type) that have Cartesian products is called a Cartesian category. Many of these are Cartesian closed categories. Sets are an example of such objects. Empty product The empty product on numbers and most algebraic structures has the value of 1 (the identity element of multiplication), just like the empty sum has the value of 0 (the identity element of addition). However, the concept of the empty product is more general, and requires special treatment in logic, set theory, computer programming and category theory. Products over other algebraic structures Products over other kinds of algebraic structures include: the Cartesian product of sets the direct product of groups, and also the semidirect product, knit product and wreath product the free product of groups the product of rings the product of ideals the product of topological spaces the Wick product of random variables the cap, cup, Massey and slant product in algebraic topology the smash product and wedge sum (sometimes called the wedge product) in homotopy A few of the above products are examples of the general notion of an internal product in a monoidal category; the rest are describable by the general notion of a product in category theory. Products in category theory All of the previous examples are special cases or examples of the general notion of a product. For the general treatment of the concept of a product, see product (category theory), which describes how to combine two objects of some kind to create an object, possibly of a different kind. But also, in category theory, one has: the fiber product or pullback; the product category, a category that is the product of categories; the ultraproduct, in model theory; and the internal product of a monoidal category, which captures the essence of a tensor product. Other products A function's product integral (as a continuous equivalent to the product of a sequence, or as the multiplicative version of the normal/standard/additive integral). The product integral is also known as the "continuous product" or "multiplical". Complex multiplication, a theory of elliptic curves.
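Two of the products mentioned in this article are easy to verify computationally: the short sketch below checks that the size of a Cartesian product of finite sets is the product of the sizes of the factors, and that the empty product evaluates to 1, using Python's standard itertools.product and math.prod.

```python
# Cartesian product of finite sets and the empty product, checked numerically.
from itertools import product
from math import prod

A = {1, 2, 3}
B = {"x", "y"}

cart = set(product(A, B))            # all ordered pairs (a, b) with a in A, b in B
print(len(cart) == len(A) * len(B))  # True: |A x B| = |A| * |B|

print(prod([]))                      # 1, the empty product
print(prod([3, 7]))                  # 21, the product of 3 and 7
```

Note that math.prod requires Python 3.8 or later.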
Mathematics
Basics
null
24981
https://en.wikipedia.org/wiki/Pioneer%2011
Pioneer 11
Pioneer 11 (also known as Pioneer G) is a NASA robotic space probe launched on April 5, 1973, to study the asteroid belt, the environment around Jupiter and Saturn, the solar wind, and cosmic rays. It was the first probe to encounter Saturn, the second to fly through the asteroid belt, and the second to fly by Jupiter. Later, Pioneer 11 became the second of five artificial objects to achieve an escape velocity allowing it to leave the Solar System. Due to power constraints and the vast distance to the probe, the last routine contact with the spacecraft was on September 30, 1995, and the last good engineering data was received on November 24, 1995. Mission background History Approved in February 1969, Pioneer 11 and its twin probe, Pioneer 10, were the first to be designed for exploring the outer Solar System. Yielding to multiple proposals throughout the 1960s, early mission objectives were defined as: Explore the interplanetary medium beyond the orbit of Mars Investigate the nature of the asteroid belt from the scientific standpoint and assess the belt's possible hazard to missions to the outer planets. Explore the environment of Jupiter. Subsequent planning for an encounter with Saturn added many more goals: Map the magnetic field of Saturn and determine its intensity, direction, and structure. Determine how many electrons and protons of various energies are distributed along the trajectory of the spacecraft through the Saturn system. Map the interaction of the Saturn system with the solar wind. Measure the temperature of Saturn's atmosphere and that of Titan, the largest satellite of Saturn. Determine the structure of the upper atmosphere of Saturn where molecules are expected to be electrically charged and form an ionosphere. Map the thermal structure of Saturn's atmosphere by infrared observations coupled with radio occultation data. Obtain spin-scan images of the Saturnian system in two colors during the encounter sequence and polarimetry measurements of the planet. Probe the ring system and the atmosphere of Saturn with S-band radio occultation. Determine more precisely the masses of Saturn and its larger satellites by accurate observations of the effects of their gravitational fields on the motion of the spacecraft. As a precursor to the Mariner Jupiter/Saturn mission, verify the environment of the ring plane to find out where it may be safely crossed by the Mariner spacecraft without serious damage. Pioneer 11 was built by TRW and managed as part of the Pioneer program by NASA Ames Research Center. A backup unit, Pioneer H, is currently on display in the "Milestones of Flight" exhibit at the National Air and Space Museum in Washington, D.C. Many elements of the mission proved to be critical in the planning of the Voyager program. Spacecraft design The Pioneer 11 bus measures deep and with six panels forming the hexagonal structure. The bus houses propellant to control the orientation of the probe and eight of the twelve scientific instruments. The spacecraft has a mass of 259 kilograms. Attitude control and propulsion Orientation of the spacecraft was maintained with six 4.5-N, hydrazine monopropellant thrusters: pair one maintains a constant spin-rate of 4.8 rpm, pair two controls the forward thrust, pair three controls attitude. Information for the orientation is provided by performing conical scanning maneuvers to track Earth in its orbit, a star sensor able to reference Canopus, and two Sun sensors. 
Communications The space probe includes a redundant system of transceivers, one attached to the high-gain antenna, the other to an omni-antenna and a medium-gain antenna. Each transceiver is 8 watts and transmits data across the S-band, using 2110 MHz for the uplink from Earth and 2292 MHz for the downlink to Earth, with the Deep Space Network tracking the signal. Prior to transmitting data, the probe uses a convolutional encoder to allow correction of errors in the received data on Earth. Power Pioneer 11 uses four SNAP-19 radioisotope thermoelectric generators (RTGs) (see diagram). They are positioned on two three-rod trusses, each in length and 120 degrees apart. This was expected to be a safe distance from the sensitive scientific experiments carried on board. Combined, the RTGs provided 155 watts at launch, decaying to 140 W in transit to Jupiter. The spacecraft requires 100 W to power all systems. Computer Much of the computation for the mission was performed on Earth and transmitted to the probe, which is able to retain in memory up to five of the 222 possible commands entered by ground controllers. The spacecraft includes two command decoders and a command distribution unit, a very limited form of processor, to direct operations on the spacecraft. This system requires that mission operators prepare commands long in advance of transmitting them to the probe. A data storage unit is included to record up to 6,144 bytes of information gathered by the instruments. The digital telemetry unit is then used to prepare the collected data in one of the thirteen possible formats before transmitting it back to Earth. Scientific instruments Pioneer 11 carries one more instrument than Pioneer 10: a flux-gate magnetometer. Mission profile Launch and trajectory The Pioneer 11 probe was launched on April 6, 1973, at 02:11:00 UTC, by the National Aeronautics and Space Administration from Space Launch Complex 36A at Cape Canaveral, Florida, aboard an Atlas-Centaur launch vehicle with a Star-37E propulsion module. Its twin probe, Pioneer 10, had been launched on March 3, 1972. Pioneer 11 was launched on a trajectory directly aimed at Jupiter without any prior gravitational assists. In May 1974, Pioneer was retargeted to fly past Jupiter on a north–south trajectory, enabling a Saturn flyby in 1979. The maneuver used of propellant, lasted 42 minutes and 36 seconds, and increased Pioneer 11's speed by 230 km/h. It also made two mid-course corrections, on April 11, 1973, and November 7, 1974. Encounter with Jupiter Pioneer 11 flew past Jupiter in November and December 1974. During its closest approach, on December 2, it passed above the cloud tops. The probe obtained detailed images of the Great Red Spot, transmitted the first images of the immense polar regions, and determined the mass of Jupiter's moon Callisto. The gravitational pull of Jupiter was used as a gravity assist to alter the trajectory of the probe towards Saturn and gain velocity. On April 16, 1975, following the Jupiter encounter, the micrometeoroid detector was turned off. Encounter with Saturn Pioneer 11 passed by Saturn on September 1, 1979, at a distance of from Saturn's cloud tops. By this time, Voyager 1 and Voyager 2 had already passed Jupiter and were en route to Saturn, so it was decided that Pioneer 11 would pass through the Saturn ring plane at the same position Voyager 2 would later have to fly through in order to reach Uranus and Neptune.
If there were faint ring particles capable of damaging a probe in that area, mission planners felt it was better to learn about it via Pioneer. Thus, Pioneer 11 was acting as a "pioneer" in a true sense of the word; if danger were detected, then Voyager 2 could be redirected further away from the rings but miss the opportunity to visit the ice giants in the process. Pioneer 11 imaged—and nearly collided with—one of Saturn's small moons, passing at a distance of no more than . The object was tentatively identified as Epimetheus, a moon discovered the previous day from Pioneers imaging, and suspected from earlier observations by Earth-based telescopes. After the Voyager flybys, it became known that there are two similarly sized moons (Epimetheus and Janus) in the same orbit, so there is some uncertainty about which one was the object of Pioneer's near-miss. Pioneer 11 encountered Janus on September 1, 1979, at 14:52 UTC, at a distance of . At 16:20 UTC the same day, Pioneer 11 encountered Mimas at a distance of . Besides Epimetheus, instruments located another previously undiscovered small moon and an additional ring, charted Saturn's magnetosphere and magnetic field, and found its planet-sized moon, Titan, to be too cold for life. Hurtling underneath the ring plane, the probe sent back pictures of Saturn's rings. The rings, which normally seem bright when observed from Earth, appeared dark in the Pioneer pictures, and the dark gaps in the rings seen from Earth appeared as bright rings. Interstellar mission On February 25, 1990, Pioneer 11 became the fourth human-made object to pass beyond the orbit of the planets. By 1995, Pioneer 11 could no longer power any of its detectors, so the decision was made to shut it down. On September 29, 1995, NASA's Ames Research Center, responsible for managing the project, issued a press release that began, "After nearly 22 years of exploration out to the farthest reaches of the Solar System, one of the most durable and productive space missions in history will come to a close." It indicated NASA would use its Deep Space Network antennas to listen "once or twice a month" for the spacecraft's signal, until "some time in late 1996" when "its transmitter will fall silent altogether." NASA Administrator Daniel Goldin characterized Pioneer 11 as "the little spacecraft that could, a venerable explorer that has taught us a great deal about the Solar System and, in the end, about our own innate drive to learn. Pioneer 11 is what NASA is all about – exploration beyond the frontier." Besides announcing the end of operations, the dispatch provided a historical list of Pioneer 11 mission achievements. NASA terminated routine contact with the spacecraft on September 30, 1995, but continued to make contact for about two hours every two to four weeks. Scientists received a few minutes of good engineering data on November 24, 1995, but then lost final contact once Earth moved out of view of the spacecraft's antenna. Timeline Current status Due to power constraints and the vast distance to the probe, the last routine contact with the spacecraft was on September 30, 1995, and the last good engineering data was received on November 24, 1995. As of June 24, 2024, Pioneer 11 is estimated to be from the Earth and from the Sun. It was traveling at relative to the Sun and traveling outward at about 2.35 AU per year. The spacecraft is heading in the direction of the constellation Scutum near the current position (June 2024) RA 18h 54m dec -8° 46' (J2000.0), close to Messier 26. 
In 928,000 years, it will pass within of the K dwarf TYC 992-192-1 and will pass near the star Lambda Aquilae in about four million years. Pioneer 11 has been overtaken by the two Voyager probes launched in 1977. Voyager 1 has become the most distant object built by humans and will remain so for the foreseeable future, as no probe launched since Voyager has the speed to overtake it. Pioneer anomaly Analysis of the radio tracking data from the Pioneer 10 and 11 spacecraft at distances between 20 and 70 AU from the Sun had consistently indicated the presence of a small but anomalous Doppler frequency drift. The drift can be interpreted as due to a constant acceleration of directed towards the Sun. Although it was suspected that there was a systematic origin to the effect, none was found. As a result, there has been sustained interest in the nature of this so-called "Pioneer anomaly". Extended analysis of mission data by Slava Turyshev and colleagues determined the source of the anomaly to be asymmetric thermal radiation and the resulting thermal recoil force acting on the face of the Pioneers away from the Sun. Pioneer plaque Pioneer 10 and 11 both carry a gold-anodized aluminum plaque in the event that either spacecraft is ever found by intelligent lifeforms from other planetary systems. The plaques feature the nude figures of a human male and female along with several symbols that are designed to provide information about the origin of the spacecraft. Commemoration In 1991, Pioneer 11 was honored on one of 10 United States Postal Service stamps commemorating uncrewed spacecraft exploring each of the then nine planets and the Moon. Pioneer 11 was the spacecraft featured with Jupiter. Pluto was listed as "Not yet explored".
Technology
Unmanned spacecraft
null
24989
https://en.wikipedia.org/wiki/Pendulum%20clock
Pendulum clock
A pendulum clock is a clock that uses a pendulum, a swinging weight, as its timekeeping element. The advantage of a pendulum for timekeeping is that it is an approximate harmonic oscillator: it swings back and forth in a precise time interval dependent on its length, and resists swinging at other rates. From its invention in 1656 by Christiaan Huygens, inspired by Galileo Galilei, until the 1930s, the pendulum clock was the world's most precise timekeeper, accounting for its widespread use. Throughout the 18th and 19th centuries, pendulum clocks in homes, factories, offices, and railroad stations served as primary time standards for scheduling daily life, work shifts, and public transportation. Their greater accuracy allowed for the faster pace of life which was necessary for the Industrial Revolution. The home pendulum clock was replaced by less-expensive synchronous electric clocks in the 1930s and 1940s. Pendulum clocks are now kept mostly for their decorative and antique value. Pendulum clocks must be stationary to operate. Any motion or acceleration will affect the motion of the pendulum, causing inaccuracies, so other mechanisms must be used in portable timepieces. History The pendulum clock was invented on 25 December 1656 by Dutch scientist and inventor Christiaan Huygens, and patented the following year. He described it in his manuscript Horologium, published in 1658. Huygens contracted the construction of his clock designs to clockmaker Salomon Coster, who actually built the clock. Huygens was inspired by investigations of pendulums by Galileo Galilei beginning around 1602. Galileo discovered the key property that makes pendulums useful timekeepers: they are isochronous, which means that the period of swing of a pendulum is approximately the same for different-sized swings. Galileo in 1637 described to his son a mechanism which could keep a pendulum swinging, which has been called the first pendulum clock design (picture at top). It was partly constructed by his son in 1649, but neither lived to finish it. The introduction of the pendulum, the first harmonic oscillator used in timekeeping, increased the accuracy of clocks enormously, from about 15 minutes per day to 15 seconds per day, leading to their rapid spread as existing 'verge and foliot' clocks were retrofitted with pendulums. By 1659 pendulum clocks were being manufactured in France by clockmaker Nicolaus Hanet, and in England by Ahasuerus Fromanteel. These early clocks, due to their verge escapements, had wide pendulum swings of 80–100°. In his 1673 analysis of pendulums, Horologium Oscillatorium, Huygens showed that wide swings made the pendulum inaccurate, causing its period, and thus the rate of the clock, to vary with unavoidable variations in the driving force provided by the movement. Clockmakers' realization that only pendulums with small swings of a few degrees are isochronous motivated the invention of the anchor escapement by Robert Hooke around 1658, which reduced the pendulum's swing to 4–6°. The anchor became the standard escapement used in pendulum clocks. In addition to increased accuracy, the anchor's narrow pendulum swing allowed the clock's case to accommodate longer, slower pendulums, which needed less power and caused less wear on the movement. The seconds pendulum (also called the Royal pendulum), long, in which the time period is two seconds, became widely used in quality clocks.
The long narrow clocks built around these pendulums, first made by William Clement around 1680, who also claimed invention of the anchor escapement, became known as grandfather clocks. The increased accuracy resulting from these developments caused the minute hand, previously rare, to be added to clock faces beginning around 1690. The 18th and 19th century wave of horological innovation that followed the invention of the pendulum brought many improvements to pendulum clocks. The deadbeat escapement invented in 1675 by Richard Towneley and popularized by George Graham around 1715 in his precision "regulator" clocks gradually replaced the anchor escapement and is now used in most modern pendulum clocks. Observation that pendulum clocks slowed down in summer brought the realization that thermal expansion and contraction of the pendulum rod with changes in temperature was a source of error. This was solved by the invention of temperature-compensated pendulums; the mercury pendulum by Graham in 1721 and the gridiron pendulum by John Harrison in 1726. With these improvements, by the mid-18th century precision pendulum clocks achieved accuracies of a few seconds per week. Until the 19th century, clocks were handmade by individual craftsmen and were very expensive. The rich ornamentation of pendulum clocks of this period indicates their value as status symbols of the wealthy. The clockmakers of each country and region in Europe developed their own distinctive styles. By the 19th century, factory production of clock parts gradually made pendulum clocks affordable by middle-class families. During the Industrial Revolution, the faster pace of life and scheduling of shifts and public transportation like trains depended on the more accurate timekeeping made possible by the pendulum. Daily life was organized around the home pendulum clock. More accurate pendulum clocks, called regulators, were installed in places of business and railroad stations and used to schedule work and set other clocks. The need for extremely accurate timekeeping in celestial navigation to determine longitude on ships during long sea voyages drove the development of the most accurate pendulum clocks, called astronomical regulators. These precision instruments, installed in clock vaults in naval observatories and kept accurate within a fraction of a second by observation of star transits overhead, were used to set marine chronometers on naval and commercial vessels. Beginning in the 19th century, astronomical regulators in naval observatories served as primary standards for national time distribution services that distributed time signals over telegraph wires. From 1909, US National Bureau of Standards (now NIST) based the US time standard on Riefler pendulum clocks, accurate to about 10 milliseconds per day. In 1929 it switched to the Shortt-Synchronome free pendulum clock before phasing in quartz standards in the 1930s. With an error of less than one second per year, the Shortt was the most accurate commercially produced pendulum clock. Pendulum clocks remained the world standard for accurate timekeeping for 270 years, until the invention of the quartz clock in 1927, and were used as time standards through World War II. The French Time Service included pendulum clocks in their ensemble of standard clocks until 1954. 
The home pendulum clock began to be replaced as domestic timekeeper during the 1930s and 1940s by the synchronous electric clock, which kept more accurate time because it was synchronized to the oscillation of the electric power grid. The most accurate experimental pendulum clock ever made may be the Littlemore Clock built by Edward T. Hall in the 1990s (donated in 2003 to the National Watch and Clock Museum, Columbia, Pennsylvania, USA). The largest pendulum clocks, exceeding , were built in Geneva (1972) and Gdańsk (2016). Mechanism The mechanism which runs a mechanical clock is called the movement. The movements of all mechanical pendulum clocks have these five parts: A power source; either a weight on a cord or chain that turns a pulley or sprocket, or a mainspring. A gear train (wheel train) that steps up the speed of the power so that the pendulum can use it. The gear ratios of the gear train also divide the rotation rate down to give wheels that rotate once every hour and once every 12 or 24 hours, to turn the hands of the clock. An escapement that gives the pendulum precisely timed impulses to keep it swinging, and which releases the gear train wheels to move forward a fixed amount at each swing. This is the source of the "ticking" sound of an operating pendulum clock. The pendulum, a weight on a rod, which is the timekeeping element of the clock. An indicator or dial that records how often the escapement has rotated and therefore how much time has passed, usually a traditional clock face with rotating hands. Additional functions in clocks besides basic timekeeping are called complications. More elaborate pendulum clocks may include these complications: Striking train: strikes a bell or gong on every hour, with the number of strikes equal to the number of the hour. Some clocks will also signal the half hour with a single strike. More elaborate types, technically called chiming clocks, strike on the quarter hours, and may play melodies or Cathedral chimes, usually Westminster quarters. Calendar dials: show the day, date, and sometimes month. Moon phase dial: shows the phase of the moon, usually with a painted picture of the moon on a rotating disk. These were useful historically for people planning nighttime journeys. Equation of time dial: this rare complication was used in early days to set the clock by the passage of the sun overhead at noon. It displays the difference between the time indicated by the clock and the time indicated by the position of the sun, which varies by as much as ±16 minutes during the year. Repeater attachment: repeats the hour chimes when triggered by hand. This rare complication was used before artificial lighting to check what time it was at night. In electromechanical pendulum clocks such as used in mechanical Master clocks the power source is replaced by an electrically powered solenoid that provides the impulses to the pendulum by magnetic force, and the escapement is replaced by a switch or photodetector that senses when the pendulum is in the right position to receive the impulse. These should not be confused with more recent quartz pendulum clocks in which an electronic quartz clock module swings a pendulum. These are not true pendulum clocks because the timekeeping is controlled by a quartz crystal in the module, and the swinging pendulum is merely a decorative simulation. Gravity-swing pendulum The pendulum in most clocks (see diagram) consists of a wood or metal rod (a) with a metal weight called the bob (b) on the end. 
The bob is traditionally lens-shaped to reduce air drag. Wooden rods were often used in quality clocks because wood has a lower coefficient of thermal expansion than metal. The rod is usually suspended from the clock frame with a short straight spring of metal ribbon (d); this avoids the instabilities that would be introduced by a conventional pivot. In the most accurate regulator clocks the pendulum is suspended by metal knife edges resting on flat agate (a hard mineral that will retain a highly polished surface). The pendulum is driven by an arm hanging behind it attached to the anchor piece (h) of the escapement, called the "crutch" (e), ending in a "fork" (f) which embraces the pendulum rod. Each swing of the pendulum releases the escape wheel, and a tooth of the wheel presses against one of the pallets, exerting a brief push through the crutch and fork on the pendulum rod to keep it swinging. Most quality clocks, including all grandfather clocks, have a "seconds pendulum", in which each swing of the pendulum takes one second (a complete cycle takes two seconds), which is approximately long from pivot to center of bob. Mantel clocks often have a half-second pendulum, which is approximately long. Only a few tower clocks use longer pendulums, the 1.5 second pendulum, long, or occasionally the two-second pendulum, which is used in the Great Clock of Westminster, which houses Big Ben. The pendulum swings with a period that varies with the square root of its effective length. For small swings the period T, the time for one complete cycle (two swings), is T = 2π√(L/g), where L is the length of the pendulum and g is the local acceleration of gravity. All pendulum clocks have a means of adjusting the rate. This is usually an adjustment nut (c) under the pendulum bob which moves the bob up or down on its rod. Moving the bob up reduces the length of the pendulum, reducing its period, so the clock gains time. In some pendulum clocks, fine adjustment is done with an auxiliary adjustment, which may be a small weight that is moved up or down the pendulum rod. In some master clocks and tower clocks, adjustment is accomplished by a small tray mounted on the rod where small weights are placed or removed to change the effective length, so the rate can be adjusted without stopping the clock. The period of a pendulum increases slightly with the width (amplitude) of its swing. The rate of error increases with amplitude, so when limited to small swings of a few degrees the pendulum is nearly isochronous; its period is nearly independent of changes in amplitude. Therefore, the swing of the pendulum in clocks is limited to 2° to 4°. Small swing angles tend toward isochronous behavior because of the mathematical fact that the small-angle approximation sin θ ≈ θ becomes valid as the angle approaches zero. With that substitution made, the pendulum equation becomes the equation of a harmonic oscillator, which has a fixed period in all cases. As the swing angle becomes larger, the approximation gradually fails and the period is no longer fixed. Temperature compensation A major source of error in pendulum clocks is thermal expansion; the pendulum rod changes in length slightly with changes in temperature, causing changes in the rate of the clock. An increase in temperature causes the rod to expand, making the pendulum longer, so its period increases and the clock loses time. Many older quality clocks used wooden pendulum rods to reduce this error, as wood expands less than metal.
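A small numerical check of these relations: the sketch below evaluates T = 2π√(L/g) for a seconds pendulum and shows how a temperature-induced length change of a plain metal rod translates into a daily timekeeping error. The rod material and temperature swing are illustrative assumptions.

```python
# Pendulum period T = 2*pi*sqrt(L/g) and the daily error caused by
# thermal expansion of the rod. Material values are illustrative assumptions.
import math

G = 9.81                      # m/s^2, local gravitational acceleration

def period(length_m, g=G):
    return 2.0 * math.pi * math.sqrt(length_m / g)

L = 0.994                     # m, approximate length of a seconds pendulum
alpha_steel = 12e-6           # 1/K, linear expansion coefficient of steel (assumed)
dT = 5.0                      # K, assumed temperature rise

T0 = period(L)
T1 = period(L * (1 + alpha_steel * dT))

seconds_per_day = 86400.0
error = seconds_per_day * (T1 - T0) / T0      # clock loses roughly this many s/day
print(f"period at reference temperature: {T0:.4f} s")
print(f"daily loss for a {dT:.0f} K rise:  {error:.2f} s/day")
```

An uncompensated steel rod drifting by a couple of seconds per day over a modest temperature swing is exactly the error that the compensated pendulums described next were designed to cancel.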
The first pendulum to correct for this error was the mercury pendulum, invented by Graham in 1721, which was used in precision regulator clocks into the 20th century. These had a bob consisting of a container of the liquid metal mercury. An increase in temperature would cause the pendulum rod to expand, but the mercury in the container would also expand and its level would rise slightly, moving the center of gravity of the pendulum up toward the pivot. By using the correct amount of mercury, the center of gravity of the pendulum remained at a constant height, and thus its period remained constant, despite changes in temperature. The most widely used temperature-compensated pendulum was the gridiron pendulum, invented by John Harrison around 1726. This consisted of a "grid" of parallel rods of a high-thermal-expansion metal such as zinc or brass and a low-thermal-expansion metal such as steel. If properly combined, the length change of the high-expansion rods compensated for the length change of the low-expansion rods, again achieving a constant period of the pendulum with temperature changes. This type of pendulum became so associated with quality that decorative "fake" gridirons, which have no actual temperature-compensation function, are often seen on pendulum clocks. Beginning around 1900, some of the highest-precision scientific clocks had pendulums made of ultra-low-expansion materials such as the nickel-steel alloy Invar or fused silica, which required very little compensation for the effects of temperature. Atmospheric drag The viscosity of the air through which the pendulum swings varies with atmospheric pressure, humidity, and temperature. This drag also requires power that could otherwise be applied to extending the time between windings. Traditionally the pendulum bob is made with a narrow streamlined lens shape to reduce air drag, which is where most of the driving power goes in a quality clock. In the late 19th century and early 20th century, pendulums for precision regulator clocks in astronomical observatories were often operated in a chamber that had been pumped to a low pressure, to reduce drag and make the pendulum's operation even more accurate by avoiding changes in atmospheric pressure. Fine adjustment of the rate of the clock could be made by slight changes to the internal pressure in the sealed housing. Leveling and "beat" To keep time accurately, pendulum clocks must be level. If they are not, the pendulum swings more to one side than the other, upsetting the symmetrical operation of the escapement. This condition can often be heard in the ticking sound of the clock. The ticks or "beats" should be at precisely equally spaced intervals, giving a sound of "tick...tock...tick...tock"; if they are not, and instead have the sound "tick-tock...tick-tock...", the clock is out of beat and needs to be leveled. This problem can easily cause the clock to stop working, and is one of the most common reasons for service calls. A spirit level or watch timing machine can achieve a higher accuracy than relying on the sound of the beat; precision regulators often have a built-in spirit level for the task. Older freestanding clocks often have feet with adjustable screws to level them; more recent ones have a leveling adjustment in the movement. Some modern pendulum clocks have 'auto-beat' or 'self-regulating beat adjustment' devices, and do not need this adjustment.
Local gravity Since the pendulum rate will increase with an increase in gravity, and local gravitational acceleration varies with latitude and elevation on Earth, the highest-precision pendulum clocks must be readjusted to keep time after a move. For example, a pendulum clock moved from sea level to will lose 16 seconds per day. With the most accurate pendulum clocks, even moving the clock to the top of a tall building would cause it to lose measurable time due to lower gravity. Local gravity also varies by about 0.5% with latitude between the equator and the poles, with gravity increasing at higher latitudes due to the oblate shape of the Earth. Thus precision regulator clocks used for celestial navigation in the early 20th century had to be recalibrated when moved to a different latitude. Torsion pendulum Also called a torsion-spring pendulum, this is a wheel-like mass (most often four spheres on cross spokes) suspended from a vertical strip (ribbon) of spring steel, used as the regulating mechanism in torsion pendulum clocks. Rotation of the mass winds and unwinds the suspension spring, with the energy impulse applied to the top of the spring. The main advantage of this type of pendulum is its low energy use; with a period of 12–15 seconds, compared to the gravity-swing pendulum's period of 0.5–2 s, it is possible to make clocks that need to be wound only every 30 days, or even only once a year or more. Since the restoring force is provided by the elasticity of the spring, which varies with temperature, it is more affected by temperature changes than a gravity-swing pendulum. The most accurate torsion clocks use a spring of Elinvar, which has a low temperature coefficient of elasticity. A torsion pendulum clock requiring only annual winding is sometimes called a "400-day clock" or "anniversary clock", sometimes given as a wedding gift. Torsion pendulums are also used in "perpetual" clocks which do not need winding, as their mainspring is kept wound by changes in atmospheric temperature and pressure with a bellows arrangement. The Atmos clock, one example, uses a torsion pendulum with a long oscillation period of 60 seconds. Escapement The escapement is a mechanical linkage that converts the force from the clock's wheel train into impulses that keep the pendulum swinging back and forth. It is the part that makes the "ticking" sound in a working pendulum clock. Most escapements consist of a wheel with pointed teeth, called the escape wheel, which is turned by the clock's wheel train, and surfaces the teeth push against, called pallets. During most of the pendulum's swing the wheel is prevented from turning because a tooth is resting against one of the pallets; this is called the "locked" state. With each swing of the pendulum, a pallet releases a tooth of the escape wheel. The wheel rotates forward a fixed amount until a tooth catches on the other pallet. These releases allow the clock's wheel train to advance a fixed amount with each swing, moving the hands forward at a constant rate, controlled by the pendulum. Although the escapement is necessary, its force disturbs the natural motion of the pendulum, and in precision pendulum clocks this was often the limiting factor on the accuracy of the clock. Different escapements have been used in pendulum clocks over the years to try to solve this problem. In the 18th and 19th centuries, escapement design was at the forefront of timekeeping advances.
The anchor escapement (see animation) was the standard escapement used until the 1800s, when an improved version, the deadbeat escapement, took over in precision clocks. It is used in almost all pendulum clocks today. The remontoire, a small spring mechanism rewound at intervals which serves to isolate the escapement from the varying force of the wheel train, was used in a few precision clocks. In tower clocks the wheel train must turn the large hands on the clock face on the outside of the building, and the weight of these hands, varying with snow and ice buildup, puts a varying load on the wheel train. Gravity escapements were used in tower clocks. By the end of the 19th century specialized escapements were used in the most accurate clocks, called astronomical regulators, which were employed in naval observatories and for scientific research. The Riefler escapement, used in Clemens-Riefler regulator clocks, was accurate to 10 milliseconds per day. Electromagnetic escapements, which used a switch or phototube to turn on a solenoid electromagnet to give the pendulum an impulse without requiring a mechanical linkage, were developed. The most accurate pendulum clock was the Shortt-Synchronome clock, a complicated electromechanical clock with two pendulums developed in 1923 by W.H. Shortt and Frank Hope-Jones, which was accurate to better than one second per year. A slave pendulum in a separate clock was linked by an electric circuit and electromagnets to a master pendulum in a vacuum tank. The slave pendulum performed the timekeeping functions, leaving the master pendulum to swing virtually undisturbed by outside influences. In the 1920s the Shortt-Synchronome briefly became the highest standard for timekeeping in observatories before quartz clocks superseded pendulum clocks as precision time standards. Time indication The indicating system is almost always the traditional dial with moving hour and minute hands. Many clocks have a small third hand indicating seconds on a subsidiary dial. Pendulum clocks are usually designed to be set by opening the glass face cover and manually pushing the minute hand around the dial to the correct time. The minute hand is mounted on a slipping friction sleeve which allows it to be turned on its arbor. The hour hand is driven not from the wheel train but from the minute hand's shaft through a small set of gears, so rotating the minute hand manually also sets the hour hand. Maintenance and repair Pendulum clocks are long-lived and do not require much maintenance, which is one reason for their popularity. As in any mechanism with moving parts, regular cleaning and lubrication are required. Specific low-viscosity lubricants have been developed for clocks, one of the most widely used being a polyalcanoate synthetic oil. Springs and pins may wear out and break and need replacing. Styles Pendulum clocks were more than simply utilitarian timekeepers; due to their high cost they were status symbols that expressed the wealth and culture of their owners. They evolved in a number of traditional styles, specific to different countries and times as well as their intended use. Case styles somewhat reflect the furniture styles popular during the period. Experts can often pinpoint when an antique clock was made within a few decades by subtle differences in their cases and faces.
These are some of the different styles of pendulum clocks: Act of Parliament clock Anniversary clock (uses a torsion pendulum) Banjo clock Bracket clock Cartel clock Comtoise or Morbier clock Crystal regulator Cuckoo clock Grandfather clock Lantern clock Mantel clock Master clock Ogee clock Pillar clock Schoolhouse regulator Torsion pendulum clock Turret clock Vienna regulator Zaandam clock
Technology
Clocks
null
25005
https://en.wikipedia.org/wiki/Peano%20axioms
Peano axioms
In mathematical logic, the Peano axioms (, ), also known as the Dedekind–Peano axioms or the Peano postulates, are axioms for the natural numbers presented by the 19th-century Italian mathematician Giuseppe Peano. These axioms have been used nearly unchanged in a number of metamathematical investigations, including research into fundamental questions of whether number theory is consistent and complete. The axiomatization of arithmetic provided by Peano axioms is commonly called Peano arithmetic. The importance of formalizing arithmetic was not well appreciated until the work of Hermann Grassmann, who showed in the 1860s that many facts in arithmetic could be derived from more basic facts about the successor operation and induction. In 1881, Charles Sanders Peirce provided an axiomatization of natural-number arithmetic. In 1888, Richard Dedekind proposed another axiomatization of natural-number arithmetic, and in 1889, Peano published a simplified version of them as a collection of axioms in his book The principles of arithmetic presented by a new method (). The nine Peano axioms contain three types of statements. The first axiom asserts the existence of at least one member of the set of natural numbers. The next four are general statements about equality; in modern treatments these are often not taken as part of the Peano axioms, but rather as axioms of the "underlying logic". The next three axioms are first-order statements about natural numbers expressing the fundamental properties of the successor operation. The ninth, final, axiom is a second-order statement of the principle of mathematical induction over the natural numbers, which makes this formulation close to second-order arithmetic. A weaker first-order system is obtained by explicitly adding the addition and multiplication operation symbols and replacing the second-order induction axiom with a first-order axiom schema. The term Peano arithmetic is sometimes used for specifically naming this restricted system. Historical second-order formulation When Peano formulated his axioms, the language of mathematical logic was in its infancy. The system of logical notation he created to present the axioms did not prove to be popular, although it was the genesis of the modern notation for set membership (∈, which comes from Peano's ε). Peano maintained a clear distinction between mathematical and logical symbols, which was not yet common in mathematics; such a separation had first been introduced in the Begriffsschrift by Gottlob Frege, published in 1879. Peano was unaware of Frege's work and independently recreated his logical apparatus based on the work of Boole and Schröder. The Peano axioms define the arithmetical properties of natural numbers, usually represented as a set N or The non-logical symbols for the axioms consist of a constant symbol 0 and a unary function symbol S. The first axiom states that the constant 0 is a natural number: Peano's original formulation of the axioms used 1 instead of 0 as the "first" natural number, while the axioms in Formulario mathematico include zero. The next four axioms describe the equality relation. Since they are logically valid in first-order logic with equality, they are not considered to be part of "the Peano axioms" in modern treatments. The remaining axioms define the arithmetical properties of the natural numbers. The naturals are assumed to be closed under a single-valued "successor" function S. 
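As an informal aside (a programming sketch, not part of Peano's formalism), the way 0 and S generate the naturals, and the way addition and multiplication are defined from them by recursion later in this article, can be mirrored directly in code. The representation of numerals as nested tuples below is purely illustrative.

```python
# Informal sketch: natural numbers as iterated applications of a successor
# function, with addition and multiplication defined by recursion.
# Naturals are represented as nested tuples: ZERO = (), S(n) = (n,).

ZERO = ()

def succ(n):
    return (n,)

def to_int(n):
    """Convert a successor numeral to an ordinary Python int."""
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

def add(a, b):
    # a + 0 = a ;  a + S(b) = S(a + b)
    return a if b == ZERO else succ(add(a, b[0]))

def mul(a, b):
    # a * 0 = 0 ;  a * S(b) = (a * b) + a
    return ZERO if b == ZERO else add(mul(a, b[0]), a)

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))   # 5
print(to_int(mul(two, three)))   # 6
```

This is only an analogy: in the axioms themselves the corresponding functions are either constructed set-theoretically using the induction axiom or added as primitive symbols, as discussed below.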
Axioms 1, 6, 7, 8 define a unary representation of the intuitive notion of natural numbers: the number 1 can be defined as S(0), 2 as S(S(0)), etc. However, considering the notion of natural numbers as being defined by these axioms, axioms 1, 6, 7, 8 do not imply that the successor function generates all the natural numbers different from 0. The intuitive notion that each natural number can be obtained by applying successor sufficiently many times to zero requires an additional axiom, which is sometimes called the axiom of induction. The induction axiom is sometimes stated in the following form: if K is a set such that 0 is in K, and for every natural number n, n being in K implies that S(n) is in K, then K contains every natural number. In Peano's original formulation, the induction axiom is a second-order axiom. It is now common to replace this second-order principle with a weaker first-order induction scheme. There are important differences between the second-order and first-order formulations, as discussed in the section below. Defining arithmetic operations and relations If we use the second-order induction axiom, it is possible to define addition, multiplication, and total (linear) ordering on N directly using the axioms. However, addition and multiplication are often added as axioms. The respective functions and relations are constructed in set theory or second-order logic, and can be shown to be unique using the Peano axioms. Addition Addition is a function that maps two natural numbers (two elements of N) to another one. It is defined recursively as: a + 0 = a, and a + S(b) = S(a + b). For example: S(0) + S(0) = S(S(0) + 0) = S(S(0)), that is, 1 + 1 = 2. To prove commutativity of addition, first prove 0 + b = b and S(a) + b = S(a + b), each by induction on b. Using both results, then prove a + b = b + a by induction on b. The structure (N, +) is a commutative monoid with identity element 0. (N, +) is also a cancellative magma, and thus embeddable in a group. The smallest group embedding N is the integers. Multiplication Similarly, multiplication is a function mapping two natural numbers to another one. Given addition, it is defined recursively as: a · 0 = 0, and a · S(b) = a · b + a. It is easy to see that S(0) is the multiplicative right identity: a · S(0) = a · 0 + a = 0 + a = a. To show that S(0) is also the multiplicative left identity requires the induction axiom due to the way multiplication is defined: S(0) is the left identity of 0: S(0) · 0 = 0. If S(0) is the left identity of a (that is, S(0) · a = a), then S(0) is also the left identity of S(a): S(0) · S(a) = S(0) · a + S(0) = a + S(0) = S(0) + a = S(0 + a) = S(a), using commutativity of addition. Therefore, by the induction axiom S(0) is the multiplicative left identity of all natural numbers. Moreover, it can be shown that multiplication is commutative and distributes over addition: a · (b + c) = (a · b) + (a · c). Thus, (N, +, 0, ·, S(0)) is a commutative semiring. Inequalities The usual total order relation ≤ on natural numbers can be defined as follows, assuming 0 is a natural number: For all a, b ∈ N, a ≤ b if and only if there exists some c ∈ N such that a + c = b. This relation is stable under addition and multiplication: for a, b, c ∈ N, if a ≤ b, then: a + c ≤ b + c, and a · c ≤ b · c. Thus, the structure (N, +, ·, 1, 0, ≤) is an ordered semiring; because there is no natural number between 0 and 1, it is a discrete ordered semiring. The axiom of induction is sometimes stated in the following form that uses a stronger hypothesis, making use of the order relation "≤": For any predicate φ, if φ(0) is true, and for every n ∈ N, if φ(k) is true for every k ∈ N such that k ≤ n, then φ(S(n)) is true, then for every n ∈ N, φ(n) is true. This form of the induction axiom, called strong induction, is a consequence of the standard formulation, but is often better suited for reasoning about the ≤ order. For example, to show that the naturals are well-ordered—every nonempty subset of N has a least element—one can reason as follows. Let a nonempty X ⊆ N be given and assume X has no least element.
Because 0 is the least element of N, it must be that 0 ∉ X. For any n ∈ N, suppose for every k ≤ n, k ∉ X. Then S(n) ∉ X, for otherwise it would be the least element of X. Thus, by the strong induction principle, for every n ∈ N, n ∉ X. Thus, X = ∅, which contradicts X being a nonempty subset of N. Thus X has a least element. Models A model of the Peano axioms is a triple (N, 0, S), where N is a (necessarily infinite) set, 0 ∈ N, and S : N → N satisfies the axioms above. Dedekind proved in his 1888 book, The Nature and Meaning of Numbers (Was sind und was sollen die Zahlen?, i.e., "What are the numbers and what are they good for?") that any two models of the Peano axioms (including the second-order induction axiom) are isomorphic. In particular, given two models (NA, 0A, SA) and (NB, 0B, SB) of the Peano axioms, there is a unique homomorphism f : NA → NB satisfying f(0A) = 0B and f(SA(n)) = SB(f(n)), and it is a bijection. This means that the second-order Peano axioms are categorical. (This is not the case with any first-order reformulation of the Peano axioms, below.) Set-theoretic models The Peano axioms can be derived from set theoretic constructions of the natural numbers and axioms of set theory such as ZF. The standard construction of the naturals, due to John von Neumann, starts from a definition of 0 as the empty set, ∅, and an operator s on sets defined as: s(a) = a ∪ {a}. The set of natural numbers N is defined as the intersection of all sets closed under s that contain the empty set. Each natural number is equal (as a set) to the set of natural numbers less than it: 0 = ∅, 1 = {0}, 2 = {0, 1}, 3 = {0, 1, 2}, and so on. The set N together with 0 and the successor function satisfies the Peano axioms. Peano arithmetic is equiconsistent with several weak systems of set theory. One such system is ZFC with the axiom of infinity replaced by its negation. Another such system consists of general set theory (extensionality, existence of the empty set, and the axiom of adjunction), augmented by an axiom schema stating that a property that holds for the empty set and holds of an adjunction whenever it holds of the adjunct must hold for all sets. Interpretation in category theory The Peano axioms can also be understood using category theory. Let C be a category with terminal object 1C, and define the category of pointed unary systems, US1(C) as follows: The objects of US1(C) are triples (X, 0X, SX) where X is an object of C, and 0X : 1C → X and SX : X → X are C-morphisms. A morphism φ : (X, 0X, SX) → (Y, 0Y, SY) is a C-morphism φ : X → Y with φ ∘ 0X = 0Y and φ ∘ SX = SY ∘ φ. Then C is said to satisfy the Dedekind–Peano axioms if US1(C) has an initial object; this initial object is known as a natural number object in C. If (N, 0, S) is this initial object, and (X, 0X, SX) is any other object, then the unique map u : (N, 0, S) → (X, 0X, SX) is such that u(0) = 0X and u(S(n)) = SX(u(n)). This is precisely the recursive definition of 0X and SX. Consistency When the Peano axioms were first proposed, Bertrand Russell and others agreed that these axioms implicitly defined what we mean by a "natural number". Henri Poincaré was more cautious, saying they only defined natural numbers if they were consistent; if there is a proof that starts from just these axioms and derives a contradiction such as 0 = 1, then the axioms are inconsistent, and don't define anything. In 1900, David Hilbert posed the problem of proving their consistency using only finitistic methods as the second of his twenty-three problems. In 1931, Kurt Gödel proved his second incompleteness theorem, which shows that such a consistency proof cannot be formalized within Peano arithmetic itself, if Peano arithmetic is consistent. Although it is widely claimed that Gödel's theorem rules out the possibility of a finitistic consistency proof for Peano arithmetic, this depends on exactly what one means by a finitistic proof.
Gödel himself pointed out the possibility of giving a finitistic consistency proof of Peano arithmetic or stronger systems by using finitistic methods that are not formalizable in Peano arithmetic, and in 1958, Gödel published a method for proving the consistency of arithmetic using type theory. In 1936, Gerhard Gentzen gave a proof of the consistency of Peano's axioms, using transfinite induction up to an ordinal called ε0. Gentzen explained: "The aim of the present paper is to prove the consistency of elementary number theory or, rather, to reduce the question of consistency to certain fundamental principles". Gentzen's proof is arguably finitistic, since the transfinite ordinal ε0 can be encoded in terms of finite objects (for example, as a Turing machine describing a suitable order on the integers, or more abstractly as consisting of the finite trees, suitably linearly ordered). Whether or not Gentzen's proof meets the requirements Hilbert envisioned is unclear: there is no generally accepted definition of exactly what is meant by a finitistic proof, and Hilbert himself never gave a precise definition. The vast majority of contemporary mathematicians believe that Peano's axioms are consistent, relying either on intuition or the acceptance of a consistency proof such as Gentzen's proof. A small number of philosophers and mathematicians, some of whom also advocate ultrafinitism, reject Peano's axioms because accepting the axioms amounts to accepting the infinite collection of natural numbers. In particular, addition (including the successor function) and multiplication are assumed to be total. Curiously, there are self-verifying theories that are similar to PA but have subtraction and division instead of addition and multiplication, which are axiomatized in such a way to avoid proving sentences that correspond to the totality of addition and multiplication, but which are still able to prove all true theorems of PA, and yet can be extended to a consistent theory that proves its own consistency (stated as the non-existence of a Hilbert-style proof of "0=1"). Peano arithmetic as first-order theory All of the Peano axioms except the ninth axiom (the induction axiom) are statements in first-order logic. The arithmetical operations of addition and multiplication and the order relation can also be defined using first-order axioms. The axiom of induction above is second-order, since it quantifies over predicates (equivalently, sets of natural numbers rather than natural numbers). As an alternative one can consider a first-order axiom schema of induction. Such a schema includes one axiom per predicate definable in the first-order language of Peano arithmetic, making it weaker than the second-order axiom. The reason that it is weaker is that the number of predicates in first-order language is countable, whereas the number of sets of natural numbers is uncountable. Thus, there exist sets that cannot be described in first-order language (in fact, most sets have this property). First-order axiomatizations of Peano arithmetic have another technical limitation. In second-order logic, it is possible to define the addition and multiplication operations from the successor operation, but this cannot be done in the more restrictive setting of first-order logic. Therefore, the addition and multiplication operations are directly included in the signature of Peano arithmetic, and axioms are included that relate the three operations to each other. 
The following list of axioms (along with the usual axioms of equality), which contains six of the seven axioms of Robinson arithmetic, is sufficient for this purpose: S(x) ≠ 0; S(x) = S(y) → x = y; x + 0 = x; x + S(y) = S(x + y); x · 0 = 0; x · S(y) = x · y + x. In addition to this list of numerical axioms, Peano arithmetic contains the induction schema, which consists of a recursively enumerable and even decidable set of axioms. For each formula φ(x, y1,...,yk) in the language of Peano arithmetic, the first-order induction axiom for φ is the sentence ∀ȳ ((φ(0, ȳ) ∧ ∀x (φ(x, ȳ) → φ(S(x), ȳ))) → ∀x φ(x, ȳ)), where ȳ is an abbreviation for y1,...,yk. The first-order induction schema includes every instance of the first-order induction axiom; that is, it includes the induction axiom for every formula φ. Equivalent axiomatizations The above axiomatization of Peano arithmetic uses a signature that only has symbols for zero as well as the successor, addition, and multiplication operations. There are many other different, but equivalent, axiomatizations. One such alternative uses an order relation symbol instead of the successor operation and the language of discretely ordered semirings (axioms 1-7 for semirings, 8-10 on order, 11-13 regarding compatibility, and 14-15 for discreteness): (x + y) + z = x + (y + z), i.e., addition is associative. x + y = y + x, i.e., addition is commutative. (x · y) · z = x · (y · z), i.e., multiplication is associative. x · y = y · x, i.e., multiplication is commutative. x · (y + z) = (x · y) + (x · z), i.e., multiplication distributes over addition. x + 0 = x and x · 0 = 0, i.e., zero is an identity for addition, and an absorbing element for multiplication (actually superfluous). x · 1 = x, i.e., one is an identity for multiplication. (x < y ∧ y < z) → x < z, i.e., the '<' operator is transitive. ¬(x < x), i.e., the '<' operator is irreflexive. x < y ∨ x = y ∨ y < x, i.e., the ordering satisfies trichotomy. x < y → x + z < y + z, i.e. the ordering is preserved under addition of the same element. (0 < z ∧ x < y) → x · z < y · z, i.e. the ordering is preserved under multiplication by the same positive element. x < y → ∃z (x + z = y), i.e. given any two distinct elements, the larger is the smaller plus another element. 0 < 1 ∧ ¬∃z (0 < z ∧ z < 1), i.e. zero and one are distinct and there is no element between them. In other words, 0 is covered by 1, which suggests that these numbers are discrete. 0 = x ∨ 0 < x, i.e. zero is the minimum element. The theory defined by these axioms is known as PA−. It is also incomplete and one of its important properties is that any structure satisfying this theory has an initial segment (ordered by ≤) isomorphic to ℕ. Elements in that segment are called standard elements, while other elements are called nonstandard elements. Finally, Peano arithmetic PA is obtained by adding the first-order induction schema. Undecidability and incompleteness According to Gödel's incompleteness theorems, the theory of PA (if consistent) is incomplete. Consequently, there are sentences of first-order logic (FOL) that are true in the standard model of PA but are not a consequence of the FOL axiomatization. Essential incompleteness already arises for theories with weaker axioms, such as Robinson arithmetic. Closely related to the above incompleteness result (via Gödel's completeness theorem for FOL) it follows that there is no algorithm for deciding whether a given FOL sentence is a consequence of a first-order axiomatization of Peano arithmetic or not. Hence, PA is an example of an undecidable theory. Undecidability arises already for the existential sentences of PA, due to the negative answer to Hilbert's tenth problem, whose proof implies that all computably enumerable sets are diophantine sets, and thus definable by existentially quantified formulas (with free variables) of PA.
Formulas of PA with higher quantifier rank (more quantifier alternations) than existential formulas are more expressive, and define sets in the higher levels of the arithmetical hierarchy. Nonstandard models Although the usual natural numbers satisfy the axioms of PA, there are other models as well (called "non-standard models"); the compactness theorem implies that the existence of nonstandard elements cannot be excluded in first-order logic. The upward Löwenheim–Skolem theorem shows that there are nonstandard models of PA of all infinite cardinalities. This is not the case for the original (second-order) Peano axioms, which have only one model, up to isomorphism. This illustrates one way the first-order system PA is weaker than the second-order Peano axioms. When interpreted as a proof within a first-order set theory, such as ZFC, Dedekind's categoricity proof for PA shows that each model of set theory has a unique model of the Peano axioms, up to isomorphism, that embeds as an initial segment of all other models of PA contained within that model of set theory. In the standard model of set theory, this smallest model of PA is the standard model of PA; however, in a nonstandard model of set theory, it may be a nonstandard model of PA. This situation cannot be avoided with any first-order formalization of set theory. It is natural to ask whether a countable nonstandard model can be explicitly constructed. The answer is affirmative as Skolem in 1933 provided an explicit construction of such a nonstandard model. On the other hand, Tennenbaum's theorem, proved in 1959, shows that there is no countable nonstandard model of PA in which either the addition or multiplication operation is computable. This result shows it is difficult to be completely explicit in describing the addition and multiplication operations of a countable nonstandard model of PA. There is only one possible order type of a countable nonstandard model. Letting ω be the order type of the natural numbers, ζ be the order type of the integers, and η be the order type of the rationals, the order type of any countable nonstandard model of PA is ω + ζ · η, which can be visualized as a copy of the natural numbers followed by a dense linear ordering of copies of the integers. Overspill A cut in a nonstandard model M is a nonempty subset C of M so that C is downward closed (x < y and y ∈ C ⇒ x ∈ C) and C is closed under successor. A proper cut is a cut that is a proper subset of M. Each nonstandard model has many proper cuts, including one that corresponds to the standard natural numbers. However, the induction scheme in Peano arithmetic prevents any proper cut from being definable. The overspill lemma, first proved by Abraham Robinson, formalizes this fact.
Mathematics
Axiomatic systems
null
25006
https://en.wikipedia.org/wiki/Procyon
Procyon
Procyon is the brightest star in the constellation of Canis Minor and usually the eighth-brightest star in the night sky, with an apparent visual magnitude of 0.34. It has the Bayer designation α Canis Minoris, which is Latinized to Alpha Canis Minoris, and abbreviated α CMi or Alpha CMi, respectively. As determined by the European Space Agency Hipparcos astrometry satellite, this system lies at a distance of just 11.46 light-years (3.51 parsecs), and is therefore one of Earth's nearest stellar neighbors. A binary star system, Procyon consists of a white-hued main-sequence star of spectral type F5 IV–V, designated component A, in orbit with a faint white dwarf companion of spectral type DQZ, named Procyon B. The pair orbit each other with a period of 40.84 years and an eccentricity of 0.4. Observation Procyon is usually the eighth-brightest star in the night sky, culminating at midnight on 14 January. It forms one of the three vertices of the Winter Triangle asterism, in combination with Sirius and Betelgeuse. The prime period for evening viewing of Procyon is in late winter in the Northern Hemisphere. It has a color index of 0.42, and its hue has been described as having a faint yellow tinge to it. Stellar system Procyon is a binary star system with a bright primary component, Procyon A, having an apparent magnitude of 0.34, and a faint companion, Procyon B, at magnitude 10.7. The pair orbit each other with a period of 40.84 years along an elliptical orbit with an eccentricity of 0.4, more eccentric than Mercury's. The plane of their orbit is inclined at an angle of 31.1° to the line of sight with the Earth. The average separation of the two components is 15.0 AU, a little less than the distance between Uranus and the Sun, though the eccentric orbit carries them as close as 8.9 AU and as far as 21.0 AU. Procyon A The primary has a stellar classification of F5IV–V, indicating that it is a late-stage F-type main-sequence star. Procyon A is bright for its spectral class, suggesting that it is evolving into a subgiant that has nearly fused its hydrogen core into helium, after which it will expand as the nuclear reactions move outside the core. As it continues to expand, the star will eventually swell to about 80 to 150 times its current diameter and become a red or orange color. This will probably happen within 10 to 100 million years. The effective temperature of the stellar atmosphere is an estimated 6,530 K, giving Procyon A a white hue. It is 1.5 times the solar mass, twice the solar radius, and has seven times the Sun's luminosity. Both the core and the envelope of this star are convective, the two regions being separated by a wide radiation zone. Oscillations In late June 2004, Canada's orbital MOST satellite telescope carried out a 32-day survey of Procyon A. The continuous optical monitoring was intended to confirm solar-like oscillations in its brightness observed from Earth and to permit asteroseismology. No oscillations were detected and the authors concluded that the theory of stellar oscillations may need to be reconsidered. However, others argued that the non-detection was consistent with published ground-based radial velocity observations of solar-like oscillations. Subsequent observations in radial velocity have confirmed that Procyon is indeed oscillating. Photometric measurements from the NASA Wide Field Infrared Explorer (WIRE) satellite from 1999 and 2000 showed evidence of granulation (convection near the surface of the star) and solar-like oscillations.
Unlike the MOST result, the variation seen in the WIRE photometry was in agreement with radial velocity measurements from the ground. Additional observations with MOST taken in 2007 were able to detect oscillations. Procyon B Like Sirius B, Procyon B is a white dwarf that was inferred from astrometric data long before it was observed. Its existence had been postulated by German astronomer Friedrich Bessel as early as 1844, and, although its orbital elements had been calculated by his countryman Arthur Auwers in 1862 as part of his thesis, Procyon B was not visually confirmed until 1896 when John Martin Schaeberle observed it at the predicted position using the 36-inch refractor at Lick Observatory. It is more difficult to observe from Earth than Sirius B, due to a greater apparent magnitude difference and smaller angular separation from its primary. At about 0.6 solar masses, Procyon B is considerably less massive than Sirius B; however, the peculiarities of degenerate matter ensure that it is larger than its more famous neighbor, with an estimated radius of 8,600 km, versus 5,800 km for Sirius B. The radius agrees with white dwarf models that assume a carbon core. It has a stellar classification of DQZ, having a helium-dominated atmosphere with traces of heavy elements. For reasons that remain unclear, the mass of Procyon B is unusually low for a white dwarf star of its type. With a surface temperature of roughly 7,700 K, it is also much cooler than Sirius B; this is a testament to its lesser mass and greater age. The mass of the progenitor star for Procyon B was about and it came to the end of its life some billion years ago, after a main-sequence lifetime of million years. X-ray emission Attempts to detect X-ray emission from Procyon with nonimaging, soft X-ray-sensitive detectors prior to 1975 failed. Extensive observations of Procyon were carried out with the Copernicus and TD-1A satellites in the late 1970s. The X-ray source associated with Procyon AB was observed on 1 April 1979, with the Einstein Observatory high-resolution imager (HRI). The HRI X-ray pointlike source location is ~4″ south of Procyon A, on the edge of the 90% confidence error circle, indicating identification with Procyon A rather than Procyon B which was located about 5″ north of Procyon A (about 9″ from the X-ray source location). Etymology and cultural significance α Canis Minoris (Latinized to Alpha Canis Minoris) is the star's Bayer designation. The name Procyon comes from the Ancient Greek Προκύων (Prokyōn), meaning "before the dog", since it precedes the "Dog Star" Sirius as it travels across the sky due to Earth's rotation. (Although Procyon has a greater right ascension, it also has a more northerly declination, which means it will rise above the horizon earlier than Sirius from most northerly latitudes.) In Greek mythology, Procyon is associated with Maera, a hound belonging to Erigone, daughter of Icarius of Athens. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Procyon for the star α Canis Minoris A. The two dog stars are referred to in the most ancient literature and were venerated by the Babylonians and the Egyptians. In Babylonian mythology, Procyon was known as Nangar (the Carpenter), an aspect of Marduk, involved in constructing and organizing the celestial sky.
The constellations in Macedonian folklore represented agricultural items and animals, reflecting their village way of life. To them, Procyon and Sirius were Volci "the wolves", circling hungrily around Orion which depicted a plough with oxen. Rarer names are the Latin translation of Procyon, Antecanis, and the Arabic-derived names Al Shira and Elgomaisa. Medieval astrolabes of England and Western Europe used a variant of this, Algomeiza/Algomeyza. Al Shira derives from , "the Syrian sign" (the other sign being Sirius; "Syria" is supposedly a reference to its northern location relative to Sirius); Elgomaisa derives from "the bleary-eyed (woman)", in contrast to "the teary-eyed (woman)", which is Sirius. (See Gomeisa.) In Chinese, (), meaning South River, refers to an asterism consisting of Procyon, ε Canis Minoris and β Canis Minoris. Consequently, Procyon itself is known as (, the Third Star of South River). It is part of the Vermilion Bird. The Hawaiians see Procyon as part of an asterism Ke ka o Makali'i ("the canoe bailer of Makali'i") that helps them navigate at sea. In Hawaiian language, this star is called Puana ("blossom"), which is a new Hawaiian name based on the Māori name Puangahori. It forms this asterism (Ke ka o Makali'i) with the Pleiades (Makali'i), Auriga, Orion, Capella, Sirius, Castor and Pollux. In Tahitian lore, Procyon was one of the pillars propping up the sky, known as Anâ-tahu'a-vahine-o-toa-te-manava ("star-the-priestess-of-brave-heart"), the pillar for elocution. Māori astronomers know the star as Puangahori ("False Puanga") which distinguishes it from its pair Puanga or Puanga-rua ("Blossom-cluster") which refers to a star of great importance to Māori culture and calendar, known by its western name Rigel. Procyon appears on the flag of Brazil, symbolizing the state of Amazonas. The Kalapalo people of Mato Grosso state in Brazil call Procyon and Canopus Kofongo ("Duck"), with Castor and Pollux representing his hands. The asterism's appearance signified the coming of the rainy season and increase in food staple manioc, used at feasts to feed guests. Known as Sikuliarsiujuittuq to the Inuit, Procyon was quite significant in their astronomy and mythology. Its eponymous name means "the one who never goes onto the newly formed sea ice", and refers to a man who stole food from his village's hunters because he was too obese to hunt on ice. He was killed by the other hunters who convinced him to go on the sea ice. Procyon received this designation because it typically appears red (though sometimes slightly greenish) as it rises during the Arctic winter; this red color was associated with Sikuliarsiujuittuq's bloody end. View from this system Were the Sun to be observed from this star system, it would appear to be a magnitude 2.55 star in the constellation Aquila with the exact opposite coordinates at right ascension , declination . It would be as bright as β Scorpii is in our sky. Canis Minor would obviously be missing its brightest star. Procyon's closest neighboring star is Luyten's Star, about away. Procyon would be the brightest star in the night sky of an exoplanet orbiting Luyten's Star, with an apparent magnitude of -4.68. Luyten's Star would also be visible from Procyon, at an apparent magnitude of 4.61, unlike any red dwarfs from Earth.
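As a rough illustration of where such brightness figures come from (a sketch under assumed inputs, not a calculation quoted from any source), the apparent magnitude of the Sun as seen from the Procyon system follows from the distance modulus, taking the Sun's absolute visual magnitude to be about 4.83 and Procyon's distance to be roughly 3.5 parsecs:

```python
import math

def apparent_magnitude(abs_mag: float, distance_pc: float) -> float:
    """Distance modulus: m = M + 5 * log10(d / 10 pc)."""
    return abs_mag + 5 * math.log10(distance_pc / 10)

# Assumed inputs (not quoted from the article): the Sun's absolute visual
# magnitude is about 4.83, and Procyon lies roughly 3.5 parsecs away.
print(round(apparent_magnitude(4.83, 3.5), 2))   # ~2.55, close to the figure quoted above
```

The same formula, applied with other absolute magnitudes and distances, is what lies behind statements such as Procyon's brightness as seen from Luyten's Star.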
Physical sciences
Notable stars
null
25010
https://en.wikipedia.org/wiki/Proton%E2%80%93proton%20chain
Proton–proton chain
The proton–proton chain, also commonly referred to as the p–p chain, is one of two known sets of nuclear fusion reactions by which stars convert hydrogen to helium. It dominates in stars with masses less than or equal to that of the Sun, whereas the CNO cycle, the other known reaction, is suggested by theoretical models to dominate in stars with masses greater than about 1.3 solar masses. In general, proton–proton fusion can occur only if the kinetic energy (temperature) of the protons is high enough to overcome their mutual electrostatic repulsion. In the Sun, deuteron-producing events are rare. Diprotons are the much more common result of proton–proton reactions within the star, and diprotons almost immediately decay back into two protons. Since the conversion of hydrogen to helium is slow, the complete conversion of the hydrogen initially in the core of the Sun is calculated to take more than ten billion years. Although sometimes called the "proton–proton chain reaction", it is not a chain reaction in the normal sense. In most nuclear reactions, a chain reaction designates a reaction that produces a product, such as neutrons given off during fission, that quickly induces another such reaction. The proton–proton chain is, like a decay chain, a series of reactions. The product of one reaction is the starting material of the next reaction. There are two main chains leading from hydrogen to helium in the Sun. One chain has five reactions, the other chain has six. History of the theory The theory that proton–proton reactions are the basic principle by which the Sun and other stars burn was advocated by Arthur Eddington in the 1920s. At the time, the temperature of the Sun was considered to be too low to overcome the Coulomb barrier. After the development of quantum mechanics, it was discovered that tunneling of the wavefunctions of the protons through the repulsive barrier allows for fusion at a lower temperature than the classical prediction. In 1939, Hans Bethe attempted to calculate the rates of various reactions in stars. Starting with two protons combining to give a deuterium nucleus and a positron, he found what we now call Branch II of the proton–proton chain. But he did not consider the reaction of two helium-3 nuclei (Branch I) which we now know to be important. This was part of the body of work in stellar nucleosynthesis for which Bethe won the Nobel Prize in Physics in 1967. The proton–proton chain The first step in all the branches is the fusion of two protons into a deuteron. As the protons fuse, one of them undergoes beta plus decay, converting into a neutron by emitting a positron and an electron neutrino (though a small number of deuterium nuclei are produced by the "pep" reaction, see below): p + p → 2H + e+ + νe. The positron will annihilate with an electron from the environment into two gamma rays. Including this annihilation and the energy of the neutrino, the net reaction p + p + e− → 2H + νe (which is the same as the PEP reaction, see below) has a Q value (released energy) of 1.442 MeV. The relative amounts of energy going to the neutrino and to the other products are variable. This is the rate-limiting reaction and is extremely slow due to it being initiated by the weak nuclear force. The average proton in the core of the Sun waits 9 billion years before it successfully fuses with another proton.
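As a rough bookkeeping sketch (the 0.42 MeV figure for the fusion step itself is an assumed input, not quoted in this paragraph), the 1.442 MeV Q value can be viewed as the energy of the p + p step plus the two 0.511 MeV gamma rays from the positron annihilation:

```python
# Rough energy bookkeeping for the first step of the p-p chain (values in MeV).
# The 0.42 MeV released by the fusion step itself is an assumed reference value.
q_pp_step = 0.42              # p + p -> 2H + e+ + neutrino
electron_rest_energy = 0.511  # each annihilation photon carries one electron rest energy
q_net = q_pp_step + 2 * electron_rest_energy
print(f"{q_net:.3f} MeV")     # 1.442 MeV, the Q value quoted above
```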
It has not been possible to measure the cross-section of this reaction experimentally because it is so low, but it can be calculated from theory. After it is formed, the deuteron produced in the first stage can fuse with another proton to produce the stable, light isotope of helium, 3He: 2H + p → 3He + γ. This process, mediated by the strong nuclear force rather than the weak force, is extremely fast by comparison to the first step. It is estimated that, under the conditions in the Sun's core, each newly created deuterium nucleus exists for only about one second before it is converted into helium-3. In the Sun, each helium-3 nucleus produced in these reactions exists for only about 400 years before it is converted into helium-4. Once the helium-3 has been produced, there are four possible paths to generate 4He. In p–p I, helium-4 is produced by fusing two helium-3 nuclei; the p–p II and p–p III branches fuse 3He with pre-existing 4He to form beryllium-7, which undergoes further reactions to produce two helium-4 nuclei. About 99% of the energy output of the sun comes from the various p–p chains, with the other 1% coming from the CNO cycle. According to one model of the sun, 83.3 percent of the 4He produced by the various branches is produced via branch I, while p–p II produces 16.68 percent and p–p III 0.02 percent. Since half the neutrinos produced in branches II and III are produced in the first step (synthesis of a deuteron), only about 8.35 percent of neutrinos come from the later steps (see below), and about 91.65 percent are from deuteron synthesis. However, another solar model from around the same time gives only 7.14 percent of neutrinos from the later steps and 92.86 percent from the synthesis of deuterium nuclei. The difference is apparently due to slightly different assumptions about the composition and metallicity of the sun. There is also the extremely rare p–p IV (Hep) branch. Other even rarer reactions may occur. The rate of these reactions is very low due to very small cross-sections, or because the number of reacting particles is so low that any reactions that might happen are statistically insignificant. The overall reaction is: 4 p → 4He + 2 e+ + 2 νe, releasing 26.73 MeV of energy, some of which is lost to the neutrinos. The p–p I branch: 3He + 3He → 4He + 2 p + 12.859 MeV. The complete chain releases a net energy of 26.732 MeV, but 2.2 percent of this energy (0.59 MeV) is lost to the neutrinos that are produced. The p–p I branch is dominant at temperatures of 10 to 18 MK. Below 10 MK, the p–p chain proceeds at a slow rate, resulting in a low production of 4He. The p–p II branch: 3He + 4He → 7Be + γ; 7Be + e− → 7Li + νe (0.861 MeV / 0.383 MeV); 7Li + p → 2 4He. The p–p II branch is dominant at temperatures of 18 to 25 MK. Note that the energies in the second reaction above are the energies of the neutrinos that are produced by the reaction. 90 percent of the neutrinos produced in the reaction of 7Be to 7Li carry an energy of 0.861 MeV, while the remaining 10 percent carry 0.383 MeV. The difference is whether the lithium-7 produced is in the ground state or an excited (metastable) state, respectively. The total energy released going from 7Be to stable 7Li is about 0.862 MeV, almost all of which is lost to the neutrino if the decay goes directly to the stable lithium.
The p–p III branch: 3He + 4He → 7Be + γ; 7Be + p → 8B + γ; 8B → 8Be + e+ + νe; 8Be → 2 4He. The last three stages of this chain, plus the positron annihilation, contribute a total of 18.209 MeV, though much of this is lost to the neutrino. The p–p III chain is dominant if the temperature exceeds 25 MK. The p–p III chain is not a major source of energy in the Sun, but it was very important in the solar neutrino problem because it generates very high energy neutrinos (up to 14.06 MeV). The p–p IV (Hep) branch This reaction is predicted theoretically, but it has never been observed due to its rarity in the Sun. In this reaction, helium-3 captures a proton directly to give helium-4, with an even higher possible neutrino energy (up to 18.8 MeV): 3He + p → 4He + e+ + νe. The mass–energy relationship gives 19.795 MeV for the energy released by this reaction plus the ensuing annihilation, some of which is lost to the neutrino. Energy release Comparing the mass of the final helium-4 atom with the masses of the four protons reveals that 0.7 percent of the mass of the original protons has been lost. This mass has been converted into energy, in the form of kinetic energy of produced particles, gamma rays, and neutrinos released during each of the individual reactions. The total energy yield of one whole chain is 26.73 MeV. Energy released as gamma rays will interact with electrons and protons and heat the interior of the Sun. Also kinetic energy of fusion products (e.g. of the two protons and the 4He from the p–p I reaction) adds energy to the plasma in the Sun. This heating keeps the core of the Sun hot and prevents it from collapsing under its own weight as it would if the sun were to cool down. Neutrinos do not interact significantly with matter and therefore do not heat the interior and thereby help support the Sun against gravitational collapse. Their energy is lost: the neutrinos in the p–p I, p–p II, and p–p III chains carry away 2.0%, 4.0%, and 28.3% of the energy in those reactions, respectively. The following table calculates the amount of energy lost to neutrinos and the amount of "solar luminosity" coming from the three branches. "Luminosity" here means the amount of energy given off by the Sun as electromagnetic radiation rather than as neutrinos. The starting figures used are the ones mentioned higher in this article. The table concerns only the 99% of the power and neutrinos that come from the p–p reactions, not the 1% coming from the CNO cycle. The PEP reaction A deuteron can also be produced by the rare pep (proton–electron–proton) reaction (electron capture): p + e− + p → 2H + νe. In the Sun, the frequency ratio of the pep reaction versus the p–p reaction is 1:400. However, the neutrinos released by the pep reaction are far more energetic: while neutrinos produced in the first step of the p–p reaction range in energy up to 0.42 MeV, the pep reaction produces sharp-energy-line neutrinos of 1.44 MeV. Detection of solar neutrinos from this reaction was reported by the Borexino collaboration in 2012. Both the pep and p–p reactions can be seen as two different Feynman representations of the same basic interaction, where the electron passes to the right side of the reaction as a positron. This is represented in the figure of proton–proton and electron-capture reactions in a star, available at the NDM'06 web site.
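The 0.7 percent mass loss and the 26.73 MeV yield mentioned in the Energy release section above can be checked with a short back-of-the-envelope calculation; the atomic masses and the mass–energy conversion factor below are assumed reference values, not figures taken from this article:

```python
# Back-of-the-envelope check of the 4 p -> 4He mass defect (assumed reference values).
m_h1, m_he4 = 1.007825, 4.002602   # approximate atomic masses in unified mass units (u)
u_to_mev = 931.494                 # energy equivalent of 1 u in MeV

mass_defect = 4 * m_h1 - m_he4
print(f"fraction of mass lost: {mass_defect / (4 * m_h1):.1%}")   # ~0.7%
print(f"energy yield: {mass_defect * u_to_mev:.2f} MeV")          # ~26.73 MeV
```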
Physical sciences
Stellar astronomy
Astronomy
25011
https://en.wikipedia.org/wiki/Plankton
Plankton
Plankton are the diverse collection of organisms that drift in water (or air) but are unable to actively propel themselves against currents (or wind). The individual organisms constituting plankton are called plankters. In the ocean, they provide a crucial source of food to many small and large aquatic organisms, such as bivalves, fish, and baleen whales. Marine plankton include bacteria, archaea, algae, protozoa, microscopic fungi, and drifting or floating animals that inhabit the saltwater of oceans and the brackish waters of estuaries. Freshwater plankton are similar to marine plankton, but are found in lakes and rivers. Mostly, plankton just drift where currents take them, though some, like jellyfish, swim slowly but not fast enough to generally overcome the influence of currents. Although plankton are usually thought of as inhabiting water, there are also airborne versions that live part of their lives drifting in the atmosphere. These aeroplankton include plant spores, pollen and wind-scattered seeds. They may also include microorganisms swept into the air from terrestrial dust storms and oceanic plankton swept into the air by sea spray. Though many planktonic species are microscopic in size, plankton includes organisms over a wide range of sizes, including large organisms such as jellyfish. This is because plankton are defined by their ecological niche and level of motility rather than by any phylogenetic or taxonomic classification. The "plankton" category differentiates these organisms from those that float on the water's surface, called neuston, those that can swim against a current, called nekton, and those that live on the deep sea floor, called benthos. Terminology The name plankton was coined by German marine biologist Victor Hensen in 1887 from shortening the word halyplankton from Greek háls "sea" and planáō to "drift" or "wander". While some forms are capable of independent movement and can swim hundreds of meters vertically in a single day (a behavior called diel vertical migration), their horizontal position is primarily determined by the surrounding water movement, and plankton typically flow with ocean currents. This is in contrast to nekton organisms, such as fish, squid and marine mammals, which can swim against the ambient flow and control their position in the environment. Within the plankton, holoplankton spend their entire life cycle as plankton (e.g. most algae, copepods, salps, and some jellyfish). By contrast, meroplankton are only planktic for part of their lives (usually the larval stage), and then graduate to either a nektic (swimming) or benthic (sea floor) existence. Examples of meroplankton include the larvae of sea urchins, starfish, crustaceans, marine worms, and most fish. The amount and distribution of plankton depends on available nutrients, the state of water and a large amount of other plankton. The study of plankton is termed planktology and a planktonic individual is referred to as a plankter. The adjective planktonic is widely used in both the scientific and popular literature, and is a generally accepted term. However, from the standpoint of prescriptive grammar, the less-commonly used planktic is more strictly the correct adjective. When deriving English words from their Greek or Latin roots, the gender-specific ending (in this case, "-on" which indicates the word is neuter) is normally dropped, using only the root of the word in the derivation. 
Trophic groups Plankton are primarily divided into broad functional (or trophic level) groups: Phytoplankton (from Greek phyton, or plant) are autotrophic prokaryotic or eukaryotic algae that live near the water surface where there is sufficient light to support photosynthesis. Among the more important groups are the diatoms, cyanobacteria, dinoflagellates, and coccolithophores. Zooplankton (from Greek zoon, or animal) are small protozoans or metazoans (e.g. crustaceans and other animals) that feed on other plankton. Some of the eggs and larvae of larger nektonic animals, such as fish, crustaceans, and annelids, are included here. Mycoplankton include fungi and fungus-like organisms, which, like bacterioplankton, are also significant in remineralisation and nutrient cycling. Bacterioplankton include bacteria and archaea, which play an important role in remineralising organic material down the water column (note that prokaryotic phytoplankton are also bacterioplankton). Virioplankton are viruses. Viruses are more abundant in the plankton than bacteria and archaea, though much smaller. Mixoplankton are mixotrophs. Plankton have traditionally been categorized as producer, consumer, and recycler groups, but some plankton are able to benefit from more than just one trophic level. In this mixed trophic strategy—known as mixotrophy—organisms act as both producers and consumers, either at the same time or switching between modes of nutrition in response to ambient conditions. This makes it possible to use photosynthesis for growth when nutrients and light are abundant, but switch to eating phytoplankton, zooplankton or each other when growing conditions are poor. Mixotrophs are divided into two groups: constitutive mixotrophs (CMs), which are able to perform photosynthesis on their own, and non-constitutive mixotrophs (NCMs), which use phagocytosis to engulf phototrophic prey that are either kept alive inside the host cell, which benefits from its photosynthesis, or are digested, except for the plastids, which continue to perform photosynthesis (kleptoplasty). Recognition of the importance of mixotrophy as an ecological strategy is increasing, as well as the wider role this may play in marine biogeochemistry. Studies have shown that mixotrophs are much more important for marine ecology than previously assumed and comprise more than half of all microscopic plankton. Their presence acts as a buffer that prevents the collapse of ecosystems during times with little to no light. Size groups Plankton are also often described in terms of size. Usually the following divisions are used, with size ranges given as equivalent spherical diameter (ESD): Megaplankton (> 20 cm): metazoans; e.g. jellyfish; ctenophores; salps and pyrosomes (pelagic Tunicata); Cephalopoda; Amphipoda. Macroplankton (2→20 cm): metazoans; e.g. Pteropoda; Chaetognaths; Euphausiacea (krill); Medusae; ctenophores; salps, doliolids and pyrosomes (pelagic Tunicata); Cephalopoda; Janthina and Recluzia (two genera of gastropods); Amphipoda. Mesoplankton (0.2→20 mm): metazoans; e.g.
copepods; Medusae; Cladocera; Ostracoda; Chaetognaths; Pteropoda; Tunicata. Microplankton (20→200 μm): large eukaryotic protists; most phytoplankton; Protozoa Foraminifera; tintinnids; other ciliates; Rotifera; juvenile metazoans – Crustacea (copepod nauplii). Nanoplankton (2→20 μm): small eukaryotic protists; small diatoms; small flagellates; Pyrrophyta; Chrysophyta; Chlorophyta; Xanthophyta. Picoplankton (0.2→2 μm): small eukaryotic protists; bacteria; Chrysophyta. Femtoplankton (< 0.2 μm): marine viruses. However, some of these terms may be used with very different boundaries, especially on the larger end. The existence and importance of nano- and even smaller plankton was only discovered during the 1980s, but they are thought to make up the largest proportion of all plankton in number and diversity. The microplankton and smaller groups are microorganisms and operate at low Reynolds numbers, where the viscosity of water is more important than its mass or inertia. Habitat groups Marine plankton Marine plankton includes marine bacteria and archaea, algae, protozoa and drifting or floating animals that inhabit the saltwater of oceans and the brackish waters of estuaries. Freshwater plankton Freshwater plankton are similar to marine plankton, but are found inland in the freshwaters of lakes and rivers. Aeroplankton Aeroplankton are tiny lifeforms that float and drift in the air, carried by the current of the wind; they are the atmospheric analogue to oceanic plankton. Most of the living things that make up aeroplankton are very small to microscopic in size, and many can be difficult to identify because of their tiny size. Scientists can collect them for study in traps and sweep nets from aircraft, kites or balloons. Aeroplankton is made up of numerous microbes, including viruses, about 1000 different species of bacteria, around 40,000 varieties of fungi, and hundreds of species of protists, algae, mosses and liverworts that live some part of their life cycle as aeroplankton, often as spores, pollen, and wind-scattered seeds. Additionally, peripatetic microorganisms are swept into the air from terrestrial dust storms, and an even larger amount of airborne marine microorganisms are propelled high into the atmosphere in sea spray. Aeroplankton deposits hundreds of millions of airborne viruses and tens of millions of bacteria every day on every square meter around the planet. The sea surface microlayer, compared to the sub-surface waters, contains elevated concentrations of bacteria and viruses. These materials can be transferred from the sea-surface to the atmosphere in the form of wind-generated aqueous aerosols due to their high vapour tension and a process known as volatilisation. When airborne, these microbes can be transported long distances to coastal regions. If they hit land they can have an effect on animal, vegetation and human health. Marine aerosols that contain viruses can travel hundreds of kilometers from their source and remain in liquid form as long as the humidity is high enough (over 70%). These aerosols are able to remain suspended in the atmosphere for about 31 days. Evidence suggests that bacteria can remain viable after being transported inland through aerosols. Some reached as far as 200 meters at 30 meters above sea level.
The process which transfers this material to the atmosphere causes further enrichment in both bacteria and viruses in comparison to either the SML or sub-surface waters (up to three orders of magnitude in some locations). Geoplankton Many animals live in terrestrial environments by thriving in transient, often microscopic bodies of water and moisture; these include rotifers and gastrotrichs, which lay resilient eggs capable of surviving years in dry environments, and some of which can go dormant themselves. Nematodes are usually microscopic with this lifestyle. Water bears, despite only having lifespans of a few months, famously can enter suspended animation during dry or hostile conditions and survive for decades. This allows them to be ubiquitous in terrestrial environments despite needing water to grow and reproduce. Many microscopic crustacean groups like copepods and amphipods (of which sandhoppers are members) and seed shrimp are known to go dormant when dry and live in transient bodies of water too. Other groups Gelatinous zooplankton Gelatinous zooplankton are fragile animals that live in the water column in the ocean. Their delicate bodies have no hard parts and are easily damaged or destroyed. Gelatinous zooplankton are often transparent. All jellyfish are gelatinous zooplankton, but not all gelatinous zooplankton are jellyfish. The most commonly encountered organisms include ctenophores, medusae, salps, and Chaetognatha in coastal waters. However, almost all marine phyla, including Annelida, Mollusca and Arthropoda, contain gelatinous species, but many of those odd species live in the open ocean and the deep sea and are less available to the casual ocean observer. Ichthyoplankton Ichthyoplankton are the eggs and larvae of fish. They are mostly found in the sunlit zone of the water column, less than 200 metres deep, which is sometimes called the epipelagic or photic zone. Ichthyoplankton are planktonic, meaning they cannot swim effectively under their own power, but must drift with the ocean currents. Fish eggs cannot swim at all, and are unambiguously planktonic. Early stage larvae swim poorly, but later stage larvae swim better and cease to be planktonic as they grow into juveniles. Fish larvae are part of the zooplankton that eat smaller plankton, while fish eggs carry their food supply. Both eggs and larvae are themselves eaten by larger animals. Fish can produce high numbers of eggs which are often released into the open water column. Fish eggs typically have a diameter of about 1 millimetre. The newly hatched young of oviparous fish are called larvae. They are usually poorly formed, carry a large yolk sac (for nourishment), and are very different in appearance from juvenile and adult specimens. The larval period in oviparous fish is relatively short (usually only several weeks), and larvae rapidly grow and change appearance and structure (a process termed metamorphosis) to become juveniles. During this transition larvae must switch from their yolk sac to feeding on zooplankton prey, a process which depends on typically inadequate zooplankton density, starving many larvae. In time fish larvae become able to swim against currents, at which point they cease to be plankton and become juvenile fish. Holoplankton Holoplankton are organisms that are planktic for their entire life cycle. Holoplankton can be contrasted with meroplankton, which are planktic organisms that spend part of their life cycle in the benthic zone.
Examples of holoplankton include some diatoms, radiolarians, some dinoflagellates, foraminifera, amphipods, krill, copepods, and salps, as well as some gastropod mollusk species. Holoplankton dwell in the pelagic zone as opposed to the benthic zone. Holoplankton include both phytoplankton and zooplankton and vary in size. The most common plankton are protists. Meroplankton Meroplankton are a wide variety of aquatic organisms that have both planktonic and benthic stages in their life cycles. Much of the meroplankton consists of larval stages of larger organisms. Meroplankton can be contrasted with holoplankton, which are planktonic organisms that stay in the pelagic zone as plankton throughout their entire life cycle. After some time in the plankton, many meroplankton graduate to the nekton or adopt a benthic (often sessile) lifestyle on the seafloor. The larval stages of benthic invertebrates make up a significant proportion of planktonic communities. The planktonic larval stage is particularly crucial to many benthic invertebrates in order to disperse their young. Depending on the particular species and the environmental conditions, larval or juvenile-stage meroplankton may remain in the pelagic zone for durations ranging from hours to months. Pseudoplankton Pseudoplankton are organisms that attach themselves to planktonic organisms or other floating objects, such as drifting wood, buoyant shells of organisms such as Spirula, or man-made flotsam. Examples include goose barnacles and the bryozoan Jellyella. By themselves these animals cannot float, which contrasts them with true planktonic organisms, such as Velella and the Portuguese Man o' War, which are buoyant. Pseudoplankton are often found in the guts of filtering zooplankters. Tychoplankton Tychoplankton are organisms, such as free-living or attached benthic organisms and other non-planktonic organisms, that are carried into the plankton through a disturbance of their benthic habitat, or by winds and currents. This can occur by direct turbulence or by disruption of the substrate and subsequent entrainment in the water column. Tychoplankton are, therefore, a primary subdivision for sorting planktonic organisms by duration of lifecycle spent in the plankton, as neither their entire lives nor particular reproductive portions are confined to planktonic existence. Tychoplankton are sometimes called accidental plankton. Mineralized plankton Distribution Apart from aeroplankton, plankton inhabits oceans, seas, lakes and ponds. Local abundance varies horizontally, vertically and seasonally. The primary cause of this variability is the availability of light. All plankton ecosystems are driven by the input of solar energy (but see chemosynthesis), confining primary production to surface waters, and to geographical regions and seasons having abundant light. A secondary variable is nutrient availability. Although large areas of the tropical and sub-tropical oceans have abundant light, they experience relatively low primary production because they offer limited nutrients such as nitrate, phosphate and silicate. This results from large-scale ocean circulation and water column stratification. In such regions, primary production usually occurs at greater depth, although at a reduced level (because of reduced light). Despite significant macronutrient concentrations, some ocean regions are unproductive (so-called HNLC regions). The micronutrient iron is deficient in these regions, and adding it can lead to the formation of phytoplankton algal blooms. 
Iron primarily reaches the ocean through the deposition of dust on the sea surface. Paradoxically, oceanic areas adjacent to unproductive, arid land thus typically have abundant phytoplankton (e.g., the eastern Atlantic Ocean, where trade winds bring dust from the Sahara Desert in north Africa). While plankton are most abundant in surface waters, they live throughout the water column. At depths where no primary production occurs, zooplankton and bacterioplankton instead consume organic material sinking from more productive surface waters above. This flux of sinking material, so-called marine snow, can be especially high following the termination of spring blooms. The local distribution of plankton can be affected by wind-driven Langmuir circulation and the biological effects of this physical process. Ecological significance Food chain As well as representing the lower levels of a food chain that supports commercially important fisheries, plankton ecosystems play a role in the biogeochemical cycles of many important chemical elements, including the ocean's carbon cycle. Fish larvae mainly eat zooplankton, which in turn eat phytoplankton. Carbon cycle Primarily by grazing on phytoplankton, zooplankton provide carbon to the planktic foodweb, either respiring it to provide metabolic energy, or upon death as biomass or detritus. Organic material tends to be denser than seawater, so it sinks into open ocean ecosystems away from the coastlines, transporting carbon along with it. This process, called the biological pump, is one reason that oceans constitute the largest carbon sink on Earth. However, it has been shown to be influenced by increases in temperature. In 2019, a study indicated that at ongoing rates of seawater acidification, Antarctic phytoplankton could become smaller and less effective at storing carbon before the end of the century. It might be possible to increase the ocean's uptake of carbon dioxide (CO2) generated through human activities by increasing plankton production through iron fertilization – introducing iron into the ocean. However, this technique may not be practical at a large scale. Ocean oxygen depletion and resultant methane production (caused by the excess production remineralising at depth) is one potential drawback. Oxygen production Phytoplankton absorb energy from the Sun and nutrients from the water to produce their own nourishment or energy. In the process of photosynthesis, phytoplankton release molecular oxygen (O2) into the water as a waste byproduct. It is estimated that about 50% of the world's oxygen is produced via phytoplankton photosynthesis. The rest is produced via photosynthesis on land by plants. Furthermore, phytoplankton photosynthesis has controlled the atmospheric CO2/O2 balance since the early Precambrian Eon. Absorption efficiency The absorption efficiency (AE) of plankton is the proportion of food absorbed by the plankton that determines how available the consumed organic materials are in meeting the required physiological demands. Depending on the feeding rate and prey composition, variations in absorption efficiency may lead to variations in fecal pellet production, and thus regulate how much organic material is recycled back to the marine environment. Low feeding rates typically lead to high absorption efficiency and small, dense pellets, while high feeding rates typically lead to low absorption efficiency and larger pellets with more organic content.
Another contributing factor to dissolved organic matter (DOM) release is respiration rate. Physical factors such as oxygen availability, pH, and light conditions may affect overall oxygen consumption and how much carbon is lost from zooplankton in the form of respired CO2. The relative sizes of zooplankton and prey also mediate how much carbon is released via sloppy feeding. Smaller prey are ingested whole, whereas larger prey may be fed on more "sloppily", that is, more biomatter is released through inefficient consumption. There is also evidence that diet composition can impact nutrient release, with carnivorous diets releasing more dissolved organic carbon (DOC) and ammonium than omnivorous diets. Biomass variability The growth of phytoplankton populations is dependent on light levels and nutrient availability. The chief factor limiting growth varies from region to region in the world's oceans. On a broad scale, growth of phytoplankton in the oligotrophic tropical and subtropical gyres is generally limited by nutrient supply, while light often limits phytoplankton growth in subarctic gyres. Environmental variability at multiple scales influences the nutrient and light available for phytoplankton, and as these organisms form the base of the marine food web, this variability in phytoplankton growth influences higher trophic levels. For example, at interannual scales phytoplankton levels temporarily plummet during El Niño periods, influencing populations of zooplankton, fishes, sea birds, and marine mammals. The effects of anthropogenic warming on the global population of phytoplankton are an area of active research. Changes in the vertical stratification of the water column, the rate of temperature-dependent biological reactions, and the atmospheric supply of nutrients are expected to have important impacts on future phytoplankton productivity. Additionally, changes in the mortality of phytoplankton due to rates of zooplankton grazing may be significant. Plankton diversity Planktonic relationships Fish and plankton Zooplankton are the initial prey item for almost all fish larvae as they switch from their yolk sacs to external feeding. Fish rely on the density and distribution of zooplankton to match that of new larvae, which can otherwise starve. Natural factors (e.g., current variations, temperature changes) and man-made factors (e.g. river dams, ocean acidification, rising temperatures) can strongly affect zooplankton populations, which can in turn strongly affect fish larval survival, and therefore breeding success. Plankton can be patchy in marine environments that lack significant fish populations; where fish are abundant, zooplankton dynamics are influenced by the rate of fish predation in their environment, and depending on that predation rate, zooplankton populations may exhibit regular or chaotic dynamics. Fish larvae can also have a negative effect on planktonic algal blooms: by diminishing the number of available zooplankton, the larvae permit excessive phytoplankton growth and thereby prolong the blooming event. The importance of both phytoplankton and zooplankton is also well-recognized in extensive and semi-intensive pond fish farming. Plankton population-based pond management strategies for fish rearing have been practiced by traditional fish farmers for decades, illustrating the importance of plankton even in man-made environments.
Whales and plankton Of all animal fecal matter, it is whale feces that is the 'trophy' in terms of increasing nutrient availability. Phytoplankton are the powerhouse of open ocean primary production and they can acquire many nutrients from whale feces. In the marine food web, phytoplankton form the base and are consumed by zooplankton and krill, which are preyed upon by progressively larger marine organisms, including whales, so whale feces can be said to help fuel the entire food web. Humans and plankton Plankton have many direct and indirect effects on humans. Around 70% of the oxygen in the atmosphere is produced in the oceans from phytoplankton performing photosynthesis, meaning that the majority of the oxygen available to humans and other organisms that respire aerobically is produced by plankton. Plankton also make up the base of the marine food web, providing food for all the trophic levels above. Recent studies have analyzed the marine food web to see if the system runs on a top-down or bottom-up approach. Essentially, this research is focused on understanding whether changes in the food web are driven by nutrients at the bottom of the food web or predators at the top. The general conclusion is that bottom-up control appears to be more predictive of food web behavior. This indicates that plankton have more sway in determining the success of the primary consumer species that prey on them than do the secondary consumers that prey on the primary consumers. In some cases, plankton act as carriers of deadly human pathogens. One such case is that of cholera, an infection caused by several pathogenic strains of Vibrio cholerae. These species have been shown to have a symbiotic relationship with chitinous zooplankton species like copepods. The bacteria benefit not only from the chitin of the zooplankton, which serves as a food source, but also from protection against acidic environments. Once the copepods have been ingested by a human host, the chitinous exterior protects the bacteria from stomach acid, allowing them to pass into the intestines. Once there, the bacteria bind to the surface of the small intestine, and the host starts developing symptoms, including severe diarrhea, within five days.
Biology and health sciences
Other organisms
null
25030
https://en.wikipedia.org/wiki/Plain%20text
Plain text
In computing, plain text is a loose term for data (e.g. file contents) that represent only characters of readable material but not its graphical representation nor other objects (floating-point numbers, images, etc.). It may also include a limited number of "whitespace" characters that affect simple arrangement of text, such as spaces, line breaks, or tabulation characters. Plain text is different from formatted text, where style information is included; from structured text, where structural parts of the document such as paragraphs, sections, and the like are identified; and from binary files in which some portions must be interpreted as binary objects (encoded integers, real numbers, images, etc.). The term is sometimes used quite loosely, to mean files that contain only "readable" content (or just files with nothing that the speaker does not prefer). For example, that could exclude any indication of fonts or layout (such as markup, markdown, or even tabs); characters such as curly quotes, non-breaking spaces, soft hyphens, em dashes, and/or ligatures; or other things. In principle, plain text can be in any encoding, but occasionally the term is taken to imply ASCII. As Unicode-based encodings such as UTF-8 and UTF-16 become more common, that usage may be shrinking. Plain text is also sometimes used only to exclude "binary" files: those in which at least some parts of the file cannot be correctly interpreted via the character encoding in effect. For example, a file or string consisting of "hello" (in any encoding), followed by 4 bytes that express a binary integer that is not a character, is a binary file. Converting a plain text file to a different character encoding does not change the meaning of the text, as long as the correct character encoding is used. However, converting a binary file to a different character encoding may corrupt the non-textual data. Plain text and rich text According to The Unicode Standard: "Plain text is a pure sequence of character codes; plain Unicode-encoded text is therefore a sequence of Unicode character codes. In contrast, styled text, also known as rich text, is any text representation containing plain text plus added information such as a language identifier, font size, color, hypertext links, and so on. SGML, RTF, HTML, XML, and TeX are examples of rich text fully represented as plain text streams, interspersing plain text data with sequences of characters that represent the additional data structures." According to other definitions, however, files that contain markup or other meta-data are generally considered plain text, so long as the markup is also in a directly human-readable form (as in HTML, XML, and so on). Thus, representations such as SGML, RTF, HTML, XML, wiki markup, and TeX, as well as nearly all programming language source code files, are considered plain text. The particular content is irrelevant to whether a file is plain text. For example, an SVG file can express drawings or even bitmapped graphics, but is still plain text. The use of plain text rather than binary files enables files to survive much better "in the wild", in part by making them largely immune to computer architecture incompatibilities. For example, all the problems of endianness can be avoided (with encodings such as UCS-2 rather than UTF-8, endianness matters, but uniformly for every character, rather than for potentially-unknown subsets of it). 
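As a concrete illustration of these points, the short Python sketch below re-encodes the same text under several character encodings and shows that its meaning survives the round trip, while a byte string that mixes text with a raw binary integer is rejected by the decoder. The sample string and the choice of encodings are illustrative assumptions, not anything specified above.

    # Re-encoding plain text preserves its meaning as long as the correct
    # encoding is used to decode it again.
    text = "café ☃ plain text"

    for encoding in ("utf-8", "utf-16", "latin-1"):
        try:
            encoded = text.encode(encoding)          # characters -> bytes
            assert encoded.decode(encoding) == text  # bytes -> same characters
            print(encoding, "round-trips in", len(encoded), "bytes")
        except UnicodeEncodeError:
            # Single-byte encodings cannot represent every character
            # (Latin-1 has no code for the snowman), so encoding may fail.
            print(encoding, "cannot represent this text")

    # A "binary" file mixes text with bytes that are not characters at all.
    blob = "hello".encode("ascii") + (1234567890).to_bytes(4, "big")
    try:
        blob.decode("utf-8")
    except UnicodeDecodeError as error:
        print("not plain text:", error)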
Usage The purpose of using plain text today is primarily independence from programs that require their very own special encoding or formatting or file format. Plain text files can be opened, read, and edited with ubiquitous text editors and utilities. A command-line interface allows people to give commands in plain text and get a response, also typically in plain text. Many other computer programs are also capable of processing or creating plain text, such as countless programs in DOS, Windows, classic Mac OS, and Unix and its kin; as well as web browsers (a few browsers such as Lynx and the Line Mode Browser produce only plain text for display) and other e-text readers. Plain text files are almost universal in programming; a source code file containing instructions in a programming language is almost always a plain text file. Plain text is also commonly used for configuration files, which are read for saved settings at the startup of a program. Plain text is used for much e-mail. A comment, a ".txt" file, or a TXT Record generally contains only plain text (without formatting) intended for humans to read. The best format for storing knowledge persistently is plain text, rather than some binary format. Encoding Character encodings Before the early 1960s, computers were mainly used for number-crunching rather than for text, and memory was extremely expensive. Computers often allocated only 6 bits for each character, permitting only 64 characters—assigning codes for A-Z, a-z, and 0-9 would leave only 2 codes: nowhere near enough. Most computers opted not to support lower-case letters. Thus, early text projects such as Roberto Busa's Index Thomisticus, the Brown Corpus, and others had to resort to conventions such as keying an asterisk preceding letters actually intended to be upper-case. Fred Brooks of IBM argued strongly for going to 8-bit bytes, because someday people might want to process text, and won. Although IBM used EBCDIC, most text from then on came to be encoded in ASCII, using values from 0 to 31 for (non-printing) control characters, and values from 32 to 127 for graphic characters such as letters, digits, and punctuation. Most machines stored characters in 8 bits rather than 7, ignoring the remaining bit or using it as a checksum. The near-ubiquity of ASCII was a great help, but failed to address international and linguistic concerns. The dollar-sign ("$") was not as useful in England, and the accented characters used in Spanish, French, German, Portuguese, Italian and many other languages were entirely unavailable in ASCII (not to mention characters used in Greek, Russian, and most Eastern languages). Many individuals, companies, and countries defined extra characters as needed—often reassigning control characters, or using values in the range from 128 to 255. Using values above 128 conflicts with using the 8th bit as a checksum, but the checksum usage gradually died out. These additional characters were encoded differently in different countries, making texts impossible to decode without figuring out the originator's rules. For instance, a browser might display ¬A rather than ` if it tried to interpret one character set as another. The International Organization for Standardization (ISO) eventually developed several code pages under ISO 8859, to accommodate various languages. The first of these (ISO 8859-1) is also known as "Latin-1", and covers the needs of most (not all) European languages that use Latin-based characters (there was not quite enough room to cover them all). 
ISO 2022 then provided conventions for "switching" between different character sets in mid-file. Many other organisations developed variations on these, and for many years Windows and Macintosh computers used incompatible variations. The text-encoding situation became more and more complex, leading to efforts by ISO and by the Unicode Consortium to develop a single, unified character encoding that could cover all known (or at least all currently known) languages. After some conflict, these efforts were unified. Unicode currently allows for 1,114,112 code values, and assigns codes covering nearly all modern text writing systems, as well as many historical ones, and for many non-linguistic characters such as printer's dingbats, mathematical symbols, etc. Text is considered plain text regardless of its encoding. To properly understand or process it the recipient must know (or be able to figure out) what encoding was used; however, they need not know anything about the computer architecture that was used, or about the binary structures defined by whatever program (if any) created the data. Perhaps the most common way of explicitly stating the specific encoding of plain text is with a MIME type. For email and HTTP, the default MIME type is "text/plain" -- plain text without markup. Another MIME type often used in both email and HTTP is "text/html; charset=UTF-8" -- plain text represented using the UTF-8 character encoding with HTML markup. Another common MIME type is "application/json" -- plain text represented using the UTF-8 character encoding with JSON markup. When a document is received without any explicit indication of the character encoding, some applications use charset detection to attempt to guess what encoding was used. Control codes ASCII reserves the first 32 codes (numbers 0–31 decimal) for control characters known as the "C0 set": codes originally intended not to represent printable information, but rather to control devices (such as printers) that make use of ASCII, or to provide meta-information about data streams such as those stored on magnetic tape. They include common characters like the newline and the tab character. In 8-bit character sets such as Latin-1 and the other ISO 8859 sets, the first 32 characters of the "upper half" (128 to 159) are also control codes, known as the "C1 set". They are rarely used directly; when they turn up in documents which are ostensibly in an ISO 8859 encoding, their code positions generally refer instead to the characters at that position in a proprietary, system-specific encoding, such as Windows-1252 or Mac OS Roman, that use the codes to instead provide additional graphic characters. Unicode defines additional control characters, including bi-directional text direction override characters (used to explicitly mark right-to-left writing inside left-to-right writing and the other way around) and variation selectors to select alternate forms of CJK ideographs, emoji and other characters.
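A minimal Python sketch of two practical points made above: the same bytes read as different characters under different encodings, so the recipient must know (or detect) the encoding; and C0 control characters can be flagged mechanically. The byte values and the helper function are illustrative assumptions, not part of any standard discussed here.

    # The same bytes mean different characters under different encodings,
    # which is why a recipient of plain text must know (or detect) the encoding.
    data = bytes([0x63, 0x61, 0x66, 0xE9])        # intended as "café" in Latin-1

    print(data.decode("latin-1"))                 # café
    print(data.decode("mac_roman"))               # same bytes, a different last character
    try:
        data.decode("utf-8")                      # 0xE9 on its own is not valid UTF-8
    except UnicodeDecodeError:
        print("not valid UTF-8")

    # C0 control characters occupy code points 0-31; apart from the common
    # whitespace controls they are usually unwanted in plain text.
    def c0_controls(s):
        return [ord(ch) for ch in s if ord(ch) < 32 and ch not in "\t\n\r"]

    print(c0_controls("hello\x07world\n"))        # [7] -- the BEL control character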
Technology
Data storage and memory
null
25040
https://en.wikipedia.org/wiki/Pioneer%20program
Pioneer program
The Pioneer programs were two series of United States lunar and planetary space probes. The first program, which ran from 1958 to 1960, unsuccessfully attempted to send spacecraft to orbit the Moon, successfully sent one spacecraft to fly by the Moon, and successfully sent one spacecraft to investigate interplanetary space between the orbits of Earth and Venus. The second program, which ran from 1965 to 1992, sent four spacecraft to measure interplanetary space weather, two to explore Jupiter and Saturn, and two to explore Venus. The two outer planet probes, Pioneer 10 and Pioneer 11, became the first two of five artificial objects to achieve the escape velocity that will allow them to leave the Solar System, and carried a golden plaque each depicting a man and a woman and information about the origin and the creators of the probes, in case any extraterrestrials find them someday. Naming Credit for naming the first probe has been attributed to Stephen A. Saliga, who had been assigned to the Air Force Orientation Group, Wright-Patterson AFB, as chief designer of Air Force exhibits. While he was at a briefing, the spacecraft was described to him, as, a "lunar-orbiting vehicle, with an infrared scanning device." Saliga thought the title too long, and lacked theme for an exhibit design. He suggested, "Pioneer", as the name of the probe, since "the Army had already launched and orbited the Explorer satellite, and their Public Information Office was identifying the Army, as, 'Pioneers in Space,'" and, by adopting the name, the Air Force would "make a 'quantum jump' as to who, really, [were] the 'Pioneers' in space.'" Early missions The earliest missions were attempts to achieve Earth's escape velocity, simply to show it was feasible and to study the Moon. This included the first launch by NASA which was formed from the old NACA. These missions were carried out by the Air Force Ballistic Missile Division, Army, and NASA. Able space probes (1958–1960) Juno II lunar probes (1958–1959) Pioneer 3 – Lunar flyby, missed Moon due to launcher failure December 6, 1958 Pioneer 4 – Lunar flyby, achieved Earth escape velocity, launched March 3, 1959 Later missions (1965–1978) Five years after the early Able space probe missions ended, NASA Ames Research Center used the Pioneer name for a new series of missions, initially aimed at the inner Solar System, before the flyby missions to Jupiter and Saturn. While successful, the missions returned much poorer images than the Voyager program probes would five years later. In 1978, the end of the program saw a return to the inner Solar System, with the Pioneer Venus Orbiter and Multiprobe, this time using orbital insertion rather than flyby missions. The new missions were numbered beginning with Pioneer 6 (alternate names in parentheses). Interplanetary weather The spacecraft in Pioneer missions 6, 7, 8, and 9 comprised a new interplanetary space weather network: Pioneer 6 (Pioneer A) – launched December 1965 Pioneer 7 (Pioneer B) – launched August 1966 Pioneer 8 (Pioneer C) – launched December 1967 Pioneer 9 (Pioneer D) – launched November 1968 (inactive since 1983) Pioneer E – lost in launcher failure August 1969 Pioneer 6 and Pioneer 9 are in solar orbits with 0.8 AU distance to the Sun. Their orbital periods are therefore slightly shorter than Earth's. Pioneer 7 and Pioneer 8 are in solar orbits with 1.1 AU distance to the Sun. Their orbital periods are therefore slightly longer than Earth's. 
Since the probes' orbital periods differ from that of the Earth, from time to time, they face a side of the Sun that cannot be seen from Earth. The probes can observe regions of the Sun several days before the Sun's rotation brings them into view of Earth-based and Earth-orbiting observatories. Outer Solar System missions Pioneer 10 (Pioneer F) – Jupiter, interstellar medium, launched March 1972 Pioneer 11 (Pioneer G) – Jupiter, Saturn, interstellar medium, launched April 1973 Pioneer H – proposed out-of-ecliptic mission for 1974, never launched. It would have used the flight spare built for Pioneers 10 and 11. Venus project Pioneer Venus Orbiter (Pioneer Venus 1, Pioneer 12) – launched May 1978 Pioneer Venus Multiprobe (Pioneer Venus 2, Pioneer 13) – launched August 1978 Pioneer Venus Probe Bus – transport vehicle and upper atmosphere probe Pioneer Venus Large Probe – 300 kg parachuted probe Pioneer Venus North Probe – 75 kg impactor probe Pioneer Venus Night Probe – 75 kg impactor probe Pioneer Venus Day Probe – 75 kg impactor probe
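The statement that the 0.8 AU probes have slightly shorter years than Earth and the 1.1 AU probes slightly longer ones can be checked with Kepler's third law. The Python sketch below treats the quoted distances as circular-orbit semi-major axes, which is an approximation for illustration only.

    # Kepler's third law: orbital period in years equals a**1.5 for a
    # semi-major axis a in astronomical units (assuming a circular orbit).
    for name, a_au in [("Pioneer 6/9", 0.8), ("Earth", 1.0), ("Pioneer 7/8", 1.1)]:
        period_years = a_au ** 1.5
        print(f"{name}: a = {a_au} AU, period ~ {period_years:.2f} yr")

    # Pioneer 6/9: ~0.72 yr, slightly shorter than Earth's year
    # Pioneer 7/8: ~1.15 yr, slightly longer than Earth's year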
Technology
Unmanned spacecraft
null
25065
https://en.wikipedia.org/wiki/Parameter
Parameter
A parameter, generally, is any characteristic that can help in defining or classifying a particular system (meaning an event, project, object, situation, etc.). That is, a parameter is an element of a system that is useful, or critical, when identifying the system, or when evaluating its performance, status, condition, etc. Parameter has more specific meanings within various disciplines, including mathematics, computer programming, engineering, statistics, logic, linguistics, and electronic musical composition. In addition to its technical uses, there are also extended uses, especially in non-scientific contexts, where it is used to mean defining characteristics or boundaries, as in the phrases 'test parameters' or 'game play parameters'. Modelization When a system is modeled by equations, the values that describe the system are called parameters. For example, in mechanics, the masses, the dimensions and shapes (for solid bodies), the densities and the viscosities (for fluids), appear as parameters in the equations modeling movements. There are often several choices for the parameters, and choosing a convenient set of parameters is called parametrization. For example, if one were considering the movement of an object on the surface of a sphere much larger than the object (e.g. the Earth), there are two commonly used parametrizations of its position: angular coordinates (like latitude/longitude), which neatly describe large movements along circles on the sphere, and directional distance from a known point (e.g. "10km NNW of Toronto" or equivalently "8km due North, and then 6km due West, from Toronto"), which are often simpler for movement confined to a (relatively) small area, like within a particular country or region. Such parametrizations are also relevant to the modelization of geographic areas (i.e. map drawing). Mathematical functions Mathematical functions have one or more arguments that are designated in the definition by variables. A function definition can also contain parameters, but unlike variables, parameters are not listed among the arguments that the function takes. When parameters are present, the definition actually defines a whole family of functions, one for every valid set of values of the parameters. For instance, one could define a general quadratic function by declaring f(x) = ax2 + bx + c; here, the variable x designates the function's argument, but a, b, and c are parameters (in this instance, also called coefficients) that determine which particular quadratic function is being considered. A parameter could be incorporated into the function name to indicate its dependence on the parameter. For instance, one may define the base-b logarithm by the formula logb(x) = log(x)/log(b), where b is a parameter that indicates which logarithmic function is being used. It is not an argument of the function, and will, for instance, be a constant when considering the derivative of logb(x), which equals 1/(x ln b). In some informal situations it is a matter of convention (or historical accident) whether some or all of the symbols in a function definition are called parameters. However, changing the status of symbols between parameter and variable changes the function as a mathematical object. For instance, the falling factorial power n(n − 1)(n − 2)...(n − k + 1) defines a polynomial function of n (when k is considered a parameter), but is not a polynomial function of k (when n is considered a parameter). Indeed, in the latter case, it is only defined for non-negative integer arguments. 
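The distinction between the argument x and the parameters a, b and c can be made concrete in code: fixing the parameters picks out one member of a whole family of functions. The following Python sketch is an illustration only; the helper names make_quadratic and make_log are not from the article.

    import math

    # A parameterized family of functions: fixing the parameters a, b, c
    # selects one particular quadratic; x remains the argument.
    def make_quadratic(a, b, c):
        def f(x):
            return a * x**2 + b * x + c
        return f

    # The same idea gives the base-b logarithm as a one-parameter family.
    def make_log(base):
        return lambda x: math.log(x) / math.log(base)

    quadratic = make_quadratic(1, -3, 2)   # the particular quadratic x^2 - 3x + 2
    log2 = make_log(2)
    print(quadratic(5), log2(8))           # 12 and (approximately) 3.0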
More formal presentations of such situations typically start out with a function of several variables (including all those that might sometimes be called "parameters") such as as the most fundamental object being considered, then defining functions with fewer variables from the main one by means of currying. Sometimes it is useful to consider all functions with certain parameters as parametric family, i.e. as an indexed family of functions. Examples from probability theory are given further below. Examples In a section on frequently misused words in his book The Writer's Art, James J. Kilpatrick quoted a letter from a correspondent, giving examples to illustrate the correct use of the word parameter: W.M. Woods ... a mathematician ... writes ... "... a variable is one of the many things a parameter is not." ... The dependent variable, the speed of the car, depends on the independent variable, the position of the gas pedal. [Kilpatrick quoting Woods] "Now ... the engineers ... change the lever arms of the linkage ... the speed of the car ... will still depend on the pedal position ... but in a ... different manner. You have changed a parameter" A parametric equaliser is an audio filter that allows the frequency of maximum cut or boost to be set by one control, and the size of the cut or boost by another. These settings, the frequency level of the peak or trough, are two of the parameters of a frequency response curve, and in a two-control equaliser they completely describe the curve. More elaborate parametric equalisers may allow other parameters to be varied, such as skew. These parameters each describe some aspect of the response curve seen as a whole, over all frequencies. A graphic equaliser provides individual level controls for various frequency bands, each of which acts only on that particular frequency band. If asked to imagine the graph of the relationship y = ax2, one typically visualizes a range of values of x, but only one value of a. Of course a different value of a can be used, generating a different relation between x and y. Thus a is a parameter: it is less variable than the variable x or y, but it is not an explicit constant like the exponent 2. More precisely, changing the parameter a gives a different (though related) problem, whereas the variations of the variables x and y (and their interrelation) are part of the problem itself. In calculating income based on wage and hours worked (income equals wage multiplied by hours worked), it is typically assumed that the number of hours worked is easily changed, but the wage is more static. This makes wage a parameter, hours worked an independent variable, and income a dependent variable. Mathematical models In the context of a mathematical model, such as a probability distribution, the distinction between variables and parameters was described by Bard as follows: We refer to the relations which supposedly describe a certain physical situation, as a model. Typically, a model consists of one or more equations. The quantities appearing in the equations we classify into variables and parameters. The distinction between these is not always clear cut, and it frequently depends on the context in which the variables appear. Usually a model is designed to explain the relationships that exist among quantities which can be measured independently in an experiment; these are the variables of the model. 
To formulate these relationships, however, one frequently introduces "constants" which stand for inherent properties of nature (or of the materials and equipment used in a given experiment). These are the parameters. Analytic geometry In analytic geometry, a curve can be described as the image of a function whose argument, typically called the parameter, lies in a real interval. For example, the unit circle can be specified in the following two ways: in implicit form, the curve is the locus of points (x, y) in the Cartesian plane that satisfy the relation x2 + y2 = 1; in parametric form, the curve is the image of the function t ↦ (cos t, sin t) with parameter t. As a parametric equation this can be written (x, y) = (cos t, sin t). The parameter t in this equation would elsewhere in mathematics be called the independent variable. Mathematical analysis In mathematical analysis, integrals dependent on a parameter are often considered. These are of the form F(t) = ∫ f(x; t) dx, where the integral is taken over a fixed interval of x. In this formula, t is the argument of the function F and, on the right-hand side, the parameter on which the integral depends. When evaluating the integral, t is held constant, and so it is considered to be a parameter. If we are interested in the value of F for different values of t, we then consider t to be a variable. The quantity x is a dummy variable or variable of integration (confusingly, also sometimes called a parameter of integration). Statistics and econometrics In statistics and econometrics, the probability framework above still holds, but attention shifts to estimating the parameters of a distribution based on observed data, or testing hypotheses about them. In frequentist estimation parameters are considered "fixed but unknown", whereas in Bayesian estimation they are treated as random variables, and their uncertainty is described by a distribution. In estimation theory of statistics, "statistic" or estimator refers to samples, whereas "parameter" or estimand refers to populations, from which the samples are taken. A statistic is a numerical characteristic of a sample that can be used as an estimate of the corresponding parameter, the numerical characteristic of the population from which the sample was drawn. For example, the sample mean (estimator), denoted x̄, can be used as an estimate of the mean parameter (estimand), denoted μ, of the population from which the sample was drawn. Similarly, the sample variance (estimator), denoted S2, can be used to estimate the variance parameter (estimand), denoted σ2, of the population from which the sample was drawn. (Note that the sample standard deviation (S) is not an unbiased estimate of the population standard deviation (σ): see Unbiased estimation of standard deviation.) It is possible to make statistical inferences without assuming a particular parametric family of probability distributions. In that case, one speaks of non-parametric statistics as opposed to the parametric statistics just described. For example, a test based on Spearman's rank correlation coefficient would be called non-parametric since the statistic is computed from the rank-order of the data disregarding their actual values (and thus regardless of the distribution they were sampled from), whereas those based on the Pearson product-moment correlation coefficient are parametric tests since the statistic is computed directly from the data values and thus estimates the parameter known as the population correlation. 
Probability theory In probability theory, one may describe the distribution of a random variable as belonging to a family of probability distributions, distinguished from each other by the values of a finite number of parameters. For example, one talks about "a Poisson distribution with mean value λ". The function defining the distribution (the probability mass function) is f(k; λ) = λ^k e^(−λ) / k!. This example nicely illustrates the distinction between constants, parameters, and variables. e is Euler's number, a fundamental mathematical constant. The parameter λ is the mean number of observations of some phenomenon in question, a property characteristic of the system. k is a variable, in this case the number of occurrences of the phenomenon actually observed from a particular sample. If we want to know the probability of observing k1 occurrences, we plug it into the function to get f(k1; λ). Without altering the system, we can take multiple samples, which will have a range of values of k, but the system is always characterized by the same λ. For instance, suppose we have a radioactive sample that emits, on average, five particles every ten minutes. We take measurements of how many particles the sample emits over ten-minute periods. The measurements exhibit different values of k, and if the sample behaves according to Poisson statistics, then each value of k will come up in a proportion given by the probability mass function above. From measurement to measurement, however, λ remains constant at 5. If we do not alter the system, then the parameter λ is unchanged from measurement to measurement; if, on the other hand, we modulate the system by replacing the sample with a more radioactive one, then the parameter λ would increase. Another common distribution is the normal distribution, which has as parameters the mean μ and the variance σ². In the above examples, the distributions of the random variables are completely specified by the type of distribution, i.e. Poisson or normal, and the parameter values, i.e. mean and variance. In such a case, we have a parameterized distribution. It is possible to use the sequence of moments (mean, mean square, ...) or cumulants (mean, variance, ...) as parameters for a probability distribution: see Statistical parameter. Computer programming In computer programming, two notions of parameter are commonly used, and are referred to as parameters and arguments—or more formally as a formal parameter and an actual parameter. For example, in the definition of a function such as y = f(x) = x + 2, x is the formal parameter (the parameter) of the defined function. When the function is evaluated for a given value, as in f(3), or y = f(3) = 3 + 2 = 5, the value 3 is the actual parameter (the argument) for evaluation by the defined function; it is a given value (actual value) that is substituted for the formal parameter of the defined function. (In casual usage the terms parameter and argument might inadvertently be interchanged, and thereby used incorrectly.) These concepts are discussed in a more precise way in functional programming and its foundational disciplines, lambda calculus and combinatory logic. Terminology varies between languages; some computer languages such as C define parameter and argument as given here, while Eiffel uses an alternative convention. Artificial intelligence In artificial intelligence, a model describes the probability that something will occur. Parameters in a model are the weights of the various probabilities. 
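A short Python sketch, included here as an illustration rather than anything from the source, ties together the Poisson example and the formal/actual parameter distinction just described: lam plays the role of the parameter λ, k is the observed variable, and in f the name x is the formal parameter while the value 3 supplied at the call site is the argument.

    import math

    # Poisson probability mass function: lam is the parameter characterizing
    # the system; k is the variable observed from sample to sample.
    def poisson_pmf(k, lam):
        return (lam ** k) * math.exp(-lam) / math.factorial(k)

    # A source emitting on average five particles per ten-minute interval:
    print(poisson_pmf(3, 5.0))   # probability of observing exactly 3 particles

    # Formal vs actual parameters: x is the formal parameter of f;
    # the value 3 supplied at the call site is the actual parameter (argument).
    def f(x):
        return x + 2

    print(f(3))                  # 5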
Tiernan Ray, in an article on GPT-3, described parameters this way: Engineering In engineering (especially involving data acquisition) the term parameter sometimes loosely refers to an individual measured item. This usage is not consistent, as sometimes the term channel refers to an individual measured item, with parameter referring to the setup information about that channel. "Speaking generally, properties are those physical quantities which directly describe the physical attributes of the system; parameters are those combinations of the properties which suffice to determine the response of the system. Properties can have all sorts of dimensions, depending upon the system being considered; parameters are dimensionless, or have the dimension of time or its reciprocal." The term can also be used in engineering contexts, however, as it is typically used in the physical sciences. Environmental science In environmental science and particularly in chemistry and microbiology, a parameter is used to describe a discrete chemical or microbiological entity that can be assigned a value: commonly a concentration, but may also be a logical entity (present or absent), a statistical result such as a 95 percentile value or in some cases a subjective value. Linguistics Within linguistics, the word "parameter" is almost exclusively used to denote a binary switch in a Universal Grammar within a Principles and Parameters framework. Logic In logic, the parameters passed to (or operated on by) an open predicate are called parameters by some authors (e.g., Prawitz's Natural Deduction; Paulson's Designing a theorem prover). Parameters locally defined within the predicate are called variables. This extra distinction pays off when defining substitution (without this distinction special provision must be made to avoid variable capture). Others (maybe most) just call parameters passed to (or operated on by) an open predicate variables, and when defining substitution have to distinguish between free variables and bound variables. Music In music theory, a parameter denotes an element which may be manipulated (composed), separately from the other elements. The term is used particularly for pitch, loudness, duration, and timbre, though theorists or composers have sometimes considered other musical aspects as parameters. The term is particularly used in serial music, where each parameter may follow some specified series. Paul Lansky and George Perle criticized the extension of the word "parameter" to this sense, since it is not closely related to its mathematical sense, but it remains common. The term is also common in music production, as the functions of audio processing units (such as the attack, release, ratio, threshold, and other variables on a compressor) are defined by parameters specific to the type of unit (compressor, equalizer, delay, etc.).
Mathematics
Functions: General
null
25073
https://en.wikipedia.org/wiki/Polyatomic%20ion
Polyatomic ion
A polyatomic ion (also known as a molecular ion) is a covalently bonded set of two or more atoms, or of a metal complex, that can be considered to behave as a single unit and that has a net charge that is not zero. The term molecule may or may not be used to refer to a polyatomic ion, depending on the definition used. The prefix poly- carries the meaning "many" in Greek, but even ions of two atoms are commonly described as polyatomic. In older literature, a polyatomic ion may instead be referred to as a radical (or less commonly, as a radical group). In contemporary usage, the term radical refers to various free radicals, which are species that have an unpaired electron and need not be charged. A simple example of a polyatomic ion is the hydroxide ion, which consists of one oxygen atom and one hydrogen atom, jointly carrying a net charge of −1; its chemical formula is OH−. In contrast, an ammonium ion consists of one nitrogen atom and four hydrogen atoms, with a charge of +1; its chemical formula is NH4+. Polyatomic ions often are useful in the context of acid–base chemistry and in the formation of salts. Often, a polyatomic ion can be considered as the conjugate acid or base of a neutral molecule. For example, the conjugate base of sulfuric acid (H2SO4) is the polyatomic hydrogen sulfate anion (HSO4−). The removal of another hydrogen ion produces the sulfate anion (SO42−). Nomenclature of polyatomic anions There are several patterns that can be used for learning the nomenclature of polyatomic anions. First, when the prefix bi is added to a name, a hydrogen is added to the ion's formula and its charge is increased by 1, the latter being a consequence of the hydrogen ion's +1 charge. An alternative to the bi- prefix is to use the word hydrogen in its place, naming the anion derived from adding a hydrogen ion. For example, consider the carbonate ion, CO32−: CO32− + H+ → HCO3−, which is called either bicarbonate or hydrogen carbonate. The process that forms these ions is called protonation. Most of the common polyatomic anions are oxyanions, conjugate bases of oxyacids (acids derived from the oxides of non-metallic elements). For example, the sulfate anion, SO42−, is derived from H2SO4, which can be regarded as SO3 + H2O. The second rule is based on the oxidation state of the central atom in the ion, which in practice is often (but not always) directly related to the number of oxygen atoms in the ion, following the pattern shown below. The chlorine oxyanion family illustrates this: hypochlorite, ClO− (chlorine oxidation state +1); chlorite, ClO2− (+3); chlorate, ClO3− (+5); and perchlorate, ClO4− (+7). As the number of oxygen atoms bound to chlorine increases, the chlorine's oxidation number becomes more positive. This gives rise to the following common pattern: first, the -ate ion is considered to be the base name; adding a per- prefix adds an oxygen, while changing the -ate suffix to -ite will reduce the oxygens by one, and keeping the suffix -ite and adding the prefix hypo- reduces the number of oxygens by one more, all without changing the charge. The naming pattern follows within many different oxyanion series based on a standard root for that particular series. The -ite has one less oxygen than the -ate, but different -ate anions might have different numbers of oxygen atoms. These rules do not work with all polyatomic anions, but they do apply to several of the more common ones. These prefixes are used in the same way for other common anion groups, such as sulfate and sulfite, nitrate and nitrite, and phosphate and phosphite. Some oxo-anions can dimerize with loss of an oxygen atom. The prefix pyro- is used because the reaction that forms these chemicals often involves heating. 
The prefix pyro- can also be expressed with the prefix di-. For example, the dichromate ion, Cr2O72−, is such a dimer. Other examples of common polyatomic ions The following tables give additional examples of commonly encountered polyatomic ions. Only a few representatives are given, as the number of polyatomic ions encountered in practice is very large.
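The hypo-/-ite/-ate/per- pattern described above can be written down as a simple lookup, shown here for the chlorine series only; the Python snippet is an illustration, not a general-purpose nomenclature tool.

    # The chlorine oxyanion series, keyed by the number of oxygen atoms
    # around the central chlorine (the charge is -1 throughout); it follows
    # the hypo-/-ite/-ate/per- pattern described above.
    chlorine_oxyanions = {
        1: ("hypochlorite", "ClO-"),
        2: ("chlorite", "ClO2-"),
        3: ("chlorate", "ClO3-"),
        4: ("perchlorate", "ClO4-"),
    }

    for n_oxygen, (name, formula) in sorted(chlorine_oxyanions.items()):
        print(n_oxygen, "oxygen atom(s):", name, formula)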
Physical sciences
Bonding
Chemistry
25096
https://en.wikipedia.org/wiki/Peripheral%20nervous%20system
Peripheral nervous system
The peripheral nervous system (PNS) is one of two components that make up the nervous system of bilateral animals, with the other part being the central nervous system (CNS). The PNS consists of nerves and ganglia, which lie outside the brain and the spinal cord. The main function of the PNS is to connect the CNS to the limbs and organs, essentially serving as a relay between the brain and spinal cord and the rest of the body. Unlike the CNS, the PNS is not protected by the vertebral column and skull, or by the blood–brain barrier, which leaves it exposed to toxins. The peripheral nervous system can be divided into the somatic nervous system and the visceral nervous system. Each of these has a sensory and a motor division. The visceral motor division is known as the autonomic nervous system. In the somatic nervous system, the cranial nerves are part of the PNS with the exceptions of the olfactory nerve (cranial nerve I) and olfactory epithelia, and the optic nerve (cranial nerve II) along with the retina, which are considered parts of the central nervous system based on developmental origin. The second cranial nerve is not a true peripheral nerve but a tract of the diencephalon. Cranial nerve ganglia, as with all ganglia, are part of the PNS. The autonomic nervous system exerts involuntary control over smooth muscle and glands. The connection between CNS and organs allows the system to be in two different functional states: sympathetic and parasympathetic. Structure The peripheral nervous system is divided into the somatic nervous system and the autonomic nervous system. The somatic nervous system is under voluntary control, and transmits signals from the brain to end organs such as muscles. The sensory nervous system is part of the somatic nervous system and transmits signals from senses such as taste and touch (including fine touch and gross touch) to the spinal cord and brain. The autonomic nervous system is a "self-regulating" system which influences the function of organs outside voluntary control, such as the heart rate, or the functions of the digestive system. Somatic nervous system The somatic nervous system includes the sensory nervous system (e.g. the somatosensory system) and consists of sensory nerves and somatic nerves, and many nerves which hold both functions. In the head and neck, cranial nerves carry somatosensory data. There are twelve cranial nerves, ten of which originate from the brainstem, and mainly control the functions of the anatomic structures of the head with some exceptions. One unique cranial nerve is the vagus nerve, which receives sensory information from organs in the thorax and abdomen. The other unique cranial nerve is the accessory nerve which is responsible for innervating the sternocleidomastoid and trapezius muscles, neither of which are located exclusively in the head. For the rest of the body, spinal nerves are responsible for somatosensory information. These arise from the spinal cord. Usually these arise as a web ("plexus") of interconnected nerve roots that arrange to form single nerves. These nerves control the functions of the rest of the body. In humans, there are 31 pairs of spinal nerves: 8 cervical, 12 thoracic, 5 lumbar, 5 sacral, and 1 coccygeal. These nerve roots are named according to the spinal vertebrae to which they are adjacent. In the cervical region, the spinal nerve roots come out above the corresponding vertebrae (i.e., the nerve root between the skull and the 1st cervical vertebra is called spinal nerve C1). 
From the thoracic region to the coccygeal region, the spinal nerve roots come out below the corresponding vertebrae. This method creates a problem when naming the spinal nerve root between C7 and T1 (so it is called spinal nerve root C8). In the lumbar and sacral region, the spinal nerve roots travel within the dural sac and they travel below the level of L2 as the cauda equina. Cervical spinal nerves (C1–C4) The first 4 cervical spinal nerves, C1 through C4, split and recombine to produce a variety of nerves that serve the neck and the back of the head. Spinal nerve C1 is called the suboccipital nerve, which provides motor innervation to muscles at the base of the skull. C2 and C3 form many of the nerves of the neck, providing both sensory and motor control. These include the greater occipital nerve, which provides sensation to the back of the head, the lesser occipital nerve, which provides sensation to the area behind the ears, the greater auricular nerve and the lesser auricular nerve. The phrenic nerve, which arises from nerve roots C3, C4 and C5, is essential for survival. It supplies the thoracic diaphragm, enabling breathing. If the spinal cord is transected above C3, then spontaneous breathing is not possible. Brachial plexus (C5–T1) The last four cervical spinal nerves, C5 through C8, and the first thoracic spinal nerve, T1, combine to form the brachial plexus, or plexus brachialis, a tangled array of nerves, splitting, combining and recombining, to form the nerves that subserve the upper limb and upper back. Although the brachial plexus may appear tangled, it is highly organized and predictable, with little variation between people. See brachial plexus injuries. Lumbosacral plexus (L1–Co1) The anterior divisions of the lumbar nerves, sacral nerves, and coccygeal nerve form the lumbosacral plexus, the first lumbar nerve being frequently joined by a branch from the twelfth thoracic. For descriptive purposes this plexus is usually divided into three parts: lumbar plexus sacral plexus pudendal plexus Autonomic nervous system The autonomic nervous system (ANS) controls involuntary responses to regulate physiological functions. The brain and spinal cord of the central nervous system are connected, by ganglionic neurons, to organs with smooth or cardiac muscle, such as the bladder and heart, and to exocrine and endocrine glands. The most notable physiological effects from autonomic activity are pupil constriction and dilation, and salivation. The autonomic nervous system is always active, but is in either the sympathetic or the parasympathetic state. Depending on the situation, one state can overshadow the other, resulting in a release of different kinds of neurotransmitters. Sympathetic nervous system The sympathetic system is activated during a "fight or flight" situation in which mental stress or physical danger is encountered. Neurotransmitters such as norepinephrine and epinephrine are released, which increase heart rate and blood flow in certain areas such as muscle, while simultaneously decreasing the activity of functions that are not critical for survival, such as digestion. The systems are independent of each other, which allows activation of certain parts of the body while others remain rested. Parasympathetic nervous system Primarily using the neurotransmitter acetylcholine (ACh) as a mediator, the parasympathetic system allows the body to function in a "rest and digest" state. 
Consequently, when the parasympathetic system dominates the body, there are increases in salivation and digestive activity, while heart rate and other sympathetic responses decrease. Unlike with the sympathetic system, humans have some voluntary control over the parasympathetic system. The most prominent examples of this control are urination and defecation. Enteric nervous system There is a lesser known division of the autonomic nervous system known as the enteric nervous system. Located only around the digestive tract, this system allows for local control without input from the sympathetic or the parasympathetic branches, though it can still receive and respond to signals from the rest of the body. The enteric system is responsible for various functions related to the gastrointestinal system. Disease Diseases of the peripheral nervous system can be specific to one or more nerves, or affect the system as a whole. Damage to a single peripheral nerve or nerve root is called a mononeuropathy. Such injuries can result from trauma or compression. Compression of nerves can occur because of a tumour mass or injury. Alternatively, if a nerve is in an area with a fixed size, it may become trapped if the other components increase in size, as in carpal tunnel syndrome and tarsal tunnel syndrome. Common symptoms of carpal tunnel syndrome include pain and numbness in the thumb, index and middle finger. In peripheral neuropathy, the function of one or more nerves is damaged through a variety of means. Damage may occur because of toxic causes such as diabetes (diabetic neuropathy), alcohol, heavy metals or other toxins; some infections; or autoimmune and inflammatory conditions such as amyloidosis and sarcoidosis. Peripheral neuropathy is associated with a sensory loss in a "glove and stocking" distribution that begins at the periphery and slowly progresses upwards, and may also be associated with acute and chronic pain. Peripheral neuropathy is not limited to the somatosensory nerves; it can also involve the autonomic nervous system (autonomic neuropathy).
Biology and health sciences
Nervous system
null
25098
https://en.wikipedia.org/wiki/Phase%20velocity
Phase velocity
The phase velocity of a wave is the rate at which the wave propagates in any medium. This is the velocity at which the phase of any one frequency component of the wave travels. For such a component, any given phase of the wave (for example, the crest) will appear to travel at the phase velocity. The phase velocity is given in terms of the wavelength λ (lambda) and time period T as vp = λ/T. Equivalently, in terms of the wave's angular frequency ω, which specifies angular change per unit of time, and wavenumber (or angular wave number) k, which represents the angular change per unit of space, vp = ω/k. To gain some basic intuition for this equation, we consider a propagating (cosine) wave A cos(kx − ωt). We want to see how fast a particular phase of the wave travels. For example, we can choose kx − ωt = 0, the phase of the first crest. This implies kx = ωt, and so x = (ω/k)t. Formally, we let the phase φ = kx − ωt and see immediately that ∂φ/∂t = −ω and ∂φ/∂x = k. So, it immediately follows that a point of constant phase moves with velocity dx/dt = −(∂φ/∂t)/(∂φ/∂x) = ω/k. As a result, we observe an inverse relation between the wavelength and the angular frequency for a fixed phase velocity. If the wave has higher frequency oscillations, the wavelength must be shortened for the phase velocity to remain constant. Additionally, the phase velocity of electromagnetic radiation may – under certain circumstances (for example anomalous dispersion) – exceed the speed of light in vacuum, but this does not indicate any superluminal information or energy transfer. It was theoretically described by physicists such as Arnold Sommerfeld and Léon Brillouin. The previous definition of phase velocity has been demonstrated for an isolated wave. However, such a definition can be extended to a beat of waves, or to a signal composed of multiple waves. For this it is necessary to mathematically write the beat or signal as a low frequency envelope multiplying a carrier. Thus the phase velocity of the carrier determines the phase velocity of the wave set. Group velocity The group velocity of a collection of waves is defined as vg = dω/dk. When multiple sinusoidal waves are propagating together, the resultant superposition of the waves can result in an "envelope" wave as well as a "carrier" wave that lies inside the envelope. This commonly appears in wireless communication when modulation (a change in amplitude and/or phase) is employed to send data. To gain some intuition for this definition, we consider a superposition of two (cosine) waves cos(k1x − ω1t) + cos(k2x − ω2t) with nearly equal angular frequencies and wavevectors. Using a product-to-sum identity, this sum can be rewritten as a product of two waves: an envelope wave formed by cos(((k1 − k2)/2)x − ((ω1 − ω2)/2)t) and a carrier wave formed by cos(((k1 + k2)/2)x − ((ω1 + ω2)/2)t). We call the velocity of the envelope wave the group velocity. We see that the phase velocity of the envelope is (ω1 − ω2)/(k1 − k2) = Δω/Δk. In the continuous differential case, this becomes the definition of the group velocity, vg = dω/dk. Refractive index In the context of electromagnetics and optics, the frequency ω is some function ω(k) of the wave number, so in general, the phase velocity and the group velocity depend on specific medium and frequency. The ratio between the speed of light c and the phase velocity vp is known as the refractive index, n = c/vp. In this way, we can obtain another form for group velocity for electromagnetics. Writing k = ωn(ω)/c, a quick way to derive this form is to observe that dk/dω = (n + ω·dn/dω)/c. We can then rearrange the above to obtain vg = dω/dk = c/(n + ω·dn/dω). From this formula, we see that the group velocity is equal to the phase velocity only when the refractive index is independent of frequency, i.e. when dn/dω = 0. When this occurs, the medium is called non-dispersive, as opposed to dispersive, where various properties of the medium depend on the frequency ω. The relation ω = ω(k) is known as the dispersion relation of the medium.
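For a dispersive medium, both velocities can be computed directly from the dispersion relation. The Python sketch below uses deep-water gravity waves, ω = sqrt(gk), purely as an illustrative example of a dispersion relation; for that choice the group velocity works out to half the phase velocity.

    import math

    # Phase and group velocity from an example dispersion relation omega(k).
    # Deep-water gravity waves, omega = sqrt(g*k), are used for illustration.
    g = 9.81  # m/s^2

    def omega(k):
        return math.sqrt(g * k)

    def phase_velocity(k):
        return omega(k) / k                                 # v_p = omega / k

    def group_velocity(k, dk=1e-6):
        return (omega(k + dk) - omega(k - dk)) / (2 * dk)   # numerical d(omega)/dk

    k = 2 * math.pi / 100.0                                 # a wave with 100 m wavelength
    print(phase_velocity(k))                                # about 12.5 m/s
    print(group_velocity(k))                                # about 6.2 m/s, roughly v_p / 2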
Physical sciences
Waves
Physics
25107
https://en.wikipedia.org/wiki/Polio
Polio
Poliomyelitis ( ), commonly shortened to polio, is an infectious disease caused by the poliovirus. Approximately 75% of cases are asymptomatic; mild symptoms which can occur include sore throat and fever; in a proportion of cases more severe symptoms develop such as headache, neck stiffness, and paresthesia. These symptoms usually pass within one or two weeks. A less common symptom is permanent paralysis, and possible death in extreme cases. Years after recovery, post-polio syndrome may occur, with a slow development of muscle weakness similar to what the person had during the initial infection. Polio occurs naturally only in humans. It is highly infectious, and is spread from person to person either through fecal–oral transmission (e.g. poor hygiene, or by ingestion of food or water contaminated by human feces), or via the oral–oral route. Those who are infected may spread the disease for up to six weeks even if no symptoms are present. The disease may be diagnosed by finding the virus in the feces or detecting antibodies against it in the blood. Poliomyelitis has existed for thousands of years, with depictions of the disease in ancient art. The disease was first recognized as a distinct condition by the English physician Michael Underwood in 1789, and the virus that causes it was first identified in 1909 by the Austrian immunologist Karl Landsteiner. Major outbreaks started to occur in the late 19th century in Europe and the United States, and in the 20th century, it became one of the most worrying childhood diseases. Following the introduction of polio vaccines in the 1950s, polio incidence declined rapidly. , only Pakistan and Afghanistan remain endemic for wild poliovirus (WPV). Once infected, there is no specific treatment. The disease can be prevented by the polio vaccine, with multiple doses required for lifelong protection. There are two broad types of polio vaccine; an injected polio vaccine (IPV) using inactivated poliovirus and an oral polio vaccine (OPV) containing attenuated (weakened) live virus. Through the use of both types of vaccine, incidence of wild polio has decreased from an estimated 350,000 cases in 1988 to 30 confirmed cases in 2022, confined to just three countries. In rare cases, the traditional OPV was able to revert to a virulent form. An improved oral vaccine with greater genetic stability (nOPV2) was developed and granted full licensure in December 2023. Signs and symptoms The term "poliomyelitis" is used to identify the disease caused by any of the three serotypes of poliovirus. Two basic patterns of polio infection are described: a minor illness that does not involve the central nervous system (CNS), sometimes called abortive poliomyelitis, and a major illness involving the CNS, which may be paralytic or nonparalytic. Adults are more likely to develop symptoms, including severe symptoms, than children. In most people with a normal immune system, a poliovirus infection is asymptomatic. In about 25% of cases, the infection produces minor symptoms which may include sore throat and low fever. These symptoms are temporary and full recovery occurs within one or two weeks. In about 1 percent of infections the virus can migrate from the gastrointestinal tract into the central nervous system (CNS). Most patients with CNS involvement develop nonparalytic aseptic meningitis, with symptoms of headache, neck, back, abdominal and extremity pain, fever, vomiting, stomach pain, lethargy, and irritability. 
About one to five in 1,000 cases progress to paralytic disease, in which the muscles become weak, floppy and poorly controlled, and, finally, completely paralyzed; this condition is known as acute flaccid paralysis. The weakness most often involves the legs, but may less commonly involve the muscles of the head, neck, and diaphragm. Depending on the site of paralysis, paralytic poliomyelitis is classified as spinal, bulbar, or bulbospinal. In those who develop paralysis, between 2 and 10 percent die as the paralysis affects the breathing muscles. Encephalitis, an infection of the brain tissue itself, can occur in rare cases, and is usually restricted to infants. It is characterized by confusion, changes in mental status, headaches, fever, and, less commonly, seizures and spastic paralysis. Etymology The term poliomyelitis derives from the Ancient Greek poliós (πολιός), meaning "grey", and myelós (μυελός), meaning "marrow", referring to the grey matter of the spinal cord, together with the suffix -itis, which denotes inflammation, i.e., inflammation of the spinal cord's grey matter. The word was first used in 1874 and is attributed to the German physician Adolf Kussmaul. The first recorded use of the abbreviated version polio was in the Indianapolis Star in 1911. Cause Poliomyelitis does not affect any species other than humans. The disease is caused by infection with a member of the genus Enterovirus known as poliovirus (PV). This group of RNA viruses colonizes the gastrointestinal tract – specifically the oropharynx and the intestine. The virus's structure is quite simple, composed of a single (+) sense RNA genome enclosed in a protein shell called a capsid. In addition to protecting the virus's genetic material, the capsid proteins enable poliovirus to infect certain types of cells. Three serotypes of poliovirus have been identified – wild poliovirus type 1 (WPV1), type 2 (WPV2), and type 3 (WPV3) – each with a slightly different capsid protein. All three are extremely virulent and produce the same disease symptoms. WPV1 is the most commonly encountered form, and the one most closely associated with paralysis. WPV2 was certified as eradicated in 2015 and WPV3 was certified as eradicated in 2019. The incubation period (from exposure to the first signs and symptoms) ranges from three to six days for nonparalytic polio. If the disease progresses to cause paralysis, this occurs within 7 to 21 days. Individuals who are exposed to the virus, either through infection or by immunization via polio vaccine, develop immunity. In immune individuals, IgA antibodies against poliovirus are present in the tonsils and gastrointestinal tract and are able to block virus replication; IgG and IgM antibodies against PV can prevent the spread of the virus to motor neurons of the central nervous system. Infection or vaccination with one serotype of poliovirus does not provide immunity against the other serotypes, and full immunity requires exposure to each serotype. A rare condition with a similar presentation, nonpoliovirus poliomyelitis, may result from infections with enteroviruses other than poliovirus. The oral polio vaccine, which has been in use since 1961, contains weakened viruses that can replicate. On rare occasions, these may be transmitted from the vaccinated person to other people; in communities with good vaccine coverage, transmission is limited, and the virus dies out. In communities with low vaccine coverage, this weakened virus may continue to circulate and, over time, may mutate and revert to a virulent form. 
Polio arising from this cause is referred to as circulating vaccine-derived poliovirus (cVDPV) or variant poliovirus in order to distinguish it from the natural or "wild" poliovirus (WPV). Transmission Poliomyelitis is highly contagious. The disease is transmitted primarily via the fecal–oral route, by ingesting contaminated food or water. It is occasionally transmitted via the oral–oral route. It is seasonal in temperate climates, with peak transmission occurring in summer and autumn. These seasonal differences are far less pronounced in tropical areas. Polio is most infectious between 7 and 10 days before and after the appearance of symptoms, but transmission is possible as long as the virus remains in the saliva or feces. Virus particles can be excreted in the feces for up to six weeks. Factors that increase the risk of polio infection include pregnancy, the very old and the very young, immune deficiency, and malnutrition. Although the virus can cross the maternal-fetal barrier during pregnancy, the fetus does not appear to be affected by either maternal infection or polio vaccination. Maternal antibodies also cross the placenta, providing passive immunity that protects the infant from polio infection during the first few months of life. Pathophysiology Poliovirus enters the body through the mouth, infecting the first cells with which it comes in contact – the pharynx and intestinal mucosa. It gains entry by binding to an immunoglobulin-like receptor, known as the poliovirus receptor or CD155, on the cell membrane. The virus then hijacks the host cell's own machinery, and begins to replicate. Poliovirus divides within gastrointestinal cells for about a week, from where it spreads to the tonsils (specifically the follicular dendritic cells residing within the tonsilar germinal centers), the intestinal lymphoid tissue including the M cells of Peyer's patches, and the deep cervical and mesenteric lymph nodes, where it multiplies abundantly. The virus is subsequently absorbed into the bloodstream. Known as viremia, the presence of a virus in the bloodstream enables it to be widely distributed throughout the body. Poliovirus can survive and multiply within the blood and lymphatics for long periods of time, sometimes as long as 17 weeks. In a small percentage of cases, it can spread and replicate in other sites, such as brown fat, the reticuloendothelial tissues, and muscle. This sustained replication causes a major viremia, and leads to the development of minor influenza-like symptoms. Rarely, this may progress and the virus may invade the central nervous system, provoking a local inflammatory response. In most cases, this causes a self-limiting inflammation of the meninges, the layers of tissue surrounding the brain, which is known as nonparalytic aseptic meningitis. Penetration of the CNS provides no known benefit to the virus, and is quite possibly an incidental deviation of a normal gastrointestinal infection. The mechanisms by which poliovirus spreads to the CNS are poorly understood, but it appears to be primarily a chance event – largely independent of the age, gender, or socioeconomic position of the individual. Paralytic polio In around one percent of infections, poliovirus spreads along certain nerve fiber pathways, preferentially replicating in and destroying motor neurons within the spinal cord, brain stem, or motor cortex. 
This leads to the development of paralytic poliomyelitis, the various forms of which (spinal, bulbar, and bulbospinal) vary only with the amount of neuronal damage and inflammation that occurs, and the region of the CNS affected. The destruction of neuronal cells produces lesions within the spinal ganglia; these may also occur in the reticular formation, vestibular nuclei, cerebellar vermis, and deep cerebellar nuclei. Inflammation associated with nerve cell destruction often alters the color and appearance of the gray matter in the spinal column, causing it to appear reddish and swollen. Other destructive changes associated with paralytic disease occur in the forebrain region, specifically the hypothalamus and thalamus. Early symptoms of paralytic polio include high fever, headache, stiffness in the back and neck, asymmetrical weakness of various muscles, sensitivity to touch, difficulty swallowing, muscle pain, loss of superficial and deep reflexes, paresthesia (pins and needles), irritability, constipation, or difficulty urinating. Paralysis generally develops one to ten days after early symptoms begin, progresses for two to three days, and is usually complete by the time the fever breaks. The likelihood of developing paralytic polio increases with age, as does the extent of paralysis. In children, nonparalytic meningitis is the most likely consequence of CNS involvement, and paralysis occurs in only one in 1000 cases. In adults, paralysis occurs in one in 75 cases. In children under five years of age, paralysis of one leg is most common; in adults, extensive paralysis of the chest and abdomen also affecting all four limbs – quadriplegia – is more likely. Paralysis rates also vary depending on the serotype of the infecting poliovirus; the highest rates of paralysis (one in 200) are associated with poliovirus type 1, the lowest rates (one in 2,000) are associated with type 2. Spinal polio Spinal polio, the most common form of paralytic poliomyelitis, results from viral invasion of the motor neurons of the anterior horn cells, or the ventral (front) grey matter section in the spinal column, which are responsible for movement of the muscles, including those of the trunk, limbs, and the intercostal muscles. Virus invasion causes inflammation of the nerve cells, leading to damage or destruction of motor neuron ganglia. When spinal neurons die, Wallerian degeneration takes place, leading to weakness of those muscles formerly innervated by the now-dead neurons. With the destruction of nerve cells, the muscles no longer receive signals from the brain or spinal cord; without nerve stimulation, the muscles atrophy, becoming weak, floppy and poorly controlled, and finally completely paralyzed. Maximum paralysis progresses rapidly (two to four days), and usually involves fever and muscle pain. Deep tendon reflexes are also affected, and are typically absent or diminished; sensation (the ability to feel) in the paralyzed limbs, however, is not affected. The extent of spinal paralysis depends on the region of the cord affected, which may be cervical, thoracic, or lumbar. The virus may affect muscles on both sides of the body, but more often the paralysis is asymmetrical. Any limb or combination of limbs may be affected – one leg, one arm, or both legs and both arms. Paralysis is often more severe proximally (where the limb joins the body) than distally (the fingertips and toes). 
Bulbar polio Making up about two percent of cases of paralytic polio, bulbar polio occurs when poliovirus invades and destroys nerves within the bulbar region of the brain stem. The bulbar region is a white matter pathway that connects the cerebral cortex to the brain stem. The destruction of these nerves weakens the muscles supplied by the cranial nerves, producing symptoms of encephalitis, and causes difficulty breathing, speaking and swallowing. Critical nerves affected are the glossopharyngeal nerve (which partially controls swallowing and functions in the throat, tongue movement, and taste), the vagus nerve (which sends signals to the heart, intestines, and lungs), and the accessory nerve (which controls upper neck movement). Due to the effect on swallowing, secretions of mucus may build up in the airway, causing suffocation. Other signs and symptoms include facial weakness (caused by destruction of the trigeminal nerve and facial nerve, which innervate the cheeks, tear ducts, gums, and muscles of the face, among other structures), double vision, difficulty in chewing, and abnormal respiratory rate, depth, and rhythm (which may lead to respiratory arrest). Pulmonary edema and shock are also possible and may be fatal. Bulbospinal polio Approximately 19 percent of all paralytic polio cases have both bulbar and spinal symptoms; this subtype is called respiratory or bulbospinal polio. Here, the virus affects the upper part of the cervical spinal cord (cervical vertebrae C3 through C5), and paralysis of the diaphragm occurs. The critical nerves affected are the phrenic nerve (which drives the diaphragm to inflate the lungs) and those that drive the muscles needed for swallowing. By destroying these nerves, this form of polio affects breathing, making it difficult or impossible for the patient to breathe without the support of a ventilator. It can lead to paralysis of the arms and legs and may also affect swallowing and heart functions. Diagnosis Paralytic poliomyelitis may be clinically suspected in individuals experiencing acute onset of flaccid paralysis in one or more limbs with decreased or absent tendon reflexes in the affected limbs that cannot be attributed to another apparent cause, and without sensory or cognitive loss. A laboratory diagnosis is usually made based on the recovery of poliovirus from a stool sample or a swab of the pharynx. Rarely, it may be possible to identify poliovirus in the blood or in the cerebrospinal fluid. Poliovirus samples are further analysed using reverse transcription polymerase chain reaction (RT-PCR) or genomic sequencing to determine the serotype (i.e., 1, 2, or 3), and whether the virus is a wild or vaccine-derived strain. Prevention Passive immunization In 1950, William Hammon at the University of Pittsburgh purified the gamma globulin component of the blood plasma of polio survivors. Hammon proposed the gamma globulin, which contained antibodies to poliovirus, could be used to halt poliovirus infection, prevent disease, and reduce the severity of disease in other patients who had contracted polio. The results of a large clinical trial were promising; the gamma globulin was shown to be about 80 percent effective in preventing the development of paralytic poliomyelitis. It was also shown to reduce the severity of the disease in patients who developed polio. Due to the limited supply of blood plasma gamma globulin was later deemed impractical for widespread use and the medical community focused on the development of a polio vaccine. 
Vaccine Two types of vaccine are used throughout the world to combat polio: an inactivated poliovirus given by injection, and a weakened poliovirus given by mouth. Both types induce immunity to polio and are effective in protecting individuals from disease. The inactivated polio vaccine (IPV) was developed in 1952 by Jonas Salk at the University of Pittsburgh, and announced to the world on 12 April 1955. The Salk vaccine is based on poliovirus grown in a type of monkey kidney tissue culture (vero cell line), which is chemically inactivated with formalin. After two doses of IPV (given by injection), 90 percent or more of individuals develop protective antibody to all three serotypes of poliovirus, and at least 99 percent are immune to poliovirus following three doses. Subsequently, Albert Sabin developed a polio vaccine that can be administered orally (oral polio vaccine - OPV), comprising a live, attenuated virus. It was produced by the repeated passage of the virus through nonhuman cells at subphysiological temperatures. The attenuated poliovirus in the Sabin vaccine replicates very efficiently in the gut, the primary site of wild poliovirus infection and replication, but the vaccine strain is unable to replicate efficiently within nervous system tissue. A single dose of Sabin's trivalent OPV produces immunity to all three poliovirus serotypes in about 50 percent of recipients. Three doses of OPV produce protective antibody to all three poliovirus types in more than 95 percent of recipients. Human trials of Sabin's vaccine began in 1957, and in 1958, it was selected, in competition with the live attenuated vaccines of Koprowski and other researchers, by the US National Institutes of Health. Licensed in 1962, it rapidly became the only oral polio vaccine used worldwide. OPV efficiently blocks person-to-person transmission of wild poliovirus by oral–oral and fecal–oral routes, thereby protecting both individual vaccine recipients and the wider community. The live attenuated virus may be transmitted from vaccinees to their unvaccinated contacts, resulting in wider community immunity. IPV confers good immunity but is less effective at preventing spread of wild poliovirus by the fecal–oral route. Because the oral polio vaccine is inexpensive, easy to administer, and produces excellent immunity in the intestine (which helps prevent infection with wild virus in areas where it is endemic), it has been the vaccine of choice for controlling poliomyelitis in many countries. On very rare occasions, the attenuated virus in the Sabin OPV can revert into a form that can paralyze. In 2017, cases caused by vaccine-derived poliovirus (cVDPV) outnumbered wild poliovirus cases for the first time, due to wild polio cases hitting record lows. Most industrialized countries have switched to inactivated polio vaccine, which cannot revert, either as the sole vaccine against poliomyelitis or in combination with oral polio vaccine. An improved oral vaccine (Novel oral polio vaccine type 2 - nOPV2) began development in 2011 and was granted emergency licensing in 2021, and subsequently full licensure in December 2023. This has greater genetic stability than the traditional oral vaccine and is less likely to revert to a virulent form. Treatment There is no cure for polio, but there are treatments. The focus of modern treatment has been on providing relief of symptoms, speeding recovery and preventing complications. 
Supportive measures include antibiotics to prevent infections in weakened muscles, analgesics for pain, moderate exercise and a nutritious diet. Treatment of polio often requires long-term rehabilitation, including occupational therapy, physical therapy, braces, corrective shoes and, in some cases, orthopedic surgery. Portable ventilators may be required to support breathing. Historically, a noninvasive, negative-pressure ventilator, more commonly called an iron lung, was used to artificially maintain respiration during an acute polio infection until a person could breathe independently (generally about one to two weeks). The use of iron lungs is largely obsolete in modern medicine as more modern breathing therapies have been developed and due to the eradication of polio in most of the world. Other historical treatments for polio include hydrotherapy, electrotherapy, massage and passive motion exercises, and surgical treatments, such as tendon lengthening and nerve grafting. Prognosis Patients with abortive polio infections recover completely. In those who develop only aseptic meningitis, the symptoms can be expected to persist for two to ten days, followed by complete recovery. In cases of spinal polio, if the affected nerve cells are completely destroyed, paralysis will be permanent; cells that are not destroyed, but lose function temporarily, may recover within four to six weeks after onset. Half the patients with spinal polio recover fully; one-quarter recover with mild disability, and the remaining quarter are left with severe disability. The degree of both acute paralysis and residual paralysis is likely to be proportional to the degree of viremia, and inversely proportional to the degree of immunity. Spinal polio is rarely fatal. Without respiratory support, consequences of poliomyelitis with respiratory involvement include suffocation or pneumonia from aspiration of secretions. Overall, 5 to 10 percent of patients with paralytic polio die due to the paralysis of muscles used for breathing. The case fatality rate (CFR) varies by age: 2 to 5 percent of children and up to 15 to 30 percent of adults die. Bulbar polio often causes death if respiratory support is not provided; with support, its CFR ranges from 25 to 75 percent, depending on the age of the patient. When intermittent positive pressure ventilation is available, the fatalities can be reduced to 15 percent. Recovery Many cases of poliomyelitis result in only temporary paralysis. Generally in these cases, nerve impulses return to the paralyzed muscle within a month, and recovery is complete in six to eight months. The neurophysiological processes involved in recovery following acute paralytic poliomyelitis are quite effective; muscles are able to retain normal strength even if half the original motor neurons have been lost. Paralysis remaining after one year is likely to be permanent, although some recovery of muscle strength is possible up to 18 months after infection. One mechanism involved in recovery is nerve terminal sprouting, in which remaining brainstem and spinal cord motor neurons develop new branches, or axonal sprouts. These sprouts can reinnervate orphaned muscle fibers that have been denervated by acute polio infection, restoring the fibers' capacity to contract and improving strength. 
Terminal sprouting may generate a few significantly enlarged motor neurons doing work previously performed by as many as four or five units: a single motor neuron that once controlled 200 muscle cells might control 800 to 1000 cells. Other mechanisms that occur during the rehabilitation phase, and contribute to muscle strength restoration, include myofiber hypertrophy – enlargement of muscle fibers through exercise and activity – and transformation of type II muscle fibers to type I muscle fibers. In addition to these physiological processes, the body can compensate for residual paralysis in other ways. Weaker muscles can be used at a higher than usual intensity relative to the muscle's maximal capacity, little-used muscles can be developed, and ligaments can enable stability and mobility. Complications Residual complications of paralytic polio often occur following the initial recovery process. Muscle paresis and paralysis can sometimes result in skeletal deformities, tightening of the joints, and movement disability. Once the muscles in the limb become flaccid, they may interfere with the function of other muscles. A typical manifestation of this problem is equinus foot (similar to club foot). This deformity develops when the muscles that pull the toes downward are working, but those that pull it upward are not, and the foot naturally tends to drop toward the ground. If the problem is left untreated, the Achilles tendons at the back of the foot retract and the foot cannot take on a normal position. People with polio that develop equinus foot cannot walk properly because they cannot put their heels on the ground. A similar situation can develop if the arms become paralyzed. In some cases the growth of an affected leg is slowed by polio, while the other leg continues to grow normally. The result is that one leg is shorter than the other and the person limps and leans to one side, in turn leading to deformities of the spine (such as scoliosis). Osteoporosis and increased likelihood of bone fractures may occur. An intervention to prevent or lessen length disparity can be to perform an epiphysiodesis on the distal femoral and proximal tibial/fibular condyles, so that limb's growth is artificially stunted, and by the time of epiphyseal (growth) plate closure, the legs are more equal in length. Alternatively, a person can be fitted with custom-made footwear which corrects the difference in leg lengths. Other surgery to re-balance muscular agonist/antagonist imbalances may also be helpful. Extended use of braces or wheelchairs may cause compression neuropathy, as well as a loss of proper function of the veins in the legs, due to pooling of blood in paralyzed lower limbs. Complications from prolonged immobility involving the lungs, kidneys and heart include pulmonary edema, aspiration pneumonia, urinary tract infections, kidney stones, paralytic ileus, myocarditis and cor pulmonale. Post-polio syndrome Between 25 percent and 50 percent of individuals who have recovered from paralytic polio in childhood can develop additional symptoms decades after recovering from the acute infection, notably new muscle weakness and extreme fatigue. This condition is known as post-polio syndrome (PPS) or post-polio sequelae. The symptoms of PPS are thought to involve a failure of the oversized motor units created during the recovery phase of the paralytic disease. 
Contributing factors that increase the risk of PPS include aging with loss of neuron units, the presence of a permanent residual impairment after recovery from the acute illness, and both overuse and disuse of neurons. PPS is a slow, progressive disease, and there is no specific treatment for it. Post-polio syndrome is not an infectious process, and persons experiencing the syndrome do not shed poliovirus. Orthotics Paralysis, length differences and deformations of the lower extremities can lead to a hindrance when walking with compensation mechanisms that lead to a severe impairment of the gait pattern. In order to be able to stand and walk safely and to improve the gait pattern, orthotics can be included in the therapy concept. Today, modern materials and functional elements enable the orthosis to be specifically adapted to the requirements resulting from the patient's gait. Mechanical stance phase control knee joints may secure the knee joint in the early stance phases and release again for knee flexion when the swing phase is initiated. With the help of an orthotic treatment with a stance phase control knee joint, a natural gait pattern can be achieved despite mechanical protection against unwanted knee flexion. In these cases, locked knee joints are often used, which have a good safety function, but do not allow knee flexion when walking during swing phase. With such joints, the knee joint remains mechanically blocked during the swing phase. Patients with locked knee joints must swing the leg forward with the knee extended even during the swing phase. This only works if the patient develops compensatory mechanisms, e.g. by raising the body's center of gravity in the swing phase (Duchenne limping) or by swinging the orthotic leg to the side (circumduction). Epidemiology Major polio epidemics were unknown before the 20th century; up until that time, polio was an endemic disease worldwide. Mothers who had survived polio infection passed on temporary immunity to their babies in the womb and through breast milk. As a result, an infant who encountered a polio infection generally suffered only mild symptoms and acquired a long-term immunity to the disease. With improvements in sanitation and hygiene during the 19th century, the general level of herd immunity in the population declined; this provided circumstances where epidemics of polio became frequent. It is estimated that epidemic polio killed or paralysed over half a million people every year. Following the widespread use of poliovirus vaccine in the mid-1950s, new cases of poliomyelitis declined dramatically in many industrialized countries. Efforts to completely eradicate the disease started in 1988 and are ongoing. Circulating vaccine-derived polioviruses The oral polio vaccine, while highly effective, has the disadvantage that it contains a live virus which has been attenuated so that it cannot cause severe illness. The vaccine virus is excreted in the stool, and in under-immunized communities it can spread from person to person. This is known as circulating vaccine-derived poliovirus (cVDPV) or more simply as variant poliovirus. With prolonged transmission of this kind, the weakened virus can mutate and revert to a form that causes illness and paralysis. Cases of cVDPV now exceed wild-type cases, making it desirable to discontinue the use of the oral polio vaccine as soon as safely possible and instead use other types of polio vaccines. 
Eradication A global effort to eradicate polio – the Global Polio Eradication Initiative – began in 1988, led by the World Health Organization, UNICEF, and The Rotary Foundation. Polio is one of only two diseases currently the subject of a global eradication program, the other being Guinea worm disease. So far, the only diseases completely eradicated by humankind are smallpox, declared eradicated in 1980, and rinderpest, declared eradicated in 2011. In April 2012, the World Health Assembly declared that the failure to completely eradicate polio would be a programmatic emergency for global public health, and that it "must not happen". These efforts have hugely reduced the number of cases, from an estimated 350,000 in 1988 to a low of 483 in 2001, after which the count remained at about 1,000–2,000 cases per year for a number of years. By 2015, polio was believed to remain naturally spreading in only two countries, Pakistan and Afghanistan, although it continued to cause outbreaks in other nearby countries due to hidden or re-established transmission. Global surveillance for polio takes two forms. Cases of acute flaccid paralysis (AFP) are tested for the presence and type of poliovirus. In addition, environmental and wastewater samples are tested for the presence of poliovirus – this is an effective method of detecting circulating virus which has not given rise to severe symptoms. A summary of both wild polio (WPV) and variant polio (cVDPV) prevalence from 2019 to 2023 follows. In 2019, there were 147 cases of WPV1 in Pakistan and 29 cases in Afghanistan; none were reported elsewhere in the world. cVDPV was detected in 19 countries, with 378 confirmed cases. In 2020, there were 84 WPV1 cases in Pakistan and 56 in Afghanistan; 32 countries reported cVDPV detection, with 1,103 cVDPV cases. In 2021, there were just six confirmed cases of wild poliovirus — one in Pakistan, four in Afghanistan, and one in Malawi. The case in Malawi, the country's first in almost three decades and the first in Africa in five years, was seen as a significant setback to the eradication effort. 23 countries detected cVDPV, with 698 cases. In 2022, there were 30 confirmed cases of WPV1 reported to WHO, with 20 cases in Pakistan and two in Afghanistan, while eight non-endemic cases were recorded in Mozambique, the first cases in the country since 1992. The Mozambique cases derived from the strain of Pakistani origin that caused two confirmed cases in Malawi in 2021. 24 countries detected cVDPV, with 881 cases. In 2023, twelve cases of WPV1 were reported, six each in Afghanistan and Pakistan. 32 countries reported cVDPV, with 524 cases. Afghanistan and Pakistan The last remaining region with wild polio cases comprises the South Asian countries Afghanistan and Pakistan. During 2011, the CIA ran a fake hepatitis vaccination clinic in Abbottabad, Pakistan, in an attempt to locate Osama bin Laden. This destroyed trust in vaccination programs in the region. There were attacks and deaths among vaccination workers; 66 vaccinators were killed in 2013 and 2014. In Afghanistan, the Taliban banned house-to-house polio vaccination between 2018 and 2021. These factors have set back efforts to eliminate polio by means of vaccination in these countries. In Afghanistan, 80 cases of polio were reported from 35 districts during 2011. Incidence declined over the subsequent 10 years, to just four cases in two districts during 2021. 
In Pakistan, cases dropped by 97 percent from 2014 to 2018; reasons include 440 million dirham support from the United Arab Emirates to vaccinate more than ten million children, changes in the military situation, and arrests of some of those who attacked polio workers. Americas The Americas were declared polio-free in 1994. The last known case was a boy in Peru in 1991. The US Centers for Disease Control and Prevention recommends polio vaccination boosters for travelers and those who live in countries where the disease is endemic. In July 2022, the US state of New York reported a polio case for the first time in almost a decade in the country; this was attributed to a vaccine-derived strain of the virus. Western Pacific In 2000, polio was declared to have been officially eliminated in 37 Western Pacific countries, including China and Australia. Despite eradication ten years earlier, an outbreak was confirmed in China in September 2011, involving a strain common in Pakistan. In September 2019, the Department of Health of the Philippines declared a polio outbreak in the country after a single case in a 3-year-old girl. In December 2019, acute poliomyelitis was confirmed in an infant in Sabah state, Borneo, Malaysia. Subsequently, a further three polio cases were reported, with the last case reported in January 2020. Both outbreaks were found to be linked instances of vaccine-derived poliomyelitis. Europe Europe was declared polio-free in 2002. Southeast Asia On 27 March 2014, the WHO announced the eradication of poliomyelitis in the South-East Asia Region, which includes eleven countries: Bangladesh, Bhutan, North Korea, India, Indonesia, Maldives, Myanmar, Nepal, Sri Lanka, Thailand and Timor-Leste. With the addition of this region, 80 per cent of the world population was considered to be living in polio-free regions. Middle East In Syria, difficulties in executing immunization programs in the ongoing civil war led to a return of polio, probably in 2012, acknowledged by the WHO in 2013. 15 cases were confirmed among children in Syria between October and November 2013 in Deir Ezzor. Later, two more cases, one each in rural Damascus and Aleppo, were identified. It was the first outbreak in Syria since 1999. Doctors and international public health agencies report more than 90 cases of polio in Syria, with fears of contagion in rebel areas from lack of sanitation and safe-water services. A vaccination campaign in Syria operated under gunfire and led to the deaths of several vaccinators, but returned vaccination coverage to pre-war levels. Syria is currently free of polio, but is considered "at risk". In 2024, the Gaza Health Ministry reported that several children have shown symptoms consistent with polio, with laboratory tests confirming that a 10-month-old child is infected with the virus. In 2022, prior to the Israel-Hamas conflict, routine immunization coverage of eligible children exceeded 99%, but fell to less than 90% by the first quarter of 2024, according to the WHO. United Nations Secretary-General António Guterres urged for a weeklong cease-fire in Gaza to facilitate vaccinations and prevent a potential polio outbreak, emphasizing the risk faced by many children. Africa In 2003, in northern Nigeria – a country that at that time was considered provisionally polio free – a fatwa was issued declaring that the polio vaccine was designed to render children sterile. Subsequently, polio reappeared in Nigeria and spread from there to several other countries. 
In 2013, nine health workers administering polio vaccine were targeted and killed by gunmen on motorcycles in Kano, but this was the only attack. Local traditional and religious leaders and polio survivors worked to revive the campaign, and Nigeria was removed from the polio-endemic list in September 2015 after more than a year without any cases, only to be restored to the list in 2016 when two cases were detected. Africa was declared free of wild polio in August 2020, although cases of circulating vaccine-derived poliovirus type 2 continue to appear in several countries. A single case of wild polio that was detected in Malawi in February 2022, and another in Mozambique in May 2022 were both of a strain imported from Pakistan and do not affect the African region's wild poliovirus-free certification status. History The effects of polio have been known since prehistory; Egyptian paintings and carvings depict otherwise healthy people with withered limbs, and young children walking with canes. The earliest known case of polio is indicated by the remains of a teenage girl discovered in a 4000-year-old burial site in the United Arab Emirates, exhibiting characteristic symptoms of the condition. The first clinical description was provided by the English physician Michael Underwood in 1789, where he refers to polio as "a debility of the lower extremities". The work of physicians Jakob Heine in 1840 and Karl Oskar Medin in 1890 led to it being known as Heine–Medin disease. The disease was later called infantile paralysis, based on its propensity to affect children. Before the 20th century, polio infections were rarely seen in infants before six months of age, most cases occurring in children six months to four years of age. Poorer sanitation of the time resulted in constant exposure to the virus, which enhanced a natural immunity within the population. In developed countries during the late 19th and early 20th centuries, improvements were made in community sanitation, including better sewage disposal and clean water supplies. These changes drastically increased the proportion of children and adults at risk of paralytic polio infection, by reducing childhood exposure and immunity to the disease. Small localized paralytic polio epidemics began to appear in Europe and the United States around 1900. Outbreaks reached pandemic proportions in Europe, North America, Australia, and New Zealand during the first half of the 20th century. By 1950, the peak age incidence of paralytic poliomyelitis in the United States had shifted from infants to children aged five to nine years, when the risk of paralysis is greater; about one-third of the cases were reported in persons over 15 years of age. Accordingly, the rate of paralysis and death due to polio infection also increased during this time. In the United States, the 1952 polio epidemic became the worst outbreak in the nation's history. Of the nearly 58,000 cases reported that year, 3,145 died and 21,269 were left with mild to disabling paralysis. Intensive care medicine has its origin in the fight against polio. Most hospitals in the 1950s had limited access to iron lungs for patients unable to breathe without mechanical assistance. Respiratory centers designed to assist the most severe polio patients, first established in 1952 at the Blegdam Hospital of Copenhagen by Danish anesthesiologist Bjørn Ibsen, were the precursors of modern intensive care units (ICU). (A year later, Ibsen would establish the world's first dedicated ICU.) 
The polio epidemics not only altered the lives of those who survived them, but also brought profound cultural changes, spurring grassroots fund-raising campaigns that would revolutionize medical philanthropy, and giving rise to the modern field of rehabilitation therapy. As one of the largest disabled groups in the world, polio survivors also helped to advance the modern disability rights movement through campaigns for the social and civil rights of the disabled. The World Health Organization estimates that there are 10 to 20 million polio survivors worldwide. In 1977, there were 254,000 persons living in the United States who had been paralyzed by polio. According to doctors and local polio support groups, some 40,000 polio survivors with varying degrees of paralysis were living in Germany, 30,000 in Japan, 24,000 in France, 16,000 in Australia, 12,000 in Canada and 12,000 in the United Kingdom in 2001. Many notable individuals have survived polio and often credit the prolonged immobility and residual paralysis associated with polio as a driving force in their lives and careers. The disease was very well publicized during the polio epidemics of the 1950s, with extensive media coverage of any scientific advancements that might lead to a cure. Thus, the scientists working on polio became some of the most famous of the century. Fifteen scientists and two laymen who made important contributions to the knowledge and treatment of poliomyelitis are honored by the Polio Hall of Fame, which was dedicated in 1957 at the Roosevelt Warm Springs Institute for Rehabilitation in Warm Springs, Georgia, US. In 2008 four organizations (Rotary International, the World Health Organization, the U.S. Centers for Disease Control and UNICEF) were added to the Hall of Fame. World Polio Day (24 October) was established as an annual day of awareness by Rotary International to commemorate the birth of Jonas Salk, who led the first team to develop a vaccine against poliomyelitis. A global effort to eradicate polio – the Global Polio Eradication Initiative (GPEI) – began in 1988, led by the World Health Organization, UNICEF, and The Rotary Foundation. Since then, international cooperation led by GPEI has reduced polio worldwide by 99 percent, and the campaign is ongoing. In 2010, wild poliovirus was discovered through importation in 13 different countries. They were Chad, the Democratic Republic of Congo, the Republic of Congo, Kazakhstan, Liberia, Mali, Nepal, Niger, the Russian Federation, Senegal, Tajikistan, Turkmenistan, and Uganda. In 2021, types 2 and 3 were fully eradicated from every country; however, type 1 cases still remain in Pakistan and Afghanistan. A majority of countries have successfully eradicated polio, with Pakistan and Afghanistan being the last countries with endemic cases of poliovirus. The following countries have been considered polio-free, but not confirmed as of April 2024: Somalia, Djibouti, Sudan, Egypt, Libya, Tunisia, Morocco, Palestine, Lebanon, Syria, Jordan, Saudi Arabia, Bahrain, Qatar, Oman, Yemen, the UAE, Iraq, Kuwait, and Iran. Research Since 2018, the GPEI has coordinated efforts both to eliminate polio and to research means of improving surveillance and prevention. At the peak of its work, the programme directly employed 4000 people across 75 countries and managed a budget of nearly U.S. $1 billion. In total, the GPEI has raised 18 billion dollars in funding, with annual contributions of around 800 million to 1 billion dollars. 
Around 30% of the funding came from the Gates Foundation, 30% from developed governments, and 27% from countries at risk of polio; the rest was made up of donations from nonprofits, private funders, and other foundations. The GPEI has identified six directions for continuing research: optimizing oral polio vaccine efficacy; developing an affordable inactivated polio vaccine; managing the risks associated with vaccine-derived polioviruses and vaccine-associated paralytic polio (including OPV cessation); antivirals; polio diagnostics; and surveillance research. Even if polio can be eliminated from the world population, vaccination programs should continue for at least ten years. The retention of live poliovirus samples in laboratories and vaccine manufacturing facilities (which carry a risk of escape of the virus) should progressively be reduced. To support these two objectives, vaccines are under development which either utilise a virus-like particle or derive from a modified virus that cannot reproduce in a human host.
Biology and health sciences
Infectious disease
null
25120
https://en.wikipedia.org/wiki/Polar%20coordinate%20system
Polar coordinate system
In mathematics, the polar coordinate system specifies a given point in a plane by using a distance and an angle as its two coordinates. These are the point's distance from a reference point called the pole, and the point's direction from the pole relative to the direction of the polar axis, a ray drawn from the pole. The distance from the pole is called the radial coordinate, radial distance or simply radius, and the angle is called the angular coordinate, polar angle, or azimuth. The pole is analogous to the origin in a Cartesian coordinate system. Polar coordinates are most appropriate in any context where the phenomenon being considered is inherently tied to direction and length from a center point in a plane, such as spirals. Planar physical systems with bodies moving around a central point, or phenomena originating from a central point, are often simpler and more intuitive to model using polar coordinates. The polar coordinate system is extended to three dimensions in two ways: the cylindrical coordinate system adds a second distance coordinate, and the spherical coordinate system adds a second angular coordinate. Grégoire de Saint-Vincent and Bonaventura Cavalieri independently introduced the system's concepts in the mid-17th century, though the actual term polar coordinates has been attributed to Gregorio Fontana in the 18th century. The initial motivation for introducing the polar system was the study of circular and orbital motion. History The concepts of angle and radius were already used by ancient peoples of the first millennium BC. The Greek astronomer and astrologer Hipparchus (190–120 BC) created a table of chord functions giving the length of the chord for each angle, and there are references to his using polar coordinates in establishing stellar positions. In On Spirals, Archimedes describes the Archimedean spiral, a function whose radius depends on the angle. The Greek work, however, did not extend to a full coordinate system. From the 8th century AD onward, astronomers developed methods for approximating and calculating the direction to Mecca (qibla)—and its distance—from any location on the Earth. From the 9th century onward they were using spherical trigonometry and map projection methods to determine these quantities accurately. The calculation is essentially the conversion of the equatorial polar coordinates of Mecca (i.e. its longitude and latitude) to its polar coordinates (i.e. its qibla and distance) relative to a system whose reference meridian is the great circle through the given location and the Earth's poles and whose polar axis is the line through the location and its antipodal point. There are various accounts of the introduction of polar coordinates as part of a formal coordinate system. The full history of the subject is described in Harvard professor Julian Lowell Coolidge's Origin of Polar Coordinates. Grégoire de Saint-Vincent and Bonaventura Cavalieri independently introduced the concepts in the mid-seventeenth century. Saint-Vincent wrote about them privately in 1625 and published his work in 1647, while Cavalieri published his in 1635 with a corrected version appearing in 1653. Cavalieri first used polar coordinates to solve a problem relating to the area within an Archimedean spiral. Blaise Pascal subsequently used polar coordinates to calculate the length of parabolic arcs. 
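The qibla-and-distance computation described above is, in modern terms, a great-circle bearing-and-distance problem. The sketch below is only an illustration of that idea, not part of the historical account: the function name and the sample coordinates are assumptions, and it uses the standard spherical-trigonometry formulas for the initial bearing and the central angle.

import math

def bearing_and_distance(lat1, lon1, lat2, lon2, radius_km=6371.0):
    # Polar coordinates of point 2 relative to point 1 on a sphere:
    # initial great-circle bearing (degrees clockwise from north) and distance.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    bearing = math.atan2(math.sin(dlon) * math.cos(p2),
                         math.cos(p1) * math.sin(p2)
                         - math.sin(p1) * math.cos(p2) * math.cos(dlon))
    central = math.acos(min(1.0, max(-1.0,
                  math.sin(p1) * math.sin(p2)
                  + math.cos(p1) * math.cos(p2) * math.cos(dlon))))
    return math.degrees(bearing) % 360.0, radius_km * central

# Illustrative example: approximate direction and distance from Cordoba to Mecca.
print(bearing_and_distance(37.9, -4.8, 21.4, 39.8))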
In Method of Fluxions (written 1671, published 1736), Sir Isaac Newton examined the transformations between polar coordinates, which he referred to as the "Seventh Manner; For Spirals", and nine other coordinate systems. In the journal Acta Eruditorum (1691), Jacob Bernoulli used a system with a point on a line, called the pole and polar axis respectively. Coordinates were specified by the distance from the pole and the angle from the polar axis. Bernoulli's work extended to finding the radius of curvature of curves expressed in these coordinates. The actual term polar coordinates has been attributed to Gregorio Fontana and was used by 18th-century Italian writers. The term appeared in English in George Peacock's 1816 translation of Lacroix's Differential and Integral Calculus. Alexis Clairaut was the first to think of polar coordinates in three dimensions, and Leonhard Euler was the first to actually develop them. Conventions The radial coordinate is often denoted by r or ρ, and the angular coordinate by φ, θ, or t. The angular coordinate is specified as φ by ISO standard 31-11. However, in mathematical literature the angle is often denoted by θ instead. Angles in polar notation are generally expressed in either degrees or radians (2π rad being equal to 360°). Degrees are traditionally used in navigation, surveying, and many applied disciplines, while radians are more common in mathematics and mathematical physics. The angle φ is defined to start at 0° from a reference direction, and to increase for rotations in either clockwise (cw) or counterclockwise (ccw) orientation. For example, in mathematics, the reference direction is usually drawn as a ray from the pole horizontally to the right, and the polar angle increases to positive angles for ccw rotations, whereas in navigation (bearing, heading) the 0°-heading is drawn vertically upwards and the angle increases for cw rotations. The polar angles decrease towards negative values for rotations in the respectively opposite orientations. Uniqueness of polar coordinates Adding any number of full turns (360°) to the angular coordinate does not change the corresponding direction. Similarly, any polar coordinate is identical to the coordinate with the negative radial component and the opposite direction (adding 180° to the polar angle). Therefore, the same point (r, φ) can be expressed with an infinite number of different polar coordinates (r, φ + n × 360°) and (−r, φ + 180° + n × 360°), where n is an arbitrary integer. Moreover, the pole itself can be expressed as (0, φ) for any angle φ. Where a unique representation is needed for any point besides the pole, it is usual to limit r to positive numbers (r > 0) and φ to either the interval [0°, 360°) or the interval (−180°, 180°], which in radians are [0, 2π) or (−π, π]. Another convention, in reference to the usual codomain of the arctan function, is to allow for arbitrary nonzero real values of the radial component and restrict the polar angle to (−90°, 90°]. In all cases a unique azimuth for the pole (r = 0) must be chosen, e.g., φ = 0. 
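As a concrete illustration of these conventions, the following sketch (the helper name is an illustrative assumption) reduces an arbitrary polar pair, given in degrees, to the canonical representative with r > 0 and φ in (−180°, 180°]:

def canonical_polar(r, phi_deg):
    # Map any (r, phi) to the equivalent pair with r > 0 and phi in (-180, 180];
    # for the pole, a unique azimuth (here 0) has to be chosen.
    if r == 0:
        return 0.0, 0.0
    if r < 0:
        r, phi_deg = -r, phi_deg + 180.0   # a negative radius means the opposite direction
    phi_deg = phi_deg % 360.0              # discard full turns
    if phi_deg > 180.0:
        phi_deg -= 360.0                   # shift into (-180, 180]
    return float(r), phi_deg

# (3, 420°), (-3, 240°) and (3, 60°) all describe the same point:
print(canonical_polar(3, 420), canonical_polar(-3, 240), canonical_polar(3, 60))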
Converting between polar and Cartesian coordinates The polar coordinates r and φ can be converted to the Cartesian coordinates x and y by using the trigonometric functions sine and cosine: x = r cos φ and y = r sin φ. The Cartesian coordinates x and y can be converted to polar coordinates r and φ with r ≥ 0 and φ in the interval (−π, π] by: r = √(x² + y²) = hypot(x, y) and φ = atan2(y, x), where hypot is the Pythagorean sum and atan2 is a common variation on the arctangent function defined as atan2(y, x) = arctan(y/x) if x > 0; arctan(y/x) + π if x < 0 and y ≥ 0; arctan(y/x) − π if x < 0 and y < 0; π/2 if x = 0 and y > 0; −π/2 if x = 0 and y < 0; and undefined if x = 0 and y = 0. If r is calculated first as above, then this formula for φ may be stated more simply using the arccosine function: φ = arccos(x/r) if y ≥ 0 and r ≠ 0, and φ = −arccos(x/r) if y < 0, with φ undefined for r = 0. Complex numbers Every complex number can be represented as a point in the complex plane, and can therefore be expressed by specifying either the point's Cartesian coordinates (called rectangular or Cartesian form) or the point's polar coordinates (called polar form). In polar form, the distance and angle coordinates are often referred to as the number's magnitude and argument respectively. Two complex numbers can be multiplied by adding their arguments and multiplying their magnitudes. The complex number z can be represented in rectangular form as z = x + iy, where i is the imaginary unit, or can alternatively be written in polar form as z = r(cos φ + i sin φ) and from there, by Euler's formula, as z = r e^(iφ), where e is Euler's number, and φ, expressed in radians, is the principal value of the complex number function arg applied to x + iy. To convert between the rectangular and polar forms of a complex number, the conversion formulae given above can be used. Equivalent are the cis and angle notations: z = r cis φ = r∠φ. For the operations of multiplication, division, exponentiation, and root extraction of complex numbers, it is generally much simpler to work with complex numbers expressed in polar form rather than rectangular form. From the laws of exponentiation: Multiplication: r₀e^(iφ₀) · r₁e^(iφ₁) = r₀r₁ e^(i(φ₀ + φ₁)). Division: r₀e^(iφ₀) / (r₁e^(iφ₁)) = (r₀/r₁) e^(i(φ₀ − φ₁)). Exponentiation (De Moivre's formula): (r e^(iφ))ⁿ = rⁿ e^(inφ). Root extraction (principal root): the n-th root is (r e^(iφ))^(1/n) = r^(1/n) e^(iφ/n). Polar equation of a curve The equation defining a plane curve expressed in polar coordinates is known as a polar equation. In many cases, such an equation can simply be specified by defining r as a function of φ. The resulting curve then consists of points of the form (r(φ), φ) and can be regarded as the graph of the polar function r. Note that, in contrast to Cartesian coordinates, the independent variable φ is the second entry in the ordered pair. Different forms of symmetry can be deduced from the equation of a polar function r: if r(−φ) = r(φ), the curve will be symmetrical about the horizontal (0°/180°) ray; if r(π − φ) = r(φ), it will be symmetric about the vertical (90°/270°) ray; if r(φ − α) = r(φ), it will be rotationally symmetric by α clockwise and counterclockwise about the pole. Because of the circular nature of the polar coordinate system, many curves can be described by a rather simple polar equation, whereas their Cartesian form is much more intricate. Among the best known of these curves are the polar rose, Archimedean spiral, lemniscate, limaçon, and cardioid. For the circle, line, and polar rose below, it is understood that there are no restrictions on the domain and range of the curve. Circle The general equation for a circle with a center at (r₀, γ) and radius a is r² − 2 r r₀ cos(φ − γ) + r₀² = a². This can be simplified in various ways, to conform to more specific cases, such as the equation r(φ) = a for a circle with a center at the pole and radius a. When r₀ = a, or when the origin lies on the circle, the equation becomes r = 2a cos(φ − γ). In the general case, the equation can be solved for r, giving r = r₀ cos(φ − γ) + √(a² − r₀² sin²(φ − γ)). The solution with a minus sign in front of the square root gives the same curve. 
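A short sketch of the conversions described above, using only the Python standard library; math.hypot and math.atan2 correspond to the Pythagorean sum and the two-argument arctangent, and cmath.polar and cmath.rect handle the polar form of complex numbers, so a power can be taken in the De Moivre style:

import math, cmath

def to_cartesian(r, phi):
    return r * math.cos(phi), r * math.sin(phi)

def to_polar(x, y):
    # r >= 0 and phi in (-pi, pi], matching the convention used in the text
    return math.hypot(x, y), math.atan2(y, x)

x, y = to_cartesian(2.0, math.pi / 3)
print(to_polar(x, y))                 # ~(2.0, 1.0472)

z = 1 + 1j
r, phi = cmath.polar(z)               # magnitude and argument of z
print(cmath.rect(r ** 3, 3 * phi))    # z cubed computed in polar form, ~(-2+2j)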
Line Radial lines (those running through the pole) are represented by the equation φ = γ, where γ is the angle of elevation of the line; that is, γ = arctan m, where m is the slope of the line in the Cartesian coordinate system. The non-radial line that crosses the radial line φ = γ perpendicularly at the point (r₀, γ) has the equation r(φ) = r₀ sec(φ − γ). Otherwise stated, (r₀, γ) is the point in which the tangent intersects the imaginary circle of radius r₀. Polar rose A polar rose is a mathematical curve that looks like a petaled flower, and that can be expressed as a simple polar equation, r(φ) = a cos(kφ + γ₀), for any constant γ₀ (including 0). If k is an integer, these equations will produce a k-petaled rose if k is odd, or a 2k-petaled rose if k is even. If k is rational, but not an integer, a rose-like shape may form but with overlapping petals. Note that these equations never define a rose with 2, 6, 10, 14, etc. petals. The variable a directly represents the length or amplitude of the petals of the rose, while k relates to their spatial frequency. The constant γ₀ can be regarded as a phase angle. Archimedean spiral The Archimedean spiral is a spiral discovered by Archimedes which can also be expressed as a simple polar equation. It is represented by the equation r(φ) = a + bφ. Changing the parameter a will turn the spiral, while b controls the distance between the arms, which for a given spiral is always constant. The Archimedean spiral has two arms, one for φ > 0 and one for φ < 0. The two arms are smoothly connected at the pole. If a = 0, taking the mirror image of one arm across the 90°/270° line will yield the other arm. This curve is notable as one of the first curves, after the conic sections, to be described in a mathematical treatise, and as a prime example of a curve best defined by a polar equation. Conic sections A conic section with one focus on the pole and the other somewhere on the 0° ray (so that the conic's major axis lies along the polar axis) is given by: r = ℓ/(1 − e cos φ), where e is the eccentricity and ℓ is the semi-latus rectum (the perpendicular distance at a focus from the major axis to the curve). If e > 1, this equation defines a hyperbola; if e = 1, it defines a parabola; and if e < 1, it defines an ellipse. The special case e = 0 of the latter results in a circle of the radius ℓ. Quadratrix A quadratrix in the first quadrant (x, y) is a curve with y = ρ sin θ equal to the fraction of the quarter circle with radius r determined by the radius through the curve point. Since this fraction is 2rθ/π, the curve is given by ρ(θ) = 2rθ/(π sin θ). Intersection of two polar curves The graphs of two polar functions r = f(θ) and r = g(θ) have possible intersections of three types: In the origin, if the equations f(θ) = 0 and g(θ) = 0 have at least one solution each. All the points (g(θᵢ), θᵢ), where the θᵢ are solutions to the equation f(θ + 2kπ) = g(θ), where k is an integer. All the points (g(θᵢ), θᵢ), where the θᵢ are solutions to the equation f(θ + (2k + 1)π) = −g(θ), where k is an integer. Calculus Calculus can be applied to equations expressed in polar coordinates. The angular coordinate φ is expressed in radians throughout this section, which is the conventional choice when doing calculus. Differential calculus Using x = r cos φ and y = r sin φ, one can derive a relationship between derivatives in Cartesian and polar coordinates. For a given function, u(x, y), it follows that (by computing its total derivatives) r ∂u/∂r = r cos φ ∂u/∂x + r sin φ ∂u/∂y = x ∂u/∂x + y ∂u/∂y, and ∂u/∂φ = −r sin φ ∂u/∂x + r cos φ ∂u/∂y = −y ∂u/∂x + x ∂u/∂y. Hence, we have the following formulae: r ∂/∂r = x ∂/∂x + y ∂/∂y and ∂/∂φ = −y ∂/∂x + x ∂/∂y. Using the inverse coordinates transformation, an analogous reciprocal relationship can be derived between the derivatives. 
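The derivative relationships just stated are easy to verify numerically with central differences; in the sketch below, the sample field u and the evaluation point are arbitrary illustrative choices, not taken from the text:

import math

def u(x, y):                          # sample scalar field
    return x**2 * y + math.sin(x * y)

def cartesian_partials(f, x, y, h=1e-6):
    ux = (f(x + h, y) - f(x - h, y)) / (2 * h)
    uy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return ux, uy

def polar_partials(f, r, phi, h=1e-6):
    fr = (f((r + h) * math.cos(phi), (r + h) * math.sin(phi))
          - f((r - h) * math.cos(phi), (r - h) * math.sin(phi))) / (2 * h)
    fphi = (f(r * math.cos(phi + h), r * math.sin(phi + h))
            - f(r * math.cos(phi - h), r * math.sin(phi - h))) / (2 * h)
    return fr, fphi

r0, phi0 = 1.7, 0.6
x0, y0 = r0 * math.cos(phi0), r0 * math.sin(phi0)
ux, uy = cartesian_partials(u, x0, y0)
ur, uphi = polar_partials(u, r0, phi0)

print(r0 * ur, x0 * ux + y0 * uy)     # r du/dr  should equal  x du/dx + y du/dy
print(uphi, -y0 * ux + x0 * uy)       # du/dphi  should equal -y du/dx + x du/dy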
Given a function u(r,φ), it follows that ∂u/∂x = cos φ ∂u/∂r − (sin φ / r) ∂u/∂φ, and ∂u/∂y = sin φ ∂u/∂r + (cos φ / r) ∂u/∂φ. Hence, we have the following formulae: ∂/∂x = cos φ ∂/∂r − (1/r) sin φ ∂/∂φ and ∂/∂y = sin φ ∂/∂r + (1/r) cos φ ∂/∂φ. To find the Cartesian slope of the tangent line to a polar curve r(φ) at any given point, the curve is first expressed as a system of parametric equations: x = r(φ) cos φ, y = r(φ) sin φ. Differentiating both equations with respect to φ yields dx/dφ = r′(φ) cos φ − r(φ) sin φ and dy/dφ = r′(φ) sin φ + r(φ) cos φ. Dividing the second equation by the first yields the Cartesian slope of the tangent line to the curve at the point (r(φ), φ): dy/dx = (r′(φ) sin φ + r(φ) cos φ) / (r′(φ) cos φ − r(φ) sin φ). For other useful formulas including divergence, gradient, and Laplacian in polar coordinates, see curvilinear coordinates. Integral calculus (arc length) The arc length (length of a line segment) defined by a polar function is found by the integration over the curve r(φ). Let L denote this length along the curve starting from point A through to point B, where these points correspond to φ = a and φ = b such that 0 < b − a < 2π. The length of L is given by the following integral: L = ∫ₐᵇ √([r(φ)]² + [dr/dφ]²) dφ. Integral calculus (area) Let R denote the region enclosed by a curve r(φ) and the rays φ = a and φ = b, where 0 < b − a ≤ 2π. Then, the area of R is (1/2) ∫ₐᵇ [r(φ)]² dφ. This result can be found as follows. First, the interval [a, b] is divided into n subintervals, where n is some positive integer. Thus Δφ, the angle measure of each subinterval, is equal to b − a (the total angle measure of the interval), divided by n, the number of subintervals. For each subinterval i = 1, 2, ..., n, let φi be the midpoint of the subinterval, and construct a sector with the center at the pole, radius r(φi), central angle Δφ and arc length r(φi)Δφ. The area of each constructed sector is therefore equal to (1/2)[r(φi)]² Δφ. Hence, the total area of all of the sectors is the sum over i of (1/2)[r(φi)]² Δφ. As the number of subintervals n is increased, the approximation of the area improves. Taking n → ∞, the sum becomes the Riemann sum for the above integral. A mechanical device that computes area integrals is the planimeter, which measures the area of plane figures by tracing them out: this replicates integration in polar coordinates by adding a joint so that the 2-element linkage effects Green's theorem, converting the quadratic polar integral to a linear integral. Generalization Using Cartesian coordinates, an infinitesimal area element can be calculated as dA = dx dy. The substitution rule for multiple integrals states that, when using other coordinates, the Jacobian determinant of the coordinate conversion formula has to be considered: for the polar map, J = det ∂(x, y)/∂(r, φ) = r. Hence, an area element in polar coordinates can be written as dA = J dr dφ = r dr dφ. Now, a function that is given in polar coordinates can be integrated as follows: ∬_R f(x, y) dA = ∫ₐᵇ ( ∫₀^r(φ) f(r cos φ, r sin φ) r dr ) dφ. Here, R is the same region as above, namely, the region enclosed by a curve r(φ) and the rays φ = a and φ = b. The formula for the area of R is retrieved by taking f identically equal to 1. A more surprising application of this result yields the Gaussian integral: ∫ from −∞ to ∞ of e^(−x²) dx = √π. Vector calculus Vector calculus can also be applied to polar coordinates. For a planar motion, let the position vector be (r cos φ, r sin φ) = r e_r, with r and φ depending on time t. We define an orthonormal basis with three unit vectors: radial, transverse, and normal directions. The radial direction is defined by normalizing the position vector: e_r = (cos φ, sin φ). Radial and velocity directions span the plane of the motion, whose normal direction is denoted e_z. The transverse direction is perpendicular to both radial and normal directions: e_φ = (−sin φ, cos φ). Then the velocity and acceleration are v = (dr/dt) e_r + r(dφ/dt) e_φ and a = (d²r/dt² − r(dφ/dt)²) e_r + (r d²φ/dt² + 2(dr/dt)(dφ/dt)) e_φ. These equations can be obtained by taking the derivative of the position vector and the derivatives of the unit basis vectors, namely de_r/dt = (dφ/dt) e_φ and de_φ/dt = −(dφ/dt) e_r. For a curve in 2D where the parameter is the angle φ itself, the previous equations simplify to: v = r′(φ) e_r + r(φ) e_φ and a = (r″(φ) − r(φ)) e_r + 2r′(φ) e_φ. Centrifugal and Coriolis terms The term r(dφ/dt)² is sometimes referred to as the centripetal acceleration, and the term 2(dr/dt)(dφ/dt) as the Coriolis acceleration. 
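The arc-length and area integrals above are easy to check numerically. The sketch below does so for the cardioid r(φ) = 1 + cos φ, whose exact perimeter is 8 and exact enclosed area is 3π/2; the cardioid and those reference values are assumptions chosen purely for illustration:

import math

def r(phi):            # cardioid r(phi) = 1 + cos(phi)
    return 1.0 + math.cos(phi)

def dr(phi):           # dr/dphi = -sin(phi)
    return -math.sin(phi)

a, b, n = 0.0, 2.0 * math.pi, 50_000
h = (b - a) / n
mid = [a + (i + 0.5) * h for i in range(n)]

# Midpoint-rule approximations of L = integral of sqrt(r^2 + (dr/dphi)^2) dphi
# and A = (1/2) * integral of r^2 dphi, both over [0, 2*pi].
length = h * sum(math.sqrt(r(p) ** 2 + dr(p) ** 2) for p in mid)
area   = 0.5 * h * sum(r(p) ** 2 for p in mid)

print(length, "expected", 8.0)
print(area, "expected", 1.5 * math.pi)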
For example, see Shankar. Note: these terms, which appear when acceleration is expressed in polar coordinates, are a mathematical consequence of differentiation; they appear whenever polar coordinates are used. In planar particle dynamics these accelerations appear when setting up Newton's second law of motion in a rotating frame of reference. Here these extra terms are often called fictitious forces; fictitious because they are simply a result of a change in coordinate frame. That does not mean they do not exist; rather, they exist only in the rotating frame. Co-rotating frame For a particle in planar motion, one approach to attaching physical significance to these terms is based on the concept of an instantaneous co-rotating frame of reference. To define a co-rotating frame, first an origin is selected from which the distance r(t) to the particle is defined. An axis of rotation is set up that is perpendicular to the plane of motion of the particle, and passing through this origin. Then, at the selected moment t, the rate of rotation of the co-rotating frame Ω is made to match the rate of rotation of the particle about this axis, dφ/dt. Next, the terms in the acceleration in the inertial frame are related to those in the co-rotating frame. Let the location of the particle in the inertial frame be (r(t), φ(t)), and in the co-rotating frame be (r′(t), φ′(t)). Because the co-rotating frame rotates at the same rate as the particle, dφ′/dt = 0. The fictitious centrifugal force in the co-rotating frame is mrΩ², radially outward. The velocity of the particle in the co-rotating frame also is radially outward, because dφ′/dt = 0. The fictitious Coriolis force therefore has a value −2m(dr/dt)Ω, pointed in the direction of increasing φ only. Thus, using these forces in Newton's second law we find: where overdots represent derivatives with respect to time, and F is the net real force (as opposed to the fictitious forces). In terms of components, this vector equation becomes: which can be compared to the equations for the inertial frame: This comparison, plus the recognition that by the definition of the co-rotating frame at time t it has a rate of rotation Ω = dφ/dt, shows that we can interpret the terms in the acceleration (multiplied by the mass of the particle) as found in the inertial frame as the negative of the centrifugal and Coriolis forces that would be seen in the instantaneous, non-inertial co-rotating frame. For general motion of a particle (as opposed to simple circular motion), the centrifugal and Coriolis forces in a particle's frame of reference commonly are referred to the instantaneous osculating circle of its motion, not to a fixed center of polar coordinates. For more detail, see centripetal force. Differential geometry In the modern terminology of differential geometry, polar coordinates provide coordinate charts for the differentiable manifold , the plane minus the origin. In these coordinates, the Euclidean metric tensor is given by This can be seen via the change of variables formula for the metric tensor, or by computing the differential forms dx, dy via the exterior derivative of the 0-forms , and substituting them in the Euclidean metric tensor . Let , and be two points in the plane given by their Cartesian and polar coordinates. Then Since , and , we get that Now we use the trigonometric identity to proceed: If the radial and angular quantities are near to each other, and therefore near to a common quantity and , we have that . 
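A quick numerical check of this approximation: for two nearby points, the Euclidean distance computed through Cartesian coordinates agrees to leading order with √((Δr)² + r²(Δφ)²), the distance predicted by the flat metric written above. The sketch below is illustrative only; the particular point and offsets are arbitrary.

import math

def polar_to_xy(r, phi):
    # Standard conversion x = r*cos(phi), y = r*sin(phi).
    return r * math.cos(phi), r * math.sin(phi)

def exact_distance(r1, phi1, r2, phi2):
    # Euclidean distance computed in Cartesian coordinates.
    x1, y1 = polar_to_xy(r1, phi1)
    x2, y2 = polar_to_xy(r2, phi2)
    return math.hypot(x2 - x1, y2 - y1)

def metric_distance(r, dr, dphi):
    # Distance predicted by the flat metric ds^2 = dr^2 + r^2 dphi^2.
    return math.sqrt(dr**2 + (r * dphi)**2)

r, phi = 2.0, 1.0
dr, dphi = 1e-4, 1e-4
print(exact_distance(r, phi, r + dr, phi + dphi))   # ≈ 2.236e-4
print(metric_distance(r, dr, dphi))                 # ≈ 2.236e-4, matching to leading order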
Moreover, the cosine of can be approximated with the Taylor series of the cosine up to linear terms: so that , and . Therefore, around an infinitesimally small domain of any point, as stated. An orthonormal frame with respect to this metric is given by , with dual coframe . The connection form relative to this frame and the Levi-Civita connection is given by the skew-symmetric matrix of 1-forms , and hence the curvature form vanishes. Therefore, as expected, the punctured plane is a flat manifold. Extensions in 3D The polar coordinate system is extended into three dimensions with two different coordinate systems, the cylindrical and spherical coordinate systems. Applications Polar coordinates are two-dimensional and thus they can be used only where point positions lie on a single two-dimensional plane. They are most appropriate in any context where the phenomenon being considered is inherently tied to direction and length from a center point. For instance, the examples above show how elementary polar equations suffice to define curves—such as the Archimedean spiral—whose equation in the Cartesian coordinate system would be much more intricate. Moreover, many physical systems—such as those concerned with bodies moving around a central point or with phenomena originating from a central point—are simpler and more intuitive to model using polar coordinates. The initial motivation for the introduction of the polar system was the study of circular and orbital motion. Position and navigation Polar coordinates are often used in navigation, as the destination or direction of travel can be given as an angle and distance from the object being considered. For instance, aircraft use a slightly modified version of polar coordinates for navigation. In this system, the one generally used for any sort of navigation, the 0° ray is generally called heading 360, and the angles continue in a clockwise direction, rather than counterclockwise, as in the mathematical system. Heading 360 corresponds to magnetic north, while headings 90, 180, and 270 correspond to magnetic east, south, and west, respectively. Thus, an aircraft traveling 5 nautical miles due east will be traveling 5 units at heading 90 (read zero-niner-zero by air traffic control). Modeling Systems displaying radial symmetry provide natural settings for the polar coordinate system, with the central point acting as the pole. A prime example of this usage is the groundwater flow equation when applied to radially symmetric wells. Systems with a radial force are also good candidates for the use of the polar coordinate system. These systems include gravitational fields, which obey the inverse-square law, as well as systems with point sources, such as radio antennas. Radially asymmetric systems may also be modeled with polar coordinates. For example, a microphone's pickup pattern illustrates its proportional response to an incoming sound from a given direction, and these patterns can be represented as polar curves. The curve for a standard cardioid microphone, the most common unidirectional microphone, can be represented as at its target design frequency. The pattern shifts toward omnidirectionality at lower frequencies.
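Both of these conventions are easy to express in a few lines. In the sketch below, the heading conversion follows the clockwise-from-north convention described above, while the cardioid response (1 + cos θ)/2 is the usual idealized pattern and is given here as an assumption, since the article's exact expression for the design-frequency curve is not reproduced.

import math

def heading_to_xy(distance, heading_deg):
    # A bearing in degrees clockwise from north (heading 090 = due east),
    # converted to (east, north) offsets.
    theta = math.radians(90.0 - heading_deg)   # back to the counterclockwise convention
    return distance * math.cos(theta), distance * math.sin(theta)

def cardioid_gain(theta_deg):
    # Idealized cardioid pickup pattern (1 + cos(theta)) / 2, with theta
    # measured from the microphone's on-axis direction.
    return (1.0 + math.cos(math.radians(theta_deg))) / 2.0

print(heading_to_xy(5.0, 90.0))                 # (5.0, 0.0): five units due east
print(cardioid_gain(0.0), cardioid_gain(180.0)) # 1.0 on-axis, 0.0 directly behind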
Mathematics
Geometry
null
25126
https://en.wikipedia.org/wiki/Postage%20stamp
Postage stamp
A postage stamp is a small piece of paper issued by a post office, postal administration, or other authorized vendors to customers who pay postage (the cost involved in moving, insuring, or registering mail). Then the stamp is affixed to the face or address-side of any item of mail—an envelope or other postal cover (e.g., packet, box, mailing cylinder)—which they wish to send. The item is then processed by the postal system, where a postmark or cancellation mark—in modern usage indicating date and point of origin of mailing—is applied to the stamp and its left and right sides to prevent its reuse. Next the item is delivered to its address. Always featuring the name of the issuing nation (with the exception of the United Kingdom), a denomination of its value, and often an illustration of persons, events, institutions, or natural realities that symbolize the nation's traditions and values, every stamp is printed on a piece of usually rectangular, but sometimes triangular or otherwise shaped special custom-made paper whose back is either glazed with an adhesive gum or self-adhesive. Because governments issue stamps of different denominations in unequal numbers and routinely discontinue some lines and introduce others, and because of their illustrations and association with the social and political realities of the time of their issue, they are often prized for their beauty and historical significance by stamp collectors, whose study of their history and of mailing systems is called philately. Because collectors often buy stamps from an issuing agency with no intention to use them for postage, the revenues from such purchases and payments of postage can make them a source of net profit to that agency. On 1 May 1840, the Penny Black, the first adhesive postage stamp, was issued in the United Kingdom. Within three years postage stamps were introduced in Switzerland and Brazil, a little later in the United States, and by 1860, they were in 90 countries around the world. The first postage stamps did not need to show the issuing country, so no country name was included on them. Thus the United Kingdom remains the only country in the world to omit its name on postage stamps; the monarch's image signifies the United Kingdom as the country of origin. Invention Throughout modern history numerous methods were used to indicate that postage had been paid on a mailed item, so several different men have received credit for inventing the postage stamp. William Dockwra In 1680, William Dockwra, an English merchant in London, and his partner Robert Murray established the London Penny Post. The LPP was a mail system that delivered letters and small parcels inside the city of London for the sum of one penny. Confirmation of paid postage was indicated by the use of a hand stamp to frank the mailed item. Though this "stamp" was applied to the letter or parcel itself, rather than to a separate piece of paper, it is considered by many historians to be the world's first postage stamp. Lovrenc Košir In 1835, the civil servant Lovrenc Košir from Ljubljana in Austria-Hungary (now Slovenia), suggested the use of "artificially affixed postal tax stamps" using "gepresste Papieroblate" ("pressed paper wafers"), but although civil bureaucrats considered the suggestion in detail, it was not adopted. The 'Papieroblate' were to produce stamps as paper decals so thin as to prevent their reuse. 
Rowland Hill In 1836, Robert Wallace, a Member of (British) Parliament, gave Sir Rowland Hill numerous books and documents about the postal service, which Hill described as a "half hundred weight of material". After a detailed study, on 4 January 1837 Hill submitted a pamphlet entitled Post Office Reform: Its Importance and Practicability to the Chancellor of the Exchequer, Thomas Spring Rice, which was marked "private and confidential", and not released to the general public. The Chancellor summoned Hill to a meeting at which he suggested improvements and changes to be presented in a supplement, which Hill duly produced and submitted on 28 January 1837. Summoned to give evidence before the Commission for Post Office Enquiry on 13 February 1837, Hill read from the letter he wrote to the Chancellor that included a statement saying that the notation of paid postage could be created... by using a bit of paper just large enough to bear the stamp, and covered at the back with a glutinous wash..." This would eventually become the first unambiguous description of a modern adhesive postage stamp (though the term "postage stamp" originated at a later date). Shortly afterward, Hill's revision of the booklet, dated 22 February 1837, containing some 28,000 words, incorporating the supplement given to the Chancellor and statements he made to the commission, was published and made available to the general public. Hansard records that on 15 December 1837, Benjamin Hawes asked the Chancellor of the Exchequer "whether it was the intention of the Government to give effect to the recommendation of the Commissioners of the Post-office, contained in their ninth report relating to the reduction of the rates of postage, and the issuing of penny stamps?" Hill's ideas for postage stamps and charging paid-postage based on weight soon took hold, and were adopted in many countries throughout the world. With the new policy of charging by weight, using envelopes for mailing documents became the norm. Hill's brother Edwin invented a prototype envelope-making machine that folded paper into envelopes quickly enough to match the pace of the growing demand for postage stamps. Rowland Hill and the reforms he introduced to the United Kingdom postal system appear on several of its commemorative stamps. James Chalmers In the 1881 book The Penny Postage Scheme of 1837, Scotsman Patrick Chalmers claimed that his father, James Chalmers, published an essay in August 1834 describing and advocating a postage stamp, but submitted no evidence of the essay's existence. Nevertheless, until he died in 1891, Patrick Chalmers campaigned to have his father recognized as the inventor of the postage stamp. The first independent evidence for Chalmers' claim is an essay, dated 8 February 1838 and received by the Post Office on 17 February 1838, in which he proposed adhesive postage stamps to the General Post Office. In this approximately 800-word document concerning methods of indicating that postage had been paid on mail he states: "Therefore, of Mr Hill's plan of a uniform rate of postage... I conceive that the most simple and economical mode... would be by Slips... in the hope that Mr Hill's plan may soon be carried into operation I would suggest that sheets of Stamped Slips should be prepared... then be rubbed over on the back with a strong solution of gum...". Chalmers' original document is now in the United Kingdom's National Postal Museum. 
Since Chalmers used the same postage denominations that Hill had proposed in February 1837, it is clear that he was aware of Hill's proposals, but whether he obtained a copy of Hill's booklet or simply read about it in one or both of the two detailed accounts (25 March 1837 and 20 December 1837) published in The Times is unknown. Neither article mentioned "a bit of paper just large enough to bear the stamp", so Chalmers could not have known that Hill had made such a proposal. This suggests that either Chalmers had previously read Hill's booklet and was merely elaborating Hill's idea, or he had independently developed the idea of the modern postage stamp. James Chalmers organized petitions "for a low and uniform rate of postage". The first such petition was presented in the House of Commons on 4 December 1837 (from Montrose). Further petitions which he organized were presented on 1 May 1838 (from Dunbar and Cupar), 14 May 1838 (from the county of Forfar), and 12 June 1839. At this same time, other groups organized petitions and presented them to Parliament. All petitions for consumer-oriented, low-cost, volume-based postal rates followed publication of Hill's proposals. Other claimants Other claimants include or have included John Gray of the British Museum Samuel Forrester, a Scottish tax official Charles Whiting, a London stationer Samuel Roberts of Llanbrynmair, Wales Francis Worrell Stevens, schoolmaster at Loughton Ferdinand Egarter of Spittal, Austria Curry Gabriel Treffenberg from Sweden History The nineteenth century Postage stamps have facilitated the delivery of mail since the 1840s. Before then, ink and hand-stamps (hence the word 'stamp'), usually made from wood or cork, were often used to frank the mail and confirm the payment of postage. The first adhesive postage stamp, commonly referred to as the Penny Black, was issued in the United Kingdom in 1840. The invention of the stamp was part of an attempt to improve the postal system in the United Kingdom of Great Britain and Ireland, which, in the early 19th century, was in disarray and rife with corruption. There are varying accounts of the inventor or inventors of the stamp. Before the introduction of postage stamps, mail in the United Kingdom was paid for by the recipient, a system that was associated with an irresolvable problem: the costs of delivering mail were not recoverable by the postal service when recipients were unable or unwilling to pay for delivered items, and senders had no incentive to restrict the number, size, or weight of items sent, whether or not they would ultimately be paid for. The postage stamp resolved this issue in a simple and elegant manner, with the additional benefit of room for an element of beauty to be introduced. Concurrently with the first stamps, the United Kingdom offered wrappers for mail. Later related inventions include postal stationery such as prepaid-postage envelopes, post cards, lettercards, aerogrammes, and postage meters. The postage stamp afforded convenience for both the mailer and postal officials, more effectively recovered costs for the postal service, and ultimately resulted in a better, faster postal system. With the conveniences stamps offered, their use resulted in greatly increased mailings during the 19th and 20th centuries. Postage stamps released during this era were the most popular way of paying for mail; however by the end of the 20th century were rapidly being eclipsed by the use of metered postage and bulk mailing by businesses. 
As postage stamps with their engraved imagery began to appear on a widespread basis, historians and collectors began to take notice. The study of postage stamps and their use is referred to as philately. Stamp collecting can be both a hobby and a form of historical study and reference, as government-issued postage stamps and their mailing systems have always been involved with the history of nations. Although a number of people laid claim to the concept of the postage stamp, it is well documented that stamps were first introduced in the United Kingdom of Great Britain and Ireland on 1 May 1840 as a part of postal reforms promoted by Sir Rowland Hill. With its introduction the postage fee was paid by the sender and not the recipient, though it was still possible to send mail without prepaying. From when the first postage stamps were used, postmarks were applied to prevent the stamps being used again. The first stamp, the "Penny black", became available for purchase 1 May 1840, to be valid as of 6 May 1840. Two days later, 8 May 1840, the Two penny blue was introduced. The Penny black was sufficient for a letter less than half an ounce to be sent anywhere within the United Kingdom. Both stamps included an engraving of the young Queen Victoria, without perforations, as the first stamps were separated from their sheets by cutting them with scissors. The first stamps did not need to show the issuing country, so no country name was included on them. The United Kingdom remains the only country to omit its name on postage stamps, using the reigning monarch's head as country identification. Following the introduction of the postage stamp in the United Kingdom, prepaid postage considerably increased the number of letters mailed. Before 1839, the number of letters sent in the United Kingdom was typically 76 million. By 1850, this increased five-fold to 350 million, continuing to grow rapidly until the end of the 20th century when newer methods of indicating the payment of postage reduced the use of stamps. Other countries soon followed the United Kingdom with their own stamps. The canton of Zürich in Switzerland issued the Zürich 4 and 6 rappen on 1 March 1843. Although the Penny black could be used to send a letter less than half an ounce anywhere within the United Kingdom, the Swiss did not initially adopt that system, instead continuing to calculate mail rates based on distance to be delivered. Brazil issued the Bull's Eye stamp on 1 August 1843. Using the same printer used for the Penny black, Brazil opted for an abstract design instead of the portrait of Emperor Pedro II, so his image would not be disfigured by a postmark. In 1845, some postmasters in the United States issued their own stamps, but it was not until 1847 that the first official United States stamps were issued: 5 and 10 cent issues depicting Benjamin Franklin and George Washington. A few other countries issued stamps in the late 1840s. The famous Mauritius "Post Office" stamps were issued by Mauritius in September 1847. Many others, such as India, started their use in the 1850s, and by the 1860s most countries issued stamps. Perforation of postage stamps began in January 1854. The first officially perforated stamps were issued in February 1854. Stamps from Henry Archer's perforation trials were issued in the last few months of 1850; during the 1851 parliamentary session at the House of Commons of the United Kingdom; and finally in 1853/54 after the United Kingdom government paid Archer £4,000 for his machine and the patent. 
The Universal Postal Union, established in 1874, prescribed that nations shall only issue postage stamps according to the quantity of real use, and no living persons shall be taken as subjects. The latter rule lost its significance after World War I. The twentieth and twenty-first century After World War II, it became customary in some countries, especially small Arab nations, to issue postage stamps en masse as it was realized how profitable that was. During the 21st century, the amount of mail—and the use of postage stamps, accordingly—has reduced in the world because of electronic mail and other technological innovations. Iceland has already announced that it will no longer issue new stamps for collectors because sales have decreased and there are enough stamps in stock. In 2013 the Netherlands PostNL introduced Postzegelcodes, a nine-character alphanumeric code that is written as a 3x3 grid on the piece of mail as an alternative to stamps. In December 2020, 590,000 people sent cards with these handwritten codes. Design When the first postage stamps were issued in the 1840s, they followed an almost identical standard in shape, size and general subject matter. They were rectangular in shape. They bore the images of queens, presidents and other political figures. They also depicted the denomination of the postage-paid, and with the exception of the United Kingdom, depicted the name of the country from which issued. Nearly all early postage stamps depict images of national leaders only. Soon after the introduction of the postage stamp, other subjects and designs began to appear. Some designs were welcome, others widely criticized. For example, in 1869, the United States Post Office broke the tradition of depicting presidents or other famous historical figures, instead using other subjects including a train and horse.(See: 1869 Pictorial Issue.) The change was greeted with general disapproval, and sometimes harsh criticism from the American public. Perforations Perforations are small holes made between individual postage stamps on a sheet of stamps, facilitating separation of a desired number of stamps. The resulting frame-like, rippled edge surrounding the separated stamp defines a characteristic meme for the appearance of a postage stamp. In the first decade of postage stamps' existence (depending on the country), stamps were issued without perforations. Scissors or other cutting mechanisms were required to separate a desired number of stamps from a full sheet. If cutting tools were not used, individual stamps were torn off. This is evidenced by the ragged edges of surviving examples. Mechanically separating stamps from a sheet proved an inconvenience for postal clerks and businesses, both dealing with large numbers of individual stamps on a daily basis. By 1850, methods such as rouletting wheels were being devised in efforts of making stamp separation more convenient, and less time-consuming. The United Kingdom was the first country to issue postage stamps with perforations. The first machine specifically designed to perforate sheets of postage stamps was invented in London by Henry Archer, an Irish landowner and railroad man from Dublin, Ireland. The 1850 Penny Red was the first stamp to be perforated during trial course of Archer's perforating machine. 
After a period of trial and error and modifications of Archer's invention, new machines based on the principles pioneered by Archer were purchased and in 1854 the United Kingdom postal authorities started continuously issuing perforated postage stamps in the Penny Red and all subsequent designs. In the United States, the use of postage stamps caught on quickly and became more widespread when on 3 March 1851, the last day of its legislative session, Congress passed the Act of March 3, 1851 (An Act to reduce and modify the Rates of Postage in the United States). Similarly introduced on the last day of the Congressional session four years later, the Act of March 3, 1855 required the prepayment of postage on all mailings. Thereafter, postage stamp use in the United States quickly doubled, and by 1861 had quadrupled. In 1856, under the direction of Postmaster General James Campbell, Toppan and Carpenter, (commissioned by the United States government to print United States postage stamps through the 1850s) purchased a rotary machine designed to separate stamps, patented in England in 1854 by William and Henry Bemrose, who were printers in Derby, England. The original machine cut slits into the paper rather than punching holes, but the machine was soon modified. The first stamp issue to be officially perforated, the 3-cent George Washington, was issued by the United States Post Office on 24 February 1857. Between 1857 and 1861, all stamps originally issued between 1851 and 1856 were reissued with perforations. Initial capacity was insufficient to perforate all stamps printed, thus perforated issues used between February and July 1857 are scarce and quite valuable. Shapes and materials In addition to the most common rectangular shape, stamps have been issued in geometric (circular, triangular and pentagonal) and irregular shapes. The United States issued its first circular stamp in 2000 as a hologram of the Earth. Sierra Leone and Tonga have issued stamps in the shapes of fruit. Stamps that are printed on sheets are generally separated by perforations, though, more recently, with the advent of gummed stamps that do not have to be moistened prior to affixing them, designs can incorporate smooth edges (although a purely decorative perforated edge is often present). Stamps are most commonly made from paper designed specifically for them, and are printed in sheets, rolls, or small booklets. Less commonly, postage stamps are made of materials other than paper, such as embossed foil (sometimes of gold). Switzerland made a stamp that contained a bit of lace and one of wood. The United States produced one of plastic. East Germany issued a stamp of synthetic chemicals. In the Netherlands a stamp was made of silver foil. Bhutan issued one with its national anthem on a playable record. Graphic characteristics The subjects found on the face of postage stamps are generally what defines a particular stamp issue to the public and are often a reason why they are saved by collectors or history enthusiasts. Graphical subjects found on postage stamps have ranged from the early portrayals of kings, queens and presidents to later depictions of ships, birds and satellites, famous people, historical events, comics, dinosaurs, hobbies (knitting, stamp collecting), sports, holiday themes, and a plethora of other subjects too numerous to list. Artists, designers, engravers and administrative officials are involved with the choice of subject matter and the method of printing stamps. 
Early stamp images were almost always produced from an engraving—a design etched into a steel die, which was then hardened and whose impression was transferred to a printing plate. Using an engraved image was deemed a more secure way of printing stamps as it was nearly impossible to counterfeit a finely detailed image with raised lines for anyone but a master engraver. In the mid-20th century, stamp issues produced by other forms of printing began to emerge, such as lithography, photogravure, intaglio and web offset printing. These later printing methods were less expensive and typically produced images of lesser quality. Scents Occasionally, postal authorities issue novelty "scented" or "aromatic" stamps which contain a scent, more readily apparent when rubbed. The effect is achieved by using ink which contains microcapsules that provide the desired fragrance when broken. The scent usually only lasts for a limited time after production, such as a few months or years. Such stamps are usually related to aromatic subjects including coffee, roses, grapes, chocolate, vanilla, cinnamon, pine needles or freshly baked bread. The first scented stamps were issued by Bhutan in 1973. Types Airmail stamp – for payment of airmail service. The term "airmail" or an equivalent is usually printed on special airmail stamps. Airmail stamps typically depict images of airplanes and/or famous pilots and were used when airmail was a special type of mail delivery separate from mail delivered by train, ship or automobile. Aside from mail with local destinations, today almost all other mail is transported by aircraft and thus airmail is now the standard method of delivery. Scott has a separate category and listing for United States Airmail Postage. Prior to 1940, the Scott Catalogue did not have a special designation for airmail stamps. The various major stamp catalogs have different numbering systems and may not always list airmail stamps the same way. ATM stamp – stamps dispensed by automates and have their value imprinted only at the time of purchase Booklet stamp – stamps produced and issued in booklet format Carrier's stamp Certified mail stamp Cinderella stamp (see also: Poster stamp) Coil stamps – tear-off stamps issued individually in a vending machine, or purchased in a roll Commemorative stamp – a stamp which is issued for a limited time to commemorate a person or event Anniversaries of birthdays and historical events are among the most common examples. Computer vended postage – advanced secure postage that uses information-based indicia (IBI) technology. IBI uses a two-dimensional bar code (Datamatrix or PDF417) to encode the originating address, date of mailing, postage and a digital signature to verify the stamp. Customised stamp – a stamp on which the image can be chosen by the purchaser by sending in a photograph or by use of the computer. Some are not true stamps but technically meter labels. Definitive stamps – stamps for everyday postage and are usually produced to meet current postal rates. They often have less appealing designs than commemoratives, though there are notable exceptions. The same design may be used for many years. The use of the same design over an extended period may lead to unintended colour varieties. This may make them just as interesting to philatelists as are commemoratives. A good example would be the US 1903 regular issues, their designs being very picturesque and ornamental. Definitive stamps are often issued in a series of stamps with different denominations. 
Express mail stamp / special delivery stamp Late fee stamp – issued to show payment of a fee to allow inclusion of a letter or package in the outgoing dispatch although it has been turned in after the cut-off time Local post stamps – used on mail in a local post; a postal service that operates only within a limited geographical area, typically a city or a single transportation route. Some local posts have been operated by governments, while others, known as private local posts, have been operated by for-profit companies. Make up stamp – a stamp with a very small value, used to make up the difference when postage rates are increased Military stamp – stamp for a country's armed forces, usually using a special postal system Minisheet – a commemorative issue smaller than a regular full sheet of stamps, but with more than one stamp Minisheets often contain a number of different stamps and often have a decorative border.
Technology
Media and communication
null
25130
https://en.wikipedia.org/wiki/Ponte%20Vecchio
Ponte Vecchio
The Ponte Vecchio (; "Old Bridge") is a medieval stone closed-spandrel segmental arch bridge over the Arno, in Florence, Italy. The only bridge in Florence spared from destruction during World War II, it is noted for the shops built along it; building shops on such bridges was once a common practice. Butchers, tanners, and farmers initially occupied the shops; the present tenants are jewellers, art dealers, and souvenir sellers. The Ponte Vecchio's two neighbouring bridges are the Ponte Santa Trinita and the Ponte alle Grazie. The bridge connects Via Por Santa Maria (Lungarno degli Acciaiuoli and Lungarno degli Archibusieri) to Via de' Guicciardini (Borgo San Jacopo and Via de' Bardi). The name was given to what was the oldest Florentine bridge when the Ponte alla Carraia was built, then called in contrast to the old one. Beyond the historical value, the bridge over time has played a central role in the city road system, starting from when it connected the Roman Florentia with the Via Cassia Nova commissioned by the emperor Hadrian in 123 AD. In contemporary times, despite being closed to vehicular traffic, the bridge is crossed by a considerable pedestrian flow generated both by its fame and by the fact that it connects places of high tourist interest on the two banks of the river: Piazza del Duomo, Piazza della Signoria on one side with the area of Palazzo Pitti and Santo Spirito in the Oltrarno. The bridge appears in the list drawn up in 1901 by the General Directorate of Antiquities and Fine Arts, as a monumental building to be considered national artistic heritage. History and construction The bridge spans the Arno at its narrowest point where it is believed that a bridge was first built in Roman times, when the via Cassia crossed the river at this point. The Roman piers were of stone, the superstructure of wood. The bridge first appears in a document of 996 and was destroyed by a flood in 1117 and reconstructed in stone. In 1218 the Ponte alla Carraia, a wooden structure, was established nearby which led to it being referred to as "Ponte Nuovo" relative to the older (Vecchio) structure. It was swept away again in 1333 except for two of its central piers, as noted by Giovanni Villani in his Nuova Cronica. It was rebuilt in 1345. This location marks one of the earliest crossings of the Arno in Florence, possibly originating from Roman times or even before. Although floods have repeatedly damaged it, the current bridge has stood since approximately 1339-1345. For many years, the only older bridge in the city was the Rubaconte bridge, built nearly a century earlier. But after significant 19th-century modifications and its destruction in 1944, the Ponte Vecchio claimed its title as the oldest bridge in Florence. Giorgio Vasari recorded the traditional view of his day that attributed its design to Taddeo Gaddi— besides Giotto one of the few artistic names of the trecento still recalled two hundred years later. Modern historians present Neri di Fioravanti as a possible candidate as the builder. Sheltered in a little loggia at the central opening of the bridge is a weathered dedication stone, which once read Nel trentatrè dopo il mille-trecento, il ponte cadde, per diluvio dell' acque: poi dieci anni, come al Comun piacque, rifatto fu con questo adornamento. The Torre dei Mannelli was built at the southeast corner of the bridge to defend it. The bridge consists of three segmental arches: the main arch has a span of , and the two side arches each span . 
The rise of the arches is between 3.5 and 4.4 metres (11½ to 14½ feet), and the span-to-rise ratio is 5:1. The shallow segmental arches, which require fewer piers than the semicircular arch traditionally used by Romans, enabled ease of access and navigation for animal-drawn carts. Another notable design element is the large piazza at the center of the bridge that Leon Battista Alberti described as a prominent ornament in the city. A stone with an inscription from Dante (Paradiso xvi. 140-7) records the spot at the entrance to the bridge where Buondelmonte de' Buondelmonti was murdered by the Amidei clan in 1215, which began the urban fighting of the Guelfs and Ghibellines. The bridge has always hosted shops and merchants who displayed their goods on tables before their premises, after authorization by the Bargello (a sort of a lord mayor, a magistrate and a police authority). Later additions and changes In order to connect the Palazzo Vecchio (Florence's town hall) with the Palazzo Pitti, in 1565 Cosimo I de' Medici had Giorgio Vasari build the Vasari Corridor, part of which runs above the Ponte Vecchio. To enhance the prestige and clean up the bridge, a decree was made in 1565 that excluded butchers from this bridge (only goldsmiths and jewellers are allowed) that is in effect to this day. The association of butchers had monopolized the shops on the bridge since 1442. The back shops (retrobotteghe) that may be seen from upriver were added in the seventeenth century. 20th century In 1900, to honour and mark the fourth century of the birth of the great Florentine sculptor and master goldsmith Benvenuto Cellini, the leading goldsmiths of the bridge commissioned the Florentine sculptor, Raffaello Romanelli, to create a bronze bust of Cellini to stand atop a fountain in the middle of the Eastern side of the bridge, where it stands to this day. During World War II, the Ponte Vecchio was not destroyed by the German army during their retreat at the advance of the British 8th Army on 4 August 1944, unlike all the other bridges in Florence. This was, according to many locals and tour guides, because of an express order by Hitler. Access to the Ponte Vecchio was, however, obstructed by the destruction of the buildings at both ends of the bridge, which have since been rebuilt using a combination of original and modern designs. The bridge was severely damaged in the 1966 flood of the Arno. Between 2005 and 2006, 5,500 padlocks, known as love locks, which were attached to the railings around the bust of Cellini, were removed by the city council. According to the council, the padlocks were aesthetically displeasing and damaged the bust and its railings. There is now a fine for attaching love locks to the bridge. An announcement in April 2024 stated that work would be completed on the bridge, including a cleaning, an upgrade of the replacement joints previously installed, strengthening of the stone and restoration of the footpath's stone. Panorama In art The bridge is mentioned in the aria "O mio babbino caro" by Giacomo Puccini. Wall mural in Grossi Florentino, executed by students of Napier Waller under supervision
Technology
Bridges
null
25142
https://en.wikipedia.org/wiki/Sus%20%28genus%29
Sus (genus)
Sus () is the genus of domestic and wild pigs, within the even-toed ungulate family Suidae. Sus include domestic pigs (Sus domesticus) and their ancestor, the common Eurasian wild boar (Sus scrofa), along with various other species. Sus species, like all suids, are native to the Eurasian and African continents, ranging from Europe to the Pacific islands. Juvenile pigs are known as piglets. Pigs live in complex social groups and are considered one of the more intelligent mammals, as reflected in their ability to learn. With around 1 billion of this species alive at any time, the domestic pig is among the most populous large mammals in the world. Pigs are omnivores and can consume a wide range of food. Pigs are biologically similar to humans and are thus frequently used for human medical research. Etymology The Online Etymology Dictionary provides anecdotal evidence as well as linguistic, saying that the term derives probably from Old English , found in compounds, ultimate origin unknown. Originally "young pig" (the word for adults was swine). Apparently related to Low German , Dutch ("but the phonology is difficult" -- OED). ... Another Old English word for "pig" was , related to "furrow," from PIE *perk- "dig, furrow" (source also of Latin "pig," see pork). "This reflects a widespread IE tendency to name animals from typical attributes or activities" [Roger Lass]. Synonyms grunter, oinker are from sailors' and fishermen's euphemistic avoidance of uttering the word pig at sea, a superstition perhaps based on the fate of the Gadarene swine, who drowned. The Online Etymology Dictionary also traces the evolution of sow, the term for a female pig, through various historical languages: Old English , "female of the swine," from Proto-Germanic *su- (cognates: Old Saxon, Old High German su, German , Dutch , Old Norse ), from PIE root *su- (cognates: Sanskrit "wild boar, swine;" Avestan hu "wild boar;" Greek hys "swine;" Latin "swine", "pertaining to swine"; Old Church Slavonic "swine;" Lettish sivens "young pig;" Welsh , Irish "swine; Old Irish "snout, plowshare"), possibly imitative of pig noise; note that Sanskrit sukharah means "maker of (the sound) su". An adjectival form is porcine. Another adjectival form (technically for the subfamily rather than genus name) is suine (comparable to bovine, canine, etc.); for the family, it is suid (as with bovid, canid). Description and behaviour A typical pig has a large head with a long snout that is strengthened by a special prenasal bone and by a disk of cartilage at the tip. The snout is used to dig into the soil to find food and is a very acute sense organ. Each foot has four hooves with the two larger central toes bearing most of the weight, and the outer two also being used in soft ground. The dental formula of adult pigs is , giving a total of 44 teeth. The rear teeth are adapted for crushing. In the male, the canine teeth form tusks, which grow continuously and are sharpened by constantly being ground against each other. Occasionally, captive mother pigs may savage their own piglets, often if they become severely stressed. Some attacks on newborn piglets are non-fatal. Others may kill the piglets and sometimes, the mother may eat them. An estimated 50% of piglet fatalities are due to the mother attacking, or unintentionally crushing, the newborn pre-weaned animals. Distribution and evolution With around 1 billion individuals alive at any time, the domestic pig is one of the most numerous large mammals on the planet. 
The ancestor of the domestic pig is the wild boar, which is one of the most numerous and widespread large mammals. Its many subspecies are native to all but the harshest climates of continental Eurasia and its islands and Africa as well, from Ireland and India to Japan and north to Siberia. Long isolated from other pigs on the many islands of Indonesia, Malaysia, and the Philippines, pigs have evolved into many different species, including wild boar, bearded pigs, and warty pigs. Humans have introduced pigs into Australia, North and South America, and numerous islands, either accidentally as escaped domestic pigs which have gone feral, or as wild boar. Habitat and reproduction The wild boar (Sus scrofa) can take advantage of any forage resources. Therefore, they can live in virtually any productive habitat that can provide enough water to sustain large mammals such as pigs. Pigs are famously fecund; when well-fed, a sow can birth twelve or more piglets in her annual litter. If there is increased foraging by wild boars in certain areas, they can cause a nutritional shortage which can cause the pig population to decrease. If the nutritional state returns to normal, the pig population will most likely rise due to the pigs' naturally-increased reproduction rate. Diet and foraging Pigs are omnivores, which means that they consume both plants and animals. In the wild, they are foragers, searching through their habitat for food (which, for pigs, often includes digging with their snouts). Wild pigs eat roots, tubers, leaves, fruits, mushrooms, and flowers, in addition to some insects (especially insect grubs) and fish. Pigs are famously fond of truffle mushrooms, which grow underground; pigs find them by scent and unearth them with their snouts. In Europe, trained "truffle pigs" find these valuable fungi for humans. Pigs do not hunt, but will readily eat carrion, eggs, and other animal foods that they can find. As livestock, pigs were once fed all manner of mixed household food scraps (called "slops"), but on large modern farms are now fed mostly corn and soybean meal with a mixture of vitamins and minerals added. Traditionally, pigs were raised on dairy farms and fed any excess milk and the whey left over from cheese and butter making. Pigs brought so much extra income to these farms that they earned the nickname "mortgage lifters". Older pigs will consume three to five gallons of water per day. When kept as pets, the optimal healthy diet consists mainly of a balanced diet of raw vegetables, although some may give their pigs commercial mini pig pellet feed. Relationship with humans Most pigs today are domesticated pigs raised for meat (known as pork). Miniature breeds are commonly kept as pets. Because of their foraging abilities and excellent sense of smell, people in many European countries use them to find truffles. Both wild and feral pigs are commonly hunted. Apart from meat, pig skin is turned into leather, and their hairs are used to make brushes. The relatively short, stiff, coarse pig hairs are called bristles, and were once so commonly used in paintbrushes that in 1946 the Australian Government launched Operation Pig Bristle. In May 1946, in response to a shortage of pig bristles for paintbrushes to paint houses in the post-World War II construction boom, the Royal Australian Air Force (RAAF) flew in 28 short tons of pig bristles from China, their only commercially available source at the time. 
Use in human healthcare Human skin is very similar to pig skin; therefore, many preclinical studies employ pig skin. In addition to providing use in biomedical research and for drug testing, genetic advances in human healthcare have provided a pathway for domestic pigs to become xenotransplantation candidates for humans. Species The genus Sus is currently thought to contain nine living species. Several extinct species (†) are known from fossils. Extant species Domestic pig, Sus domesticus Erxleben, 1777 (sometimes considered a subspecies of S. scrofa) – domesticated worldwide; IUCN status: Least Concern. The pygmy hog, formerly Sus salvanius, is now placed in the monotypic genus Porcula. The Red river hog, formerly Sus porcus, is now placed in the genus Potamochoerus. Fossil species †Sus australis Han, 1987 – Early Pleistocene of China †Sus bijiashanensis Han et al., 1975 – Early Pleistocene of China †Sus falconeri – Pleistocene of the Siwalik region, India †Sus houi Qi et al., 1999 – Pleistocene of China †Sus hysudricus Falconer and Cautley 1847 – Pliocene of India †Sus jiaoshanensis Zhao, 1980 – Early Pleistocene of China †Sus liuchengensis Han, 1987 – Early Pleistocene of China †Sus lydekkeri Zdansky, 1928 – Pleistocene of China †Sus officinalis Koenigswald, 1933 – Middle Pleistocene of China †Sus peii Han, 1987 – Early Pleistocene of China †Sus subtriquetra Xue, 1981 †Sus strozzi Forsyth Major, 1881 – Pliocene and Early Pleistocene of Europe †Sus xiaozhu Han et al., 1975 – Early Pleistocene of China Domestication Pigs have been domesticated since ancient times in the Old World. Pigs were domesticated on each end of Eurasia, and possibly several times. It is now thought that pigs were attracted to human settlements for the food scraps, and that the process of domestication began as a commensal relationship. Archaeological evidence suggests that pigs were being managed in the wild in a way similar to the way they are managed by some modern New Guineans from wild boar as early as 13,000–12,700 BP in the Near East in the Tigris Basin, Çayönü, Cafer Höyük, Nevalı Çori. Remains of pigs in Cyprus have been dated to earlier than 11,400 BP; these must have been introduced from the mainland, which suggests domestication on the adjacent mainland by then. Pigs were also domesticated in China, potentially more than once. In some parts of China pigs were kept in pens from early times, separating them from wild populations and allowing farmers to create breeds that were fatter and bred more quickly. Early Modern Europeans brought these breeds back home and crossed them with their own pigs, which was the origin of most modern pig breeds. In India, pigs have been domesticated for a long time mostly in Goa and some rural areas for pig toilets. This practice also occurred in China. Though ecologically logical as well as economical, pig toilets are waning in popularity as use of septic tanks and/or sewerage systems is increasing in rural areas. Hernando de Soto and other early Spanish explorers brought pigs to southeastern North America from Europe. 
As in Medieval Europe, pigs are valued on certain oceanic islands for their self-sufficiency, which allows them to be turned loose, although the practice does have drawbacks (see environmental impact). The domestic pig (Sus domesticus) is usually given the scientific name Sus scrofa domesticus, although some taxonomists, including the American Society of Mammalogists, call it S. domesticus, reserving S. scrofa for the wild boar. It was domesticated approximately 5,000 to 7,000 years ago. The upper canines form sharp distinctive tusks that curve outward and upward. Compared to other artiodactyles, their head is relatively long, pointed, and free of warts. Their head and body length ranges from and they can weigh between . In November 2012, scientists managed to sequence the genome of the domestic pig. The similarities between the pig and human genomes mean that the new data may have wide applications in the study and treatment of human genetic diseases. In August 2015, a study looked at over 100 pig genome sequences to ascertain their process of domestication. The process of domestication was assumed to have been initiated by humans, involved few individuals and relied on reproductive isolation between wild and domestic forms. The study found that the assumption of reproductive isolation with population bottlenecks was not supported. The study indicated that pigs were domesticated separately in Western Asia and China, with Western Asian pigs introduced into Europe where they crossed with wild boar. A model that fitted the data included admixture with a now extinct ghost population of wild pigs during the Pleistocene. The study also found that despite back-crossing with wild pigs, the genomes of domestic pigs have strong signatures of selection at DNA loci that affect behavior and morphology. The study concluded that human selection for domestic traits likely counteracted the homogenizing effect of gene flow from wild boars and created domestication islands in the genome. The same process may also apply to other domesticated animals. In culture Pigs have been important in culture across the world since neolithic times. They appear in art, literature, and religion. In Asia the wild boar is one of 12 animal images comprising the Chinese zodiac, while in Europe the boar represents a standard charge in heraldry. In Islam and Judaism pigs and those who handle them are viewed negatively, and the consumption of pork is forbidden. Pigs are alluded to in animal epithets and proverbs. The pig has been celebrated throughout Europe since ancient times in its carnivals, the name coming from the Italian carne levare, the lifting of meat. Pigs have been brought into literature for varying reasons, ranging from the pleasures of eating, as in Charles Lamb's A Dissertation upon Roast Pig, to William Golding's Lord of the Flies (with the fat character "Piggy"), where the rotting boar's head on a stick represents Beelzebub, "lord of the flies" being the direct translation of the Hebrew , and George Orwell's allegorical novel Animal Farm, where the central characters, representing Soviet leaders, are all pigs. Environmental damage Domestic pigs that have escaped from urban areas or were allowed to forage in the wild, and in some cases wild boars which were introduced as prey for hunting, have given rise to large populations of feral pigs in North and South America, Australia, New Zealand, Hawaii, and other areas where pigs are not native. 
Accidental or deliberate releases of pigs into countries or environments where they are an alien species have caused extensive environmental change. Their omnivorous diet, aggressive behaviour, and their feeding method of rooting in the ground all combine to severely alter ecosystems unused to pigs. Pigs will even eat small animals and destroy nests of ground nesting birds. The Invasive Species Specialist Group lists feral pigs on the list of the world's 100 worst invasive species and says: Health problems Because of their biological similarities, pigs can harbour a range of parasites and diseases that can be transmitted to humans. Examples of such zoonoses include trichinosis, Taenia solium'', cysticercosis, and brucellosis. Pigs also host large concentrations of parasitic ascarid worms in their digestive tracts. Some strains of influenza are endemic in pigs, the most significant of which are H1N1, H1N2, and H3N2, the first of which has caused several outbreaks among humans, including the Spanish flu, 1977 Russian flu pandemic, and the 2009 swine flu pandemic. Pigs also can acquire human influenza.
Biology and health sciences
Artiodactyla
null
25175
https://en.wikipedia.org/wiki/Quadratic%20equation
Quadratic equation
In mathematics, a quadratic equation () is an equation that can be rearranged in standard form as where the variable represents an unknown number, and , , and represent known numbers, where . (If and then the equation is linear, not quadratic.) The numbers , , and are the coefficients of the equation and may be distinguished by respectively calling them, the quadratic coefficient, the linear coefficient and the constant coefficient or free term. The values of that satisfy the equation are called solutions of the equation, and roots or zeros of the quadratic function on its left-hand side. A quadratic equation has at most two solutions. If there is only one solution, one says that it is a double root. If all the coefficients are real numbers, there are either two real solutions, or a single real double root, or two complex solutions that are complex conjugates of each other. A quadratic equation always has two roots, if complex roots are included and a double root is counted for two. A quadratic equation can be factored into an equivalent equation where and are the solutions for . The quadratic formula expresses the solutions in terms of , , and . Completing the square is one of several ways for deriving the formula. Solutions to problems that can be expressed in terms of quadratic equations were known as early as 2000 BC. Because the quadratic equation involves only one unknown, it is called "univariate". The quadratic equation contains only powers of that are non-negative integers, and therefore it is a polynomial equation. In particular, it is a second-degree polynomial equation, since the greatest power is two. Solving the quadratic equation A quadratic equation whose coefficients are real numbers can have either zero, one, or two distinct real-valued solutions, also called roots. When there is only one distinct root, it can be interpreted as two roots with the same value, called a double root. When there are no real roots, the coefficients can be considered as complex numbers with zero imaginary part, and the quadratic equation still has two complex-valued roots, complex conjugates of each-other with a non-zero imaginary part. A quadratic equation whose coefficients are arbitrary complex numbers always has two complex-valued roots which may or may not be distinct. The solutions of a quadratic equation can be found by several alternative methods. Factoring by inspection It may be possible to express a quadratic equation as a product . In some cases, it is possible, by simple inspection, to determine values of p, q, r, and s that make the two forms equivalent to one another. If the quadratic equation is written in the second form, then the "Zero Factor Property" states that the quadratic equation is satisfied if or . Solving these two linear equations provides the roots of the quadratic. For most students, factoring by inspection is the first method of solving quadratic equations to which they are exposed. If one is given a quadratic equation in the form , the sought factorization has the form , and one has to find two numbers and that add up to and whose product is (this is sometimes called "Vieta's rule" and is related to Vieta's formulas). As an example, factors as . The more general case where does not equal can require a considerable effort in trial and error guess-and-check, assuming that it can be factored at all by inspection. Except for special cases such as where or , factoring by inspection only works for quadratic equations that have rational roots. 
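Factoring by inspection amounts to searching for two numbers with the required sum and product. The brute-force Python sketch below does this for a monic quadratic x² + bx + c with integer coefficients; it is an illustration of the search, not an efficient algorithm.

def factor_monic_quadratic(b, c):
    # Look for integers p and q with p + q = b and p*q = c, so that
    # x^2 + b*x + c = (x + p)(x + q); returns None if no such pair exists.
    if c == 0:
        return 0, b                       # x^2 + b*x = x*(x + b)
    for p in range(-abs(c), abs(c) + 1):
        if p != 0 and c % p == 0:
            q = c // p
            if p + q == b:
                return p, q
    return None

print(factor_monic_quadratic(5, 6))       # (2, 3): x^2 + 5x + 6 = (x + 2)(x + 3)
print(factor_monic_quadratic(1, 1))       # None: x^2 + x + 1 has no rational roots

The failure case x² + x + 1 reflects the limitation just noted: without rational roots there is nothing for inspection to find.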
This means that the great majority of quadratic equations that arise in practical applications cannot be solved by factoring by inspection. Completing the square The process of completing the square makes use of the algebraic identity which represents a well-defined algorithm that can be used to solve any quadratic equation. Starting with a quadratic equation in standard form, Divide each side by , the coefficient of the squared term. Subtract the constant term from both sides. Add the square of one-half of , the coefficient of , to both sides. This "completes the square", converting the left side into a perfect square. Write the left side as a square and simplify the right side if necessary. Produce two linear equations by equating the square root of the left side with the positive and negative square roots of the right side. Solve each of the two linear equations. We illustrate use of this algorithm by solving The plus–minus symbol "±" indicates that both and are solutions of the quadratic equation. Quadratic formula and its derivation Completing the square can be used to derive a general formula for solving quadratic equations, called the quadratic formula. The mathematical proof will now be briefly summarized. It can easily be seen, by polynomial expansion, that the following equation is equivalent to the quadratic equation: Taking the square root of both sides, and isolating , gives: Some sources, particularly older ones, use alternative parameterizations of the quadratic equation such as or  , where has a magnitude one half of the more common one, possibly with opposite sign. These result in slightly different forms for the solution, but are otherwise equivalent. A number of alternative derivations can be found in the literature. These proofs are simpler than the standard completing the square method, represent interesting applications of other frequently used techniques in algebra, or offer insight into other areas of mathematics. A lesser known quadratic formula, as used in Muller's method, provides the same roots via the equation This can be deduced from the standard quadratic formula by Vieta's formulas, which assert that the product of the roots is . It also follows from dividing the quadratic equation by giving solving this for and then inverting. One property of this form is that it yields one valid root when , while the other root contains division by zero, because when , the quadratic equation becomes a linear equation, which has one root. By contrast, in this case, the more common formula has a division by zero for one root and an indeterminate form for the other root. On the other hand, when , the more common formula yields two correct roots whereas this form yields the zero root and an indeterminate form . When neither nor is zero, the equality between the standard quadratic formula and Muller's method, can be verified by cross multiplication, and similarly for the other choice of signs. Reduced quadratic equation It is sometimes convenient to reduce a quadratic equation so that its leading coefficient is one. This is done by dividing both sides by , which is always possible since is non-zero. This produces the reduced quadratic equation: where and . This monic polynomial equation has the same solutions as the original. 
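Because the displayed equations in this section were lost in extraction, the standard forms being discussed are restated below for reference; these are the usual textbook statements rather than a reconstruction of the article's exact notation:

    ax^2 + bx + c = 0, \qquad a \neq 0,
    \qquad x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},
    \qquad \text{reduced (monic) form: } x^2 + px + q = 0 \ \text{with } p = \frac{b}{a},\ q = \frac{c}{a},
    \qquad x = -\frac{p}{2} \pm \sqrt{\left(\frac{p}{2}\right)^2 - q}.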
The quadratic formula for the solutions of the reduced quadratic equation, written in terms of its coefficients, is Discriminant In the quadratic formula, the expression underneath the square root sign is called the discriminant of the quadratic equation, and is often represented using an upper case or an upper case Greek delta: A quadratic equation with real coefficients can have either one or two distinct real roots, or two distinct complex roots. In this case the discriminant determines the number and nature of the roots. There are three cases: If the discriminant is positive, then there are two distinct roots both of which are real numbers. For quadratic equations with rational coefficients, if the discriminant is a square number, then the roots are rational—in other cases they may be quadratic irrationals. If the discriminant is zero, then there is exactly one real root sometimes called a repeated or double root or two equal roots. If the discriminant is negative, then there are no real roots. Rather, there are two distinct (non-real) complex roots which are complex conjugates of each other. In these expressions is the imaginary unit. Thus the roots are distinct if and only if the discriminant is non-zero, and the roots are real if and only if the discriminant is non-negative. Geometric interpretation The function is a quadratic function. The graph of any quadratic function has the same general shape, which is called a parabola. The location and size of the parabola, and how it opens, depend on the values of , , and . If , the parabola has a minimum point and opens upward. If , the parabola has a maximum point and opens downward. The extreme point of the parabola, whether minimum or maximum, corresponds to its vertex. The -coordinate of the vertex will be located at , and the -coordinate of the vertex may be found by substituting this -value into the function. The -intercept is located at the point . The solutions of the quadratic equation correspond to the roots of the function , since they are the values of for which . If , , and are real numbers and the domain of is the set of real numbers, then the roots of are exactly the -coordinates of the points where the graph touches the -axis. If the discriminant is positive, the graph touches the -axis at two points; if zero, the graph touches at one point; and if negative, the graph does not touch the -axis. Quadratic factorization The term is a factor of the polynomial if and only if is a root of the quadratic equation It follows from the quadratic formula that In the special case where the quadratic has only one distinct root (i.e. the discriminant is zero), the quadratic polynomial can be factored as Graphical solution The solutions of the quadratic equation may be deduced from the graph of the quadratic function which is a parabola. If the parabola intersects the -axis in two points, there are two real roots, which are the -coordinates of these two points (also called -intercept). If the parabola is tangent to the -axis, there is a double root, which is the -coordinate of the contact point between the graph and parabola. If the parabola does not intersect the -axis, there are two complex conjugate roots. Although these roots cannot be visualized on the graph, their real and imaginary parts can be. Let and be respectively the -coordinate and the -coordinate of the vertex of the parabola (that is the point with maximal or minimal -coordinate. 
The quadratic function may be rewritten Let be the distance between the point of -coordinate on the axis of the parabola, and a point on the parabola with the same -coordinate (see the figure; there are two such points, which give the same distance, because of the symmetry of the parabola). Then the real part of the roots is , and their imaginary part are . That is, the roots are or in the case of the example of the figure Avoiding loss of significance Although the quadratic formula provides an exact solution, the result is not exact if real numbers are approximated during the computation, as usual in numerical analysis, where real numbers are approximated by floating point numbers (called "reals" in many programming languages). In this context, the quadratic formula is not completely stable. This occurs when the roots have different order of magnitude, or, equivalently, when and are close in magnitude. In this case, the subtraction of two nearly equal numbers will cause loss of significance or catastrophic cancellation in the smaller root. To avoid this, the root that is smaller in magnitude, , can be computed as where is the root that is bigger in magnitude. This is equivalent to using the formula using the plus sign if and the minus sign if A second form of cancellation can occur between the terms and of the discriminant, that is when the two roots are very close. This can lead to loss of up to half of correct significant figures in the roots. Examples and applications The golden ratio is found as the positive solution of the quadratic equation The equations of the circle and the other conic sections—ellipses, parabolas, and hyperbolas—are quadratic equations in two variables. Given the cosine or sine of an angle, finding the cosine or sine of the angle that is half as large involves solving a quadratic equation. The process of simplifying expressions involving the square root of an expression involving the square root of another expression involves finding the two solutions of a quadratic equation. Descartes' theorem states that for every four kissing (mutually tangent) circles, their radii satisfy a particular quadratic equation. The equation given by Fuss' theorem, giving the relation among the radius of a bicentric quadrilateral's inscribed circle, the radius of its circumscribed circle, and the distance between the centers of those circles, can be expressed as a quadratic equation for which the distance between the two circles' centers in terms of their radii is one of the solutions. The other solution of the same equation in terms of the relevant radii gives the distance between the circumscribed circle's center and the center of the excircle of an ex-tangential quadrilateral. Critical points of a cubic function and inflection points of a quartic function are found by solving a quadratic equation. In physics, for motion with constant acceleration , the displacement or position of a moving body can be expressed as a quadratic function of time given the initial position and initial velocity : . In chemistry, the pH of a solution of weak acid can be calculated from the negative base-10 logarithm of the positive root of a quadratic equation in terms of the acidity constant and the analytical concentration of the acid. History Babylonian mathematicians, as early as 2000 BC (displayed on Old Babylonian clay tablets) could solve problems relating the areas and sides of rectangles. There is evidence dating this algorithm as far back as the Third Dynasty of Ur. 
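The remedy for catastrophic cancellation described in the "Avoiding loss of significance" section above can be sketched in a few lines. This is a minimal illustration assuming real coefficients, a nonzero leading coefficient, and a non-negative discriminant; the function and variable names are chosen for the example, not taken from the article:

    import math

    def solve_quadratic_stable(a, b, c):
        """Solve a*x**2 + b*x + c = 0 for real roots, avoiding catastrophic
        cancellation between -b and the square root when b*b is much larger
        than 4*a*c."""
        d = b * b - 4.0 * a * c
        if d < 0:
            raise ValueError("complex roots; this sketch handles real roots only")
        sqrt_d = math.sqrt(d)
        # Add quantities of the same sign, so the larger-magnitude root is
        # computed without subtracting nearly equal numbers.
        q = -0.5 * (b + math.copysign(sqrt_d, b))
        x1 = q / a                                  # larger-magnitude root
        x2 = c / q if q != 0 else -b / (2.0 * a)    # smaller root via Vieta: x1 * x2 = c / a
        return x1, x2

    # Example: true roots are 1e8 and 1; the naive formula loses accuracy on the smaller root.
    print(solve_quadratic_stable(1.0, -(1e8 + 1.0), 1e8))

The larger-magnitude root is computed first with no cancellation, and the smaller root is then recovered from Vieta's relation between the product of the roots and c/a.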
In modern notation, the problems typically involved solving a pair of simultaneous equations of the form: which is equivalent to the statement that and are the roots of the equation: The steps given by Babylonian scribes for solving the above rectangle problem, in terms of and , were as follows: Compute half of p. Square the result. Subtract q. Find the (positive) square root using a table of squares. Add together the results of steps (1) and (4) to give . In modern notation this means calculating , which is equivalent to the modern day quadratic formula for the larger real root (if any) with , , and . Geometric methods were used to solve quadratic equations in Babylonia, Egypt, Greece, China, and India. The Egyptian Berlin Papyrus, dating back to the Middle Kingdom (2050 BC to 1650 BC), contains the solution to a two-term quadratic equation. Babylonian mathematicians from circa 400 BC and Chinese mathematicians from circa 200 BC used geometric methods of dissection to solve quadratic equations with positive roots. Rules for quadratic equations were given in The Nine Chapters on the Mathematical Art, a Chinese treatise on mathematics. These early geometric methods do not appear to have had a general formula. Euclid, the Greek mathematician, produced a more abstract geometrical method around 300 BC. With a purely geometric approach Pythagoras and Euclid created a general procedure to find solutions of the quadratic equation. In his work Arithmetica, the Greek mathematician Diophantus solved the quadratic equation, but giving only one root, even when both roots were positive. In 628 AD, Brahmagupta, an Indian mathematician, gave in his book Brāhmasphuṭasiddhānta the first explicit (although still not completely general) solution of the quadratic equation as follows: "To the absolute number multiplied by four times the [coefficient of the] square, add the square of the [coefficient of the] middle term; the square root of the same, less the [coefficient of the] middle term, being divided by twice the [coefficient of the] square is the value." This is equivalent to The Bakhshali Manuscript written in India in the 7th century AD contained an algebraic formula for solving quadratic equations, as well as linear indeterminate equations (originally of type ). Muhammad ibn Musa al-Khwarizmi (9th century) developed a set of formulas that worked for positive solutions. Al-Khwarizmi goes further in providing a full solution to the general quadratic equation, accepting one or two numerical answers for every quadratic equation, while providing geometric proofs in the process. He also described the method of completing the square and recognized that the discriminant must be positive, which was proven by his contemporary 'Abd al-Hamīd ibn Turk (Central Asia, 9th century) who gave geometric figures to prove that if the discriminant is negative, a quadratic equation has no solution. While al-Khwarizmi himself did not accept negative solutions, later Islamic mathematicians that succeeded him accepted negative solutions, as well as irrational numbers as solutions. Abū Kāmil Shujā ibn Aslam (Egypt, 10th century) in particular was the first to accept irrational numbers (often in the form of a square root, cube root or fourth root) as solutions to quadratic equations or as coefficients in an equation. The 9th century Indian mathematician Sridhara wrote down rules for solving quadratic equations. 
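The Babylonian recipe listed above, for a pair x + y = p and xy = q, maps directly onto a handful of arithmetic steps. A minimal sketch (the function name and sample numbers are illustrative only):

    import math

    def babylonian_larger_root(p, q):
        """Follow the scribes' steps for x + y = p, x*y = q:
        halve p, square it, subtract q, take the square root, add back half of p.
        Returns the larger of the two values (assumes (p/2)**2 >= q)."""
        half = p / 2.0          # step 1: half of p
        square = half * half    # step 2: square the result
        diff = square - q       # step 3: subtract q
        root = math.sqrt(diff)  # step 4: the (positive) square root
        return half + root      # step 5: add the results of steps 1 and 4

    # x + y = 10 and x*y = 21 gives x = 7 (and y = p - x = 3).
    print(babylonian_larger_root(10, 21))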
The Jewish mathematician Abraham bar Hiyya Ha-Nasi (12th century, Spain) authored the first European book to include the full solution to the general quadratic equation. His solution was largely based on Al-Khwarizmi's work. The writing of the Chinese mathematician Yang Hui (1238–1298 AD) is the first known one in which quadratic equations with negative coefficients of 'x' appear, although he attributes this to the earlier Liu Yi. By 1545 Gerolamo Cardano compiled the works related to the quadratic equations. The quadratic formula covering all cases was first obtained by Simon Stevin in 1594. In 1637 René Descartes published La Géométrie containing the quadratic formula in the form we know today. Advanced topics Alternative methods of root calculation Vieta's formulas Vieta's formulas (named after François Viète) are the relations between the roots of a quadratic polynomial and its coefficients. They result from comparing term by term the relation with the equation The first Vieta's formula is useful for graphing a quadratic function. Since the graph is symmetric with respect to a vertical line through the vertex, the vertex's -coordinate is located at the average of the roots (or intercepts). Thus the -coordinate of the vertex is The -coordinate can be obtained by substituting the above result into the given quadratic equation, giving Also, these formulas for the vertex can be deduced directly from the formula (see Completing the square) For numerical computation, Vieta's formulas provide a useful method for finding the roots of a quadratic equation in the case where one root is much smaller than the other. If , then , and we have the estimate: The second Vieta's formula then provides: These formulas are much easier to evaluate than the quadratic formula under the condition of one large and one small root, because the quadratic formula evaluates the small root as the difference of two very nearly equal numbers (the case of large ), which causes round-off error in a numerical evaluation. The figure shows the difference between (i) a direct evaluation using the quadratic formula (accurate when the roots are near each other in value) and (ii) an evaluation based upon the above approximation of Vieta's formulas (accurate when the roots are widely spaced). As the linear coefficient increases, initially the quadratic formula is accurate, and the approximate formula improves in accuracy, leading to a smaller difference between the methods as increases. However, at some point the quadratic formula begins to lose accuracy because of round off error, while the approximate method continues to improve. Consequently, the difference between the methods begins to increase as the quadratic formula becomes worse and worse. This situation arises commonly in amplifier design, where widely separated roots are desired to ensure a stable operation (see Step response). Trigonometric solution In the days before calculators, people would use mathematical tables—lists of numbers showing the results of calculation with varying arguments—to simplify and speed up computation. Tables of logarithms and trigonometric functions were common in math and science textbooks. Specialized tables were published for applications such as astronomy, celestial navigation and statistics. Methods of numerical approximation existed, called prosthaphaeresis, that offered shortcuts around time-consuming operations such as multiplication and taking powers and roots. 
Astronomers, especially, were concerned with methods that could speed up the long series of computations involved in celestial mechanics calculations. It is within this context that we may understand the development of means of solving quadratic equations by the aid of trigonometric substitution. Consider the following alternate form of the quadratic equation, where the sign of the ± symbol is chosen so that and may both be positive. By substituting and then multiplying through by , we obtain Introducing functions of and rearranging, we obtain where the subscripts and correspond, respectively, to the use of a negative or positive sign in equation . Substituting the two values of or found from equations or into gives the required roots of . Complex roots occur in the solution based on equation if the absolute value of exceeds unity. The amount of effort involved in solving quadratic equations using this mixed trigonometric and logarithmic table look-up strategy was two-thirds the effort using logarithmic tables alone. Calculating complex roots would require using a different trigonometric form. To illustrate, let us assume we had available seven-place logarithm and trigonometric tables, and wished to solve the following to six-significant-figure accuracy: A seven-place lookup table might have only 100,000 entries, and computing intermediate results to seven places would generally require interpolation between adjacent entries. (rounded to six significant figures) Solution for complex roots in polar coordinates If the quadratic equation with real coefficients has two complex roots—the case where requiring a and c to have the same sign as each other—then the solutions for the roots can be expressed in polar form as where and Geometric solution The quadratic equation may be solved geometrically in a number of ways. One way is via Lill's method. The three coefficients , , are drawn with right angles between them as in SA, AB, and BC in Figure 6. A circle is drawn with the start and end point SC as a diameter. If this cuts the middle line AB of the three then the equation has a solution, and the solutions are given by negative of the distance along this line from A divided by the first coefficient or SA. If is the coefficients may be read off directly. Thus the solutions in the diagram are −AX1/SA and −AX2/SA. The Carlyle circle, named after Thomas Carlyle, has the property that the solutions of the quadratic equation are the horizontal coordinates of the intersections of the circle with the horizontal axis. Carlyle circles have been used to develop ruler-and-compass constructions of regular polygons. Generalization of quadratic equation The formula and its derivation remain correct if the coefficients , and are complex numbers, or more generally members of any field whose characteristic is not . (In a field of characteristic 2, the element is zero and it is impossible to divide by it.) The symbol in the formula should be understood as "either of the two elements whose square is , if such elements exist". In some fields, some elements have no square roots and some have two; only zero has just one square root, except in fields of characteristic . Even if a field does not contain a square root of some number, there is always a quadratic extension field which does, so the quadratic formula will always make sense as a formula in that extension field. Characteristic 2 In a field of characteristic , the quadratic formula, which relies on being a unit, does not hold. 
Consider the monic quadratic polynomial over a field of characteristic . If , then the solution reduces to extracting a square root, so the solution is and there is only one root since In summary, See quadratic residue for more information about extracting square roots in finite fields. In the case that , there are two distinct roots, but if the polynomial is irreducible, they cannot be expressed in terms of square roots of numbers in the coefficient field. Instead, define the 2-root of to be a root of the polynomial , an element of the splitting field of that polynomial. One verifies that is also a root. In terms of the 2-root operation, the two roots of the (non-monic) quadratic are and For example, let denote a multiplicative generator of the group of units of , the Galois field of order four (thus and are roots of over . Because , is the unique solution of the quadratic equation . On the other hand, the polynomial is irreducible over , but it splits over , where it has the two roots and , where is a root of in . This is a special case of Artin–Schreier theory.
Mathematics
Algebra
null
25179
https://en.wikipedia.org/wiki/Quark
Quark
A quark () is a type of elementary particle and a fundamental constituent of matter. Quarks combine to form composite particles called hadrons, the most stable of which are protons and neutrons, the components of atomic nuclei. All commonly observable matter is composed of up quarks, down quarks and electrons. Owing to a phenomenon known as color confinement, quarks are never found in isolation; they can be found only within hadrons, which include baryons (such as protons and neutrons) and mesons, or in quark–gluon plasmas. For this reason, much of what is known about quarks has been drawn from observations of hadrons. Quarks have various intrinsic properties, including electric charge, mass, color charge, and spin. They are the only elementary particles in the Standard Model of particle physics to experience all four fundamental interactions, also known as fundamental forces (electromagnetism, gravitation, strong interaction, and weak interaction), as well as the only known particles whose electric charges are not integer multiples of the elementary charge. There are six types, known as flavors, of quarks: up, down, charm, strange, top, and bottom. Up and down quarks have the lowest masses of all quarks. The heavier quarks rapidly change into up and down quarks through a process of particle decay: the transformation from a higher mass state to a lower mass state. Because of this, up and down quarks are generally stable and the most common in the universe, whereas strange, charm, bottom, and top quarks can only be produced in high energy collisions (such as those involving cosmic rays and in particle accelerators). For every quark flavor there is a corresponding type of antiparticle, known as an antiquark, that differs from the quark only in that some of its properties (such as the electric charge) have equal magnitude but opposite sign. The quark model was independently proposed by physicists Murray Gell-Mann and George Zweig in 1964. Quarks were introduced as parts of an ordering scheme for hadrons, and there was little evidence for their physical existence until deep inelastic scattering experiments at the Stanford Linear Accelerator Center in 1968. Accelerator program experiments have provided evidence for all six flavors. The top quark, first observed at Fermilab in 1995, was the last to be discovered. Classification The Standard Model is the theoretical framework describing all the known elementary particles. This model contains six flavors of quarks (), named up (), down (), strange (), charm (), bottom (), and top (). Antiparticles of quarks are called antiquarks, and are denoted by a bar over the symbol for the corresponding quark, such as for an up antiquark. As with antimatter in general, antiquarks have the same mass, mean lifetime, and spin as their respective quarks, but the electric charge and other charges have the opposite sign. Quarks are spin- particles, which means they are fermions according to the spin–statistics theorem. They are subject to the Pauli exclusion principle, which states that no two identical fermions can simultaneously occupy the same quantum state. This is in contrast to bosons (particles with integer spin), of which any number can be in the same state. Unlike leptons, quarks possess color charge, which causes them to engage in the strong interaction. The resulting attraction between different quarks causes the formation of composite particles known as hadrons (see below). 
The quarks that determine the quantum numbers of hadrons are called valence quarks; apart from these, any hadron may contain an indefinite number of virtual "sea" quarks, antiquarks, and gluons, which do not influence its quantum numbers. There are two families of hadrons: baryons, with three valence quarks, and mesons, with a valence quark and an antiquark. The most common baryons are the proton and the neutron, the building blocks of the atomic nucleus. A great number of hadrons are known (see list of baryons and list of mesons), most of them differentiated by their quark content and the properties these constituent quarks confer. The existence of "exotic" hadrons with more valence quarks, such as tetraquarks () and pentaquarks (), was conjectured from the beginnings of the quark model but not discovered until the early 21st century. Elementary fermions are grouped into three generations, each comprising two leptons and two quarks. The first generation includes up and down quarks, the second strange and charm quarks, and the third bottom and top quarks. All searches for a fourth generation of quarks and other elementary fermions have failed, and there is strong indirect evidence that no more than three generations exist. Particles in higher generations generally have greater mass and less stability, causing them to decay into lower-generation particles by means of weak interactions. Only first-generation (up and down) quarks occur commonly in nature. Heavier quarks can only be created in high-energy collisions (such as in those involving cosmic rays), and decay quickly; however, they are thought to have been present during the first fractions of a second after the Big Bang, when the universe was in an extremely hot and dense phase (the quark epoch). Studies of heavier quarks are conducted in artificially created conditions, such as in particle accelerators. Having electric charge, mass, color charge, and flavor, quarks are the only known elementary particles that engage in all four fundamental interactions of contemporary physics: electromagnetism, gravitation, strong interaction, and weak interaction. Gravitation is too weak to be relevant to individual particle interactions except at extremes of energy (Planck energy) and distance scales (Planck distance). However, since no successful quantum theory of gravity exists, gravitation is not described by the Standard Model. See the table of properties below for a more complete overview of the six quark flavors' properties. History The quark model was independently proposed by physicists Murray Gell-Mann and George Zweig in 1964. The proposal came shortly after Gell-Mann's 1961 formulation of a particle classification system known as the Eightfold Way – or, in more technical terms, SU(3) flavor symmetry, streamlining its structure. Physicist Yuval Ne'eman had independently developed a scheme similar to the Eightfold Way in the same year. An early attempt at constituent organization was available in the Sakata model. At the time of the quark theory's inception, the "particle zoo" included a multitude of hadrons, among other particles. Gell-Mann and Zweig posited that they were not elementary particles, but were instead composed of combinations of quarks and antiquarks. Their model involved three flavors of quarks, up, down, and strange, to which they ascribed properties such as spin and electric charge. The initial reaction of the physics community to the proposal was mixed. 
There was particular contention about whether the quark was a physical entity or a mere abstraction used to explain concepts that were not fully understood at the time. In less than a year, extensions to the Gell-Mann–Zweig model were proposed. Sheldon Glashow and James Bjorken predicted the existence of a fourth flavor of quark, which they called charm. The addition was proposed because it allowed for a better description of the weak interaction (the mechanism that allows quarks to decay), equalized the number of known quarks with the number of known leptons, and implied a mass formula that correctly reproduced the masses of the known mesons. Deep inelastic scattering experiments conducted in 1968 at the Stanford Linear Accelerator Center (SLAC) and published on October 20, 1969, showed that the proton contained much smaller, point-like objects and was therefore not an elementary particle. Physicists were reluctant to firmly identify these objects with quarks at the time, instead calling them "partons" – a term coined by Richard Feynman. The objects that were observed at SLAC would later be identified as up and down quarks as the other flavors were discovered. Nevertheless, "parton" remains in use as a collective term for the constituents of hadrons (quarks, antiquarks, and gluons). Richard Taylor, Henry Kendall and Jerome Friedman received the 1990 Nobel Prize in physics for their work at SLAC. The strange quark's existence was indirectly validated by SLAC's scattering experiments: not only was it a necessary component of Gell-Mann and Zweig's three-quark model, but it provided an explanation for the kaon () and pion () hadrons discovered in cosmic rays in 1947. In a 1970 paper, Glashow, John Iliopoulos and Luciano Maiani presented the GIM mechanism (named from their initials) to explain the experimental non-observation of flavor-changing neutral currents. This theoretical model required the existence of the as-yet undiscovered charm quark. The number of supposed quark flavors grew to the current six in 1973, when Makoto Kobayashi and Toshihide Maskawa noted that the experimental observation of CP violation could be explained if there were another pair of quarks. Charm quarks were produced almost simultaneously by two teams in November 1974 (see November Revolution) – one at SLAC under Burton Richter, and one at Brookhaven National Laboratory under Samuel Ting. The charm quarks were observed bound with charm antiquarks in mesons. The two parties had assigned the discovered meson two different symbols, and ; thus, it became formally known as the meson. The discovery finally convinced the physics community of the quark model's validity. In the following years a number of suggestions appeared for extending the quark model to six quarks. Of these, the 1975 paper by Haim Harari was the first to coin the terms top and bottom for the additional quarks. In 1977, the bottom quark was observed by a team at Fermilab led by Leon Lederman. This was a strong indicator of the top quark's existence: without the top quark, the bottom quark would have been without a partner. It was not until 1995 that the top quark was finally observed, also by the CDF and DØ teams at Fermilab. It had a mass much larger than expected, almost as large as that of a gold atom. 
Etymology For some time, Gell-Mann was undecided on an actual spelling for the term he intended to coin, until he found the word quark in James Joyce's 1939 book Finnegans Wake: The word quark is an outdated English word meaning to croak and the above-quoted lines are about a bird choir mocking king Mark of Cornwall in the legend of Tristan and Iseult. Especially in the German-speaking parts of the world there is a widespread legend, however, that Joyce had taken it from the word , a German word of Slavic origin which denotes a curd cheese, but is also a colloquial term for "trivial nonsense". In the legend it is said that he had heard it on a journey to Germany at a farmers' market in Freiburg. Some authors, however, defend a possible German origin of Joyce's word quark. Gell-Mann went into further detail regarding the name of the quark in his 1994 book The Quark and the Jaguar: Zweig preferred the name ace for the particle he had theorized, but Gell-Mann's terminology came to prominence once the quark model had been commonly accepted. The quark flavors were given their names for several reasons. The up and down quarks are named after the up and down components of isospin, which they carry. Strange quarks were given their name because they were discovered to be components of the strange particles discovered in cosmic rays years before the quark model was proposed; these particles were deemed "strange" because they had unusually long lifetimes. Glashow, who co-proposed the charm quark with Bjorken, is quoted as saying, "We called our construct the 'charmed quark', for we were fascinated and pleased by the symmetry it brought to the subnuclear world." The names "bottom" and "top", coined by Harari, were chosen because they are "logical partners for up and down quarks". Alternative names for bottom and top quarks are "beauty" and "truth" respectively, but these names have somewhat fallen out of use. While "truth" never did catch on, accelerator complexes devoted to massive production of bottom quarks are sometimes called "beauty factories". Properties Electric charge Quarks have fractional electric charge values – either (−) or (+) times the elementary charge (e), depending on flavor. Up, charm, and top quarks (collectively referred to as up-type quarks) have a charge of + e; down, strange, and bottom quarks (down-type quarks) have a charge of − e. Antiquarks have the opposite charge to their corresponding quarks; up-type antiquarks have charges of − e and down-type antiquarks have charges of + e. Since the electric charge of a hadron is the sum of the charges of the constituent quarks, all hadrons have integer charges: the combination of three quarks (baryons), three antiquarks (antibaryons), or a quark and an antiquark (mesons) always results in integer charges. For example, the hadron constituents of atomic nuclei, neutrons and protons, have charges of 0 e and +1 e respectively; the neutron is composed of two down quarks and one up quark, and the proton of two up quarks and one down quark. Spin Spin is an intrinsic property of elementary particles, and its direction is an important degree of freedom. It is sometimes visualized as the rotation of an object around its own axis (hence the name "spin"), though this notion is somewhat misguided at subatomic scales because elementary particles are believed to be point-like. Spin can be represented by a vector whose length is measured in units of the reduced Planck constant ħ (pronounced "h bar"). 
For quarks, a measurement of the spin vector component along any axis can only yield the values + or −; for this reason quarks are classified as spin- particles. The component of spin along a given axis – by convention the z axis – is often denoted by an up arrow ↑ for the value + and down arrow ↓ for the value −, placed after the symbol for flavor. For example, an up quark with a spin of + along the z axis is denoted by u↑. Weak interaction A quark of one flavor can transform into a quark of another flavor only through the weak interaction, one of the four fundamental interactions in particle physics. By absorbing or emitting a W boson, any up-type quark (up, charm, and top quarks) can change into any down-type quark (down, strange, and bottom quarks) and vice versa. This flavor transformation mechanism causes the radioactive process of beta decay, in which a neutron () "splits" into a proton (), an electron () and an electron antineutrino () (see picture). This occurs when one of the down quarks in the neutron () decays into an up quark by emitting a virtual boson, transforming the neutron into a proton (). The boson then decays into an electron and an electron antineutrino. Both beta decay and the inverse process of inverse beta decay are routinely used in medical applications such as positron emission tomography (PET) and in experiments involving neutrino detection. While the process of flavor transformation is the same for all quarks, each quark has a preference to transform into the quark of its own generation. The relative tendencies of all flavor transformations are described by a mathematical table, called the Cabibbo–Kobayashi–Maskawa matrix (CKM matrix). Enforcing unitarity, the approximate magnitudes of the entries of the CKM matrix are: where Vij represents the tendency of a quark of flavor i to change into a quark of flavor j (or vice versa). There exists an equivalent weak interaction matrix for leptons (right side of the W boson on the above beta decay diagram), called the Pontecorvo–Maki–Nakagawa–Sakata matrix (PMNS matrix). Together, the CKM and PMNS matrices describe all flavor transformations, but the links between the two are not yet clear. Strong interaction and color charge According to quantum chromodynamics (QCD), quarks possess a property called color charge. There are three types of color charge, arbitrarily labeled blue, green, and red. Each of them is complemented by an anticolor – antiblue, antigreen, and antired. Every quark carries a color, while every antiquark carries an anticolor. The system of attraction and repulsion between quarks charged with different combinations of the three colors is called strong interaction, which is mediated by force carrying particles known as gluons; this is discussed at length below. The theory that describes strong interactions is called quantum chromodynamics (QCD). A quark, which will have a single color value, can form a bound system with an antiquark carrying the corresponding anticolor. The result of two attracting quarks will be color neutrality: a quark with color charge ξ plus an antiquark with color charge −ξ will result in a color charge of 0 (or "white" color) and the formation of a meson. This is analogous to the additive color model in basic optics. Similarly, the combination of three quarks, each with different color charges, or three antiquarks, each with different anticolor charges, will result in the same "white" color charge and the formation of a baryon or antibaryon. 
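As a quick check of the charge arithmetic described in the "Electric charge" section above, the following sketch sums constituent quark charges. The fractional values +2/3 e for up-type and −1/3 e for down-type quarks are the standard assignments, supplied here because the fractions were lost in formatting; the dictionary, prefix convention, and function name are invented for the example:

    from fractions import Fraction

    # Standard quark charges in units of the elementary charge e.
    QUARK_CHARGE = {
        "u": Fraction(2, 3), "c": Fraction(2, 3), "t": Fraction(2, 3),    # up-type
        "d": Fraction(-1, 3), "s": Fraction(-1, 3), "b": Fraction(-1, 3), # down-type
    }

    def hadron_charge(quarks):
        """Sum the constituent quark charges; antiquarks are written with a '~' prefix."""
        total = Fraction(0)
        for q in quarks:
            if q.startswith("~"):
                total -= QUARK_CHARGE[q[1:]]   # antiquark: opposite charge
            else:
                total += QUARK_CHARGE[q]
        return total

    print(hadron_charge(["u", "u", "d"]))   # proton:  +1
    print(hadron_charge(["u", "d", "d"]))   # neutron:  0
    print(hadron_charge(["u", "~d"]))       # positive pion: +1

Whatever combination of three quarks, three antiquarks, or a quark–antiquark pair is chosen, the fractional charges always sum to an integer, as the text states.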
In modern particle physics, gauge symmetries – a kind of symmetry group – relate interactions between particles (see gauge theories). Color SU(3) (commonly abbreviated to SU(3)c) is the gauge symmetry that relates the color charge in quarks and is the defining symmetry for quantum chromodynamics. Just as the laws of physics are independent of which directions in space are designated x, y, and z, and remain unchanged if the coordinate axes are rotated to a new orientation, the physics of quantum chromodynamics is independent of which directions in three-dimensional color space are identified as blue, red, and green. SU(3)c color transformations correspond to "rotations" in color space (which, mathematically speaking, is a complex space). Every quark flavor f, each with subtypes fB, fG, fR corresponding to the quark colors, forms a triplet: a three-component quantum field that transforms under the fundamental representation of SU(3)c. The requirement that SU(3)c should be local – that is, that its transformations be allowed to vary with space and time – determines the properties of the strong interaction. In particular, it implies the existence of eight gluon types to act as its force carriers. Mass Two terms are used in referring to a quark's mass: current quark mass refers to the mass of a quark by itself, while constituent quark mass refers to the current quark mass plus the mass of the gluon particle field surrounding the quark. These masses typically have very different values. Most of a hadron's mass comes from the gluons that bind the constituent quarks together, rather than from the quarks themselves. While gluons are inherently massless, they possess energy – more specifically, quantum chromodynamics binding energy (QCBE) – and it is this that contributes so greatly to the overall mass of the hadron (see mass in special relativity). For example, a proton has a mass of approximately , of which the rest mass of its three valence quarks only contributes about ; much of the remainder can be attributed to the field energy of the gluons (see chiral symmetry breaking). The Standard Model posits that elementary particles derive their masses from the Higgs mechanism, which is associated to the Higgs boson. It is hoped that further research into the reasons for the top quark's large mass of ~, almost the mass of a gold atom, might reveal more about the origin of the mass of quarks and other elementary particles. Size In QCD, quarks are considered to be point-like entities, with zero size. As of 2014, experimental evidence indicates they are no bigger than 10−4 times the size of a proton, i.e. less than 10−19 metres. Table of properties The following table summarizes the key properties of the six quarks. Flavor quantum numbers (isospin (I3), charm (C), strangeness (S, not to be confused with spin), topness (T), and bottomness (B′)) are assigned to certain quark flavors, and denote qualities of quark-based systems and hadrons. The baryon number (B) is + for all quarks, as baryons are made of three quarks. For antiquarks, the electric charge (Q) and all flavor quantum numbers (B, I3, C, S, T, and B′) are of opposite sign. Mass and total angular momentum (J; equal to spin for point particles) do not change sign for the antiquarks. Interacting quarks As described by quantum chromodynamics, the strong interaction between quarks is mediated by gluons, massless vector gauge bosons. Each gluon carries one color charge and one anticolor charge. 
In the standard framework of particle interactions (part of a more general formulation known as perturbation theory), gluons are constantly exchanged between quarks through a virtual emission and absorption process. When a gluon is transferred between quarks, a color change occurs in both; for example, if a red quark emits a red–antigreen gluon, it becomes green, and if a green quark absorbs a red–antigreen gluon, it becomes red. Therefore, while each quark's color constantly changes, their strong interaction is preserved. Since gluons carry color charge, they themselves are able to emit and absorb other gluons. This causes asymptotic freedom: as quarks come closer to each other, the chromodynamic binding force between them weakens. Conversely, as the distance between quarks increases, the binding force strengthens. The color field becomes stressed, much as an elastic band is stressed when stretched, and more gluons of appropriate color are spontaneously created to strengthen the field. Above a certain energy threshold, pairs of quarks and antiquarks are created. These pairs bind with the quarks being separated, causing new hadrons to form. This phenomenon is known as color confinement: quarks never appear in isolation. This process of hadronization occurs before quarks formed in a high energy collision are able to interact in any other way. The only exception is the top quark, which may decay before it hadronizes. Sea quarks Hadrons contain, along with the valence quarks () that contribute to their quantum numbers, virtual quark–antiquark () pairs known as sea quarks (). Sea quarks form when a gluon of the hadron's color field splits; this process also works in reverse in that the annihilation of two sea quarks produces a gluon. The result is a constant flux of gluon splits and creations colloquially known as "the sea". Sea quarks are much less stable than their valence counterparts, and they typically annihilate each other within the interior of the hadron. Despite this, sea quarks can hadronize into baryonic or mesonic particles under certain circumstances. Other phases of quark matter Under sufficiently extreme conditions, quarks may become "deconfined" out of bound states and propagate as thermalized "free" excitations in the larger medium. In the course of asymptotic freedom, the strong interaction becomes weaker at increasing temperatures. Eventually, color confinement would be effectively lost in an extremely hot plasma of freely moving quarks and gluons. This theoretical phase of matter is called quark–gluon plasma. The exact conditions needed to give rise to this state are unknown and have been the subject of a great deal of speculation and experimentation. An estimate puts the needed temperature at kelvin. While a state of entirely free quarks and gluons has never been achieved (despite numerous attempts by CERN in the 1980s and 1990s), recent experiments at the Relativistic Heavy Ion Collider have yielded evidence for liquid-like quark matter exhibiting "nearly perfect" fluid motion. The quark–gluon plasma would be characterized by a great increase in the number of heavier quark pairs in relation to the number of up and down quark pairs. It is believed that in the period prior to 10−6 seconds after the Big Bang (the quark epoch), the universe was filled with quark–gluon plasma, as the temperature was too high for hadrons to be stable. 
Given sufficiently high baryon densities and relatively low temperatures – possibly comparable to those found in neutron stars – quark matter is expected to degenerate into a Fermi liquid of weakly interacting quarks. This liquid would be characterized by a condensation of colored quark Cooper pairs, thereby breaking the local SU(3)c symmetry. Because quark Cooper pairs harbor color charge, such a phase of quark matter would be color superconductive; that is, color charge would be able to pass through it with no resistance.
Physical sciences
Fermions
null
25182
https://en.wikipedia.org/wiki/Quantization%20%28physics%29
Quantization (physics)
Quantization (in British English quantisation) is the systematic transition procedure from a classical understanding of physical phenomena to a newer understanding known as quantum mechanics. It is a procedure for constructing quantum mechanics from classical mechanics. A generalization involving infinite degrees of freedom is field quantization, as in the "quantization of the electromagnetic field", referring to photons as field "quanta" (for instance as light quanta). This procedure is basic to theories of atomic physics, chemistry, particle physics, nuclear physics, condensed matter physics, and quantum optics. Historical overview In 1901, when Max Planck was developing the distribution function of statistical mechanics to solve the ultraviolet catastrophe problem, he realized that the properties of blackbody radiation can be explained by the assumption that energy must come in countable fundamental units, i.e. that energy is not continuous but discrete. That is, a minimum unit of energy exists, and the relationship E = hν holds for the frequency ν. Here, h is called the Planck constant, which sets the scale of quantum mechanical effects. This assumption marks a fundamental change in the mathematical model of physical quantities. In 1905, Albert Einstein published a paper, "On a heuristic viewpoint concerning the emission and transformation of light", which explained the photoelectric effect in terms of quantized electromagnetic waves. The energy quantum referred to in this paper was later called the "photon". In July 1913, Niels Bohr used quantization to describe the spectrum of a hydrogen atom in his paper "On the constitution of atoms and molecules". These early theories were successful, but they were phenomenological. The French mathematician Henri Poincaré was the first to give a systematic and rigorous definition of what quantization is, in his 1912 paper "Sur la théorie des quanta". The term "quantum physics" was first used in Johnston's Planck's Universe in Light of Modern Physics (1931). Canonical quantization Canonical quantization develops quantum mechanics from classical mechanics. One introduces a commutation relation among canonical coordinates. Technically, one converts coordinates to operators, through combinations of creation and annihilation operators. The operators act on quantum states of the theory. The lowest energy state is called the vacuum state. Quantization schemes Even within the setting of canonical quantization, there is difficulty associated with quantizing arbitrary observables on the classical phase space. This is the ordering ambiguity: classically, the position and momentum variables x and p commute, but their quantum mechanical operator counterparts do not. Various quantization schemes have been proposed to resolve this ambiguity, of which the most popular is the Weyl quantization scheme. Nevertheless, the Groenewold–van Hove theorem dictates that no perfect quantization scheme exists. Specifically, if the quantizations of x and p are taken to be the usual position and momentum operators, then no quantization scheme can perfectly reproduce the Poisson bracket relations among the classical observables. See Groenewold's theorem for one version of this result. Covariant canonical quantization There is a way to perform a canonical quantization without having to resort to the non-covariant approach of foliating spacetime and choosing a Hamiltonian. 
This method is based upon a classical action, but is different from the functional integral approach. The method does not apply to all possible actions (for instance, actions with a noncausal structure or actions with gauge "flows"). It starts with the classical algebra of all (smooth) functionals over the configuration space. This algebra is quotiented by the ideal generated by the Euler–Lagrange equations. Then, this quotient algebra is converted into a Poisson algebra by introducing a Poisson bracket derivable from the action, called the Peierls bracket. This Poisson algebra is then ℏ-deformed in the same way as in canonical quantization. In quantum field theory, there is also a way to quantize actions with gauge "flows". It involves the Batalin–Vilkovisky formalism, an extension of the BRST formalism. Deformation quantization One of the earliest attempts at a natural quantization was Weyl quantization, proposed by Hermann Weyl in 1927. Here, an attempt is made to associate a quantum-mechanical observable (a self-adjoint operator on a Hilbert space) with a real-valued function on classical phase space. The position and momentum in this phase space are mapped to the generators of the Heisenberg group, and the Hilbert space appears as a group representation of the Heisenberg group. In 1946, H. J. Groenewold considered the product of a pair of such observables and asked what the corresponding function would be on the classical phase space. This led him to discover the phase-space star-product of a pair of functions. More generally, this technique leads to deformation quantization, where the ★-product is taken to be a deformation of the algebra of functions on a symplectic manifold or Poisson manifold. However, as a natural quantization scheme (a functor), Weyl's map is not satisfactory. For example, the Weyl map of the classical angular-momentum-squared is not just the quantum angular-momentum-squared operator, but it further contains a constant term 3ℏ²/2. (This extra offset is pedagogically significant, since it accounts for the nonvanishing angular momentum of the ground-state Bohr orbit in the hydrogen atom, even though the standard QM ground state of the atom has vanishing angular momentum.) As a mere representation change, however, Weyl's map is useful and important, as it underlies the alternate equivalent phase-space formulation of conventional quantum mechanics. Geometric quantization In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in. A more geometric approach to quantization, in which the classical phase space can be a general symplectic manifold, was developed in the 1970s by Bertram Kostant and Jean-Marie Souriau. The method proceeds in two stages. First, one constructs a "prequantum Hilbert space" consisting of square-integrable functions (or, more properly, sections of a line bundle) over the phase space. Here one can construct operators satisfying commutation relations corresponding exactly to the classical Poisson-bracket relations. On the other hand, this prequantum Hilbert space is too big to be physically meaningful. 
One then restricts to functions (or sections) depending on half the variables on the phase space, yielding the quantum Hilbert space. Path integral quantization A classical mechanical theory is given by an action with the permissible configurations being the ones which are extremal with respect to functional variations of the action. A quantum-mechanical description of the classical system can also be constructed from the action of the system by means of the path integral formulation. Other types Loop quantum gravity (loop quantization) Uncertainty principle (quantum statistical mechanics approach) Schwinger's quantum action principle
Physical sciences
Quantum mechanics
Physics
25198
https://en.wikipedia.org/wiki/Quaternary
Quaternary
The Quaternary ( ) is the current and most recent of the three periods of the Cenozoic Era in the geologic time scale of the International Commission on Stratigraphy (ICS), as well as the current and most recent of the twelve periods of the Phanerozoic eon. It follows the Neogene Period and spans from 2.58 million years ago to the present. The Quaternary Period is divided into two epochs: the Pleistocene (2.58 million years ago to 11.7 thousand years ago) and the Holocene (11.7 thousand years ago to today); a proposed third epoch, the Anthropocene, was rejected in 2024 by IUGS, the governing body of the ICS. The Quaternary is typically defined by the Quaternary glaciation, the cyclic growth and decay of continental ice sheets related to the Milankovitch cycles and the associated climate and environmental changes that they caused. Research history In 1759 Giovanni Arduino proposed that the geological strata of northern Italy could be divided into four successive formations or "orders" (). The term "quaternary" was introduced by Jules Desnoyers in 1829 for sediments of France's Seine Basin that clearly seemed to be younger than Tertiary Period rocks. The Quaternary Period follows the Neogene Period and extends to the present. The Quaternary covers the time span of glaciations classified as the Pleistocene, and includes the present interglacial time-period, the Holocene. This places the start of the Quaternary at the onset of Northern Hemisphere glaciation approximately 2.6 million years ago (mya). Prior to 2009, the Pleistocene was defined to be from 1.805 million years ago to the present, so the current definition of the Pleistocene includes a portion of what was, prior to 2009, defined as the Pliocene. Quaternary stratigraphers usually worked with regional subdivisions. From the 1970s, the International Commission on Stratigraphy (ICS) tried to make a single geologic time scale based on GSSP's, which could be used internationally. The Quaternary subdivisions were defined based on biostratigraphy instead of paleoclimate. This led to the problem that the proposed base of the Pleistocene was at 1.805 million years ago, long after the start of the major glaciations of the northern hemisphere. The ICS then proposed to abolish use of the name Quaternary altogether, which appeared unacceptable to the International Union for Quaternary Research (INQUA). In 2009, it was decided to make the Quaternary the youngest period of the Cenozoic Era with its base at 2.588 mya and including the Gelasian Stage, which was formerly considered part of the Neogene Period and Pliocene Epoch. This was later revised to 2.58 mya. The Anthropocene was proposed as a third epoch as a mark of the anthropogenic impact on the global environment starting with the Industrial Revolution, or about 200 years ago. The Anthropocene was rejected as a geological epoch in 2024 by the International Union of Geological Sciences (IUGS), the governing body of the ICS. Geology The 2.58 million years of the Quaternary represents the time during which recognisable humans existed. Over this geologically short time period there has been relatively little change in the distribution of the continents due to plate tectonics. The Quaternary geological record is preserved in greater detail than that for earlier periods. 
The major geographical changes during this time period included the emergence of the straits of Bosphorus and Skagerrak during glacial epochs, which respectively turned the Black Sea and Baltic Sea into fresh water lakes, followed by their flooding (and return to salt water) by rising sea level; the periodic filling of the English Channel, forming a land bridge between Britain and the European mainland; the periodic closing of the Bering Strait, forming the land bridge between Asia and North America; and the periodic flash flooding of Scablands of the American Northwest by glacial water. The current extent of Hudson Bay, the Great Lakes and other major lakes of North America are a consequence of the Canadian Shield's readjustment since the last ice age; different shorelines have existed over the course of Quaternary time. Climate The climate was one of periodic glaciations with continental glaciers moving as far from the poles as 40 degrees latitude. Glaciation took place repeatedly during the Quaternary Ice age – a term coined by Schimper in 1839 that began with the start of the Quaternary about 2.58 Mya and continues to the present day. In 1821, a Swiss engineer, Ignaz Venetz, presented an article in which he suggested the presence of traces of the passage of a glacier at a considerable distance from the Alps. This idea was initially disputed by another Swiss scientist, Louis Agassiz, but when he undertook to disprove it, he ended up affirming his colleague's hypothesis. A year later, Agassiz raised the hypothesis of a great glacial period that would have had long-reaching general effects. This idea gained him international fame and led to the establishment of the Glacial Theory. In time, thanks to the refinement of geology, it has been demonstrated that there were several periods of glacial advance and retreat and that past temperatures on Earth were very different from today. In particular, the Milankovitch cycles of Milutin Milankovitch are based on the premise that variations in incoming solar radiation are a fundamental factor controlling Earth's climate. During this time, substantial glaciers advanced and retreated over much of North America and Europe, parts of South America and Asia, and all of Antarctica. Flora and fauna There was a major extinction of large mammals globally during the Late Pleistocene Epoch. Many forms such as sabre-toothed cats, mammoths, mastodons, glyptodonts, etc., became extinct worldwide. Others, including horses, camels and American cheetahs became extinct in North America. The Great Lakes formed and giant mammals thrived in parts of North America and Eurasia not covered in ice. These mammals became extinct when the glacial period ended about 11,700 years ago. Modern humans evolved about 315,000 years ago. During the Quaternary Period, mammals, flowering plants, and insects dominated the land.
Physical sciences
Geological periods
null
25202
https://en.wikipedia.org/wiki/Quantum%20mechanics
Quantum mechanics
Quantum mechanics is a fundamental theory that describes the behavior of nature at and below the scale of atoms. It is the foundation of all quantum physics, which includes quantum chemistry, quantum field theory, quantum technology, and quantum information science. Quantum mechanics can describe many systems that classical physics cannot. Classical physics can describe many aspects of nature at an ordinary (macroscopic and (optical) microscopic) scale, but is not sufficient for describing them at very small submicroscopic (atomic and subatomic) scales. Most theories in classical physics can be derived from quantum mechanics as an approximation, valid at large (macroscopic/microscopic) scale. Quantum systems have bound states that are quantized to discrete values of energy, momentum, angular momentum, and other quantities, in contrast to classical systems where these quantities can be measured continuously. Measurements of quantum systems show characteristics of both particles and waves (wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle). Quantum mechanics arose gradually from theories to explain observations that could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein's 1905 paper, which explained the photoelectric effect. These early attempts to understand microscopic phenomena, now known as the "old quantum theory", led to the full development of quantum mechanics in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, Paul Dirac and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical entity called the wave function provides information, in the form of probability amplitudes, about what measurements of a particle's energy, momentum, and other physical properties may yield. Overview and fundamental concepts Quantum mechanics allows the calculation of properties and behaviour of physical systems. It is typically applied to microscopic systems: molecules, atoms and sub-atomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms, but its application to human beings raises philosophical problems, such as Wigner's friend, and its application to the universe as a whole remains speculative. Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy. For example, the refinement of quantum mechanics for the interaction of light and matter, known as quantum electrodynamics (QED), has been shown to agree with experiment to within 1 part in 1012 when predicting the magnetic properties of an electron. A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. This is known as the Born rule, named after physicist Max Born. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. 
Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an experiment is performed to measure it. This is the best the theory can do; it cannot say for certain where the electron will be found. The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another. One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between measurable quantities. The most famous form of this uncertainty principle says that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of its momentum. Another consequence of the mathematical rules of quantum mechanics is the phenomenon of quantum interference, which is often illustrated with the double-slit experiment. In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. This behavior is known as wave–particle duality. In addition to light, electrons, atoms, and molecules are all found to exhibit the same dual behavior when fired towards a double slit. Another non-classical phenomenon predicted by quantum mechanics is quantum tunnelling: a particle that goes up against a potential barrier can cross it, even if its kinetic energy is smaller than the maximum of the potential. In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enabling radioactive decay, nuclear fusion in stars, and applications such as scanning tunnelling microscopy, tunnel diode and tunnel field-effect transistor. When quantum systems interact, the result can be the creation of quantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought". Quantum entanglement enables quantum computing and is part of quantum communication protocols, such as quantum key distribution and superdense coding. Contrary to popular misconception, entanglement does not allow sending signals faster than light, as demonstrated by the no-communication theorem. 
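The two-slit interference and the particle-by-particle build-up of the pattern described above can be sketched numerically. The following is a minimal far-field model with illustrative values for the wavelength, slit separation, and screen distance (none of these come from the text): two point-like slits contribute amplitudes whose squared sum gives the detection probability, and individual hits are then drawn from that probability density.

```python
import numpy as np

# Minimal sketch: far-field two-slit interference.  lam, d, and L are illustrative values.
rng = np.random.default_rng(0)

lam = 633e-9        # wavelength (m)
d = 50e-6           # slit separation (m)
L = 1.0             # slit-to-screen distance (m)

x = np.linspace(-0.05, 0.05, 2000)           # positions on the screen (m)
phase = np.pi * d * x / (lam * L)            # half the phase difference between the slits
amplitude = np.cos(phase)                    # sum of two equal amplitudes, up to a global factor
intensity = amplitude**2                     # Born rule: probability density ~ |amplitude|^2
p = intensity / intensity.sum()

# individual particles are detected at discrete points drawn from this density;
# a histogram of many hits reproduces the bright and dark fringes
hits = rng.choice(x, size=20000, p=p)
counts, edges = np.histogram(hits, bins=80)
print(counts)
```

The fringe spacing in this toy model is lam*L/d, about 1.3 cm for the values above, so roughly eight fringes fall on the simulated screen.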
Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory provides. A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed and they have shown results incompatible with the constraints imposed by local hidden variables. It is not possible to present these concepts in more than a superficial way without introducing the mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also linear algebra, differential equations, group theory, and other more advanced subjects. Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples. Mathematical formulation In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector belonging to a (separable) complex Hilbert space . This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys , and it is well-defined up to a complex number of modulus 1 (the global phase), that is, and represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions , while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors with the usual inner product. Physical quantities of interestposition, momentum, energy, spinare represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue is non-degenerate and the probability is given by , where is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by , where is the projector onto its associated eigenspace. In the continuous case, these formulas give instead the probability density. After the measurement, if result was obtained, the quantum state is postulated to collapse to , in the non-degenerate case, or to , in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. 
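The recipe just described (a normalized state vector, a Hermitian observable, Born-rule probabilities for its eigenvalues) can be made concrete with a short numerical sketch. The two-dimensional state and the observable below are arbitrary illustrative choices, not anything specified in the text.

```python
import numpy as np

# Minimal sketch: Born-rule probabilities for measuring a Hermitian observable
# on a normalized state in a two-dimensional Hilbert space.
psi = np.array([3, 4j], dtype=complex)
psi = psi / np.linalg.norm(psi)                 # states are normalized: <psi|psi> = 1

A = np.array([[0, 1],
              [1, 0]], dtype=complex)           # a Hermitian observable (here the Pauli-x matrix)

eigvals, eigvecs = np.linalg.eigh(A)            # eigenvalues are the possible outcomes
for val, vec in zip(eigvals, eigvecs.T):
    prob = abs(np.vdot(vec, psi))**2            # Born rule: |<eigenvector|psi>|^2
    print(f"outcome {val:+.0f} with probability {prob:.3f}")

# the expectation value <psi|A|psi> equals the probability-weighted mean of the outcomes
print("expectation value:", np.vdot(psi, A @ psi).real)
```

After a measurement yielding a non-degenerate eigenvalue, the post-measurement state is the corresponding eigenvector, in line with the collapse postulate stated above.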
In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the many-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled so that the original quantum system ceases to exist as an independent entity (see Measurement in quantum mechanics). Time evolution of a quantum state The time evolution of a quantum state is described by the Schrödinger equation: Here denotes the Hamiltonian, the observable corresponding to the total energy of the system, and is the reduced Planck constant. The constant is introduced so that the Hamiltonian is reduced to the classical Hamiltonian in cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called the correspondence principle. The solution of this differential equation is given by The operator is known as the time-evolution operator, and has the crucial property that it is unitary. This time evolution is deterministic in the sense that – given an initial quantum state – it makes a definite prediction of what the quantum state will be at any later time. Some wave functions produce probability distributions that are independent of time, such as eigenstates of the Hamiltonian. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as an s orbital (Fig. 1). Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom. Even the helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment, admitting no solution in closed form. However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy. Another approximation method applies to systems for which quantum mechanics produces only small deviations from classical behavior. These deviations can then be computed based on the classical motion. Uncertainty principle One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum. Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator and momentum operator do not commute, but rather satisfy the canonical commutation relation: Given a quantum state, the Born rule lets us compute expectation values for both and , and moreover for powers of them. 
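The time evolution described by the Schrödinger equation can be illustrated with the unitary propagator U(t) = exp(-iHt/ħ), a standard result. The sketch below uses an arbitrary two-level Hamiltonian (the matrix entries are illustrative, and ħ is set to 1) and checks both that U(t) is unitary and that the evolution of the state, and hence of all outcome probabilities, is deterministic.

```python
import numpy as np

# Minimal sketch: |psi(t)> = U(t)|psi(0)> with U(t) = exp(-i H t / hbar), hbar = 1.
hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                     # Hermitian, so U(t) is unitary

def propagator(H, t):
    """exp(-i H t / hbar) computed from the eigendecomposition of the Hermitian matrix H."""
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)      # start in the first basis state

for t in (0.0, 0.5, 1.0, 2.0):
    U = propagator(H, t)
    psi_t = U @ psi0
    # unitarity: U†U = I, so the total probability stays equal to 1 at all times
    print(f"t={t:3.1f}  ||U†U - I|| = {np.linalg.norm(U.conj().T @ U - np.eye(2)):.1e}"
          f"  P(state 1) = {abs(psi_t[0])**2:.3f}")
```

An eigenstate of H would only acquire a phase under this evolution, which is why such "static" wave functions give time-independent probability distributions, as noted above.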
Defining the uncertainty for an observable by a standard deviation, we have and likewise for the momentum: The uncertainty principle states that Either standard deviation can in principle be made arbitrarily small, but not both simultaneously. This inequality generalizes to arbitrary pairs of self-adjoint operators and . The commutator of these two operators is and this provides the lower bound on the product of standard deviations: Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to an factor) to taking the derivative according to the position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentum is replaced by , and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times . Composite systems and entanglement When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. For example, let and be two quantum systems, with Hilbert spaces and , respectively. The Hilbert space of the composite system is then If the state for the first system is the vector and the state for the second system is , then the state of the composite system is Not all states in the joint Hilbert space can be written in this form, however, because the superposition principle implies that linear combinations of these "separable" or "product states" are also valid. For example, if and are both possible states for system , and likewise and are both possible states for system , then is a valid joint state that is not separable. States that are not separable are called entangled. If the state for a composite system is entangled, it is impossible to describe either component system or system by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system. Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory. As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic. Equivalence between formulations There are many mathematically equivalent formulations of quantum mechanics. 
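The tensor-product construction and the loss of information in a reduced density matrix described above can be shown directly for two qubits. The sketch below is illustrative: it builds a separable product state and an entangled Bell state with the Kronecker product, then traces out the second qubit.

```python
import numpy as np

# Minimal sketch: composite two-qubit states via the tensor (Kronecker) product, and the
# reduced density matrix of one qubit obtained by tracing out the other.
zero = np.array([1, 0], dtype=complex)
one  = np.array([0, 1], dtype=complex)

product_state = np.kron(zero, one)                              # separable: |0> ⊗ |1>
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)   # entangled

def reduced_density_matrix_A(psi):
    """Trace out subsystem B of a two-qubit pure state |psi>."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)         # indices (a, b, a', b')
    return np.einsum('abcb->ac', rho)                           # sum over b = b'

print("separable state, rho_A =\n", reduced_density_matrix_A(product_state).real)
print("Bell state, rho_A =\n", reduced_density_matrix_A(bell).real)
# for the Bell state rho_A = I/2: measurements on the first qubit alone look completely
# random, even though the joint state is pure; the correlations live only in the whole
```

Knowing both single-qubit reduced matrices here is not enough to reconstruct the Bell state, which is exactly the loss of information mentioned above.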
One of the oldest and most common is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger). An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics. Symmetries and conservation laws The Hamiltonian is known as the generator of time evolution, since it defines a unitary time-evolution operator for each value of . From this relation between and , it follows that any observable that commutes with will be conserved: its expectation value will not change over time. This statement generalizes, as mathematically, any Hermitian operator can generate a family of unitary operators parameterized by a variable . Under the evolution generated by , any observable that commutes with will be conserved. Moreover, if is conserved by evolution under , then is conserved under the evolution generated by . This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law. Examples Free particle The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy: The general solution of the Schrödinger equation is given by which is a superposition of all possible plane waves , which are eigenstates of the momentum operator with momentum . The coefficients of the superposition are , which is the Fourier transform of the initial quantum state . It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states. Instead, we can consider a Gaussian wave packet: which has Fourier transform, and therefore momentum distribution We see that as we make smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle. As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant. Particle in a box The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region. For the one-dimensional case in the direction, the time-independent Schrödinger equation may be written With the differential operator defined by the previous equation is evocative of the classic kinetic energy analogue, with state in this case having energy coincident with the kinetic energy of the particle. 
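Before turning to the analytic box solutions, the spreading of the free-particle Gaussian wave packet described above can be checked numerically. The sketch below is illustrative (grid size, initial width, and mean momentum are arbitrary choices): each plane-wave component simply acquires the phase exp(-iħk²t/2m), which is exact for a free particle, and the position and momentum spreads are then read off.

```python
import numpy as np

# Minimal sketch: exact free-particle evolution of a Gaussian wave packet in momentum
# space; the position spread grows with time while the momentum spread stays constant.
hbar, m = 1.0, 1.0
N, Ltot = 4096, 400.0
x = np.linspace(-Ltot/2, Ltot/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

a, k0 = 1.0, 2.0                                   # initial width and mean momentum (illustrative)
psi0 = np.exp(-x**2 / (4 * a**2)) * np.exp(1j * k0 * x)
psi0 /= np.sqrt(np.sum(abs(psi0)**2) * dx)

def spreads(psi):
    prob_x = abs(psi)**2 * dx
    mean_x = np.sum(x * prob_x)
    sx = np.sqrt(np.sum((x - mean_x)**2 * prob_x))
    phi = np.fft.fft(psi)
    prob_k = abs(phi)**2 / np.sum(abs(phi)**2)
    mean_k = np.sum(k * prob_k)
    sp = hbar * np.sqrt(np.sum((k - mean_k)**2 * prob_k))
    return sx, sp

for t in (0.0, 2.0, 5.0, 10.0):
    phi_t = np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2 * m))
    psi_t = np.fft.ifft(phi_t)
    sx, sp = spreads(psi_t)
    print(f"t={t:4.1f}  Δx={sx:.3f}  Δp={sp:.3f}  Δx·Δp={sx*sp:.3f} (≥ hbar/2 = 0.5)")
```

The initial packet saturates the uncertainty bound (Δx·Δp = ħ/2), and the product grows as the packet spreads, consistent with the uncertainty principle discussed earlier.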
The general solutions of the Schrödinger equation for the particle in a box are or, from Euler's formula, The infinite potential walls of the box determine the values of and at and where must be zero. Thus, at , and . At , in which cannot be zero as this would conflict with the postulate that has norm 1. Therefore, since , must be an integer multiple of , This constraint on implies a constraint on the energy levels, yielding A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. Harmonic oscillator As in the classical case, the potential for the quantum harmonic oscillator is given by This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by where Hn are the Hermite polynomials and the corresponding energy levels are This is another example illustrating the discretization of energy for bound states. Mach–Zehnder interferometer The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement. We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector that is a superposition of the "lower" path and the "upper" path , that is, for complex . In order to respect the postulate that we require that . Both beam splitters are modelled as the unitary matrix , which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of , or be reflected to the other path with a probability amplitude of . The phase shifter on the upper arm is modelled as the unitary matrix , which means that if the photon is on the "upper" path it will gain a relative phase of , and it will stay unchanged if it is in the lower path. A photon that enters the interferometer from the left will then be acted upon with a beam splitter , a phase shifter , and another beam splitter , and so end up in the state and the probabilities that it will be detected at the right or at the top are given respectively by One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities. 
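The Mach–Zehnder interferometer lends itself to a very short numerical sketch, since everything is 2×2 linear algebra. The beam-splitter and phase-shifter matrices below follow one common convention and are illustrative (the article's own matrices are not reproduced here); with this choice the two detection probabilities come out as sin²(φ/2) and cos²(φ/2).

```python
import numpy as np

# Minimal sketch: the Mach–Zehnder interferometer as 2x2 matrices acting on the
# (lower path, upper path) amplitudes.  One common beam-splitter convention is assumed.
def B():
    """50/50 beam splitter: stay on the same path or be reflected, each with amplitude 1/sqrt(2)."""
    return np.array([[1, 1j],
                     [1j, 1]], dtype=complex) / np.sqrt(2)

def P(phi):
    """Phase shifter acting only on the 'upper' path."""
    return np.array([[1, 0],
                     [0, np.exp(1j * phi)]], dtype=complex)

lower = np.array([1, 0], dtype=complex)       # photon entering from the left

for phi in (0.0, np.pi / 2, np.pi):
    out = B() @ P(phi) @ B() @ lower          # beam splitter, phase shifter, beam splitter
    p_lower, p_upper = abs(out)**2            # Born rule for the two detectors
    print(f"phi={phi:4.2f}  P(lower)={p_lower:.3f}  P(upper)={p_upper:.3f}")
# with this convention P(lower) = sin^2(phi/2) and P(upper) = cos^2(phi/2), so measuring
# the two detection rates lets one estimate the phase shift phi
```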
It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases, there will be no interference between the paths anymore, and the probabilities are given by , independently of the phase . From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths. Applications Quantum mechanics has had enormous success in explaining many of the features of our universe, with regard to small-scale and discrete quantities and interactions which cannot be explained by classical methods. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics. In many aspects, modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA. Relation to other scientific theories Classical mechanics The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers. One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization. When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator. Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems. Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations. 
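The decoherence mechanism sketched above, in which a system becomes entangled with its environment and its superpositions come to look like probabilistic mixtures, can be illustrated with a toy model. The per-qubit coupling angle and the environment sizes below are illustrative choices: each environment qubit is nudged slightly when the system is in one branch of the superposition, and the off-diagonal (coherence) term of the reduced system state shrinks as more environment qubits become involved.

```python
import numpy as np

# Minimal sketch: decoherence as entanglement with an environment of n qubits.
# The system qubit starts in (|0> + |1>)/sqrt(2); the environment ends up in |env0>
# or |env1> depending on the system branch, and the coherence is 0.5 * <env1|env0>.
theta = 0.6                                    # per-qubit "nudge" angle (illustrative)
e0 = np.array([1.0, 0.0])                      # environment qubit if the system is |0>
e1 = np.array([np.cos(theta), np.sin(theta)])  # environment qubit if the system is |1>

for n in (0, 1, 4, 8, 16):
    env0, env1 = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        env0 = np.kron(env0, e0)
        env1 = np.kron(env1, e1)
    # joint state (|0>⊗|env0> + |1>⊗|env1>)/sqrt(2); tracing out the environment gives
    # rho_system = 1/2 [[1, <env1|env0>], [<env0|env1>, 1]]
    coherence = 0.5 * np.dot(env1, env0)
    print(f"n={n:2d} environment qubits: off-diagonal of rho = {coherence:.4f}")
# the populations stay 1/2 each, but the coherence decays like cos(theta)**n, so the
# superposition becomes indistinguishable from a classical probabilistic mixture
```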
Quantum coherence is not typically evident at macroscopic scales, though at temperatures approaching absolute zero quantum behavior may manifest macroscopically. Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics. Special relativity and electrodynamics Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical Coulomb potential. Likewise, in a Stern–Gerlach experiment, a charged particle is modeled as a quantum system, while the background magnetic field is described classically. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles. Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. Relation to general relativity Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. 
This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon. One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force. Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately 1.616×10−35 m, and so lengths shorter than the Planck length are not physically meaningful in LQG. Philosophical implications Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics." According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics." The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation". According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality". Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations were adopted by Nobel laureates in quantum physics, including Bohr, Heisenberg, Schrödinger, Feynman, and Zeilinger as well as 21st-century researchers in quantum foundations. Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. 
In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox. In 1964, John Bell showed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distance systems, now known as Bell inequalities, that can be violated by entangled particles. Since then several experiments have been performed to obtain these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism. Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem. Everett's many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This is a consequence of removing the axiom of the collapse of the wave packet. All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Several attempts have been made to make sense of this and derive the Born rule, with no consensus on whether they have been successful. Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas, and QBism was developed some years later. History Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803 English polymath Thomas Young described the famous double-slit experiment. This experiment played a major role in the general acceptance of the wave theory of light. During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. While the early conception of atoms from Greek philosophy had been that they were indivisible unitsthe word "atom" deriving from the Greek for "uncuttable" the 19th century saw the formulation of hypotheses about subatomic structure. 
One important discovery in that regard was Michael Faraday's 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday's work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons. The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation. The word quantum derives from the Latin, meaning "how great" or "how much". According to Planck, quantities of energy could be thought of as divided into "elements" whose size (E) would be proportional to their frequency (ν): , where h is the Planck constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Niels Bohr then developed Planck's ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency. In his paper "On the Quantum Theory of Radiation", Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation, which became the basis of the laser. This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye's work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld's extension of the Bohr model to include special-relativistic effects. In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics. Born introduced the probabilistic interpretation of Schrödinger's wave function in July 1926. 
Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927. By 1930, quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors and superfluids.
Physical sciences
Physics
null