215419
https://en.wikipedia.org/wiki/Giant%20squid
Giant squid
The giant squid (Architeuthis dux) is a species of deep-ocean dwelling squid in the family Architeuthidae. It can grow to a tremendous size, offering an example of abyssal gigantism: recent estimates put the maximum size at around for females and for males, from the posterior fins to the tip of the two long tentacles. This makes it longer than the colossal squid at an estimated , but substantially lighter, as the tentacles make up most of the length. The mantle of the giant squid is about long (more for females, less for males), and the length of the squid excluding its tentacles (but including head and arms) rarely exceeds . Claims of specimens measuring or more have not been scientifically documented. The number of different giant squid species has been debated, but genetic research suggests that only one species exists. In 2004, a Japanese research team obtained the first images of a living animal in its habitat. Taxonomy The closest relatives of the giant squid are thought to be the four obscure species of "neosquid" in the family Neoteuthidae, each of which belongs to its own monotypic genus, as with the giant squid. Together, both families comprise the superfamily Architeuthoidea. Range and habitat The giant squid is widespread, occurring in all of the world's oceans. It is usually found near continental and island slopes from the North Atlantic Ocean, especially Newfoundland, Norway, the northern British Isles, Spain and the oceanic islands of the Azores and Madeira, to the South Atlantic around southern Africa, the North Pacific around Japan, and the southwestern Pacific around New Zealand and Australia. Specimens are rare in tropical and polar latitudes. The vertical distribution of giant squid is incompletely known, but data from trawled specimens and sperm whale diving behavior suggest it spans a large range of depths, possibly . Morphology and anatomy Like all squid, a giant squid has a mantle (torso), eight arms, and two longer tentacles (the longest known tentacles of any cephalopod). The arms and tentacles account for much of the squid's great length, making it much lighter than its chief predator, the sperm whale. Scientifically documented specimens have masses of hundreds, rather than thousands, of kilograms. The inside surfaces of the arms and tentacles are lined with hundreds of subspherical suction cups, in diameter, each mounted on a stalk. The circumference of these suckers is lined with sharp, finely serrated rings of chitin. The perforation of these teeth and the suction of the cups serve to attach the squid to its prey. It is common to find circular scars from the suckers on or close to the head of sperm whales that have attacked giant squid. Each tentacular club is divided into three regions—the carpus ("wrist"), manus ("hand") and dactylus ("finger"). The carpus has a dense cluster of cups, in six or seven irregular, transverse rows. The manus is broader, closer to the end of the club, and has enlarged suckers in two medial rows. The dactylus is the tip. The bases of all the arms and tentacles are arranged in a circle surrounding the animal's single, parrot-like beak, as in other cephalopods. Giant squid have small fins at the rear of their mantles used for locomotion. Like other cephalopods, they are propelled by jet—by pulling water into the mantle cavity, and pushing it through the siphon, in gentle, rhythmic pulses. They can also move quickly by expanding the cavity to fill it with water, then contracting muscles to jet water through the siphon. 
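The jet cycle just described (draw water into the mantle cavity, then force it out through the siphon) can be made concrete with a rough momentum estimate: average thrust is mass flow rate times exhaust velocity. The sketch below is illustrative only; the cavity volume, expulsion time, and siphon area are invented figures, not measurements of a real squid.

```python
# Rough jet-thrust estimate: thrust = mass flow rate * exhaust velocity.
RHO_SEAWATER = 1026.0  # kg/m^3, typical seawater density

def jet_thrust(cavity_volume_m3: float, expel_time_s: float, siphon_area_m2: float) -> float:
    """Average thrust from expelling one mantle-cavity volume through the siphon."""
    flow_rate = cavity_volume_m3 / expel_time_s      # m^3/s of expelled water
    mass_flow = RHO_SEAWATER * flow_rate             # kg/s leaving the siphon
    exhaust_velocity = flow_rate / siphon_area_m2    # m/s through the siphon opening
    return mass_flow * exhaust_velocity              # newtons

# Hypothetical example: 30 L expelled in 1 s through a 20 cm^2 siphon.
print(f"{jet_thrust(0.030, 1.0, 0.0020):.0f} N")  # ~462 N with these invented inputs
```

Since exhaust velocity is flow rate divided by siphon area, thrust grows with the square of the flow rate, which is consistent with the contrast above between gentle rhythmic pulses for cruising and a rapid full-cavity contraction for quick escapes.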
Giant squid breathe using two large gills inside the mantle cavity. The circulatory system is closed, which is a distinct characteristic of cephalopods. Like other squid, they contain dark ink. The giant squid has a sophisticated nervous system and complex brain, attracting great interest from scientists. It also has the largest eyes of any living creature except perhaps the colossal squid—up to at least in diameter, with a pupil (only the extinct ichthyosaurs are known to have had larger eyes). Large eyes can better detect light (including bioluminescent light), which is scarce in deep water. The giant squid probably cannot see color, but it can probably discern small differences in tone, which is important in the low-light conditions of the deep ocean. Giant squid and some other large squid species maintain neutral buoyancy in seawater through an ammonium chloride solution which is found throughout their bodies and is lighter than seawater. This differs from the method of flotation used by most fish, which involves a gas-filled swim bladder. The solution tastes somewhat like salty liquorice/salmiak and makes giant squid unattractive for general human consumption (a toy calculation of this buoyancy arithmetic is sketched after this passage). Like all cephalopods, giant squid use organs called statocysts to sense their orientation and motion in water. The age of a giant squid can be determined by "growth rings" in the statocyst's statolith, similar to determining the age of a tree by counting its rings. Much of what is known about giant squid age is based on estimates of the growth rings and from undigested beaks found in the stomachs of sperm whales. Size The giant squid is the second-largest mollusc and one of the largest of all extant invertebrates. It is exceeded only by the colossal squid, Mesonychoteuthis hamiltoni, which may have a mantle nearly twice as long. Several extinct cephalopods, such as the Cretaceous coleoids Yezoteuthis and Haboroteuthis, and the Ordovician nautiloid Endoceras may have grown even larger. Although the Cretaceous Tusoteuthis, with its long mantle, was once considered to grow to a size close to that of the giant squid (over including arms), this genus is now considered doubtful. The largest specimen probably belonged to the genus Enchoteuthis, estimated to have short arms, with a total length of only . Giant squid size, particularly total length, has often been exaggerated. Reports of specimens reaching and even exceeding are widespread, but no specimens approaching this size have been scientifically documented. According to giant squid expert Steve O'Shea, such lengths were likely achieved by greatly stretching the two tentacles like elastic bands. Based on the examination of 130 specimens and of beaks found inside sperm whales, giant squids' mantles are not known to exceed . Including the head and arms, but excluding the tentacles, the length very rarely exceeds . Maximum total length, when measured relaxed post mortem, is estimated at or for females and for males from the posterior fins to the tip of the two long tentacles. Giant squid exhibit sexual dimorphism. Maximum weight is estimated at for females and for males. Reproductive cycle Little is known about the reproductive cycle of giant squid. They are thought to reach sexual maturity at about three years old; males reach sexual maturity at a smaller size than females. Females produce large quantities of eggs, sometimes more than , that average long and wide.
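Before the text turns to reproductive anatomy, here is the toy buoyancy calculation flagged above. It only illustrates the mechanism (a lighter-than-seawater ammonium chloride solution offsetting denser tissue); all three densities are assumed round numbers, not measured values for Architeuthis.

```python
# Toy neutral-buoyancy calculation. All densities are illustrative assumptions (kg/m^3).
RHO_SEAWATER = 1026.0  # typical seawater
RHO_TISSUE = 1065.0    # assumed squid tissue, denser than seawater
RHO_NH4CL = 1010.0     # assumed ammonium chloride solution, lighter than seawater

# Neutral buoyancy requires the body's average density to equal seawater's:
#   f * RHO_NH4CL + (1 - f) * RHO_TISSUE = RHO_SEAWATER
# Solving for f, the fraction of body volume occupied by the solution:
f = (RHO_TISSUE - RHO_SEAWATER) / (RHO_TISSUE - RHO_NH4CL)
print(f"Solution fraction of body volume: {f:.0%}")  # ~71% with these inputs
```

The takeaway is only that a fluid slightly lighter than seawater must occupy a large share of body volume to offset denser tissue, consistent with the solution being found throughout the body.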
Females have a single median ovary in the rear end of the mantle cavity and paired, convoluted oviducts, where mature eggs pass, exiting through the oviducal glands, then through the nidamental glands. As in other squid, these glands produce a gelatinous material used to keep the eggs together once they are laid. In males, as with most other cephalopods, the single, posterior testis produces sperm that move into a complex system of glands that manufacture the spermatophores. These are stored in the elongate sac, or Needham's sac, that terminates in the penis from which they are expelled during mating. The penis is prehensile, over long, and extends from inside the mantle. The two ventral arms on a male giant squid are hectocotylized, which means they are specialized to facilitate the fertilization of the female's eggs. How the sperm is transferred to the egg mass is much debated, as giant squid lack the hectocotylus used for reproduction in many other cephalopods. It may be transferred in sacs of spermatophores, called spermatangia, which the male injects into the female's arms. This is suggested by a female specimen recently found in Tasmania, having a small subsidiary tendril attached to the base of each arm. Post-larval juveniles have been discovered in surface waters off New Zealand, with plans to capture more and maintain them in an aquarium to learn more about the creature. Young giant squid specimens were found off the coast of southern Japan in 2013 and confirmed through genetic analysis. Another juvenile, approximately 3.7 metres long, was encountered and filmed alive in the harbour in the Japanese city of Toyama on 24 December 2015; after being filmed and viewed by a large number of spectators, including a diver who entered the water to film the squid up close, it was guided out of the harbour into Toyama Bay by the diver. Genetics Analysis of the mitochondrial DNA of giant squid individuals from all over the world has found that there is little variation between individuals across the globe (just 181 differing genetic base pairs out of 20,331). This suggests that there is only a single species of giant squid in the world. Squid larvae may be dispersed by ocean currents across vast distances. Ecology Feeding Recent studies have shown giant squid feed on deep-sea fish, such as the orange roughy (Hoplostethus atlanticus), and other squid species. They catch prey using the two tentacles, gripping it with serrated sucker rings on the ends. Then they bring it toward the powerful beak, and shred it with the radula (tongue with small, file-like teeth) before it reaches the esophagus. They are believed to be solitary hunters, as only individual giant squid have been caught in fishing nets. Although the majority of giant squid caught by trawl in New Zealand waters have been associated with the local hoki (Macruronus novaezelandiae) fishery, hoki do not feature in the squid's diet. This suggests giant squid and hoki prey on the same animals. Predators and potential cannibalism The known predators of adult giant squid include sperm whales, pilot whales, southern sleeper sharks, and in some regions killer whales. Juveniles may fall prey to other large deep sea predators. Because sperm whales are skilled at locating giant squid, scientists have tried to observe them to study the squid. Giant squid have also recently been observed apparently stealing food from each other; in mid-to-late October 2016, a giant squid washed ashore in Galicia, Spain.
The squid had been photographed alive by a tourist named Javier Ondicol shortly before its death, and examination of its corpse by the Coordinators for the Study and Protection of Marine Species (CEPESMA) indicates that the squid was attacked and mortally wounded by another giant squid, losing parts of its fins and an eye and suffering damage to its mantle and one of its gills. The intact nature of the specimen indicates that the giant squid managed to escape its rival by slowly retreating to shallow water, where it died of its wounds. The incident is the second such attack documented among Architeuthis in Spain, with the other occurring in Villaviciosa. Evidence in the form of giant squid stomach contents containing beak fragments from other giant squid in Tasmania also supports the theory that the species is at least occasionally cannibalistic. Alternatively, such squid-on-squid attacks may be a result of competition for prey. These traits are seen in the Humboldt squid as well, indicating that cannibalism in large squid may be more common than originally thought. Population Scientists have been unable to determine the worldwide population of giant squid to any degree of accuracy. Estimates have been put together based on the number of giant squid beaks found in the stomachs of deceased sperm whales, a known predator of the giant squid, and the better-known population of sperm whales. Based on such observations, it has been estimated that sperm whales consume between 4.3 and 131 million giant squid annually, implying that the giant squid population is likewise well into the millions, but more precise estimates have been elusive (a rough version of this arithmetic appears in the sketch below). Species The taxonomy of the giant squid, as with many cephalopod genera, has long been debated. Lumpers and splitters may propose as many as seventeen species or as few as one. The broadest list is:
Architeuthis dux, Atlantic giant squid
Architeuthis (Loligo) hartingii
Architeuthis japonica
Architeuthis kirkii
Architeuthis (Megateuthis) martensii, North Pacific giant squid
Architeuthis physeteris
Architeuthis sanctipauli, southern giant squid
Architeuthis (Steenstrupia) stockii
Architeuthis (Loligo) bouyeri
Architeuthis clarkei
Architeuthis (Plectoteuthis) grandis
Architeuthis (Megaloteuthis) harveyi
Architeuthis longimanus
Architeuthis monachus?
Architeuthis nawaji
Architeuthis princeps
Architeuthis (Dubioteuthis) physeteris
Architeuthis titan
Architeuthis verrilli
It is unclear if these are distinct species, as no genetic or physical basis for distinguishing between them has yet been proposed. In the 1984 FAO Species Catalogue of the Cephalopods of the World, Roper, et al. wrote: In Cephalopods: A World Guide (2000), Mark Norman writes: In March 2013, researchers at the University of Copenhagen suggested that, based on DNA research, there is only one species: Timeline Aristotle, who lived in the fourth century BC, described a large squid, which he called teuthus, distinguishing it from the smaller squid, the teuthis. He mentions, "of the calamaries, the so-called teuthus is much bigger than the teuthis; for teuthi [plural of teuthus] have been found as much as five ells [5.7 metres] long". Pliny the Elder, living in the first century AD, also described a gigantic squid in his Natural History, with the head "as big as a cask", arms long, and carcass weighing .
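The sketch below is the rough version of the Population arithmetic flagged above. Both constants are hypothetical placeholders rather than published figures; the point is only how a predator census times a per-predator consumption rate puts a floor on prey numbers.

```python
# Hypothetical beak-count population bound. Neither constant is a real estimate.
SPERM_WHALE_POPULATION = 360_000      # assumed worldwide sperm whale count
SQUID_EATEN_PER_WHALE_PER_YEAR = 50   # assumed rate inferred from beaks per sampled stomach

eaten_per_year = SPERM_WHALE_POPULATION * SQUID_EATEN_PER_WHALE_PER_YEAR
# A prey population must at least replace what predators remove each year,
# so annual consumption puts a floor on giant squid recruitment.
print(f"{eaten_per_year:,} giant squid consumed per year")  # 18,000,000 with these inputs
```

Plausible ranges for either input shift the product by orders of magnitude, which is why the estimate quoted above spans 4.3 to 131 million.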
Tales of giant squid have been common among mariners since ancient times, and may have led to the Norse legend of the kraken, a tentacled sea monster as large as an island capable of engulfing and sinking any ship. Japetus Steenstrup, the describer of Architeuthis, suggested a giant squid was the species described as a sea monk to the Danish king Christian III circa 1550. The Lusca of the Caribbean and Scylla in Greek mythology may also derive from giant squid sightings. Eyewitness accounts of other sea monsters like the sea serpent are also thought to be mistaken interpretations of giant squid. Nevertheless, the historian Otto Latva, who has studied the past interactions between humans and giant squid, has pointed out that many old stories about sea monsters were not associated with giant squid until the late 19th century. Latva has proposed that the giant squid was monsterized in the 19th century by natural historians and other writers. Regarding seafarers' relationship with giant squid, he explains it as being pragmatic: they did not perceive giant squid as monsters, but as sea animals that could be utilized in various ways. Steenstrup wrote a number of papers on giant squid in the 1850s. He first used the term "Architeuthus" (this was the spelling he chose) in a paper in 1857. A portion of a giant squid was secured by the French corvette Alecton in 1861, leading to wider recognition of the genus in the scientific community. From 1870 to 1880, many squid were stranded on the shores of Newfoundland. For example, a specimen washed ashore in Thimble Tickle Bay, Newfoundland, on 2 November 1878; its mantle was reported to be long, with one tentacle long, and it was estimated as weighing . Many of these specimens were not preserved, often being processed into manure or animal feed. In 1873, a squid "attacked" a minister and a young boy in a dory near Bell Island, Newfoundland. Many strandings also occurred in New Zealand during the late 19th century. Although strandings continue to occur sporadically throughout the world, none have been as frequent as those at Newfoundland and New Zealand in the 19th century. It is not known why giant squid become stranded on shore, but it may be because the distribution of deep, cold water where squid live is temporarily altered. Many scientists who have studied squid mass strandings believe they are cyclical and predictable. The length of time between strandings is not known, but was proposed to be 90 years by Architeuthis specialist Frederick Aldrich. Aldrich used this value to correctly predict a relatively small stranding that occurred between 1961 and 1968. In 2004, another giant squid, later named "Archie", was caught off the coast of the Falkland Islands by a fishing trawler. It was long and was sent to the Natural History Museum in London to be studied and preserved. It was put on display on 1 March 2006 at the Darwin Centre. The find of such a large, complete specimen is very rare, as most specimens are in a poor condition, having washed up dead on beaches or been retrieved from the stomachs of dead sperm whales. Researchers undertook a painstaking process to preserve the body. It was transported to England on ice aboard the trawler; then it was defrosted, which took about four days. The major difficulty was that thawing the thick mantle took much longer than the tentacles. To prevent the tentacles from rotting, scientists covered them in ice packs, and bathed the mantle in water. 
Then they injected the squid with a formol-saline solution to prevent rotting. The creature is now on show in a glass tank at the Darwin Centre of the Natural History Museum. In December 2005, the Melbourne Aquarium in Australia paid A$100,000 for the intact body of a giant squid, preserved in a giant block of ice, which had been caught by fishermen off the coast of New Zealand's South Island that year. The number of known giant squid specimens was close to 700 in 2011, and new ones are reported each year. Around 30 of these specimens are exhibited at museums and aquaria worldwide. The Museo del Calamar Gigante in Luarca, Spain, had by far the largest collection on public display, but many of the museum's specimens were destroyed during a storm in February 2014. The search for a live Architeuthis specimen includes attempts to find live young, including larvae. The larvae closely resemble those of Nototodarus and Onykia, but are distinguished by the shape of the mantle attachment to the head, the tentacle suckers, and the beaks. Images and video of live animals By the turn of the 21st century, the giant squid remained one of the few extant megafauna to have never been photographed alive, either in the wild or in captivity. Marine biologist and author Richard Ellis described it as "the most elusive image in natural history". In 1993, an image purporting to show a diver with a live giant squid (identified as Architeuthis dux) was published in the book European Seashells. However, the animal in this photograph was a sick or dying Onykia robusta, not a giant squid. The first footage of live (larval) giant squid ever captured on film was in 2001. The footage was shown on Chasing Giants: On the Trail of the Giant Squid on the Discovery Channel. First images of live adult The first image of a live mature giant squid was taken on 15 January 2002, on Goshiki beach, Amino Cho, Kyoto Prefecture, Japan. The animal, which measured about in mantle length and in total length, was found near the water's surface. It was captured and tied to a quay, where it died overnight. The specimen was identified by Koutarou Tsuchiya of the Tokyo University of Fisheries. It is on display at the National Science Museum of Japan. First observations in the wild The first photographs of a live giant squid in its natural habitat were taken on 30 September 2004, by Tsunemi Kubodera (National Science Museum of Japan) and Kyoichi Mori (Ogasawara Whale Watching Association). Their teams had worked together for nearly two years to accomplish this. They used a five-ton fishing boat and only two crew members. The images were created on their third trip to a known sperm whale hunting ground south of Tokyo, where they had dropped a line baited with squid and shrimp. The line also held a camera and a flash. After over twenty tries that day, a giant squid attacked the lure and snagged its tentacle. The camera took over 500 photos before the squid managed to break free after four hours. The squid's tentacle remained attached to the lure. Later DNA tests confirmed the animal as a giant squid. On 27 September 2005, Kubodera and Mori released the photographs to the world. The photo sequence, taken at a depth of off Japan's Ogasawara Islands, shows the squid homing in on the baited line and enveloping it in "a ball of tentacles". The researchers were able to locate the likely general location of giant squid by closely tailing the movements of sperm whales.
According to Kubodera, "we knew that they fed on the squid, and we knew when and how deep they dived, so we used them to lead us to the squid". Kubodera and Mori reported their observations in the journal Proceedings of the Royal Society. Among other things, the observations demonstrate actual hunting behaviors of adult Architeuthis, a subject on which there had been much speculation. The photographs showed an aggressive hunting pattern by the baited squid, leading to it impaling a tentacle on the bait ball's hooks. This may disprove the theory that the giant squid is a drifter which eats whatever floats by, rarely moving so as to conserve energy. It seems the species has a much more aggressive feeding technique. First video of live adult in natural habitat In November 2006, American explorer and diver Scott Cassell led an expedition to the Gulf of California with the aim of filming a giant squid in its natural habitat. The team employed a novel filming method: using a Humboldt squid carrying a specially designed camera clipped to its fin. The camera-bearing squid caught on film what was claimed to be a giant squid, with an estimated length of , engaging in predatory behavior. The footage aired a year later on a History Channel program, MonsterQuest: Giant Squid Found. Cassell subsequently distanced himself from this documentary, claiming that it contained multiple factual and scientific errors. In July 2012, a crew from television networks NHK and Discovery Channel captured what was described as "the first-ever footage of a live giant squid in its natural habitat". The footage was revealed on an NHK Special on 13 January 2013, and was shown on Discovery Channel's show Monster Squid: The Giant Is Real on 27 January 2013, and on Giant Squid: Filming the Impossible – Natural World Special on BBC Two. To capture the footage, the team aboard OceanX's vessel MV Alucia traveled to the Ogasawara Islands, south of Tokyo, and utilized the ship's crewed submersibles. The squid was about long and was missing its feeding tentacles, likely from a failed attack by a sperm whale. It was drawn into viewing range both by artificial bioluminescence created to mimic a panicking Atolla jellyfish and by a Thysanoteuthis rhombus (diamond squid) used as bait. The giant squid was filmed feeding for about 23 minutes by Tsunemi Kubodera until it departed. The technique of luring the squid with bioluminescence and viewing it from quiet, unobtrusive platforms was described by Edith Widder, a member of the expedition. Second video of giant squid in natural habitat On 19 June 2019, in an expedition run by the National Oceanic and Atmospheric Administration (NOAA), known as the Journey to Midnight, biologists Nathan J. Robinson and Edith Widder captured a video of a juvenile giant squid at a depth of 759 meters (2,490 feet) in the Gulf of Mexico. Michael Vecchione, a NOAA Fisheries zoologist, confirmed that the captured footage was that of the genus Architeuthis, and that the individual filmed measured somewhere between . Other sightings Videos of live giant squids have been occasionally captured near the surface since the 2012 sighting, with one of these individuals being guided back into the open ocean after appearing in Toyama Harbor on 24 December 2015. The majority of these sightings were of sick or dying individuals that had come up to the surface. Aquarium keeping The giant squid cannot be kept in aquariums due to its hard-to-reach habitat, body size, and special needs.
In 2022, a live specimen was found off the coast of Japan, and an attempt was made to transport it to the Echizen Matsushima Aquarium in the city of Sakai. Cultural depictions Representations of the giant squid have been known from early legends of the kraken through books such as Moby-Dick and Jules Verne's 1870 novel Twenty Thousand Leagues Under the Seas on to other novels such as Ian Fleming's Dr. No, Peter Benchley's Beast (adapted as a film called The Beast), and Michael Crichton's Sphere (adapted as a film). In particular, the image of a giant squid locked in battle with a sperm whale is a common one, although the squid is the whale's prey and not an equal combatant.
Biology and health sciences
Cephalopods
Animals
215428
https://en.wikipedia.org/wiki/Motacillidae
Motacillidae
The wagtails, longclaws, and pipits are a family, Motacillidae, of small passerine birds with medium to long tails. Around 70 species occur in five genera. The longclaws are entirely restricted to the Afrotropics, and the wagtails are predominantly found in Europe, Africa, and Asia, with two species migrating and breeding in Alaska. The pipits have the most cosmopolitan distribution, being found mostly in the Old World, but occurring also in the Americas and oceanic islands such as New Zealand and the Falklands. Two African species, the yellow-breasted pipit and Sharpe's longclaw, are sometimes placed in a separate sixth genus, Hemimacronyx, which is closely related to the longclaws. Most motacillids are ground-feeding insectivores of slightly open country. They occupy almost all available habitats, from the shore to high mountains. Wagtails prefer wetter habitats than the pipits. A few species use forests, including the forest wagtail, and other species use forested mountain streams, such as the grey wagtail or the mountain wagtail. Motacillids take a wide range of invertebrate prey: insects are the most commonly taken, but spiders, worms, and small aquatic molluscs and arthropods are also eaten. All species seem to be fairly catholic in their diets, and the most commonly taken prey for any particular species or population usually reflects local availability. With the exception of the forest wagtail, they nest on the ground, laying up to six speckled eggs. Description Wagtails, pipits, and longclaws are slender, small to medium-sized passerines, ranging from in length, with short necks and long tails. They have long, pale legs with long toes and claws, particularly the hind toe, which can be up to 4 cm in length in some longclaws. No sexual dimorphism in size is seen. Overall, the robust longclaws are larger than the pipits and wagtails. Longclaws can weigh as much as 64 g, as in Fülleborn's longclaw, whereas the weight range for pipits and wagtails is 15–31 g, with the smallest species being perhaps the yellowish pipit. The plumage of most pipits is dull brown and reminiscent of the larks, although some species have brighter plumages, particularly the golden pipit of north-east Africa. The adult male longclaws have brightly coloured undersides. The wagtails often have striking plumage, including grey, black, white, and yellow. Phylogeny A molecular phylogenetic study published in 2019 sampled 56 of the 68 recognised species in the family Motacillidae and found that the species formed six major clades. The pipit genus Anthus was paraphyletic with respect to the longclaw genus Macronyx. The striped pipit (Anthus lineiventris) and the African rock pipit (Anthus crenatus) were nested with the longclaws in Macronyx. The type species of Anthus, the meadow pipit, was nested with the other Palearctic species in Clade 2.
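The paraphyly result above means the named genus Anthus does not contain all descendants of its members' common ancestor. A minimal sketch of that test on an invented toy tree follows; this is not the published phylogeny, and the taxon names are placeholders.

```python
# Toy monophyly check. A group is monophyletic only if it equals the complete
# leaf set of some node in the tree (i.e., it forms a clade).
tree = {
    "root": ["anthus_A", "clade_X"],
    "clade_X": ["macronyx_1", "anthus_lineiventris"],
}

def leaves(node):
    """Collect the leaf taxa under a node; leaves are nodes with no children."""
    children = tree.get(node)
    if children is None:
        return {node}
    return set().union(*(leaves(c) for c in children))

def is_monophyletic(group):
    """True if the group exactly matches the leaf set of some node."""
    all_nodes = set(tree) | {c for children in tree.values() for c in children}
    return any(leaves(n) == set(group) for n in all_nodes)

# The Anthus placeholders are split across clades, so the genus is paraphyletic:
print(is_monophyletic({"anthus_A", "anthus_lineiventris"}))    # False
print(is_monophyletic({"macronyx_1", "anthus_lineiventris"}))  # True (= clade_X)
```

Because some Anthus species sit inside the clade containing Macronyx, no single node's descendants correspond to Anthus alone, which is the situation the 2019 study reports.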
Species and genera
Family: Motacillidae
Genus Dendronanthus
Forest wagtail, Dendronanthus indicus
Genus Motacilla: typical wagtails
Western yellow wagtail, Motacilla flava – possibly paraphyletic
Eastern yellow wagtail, Motacilla tschutschensis – possibly paraphyletic
Citrine wagtail, Motacilla citreola – possibly paraphyletic
Cape wagtail, Motacilla capensis
Madagascar wagtail, Motacilla flaviventris
Grey wagtail, Motacilla cinerea
Mountain wagtail, Motacilla clara
White wagtail, Motacilla alba – possibly paraphyletic
Pied wagtail, Motacilla alba yarrellii
Black-backed wagtail, Motacilla alba lugens
African pied wagtail, Motacilla aguimp
Mekong wagtail, Motacilla samveasnae
Japanese wagtail, Motacilla grandis
White-browed wagtail, Motacilla madaraspatensis
São Tomé shorttail, Motacilla bocagii
Genus Tmetothylacus
Golden pipit, Tmetothylacus tenellus
Genus Macronyx: longclaws
Cape longclaw, Macronyx capensis
Yellow-throated longclaw, Macronyx croceus
Fülleborn's longclaw, Macronyx fuellebornii
Sharpe's longclaw, Macronyx sharpei – sometimes placed in genus Hemimacronyx
Abyssinian longclaw, Macronyx flavicollis
Pangani longclaw, Macronyx aurantiigula
Rosy-throated longclaw, Macronyx ameliae
Grimwood's longclaw, Macronyx grimwoodi
Genus Anthus: typical pipits
Richard's pipit, Anthus richardi
Paddyfield pipit, Anthus rufulus
Australian pipit, Anthus australis
New Zealand pipit, Anthus novaeseelandiae
African pipit, Anthus cinnamomeus
Mountain pipit, Anthus hoeschi
Blyth's pipit, Anthus godlewskii
Tawny pipit, Anthus campestris
Long-billed pipit, Anthus similis
Nicholson's pipit, Anthus nicholsoni
Wood pipit, Anthus nyassae
Buffy pipit, Anthus vaalensis
Plain-backed pipit, Anthus leucophrys
Long-legged pipit, Anthus pallidiventris
Meadow pipit, Anthus pratensis
Tree pipit, Anthus trivialis
Olive-backed pipit, Anthus hodgsoni
Pechora pipit, Anthus gustavi
Rosy pipit, Anthus roseatus
Red-throated pipit, Anthus cervinus
Buff-bellied pipit, Anthus rubescens
Water pipit, Anthus spinoletta
European rock pipit, Anthus petrosus
Nilgiri pipit, Anthus nilghiriensis
Upland pipit, Anthus sylvanus
Berthelot's pipit, Anthus berthelotii
Striped pipit, Anthus lineiventris
African rock pipit, Anthus crenatus
Short-tailed pipit, Anthus brachyurus
Bushveld pipit, Anthus caffer
Sokoke pipit, Anthus sokokensis
Malindi pipit, Anthus melindae
Yellow-breasted pipit, Anthus chloris – sometimes placed in genus Hemimacronyx
Alpine pipit, Anthus gutturalis
Sprague's pipit, Anthus spragueii
Yellowish pipit, Anthus chii
Short-billed pipit, Anthus furcatus
Pampas pipit, Anthus chacoensis
Correndera pipit, Anthus correndera
South Georgia pipit, Anthus antarcticus
Ochre-breasted pipit, Anthus nattereri
Hellmayr's pipit, Anthus hellmayri
Paramo pipit, Anthus bogotensis
Madanga, Anthus ruficollis
Biology and health sciences
Passerida
Animals
215506
https://en.wikipedia.org/wiki/Red%20kangaroo
Red kangaroo
The red kangaroo (Osphranter rufus) is the largest of all kangaroos, the largest terrestrial mammal native to Australia, and the largest extant marsupial. It is found across mainland Australia, except for the more fertile areas, such as southern Western Australia, the eastern and southeastern coasts, and the rainforests along the northern coast. Taxonomy The initial description of the species by A.G. Desmarest was published in 1822. The type location was given as an unknown location west of the Blue Mountains. The author assigned the new species to the genus Kangurus. In 1842, Gould reassigned the species to the genus Osphranter, a taxon later submerged as a subgenus of Macropus. A taxonomic restructure in 2015 in Taxonomy of Australian Mammals by Jackson and Groves promoted Osphranter back to the genus level, redefining the red kangaroo, among others, as a species within the genus Osphranter. This was further supported by genetic analysis in 2019. Description This species is a very large kangaroo with long, pointed ears and a square-shaped muzzle (snout/nose). They are sexually dimorphic; males have short, red-brown fur, fading to pale buff below and on the limbs, while females are smaller than males and are blue-grey with a brown tinge and pale grey below, although arid zone females are coloured more like males. It has two forelimbs with small claws, two muscular hind-limbs, which are used for jumping, and a strong tail which is often used to create a tripod when standing upright. Males grow up to a head-and-body length of with a tail that adds a further to the total length. Adult males are referred to by Australians as "Big Reds". Females are considerably smaller, with a head-and-body length of and tail length of . Females can weigh from , while males typically weigh about twice as much at . The average red kangaroo stands approximately tall to the top of the head in upright posture. Large mature males can stand more than tall, with the largest confirmed individual standing around tall and weighing . The red kangaroo maintains its internal temperature at a point of homeostasis about using a variety of physical, physiological, and behavioural adaptations. These include having an insulating layer of fur, being less active and staying in the shade when temperatures are high, panting, sweating, and licking its forelimbs. They have an exceptional ability to survive in extreme temperatures using a cooling mechanism where they can increase their panting and sweating rates in high temperatures to cool their bodies. To survive in harsh conditions and conserve energy, red kangaroos can enter a state of torpor. Red kangaroos also have a high tolerance for consuming plants high in salt content, and can survive for long periods without water by reabsorbing water from their urine in the kidneys, minimizing water loss. They can go for extended periods without drinking, meeting moisture requirements from consumed vegetation. The red kangaroo's range of vision is approximately 300° (324° with about 25° overlap), due to the position of its eyes. Locomotion The red kangaroo's legs work much like a rubber band, with the Achilles tendon stretching as the animal comes down, then releasing its energy to propel the animal up and forward, enabling the characteristic bouncing locomotion. They can reach speeds of around . The males can cover in one leap while reaching heights of , though the average is . Ecology and habitat The red kangaroo ranges throughout western and central Australia.
Its range encompasses scrubland, grassland, and desert habitats. It typically inhabits open habitats with some trees for shade. Red kangaroos are capable of conserving enough water and selecting enough fresh vegetation to survive in an arid environment. The kangaroo's kidneys efficiently concentrate urine, particularly during summer. Red kangaroos primarily eat green vegetation, particularly fresh grasses and forbs, and can get enough even when most plants look brown and dry. One study of kangaroos in Central Australia found that green grass makes up 75–95% of the diet, with Eragrostis setifolia dominating at 54%. This grass continues to be green into the dry season. Kangaroos primarily consumed this species along with Enneapogon avenaceus in western New South Wales, where the two comprised as much as 21–69% of the red kangaroo's diet according to one study. During dry times, kangaroos search for green plants by staying on open grassland and near watercourses. While grasses and forbs are preferred, red kangaroos will also eat certain species of chenopods, like Bassia diacantha and Maireana pyramidata, and will even browse shrubs when their favoured foods are scarce. However, some perennial chenopods, such as the round-leaf chenopod Kochia, are avoided even when abundant. At times, red kangaroos congregate in large numbers; in areas with much forage, these groups can number as much as 1,500 individuals. Red kangaroos are mostly crepuscular and nocturnal, resting in the shade during the day. However, they sometimes move about during the day. Red kangaroos rely on small saltbushes or mulga bushes for shelter in extreme heat rather than rocky outcrops or caves. Grazing takes up most of their daily activities. Like most kangaroo species, they are mostly sedentary, staying within a relatively well-defined home range. However, great environmental changes can cause them to travel great distances. Kangaroos in New South Wales have weekly home ranges of , with the larger areas belonging to adult males. When forage is poor and rainfall patchy, kangaroos will travel to more favourable feeding grounds. Another study of kangaroos in central Australia found that most of them stay close to remaining vegetation but disperse to find fresh plants after it rains. The red kangaroo is too big to be subject to significant non-human predation. They can use their robust legs and clawed feet to defend themselves from attackers with kicks and blows. However, dingoes and birds of prey are potential predators of joeys, and a pack of dingoes or a pair of wedge-tailed eagles can occasionally kill adults. Additionally, saltwater crocodiles can prey on kangaroos. Kangaroos are adept swimmers, and often flee into waterways if threatened by a terrestrial predator. If pursued into the water, a kangaroo may use its forepaws to hold the predator underwater so as to drown it. Behaviour Red kangaroos live in groups of 2–4 members. The most common groups are females and their young. Larger groups can be found in densely populated areas, and females are usually with a male. Membership of these groups is very flexible, and males (boomers) are not territorial, fighting only over females (flyers) that come into heat. Males develop proportionately much larger shoulders and arms than females. Most agonistic interactions occur between young males, which engage in ritualised fighting known as boxing. They usually stand up on their hind limbs and attempt to push their opponent off balance by jabbing him or locking forearms.
If the fight escalates, they will begin to kick each other. Using their tail to support their weight, they deliver kicks with their powerful hind legs. Compared to other kangaroo species, fights between red kangaroo males tend to involve more wrestling. Fights establish dominance relationships among males, and determine who gets access to estrous females. Dominant males display more agonistic and sexual behaviours until they are overthrown. Displaced males live alone and avoid close contact with others. Reproduction The red kangaroo breeds all year round. The females have the unusual ability to delay the birth of their baby until their previous joey has left the pouch. This is known as embryonic diapause. Copulation may last 25 minutes. The red kangaroo has the typical reproductive system of a kangaroo. The neonate emerges after only 33 days. Usually only one young is born at a time. It is blind, hairless, and only a few centimetres long. Its hind legs are mere stumps; it instead uses its more developed forelegs to climb its way through the thick fur on its mother's abdomen into the pouch, which takes about three to five minutes. Once in the pouch, it fastens onto one of the two teats and starts to feed. Almost immediately, the mother's sexual cycle starts again. Another egg descends into the uterus and she becomes sexually receptive. Then, if she mates and a second egg is fertilised, its development is temporarily halted. Meanwhile, the neonate in the pouch grows rapidly. After approximately 190 days, the baby (called a joey) is sufficiently large and developed to make its full emergence out of the pouch, after sticking its head out for a few weeks until it eventually feels safe enough to fully emerge. From then on, it spends increasing time in the outside world and eventually, after around 235 days, it leaves the pouch for the last time. While the young joey will permanently leave the pouch at around 235 days old, it will continue to suckle until it reaches about 12 months of age. A doe may first reproduce as early as 18 months of age and as late as five years during drought, but normally she is two and a half years old before she begins to breed. The female red kangaroo is usually permanently pregnant except on the day she gives birth; however, she has the ability to freeze the development of an embryo until the previous joey is able to leave the pouch. This is known as embryonic diapause, and will occur in times of drought and in areas with poor food sources. The composition of the milk produced by the mother varies according to the needs of the joey. In addition, red kangaroo mothers may "have up to three generations of offspring simultaneously; a young-at-foot suckling from an elongated teat, a young in the pouch attached to a second teat and a blastula in arrested development in the uterus". The red kangaroo has also been observed to engage in alloparental care, a behaviour in which a female may adopt another female's joey. This is a common parenting behaviour seen in many other animal species like wolves, elephants and fathead minnows. Relationship with humans The red kangaroo is an abundant species and has even benefited from the spread of agriculture and creation of man-made waterholes. However, competition with livestock and rabbits poses a threat. They are also sometimes shot by farmers as pests, although in Queensland and New South Wales a permit is required.
Kangaroos dazzled by headlights or startled by engine noise often leap in front of vehicles, severely damaging or destroying smaller or unprotected vehicles. The risk of harm to vehicle occupants is greatly increased if the windscreen is the point of impact. As a result, "kangaroo crossing" signs are commonplace in Australia. Peak times for kangaroo/vehicle crashes are between 5:00 PM and 10:00 PM, in winter, and after extended dry-weather spells. Commercial use Like all Australian wildlife, the red kangaroo is protected by legislation, but it is so numerous that there is regulated harvest of its hide and meat. Hunting permits and commercial harvesting are controlled under nationally approved management plans, which aim to maintain red kangaroo populations and manage them as a renewable resource. In 2023, over half a million red kangaroos were harvested across all of Australia. Harvesting of kangaroos is controversial, particularly due to the animal's popularity. In the year 2000, 1,173,242 animals were killed. In 2009 the government put a limit of 1,611,216 for the number of red kangaroos available for commercial use. The kangaroo industry is worth about A$270 million each year, and employs over 4000 people. The kangaroos provide meat for both humans and pet food. Kangaroo meat is very lean with only about 2% fat. Their skins are used for leather.
Biology and health sciences
Diprotodontia
Animals
215515
https://en.wikipedia.org/wiki/Longclaw
Longclaw
The longclaws are a genus, Macronyx, of small African passerine birds in the family Motacillidae. Longclaws are slender, often colorful, ground-feeding insectivores of open country. They are ground nesters, laying up to four speckled eggs. They are named for their unusually long hind claws, which are thought to help in walking on grass. There are only between 10,000 and 19,000 Sharpe's longclaws left in Kenya. The genus Macronyx was introduced by the English naturalist William Swainson in 1827 with the Cape longclaw as the type species. The name combines the Classical Greek makros, "long" or "great", and onux, "claw". Species list The genus contains eight species:
Cape longclaw, Macronyx capensis
Yellow-throated longclaw, Macronyx croceus
Fülleborn's longclaw, Macronyx fuellebornii
Sharpe's longclaw, Macronyx sharpei
Abyssinian longclaw, Macronyx flavicollis
Pangani longclaw, Macronyx aurantiigula
Rosy-throated longclaw, Macronyx ameliae
Grimwood's longclaw, Macronyx grimwoodi
Biology and health sciences
Passerida
Animals
215567
https://en.wikipedia.org/wiki/Pink
Pink
Pink is a pale tint of red, the color of the pink flower. It was first used as a color name in the late 17th century. According to surveys in Europe and the United States, pink is the color most often associated with charm, politeness, sensitivity, tenderness, sweetness, childhood, femininity, and romance. A combination of pink and white is associated with innocence, whereas a combination of pink and black is linked to eroticism and seduction. In the 21st century, pink is seen as a symbol of femininity, though it has not always been seen this way. In the 1920s, light red, which is similar to pink, was seen as a color that reflected masculinity. In nature and culture Etymology and definitions The color pink is named after the flowers, pinks, flowering plants in the genus Dianthus, and derives from the frilled edge of the flowers. The verb "to pink" dates from the 14th century and means "to decorate with a perforated or punched pattern" (possibly from German picken, "to peck"). It has survived to the current day in pinking shears, hand-held scissors that cut a zig-zagged line to prevent fraying. History, art and fashion The color pink has been described in literature since ancient times. In the Odyssey, written in approximately 800 BCE, Homer wrote "Then, when the child of morning, rosy-fingered dawn appeared..." Roman poets also described the color. Roseus is the Latin word meaning "rosy" or "pink." Lucretius used the word to describe the dawn in his epic poem On the Nature of Things (De rerum natura). Pink was not a common color in the fashion of the Middle Ages; nobles usually preferred brighter reds, such as crimson. However, it did appear in women's fashion and religious art. In the 13th and 14th centuries, in works by Cimabue and Duccio, the Christ child was sometimes portrayed dressed in pink, the color associated with the body of Christ. In the high Renaissance painting the Madonna of the Pinks by Raphael, the Christ child is presenting a pink flower to the Virgin Mary. The pink was a symbol of marriage, showing a spiritual marriage between the mother and child. During the Renaissance, pink was mainly used for the flesh color of white faces and hands. The pigment commonly used for this was called light cinabrese; it was a mixture of the red earth pigment called sinopia, or Venetian red, and a white pigment called Bianco San Genovese, or lime white. In his famous 15th century manual on painting, Il Libro Dell'Arte, Cennino Cennini described it this way: "This pigment is made from the loveliest and lightest sinopia that is found and is mixed and mulled with St. John's white, as it is called in Florence; and this white is made from thoroughly white and thoroughly purified lime. And when these two pigments have been thoroughly mulled together (that is, two parts cinabrese and the third white), make little loaves of them like half walnuts and leave them to dry. When you need some, take however much of it seems appropriate. And this pigment does you great credit if you use it for painting faces, hands, and nudes on walls..." 18th century Pink was particularly championed by Madame de Pompadour (1721–1764), the mistress of King Louis XV of France, who wore combinations of pale blue and pink, and had a particular tint of pink made for her by the Sèvres porcelain factory, created by adding nuances of blue, black and yellow.
While pink was quite evidently the color of seduction in the portraits made by George Romney of Emma, Lady Hamilton, the future mistress of Admiral Horatio Nelson, in the late 18th century, it had the completely opposite meaning in the portrait of Sarah Barrett Moulton painted by Thomas Lawrence in 1794. In this painting, it symbolized childhood, innocence and tenderness. Sarah Moulton was just eleven years of age when the picture was painted, and died the following year. 19th century In 19th century England, pink ribbons or decorations were often worn by young boys; boys were simply considered small men, and while men in England wore red uniforms, boys wore pink. In fact the clothing for children in the 19th century was almost always white, since, before the invention of chemical dyes, clothing of any color would quickly fade when washed in boiling water. Queen Victoria was painted in 1850 with her seventh child and third son, Prince Arthur, who wore white and pink. In late nineteenth-century France, Impressionist painters working in a pastel color palette sometimes depicted women wearing the color pink, such as Edgar Degas' image of ballet dancers or Mary Cassatt's images of women and children. 20th century - present A dress parade, held in 1949, at the famous Waldorf-Astoria Hotel in New York, caused a stir among attendees due to the vibrant pink tones in the dresses and garments. The journalists and critics of the time, seeking to know Mexican designer Ramón Valdiosera's inspiration, asked him about the origin of the color. The artist simply replied that pink was already part of Mexican culture, which the New York fashion critic Perle Mesta then described as Mexican Pink. The first inauguration of Dwight D. Eisenhower in 1953, when Eisenhower's wife Mamie Eisenhower wore a pink dress as her inaugural gown, is thought to have been a key turning point in the association of pink as a color associated with girls. Mamie's strong liking of pink led to the public association with pink being a color that "ladylike women wear." The 1957 American musical Funny Face also played a role in cementing the color's association with women. In the 20th century, pinks became bolder, brighter, and more assertive, partly because of the invention of chemical dyes that did not fade. The pioneer in the creation of the new wave of pinks was the Italian designer Elsa Schiaparelli (1890–1973), who was aligned with the artists of the surrealist movement, including Jean Cocteau. In 1931 she created a new variety of the color, called shocking pink, made by mixing magenta with a small amount of white. She launched a perfume called Shocking, sold in a bottle in the shape of a woman's torso, said to be modelled on that of Mae West. Her fashions, co-designed with artists like Cocteau, featured the new pinks. In Nazi Germany in the 1930s and 1940s, inmates of Nazi concentration camps who were accused of homosexuality were forced to wear a pink triangle. Because of this, the pink triangle has become a symbol of the modern gay rights movement. The transition to pink as a sexually differentiating color for girls occurred gradually, through the selective process of the marketplace, in the 1930s and 40s. In the 1920s, some groups had described pink as a masculine color, an equivalent to red, which was considered a men's color, but in a lighter shade for boys. But stores nonetheless found that people were increasingly choosing to buy pink for girls, and blue for boys, until this became an accepted norm in the 1940s.
Science and nature Optics In optics, the word "pink" can refer to any of the pale shades of colors between bluish red and red in hue, of medium to high lightness, and of low to moderate saturation. Although pink is generally considered a tint of red, the colors of most tints of pink are slightly bluish, and lie between red and magenta. A few variations of pink, such as salmon color, lean toward orange. Sunrises and sunsets As a ray of white sunlight travels through the atmosphere, some of the colors are scattered out of the beam by air molecules and airborne particles. This is called Rayleigh scattering. Colors with a shorter wavelength, such as blue and green, scatter more strongly, and are removed from the light that finally reaches the eye. At sunrise and sunset, when the path of the sunlight through the atmosphere to the eye is longest, the blue and green components are removed almost completely, leaving the longer wavelength orange, red and pink light. The remaining pinkish sunlight can also be scattered by cloud droplets and other relatively large particles, which give the sky above the horizon a pink or reddish glow. Biology Pink coloration of meat and seafood Raw beef is red, because the muscles of vertebrate animals, such as cows and pigs, contain a protein called myoglobin, which binds oxygen and iron atoms. When beef is cooked, the myoglobin proteins undergo oxidation, and gradually turn from red to pink to brown; that is, from rare to medium to well-done. Pork contains less myoglobin than beef and therefore is less red; when heated, it changes from pinkish-red to less pink to tan or white. Ham, though it contains myoglobin like beef, undergoes a different transformation. Traditional hams, such as prosciutto, are made by taking the hind leg or thigh of a pig, covering it with sea salt, which removes the moisture content, and then letting it dry or cure for as long as two years. The salt (sodium nitrate) permits the ham to retain its original pink color, even when dried out. Supermarket hams are made by a different and faster process; they are brined, or infused with a salt-water solution, containing sodium nitrite, which transfers nitric oxide, which bonds with the myoglobin to form the traditional pink cured ham color. The shells and flesh of crustaceans such as crabs, lobsters and shrimp contain a pink carotenoid pigment called astaxanthin. Their shells, naturally blue-green, turn pink or red when cooked. The flesh of the salmon also contains astaxanthin, which makes it pink. Farm-bred salmon are sometimes fed these pigments to improve their pinkness, and it is sometimes also used to enhance the color of egg yolks. Plants and flowers Pink is one of the most common colors of flowers; it serves to attract the insects and birds necessary for pollination and perhaps also to deter predators. The color comes from natural pigments called anthocyanins, which also provide the pink in raspberries. Pigments - Pinke In the 17th century, the word pink or pinke was also used to describe a yellowish pigment, which was mixed with blue colors to yield greenish colors. Thomas Jenner's A Book of Drawing, Limning, Washing (1652) categorises "Pink & blew bice" amongst the greens (p. 38), and specifies several admixtures of greenish colors made with pink—e.g. "Grasse-green is made of Pink and Bice, it is shadowed with Indigo and Pink … French-green of Pink and Indico [shadowed with] Indico" (pp. 38–40).
In William Salmon's Polygraphice (1673), "Pink yellow" is mentioned amongst the chief yellow pigments (p. 96), and the reader is instructed to mix it with either Saffron or Ceruse for "sad" or "light" shades thereof, respectively. Sonics Pink noise, also known as 1/f noise, in audio engineering is a signal or process with a frequency spectrum such that the power spectral density is proportional to the reciprocal of the frequency (a short generation sketch appears after this passage). Lighting Grow lights often use a combination of red and blue wavelengths, which generally appear pink to the human eye. Pink neon signs are generally produced using one of two different methods. One method is to use neon gas and a blue or purple phosphor, which generally produces a warmer (more reddish) or more intense shade of pink. Another method is to use an argon/mercury blend and a red phosphor, which generally produces a cooler (more purplish) or softer shade of pink. Pink LEDs can be produced using two methods, either with a blue LED using two phosphors (yellow for the first phosphor, and red, orange, or pink for the second), or by placing a pink dye on top of a white LED. Color shifting was a common issue with early pink LEDs, where the red, orange, or pink phosphors or dyes faded over time, causing the pink color to eventually shift towards white or blue. These issues have been mitigated by the more recent introduction of more fade-resistant phosphors. Engineering Insulation manufactured by Owens Corning is dyed pink, with the Pink Panther as its corporate mascot. The company holds a trademark on the color pink for insulation products in order to prevent competitors from using it, and was the first company in the United States to trademark a color. The United States Manual on Uniform Traffic Control Devices specifies fluorescent pink as an optional color for traffic signs used for incident management as an alternative to the traditional orange in order to distinguish them from construction zone signs. In symbolism and culture Common associations and popularity According to public opinion surveys in Europe and the United States, pink is the color most associated with charm, politeness, sensitivity, tenderness, sweetness, softness, childhood, the feminine, and the romantic. Although it did not have any strong negative associations in these surveys, few respondents chose pink as their favorite color. Pink was the favorite color of only two percent of respondents. There was a notable difference between men and women in regards to a preference for pink; three percent of women chose pink as their favorite color, compared with less than one percent of men. Many of the men surveyed were unable to even identify pink correctly, confusing it with mauve. Pink was also more popular with older people than younger. In Japan, pink is the color most commonly associated with springtime due to the blooming cherry blossoms. This is different from surveys in the United States and Europe where green is the color most associated with springtime. Pink in other languages In many languages, the word for the color pink is based on the name of the rose flower: rose in French; roze in Dutch; rosa in German, Latin, Portuguese, Catalan, Spanish, Italian, Swedish and Norwegian (Nynorsk and Bokmål); rozovyy/розовый in Russian; różowy in Polish; ורוד (varód) in Hebrew; গোলাপি (golapi) in Bangla; and गुलाबी (gulābee) in Hindi. In English "rose", too, often refers to both the flower and the color.
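Picking up the pink-noise definition from the Sonics passage above: power spectral density proportional to 1/f means spectral amplitude should scale as 1/sqrt(f). Below is a minimal generation sketch, assuming NumPy is available; spectral shaping is one common way to approximate pink noise, not a canonical algorithm.

```python
import numpy as np

def pink_noise(n: int, rng=None) -> np.ndarray:
    """Generate n samples of approximate pink (1/f) noise by spectral shaping."""
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal(n)          # flat-spectrum white noise
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)              # normalized frequencies, freqs[0] == 0
    scale = np.ones_like(freqs)
    scale[1:] = 1.0 / np.sqrt(freqs[1:])    # amplitude ~ 1/sqrt(f) => power ~ 1/f
    pink = np.fft.irfft(spectrum * scale, n)
    return pink / np.max(np.abs(pink))      # normalize to the range [-1, 1]

samples = pink_noise(48_000)  # e.g., one second at a 48 kHz sample rate
```

Halving the power per doubling of frequency means pink noise carries equal energy per octave, which is why audio engineers use it for speaker and room testing.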
In Danish, Faroese and Finnish, the color pink is described as a lighter shade of red: lyserød in Danish, ljósareyður in Faroese and vaaleanpunainen in Finnish, all meaning "light red". Similarly, some Celtic languages use a term meaning "whitish red": gwynnrudh in Cornish, bándearg in Irish, bane-yiarg in Manx, bàn-dhearg in Scottish Gaelic (which also uses liath-dhearg "greyish/pale red" and pinc from English). In Icelandic, the color is called bleikur, originally meaning "pale". In the Japanese language, the traditional word for pink, momo-iro, takes its name from the peach blossom. There is a separate word for the color of the cherry blossom: sakura-iro. In recent times a word based on the English version, pinku, has begun to be used. In Chinese, the color pink is named with a compound noun 粉紅色, meaning "powder red", where the powder refers to substances used for women's make-up. The Thai word for the color, ชมพู (chom-puu), derives ultimately from Sanskrit जम्बू (jambū) "rose apple". Idioms and expressions In the pink. To be in top form, in good health, in good condition. In Romeo and Juliet, Mercutio says: "I am the very pink of courtesy." Romeo: "Pink for flower?" Mercutio: "Right." Romeo: "Then my pump is well flowered." To see pink elephants means to hallucinate from alcoholism. The expression was used by American novelist Jack London in his book John Barleycorn in 1913. Pink slip. To be given a pink slip means to be fired or dismissed from a job. The expression was first recorded in 1915 in the United States. The phrase "pink-collar worker" refers to persons working in jobs conventionally regarded as "women's work". Pink money, the pink pound or pink dollar is an economic term which refers to the spending power of the LGBT community. Advertising agencies sometimes call the gay market the pink economy. Tickled pink means extremely pleased. The Pink Tax refers to the invisible price women must pay for goods that are created and advertised specifically for them: the tendency for products targeted specifically toward women to be more expensive than those targeted toward men. Architecture Early pink buildings were usually built of brick or sandstone, which takes its pale red color from hematite, or iron ore. In the 18th century, the golden age of pink and other pastel colors, pink mansions and churches were built all across Europe. More modern pink buildings usually use the color pink to appear exotic or to attract attention. Food and beverages According to surveys in Europe and the United States, pink is the color most associated with sweet foods and beverages. Pink is also one of the few colors to be strongly associated with a particular aroma, that of roses. Many strawberry and raspberry-flavored foods are colored pink and light red as well, sometimes to distinguish them from cherry-flavored foods that are more commonly colored dark red (although raspberry-flavored foods, particularly in the United States, are often colored blue as well). The drink Tab was packaged in pink cans, presumably to subconsciously convey a sweet taste. The pink color in most packaged and processed foods, ice creams, candies and pastries is made with artificial food coloring. The most common pink food coloring is erythrosine, also known as Red No. 3, an organoiodine compound and derivative of fluorone; it is a cherry-pink synthetic dye. It is usually listed on package labels as E-127.
Another common red or pink food coloring (particularly in the United States, where erythrosine is less frequently used) is Allura Red AC (E-129), also known as Red No. 40. Some products use a natural red or pink food coloring, cochineal, also called carmine, made from the crushed insects of the species Dactylopius coccus. Gender In Europe and the United States, pink is often associated with girls, while blue is associated with boys. These colors were first used as gender markers just prior to World War I (for either girls or boys), and pink was first established as a female gender indicator in the 1940s. In the 20th century, the practice in Europe varied from country to country, with some assigning colors based on the baby's complexion, and others assigning pink sometimes to boys and sometimes to girls. Many have noted the contrary association of pink with boys in 20th-century America. An article in the trade publication Earnshaw's Infants' Department in June 1918 said: "The generally accepted rule is pink for the boys, and blue for the girls. The reason is that pink, being a more decided and stronger color, is more suitable for the boy, while blue, which is more delicate and dainty, is prettier for the girl." One reason for the increased use of pink for girls and blue for boys was the invention of new chemical dyes, which meant that children's clothing could be mass-produced and washed in hot water without fading. Prior to this time, most small children of both sexes wore white, which could be frequently washed. Another factor was the popularity of blue and white sailor suits for young boys, a fashion that started in the late 19th century. Blue was also the usual color of school uniforms, for boys and girls. Blue was associated with seriousness and study, while pink was associated with childhood and softness. By the 1950s, pink was strongly associated with femininity, but to an extent that was "neither rigid nor universal" as it later became. One study by two neuroscientists in Current Biology examined color preferences across British and Chinese cultures and found significant differences between male and female responses. Both groups favored blues over other hues, but women had more favorable responses to the reddish-purple range of the spectrum and men had more favorable responses to the greenish-yellow middle of the spectrum. Although the study used adults in mainstream cultures, both groups preferred blues, and responses to the color pink were never even tested, the popular press represented the research as an indication of an innate preference by girls for pink. The misreading has been often repeated in market research, reinforcing American culture's association of pink with girls on the basis of imagined innate characteristics. As of 2008, various feminist groups and Breast Cancer Awareness Month campaigns use the color pink to convey empowerment of women. Breast cancer charities around the world have used the color to symbolize support for people with breast cancer and promote awareness of the disease. A key tactic of these charities is encouraging women and men to wear pink to show their support for breast cancer awareness and research. Pink has symbolized a "welcome embrace" in India and masculinity in Japan. Toys Toys aimed at girls often display pink prominently on packaging and the toys themselves. This is a relatively recent trend, with toys from the 1920s to the 1960s not being gendered by color (though they were gendered by a focus on domesticity and nurturing). The current color-based gendering of toys can be traced back to the deregulation of children's television programs.
This allowed toy companies to produce shows that were designed specifically to sell their products, and gender was an important differentiator of these shows and the toys they were advertising. In its 1957 catalog, Lionel Trains offered for sale a pink model freight train for girls. The steam locomotive and coal car were pink and the freight cars of the freight train were various pastel colors. The caboose was baby blue. It was a marketing failure because any girl who might want a model train would want a realistically colored train, while boys in the 1950s did not want to be seen playing with a pink train. However, today it is a valuable collector's item. Sexuality As noted above, pink combined with black or violet is commonly associated with eroticism and seduction. In street slang, the pink sometimes refers to the vagina. In Russian, pink (розовый, rozovyj) is used to refer to lesbians, and light blue (голубой, goluboj) refers to gay men. In Japan, a genre of low-budget, erotic cinema is referred to as pink film (pinku eiga). In India, pink-colored turbans are worn at Hindu weddings. Pink can be an informal synonym for homosexual, similarly to lavender. Politics Pink, being a 'watered-down' red, is sometimes used in a derogatory way to describe a person with mild communist or socialist beliefs (see Pinko). The term little pink (小粉红) is used to describe the young nationalists on the internet in mainland China. The term pink revolution is sometimes used to refer to the overthrow of President Askar Akayev and his government in the Central Asian republic of Kyrgyzstan after the parliamentary elections of February 27 and March 13, 2005, although it is more commonly called the Tulip Revolution. The Swedish feminist party Feminist Initiative uses pink as its color. Code Pink is an American women's anti-globalization and anti-war group founded in 2002 by activist Medea Benjamin. The group has disrupted Congressional hearings and heckled President Obama at his public speeches. The TRS party of Telangana, India, has pink as its primary color. It was a common practice to color the British Empire pink on maps. Supporters of Philippine presidential candidate and former Vice President Leni Robredo used pink to show their solidarity with her in her 2022 presidential campaign. Social movements Pink is often used as a symbolic color by groups involved in issues important to women, as well as to lesbian, gay, bisexual and transgender people. A Dutch newsgroup about homosexuality is called nl.roze (roze being the Dutch word for pink), while in Britain, Pink News is a gay newspaper and online news service. There is a magazine called Pink for the lesbian, gay, bisexual and transgender (LGBT) community which has different editions for various metropolitan areas. In France, Pink TV is an LGBT cable channel. In Ireland, the support group Irish Pink Adoptions defines a pink family as a relatively neutral umbrella term for the single gay men, single lesbians, or same-gender couples who intend to adopt, are in the process of adopting, or have adopted. It also covers adults born or raised in such families. The group welcomes the input of other people touched by adoption, especially people who were adopted as children and are now adults. Pinkstinks is a campaign founded in London in May 2008 to raise awareness of what its founders claim is the damage caused by gender stereotyping of children. The Pink Pistols is a gay gun rights organization. The pink ribbon is the international symbol of breast cancer awareness.
Pink was chosen partially because it is so strongly associated with femininity. Academic dress In the French academic dress system, the five traditional fields of study (Arts, Science, Medicine, Law and Divinity) are each symbolized by a distinctive color, which appears in the academic dress of the people who graduated in that field. Redcurrant, an extremely red shade of pink, is the distinctive color for Medicine (and other health-related fields). Heraldry The word pink is not used for any tincture (color) in heraldry, but there are two fairly uncommon tinctures which are both close to pink: The heraldic color of rose is a modern innovation, mostly used in Canadian heraldry, depicting a reddish pink color like the shade usually called rose. In French heraldry, the color carnation is sometimes used, corresponding to the skin color of a light-skinned Caucasian human. This can also be seen as a pink shade, but it is usually depicted as a slightly more brownish beige than the rose tincture. Calendars In Thailand, pink is associated with Tuesday on the Thai solar calendar. Anyone may wear pink on Tuesdays, and anyone born on a Tuesday may adopt pink as their color. The press Pink is used for the newsprint paper of several important newspapers devoted to business and sports, and the color is also connected with the press aimed at the LGBTQIA community. Since 1893 the London Financial Times newspaper has used a distinctive salmon pink color for its newsprint, originally because pink dyed paper was less expensive than bleached white paper. Today the color is used to distinguish the newspaper from competitors on a press kiosk or news stand. In some countries, the salmon press identifies economic newspapers or economics sections in "white" newspapers. Some sports newspapers, such as La Gazzetta dello Sport in Italy, also use pink paper to stand out from other newspapers. La Gazzetta awards a pink jersey to the winner of Italy's most important bicycle race, the Giro d'Italia (see the Sports section below). Law In England and Wales, a brief delivered to a barrister by a solicitor is usually tied with pink ribbon. Pink was traditionally the color associated with the defense, while white ribbons may have been used for the prosecution. Literature In Spanish and Italian, a romantic novel is known as a "pink novel" (novela rosa in Spanish, romanzo rosa in Italian). In Nathaniel Hawthorne's 1835 short story Young Goodman Brown, Faith wears a pink ribbon in her hair, which represents her innocence. Carl Surely's short story "Dinsdale's Pink" is a coming-of-age tale of a young man growing up in Berlin in the 1930s, dealing with issues of gender, sexuality and politics. In Louisa May Alcott's 1868–69 book Little Women, Amy March uses blue and pink ribbons to tell the difference between her sister Meg's newborn twins. Religion In the Yogic Hindu, Shaktic Hindu and Tantric Buddhist traditions, rose is one of the colors of the fourth primary energy center, the heart chakra Anahata. The other color is green. In Catholicism, pink (called rose by the Catholic Church) symbolizes joy and happiness. It is used for the Third Sunday of Advent (Gaudete Sunday) and the Fourth Sunday of Lent (Laetare Sunday) to mark the halfway point in these seasons of penance. For this reason, one of the candles in an Advent wreath may be pink, rather than purple.
Pink is the color most associated with Indian spiritual leader Meher Baba, who often wore pink coats to please his closest female follower, Mehera Irani, and today pink remains an important color, symbolizing love, to Baba's followers. Some Wiccans believe that it represents affection, friendship, companionship, and spiritual healing. It is often used for love spells. Sports Palermo, a soccer team based in Palermo, Italy, traditionally wears pink home jerseys. Cerezo Osaka, a soccer team based in Osaka, Japan, typically wears pink home shirts. Cerezo is Spanish for cherry tree, which is known for its pink blossoms. Inter Miami, a soccer team based in South Florida, USA, features pink home shirts. The club wore white home shirts in its first two seasons in existence. In Major League Baseball, pink bats are used by baseball players on Mother's Day as part of a week-long program to benefit Susan G. Komen for the Cure. Pink can mean the scarlet coat worn in fox hunting (a.k.a. "riding to hounds"). One legend about the origin of this meaning refers to a tailor named Pink (or Pinke, or Pinque). The leader in the Giro d'Italia cycle race wears a pink jersey (maglia rosa); this reflects the distinctive pink-colored newsprint of the sponsoring Italian newspaper La Gazzetta dello Sport. The University of Iowa's Kinnick Stadium visitors' locker room is painted pink. The decor has sparked controversy, perceived by some people as suggesting sexism and homophobia. WWE Hall of Famer Bret Hart, as well as other members of the Hart wrestling family, is known for his pink and black wrestling attire. The Western Hockey League team Calgary Hitmen originally wore pink as a tribute to the aforementioned Bret Hart, who was a part-owner of the team at the time. Snooker uses a pink-colored object ball that is worth six points when legally potted. Formula One constructors Force India and Racing Point used pink as the primary color on their cars during the 2017–2020 seasons. At the 2017 United States Grand Prix, the purple side-wall branding on the ultra-soft compound tire was replaced by pink for the race to raise awareness of Breast Cancer Awareness Month. Several teams also incorporated pink into their liveries to support the cause (except Force India, whose cars were pink to begin with). To distinguish tuned performance models from ordinary ones, Subaru uses a badge with a pink background on its cars. The logo of its motorsports arm, Subaru Tecnica International, is also colored pink. The NFL, among other sports leagues, has incorporated pink into its promotions, team uniforms and equipment during the month of October in support of Breast Cancer Awareness Month. Music The music artists Pink, Momoiro Clover Z and Blackpink all take the color as an influence for their names.
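The 1/f definition of pink noise given in the Sonics section above lends itself to a short numerical illustration. The following is a minimal sketch in Python; the function name and parameters are our own illustrative choices, not part of any audio standard:

```python
import numpy as np

# Sketch: synthesize approximate pink (1/f) noise by shaping the spectrum
# of white noise so that power spectral density falls off as 1/f.
def pink_noise(n_samples: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    # Scale amplitudes by 1/sqrt(f) so that power (amplitude squared)
    # falls off as 1/f; leave the DC bin untouched to avoid dividing by zero.
    scale = np.ones_like(freqs)
    scale[1:] = 1.0 / np.sqrt(freqs[1:])
    pink = np.fft.irfft(spectrum * scale, n=n_samples)
    return pink / np.max(np.abs(pink))  # normalize to [-1, 1]

samples = pink_noise(2**16)
```

Scaling the amplitude spectrum by 1/√f makes the power scale as 1/f, which is exactly the defining property quoted in the Sonics section.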
Supermassive black hole
A supermassive black hole (SMBH or sometimes SBH) is the largest type of black hole, with its mass on the order of hundreds of thousands, or millions to billions, of times the mass of the Sun (M☉). Black holes are a class of astronomical objects that have undergone gravitational collapse, leaving behind spheroidal regions of space from which nothing can escape, including light. Observational evidence indicates that almost every large galaxy has a supermassive black hole at its center. For example, the Milky Way galaxy has a supermassive black hole at its center, corresponding to the radio source Sagittarius A*. Accretion of interstellar gas onto supermassive black holes is the process responsible for powering active galactic nuclei (AGNs) and quasars. Two supermassive black holes have been directly imaged by the Event Horizon Telescope: the black hole in the giant elliptical galaxy Messier 87 and the black hole at the Milky Way's center (Sagittarius A*). Description Supermassive black holes are classically defined as black holes with a mass above 100,000 (10^5) solar masses (M☉); some have masses of billions of solar masses or more. Supermassive black holes have physical properties that clearly distinguish them from lower-mass classifications. First, the tidal forces in the vicinity of the event horizon are significantly weaker for supermassive black holes. The tidal force on a body at a black hole's event horizon is inversely proportional to the square of the black hole's mass: a person at the event horizon of a supermassive black hole of around 10 million M☉ experiences about the same tidal force between their head and feet as a person on the surface of the Earth. Unlike with stellar-mass black holes, one would not experience significant tidal force until very deep inside the black hole, well past the event horizon. It is somewhat counterintuitive to note that the average density of a SMBH within its event horizon (defined as the mass of the black hole divided by the volume of space within its Schwarzschild radius) can be smaller than the density of water. This is because the Schwarzschild radius (r_s) is directly proportional to the black hole's mass. Since the volume of a spherical object (such as the event horizon of a non-rotating black hole) is directly proportional to the cube of the radius, the density of a black hole is inversely proportional to the square of the mass, and thus higher mass black holes have a lower average density (a short worked example appears below). The Schwarzschild radius of the event horizon of a nonrotating and uncharged supermassive black hole of around 1 billion M☉ is comparable to the semi-major axis of the orbit of planet Uranus, which is about 19 AU. Some astronomers refer to the very largest black holes, of more than about 10 billion M☉, as ultramassive black holes (UMBHs or UBHs), but the term is not broadly used. Possible examples include the black holes at the cores of TON 618, NGC 6166, ESO 444-46 and NGC 4889, which are among the most massive black holes known. Some studies have suggested that the maximum natural mass that a black hole can reach, while being a luminous accretor (featuring an accretion disk), is typically on the order of tens of billions of solar masses. However, a 2020 study suggested that even larger black holes, dubbed stupendously large black holes (SLABs), with masses greater than 10^11 M☉, could exist based on the models used; some studies place the black hole at the core of Phoenix A in this category. History of research The story of how supermassive black holes were found began with the investigation by Maarten Schmidt of the radio source 3C 273 in 1963. Initially this was thought to be a star, but the spectrum proved puzzling.
The puzzle was resolved when the features were identified as hydrogen emission lines that had been redshifted, indicating the object was moving away from the Earth. Hubble's law showed that the object was located several billion light-years away, and thus must be emitting the energy equivalent of hundreds of galaxies. The rate of light variations of the source, dubbed a quasi-stellar object, or quasar, suggested the emitting region had a diameter of one parsec or less. Four such sources had been identified by 1964. In 1963, Fred Hoyle and W. A. Fowler proposed the existence of hydrogen-burning supermassive stars (SMS) as an explanation for the compact dimensions and high energy output of quasars; these would have been stars of enormous mass. However, Richard Feynman noted stars above a certain critical mass are dynamically unstable and would collapse into a black hole, at least if they were non-rotating. Fowler then proposed that these supermassive stars would undergo a series of collapse and explosion oscillations, thereby explaining the energy output pattern. Appenzeller and Fricke (1972) built models of this behavior, but found that the resulting star would still undergo collapse, concluding that a non-rotating SMS "cannot escape collapse to a black hole by burning its hydrogen through the CNO cycle". Edwin E. Salpeter and Yakov Zeldovich made the proposal in 1964 that matter falling onto a massive compact object would explain the properties of quasars; a mass of around a hundred million solar masses would be required to match the output of these objects. Donald Lynden-Bell noted in 1969 that the infalling gas would form a flat disk that spirals into the central "Schwarzschild throat". He noted that the relatively low output of nearby galactic cores implied these were old, inactive quasars. Meanwhile, in 1967, Martin Ryle and Malcolm Longair suggested that nearly all sources of extra-galactic radio emission could be explained by a model in which particles are ejected from galaxies at relativistic velocities, meaning they are moving near the speed of light. Martin Ryle, Malcolm Longair, and Peter Scheuer then proposed in 1973 that the compact central nucleus could be the original energy source for these relativistic jets. Arthur M. Wolfe and Geoffrey Burbidge noted in 1970 that the large velocity dispersion of the stars in the nuclear region of elliptical galaxies could only be explained by a large mass concentration at the nucleus, larger than could be explained by ordinary stars. They showed that the behavior could be explained by a single massive central black hole, or by a large number of smaller black holes of much lower individual mass. Dynamical evidence for a massive dark object, initially estimated at several billion solar masses, was found at the core of the active elliptical galaxy Messier 87 in 1978. Discovery of similar behavior in other galaxies soon followed, including the Andromeda Galaxy in 1984 and the Sombrero Galaxy in 1988. Donald Lynden-Bell and Martin Rees hypothesized in 1971 that the center of the Milky Way galaxy would contain a massive black hole. Sagittarius A* was discovered and named on February 13 and 15, 1974, by astronomers Bruce Balick and Robert Brown using the Green Bank Interferometer of the National Radio Astronomy Observatory. They discovered a radio source that emits synchrotron radiation; it was found to be dense and immobile because of its gravitation. This was, therefore, the first indication that a supermassive black hole exists in the center of the Milky Way.
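The scaling claims in the Description section above (Schwarzschild radius proportional to mass, mean density falling as the inverse square of mass) can be checked with a few lines of arithmetic. A minimal sketch using standard SI constants; the 10^9 solar-mass figure is chosen purely for illustration:

```python
import math

# Back-of-envelope check of the Description section's scaling claims.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def schwarzschild_radius(mass_kg: float) -> float:
    """r_s = 2GM/c^2: directly proportional to mass."""
    return 2 * G * mass_kg / c**2

def mean_density(mass_kg: float) -> float:
    """Mass divided by the volume inside r_s: falls off as 1/M^2."""
    r = schwarzschild_radius(mass_kg)
    return mass_kg / ((4 / 3) * math.pi * r**3)

m = 1e9 * M_SUN                       # an illustrative 10^9 solar-mass SMBH
print(schwarzschild_radius(m) / AU)   # ~20 AU, about the orbit of Uranus
print(mean_density(m))                # ~18 kg/m^3, far less dense than water
```

Doubling the mass doubles r_s but increases the enclosed volume eightfold, which is why the average density drops as 1/M².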
The Hubble Space Telescope, launched in 1990, provided the resolution needed to perform more refined observations of galactic nuclei. In 1994 the Faint Object Spectrograph on the Hubble was used to observe Messier 87, finding that ionized gas was orbiting the central part of the nucleus at a velocity of ±500 km/s. The data indicated a concentrated mass of billions of solar masses lay within a small central span, providing strong evidence of a supermassive black hole. Using the Very Long Baseline Array to observe Messier 106, Miyoshi et al. (1995) were able to demonstrate that the emission from an H2O maser in this galaxy came from a gaseous disk in the nucleus that orbited a concentrated mass of tens of millions of solar masses, which was constrained to a radius of 0.13 parsecs. Their ground-breaking research noted that a swarm of solar mass black holes within a radius this small would not survive for long without undergoing collisions, making a supermassive black hole the sole viable candidate. Accompanying this observation, which provided the first confirmation of supermassive black holes, was the discovery of the highly broadened, ionised iron Kα emission line (6.4 keV) from the galaxy MCG-6-30-15. The broadening was due to the gravitational redshift of the light as it escaped from just 3 to 10 Schwarzschild radii from the black hole. On April 10, 2019, the Event Horizon Telescope collaboration released the first horizon-scale image of a black hole, in the center of the galaxy Messier 87. In March 2020, astronomers suggested that additional subrings should form the photon ring, proposing a way of better detecting these signatures in the first black hole image. Formation The origin of supermassive black holes remains an active field of research. Astrophysicists agree that black holes can grow by accretion of matter and by merging with other black holes. There are several hypotheses for the formation mechanisms and initial masses of the progenitors, or "seeds", of supermassive black holes. Independently of the specific formation channel for the black hole seed, given sufficient mass nearby, it could accrete to become an intermediate-mass black hole and possibly a SMBH if the accretion rate persists. Distant and early supermassive black holes, such as J0313–1806 and ULAS J1342+0928, are hard to explain so soon after the Big Bang. Some postulate they might come from direct collapse of dark matter with self-interaction. A small minority of sources argue that they may be evidence that the Universe is the result of a Big Bounce, instead of a Big Bang, with these supermassive black holes being formed before the Big Bounce. First stars The early progenitor seeds may be black holes of tens to hundreds of solar masses that are left behind by the explosions of massive stars and grow by accretion of matter. Another model involves a dense stellar cluster undergoing core collapse as the negative heat capacity of the system drives the velocity dispersion in the core to relativistic speeds. Before the first stars, large gas clouds could collapse into a "quasi-star", which would in turn collapse into a black hole. These stars may have also been formed by dark matter halos drawing in enormous amounts of gas by gravity, which would then produce supermassive stars of tens of thousands of solar masses. The "quasi-star" becomes unstable to radial perturbations because of electron-positron pair production in its core and could collapse directly into a black hole without a supernova explosion (which would eject most of its mass, preventing the black hole from growing as fast).
A more recent theory proposes that SMBH seeds were formed in the very early universe, each from the collapse of a supermassive star with a mass of around 100,000 solar masses. Direct-collapse and primordial black holes Large, high-redshift clouds of metal-free gas, when irradiated by a sufficiently intense flux of Lyman–Werner photons, can avoid cooling and fragmenting, thus collapsing as a single object due to self-gravitation. The core of the collapsing object reaches extremely large values of matter density and triggers a general relativistic instability. Thus, the object collapses directly into a black hole, without passing through the intermediate phase of a star, or of a quasi-star. These objects have a typical mass of about 100,000 solar masses and are named direct collapse black holes. A 2022 computer simulation showed that the first supermassive black holes can arise in rare turbulent clumps of gas, called primordial halos, that were fed by unusually strong streams of cold gas. The key simulation result was that cold flows suppressed star formation in the turbulent halo until the halo's gravity was finally able to overcome the turbulence and formed two direct-collapse black holes of tens of thousands of solar masses. The birth of the first SMBHs can therefore be a result of standard cosmological structure formation — contrary to what had been thought for almost two decades. Primordial black holes (PBHs) could have been produced directly from external pressure in the first moments after the Big Bang. These black holes would then have more time than any of the above models to accrete, allowing them sufficient time to reach supermassive sizes. Formation of black holes from the deaths of the first stars has been extensively studied and corroborated by observations. The other models for black hole formation listed above are theoretical. The formation of a supermassive black hole requires a relatively small volume of highly dense matter having small angular momentum. Normally, the process of accretion involves transporting a large initial endowment of angular momentum outwards, and this appears to be the limiting factor in black hole growth. This is a major component of the theory of accretion disks. Gas accretion is both the most efficient and the most conspicuous way in which black holes grow. The majority of the mass growth of supermassive black holes is thought to occur through episodes of rapid gas accretion, which are observable as active galactic nuclei or quasars. Observations reveal that quasars were much more frequent when the Universe was younger, indicating that supermassive black holes formed and grew early. A major constraining factor for theories of supermassive black hole formation is the observation of distant luminous quasars, which indicate that supermassive black holes of billions of solar masses had already formed when the Universe was less than one billion years old. This suggests that supermassive black holes arose very early in the Universe, inside the first massive galaxies. Maximum mass limit There is a natural upper limit to how large supermassive black holes can grow. Supermassive black holes in any quasar or active galactic nucleus (AGN) appear to have a theoretical upper limit of around tens of billions of solar masses for typical parameters, as anything above this slows growth down to a crawl (the slowdown tends to start at somewhat lower masses) and causes the unstable accretion disk surrounding the black hole to coalesce into stars that orbit it.
A study concluded that the radius of the innermost stable circular orbit (ISCO) for SMBH masses above this limit exceeds the self-gravity radius, making disc formation no longer possible. A larger upper limit, several times higher, has been suggested as the absolute maximum mass for an accreting SMBH in extreme cases, for example at maximal prograde spin with a dimensionless spin parameter of a = 1, although the maximum limit for a black hole's spin parameter is very slightly lower, at a = 0.9982. At masses just below the limit, the disc luminosity of a field galaxy is likely to be below the Eddington limit and not strong enough to trigger the feedback underlying the M–sigma relation, so SMBHs close to the limit can evolve above this. It was noted that black holes close to this larger limit are likely to be even rarer, as they would require the accretion disc to be almost permanently prograde as the black hole grows, and because the spin-down effect of retrograde accretion is larger than the spin-up effect of prograde accretion, since the ISCO, and therefore the lever arm, is larger for retrograde accretion. This would require the hole's spin to be permanently correlated with a fixed direction of the potential controlling gas flow within the black hole's host galaxy, and thus would tend to produce a spin axis, and hence an AGN jet direction, similarly aligned with the galaxy. Current observations do not support this correlation. So-called 'chaotic accretion' presumably has to involve multiple small-scale events, essentially random in time and orientation, if it is not controlled by a large-scale potential in this way. This would lead the accretion statistically to spin the hole down, due to retrograde events having larger lever arms than prograde ones, and occurring almost as often. There are also other interactions with large SMBHs that tend to reduce their spin, particularly mergers with other black holes, which can statistically decrease the spin. All of these considerations suggest that SMBHs usually cross the critical theoretical mass limit at modest values of their spin parameters, so that the typical-parameter limit applies in all but rare cases. Although modern UMBHs within quasars and galactic nuclei cannot grow much beyond this limit through the accretion disk, given the current age of the universe, some of these monster black holes are predicted to continue growing, up to stupendously large masses, during the collapse of superclusters of galaxies in the extremely far future of the universe. Activity and galactic evolution Gravitation from supermassive black holes in the center of many galaxies is thought to power active objects such as Seyfert galaxies and quasars, and the relationship between the mass of the central black hole and the mass of the host galaxy depends upon the galaxy type. An empirical correlation between the size of supermassive black holes and the stellar velocity dispersion of a galaxy bulge is called the M–sigma relation. An AGN is now considered to be a galactic core hosting a massive black hole that is accreting matter and displays a sufficiently strong luminosity. The nuclear region of the Milky Way, for example, lacks sufficient luminosity to satisfy this condition. The unified model of AGN is the concept that the large range of observed properties of the AGN taxonomy can be explained using just a small number of physical parameters. For the initial model, these values consisted of the angle of the accretion disk's torus to the line of sight and the luminosity of the source.
AGN can be divided into two main groups: radiative-mode AGN, in which most of the output is in the form of electromagnetic radiation through an optically thick accretion disk, and jet-mode AGN, in which relativistic jets emerge perpendicular to the disk. Mergers and recoiled SMBHs The interaction of a pair of SMBH-hosting galaxies can lead to merger events. Dynamical friction on the hosted SMBH objects causes them to sink toward the center of the merged mass, eventually forming a pair with a separation of under a kiloparsec. The interaction of this pair with surrounding stars and gas will then gradually bring the SMBHs together as a gravitationally bound binary system with a separation of ten parsecs or less. Once the pair draw as close as 0.001 parsecs, gravitational radiation will cause them to merge. By the time this happens, the resulting galaxy will have long since relaxed from the merger event, with the initial starburst activity and AGN having faded away. The gravitational waves from this coalescence can give the resulting SMBH a velocity boost of up to several thousand km/s, propelling it away from the galactic center and possibly even ejecting it from the galaxy. This phenomenon is called a gravitational recoil. The other possible way to eject a black hole is the classical slingshot scenario, also called slingshot recoil. In this scenario, a long-lived binary black hole first forms through a merger of two galaxies. A third SMBH is introduced in a second merger and sinks into the center of the galaxy. Due to the three-body interaction, one of the SMBHs, usually the lightest, is ejected. Due to conservation of linear momentum, the other two SMBHs are propelled in the opposite direction as a binary. All SMBHs can be ejected in this scenario. An ejected black hole is called a runaway black hole. There are different ways to detect recoiling black holes. Often a displacement of a quasar/AGN from the center of a galaxy, or a spectroscopic binary nature of a quasar/AGN, is seen as evidence for a recoiled black hole. Candidate recoiling black holes include NGC 3718, SDSS1133, 3C 186, E1821+643 and SDSSJ0927+2943. Candidate runaway black holes are HE0450–2958, CID-42 and objects around RCP 28. Runaway supermassive black holes may trigger star formation in their wakes. A linear feature near the dwarf galaxy RCP 28 was interpreted as the star-forming wake of a candidate runaway black hole. Hawking radiation Hawking radiation is black-body radiation that is predicted to be released by black holes, due to quantum effects near the event horizon. This radiation reduces the mass and energy of black holes, causing them to shrink and ultimately vanish. If black holes evaporate via Hawking radiation, a non-rotating and uncharged stupendously large black hole of 10^11 solar masses will evaporate in around 2×10^100 years; black holes formed during the predicted collapse of superclusters of galaxies in the far future would evaporate over even longer timescales (a rough computation of this timescale appears at the end of this article). Evidence Doppler measurements Some of the best evidence for the presence of black holes is provided by the Doppler effect, whereby light from nearby orbiting matter is red-shifted when receding and blue-shifted when advancing. For matter very close to a black hole the orbital speed must be comparable with the speed of light, so receding matter will appear very faint compared with advancing matter, which means that systems with intrinsically symmetric discs and rings will acquire a highly asymmetric visual appearance.
This effect has been allowed for in modern computer-generated images, such as those based on a plausible model for the supermassive black hole in Sgr A* at the center of the Milky Way. However, the resolution provided by presently available telescope technology is still insufficient to confirm such predictions directly. What already has been observed directly in many systems are the lower non-relativistic velocities of matter orbiting further out from what are presumed to be black holes. Direct Doppler measures of water masers surrounding the nuclei of nearby galaxies have revealed a very fast Keplerian motion, only possible with a high concentration of matter in the center. Currently, the only known objects that can pack enough matter in such a small space are black holes, or things that will evolve into black holes within astrophysically short timescales. For active galaxies farther away, the width of broad spectral lines can be used to probe the gas orbiting near the event horizon. The technique of reverberation mapping uses variability of these lines to measure the mass and perhaps the spin of the black hole that powers active galaxies. In the Milky Way Evidence indicates that the Milky Way galaxy has a supermassive black hole at its center, 26,000 light-years from the Solar System, in a region called Sagittarius A*, because: The star S2 follows an elliptical orbit with a period of 15.2 years and a pericenter (closest distance) of 17 light-hours (about 120 AU) from the center of the central object. From the motion of star S2, the object's mass can be estimated as about 4 million solar masses (a rough Kepler's-law estimate appears below). The radius of the central object must be less than 17 light-hours, because otherwise S2 would collide with it. Observations of the star S14 indicate that the radius is no more than 6.25 light-hours, about the diameter of Uranus' orbit. No known astronomical object other than a black hole can contain this much mass in this volume of space. Infrared observations of bright flare activity near Sagittarius A* show orbital motion of plasma with a period of about 45 minutes at a separation of six to ten times the gravitational radius of the candidate SMBH. This emission is consistent with a circularized orbit of a polarized "hot spot" on an accretion disk in a strong magnetic field. The radiating matter is orbiting at 30% of the speed of light just outside the innermost stable circular orbit. On January 5, 2015, NASA reported observing an X-ray flare 400 times brighter than usual, a record-breaker, from Sagittarius A*. The unusual event may have been caused by the breaking apart of an asteroid falling into the black hole or by the entanglement of magnetic field lines within gas flowing into Sagittarius A*, according to astronomers. Outside the Milky Way Unambiguous dynamical evidence for supermassive black holes exists only for a handful of galaxies; these include the Milky Way, the Local Group galaxies M31 and M32, and a few galaxies beyond the Local Group, such as NGC 4395. In these galaxies, the root mean square (or rms) velocities of the stars or gas rise proportionally to 1/√r near the center, indicating a central point mass. In all other galaxies observed to date, the rms velocities are flat, or even falling, toward the center, making it impossible to state with certainty that a supermassive black hole is present. Nevertheless, it is commonly accepted that the center of nearly every galaxy contains a supermassive black hole.
The reason for this assumption is the M–sigma relation, a tight (low scatter) relation between the mass of the hole in the 10 or so galaxies with secure detections, and the velocity dispersion of the stars in the bulges of those galaxies. This correlation, although based on just a handful of galaxies, suggests to many astronomers a strong connection between the formation of the black hole and the galaxy itself. On March 28, 2011, a supermassive black hole was seen tearing a mid-size star apart. That is the only likely explanation of the observations that day of sudden X-ray radiation and the follow-up broad-band observations. The source was previously an inactive galactic nucleus, and from study of the outburst the galactic nucleus is estimated to be a SMBH with a mass of the order of a million solar masses. This rare event is assumed to be a relativistic outflow (material being emitted in a jet at a significant fraction of the speed of light) from a star tidally disrupted by the SMBH. A significant fraction of a solar mass of material is expected to have accreted onto the SMBH. Subsequent long-term observation will allow this assumption to be confirmed if the emission from the jet decays at the expected rate for mass accretion onto a SMBH. Individual studies The nearby Andromeda Galaxy, 2.5 million light-years away, contains a central black hole significantly larger than the Milky Way's. The largest supermassive black hole in the Milky Way's vicinity appears to be that of Messier 87 (i.e., M87*), with a mass of about 6.5 billion solar masses at a distance of 48.92 million light-years. The supergiant elliptical galaxy NGC 4889, at a distance of 336 million light-years away in the Coma Berenices constellation, contains a black hole measured at about 21 billion solar masses. Masses of black holes in quasars can be estimated via indirect methods that are subject to substantial uncertainty. The quasar TON 618 is an example of an object with an extremely large black hole, estimated at tens of billions of solar masses. Its redshift is 2.219. Other examples of quasars with large estimated black hole masses are the hyperluminous quasar APM 08279+5255 and the quasar SMSS J215728.21-360215.1, the latter with a mass of nearly 10,000 times the mass of the black hole at the Milky Way's Galactic Center. Some galaxies, such as the galaxy 4C +37.11, appear to have two supermassive black holes at their centers, forming a binary system. If they collided, the event would create strong gravitational waves. Binary supermassive black holes are believed to be a common consequence of galactic mergers. The binary pair in OJ 287, 3.5 billion light-years away, contains the most massive black hole in a pair, with a mass estimated at 18 billion solar masses. In 2011, a super-massive black hole was discovered in the dwarf galaxy Henize 2-10, which has no bulge. The precise implications of this discovery for black hole formation are unknown, but it may indicate that black holes formed before bulges. In 2012, astronomers reported an unusually large mass of approximately 17 billion solar masses for the black hole in the compact, lenticular galaxy NGC 1277, which lies 220 million light-years away in the constellation Perseus. The putative black hole has approximately 59 percent of the mass of the bulge of this lenticular galaxy (14 percent of the total stellar mass of the galaxy). Another study reached a very different conclusion: this black hole is not particularly overmassive, with an estimated mass of a few billion solar masses at most.
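The S2 orbit described in the "In the Milky Way" section above amounts to a textbook application of Kepler's third law. The following is a rough sketch; the 15.2-year period is from the text, while the semi-major axis of about 970 AU is an assumed illustrative value not stated in this article:

```python
import math

# Rough Kepler's-third-law estimate of the mass enclosed by S2's orbit.
G = 6.674e-11                    # m^3 kg^-1 s^-2
M_SUN = 1.989e30                 # kg
AU = 1.496e11                    # m
YEAR = 3.156e7                   # s

T = 15.2 * YEAR                  # orbital period of S2 (from the text)
a = 970 * AU                     # assumed semi-major axis (illustrative)

# Kepler's third law for a test particle: M = 4 pi^2 a^3 / (G T^2)
M = 4 * math.pi**2 * a**3 / (G * T**2)
print(M / M_SUN)                 # ~4 million solar masses
```

The result, roughly four million solar masses packed inside a region smaller than 17 light-hours, is what rules out every known alternative to a black hole.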
On February 28, 2013, astronomers reported on the use of the NuSTAR satellite to accurately measure the spin of a supermassive black hole for the first time, in NGC 1365, reporting that the event horizon was spinning at almost the speed of light. In September 2014, data from different X-ray telescopes showed that the extremely small, dense, ultracompact dwarf galaxy M60-UCD1 hosts a 20 million solar mass black hole at its center, accounting for more than 10% of the total mass of the galaxy. The discovery is quite surprising, since the black hole is five times more massive than the Milky Way's black hole despite the galaxy being less than five-thousandths the mass of the Milky Way. Some galaxies lack any supermassive black holes in their centers. Although most galaxies with no supermassive black holes are very small, dwarf galaxies, one discovery remains mysterious: the supergiant elliptical cD galaxy A2261-BCG has not been found to contain an active supermassive black hole, despite the galaxy being one of the largest galaxies known, over six times the size and one thousand times the mass of the Milky Way. Despite that, several studies have suggested very large masses for a possible central black hole inside A2261-BCG. Since a supermassive black hole will only be visible while it is accreting, a supermassive black hole can be nearly invisible, except in its effects on stellar orbits. This implies that A2261-BCG either has a central black hole that is accreting at a low level or has one with a mass well below what would be expected for a galaxy of its size. In December 2017, astronomers reported the detection of the most distant quasar then known, ULAS J1342+0928, containing the most distant supermassive black hole, at a reported redshift of z = 7.54, surpassing the redshift of 7 for the previously known most distant quasar ULAS J1120+0641. In February 2020, astronomers reported the discovery of the Ophiuchus Supercluster eruption, the most energetic event detected in the Universe since the Big Bang. It occurred in the Ophiuchus Cluster in the galaxy NeVe 1, caused by the accretion of an enormous amount of material by its central supermassive black hole of about 7 billion solar masses. The eruption lasted for about 100 million years and released 5.7 million times more energy than the most powerful gamma-ray burst known. The eruption released shock waves and jets of high-energy particles that punched through the intracluster medium, creating a cavity about 1.5 million light-years wide – ten times the Milky Way's diameter. In February 2021, astronomers released, for the first time, a very high-resolution image of 25,000 active supermassive black holes, covering four percent of the Northern celestial hemisphere, based on ultra-low radio wavelengths, as detected by the Low-Frequency Array (LOFAR) in Europe.
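As a rough illustration of the Hawking-radiation timescales mentioned above, the standard evaporation-time formula for a non-rotating, uncharged black hole, t = 5120πG²M³/(ħc⁴), can be evaluated directly. This is an order-of-magnitude sketch only; real black holes also absorb radiation and matter, which delays evaporation:

```python
import math

# Order-of-magnitude Hawking evaporation time, t = 5120*pi*G^2*M^3/(hbar*c^4).
# Note the M^3 dependence: a thousandfold increase in mass lengthens the
# lifetime a billionfold.
G = 6.674e-11          # m^3 kg^-1 s^-2
HBAR = 1.055e-34       # J s
c = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
YEAR = 3.156e7         # s

def evaporation_time_years(mass_kg: float) -> float:
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * c**4)
    return t_seconds / YEAR

print(evaporation_time_years(M_SUN))         # ~2e67 years for one solar mass
print(evaporation_time_years(1e11 * M_SUN))  # ~2e100 years for a 1e11 M_sun SLAB
```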
Newton-metre
The newton-metre or newton-meter (also non-hyphenated, newton metre or newton meter; symbol N⋅m or N m) is the unit of torque (also called moment) in the International System of Units (SI). One newton-metre is equal to the torque resulting from a force of one newton applied perpendicularly to the end of a moment arm that is one metre long. The unit is also used less commonly as a unit of work, or energy, in which case it is equivalent to the more common and standard SI unit of energy, the joule. In this usage the metre term represents the distance travelled or displacement in the direction of the force, and not the perpendicular distance from a fulcrum (i.e. the lever arm length) as it does when used to express torque. This usage is generally discouraged, since it can lead to confusion as to whether a given quantity expressed in newton-metres is a torque or a quantity of energy. "Even though torque has the same dimension as energy (SI unit joule), the joule is never used for expressing torque". Newton-metres and joules are dimensionally equivalent in the sense that they have the same expression in SI base units, but are distinguished in terms of applicable kind of quantity, to avoid misunderstandings when a torque is mistaken for an energy or vice versa. Similar examples of dimensionally equivalent units include Pa versus J/m3, Bq versus Hz, and ohm versus ohm per square. Conversion factors 1 kilogram-force metre = 9.80665 N⋅m 1 newton-metre ≈ 0.73756215 pound-force-feet 1 pound-foot ≡ 1 pound-force-foot ≈ 1.35581795 N⋅m 1 ounce-inch ≡ 1 ounce-force-inch ≈ 7.06155181 mN⋅m (millinewton-metres) 1 dyne-centimetre = 10⁻⁷ N⋅m
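The conversion factors above can be captured in a small lookup table. A minimal sketch; the table and helper names are our own, not from any standard library:

```python
# Torque conversions to newton-metres, using the factors listed above.
TO_NEWTON_METRES = {
    "kilogram-force metre": 9.80665,
    "pound-force foot":     1.35581795,
    "ounce-force inch":     7.06155181e-3,   # 7.06155181 mN*m
    "dyne-centimetre":      1e-7,
}

def to_newton_metres(value: float, unit: str) -> float:
    return value * TO_NEWTON_METRES[unit]

# Example: a 25 lbf-ft torque specification expressed in N*m.
print(to_newton_metres(25, "pound-force foot"))   # ~33.9 N*m
# Round trip: 1 N*m back to pound-force feet.
print(1 / TO_NEWTON_METRES["pound-force foot"])   # ~0.73756215
```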
Pulp (paper)
Pulp is a fibrous lignocellulosic material prepared by chemically, semi-chemically or mechanically producing cellulosic fibers from wood, fiber crops, waste paper, or rags. Mixed with water and other chemicals or plant-based additives, pulp is the major raw material used in papermaking and the industrial production of other paper products. History Before the widely acknowledged invention of papermaking by Cai Lun in China around AD 105, paper-like writing materials such as papyrus and amate were produced by ancient civilizations using plant materials which were largely unprocessed. Strips of bark or bast material were woven together, beaten into rough sheets, dried, and polished by hand. Pulp used in modern and traditional papermaking is distinguished by the process which produces a finer, more regular slurry of cellulose fibers which are pulled out of suspension by a screen and dried to form sheets or rolls. The earliest paper produced in China consisted of bast fibers from the paper mulberry (kozo) plant along with hemp rag and net scraps. By the 6th century, the mulberry tree was domesticated by farmers in China specifically for the purpose of producing pulp to be used in the papermaking process. In addition to mulberry, pulp was also made from bamboo, hibiscus bark, blue sandalwood, straw, and cotton. Papermaking using pulp made from hemp and linen fibers from tattered clothing, fishing nets and fabric bags spread to Europe in the 13th century, with an ever-increasing use of rags being central to the manufacture and affordability of rag paper, a factor in the development of printing. By the 1800s, production demands on the newly industrialized papermaking and printing industries led to a shift in raw materials, most notably the use of pulpwood and other tree products which today make up more than 95% of global pulp production. The use of wood pulp and the invention of automatic paper machines in the late 18th and early 19th centuries contributed to paper's status as an inexpensive commodity in modern times. While some of the earliest examples of paper made from wood pulp include works published by Jacob Christian Schäffer in 1765 and Matthias Koops in 1800, large-scale wood paper production began in the 1840s with unique, simultaneous developments in mechanical pulping made by Friedrich Gottlob Keller in Germany and by Charles Fenerty in Nova Scotia. Chemical processes quickly followed, first with J. Roth's use of sulfurous acid to treat wood, then by Benjamin Tilghman's U.S. patent on the use of calcium bisulfite, Ca(HSO3)2, to pulp wood in 1867. Almost a decade later, the first commercial sulfite pulp mill was built, in Sweden. It used magnesium as the counter ion and was based on work by Carl Daniel Ekman. By 1900, sulfite pulping had become the dominant means of producing wood pulp, surpassing mechanical pulping methods. The competing chemical pulping process, the sulfate, or kraft, process, was developed by Carl F. Dahl in 1879; the first kraft mill started, in Sweden, in 1890. The invention of the recovery boiler, by G.H. Tomlinson in the early 1930s, allowed kraft mills to recycle almost all of their pulping chemicals. This, along with the ability of the kraft process to accept a wider variety of types of wood and to produce stronger fibres, made the kraft process the dominant pulping process, starting in the 1940s. Global production of wood pulp in 2006 was 175 million tons (160 million tonnes).
In the previous year, 63 million tons (57 million tonnes) of market pulp (not made into paper in the same facility) was sold, with Canada being the largest source at 21 percent of the total, followed by the United States at 16 percent. The wood fiber sources required for pulping are "45% sawmill residue, 21% logs and chips, and 34% recycled paper" (Canada, 2014). Chemical pulp made up 93% of market pulp. Wood pulp The timber resources used to make wood pulp are referred to as pulpwood. While in theory any tree can be used for pulp-making, coniferous trees are preferred because the cellulose fibers in the pulp of these species are longer, and therefore make stronger paper. Some of the most commonly used trees for paper making include softwoods such as spruce, pine, fir, larch and hemlock, and hardwoods such as eucalyptus, aspen and birch. There is also increasing interest in genetically modified tree species (such as GM eucalyptus and GM poplar) because of several major benefits these can provide, such as increased ease of breaking down lignin and increased growth rate. A pulp mill is a manufacturing facility that converts wood chips or other plant fibre sources into a thick fiberboard which can be shipped to a paper mill for further processing. Pulp can be manufactured using mechanical, semi-chemical or fully chemical methods (kraft and sulfite processes). The finished product may be either bleached or non-bleached, depending on the customer requirements. Wood and other plant materials used to make pulp contain three main components (apart from water): cellulose fibers (desired for papermaking), lignin (a three-dimensional polymer that binds the cellulose fibres together) and hemicelluloses (shorter branched carbohydrate polymers). The aim of pulping is to break down the bulk structure of the fibre source, be it chips, stems or other plant parts, into the constituent fibres. Chemical pulping achieves this by degrading the lignin and hemicellulose into small, water-soluble molecules which can be washed away from the cellulose fibres without depolymerizing the cellulose fibres (chemically depolymerizing the cellulose weakens the fibres). The various mechanical pulping methods, such as groundwood (GW) and refiner mechanical pulping (RMP), physically tear the cellulose fibres one from another. Much of the lignin remains adhering to the fibres. Strength is impaired because the fibres may be cut. There are a number of related hybrid pulping methods that use a combination of chemical and thermal treatment to begin an abbreviated chemical pulping process, followed immediately by a mechanical treatment to separate the fibres. These hybrid methods include thermomechanical pulping, also known as TMP, and chemithermomechanical pulping, also known as CTMP. The chemical and thermal treatments reduce the amount of energy subsequently required by the mechanical treatment, and also reduce the amount of strength loss suffered by the fibres. Harvesting trees Most pulp mills use good forest management practices in harvesting trees to ensure that they have a sustainable source of raw materials. One of the major complaints about harvesting wood for pulp mills is that it reduces the biodiversity of the harvested forest. Pulp tree plantations account for 16 percent of world pulp production, old-growth forests 9 percent, and second-, third- and later-generation forests account for the rest. Reforestation is practiced in most areas, so trees are a renewable resource.
The FSC (Forest Stewardship Council), SFI (Sustainable Forestry Initiative), PEFC (Programme for the Endorsement of Forest Certification), and other bodies certify paper made from trees harvested according to guidelines meant to ensure good forestry practices. The number of trees consumed depends on whether mechanical processes or chemical processes are used. It has been estimated that, based on a mixture of softwoods and hardwoods 12 metres (40 ft) tall and 15–20 centimetres (6–8 in) in diameter, it would take an average of 24 trees to produce 0.9 tonne (1 ton) of printing and writing paper, using the kraft process (chemical pulping). Mechanical pulping is about twice as efficient in using trees, since almost all of the wood is used to make fibre; therefore it takes about 12 trees to make 0.9 tonne (1 ton) of mechanical pulp or newsprint (the arithmetic is sketched below). There are roughly two short tons in a cord of wood. Preparation for pulping Wood chipping is the act and industry of chipping wood for pulp, but also for other processed wood products and mulch. Only the heartwood and sapwood are useful for making pulp. Bark contains relatively few useful fibers and is removed and used as fuel to provide steam for use in the pulp mill. Most pulping processes require that the wood be chipped and screened to provide uniform sized chips. Pulping There are a number of different processes which can be used to separate the wood fiber: Mechanical pulp Manufactured grindstones with embedded silicon carbide or aluminum oxide can be used to grind small wood logs called "bolts" to make stone groundwood pulp (SGW). If the wood is steamed prior to grinding it is known as pressure groundwood pulp (PGW). Most modern mills use chips rather than logs and ridged metal discs called refiner plates instead of grindstones. If the chips are just ground up with the plates, the pulp is called refiner mechanical pulp (RMP) and if the chips are steamed while being refined the pulp is called thermomechanical pulp (TMP). Steam treatment significantly reduces the total energy needed to make the pulp and decreases the damage (cutting) to fibres. Mechanical pulps are used for products that require less strength, such as newsprint and paperboards. Thermomechanical pulp Thermomechanical pulp is pulp produced by processing wood chips using heat (thus "thermo-") and a mechanical refining movement (thus "-mechanical"). It is a two-stage process where the logs are first stripped of their bark and converted into small chips. These chips have a moisture content of around 25–30 percent. A mechanical force is applied to the wood chips in a crushing or grinding action which generates heat and water vapour and softens the lignin, thus separating the individual fibres. The pulp is then screened and cleaned; any clumps of fibre are reprocessed. This process gives a high yield of fibre from the timber (around 95 percent) and, as the lignin has not been removed, the fibres are hard and rigid. Chemi-thermomechanical pulp Wood chips can be pre-treated with sodium carbonate, sodium hydroxide, sodium sulfite and other chemicals prior to refining with equipment similar to a mechanical mill. The conditions of the chemical treatment are much less vigorous (lower temperature, shorter time, less extreme pH) than in a chemical pulping process, since the goal is to make the fibers easier to refine, not to remove lignin as in a fully chemical process. Pulps made using these hybrid processes are known as chemi-thermomechanical pulps (CTMP).
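The trees-per-ton estimates in the "Harvesting trees" passage above follow directly from pulping yield. A quick sketch of that arithmetic; the yield figures here are assumed illustrative values rather than numbers from the text:

```python
# Mechanical pulping keeps roughly twice as much of the wood as fibre,
# so it needs roughly half as many trees per ton of product.
TREES_PER_TON_KRAFT = 24    # from the estimate above (0.9 tonne of paper)
KRAFT_YIELD = 0.45          # assumed: fraction of wood mass kept as fibre
MECHANICAL_YIELD = 0.90     # assumed: "almost all of the wood" becomes fibre

trees_mechanical = TREES_PER_TON_KRAFT * KRAFT_YIELD / MECHANICAL_YIELD
print(round(trees_mechanical))   # ~12 trees, matching the text's figure
```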
Chemical pulp Chemical pulp is produced by combining wood chips and chemicals in large vessels called digesters. There, heat and chemicals break down lignin, which binds cellulose fibres together, without seriously degrading the cellulose fibres. Chemical pulp is used for materials that need to be stronger, or is combined with mechanical pulps to give a product different characteristics. The kraft process is the dominant chemical pulping method, with the sulfite process second. Historically, soda pulping was the first successful chemical pulping method. Recycled pulp Recycled pulp is also called deinked pulp (DIP). DIP is recycled paper which has been processed by chemicals, removing printing inks and other unwanted elements and freeing the paper fibres. The process is called deinking. DIP is used as raw material in papermaking. Many newsprint, toilet paper and facial tissue grades commonly contain 100 percent deinked pulp, and in many other grades, such as lightweight coated paper for offset and printing and writing papers for office and home use, DIP makes up a substantial proportion of the furnish. Organosolv pulping Organosolv pulping uses organic solvents at temperatures above 140 °C to break down lignin and hemicellulose into soluble fragments. The pulping liquor is easily recovered by distillation. The reason for using a solvent is to make the lignin more soluble in the cooking liquor. The most commonly used solvents are methanol, ethanol, formic acid and acetic acid, often in combination with water. Alternative pulping methods Research is under way to develop biopulping (biological pulping), similar to chemical pulping but using certain species of fungi that are able to break down the unwanted lignin, but not the cellulose fibres. In the biopulping process, the fungal enzyme lignin peroxidase selectively digests lignin, leaving the cellulose fibres intact. This could have major environmental benefits in reducing the pollution associated with chemical pulping. The pulp is bleached using a chlorine dioxide stage followed by neutralization and calcium hypochlorite. The oxidizing agent in either case oxidizes and destroys the dyes formed from the tannins of the wood, which are accentuated (reinforced) by sulfides present in it. Steam exploded fibre is a pulping and extraction technique that has been applied to wood and other fibrous organic material. Bleaching The pulp produced up to this point in the process can be bleached to produce a white paper product. The chemicals used to bleach pulp have been a source of environmental concern, and recently the pulp industry has been using alternatives to chlorine, such as chlorine dioxide, oxygen, ozone and hydrogen peroxide. Alternatives to wood pulp Pulp made from non-wood plant sources or recycled textiles is manufactured today largely as a speciality product for fine-printing and art purposes. Modern machine- and hand-made art papers made with cotton, linen, hemp, abaca, kozo, and other fibers are often valued for their longer, stronger fibers and their lower lignin content. Lignin, present in virtually all plant materials, contributes to the acidification and eventual breakdown of paper products, often characterized by the browning and embrittling of paper with a high lignin content, such as newsprint. 100% cotton or a combination of cotton and linen pulp is widely used to produce documents intended for long-term use, such as certificates, currency, and passports.
Today, some groups advocate using field crop fibre or agricultural residues instead of wood fibre as a more sustainable means of production. There is enough straw to meet much of North America's book, magazine, catalogue and copy paper needs. Agricultural-based paper does not come from tree farms. Some agricultural residue pulps take less time to cook than wood pulps. That means agricultural-based paper uses less energy, less water and fewer chemicals. Pulp made from wheat and flax straw has half the ecological footprint of pulp made from forests. Hemp paper is a possible replacement, but processing infrastructure, storage costs and the low usability percentage of the plant mean it is not a ready substitute. However, wood is also a renewable resource, with about 90 percent of pulp coming from plantations or reforested areas. Non-wood fibre sources account for about 5–10 percent of global pulp production, for a variety of reasons, including seasonal availability, problems with chemical recovery, brightness of the pulp, etc. In China, where a relatively high proportion of pulp was made from non-wood sources as of 2009, this increased the use of water and energy. In some applications, such as filter paper or tea bags, nonwovens are alternatives to paper made from wood pulp. Market pulp Market pulp is any variety of pulp that is produced in one location, dried and shipped to another location for further processing. Important quality parameters for pulp not directly related to the fibres are brightness, dirt levels, viscosity and ash content. In 2004, market pulp amounted to about 55 million metric tons. Air-dry pulp is the most common form in which pulp is sold. This is pulp dried to about 10 percent moisture content. It is normally delivered as sheeted bales of 250 kg. The reason to leave 10 percent moisture in the pulp is that this minimizes fibre-to-fibre bonding and makes it easier to disperse the pulp in water for further processing to paper. Roll pulp, or reel pulp, is the most common delivery form of pulp to non-traditional pulp markets. Fluff pulp is normally shipped on rolls (reels). This pulp is dried to 5–6 percent moisture content. At the customer's site it goes through a comminution process to prepare it for further processing. Some pulps are flash dried. This is done by pressing the pulp to about 50 percent moisture content and then letting it fall through silos that are 15–17 m high. Gas-fired hot air is the normal heat source. The temperature is well above the char point of cellulose, but the large amount of moisture in the fibre wall and lumen prevents the fibres from being incinerated. Flash-dried pulp is often not dried down to 10 percent moisture (air dry), and the bales are not as densely packed as air-dry pulp. Environmental concerns The major environmental impacts of producing wood pulp come from its impact on forest sources and from its waste products. Forest resources The impact of logging to provide the raw material for wood pulp is an area of intense debate. Modern logging practices, using forest management, seek to provide a reliable, renewable source of raw materials for pulp mills. The practice of clear cutting is a particularly sensitive issue since it is a very visible effect of logging. Reforestation, the planting of tree seedlings on logged areas, has also been criticized for decreasing biodiversity because reforested areas are monocultures. Logging of old-growth forests accounts for less than 10 percent of wood pulp, but is one of the most controversial issues.
Effluents from pulp mills The process effluents are treated in a biological effluent treatment plant, which guarantees that the effluents are not toxic to the receiving water body. Mechanical pulp is not a major cause for environmental concern since most of the organic material is retained in the pulp, and the chemicals used (hydrogen peroxide and sodium dithionite) produce benign byproducts (water and, ultimately, sodium sulfate, respectively). Chemical pulp mills, especially kraft mills, are energy self-sufficient and very nearly closed-cycle with respect to inorganic chemicals. Bleaching with chlorine produces large amounts of organochlorine compounds, including polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs). Many mills have adopted alternatives to chlorinated bleaching agents, thereby reducing emissions of organochlorine pollution. Odor problems The kraft pulping reaction in particular releases foul-smelling compounds. The sulfide reagent that degrades the lignin structure also causes some demethylation, yielding methanethiol, dimethyl sulfide, and dimethyl disulfide. These same compounds are released during many forms of microbial decay, including the internal microbial action in Camembert cheese, although the kraft process is a chemical one and does not involve any microbial degradation. These compounds have extremely low odor thresholds and disagreeable smells. Applications The main applications for pulp are paper and board production. The furnish of pulps used depends on the quality of the finished paper. Important quality parameters are wood furnish, brightness, viscosity, extractives, dirt count and strength. Chemical pulps are used for making nanocellulose. Speciality pulp grades have many other applications. Dissolving pulp is used in making regenerated cellulose, which is used in textile and cellophane production. It is also used to make cellulose derivatives. Fluff pulp is used in diapers, feminine hygiene products and nonwovens. Paper production The Fourdrinier machine is the basis for most modern papermaking, and it has been used in some variation since its inception. It accomplishes all the steps needed to transform pulp into a final paper product. Economics In 2009, northern bleached softwood kraft (NBSK) pulp sold for $650/ton in the United States. The price had dropped due to falling demand when newspapers reduced their size, in part as a result of the recession. By 2024 this price had recovered to $1,315/ton.
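A quick back-of-the-envelope calculation shows what that recovery implies as an average annual growth rate; this sketch assumes the two quoted spot prices are directly comparable and ignores inflation.

```python
# Implied compound annual growth rate (CAGR) between the two quoted NBSK prices.
price_2009 = 650.0    # USD per ton
price_2024 = 1315.0   # USD per ton
years = 2024 - 2009

cagr = (price_2024 / price_2009) ** (1 / years) - 1
print(f"Implied average annual price growth: {cagr:.1%}")  # roughly 4.8% per year
```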
https://en.wikipedia.org/wiki/Electricity%20market
Electricity market
An electricity market is a system that enables the exchange of electrical energy through an electrical grid. Historically, electricity has been primarily sold by companies that operate electric generators, and purchased by consumers or electricity retailers. The electric power industry began in the late 19th century in the United States and United Kingdom. Throughout the 20th century, and up to the present, there have been deep changes in the economic management of electricity. Changes have occurred across different regions and countries, for many reasons, ranging from technological advances (on supply and demand sides) to politics and ideology. Around the turn of the 21st century, several countries restructured their electric power industries, replacing the vertically integrated and tightly regulated "traditional" electricity market with market mechanisms for electricity generation, transmission, distribution, and retailing. The traditional and competitive market approaches loosely correspond to two visions of industry: the deregulation was transforming electricity from a public service (like sewerage) into a tradable good (like crude oil). As of the 2020s, the traditional markets are still common in some regions, including large parts of the United States and Canada. In recent years, governments have reformed electricity markets to improve management of variable renewable energy and reduce greenhouse gas emissions. Services The structure of an electricity market is quite complex. Markets often include mechanisms to manage a variety of relevant services alongside energy. Services may include: on the supply side, a wholesale energy market (all of these use offer caps in some form); on the demand side, a retail energy market. A simple "energy-only" wholesale electricity market would only facilitate the sale of energy, without regard for other services that may support the system, and experienced problems once implemented alone. To account for this, the electricity market structure typically includes: ancillary services not directly related to producing electricity, which would not generate income in the "energy-only" model but are essential for overall operation of the system (frequency control market, voltage control and reactive power management, inertial response and others); a capacity market or some other mechanism providing an income stream necessary to build and maintain additional generation units ("reserves") for the worst-case scenario, since on a typical day these units are never called upon (not "dispatched") and thus would not produce revenue in an "energy-only" market; a cost-based market with audited costs replacing producers' bids in places where local market power is a concern (e.g., in some parts of the US and entire hydropower-rich countries of Latin America). Due to a lack of competition, electricity networks are subject to price regulation in Australia. The competitive retail electricity markets were able to maintain their simple structure. In addition, for most major operators, there are markets for transmission rights and electricity derivatives such as electricity futures and options, which are actively traded. The market externality of greenhouse gas emissions is sometimes dealt with by carbon pricing. Economic theory The electricity market is characterized by unique features that are atypical in the markets for commodities or consumption goods.
Although a few somewhat similar markets exist (for example, airplane tickets and hotel rooms, like electricity, cannot be stored, and the demand for them varies by season), the magnitude of peak pricing sets the electricity market apart: a peak electricity price can be 100 times higher than an off-peak one, while the summer price for a beachfront hotel room may be only 3–4 times higher than the off-season one. The hotel and airline markets can also use retail price discrimination, which is unavailable in the wholesale electricity market. The peculiarities of the electricity market make it fundamentally incomplete. Generation Electricity is typically available on demand. In order to achieve this, the supply must match the demand very closely at any time despite the continuous variations of both (so-called grid balancing). Frequently, the only safety margins are the ones provided by the kinetic energy of the physically rotating machinery (synchronous generators and turbines). If there is a mismatch between supply and demand, the generators absorb extra energy by speeding up or produce more power by slowing down, causing the utility frequency (either 50 or 60 hertz) to increase or decrease. However, the frequency cannot deviate too much from the target: many pieces of electrical equipment can be destroyed by out-of-bounds frequency and thus will automatically disconnect from the grid to protect themselves, potentially triggering a blackout. There are many other physical and economic constraints affecting the electricity network and the market, with some creating non-convexity: a typical consumer is not aware of the current system frequency and pays a fixed price for a unit of energy that does not depend on the balance between supply and demand, and thus can suddenly increase or decrease consumption; variable renewable energy sources are intermittent due to their reliance on the weather and can ramp up or down literally from one minute to the next; fossil-fuel and nuclear plants have restrictions on ramping speed: from 5–30 minutes for gas-fired plants to hours for coal-fired generation, and even longer for nuclear ones; many fossil-fuel plants cannot be ramped down below 20–60% of their nameplate capacity; due to the high cost of start-up, the production cost of electricity might differ from the marginal cost in some time intervals, thus forcing providers to bid above marginal cost. Networks Electricity networks are natural monopolies, because it is not feasible to build multiple networks competing against one another. To mitigate this, many electricity networks are regulated to address the risk of price gouging. The two main types of network price regulation are: Cost of service regulation: prices are based on the actual cost of operating the network. Incentive regulation: prices are capped based on a prior estimate of costs; the difference between the actual and estimated costs delivers a return or loss to the network operator. The design of the transmission network limits the amount of electricity that can be transmitted from one tightly coupled area ("node") to another, so a generator in one node might be unable to service a load in another node (due to "transmission congestion"), potentially creating fragments of the market that have to be served with local generation ("load pockets"). History Traditional industries After its first few years of existence, the electricity supply industry was regulated by the various levels of government.
By the 1950s, a wide variety of arrangements had evolved, with substantial differences between countries and even at the regional level, for example: France, Italy, the Republic of Ireland and Greece each had a nationwide government-owned vertically integrated company; the United Kingdom had government-owned generation and transmission (the Central Electricity Generating Board), but distribution was decentralized across 14 electricity boards; Germany combined a small number of regional integrated generation and transmission companies with municipal distribution; Japan had 10 regional vertically integrated monopolies; Norway's electricity supply was mostly at the level of municipalities; in the US, a complex mix of companies, owned either privately or by varying levels of government, evolved, while regulation favored municipal-level and co-op ownership. For example, Hawaii had only privately owned utilities, Nebraska only publicly owned ones, the Tennessee Valley Authority (the largest generation company) is federally owned, and the Los Angeles Department of Water and Power is city-owned. These diverse structures had a few unifying features: very little reliance on competitive markets, no formal wholesale markets, and customers unable to choose their suppliers. The diversity and sheer size of the US market made the potential trade gains large enough to justify some wholesale transactions: large utilities were providing electricity to smaller (municipal or cooperative) ones under bilateral requirements contracts; coordination sales were made between the vertically integrated companies to reduce costs, sometimes through power pools. On the retail side, customers were charged fixed regulated prices that did not change with marginal costs, retail tariffs almost entirely relied on volumetric pricing (based on meter readings recorded monthly), and fixed cost recovery was included in the per-kWh price. The traditional market arrangement was designed for the state of the electric industry common pre-restructuring (and still common in some regions, including large parts of the US and Canada). Richard L. Schmalensee calls this state historical (as opposed to the post-restructuring emerging one). In the historical regime almost all generation sources can be considered dispatchable (available on demand, unlike the emerging variable renewable energy). Evolution of deregulated markets Chile became a pioneer in deregulation in the early 1980s (the law of 1982 codified changes that were started in 1979). Only a few years later the new market approach to electricity was formulated in the US, popularized in the influential work by Joskow and Schmalensee, "Markets for Power: An Analysis of Electrical Utility Deregulation" (1983). At the same time in the UK, the Energy Act 1983 made provisions for common carriage in the electricity networks, enabling a choice of supplier for electricity boards and very large customers (analogous to "wheeling" in the US). The incorporation of distributed energy resources (DERs) has inspired innovative electricity markets that emerge from a hierarchical deregulated market structure, such as local flexibility markets, with upstream aggregating entities representing multiple DERs (e.g., aggregators). Flexibility markets refer to the markets in which distribution system operators (DSOs) procure services from assets linked to their distribution system, aiming to guarantee the operational safety of the distribution network.
This concept is relatively new, and its design is currently a subject of active research. In this sense, different entities can act as aggregators, e.g. demand response aggregators, community managers, electricity service providers, and more, depending on the characteristics of the set of assets being represented. Wholesale electricity market A wholesale electricity market, also called a power exchange or PX (or an energy exchange, especially if gas is also traded), is a system enabling purchases, through bids to buy, and sales, through offers to sell. Bids and offers use supply and demand principles to set the price. Long-term contracts are similar to power purchase agreements and are generally considered private bilateral transactions between counterparties. A wholesale electricity market exists when competing generators offer their electricity output to retailers. The retailers then re-price the electricity and take it to market. While wholesale pricing used to be the exclusive domain of large retail suppliers, increasingly markets like New England are beginning to open up to end-users. Large end-users seeking to cut out unnecessary overhead in their energy costs are beginning to recognize the advantages inherent in such a purchasing move. Consumers buying electricity directly from generators is a relatively recent phenomenon. Buying wholesale electricity is not without its drawbacks (market uncertainty, membership costs, set-up fees, collateral investment, and organization costs, as electricity would need to be bought on a daily basis); however, the larger the end user's electrical load, the greater the benefit and incentive to make the switch. For an economically efficient electricity wholesale market to flourish, it is essential that a number of criteria are met, namely the existence of a coordinated spot market that has "bid-based, security-constrained, economic dispatch with nodal prices". These criteria have been largely adopted in the US, Australia, New Zealand and Singapore. Markets for power-related commodities required and managed by (and paid for by) market operators to ensure reliability are considered ancillary services and include such names as spinning reserve, non-spinning reserve, operating reserves, responsive reserve, regulation up, regulation down, and installed capacity. Market clearing Wholesale transactions (bids and offers) in electricity are typically cleared and settled by the market operator or a special-purpose independent entity charged exclusively with that function. Market operators may or may not clear trades, but often require knowledge of the trade to maintain generation and load balance. Markets for electricity trade net generation output for a number of intervals, usually in increments of 5, 15 or 60 minutes. Two types of auction can be used to determine which producers are dispatched: Double auction: the operator aggregates both the supply bids for each interval (forming a supply curve) and the demand bids (demand curve). The clearing price is defined by the intersection of the supply and demand curves for each time interval. One example of this is Nord Pool. Single reverse auction: the operator aggregates only the supply bids, and dispatches the cheapest combination of offers. To determine payments, the clearing can use one of two arrangements: pay-as-bid (PAB), where each successful bidder is paid the price stated in their bid. This arrangement is not common, but notable cases include the UK and Nord Pool's intra-day market.
pay-as-clear: also known as uniform pricing, or the marginal pricing system (MPS). All participants are paid the price of the highest successful bid (the clearing price). This is the system commonly used in electricity markets. Under PAB, strategic bidding can lead producers to bid much higher than their true cost, because they will be dispatched as long as their bid is below the clearing price. In the absence of collusion, it is expected that MPS incentivizes producers to bid close to their short-run marginal cost to avoid the risk of missing out altogether. MPS is also more transparent: a new bidder already knows the market price and can estimate profitability from its own marginal cost, whereas to do well under PAB, the bidder needs information about other bids, too. Due to the higher risks of PAB, it gives an extra advantage to large players that are better equipped to estimate the market and take the risk (for example, by gambling with a high bid for some of their units). However, MPS results in producers being paid more than their bidding prices by design, leading to calls to replace it with PAB despite the incentive for strategic bidding. Centralized and decentralized markets To handle all the constraints while keeping the system in balance, a central agency, the transmission system operator (TSO), is required to coordinate the unit commitment and economic dispatch. If the frequency falls outside a predetermined range, the system operator will act to add or remove either generation or load. Unlike the real-time decisions that are necessarily centralized, the electricity market itself can be centralized or decentralized. In a centralized market the TSO decides which plant should run and how much it is supposed to produce well before delivery (during the "spot market" phase, or day-ahead operation). In a decentralized market the producer only commits to the delivery of electricity, but the means to do that are left to the producer itself (for example, it can enter an agreement with another producer to provide the actual energy). Centralized markets make it easier to accommodate non-convexities, while decentralized ones allow intra-day trading to correct the possibly suboptimal decisions made day-ahead, for example, accommodating improved weather forecasts for renewables. Due to differences in grid construction (the US had weaker transmission networks), the design of wholesale markets in the US and Europe diverged, even though initially the US followed the European (decentralized) example. To accommodate the transmission network constraints, centralized markets typically use locational marginal pricing (LMP), where each node has its own local market price (thus another name for the practice, nodal pricing). Political considerations sometimes make it unpalatable to force consumers in the same territory, but connected to different nodes, to pay different prices for electricity, so a modified generator nodal pricing (GNP) model is used: the generators are still paid the nodal prices, while the load-serving entities charge the end users prices that are averaged over the territory. Many decentralized markets do not use LMP and have a price established over a geographic area ("zone", thus the name zonal pricing) or a "region" (regional pricing; the term is used primarily for the very large zones of the National Electricity Market of Australia, where five regions cover the continent).
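To make the uniform-price (pay-as-clear) clearing described above concrete, here is a minimal single-interval sketch in Python. The bid data are invented for illustration, and a real market clearing additionally handles network constraints, block bids and the non-convexities discussed elsewhere in this article.

```python
# Minimal uniform-price (pay-as-clear) clearing for one interval.
# Supply offers: (price in $/MWh, quantity in MW); demand is a fixed 260 MW load.
offers = [(18, 100), (25, 80), (40, 60), (95, 50)]  # illustrative numbers
demand_mw = 260

# Build the merit order (cheapest offers first) and dispatch until demand is met.
dispatched, remaining = [], demand_mw
for price, qty in sorted(offers):
    take = min(qty, remaining)
    if take > 0:
        dispatched.append((price, take))
        remaining -= take

clearing_price = dispatched[-1][0]  # price of the last (marginal) accepted offer
print(f"Clearing price: ${clearing_price}/MWh")
for price, qty in dispatched:
    # Under pay-as-clear, every accepted offer is paid the clearing price, so
    # infra-marginal units earn a rent of (clearing_price - bid) per MWh.
    print(f"bid ${price}: dispatched {qty} MW, paid ${clearing_price * qty}")
```

Under pay-as-bid, the final loop would instead pay each unit its own bid price, which is precisely what pushes bidders to guess the clearing price rather than reveal their costs.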
At the beginning of the 2020s there was no clear preference for either of the two market designs; for example, the North American markets went through centralization, while the European ones moved in the opposite direction. Centralized market A transmission system operator in a centralized electricity market obtains the cost information (usually three components: start-up costs, no-load costs, marginal production costs) for each unit of generation ("unit-based bidding") and makes all the decisions in the day-ahead and real-time (system redispatch) markets. This approach allows the operator to take into consideration the details of the configuration of the transmission system. The centralized market normally uses LMP, and the dispatch goal is minimizing the total cost across all nodes (which in a large network number in the hundreds or even thousands). The centralized markets use some procedures resembling those of the vertically integrated electric utilities of the era before deregulation, so the centralized markets are also called integrated electricity markets. Due to the centralized and detailed nature of the day-ahead dispatch, it stays feasible and cost-efficient at the time of delivery, unless some unexpected adverse events occur. Early decisions help to efficiently schedule the plants with long ramp-up times. The drawbacks of the centralized design with LMP are: politically, it proved hard to justify higher electricity pricing for customers in some locations (in the US the solution was found in the form of GNP); simplified bidding does not properly capture the cost structure of more complicated plants, like a combined-cycle gas turbine or a hydropower cascade; generation companies have an incentive to overstate their start-up costs (to capture more make-whole payments, see below); the absence of an intra-day market makes integration of renewables harder; the integrated markets are very computation-intensive, and this complexity makes them opaque to traders and hard to scale; the unchecked power of the transmission system operator makes it harder for the regulator to handle. The price of a unit of electricity with LMP is based on the marginal cost, so the start-up and no-load costs are not included. Centralized markets therefore typically pay compensation for these costs to the producer (so-called make-whole or uplift payments), financed in some way by the market participants (and, ultimately, the consumers). Inflexibility of the centralized market manifests itself in two ways: once set at the day-ahead market, the contract usually cannot be changed (some markets allow for an hour-ahead correction), so unexpected adverse events have to be accommodated in real time and thus in a suboptimal way, hurting producers with long ramp-up times, complex cost structures, or wind power generation; new technology (energy storage, demand response) with new cost structures requires time and effort to accommodate. Market clearing algorithms are complex (some are NP-complete) and have to be executed in limited time (5–60 minutes). The results are thus not necessarily optimal, are hard to replicate independently, and require the market participants to trust the operator (due to the complexity, a decision by the algorithm to accept or reject a bid sometimes appears entirely arbitrary to the bidder). If the transmission system operator owns the actual transmission network, it would be incentivized to profit by increasing the congestion rents.
Thus in the US the operator typically does not own any capacity and is frequently called an independent system operator (ISO). Cost-based market A higher degree of centralization of the market involves direct cost calculations by the market operator (producers no longer submit bids). Despite the obvious problem of generation companies being incentivized to inflate their costs (this can be hidden through transactions with affiliated companies), this cost-based electricity market arrangement eliminates the market power of the providers and is used in situations where an abuse of market power is possible (e.g., Chile with its preponderance of hydro power, parts of the US where local market power is sufficiently high, and some European markets). A less obvious issue is the tendency of market participants under these conditions to concentrate on investments in peaker plants to the detriment of baseload power. One of the advantages of the cost-based market is the relatively low cost to set it up. The cost-based approach is popular in Latin America: in addition to Chile, it is used in Bolivia, Peru, Brazil, and countries in Central America. A system operator performs an audit of the parameters of each generator unit (including heat rate, minimum load, ramping speed, etc.) and estimates the direct marginal costs of its operation. Based on this information, an hour-by-hour dispatch schedule is put in place to minimize the total direct cost. In the process, hourly shadow prices are obtained for each node, which might be used to settle the market sales. Decentralized market Decentralized markets allow the generation companies to choose their own way to provide energy for their day-ahead bid (which specifies price and location). The provider can use any unit at its disposal (so-called "portfolio-based bidding") or even pay another company to deliver the energy. The market still has the central operator that exclusively controls the system in real time, but with significantly diminished powers to intervene ahead of delivery (frequently just the ability to schedule the transmission network for day-ahead operation). This arrangement makes the operator's ownership of the transmission capacity less of an issue, and European countries, with the exception of the UK, permit it (following the independent transmission system operator, or ITSO, model). While some operators in Europe are involved in structuring the day-ahead and intra-day markets, others are not; for example, the UK market after the New Electricity Trading Arrangements, and the market in New Zealand, let the market participants sort out all the frictions before real time. This reliance on financial instruments leads to additional names for the decentralized markets: exchange-based, unbundled, bilateral. Bid-based, security-constrained, economic dispatch with nodal prices The system price in the day-ahead market is, in principle, determined by matching offers from generators to bids from consumers at each node to develop a classic supply and demand equilibrium price, usually on an hourly interval, and is calculated separately for subregions in which the system operator's load flow model indicates that constraints will bind transmission imports.
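A stylized version of this clearing problem can be written down explicitly. This is a simplified sketch assuming a lossless DC network, purely linear offer costs, and no unit-commitment decisions:

```latex
\min_{g,\,F}\; \sum_{i} c_i\, g_i
\quad\text{subject to}\quad
\sum_{i \in n} g_i \;-\; d_n \;=\; \sum_{m} F_{nm} \quad (\lambda_n),
\qquad 0 \le g_i \le \bar{g}_i,
\qquad |F_{nm}| \le \bar{F}_{nm}
```

Here $c_i$ and $g_i$ are the offer price and dispatch of unit $i$, $d_n$ is the demand at node $n$, and $F_{nm}$ is the flow from node $n$ to node $m$, bounded by the line limit $\bar{F}_{nm}$. The dual variable $\lambda_n$ of the balance constraint at node $n$ is the cost of serving one additional unit of demand there, which is exactly the shadow-price interpretation of the LMP described next.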
The theoretical price of electricity at each node on the network is a calculated "shadow price", in which it is assumed that one additional kilowatt-hour is demanded at the node in question, and the hypothetical incremental cost to the system that would result from the optimized redispatch of available units establishes the hypothetical production cost of the hypothetical kilowatt-hour. This is known as locational marginal pricing (LMP) or nodal pricing and is used in some deregulated markets, most notably in the Midcontinent Independent System Operator (MISO), PJM Interconnection, ERCOT, New York, and ISO New England markets in the United States, in New Zealand, and in Singapore. In practice, the LMP algorithm described above is run, incorporating a security-constrained (defined below), least-cost dispatch calculation with supply based on the generators that submitted offers in the day-ahead market, and demand based on bids from load-serving entities draining supplies at the nodes in question. Due to various non-convexities present in wholesale electricity markets, in the form of economies of scale, start-up and/or shut-down costs, avoidable costs, indivisibilities, minimum supply requirements, etc., some suppliers may incur losses under LMP, e.g., because they may fail to recover their fixed costs through commodity payments only. To address this problem, various pricing schemes that lift the price above marginal cost and/or provide side-payments (uplifts) have been proposed. Liberopoulos and Andrianesis (2016) review and compare several of these schemes on the prices, uplifts, and profits that each scheme generates. While in theory the LMP concepts are useful and not evidently subject to manipulation, in practice system operators have substantial discretion over LMP results through the ability to classify units as running in "out-of-merit dispatch", which are thereby excluded from the LMP calculation. In most systems, units that are dispatched to provide reactive power to support transmission grids are declared to be "out-of-merit" (even though these are typically the same units that are located in constrained areas and would otherwise result in scarcity signals). System operators also normally bring units online to hold as "spinning reserve" to protect against sudden outages or unexpectedly rapid ramps in demand, and declare them "out-of-merit". The result is often a substantial reduction in the clearing price at a time when increasing demand would otherwise result in escalating prices. Researchers have noted that a variety of factors, including energy price caps set well below the putative scarcity value of energy, the effect of "out-of-merit" dispatch, the use of techniques such as voltage reductions during scarcity periods with no corresponding scarcity price signal, etc., result in a missing money problem. The consequence is that prices paid to suppliers in the "market" are substantially below the levels required to stimulate new entry. The markets have therefore been useful in bringing efficiencies to short-term system operations and dispatch, but have been a failure in what was advertised as a principal benefit: stimulating suitable new investment where it is needed, when it is needed. In LMP markets, where constraints exist on a transmission network, there is a need for more expensive generation to be dispatched on the downstream side of the constraint. Prices on either side of the constraint separate, giving rise to congestion pricing and constraint rentals.
A constraint can be caused when a particular branch of a network reaches its thermal limit or when a potential overload will occur due to a contingent event (e.g., failure of a generator or transformer or a line outage) on another part of the network. The latter is referred to as a security constraint. Transmission systems are operated to allow for continuity of supply even if a contingent event, like the loss of a line, were to occur. This is known as a security-constrained system. In most systems the algorithm used is a "DC" model rather than an "AC" model, so constraints and redispatch resulting from thermal limits are identified/predicted, but constraints and redispatch resulting from reactive power deficiencies are not. Some systems take marginal losses into account. The prices in the real-time market are determined by the LMP algorithm described above, balancing supply from available units. This process is carried out for each 5-minute, half-hour or hourly interval (depending on the market) at each node on the transmission grid. The hypothetical redispatch calculation that determines the LMP must respect security constraints and must leave sufficient margin to maintain system stability in the event of an unplanned outage anywhere on the system. This results in a spot market with "bid-based, security-constrained, economic dispatch with nodal prices". Many established markets do not employ nodal pricing, examples being the UK, EPEX SPOT (most European countries), and Nord Pool Spot (Nordic and Baltic countries). Risk management Financial risk management is often a high priority for participants in deregulated electricity markets due to the substantial price and volume risks that the markets can exhibit. A consequence of the complexity of a wholesale electricity market can be extremely high price volatility at times of peak demand and supply shortages. The particular characteristics of this price risk are highly dependent on the physical fundamentals of the market, such as the mix of types of generation plant and the relationship between demand and weather patterns. Price risk can be manifested in price "spikes", which are hard to predict, and price "steps", when the underlying fuel or plant position changes for long periods. Volume risk is often used to denote the phenomenon whereby electricity market participants have uncertain volumes or quantities of consumption or production. For example, a retailer is unable to accurately predict consumer demand for any particular hour more than a few days into the future, and a producer is unable to predict the precise time that they will have a plant outage or shortages of fuel. A compounding factor is also the common correlation between extreme price and volume events. For example, price spikes frequently occur when some producers have plant outages or when some consumers are in a period of peak consumption. The introduction of substantial amounts of intermittent power sources such as wind energy may affect market prices. Electricity retailers, who in aggregate buy from the wholesale market, and generators, who in aggregate sell to the wholesale market, are exposed to these price and volume effects, and to protect themselves from volatility they will enter into "hedge contracts" with each other. The structure of these contracts varies by regional market due to different conventions and market structures.
However, the two simplest and most common forms are simple fixed-price forward contracts for physical delivery and contracts for differences, where the parties agree a strike price for defined time periods. In the case of a contract for difference, if a resulting wholesale price index (as referenced in the contract) in any time period is higher than the "strike" price, the generator will refund the difference between the "strike" price and the actual price for that period. Similarly, a retailer will refund the difference to the generator when the actual price is less than the "strike" price. The actual price index is sometimes referred to as the "spot" or "pool" price, depending on the market. Many other hedging arrangements, such as swing contracts, virtual bidding, financial transmission rights, call options and put options, are traded in sophisticated electricity markets. In general they are designed to transfer financial risks between participants. Price capping and cross subsidy Due to high gas prices resulting from the 2022 Russia–European Union gas dispute, in late 2022 the EU capped non-gas power prices at 180 euros per megawatt-hour, and the UK considered price capping. Fossil fuels, especially gas, may be price capped higher than renewables, with revenue above the cap subsidizing some consumers, as in Turkey. An academic study of an earlier price cap in that market concluded that it reduced welfare, and another study said that an EU-wide price cap would risk "a never-ending spiral of higher import prices and higher subsidies". It has been argued academically, via game theory, that a cap on the price of imported Russian gas (some of which is used to generate electricity) could be beneficial; however, politically this is difficult. Wholesale electricity markets Argentina – see Electricity sector in Argentina Australia – see Electricity sector in Australia Wholesale Electricity Market (WA) – Australian Energy Market Operator (AEMO) National Electricity Market (East Coast) – AEMO Austria – see EPEX SPOT and EXAA Energy Exchange Belgium – see APX Group Brazil – see Electricity sector in Brazil Canada – see Electricity sector in Canada Chile – see Electricity sector in Chile Colombia – see Electricity sector in Colombia Czech Republic – Czech electricity and gas market operator and Power Exchange Central Europe (PXE) Croatia – Croatian Power Exchange (CROPEX) France – see Electricity sector in France and EPEX SPOT Germany – see Electricity sector in Germany, European Energy Exchange AG (EEX) and EPEX SPOT Hungary – Hungarian Power Exchange HUPX and Power Exchange Central Europe (PXE) India – see Indian Energy Exchange and Power Exchange India Limited (PXIL) Ireland – Single Electricity Market Operator (SEMO) Italy – GME Japan – see Electricity sector in Japan and Japan Electric Power Exchange (JEPX) Korea – Korea Power Exchange (KPX) Mexico – Centro Nacional de Control de Energía (CENACE) Netherlands – see APX-ENDEX New Zealand – see Electricity sector in New Zealand and New Zealand Electricity Market Philippines – Philippine Wholesale Electricity Spot Market Poland – Polish Power Exchange (POLPX) Portugal – OMI-Polo Español, S.A. (OMIE), OMIP, Sociedad Rectora del Mercado de Productos Derivados, S.A. (MEFF), and European Energy Exchange AG (EEX) Scandinavia – see Nord Pool Spot Slovakia – Power Exchange Central Europe (PXE) Spain – OMI-Polo Español, S.A. (OMIE), OMIP, Sociedad Rectora del Mercado de Productos Derivados, S.A.
(MEFF), and EEX Sultanate of Oman – Oman Electricity Market Russian Federation – Trade System Administrator (ATS) Singapore – Energy Market Authority of Singapore (EMA) and Energy Market Company (EMC) Turkey – see Electricity sector in Turkey, Turkish Electricity Market United Kingdom – see APX-ENDEX and Elexon United States – summarized by the Federal Energy Regulatory Commission (FERC). California Independent System Operator ERCOT Market in Texas Midwest – Midcontinent Independent System Operator (MISO Energy) New England market New York – New York Independent System Operator (NYISO) PJM Interconnection for all or parts of Delaware, Illinois, Indiana, Kentucky, Maryland, Michigan, New Jersey, North Carolina, Ohio, Pennsylvania, Tennessee, Virginia, West Virginia, and the District of Columbia. Southwest – Southwest Power Pool, Inc Vietnam – Vietnam wholesale electricity market (VWEM), operated by EVN Electric power exchanges An electric power exchange is a commodities exchange dealing with electric power: Indian Energy Exchange APX Group Energy Exchange Austria European Energy Exchange European Power Exchange HUPX Hungarian Power Exchange Nord Pool AS Powernext International trading Electricity itself, or products made with a lot of electricity, exported to another country may be charged a carbon tariff if the exporting country has no carbon price: for example, as the UK has the UK ETS, it would not be charged under the EU Carbon Border Adjustment Mechanism, whereas Turkey has no carbon price and so might be charged. Possible future changes Rather than using the traditional merit order based on cost, it has been suggested that, when there is excess generation, the plants which most damage health should be ramped down first. Due to the growth of renewables and the 2021–2022 global energy crisis, some countries are considering changing their electricity markets. For example, some Europeans suggest decoupling electricity prices from natural gas prices. Retail electricity market A retail electricity market exists when end-use customers can choose their supplier from competing electricity retailers; one term used in the United States for this type of consumer choice is 'energy choice'. A separate issue for electricity markets is whether or not consumers face real-time pricing (prices based on the variable wholesale price) or a price that is set in some other way, such as average annual costs. In many markets, consumers do not pay based on the real-time price, and hence have no incentive to reduce demand at times of high (wholesale) prices or to shift their demand to other periods. Demand response may use pricing mechanisms or technical solutions to reduce peak demand. Generally, electricity retail reform follows from electricity wholesale reform. However, it is possible to have a single electricity generation company and still have retail competition. If a wholesale price can be established at a node on the transmission grid and the electricity quantities at that node can be reconciled, competition for retail customers within the distribution system beyond the node is possible. In the German market, for example, large, vertically integrated utilities compete with one another for customers on a more or less open grid. Although market structures vary, there are some common functions that an electricity retailer has to be able to perform, or enter into a contract for, to compete effectively.
Failure or incompetence in the execution of one or more of the following has led to some dramatic financial disasters: Billing Credit control Customer management via an efficient call centre Distribution use-of-system contract Reconciliation agreement "Pool" or "spot market" purchase agreement Hedge contracts – contracts for differences to manage "spot price" risk The two main areas of weakness have been risk management and billing. In the United States in 2001, California's flawed regulation of retail competition led to the California electricity crisis and left incumbent retailers subject to high spot prices but without the ability to hedge against these. In the UK a retailer, Independent Energy, with a large customer base went bust when it could not collect the money due from customers. Competitive retail needs open access to distribution and transmission wires. This in turn requires that prices be set for both these services. They must also provide appropriate returns to the owners of the wires and encourage efficient location of power plants. There are two types of fees: the access fee and the regular fee. The access fee covers the cost of having and accessing the network of wires, or the right to use the existing transmission and distribution network. The regular fee reflects the marginal cost of transferring electricity through the existing network of wires. New technology that may be better suited to real-time market pricing is available and has been piloted by the US Department of Energy. A potential use of event-driven SOA (service-oriented architecture) could be a virtual electricity market where home clothes dryers can bid on the price of the electricity they use in a real-time market pricing system. The real-time market price and control system could turn home electricity customers into active participants in managing the power grid and their monthly utility bills. Customers can set limits on how much they would pay for electricity to run a clothes dryer, for example, and electricity providers willing to transmit power at that price would be alerted over the grid and could sell the electricity to the dryer. On one side, consumer devices can bid for power based on how much the owner of the device is willing to pay, set ahead of time by the consumer. On the other side, suppliers can enter bids automatically from their electricity generators, based on how much it would cost to start up and run the generators. Further, the electricity suppliers could perform real-time market analysis to determine return on investment for optimizing profitability or reducing end-user cost of goods. The effects of a competitive retail electricity market are mixed across states, but generally appear to lower prices in states with high participation and raise prices in states that have little customer participation. Event-driven SOA software could allow homeowners to customize many different types of electricity devices found within their home to a desired level of comfort or economy. The event-driven software could also automatically respond to changing electricity prices, at intervals as short as five minutes. For example, to reduce the homeowner's electricity usage in peak periods (when electricity is most expensive), the software could automatically lower the target temperature of the thermostat on the central heating system (in winter) or raise the target temperature of the thermostat on the central cooling system (in summer).
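As a rough illustration of the event-driven idea described above, the following sketch adjusts a heating setpoint whenever a new five-minute price arrives. The price threshold, setback amount, and hard-coded price stream are hypothetical stand-ins, not any actual DOE pilot or vendor API.

```python
# Hypothetical price-responsive thermostat logic (heating season).
# A real implementation would subscribe to a utility's price feed; here the
# five-minute prices are simply hard-coded for illustration.

COMFORT_SETPOINT_C = 21.0   # homeowner's preferred temperature
PRICE_LIMIT = 120.0         # $/MWh above which the homeowner accepts less heat
SETBACK_C = 2.0             # how far to lower the setpoint during expensive periods

def setpoint_for_price(price_per_mwh: float) -> float:
    """Return the thermostat setpoint for the current five-minute price."""
    if price_per_mwh > PRICE_LIMIT:
        return COMFORT_SETPOINT_C - SETBACK_C  # shed load while prices spike
    return COMFORT_SETPOINT_C

five_minute_prices = [45.0, 80.0, 310.0, 150.0, 60.0]  # $/MWh, invented
for price in five_minute_prices:
    print(f"price ${price:>6.1f}/MWh -> setpoint {setpoint_for_price(price):.1f} °C")
```

The same event-driven pattern extends to appliance bidding: each device publishes a maximum willingness-to-pay, and only runs when the streamed price falls below it.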
Deregulated market experience Comparisons between the traditional and competitive market designs have produced mixed results. In the US, where deregulated utilities operate alongside vertically integrated ones, there is some evidence of increased efficiencies: deregulated nuclear and coal-fired plants (but not the gas-fired ones) outperformed their vertically integrated peers; deregulated plants switched to less capital-intensive strategies of complying with the regulations; wholesale trading allowed for substantially better use of the generation facilities; the departures of prices from costs (generation company mark-ups) had increased. Schmalensee concludes that it is plausible that the restructuring resulted in lower wholesale prices, at least in the US and the UK. MacKay and Mercadal, in a large-scale analysis of the US market between 1994 and 2016, while confirming Schmalensee's findings on lower costs, reached the opposite conclusion on prices: deregulated utilities realized significantly higher prices due to the higher markup of the generation facilities and double extraction of the profit margin by the two vertically separated companies. Regarding resource adequacy, the US market at the start of restructuring had excess generating capacity, confirming the expectation that regulated prices provide an incentive for the generators to overinvest. The initial hope that the revenue stream would be sufficient to continue building up the capacity did not materialize: faced with abuse of market power, all US markets introduced wholesale price caps that in many cases were much lower than the value of lost load, thus creating the "missing money problem" (capping revenue at the time of relatively infrequent shortages causes a shortage of the money needed to build the infrastructure that is only used during these shortages); the problem of over-investment was replaced by underinvestment, dragging down grid reliability. In response, major transfer payments for capacity were instituted (in the US in 2018 the payments were getting as high as 47% of a new unit's revenue). EU markets followed the American lead in the 2010s. Schmalensee notes that while the process of determining the amount of compensation for new capacity in the US is in principle similar to the integrated resource planning of the traditional markets, the new version is less transparent and provides less certainty due to frequent rule changes (the traditional scheme guaranteed cost recovery), so an efficiency improvement in this area is unlikely. The introduction of the choice of supplier and variable pricing in the retail market was enthusiastically supported by larger consumers (businesses) that can employ consumption-shifting techniques to benefit from time-of-use pricing and have access to hedging against very high prices. Acceptance among residential customers in the US was minimal. Many regional markets have achieved some success, and the ongoing trend continues to be towards deregulation and introduction of competition. However, in 2000/2001 major failures such as the California electricity crisis and the Enron debacle caused a slowdown in the pace of change and, in some regions, an increase in market regulation and reduction in competition. This trend, though, is widely regarded as a temporary one against the longer-term trend towards more open and competitive markets.
Notwithstanding the favorable light in which market solutions are viewed conceptually, the "missing money" problem has to date proved intractable. If electricity prices were to move to the levels needed to incentivize new merchant (i.e., market-based) transmission and generation, the costs to consumers would be politically difficult. The increase in annual costs to consumers in New England alone was calculated at $3 billion during the recent FERC hearings on the NEPOOL market structure. Several mechanisms that are intended to incentivize new investment where it is most needed by offering enhanced capacity payments (but only in zones where generation is projected to be short) have been proposed for NEPOOL, PJM and NYPOOL, and go under the generic heading of "locational capacity" or LICAP (the PJM version is called the "Reliability Pricing Model", or "RPM"). Capacity market In a deregulated grid some sort of incentive is necessary for market participants to build and maintain generation and transmission resources that may some day be called upon to maintain the grid balance (supporting "resource adequacy", or RA), but most of the time these resources are idled and do not produce revenue from the sale of electricity. Since "energy-only markets have the potential to result in an equilibrium point for the market that is not consistent with what users and regulators want to see", all existing wholesale electricity markets rely on offer caps in some form. These caps prevent the suppliers from fully recovering their investment in reserve capacity through scarcity pricing, creating a missing money problem for generators. To avoid underinvestment in generation and transmission capacity, all markets employ some kind of RA transfers. A typical regulator requires a retailer to purchase firm capacity for 110–120% of its annual peak power. The contracts are either bilateral (between the retailers and generator owners), or are traded on a centralized capacity market (the case, e.g., for the eastern USA grid). Turkey The capacity mechanism is claimed to be a mechanism for subsidising coal in Turkey, and has been criticised by some economists, as they say it encourages strategic capacity withholding. It was designed to keep gas plants in the system. Unlike many other markets it is a hybrid system based partly on fixed costs and partly on the market clearing price. Many argue that it is flawed and should be changed, for example by using regional bidding zones, because constraint management is the main problem in the market. United Kingdom The Capacity Market is a part of the British government's Electricity Market Reform package. According to the Department for Business, Energy and Industrial Strategy, "the Capacity Market will ensure security of electricity supply by providing a payment for reliable sources of capacity, alongside their electricity revenues, to ensure they deliver energy when needed. This will encourage the investment we need to replace older power stations and provide backup for more intermittent and inflexible low carbon generation sources". Auctions Two Capacity Market auctions are held each year. The T-4 auction buys capacity to be delivered in four years' time, and the T-1 auction is a top-up auction held just ahead of each delivery year. Capacity Market auction results have been published for several years:
Capacity Market Auction results have been published for several years, for example:
2023: 7.6 GW for delivery in 2024/25, mostly gas and nuclear
2023: 42.8 GW for delivery in 2027/28, mostly gas

Definitions

The National Grid 'Guidance document for Capacity Market participants' provides the following definitions:
"CMU (Capacity Market Unit) – this is the Generating Unit(s) or DSR Capacity that is being prequalified and will ultimately provide Capacity should they secure a Capacity Agreement".
"A Generating CMU is a generating unit that provides electricity, is capable of being controlled independently from any other generating unit outside the CMU, is measured by 1 or more half hourly meters and has a connection capacity greater than 2MW".
"A DSR CMU is a commitment by a person to provide an amount of capacity by a method of Demand Side Response by either reducing the DSR customer's import of electricity, as measured by one or more half hourly meters, exporting electricity generated by one or more permitted on site generating units or varying demand for active power in response to changing system frequency".

Frequency control market

Within many electricity markets, there are specialised markets for the provision of frequency control and ancillary services (FCAS). If the electricity system has supply (generation) in excess of electricity demand at any instant, then the frequency will increase; by contrast, if there is insufficient supply to meet demand, the system frequency will fall. If it falls too far, the power system will become unstable. Frequency control markets are in addition to, and separate from, the wholesale electricity pool market. These markets serve to incentivise the provision of frequency raise services or frequency lower services. Frequency raise involves rapid provision of extra electricity generation so that supply and demand can be more closely matched.
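A toy droop-style rule illustrates the raise/lower distinction. The nominal frequency and response gain below are assumptions, not any actual grid code:

```python
# Toy droop-style dispatch rule for frequency raise/lower services.
# Below-nominal frequency means supply is short, so raise services are
# dispatched; above-nominal means surplus, so lower services. Numbers
# are illustrative assumptions, not any real market's parameters.

NOMINAL_HZ = 50.0
MW_PER_HZ = 500.0    # assumed response gain

def fcas_dispatch(measured_hz: float) -> float:
    """Positive = MW of raise service; negative = MW of lower service."""
    return MW_PER_HZ * (NOMINAL_HZ - measured_hz)

print(fcas_dispatch(49.90))  # +50.0 MW raise: generation short
print(fcas_dispatch(50.05))  # -25.0 MW lower: generation surplus
```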
Technology
Electricity transmission and distribution
null
215948
https://en.wikipedia.org/wiki/Mergini
Mergini
The sea ducks (Mergini) are a tribe of the duck subfamily of birds, the Anatinae. The taxonomy of this group is incomplete. Some authorities separate the group as a subfamily, while others remove some genera. Most species within the group spend their winters near coastal waters. Many species have developed specialized salt glands to allow them to tolerate salt water, but these are poorly developed in juveniles. Some of the species prefer riverine habitats. All but two of the 22 species in this group live in far northern latitudes. The fish-eating members of this group, such as the mergansers and smew, have serrated edges to their bills to help them grip their prey and are often known as "sawbills". Other sea ducks forage by diving underwater, taking molluscs or crustaceans from the sea floor. The Mergini take on eclipse plumage during the late summer and molt into their breeding plumage during the winter.

Species

There are twenty-two species in ten genera. A phylogeny has been proposed based on a mitogenomic study of the placement of the Labrador duck and the diving "goose" Chendytes lawi.
Biology and health sciences
Anseriformes
Animals
215987
https://en.wikipedia.org/wiki/Iron%28III%29%20oxide
Iron(III) oxide
Iron(III) oxide or ferric oxide is the inorganic compound with the formula Fe2O3. It occurs in nature as the mineral hematite, which serves as the primary source of iron for the steel industry. It is also known as red iron oxide, especially when used in pigments. It is one of the three main oxides of iron, the other two being iron(II) oxide (FeO), which is rare, and iron(II,III) oxide (Fe3O4), which also occurs naturally as the mineral magnetite. Iron(III) oxide is often called rust, since rust shares several properties and has a similar composition; however, in chemistry, rust is considered an ill-defined material, described as hydrous ferric oxide. Ferric oxide is readily attacked by even weak acids. It is a weak oxidising agent, most famously when reduced by aluminium in the thermite reaction.

Structure

Fe2O3 can be obtained in various polymorphs. In the primary polymorph, α, iron adopts octahedral coordination geometry: each Fe center is bound to six oxygen ligands. In the γ polymorph, some of the Fe sit on tetrahedral sites, with four oxygen ligands.

Alpha phase

α-Fe2O3 has the rhombohedral, corundum (α-Al2O3) structure and is the most common form. It occurs naturally as the mineral hematite, which is mined as the main ore of iron. It is antiferromagnetic below ~260 K (the Morin transition temperature), and exhibits weak ferromagnetism between 260 K and the Néel temperature, 950 K. It is easy to prepare using both thermal decomposition and precipitation in the liquid phase. Its magnetic properties depend on many factors, e.g., pressure, particle size, and magnetic field intensity.

Gamma phase

γ-Fe2O3 has a cubic structure. It is metastable and converts to the alpha phase at high temperatures. It occurs naturally as the mineral maghemite. It is ferromagnetic and finds application in recording tapes, although ultrafine particles smaller than 10 nanometers are superparamagnetic. It can be prepared by thermal dehydration of gamma iron(III) oxide-hydroxide. Another method involves the careful oxidation of iron(II,III) oxide (Fe3O4). The ultrafine particles can be prepared by thermal decomposition of iron(III) oxalate.

Other solid phases

Several other phases have been identified or claimed. The beta phase (β-phase) is cubic body-centered (space group Ia3) and metastable, converting to the alpha phase above a transition temperature. It can be prepared by reduction of hematite by carbon, pyrolysis of iron(III) chloride solution, or thermal decomposition of iron(III) sulfate. The epsilon (ε) phase is rhombic and shows properties intermediate between alpha and gamma; it may have useful magnetic properties applicable for purposes such as high-density recording media for big data storage. Preparation of the pure epsilon phase has proven very challenging. Material with a high proportion of epsilon phase can be prepared by thermal transformation of the gamma phase. The epsilon phase is also metastable, transforming to the alpha phase at elevated temperatures. It can also be prepared by oxidation of iron in an electric arc or by sol-gel precipitation from iron(III) nitrate. Research has revealed epsilon iron(III) oxide in ancient Chinese Jian ceramic glazes, which may provide insight into ways to produce that form in the lab. Additionally, an amorphous form is claimed to exist at high pressure.
Liquid phase

Molten Fe2O3 is expected to have a coordination number of close to 5 oxygen atoms about each iron atom, based on measurements of slightly oxygen-deficient supercooled liquid iron oxide droplets, where supercooling circumvents the need for the high oxygen pressures required above the melting point to maintain stoichiometry.

Hydrated iron(III) oxides

Several hydrates of iron(III) oxide exist. When alkali is added to solutions of soluble Fe(III) salts, a red-brown gelatinous precipitate forms. This is not Fe(OH)3, but a hydrous ferric oxide, Fe2O3·xH2O. Several forms of the hydrated oxide-hydroxide of Fe(III) exist as well. The red lepidocrocite (γ-FeO(OH)) occurs on the outside of rusticles, and the orange goethite (α-FeO(OH)) occurs internally in rusticles. When the hydrated oxide is heated, it loses its water of hydration. Further heating converts it to black Fe3O4, which is known as the mineral magnetite. The hydrated oxide is soluble in acids, giving iron(III) aquo complexes; in concentrated aqueous alkali it dissolves to give ferrate(III) species.

Reactions

The most important reaction is its carbothermal reduction, which gives iron used in steel-making:

2 Fe2O3 + 3 C → 4 Fe + 3 CO2

Another redox reaction is the extremely exothermic thermite reaction with aluminium:

2 Al + Fe2O3 → 2 Fe + Al2O3

This process is used to weld thick metals such as rails of train tracks, by using a ceramic container to funnel the molten iron in between two sections of rail. Thermite is also used in weapons and in making small-scale cast-iron sculptures and tools. Partial reduction with hydrogen produces magnetite, a black magnetic material that contains both Fe(III) and Fe(II):

3 Fe2O3 + H2 → 2 Fe3O4 + H2O

Iron(III) oxide is insoluble in water but dissolves readily in strong acids, e.g., hydrochloric and sulfuric acids. It also dissolves well in solutions of chelating agents such as EDTA and oxalic acid. Heating iron(III) oxide with other metal oxides or carbonates yields materials known as ferrates (ferrate(III)), for example:

ZnO + Fe2O3 → Zn(FeO2)2

Preparation

Iron(III) oxide is a product of the oxidation of iron. It can be prepared in the laboratory by electrolyzing a solution of sodium bicarbonate, an inert electrolyte, with an iron anode:

4 Fe + 3 O2 + 2 H2O → 4 FeO(OH)

The resulting hydrated iron(III) oxide, written here as FeO(OH), dehydrates on heating.

Uses

Iron industry

The overwhelming application of iron(III) oxide is as the feedstock of the steel and iron industries, e.g., the production of iron, steel, and many alloys. Iron oxide (Fe2O3) has been used in stained glass since the early Middle Ages, primarily to create yellow, orange, and red colors, and is still used for industrial purposes today.

Polishing

A very fine powder of ferric oxide is known as "jeweler's rouge", "red rouge", or simply rouge. It is used to put the final polish on metallic jewelry and lenses, and historically as a cosmetic. Rouge cuts more slowly than some modern polishes, such as cerium(IV) oxide, but is still used in optics fabrication and by jewelers for the superior finish it can produce. When polishing gold, the rouge slightly stains the gold, which contributes to the appearance of the finished piece. Rouge is sold as a powder, paste, laced on polishing cloths, or solid bar (with a wax or grease binder). Other polishing compounds are also often called "rouge", even when they do not contain iron oxide. Jewelers remove residual rouge from jewelry by ultrasonic cleaning. Products sold as "stropping compound" are often applied to a leather strop to assist in getting a razor edge on knives, straight razors, or any other edged tool.
Pigment

Iron(III) oxide is also used as a pigment, under the names "Pigment Brown 6", "Pigment Brown 7", and "Pigment Red 101". Some of them, e.g., Pigment Red 101 and Pigment Brown 6, are approved by the US Food and Drug Administration (FDA) for use in cosmetics. Iron oxides are used as pigments in dental composites alongside titanium oxides. Hematite is the characteristic component of the Swedish paint color Falu red.

Magnetic recording

Iron(III) oxide was the most common magnetic particle used in all types of magnetic storage and recording media, including magnetic disks (for data storage) and magnetic tape (used in audio and video recording as well as data storage). Its use in computer disks was superseded by cobalt alloy, which enables thinner magnetic films with higher storage density.

Photocatalysis

α-Fe2O3 has been studied as a photoanode for solar water oxidation. However, its efficacy is limited by the short diffusion length (2–4 nm) of photo-excited charge carriers and subsequent fast recombination, requiring a large overpotential to drive the reaction. Research has focused on improving its water oxidation performance using nanostructuring, surface functionalization, or alternate crystal phases such as β-Fe2O3.

Medicine

Calamine lotion, used to treat mild itchiness, is chiefly composed of a combination of zinc oxide, acting as an astringent, and about 0.5% iron(III) oxide, the product's active ingredient, acting as an antipruritic. The red color of iron(III) oxide is also mainly responsible for the lotion's pink color.
Physical sciences
Oxide salts
Chemistry
216000
https://en.wikipedia.org/wiki/Hemimorphite
Hemimorphite
Hemimorphite is the chemical compound Zn4(Si2O7)(OH)2·H2O, a component of the mineral calamine. It is a silicate mineral which, together with smithsonite (ZnCO3), has been historically mined from the upper parts of zinc and lead ores. Both compounds were originally believed to be the same mineral and classified as calamine. In the second half of the 18th century, it was discovered that these two different compounds were both present in calamine. They closely resemble one another. The silicate was the rarer of the two and was named hemimorphite because of the hemimorph development of its crystals. This unusual form, which is typical of only a few minerals, means that the crystals are terminated by dissimilar faces. Hemimorphite most commonly forms crystalline crusts and layers, also massive, granular, rounded and reniform aggregates, concentrically striated, or finely needle-shaped, fibrous or stalactitic, and rarely fan-shaped clusters of crystals. Some specimens show strong green fluorescence in shortwave ultraviolet light (253.7 nm) and weak light pink fluorescence in longwave UV.

Occurrence

Hemimorphite most frequently occurs as the product of the oxidation of the upper parts of sphalerite-bearing ore bodies, accompanied by other secondary minerals which form the so-called iron cap or gossan. Hemimorphite is an important ore of zinc and contains up to 54.2% of the metal, together with silicon, oxygen and hydrogen. The crystals are blunt at one end and sharp at the other. The regions on the Belgian-German border are well known for their deposits of hemimorphite of metasomatic origin, especially Vieille Montagne in Belgium and Aachen in Germany. Other deposits are in the Tarnowskie Góry area in Upper Silesia, Poland; near Phoenixville, Pennsylvania; the Missouri lead-zinc district; Elkhorn, Montana; Leadville, Colorado; and the Organ Mountains, New Mexico in the United States; and in several localities in North Africa. Further hemimorphite occurrences are the Padaeng deposit near Mae Sod in western Thailand; Sardinia; Nerchinsk, Siberia; Cave del Predil, Italy; Bleiberg, Carinthia, Austria; and Matlock, Derbyshire, England.
Physical sciences
Silicate minerals
Earth science
216021
https://en.wikipedia.org/wiki/Periscope
Periscope
A periscope is an instrument for observation over, around or through an object, obstacle or condition that prevents direct line-of-sight observation from an observer's current position. In its simplest form, it consists of an outer case with mirrors at each end set parallel to each other at a 45° angle. This form of periscope, with the addition of two simple lenses, served for observation purposes in the trenches during World War I. Military personnel also use periscopes in some gun turrets and in armoured vehicles. More complex periscopes using prisms or advanced fiber optics instead of mirrors and providing magnification operate on submarines and in various fields of science. The overall design of the classical submarine periscope is very simple: two telescopes pointed into each other. If the two telescopes have different individual magnification, the difference between them causes an overall magnification or reduction.

Early examples

Johannes Hevelius described an early periscope (which he called a "polemoscope") with lenses in 1647 in his work Selenographia, sive Lunae descriptio [Selenography, or an account of the Moon]. Hevelius saw military applications for his invention. Mikhail Lomonosov invented an "optical tube" which was similar to a periscope. In 1834, it was used in a submarine, designed by Karl Andreevich Schilder. In 1854, Hippolyte Marié-Davy invented the first naval periscope, consisting of a vertical tube with two small mirrors fixed at each end at 45°. Simon Lake used periscopes in his submarines in 1902. Sir Howard Grubb perfected the device in World War I. Morgan Robertson (1861–1915) claimed to have tried to patent the periscope: he described a submarine using a periscope in his fictional works. Periscopes, in some cases fixed to rifles, served in World War I (1914–1918) to enable soldiers to see over the tops of trenches, thus avoiding exposure to enemy fire (especially from snipers). The periscope rifle also saw use during the war – this was an infantry rifle sighted by means of a periscope, so the shooter could aim and fire the weapon from a safe position below the trench parapet. During World War II (1939–1945), artillery observers and officers used specifically manufactured periscope binoculars with different mountings. Some of them also allowed estimating the distance to a target, as they were designed as stereoscopic rangefinders.

Armored vehicle periscopes

Tanks and armoured vehicles use periscopes: they enable drivers, tank commanders, and other vehicle occupants to inspect their situation through the vehicle roof. Prior to periscopes, direct vision slits were cut in the armour for occupants to see out. Periscopes permit view outside of the vehicle without needing to cut these weaker vision openings in the front and side armour, better protecting the vehicle and occupants. A protectoscope is a related periscopic vision device designed to provide a window in armoured plate, similar to a direct vision slit. A compact periscope inside the protectoscope allows the vision slit to be blanked off with spaced armoured plate. This prevents a potential ingress point for small arms fire, with only a small difference in vision height, but still requires the armour to be cut. In the context of armoured fighting vehicles, such as tanks, a periscopic vision device may also be referred to as an episcope.
In this context a periscope refers to a device that can rotate to provide a wider field of view (or is fixed into an assembly that can), while an episcope is fixed into position. Periscopes may also be referred to by slang, e.g. "shufti-scope".

Gundlach and Vickers 360-degree periscopes

An important development, the Gundlach rotary periscope, incorporated a rotating top with a selectable additional prism which reversed the view. This allowed a tank commander to obtain a 360-degree field of view without moving his seat, including rear vision by engaging the extra prism. This design, patented by Rudolf Gundlach in 1936, first saw use in the Polish 7-TP light tank (produced from 1935 to 1939). As a part of Polish–British pre-World War II military cooperation, the patent was sold to Vickers-Armstrong, where it saw further development for use in British tanks, including the Crusader, Churchill, Valentine, and Cromwell models, as the Vickers Tank Periscope MK.IV. The Gundlach-Vickers technology was shared with the American Army for use in its tanks, including the Sherman, built to meet joint British and US requirements. This saw post-war controversy through legal action: "After the Second World War and a long court battle, in 1947 he, Rudolf Gundlach, received a large payment for his periscope patent from some of its producers." The USSR also copied the design and used it extensively in its tanks, including the T-34 and T-70. The copies were based on Lend-Lease British vehicles, and many parts remain interchangeable. Germany also made and used copies.

Periscopic gun-sights

Periscopic sights were also introduced during the Second World War. In British use, the Vickers periscope was provided with sighting lines, enabling front and rear prisms to be directly aligned to gain an accurate direction. On later tanks such as the Churchill and Cromwell, a similarly marked episcope provided a backup sighting mechanism aligned with a vane sight on the turret roof. Later, US-built Sherman tanks and British Centurion and Charioteer tanks replaced the main telescopic sight with a true periscopic sight in the primary role. The periscopic sight was linked to the gun itself, allowing elevation to be captured (rotation being fixed as part of the rotating turret). The sights formed part of the overall periscope, providing the gunner with greater overall vision than previously possible with the telescopic sight. The FV4201 Chieftain used the TESS (TElescopic Sighting System), developed in the early 1980s, which was later sold as surplus for use on the RAF Phantom aircraft.

Modern specialised AFV periscopes

In modern use, specialised periscopes can also provide night vision. The Embedded Image Periscope (EIP), designed and patented by Kent Periscopes, provides standard unity-vision periscope functionality for normal daytime viewing of the vehicle surroundings plus the ability to display digital images from a range of on-vehicle sensors and cameras (including thermal and low-light), such that the resulting image appears "embedded" within the unit and projected at a comfortable viewing position.

Naval use

Periscopes allow a submarine, when submerged at a relatively shallow depth, to search visually for nearby targets and threats on the surface of the water and in the air. When not in use, a submarine's periscope retracts into the hull.
A submarine commander in tactical conditions must exercise discretion when using his periscope, since it creates a visible wake (and may also become detectable by radar), giving away the submarine's position. Marié-Davy built a simple, fixed naval periscope using mirrors in 1854. Thomas H. Doughty of the United States Navy later invented a prismatic version for use in the American Civil War of 1861–1865. Submarines adopted periscopes early. Captain Arthur Krebs adapted two on the experimental French submarine in 1888 and 1889. The Spanish inventor Isaac Peral equipped his submarine (developed in 1886 but launched on September 8, 1888) with a fixed, non-retractable periscope that used a combination of prisms to relay the image to the submariner. (Peral also developed a primitive gyroscope for submarine navigation and pioneered the ability to fire live torpedoes while submerged.) The invention of the collapsible periscope for use in submarine warfare is usually credited to Simon Lake in 1902. Lake called his device the "omniscope" or "skalomniscope". Modern submarine periscopes incorporate lenses for magnification and function as telescopes. They typically employ prisms and total internal reflection instead of mirrors, because prisms, which do not require coatings on the reflecting surface, are much more rugged than mirrors. They may have additional optical capabilities such as range-finding and targeting. The mechanical systems of submarine periscopes typically use hydraulics and need to be quite sturdy to withstand the drag through water. The periscope chassis may also support a radio or radar antenna. Submarines traditionally had two periscopes: a navigation or observation periscope and a targeting, or commander's, periscope. Navies originally mounted these periscopes in the conning tower, one forward of the other in the narrow hulls of diesel-electric submarines. In the much wider hulls of US Navy submarines the two operate side-by-side. The observation scope, used to scan the sea surface and sky, typically had a wide field of view and no magnification or low-power magnification. The targeting or "attack" periscope, by comparison, had a narrower field of view and higher magnification. In World War II and earlier submarines it was the only means of gathering target data to accurately fire a torpedo, since sonar was not yet sufficiently advanced for this purpose (ranging with sonar required emission of an acoustic "ping" that gave away the location of the submarine) and most torpedoes were unguided. Twenty-first-century submarines do not necessarily have periscopes. The United States Navy's Virginia-class and the Royal Navy's Astute-class submarines instead use photonics masts, pioneered by the Royal Navy, which lift an electronic imaging sensor-set above the water. Signals from the sensor-set travel electronically to workstations in the submarine's control center. While the cables carrying the signal must penetrate the submarine's hull, they use a much smaller and more easily sealed—and therefore less expensive and safer—hull opening than those required by periscopes. Eliminating the telescoping tube running through the conning tower also allows greater freedom in designing the pressure hull and in placing internal equipment.

Aircraft use

Periscopes have also been used on aircraft for sections with limited view. The first known use of an aircraft periscope was on the Spirit of St. Louis.
The Vickers VC10 had a periscope that could be used at four locations on the aircraft fuselage, and V-bombers such as the Avro Vulcan and Handley Page Victor, as well as the Nimrod MR1, carried a periscope as the "on top sight". Various US bomber aircraft such as the B-52 used sextant periscopes for celestial navigation before the introduction of GPS; this also allowed the aircrew to navigate without the use of an astrodome in the fuselage. An emergency periscope, found under "Seat D" behind the over-wing exit row, was fitted to all Boeing 737 models manufactured before 1997 so that the crew could visually check the landing gear. High-speed and hypersonic aircraft such as the North American X-15 used a periscope.
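As a footnote to the "two telescopes pointed into each other" description of the classical submarine periscope above, here is a minimal sketch of the resulting net magnification. This is a simplified reading of that statement (the second telescope is traversed in reverse, so the individual magnifications divide), not an optical design tool:

```python
# Simplified model: two telescopes in series, the second one reversed,
# so the net angular magnification is the ratio of the two. Values are
# illustrative assumptions.

def net_magnification(m_upper: float, m_lower: float) -> float:
    return m_upper / m_lower

print(net_magnification(6.0, 6.0))  # 1.0  -> unity periscope, no magnification
print(net_magnification(9.0, 6.0))  # 1.5  -> overall magnification
print(net_magnification(4.0, 6.0))  # ~0.67 -> overall reduction
```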
Technology
Optical instruments
null
216049
https://en.wikipedia.org/wiki/Adaptive%20optics
Adaptive optics
Adaptive optics (AO) is a technique of precisely deforming a mirror in order to compensate for light distortion. It is used in astronomical telescopes and laser communication systems to remove the effects of atmospheric distortion, and in microscopy, optical fabrication and retinal imaging systems to reduce optical aberrations. Adaptive optics works by measuring the distortions in a wavefront and compensating for them with a device that corrects those errors, such as a deformable mirror or a liquid crystal array. Adaptive optics should not be confused with active optics, which works on a longer timescale to correct the primary mirror geometry. Other methods can achieve resolving power exceeding the limit imposed by atmospheric distortion, such as speckle imaging, aperture synthesis, and lucky imaging, or by moving outside the atmosphere with space telescopes, such as the Hubble Space Telescope.

History

Adaptive optics was first envisioned by Horace W. Babcock in 1953, and was also considered in science fiction, as in Poul Anderson's novel Tau Zero (1970), but it did not come into common usage until advances in computer technology during the 1990s made the technique practical. Some of the initial development work on adaptive optics was done by the US military during the Cold War and was intended for use in tracking Soviet satellites. Microelectromechanical systems (MEMS) deformable mirrors and magnetics concept deformable mirrors are currently the most widely used technology in wavefront shaping applications for adaptive optics, given their versatility, stroke, maturity of technology, and the high-resolution wavefront correction that they afford.

Tip–tilt correction

The simplest form of adaptive optics is tip–tilt correction, which corresponds to correction of the tilts of the wavefront in two dimensions (equivalent to correction of the position offsets for the image). This is performed using a rapidly moving tip–tilt mirror that makes small rotations around two of its axes. A significant fraction of the aberration introduced by the atmosphere can be removed in this way. Tip–tilt mirrors are effectively segmented mirrors having only one segment which can tip and tilt, rather than an array of multiple segments that can tip and tilt independently. Because such mirrors are relatively simple and have a large stroke (meaning large correcting power), most AO systems use them first to correct low-order aberrations. Higher-order aberrations may then be corrected with deformable mirrors.

In astronomy

Atmospheric seeing

When light from a star or another astronomical object enters the Earth's atmosphere, atmospheric turbulence (introduced, for example, by different temperature layers and different wind speeds interacting) can distort and move the image in various ways. Visual images produced by any telescope above a modest aperture are blurred by these distortions.

Wavefront sensing and correction

An adaptive optics system tries to correct these distortions using a wavefront sensor, which takes some of the astronomical light, a deformable mirror that lies in the optical path, and a computer that receives input from the detector. The wavefront sensor measures the distortions the atmosphere has introduced on the timescale of a few milliseconds; the computer calculates the optimal mirror shape to correct the distortions, and the surface of the deformable mirror is reshaped accordingly.
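A minimal sketch of this measure-and-reshape cycle follows. The actuator count, loop gain, and toy "atmosphere" are all illustrative assumptions, not parameters of any real AO system:

```python
import numpy as np

# Minimal closed-loop sketch: measure the residual wavefront, update the
# deformable mirror (DM), repeat. Real systems run this every few
# milliseconds; all sizes and gains here are assumed for illustration.

rng = np.random.default_rng(0)
n_act = 97                               # actuators on the toy DM (assumed)
aberration = rng.normal(0, 1.0, n_act)   # fixed wavefront error to cancel
dm = np.zeros(n_act)                     # mirror starts flat
gain = 0.4                               # integrator gain < 1 for stability

for step in range(50):
    residual = aberration - dm + rng.normal(0, 0.05, n_act)  # sensor reading
    dm += gain * residual                # push mirror toward cancelling error

print(f"rms before: {np.std(aberration):.3f}, after: {np.std(aberration - dm):.3f}")
```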
For example, an 8–10 m telescope (like the VLT or Keck) can produce AO-corrected images with an angular resolution of 30–60 milliarcseconds (mas) at infrared wavelengths, while the resolution without correction is of the order of 1 arcsecond. In order to perform adaptive optics correction, the shape of the incoming wavefronts must be measured as a function of position in the telescope aperture plane. Typically the circular telescope aperture is split up into an array of pixels in a wavefront sensor, either using an array of small lenslets (a Shack–Hartmann wavefront sensor), or using a curvature or pyramid sensor which operates on images of the telescope aperture. The mean wavefront perturbation in each pixel is calculated. This pixelated map of the wavefronts is fed into the deformable mirror and used to correct the wavefront errors introduced by the atmosphere. It is not necessary for the shape or size of the astronomical object to be known – even Solar System objects which are not point-like can be used in a Shack–Hartmann wavefront sensor, and time-varying structure on the surface of the Sun is commonly used for adaptive optics at solar telescopes. The deformable mirror corrects incoming light so that the images appear sharp.

Using guide stars

Natural guide stars

Because a science target is often too faint to be used as a reference star for measuring the shape of the optical wavefronts, a nearby brighter guide star can be used instead. The light from the science target has passed through approximately the same atmospheric turbulence as the reference star's light, and so its image is also corrected, although generally to a lower accuracy. The necessity of a reference star means that an adaptive optics system cannot work everywhere on the sky, but only where a guide star of sufficient luminosity (for current systems, about magnitude 12–15) can be found very near to the object of the observation. This severely limits the application of the technique for astronomical observations. Another major limitation is the small field of view over which the adaptive optics correction is good. As the angular distance from the guide star increases, the image quality degrades. A technique known as "multiconjugate adaptive optics" uses several deformable mirrors to achieve a greater field of view.

Artificial guide stars

An alternative is the use of a laser beam to generate a reference light source (a laser guide star, LGS) in the atmosphere. There are two kinds of LGSs: Rayleigh guide stars and sodium guide stars. Rayleigh guide stars work by propagating a laser, usually at near-ultraviolet wavelengths, and detecting the backscatter from air at relatively low altitudes. Sodium guide stars use laser light at 589 nm to resonantly excite sodium atoms higher in the mesosphere and thermosphere, which then appear to "glow". The LGS can then be used as a wavefront reference in the same way as a natural guide star – except that (much fainter) natural reference stars are still required for image position (tip/tilt) information. The lasers are often pulsed, with measurement of the atmosphere being limited to a window occurring a few microseconds after the pulse has been launched. This allows the system to ignore most scattered light at ground level; only light which has travelled for several microseconds high up into the atmosphere and back is actually detected.

In retinal imaging

Ocular aberrations are distortions in the wavefront passing through the pupil of the eye.
These optical aberrations diminish the quality of the image formed on the retina, sometimes necessitating the wearing of spectacles or contact lenses. In the case of retinal imaging, light passing out of the eye carries similar wavefront distortions, leading to an inability to resolve the microscopic structure (cells and capillaries) of the retina. Spectacles and contact lenses correct "low-order aberrations", such as defocus and astigmatism, which tend to be stable in humans for long periods of time (months or years). While correction of these is sufficient for normal visual functioning, it is generally insufficient to achieve microscopic resolution. Additionally, "high-order aberrations", such as coma, spherical aberration, and trefoil, must also be corrected in order to achieve microscopic resolution. High-order aberrations, unlike low-order ones, are not stable over time, and may change over time scales of 0.1 s to 0.01 s. The correction of these aberrations requires continuous, high-frequency measurement and compensation.

Measurement of ocular aberrations

Ocular aberrations are generally measured using a wavefront sensor, and the most commonly used type of wavefront sensor is the Shack–Hartmann. Ocular aberrations are caused by spatial phase nonuniformities in the wavefront exiting the eye. In a Shack–Hartmann wavefront sensor, these are measured by placing a two-dimensional array of small lenses (lenslets) in a pupil plane conjugate to the eye's pupil, and a CCD chip at the back focal plane of the lenslets. The lenslets cause spots to be focused onto the CCD chip, and the positions of these spots are calculated using a centroiding algorithm. The positions of these spots are compared with the positions of reference spots, and the displacements between the two are used to determine the local curvature of the wavefront, allowing one to numerically reconstruct the wavefront information—an estimate of the phase nonuniformities causing aberration.

Correction of ocular aberrations

Once the local phase errors in the wavefront are known, they can be corrected by placing a phase modulator such as a deformable mirror at yet another plane in the system conjugate to the eye's pupil. The phase errors can be used to reconstruct the wavefront, which can then be used to control the deformable mirror. Alternatively, the local phase errors can be used directly to calculate the deformable mirror instructions.

Open loop vs. closed loop operation

If the wavefront error is measured before it has been corrected by the wavefront corrector, then operation is said to be "open loop". If the wavefront error is measured after it has been corrected by the wavefront corrector, then operation is said to be "closed loop". In the latter case, the wavefront errors measured will be small, and errors in the measurement and correction are more likely to be removed. Closed loop correction is the norm.

Applications

Adaptive optics was first applied to flood-illumination retinal imaging to produce images of single cones in the living human eye. It has also been used in conjunction with scanning laser ophthalmoscopy to produce (also in living human eyes) the first images of retinal microvasculature and associated blood flow and retinal pigment epithelium cells, in addition to single cones. Combined with optical coherence tomography, adaptive optics has allowed the first three-dimensional images of living cone photoreceptors to be collected.
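The centroiding step of the Shack–Hartmann sensor described above reduces to an intensity-weighted average per lenslet. A minimal sketch on toy data (the sub-aperture size and spot positions are invented for illustration):

```python
import numpy as np

# Each lenslet focuses a spot onto the detector; the spot's displacement
# from its reference position is proportional to the local wavefront
# slope over that lenslet. Toy 8x8 sub-aperture, assumed values.

def centroid(sub_image: np.ndarray) -> tuple[float, float]:
    """Intensity-weighted centre of one lenslet's sub-image."""
    total = sub_image.sum()
    ys, xs = np.indices(sub_image.shape)
    return (xs * sub_image).sum() / total, (ys * sub_image).sum() / total

img = np.zeros((8, 8))
img[5, 6] = 1.0                  # displaced focal spot
ref = (3.5, 3.5)                 # reference (unaberrated) spot position

cx, cy = centroid(img)
slope_x, slope_y = cx - ref[0], cy - ref[1]   # proportional to local tilt
print(f"local wavefront slope ~ ({slope_x:+.2f}, {slope_y:+.2f}) px")
```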
In microscopy

In microscopy, adaptive optics is used to correct for sample-induced aberrations. The required wavefront correction is either measured directly using a wavefront sensor or estimated using sensorless AO techniques.

Other uses

Besides its use for improving nighttime astronomical imaging and retinal imaging, adaptive optics technology has also been used in other settings. Adaptive optics is used for solar astronomy at observatories such as the Swedish 1-m Solar Telescope, Dunn Solar Telescope, and Big Bear Solar Observatory. It is also expected to play a military role by allowing ground-based and airborne laser weapons to reach and destroy targets at a distance, including satellites in orbit. The Missile Defense Agency Airborne Laser program is the principal example of this. Adaptive optics has been used to enhance the performance of classical and quantum free-space optical communication systems, and to control the spatial output of optical fibers. Medical applications include imaging of the retina, where it has been combined with optical coherence tomography. The development of the Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) has likewise enabled correction of the aberrations of the wavefront reflected from the human retina and the capture of diffraction-limited images of human rods and cones. Adaptive and active optics are also being developed for use in glasses to achieve better than 20/20 vision, initially for military applications. After propagation of a wavefront, parts of it may overlap, leading to interference and preventing adaptive optics from correcting it. Propagation of a curved wavefront always leads to amplitude variation. This needs to be considered if a good beam profile is to be achieved in laser applications. In material processing using lasers, adjustments can be made on the fly to vary the focus depth during piercing and to compensate for changes in focal length across the working surface. Beam width can also be adjusted to switch between piercing and cutting modes. This eliminates the need for the optics of the laser head to be switched, cutting down on overall processing time for more dynamic modifications. Adaptive optics, especially wavefront-coding spatial light modulators, are frequently used in optical trapping applications to multiplex and dynamically reconfigure laser foci that are used to micro-manipulate biological specimens.

Beam stabilization

A rather simple example is the stabilization of the position and direction of a laser beam between modules in a large free-space optical communication system. Fourier optics is used to control both direction and position. The actual beam is measured by photodiodes. This signal is fed into analog-to-digital converters and then a microcontroller which runs a PID controller algorithm. The controller then drives digital-to-analog converters which drive stepper motors attached to mirror mounts. If the beam is to be centered onto 4-quadrant diodes, no analog-to-digital converter is needed. Operational amplifiers are sufficient.
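A minimal sketch of the PID loop described above, with the photodiode reading standing in as the error signal. The gains, plant response, and units are illustrative assumptions; a real system needs per-axis tuning:

```python
# Beam-stabilisation sketch: a position error from the photodiodes feeds
# a PID controller whose output steps the motorised mirror mount.

class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error: float) -> float:
        self.integral += error
        derivative = 0.0 if self.prev_error is None else error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.05, kd=0.05)   # assumed gains
offset_um = 12.0                      # beam offset read from the diodes
for _ in range(50):                   # one iteration per control cycle
    correction = pid.update(offset_um)
    offset_um -= 0.5 * correction     # toy mirror/mount response (assumed)
print(f"residual offset: {offset_um:.3f} um")
```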
Technology
Telescope
null
216087
https://en.wikipedia.org/wiki/Quokka
Quokka
The quokka (Setonix brachyurus) is a small macropod about the size of a domestic cat. It is the only member of the genus Setonix. Like other marsupials in the macropod family (such as kangaroos and wallabies), the quokka is herbivorous and mainly nocturnal. The quokka's range is a small area of southwestern Australia. They inhabit some smaller islands off the coast of Western Australia, particularly Rottnest Island just off Perth and Bald Island near Albany. Isolated, scattered populations also exist in forest and coastal heath between Perth and Albany. A small colony inhabits a protected area of Two Peoples Bay Nature Reserve, where they co-exist with the critically endangered Gilbert's potoroo.

Description

The quokka has a stocky build, well-developed hind legs, rounded ears, a short, broad head, and a tail that is quite short for a macropod. Although looking rather like a very small kangaroo, it can climb small trees and shrubs. Its coarse fur is a grizzled brown colour, fading to buff underneath. The quokka is known to live for an average of 10 years. Quokkas are nocturnal animals; they sleep during the day in Acanthocarpus preissii, using the plants' spikes for protection and hiding. Quokkas have a promiscuous mating system. After a month of gestation, females give birth to a single baby called a joey. Females can give birth twice a year and produce about 17 joeys during their lifespan. The joey lives in its mother's pouch for six months. Once it leaves the pouch, the joey relies on its mother for milk for two more months and is fully weaned around eight months after birth. Females sexually mature after roughly 18 months. When a female quokka with a joey in her pouch is pursued by a predator, she may drop her baby onto the ground; the joey produces noises which may serve to attract the predator's attention, while the mother escapes.

Discovery and name

The word "quokka" is derived from a Noongar word. Today, the Noongar people refer to them as ban-gup, bungeup and quak-a. In 1658, Dutch mariner Samuel Volckertszoon wrote of sighting "a wild cat" on the island. In 1696, Dutch explorer Willem de Vlamingh mistook them for giant rats and gave the island of Wadjemup a name meaning "the rat's nest island" in Dutch, from which "Rottnest" derives. Vlamingh had originally described them "as a kind of rat as big as a common cat".

Ecology

On the mainland, quokkas prefer areas with more vegetation, both for a wider variety of food and also for cover from predators such as dingoes, red foxes, and feral cats. In the wild, the quokka's range is restricted to a very small range in the South West of Western Australia, with a number of small scattered populations. One large population exists on Rottnest Island and a smaller population is on Bald Island near Albany. These islands are free of the aforementioned predators. On Rottnest, quokkas are common and occupy a variety of habitats, ranging from semiarid scrub to cultivated gardens. Prickly Acanthocarpus plants, which are unaccommodating for humans and other relatively large animals to walk through, provide their favourite daytime shelter for sleeping. Additionally, they are known for their ability to climb trees.

Diet

Like most macropods, quokkas eat many types of vegetation, including grasses, sedges and leaves. A study found that Guichenotia ledifolia, a small shrub species of the family Malvaceae, is one of the quokka's favoured foods.
Rottnest Island visitors are urged to never feed quokkas, in part because eating "human food" such as chips can cause dehydration and malnourishment, both of which are detrimental to the quokka's health. Despite the relative lack of fresh water on Rottnest Island, quokkas do have high water requirements, which they satisfy mostly through eating vegetation. On the mainland, quokkas live only in areas with sufficient annual rainfall. Quokkas chew their cud, similarly to cows.

Population

At the time of colonial settlement, the quokka was widespread and abundant, with its distribution encompassing much of the South West of Western Australia, including the two offshore islands, Bald and Rottnest. By 1992, following extensive population declines in the 20th century, the quokka's distribution on the mainland had been reduced by more than 50%. Despite being numerous on the small offshore islands, the quokka is classified as vulnerable. On the mainland, where it is threatened by introduced predatory species such as red foxes, cats, and dogs, it requires dense ground cover for refuge. Clearfell logging, agricultural development, and housing expansion have reduced their habitat, contributing to the decline of the species, as has the clearing and burning of the remaining swamplands. Moreover, quokkas usually have a litter size of one and successfully rear one young each year. Although they are constantly mating, usually one day after the young are born, the small litter size, along with the restricted space and threatening predators, contributes to the scarcity of the species on the mainland. An estimated 4,000 quokkas live on the mainland, with nearly all mainland populations being groups of fewer than 50, although one declining group of over 700 occurs in the southern forest between Nannup and Denmark. In 2015, an extensive bushfire near Northcliffe nearly eradicated one of the local mainland populations, with an estimated 90% of the 500 quokkas dying. In 2007, the quokka population on Rottnest Island was estimated at between 8,000 and 12,000. Snakes are the quokka's only predator on the island. The population on smaller Bald Island, where the quokka has no predators, is 600–1,000. At the end of summer and into autumn, a seasonal decline of quokkas occurs on Rottnest Island, where loss of vegetation and reduction of available surface water can lead to starvation. The species declined most significantly from the 1930s to the 1990s, when its distribution was reduced by over half, and this tendency has continued to the present day. Their presence on the mainland has declined to such an extent that they are found only in small groups in bushland surrounding Perth. In late 2024 a new quokka population was discovered in the Perth Hills. It is the first time that quokkas have been photographed by the general public in the Perth Hills and is an important finding for conservation of the species. Their exact location will remain confidential. The quokka is now listed as vulnerable in accordance with the IUCN criteria.

Conservation

The quokka, while not in complete danger of going extinct, is considered threatened. As the climate changes, so does the Australian landscape; being herbivores, quokkas rely on many native plants for food as well as protection.
Quokkas have been found to prefer Malvaceae species as a main food source, using shrubs as shelter during the hottest part of the day. Due to factors such as wildfires and anthropogenic influence, the location of the natural flora has been changing, making these plants harder for quokkas to access. Invasive species and environmental changes are the primary threats to quokkas. A study found that mainland populations prefer to live in areas with average annual rainfall between 700 mm and 1,000 mm, a requirement that becomes harder to meet as aridity increases in south-western Australia. Increasing temperatures have also been found to play an important role in the distribution of the quokka, as mean annual temperatures in the South West of Western Australia have risen markedly since the 1970s. With climate change limiting the quokka's optimal living conditions and changing the abundance of its diet, the quokka is listed as vulnerable on the IUCN Red List of threatened species. The increasing risk of severe bushfires presents a serious risk to quokkas, as quokka populations have a slow recovery rate after bushfires and take a long time to recolonise intensely burnt landscapes.

Human interaction

Quokkas have little fear of humans and commonly approach people closely, particularly on Rottnest Island, where they are abundant. Though quokkas are approachable, there are a few dozen cases annually of quokkas biting people, especially children. There are restrictions regarding feeding. It is illegal for members of the public to handle the animals in any way, and feeding, particularly of "human food", is especially discouraged, as they can easily get sick. An infringement notice carrying a $300 fine can be issued by the Rottnest Island Authority for such an offence. The maximum penalty for animal cruelty is a $50,000 fine and a five-year prison sentence. Quokkas can also pose a risk to humans: blood tests of wild quokkas on Rottnest Island have documented high rates of Salmonella infection, especially in the summer heat. Quokkas can also be observed at several zoos and wildlife parks around Australia, including Perth Zoo, Taronga Zoo, Wild Life Sydney, Australia Zoo, Adelaide Zoo, and Caversham Wildlife Park. Physical interaction is generally not permitted without explicit permission from supervising staff. Quokka behaviour in response to human interaction has been examined in zoo environments. One brief study indicated fewer animals remained visible from the visitor paths when the enclosure was an open or walk-through environment. This may have been due to the quokkas acquiring avoidance behaviour towards visitors, which the authors propose has implications for stress management in their exhibition to the public.

Quokka selfies

In the mid-2010s, quokkas earned a reputation on the internet as "the world's happiest animals" and symbols of positivity, as frontal photos of their faces make them appear to be smiling (they do not, in fact, "smile" in the human sense; this can be attributed to their natural facial structures). Many photos of smiling quokkas have since gone viral, and the "quokka selfie" has become a popular social media trend, with celebrities such as Chris Hemsworth, Shawn Mendes, Margot Robbie, Roger Federer and Kim Donghyuk of iKON taking part in the activity. Tourist numbers to Rottnest Island have subsequently increased.
Biology and health sciences
Diprotodontia
Animals
216104
https://en.wikipedia.org/wiki/Protein%20engineering
Protein engineering
Protein engineering is the process of developing useful or valuable proteins through the design and production of unnatural polypeptides, often by altering amino acid sequences found in nature. It is a young discipline, with much research taking place into the understanding of protein folding and recognition for protein design principles. It has been used to improve the function of many enzymes for industrial catalysis. It is also a product and services market, with an estimated value of $168 billion by 2017. There are two general strategies for protein engineering: rational protein design and directed evolution. These methods are not mutually exclusive; researchers will often apply both. In the future, more detailed knowledge of protein structure and function, and advances in high-throughput screening, may greatly expand the abilities of protein engineering. Eventually, even unnatural amino acids may be included via newer methods, such as expanded genetic code, that allow encoding novel amino acids in the genetic code. The applications, in numerous fields including medicine and industrial bioprocessing, are vast.

Approaches

Rational design

In rational protein design, a scientist uses detailed knowledge of the structure and function of a protein to make desired changes. In general, this has the advantage of being inexpensive and technically easy, since site-directed mutagenesis methods are well developed. However, its major drawback is that detailed structural knowledge of a protein is often unavailable, and, even when available, it can be very difficult to predict the effects of various mutations, since structural information most often provides a static picture of a protein structure. However, programs such as Folding@home and Foldit have utilized crowdsourcing techniques to gain insight into the folding motifs of proteins. Computational protein design algorithms seek to identify novel amino acid sequences that are low in energy when folded to the pre-specified target structure. While the sequence-conformation space that needs to be searched is large, the most challenging requirement for computational protein design is a fast, yet accurate, energy function that can distinguish optimal sequences from similar suboptimal ones.

Multiple sequence alignment

Without structural information about a protein, sequence analysis is often useful in elucidating information about the protein. These techniques involve alignment of target protein sequences with other related protein sequences. This alignment can show which amino acids are conserved between species and are important for the function of the protein. These analyses can help to identify hot-spot amino acids that can serve as target sites for mutations. Multiple sequence alignment utilizes databases such as PREFAB, SABMARK, OXBENCH, IRMBASE, and BALIBASE to cross-reference target protein sequences with known sequences. Multiple sequence alignment techniques are listed below.

Clustal W

This method begins by performing pairwise alignment of sequences using k-tuple or Needleman–Wunsch methods. These methods calculate a matrix that depicts the pairwise similarity among the sequence pairs. Similarity scores are then transformed into distance scores that are used to produce a guide tree using the neighbor-joining method. This guide tree is then employed to yield a multiple sequence alignment.
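As an illustration of the pairwise step these progressive aligners start from, here is a minimal Needleman–Wunsch global alignment score in Python. The scoring scheme (match +1, mismatch −1, gap −2) is a toy assumption; real tools use substitution matrices such as BLOSUM:

```python
# Minimal Needleman-Wunsch dynamic program returning the global
# alignment score of two sequences under a toy scoring scheme.

def nw_score(a: str, b: str, match=1, mismatch=-1, gap=-2) -> int:
    rows, cols = len(a) + 1, len(b) + 1
    f = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):            # aligning a prefix against gaps
        f[i][0] = i * gap
    for j in range(1, cols):
        f[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = f[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            f[i][j] = max(diag, f[i-1][j] + gap, f[i][j-1] + gap)
    return f[-1][-1]

print(nw_score("HEAGAWGHEE", "PAWHEAE"))  # similarity score for two toy peptides
```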
Clustal Omega

This method is capable of aligning up to 190,000 sequences by utilizing the k-tuple method. Next, sequences are clustered using the mBed and k-means methods. A guide tree is then constructed using the UPGMA method, which is used by the HHalign package. This guide tree is used to generate multiple sequence alignments.

MAFFT

This method utilizes the fast Fourier transform (FFT), converting amino acid sequences into a sequence composed of volume and polarity values for each amino acid residue. This new sequence is used to find homologous regions.

K-Align

This method utilizes the Wu-Manber approximate string-matching algorithm to generate multiple sequence alignments.

Multiple sequence comparison by log expectation (MUSCLE)

This method utilizes Kmer and Kimura distances to generate multiple sequence alignments.

T-Coffee

This method utilizes tree-based consistency objective functions for alignment evolution. It has been shown to be 5–10% more accurate than Clustal W.

Coevolutionary analysis

Coevolutionary analysis is also known as correlated mutation, covariation, or co-substitution. This type of rational design involves reciprocal evolutionary changes at evolutionarily interacting loci. Generally, this method begins with the generation of a curated multiple sequence alignment for the target sequence. This alignment is then subjected to manual refinement, which involves removal of highly gapped sequences as well as sequences with low sequence identity; this step increases the quality of the alignment. Next, the manually processed alignment is used for further coevolutionary measurements using distinct correlated-mutation algorithms. These algorithms produce a coevolution scoring matrix. This matrix is filtered by applying various significance tests to extract significant coevolution values and remove background noise. Coevolutionary measurements are further evaluated to assess their performance and stringency. Finally, the results from this coevolutionary analysis are validated experimentally.

Structural prediction

De novo generation of proteins benefits from knowledge of existing protein structures, which assists with the prediction of new protein structures. Methods for protein structure prediction fall under one of the four following classes: ab initio, fragment-based methods, homology modeling, and protein threading.

Ab initio

These methods involve free modeling without using any structural information about the template. Ab initio methods aim to predict the native structures of proteins, corresponding to the global minimum of their free energy. Some examples of ab initio methods are AMBER, GROMOS, GROMACS, CHARMM, OPLS, and ENCEPP12. General steps for ab initio methods begin with the geometric representation of the protein of interest. Next, a potential energy function model for the protein is developed; this model can be created using either molecular mechanics potentials or protein-structure-derived potential functions. Following the development of a potential model, energy search techniques, including molecular dynamics simulations, Monte Carlo simulations, and genetic algorithms, are applied to the protein.

Fragment based

These methods use database information regarding structures to match homologous structures to the created protein sequences. These homologous structures are assembled to give compact structures using scoring and optimization procedures, with the goal of achieving the lowest potential energy score.
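A toy Metropolis Monte Carlo search of the kind used in the ab initio energy-search step above can make the idea concrete. The one-dimensional "conformation" and potential below are invented for illustration; real methods use full atomic models and far more realistic potentials:

```python
import math, random

# Metropolis Monte Carlo minimisation of a made-up one-dimensional
# "energy landscape", standing in for a protein conformational search.

def energy(angle: float) -> float:
    return math.cos(3 * angle) + 0.5 * math.cos(angle)   # toy landscape

random.seed(1)
angle, temperature = 0.0, 1.0
best = (energy(angle), angle)
for step in range(10_000):
    trial = angle + random.uniform(-0.3, 0.3)            # small random move
    dE = energy(trial) - energy(angle)
    if dE < 0 or random.random() < math.exp(-dE / temperature):
        angle = trial                                    # Metropolis acceptance
    best = min(best, (energy(angle), angle))
print(f"lowest energy found: {best[0]:.3f} at angle {best[1]:.2f} rad")
```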
Webservers for fragment information include I-TASSER, ROSETTA, ROSETTA@home, FRAGFOLD, CABS fold, PROFESY, CREF, QUARK, UNDERTAKER, HMM, and ANGLOR.

Homology modeling

These methods are based upon the homology of proteins and are also known as comparative modeling. The first step in homology modeling is generally the identification of template sequences of known structure which are homologous to the query sequence. Next, the query sequence is aligned to the template sequence. Following the alignment, the structurally conserved regions are modeled using the template structure. This is followed by the modeling of side chains and loops that are distinct from the template. Finally, the modeled structure undergoes refinement and assessment of quality. Servers that are available for homology modeling data include: SWISS-MODEL, MODELLER, ReformAlign, PyMOD, TIP-STRUCTFAST, COMPASS, 3d-PSSM, SAMT02, SAMT99, HHPRED, FAGUE, 3D-JIGSAW, META-PP, ROSETTA, and I-TASSER.

Protein threading

Protein threading can be used when a reliable homologue for the query sequence cannot be found. This method begins by obtaining a query sequence and a library of template structures. Next, the query sequence is threaded over known template structures. These candidate models are scored using scoring functions based upon potential energy models of both the query and template sequences; the match with the lowest potential energy is then selected. Methods and servers for retrieving threading data and performing calculations include: GenTHREADER, pGenTHREADER, pDomTHREADER, ORFEUS, PROSPECT, BioShell-Threading, FFASO3, RaptorX, HHPred, LOOPP server, Sparks-X, SEGMER, THREADER2, ESYPRED3D, LIBRA, TOPITS, RAPTOR, COTH, and MUSTER. For more information on rational design see site-directed mutagenesis.

Multivalent binding

Multivalent binding can be used to increase binding specificity and affinity through avidity effects. Having multiple binding domains in a single biomolecule or complex increases the likelihood of other interactions occurring via individual binding events. Avidity, or effective affinity, can be much higher than the sum of the individual affinities, providing a cost- and time-effective tool for targeted binding.

Multivalent proteins

Multivalent proteins are relatively easy to produce by post-translational modifications or by multiplying the protein-coding DNA sequence. The main advantage of multivalent and multispecific proteins is that they can increase the effective affinity for a target of a known protein. In the case of an inhomogeneous target, using a combination of proteins resulting in multispecific binding can increase specificity, which has high applicability in protein therapeutics. The most common example of multivalent binding is the antibody, and there is extensive research into bispecific antibodies. Applications of bispecific antibodies cover a broad spectrum that includes diagnosis, imaging, prophylaxis, and therapy.
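The avidity effect can be hinted at with a deliberately simplified model: if each arm of a bivalent binder engages its site independently, the complex is lost only when both arms release at once, so the effective occupancy exceeds either single arm's. This is an illustrative sketch under an independence assumption, not a thermodynamic treatment (real avidity also involves rebinding and effective local concentration):

```python
# Toy comparison of monovalent vs bivalent occupancy under the
# (simplifying) assumption that the two arms bind independently.

def occupancy(conc: float, kd: float) -> float:
    """Monovalent fractional occupancy from a simple binding isotherm."""
    return conc / (conc + kd)

conc, kd = 1e-9, 1e-8            # 1 nM binder, 10 nM per-site Kd (assumed)
p = occupancy(conc, kd)          # one arm bound: ~0.09
p_bivalent = 1 - (1 - p) ** 2    # at least one of two arms bound: ~0.17
print(f"monovalent {p:.2f} vs bivalent {p_bivalent:.2f} occupancy")
```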
Advantages of directed evolution are that it requires no prior structural knowledge of a protein, nor is it necessary to be able to predict what effect a given mutation will have. Indeed, the results of directed evolution experiments are often surprising, in that desired changes are frequently caused by mutations that were not expected to have any effect. The drawback is that directed evolution requires high-throughput screening, which is not feasible for all proteins. Large amounts of recombinant DNA must be mutated and the products screened for desired traits. The large number of variants often requires expensive robotic equipment to automate the process. Furthermore, not all desired activities can be screened for easily.

Natural Darwinian evolution can be effectively imitated in the lab to tailor protein properties for diverse applications, including catalysis. Many experimental technologies exist to produce large and diverse protein libraries and to screen or select folded, functional variants. Folded proteins arise surprisingly frequently in random sequence space, an occurrence exploitable in evolving selective binders and catalysts. While more conservative than direct selection from deep sequence space, redesign of existing proteins by random mutagenesis and selection/screening is a particularly robust method for optimizing or altering extant properties. It also represents an excellent starting point for achieving more ambitious engineering goals. Allying experimental evolution with modern computational methods is likely the broadest, most fruitful strategy for generating functional macromolecules unknown to nature.

The main challenges of designing high-quality mutant libraries have seen significant progress in the recent past. This progress has taken the form of better descriptions of the effects of mutational loads on protein traits. Computational approaches have also made large advances in reducing the innumerably large sequence space to more manageable, screenable sizes, thus creating smart libraries of mutants. Library size has also been reduced to more screenable sizes by the identification of key beneficial residues using algorithms for systematic recombination. Finally, a significant step toward efficient reengineering of enzymes has been made with the development of more accurate statistical models and algorithms quantifying and predicting coupled mutational effects on protein functions.

Generally, directed evolution may be summarized as an iterative two-step process involving the generation of protein mutant libraries and high-throughput screening to select for variants with improved traits (a simple screening-coverage estimate is sketched below). This technique does not require prior knowledge of the protein's structure-function relationship. Directed evolution uses random or focused mutagenesis to generate libraries of mutant proteins. Random mutations can be introduced using either error-prone PCR or site saturation mutagenesis. Mutants may also be generated by recombination of multiple homologous genes. Nature has evolved only a limited number of beneficial sequences; directed evolution makes it possible to identify undiscovered protein sequences with novel functions. This ability is contingent on the protein's ability to tolerate amino acid residue substitutions without compromising folding or stability. Directed evolution methods can be broadly categorized into two strategies, asexual and sexual methods.
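Because screening capacity limits practical library size, a common back-of-the-envelope calculation is how many clones must be screened to cover a library. The sketch below uses a standard coupon-collector-style union bound, assuming equiprobable variants; the library sizes are arbitrary examples:

from math import log

def clones_to_screen(num_variants, confidence=0.95):
    """Approximate number of clones needed so that every one of
    `num_variants` equiprobable library members is sampled at least once
    with the given confidence (union bound on the coupon collector)."""
    return int(num_variants * log(num_variants / (1.0 - confidence))) + 1

for v in (1_000, 100_000, 10_000_000):
    print(f"{v:>10,} variants -> screen ~{clones_to_screen(v):,} clones")

The super-linear growth of the screening burden is one motivation for the "smart", focused libraries discussed above.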
Asexual methods
Asexual methods do not generate any crosslinks between parental genes. Single genes are used to create mutant libraries using various mutagenic techniques. These asexual methods can produce either random or focused mutagenesis.

Random mutagenesis
Random mutagenic methods produce mutations at random throughout the gene of interest. Random mutagenesis can introduce the following types of mutations: transitions, transversions, insertions, deletions, inversions, missense mutations, and nonsense mutations. Examples of methods for producing random mutagenesis are below.

Error prone PCR
Error-prone PCR exploits the fact that Taq DNA polymerase lacks 3' to 5' exonuclease activity, which results in an error rate of 0.001–0.002% per nucleotide per replication. The method begins with choosing the gene, or the region within a gene, to be mutated. Next, the extent of error required is calculated based upon the type and extent of activity one wishes to generate; this determines the error-prone PCR strategy to be employed (a back-of-the-envelope estimate of the resulting mutational load is sketched below). Following PCR, the genes are cloned into a plasmid and introduced into competent cell systems. These cells are then screened for desired traits. Plasmids are isolated from colonies showing improved traits and used as templates for the next round of mutagenesis. Error-prone PCR shows biases for certain mutations relative to others, such as a bias for transitions over transversions. Rates of error in PCR can be increased in the following ways:
Increase the concentration of magnesium chloride, which stabilizes non-complementary base pairing.
Add manganese chloride to reduce base-pair specificity.
Add increased and unbalanced amounts of dNTPs.
Add base analogs like dITP, 8-oxo-dGTP, and dPTP.
Increase the concentration of Taq polymerase.
Increase the extension time.
Increase the cycle time.
Use a less accurate Taq polymerase.
Also see polymerase chain reaction for more information.

Rolling circle error-prone PCR
This PCR method is based upon rolling circle amplification, which is modeled on the method that bacteria use to amplify circular DNA. It results in linear DNA duplexes containing tandem repeats of circular DNA called concatemers, which can be transformed into bacterial strains. Mutations are introduced by first cloning the target sequence into an appropriate plasmid. Next, the amplification process begins using random hexamer primers and Φ29 DNA polymerase under error-prone rolling circle amplification conditions; such conditions are, for example, 1.5 pM template DNA, 1.5 mM MnCl2, and a 24-hour reaction time. MnCl2 is added to the reaction mixture to promote random point mutations in the DNA strands. Mutation rates can be increased by increasing the concentration of MnCl2 or by decreasing the concentration of template DNA. Error-prone rolling circle amplification is advantageous relative to error-prone PCR because it uses universal random hexamer primers rather than specific primers, the reaction products do not need to be treated with ligases or endonucleases, and the reaction is isothermal.
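The mutational load produced by error-prone amplification can be estimated with a short calculation. The sketch below assumes purely illustrative numbers (a 1,000 bp target, an error rate of 0.002% per nucleotide per duplication, and 20 effective template duplications) and treats the number of mutations per product as Poisson-distributed:

from math import exp, factorial

# Assumed inputs, not taken from any specific protocol.
target_len = 1_000          # length of the mutagenized region in bp
error_rate = 0.002 / 100    # per nucleotide, per duplication
duplications = 20           # effective template duplications in the PCR

mean_mutations = target_len * error_rate * duplications  # expected load per product

def poisson(k, lam):
    """Probability of exactly k mutations when the mean load is lam."""
    return lam ** k * exp(-lam) / factorial(k)

print(f"mean load: {mean_mutations:.2f} mutations per product")
for k in range(4):
    print(f"P(exactly {k} mutations) = {poisson(k, mean_mutations):.3f}")

Tuning MnCl2, dNTP balance, or cycle number in the protocols above effectively shifts this mean load up or down.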
Chemical mutagenesis
Chemical mutagenesis involves the use of chemical agents to introduce mutations into genetic sequences. Sodium bisulfite is effective at mutating G/C-rich genomic sequences, because it catalyses the deamination of unmethylated cytosine to uracil. Ethyl methanesulfonate alkylates guanine residues, an alteration that causes errors during DNA replication. Nitrous acid causes transitions by deamination of adenine and cytosine. The dual approach to random chemical mutagenesis is an iterative two-step process. First, the gene of interest is subjected to in vivo chemical mutagenesis with EMS. Next, the treated gene is isolated and cloned into an untreated expression vector in order to prevent mutations in the plasmid backbone. This technique preserves the plasmid's genetic properties.

Targeting glycosylases to embedded arrays for mutagenesis (TaGTEAM)
This method has been used to create targeted in vivo mutagenesis in yeast. It involves the fusion of a 3-methyladenine DNA glycosylase to a tetR DNA-binding domain, and has been shown to increase mutation rates more than 800-fold in regions of the genome containing tetO sites.

Mutagenesis by random insertion and deletion
This method alters the length of the sequence via simultaneous deletion and insertion of stretches of bases of arbitrary length. It has been shown to produce proteins with new functionalities via the introduction of new restriction sites, specific codons, or four-base codons for non-natural amino acids.

Transposon based random mutagenesis
Recently, many methods for transposon-based random mutagenesis have been reported. These methods include, but are not limited to, the following: PERMUTE random circular permutation, random protein truncation, random nucleotide triplet substitution, random domain/tag/multiple amino acid insertion, codon scanning mutagenesis, and multicodon scanning mutagenesis. These techniques all require the design of mini-Mu transposons; Thermo Scientific manufactures kits for the design of such transposons.

Random mutagenesis methods altering the target DNA length
These methods alter gene length via insertion and deletion mutations. An example is the tandem repeat insertion (TRINS) method, in which tandem repeats of random fragments of the target gene are generated via rolling circle amplification and concurrently incorporated into the target gene.

Mutator strains
Mutator strains are bacterial cell lines deficient in one or more DNA repair mechanisms. An example of a mutator strain is E. coli XL1-RED, which is deficient in the MutS, MutD, and MutT DNA repair pathways. Mutator strains are useful for introducing many types of mutation; however, these strains show progressive sickness of culture because of the accumulation of mutations in the strain's own genome.

Focused mutagenesis
Focused mutagenic methods produce mutations at predetermined amino acid residues. These techniques require an understanding of the sequence-function relationship of the protein of interest. Understanding this relationship allows the identification of residues that are important for stability, stereoselectivity, and catalytic efficiency. Examples of methods that produce focused mutagenesis are below.

Site saturation mutagenesis
Site saturation mutagenesis is a PCR-based method used to target amino acids with significant roles in protein function. The two most common techniques for performing it are whole-plasmid single PCR and overlap extension PCR. Whole-plasmid single PCR is also referred to as site-directed mutagenesis (SDM). SDM products are subjected to DpnI endonuclease digestion, which cleaves only the parental strand, because the parental strand contains a GmATC site methylated at N6 of adenine. SDM does not work well for plasmids larger than about ten kilobases, and it can replace only two nucleotides at a time. Saturation libraries are commonly encoded with degenerate codons such as NNK; a short enumeration of their coverage is given below.
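Saturation methods often use the degenerate codon NNK (N = any base, K = G or T) to cover all twenty amino acids with only 32 codons and a single stop codon. The short enumeration below confirms those counts; the codon table is the standard genetic code:

from itertools import product

# Standard genetic code.
CODON_TABLE = {
    "TTT": "F", "TTC": "F", "TTA": "L", "TTG": "L", "CTT": "L", "CTC": "L",
    "CTA": "L", "CTG": "L", "ATT": "I", "ATC": "I", "ATA": "I", "ATG": "M",
    "GTT": "V", "GTC": "V", "GTA": "V", "GTG": "V", "TCT": "S", "TCC": "S",
    "TCA": "S", "TCG": "S", "CCT": "P", "CCC": "P", "CCA": "P", "CCG": "P",
    "ACT": "T", "ACC": "T", "ACA": "T", "ACG": "T", "GCT": "A", "GCC": "A",
    "GCA": "A", "GCG": "A", "TAT": "Y", "TAC": "Y", "TAA": "*", "TAG": "*",
    "CAT": "H", "CAC": "H", "CAA": "Q", "CAG": "Q", "AAT": "N", "AAC": "N",
    "AAA": "K", "AAG": "K", "GAT": "D", "GAC": "D", "GAA": "E", "GAG": "E",
    "TGT": "C", "TGC": "C", "TGA": "*", "TGG": "W", "CGT": "R", "CGC": "R",
    "CGA": "R", "CGG": "R", "AGT": "S", "AGC": "S", "AGA": "R", "AGG": "R",
    "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G",
}

nnk = ["".join(c) for c in product("ACGT", "ACGT", "GT")]   # N, N, K positions
aas = {CODON_TABLE[c] for c in nnk}
print(len(nnk), "codons ->", len(aas - {"*"}), "amino acids,",
      sum(CODON_TABLE[c] == "*" for c in nnk), "stop codon(s)")

The output (32 codons, 20 amino acids, 1 stop) is why NNK is preferred over fully random NNN, which wastes screening capacity on 64 codons and 3 stops.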
Overlap extension PCR requires two pairs of primers, with one primer in each pair containing the desired mutation. A first round of PCR using these primer pairs produces two double-stranded DNA duplexes. A second round of PCR is then performed, in which the duplexes are denatured and annealed with the primer pairs again to produce heteroduplexes in which each strand carries the mutation. Any gaps in the newly formed heteroduplexes are filled by DNA polymerases, and the products are further amplified; a string-based cartoon of this assembly appears at the end of this section.

Sequence saturation mutagenesis (SeSaM)
Sequence saturation mutagenesis randomizes the target sequence at every nucleotide position. The method begins with the generation of variable-length DNA fragments tailed with universal bases by the use of terminal transferases at the 3' termini. Next, these fragments are extended to full length using a single-stranded template. The universal bases are then replaced with random standard bases, causing mutations. Several modified versions of this method exist, such as SeSaM-Tv-II, SeSaM-Tv+, and SeSaM-III.

Single primer reactions in parallel (SPRINP)
This site saturation mutagenesis method involves two separate PCR reactions: the first uses only forward primers, while the second uses only reverse primers. This avoids the formation of primer dimers.

Mega primed and ligase free focused mutagenesis
This site saturation technique begins with one mutagenic oligonucleotide and one universal flanking primer, which are used together in an initial PCR cycle. Products from this first PCR cycle are then used as megaprimers for the next PCR.

Ω-PCR
This site saturation method is based on overlap extension PCR and is used to introduce mutations at any site in a circular plasmid.

PFunkel-OmniChange-OSCARR
This method applies user-defined site-directed mutagenesis at single or multiple sites simultaneously. OSCARR is an acronym for one-pot simple methodology for cassette randomization and recombination; this randomization and recombination produces randomized versions of desired fragments of a protein. OmniChange is a sequence-independent, multisite saturation mutagenesis that can saturate up to five independent codons on a gene.

Trimer-dimer mutagenesis
This method removes redundant codons and stop codons.

Cassette mutagenesis
This is a PCR-based method. Cassette mutagenesis begins with the synthesis of a DNA cassette containing the gene of interest, flanked on either side by restriction sites. The endonuclease that cleaves these restriction sites also cleaves sites in the target plasmid. The DNA cassette and the target plasmid are both treated with endonucleases to cleave the restriction sites and create sticky ends. The products of this cleavage are then ligated together, inserting the gene into the target plasmid. An alternative form of cassette mutagenesis, combinatorial cassette mutagenesis, is used to identify the functions of individual amino acid residues in the protein of interest. Recursive ensemble mutagenesis then utilizes information from previous rounds of combinatorial cassette mutagenesis. Codon cassette mutagenesis allows the insertion or replacement of a single codon at a particular site in double-stranded DNA.
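The overlap extension strategy described above can be mimicked with strings: two first-round products that share a mutated overlap are joined into one full-length mutant. This is a toy model in which primer design, melting temperatures, and strandedness are all omitted, and the sequence and positions are arbitrary:

def overlap_extension(gene, pos, new_base, arm=10):
    """Toy overlap-extension assembly: two first-round products sharing a
    mutated overlap are joined into one full-length mutant (strings stand
    in for double-stranded DNA)."""
    mutated = gene[:pos] + new_base + gene[pos + 1:]
    left = mutated[:pos + arm]       # product of forward + mutant-reverse primers
    right = mutated[pos - arm:]      # product of mutant-forward + reverse primers
    overlap = 2 * arm                # the shared, mutation-carrying region
    assert left[-overlap:] == right[:overlap]
    return left + right[overlap:]   # second round extends across the overlap

gene = "ATGGCTAGCTAGGATCCGGTACCAAGCTTGA"
mutant = overlap_extension(gene, pos=15, new_base="A")
print(mutant != gene, len(mutant) == len(gene))   # True True: substitution only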
Sexual methods
Sexual methods of directed evolution involve in vitro recombination that mimics natural in vivo recombination. Generally, these techniques require high sequence homology between parental sequences. They are often used to recombine two different parental genes, creating crossovers between them.

In vitro homologous recombination
Homologous recombination can be categorized as either in vivo or in vitro. In vitro homologous recombination mimics natural in vivo recombination. These in vitro recombination methods require high sequence homology between parental sequences and exploit the natural diversity in parental genes by recombining them to yield chimeric genes. The resulting chimeras show a blend of parental characteristics.

DNA shuffling
This in vitro technique was one of the first techniques in the era of recombination. It begins with the digestion of homologous parental genes into small fragments by DNase I. These small fragments are then purified from undigested parental genes. The purified fragments are reassembled using primer-less PCR, in which homologous fragments from different parental genes prime for each other, resulting in chimeric DNA. Chimeric DNA of parental size is then amplified using end-terminal primers in regular PCR.

Random priming in vitro recombination (RPR)
This in vitro homologous recombination method begins with the synthesis of many short gene fragments exhibiting point mutations, using random-sequence primers. These fragments are reassembled into full-length parental genes using primer-less PCR. The reassembled sequences are then amplified using PCR and subjected to further selection processes. This method is advantageous relative to DNA shuffling because no DNase I is used, so there is no bias for recombination next to a pyrimidine nucleotide. It also benefits from the use of synthetic random primers that are uniform in length and free of biases. Finally, the method is independent of the length of the DNA template sequence and requires only a small amount of parental DNA.

Truncated metagenomic gene-specific PCR
This method generates chimeric genes directly from metagenomic samples. It begins with the isolation of the desired gene by functional screening of a metagenomic DNA sample. Next, specific primers are designed and used to amplify the homologous genes from different environmental samples. Finally, chimeric libraries are generated by shuffling these amplified homologous genes to retrieve the desired functional clones.

Staggered extension process (StEP)
This in vitro method is based on template switching to generate chimeric genes. The PCR-based method begins with an initial denaturation of the template, followed by annealing of primers and a short extension time. All subsequent cycles generate annealing between the short fragments produced in previous cycles and different parts of the template; the short fragments and the templates anneal together based on sequence complementarity. This process of fragments annealing to template DNA is known as template switching. The annealed fragments then serve as primers for further extension. The method is carried out until a chimeric gene sequence of parental length is obtained. Its execution requires only flanking primers to begin, and no DNase I enzyme is needed.
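The template-switching recombination shared by DNA shuffling and StEP can be caricatured as alternating between two parental sequences at random crossover points. The sketch below is purely illustrative; real shuffling depends on fragment reannealing rather than on explicitly chosen crossover coordinates:

import random

def shuffle(parent_a, parent_b, n_crossovers=3):
    """Cartoon of recombination between two equal-length homologous parents:
    the chimera switches template at randomly placed crossover points."""
    assert len(parent_a) == len(parent_b)
    points = sorted(random.sample(range(1, len(parent_a)), n_crossovers))
    chimera, src, prev = "", parent_a, 0
    for p in points + [len(parent_a)]:
        chimera += src[prev:p]
        src = parent_b if src is parent_a else parent_a   # template switch
        prev = p
    return chimera

random.seed(1)
a = "AAAAAAAAAAAAAAAAAAAA"   # placeholder parents; runs of one letter make
b = "CCCCCCCCCCCCCCCCCCCC"   # the crossover points visible in the output
print(shuffle(a, b))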
Random chimeragenesis on transient templates (RACHITT)
This method has been shown to generate chimeric gene libraries with an average of 14 crossovers per chimeric gene. It begins by aligning fragments from a parental top strand onto the bottom strand of a uracil-containing template from a homologous gene. 5' and 3' overhanging flaps are cleaved, and gaps are filled, by the exonuclease and endonuclease activities of Pfu and Taq DNA polymerases. The uracil-containing template is then removed from the heteroduplex by treatment with uracil-DNA glycosylase, followed by further amplification using PCR. This method is advantageous because it generates chimeras with a relatively high crossover frequency; however, it is somewhat limited by its complexity and by the need to generate single-stranded DNA and uracil-containing single-stranded template DNA.

Synthetic shuffling
Shuffling of synthetic degenerate oligonucleotides adds flexibility to shuffling methods, since oligonucleotides containing optimal codons and beneficial mutations can be included.

In vivo homologous recombination
Cloning performed in yeast involves PCR-dependent reassembly of fragmented expression vectors. These reassembled vectors are then introduced into, and cloned in, yeast. Using yeast to clone the vector avoids the toxicity and counter-selection that would be introduced by ligation and propagation in E. coli.

Mutagenic organized recombination process by homologous in vivo grouping (MORPHING)
This method introduces mutations into specific regions of genes while leaving other parts intact, by exploiting the high frequency of homologous recombination in yeast.

Phage-assisted continuous evolution (PACE)
This method uses a bacteriophage with a modified life cycle to transfer evolving genes from host to host. The phage's life cycle is designed so that the transfer is correlated with the activity of interest from the enzyme. This method is advantageous because it requires minimal human intervention for the continuous evolution of the gene.

In vitro non-homologous recombination methods
These methods are based upon the fact that proteins can exhibit similar structural identity while lacking sequence homology.

Exon shuffling
Exon shuffling is the combination of exons from different proteins by recombination events occurring at introns. Orthologous exon shuffling involves combining exons from orthologous genes of different species, while orthologous domain shuffling involves shuffling entire protein domains from orthologous genes of different species. Paralogous exon shuffling involves shuffling exons from different genes of the same species, and paralogous domain shuffling involves shuffling entire protein domains from paralogous proteins of the same species. Functional homolog shuffling involves shuffling non-homologous domains that are functionally related. All of these processes begin with the amplification of the desired exons from different genes using chimeric synthetic oligonucleotides. The amplification products are then reassembled into full-length genes using primer-less PCR, during which the fragments act as templates and primers. The result is chimeric full-length genes, which are then subjected to screening.

Incremental truncation for the creation of hybrid enzymes (ITCHY)
Fragments of parental genes are created by controlled digestion with exonuclease III. These fragments are blunted using an endonuclease and are ligated to produce hybrid genes.
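The incremental truncation idea behind ITCHY can likewise be sketched as sampling a random 3' truncation point in one gene and a random 5' truncation point in another and joining the pieces; the fusion points require no sequence homology. All sequences here are placeholders:

import random

def itchy(gene_a, gene_b, trials=5):
    """Cartoon of incremental truncation (ITCHY): gene A is truncated from
    its 3' end, gene B from its 5' end, and the blunt pieces are ligated,
    sampling fusion points without any homology requirement."""
    hybrids = []
    for _ in range(trials):
        cut_a = random.randint(1, len(gene_a) - 1)   # where exonuclease III stopped
        cut_b = random.randint(1, len(gene_b) - 1)
        hybrids.append(gene_a[:cut_a] + gene_b[cut_b:])
    return hybrids

random.seed(0)
for h in itchy("AAAAAAAAAA", "TTTTTTTTTT"):
    print(h)   # hybrids of varying length and fusion point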
THIO-ITCHY is a modified ITCHY technique that uses nucleotide triphosphate analogs such as α-phosphothioate dNTPs. Incorporation of these nucleotides blocks digestion by exonuclease III; this inhibition of digestion is called spiking. Spiking can be accomplished by first truncating genes with exonuclease to create fragments with short single-stranded overhangs. These fragments then serve as templates for amplification by DNA polymerase in the presence of small amounts of phosphothioate dNTPs. The resulting fragments are ligated together to form full-length genes. Alternatively, the intact parental genes can be amplified by PCR in the presence of both normal dNTPs and phosphothioate dNTPs. These full-length amplification products are then subjected to digestion by an exonuclease; digestion continues until the exonuclease encounters an α-phosphothioate dNTP, resulting in fragments of different lengths. These fragments are then ligated together to generate chimeric genes.

SCRATCHY
This method generates libraries of hybrid genes containing multiple crossovers by combining DNA shuffling and ITCHY. It begins with the construction of two independent ITCHY libraries, the first with gene A at the N-terminus and the other with gene B at the N-terminus. The hybrid gene fragments are separated using either restriction enzyme digestion or PCR with terminal primers, via agarose gel electrophoresis. The isolated fragments are then mixed together and further digested using DNase I. The digested fragments are then reassembled by primer-less PCR with template switching.

Recombined extension on truncated templates (RETT)
This method generates libraries of hybrid genes by template switching of unidirectionally growing polynucleotides in the presence of single-stranded DNA fragments that serve as templates for the chimeras. It begins with the preparation of single-stranded DNA fragments by reverse transcription from target mRNA. Gene-specific primers are then annealed to the single-stranded DNA and extended during a PCR cycle. This cycle is followed by template switching and annealing of the short fragments obtained from the earlier primer extension to other single-stranded DNA fragments. The process is repeated until full-length single-stranded DNA is obtained.

Sequence homology-independent protein recombination (SHIPREC)
This method generates recombination between genes with little or no sequence homology. The chimeras are fused via a linker sequence containing several restriction sites, and this construct is digested using DNase I. The fragments are made blunt-ended using S1 nuclease, and the blunt-ended fragments are joined into a circular sequence by ligation. This circular construct is then linearized using restriction enzymes whose recognition sites are present in the linker region. The result is a library of chimeric genes in which the contribution of each gene to the 5' and 3' ends is reversed relative to the starting construct.

Sequence independent site directed chimeragenesis (SISDC)
This method produces a library of genes with multiple crossovers from several parental genes. It does not require sequence identity among the parental genes, but it does require one or two conserved amino acids at every crossover position. It begins with the alignment of parental sequences and the identification of consensus regions that serve as crossover sites.
This is followed by the incorporation of specific tags containing restriction sites, and then by the removal of the tags by digestion with Bac1, resulting in genes with cohesive ends. These gene fragments are mixed and ligated in an appropriate order to form chimeric libraries.

Degenerate homo-duplex recombination (DHR)
This method begins with the alignment of homologous genes, followed by the identification of regions of polymorphism. Next, the top strand of the gene is divided into small degenerate oligonucleotides, and the bottom strand is also digested into oligonucleotides, which serve as scaffolds. The fragments are combined in solution, and the top-strand oligonucleotides assemble onto the bottom-strand oligonucleotides. Gaps between the fragments are filled in with polymerase and ligated.

Random multi-recombinant PCR (RM-PCR)
This method involves the shuffling of multiple DNA fragments without homology in a single PCR. This results in the reconstruction of complete proteins by assembly of modules encoding different structural units.

User friendly DNA recombination (USERec)
This method begins with the amplification of the gene fragments to be recombined, using uracil-containing dNTPs. The amplification mixture also contains primers and PfuTurbo Cx Hotstart DNA polymerase. The amplified products are next incubated with the USER enzyme, which catalyzes the removal of uracil residues from DNA, creating single-base gaps. The USER-enzyme-treated fragments are mixed and ligated using T4 DNA ligase, then subjected to DpnI digestion to remove the template DNA. The resulting single-stranded fragments are amplified by PCR and transformed into E. coli.

Golden Gate shuffling (GGS) recombination
This method allows the recombination of at least nine different fragments in an acceptor vector by using a type IIS restriction enzyme, which cuts outside of its recognition sites. It begins with the subcloning of fragments into separate vectors to create BsaI flanking sequences on both sides. The vectors are then cleaved using the type IIS restriction enzyme BsaI, which generates four-nucleotide single-strand overhangs. Fragments with complementary overhangs are hybridized and ligated using T4 DNA ligase; the way matching overhangs fix the assembly order is illustrated in the sketch at the end of this section. Finally, these constructs are transformed into E. coli cells, which are screened for expression levels.

Phosphorothioate-based DNA recombination method (PTRec)
This method can be used to recombine structural elements or entire protein domains. It is based on phosphorothioate chemistry, which allows the specific cleavage of phosphorothiodiester bonds. The first step begins with the amplification of the fragments to be recombined, along with the vector backbone, using primers with phosphorothioated nucleotides at their 5' ends. The amplified PCR products are cleaved in an ethanol-iodine solution at high temperature. The fragments are then hybridized at room temperature and transformed into E. coli, which repair any nicks.

Integron
This system is based upon a natural site-specific recombination system in E. coli, called the integron system, which produces natural gene shuffling. This method was used to construct and optimize a functional tryptophan biosynthetic operon in trp-deficient E. coli by delivering individual recombination cassettes or trpA-E genes, along with regulatory elements, with the integron system.
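The way a Golden Gate reaction enforces fragment order can be illustrated with a toy assembler in which each part carries a left and a right overhang and parts join only where overhangs match. The overhang labels and part sequences below are hypothetical:

def golden_gate(fragments):
    """Toy Golden Gate assembly: each fragment carries overhangs left by a
    type IIS enzyme cutting outside its recognition site; parts join only
    where overhangs are complementary (modeled here as simple equality),
    which fixes the assembly order regardless of input order."""
    parts = {left: (body, right) for left, body, right in fragments}
    overhang, assembly = "START", ""
    while overhang != "END":
        body, overhang = parts[overhang]
        assembly += body
    return assembly

# Hypothetical parts given as (left overhang, sequence, right overhang),
# deliberately listed out of order.
frags = [
    ("GGAC", "TTTTT", "CTTC"),
    ("START", "ATGAA", "GGAC"),
    ("CTTC", "TAATGA", "END"),
]
print(golden_gate(frags))   # ATGAATTTTTTAATGA: order set by overhangs

Because cutting and ligation occur in one pot and the recognition sites are removed from the product, correctly assembled constructs accumulate, which is the practical appeal of the method.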
Y-Ligation based shuffling (YLBS)
This method generates single-stranded DNA strands that encompass a single block sequence at either the 5' or 3' end, complementary sequences in a stem-loop region, and a D-branch region serving as a primer binding site for PCR. Equivalent amounts of 5' and 3' half-strands are mixed and form hybrids due to the complementarity in the stem region. Hybrids with a free phosphorylated 5' end on the 3' half-strand are then ligated to free 3' ends on 5' half-strands using T4 DNA ligase in the presence of 0.1 mM ATP. The ligated products are then amplified by two types of PCR to generate pre-5'-half and pre-3'-half PCR products. These PCR products are converted to single strands via avidin-biotin binding to the 5' ends of primers containing biotin-labeled stem sequences. Next, biotinylated 5' half-strands and non-biotinylated 3' half-strands are used as the 5' and 3' half-strands for the next Y-ligation cycle.

Semi-rational design
Semi-rational design uses information about a protein's sequence, structure, and function, in tandem with predictive algorithms, to identify the target amino acid residues that are most likely to influence protein function. Mutating these key residues creates libraries of mutant proteins that are more likely to have enhanced properties.

Advances in semi-rational enzyme engineering and de novo enzyme design provide researchers with powerful and effective new strategies to manipulate biocatalysts. Integration of sequence- and structure-based approaches in library design has proven to be a valuable guide for enzyme redesign. Generally, current computational de novo and redesign methods do not compare to evolved variants in catalytic performance. Although experimental optimization may be achieved using directed evolution, further improvements in the accuracy of structure prediction and greater catalytic ability will be achieved with improvements in design algorithms. Further functional enhancements may be included in future simulations by integrating protein dynamics. Biochemical and biophysical studies, along with fine-tuning of predictive frameworks, will be useful for experimentally evaluating the functional significance of individual design features. Better understanding of these functional contributions will then feed back into the improvement of future designs.

Directed evolution will likely not be replaced as the method of choice for protein engineering, although computational protein design has fundamentally changed the way protein engineering can manipulate biomacromolecules. Smaller, more focused, and functionally rich libraries may be generated by using methods that incorporate predictive frameworks for hypothesis-driven protein engineering. New design strategies and technical advances have begun a departure from traditional protocols, such as directed evolution, which represents the most effective strategy for identifying top-performing candidates in focused libraries. Whole-gene library synthesis is replacing shuffling and mutagenesis protocols for library preparation, and highly specific, low-throughput screening assays are increasingly applied in place of monumental screening and selection efforts involving millions of candidates. Together, these developments are poised to take protein engineering beyond directed evolution and towards practical, more efficient strategies for tailoring biocatalysts.
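As a minimal illustration of sequence-based residue selection for semi-rational design, the sketch below ranks alignment columns by Shannon entropy; variable positions (or, depending on the goal, conserved ones) can then be prioritized for mutagenesis. The mini-alignment is invented, and real workflows would combine such scores with structural and functional information:

from collections import Counter
from math import log2

def column_entropy(column):
    """Shannon entropy of one alignment column; low entropy = conserved."""
    counts = Counter(column)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

def hotspot_positions(alignment, top_n=3):
    """Rank positions by variability: one simple, sequence-based proxy for
    choosing residues to target in a semi-rational library."""
    columns = list(zip(*alignment))
    scored = sorted(enumerate(columns), key=lambda ic: -column_entropy(ic[1]))
    return [i for i, _ in scored[:top_n]]

# Hypothetical mini-alignment of homologous sequences (equal length).
aln = ["MKTAYIAK", "MKSAYLAK", "MKTGYIAR", "MKSAYIAK"]
print(hotspot_positions(aln))   # indices of the most variable columns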
Screening and selection techniques
Once a protein has undergone directed evolution, rational design, or semi-rational design, the resulting libraries of mutant proteins must be screened to determine which mutants show enhanced properties. Phage display is one option for screening proteins. This method involves fusing the genes encoding the variant polypeptides to phage coat protein genes. Protein variants expressed on the phage surfaces are selected by binding to immobilized targets in vitro. Phages carrying the selected protein variants are then amplified in bacteria, followed by the identification of positive clones by enzyme-linked immunosorbent assay. The selected phages are then subjected to DNA sequencing.

Cell surface display systems can also be used to screen mutant polypeptide libraries. The mutant library genes are incorporated into expression vectors, which are then transformed into appropriate host cells. These host cells are subjected to further high-throughput screening to identify cells with the desired phenotypes. Cell-free display systems have also been developed to exploit in vitro (cell-free) protein translation; these methods include mRNA display, ribosome display, covalent and non-covalent DNA display, and in vitro compartmentalization.

Enzyme engineering
Enzyme engineering is the application of modifying an enzyme's structure (and thus its function), or modifying the catalytic activity of isolated enzymes, to produce new metabolites, to allow new (catalyzed) pathways for reactions to occur, or to convert certain compounds into others (biotransformation). The products are useful as chemicals, pharmaceuticals, fuels, food, or agricultural additives. An enzyme reactor consists of a vessel containing a reaction medium used to perform a desired conversion by enzymatic means; the enzymes used in this process are free in the solution. Microorganisms are also an important source of natural enzymes.

Examples of engineered proteins
Computational methods have been used to design a protein with a novel fold, such as Top7, and sensors for unnatural molecules. The engineering of fusion proteins has yielded rilonacept, a pharmaceutical that has secured Food and Drug Administration (FDA) approval for treating cryopyrin-associated periodic syndrome. Another computational method, IPRO, successfully engineered the switching of cofactor specificity of Candida boidinii xylose reductase. Iterative Protein Redesign and Optimization (IPRO) redesigns proteins to increase or confer specificity for native or novel substrates and cofactors. This is done by repeatedly and randomly perturbing the structure of the protein around specified design positions, identifying the lowest-energy combination of rotamers, and determining whether the new design has a lower binding energy than prior ones.

Computation-aided design has also been used to engineer complex properties of a highly ordered nano-protein assembly. A protein cage, E. coli bacterioferritin (EcBfr), which naturally shows structural instability and incomplete self-assembly behavior by populating two oligomerization states, is the model protein in this study. Through computational analysis and comparison to its homologs, this protein was found to have a smaller-than-average dimeric interface on its two-fold symmetry axis, due mainly to the existence of an interfacial water pocket centered on two water-bridged asparagine residues.
To investigate the possibility of engineering EcBfr for modified structural stability, a semi-empirical computational method was used to virtually explore the energy differences of the 480 possible mutants at the dimeric interface relative to the wild-type EcBfr. This computational study also converged on the water-bridged asparagines. Replacing these two asparagines with hydrophobic amino acids resulted in proteins that fold into alpha-helical monomers and assemble into cages, as evidenced by circular dichroism and transmission electron microscopy. Both thermal and chemical denaturation confirmed that all redesigned proteins, in agreement with the calculations, possessed increased stability. One of the three mutations shifted the population in favor of the higher-order oligomerization state in solution, as shown by both size exclusion chromatography and native gel electrophoresis.

An in silico method, PoreDesigner, was developed to redesign the bacterial channel protein OmpF to reduce its 1 nm pore size to any desired sub-nanometer dimension. Transport experiments on the narrowest designed pores revealed complete salt rejection when assembled in biomimetic block-polymer matrices.
Sustainable agriculture
Sustainable agriculture is farming in sustainable ways that meet society's present food and textile needs without compromising the ability of current or future generations to meet their needs. It can be based on an understanding of ecosystem services. There are many methods to increase the sustainability of agriculture. When developing agriculture within sustainable food systems, it is important to develop flexible business processes and farming practices. Agriculture has an enormous environmental footprint, playing a significant role in causing climate change (food systems are responsible for one third of anthropogenic greenhouse gas emissions), water scarcity, water pollution, land degradation, deforestation and other processes; it is simultaneously causing environmental changes and being impacted by these changes. Sustainable agriculture consists of environmentally friendly methods of farming that allow the production of crops or livestock without damage to human or natural systems. It involves preventing adverse effects on soil, water, biodiversity, and surrounding or downstream resources, as well as on those working or living on the farm or in neighboring areas. Elements of sustainable agriculture can include permaculture, agroforestry, mixed farming, multiple cropping, and crop rotation.

Developing sustainable food systems contributes to the sustainability of the human population. For example, one of the best ways to mitigate climate change is to create sustainable food systems based on sustainable agriculture. Sustainable agriculture provides a potential solution enabling agricultural systems to feed a growing population within changing environmental conditions. Besides sustainable farming practices, dietary shifts to sustainable diets are an intertwined way to substantially reduce environmental impacts. Numerous sustainability standards and certification systems exist, including organic certification, Rainforest Alliance, Fair Trade, UTZ Certified, GlobalGAP, Bird Friendly, and the Common Code for the Coffee Community (4C).

Definition
The term "sustainable agriculture" was defined in 1977 by the USDA as an integrated system of plant and animal production practices having a site-specific application that will, over the long term:
satisfy human food and fiber needs
enhance environmental quality and the natural resource base upon which the agricultural economy depends
make the most efficient use of nonrenewable resources and on-farm resources and integrate, where appropriate, natural biological cycles and controls
sustain the economic viability of farm operations
enhance the quality of life for farmers and society as a whole.
Yet the idea of having a sustainable relationship with the land had been prevalent in indigenous communities for centuries before the term was formally added to the lexicon.

Aims
A common consensus is that sustainable farming is the most realistic way to feed growing populations. To successfully feed the population of the planet, farming practices must consider future costs, both to the environment and to the communities they fuel. The risk of not being able to provide enough resources for everyone has led to the adoption of technology within the sustainability field to increase farm productivity. The ideal end result of this advancement is the ability to feed ever-growing populations across the world.
The growing popularity of sustainable agriculture is connected to the wide-reaching fear that the planet's carrying capacity (or planetary boundaries), in terms of the ability to feed humanity, has been reached or even exceeded.

Key principles
There are several key principles associated with sustainability in agriculture:
The incorporation of biological and ecological processes such as nutrient cycling, soil regeneration, and nitrogen fixation into agricultural and food production practices.
Using decreased amounts of non-renewable and unsustainable inputs, particularly environmentally harmful ones.
Using the expertise of farmers both to work the land productively and to promote their self-reliance and self-sufficiency.
Solving agricultural and natural resource problems through the cooperation and collaboration of people with different skills. The problems tackled include pest management and irrigation.
It "considers long-term as well as short-term economics because sustainability is readily defined as forever, that is, agricultural environments that are designed to promote endless regeneration".
It balances the need for resource conservation with the needs of farmers pursuing their livelihood.
It is considered a form of reconciliation ecology, accommodating biodiversity within human landscapes.
Oftentimes, the execution of sustainable practices in farming comes through the adoption of environmentally focused appropriate technology.

Technological approaches
Sustainable agricultural systems are becoming an increasingly important field for AI research and development. By leveraging AI's capabilities in areas such as resource optimization, crop health monitoring, and yield prediction, farmers may advance significantly toward more environmentally friendly agricultural practices. Artificial intelligence (AI) mobile soil analysis enables farmers to enhance soil fertility while decreasing their ecological footprint; this technology permits on-site, real-time evaluation of soil nutrient levels.

Environmental factors
Practices that can cause long-term damage to soil include excessive tilling of the soil (leading to erosion) and irrigation without adequate drainage (leading to salinization). The most important factors for a farming site are climate, soil, nutrients and water resources. Of the four, water and soil conservation are the most amenable to human intervention. When farmers grow and harvest crops, they remove some nutrients from the soil. Without replenishment, the land suffers from nutrient depletion and either becomes unusable or suffers from reduced yields. Sustainable agriculture depends on replenishing the soil while minimizing the use or need of non-renewable resources, such as natural gas or mineral ores. A farm that can "produce perpetually", yet has negative effects on environmental quality elsewhere, is not sustainable agriculture. An example of a case in which a global view may be warranted is the application of fertilizer or manure, which can improve the productivity of a farm but can pollute nearby rivers and coastal waters (eutrophication). The other extreme can also be undesirable, as the problem of low crop yields due to exhaustion of nutrients in the soil has been related to rainforest destruction. In Asia, the specific amount of land needed for sustainable farming is about , which includes land for animal fodder, cereal production as a cash crop, and other food crops. In some cases, a small unit of aquaculture is included (AARI-1996).
Nutrients
Nitrates
Nitrates are widely used in farming as fertilizer. Unfortunately, a major environmental problem associated with agriculture is the leaching of nitrates into the environment. Possible sources of nitrates that would, in principle, be available indefinitely include:
recycling crop waste and livestock or treated human manure
growing legume crops and forages, such as peanuts or alfalfa, that form symbioses with nitrogen-fixing bacteria called rhizobia
industrial production of nitrogen by the Haber process, which uses hydrogen currently derived from natural gas (but this hydrogen could instead be made by electrolysis of water using renewable electricity)
genetically engineering (non-legume) crops to form nitrogen-fixing symbioses or to fix nitrogen without microbial symbionts.
The last option was proposed in the 1970s, but is only gradually becoming feasible. Sustainable options for replacing other nutrient inputs, such as phosphorus and potassium, are more limited. Other options include long-term crop rotations; returning to natural cycles that annually flood cultivated lands (returning lost nutrients), such as the flooding of the Nile; the long-term use of biochar; and the use of crop and livestock landraces adapted to less-than-ideal conditions such as pests, drought, or lack of nutrients. Crops that require high levels of soil nutrients can be cultivated more sustainably with appropriate fertilizer management practices.

Phosphate
Phosphate is a primary component of fertilizer. It is the second most important nutrient for plants after nitrogen, and is often a limiting factor. It is important for sustainable agriculture as it can improve soil fertility and crop yields. Phosphorus is involved in all major metabolic processes, including photosynthesis, energy transfer, signal transduction, macromolecular biosynthesis, and respiration. It is needed for root ramification and strength and for seed formation, and it can increase disease resistance. Phosphorus is found in the soil in both inorganic and organic forms and makes up approximately 0.05% of soil biomass. Phosphorus fertilizers are the main input of inorganic phosphorus in agricultural soils, and approximately 70–80% of phosphorus in cultivated soils is inorganic. Long-term use of phosphate-containing chemical fertilizers causes eutrophication and depletes soil microbial life, so people have looked to other sources. Phosphorus fertilizers are manufactured from rock phosphate. However, rock phosphate is a non-renewable resource that is being depleted by mining for agricultural use: peak phosphorus will occur within the next few hundred years, or perhaps earlier.

Potassium
Potassium is a macronutrient very important for plant development, and is commonly sought in fertilizers. This nutrient is essential for agriculture because it improves the water retention, nutrient value, yield, taste, color, texture and disease resistance of crops. It is often used in the cultivation of grains, fruits, vegetables, rice, wheat, millets, sugar, corn, soybeans, palm oil and coffee. Potassium chloride (KCl) is the most widely used source of K in agriculture, accounting for 90% of all potassium produced for agricultural use. The use of KCl leads to high concentrations of chloride (Cl−) in soil, harming soil health through increased salinity, imbalanced nutrient availability, and this ion's biocidal effect on soil organisms.
As a consequence, the development of plants and soil organisms is affected, putting soil biodiversity and agricultural productivity at risk. A sustainable option for replacing KCl is chloride-free fertilizer; its use should take into account plants' nutritional needs and the promotion of soil health.

Soil
Land degradation is becoming a severe global problem. According to the Intergovernmental Panel on Climate Change: "About a quarter of the Earth's ice-free land area is subject to human-induced degradation (medium confidence). Soil erosion from agricultural fields is estimated to be currently 10 to 20 times (no tillage) to more than 100 times (conventional tillage) higher than the soil formation rate (medium confidence)." Almost half of the land on Earth is covered by drylands, which are susceptible to degradation. Over a billion tonnes of southern Africa's soil is lost to erosion annually, which, if it continues, will result in the halving of crop yields within thirty to fifty years. Improper soil management is threatening the ability to grow sufficient food. Intensive agriculture reduces the carbon level in soil, impairing soil structure, crop growth and ecosystem functioning, and accelerating climate change. Modification of agricultural practices is a recognized method of carbon sequestration, as soil can act as an effective carbon sink. Soil management techniques include no-till farming, keyline design and windbreaks to reduce wind erosion, reincorporation of organic matter into the soil, reduction of soil salinization, and prevention of water run-off.

Land
As the global population increases and demand for food increases, there is pressure on land as a resource. In land-use planning and management, considering the impacts of land-use changes on factors such as soil erosion can support long-term agricultural sustainability, as shown by a study of Wadi Ziqlab, a dry area in the Middle East where farmers graze livestock and grow olives, vegetables, and grains. Looking back over the 20th century shows that for people in poverty, following environmentally sound land practices has not always been a viable option, due to many complex and challenging life circumstances. Currently, increased land degradation in developing countries may be connected with rural poverty among smallholder farmers forced into unsustainable agricultural practices out of necessity. Converting large parts of the land surface to agriculture has severe environmental and health consequences. For example, it leads to a rise in zoonotic diseases (like the coronavirus disease 2019) through the degradation of natural buffers between humans and animals, reduced biodiversity, and larger groups of genetically similar animals. Land is a finite resource on Earth. Although expansion of agricultural land can decrease biodiversity and contribute to deforestation, the picture is complex; for instance, a study examining the introduction of sheep by Norse settlers (Vikings) to the Faroe Islands of the North Atlantic concluded that, over time, the fine partitioning of land plots contributed more to soil erosion and degradation than grazing itself. The Food and Agriculture Organization of the United Nations estimates that in coming decades, cropland will continue to be lost to industrial and urban development, along with reclamation of wetlands and conversion of forest to cultivation, resulting in the loss of biodiversity and increased soil erosion.
Energy
In modern agriculture, energy is used in on-farm mechanisation, food processing, storage, and transportation. Energy prices have therefore been found to be closely linked to food prices. Oil is also used as an input in agricultural chemicals. The International Energy Agency projects higher prices of non-renewable energy resources as fossil fuel resources are depleted. This may decrease global food security unless action is taken to 'decouple' fossil fuel energy from food production, with a move towards 'energy-smart' agricultural systems, including renewable energy. The use of solar-powered irrigation in Pakistan is said to provide a closed system for agricultural water irrigation. The environmental cost of transportation could be avoided if people use local products.

Water
In some areas sufficient rainfall is available for crop growth, but many other areas require irrigation. For irrigation systems to be sustainable, they require proper management (to avoid salinization) and must not use more water from their source than is naturally replenishable; otherwise, the water source effectively becomes a non-renewable resource. Improvements in water well drilling technology and submersible pumps, combined with the development of drip irrigation and low-pressure pivots, have made it possible to regularly achieve high crop yields in areas where reliance on rainfall alone had previously made successful agriculture unpredictable. However, this progress has come at a price. In many areas, such as the Ogallala Aquifer, the water is being used faster than it can be replenished. According to the UC Davis Agricultural Sustainability Institute, several steps must be taken to develop drought-resistant farming systems, even in "normal" years with average rainfall. These measures include both policy and management actions:
improving water conservation and storage measures
providing incentives for the selection of drought-tolerant crop species
using reduced-volume irrigation systems
managing crops to reduce water loss
not planting crops at all.
Indicators for sustainable water resource development include the average annual flow of rivers from rainfall, flows from outside a country, the percentage of water coming from outside a country, and gross water withdrawal. It is estimated that agricultural practices consume 69% of the world's fresh water. In Wyoming and other parts of the United States, farmers have discovered a way to save water by using wool.

Social factors
Rural economic development
Sustainable agriculture attempts to solve multiple problems with one broad solution. The goal of sustainable agricultural practices is to decrease environmental degradation due to farming while increasing crop, and thus food, output. There are many varying strategies that attempt to use sustainable farming practices to increase rural economic development within small-scale farming communities. Two of the most popular and opposing strategies within the modern discourse are allowing unrestricted markets to determine food production, and deeming food a human right. Neither of these approaches has been proven to work without fail. A promising proposal for rural poverty reduction within agricultural communities is sustainable economic growth; the most important aspect of this policy is to regularly include the poorest farmers in economy-wide development through the stabilization of small-scale agricultural economies.
In 2007, the United Nations reported on "Organic Agriculture and Food Security in Africa", stating that sustainable agriculture could be a tool for reaching global food security without expanding land usage and while reducing environmental impacts. Evidence from developing nations since the early 2000s indicates that serious harm is done when people in local communities are not factored into the agricultural process. The social scientist Charles Kellogg has stated that, "In a final effort, exploited people pass their suffering to the land." Sustainable agriculture means the ability to permanently and continuously "feed its constituent populations".

There are many opportunities that can increase farmers' profits, improve communities, and continue sustainable practices. For example, in Uganda, genetically modified organisms were originally illegal. However, under the stress of the banana crisis in Uganda, where banana bacterial wilt had the potential to wipe out 90% of yield, the government decided to explore GMOs as a possible solution. It issued the National Biotechnology and Biosafety Bill, which allows scientists who are part of the National Banana Research Program to start experimenting with genetically modified organisms. This effort has the potential to help local communities, because a significant portion live off the food they grow themselves, and it can be profitable because the yield of their main produce will remain stable.

Not all regions are suitable for agriculture. The technological advancement of the past few decades has allowed agriculture to develop in some of these regions. For example, Nepal has built greenhouses to deal with its high-altitude and mountainous regions. Greenhouses allow for greater crop production and also use less water, since they are closed systems. Desalination techniques can turn salt water into fresh water, allowing greater access to water in areas with a limited supply. This allows the irrigation of crops without decreasing natural fresh water sources. While desalination can be a tool to provide water to areas that need it to sustain agriculture, it requires money and resources. Regions of China have been considering large-scale desalination to increase access to water, but the current cost of the desalination process makes it impractical.

Women
Women working in sustainable agriculture come from numerous backgrounds, ranging from academia to labour. From 1978 to 2007, the number of women farm operators in the United States tripled. In 2007, women operated 14 percent of farms, compared to five percent in 1978. Much of the growth is due to women farming outside of the "male dominated field of conventional agriculture".

Growing your own food
The practice of growing food in the backyards of houses, schools, and similar places, by families or by communities, became widespread in the US during World War I, the Great Depression and World War II, such that at one point 40% of the vegetables in the USA were produced in this way. The practice became more popular again during the COVID-19 pandemic. This method permits food to be grown in a relatively sustainable way and at the same time can make it easier for poor people to obtain food.

Economic factors
Costs, such as environmental problems, that are not covered in traditional accounting systems (which take into account only the direct costs of production incurred by the farmer) are known as externalities.
The anthropologist Robert Netting studied sustainability and intensive agriculture in smallholder systems through history. There are several studies incorporating externalities such as ecosystem services, biodiversity, land degradation, and sustainable land management in economic analysis. These include The Economics of Ecosystems and Biodiversity study and the Economics of Land Degradation Initiative, which seek to establish an economic cost-benefit analysis on the practice of sustainable land management and sustainable agriculture. Triple bottom line frameworks include social and environmental bottom lines alongside the financial one. A sustainable future can be feasible if growth in material consumption and population is slowed down and if there is a drastic increase in the efficiency of material and energy use. To make that transition, long- and short-term goals will need to be balanced, enhancing equity and quality of life.
Challenges and debates
Barriers
The barriers to sustainable agriculture can be broken down and understood through three different dimensions. These three dimensions are seen as the core pillars of sustainability: the social, environmental, and economic pillars. The social pillar addresses issues related to the conditions into which societies are born, grow, and learn. It deals with shifting away from traditional agricultural practices and moving into new sustainable practices that will create better societies and conditions. The environmental pillar addresses climate change and focuses on agricultural practices that protect the environment for future generations. The economic pillar concerns ways in which sustainable agriculture can be practiced while fostering economic growth and stability, with minimal disruptions to livelihoods. All three pillars must be addressed to determine and overcome the barriers preventing sustainable agricultural practices.
Social barriers to sustainable agriculture include cultural shifts, the need for collaboration, incentives, and new legislation. The move from conventional to sustainable agriculture will require significant behavioural changes from both farmers and consumers. Cooperation and collaboration between farmers is necessary to successfully transition to sustainable practices with minimal complications. This can be seen as a challenge for farmers who care about competition and profitability. There must also be an incentive for farmers to change their methods of agriculture. Public policy, advertisements, and laws that make sustainable agriculture mandatory or desirable can be used to overcome these social barriers.
Environmental barriers prevent the ability to protect and conserve the natural ecosystem. Examples of these barriers include the use of pesticides and the effects of climate change. Pesticides are widely used to combat pests that can devastate production, and play a significant role in keeping food prices and production costs low. To move toward sustainable agriculture, farmers are encouraged to use green pesticides, which cause less harm to both human health and habitats but entail a higher production cost. Climate change is also a rapidly growing barrier, one that farmers have little control over, as can be seen through place-based barriers. These place-based barriers include factors such as weather conditions, topography, and soil quality, which can cause losses in production, resulting in reluctance to switch from conventional practices.
Many environmental benefits are also not visible or immediately evident. Significant changes, such as lower rates of soil and nutrient loss, improved soil structure, and higher levels of beneficial microorganisms, take time. In conventional agriculture the benefits are easily visible, with no weeds, pests, and so on, but the long-term costs to the soil and surrounding ecosystems are hidden and "externalized". Conventional agricultural practices, as technology has advanced, have caused significant damage to the environment through biodiversity loss, disrupted ecosystems, and poor water quality, among other harms.
The economic obstacles to implementing sustainable agricultural practices include low financial return/profitability, lack of financial incentives, and negligible capital investments. Financial incentives and circumstances play a large role in whether sustainable practices will be adopted. The human and material capital required to shift to sustainable methods of agriculture requires training of the workforce and investments in new technology and products, which come at a high cost. In addition, farmers practicing conventional agriculture can mass-produce their crops and therefore maximize their profitability. This would be difficult to do in sustainable agriculture, which encourages lower production capacity.
The author James Howard Kunstler claims almost all modern technology is bad and that there cannot be sustainability unless agriculture is done in ancient traditional ways. Efforts toward more sustainable agriculture are supported in the sustainability community; however, these are often viewed only as incremental steps and not as an end. One promising method of encouraging sustainable agriculture is through local farming and community gardens. Incorporating local produce and agricultural education into schools, communities, and institutions can promote the consumption of freshly grown produce, which will drive consumer demand. Some foresee a true sustainable steady-state economy that may be very different from today's: greatly reduced energy usage, minimal ecological footprint, fewer consumer packaged goods, local purchasing with short food supply chains, little processed food, more home and community gardens, and so on.
Different viewpoints about the definition
There is a debate about the definition of sustainability regarding agriculture. The definition can be characterized by two different approaches: an ecocentric approach and a technocentric approach. The ecocentric approach emphasizes no- or low-growth levels of human development and focuses on organic and biodynamic farming techniques, with the goal of changing consumption patterns and resource allocation and usage. The technocentric approach argues that sustainability can be attained through a variety of strategies, from the view that state-led modification of the industrial system, such as conservation-oriented farming systems, should be implemented, to the argument that biotechnology is the best way to meet the increasing demand for food. One can look at the topic of sustainable agriculture through two different lenses: multifunctional agriculture and ecosystem services. The two approaches are similar, but look at the function of agriculture differently. Those who employ the multifunctional agriculture philosophy focus on farm-centered approaches, and define function as the outputs of agricultural activity.
The central argument of multifunctionality is that agriculture is a multifunctional enterprise with functions other than the production of food and fiber. These functions include renewable resource management, landscape conservation and biodiversity. The ecosystem service-centered approach posits that individuals and society as a whole receive benefits from ecosystems, which are called "ecosystem services". In sustainable agriculture, the services that ecosystems provide include pollination, soil formation, and nutrient cycling, all of which are necessary functions for the production of food. It is also claimed that sustainable agriculture is best considered as an ecosystem approach to agriculture, called agroecology.
Ethics
Most agricultural professionals agree that there is a "moral obligation to pursue [the] goal [of] sustainability." The major debate comes from which system will provide a path to that goal, because if an unsustainable method is used on a large scale it will have a massive negative effect on the environment and human population.
Methods
Practices include polyculture: growing a diverse number of perennial crops in a single field, each of which grows in a separate season so as not to compete with the others for natural resources. This system results in increased resistance to diseases and decreased effects of erosion and loss of nutrients in the soil. Nitrogen fixation from legumes, for example, used in conjunction with plants that rely on nitrate from the soil for growth, helps to allow the land to be reused annually. Legumes will grow for a season and replenish the soil with ammonium and nitrate, and the next season other plants can be seeded and grown in the field in preparation for harvest. Sustainable methods of weed management may help reduce the development of herbicide-resistant weeds. Crop rotation may also replenish nitrogen, if legumes are used in the rotations, and may use resources more efficiently.
There are also many ways to practice sustainable animal husbandry. Tools for grazing management include fencing off the grazing area into smaller areas called paddocks, lowering stock density, and moving the stock between paddocks frequently.
Intensification
Increased production is a goal of intensification. Sustainable intensification encompasses agricultural methods that increase production while helping to improve environmental outcomes. The desired outcomes of the farm are achieved without the need for more land cultivation or destruction of natural habitat; system performance is upgraded with no net environmental cost. Sustainable intensification has become a priority for the United Nations. Sustainable intensification differs from prior intensification methods by specifically placing importance on broader environmental outcomes. By 2018, it was estimated that 163 million farms across 100 nations practiced sustainable intensification, covering 453 million hectares of agricultural land, equal to 29% of farms worldwide. In light of concerns about food security, human population growth and dwindling land suitable for agriculture, sustainable intensive farming practices are needed to maintain high crop yields while maintaining soil health and ecosystem services. The capacity for ecosystem services to be strong enough to allow a reduction in the use of non-renewable inputs, whilst maintaining or boosting yields, has been the subject of much debate.
Recent work in irrigated rice production systems of East Asia has suggested that, in relation to pest management at least, promoting the ecosystem service of biological control using nectar plants can reduce the need for insecticides by 70% whilst delivering a 5% yield advantage compared with standard practice. Vertical farming is a concept with the potential advantages of year-round production, isolation from pests and diseases, controllable resource recycling and reduced transportation costs.
Water
Water efficiency can be improved by reducing the need for irrigation and by using alternative methods. Such methods include research on drought-resistant crops, monitoring plant transpiration, and reducing soil evaporation.
Drought-resistant crops have been researched extensively as a means to overcome the issue of water shortage. They are modified genetically so they can adapt to an environment with little water. This is beneficial as it reduces the need for irrigation and helps conserve water. Although they have been extensively researched, significant results have not been achieved, as most of the successful species have no overall impact on water conservation. However, some grains, such as rice, have been successfully genetically modified to be drought-resistant.
Soil and nutrients
Soil amendments include using compost from recycling centers. Using compost from yard and kitchen waste uses available resources in the area. Abstinence from soil tillage before planting and leaving the plant residue after harvesting reduce soil water evaporation; they also serve to prevent soil erosion. Crop residues left covering the surface of the soil may result in reduced evaporation of water, a lower surface soil temperature, and a reduction of wind effects.
A way to make rock phosphate more effective is to add microbial inoculants such as phosphate-solubilizing microorganisms (PSMs) to the soil. These solubilize phosphorus already in the soil and use processes like organic acid production and ion exchange reactions to make that phosphorus available for plants. Experimentally, these PSMs have been shown to increase crop growth in terms of shoot height, dry biomass and grain yield. Phosphorus uptake is even more efficient in the presence of mycorrhizae in the soil. Mycorrhiza is a type of mutualistic symbiotic association between plants and fungi, which are well equipped to absorb nutrients, including phosphorus, in soil. These fungi can increase nutrient uptake in soil where phosphorus has been fixed by aluminum, calcium, and iron. Mycorrhizae can also release organic acids that solubilize otherwise unavailable phosphorus.
Pests and weeds
Soil steaming can be used as an alternative to chemicals for soil sterilization. Different methods are available to induce steam into the soil in order to kill pests and increase soil health. Solarizing is based on the same principle, used to increase the temperature of the soil to kill pathogens and pests. Certain plants can be cropped for use as biofumigants, "natural" fumigants that release pest-suppressing compounds when crushed, ploughed into the soil, and covered in plastic for four weeks. Plants in the Brassicaceae family release large amounts of toxic compounds such as methyl isothiocyanates.
Location
Relocating current croplands to environmentally more optimal locations, whilst allowing ecosystems in the then-abandoned areas to regenerate, could substantially decrease the current carbon, biodiversity, and irrigation water footprint of global crop production, with relocation only within national borders also having substantial potential.
Plants
Sustainability may also involve crop rotation. Crop rotation and cover crops prevent soil erosion by protecting topsoil from wind and water. Effective crop rotation can reduce pest pressure on crops, provide weed control, reduce disease build-up, and improve the efficiency of soil nutrients and nutrient cycling. This reduces the need for fertilizers and pesticides. Increasing the diversity of crops by introducing new genetic resources can increase yields by 10 to 15 percent compared to when they are grown in monoculture. Perennial crops reduce the need for tillage and thus help mitigate soil erosion, may sometimes tolerate drought better, increase water quality and help increase soil organic matter. There are research programs attempting to develop perennial substitutes for existing annual crops, such as replacing wheat with the wild grass Thinopyrum intermedium, or possible experimental hybrids of it and wheat. Being able to do all of this without the use of chemicals is one of the main goals of sustainability, which is why crop rotation is a central method of sustainable agriculture.
Related concepts
Organic agriculture
Organic agriculture can be defined as "an integrated farming system that strives for sustainability, the enhancement of soil fertility and biological diversity whilst, with rare exceptions, prohibiting synthetic pesticides, antibiotics, synthetic fertilizers, genetically modified organisms, and growth hormones" (H. Martin, Ontario Ministry of Agriculture, Food and Rural Affairs, Introduction to Organic Farming). Some claim organic agriculture may produce the most sustainable products available for consumers in the US, where no other alternatives exist, although the focus of the organics industry is not sustainability. In 2018, sales of organic products in the USA reached $52.5 billion. According to a USDA survey, two-thirds of Americans consume organic products at least occasionally.
Ecological farming
Ecological farming is a concept focused on the environmental aspects of sustainable agriculture. Ecological farming includes all methods, including organic, which regenerate ecosystem services such as prevention of soil erosion, water infiltration and retention, carbon sequestration in the form of humus, and increased biodiversity. Many techniques are used, including no-till farming, multispecies cover crops, strip cropping, terrace cultivation, shelter belts and pasture cropping. There is a plethora of methods and techniques employed when practicing ecological farming, all having their own unique benefits and implementations that lead to more sustainable agriculture. Crop genetic diversity is one method used to reduce the risks associated with monoculture crops, which can be susceptible to a changing climate. This form of biodiversity makes crops more resilient, increasing food security and enhancing the productivity of the field on a long-term scale.
The use of biodigesters is another method; these convert organic waste into a combustible gas and can provide several benefits to an ecological farm: the gas can be used as a fuel source, the digested residue as fertilizer for crops and fish ponds, and the process serves as a method for removing wastes that are rich in organic matter. Because the digested residue can be used as fertilizer, biodigesters reduce the amount of industrial fertilizer needed to sustain the yields of the farm. Another technique used is aquaculture integration, which combines fish farming with agricultural farming, diverting the wastes from animals and crops towards the fish farms to be used up instead of being leached into the environment. Mud from the fish ponds can also be used to fertilize crops. Organic fertilizers can also be employed on an ecological farm, such as animal and green manure. This allows soil fertility to be improved and well maintained, leads to reduced costs and increased yields, reduces the usage of non-renewable resources in industrial fertilizers (nitrogen and phosphorus), and reduces the environmental pressures posed by intensive agricultural systems. Precision agriculture can also be used, which focuses on efficient removal of pests using non-chemical techniques and minimizes the amount of tilling needed to sustain the farm. An example of a precision machine is the false seedbed tiller, which can remove a great majority of small weeds while tilling only one centimeter deep. This minimized tilling reduces the amount of new weeds that germinate from soil disturbance. Other methods that reduce soil erosion include contour farming, strip cropping, and terrace cultivation.
Benefits
Ecological farming involves the introduction of symbiotic species, where possible, to support the ecological sustainability of the farm. Associated benefits include a reduction in ecological debt and elimination of dead zones. Ecological farming is a pioneering, practical development which aims to create globally sustainable land management systems, and encourages review of the importance of maintaining biodiversity in food production and farming end products. One foreseeable option is to develop specialized automata to scan and respond to soil and plant conditions, providing intensive care for the soil and the plants. Accordingly, conversion to ecological farming may best utilize the information age, and become recognised as a primary user of robotics and expert systems.
Challenges
The challenge for ecological farming science is to achieve a mainstream productive food system that is sustainable or even regenerative. Locating ecological farms close to consumers can reduce the food-miles factor and help minimise damage to the biosphere from combustion-engine emissions involved in current food transportation. The design of an ecological farm is initially constrained by the same limitations as conventional farming: local climate, the soil's physical properties, the budget for beneficial soil supplements, and the available manpower and automatons; however, long-term water management by ecological farming methods is likely to conserve and increase water availability for the location, and to require far fewer inputs to maintain fertility.
Principles
Certain principles unique to ecological farming need to be considered. Food production should be ecological in both origin and destiny (the term destiny refers to the post-harvest ecological footprint involved in getting produce to the consumer).
Species that maintain ecosystem services should be integrated whilst providing a selection of alternative products. Food miles, packaging, energy consumption and waste should be minimised. A new ecosystem can be defined to suit human needs, using lessons from existing ecosystems around the world and favouring nutrient-dense food species. A knowledge base (advanced database) about soil microorganisms should be applied, so that discoveries of the ecological benefits of encouraging various kinds of microorganisms in productive systems such as forest gardens can be assessed and optimised; for example, in the case of naturally occurring microorganisms called denitrifiers.
Traditional agriculture
Often thought of as inherently destructive, slash-and-burn and slash-and-char shifting cultivation have been practiced in the Amazon for thousands of years. Some traditional systems combine polyculture with sustainability. In South-East Asia, rice-fish systems on rice paddies have raised freshwater fish as well as rice, producing an additional product and reducing eutrophication of neighboring rivers. A variant in Indonesia combines rice, fish, ducks and water fern; the ducks eat the weeds that would otherwise limit rice growth, saving labour and herbicides, while the duck and fish manure substitute for fertilizer.
Raised-field agriculture has recently been revived in certain areas of the world, such as the Altiplano region of Bolivia and Peru. It has resurged in the form of traditional Waru Waru raised fields, which create nutrient-rich soil in regions where such soil is scarce. This method is extremely productive and has recently been used by indigenous groups in the area and the nearby Amazon Basin to make use of lands that have historically been hard to cultivate.
Other forms of traditional agriculture include agroforestry, crop rotations, and water harvesting. Water harvesting is one of the largest and most common practices, particularly used in dry areas and seasons. In Ethiopia, over half of the country's GDP and over 80 percent of its exports are attributed to agriculture; yet the country is known for its intense droughts and dry periods. Rainwater harvesting is considered a low-cost alternative. This type of harvesting collects and stores water from rooftops during high-rain periods for use during droughts. Rainwater harvesting has been a widespread practice that has helped the country cope, focusing on runoff irrigation, roof water harvesting, and flood spreading.
Indigenous agriculture in North America
Native Americans in the United States practiced sustainable agriculture through their subsistence farming techniques. Many tribes grew or harvested their own food from plants that thrived in their local ecosystems. Native American farming practices are specific to local environments and work with natural processes. This is a practice called permaculture, and it involves a deep understanding of the local environment. Native American farming techniques also incorporate local biodiversity into many of their practices, which helps the land remain healthy. Many indigenous tribes incorporated intercropping into their agriculture, a practice in which multiple crops are planted together in the same area. This strategy allows crops to help one another grow through exchanged nutrients, maintained soil moisture, and physical support. The crops paired in intercropping often do not heavily compete for resources, which helps each of them to be successful.
For example, many tribes utilized intercropping in forms such as the Three Sisters garden. This gardening technique consists of corn, beans, and squash. These crops grow in unity: the corn stalk supports the beans, the beans produce nitrogen, and the squash retains moisture. Intercropping also provides a natural strategy for pest management and the prevention of weed growth. Intercropping is a natural agricultural practice that often improves the overall health of the soil and plants, increases crop yield, and is sustainable.
One of the most significant aspects of indigenous sustainable agriculture is traditional ecological knowledge of harvesting. The Anishinaabe tribes follow an ideology known as "the Honorable Harvest". The Honorable Harvest is a set of practices that emphasize the idea that people should "take only what you need and use everything you take." Resources are conserved through this practice because several rules are followed when harvesting a plant: never take the first plant, never take more than half of the plants, and never take the last plant. This encourages future growth of the plant and therefore leads to a sustainable use of the plants in the area.
Native Americans practiced agroforestry by managing the forest, animals, and crops together. They also helped promote tree growth through controlled burns and silviculture. Often, the remaining ash from these burns would be used to fertilize their crops. By improving the conditions of the forest, the local wildlife populations also increased. Native Americans allowed their livestock to graze in the forest, which provided natural fertilizer for the trees as well.
Regenerative agriculture
Regenerative agriculture is a conservation and rehabilitation approach to food and farming systems. It focuses on topsoil regeneration, increasing biodiversity, improving the water cycle, enhancing ecosystem services, supporting biosequestration, increasing resilience to climate change, and strengthening the health and vitality of farm soil. Practices include recycling as much farm waste as possible and adding composted material from sources outside the farm.
Alternative methods
Permaculture
Polyculture
There is limited evidence that polyculture may contribute to sustainable agriculture. A meta-analysis of a number of polycrop studies found that predator insect biodiversity was higher than in conventional systems at comparable yields, in certain two-crop systems combining a single cash crop with a cover crop. One approach to sustainability is to develop polyculture systems using perennial crop varieties. Such varieties are being developed for rice, wheat, sorghum, barley, and sunflowers. If these can be combined in polyculture with a leguminous cover crop such as alfalfa, nitrogen fixation will be added to the system, reducing the need for fertilizer and pesticides.
Local small-scale agriculture
The use of available city space (e.g., rooftop gardens, community gardens, garden sharing, organopónicos, and other forms of urban agriculture) may be able to contribute to sustainability. Some consider "guerrilla gardening" an example of sustainability in action; in some cases seeds of edible plants have been sown in local rural areas.
Hydroponics or soil-less culture
Hydroponics is an alternative to soil-based agriculture that creates an ideal environment for optimal growth without using soil. This farming technique can produce higher crop yields without compromising soil health.
The most significant drawback of this sustainable farming technique is the cost associated with its development.
Standards
Certification systems are important to the agriculture community and to consumers, as these standards attest to the sustainability of produce. Numerous sustainability standards and certification systems exist, including organic certification, Rainforest Alliance, Fair Trade, UTZ Certified, GlobalGAP, Bird Friendly, and the Common Code for the Coffee Community (4C). These standards specify rules that producers, manufacturers and traders need to follow so that the things they do, make, or grow do not hurt people and the environment. Such standards are also known as Voluntary Sustainability Standards (VSS): private standards that require products to meet specific economic, social or environmental sustainability metrics. The requirements can refer to product quality or attributes, but also to production and processing methods, as well as transportation. VSS are mostly designed and marketed by non-governmental organizations (NGOs) or private firms, and they are adopted by actors up and down the value chain, from farmers to retailers. Certifications and labels are used to signal the successful implementation of a VSS. According to the ITC Standards Map, agricultural products are the products most widely covered by standards. Around 500 VSS today apply to key exports of many developing countries, such as coffee, tea, bananas, cocoa, palm oil, timber, cotton, and organic agri-foods. VSS have been found to reduce eutrophication, water use, greenhouse gas emissions, and natural ecosystem conversion, and are thus considered a potential tool for sustainable agriculture.
The USDA produces an organic label that is supported by nationalized standards for farmers and facilities. The steps for certification consist of creating an organic system plan, which determines how produce will be tilled, grazed, harvested, stored, and transported. This plan also manages and monitors the substances used around the produce, the maintenance needed to protect the produce, and any nonorganic products that may come into contact with the produce. The organic system plan is then reviewed and inspected by a USDA certifying agent. Once the certification is granted, the produce receives an approval sticker from the USDA and is distributed across the U.S. In order to hold farmers accountable and ensure that Americans are receiving organic produce, these inspections are done at least once a year. This is just one example of a sustainable certification system for produce.
Policy
Sustainable agriculture is a topic in international policy concerning its potential to reduce environmental risks. In 2011, the Commission on Sustainable Agriculture and Climate Change, as part of its recommendations for policymakers on achieving food security in the face of climate change, urged that sustainable agriculture be integrated into national and international policy. The Commission stressed that increasing weather variability and climate shocks will negatively affect agricultural yields, necessitating early action to drive change in agricultural production systems towards increasing resilience. It also called for dramatically increased investments in sustainable agriculture in the next decade, including in national research and development budgets, land rehabilitation, economic incentives, and infrastructure improvement.
At the global level
During the 2021 United Nations Climate Change Conference, 45 countries pledged more than 4 billion dollars for the transition to sustainable agriculture. The organization Slow Food expressed concern about the effectiveness of the spending, as it concentrates on technological solutions and reforestation in place of "a holistic agroecology that transforms food from a mass-produced commodity into part of a sustainable system that works within natural boundaries." Additionally, the summit included negotiations aimed at heavily reducing CO2 emissions, becoming carbon neutral, ending deforestation and reliance on coal, and limiting methane emissions. In November, the Climate Action Tracker reported that global efforts are on track for a 2.7 °C temperature increase under current policies, finding that the current targets will not meet global needs, as coal and natural gas consumption are primarily responsible for the gap in progress. Since then, like-minded developing countries have asked for an addendum to the agreement that removes the obligation for developing countries to meet the same requirements as wealthy nations.
European Union
In May 2020, the European Union published a program named "From Farm to Fork" for making its agriculture more sustainable. The program's official page quotes Frans Timmermans, the Executive Vice-President of the European Commission. The program includes the following targets: making 25% of EU agriculture organic by 2030; reducing the use of pesticides by 50% by 2030; reducing the use of fertilizers by 20% by 2030; reducing nutrient loss by at least 50%; reducing the use of antimicrobials in agriculture and aquaculture by 50% by 2030; creating sustainable food labeling; reducing food waste by 50% by 2030; and dedicating €10 billion to research and innovation related to these issues.
United States
Policies from 1930 to 2000
The New Deal implemented policies and programs that promoted sustainable agriculture. The Agricultural Adjustment Act of 1933 provided farmers with payments under a supply management regime that capped production of important crops. This allowed farmers to focus on growing food rather than competing in the market-based system. The New Deal also provided a monetary incentive for farmers who left some of their fields unsown or ungrazed in order to improve soil conditions. The Cooperative Extension Service was also established, sharing funding responsibilities among the USDA, land-grant universities, and local communities.
From the 1950s to the 1990s, the government switched its stance on agricultural policy, which stalled sustainable agriculture. The Agricultural Act of 1954 supported farmers with flexible price supports, but only through commodity programs. The Food and Agricultural Act of 1965 introduced new income support payments and continued supply controls but reduced price supports. The Agriculture and Consumer Protection Act of 1973 removed price supports and instead introduced target prices and deficiency payments. It continued to promote commodity crops by lowering interest rates. The Food Security Act of 1985 continued commodity loan programs. These policies incentivized profit over sustainability, because the US government was encouraging farms to maximize their production output instead of placing checks on it. Farms were effectively being turned into food factories as they grew in size and produced more commodity crops like corn, wheat, and cotton.
From 1900 to 2002, the number of farms in the US decreased significantly, while the average size of a farm went up after 1950.
Current policies
In the United States, the federal Natural Resources Conservation Service (part of the USDA) provides technical and financial assistance to those interested in pursuing natural resource conservation along with production agriculture. Programs like SARE and the China-UK SAIN help promote research on sustainable agriculture practices and a framework for agriculture and climate change, respectively.
Future policies
There are currently policies on the table, associated with the Green New Deal, that could move the US agriculture system in a more sustainable direction. This policy promotes decentralizing agrarian governance by breaking up the large commodity farms that were created from the 1950s to the 1980s. Decentralized governance within the farming community would allow for more adaptive management at local levels, helping to focus on climate change mitigation, food security, and landscape-scale ecological stewardship. The Green New Deal would invest in public infrastructure to support farmers' transition from the industrial food regime and their acquisition of agroecological skills. Just like the New Deal, it would invest in cooperatives and commons to share and redistribute resources like land, food, equipment, research facilities, personnel, and training programs. All of these policies and programs would break down barriers that have prevented sustainable farmers and agriculture from taking hold in the United States.
Asia
China
In 2016, the Chinese government adopted a plan to reduce China's meat consumption by 50% in order to achieve a more sustainable and healthy food system. In 2019, the National Basic Research Program, or Program 973, funded research into Science and Technology Backyards (STBs). STBs are hubs, often created in rural areas with significant rates of small-scale farming, that combine knowledge of traditional practices with new innovations and technology implementation. The purpose of this program was to invest in sustainable farming throughout the country and increase food production while causing few negative environmental effects. The program proved successful, and the study found that the merging of traditional practices and appropriate technology was instrumental in achieving higher crop yields.
India
In collaboration with the Food and Land Use Coalition (FOLU), the Council on Energy, Environment and Water (CEEW) has given an overview of the current state of sustainable agriculture practices and systems (SAPSs) in India. India is aiming to scale up SAPSs, through policymakers, administrators, philanthropists, and others, as a vital alternative to conventional, input-intensive agriculture. These efforts identify 16 SAPSs, including agroforestry, crop rotation, rainwater harvesting, organic farming and natural farming, using agroecology as an investigative lens. They conclude that sustainable agriculture is far from mainstream in India. Proposals for several measures to promote SAPSs, including restructured government support and rigorous evidence generation on benefits and implementation, represent ongoing progress in Indian agriculture. An example of Indian initiatives exploring sustainable farming has been set by the Sowgood Foundation, a nonprofit founded by the educator Pragati Chaswal.
It started by teaching primary school children about sustainable farming by helping them farm on small farm strips in suburban farmhouses and gardens. Today, many government and private schools in Delhi, India have adopted the Sowgood Foundation curriculum for sustainable farming for their students.
Other countries
Israel
In 2012, the Israeli Ministry of Agriculture was at the height of the Israeli commitment to sustainable agriculture policy. A large factor in this policy was funding programs that made sustainable agriculture accessible to smaller Palestinian-Arab communities. The program was meant to create biodiversity, train farmers in sustainable agriculture methods, and hold regular meetings for agriculture stakeholders.
History
In 1907, the American author Franklin H. King discussed in his book Farmers of Forty Centuries the advantages of sustainable agriculture and warned that such practices would be vital to farming in the future. The phrase 'sustainable agriculture' was reportedly coined by the Australian agronomist Gordon McClymont. The term became popular in the late 1980s. There was an international symposium on sustainability in horticulture held by the International Society for Horticultural Science at the International Horticultural Congress in Toronto in 2002. At the following congress, in Seoul in 2006, the principles were discussed further.
A potential future inability to feed the world's population has been a concern since the writings of the English political economist Thomas Malthus in the early 1800s, but it has become increasingly important recently. At the very end of the twentieth century and into the early twenty-first, this issue became widely discussed in the U.S. because of growing anxieties about a rapidly increasing global population. Agriculture has long been the biggest industry worldwide and requires significant land, water, and labor inputs. At the turn of the twenty-first century, experts questioned the industry's ability to keep up with population growth. This debate led to concerns over global food insecurity and "solving hunger".
Homeothermy
Homeothermy, homothermy or homoiothermy is thermoregulation that maintains a stable internal body temperature regardless of external influence. This internal body temperature is often, though not necessarily, higher than the immediate environment (from Greek ὅμοιος homoios "similar" and θέρμη thermē "heat"). Homeothermy is one of the three types of thermoregulation in warm-blooded animal species. Homeothermy's opposite is poikilothermy. A poikilotherm is an organism that does not maintain a fixed internal temperature; rather, its internal temperature fluctuates based on its environment and physical behaviour.
Homeotherms are not necessarily endothermic. Some homeotherms may maintain constant body temperatures through behavioral mechanisms alone, i.e., behavioral thermoregulation. Many reptiles use this strategy. For example, desert lizards are remarkable in that they maintain near-constant activity temperatures that are often within a degree or two of their lethal critical temperatures.
Evolution
Origin of homeothermy
The evolution of homeothermy is a complex topic, with various hypotheses proposed to explain its origin. The most common hypotheses are the following:
Metabolic Efficiency Hypothesis: This hypothesis suggests that homeothermy evolved as a result of increased metabolic efficiency. Maintaining a consistent internal temperature allows for optimal enzyme activity and biochemical reactions. This efficiency could have provided an advantage in terms of sustained activity levels, improved foraging, and enhanced muscle function.
Endothermic Parental Care Hypothesis: This hypothesis proposes that homeothermy developed as a way to provide consistent and warm internal environments for developing embryos or young offspring. Endothermy could have enabled parents to keep their eggs or young warm, leading to improved survival rates and successful reproduction.
Activity Level Hypothesis: Homeothermy might have evolved to facilitate sustained activity levels. Cold-blooded animals are often limited by external temperatures, which can affect their ability to hunt, escape predators, and carry out other essential activities. Homeothermy could have provided a selective advantage by allowing animals to be active for longer periods of time, increasing their chances of survival.
Predator-Prey Dynamics: The evolution of homeothermy could be linked to predator-prey dynamics. If predators were cold-blooded while their prey were warm-blooded, the predators might have struggled to hunt efficiently in cooler conditions. Homeothermy in prey species could have provided a competitive advantage by allowing them to maintain consistent performance across a wider range of temperatures.
Environmental Instability: Fluctuations in the Earth's climate over evolutionary timescales could have driven the development of homeothermy. Environments with unpredictable temperature changes might have favored animals that could regulate their body temperature internally, allowing them to adapt to varying conditions.
Coevolution with Microorganisms: Homeothermy might have evolved in response to interactions with microorganisms, such as parasites and pathogens. Warm-blooded animals could have gained an advantage by creating an inhospitable environment for many disease-causing organisms, thus reducing the risk of infections.
Insulation and Thermoregulation: Homeothermy could have originated as a response to the development of insulating structures like fur, feathers, or other coverings. As animals developed these insulating features, they would have been better equipped to maintain a stable internal temperature. Over time, this could have led to more advanced mechanisms for thermoregulation.
Altitude and Oxygen Availability: Some researchers suggest that homeothermy might have evolved as animals migrated to higher altitudes where oxygen levels are lower. Homeothermy could have helped compensate for the reduced oxygen availability, ensuring efficient oxygen utilization and overall metabolic function.
Migratory Patterns: Animals that migrated long distances would have encountered a wide range of temperature conditions. Homeothermy could have evolved as a way to maintain energy-efficient migration by reducing the need to frequently stop and warm up.
Energetic Benefits: Homeothermy might have provided energetic advantages by allowing animals to exploit a wider range of ecological niches and food sources. Warm-blooded animals could have survived in habitats where cold-blooded competitors struggled due to temperature limitations.
These hypotheses are not mutually exclusive, and the evolution of homeothermy likely involved a combination of factors. The exact origin of homeothermy is still an area of active research and debate within the scientific community.
Advantages
Enzymes have a relatively narrow temperature range at which their efficiencies are optimal. Temperatures outside this range can greatly reduce the rate of a reaction or stop it altogether. A creature with a fairly constant body temperature can therefore specialize in enzymes which are efficient at that particular temperature. A poikilotherm must either operate well below optimum efficiency most of the time, migrate, hibernate, or expend extra resources producing a wider range of enzymes to cover the wider range of body temperatures. However, some environments offer much more consistent temperatures than others. For example, the tropics often have seasonal variations in temperature that are smaller than their diurnal variations. In addition, large bodies of water, such as the ocean and very large lakes, have moderate temperature variations. The waters below the ocean surface are particularly stable in temperature.
Disadvantages
Because many homeothermic animals use enzymes that are specialized for a narrow range of body temperatures, hypothermia rapidly leads to torpor and then death. Additionally, homeothermy obtained from endothermy is a high-energy strategy, and many environments offer a lower carrying capacity to these organisms. In cold weather the energy expenditure to maintain body temperature accelerates starvation and may lead to death.
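The temperature sensitivity of enzymatic rates discussed above is often summarized with the Q10 coefficient, the factor by which a reaction rate increases for every 10 °C rise in temperature. The sketch below is a minimal illustration assuming Q10 = 2 (rate doubling per 10 °C), a common textbook approximation rather than a measured value for any particular enzyme.

```python
# Minimal sketch of the Q10 temperature coefficient for reaction rates.
# Assumption: Q10 = 2.0, i.e. the rate doubles per 10 degree C rise; real
# enzymes vary, and rates collapse entirely outside their working range.

def relative_rate(temp_c: float, ref_temp_c: float = 37.0, q10: float = 2.0) -> float:
    """Reaction rate at temp_c relative to the rate at ref_temp_c."""
    return q10 ** ((temp_c - ref_temp_c) / 10.0)

for t in (17, 27, 37, 42):
    print(f"{t} degrees C: {relative_rate(t):.2f}x the rate at 37 degrees C")

# A homeotherm holding 37 degrees C keeps its enzymes near the reference rate,
# while a poikilotherm cooled to 17 degrees C operates at roughly a quarter of it.
```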
Incineration
Incineration is a waste treatment process that involves the combustion of substances contained in waste materials. Industrial plants for waste incineration are commonly referred to as waste-to-energy facilities. Incineration and other high-temperature waste treatment systems are described as "thermal treatment". Incineration of waste materials converts the waste into ash, flue gas and heat. The ash is mostly formed by the inorganic constituents of the waste and may take the form of solid lumps or particulates carried by the flue gas. The flue gases must be cleaned of gaseous and particulate pollutants before they are dispersed into the atmosphere. In some cases, the heat generated by incineration can be used to generate electric power.
Incineration with energy recovery is one of several waste-to-energy technologies, such as gasification, pyrolysis and anaerobic digestion. While incineration and gasification technologies are similar in principle, the energy produced from incineration is high-temperature heat, whereas combustible gas is often the main energy product from gasification. Incineration and gasification may also be implemented without energy and materials recovery.
In several countries, there are still concerns from experts and local communities about the environmental effect of incinerators (see arguments against incineration). In some countries, incinerators built just a few decades ago often did not include a materials separation stage to remove hazardous, bulky or recyclable materials before combustion. These facilities tended to risk the health of the plant workers and the local environment due to inadequate levels of gas cleaning and combustion process control. Most of these facilities did not generate electricity.
Incinerators reduce the solid mass of the original waste by 80–85% and the volume (already compressed somewhat in garbage trucks) by 95–96%, depending on composition and degree of recovery of materials such as metals from the ash for recycling. This means that while incineration does not completely replace landfilling, it significantly reduces the volume necessary for disposal. Garbage trucks often reduce the volume of waste in a built-in compressor before delivery to the incinerator. Alternatively, at landfills, the volume of uncompressed garbage can be reduced by approximately 70% using a stationary steel compressor, albeit at a significant energy cost. In many countries, simpler waste compaction is common practice at landfills.
Incineration has particularly strong benefits for the treatment of certain waste types in niche areas, such as clinical wastes and certain hazardous wastes, where pathogens and toxins can be destroyed by high temperatures. Examples include chemical multi-product plants with diverse toxic or very toxic wastewater streams, which cannot be routed to a conventional wastewater treatment plant.
Waste combustion is particularly popular in countries such as Japan, Singapore and the Netherlands, where land is a scarce resource. Denmark and Sweden have been leaders in using the energy generated from incineration for more than a century, in localised combined heat and power facilities supporting district heating schemes. In 2005, waste incineration produced 4.8% of the electricity consumption and 13.7% of the total domestic heat consumption in Denmark. A number of other European countries rely heavily on incineration for handling municipal waste, in particular Luxembourg, the Netherlands, Germany, and France.
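As a rough illustration of the reduction figures quoted above, the following sketch applies the stated 80–85% mass and 95–96% volume reductions to a hypothetical daily waste stream; the 100-tonne input is an assumed example value, not a figure from the text.

```python
# Back-of-the-envelope sketch of incineration's landfill savings, using the
# 80-85% mass and 95-96% volume reductions cited above.
# The 100 t/day input is a hypothetical example value.

daily_waste_tonnes = 100.0          # assumed input mass
mass_reduction = (0.80, 0.85)       # fraction of solid mass destroyed
volume_reduction = (0.95, 0.96)     # fraction of volume eliminated

ash_tonnes = [daily_waste_tonnes * (1 - r) for r in mass_reduction]
residual_volume_pct = [100 * (1 - r) for r in volume_reduction]

print(f"Ash to landfill: {ash_tonnes[1]:.0f}-{ash_tonnes[0]:.0f} t/day")
print(f"Residual volume: {residual_volume_pct[1]:.0f}-{residual_volume_pct[0]:.0f}% of original")
# => roughly 15-20 t of ash, and only 4-5% of the original volume, per 100 t burned
```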
History
The first UK incinerators for waste disposal were built in Nottingham by Manlove, Alliott & Co. Ltd. in 1874 to a design patented by Alfred Fryer. They were originally known as destructors. The first US incinerator was built in 1885 on Governors Island in New York, NY. The first facility in Austria-Hungary was built in 1905 in Brünn (now Brno).
Technology
An incinerator is a furnace for burning waste. Modern incinerators include pollution mitigation equipment such as flue gas cleaning. There are various types of incinerator plant design: moving grate, fixed grate, rotary kiln, and fluidised bed.
Burn pile
The burn pile or burn pit is one of the simplest and earliest forms of waste disposal, essentially consisting of a mound of combustible materials piled on the open ground and set on fire, leading to pollution. Burn piles can and have spread uncontrolled fires, for example, if the wind blows burning material off the pile into surrounding combustible grasses or onto buildings. As interior structures of the pile are consumed, the pile can shift and collapse, spreading the burn area. Even with no wind, small lightweight ignited embers can lift off the pile via convection and waft through the air into grasses or onto buildings, igniting them. Burn piles often do not result in full combustion of waste and therefore produce particulate pollution.
Burn barrel
The burn barrel is a somewhat more controlled form of private waste incineration, containing the burning material inside a metal barrel, with a metal grating over the exhaust. The barrel prevents the spread of burning material in windy conditions, and as the combustibles are reduced they can only settle down into the barrel. The exhaust grating helps to prevent the spread of burning embers. Typically steel drums are used as burn barrels, with air vent holes cut or drilled around the base for air intake. Over time, the very high heat of incineration causes the metal to oxidize and rust, and eventually the barrel itself is consumed by the heat and must be replaced. The private burning of dry cellulosic/paper products is generally clean-burning, producing no visible smoke, but plastics in household waste can cause private burning to create a public nuisance, generating acrid odors and fumes that make eyes burn and water. A two-layered design enables secondary combustion, reducing smoke.
Most urban communities ban burn barrels, and certain rural communities may have prohibitions on open burning, especially those home to many residents not familiar with this common rural practice. In the United States, private rural household or farm waste incineration of small quantities has typically been permitted so long as it is not a nuisance to others, does not pose a risk of fire such as in dry conditions, and the fire does not produce dense, noxious smoke. A handful of states, such as New York, Minnesota, and Wisconsin, have laws or regulations either banning or strictly regulating open burning due to health and nuisance effects. People intending to burn waste may be required to contact a state agency in advance to check current fire risk and conditions, and to alert officials of the controlled fire that will occur.
Moving grate
The typical incineration plant for municipal solid waste is a moving grate incinerator. The moving grate enables the movement of waste through the combustion chamber to be optimized to allow more efficient and complete combustion.
A single moving grate boiler can handle up to 35 metric tons of waste per hour, and can operate 8,000 hours per year with only one scheduled stop for inspection and maintenance of about one month's duration. Moving grate incinerators are sometimes referred to as municipal solid waste incinerators (MSWIs). The waste is introduced by a waste crane through the "throat" at one end of the grate, from where it moves down over the descending grate to the ash pit at the other end. Here the ash is removed through a water lock.
Part of the combustion air (primary combustion air) is supplied through the grate from below. This air flow also has the purpose of cooling the grate itself. Cooling is important for the mechanical strength of the grate, and many moving grates are also water-cooled internally. Secondary combustion air is supplied into the boiler at high speed through nozzles over the grate. It facilitates complete combustion of the flue gases by introducing turbulence for better mixing and by ensuring a surplus of oxygen. In multiple/stepped hearth incinerators, the secondary combustion air is introduced in a separate chamber downstream of the primary combustion chamber.
According to the European Waste Incineration Directive, incineration plants must be designed to ensure that the flue gases reach a temperature of at least 850 °C for 2 seconds in order to ensure proper breakdown of toxic organic substances. In order to comply with this at all times, it is required to install backup auxiliary burners (often fueled by oil), which are fired into the boiler in case the heating value of the waste becomes too low to reach this temperature alone. The flue gases are then cooled in the superheaters, where the heat is transferred to steam, heating the steam to typically 400 °C at a pressure of 40 bar for the electricity generation in the turbine. At this point, the flue gas has a temperature of around 200 °C, and is passed to the flue gas cleaning system.
In Scandinavia, scheduled maintenance is always performed during summer, when the demand for district heating is low. Often, incineration plants consist of several separate 'boiler lines' (boilers and flue gas treatment plants), so that waste can continue to be received at one boiler line while the others are undergoing maintenance, repair, or upgrading.
Fixed grate
The older and simpler kind of incinerator was a brick-lined cell with a fixed metal grate over a lower ash pit, with one opening in the top or side for loading and another opening in the side for removing incombustible solids called clinkers. Many small incinerators formerly found in apartment houses have now been replaced by waste compactors.
Rotary-kiln
The rotary-kiln incinerator is used by municipalities and by large industrial plants. This design of incinerator has two chambers: a primary chamber and a secondary chamber. The primary chamber in a rotary-kiln incinerator consists of an inclined refractory-lined cylindrical tube. The inner refractory lining serves as a sacrificial layer to protect the kiln structure, and needs to be replaced from time to time. Movement of the cylinder on its axis facilitates the movement of waste. In the primary chamber, the solid fraction is converted to gases through volatilization, destructive distillation and partial combustion reactions. The secondary chamber is necessary to complete gas-phase combustion reactions. The clinkers spill out at the end of the cylinder. A tall flue-gas stack, fan, or steam jet supplies the needed draft.
Fixed grate

The older and simpler kind of incinerator was a brick-lined cell with a fixed metal grate over a lower ash pit, with one opening in the top or side for loading and another opening in the side for removing incombustible solids called clinkers. Many small incinerators formerly found in apartment houses have now been replaced by waste compactors.

Rotary-kiln

The rotary-kiln incinerator is used by municipalities and by large industrial plants. This design has two chambers: a primary and a secondary chamber. The primary chamber consists of an inclined, refractory-lined cylindrical tube. The inner refractory lining serves as a sacrificial layer to protect the kiln structure and needs to be replaced from time to time. Rotation of the cylinder about its axis moves the waste along. In the primary chamber, the solid fraction is converted to gases through volatilization, destructive distillation and partial combustion reactions. The secondary chamber is necessary to complete the gas-phase combustion reactions. The clinkers spill out at the end of the cylinder. A tall flue-gas stack, fan, or steam jet supplies the needed draft. Ash drops through the grate, but many particles are carried along with the hot gases. The particles and any combustible gases may be combusted in an "afterburner".

Fluidized bed

A strong airflow is forced through a sandbed. The air seeps through the sand until a point is reached where the sand particles separate to let the air through; mixing and churning then occur, creating a fluidized bed into which fuel and waste can be introduced. The sand with the pre-treated waste and/or fuel is kept suspended on pumped air currents and takes on a fluid-like character. The bed is thereby violently mixed and agitated, keeping small inert particles and air in a fluid-like state. This allows the entire mass of waste, fuel and sand to be fully circulated through the furnace.

Specialized incinerator

Sawdust incinerators at furniture factories require particular attention, as they must handle resin powder and many other flammable substances. Controlled combustion and burn-back prevention systems are essential, because suspended dust can ignite much like liquefied petroleum gas.

Use of heat

The heat produced by an incinerator can be used to generate steam, which may then drive a turbine to produce electricity. The typical amount of net energy that can be produced per tonne of municipal waste is about 2/3 MWh of electricity and 2 MWh of district heating. Thus, incinerating about of waste per day will produce about 400 MWh of electrical energy per day (17 MW of electrical power sustained over 24 hours) and 1200 MWh of district heating energy each day.
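These figures reduce to a few lines of arithmetic. The sketch below is a back-of-the-envelope check, not a model of any particular plant; the 600 tonne-per-day throughput is an assumed value, chosen so that the quoted per-tonne yields reproduce the stated daily outputs.

```python
# Back-of-the-envelope check of the energy figures quoted above.
# Assumption: a hypothetical plant burning 600 tonnes of waste per day,
# chosen to reproduce the quoted daily outputs; real plants vary.

ELECTRICITY_MWH_PER_TONNE = 2.0 / 3.0   # net electricity per tonne of MSW
HEAT_MWH_PER_TONNE = 2.0                # net district heating per tonne of MSW

tonnes_per_day = 600                    # assumed throughput

electricity_mwh_per_day = tonnes_per_day * ELECTRICITY_MWH_PER_TONNE
heat_mwh_per_day = tonnes_per_day * HEAT_MWH_PER_TONNE
continuous_electric_mw = electricity_mwh_per_day / 24

print(f"Electricity:   {electricity_mwh_per_day:.0f} MWh/day")        # 400 MWh/day
print(f"District heat: {heat_mwh_per_day:.0f} MWh/day")               # 1200 MWh/day
print(f"Continuous electric power: {continuous_electric_mw:.1f} MW")  # ~16.7 MW, i.e. ~17 MW
```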
Pollution

Incineration produces a number of outputs, such as ash and the flue gas emitted to the atmosphere. Before the flue gas cleaning system, if one is installed, the flue gases may contain particulate matter, heavy metals, dioxins, furans, sulfur dioxide, and hydrochloric acid. In plants with inadequate flue gas cleaning, these outputs may add a significant pollution component to stack emissions. In a 1997 study, the Delaware Solid Waste Authority found that, for the same amount of energy produced, incineration plants emitted fewer particles and hydrocarbons and less SO2, HCl, CO and NOx than coal-fired power plants, but more than natural gas-fired power plants. According to Germany's Ministry of the Environment, waste incinerators reduce the amount of some atmospheric pollutants by substituting power produced by coal-fired plants with power from waste-fired plants.

Gaseous emissions

Dioxins and furans

The most publicized concerns about the incineration of municipal solid wastes (MSW) involve the fear that it produces significant amounts of dioxin and furan emissions, which are considered by many to be serious health hazards. The EPA announced in 2012 that the safe limit for human oral consumption is 0.7 picograms Toxic Equivalence (TEQ) per kilogram of bodyweight per day, which works out to about 17 billionths of a gram per year for a 150 lb person. In 2005, the Ministry of the Environment of Germany, where there were 66 incinerators at that time, estimated that "...whereas in 1990 one third of all dioxin emissions in Germany came from incineration plants, for the year 2000 the figure was less than 1%. Chimneys and tiled stoves in private households alone discharge approximately 20 times more dioxin into the environment than incineration plants."

According to the United States Environmental Protection Agency, the combustion percentages of the total dioxin and furan inventory from all known and estimated sources in the U.S. (not only incineration) for each type of incineration are as follows: 35.1% backyard barrels; 26.6% medical waste; 6.3% municipal wastewater treatment sludge; 5.9% municipal waste combustion; 2.9% industrial wood combustion. Thus, the controlled combustion of waste accounted for 41.7% of the total dioxin inventory. In 1987, before governmental regulations required the use of emission controls, there was a total of Toxic Equivalence (TEQ) of dioxin emissions from US municipal waste combustors. Today, the total emissions from the plants are TEQ annually, a reduction of 99%. Backyard barrel burning of household and garden wastes, still allowed in some rural areas, generates of dioxins annually. Studies conducted by the US-EPA demonstrated that, as of 1997, one family using a burn barrel produced more emissions than an incineration plant disposing of of waste per day, and five times that amount by 2007, owing to increased chemicals in household trash and decreased emissions from municipal incinerators using better technology. Most of the improvement in U.S. dioxin emissions has been for large-scale municipal waste incinerators. As of 2000, although small-scale incinerators (those with a daily capacity of less than 250 tons) processed only 9% of the total waste combusted, they produced 83% of the dioxins and furans emitted by municipal waste combustion.

Dioxin cracking methods and limitations

The breakdown of dioxin requires exposure of the molecular ring to a temperature high enough to trigger thermal breakdown of the strong molecular bonds holding it together. Small pieces of fly ash may be somewhat thick, and too brief an exposure to high temperature may degrade only the dioxin on the surface of the ash. For a large-volume air chamber, too brief an exposure may likewise leave only some of the exhaust gases at the full breakdown temperature. For this reason there is also a time element to the temperature exposure, to ensure complete heating through the thickness of the fly ash and the volume of waste gases. There are trade-offs between increasing the temperature and increasing the exposure time. Generally, where the molecular breakdown temperature is higher, the exposure time can be shorter, but excessively high temperatures can cause wear and damage to other parts of the incineration equipment. Likewise, the breakdown temperature can be lowered to some degree, but then the exhaust gases require a longer residence time of perhaps several minutes, which in turn requires large or long treatment chambers that take up a great deal of plant space.

A side effect of breaking the strong molecular bonds of dioxin is the potential for breaking the bonds of nitrogen gas (N2) and oxygen gas (O2) in the supply air. As the exhaust flow cools, these highly reactive detached atoms spontaneously form reactive oxides such as NOx in the flue gas, which could result in smog formation and acid rain if released directly into the local environment. These reactive oxides must be further neutralized with selective catalytic reduction (SCR) or selective non-catalytic reduction (see below).

Dioxin cracking in practice

The temperatures needed to break down dioxin are typically not reached when burning plastics outdoors in a burn barrel or garbage pit, causing high dioxin emissions as mentioned above.
While plastic does usually burn in an open-air fire, the dioxins remain after combustion and either float off into the atmosphere or remain in the ash, where they can be leached down into groundwater when rain falls on the ash pile. However, dioxin and furan compounds bond very strongly to solid surfaces and are not dissolved by water, so leaching processes are limited to the first few millimeters below the ash pile. The gas-phase dioxins can be substantially destroyed using catalysts, some of which can be present as part of the fabric filter bag structure.

Modern municipal incinerator designs include a high-temperature zone, where the flue gas is sustained at a temperature above for at least 2 seconds before it is cooled down. They are equipped with auxiliary heaters to ensure this at all times. These are often fueled by oil or natural gas, and are normally active for only a very small fraction of the time. Further, most modern incinerators utilize fabric filters (often with Teflon membranes to enhance collection of sub-micron particles) which can capture dioxins present in or on solid particles. For very small municipal incinerators, the required temperature for thermal breakdown of dioxin may be reached using a high-temperature electrical heating element, plus a selective catalytic reduction stage. Although dioxins and furans may be destroyed by combustion, their reformation as the emission gases cool, by a process known as "de novo synthesis", is a probable source of the dioxins measured in emission stack tests from plants with high combustion temperatures held at long residence times.

CO2

As in other complete combustion processes, nearly all of the carbon content in the waste is emitted as CO2 to the atmosphere. MSW contains approximately the same mass fraction of carbon as CO2 itself (27%), so incineration of 1 ton of MSW produces approximately 1 ton of CO2. If the waste were landfilled without prior stabilization (typically via anaerobic digestion), 1 ton of MSW would produce approximately methane via the anaerobic decomposition of the biodegradable part of the waste. Since the global warming potential of methane is 34, and 62 cubic meters of methane at 25 degrees Celsius weighs 40.7 kg, this is equivalent to 1.38 tons of CO2, more than the 1 ton of CO2 that incineration would have produced (a worked version of this comparison appears below). In some countries, large amounts of landfill gas are collected, but the global warming potential of the landfill gas emitted to the atmosphere is still significant. In the US it was estimated that the global warming potential of the emitted landfill gas in 1999 was approximately 32% higher than the amount of CO2 that would have been emitted by incineration. Since that study, the global warming potential estimate for methane has been revised from 21 to 35, which alone would raise this estimate to nearly triple the GWP effect of incinerating the same waste.

In addition, nearly all biodegradable waste has a biological origin. This material was formed by plants using atmospheric CO2, typically within the last growing season. If these plants are regrown, the CO2 emitted from their combustion will be taken out of the atmosphere once more. Such considerations are the main reason why several countries administer incineration of biodegradable waste as renewable energy. The rest, mainly plastics and other oil- and gas-derived products, is generally treated as non-renewable.
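The landfill-versus-incineration comparison above reduces to a few lines of arithmetic. The sketch below reproduces it using only the figures quoted in the text (62 m3 of methane, weighing 40.7 kg at 25 degrees Celsius, per tonne of landfilled waste, and a GWP of 34); it deliberately ignores landfill gas capture and the biogenic-carbon accounting just discussed.

```python
# Simplified greenhouse-gas comparison of incinerating versus landfilling
# one tonne of MSW, using only the figures quoted in the text above.
# Landfill gas capture and biogenic-carbon accounting are ignored.

CO2_TONNES_PER_TONNE_MSW = 1.0   # incineration: ~27% carbon -> ~1 t CO2

METHANE_MASS_KG = 40.7           # mass of the ~62 m3 of CH4 per tonne of MSW
METHANE_GWP = 34                 # 100-year global warming potential

landfill_co2e_tonnes = METHANE_MASS_KG * METHANE_GWP / 1000.0

print(f"Incineration: {CO2_TONNES_PER_TONNE_MSW:.2f} t CO2 per tonne of MSW")
print(f"Landfilling:  {landfill_co2e_tonnes:.2f} t CO2e per tonne of MSW")  # ~1.38
```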
Different results for the CO2 footprint of incineration can be reached with different assumptions. Local conditions (such as limited local district heating demand, no fossil-fuel-generated electricity to replace, or high levels of aluminium in the waste stream) can decrease the CO2 benefits of incineration. The methodology and other assumptions may also influence the results significantly; for example, the methane emissions from landfills occurring at a later date may be neglected or given less weight, or biodegradable waste may not be considered CO2 neutral. A 2008 study by Eunomia Research and Consulting on potential waste treatment technologies in London demonstrated that, by applying several of these (according to the authors) unusual assumptions, average existing incineration plants performed poorly on CO2 balance compared with the theoretical potential of other emerging waste treatment technologies.

Other emissions

Other gaseous emissions in the flue gas from incinerator furnaces include nitrogen oxides, sulfur dioxide, hydrochloric acid, heavy metals, and fine particles. Of the heavy metals, mercury is a major concern due to its toxicity and high volatility, as essentially all mercury in the municipal waste stream may exit in emissions if not removed by emission controls. The steam content of the flue gas may produce a visible plume from the stack, which can be perceived as visual pollution. This may be avoided by decreasing the steam content through flue-gas condensation and reheating, or by raising the flue gas exit temperature well above its dew point. Flue-gas condensation also allows the latent heat of vaporization of the water to be recovered, increasing the thermal efficiency of the plant.

Flue-gas cleaning

The quantity of pollutants in the flue gas from incineration plants may or may not be reduced by several processes, depending on the plant. Particulates are collected by particle filtration, most often electrostatic precipitators (ESPs) and/or baghouse filters; the latter are generally very efficient at collecting fine particles. In a 2006 investigation by the Ministry of the Environment of Denmark, the average particulate emissions from 16 Danish incinerators were below 2.02 g/GJ (grams per gigajoule of energy content in the incinerated waste). Detailed measurements of fine particles with sizes below 2.5 micrometres (PM2.5) were performed on three of the incinerators: one incinerator equipped with an ESP for particle filtration emitted 5.3 g/GJ fine particles, while two incinerators equipped with baghouse filters emitted 0.002 and 0.013 g/GJ PM2.5. For ultrafine particles (PM1.0), the numbers were 4.889 g/GJ PM1.0 from the ESP plant, while emissions of 0.000 and 0.008 g/GJ PM1.0 were measured from the plants equipped with baghouse filters.
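The Danish figures above are emission factors per unit of energy in the waste, not absolute masses. As a rough illustration of what such a factor implies, the hypothetical sketch below converts g/GJ factors into annual masses; the plant throughput and the heating value of the waste are illustrative assumptions, not data from the study.

```python
# Illustrative conversion of a PM2.5 emission factor (g/GJ) into an
# annual mass for a hypothetical plant. Throughput and heating value
# are assumptions for illustration only.

WASTE_TONNES_PER_YEAR = 200_000    # assumed annual throughput
HEATING_VALUE_GJ_PER_TONNE = 10.0  # assumed lower heating value of MSW

def annual_pm25_kg(emission_factor_g_per_gj: float) -> float:
    energy_gj = WASTE_TONNES_PER_YEAR * HEATING_VALUE_GJ_PER_TONNE
    return emission_factor_g_per_gj * energy_gj / 1000.0  # grams -> kg

# Factors measured in the Danish study quoted above:
print(f"ESP plant (5.3 g/GJ):        {annual_pm25_kg(5.3):,.0f} kg PM2.5/year")
print(f"Baghouse plant (0.013 g/GJ): {annual_pm25_kg(0.013):,.1f} kg PM2.5/year")
```

Under these assumptions, the baghouse-equipped plant emits tens of kilograms of PM2.5 per year where the ESP-equipped plant emits on the order of ten tonnes, which is why the filter choice dominates the comparison above.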
Acid gas scrubbers are used to remove hydrochloric acid, nitric acid, hydrofluoric acid, mercury, lead and other heavy metals. The efficiency of removal depends on the specific equipment, the chemical composition of the waste, the design of the plant, the chemistry of the reagents, and the ability of engineers to optimize these conditions, which may conflict for different pollutants. For example, mercury removal by wet scrubbers is considered coincidental and may be less than 50%. Basic scrubbers remove sulfur dioxide, forming gypsum by reaction with lime. Waste water from scrubbers must subsequently pass through a waste water treatment plant. Sulfur dioxide may also be removed by dry desulfurisation, by injecting limestone slurry into the flue gas before the particle filtration.

NOx is either reduced by catalytic reduction with ammonia in a catalytic converter (selective catalytic reduction, SCR) or by a high-temperature reaction with ammonia in the furnace (selective non-catalytic reduction, SNCR). Urea may be substituted for ammonia as the reducing reagent, but must be supplied earlier in the process so that it can hydrolyze into ammonia. Substituting urea can reduce costs and the potential hazards associated with storage of anhydrous ammonia. Heavy metals are often adsorbed on injected activated carbon powder, which is collected by particle filtration.

Solid outputs

Incineration produces fly ash and bottom ash, just as when coal is combusted. The total amount of ash produced by municipal solid waste incineration ranges from 4 to 10% by volume and 15–20% by weight of the original quantity of waste, and the fly ash amounts to about 10–20% of the total ash. The fly ash constitutes by far the greater potential health hazard, because it often contains high concentrations of heavy metals such as lead, cadmium, copper and zinc, as well as small amounts of dioxins and furans. The bottom ash seldom contains significant levels of heavy metals. Although some historic samples tested by the incinerator operators' group would meet the criteria for being classed as ecotoxic, at present the UK Environment Agency (EA) says "we have agreed" to regard incinerator bottom ash as "non-hazardous" until the testing programme is complete.
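To make the ash fractions above concrete, the sketch below applies the quoted by-weight ranges to a hypothetical plant; the 100,000-tonne annual throughput is an assumption for illustration only.

```python
# Rough ash mass balance for MSW incineration, applying the by-weight
# ranges quoted above to an assumed annual throughput.

waste_tonnes = 100_000               # assumed annual waste input

TOTAL_ASH_RANGE = (0.15, 0.20)       # 15-20% of input waste, by weight
FLY_ASH_SHARE_RANGE = (0.10, 0.20)   # fly ash is 10-20% of the total ash

ash_low = waste_tonnes * TOTAL_ASH_RANGE[0]
ash_high = waste_tonnes * TOTAL_ASH_RANGE[1]

print(f"Total ash:  {ash_low:,.0f} to {ash_high:,.0f} tonnes")
print(f"Fly ash:    {ash_low * FLY_ASH_SHARE_RANGE[0]:,.0f} "
      f"to {ash_high * FLY_ASH_SHARE_RANGE[1]:,.0f} tonnes")
print(f"Bottom ash: {ash_low * (1 - FLY_ASH_SHARE_RANGE[1]):,.0f} "
      f"to {ash_high * (1 - FLY_ASH_SHARE_RANGE[0]):,.0f} tonnes")
```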
Other pollution issues

Odor pollution can be a problem with old-style incinerators, but odors and dust are extremely well controlled in newer incineration plants, which receive and store the waste in an enclosed area under negative pressure, with the airflow routed through the boiler, preventing unpleasant odors from escaping into the atmosphere. A study found that the strongest odor at an incineration facility in Eastern China occurred at its waste tipping port. An issue that affects community relationships is the increased road traffic of waste collection vehicles transporting municipal waste to the incinerator; partly for this reason, most incinerators are located in industrial areas. The problem can be mitigated to an extent by transporting waste by rail from transfer stations.

Health effects

Scientific researchers have investigated the human health effects of pollutants produced by waste incineration. Many studies have examined health impacts from exposure to pollutants using U.S. EPA modeling guidelines; exposure through inhalation, ingestion, soil contact, and dermal contact is incorporated in these models. Research studies have also assessed exposure to pollutants through blood or urine samples of residents and workers who live near waste incinerators. Findings from a systematic review of previous research identified a number of symptoms and diseases related to incinerator pollution exposure, including neoplasia, respiratory issues, congenital anomalies, and infant deaths or miscarriages. Populations near old, inadequately maintained incinerators experience a higher degree of health issues. Some studies also identified possible cancer risk. However, difficulties in separating incinerator pollution exposure from combined industry, motor vehicle, and agriculture pollution limit these conclusions on health risks.

Many communities have advocated for the improvement or removal of waste incinerator technology. Specific pollutant exposures, such as high levels of nitrogen dioxide, have been cited in community-led complaints relating to increased emergency room visits for respiratory issues. Potential health effects of waste incineration technology have been publicized, notably when plants are located in communities already facing disproportionate health burdens. For example, Wheelabrator Baltimore in Maryland has been investigated owing to increased rates of asthma in its neighboring community, which is predominantly made up of low-income people of color. Community-led efforts have suggested a need for future research to address the lack of real-time pollution data, and have also cited a need for academic, government, and non-profit partnerships to better determine the health impacts of incineration.

Debate

Use of incinerators for waste management is controversial. The debate over incinerators typically involves business interests (representing both waste generators and incinerator firms), government regulators, environmental activists and local citizens, who must weigh the economic appeal of local industrial activity against their concerns over health and environmental risk. People and organizations professionally involved in this issue include the U.S. Environmental Protection Agency and a great many local and national air quality regulatory agencies worldwide.

Arguments for incineration

The concerns over the health effects of dioxin and furan emissions have been significantly lessened by advances in emission control designs and very stringent new governmental regulations that have resulted in large reductions in dioxin and furan emissions. The U.K. Health Protection Agency concluded in 2009 that "Modern, well managed incinerators make only a small contribution to local concentrations of air pollutants. It is possible that such small additions could have an impact on health but such effects, if they exist, are likely to be very small and not detectable."

Incineration plants can generate electricity and heat that substitute for power from plants fired by other fuels on the regional electricity and district heating grids, as well as steam supply for industrial customers. Incinerators and other waste-to-energy plants generate at least partially biomass-based renewable energy that offsets greenhouse gas pollution from coal-, oil- and gas-fired power plants. The E.U. considers energy generated from biogenic waste (waste with a biological origin) by incinerators to be non-fossil renewable energy under its emissions caps. These greenhouse gas reductions are in addition to those generated by the avoidance of landfill methane.

The bottom ash residue remaining after combustion has been shown to be a non-hazardous solid waste that can be safely put into landfills or recycled as construction aggregate; samples are tested for ecotoxic metals. In densely populated areas, finding space for additional landfills is becoming increasingly difficult. Fine particles can be efficiently removed from the flue gases with baghouse filters: even though approximately 40% of the incinerated waste in Denmark was incinerated at plants with no baghouse filters, estimates based on measurements by the Danish Environmental Research Institute showed that incinerators were responsible for only approximately 0.3% of the total domestic emissions of particulate smaller than 2.5 micrometres (PM2.5) to the atmosphere in 2006.
Incineration of municipal solid waste avoids the release of methane; every ton of MSW incinerated prevents about one ton of carbon dioxide equivalents from being released to the atmosphere. Most municipalities that operate incineration facilities have higher recycling rates than neighboring cities and countries that do not send their waste to incinerators. In a 2016 country overview by the European Environment Agency, the top-performing recycling countries are also those with the highest penetration of incineration, even though all material recovery from waste sent to incineration (e.g. metals and construction aggregate) is by definition not counted as recycling in European targets. The recovery of glass, stone and ceramic materials reused in construction, as well as ferrous and in some cases non-ferrous metals recovered from combustion residue, thus adds further to the actual recycled amounts. Metals recovered from ash would typically be difficult or impossible to recycle by conventional means; incineration removes the attached combustible material and thus provides an alternative to labor- or energy-intensive mechanical separation methods.

The volume of combusted waste is reduced by approximately 90%, increasing the life of landfills. Ash from modern incinerators is vitrified at temperatures of to , reducing the leachability and toxicity of the residue. As a result, special landfills are generally no longer required for incinerator ash from municipal waste streams, and existing landfills can see their life dramatically increased by combusting waste, reducing the need for municipalities to site and construct new landfills.

Arguments against incineration

The Scottish Environment Protection Agency's (SEPA) comprehensive health effects research, published in October 2009, was inconclusive on health effects. The authors stress that even though no conclusive evidence of non-occupational health effects from incinerators was found in the existing literature, "small but important effects might be virtually impossible to detect". The report highlights epidemiological deficiencies in previous UK health studies and suggests areas for future study. The U.K. Health Protection Agency produced a lesser summary in September 2009; many toxicologists criticise and dispute this report as not being epidemiologically comprehensive, and as thin on peer review and on the effects of fine particles on health.

Combustion concentrates ecotoxic heavy metals from the waste into the ash, mostly the fly ash component. This ash must be stored in specialized landfills. The less toxic bottom ash (incinerator bottom ash, IBA) can be encased in concrete as a building material, but there is a risk of hydrogen gas explosion due to its aluminum content; the UK Highway Authority put the use of IBA in foam concrete on hold while it investigated a series of explosions in 2009. Recovery of useful metals from ash is a newer but even less mature approach.

The health effects of dioxin and furan emissions from old incinerators, especially during start-up and shut-down or where filter bypass is required, continue to be a problem. Incinerators emit varying levels of heavy metals such as vanadium, manganese, chromium, nickel, arsenic, mercury, lead and cadmium, which can be toxic at very minute levels.
Alternative technologies are available or in development, such as mechanical biological treatment with anaerobic digestion (MBT/AD), autoclaving or mechanical heat treatment (MHT) using steam, plasma arc gasification (PGP), which uses electrically produced extremely high temperatures, or combinations of these treatments. Construction of incinerators competes with the development and introduction of these emerging technologies. An August 2008 UK government WRAP report found that median UK incinerator costs per ton were generally higher than those for MBT treatments by £18 per metric ton, and by £27 per metric ton for the most modern (post-2000) incinerators.

Building and operating waste processing plants such as incinerators requires long contract periods to recover initial investment costs, causing a long-term lock-in; incinerator lifetimes normally range from 25 to 30 years. This was highlighted by Peter Jones, OBE, the Mayor of London's waste representative, in April 2009.

Incinerators produce fine particles in the furnace. Even with modern particle filtering of the flue gases, a small portion of these is emitted to the atmosphere. PM2.5 is not separately regulated in the European Waste Incineration Directive, even though it has repeatedly been correlated spatially with infant mortality in the UK (M. Ryan's ONS-data-based maps around the EfW/CHP waste incinerators at Edmonton, Coventry, Chineham, Kirklees and Sheffield). Under the WID there is no requirement to monitor stack-top or downwind incinerator PM2.5 levels. In June 2008, several European doctors' associations (including cross-discipline experts such as physicians, environmental chemists and toxicologists), representing over 33,000 doctors, wrote a keynote statement directly to the European Parliament citing widespread concerns about incinerator particle emissions and the absence of specific fine and ultrafine particle-size monitoring, or of in-depth industry/government epidemiological studies, of these minute and invisible incinerator particle emissions.

Local communities are often opposed to the idea of locating waste processing plants such as incinerators in their vicinity (the Not in My Back Yard phenomenon). Studies in Andover, Massachusetts correlated 10% property devaluations with close incinerator proximity.

Prevention, waste minimisation, reuse and recycling of waste should all be preferred to incineration according to the waste hierarchy. Supporters of zero waste consider incinerators and other waste treatment technologies to be barriers to recycling and separation beyond particular levels, sacrificing waste resources for energy production. A 2008 Eunomia report found that, under some circumstances and assumptions, incineration achieves less CO2 reduction than other emerging EfW and CHP technology combinations for treating residual mixed waste. The authors found that CHP incinerator technology without waste recycling ranked 19 out of 24 combinations (where all alternatives to incineration were combined with advanced waste recycling plants), being 228% less efficient than the first-ranked advanced MBT maturation technology and 211% less efficient than the plasma gasification/autoclaving combination ranked second.

Some incinerators are visually undesirable, and in many countries they require a visually intrusive chimney stack. If reusable waste fractions are handled in waste processing plants such as incinerators in developing nations, this would cut out viable work for local economies.
An estimated 1 million people make a livelihood from collecting waste. The reduced levels of emissions from municipal waste incinerators and waste-to-energy plants, compared with historical peaks, are largely the product of the proficient use of emission control technology. Emission controls add to initial and operational expenses, and it should not be assumed that all new plants will employ the best available control technology if it is not required by law. Waste that has been deposited in a landfill can be mined even decades or centuries later and recycled with future technologies, which is not the case with incineration.

Trends in incinerator use

The history of municipal solid waste (MSW) incineration is linked intimately to the history of landfills and other waste treatment technology. The merits of incineration are inevitably judged in relation to the alternatives available. Since the 1970s, recycling and other prevention measures have changed the context for such judgements. Since the 1990s, alternative waste treatment technologies have been maturing and becoming viable. Incineration is a key process in the treatment of hazardous wastes and clinical wastes; it is often imperative that medical waste be subjected to the high temperatures of incineration to destroy pathogens and the toxic contamination it contains.

In North America

The first incinerator in the U.S. was built in 1885 on Governors Island in New York. In 1949, Robert C. Ross founded one of the first hazardous waste management companies in the U.S. He began Robert Ross Industrial Disposal because he saw an opportunity to meet the hazardous waste management needs of companies in northern Ohio, and in 1958 the company built one of the first hazardous waste incinerators in the U.S. The first full-scale, municipally operated incineration facility in the U.S. was the Arnold O. Chantland Resource Recovery Plant, built in 1975 in Ames, Iowa. The plant is still in operation and produces refuse-derived fuel that is sent to local power plants for fuel. The first commercially successful incineration plant in the U.S. was built in Saugus, Massachusetts, in October 1975 by Wheelabrator Technologies, and is still in operation today.

There are several environmental or waste management corporations that transport waste, ultimately to an incinerator or cement kiln treatment center. Currently (2009), there are three main businesses that incinerate waste: Clean Harbors, WTI-Heritage, and Ross Incineration Services. Clean Harbors has acquired many of the smaller, independently run facilities, accumulating 5–7 incinerators across the U.S. in the process. WTI-Heritage has one incinerator, located in the southeastern corner of Ohio, across the Ohio River from West Virginia.

Several old-generation incinerators have been closed: of the 186 MSW incinerators in 1990, only 89 remained by 2007, and of the 6,200 medical waste incinerators in 1988, only 115 remained in 2003. No new incinerators were built between 1996 and 2007. The main reasons for the lack of activity have been economics and tax policy. With the increase in the number of large, inexpensive regional landfills and, until recently, the relatively low price of electricity, incinerators were not able to compete for the "fuel", i.e., waste, in the U.S. In addition, tax credits for plants producing electricity from waste were rescinded in the U.S. between 1990 and 2004.

There has been renewed interest in incineration and other waste-to-energy technologies in the U.S. and Canada.
In the U.S., incineration was granted qualification for renewable energy production tax credits in 2004. Projects to add capacity to existing plants are underway, and municipalities are once again evaluating the option of building incineration plants rather than continuing to landfill municipal wastes. However, many of these projects have faced continued political opposition, in spite of renewed arguments for the greenhouse gas benefits of incineration and improved air pollution control and ash recycling.

In Europe

In Europe, as a result of the ban on landfilling untreated waste, many incinerators have been built in the last decade, with more under construction. Recently, a number of municipal governments have begun the process of contracting for the construction and operation of incinerators. In Europe, some of the electricity generated from waste is deemed to be from a "Renewable Energy Source" (RES) and is thus eligible for tax credits if privately operated. Also, some incinerators in Europe are equipped with waste recovery, allowing the reuse of ferrous and non-ferrous materials found in the burned waste. A prominent example is the AEB Waste Fired Power Plant in Amsterdam.

In Sweden, about 50% of the generated waste is burned in waste-to-energy facilities, producing electricity and supplying local cities' district heating systems. The importance of waste in Sweden's electricity generation scheme is reflected in the 2,700,000 tons of waste imported per year (in 2014) to supply its waste-to-energy facilities. Due to increasing targets for municipal solid waste recycling in the EU (at least 55% by 2025, rising to 65% by 2035), several traditional incineration countries are at risk of not meeting them, since at most 35% of waste would then remain available for thermal treatment and disposal; Denmark has since decided to reduce its incineration capacity by 30% by 2030. Incineration of non-hazardous waste was not included as a form of green investment in the EU taxonomy for sustainable activities, due to concerns about harming the circularity agenda, effectively stopping future EU funding to the municipal solid waste incineration sector.

In the United Kingdom

The technology employed in the UK waste management industry has lagged greatly behind that of Europe due to the wide availability of landfills. The Landfill Directive set down by the European Union led to the Government of the United Kingdom imposing waste legislation including the landfill tax and the Landfill Allowance Trading Scheme. This legislation is designed to reduce the release of greenhouse gases produced by landfills through the use of alternative methods of waste treatment. It is the UK Government's position that incineration will play an increasingly large role in the treatment of municipal waste and the supply of energy in the UK. In 2008, plans for potential incinerator locations existed for approximately 100 sites; these have been interactively mapped by UK NGOs. In June 2012, a DEFRA-backed grant scheme (the Farming and Forestry Improvement Scheme) was set up to encourage the use of low-capacity incinerators on agricultural sites to improve their biosecurity. A permit has recently been granted for what would be the UK's largest waste incinerator, in Bedfordshire, in the centre of the Cambridge – Milton Keynes – Oxford corridor.
Following the construction of a large incinerator at Greatmoor in Buckinghamshire, and plans to construct a further one near Bedford, the Cambridge – Milton Keynes – Oxford corridor will become a major incineration hub in the UK.

Mobile incinerators

Incineration units for emergency use

Emergency incineration systems exist for the urgent and biosecure disposal of animals and their by-products following a mass mortality or disease outbreak. An increase in regulation and enforcement by governments and institutions worldwide has been forced through public pressure and significant economic exposure. Contagious animal disease cost governments and industry $200 billion over the 20 years to 2012 and is responsible for over 65% of infectious disease outbreaks worldwide in the past sixty years. One-third of global meat exports (approximately 6 million tonnes) is affected by trade restrictions at any time, and as such the focus of governments, public bodies and commercial operators is on cleaner, safer and more robust methods of animal carcass disposal to contain and control disease. Large-scale incineration systems are available from niche suppliers and are often bought by governments as a safety net in case of a contagious outbreak. Many are mobile and can be quickly deployed to locations requiring biosecure disposal.

Small incinerator units

Small-scale incinerators exist for special purposes. For example, mobile small-scale incinerators are aimed at the hygienically safe destruction of medical waste in developing countries. Manufacturers such as Inciner8, a UK-based company, produce mobile incinerator models (for example, the I8-M50 and I8-M70) for this market. Small incinerators can be quickly deployed to remote areas where an outbreak has occurred, to dispose of infected animals quickly and without the risk of cross-contamination.

Containerised incinerator units

Containerised incinerators are a type of incinerator specifically designed to function in remote locations where traditional infrastructure may not be available. These incinerators are typically built within a shipping container for easy transport and installation.
Agroecology
Agroecology is an academic discipline that studies ecological processes applied to agricultural production systems. Bringing ecological principles to bear can suggest new management approaches in agroecosystems. The term can refer to a science, a movement, or an agricultural practice. Agroecologists study a variety of agroecosystems. The field of agroecology is not associated with any one particular method of farming, whether organic, regenerative, integrated, or industrial, intensive or extensive, although some use the name specifically for alternative agriculture.

Definition

Agroecology is defined by the OECD as "the study of the relation of agricultural crops and environment." Dalgaard et al. refer to agroecology as the study of the interactions between plants, animals, humans and the environment within agricultural systems. Francis et al. use the definition in the same way, but argue it should be restricted to the growing of food. Agroecology is a holistic approach that seeks to reconcile agriculture and local communities with natural processes for the common benefit of nature and livelihoods.

Agroecology is inherently multidisciplinary, drawing on sciences such as agronomy, ecology, environmental science, sociology, economics, history and others. It uses the natural sciences to understand elements of ecosystems such as soil properties and plant-insect interactions, and the social sciences to understand the effects of farming practices on rural communities, economic constraints on developing new production methods, and cultural factors determining farming practices. The system properties of agroecosystems studied may include productivity, stability, sustainability and equitability. Agroecology is not limited to any one scale; it can range from an individual gene to an entire population, or from a single field on a given farm to global systems.

Wojtkowski differentiates the ecology of natural ecosystems from agroecology on the grounds that in natural ecosystems there is no role for economics, whereas in agroecology, which focuses on organisms within planned and managed environments, it is human activities, and hence economics, that are the primary governing forces. Wojtkowski discusses the application of agroecology in agriculture, forestry and agroforestry in his 2002 book.

Varieties

Buttel identifies four varieties of agroecology in a 2003 conference paper. The main varieties he calls ecosystem agroecology, which he claims derives from the ecosystem ecology of Howard T. Odum and focuses less on rural sociology, and agronomic agroecology, which he identifies as oriented towards developing knowledge and practices to make agriculture more sustainable. The third, long-standing variety Buttel calls ecological political economy, which he defines as critiquing the politics and economy of agriculture, weighted toward radical politics. The smallest and newest variety Buttel coins agro-population ecology, which he says is very similar to the first but derives from the science of ecology primarily through the more modern theories of population ecology, such as the population dynamics of constituent species, their relationships to climate and biogeochemistry, and the role of genetics.

Dalgaard et al. identify different points of view: what they call early "integrative" agroecology, such as the investigations of Henry Gleason or Frederic Clements.
For the second version, they cite Hecht (1995) as coining "hard" agroecology, which they identify as more reactive to environmental politics but rooted in measurable units and technology. They themselves name "soft" agroecology, which they define as trying to measure agroecology in terms of "soft capital" such as culture or experience.

The term agroecology may be used to mean a science, a movement or a practice. Use of the name for a movement became more common in the 1990s, especially in the Americas. Miguel Altieri, whom Buttel groups with the "political" agroecologists, has published prolifically in this sense. He has applied agroecology to sustainable agriculture, alternative agriculture and traditional knowledge.

History

Overview

The history of agroecology depends on whether it is regarded as a body of thought or a method of practice, as many indigenous cultures around the world have historically used, and currently use, practices we would now recognize as agroecological. Examples include the Maori, the Nahua, and many other indigenous peoples. The Mexica people who inhabited Tenochtitlan before the colonization of the Americas used a system called chinampas that in many ways mirrors the use of composting in sustainable agriculture today. Agroecological practices such as nutrient cycling and intercropping have been used for hundreds of years across many different cultures. Indigenous peoples also currently make up a large proportion of the people using agroecological practices, and of those involved in the movement to shift more farming into an agroecological paradigm.

Pre-WWII academic thought

According to Gliessman and Francis et al., agronomy and ecology were first linked with the study of crop ecology by Klages in 1928, a study of where crops can best be grown. Wezel et al. say the first mention of the term agroecology was in 1928, with the publication of the term by Basil Bensin. Dalgaard et al. claim the German zoologist Friederichs was the first to use the name, in 1930, in his book on the zoology of agriculture and forestry, followed by the American crop physiologist Hansen in 1939, both using the word for the application of ecology within agriculture.

Post-WWII academic thought

Tischler's 1965 book Agrarökologie may be the first to be titled "agroecology". He analyzed the different components (plants, animals, soils and climate) and their interactions within an agroecosystem, as well as the impact of human agricultural management on these components. Gliessman describes how post-WWII ecologists focused on experiments in the natural environment while agronomists dedicated their attention to cultivated systems in agriculture; in the 1970s, as agronomists saw the value of ecology and ecologists began to use agricultural systems as study plots, studies in agroecology grew more rapidly. More books and articles using the concept of agroecosystems and the word agroecology started to appear in the 1970s. According to Dalgaard et al., it was probably the concept of "process ecology", as studied by Arthur Tansley in the 1930s, that inspired Harper's 1974 concept of agroecosystems, which they consider the foundation of modern agroecology. Dalgaard et al. regard Frederic Clements's investigations of ecology using social sciences, community ecology and a "landscape perspective" as agroecology, as well as Henry Gleason's investigations of the population ecology of plants using different scientific disciplines.
Ethnobotanist Efraim Hernandez X.'s work on traditional knowledge in Mexico in the 1970s led to new education programs in agroecology. Works such as Silent Spring and The Limits to Growth made the public aware of the environmental costs of agricultural production, prompting more research on sustainability from the 1980s onward. The view that the socio-economic context is fundamental appeared in the 1982 article Agroecologia del Tropico Americano by Montaldo, who argues that this context cannot be separated from agriculture when designing agricultural practices. In 1985, Miguel Altieri studied how the consolidation of farms and cropping systems impacts pest populations, and Gliessman studied how socio-economic, technological, and ecological components give rise to producers' choices of food production systems. In 1995, Edens et al. in Sustainable Agriculture and Integrated Farming Systems considered the economics of systems, ecological impacts, and ethics and values in agriculture.

Social movements

Several social movements have adopted agroecology as part of their larger organizing strategy. Groups like La Via Campesina have used agroecology as a method for achieving food sovereignty. Agroecology has also been used by farmers to resist global agricultural development patterns associated with the green revolution.

By region

Africa

Garí wrote two papers for the FAO in the early 2000s about using an agroecological approach, which he called "agrobiodiversity", to empower farmers to cope with the impacts of AIDS on rural areas in Africa. In 2011, the first encounter of agroecology trainers took place in Zimbabwe and issued the Shashe Declaration.

Europe

The European Commission supports the use of sustainable practices, such as precision agriculture, organic farming, agroecology, agroforestry and stricter animal welfare standards, through the Green Deal and the Farm to Fork Strategy.

Debate

Within academic research areas that focus on topics related to agriculture or ecology, such as agronomy, veterinary science, environmental science, and others, there is much debate regarding which model of agriculture or agroecology should be supported through policy. Agricultural departments of different countries support agroecology to varying degrees, with the UN perhaps its biggest proponent.
Conservation biology
Conservation biology is the study of the conservation of nature and of Earth's biodiversity, with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction and the erosion of biotic interactions. It is an interdisciplinary subject drawing on the natural and social sciences and on the practice of natural resource management. The conservation ethic is based on the findings of conservation biology.

Origins

The term conservation biology and its conception as a new field originated with the convening of "The First International Conference on Research in Conservation Biology", held at the University of California, San Diego in La Jolla, California, in 1978, led by American biologists Bruce A. Wilcox and Michael E. Soulé with a group of leading university and zoo researchers and conservationists including Kurt Benirschke, Sir Otto Frankel, Thomas Lovejoy, and Jared Diamond. The meeting was prompted by concern over tropical deforestation, disappearing species, and eroding genetic diversity within species. The conference and the proceedings that resulted sought to bridge the gap between theory in ecology and evolutionary genetics on the one hand and conservation policy and practice on the other. Conservation biology and the concept of biological diversity (biodiversity) emerged together, helping crystallize the modern era of conservation science and policy. The inherently multidisciplinary basis of conservation biology has led to new subdisciplines including conservation social science, conservation behavior and conservation physiology. It also stimulated further development of conservation genetics, which Otto Frankel had originated and which is now often considered a subdiscipline as well.

Description

The rapid decline of established biological systems around the world means that conservation biology is often referred to as a "discipline with a deadline". Conservation biology is tied closely to ecology in researching the population ecology (dispersal, migration, demographics, effective population size, inbreeding depression, and minimum population viability) of rare or endangered species. Conservation biology is concerned with phenomena that affect the maintenance, loss, and restoration of biodiversity, and with the science of sustaining evolutionary processes that engender genetic, population, species, and ecosystem diversity. The concern stems from estimates suggesting that up to 50% of all species on the planet will disappear within the next 50 years, which would increase poverty and starvation and reset the course of evolution on this planet. Researchers acknowledge that such projections are difficult, given the unknown potential impacts of many variables, including species introduction to new biogeographical settings and a non-analog climate.

Conservation biologists research and educate on the trends and processes of biodiversity loss and species extinctions, and on the negative effects these have on our capability to sustain the well-being of human society. Conservation biologists work in the field and office, in government, universities, non-profit organizations and industry. The topics of their research are diverse, because conservation biology is an interdisciplinary network with professional alliances in the biological as well as the social sciences. Those dedicated to the cause and profession advocate for a global response to the current biodiversity crisis based on morals, ethics, and scientific reason.
Organizations and citizens are responding to the biodiversity crisis through conservation action plans that direct research, monitoring, and education programs engaging concerns at local through global scales. There is increasing recognition that conservation is not just about what is achieved but how it is done.

History

Natural resource conservation

Conscious efforts to conserve and protect global biodiversity are a recent phenomenon. Natural resource conservation, however, has a history that extends back before the modern age of conservation. Resource ethics grew out of necessity, through direct relations with nature. Regulation or communal restraint became necessary to prevent selfish motives from taking more than could be locally sustained, thereby compromising the long-term supply for the rest of the community. This social dilemma with respect to natural resource management is often called the "Tragedy of the Commons". From this principle, conservation biologists can trace communal resource-based ethics throughout cultures as a solution to communal resource conflict. For example, the Alaskan Tlingit peoples and the Haida of the Pacific Northwest had resource boundaries, rules, and restrictions among clans with respect to the fishing of sockeye salmon. These rules were guided by clan elders who knew lifelong details of each river and stream they managed. There are numerous examples in history where cultures have followed rules, rituals, and organized practice with respect to communal natural resource management.

The Mauryan emperor Ashoka, around 250 BC, issued edicts restricting the slaughter of animals and certain kinds of birds, and also opened veterinary clinics. Conservation ethics are also found in early religious and philosophical writings, with examples in the Tao, Shinto, Hindu, Islamic and Buddhist traditions. In Greek philosophy, Plato lamented pasture land degradation: "What is left now is, so to say, the skeleton of a body wasted by disease; the rich, soft soil has been carried off and only the bare framework of the district left." In the Bible, God commanded through Moses that the land be left to rest from cultivation every seventh year. Before the 18th century, however, much of European culture considered it a pagan view to admire nature: wilderness was denigrated while agricultural development was praised. Yet as early as AD 680 a wildlife sanctuary was founded on the Farne Islands by St Cuthbert in response to his religious beliefs.

Early naturalists

Natural history was a major preoccupation in the 18th century, with grand expeditions and the opening of popular public displays in Europe and North America. By 1900 there were 150 natural history museums in Germany, 250 in Great Britain, 250 in the United States, and 300 in France. Preservationist or conservationist sentiments are a development of the late 18th to early 20th centuries.

Before Charles Darwin set sail on HMS Beagle, most people in the world, including Darwin, believed in special creation and that all species were unchanged. Georges-Louis Leclerc was one of the first naturalists to question this belief, proposing in his 44-volume natural history that species evolve due to environmental influences. Erasmus Darwin was another naturalist who suggested that species evolved; he noted that some species have vestigial structures, anatomical structures with no apparent current function that would have been useful to the species' ancestors.
The thinking of these 18th-century naturalists helped to change the mindset of early 19th-century naturalists, and by the early 19th century biogeography had been ignited through the efforts of Alexander von Humboldt, Charles Lyell and Charles Darwin. The 19th-century fascination with natural history engendered a fervor to be the first to collect rare specimens, before they were driven extinct by other such collectors. Although the work of many 18th- and 19th-century naturalists was to inspire nature enthusiasts and conservation organizations, their writings, by modern standards, showed insensitivity towards conservation, as they would kill hundreds of specimens for their collections.

Conservation movement

The modern roots of conservation biology can be found in the late 18th-century Enlightenment period, particularly in England and Scotland. Thinkers including Lord Monboddo described the importance of "preserving nature"; much of this early emphasis had its origins in Christian theology.

Scientific conservation principles were first practically applied to the forests of British India. The conservation ethic that began to evolve included three core principles: that human activity damaged the environment, that there was a civic duty to maintain the environment for future generations, and that scientific, empirically based methods should be applied to ensure this duty was carried out. Sir James Ranald Martin was prominent in promoting this ideology, publishing many medico-topographical reports that demonstrated the scale of damage wrought through large-scale deforestation and desiccation, and lobbying extensively for the institutionalization of forest conservation activities in British India through the establishment of Forest Departments. The Madras Board of Revenue started local conservation efforts in 1842, headed by Alexander Gibson, a professional botanist who systematically adopted a forest conservation program based on scientific principles. This was the first case of state conservation management of forests in the world. Governor-General Lord Dalhousie introduced the first permanent and large-scale forest conservation program in the world in 1855, a model that soon spread to other colonies, as well as the United States, where Yellowstone National Park was opened in 1872 as the world's first national park.

The term conservation came into widespread use in the late 19th century and referred to the management, mainly for economic reasons, of such natural resources as timber, fish, game, topsoil, pastureland, and minerals. In addition, it referred to the preservation of forests (forestry), wildlife (wildlife refuge), parkland, wilderness, and watersheds. This period also saw the passage of the first conservation legislation and the establishment of the first nature conservation societies. The Sea Birds Preservation Act of 1869 was passed in Britain as the first nature protection law in the world, after extensive lobbying from the Association for the Protection of Seabirds and the respected ornithologist Alfred Newton. Newton was also instrumental in the passage of the first Game laws from 1872, which protected animals during their breeding season so as to prevent the stock from being brought close to extinction. One of the first conservation societies was the Royal Society for the Protection of Birds, founded in 1889 in Manchester as a protest group campaigning against the use of great crested grebe and kittiwake skins and feathers in fur clothing.
Originally known as "the Plumage League", the group gained popularity and eventually amalgamated with the Fur and Feather League in Croydon to form the RSPB. The National Trust formed in 1895 with the manifesto to "...promote the permanent preservation, for the benefit of the nation, of lands, ... to preserve (so far practicable) their natural aspect." In May 1912, a month after the Titanic sank, banker and expert naturalist Charles Rothschild held a meeting at the Natural History Museum in London to discuss his idea for a new organisation to save the best places for wildlife in the British Isles. This meeting led to the formation of the Society for the Promotion of Nature Reserves, which later became the Wildlife Trusts.

In the United States, the Forest Reserve Act of 1891 gave the President power to set aside forest reserves from the land in the public domain. John Muir founded the Sierra Club in 1892, and the New York Zoological Society was set up in 1895. A series of national forests and preserves were established by Theodore Roosevelt from 1901 to 1909. The 1916 National Parks Act included a "use without impairment" clause, sought by John Muir, which eventually resulted in the removal of a proposal to build a dam in Dinosaur National Monument in 1959.

In the 20th century, Canadian civil servants, including Charles Gordon Hewitt and James Harkin, spearheaded the movement toward wildlife conservation, and in the 21st century professional conservation officers have begun to collaborate with indigenous communities to protect wildlife in Canada. Some conservation efforts are yet to fully take hold due to ecological neglect. For example, in the USA, 21st-century bowfishing of native fishes, which amounts to killing wild animals for recreation and disposing of them immediately afterwards, remains unregulated and unmanaged.

Global conservation efforts

In the mid-20th century, efforts arose to target individual species for conservation, notably efforts in big cat conservation in South America led by the New York Zoological Society. In the early 20th century the New York Zoological Society was instrumental in developing the concepts of establishing preserves for particular species and conducting the conservation studies needed to determine the suitability of locations as conservation priorities; the work of Henry Fairfield Osborn Jr., Carl E. Akeley, Archie Carr and his son Archie Carr III is notable in this era. Akeley, for example, having led expeditions to the Virunga Mountains and observed the mountain gorilla in the wild, became convinced that the species and the area were conservation priorities. He was instrumental in persuading Albert I of Belgium to act in defense of the mountain gorilla and establish Albert National Park (since renamed Virunga National Park) in what is now the Democratic Republic of the Congo.

By the 1970s, led primarily by work in the United States under the Endangered Species Act, along with the Species at Risk Act (SARA) of Canada and the Biodiversity Action Plans developed in Australia, Sweden and the United Kingdom, hundreds of species-specific protection plans ensued. Notably, the United Nations acted to conserve sites of outstanding cultural or natural importance to the common heritage of mankind. The programme was adopted by the General Conference of UNESCO in 1972. As of 2006, a total of 830 sites were listed: 644 cultural, 162 natural, and 24 mixed.
The first country to pursue aggressive biological conservation through national legislation was the United States, which passed back-to-back legislation in the Endangered Species Preservation Act (1966) and the National Environmental Policy Act (1970), which together injected major funding and protection measures into large-scale habitat protection and threatened species research. Other conservation developments, however, have taken hold throughout the world. India, for example, passed the Wildlife Protection Act of 1972.

In 1980, a significant development was the emergence of the urban conservation movement. A local organization was established in Birmingham, UK, a development followed in rapid succession in cities across the UK, then overseas. Although perceived as a grassroots movement, its early development was driven by academic research into urban wildlife. Initially perceived as radical, the movement's view of conservation as inextricably linked with other human activity has now become mainstream in conservation thought. Considerable research effort is now directed at urban conservation biology. The Society for Conservation Biology originated in 1985.

By 1992, most of the countries of the world had become committed to the principles of conservation of biological diversity with the Convention on Biological Diversity; subsequently, many countries began programmes of Biodiversity Action Plans to identify and conserve threatened species within their borders, as well as protect associated habitats. The late 1990s saw increasing professionalism in the sector, with the maturing of organisations such as the Institute of Ecology and Environmental Management and the Society for the Environment. Since 2000, the concept of landscape-scale conservation has risen to prominence, with less emphasis being given to single-species or even single-habitat focused actions. Instead, an ecosystem approach is advocated by most mainstream conservationists, although concerns have been expressed by those working to protect some high-profile species.

Ecology has clarified the workings of the biosphere; i.e., the complex interrelationships among humans, other species, and the physical environment. The burgeoning human population and associated agriculture, industry, and the ensuing pollution have demonstrated how easily ecological relationships can be disrupted.

Concepts and foundations

Measuring extinction rates

Extinction rates are measured in a variety of ways. Conservation biologists apply statistical measures of fossil records, rates of habitat loss, and a multitude of other variables, such as loss of biodiversity as a function of the rate of habitat loss and site occupancy, to obtain such estimates. The theory of island biogeography is possibly the most significant contribution toward the scientific understanding of both the process of species extinction and how to measure its rate. The current background extinction rate is estimated to be one species every few years. Actual extinction rates are estimated to be orders of magnitude higher. While this is important, it is worth noting that no existing models account for the complexity of unpredictable factors such as species movement, a non-analog climate, changing species interactions, evolutionary rates on finer time scales, and many other stochastic variables. The measure of ongoing species loss is made more complex by the fact that most of the Earth's species have not been described or evaluated.
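The species–area relationship that underlies island biogeography gives a feel for how such extinction estimates are produced. Below is a minimal sketch in Python; the exponent z and the habitat figures are illustrative assumptions, not values drawn from the studies cited here.

```python
# Illustrative sketch: estimating committed species loss from habitat loss
# using the species-area relationship S = c * A**z (island biogeography).
# The exponent z and all input numbers are assumptions for illustration.

def fraction_surviving(area_remaining_fraction: float, z: float = 0.25) -> float:
    """Fraction of species expected to persist when habitat shrinks,
    from S_new / S_old = (A_new / A_old)**z."""
    return area_remaining_fraction ** z

if __name__ == "__main__":
    original_species = 1000          # hypothetical species count in a region
    habitat_left = 0.5               # 50% of the habitat remains
    surviving = original_species * fraction_surviving(habitat_left)
    print(f"Predicted survivors: {surviving:.0f} of {original_species}")
    # With z = 0.25, losing half the habitat predicts ~16% of species
    # eventually lost (1000 -> ~841).
```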
Estimates vary greatly, from how many species actually exist (estimated range: 3,600,000–111,700,000) to how many have received a species binomial (estimated range: 1.5–8 million). Less than 1% of all described species have been studied beyond simply noting their existence. From these figures, the IUCN reports that 23% of vertebrates, 5% of invertebrates and 70% of plants that have been evaluated are designated as endangered or threatened. Better knowledge of the actual numbers of species is being constructed by The Plant List.

Systematic conservation planning

Systematic conservation planning is an effective way to seek and identify efficient and effective types of reserve design to capture or sustain the highest-priority biodiversity values and to work with communities in support of local ecosystems. Margules and Pressey identify six interlinked stages in the systematic planning approach:

1. Compile data on the biodiversity of the planning region
2. Identify conservation goals for the planning region
3. Review existing conservation areas
4. Select additional conservation areas
5. Implement conservation actions
6. Maintain the required values of conservation areas

Conservation biologists regularly prepare detailed conservation plans for grant proposals, or to effectively coordinate their plan of action and to identify best management practices. Systematic strategies generally employ the services of Geographic Information Systems to assist in the decision-making process. The SLOSS debate is often considered in planning.

Conservation physiology: a mechanistic approach to conservation

Conservation physiology was defined by Steven J. Cooke and colleagues as:

An integrative scientific discipline applying physiological concepts, tools, and knowledge to characterizing biological diversity and its ecological implications; understanding and predicting how organisms, populations, and ecosystems respond to environmental change and stressors; and solving conservation problems across the broad range of taxa (i.e. including microbes, plants, and animals). Physiology is considered in the broadest possible terms to include functional and mechanistic responses at all scales, and conservation includes the development and refinement of strategies to rebuild populations, restore ecosystems, inform conservation policy, generate decision-support tools, and manage natural resources.

Conservation physiology is particularly relevant to practitioners in that it has the potential to generate cause-and-effect relationships and reveal the factors that contribute to population declines.

Conservation biology as a profession

The Society for Conservation Biology is a global community of conservation professionals dedicated to advancing the science and practice of conserving biodiversity. Conservation biology as a discipline reaches beyond biology, into subjects such as philosophy, law, economics, humanities, arts, anthropology, and education. Within biology, conservation genetics and evolution are immense fields unto themselves, but these disciplines are of prime importance to the practice and profession of conservation biology. Conservationists introduce bias when they support policies using qualitative descriptions, such as "habitat degradation" or "healthy ecosystems". Conservation biologists advocate for reasoned and sensible management of natural resources and do so with a disclosed combination of science, reason, logic, and values in their conservation management plans.
This sort of advocacy is similar to the medical profession advocating for healthy lifestyle options: both are beneficial to human well-being yet remain scientific in their approach. There is a movement in conservation biology suggesting that a new form of leadership is needed to mobilize conservation biology into a more effective discipline, one able to communicate the full scope of the problem to society at large. The movement proposes an adaptive leadership approach that parallels an adaptive management approach. The concept is based on a new philosophy or leadership theory steering away from historical notions of power, authority, and dominance. Adaptive conservation leadership is reflective and more equitable, as it applies to any member of society who can mobilize others toward meaningful change using communication techniques that are inspiring, purposeful, and collegial. Adaptive conservation leadership and mentoring programs are being implemented by conservation biologists through organizations such as the Aldo Leopold Leadership Program.

Approaches

Conservation may be classified as either in-situ conservation, which protects an endangered species in its natural habitat, or ex-situ conservation, which occurs outside the natural habitat. In-situ conservation involves protecting or restoring the habitat. Ex-situ conservation, on the other hand, involves protection outside of an organism's natural habitat, such as on reservations or in gene banks, in circumstances where viable populations may not be present in the natural habitat.

The conservation of habitats such as forest, water and soil in their natural state is crucial for any species that depends on them. Creating an entirely new environment that merely resembles the original habitat of wild animals is less effective than preserving the original habitat. A reforestation campaign in Nepal has helped increase the density and the area covered by original forests, which has proved better than creating an entirely new environment after the original one is lost. Recent research indicates that old forests store more carbon than young ones, so it is especially important to protect the old forests. The reforestation campaign launched by Himalayan Adventure Therapy in Nepal periodically visits old forests that are vulnerable to losses of density and area due to unplanned urbanization. They then plant new saplings of the same tree families as the existing forest in the areas where old forest has been lost, and also plant those saplings in barren areas connected to the forest. This maintains the density and the area covered by the forest.

Also, non-interference may be used, which is termed a preservationist method. Preservationists advocate giving areas of nature and species a protected existence that halts interference from humans. In this regard, conservationists differ from preservationists in the social dimension, as conservation biology engages society and seeks equitable solutions for both society and ecosystems. Some preservationists emphasize the potential of biodiversity in a world without humans.

Ecological monitoring in conservation

Ecological monitoring is the systematic collection of data relevant to the ecology of a species or habitat at repeated intervals with defined methods. Long-term monitoring of environmental and ecological metrics is an important part of any successful conservation initiative.
Unfortunately, long-term data are not available for many species and habitats. A lack of historical data on species populations, habitats, and ecosystems means that any current or future conservation work will have to make assumptions to determine whether the work is having any effect on population or ecosystem health. Ecological monitoring can provide early warning signals of deleterious effects (from human activities or natural changes in an environment) on an ecosystem and its species. For signs of negative trends in ecosystem or species health to be detected, monitoring methods must be carried out at appropriate time intervals, and the metric must be able to capture the trend of the population or habitat as a whole. Long-term monitoring can include the continued measurement of many biological, ecological, and environmental metrics, including annual breeding success, population size estimates, water quality, and biodiversity (which can be measured in many ways, e.g. with the Shannon index). When determining which metrics to monitor for a conservation project, it is important to understand how an ecosystem functions and what roles different species and abiotic factors play within the system. It is important to have a precise reason for why ecological monitoring is implemented; within the context of conservation, this reasoning is often to track changes before, during, or after conservation measures are put in place to help a species or habitat recover from degradation and/or maintain integrity.

Another benefit of ecological monitoring is the hard evidence it provides scientists for advising policy makers and funding bodies about conservation efforts. Ecological monitoring data are important not only for convincing politicians, funders, and the public that a conservation program is worth implementing, but also for keeping them convinced that the program should continue to be supported. There is plenty of debate on how conservation resources can be used most efficiently; even within ecological monitoring, there is debate on which metrics money, time and personnel should be dedicated to for the best chance of making a positive impact. One general discussion topic is whether monitoring should happen where there is little human impact (to understand a system that has not been degraded by humans), where there is human impact (so the effects of humans can be investigated), or where there are data deserts and little is known about the habitats' and communities' responses to human perturbations.

The concept of bioindicators (indicator species) can be applied to ecological monitoring as a way to investigate how pollution is affecting an ecosystem. Species like amphibians and birds are highly susceptible to pollutants in their environment due to behaviours and physiological features that cause them to absorb pollutants at a faster rate than other species. Amphibians spend parts of their time in the water and on land, making them susceptible to changes in both environments. They also have very permeable skin that allows them to breathe and take in water, which means they also take in any air- or water-soluble pollutants. Birds often cover a wide range of habitat types annually, and also generally revisit the same nesting site each year. This makes it easier for researchers to track ecological effects at both an individual and a population level for the species.
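As an illustration of one such metric, the following sketch computes the Shannon diversity index mentioned above from hypothetical survey counts; the counts and years are invented.

```python
import math

# Minimal sketch: Shannon diversity index H' = -sum(p_i * ln(p_i)),
# one of the biodiversity metrics mentioned above. Counts are invented
# survey data for illustration.

def shannon_index(counts: list[int]) -> float:
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

surveys = {
    "2020": [55, 30, 10, 5],   # individuals per species at a monitored site
    "2023": [88, 8, 3, 1],     # same site, more dominated by one species
}
for year, counts in surveys.items():
    print(year, round(shannon_index(counts), 3))
# A falling H' across years can flag declining evenness long before
# any single species disappears from the site.
```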
Many conservation researchers believe that a long-term ecological monitoring program should be a priority for conservation projects, protected areas, and regions where environmental harm mitigation is practiced.

Ethics and values

Conservation biologists are interdisciplinary researchers who practice ethics in the biological and social sciences. Chan states that conservationists must advocate for biodiversity and can do so in a scientifically ethical manner by not promoting simultaneous advocacy against other competing values. A conservationist may be inspired by the resource conservation ethic, which seeks to identify what measures will deliver "the greatest good for the greatest number of people for the longest time." In contrast, some conservation biologists argue that nature has an intrinsic value independent of anthropocentric usefulness or utilitarianism. Aldo Leopold was a classical thinker and writer on such conservation ethics, whose philosophy, ethics and writings are still valued and revisited by modern conservation biologists.

Conservation priorities

The International Union for Conservation of Nature (IUCN) has organized a global assortment of scientists and research stations across the planet to monitor the changing state of nature in an effort to tackle the extinction crisis. The IUCN provides annual updates on the status of species conservation through its Red List. The IUCN Red List serves as an international conservation tool to identify the species most in need of conservation attention and provides a global index of the status of biodiversity. More than the dramatic rates of species loss, however, conservation scientists note that the sixth mass extinction is a biodiversity crisis requiring far more action than a priority focus on rare, endemic or endangered species. Concern over biodiversity loss covers a broader conservation mandate that looks at ecological processes, such as migration, and a holistic examination of biodiversity at levels beyond the species, including genetic, population and ecosystem diversity. Extensive, systematic, and rapid rates of biodiversity loss threaten the sustained well-being of humanity by limiting the supply of ecosystem services that are otherwise regenerated by the complex and evolving holistic network of genetic and ecosystem diversity. While the conservation status of species is employed extensively in conservation management, some scientists highlight that it is the common species that are the primary source of exploitation and habitat alteration by humanity. Moreover, common species are often undervalued despite their role as the primary source of ecosystem services.

While most in the community of conservation science "stress the importance" of sustaining biodiversity, there is debate on how to prioritize genes, species, or ecosystems, which are all components of biodiversity (e.g. Bowen, 1999). While the predominant approach to date has been to focus efforts on endangered species by conserving biodiversity hotspots, some scientists and conservation organizations, such as the Nature Conservancy, argue that it is more cost-effective, logical, and socially relevant to invest in biodiversity coldspots. Discovering, naming, and mapping the distribution of every species, they argue, is an ill-advised conservation venture. They reason it is better to understand the significance of the ecological roles of species.
Biodiversity hotspots and coldspots are a way of recognizing that the spatial concentration of genes, species, and ecosystems is not uniformly distributed on the Earth's surface. For example, "... 44% of all species of vascular plants and 35% of all species in four vertebrate groups are confined to 25 hotspots comprising only 1.4% of the land surface of the Earth."

Those arguing in favor of setting priorities for coldspots point out that there are other measures to consider beyond biodiversity. They point out that emphasizing hotspots downplays the importance of the social and ecological connections to vast areas of the Earth's ecosystems where biomass, not biodiversity, reigns supreme. It is estimated that 36% of the Earth's surface, encompassing 38.9% of the world's vertebrates, lacks the endemic species needed to qualify as a biodiversity hotspot. Moreover, measures show that maximizing protections for biodiversity does not capture ecosystem services any better than targeting randomly chosen regions. Population-level biodiversity (mostly in coldspots) is disappearing at ten times the rate of species-level biodiversity. The importance of addressing biomass versus endemism as a concern for conservation biology is highlighted in literature measuring the level of threat to global ecosystem carbon stocks, which do not necessarily reside in areas of endemism. A hotspot priority approach would not invest so heavily in places such as steppes, the Serengeti, the Arctic, or taiga. These areas contribute a great abundance of population-level (not species-level) biodiversity and ecosystem services, including cultural value and planetary nutrient cycling.

Those in favor of the hotspot approach point out that species are irreplaceable components of the global ecosystem, that they are concentrated in the places that are most threatened, and that they should therefore receive maximal strategic protection. This is a hotspot approach because the priority is set to target species-level concerns over population-level concerns or biomass. Species richness and genetic biodiversity contribute to and engender ecosystem stability, ecosystem processes, evolutionary adaptability, and biomass. Both sides agree, however, that conserving biodiversity is necessary to reduce the extinction rate, and that nature has an inherent value; the debate hinges on how to prioritize limited conservation resources in the most cost-effective way.

Economic values and natural capital

Conservation biologists have started to collaborate with leading global economists to determine how to measure the wealth and services of nature and to make these values apparent in global market transactions. This system of accounting is called natural capital and would, for example, register the value of an ecosystem before it is cleared to make way for development. The WWF publishes its Living Planet Report, which provides a global index of biodiversity by monitoring approximately 5,000 populations in 1,686 species of vertebrates (mammals, birds, fish, reptiles, and amphibians) and reports on the trends in much the same way that the stock market is tracked. This method of measuring the global economic benefit of nature has been endorsed by the G8+5 leaders and the European Commission.

Nature sustains many ecosystem services that benefit humanity. Many of the Earth's ecosystem services are public goods without a market and therefore have no price or value.
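The geometric-mean logic behind a trend index of this kind can be sketched in a few lines. This is a simplification for illustration only; the actual Living Planet Index methodology (chained annual rates, smoothing, taxonomic weighting) is considerably more elaborate, and the populations below are invented.

```python
import math

# Simplified sketch of the geometric-mean logic behind trend indices such
# as the Living Planet Index. The real methodology is more elaborate;
# the population figures here are invented.

def trend_index(baseline: list[float], current: list[float]) -> float:
    """Geometric mean of per-population ratios, indexed to 1.0 at baseline."""
    ratios = [c / b for b, c in zip(baseline, current)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

pop_1970 = [1200.0, 450.0, 90.0, 3000.0]   # hypothetical monitored populations
pop_2005 = [800.0, 300.0, 20.0, 2900.0]
print(f"Index vs baseline: {trend_index(pop_1970, pop_2005):.2f}")
# An index of, say, 0.6 reads like a stock ticker for nature: monitored
# populations are, on average, 40% below their baseline level.
```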
When the stock market registers a financial crisis, traders on Wall Street are not in the business of trading stocks for much of the planet's living natural capital stored in ecosystems. There is no natural stock market with investment portfolios in sea horses, amphibians, insects, and other creatures that provide a sustainable supply of ecosystem services valuable to society. The ecological footprint of society has exceeded the bio-regenerative capacity limits of the planet's ecosystems by about 30 percent, which is the same percentage of vertebrate populations that registered decline from 1970 through 2005.

The inherent natural economy plays an essential role in sustaining humanity, including the regulation of global atmospheric chemistry, pollinating crops, pest control, cycling soil nutrients, purifying our water supply, supplying medicines and health benefits, and unquantifiable quality-of-life improvements. There is a correlation between markets and natural capital, and between social income inequity and biodiversity loss: rates of biodiversity loss are greater in places where the inequity of wealth is greatest.

Although a direct market comparison of natural capital is likely insufficient in terms of human value, one measure of ecosystem services suggests the contribution amounts to trillions of dollars yearly. For example, one segment of North American forests has been assigned an annual value of 250 billion dollars; as another example, honey-bee pollination is estimated to provide between 10 and 18 billion dollars of value yearly. The value of ecosystem services on one New Zealand island has been imputed to be as great as the GDP of that region. This planetary wealth is being lost at an incredible rate as the demands of human society exceed the bio-regenerative capacity of the Earth. While biodiversity and ecosystems are resilient, the danger of losing them is that humans cannot recreate many ecosystem functions through technological innovation.

Strategic species concepts

Keystone species

Some species, called keystone species, form a central supporting hub unique to their ecosystem. The loss of such a species results in a collapse in ecosystem function, as well as the loss of coexisting species. Keystone species are usually predators, due to their ability to control the population of prey in their ecosystem. The importance of a keystone species was shown by the extinction of the Steller's sea cow (Hydrodamalis gigas) through its interaction with sea otters, sea urchins, and kelp. Kelp beds grow and form nurseries in shallow waters to shelter creatures that support the food chain. Sea urchins feed on kelp, while sea otters feed on sea urchins. With the rapid decline of sea otters due to overhunting, sea urchin populations grazed unrestricted on the kelp beds, and the ecosystem collapsed. Left unchecked, the urchins destroyed the shallow-water kelp communities that supported the Steller's sea cow's diet and hastened its demise. The sea otter was thought to be a keystone species because the coexistence of many ecological associates in the kelp beds relied upon otters for their survival. However, this was later questioned by Turvey and Risley, who showed that hunting alone would have driven the Steller's sea cow extinct.

Indicator species

An indicator species has a narrow set of ecological requirements and therefore becomes a useful target for observing the health of an ecosystem.
Some animals, such as amphibians with their semi-permeable skin and linkages to wetlands, have an acute sensitivity to environmental harm and thus may serve as a miner's canary. Indicator species are monitored in an effort to capture environmental degradation through pollution or some other link to proximate human activities. Monitoring an indicator species is a way to determine whether there is a significant environmental impact that can serve to advise or modify practice, such as through different forest silviculture treatments and management scenarios, or to measure the degree of harm that a pesticide may impart on the health of an ecosystem. Government regulators, consultants, or NGOs regularly monitor indicator species; however, there are limitations, coupled with many practical considerations, that must be addressed for the approach to be effective. It is generally recommended that multiple indicators (genes, populations, species, communities, and landscape) be monitored for effective conservation measurement that prevents harm to the complex, and often unpredictable, response of ecosystem dynamics (Noss, 1997).

Umbrella and flagship species

An example of an umbrella species is the monarch butterfly, because of its lengthy migrations and aesthetic value. The monarch migrates across North America, covering multiple ecosystems, and so requires a large area to exist. Any protections afforded to the monarch butterfly will at the same time umbrella many other species and habitats. An umbrella species is often used as a flagship species: a species, such as the giant panda, the blue whale, the tiger, the mountain gorilla or the monarch butterfly, that captures the public's attention and attracts support for conservation measures. Paradoxically, however, conservation bias towards flagship species sometimes threatens other species of chief concern.

Context and trends

Conservation biologists study trends and processes from the paleontological past to the ecological present as they gain an understanding of the context related to species extinction. It is generally accepted that there have been five major global mass extinctions in Earth's history: the Ordovician (440 mya), Devonian (370 mya), Permian–Triassic (245 mya), Triassic–Jurassic (200 mya), and Cretaceous–Paleogene (66 mya) extinction events. Within the last 10,000 years, human influence over the Earth's ecosystems has been so extensive that scientists have difficulty estimating the number of species lost; that is to say, the rates of deforestation, reef destruction, wetland draining and other human acts are proceeding much faster than human assessment of species. The latest Living Planet Report by the World Wide Fund for Nature estimates that we have exceeded the bio-regenerative capacity of the planet, requiring 1.6 Earths to support the demands placed on our natural resources.

Holocene extinction

Conservation biologists are dealing with, and have published, evidence from all corners of the planet indicating that humanity may be causing the sixth and fastest planetary extinction event. It has been suggested that an unprecedented number of species is becoming extinct in what is known as the Holocene extinction event. The global extinction rate may be approximately 1,000 times higher than the natural background extinction rate. It is estimated that two-thirds of all mammal genera and one-half of all mammal species weighing at least have gone extinct in the last 50,000 years.
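Multiples such as "1,000 times the background rate" come from comparing observed extinctions against a background rate, commonly expressed in extinctions per million species-years (E/MSY). A back-of-envelope sketch, with all inputs invented for illustration:

```python
# Back-of-envelope sketch of how "X times the background rate" figures are
# derived. Background extinction is often expressed in extinctions per
# million species-years (E/MSY); all inputs below are illustrative, not
# the study values behind the figures quoted in this section.

background_e_msy = 1.0        # assumed background: 1 extinction / million species-years
species_monitored = 6000      # hypothetical pool of monitored species
years_observed = 30           # observation window
extinctions_seen = 35         # extinctions recorded in that window

species_years = species_monitored * years_observed           # 180,000
observed_e_msy = extinctions_seen / (species_years / 1e6)    # per million species-years
print(f"Observed rate: {observed_e_msy:.0f} E/MSY, "
      f"{observed_e_msy / background_e_msy:.0f}x background")
# 35 extinctions over 0.18 million species-years is ~194 E/MSY, i.e.
# roughly 200 times an assumed background of 1 E/MSY.
```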
The Global Amphibian Assessment reports that amphibians are declining on a global scale faster than any other vertebrate group, with over 32% of all surviving species threatened with extinction. Of those threatened species, 43% have populations in continual decline. Since the mid-1980s, actual rates of extinction have exceeded 211 times the rates measured from the fossil record. However, "The current amphibian extinction rate may range from 25,039 to 45,474 times the background extinction rate for amphibians." The global extinction trend occurs in every major vertebrate group that is being monitored. For example, 23% of all mammals and 12% of all birds are Red Listed by the International Union for Conservation of Nature (IUCN), meaning they too are threatened with extinction. Even though extinction is natural, the decline in species is happening at a rate that evolution simply cannot match, leading to the greatest continual mass extinction on Earth. Humans have dominated the planet, and our high consumption of resources, along with the pollution generated, is affecting the environments in which other species live. There is a wide variety of species that humans are working to protect, such as the Hawaiian crow and the whooping crane of Texas. People can also take action to preserve species by advocating and voting for global and national policies that improve the climate, under the concepts of climate mitigation and climate restoration. The Earth's oceans demand particular attention, as climate change continues to alter pH levels, making them uninhabitable for shelled organisms, whose shells dissolve as a result.

Status of oceans and reefs

Global assessments of the world's coral reefs continue to report drastic and rapid rates of decline. By 2000, 27% of the world's coral reef ecosystems had effectively collapsed. The largest period of decline occurred in a dramatic "bleaching" event in 1998, in which approximately 16% of all the coral reefs in the world disappeared in less than a year. Coral bleaching is caused by a mixture of environmental stresses, including increases in ocean temperature and acidity, causing both the release of symbiotic algae and the death of corals. Decline and extinction risk in coral reef biodiversity has risen dramatically in the past ten years. The loss of coral reefs, which are predicted to go extinct in the next century, threatens the balance of global biodiversity, will have huge economic impacts, and endangers food security for hundreds of millions of people. Conservation biology plays an important role in international agreements covering the world's oceans and other issues pertaining to biodiversity.

The oceans are threatened by acidification due to an increase in CO2 levels. This is a most serious threat to societies relying heavily upon oceanic natural resources. A concern is that the majority of marine species will not be able to evolve or acclimate in response to the changes in ocean chemistry. The prospect of averting mass extinction seems slim when "90% of all of the large (average approximately ≥50 kg), open ocean tuna, billfishes, and sharks in the ocean" are reportedly gone. Given the scientific review of current trends, the ocean is predicted to have few surviving multicellular organisms, with only microbes left to dominate marine ecosystems.
Groups other than vertebrates

Serious concerns are also being raised about taxonomic groups that do not receive the same degree of social attention or attract the funds that the vertebrates do. These include fungal (including lichen-forming), invertebrate (particularly insect) and plant communities, in which the vast majority of biodiversity is represented. Conservation of fungi and conservation of insects, in particular, are both of pivotal importance for conservation biology. As mycorrhizal symbionts, and as decomposers and recyclers, fungi are essential for the sustainability of forests. The value of insects in the biosphere is enormous because they outnumber all other living groups in measure of species richness. The greatest bulk of biomass on land is found in plants, which is sustained by insect relations. This great ecological value of insects is countered by a society that often reacts negatively toward these aesthetically 'unpleasant' creatures.

One area of concern in the insect world that has caught the public eye is the mysterious case of the missing honey bees (Apis mellifera). Honey bees provide an indispensable ecological service through their acts of pollination, supporting a huge variety of agricultural crops, and honey and wax are used extensively throughout the world. The sudden disappearance of bees leaving empty hives, or colony collapse disorder (CCD), is not uncommon. However, in a 16-month period from 2006 through 2007, 29% of 577 beekeepers across the United States reported CCD losses in up to 76% of their colonies. This sudden demographic loss in bee numbers is placing a strain on the agricultural sector. The cause of the massive declines puzzles scientists. Pests, pesticides, and global warming are all being considered as possible causes.

Another highlight that links conservation biology to insects, forests, and climate change is the mountain pine beetle (Dendroctonus ponderosae) epidemic of British Columbia, Canada, which has infested of forested land since 1999. An action plan has been prepared by the Government of British Columbia to address this problem.

Conservation biology of parasites

A large proportion of parasite species are threatened by extinction. A few of them are being eradicated as pests of humans or domestic animals; however, most of them are harmless. Parasites also make up a significant amount of global biodiversity, given that they account for a large proportion of all species on Earth, making them of increasing conservation interest. Threats include the decline or fragmentation of host populations and the extinction of host species. Parasites are intricately woven into ecosystems and food webs, thereby occupying valuable roles in ecosystem structure and function.

Threats to biodiversity

Many threats to biodiversity exist today. The acronym H.I.P.P.O. can be used to summarize the top present-day threats: Habitat loss, Invasive species, Pollution, human Population, and Overharvesting. The primary threats to biodiversity are habitat destruction (such as deforestation, agricultural expansion, and urban development) and overexploitation (such as the wildlife trade). Habitat fragmentation also poses challenges, because the global network of protected areas covers only 11.5% of the Earth's surface. A significant consequence of fragmentation and the lack of linked protected areas is the reduction of animal migration on a global scale.
Considering that billions of tonnes of biomass are responsible for nutrient cycling across the Earth, the reduction of migration is a serious matter for conservation biology. However, human activities need not necessarily cause irreparable harm to the biosphere. With conservation management and planning for biodiversity at all levels, from genes to ecosystems, there are examples where humans coexist sustainably with nature. Even with the current threats to biodiversity, there are ways to improve the current condition and start anew.

Many of the threats to biodiversity, including disease and climate change, are reaching inside the borders of protected areas, leaving them 'not-so protected' (e.g. Yellowstone National Park). Climate change, for example, is often cited as a serious threat in this regard, because there is a feedback loop between species extinction and the release of carbon dioxide into the atmosphere. Ecosystems store and cycle large amounts of carbon, which regulates global conditions. Major climate shifts are now under way, with temperature changes making survival difficult for some species. The effects of global warming add a catastrophic threat toward a mass extinction of global biological diversity.

Many more species are predicted to face unprecedented levels of extinction risk due to population increase, climate change and economic development in the future. Conservationists have claimed that not all species can be saved, and that they must decide which ones their efforts should protect. This concept is known as conservation triage. The extinction threat is estimated to range from 15 to 37 percent of all species by 2050, or 50 percent of all species over the next 50 years. The current extinction rate is 100–100,000 times more rapid than the rates that prevailed over the last several billion years.
https://en.wikipedia.org/wiki/Horticulture
Horticulture
Horticulture is the art and science of growing ornamental plants, fruits, vegetables, flowers, trees and shrubs. Horticulture is commonly associated with the more professional and technical aspects of plant cultivation, on a smaller and more controlled scale than agronomy. There are various divisions of horticulture because plants are grown for a variety of purposes. These divisions include, but are not limited to: propagation, arboriculture, landscaping, floriculture and turf maintenance. For each of these there are various professions, aspects, tools and associated challenges, each requiring the highly specialized skills and knowledge of the horticulturist.

Typically, horticulture is characterized as the ornamental, small-scale and non-industrial cultivation of plants; horticulture is distinct from gardening in its emphasis on scientific methods, plant breeding, and technical cultivation practices, while gardening, even at a professional level, tends to focus more on the aesthetic care and maintenance of plants in gardens or landscapes. However, some aspects of horticulture are industrialized or commercial, such as greenhouse production or controlled-environment agriculture (CEA).

Horticulture began with the domestication of plants 10,000–20,000 years ago. At first, only plants for sustenance were grown and maintained, but as humanity became increasingly sedentary, plants were also grown for their ornamental value. Horticulture emerged as a field distinct from agriculture when humans sought to cultivate plants for pleasure on a smaller scale, rather than exclusively for sustenance. Emerging technologies are moving the industry forward, especially in the alteration of plants to be more resistant to parasites, disease and drought. Modifying technologies such as CRISPR are also improving the nutrition, taste and yield of crops.

Many horticultural organizations and societies around the world have been formed by horticulturists and those within the industry. These include the Royal Horticultural Society, the International Society for Horticultural Science, and the American Society of Horticultural Science.

Divisions of horticulture and types of horticulturists

There are divisions and sub-divisions within horticulture because plants are grown for many different reasons. Some of the divisions of horticulture include:

gardening
plant production and propagation
arboriculture
landscaping
floriculture
garden design and maintenance
turf maintenance
plant conservation and landscape restoration

Horticulture includes the cultivation of all plants, including, but not limited to: ornamental plants, fruits, vegetables, flowers, turf, nuts, seeds, herbs and other medicinal/edible plants. This cultivation may occur in garden spaces, nurseries, greenhouses, vineyards, orchards, parks, recreation areas, etc.

Horticulturists study and practice the cultivation of plant material professionally. There are many types of horticulturists with different job titles, including: gardener, grower, farmer, arborist, floriculturist, landscaper, agronomist, designer, landscape architect, lawn-care specialist, nursery manager, botanical garden curator, and horticultural therapist. They may be hired by a variety of companies and institutions including, but not limited to: botanical gardens, private/public gardens, parks, cemeteries, greenhouses, golf courses, vineyards, estates, landscaping companies, nurseries, educational institutions, etc. They may also be self-employed.
History

Horticulture began with the domestication of plants 10,000–20,000 years ago and has since been deeply integrated into human history. The domestication of plants occurred independently in various civilizations across the globe. The history of horticulture overlaps with the history of agriculture and the history of botany, as all three originated with the domestication of various plants for food. In Europe, agriculture and horticulture diverged at some point during the Middle Ages.

Early practices in horticulture

Early practices in horticulture included various tools and methods of land management, with different methods and plant types used for different purposes. Methods, tools and the plants grown have always depended on the culture and climate.

Pre-colonized North and Central America

Many traditional horticultural practices are known. For example, the Indigenous peoples of pre-colonized North America used biochar to enhance soil productivity by smoldering plant waste; European settlers called this soil Terra Preta de Indio. In North America, Indigenous people grew maize, squash, and sunflower, among other crops. Mesoamerican cultures focused on cultivating crops on a small scale, such as the milpa or maize field, around their dwellings or in specialized plots that were visited occasionally during migrations from one area to the next. In Central America, Maya horticulture involved augmentation of the forest with useful trees such as papaya, avocado, cacao, ceiba and sapodilla. In the fields, multiple crops such as beans, squash, pumpkins and chili peppers were grown. The first horticulturists in many cultures were mainly or exclusively women.

Historical uses for plants in horticulture

In addition to their medicinal and nutritional value, plants have also been grown for their beauty, to impress, and to demonstrate the power, knowledge, status and even wealth of those in control of the cultivated plant material. This symbolic power of plants existed even before the beginnings of their cultivation. There is evidence that various gardens maintained by the Aztecs were sacred, as they grew plants that held religious value. Plants were grown for their metaphorical relation to gods and goddesses. Flowers held symbolic power in religious rites, as they were offered to the gods and given in ceremonies to leaders to demonstrate their connection to the gods.

Aspects of horticulture

Propagation

Plant propagation in horticulture is the process by which the number of individual plants is increased. Propagation involves both sexual and asexual methods. Sexual propagation uses seeds, while asexual propagation involves the division of plants and the separation of tubers, corms and bulbs, using techniques such as cutting, layering and grafting.

Plant selection

When selecting plants to cultivate, a horticulturist may consider aspects of the plant's intended use, including plant morphology, rarity, and utility. When selecting plants for the landscape, observations of the location must be made first. Soil type, temperature, climate, light, moisture, and pre-existing plants are considered when selecting plant material for the location. Plant selection may be for annual displays or for more permanent plantings. Characteristics of the plant, such as mature height and size, colour, growth habit, ornamental value, flowering time and invasive potential, finalize the plant selection process.
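The selection checklist above lends itself to a simple filtering step. The following hypothetical sketch matches candidate plants against observed site conditions; the plant records, attributes and thresholds are invented for illustration.

```python
# Hypothetical sketch of the plant-selection checklist described above:
# filter candidate plants against observed site conditions. All names and
# attributes are invented for illustration.

from dataclasses import dataclass

@dataclass
class Plant:
    name: str
    light: str            # "full sun", "part shade", "shade"
    moisture: str         # "dry", "medium", "wet"
    max_height_m: float
    invasive: bool

candidates = [
    Plant("Serviceberry", "part shade", "medium", 6.0, False),
    Plant("Common reed", "full sun", "wet", 4.0, True),
    Plant("Red oak", "full sun", "medium", 25.0, False),
]

def suitable(p: Plant, light: str, moisture: str, height_limit_m: float) -> bool:
    # Matches the site's light and moisture observations, respects the
    # mature-height constraint, and screens out invasive species.
    return (p.light == light and p.moisture == moisture
            and p.max_height_m <= height_limit_m and not p.invasive)

site = {"light": "part shade", "moisture": "medium", "height_limit_m": 10.0}
print([p.name for p in candidates if suitable(p, **site)])
# -> ['Serviceberry']
```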
Controlling environmental/growing variables

Environmental factors affecting plant development include temperature, light, water, soil pH, nutrient availability, weather, humidity, elevation, terrain, and micro-climate. In horticulture, these environmental variables may be avoided, controlled or manipulated in an indoor growing environment.

Temperature

Plants require specific temperatures to grow and develop properly. Temperature can be controlled through a variety of methods. Covering plants with plastic, in the form of cones called hot caps or of tunnels, can help to manipulate the surrounding temperature. Mulching is also an effective method of protecting outdoor plants from frost during the winter. Other frost prevention methods include wind machines, heaters, and sprinklers.

Light

Plants have evolved to require different amounts of light and lengths of daytime; their growth and development are determined by the amount of light they receive. Control of this may be achieved artificially with fluorescent lights in an indoor setting. Manipulating the amount of light also controls flowering: lengthening the day encourages the flowering of long-day plants and discourages the flowering of short-day plants.

Water

Water management methods involve employing irrigation and drainage systems and matching soil moisture to the needs of the species. Irrigation methods include surface irrigation, sprinkler irrigation, sub-irrigation, and trickle irrigation. Watering volume, pressure, and frequency are changed to optimize the growing environment. On a small scale, watering can be done manually.

Growing media and soil management

The choice of growing media, and of the components within the media, helps support plant life. In a greenhouse environment, growers may choose to grow their plants in an aquaponic system in which no soil is used. Growers in a greenhouse setting will often opt for a soilless mix that does not include any components of naturally occurring soil. These mixes are widely available within the industry and offer advantages such as water absorption and sterility. Soil management methods are broad, but include applying fertilizers, planning crop rotation to prevent the soil degradation seen in monocultures, and soil analysis.

Control by use of enclosed environments

Abiotic factors such as weather, light and temperature can all be manipulated with enclosed environments such as cold frames, greenhouses, conservatories, polyhouses and shade houses. The materials used in constructing these buildings are chosen based on climate, purpose and budget. Cold frames provide an enclosed environment; they are built close to the ground with a top made of glass or plastic. The glass or plastic allows sunlight into the frame during the day and prevents the heat loss that would otherwise occur as long-wave radiation at night. This allows plants to begin growing before the growing season starts. Greenhouses and conservatories are similar in function but are larger and heated with an external energy source. They can be built out of glass but are now primarily made from plastic sheets. More expensive and modern greenhouses can include temperature control through shade and light control, or air-conditioning, and automatic watering. Shade houses provide shading to limit water loss by evapotranspiration.

Challenges

Abiotic stresses

Commercial horticulture is required to support a rapidly growing population with demands for its products.
Due to global climate change, extremes in temperature, the strength of precipitation events, flood frequency, and drought length and frequency are all increasing. Together with other abiotic stressors such as salinity, heavy-metal toxicity, UV damage, and air pollution, these create stressful environments for crop production. The problem is compounded as evapotranspiration increases, soils are degraded of nutrients, and oxygen levels are depleted, resulting in up to a 70% loss in crop yield.

Biotic stresses

Living organisms such as bacteria, viruses, fungi, parasites, insects, weeds and native plants are sources of biotic stress and can deprive the host of nutrients. Plants respond to these stresses using defence mechanisms such as morphological and structural barriers, chemical compounds, proteins, enzymes and hormones. The impact of biotic stresses can be reduced using practices such as tilling, spraying, or integrated pest management (IPM).

Harvest management

Care is required to reduce damage and losses to horticultural crops during harvest. Compression forces occur during harvesting, and horticultural goods can be hit by a series of impacts during transport and packhouse operations. Different techniques are used to minimize mechanical injuries and wounding to plants, such as:

Manual harvesting: harvesting horticultural crops by hand. Fruits such as apples, pears and peaches can be harvested with clippers.
Sanitation: harvest bags, crates, clippers and other equipment must be cleaned before harvest.

Emerging technology

Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) has recently gained recognition as a highly efficient, simplified, precise, and low-cost method of altering the genomes of species. Since 2013, CRISPR has been used to enhance a variety of species of grains, fruits, and vegetables. Crops are modified to increase their resistance to biotic and abiotic stressors such as parasites, disease, and drought, as well as to increase yield, nutrition, and flavour. Additionally, CRISPR has been used to edit out undesirable traits, for example, reducing the browning and the production of toxic and bitter substances in potatoes. CRISPR has also been employed to address the low pollination rates and low fruit yields common in greenhouses. Compared to genetically modified organisms (GMOs), CRISPR editing does not add any alien DNA to the plant's genes.

Organizations

Various organizations worldwide focus on promoting and encouraging research and education in all branches of horticultural science; such organizations include the International Society for Horticultural Science and the American Society of Horticultural Science. In the United Kingdom, there are two main horticultural societies. The Ancient Society of York Florists, founded in 1768, is the oldest horticultural society in the world and continues to host four horticultural shows annually in York, England. The Royal Horticultural Society, established in 1804, is a charity in the United Kingdom that leads on the encouragement and improvement of the science, art, and practice of horticulture in all its branches; the organization shares the knowledge of horticulture through its community, learning programs, and world-class gardens and shows. The Chartered Institute of Horticulture (CIH) is the chartered professional body for horticulturists and horticultural scientists, representing all sectors of the horticultural industry across Great Britain, Ireland and overseas.
While horticulture is an unregulated profession in the United Kingdom, the title of Chartered Horticulturist is regulated by the CIH. The Australian Institute of Horticulture and the Australian Society of Horticultural Science were established in 1990 as professional societies to promote and enhance Australian horticultural science and industry. The New Zealand Horticulture Institute is another notable horticultural organization.

In India, the Horticultural Society of India (now the Indian Academy of Horticultural Sciences) is the oldest society; it was established in 1941 at Lyallpur, Punjab (now in Pakistan) but shifted to Delhi in 1949. Another notable organization, in operation since 2005, is the Society for Promotion of Horticulture, based at Bengaluru. Both societies publish scholarly journals – the Indian Journal of Horticulture and the Journal of Horticultural Sciences – for the advancement of horticultural science. Horticulture in the Indian state of Kerala is led by the Kerala State Horticulture Mission.

The National Junior Horticultural Association (NJHA), established in 1934, was the first organization in the world dedicated solely to youth and horticulture. NJHA programs are designed to help young people obtain a basic understanding of horticulture and develop horticultural skills.

The Global Horticulture Initiative (GlobalHort) fosters partnerships and collective action among different stakeholders in horticulture. This organization focuses on horticulture for development (H4D), which involves using horticulture to reduce poverty and improve nutrition worldwide. GlobalHort is organized as a consortium of national and international organizations that collaborate in research, training, and technology-generating activities designed to meet mutually agreed-upon objectives. GlobalHort is a non-profit organization registered in Belgium.
https://en.wikipedia.org/wiki/Baiji
Baiji
The baiji (Lipotes vexillifer) is a probably extinct species of freshwater dolphin native to the Yangtze river system in China. It is thought to be the first dolphin species driven to extinction by human impacts. The baiji is listed as "critically endangered: possibly extinct" by the IUCN; it has not been seen in 20 years, and several surveys of the Yangtze have failed to find it. The species is also called the Chinese river dolphin, Yangtze river dolphin, Yangtze dolphin, and whitefin dolphin. The genus name Lipotes means "left behind" and the species epithet vexillifer means "flag bearer". The baiji is nicknamed the "Goddess of the Yangtze" and was regarded as a goddess of protection by local fishermen and boatmen. It is not to be confused with the Chinese white dolphin (Sousa chinensis) or the finless porpoise (Neophocaena phocaenoides). It is the only species in the genus Lipotes.

The baiji population declined drastically over a few decades as China industrialized and made heavy use of the river for fishing, transportation, and hydroelectricity. Following surveys in the Yangtze River during the 1980s, the baiji was claimed to be the first dolphin species in history driven to extinction by humans. A Conservation Action Plan for Cetaceans of the Yangtze River was approved by the Chinese government in 2001. Efforts were made to conserve the species, but a late-2006 expedition failed to find any baiji in the river, and organizers declared the baiji functionally extinct. The baiji represents the first documented global extinction of an aquatic "megafaunal" vertebrate in over 50 years, since the demise of the Japanese sea lion (Zalophus japonicus) and the Caribbean monk seal (Neomonachus tropicalis) in the 1950s. It also signified the disappearance of an entire mammal family of river dolphins (Lipotidae). The baiji's extinction would be the first recorded extinction of a well-studied cetacean species (it is unclear whether some previously extinct varieties were species or subspecies) to be directly attributable to human influence. The baiji is one of a number of extinctions to have taken place due to the degradation of the Yangtze, alongside that of the Chinese paddlefish and the now extinct-in-the-wild Dabry's sturgeon.

Swiss economist and CEO of the baiji.org Foundation August Pfluger funded an expedition in which an international team, drawn in part from the National Oceanic and Atmospheric Administration and the Fisheries Research Agency in Japan, searched for six weeks for signs of the dolphin. The search took place almost a decade after the last exploration in 1997, which had turned up only 13 of the cetaceans. In August 2007, a Chinese man reportedly videotaped a large white animal swimming in the Yangtze. Although the animal was tentatively identified as a baiji, the presence of only one or a few animals, particularly of advanced age, is not enough to save a functionally extinct species from true extinction. The last known living baiji was Qiqi, who died in 2002. The World Wildlife Fund has called for the preservation of any possible baiji habitat, in case the species is located and can be revived.

Anatomy and morphology

Baiji were thought to breed in the first half of the year, with the peak calving season from February to April. A 30% pregnancy rate was observed. Gestation lasted 10–11 months, with one calf delivered at a time; the interbirth interval was two years. Calves measured around at birth and nursed for 8–20 months. Males reached sexual maturity at age four, females at age six.
Mature males were about 2.3 m (7.5 ft) long, females , the longest specimen . The animal weighed , with a lifespan estimated at 24 years in the wild. The Yangtze river dolphin is pale blue to grey on the dorsal (back) side, and white on the ventral (belly) side. It has a long and slightly upturned beak with 31–36 conical teeth in each jaw. Its dorsal fin is low and triangular in shape, and resembles a light-coloured flag when the dolphin swims just below the surface of the murky Yangtze River, hence the name "white-flag" dolphin. It has smaller eyes than oceanic dolphins. When escaping from danger, the baiji can reach , but usually stays within . Because of its poor vision, the baiji relies primarily on sonar for navigation. The sonar system also plays an important role in socializing, predator avoidance, group coordination, and expressing emotions. Sound emission is focused and highly directed by the shape of the skull and melon. Peak frequencies of echolocation clicks are between 70 kHz and 100 kHz.

Distribution

Historically, the baiji occurred along of the middle and lower reaches of the Yangtze, from Yichang in the west to the mouth of the river near Shanghai, as well as in Poyang and Dongting lakes and the smaller Qiantang river to the south. This range had been reduced by several hundred kilometres both upstream and downstream and was limited to the main channel of the Yangtze, principally the middle reaches between the two large tributary lakes, Dongting and Poyang. Approximately 12% of the world's human population lives and works within the Yangtze River catchment area, putting pressure on the river.

Evolutionary history

The baiji is not closely related to any living species of dolphin, having diverged from the ancestors of the La Plata dolphin and the Amazon river dolphin during the Miocene, an estimated 16 million years ago. The closest known relative of the baiji is Parapontoporia, native to the western coast of North America during the latest Miocene and Pliocene. The baiji was one of five species of dolphin known to have made fresh water their exclusive habitat. The other four species, including the boto and the La Plata dolphin, survive in the Río de la Plata and Amazon rivers in South America and the Ganges and Indus rivers on the Indian subcontinent.

It is well known that the river dolphins are not a natural group. Their mitochondrial genomes reveal a split into two separate lineages, Platanista and Lipotes + (Inia + Pontoporia), which have no sister relationship with each other; the Platanista lineage always falls within the odontocete clade rather than having a closer affinity to Mysticeti. The position of Platanista is more basal, suggesting that this lineage diverged well before the other. Lipotes has a sister relationship with Inia + Pontoporia, and together they form the sister group to the Delphinoidea. This result strongly supports paraphyly of the classical river dolphins, while the non-platanistoid river dolphins do represent a monophyletic grouping, with the Lipotidae as the sister taxon to (Iniidae + Pontoporiidae); this is well congruent with studies based on short interspersed repetitive elements (SINEs).

Low values of haplotype diversity and nucleotide diversity were found for the baiji of the Yangtze River. The analysis of molecular variance (AMOVA) supported a high level of overall genetic structure. The higher genetic differentiation found among males than among females suggested significant female-biased dispersal.
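Nucleotide diversity, one of the measures reported as low in the baiji, can be illustrated with a small sketch: it is the average proportion of sites that differ between pairs of aligned sequences. The toy alignments below are invented and are not baiji data.

```python
from itertools import combinations

# Minimal sketch of nucleotide diversity (pi): the average proportion of
# differing sites across all pairs of aligned sequences. The toy
# alignments below are invented; they are not baiji data.

def nucleotide_diversity(seqs: list[str]) -> float:
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * len(seqs[0]))

low_diversity  = ["ACGTACGT", "ACGTACGT", "ACGTACGA"]   # nearly identical
high_diversity = ["ACGTACGT", "ACCTATGT", "TCGAACGG"]   # more segregating sites
print(f"pi (low):  {nucleotide_diversity(low_diversity):.4f}")
print(f"pi (high): {nucleotide_diversity(high_diversity):.4f}")
# A bottlenecked population like the late-stage baiji would show a pi
# closer to the first value than the second.
```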
The aquatic adaptations of the baiji and other cetaceans happened slowly and can be linked to positively selected genes (PSGs) and/or other functional changes. Comparative genomic analyses have shown that the baiji has a slow molecular clock and molecular adaptations to its aquatic environment. This information leads scientists to conclude that a bottleneck must have occurred near the end of the last deglaciation, a time that coincided with a rapid temperature decrease and a rise in eustatic sea level. Scientists have also examined PSGs in the baiji genome that are involved in DNA repair and the response to DNA damage; these PSGs have not been found in any other mammal species. Pathways used for DNA repair are known to have a major impact on brain development and have been implicated in diseases including microcephaly. The slowdown of the substitution rate among cetaceans may have been affected by the evolution of DNA damage pathways. Over time, river dolphins, including the baiji, underwent a reduction in the size of their eyes and the acuity of their vision, probably reflecting poor visibility in fluvial and estuarine environments. In analyzing the baiji genome, scientists found four genes that have lost their function through frameshift mutations or premature stop codons. The baiji has the lowest single nucleotide polymorphism (SNP) frequency reported thus far among mammals. This low frequency could be related to the relatively low rate of molecular evolution in cetaceans; however, considering that the decrease in the rate of molecular evolution in the baiji was not as great as the decrease in heterozygosity, it is likely that much of the low genetic diversity observed was caused by the precipitous decline in the total baiji population in recent decades and the associated inbreeding. The reconstructed demographic history over the last 100,000 years featured a continual population contraction through the last glacial maximum, a severe bottleneck during the last deglaciation, and sustained population growth after the eustatic sea level approached current levels. The close correlation between population trends, regional temperatures, and eustatic sea levels suggests a dominant role for global and local climate changes in shaping the baiji's ancient population demography. Folklore In Chinese folklore, a beautiful young girl is said to have lived with her stepfather on the banks of the Yangtze. He was an evil, greedy man out for his own self-interest. One day, he took the girl out on a boat, intending to sell her at market. Out on the river, though, he became infatuated with her beauty and tried to take advantage of her, but she freed herself by plunging into the river, whereupon a great storm came and sank the boat. After the storm had settled, people saw a beautiful dolphin swimming – the incarnation of the girl – which became known as the "Goddess of the Yangtze". In the Yangtze region, the baiji is regarded as a symbol of peace and prosperity. In popular culture The baiji has been featured in popular culture and media on many occasions, owing to its status as a Chinese national treasure and a critically endangered species. Various stamps and coins have featured the river dolphin. Prior to Minecon Earth 2018, the baiji was featured in a promotional poll in which Chinese players guessed which culturally significant Chinese animal would be added to Minecraft. 
The 2024 Hangzhou Super Cup, a canoeing event in China, features a baiji named "豚豚" (tún tún) as its mascot. Conservation In the 1950s, the population was estimated at 6,000 animals, but it declined rapidly over the subsequent five decades. Only a few hundred were left by 1970. The number fell to 400 by the 1980s and to just 13 in 1997, when a full-fledged search was conducted. The baiji was last sighted in August 2004, though there have been many alleged sightings since. It is listed as an endangered species by the U.S. government under the Endangered Species Act. It is now thought to be functionally extinct. Causes of decline The World Conservation Union (IUCN) has noted the following threats to the species: a period of hunting by humans during the Great Leap Forward, entanglement in fishing gear, the illegal practice of electric fishing, collisions with boats and ships, habitat loss, and pollution. Further studies have noted that the lack of information on the baiji's historical distribution and ecology, the environmental impact of the construction of the Three Gorges Dam on the baiji's living space, and the failure to act to protect the baiji were also threats to the species. During the Great Leap Forward, when traditional veneration of the baiji was denounced, it was hunted for its flesh and skin and quickly became scarce. As China developed economically, pressure on the river dolphin grew significantly. Industrial and residential waste flowed into the Yangtze. The riverbed was dredged and reinforced with concrete in many locations. Ship traffic multiplied, boats grew in size, and fishermen employed wider and more lethal nets. Noise pollution caused the nearly blind animal to collide with propellers. Stocks of the dolphin's prey declined drastically in the late 20th century, with some fish populations falling to one thousandth of their pre-industrial levels. A range of anthropogenic causes (e.g., boat collisions and dam construction) that also threaten freshwater cetaceans in other river systems has been implicated in the decline of the baiji population. However, the primary factor was probably unsustainable by-catch in local fisheries, which used rolling hooks, nets (gill nets and fyke nets) and electrofishing; similarly, by-catch constitutes the principal cause of mortality in many populations of small cetaceans worldwide. Although relatively few data on baiji mortality are available, at least half of all known baiji deaths in the 1970s and 1980s were caused by rolling hooks and other fishing gear, and electrofishing accounted for 40% of baiji deaths recorded during the 1990s. Unlike most historical-era extinctions of large-bodied animals, the baiji was the victim not of active persecution but of incidental mortality resulting from massive-scale human environmental impacts, primarily uncontrolled fishing. Its extinction merely reflects the latest stage in the progressive ecological deterioration of the Yangtze region. In the 1970s and 1980s, an estimated half of baiji deaths were attributed to entanglement in fishing gear and nets. By the early 2000s, electric fishing was considered "the most important and immediate direct threat to the baiji's survival". Though outlawed, this fishing technique is widely and illegally practiced throughout China. The building of the Three Gorges Dam further reduced the dolphin's habitat and facilitated an increase in ship traffic; these factors were thought to have rendered the baiji extinct in the wild. 
Some scientists have found that pollution resulted in emerging diseases, caused by parasitic infection, in the baiji population. The baiji's reliance on aquatic environments could have exposed it to both terrestrial and marine pathogens. Since the baiji had a limited distribution endemic to the Yangtze River, its freshwater environment may have carried a higher pathogen load than marine waters (although systematic environmental studies have yet to be conducted). The pathogens in these waters could cause viral infections that result in epizootics, which have killed thousands of marine mammals over the last 20 years. Captured or killed individuals have also had helminth infestations in the stomach, leading scientists to believe that parasitic infection could be another cause of decline among the baiji. It has been noted, however, that the shrinking geographical range in which baiji were sighted was not itself connected to the population loss. A model based on reports from Yangtze fishing communities showed that the baiji population was not structured by geographical range or fragmentation of location, as baiji made long-term, periodic movements over periods of several years; these movements left the species unaffected by its dwindling geographical range. Surveys Conservation efforts During the 1970s, China recognized the precarious state of the river dolphin. The government outlawed deliberate killing, restricted fishing, and established nature reserves. In 1978, the Chinese Academy of Sciences established the Freshwater Dolphin Research Centre as a branch of the Wuhan Institute of Hydrobiology. In the 1980s and 1990s, several attempts were made to capture dolphins and relocate them to a reserve, where a breeding program would allow the species to recover and be reintroduced to the Yangtze after conditions improved. However, capturing the rare, quick dolphins proved difficult, and few captives survived more than a few months. The first Chinese aquatic species protection organisation, the Baiji Dolphin Conservation Foundation of Wuhan (武汉白鱀豚保护基金), was founded in December 1996. It has raised 1,383,924.35 CNY (about 100,000) and used the funds for in vitro cell preservation and to maintain the baiji facilities, including the Shishou sanctuary that was flooded in 1998. Since 1992, five protected areas of the Yangtze have been designated as baiji reserves. Four were established in the main Yangtze channel, where baiji were actively protected and fishing was banned: two national reserves (Shishou City and Xin-Luo) and two provincial ones (Tongling and Zhenjiang). In the past 20 years, five nature reserves have been established along the river. Strictly prohibiting harmful and illegal fishing methods in the reserves might have slowed the extinction of these cetaceans in the wild, but the administrative measures taken in the reserves did not keep the baiji population from declining sharply. As humans continued to occupy the river and use the natural resources it provided, the question of whether the river itself could ever again become habitable for these species remained, for the most part, unanswered by conservationists. In Shishou, Hubei Province, and Tongling, Anhui Province, two semi-natural reserves were established with the aim of providing an environment in which the baiji, as well as another mammalian species, the finless porpoise, could breed. 
Through careful management, both these species not only survived but reproduced successfully enough to provide some hope that the baiji might be able to make a comeback. The fifth protected area is an isolated oxbow lake located off the north bank of the river near Shishou City: the Tian-e-Zhou Oxbow Semi-natural Reserve. Combined, these five reserves cover just over , about of the baiji's range, leaving two-thirds of the species' habitat unprotected. In addition to these five protected areas, there are five "Protection Stations" in Jianli, Chenglingji, Hukou, Wuhu and Zhengjiang. Each station consists of two observers and a motorized fishing boat, with the aim of conducting daily patrols, making observations, and investigating reports of illegal fishing. In 2001, the Chinese government approved a Conservation Action Plan for Cetaceans of the Yangtze River. This plan re-emphasised the three measures identified at the 1986 workshop and was adopted as the national policy for the conservation of the baiji. Despite all of these workshops and conventions, little money was available in China to aid the conservation efforts. It was estimated that US$1 million was needed to begin the project and maintain it for a further three years. Efforts to save the mammals proved to be too little and too late. August Pfluger, chief executive of the baiji.org Foundation, said, "The strategy of the Chinese government was a good one, but we didn't have time to put it into action." Furthermore, the conservation attempts have been criticized: despite the international attention to the need to conserve the baiji, the Chinese government did not "[make] any serious investment" in protecting it. In situ and ex situ conservation Most scientists agreed that the best course of action was an ex situ effort working in parallel with an in situ effort. The deterioration of the Yangtze River had to be reversed to preserve the habitat; the ex situ projects aimed to raise a large enough population over time so that some, if not all, of the dolphins could be returned to the Yangtze, so the habitat within the river had to be maintained in any case. The Shishou Tian-e-Zhou is a long, wide oxbow lake located near Shishou City in Hubei Province. Shishou has been described as being "like a miniature Yangtze ... possessing all of the requirements for a semi-natural reserve". From its designation as a national reserve in 1992, it was intended to be used not only for the baiji but also for the Yangtze finless porpoise. In 1990 the first finless porpoises were relocated to the reserve, and they have since survived and reproduced well; as of April 2005, 26 finless porpoises were known to live in the reserve. A baiji was introduced in December 1995, but died during the summer flood of 1996. To deal with these annual floods, a dyke was constructed between the Yangtze and Shishou, and water is now controlled through a sluice gate at the downstream mouth of the oxbow lake. It has been reported that since the installation of this sluice gate, water quality has declined, because the annual transfer of nutrients can no longer occur. Roughly 6,700 people live on the island within the oxbow lake, so some limited fishing is permitted. The success of Shishou with the porpoises, and with migratory birds and other wetland fauna, encouraged the local Wetlands Management Team to put forward an application to award the site Ramsar status. 
It has also been noted that the site has considerable potential for ecotourism, which could generate much-needed revenue to improve the quality of the reserve, but the necessary infrastructure does not currently exist to realize these opportunities. Captive specimens A baiji conservation dolphinarium was established at the Institute of Hydrobiology (IHB) in Wuhan in 1980 and rebuilt in 1992. This was planned as a backup to the other conservation efforts, providing an area completely protected from any threats where the baiji could be easily observed. The site included indoor and outdoor holding pools, a water filtration system, food storage and preparation facilities, research labs, and a small museum. The aim was also to generate income from tourism that could be put towards the baiji's plight. The pools were not large (one kidney-shaped arc pool and two circular pools) and would not have been capable of holding many baijis at once. Douglas Adams and Mark Carwardine documented their encounters with the endangered animals on their conservation travels for the BBC programme Last Chance to See. They came across baiji-branded weighing scales and baiji fertilizer. They also met Qi Qi, who had been injured by fishing hooks in 1980, when just a year old, and taken into captivity to be nursed back to health. The book of the same name, published in 1990, included pictures of the captive specimen. Qi Qi lived in the Wuhan Institute of Hydrobiology dolphinarium from 1980 to July 14, 2002. Discovered by a fisherman in Dongting Lake, he became the sole resident of the Baiji Dolphinarium beside East Lake. A sexually mature female was captured in late 1995, but died after half a year, in 1996, when the Tian-e-Zhou Oxbow Nature Reserve, which had contained only finless porpoises since 1990, was flooded. Current status The Xinhua News Agency announced on December 4, 2006, that no Chinese river dolphins were detected in a six-week survey of the Yangtze River conducted by 30 researchers. The failure of the Yangtze Freshwater Dolphin Expedition raised the prospect of the first unequivocal extinction of a cetacean species due to human action (some extinct baleen whale populations may not have been distinct species). Poor water and weather conditions may have prevented sightings, but expedition leaders declared the species "functionally extinct" on December 13, 2006, as fewer individuals were likely to be alive than would be needed to propagate the species. However, footage from August 2007, believed to show a baiji, was released to the public. The Japanese sea lion and the Caribbean monk seal had disappeared in the 1950s, the most recent aquatic mammals to become extinct; several land-based mammal species and subspecies have disappeared since then. If the baiji is extinct, the vaquita (Phocoena sinus) has become the most endangered marine mammal species. Some scientists retain hope for the species. A report of the expedition was published online in the journal Biology Letters on August 7, 2007, in which the authors concluded: "We are forced to conclude that the baiji is now likely to be extinct, probably due to unsustainable by-catch in local fisheries". "Witness to Extinction: How We Failed to Save the Yangtze River Dolphin", an account of the 2006 baiji survey by Samuel Turvey, the lead author of the Biology Letters paper, was published by Oxford University Press in autumn 2008. 
This book examined the baiji's probable extinction in the wider context of how and why international efforts to conserve the species failed, and of whether conservation recovery programmes for other threatened species were likely to face similar, potentially disastrous, administrative hurdles. Some reports suggest that information about the baiji and its demise is being suppressed in China; other reports cite English-language coverage by state media such as China Central Television and the Xinhua News Agency as evidence to the contrary. Sightings In August 2007, Zeng Yujiang reportedly videotaped a large white animal swimming in the Yangtze in Anhui Province. Wang Kexiong of the Institute of Hydrobiology of the Chinese Academy of Sciences tentatively confirmed that the animal in the video is a baiji. In October 2016, several news sources announced a recent sighting of what was speculated to be a baiji. However, the purported rediscovery was disputed by the conservation biologist Samuel Turvey, a member of the 2006 survey team, who instead proposed shifting the conservation focus to the critically endangered Yangtze finless porpoise, the only freshwater cetacean left in China. In April 2018, a sighting at Tongling in Anhui province was claimed to be a baiji surfacing along with a pod of finless porpoises. On May 14, 2024, The Paper reported a suspected sighting of two baijis in the Yangtze.
Biology and health sciences
Toothed whale
Animals
19119938
https://en.wikipedia.org/wiki/Holly
Holly
Ilex, or holly, is a genus of over 570 species of flowering plants in the family Aquifoliaceae, and the only living genus in that family. Ilex has the most species of any woody dioecious angiosperm genus. The species are evergreen or deciduous trees, shrubs, and climbers from the tropics to temperate zones worldwide. The type species is Ilex aquifolium, the common European holly used in Christmas decorations and cards. Description The genus Ilex is divided into three subgenera: Ilex subg. Byronia, with the type species Ilex polypyrena; Ilex subg. Prinos, with 12 species; and Ilex subg. Ilex, with the rest of the species. The genus is widespread throughout the temperate and subtropical regions of the world. It includes species of trees, shrubs, and climbers, with evergreen or deciduous foliage and inconspicuous flowers. Its range was more extensive in the Tertiary period, and many species are adapted to laurel forest habitats. It occurs from sea level to more than with high mountain species. It is a genus of small, evergreen trees with smooth, glabrous, or pubescent branchlets. The plants are generally slow-growing, with some species growing to tall. The type species is the European holly Ilex aquifolium, described by Linnaeus. Plants in this genus have simple, alternate glossy leaves, frequently with a spiny leaf margin. The inconspicuous flower is greenish white, with four petals. They are generally dioecious, with male and female flowers on different plants. The small fruits of Ilex, although often referred to as berries, are technically drupes. They range in color from red to brown to black, and rarely green or yellow. The "bones" contain up to ten seeds each. Some species produce fruits parthenogenetically, such as the cultivar 'Nellie R. Stevens'. The fruits ripen in winter and thus provide winter colour contrast between the bright red of the fruits and the glossy green of the evergreen leaves; hence cut branches, especially of I. aquifolium, are widely used in Christmas decoration. The fruits are generally slightly toxic to humans, and can cause vomiting and diarrhea when ingested. However, they are a food source for certain birds and other animals, which help disperse the seeds. This can have negative impacts as well: along the west coast of North America, from California to British Columbia, English holly (Ilex aquifolium), which is grown commercially, is quickly spreading into native forest habitat, where it thrives in shade and crowds out native species. It has been placed on the Washington State Noxious Weed Control Board's monitor list, and is a Class C invasive plant in Portland. Etymology Ilex in Latin means the holm oak or evergreen oak (Quercus ilex). Despite the Linnaean classification of Ilex as holly, as late as the 19th century in Britain the term Ilex was still being applied to the oak as well as the holly, possibly owing to the superficial similarity of the leaves. The name "holly" in common speech refers to Ilex aquifolium, specifically stems with berries used in Christmas decoration. By extension, "holly" is also applied to the whole genus. The word "holly" is considered a reduced form of Old English , Middle English holin, later hollen. The French word for holly, , derives from the Old Low Franconian (Middle Dutch ). Both are related to Old High German , huls, as are Low German/Low Franconian terms like or . These Germanic words appear to be related to words for holly in Celtic languages, such as Welsh , Breton and Irish . 
Several Romance languages use the Latin word acrifolium, literally "sharp leaf" (turned into aquifolium in modern times), hence Italian , Occitan , etc. History The phylogeography of this group provides examples of various speciation mechanisms at work. In this scenario, ancestors of the group became isolated from the remaining Ilex when the supercontinent broke apart into Gondwana and Laurasia about 82 million years ago, resulting in a physical separation of the groups and beginning a process of change to adapt to new conditions. This mechanism is called allopatric speciation. Over time, surviving species of the holly genus adapted to different ecological niches, leading to reproductive isolation, an example of ecological speciation. In the Pliocene, around five million years ago, mountain formation diversified the landscape and provided new opportunities for speciation within the genus. The fossil record indicates that the Ilex lineage was already widespread prior to the end of the Cretaceous period; the earliest records of the distinctive pollen of Ilex are from the Turonian of the Otway Basin of Australia. The earliest fossil holly fruit is known from the Maastrichtian of central Europe. Based on the molecular clock, the common ancestor of most of the extant species probably appeared during the Eocene, about 50 million years ago, suggesting that older representatives of the genus belong to now-extinct branches. Ilex sinica appears to be the most basal extant species. Laurel forest covered great areas of the Earth during the Paleogene, when the genus was more prosperous, and this type of forest extended during the Neogene, more than 20 million years ago. Most of the last remaining temperate broadleaf evergreen forests are believed to have disappeared about 10,000 years ago, at the end of the Pleistocene. Many of the then-existing species with the strictest ecological requirements became extinct because they could not cross the barriers imposed by the geography, but others found refuge as species relicts in coastal enclaves, archipelagos, and coastal mountains sufficiently far from areas of extreme cold and aridity and protected by the oceanic influence. Selected species Ilex ambigua – Southeastern USA Ilex amelanchier – Southeastern USA Ilex anomala (Hawaii) Ilex aquifolium – European holly, English holly, Christ's thorn (western and southern Europe, northwest Africa, and southwest Asia) Ilex canariensis (Macaronesian islands) Ilex cassine – dahoon holly, cassena (Virginia to southeast Texas of US, Veracruz of Mexico, Bahamas, Cuba, and Puerto Rico) Ilex coriacea – gallberry (Virginia to Texas of United States) Ilex cornuta – Chinese holly, horned holly (eastern China and Korea) Ilex crenata – Japanese holly, box-leaved holly, inutsuge (Japanese) (eastern China, Japan, Korea, Taiwan, and Sakhalin) Ilex decidua Walter – possumhaw (eastern United States, northeastern Mexico) Ilex gardneriana (extinct: 20th century?) (India) Ilex glabra (L.) A.Gray – evergreen winterberry, bitter gallberry, inkberry (eastern North America) Ilex guayusa (Amazon rainforest) Ilex integra – mochi tree, Nepal holly (Korea; Taiwan; the mid-southern regions of China; and Honshu, Shikoku and Kyushu in Japan) Ilex kaushue (China) Ilex khasiana (India) Ilex latifolia – tarajo holly, tarayō (Japanese) (southern Japan and eastern and southern China) Ilex mitis (southern Africa) Ilex montana Torrey & A.Gray – mountain winterberry (Eastern United States) Ilex mucronata (L.) 
M.Powell, Savol., & S.Andrews – mountain holly, catberry (Eastern North America) Ilex opaca – American holly (Eastern United States) Ilex paraguariensis – yerba mate (mate, erva-mate) Ilex pedunculosa – longstalked holly Ilex perado – Macaronesian holly Ilex quercetorum (Mexico and Guatemala) Ilex rotunda Ilex rugosa – Tsuru holly (mountains of Japan, Sakhalin, Khabarovsk Krai and the Kuril Islands, Siberia) Ilex serrata – Japanese winterberry Ilex verticillata (L.) A.Gray – American winterberry (Eastern North America) Ilex vomitoria – yaupon holly (southeastern United States) Range The genus is distributed throughout the world's different climates. Most species make their home in the tropics and subtropics, with a worldwide distribution in temperate zones. The greatest diversity of species is found in the Americas and in Southeast Asia. Ilex mucronata, formerly the type species of Nemopanthus, is native to eastern North America. Nemopanthus was treated as a separate genus of eight species in the family Aquifoliaceae, now transferred to Ilex on molecular data; it is closely related to Ilex amelanchier. In Europe the genus is represented by a single species, the classically named holly Ilex aquifolium, and in continental Africa by this species and Ilex mitis. Ilex canariensis, from Macaronesia, and Ilex aquifolium arose from a common ancestor in the laurel forests of the Mediterranean. Australia, isolated at an early period, has Ilex arnhemensis. Of 204 species growing in China, 149 are endemic. A species which stands out for its economic importance in Spanish-speaking countries and in Brazil is Ilex paraguariensis, or yerba mate. Because many hollies are endemic to islands and small mountain ranges, and are highly useful plants, many are now becoming rare. Ecology The tropical species are often especially threatened by habitat destruction and overexploitation. At least two species of Ilex have become extinct recently, and many others are barely surviving. They are an extremely important food for numerous species of birds, and are also eaten by other wild animals. In the autumn and early winter the fruits are hard and apparently unpalatable. After being frozen or frosted several times, the fruits soften and become milder in taste. During winter storms, birds often take refuge in hollies, which provide shelter, protection from predators (by the spiny leaves), and food. The flowers are sometimes eaten by the larva of the double-striped pug moth (Gymnoscelis rufifasciata). Other Lepidoptera whose larvae feed on holly include Bucculatrix ilecella, which feeds exclusively on hollies, and the engrailed (Ectropis crepuscularia). Toxicity Holly can contain caffeic acid, caffeoyl derivatives, caffeoylshikimic acid, chlorogenic acid, feruloylquinic acid, quercetin, quinic acid, kaempferol, tannins, rutin, caffeine, theobromine, and ilicin. Holly berries can cause vomiting and diarrhea. They are especially dangerous in cases involving accidental consumption by children attracted to the bright red berries. Ingestion of over 20 berries may be fatal to children. Holly leaves, if eaten, might cause diarrhea, nausea, vomiting, and stomach and intestinal problems. Holly plants might be toxic to pets and livestock. Uses Culinary use Leaves of some holly species are used by some cultures to make daily tea. These species are yerba mate (I. paraguariensis), Ilex guayusa, kuding (Ilex kaushue), yaupon (I. vomitoria), and others. Leaves of other species, such as gallberry (I. 
glabra) are bitter and emetic. In general, little is known about inter-species variation in the constituents or toxicity of hollies. Holly berries are fermented and distilled to produce an eau de vie. Ornamental use Many of the holly species are widely used as ornamental plants in temperate/European gardens and parks, notably: I. aquifolium (common European holly), I. crenata (box-leaved holly), and I. verticillata (winterberry). Hollies are often used for hedges; the spiny leaves make them difficult to penetrate, and they take well to pruning and shaping. Many hundreds of hybrids and cultivars have been developed for garden use, among them the very popular "Highclere holly", Ilex × altaclerensis (I. aquifolium × I. perado), and the "blue holly", Ilex × meserveae (I. aquifolium × I. rugosa). The I. × meserveae cultivars 'Conablu' and 'Conapri' have gained the Royal Horticultural Society's Award of Garden Merit. Another hybrid is Ilex × koehneana, with the cultivar 'Chestnut Leaf'. Culture Holly – more specifically the European holly, Ilex aquifolium – is commonly referenced at Christmas time, and is often referred to by the name Christ's thorn. In many Western Christian cultures, holly is a traditional Christmas decoration, used especially in wreaths and illustrations, for instance on Christmas cards. Since medieval times the plant has carried a Christian symbolism, as expressed in the traditional Christmas carol "The Holly and the Ivy", in which the holly represents Jesus and the ivy represents the Virgin Mary. Angie Mostellar discusses the Christian use of holly at Christmas. In heraldry, holly is used to symbolize truth; the Norwegian municipality of Stord has a yellow twig of holly in its coat of arms. The Druids held that "leaves of holly offered protection against evil spirits" and thus "wore holly in their hair". In the Harry Potter novels, holly is used as the wood in Harry's wand. In some traditions of Wicca, the Holly King is one of the faces of the Sun God; he is born at midsummer and rules from Mabon to Ostara. In the Irish language, the words mean 'son of holly', which gave rise to common anglicized forms as last names such as McCullen, McCullion, McQuillan, and MacCullion, quite common surnames in some areas. Gallery
Biology and health sciences
Aquifoliales
Plants
10058771
https://en.wikipedia.org/wiki/Gossypium%20hirsutum
Gossypium hirsutum
Gossypium hirsutum, also known as upland cotton or Mexican cotton, is the most widely planted species of cotton in the world. Globally, about 90% of all cotton production is of cultivars derived from this species. In the United States, the world's largest exporter of cotton, it constitutes approximately 95% of all cotton production. It is native to Mexico, the West Indies, northern South America, Central America and possibly tropical Florida. Archeological evidence from the Tehuacan Valley in Mexico shows the cultivation of this species as long ago as 3,500 BC, although there is as yet no evidence as to exactly where it may have been first domesticated. This is the earliest evidence of cotton cultivation in the Americas found thus far. Gossypium hirsutum includes a number of varieties or cross-bred cultivars with varying fiber lengths and tolerances to a number of growing conditions. The longer length varieties are called "long staple upland" and the shorter length varieties are referred to as "short staple upland". The long staple varieties are the most widely cultivated in commercial production. Besides being fibre crops, Gossypium hirsutum and Gossypium herbaceum are the main species used to produce cottonseed oil. The Zuni people use this plant to make ceremonial garments, and the fuzz is made into cords and used ceremonially. This species shows extrafloral nectar production. Synonyms Gossypium barbadense var. marie-galante (G. Watt) A. Chev., Rev. Int. Bot. Appl Agric. Trop. 18:118. 1938. Gossypium jamaicense Macfad., Fl. Jamaica 1:73. 1837. Gossypium lanceolatum Tod., Relaz. cult. coton. 185. 1877. Gossypium marie-galante G. Watt, Kew Bull. 1927:344. 1927. Gossypium mexicanum Tod., Ind. sem. panorm. 1867:20, 31. 1868. Gossypium morrillii O. F. Cook & J. Hubb., J. Washington Acad. Sci. 16:339. 1926. Gossypium palmeri G. Watt, Wild cult. cotton 204, t. 34. 1907. Gossypium punctatum Schumach., Beskr. Guin. pl. 309. 1827. Gossypium purpurascens Poir., Encycl. suppl. 2:369. 1811. Gossypium religiosum L., Syst. nat. ed. 12, 2:462. 1767. Gossypium schottii G. Watt, Wild cult. cotton 206. 1907. Gossypium taitense Parl., Sp. Cotoni 39, t. 6, fig. A. 1866. Gossypium tridens O. F. Cook & J. Hubb., J. Washington Acad. Sci. 16:547. 1926.
Biology and health sciences
Malvales
Plants
2417230
https://en.wikipedia.org/wiki/Sound%20recording%20and%20reproduction
Sound recording and reproduction
Sound recording and reproduction is the electrical, mechanical, electronic, or digital inscription and re-creation of sound waves, such as spoken voice, singing, instrumental music, or sound effects. The two main classes of sound recording technology are analog recording and digital recording. Acoustic analog recording is achieved by a microphone diaphragm that senses changes in atmospheric pressure caused by acoustic sound waves and records them as a mechanical representation of the sound waves on a medium such as a phonograph record (in which a stylus cuts grooves on a record). In magnetic tape recording, the sound waves vibrate the microphone diaphragm and are converted into a varying electric current, which is then converted to a varying magnetic field by an electromagnet, which makes a representation of the sound as magnetized areas on a plastic tape with a magnetic coating on it. Analog sound reproduction is the reverse process, with a larger loudspeaker diaphragm causing changes to atmospheric pressure to form acoustic sound waves. Digital recording and reproduction converts the analog sound signal picked up by the microphone to a digital form by the process of sampling. This lets the audio data be stored and transmitted by a wider variety of media. Digital recording stores audio as a series of binary numbers (zeros and ones) representing samples of the amplitude of the audio signal at equal time intervals, at a sample rate high enough to convey all sounds capable of being heard. A digital audio signal must be reconverted to analog form during playback before it is amplified and connected to a loudspeaker to produce sound. Early history Long before sound was first recorded, music was recorded—first by written music notation, then also by mechanical devices (e.g., wind-up music boxes, in which a mechanism turns a spindle, which plucks metal tines, thus reproducing a melody). Automatic music reproduction traces back as far as the 9th century, when the Banū Mūsā brothers invented the earliest known mechanical musical instrument, in this case, a hydropowered (water-powered) organ that played interchangeable cylinders. According to Charles B. Fowler, this "... cylinder with raised pins on the surface remained the basic device to produce and reproduce music mechanically until the second half of the nineteenth century." Carvings in Rosslyn Chapel from the 1560s may represent an early attempt to record in stone the Chladni patterns produced by sound, although this theory has not been conclusively proved. In the 14th century, a mechanical bell-ringer controlled by a rotating cylinder was introduced in Flanders. Similar designs appeared in barrel organs (15th century), musical clocks (1598), barrel pianos (1805), and music boxes. A music box is an automatic musical instrument that produces sounds by the use of a set of pins placed on a revolving cylinder or disc so as to pluck the tuned teeth (or lamellae) of a steel comb. The fairground organ, developed in 1892, used a system of accordion-folded punched cardboard books. The player piano, first demonstrated in 1876, used a punched paper scroll that could store a long piece of music. The most sophisticated of the piano rolls were hand-played, meaning that they were duplicates of a master roll that had been created on a special piano, which punched holes in the master as a live performer played the song. 
Thus, the roll represented a recording of the actual performance of an individual, not just the more common method of punching the master roll through transcription of the sheet music. This technology to record a live performance onto a piano roll was not developed until 1904. Piano rolls were in continuous mass production from 1896 to 2008. A 1908 U.S. Supreme Court copyright case noted that, in 1902 alone, there were between 70,000 and 75,000 player pianos manufactured, and between 1,000,000 and 1,500,000 piano rolls produced. Phonautograph The first device that could record actual sounds as they passed through the air (but could not play them back—the purpose was only visual study) was the phonautograph, patented in 1857 by Parisian inventor Édouard-Léon Scott de Martinville. The earliest known recordings of the human voice are phonautograph recordings, called phonautograms, made in 1857. They consist of sheets of paper with sound-wave-modulated white lines created by a vibrating stylus that cut through a coating of soot as the paper was passed under it. An 1860 phonautogram of "Au Clair de la Lune", a French folk song, was played back as sound for the first time in 2008 by scanning it and using software to convert the undulating line, which graphically encoded the sound, into a corresponding digital audio file. Phonograph Thomas Edison's work on two other innovations, the telegraph and the telephone, led to the development of the phonograph. In 1877, Edison was working on a machine that would transcribe telegraphic signals onto paper tape, which could then be transferred over the telegraph again and again. The phonograph took both cylinder and disc forms. Cylinder On April 30, 1877, French poet, humorous writer and inventor Charles Cros submitted a sealed envelope containing a letter to the Academy of Sciences in Paris fully explaining his proposed method, called the paleophone. Though no trace of a working paleophone was ever found, Cros is remembered by some historians as an early inventor of a sound recording and reproduction machine. The first practical sound recording and reproduction device was the mechanical phonograph cylinder, invented by Thomas Edison in 1877 and patented in 1878. The invention soon spread across the globe, and over the next two decades the commercial recording, distribution, and sale of sound recordings became a growing new international industry, with the most popular titles selling millions of units by the early 1900s. A process for mass-producing duplicate wax cylinders by molding instead of engraving them was put into effect in 1901. The development of mass-production techniques enabled cylinder recordings to become a major new consumer item in industrial countries, and the cylinder was the main consumer format from the late 1880s until around 1910. Disc The next major technical development was the invention of the gramophone record, generally credited to Emile Berliner and patented in 1887, though others had demonstrated similar disk apparatus earlier, most notably Alexander Graham Bell in 1881. Discs were easier to manufacture, transport and store, and they had the additional benefit of being marginally louder than cylinders. Sales of the gramophone record overtook the cylinder ca. 1910, and by the end of World War I the disc had become the dominant commercial recording format. Edison, who was the main producer of cylinders, created the Edison Disc Record in an attempt to regain his market. 
The double-sided (nominally 78 rpm) shellac disc was the standard consumer music format from the early 1910s to the late 1950s. In various permutations, the audio disc format became the primary medium for consumer sound recordings until the end of the 20th century. Although there was no universally accepted speed, and various companies offered discs that played at several different speeds, the major recording companies eventually settled on a de facto industry standard of nominally 78 revolutions per minute. The specified speed was 78.26 rpm in America and 77.92 rpm throughout the rest of the world. The difference in speeds was due to the difference in the cycle frequencies of the AC electricity that powered the stroboscopes used to calibrate recording lathes and turntables. The nominal speed of the disc format gave rise to its common nickname, the seventy-eight (though not until other speeds had become available). Discs were made of shellac or similar brittle plastic-like materials, played with needles made from a variety of materials including mild steel, thorn, and even sapphire. Discs had a distinctly limited playing life that varied depending on how they were manufactured. Earlier, purely acoustic methods of recording had limited sensitivity and frequency range. Mid-frequency range notes could be recorded, but very low and very high frequencies could not. Instruments such as the violin were difficult to transfer to disc. One technique to deal with this involved using a Stroh violin, which uses a conical horn connected to a diaphragm that in turn is connected to the violin bridge. The horn was no longer needed once electrical recording was developed. The long-playing 33 rpm microgroove LP record was developed at Columbia Records and introduced in 1948. The short-playing but convenient 45 rpm microgroove vinyl single was introduced by RCA Victor in 1949. In the US and most developed countries, the two new vinyl formats completely replaced 78 rpm shellac discs by the end of the 1950s, but in some corners of the world the 78 lingered on far into the 1960s. Vinyl was much more expensive than shellac, one of several factors that made its use for 78 rpm records very unusual, but with a long-playing disc the added cost was acceptable, and the compact 45 format required very little material. Vinyl offered improved performance, both in stamping and in playback. Vinyl records were, over-optimistically, advertised as "unbreakable". They were not, but they were much less fragile than shellac, which had itself once been touted as unbreakable compared to wax cylinders. Electrical Sound recording began as a purely mechanical process. Except for a few crude telephone-based recording devices with no means of amplification, such as the telegraphone, it remained so until the 1920s. Between the invention of the phonograph in 1877 and the first commercial digital recordings in the early 1970s, arguably the most important milestone in the history of sound recording was the introduction of what was then called electrical recording, in which a microphone was used to convert the sound into an electrical signal that was amplified and used to actuate the recording stylus. This innovation eliminated the horn sound resonances characteristic of the acoustical process, produced clearer and more full-bodied recordings by greatly extending the useful range of audio frequencies, and allowed previously unrecordable distant and feeble sounds to be captured. 
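Returning briefly to the two 78 rpm standards mentioned above, the figures follow from simple stroboscope arithmetic. A strobe disc with n marks appears stationary under a lamp that flickers twice per cycle of the mains frequency f, i.e. 120f times per minute, when the platter turns at 120f/n revolutions per minute. Assuming the commonly cited mark counts of 92 for 60 Hz mains and 77 for 50 Hz mains (standard strobe-disc values, not stated in the text above):

$$\mathrm{rpm} = \frac{120f}{n}: \qquad \frac{120 \times 60}{92} \approx 78.26, \qquad \frac{120 \times 50}{77} \approx 77.92$$

A lathe or turntable calibrated against the local mains thus settled at slightly different speeds on either side of the Atlantic.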
During this time, several radio-related developments in electronics converged to revolutionize the recording process. These included improved microphones and auxiliary devices such as electronic filters, all dependent on electronic amplification to be of practical use in recording. In 1906, Lee De Forest invented the Audion triode vacuum tube, an electronic valve that could amplify weak electrical signals. By 1915, it was in use in long-distance telephone circuits that made conversations between New York and San Francisco practical. Refined versions of this tube were the basis of all electronic sound systems until the commercial introduction of the first transistor-based audio devices in the mid-1950s. During World War I, engineers in the United States and Great Britain worked on ways to record and reproduce, among other things, the sound of a German U-boat for training purposes. Acoustical recording methods of the time could not reproduce the sounds accurately, and the earliest results were not promising. The first electrical recording issued to the public, with little fanfare, was of the November 11, 1920, funeral service for The Unknown Warrior in Westminster Abbey, London. The recording engineers used microphones of the type used in contemporary telephones. Four were discreetly set up in the abbey and wired to recording equipment in a vehicle outside. Although electronic amplification was used, the audio was weak and unclear, the best that was possible in those circumstances. For several years, this little-noted disc remained the only issued electrical recording. Several record companies and independent inventors, notably Orlando Marsh, experimented with equipment and techniques for electrical recording in the early 1920s. Marsh's electrically recorded Autograph Records were already being sold to the public in 1924, a year before the first such offerings from the major record companies, but their overall sound quality was too low to demonstrate any obvious advantage over traditional acoustical methods. Marsh's microphone technique was idiosyncratic and his work had little if any impact on the systems being developed by others. Telephone industry giant Western Electric had research laboratories with material and human resources that no record company or independent inventor could match. They had the best microphone, a condenser type developed there in 1916 and greatly improved in 1922, and the best amplifiers and test equipment. They had already patented an electromechanical recorder in 1918, and in the early 1920s they decided to intensively apply their hardware and expertise to developing two state-of-the-art systems for electronically recording and reproducing sound: one that employed conventional discs and another that recorded optically on motion picture film. Their engineers pioneered the use of mechanical analogs of electrical circuits and developed a superior rubber line recorder for cutting the groove into the wax master in the disc recording system. By 1924, such dramatic progress had been made that Western Electric arranged a demonstration for the two leading record companies, the Victor Talking Machine Company and the Columbia Phonograph Company. Both soon licensed the system, and both made their earliest published electrical recordings in February 1925, but neither actually released them until several months later. 
To avoid making their existing catalogs instantly obsolete, the two long-time archrivals agreed privately not to publicize the new process until November 1925, by which time enough electrically recorded repertory would be available to meet the anticipated demand. During the next few years, the lesser record companies licensed or developed other electrical recording systems. By 1929 only the budget label Harmony was still issuing new recordings made by the old acoustical process. Comparison of some surviving Western Electric test recordings with early commercial releases indicates that the record companies artificially reduced the frequency range of recordings so they would not overwhelm non-electronic playback equipment, which reproduced very low frequencies as an unpleasant rattle and rapidly wore out discs with strongly recorded high frequencies. Optical and magnetic In the 1920s, Phonofilm and other early motion picture sound systems employed optical recording technology, in which the audio signal was graphically recorded on photographic film. The amplitude variations comprising the signal were used to modulate a light source which was imaged onto the moving film through a narrow slit, allowing the signal to be photographed as variations in the density or width of a sound track. The projector used a steady light and a photodetector to convert these variations back into an electrical signal, which was amplified and sent to loudspeakers behind the screen. Optical sound became the standard motion picture audio system throughout the world and remains so for theatrical release prints despite attempts in the 1950s to substitute magnetic soundtracks. Currently, all release prints on 35 mm movie film include an analog optical soundtrack, usually stereo with Dolby SR noise reduction. In addition, an optically recorded digital soundtrack in Dolby Digital or Sony SDDS form is likely to be present. An optically recorded timecode is also commonly included to synchronize CD-ROMs that contain a DTS soundtrack. This period also saw several other historic developments, including the introduction of the first practical magnetic sound recording system, the magnetic wire recorder, which was based on the work of Danish inventor Valdemar Poulsen. Magnetic wire recorders were effective, but the sound quality was poor, so between the wars they were primarily used for voice recording and marketed as business dictating machines. In 1924, a German engineer, Kurt Stille, improved the Telegraphone with an electronic amplifier. The following year, Ludwig Blattner began work that eventually produced the Blattnerphone, which used steel tape instead of wire. The BBC started using Blattnerphones in 1930 to record radio programs. In 1933, radio pioneer Guglielmo Marconi's company purchased the rights to the Blattnerphone, and newly developed Marconi-Stille recorders were installed in the BBC's Maida Vale Studios in March 1935. The tape used in Blattnerphones and Marconi-Stille recorders was the same material used to make razor blades, and the fearsome Marconi-Stille recorders were considered so dangerous that technicians had to operate them from another room for safety. Because of the high recording speeds required, they used enormous reels about one meter in diameter, and the thin tape frequently broke, sending jagged lengths of razor steel flying around the studio. 
Tape Magnetic tape recording uses an amplified electrical audio signal to generate analogous variations of the magnetic field produced by a tape head, which impresses corresponding variations of magnetization on the moving tape. In playback mode, the signal path is reversed, the tape head acting as a miniature electric generator as the varyingly magnetized tape passes over it. The original solid steel ribbon was replaced by a much more practical coated paper tape, but acetate soon replaced paper as the standard tape base. Acetate has fairly low tensile strength and if very thin it will snap easily, so it was in turn eventually superseded by polyester. This technology, the basis for almost all commercial recording from the 1950s to the 1980s, was developed in the 1930s by German audio engineers who also rediscovered the principle of AC biasing (first used in the 1920s for wire recorders), which dramatically improved the frequency response of tape recordings. The K1 Magnetophon was the first practical tape recorder, developed by AEG in Germany in 1935. The technology was further improved just after World War II by American audio engineer John T. Mullin with backing from Bing Crosby Enterprises. Mullin's pioneering recorders were modifications of captured German recorders. In the late 1940s, the Ampex company produced the first tape recorders commercially available in the US. Magnetic tape brought about sweeping changes in both radio and the recording industry. Sound could be recorded, erased and re-recorded on the same tape many times, sounds could be duplicated from tape to tape with only minor loss of quality, and recordings could now be very precisely edited by physically cutting the tape and rejoining it. Within a few years of the introduction of the first commercial tape recorder—the Ampex 200 model, launched in 1948—American musician-inventor Les Paul had invented the first multitrack tape recorder, ushering in another technical revolution in the recording industry. Tape made possible the first sound recordings totally created by electronic means, opening the way for the bold sonic experiments of the Musique Concrète school and avant-garde composers like Karlheinz Stockhausen, which in turn led to the innovative pop music recordings of artists such as the Beatles and the Beach Boys. The ease and accuracy of tape editing, as compared to the cumbersome disc-to-disc editing procedures previously in some limited use, together with tape's consistently high audio quality finally convinced radio networks to routinely prerecord their entertainment programming, most of which had formerly been broadcast live. Also, for the first time, broadcasters, regulators and other interested parties were able to undertake comprehensive audio logging of each day's radio broadcasts. Innovations like multitracking and tape echo allowed radio programs and advertisements to be produced to a high level of complexity and sophistication. The combined impact with innovations such as the endless loop broadcast cartridge led to significant changes in the pacing and production style of radio program content and advertising. Stereo and hi-fi In 1881, it was noted during experiments in transmitting sound from the Paris Opera that it was possible to follow the movement of singers on the stage if earpieces connected to different microphones were held to the two ears. This discovery was commercialized in 1890 with the Théâtrophone system, which operated for over forty years until 1932. 
In 1931, Alan Blumlein, a British electronics engineer working for EMI, designed a way to make the sound of an actor in a film follow his movement across the screen. In December 1931, he submitted a patent application including the idea, and in 1933 this became UK patent number 394,325. Over the next two years, Blumlein developed stereo microphones and a stereo disc-cutting head, and recorded a number of short films with stereo soundtracks. In the 1930s, experiments with magnetic tape enabled the development of the first practical commercial sound systems that could record and reproduce high-fidelity stereophonic sound. The experiments with stereo during the 1930s and 1940s were hampered by problems with synchronization. A major breakthrough in practical stereo sound was made by Bell Laboratories, who in 1937 demonstrated a practical system of two-channel stereo, using dual optical sound tracks on film. Major movie studios quickly developed three-track and four-track sound systems, and the first stereo sound recording for a commercial film was made by Judy Garland for the MGM movie Listen, Darling in 1938. The first commercially released movie with a stereo soundtrack was Walt Disney's Fantasia, released in 1940. Fantasia used the Fantasound sound system, which employed a separate film for the sound, synchronized with the film carrying the picture. The sound film had four double-width optical soundtracks: three for left, center, and right audio, and a fourth as a control track with three recorded tones that controlled the playback volume of the three audio channels. Because of the complex equipment this system required, Disney exhibited the movie as a roadshow, and only in the United States. Regular releases of the movie used standard mono optical 35 mm stock until 1956, when Disney released the film with a stereo soundtrack that used the Cinemascope four-track magnetic sound system. German audio engineers working on magnetic tape developed stereo recording by 1941. Of the 250 stereophonic recordings made during World War II, only three survive: Beethoven's 5th Piano Concerto with Walter Gieseking and Arthur Rother, a Brahms serenade, and the last movement of Bruckner's 8th Symphony with von Karajan. Other early German stereophonic tapes are believed to have been destroyed in bombings. Not until Ampex introduced the first commercial two-track tape recorders in the late 1940s did stereo tape recording become commercially feasible. Despite the availability of multitrack tape, stereo did not become the standard system for commercial music recording for some years, and remained a specialist market during the 1950s. EMI (UK) was the first company to release commercial stereophonic tapes, issuing its first Stereosonic tape in 1954. Others quickly followed, under the His Master's Voice (HMV) and Columbia labels. A total of 161 Stereosonic tapes were released, mostly of classical music or lyric recordings, and RCA imported these tapes into the USA. Although some HMV tapes released in the USA cost up to $15, two-track stereophonic tapes became more successful in America during the second half of the 1950s. The history of stereo recording changed after the late-1957 introduction of the Westrex stereo phonograph disc, which used the groove format developed earlier by Blumlein. Decca Records in England came out with FFRR (Full Frequency Range Recording) in the 1940s, which became internationally accepted as a worldwide standard for higher-quality recording on vinyl records. 
Ernest Ansermet's 1946 recording of Igor Stravinsky's Petrushka was key to the development of full frequency range records, and it alerted the listening public to high fidelity. Until the mid-1960s, record companies mixed and released most popular music in monophonic sound. From the mid-1960s until the early 1970s, major recordings were commonly released in both mono and stereo. Recordings originally released only in mono have since been rerendered and released in stereo using a variety of techniques, from remixing to pseudostereo. 1950s to 1980s Magnetic tape transformed the recording industry. By the early 1950s, most commercial recordings were mastered on tape instead of recorded directly to disc. Tape facilitated a degree of manipulation in the recording process that was impractical with mixes and multiple generations of directly recorded discs. An early example is Les Paul's 1951 recording of How High the Moon, on which Paul played eight overdubbed guitar tracks. In the 1960s, Brian Wilson of The Beach Boys, Frank Zappa, and The Beatles (with producer George Martin) were among the first popular artists to explore the possibilities of multitrack recording techniques and effects on their landmark albums Pet Sounds, Freak Out!, and Sgt. Pepper's Lonely Hearts Club Band. The next important innovation was small cartridge-based tape systems, of which the compact cassette, commercialized by the Philips electronics company in 1964, is the best known. Initially a low-fidelity format for spoken-word recording and inadequate for music reproduction, after a series of improvements it eventually replaced the competing consumer tape formats, notably the larger 8-track tape (used primarily in cars). The compact cassette became a major consumer audio format, and advances in electronic and mechanical miniaturization led to the development of the Sony Walkman, a pocket-sized cassette player introduced in 1979. The Walkman was the first personal music player, and it gave a major boost to sales of prerecorded cassettes. A key advance in audio fidelity came with the Dolby A noise reduction system, invented by Ray Dolby and introduced into professional recording studios in 1966. It suppressed background tape hiss, which was the only easily audible downside of mastering on tape instead of recording directly to disc. A competing system, dbx, invented by David Blackmer, also found success in professional audio. A simpler variant of Dolby's noise reduction system, known as Dolby B, greatly improved the sound of cassette tape recordings by reducing the especially high level of hiss that resulted from the cassette's miniaturized tape format. The compact cassette format also benefited from improvements to the tape itself, as coatings with wider frequency responses and lower inherent noise were developed, often based on cobalt and chromium oxides as the magnetic material instead of the more usual iron oxide. The multitrack audio cartridge had been in wide use in the radio industry from the late 1950s to the 1980s, but it was in the 1960s that the pre-recorded 8-track tape was launched as a consumer audio format by the Lear Jet aircraft company. Aimed particularly at the automotive market, 8-track players were the first practical, affordable car hi-fi systems, and they could produce sound quality superior to that of the compact cassette. 
The cassette's smaller size and greater durability, augmented by the ability to create home-recorded music mixtapes (8-track recorders were rare), saw it become the dominant consumer format for portable audio devices in the 1970s and 1980s. There had been experiments with multi-channel sound for many years, usually for special musical or cultural events, but the first commercial application of the concept came in the early 1970s with the introduction of quadraphonic sound. This spin-off development from multitrack recording used four tracks (instead of the two used in stereo) and four speakers to create a 360-degree audio field around the listener. Following the release of the first consumer 4-channel hi-fi systems, a number of popular albums were released in one of the competing four-channel formats; among the best known are Mike Oldfield's Tubular Bells and Pink Floyd's The Dark Side of the Moon. Quadraphonic sound was not a commercial success, partly because of competing and somewhat incompatible four-channel sound systems (CBS, JVC, Dynaco and others all had systems), and partly because of the generally poor quality of the released music, even when played as intended on the correct equipment. It faded out in the late 1970s, although this early venture paved the way for the later introduction of domestic surround sound systems in home theatre use, which gained popularity following the introduction of the DVD. Audio components The replacement of the relatively fragile vacuum tube by the smaller, rugged and efficient transistor also accelerated the sale of consumer high-fidelity sound systems from the 1960s onward. In the 1950s, most record players were monophonic and had relatively low sound quality, and few consumers could afford high-quality stereophonic sound systems. In the 1960s, American manufacturers introduced a new generation of modular hi-fi components: separate turntables, pre-amplifiers and amplifiers (or the two combined as integrated amplifiers), tape recorders, and other ancillary equipment such as the graphic equalizer, which could be connected together to create a complete home sound system. These developments were rapidly taken up by major Japanese electronics companies, which soon flooded the world market with relatively affordable, high-quality transistorized audio components. By the 1980s, corporations like Sony had become world leaders in the music recording and playback industry. Digital The advent of digital sound recording and, later, the compact disc (CD) in 1982 brought significant improvements in the quality and durability of recordings. The CD initiated another massive wave of change in the consumer music industry, with vinyl records effectively relegated to a small niche market by the mid-1990s. The record industry fiercely resisted the introduction of digital systems, fearing wholesale piracy on a medium able to produce perfect copies of original released recordings. The most recent and revolutionary developments have been in digital recording, with the development of various uncompressed and compressed digital audio file formats, processors capable of converting the digital data to sound in real time, and inexpensive mass storage. This generated new types of portable digital audio players. The minidisc player, using ATRAC compression on small, re-writeable discs, was introduced in the 1990s but became obsolescent as solid-state non-volatile flash memory dropped in price. 
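The uncompressed formats mentioned here store audio as pulse-code modulation (PCM): the waveform is sampled at a fixed rate and each sample is quantized to an integer. As a minimal sketch of that idea, using only Python's standard library, the following writes one second of a tone as 16-bit PCM in a WAV file; the 440 Hz frequency, sample rate and file name are arbitrary illustrative choices, not details from this article.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second (the rate used by CDs)
FREQ_HZ = 440.0      # arbitrary test tone
AMPLITUDE = 0.5      # fraction of full scale, leaving headroom

frames = bytearray()
for n in range(SAMPLE_RATE):  # one second of audio
    # Sample the continuous waveform at discrete instants...
    value = AMPLITUDE * math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE)
    # ...and quantize each sample to a signed 16-bit little-endian integer.
    frames += struct.pack("<h", int(value * 32767))

with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)            # mono
    w.setsampwidth(2)            # 16 bits per sample
    w.setframerate(SAMPLE_RATE)
    w.writeframes(bytes(frames))
```

Compressed formats such as MP3, or the ATRAC scheme used by the minidisc, start from a sample stream like this one and discard perceptually less important detail to save space.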
As technologies that increased the amount of data that could be stored on a single medium, such as Super Audio CD, DVD-A, Blu-ray Disc, and HD DVD, became available, longer programs of higher quality could fit onto a single disc. Sound files are readily downloaded from the Internet and other sources, and copied onto computers and digital audio players. Digital audio technology is now used in all areas of audio, from casual use of music files of moderate quality to the most demanding professional applications. New applications such as internet radio and podcasting have appeared. Technological developments in recording, editing, and consumption have transformed the record, movie and television industries in recent decades. Audio editing became practicable with the invention of magnetic tape recording, but technologies like MIDI, sound synthesis and digital audio workstations allow greater control and efficiency for composers and artists. Digital audio techniques and mass storage have reduced recording costs to the point that high-quality recordings can be produced in small studios. Today, the process of making a recording is separated into tracking, mixing and mastering. Multitrack recording makes it possible to capture signals from several microphones, or from different takes, to tape, disc or mass storage, allowing previously unavailable flexibility in the mixing and mastering stages. Software There are many different digital audio recording and processing programs running under several computer operating systems for all purposes, ranging from casual users and serious amateurs working on small projects to professional sound engineers recording albums, scoring films and doing sound design for video games. Digital dictation software for recording and transcribing speech has different requirements; intelligibility and flexible playback facilities are priorities, while a wide frequency range and high audio quality are not. Cultural effects The development of analog sound recording in the nineteenth century and its widespread use throughout the twentieth century had a huge impact on the development of music. Before analog sound recording was invented, most music was heard only as a live performance. Throughout the medieval, Renaissance, Baroque and Classical eras, and through much of the Romantic era, the main way that songs and instrumental pieces were recorded was through music notation. While notation indicates the pitches of the melody and their rhythm, many aspects of the performance are left undocumented. Indeed, in the Medieval era, notation for Gregorian chant did not indicate the rhythm of the chant. In the Baroque era, instrumental pieces often lacked a tempo indication, and usually none of the ornaments were written down. As a result, each performance of a song or piece would be slightly different. With the development of analog sound recording, though, a performance could be permanently fixed in all of its elements: pitch, rhythm, timbre, ornaments and expression. This meant that many more elements of a performance could be captured and disseminated to other listeners. The development of sound recording also enabled a much larger proportion of people to hear famous orchestras, operas, singers and bands, because even if a person could not afford to hear a live concert, they might be able to hear a recording. The availability of sound recording thus helped to spread musical styles to new regions, countries and continents. The cultural influence went in a number of directions. 
Sound recordings enabled Western music lovers to hear actual recordings of Asian, Middle Eastern and African groups and performers, increasing awareness of non-Western musical styles. At the same time, sound recordings enabled music lovers outside the West to hear the most famous North American and European groups and singers. As digital recording developed, so did a controversy commonly known as the analog versus digital controversy. Audio professionals, audiophiles, consumers and musicians alike have contributed to the debate, based on their interactions with the media and their preferences for analog or digital processes. Scholarly discourse on the controversy came to focus on concerns about the perception of moving image and sound. There are individual and cultural preferences for either method. While approaches and opinions vary, some emphasize sound quality as paramount, while others treat technology preferences as the deciding factor. Analog enthusiasts may embrace the medium's inherent limitations as strengths in the compositional, editing, mixing, and listening phases; digital advocates point to the flexibility of the same processes. This debate has fostered a revival of vinyl in the music industry, as well as of analog electronics and analog-style plug-ins for recording and mixing software. Legal status In copyright law, a phonogram or sound recording is a work that results from the fixation of sounds in a medium. The notice of copyright in a phonogram uses the sound recording copyright symbol, which the Geneva Phonograms Convention defines as ℗ (the letter P in a full circle). This usually accompanies the copyright notice for the underlying musical composition, which uses the ordinary © symbol. The recording is separate from the song, so copyright for a recording usually belongs to the record company; it is less common for an artist or producer to hold these rights. In the United States, copyright for recordings has existed since 1972, while copyright for musical compositions, or songs, has existed since 1831. Disputes over sampling and beats are ongoing. United States United States copyright law defines "sound recordings" as "works that result from the fixation of a series of musical, spoken, or other sounds" other than an audiovisual work's soundtrack. Prior to the Sound Recording Amendment (SRA), which took effect in 1972, copyright in sound recordings was handled at the state level. Federal copyright law preempts most state copyright laws, but it allows state copyright in sound recordings to continue until one full copyright term after the SRA's effective date, that is, until 2067. United Kingdom Since 1934, copyright law in Great Britain has treated sound recordings (or phonograms) differently from musical works. The Copyright, Designs and Patents Act 1988 defines a sound recording as (a) a recording of sounds, from which the sounds may be reproduced, or (b) a recording of the whole or any part of a literary, dramatic or musical work, from which sounds reproducing the work or part may be produced, regardless of the medium on which the recording is made or the method by which the sounds are reproduced or produced. It thus covers vinyl records, tapes, compact discs, digital audiotapes, and MP3s that embody recordings.
Troilite
Troilite () is a rare iron sulfide mineral with the simple formula FeS. It is the iron-rich endmember of the pyrrhotite group. Pyrrhotite has the formula Fe(1−x)S (x = 0 to 0.2) and is iron-deficient. As troilite lacks the iron deficiency which gives pyrrhotite its characteristic magnetism, troilite is non-magnetic. Troilite can be found as a native mineral on Earth but is more abundant in meteorites, in particular those originating from the Moon and Mars. It is among the minerals found in samples of the meteorite that struck Chelyabinsk, Russia, on 15 February 2013. The uniform presence of troilite on the Moon, and possibly on Mars, has been confirmed by the Apollo, Viking and Phobos missions. The relative abundances of the isotopes of sulfur are rather constant in meteorites compared to Earth minerals, and therefore troilite from the Canyon Diablo meteorite was chosen as the international sulfur isotope ratio standard, the Canyon Diablo Troilite (CDT). Structure Troilite has a hexagonal structure (Pearson symbol hP24, space group P-62c, No. 190). Its unit cell is approximately a combination of two vertically stacked basic NiAs-type cells of pyrrhotite, in which the top cell is diagonally shifted. For this reason, troilite is sometimes called pyrrhotite-2C. Discovery A meteorite fall was observed in 1766 at Albareto, Modena, Italy. Samples were collected and studied by Domenico Troili, who described the iron sulfide inclusions in the meteorite. These iron sulfides were long considered to be pyrite (i.e., FeS2). In 1862, the German mineralogist Gustav Rose analyzed the material, recognized it as stoichiometric 1:1 FeS, and gave it the name troilite in recognition of the work of Domenico Troili. Occurrence Troilite has been reported from a variety of meteorites, occurring with daubréelite, chromite, sphalerite, graphite, and a variety of phosphate and silicate minerals. It has also been reported from serpentinite in the Alta mine, Del Norte County, California, and in layered igneous intrusions in Western Australia, the Ilimaussaq intrusion of southern Greenland, the Bushveld Complex in South Africa and at Nordfjellmark, Norway. In the South African and Australian occurrences it is associated with copper, nickel and platinum iron ore deposits, occurring with pyrrhotite, pentlandite, mackinawite, cubanite, valleriite, chalcopyrite and pyrite. Troilite is extremely rarely encountered in the Earth's crust (even pyrrhotite is relatively rare compared to pyrite and iron(II) sulfate minerals), and most troilite on Earth is of meteoritic origin. One iron meteorite, Mundrabilla, contains 25 to 35 volume percent troilite. The most famous troilite-containing meteorite is Canyon Diablo. Canyon Diablo Troilite (CDT) is used as the standard for the relative concentrations of the different isotopes of sulfur. A meteoritic standard was chosen because of the constancy of the sulfur isotopic ratio in meteorites, whereas the sulfur isotopic composition of Earth materials varies due to bacterial activity. In particular, certain sulfate-reducing bacteria can reduce 32S-bearing sulfate 1.07 times faster than 34S-bearing sulfate, which may increase the 32S/34S ratio by up to 10%. Troilite is the most common sulfide mineral at the lunar surface. It forms about one percent of the lunar crust and is present in any rock or meteorite originating from the Moon. In particular, all basalts brought back by the Apollo 11, 12, 15 and 16 missions contain about 1% troilite. Troilite is regularly found in Martian meteorites (i.e. those originating from Mars). 
As on the Moon's surface and in meteorites generally, the fraction of troilite in Martian meteorites is close to 1%. Based on observations by the Voyager spacecraft in 1979 and Galileo in 1996, troilite might also be present in the rocks of Jupiter's satellites Ganymede and Callisto. While experimental data for Jupiter's moons are still very limited, theoretical modeling assumes a large percentage of troilite (~22.5%) in the cores of those moons.
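Sulfur isotope measurements against the CDT standard are conventionally reported in delta notation, δ34S = (R_sample/R_CDT − 1) × 1000, expressed in parts per thousand (per mil), where R is the 34S/32S ratio. The sketch below shows that arithmetic in Python; the reference value is the commonly quoted CDT ratio, and the sample ratio is an invented number used purely for illustration.

```python
# Commonly quoted 34S/32S ratio of Canyon Diablo Troilite.
R_CDT = 0.0450045

def delta_34s(r_sample: float) -> float:
    """Return delta-34S in per mil relative to the CDT standard."""
    return (r_sample / R_CDT - 1.0) * 1000.0

# Hypothetical sample, slightly enriched in the heavy isotope
# relative to the meteoritic standard (illustrative value only).
print(round(delta_34s(0.0452), 1))  # -> 4.3 per mil
```

A positive δ34S means the sample is enriched in 34S relative to the meteoritic standard; terrestrial sulfur processed by the bacteria mentioned above can deviate noticeably in either direction.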
Pipefish
Pipefishes or pipe-fishes (Syngnathinae) are a subfamily of small fishes which, together with the seahorses and seadragons (Phycodurus and Phyllopteryx), form the family Syngnathidae. Description Pipefish look like straight-bodied seahorses with tiny mouths. The name is derived from the peculiar form of the snout, which is like a long tube, ending in a narrow and small mouth which opens upwards and is toothless. The body and tail are long, thin, and snake-like. They each have a highly modified skeleton formed into armored plating. This dermal skeleton has several longitudinal ridges, so a vertical section through the body looks angular, not round or oval as in the majority of other fishes. A dorsal fin is always present, and is the principal (in some species, the only) organ of locomotion. The ventral fins are consistently absent, and the other fins may or may not be developed. The gill openings are extremely small and placed near the upper posterior angle of the gill cover. Many are very weak swimmers in open water, moving slowly by means of rapid movements of the dorsal fin. Some species of pipefish have prehensile tails, as in seahorses. The majority of pipefishes have some form of a caudal fin (unlike seahorses), which can be used for locomotion. See fish anatomy for fin descriptions. Some species of pipefish have more developed caudal fins, such as the group collectively known as flagtail pipefish, which are quite strong swimmers. Habitat and distribution Most pipefishes are marine dwellers; only a few are freshwater species. They are abundant on coasts of the tropical and temperate zones. Most species of pipefish are usually in length and generally inhabit sheltered areas in coral reefs or seagrass beds. Habitat loss and threats Due to their lack of strong swimming ability, pipefish are often found in shallow waters that are easily disturbed by industrial runoff and human recreation. Shorelines are also affected by boats and drag lines that move shoreline sediment. These disturbances cause a decrease in the seagrasses and eelgrasses that are vital to pipefish habitats. The pipefish's narrow distribution range indicates that they are less able to adapt to new habitats or to habitat change. Another factor that affects pipefish populations is their use in traditional Chinese medicine (TCM) remedies, despite the lack of evidence of efficacy beyond placebo. Syngnathidae in general are in high demand for pseudo-scientific medicinal cures, but pipefish are even more exploited because of a belief in their higher level of potency (because they are longer than the more common variety of seahorses). The aquarium trade in pipefish has also increased in recent years. Reproduction and parental care Pipefishes, like their seahorse relatives, leave most of the parenting duties to the male, which provides all of the postzygotic care for its offspring, supplying them with nutrients and oxygen through a placenta-like connection. It broods the offspring either on a distinct region of its body or in a brood pouch. Brood pouches vary significantly among different species of pipefish, but all contain a small opening through which the female's eggs can be deposited. The location of the brood pouch can be along the entire underside of the pipefish or just at the base of the tail, as with seahorses. Pipefish in the genus Syngnathus have a brood pouch with a ventral seam that can completely cover all of their eggs when sealed. 
In males without these pouches, eggs adhere to a strip of soft skin on the ventral surface of their bodies that does not have any exterior covering. The evolution of male brooding in pipefish is thought to be a result of the reproductive advantage granted to pipefish ancestors that deposited their eggs onto the males, which could escape predation and protect them. Furthermore, the ability to transfer immune information from both the mother (in the egg) and the father (in the pouch), unlike in other chordates, in which only the mother can transfer immune information, is believed to have an additive beneficial effect on offspring immunity. Courtship between male and female pipefish involves lengthy and complicated displays. For example, in Syngnathus typhle, copulation is always preceded by a ritualized dance by both sexes. The dance involves very conspicuous wriggling and shaking motions, especially in comparison to the species' otherwise extremely secretive lifestyle. Under the threat or presence of a predator, pipefish are more reluctant to perform their dances. In addition, when the risk of predation is high, they copulate less frequently, dance less per copulation, and females transfer more eggs per copulation. Although S. typhle males normally prefer to mate with larger females, they mate randomly when potentially threatened by predators. Furthermore, in Corythoichthys haematopterus, similar ritualized mating dances were hypothesized to aid in reproductive synchronization by allowing the female to assess male willingness to spawn, so that her eggs are not wasted. During pipefish copulation, which signifies the termination of the courtship dance, the female transfers her eggs through a small ovipositor into the male brood pouch or onto the special patch of skin on the male's ventral body surface. While the eggs are being transferred, the mating pair rises through the water until implantation is complete. At this point, the male assumes an S-shaped posture and fertilizes the eggs, all the while descending back down the water column. Males possessing brood pouches release their sperm directly into them; the pouches are then vigorously shaken. The ventral seams are not opened until weeks later, when the male pipefish give birth. A physical limit exists for the number of eggs a male pipefish can carry, so males are considered to be the limiting sex. Females can often produce more eggs than males can accommodate inside their brood pouches, resulting in more eggs than can be cared for. Other factors may restrict female reproductive success, including male pregnancy length and energy investment in progeny. Because the pipefish embryos develop within the male, feeding on nutrients supplied by him, male pipefish invest more energy than females in each zygote. Additionally, they invest more energy per unit time than females throughout each breeding season. As a result, some males may consume their embryos rather than continuing to rear them, regaining energy in situations in which their bodies are exhausted of resources. Pregnant male pipefish can absorb nutrients from their broods, in a manner very similar to the filial cannibalism found in many other families of fish. The smallest eggs in a brood of various egg sizes usually have lower survival rates than larger ones, because the larger eggs carry a longer-lasting food source (absent contributions from the father); hence the larger eggs are more likely to develop into mature adults. 
In other instances, some pipefishes may consume the embryos of mates that seem less fit or desirable, as each male generally copulates with more than one female. Young are born free-swimming, with relatively little or no yolk sac, and begin feeding immediately. From the time they hatch, they are independent of their parents, which at that time may view them as food. Some fry have short larval stages and live as plankton for a short while. Others are fully developed but miniature versions of their parents, assuming the same behaviors as their parents immediately. Pair bonding varies widely between different species of pipefish. While some are monogamous or seasonally monogamous, others are not. Many species exhibit polyandry, a breeding system in which one female mates with two or more males. This tends to occur with greater frequency in internal-brooding species of pipefishes than in external-brooding ones, due to limitations in male brood capacity. Polyandrous species are also more likely to have females with complex sexual signals such as ornaments. For example, the polyandrous Gulf pipefish (Syngnathus scovelli) displays considerable sexually dimorphic characteristics, such as greater ornament area and number, and larger body size. Genera Subfamily Syngnathinae (pipefishes and seadragons):
Genus Acentronura Kaup, 1853
Genus Amphelikturus Parr, 1930
Genus Anarchopterus Hubbs, 1935
Genus Apterygocampus Weber, 1913
Genus Bhanotia Hora, 1926
Genus Bryx Herald, 1940
Genus Bulbonaricus Herald, 1953
Genus Campichthys Whitley, 1931
Genus Choeroichthys Kaup, 1856
Genus Corythoichthys Kaup, 1853
Genus Cosmocampus Dawson, 1979
Genus Doryichthys Kaup, 1853
Genus Doryrhamphus Kaup, 1856
Genus Dunckerocampus Whitley, 1933
Genus Enneacampus Dawson, 1981
Genus Entelurus Duméril, 1870
Genus Festucalex Whitley, 1931
Genus Filicampus Whitley, 1948
Genus Halicampus Kaup, 1856
Genus Haliichthys Gray, 1859
Genus Heraldia Paxton, 1975
Genus Hippichthys Bleeker, 1849—river pipefishes
Genus Histiogamphelus McCulloch, 1914
Genus Hypselognathus Whitley, 1948
Genus Ichthyocampus Kaup, 1853
Genus Idiotropiscis Whitley, 1947
Genus Kaupus Whitley, 1951
Genus Kimblaeus Dawson, 1980
Genus Kyonemichthys Gomon, 2007
Genus Leptoichthys Kaup, 1853
Genus Leptonotus Kaup, 1853
Genus Lissocampus Waite and Hale, 1921
Genus Maroubra Whitley, 1948
Genus †Maroubrichthys Parrin, 1992 (Oligocene of Russia)
Genus Micrognathus Duncker, 1912
Genus Microphis Kaup, 1853—freshwater pipefishes
Genus Minyichthys Herald and Randall, 1972
Genus Mitotichthys Whitley, 1948
Genus Nannocampus Günther, 1870
Genus †Nepigastrosyngnathus Pharisat, 1993 (Oligocene of France)
Genus Nerophis Rafinesque, 1810
Genus Notiocampus Dawson, 1979
Genus Penetopteryx Lunel, 1881
Genus Phoxocampus Dawson, 1977
Genus Phycodurus Gill, 1896—leafy seadragon
Genus Phyllopteryx Swainson, 1839—seadragons
Genus Pseudophallus Herald, 1940—fluvial pipefishes
Genus Pugnaso Whitley, 1948
Genus Siokunichthys Herald, 1953
Genus Solegnathus Swainson, 1839
Genus Stigmatopora Kaup, 1853
Genus Stipecampus Whitley, 1948
Genus Syngnathoides Bleeker, 1851
Genus Syngnathus Linnaeus, 1758
Genus Trachyrhamphus Kaup, 1853
Genus Urocampus Günther, 1870
Genus Vanacampus Whitley, 1951
Basket star
The Euryalina are a suborder of brittle stars which includes large species with either branching arms (called "basket stars") or long and curling arms (called "snake stars"). It is sometimes listed as the order Euryalida. Characteristics Many of the species in this order have characteristic repeatedly branched arms (a shape known as "basket stars", which includes most Gorgonocephalidae and two species in the family Euryalidae), while the other species have very long and curling arms and go rather by the name of "snake stars" (mostly abyssal species). Many of them live in deep sea habitats or cold waters, though some basket stars can be seen at night in shallow tropical reefs. Most young basket stars live on a specific type of coral. In the wild they may live up to 35 years. Like other echinoderms, basket stars lack blood and achieve gas exchange via their water vascular system. The basket stars are the largest ophiuroids: Gorgonocephalus stimpsoni measures up to 70 cm in arm length, with a disk diameter of 14 cm. Systematics and phylogeny The fossil record of this group is rather poor and only dates back to the Carboniferous. Basket stars are divided into the following families:
family Asteronychidae Ljungman, 1867 – 4 genera (11 species)
family Euryalidae Gray, 1840, emended Okanishi et al., 2011 – 11 genera (89 species)
family Gorgonocephalidae Ljungman, 1867 – 34 genera (96 species)
  sub-family Astrocloninae Okanishi & Fujita, 2018
  sub-family Astrothamninae Okanishi & Fujita, 2013
  sub-family Astrotominae Matsumoto, 1915
  sub-family Gorgonocephalinae Döderlein, 1911
Grafting
Grafting or graftage is a horticultural technique whereby tissues of plants are joined so as to continue their growth together. The upper part of the combined plant is called the scion () while the lower part is called the rootstock. The success of this joining requires that the vascular tissues grow together. The natural equivalent of this process is inosculation. The technique is most commonly used in asexual propagation of commercially grown plants for the horticultural and agricultural trades. The scion is typically joined to the rootstock at the soil line; however, top work grafting may occur far above this line, leaving an understock consisting of the lower part of the trunk and the root system. In most cases, the stock or rootstock is selected for its roots and the scion is selected for its stems, leaves, flowers, or fruits. The scion contains the desired genes to be duplicated in future production by the grafted plant. In stem grafting, a common grafting method, a shoot of a selected, desired plant cultivar is grafted onto the stock of another type. In another common form called bud grafting, a dormant side bud is grafted onto the stem of another stock plant, and when it has inosculated successfully, it is encouraged to grow by pruning off the stem of the stock plant just above the newly grafted bud. For successful grafting to take place, the vascular cambium tissues of the stock and scion plants must be placed in contact with each other. Both tissues must be kept alive until the graft has "taken", usually a period of a few weeks. Successful grafting only requires that a vascular connection take place between the grafted tissues. Research conducted in Arabidopsis thaliana hypocotyls has shown that the connection of phloem takes place after three days of initial grafting, whereas the connection of xylem can take up to seven days. Joints formed by grafting are not as strong as naturally formed joints, so a physical weak point often still occurs at the graft because only the newly formed tissues inosculate with each other. The existing structural tissue (or wood) of the stock plant does not fuse. Advantages Precocity: The ability to induce fruitfulness without the need for completing the juvenile phase. Juvenility is the natural state through which a seedling plant must pass before it can become reproductive. In most fruiting trees, juvenility may last between 5 and 9 years, but in some tropical fruits, e.g., mangosteen, juvenility may be prolonged for up to 15 years. Grafting of mature scions onto rootstocks can result in fruiting in as little as two years. Dwarfing: To induce dwarfing or cold tolerance or other characteristics to the scion. Most apple trees in modern orchards are grafted on to dwarf or semi-dwarf trees planted at high density. They provide more fruit per unit of land, of higher quality, and reduce the danger of accidents by harvest crews working on ladders. Care must be taken when planting dwarf or semi-dwarf trees. If such a tree is planted with the graft below the soil, then the scion portion can also grow roots and the tree will still grow to its standard size. Ease of propagation: Because the scion is difficult to propagate vegetatively by other means, such as by cuttings. In this case, cuttings of an easily rooted plant are used to provide a rootstock. In some cases, the scion may be easily propagated, but grafting may still be used because it is commercially the most cost-effective way of raising a particular type of plant. 
Hybrid breeding: To speed maturity of hybrids in fruit tree breeding programs. Hybrid seedlings may take ten or more years to flower and fruit on their own roots. Grafting can reduce the time to flowering and shorten the breeding program. Hardiness: Because the scion has weak roots or the roots of the stock plants are tolerant of difficult conditions. e.g. many Western Australian plants are sensitive to dieback on heavy soils, common in urban gardens, and are grafted onto hardier eastern Australian relatives. Grevilleas and eucalypts are examples. Sturdiness: To provide a strong, tall trunk for certain ornamental shrubs and trees. In these cases, a graft is made at a desired height on a stock plant with a strong stem. This is used to raise 'standard' roses, which are rose bushes on a high stem, and it is also used for some ornamental trees, such as certain weeping cherries. Disease/pest resistance: In areas where soil-borne pests or pathogens would prevent the successful planting of the desired cultivar, the use of pest/disease tolerant rootstocks allow the production from the cultivar that would be otherwise unsuccessful. A major example is the use of rootstocks in combating Phylloxera. Pollen source: To provide pollenizers. For example, in tightly planted or badly planned apple orchards of a single variety, limbs of crab apple may be grafted at regularly spaced intervals onto trees down rows, say every fourth tree. This takes care of pollen needs at blossom time. Repair: To repair damage to the trunk of a tree that would prohibit nutrient flow, such as stripping of the bark by rodents that completely girdles the trunk. In this case a bridge graft may be used to connect tissues receiving flow from the roots to tissues above the damage that have been severed from the flow. Where a water sprout, basal shoot or sapling of the same species is growing nearby, any of these can be grafted to the area above the damage by a method called inarch grafting. These alternatives to scions must be of the correct length to span the gap of the wound. Changing cultivars: To change the cultivar in a fruit orchard to a more profitable cultivar, called top working. It may be faster to graft a new cultivar onto existing limbs of established trees than to replant an entire orchard. Genetic consistency: Apples are notorious for their genetic variability, even differing in multiple characteristics, such as, size, color, and flavor, of fruits located on the same tree. In the commercial farming industry, consistency is maintained by grafting a scion with desired fruit traits onto a hardy stock. Curiosities: A practice sometimes carried out by gardeners is to graft related potatoes and tomatoes so that both are produced on the same plant, one above ground and one underground. Cacti of widely different forms are sometimes grafted on to each other. Multiple cultivars of fruits such as apples are sometimes grafted on a single tree. This so-called "family tree" provides more fruit variety for small spaces such as a suburban backyard, and also takes care of the need for pollenizers. The drawback is that the gardener must be sufficiently trained to prune them correctly, or one strong variety will usually "take over." Multiple cultivars of different "stone fruits" (Prunus species) can be grafted on a single tree. This is called a fruit salad tree. Ornamental and functional, tree shaping uses grafting techniques to join separate trees or parts of the same tree to itself. Furniture, hearts, entry archways are examples. 
Axel Erlandson was a prolific tree shaper who grew over 75 mature specimens. Factors for a successful graft Compatibility of scion and stock: Because grafting involves the joining of vascular tissues between the scion and rootstock, plants lacking vascular cambium, such as monocots, cannot normally be grafted. As a general rule, the closer two plants are genetically, the more likely the graft union will form. Genetically identical clones and intra-species grafts have a high success rate. Grafting between species of the same genus is sometimes successful. Grafting has a low success rate when performed with plants in the same family but in different genera, and grafting between different families is rare. Cambium alignment and pressure: The vascular cambium of the scion and stock should be tightly pressed together and oriented in the direction of normal growth. Proper alignment and pressure encourage the tissues to join quickly, allowing nutrients and water to transfer from the rootstock to the scion. Completed during the appropriate stage of the plant: The graft is completed at a time when the scion and stock are capable of producing callus and other wound-response tissues. Generally, grafting is performed when the scion is dormant, as premature budding can drain the grafting site of moisture before the graft union is properly established. Temperature greatly affects the physiological stage of plants: if the temperature is too warm, premature budding may result, while excessively high temperatures can slow or halt callus formation. Proper care of the graft site: After grafting, it is important to nurse the grafted plant back to health for a period of time. Various grafting tapes and waxes are used to protect the scion and stock from excessive water loss. Furthermore, depending on the type of graft, twine or string is used to add structural support to the grafting site. Sometimes it is necessary to prune the site, as the rootstock may produce shoots that inhibit the growth of the scion. Tools Cutting tools: It is good practice to keep the cutting tool sharp, to minimize tissue damage, and clean of dirt and other substances, to avoid the spread of disease. A good knife for general grafting should have a blade and handle length of about 3 inches and 4 inches respectively. Specialized knives for grafting include bud-grafting knives, surgical knives, and pruning knives. Cleavers, chisels, and saws are utilized when the stock is too large to be cut otherwise. Disinfecting tools: Treating the cutting tools with disinfectants ensures the grafting site is clear of pathogens. The most commonly used sterilizing agent is alcohol. Graft seals: These keep the grafting site hydrated. Good seals should be tight enough to retain moisture while, at the same time, loose enough to accommodate plant growth. They include specialized types of clay, wax, petroleum jelly, and adhesive tape. Tying and support materials: These add support and pressure to the grafting site to hold the stock and scion together before the tissues join, which is especially important in herbaceous grafting. The material employed is often dampened before use to help protect the site from desiccation. Support equipment includes strips made from various substances, twine, nails, and splints. Grafting machines: Because grafting can take a lot of time and skill, grafting machines have been created. Automation is particularly popular for seedling grafting in countries such as Japan and Korea, where farming land is both limited and used intensively. 
Certain machines can graft 800 seedlings per hour. Techniques Approach Approach grafting or inarching is used to join together plants that are otherwise difficult to join. The plants are grown close together, and then joined so that each plant has roots below and growth above the point of union. Both scion and stock retain their respective parents, which may or may not be removed after joining. It is also used in pleaching. The graft can be successfully accomplished at any time of year. Bud Bud grafting (also called chip budding or shield budding) uses a bud instead of a twig. Grafting roses is the most common example of bud grafting. In this method a bud is removed from the parent plant, and the base of the bud is inserted beneath the bark of the stem of the stock plant from which the rest of the shoot has been cut. Any extra bud that starts growing from the stem of the stock plant is removed. Examples include roses and fruit trees like peaches. Budwood is a stick with several buds on it that can be cut out and used for bud grafting. It is a common method of propagation for citrus trees. Cleft In cleft grafting a small cut is made in the stock and then the pointed end of the scion is inserted in the stock. This is best done in the early spring and is useful for joining a thin scion about diameter to a thicker branch or stock. It is best if the former has 3–5 buds and the latter is in diameter. The branch or stock should be split carefully down the middle to form a cleft about deep. If it is a branch that is not vertical, then the cleft should be cut horizontally. The end of the scion should be cut cleanly to a long shallow wedge, preferably with a single cut for each wedge surface, and not whittled. A third cut may be made across the end of the wedge to make it straight across. Slide the wedge into the cleft so that it is at the edge of the stock and the centre of the wedge faces are against the cambium layer between the bark and the wood. It is preferable if a second scion is inserted in a similar way into the other side of the cleft. This helps to seal off the cleft. Tape around the top of the stock to hold the scion in place and cover with grafting wax or sealing compound. This stops the cambium layers from drying out and also prevents the ingress of water into the cleft. Whip In whip grafting the scion and the stock are each cut at a slant and then joined. The grafted point is then bound with tape and covered with a soft sealant to prevent dehydration and infection by germs. The common variation is the whip and tongue graft, which is considered the most difficult to master but has the highest rate of success, as it offers the most cambium contact between the scion and the stock. It is the most common graft used in preparing commercial fruit trees. It is generally used with stock less than diameter, with the ideal diameter closer to and the scion should be of roughly the same diameter as the stock. The stock is cut through on one side only at a shallow angle with a sharp knife. (If the stock is a branch and not the main trunk of the rootstock, then the cut surface should face outward from the centre of the tree.) The scion is similarly sliced through at an equal angle starting just below a bud, so that the bud is at the top of the cut and on the side opposite the cut face. In the whip and tongue variation, a notch is cut downwards into the sliced face of the stock and a similar cut upwards into the face of the scion cut. 
These act as the tongues, and it requires some skill to make the cuts so that the scion and the stock marry up neatly. The elongated "Z" shape adds strength, removing the need for a companion rod in the first season (see illustration). The joint is then taped around and treated with tree-sealing compound or grafting wax. A whip graft without a tongue is less stable and may need added support. Stub Stub grafting is a technique that requires less stock than cleft grafting, and retains the shape of the tree. Scions generally have 6–8 buds in this process. An incision is made into the branch long, then the scion is wedged and forced into the branch. The scion should be at an angle of at most 35° to the parent tree so that the crotch remains strong. The graft is covered with grafting compound. After the graft has taken, the branch is removed and treated a few centimeters above the graft, to be fully removed when the graft is strong. Four-flap The four-flap graft (also called banana graft) is commonly used for pecans, and first became popular with this species in Oklahoma in 1975. It is heralded for maximum cambium overlap, but is a complex graft. It requires similarly sized diameters for the rootstock and scion. The bark of the rootstock is sliced and peeled back in four flaps, and the hardwood is removed, looking somewhat like a peeled banana. It is a difficult graft to learn. Awl Awl grafting requires the least resources and the least time. It is best done by an experienced grafter, as it is possible to accidentally drive the tool too far into the stock, reducing the scion's chance of survival. Awl grafting can be done by using a screwdriver to make a slit in the bark, not penetrating the cambium layer completely. The wedged scion is then inserted into the incision. Veneer Veneer grafting, or inlay grafting, is a method used for stock larger than in diameter. The scion is recommended to be about as thick as a pencil. Clefts of the same size as the scion are made on the side of the branch, not on top. The scion end is shaped as a wedge, inserted, and wrapped with tape to the scaffolding branches to give it more strength. Rind (also called bark) Rind grafting involves grafting a small scion onto the end of a thick stock. The thick stock is sawn off, a ~4 cm long bark-deep cut is made parallel to the stock from the sawn-off end down, and the bark is separated from the wood on one or both sides. The scion is shaped as a wedge, exposing cambium on both sides, and is pushed in under the bark of the stock, with a flat side against the wood. Natural grafting Tree branches and, more often, roots of the same species will sometimes naturally graft; this is called inosculation. The bark of the tree may be stripped away when the roots make physical contact with each other, exposing the vascular cambium and allowing the roots to graft together. A group of trees can share water and mineral nutrients via root grafts, which may be advantageous to weaker trees, and may also form a larger rootmass as an adaptation to promote fire resistance and regeneration, as exemplified by the California black oak (Quercus kelloggii). Additionally, grafting may protect the group from wind damage as a result of the increased mechanical stability provided by the grafting. Albino redwoods use root grafting as a form of plant parasitism of normal redwoods. A problem with root grafts is that they allow transmission of certain pathogens, such as the fungus that causes Dutch elm disease. 
Inosculation also sometimes occurs where two stems on the same tree, shrub or vine make contact with each other. This is common in plants such as strawberries and potatoes. Natural grafting is rarely seen in herbaceous plants, as those plants generally have short-lived roots with little to no secondary growth in the vascular cambium. Graft chimera Occasionally, a so-called "graft hybrid", or more accurately graft chimera, can occur where the tissues of the stock continue to grow within the scion. Such a plant can produce flowers and foliage typical of both plants as well as shoots intermediate between the two. The best-known example of this is probably +Laburnocytisus 'Adamii', a graft hybrid between Laburnum and Cytisus, which originated in a nursery near Paris, France, in 1825. This small tree bears yellow flowers typical of Laburnum anagyroides, purple flowers typical of Cytisus purpureus, and curious coppery-pink flowers that show characteristics of both "parents". Many species of cactus can also produce graft chimeras under the right conditions, although they are often created unintentionally and such results are often hard to replicate. Scientific uses Grafting has been important in flowering research. Leaves or shoots from plants induced to flower can be grafted onto uninduced plants and transmit a floral stimulus that induces them to flower. The transmission of plant viruses has been studied using grafting. Virus indexing involves grafting a symptomless plant that is suspected of carrying a virus onto an indicator plant that is very susceptible to the virus. Grafting can transfer chloroplasts (plant organelles that conduct photosynthesis), mitochondrial DNA, and even the entire cell nucleus containing the genome, potentially making a new species; this makes grafting a form of natural genetic engineering. Examples White spruce White spruce can be grafted with consistent success by using scions of current growth on thrifty 4- to 5-year-old rootstock (Nienstaedt and Teich 1972). Before greenhouse grafting, rootstocks should be potted in late spring, allowed to make seasonal growth, then subjected to a period of chilling outdoors, or for about 8 weeks in a cool room at 2 °C (Nienstaedt 1966). A method of grafting white spruce of seed-bearing age during the time of seed harvest in the fall was developed by Nienstaedt et al. (1958). Scions of white spruce of two ages of wood from 30- to 60-year-old trees were collected in the fall and grafted by three methods onto potted stock to which different day-length treatments had been applied prior to grafting. The grafted stock was given long-day and natural-day treatments. Survival was 70% to 100% and showed effects of rootstock and post-grafting treatments in only a few cases. Photoperiod and temperature treatments after grafting, however, had considerable effect on scion activity and total growth. The best post-grafting treatment was 4 weeks of long-day treatment followed by 2 weeks of short-day treatment, then 8 weeks of chilling, and finally long-day treatment. Since grafts of white spruce put on relatively little growth in the 2 years after grafting, techniques for accelerating early growth were studied by Greenwood (1988) and others. The cultural regimes used to promote one additional growth cycle in one year involve manipulation of day length and the use of cold storage to satisfy chilling requirements, as sketched below. 
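Chilling requirements of this kind are commonly tracked as hours accumulated at or below a threshold temperature. The following sketch is a hypothetical illustration of that bookkeeping, borrowing the 4 °C threshold and 1000-hour target that appear in the treatments described in this section; the temperature readings themselves are invented.

```python
def chilling_hours(hourly_temps_c, threshold_c=4.0):
    """Count hours at or below the chilling threshold temperature."""
    return sum(1 for t in hourly_temps_c if t <= threshold_c)

# Invented example: eight weeks (56 days x 24 h) of hourly cooler
# readings, all at or below the 4 degree C threshold.
readings = [3.5, 3.8, 2.9, 4.0] * 336  # 1344 hourly readings
REQUIREMENT_H = 1000  # chilling requirement used in the treatments above
print(chilling_hours(readings) >= REQUIREMENT_H)  # True: requirement met
```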
Greenwood took dormant potted grafts into the greenhouse in early January and then gradually raised the temperature during the course of a week until the minimum temperature rose to 15 °C. Photoperiod was increased to 18 hours using incandescent lighting. In this technique, grafts are grown until elongation has been completed, normally by mid-March. Soluble 10-52-10 fertilizer is applied at both ends of the growth cycle and 20-20-20 during the cycle, with irrigation as needed. When growth elongation is complete, day length is reduced to 8 hours using a blackout curtain. Budset follows, and the grafts are held in the greenhouse until mid-May. Grafts are then moved into a cooler at 4 °C for 1000 hours, after which they are moved to a shade frame, where they grow normally, with applications of fertilizer and irrigation as in the first cycle. Grafts are moved into cold frames or an unheated greenhouse in September, where they remain until January. Flower induction treatments are begun on grafts that have reached a minimum length of 1.0 m. Grafts are repotted from an initial pot size of 4.5 litres into 16 litre containers with a 2:1:1 soil mix of peat moss, loam, and aggregate. In one of the first accelerated growth experiments, white spruce grafts made in January and February, which would normally elongate shortly after grafting, set bud, and remain in that condition until the following spring, were refrigerated for 500, 1000, or 1500 hours beginning in mid-July, and a non-refrigerated control was held in the nursery. After completion of the cold treatment, the grafts were moved into the greenhouse with an 18-hour photoperiod until late October. Height increment was significantly (P < 0.01) influenced by the cold treatment. Best results were given by the 1000-hour treatment. The refrigeration (cold treatment) phase was subsequently shown to be effective when applied 2 months earlier with proper handling and use of blackout curtains, which allows the second growth cycle to be completed in time to satisfy dormancy requirements before January (Greenwood et al. 1988). Herbaceous grafting Grafting is often done for non-woody and vegetable plants (tomato, cucumber, eggplant and watermelon). Tomato grafting is very popular in Asia and Europe, and is gaining popularity in the United States. The main advantage of grafting is access to disease-resistant rootstocks. Researchers in Japan developed automated processes using grafting robots as early as 1987. Plastic tubing can be used to prevent desiccation and support the healing at the graft/scion interface. History, society and culture Fertile Crescent As humans began to domesticate plants and animals, horticultural techniques that could reliably propagate the desired qualities of long-lived woody plants needed to be developed. Although grafting is not specifically mentioned in the Hebrew Bible, it is claimed that ancient Biblical texts hint at the practice of grafting. For example, Leviticus 19:19 states "[the Hebrew people] shalt not sow their field with mingled seed..." (King James Bible). Some scholars believe the phrase mingled seeds includes grafting, although this interpretation remains contentious. Grafting is also mentioned in the New Testament: in Romans 11, starting at verse 17, there is a discussion about the grafting of wild olive trees concerning the relationship between Jews and Gentiles. By 500 BCE grafting was well established and practiced in the region, as the Mishna describes grafting as a commonplace technique used to grow grapevines. 
China According to recent research, "grafting technology had been practiced in China before 2000 BC". Additional evidence for grafting in China is found in Jia Sixie's 6th-century CE agricultural treatise Qimin Yaoshu (Essential Skills for the Common People). It discusses grafting pear twigs onto crab apple, jujube and pomegranate stock (domesticated apples had not yet arrived in China), as well as grafting persimmons. The Qimin Yaoshu refers to older texts that mentioned grafting, but those works have been lost. Nonetheless, given the sophistication of the methods discussed, and the long history of arboriculture in the region, grafting must already have been practiced for centuries by this time. Greece, Rome, and the Islamic Golden Age In Greece, a medical record written in 424 BCE contains the first direct reference to grafting. The work, titled On the Nature of the Child, is thought to have been written by a follower of Hippocrates. The language of the author suggests that grafting appeared centuries before this period. In Rome, Marcus Porcius Cato wrote the oldest surviving Latin prose text in 160 BCE. The book, De Agri Cultura (On Agriculture), outlines several grafting methods. Other authors in the region wrote about grafting in the following years; however, their publications often featured fallacious scion-stock combinations. Creating lavishly flourishing gardens was a common form of competition among medieval Islamic leaders. Because the region received an influx of foreign ornamentals to decorate these gardens, grafting was used extensively during this period. Europe and the United States After the fall of the Roman Empire, grafting continued to be practiced in Christian monasteries, and it regained popular appeal among lay people during the Renaissance. The invention of the printing press inspired a number of authors to publish books on gardening that included information on grafting. One example, A New Orchard and Garden: Or, the Best Way for Planting, Graffing, and to Make Any Ground Good for a Rich Orchard, Particularly in the North, was written by William Lawson in 1618. While the book contains practical grafting techniques, some of them still used today, it suffers from the exaggerated claims of scion-stock compatibility typical of this period. While grafting continued to grow in popularity in Europe during the eighteenth century, it was considered unnecessary in the United States, as the produce from fruit trees there was largely used either to make cider or to feed hogs. French Wine Pandemic Beginning in 1864, and without warning, grapevines across France began to decline sharply. Thanks to the efforts of scientists such as C. V. Riley and J. E. Planchon, the culprit was identified as phylloxera, an insect that infests the roots of vines and leaves them open to fungal infections. Initially, farmers unsuccessfully attempted to contain the pest by removing and burning affected vines. When it was discovered that phylloxera was an invasive species introduced from North America, some suggested importing rootstock from that region, as the North American vines were resistant to the pest. Others, opposed to the idea, argued that American rootstocks would imbue the French grapes with an undesirable taste; they instead preferred to inject the soil with expensive pesticides. Ultimately, grafting French vines onto American rootstocks became prevalent throughout the region, spurring the creation of new grafting techniques and machines. 
American rootstocks had trouble adapting to the high soil pH of some regions in France, so the final solution to the pandemic was to hybridize the American and French variants.
Cultivated plants propagated by grafting
Apple – grafting
Avocado – grafting
Conifer – stem cuttings, grafting
Citrus (lemon, orange, grapefruit, tangerine, dayap) – grafting
Grapes – stem cuttings, grafting, aerial layering
Kumquat – stem cuttings, grafting
Mango – grafting, budding
Maple – stem cuttings, grafting
Nut crops (walnut, pecan) – grafting
Peach – grafting
Pear – grafting
Rubber plant – bud grafting
Rose – grafting
Horses in warfare
The first evidence of horses in warfare dates from Eurasia between 4000 and 3000 BC. A Sumerian illustration of warfare from 2500 BC depicts some type of equine pulling wagons. By 1600 BC, improved harness and chariot designs made chariot warfare common throughout the Ancient Near East, and the earliest written training manual for war horses was a guide for training chariot horses written about 1350 BC. As formal cavalry tactics replaced the chariot, new training methods followed, and by 360 BC the Greek cavalry officer Xenophon had written an extensive treatise on horsemanship. The effectiveness of horses in battle was also revolutionized by improvements in technology, such as the invention of the saddle, the stirrup, and the horse collar.
Many different types and sizes of horses were used in war, depending on the form of warfare. The type used varied with whether the horse was being ridden or driven, and whether it was being used for reconnaissance, cavalry charges, raiding, communication, or supply. Throughout history, mules and donkeys, as well as horses, played a crucial role in providing support to armies in the field.
Horses were well suited to the warfare tactics of the nomadic cultures from the steppes of Eastern Europe and Central Asia. Several cultures in East Asia made extensive use of cavalry and chariots. Muslim warriors relied upon light cavalry in their campaigns throughout Northern Africa, Asia, and Europe beginning in the 7th and 8th centuries AD. Europeans used several types of war horses in the Middle Ages, and the best-known heavy cavalry warrior of the period was the armoured knight. With the decline of the knight and the rise of gunpowder in warfare, light cavalry again rose to prominence, used both in European warfare and in the conquest of the Americas. Battle cavalry developed to take on a multitude of roles in the late 18th and early 19th centuries and was often crucial for victory in the Napoleonic Wars. In the Americas, the use of horses and the development of mounted warfare tactics were learned by several tribes of indigenous people, and in turn highly mobile horse regiments were critical in the American Civil War. Horse cavalry began to be phased out after World War I in favour of tank warfare, though a few horse cavalry units were still used into World War II, especially as scouts. By the end of World War II, horses were seldom seen in battle, but were still used extensively for the transport of troops and supplies. Today, formal battle-ready horse cavalry units have almost disappeared, though the United States Army Special Forces used horses in battle during the 2001 invasion of Afghanistan. Horses are still used by organized armed fighters in the Global South. Many nations still maintain small units of mounted riders for patrol and reconnaissance, and military horse units are also used for ceremonial and educational purposes. Horses are also used for historical reenactment of battles, law enforcement, and in equestrian competitions derived from the riding and training skills once used by the military.
Types of horse used in warfare
A fundamental principle of equine conformation is "form to function". Therefore, the type of horse used for various forms of warfare depended on the work performed, the weight a horse needed to carry or pull, and the distance travelled. Weight affects speed and endurance, creating a trade-off: armour added protection, but the added weight reduced maximum speed. Various cultures therefore had different military needs.
In some situations, one primary type of horse was favoured over all others. In other places, multiple types were needed; warriors would travel to battle riding a lighter horse of greater speed and endurance, and then switch to a heavier horse, with greater weight-carrying capacity, when wearing heavy armour in actual combat.
The average horse can carry up to approximately 30% of its body weight. While all horses can pull more weight than they can carry, the maximum weight that horses can pull varies widely, depending on the build of the horse, the type of vehicle, road conditions, and other factors. Horses harnessed to a wheeled vehicle on a paved road can pull as much as eight times their weight, but far less when pulling wheelless loads over unpaved terrain. Thus, horses that were driven varied in size and had to make a trade-off between speed and weight, just as riding animals did (a rough worked example of these figures appears at the end of this section). Light horses could pull a small war chariot at speed. Heavy supply wagons, artillery, and support vehicles were pulled by heavier horses or a larger number of horses. The method by which a horse was hitched to a vehicle also mattered: horses could pull more weight with a horse collar than with a breast collar, and least of all with an ox yoke.
Light-weight
Light, oriental horses such as the ancestors of the modern Arabian, Barb, and Akhal-Teke were used for warfare that required speed, endurance, and agility. Such horses ranged from about to just under , weighing approximately . To move quickly, riders had to use lightweight tack and carry relatively light weapons such as bows, light spears, javelins, or, later, rifles. This was the original type of horse used for early chariot warfare, raiding, and light cavalry. Relatively light horses were used by many cultures, including the Ancient Egyptians, the Mongols, the Arabs, and the Native Americans. Throughout the Ancient Near East, small, light animals were used to pull chariots designed to carry no more than two passengers, a driver and a warrior. In the European Middle Ages, a lightweight war horse became known as the rouncey.
Medium-weight
Medium-weight horses developed as early as the Iron Age from the needs of various civilizations to pull heavier loads, such as chariots capable of holding more than two people, and, as light cavalry evolved into heavy cavalry, to carry heavily armoured riders. The Scythians were among the earliest cultures to produce taller, heavier horses. Larger horses were also needed to pull supply wagons and, later on, artillery pieces. In Europe, horses were also used to a limited extent to maneuver cannons on the battlefield as part of dedicated horse artillery units. Medium-weight horses had the greatest range in size, from about but stocky, to as much as , weighing approximately . They were generally quite agile in combat, though they did not have the raw speed or endurance of a lighter horse. By the Middle Ages, larger horses in this class were sometimes called destriers. They may have resembled modern Baroque or heavy warmblood breeds. Later, horses similar to the modern warmblood often carried European cavalry.
Heavy-weight
Large, heavy horses, weighing from , the ancestors of today's draught horses, were used, particularly in Europe, from the Middle Ages onward. They pulled heavy loads like supply wagons and were disposed to remain calm in battle.
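As a rough illustration of the carrying and pulling figures given earlier in this section (the 30% carrying rule of thumb and the roughly eightfold pulling advantage of a wheeled vehicle on a paved road), here is a minimal Python sketch; the function names and the sample weight are illustrative choices, not sourced values.

```python
def max_carry_kg(body_weight_kg: float) -> float:
    """Rule of thumb from the text: a horse carries up to ~30% of its body weight."""
    return 0.30 * body_weight_kg

def max_pull_kg(body_weight_kg: float, wheeled_on_paved_road: bool) -> float:
    """Pulling capacity varies widely; the text gives ~8x body weight as an
    upper bound for a wheeled vehicle on a paved road. The alternative factor
    here is a placeholder assumption, not a sourced figure."""
    return body_weight_kg * (8.0 if wheeled_on_paved_road else 1.0)

horse = 450.0  # kg, a plausible light-to-medium war horse (illustrative)
print(f"carry: up to {max_carry_kg(horse):.0f} kg")                       # ~135 kg
print(f"pull (paved, wheeled): up to {max_pull_kg(horse, True):.0f} kg")  # ~3600 kg
```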
Some historians believe these heavy horses may have carried the heaviest-armoured knights of the Late Medieval Period, though others dispute this claim, arguing that the destrier, or knight's battle horse, was a medium-weight animal. It is also disputed whether the destrier class included draught animals. Breeds at the smaller end of the heavyweight category may have included the ancestors of the Percheron, agile for their size and physically able to maneuver in battle.
Ponies
The British Army's 2nd Dragoons in 1813 had 340 ponies of and 55 ponies of ; the Lovat Scouts, formed in 1899, were mounted on Highland ponies; the British Army recruited 200 Dales ponies in World War II for use as pack and artillery animals; and the British Territorial Army experimented with the use of Dartmoor ponies as pack animals in 1935, finding them better than mules for the job.
Other equids
Horses were not the only equids used to support human warfare. Donkeys have been used as pack animals from antiquity to the present. Mules were also commonly used, especially as pack animals and to pull wagons, but also occasionally for riding. Because mules are often both calmer and hardier than horses, they were particularly useful for strenuous support tasks, such as hauling supplies over difficult terrain. However, under gunfire they were less cooperative than horses, so they were generally not used to haul artillery on battlefields. The size of a mule and the work to which it was put depended largely on the breeding of the mare that produced it. Mules could be lightweight, medium weight, or even, when produced from draught horse mares, moderately heavy.
Training and deployment
The oldest known manual on training horses for chariot warfare was written c. 1350 BC by the Hittite horsemaster Kikkuli. An ancient manual on training riding horses, particularly for the Ancient Greek cavalry, is Hippike (On Horsemanship), written about 360 BC by the Greek cavalry officer Xenophon; another early text was that of Kautilya, written about 323 BC.
Whether horses were trained to pull chariots, to be ridden as light or heavy cavalry, or to carry the armoured knight, much training was required to overcome the horse's natural instinct to flee from noise, the smell of blood, and the confusion of combat. They also learned to accept any sudden or unusual movements of humans while using a weapon or avoiding one. Horses used in close combat may have been taught, or at least permitted, to kick, strike, and even bite, thus becoming weapons themselves for the warriors they carried.
In most cultures, a war horse used as a riding animal was trained to be controlled with limited use of reins, responding primarily to the rider's legs and weight. The horse became accustomed to any necessary tack and protective armour placed upon it, and learned to balance under a rider who would also be laden with weapons and armour. Developing the balance and agility of the horse was crucial. The origins of the discipline of dressage came from the need to train horses to be both obedient and manoeuvrable. The haute école or "High School" movements of classical dressage taught today at the Spanish Riding School have their roots in manoeuvres designed for the battlefield. However, the airs above the ground were unlikely to have been used in actual combat, as most would have exposed the unprotected underbelly of the horse to the weapons of foot soldiers.
Horses used for chariot warfare were not only trained for combat conditions; because many chariots were pulled by a team of two to four horses, they also had to learn to work together with other animals in close quarters under chaotic conditions.
Technological innovations
Horses were probably ridden in prehistory before they were driven. However, evidence is scant, consisting mostly of simple images of human figures on horse-like animals drawn on rock or clay. The earliest tools used to control horses were bridles of various sorts, which were invented nearly as soon as the horse was domesticated. Evidence of bit wear appears on the teeth of horses excavated at the archaeology sites of the Botai culture in northern Kazakhstan, dated 3500–3000 BC.
Harness and vehicles
The invention of the wheel was a major technological innovation that gave rise to chariot warfare. At first, equines, both horses and onagers, were hitched to wheeled carts by means of a yoke around their necks in a manner similar to that of oxen. However, such a design is incompatible with equine anatomy, limiting both the strength and the mobility of the animal. By the time of the Hyksos invasions of Egypt, c. 1600 BC, horses were pulling chariots with an improved harness design that made use of a breastcollar and breeching, which allowed a horse to move faster and pull more weight. Even after the chariot had become obsolete as a tool of war, there was still a need for technological innovation in pulling technologies; horses were needed to pull heavy loads of supplies and weapons. The invention of the horse collar in China during the 5th century AD (Northern and Southern dynasties) allowed horses to pull greater weight than they could when hitched to a vehicle with the ox yokes or breast collars used in earlier times. The horse collar arrived in Europe during the 9th century and became widespread by the 12th century.
Riding equipment
Two major innovations that revolutionised the effectiveness of mounted warriors in battle were the saddle and the stirrup. Riders quickly learned to pad their horses' backs to protect themselves from the horse's spine and withers, and fought on horseback for centuries with little more than a blanket or pad on the horse's back and a rudimentary bridle. To help distribute the rider's weight and protect the horse's back, some cultures created stuffed padding that resembled the panels of today's English saddle. Both the Scythians and Assyrians used pads with added felt, attached with a surcingle or girth around the horse's barrel, for increased security and comfort. Xenophon mentioned the use of a padded cloth on cavalry mounts as early as the 4th century BC.
The saddle with a solid framework, or "tree", provided a bearing surface to protect the horse from the weight of the rider, but was not widespread until the 2nd century AD. It nonetheless made a critical difference, as horses could carry more weight when it was distributed across a solid saddle tree. A solid tree, the predecessor of today's Western saddle, also allowed a more built-up seat, giving the rider greater security in the saddle. The Romans are credited with the invention of the solid-treed saddle.
An invention that made cavalry particularly effective was the stirrup. A toe loop that held the big toe was used in India possibly as early as 500 BC, and later a single stirrup was used as a mounting aid. The first set of paired stirrups appeared in China about 322 AD during the Jin dynasty.
Following the invention of paired stirrups, which allowed a rider greater leverage with weapons as well as increased stability and mobility while mounted, nomadic groups such as the Mongols adopted the technology and developed a decisive military advantage. By the 7th century, due primarily to invaders from Central Asia, stirrup technology had spread from Asia to Europe. The Avar invaders are viewed as primarily responsible for spreading the use of the stirrup into central Europe. However, while stirrups were known in Europe in the 8th century, pictorial and literary references to their use date only from the 9th century. Widespread use in Northern Europe, including England, is credited to the Vikings, who spread the stirrup to those areas in the 9th and 10th centuries.
Tactics
The first archaeological evidence of horses used in warfare dates from between 4000 and 3000 BC in the steppes of Eurasia, in what today is Ukraine, Hungary, and Romania. Not long after the domestication of the horse, people in these locations began to live together in large fortified towns for protection from the threat of horseback-riding raiders, who could attack and escape faster than people of more sedentary cultures could follow. Horse-mounted nomads of the steppe and present-day Eastern Europe spread Indo-European languages as they conquered other tribes and groups. The use of horses in organised warfare was documented early in recorded history. One of the first depictions is the "war panel" of the Standard of Ur, in Sumer, dated c. 2500 BC, showing horses (or possibly onagers or mules) pulling a four-wheeled wagon.
Chariot warfare
Among the earliest evidence of chariot use are the burials of horse and chariot remains by the Andronovo (Sintashta-Petrovka) culture in modern Russia and Kazakhstan, dated to approximately 2000 BC. The oldest documentary evidence of what was probably chariot warfare in the Ancient Near East is the Old Hittite Anitta text, of the 18th century BC, which mentions 40 teams of horses at the siege of Salatiwara. The Hittites became well known throughout the ancient world for their prowess with the chariot. Widespread use of the chariot in warfare across most of Eurasia coincides approximately with the development of the composite bow, known from c. 1600 BC. Further improvements in wheels and axles, as well as innovations in weaponry, soon resulted in chariots being driven in battle by Bronze Age societies from China to Egypt. The Hyksos invaders brought the chariot to Ancient Egypt in the 16th century BC, and the Egyptians adopted its use from that time forward. The oldest preserved text related to the handling of war horses in the ancient world is the Hittite manual of Kikkuli, which dates to about 1350 BC and describes the conditioning of chariot horses. Chariots existed in the Minoan civilization, as they were inventoried on storage lists from Knossos in Crete, dating to around 1450 BC. Chariots were also used in China as far back as the Shang dynasty (c. 1600–1050 BC), where they appear in burials. The high point of chariot use in China came in the Spring and Autumn period (770–476 BC), although chariots continued in use until the 2nd century BC.
Descriptions of the tactical role of chariots in Ancient Greece and Rome are rare. The Iliad, possibly referring to Mycenaean practices used c. 1250 BC, describes the use of chariots for transporting warriors to and from battle, rather than for actual fighting.
Later, Julius Caesar, invading Britain in 55 and 54 BC, noted British charioteers throwing javelins, then leaving their chariots to fight on foot.
Cavalry
Some of the earliest examples of horses being ridden in warfare were horse-mounted archers or javelin-throwers, dating to the reigns of the Assyrian rulers Ashurnasirpal II and Shalmaneser III. However, these riders sat far back on their horses, a precarious position for moving quickly, and the horses were held by a handler on the ground, keeping the archer free to use the bow. Thus, these archers were more a type of mounted infantry than true cavalry. The Assyrians developed cavalry in response to invasions by nomadic people from the north, such as the Cimmerians, who entered Asia Minor in the 8th century BC and took over parts of Urartu during the reign of Sargon II, approximately 721 BC. Mounted warriors such as the Scythians also had an influence on the region in the 7th century BC. By the reign of Ashurbanipal in 669 BC, the Assyrians had learned to sit forward on their horses in the classic riding position still seen today and could be said to be true light cavalry.
The ancient Greeks used both light horse scouts and heavy cavalry, although not extensively, possibly due to the cost of keeping horses. Heavy cavalry is believed to have been developed by the Ancient Persians, although others argue for the Sarmatians. By the time of Darius (558–486 BC), Persian military tactics required horses and riders that were completely armoured, and the Persians selectively bred a heavier, more muscled horse to carry the additional weight. The cataphract was a type of heavily armoured cavalry with distinct tactics, armour, and weaponry used from the time of the Persians up until the Middle Ages. In Ancient Greece, Philip of Macedon is credited with developing tactics allowing massed cavalry charges. The most famous Greek heavy cavalry units were the Companion cavalry of Alexander the Great. The Chinese of the 4th century BC, during the Warring States period (403–221 BC), began to use cavalry against rival states. To fight nomadic raiders from the north and west, the Chinese of the Han dynasty (202 BC – 220 AD) developed effective mounted units. Cavalry was not used extensively by the Romans during the Roman Republic period, but by the time of the Roman Empire they made use of heavy cavalry. However, the backbone of the Roman army remained the infantry.
Horse artillery
Once gunpowder was invented, another major use of horses was as draught animals for heavy artillery, or cannon. In addition to field artillery, where horse-drawn guns were attended by gunners on foot, many armies had artillery batteries where each gunner was provided with a mount. Horse artillery units generally used lighter pieces, pulled by six horses. "9-pounders" were pulled by eight horses, and heavier artillery pieces needed a team of twelve. With the individual riding horses required for officers, surgeons and other support staff, as well as those pulling the artillery guns and supply wagons, an artillery battery of six guns could require 160 to 200 horses. Horse artillery usually came under the command of cavalry divisions, but in some battles, such as Waterloo, the horse artillery were used as a rapid response force, repulsing attacks and assisting the infantry. Agility was important; the ideal artillery horse was high, strongly built, but able to move quickly.
Asia
Central Asia
Relations between steppe nomads and the settled peoples in and around Central Asia were often marked by conflict.
The nomadic lifestyle was well suited to warfare, and steppe cavalry became some of the most militarily potent forces in the world, limited only by the nomads' frequent lack of internal unity. Periodically, strong leaders would organise several tribes into one force, creating an almost unstoppable power. These unified groups included the Huns, who invaded Europe and, under Attila, conducted campaigns in both eastern France and northern Italy, over 500 miles apart, within two successive campaign seasons. Other unified nomadic forces included the Wu Hu rebellions in China and the Mongol conquest of much of Eurasia.
South Asia
The literature of ancient India describes numerous horse nomads. Some of the earliest references to the use of horses in South Asian warfare are Puranic texts, which refer to an attempted invasion of India by the joint cavalry forces of the Sakas, Kambojas, Yavanas, Pahlavas, and Paradas, called the "five hordes" (pañca.ganah) or "Kśatriya" hordes (Kśatriya ganah). About 1600 BC, they captured the throne of Ayodhya by dethroning the Vedic king, Bahu. Later texts, such as the Mahābhārata, c. 950 BC, appear to recognise efforts taken to breed war horses and develop trained mounted warriors, stating that the horses of the Sindhu and Kamboja regions were of the finest quality, and that the Kambojas, Gandharas, and Yavanas were expert in fighting from horses. In technological innovation, the early toe loop stirrup is credited to the cultures of India and may have been in use as early as 500 BC.
Not long after, the cultures of Mesopotamia and Ancient Greece clashed with those of Central Asia and India. Herodotus (484–425 BC) wrote that Gandarian mercenaries of the Achaemenid Empire were recruited into the army of emperor Xerxes I of Persia (486–465 BC), which he led against the Greeks. A century later, the "Men of the Mountain Land", from north of the Kabul River, served in the army of Darius III of Persia when he fought against Alexander the Great at Arbela in 331 BC. In battle against Alexander at Massaga in 326 BC, the Assakenoi forces included 20,000 cavalry. The Mudra-Rakshasa recounts how cavalry of the Shakas, Yavanas, Kambojas, Kiratas, Parasikas, and Bahlikas helped Chandragupta Maurya (c. 320–298 BC) defeat the ruler of Magadha and take the throne, thus laying the foundations of the Mauryan dynasty in Northern India.
Mughal cavalry used gunpowder weapons but were slow to replace the traditional composite bow. Under the impact of European military successes in India, some Indian rulers adopted the European system of massed cavalry charges, although others did not. In the 18th century, Indian armies continued to field cavalry, mainly of the heavy variety.
East Asia
The Chinese used chariots for horse-based warfare until light cavalry forces became common during the Warring States period (403–221 BC). A major proponent of the change from chariots to riding horses was Wu Ling, c. 320 BC. However, conservative forces in China often opposed change, as cavalry did not benefit from the additional cachet attached to being the military branch dominated by the nobility, as it was in medieval Europe. Nevertheless, during the reign of Emperor Wu of Han (r. 141–87 BC), it is recorded that 300,000 government-owned horses were insufficient for the cavalry and baggage trains of the Han military in the campaigns to expel the Xiongnu nomads from the Ordos Desert, Qilian Mountains, Khangai Mountains and Gobi Desert, spurring new policies that encouraged households to hand over privately bred horses in exchange for military and corvée labour exemptions.
The Japanese samurai fought as cavalry for many centuries. They were particularly skilled in the art of archery from horseback. The archery skills of mounted samurai were developed by training such as yabusame, which originated in 530 AD and reached its peak under Minamoto no Yoritomo (1147–1199 AD) in the Kamakura period. The samurai switched from an emphasis on mounted bowmen to mounted spearmen during the Sengoku period (1467–1615 AD).
Middle East
During the period when various Islamic empires controlled much of the Middle East as well as parts of West Africa and the Iberian Peninsula, Muslim armies consisted mostly of cavalry, made up of fighters from various local groups, mercenaries and Turkoman tribesmen. The latter were considered particularly skilled as both lancers and archers from horseback. In the 9th century the use of Mamluks, slaves raised to be soldiers for various Muslim rulers, became increasingly common. Mobile tactics, advanced breeding of horses, and detailed training manuals made Mamluk cavalry a highly efficient fighting force. The use of armies consisting mostly of cavalry continued among the Turkish people who founded the Ottoman Empire. Their need for large mounted forces led to the establishment of the sipahi, cavalry soldiers who were granted lands in exchange for providing military service in times of war.
Mounted Muslim warriors conquered North Africa and the Iberian Peninsula during the 7th and 8th centuries AD, following the Hijrah of Muhammad in 622 AD. By 630 AD, their influence had expanded across the Middle East and into western North Africa. By 711 AD, the light cavalry of Muslim warriors had reached Spain, and they controlled most of the Iberian Peninsula by 720. Their mounts were of various oriental types, including the North African Barb. A few Arabian horses may have come with the Umayyads who settled in the Guadalquivir valley. Another strain of horse that came with the Islamic invaders was the Turkoman horse. Muslim invaders travelled north from present-day Spain into France, where they were defeated by the Frankish ruler Charles Martel at the Battle of Tours in 732 AD.
Europe
Antiquity
Middle Ages
During the European Middle Ages, there were three primary types of war horses: the destrier, the courser, and the rouncey, which differed in size and usage. A generic word used to describe medieval war horses was charger, which appears interchangeable with the other terms. The medieval war horse was of moderate size, rarely exceeding . Heavy horses were logistically difficult to maintain and less adaptable to varied terrains. The destrier of the early Middle Ages was moderately larger than the courser or rouncey, in part to accommodate heavier armoured knights. However, destriers were not as large as draught horses, averaging between and . On the European continent, the need to carry more armour against mounted enemies such as the Lombards and Frisians led the Franks to develop heavier, bigger horses. As the amount of armour and equipment increased in the later Middle Ages, the height of the horses increased; some late medieval horse skeletons were of horses over .
Stallions were often used as destriers due to their natural aggression. However, there may have been some use of mares by European warriors, and mares, which were quieter and less likely to call out and betray their position to the enemy, were the preferred war horse of the Moors, who invaded various parts of Southern Europe from 700 AD through the 15th century. Geldings were used in war by the Teutonic Knights and were known as "monk horses". One advantage was that, if captured by the enemy, they could not be used to improve local bloodstock, thus maintaining the Knights' superiority in horseflesh.
Uses
The heavy cavalry charge, while it could be effective, was not a common occurrence. Battles were rarely fought on land suitable for heavy cavalry. While mounted riders remained effective for initial attacks, by the end of the 14th century it was common for knights to dismount to fight, with their horses sent to the rear, kept ready for pursuit. Pitched battles were avoided if possible, with most offensive warfare in the early Middle Ages taking the form of sieges, and in the later Middle Ages of mounted raids called chevauchées, with lightly armed warriors on swift horses.
The war horse was also seen in hastiludes, martial war games such as the joust, which began in the 11th century both as sport and to provide training for battle. Specialised destriers were bred for the purpose, although the expense of keeping, training, and outfitting them kept the majority of the population from owning one. While some historians suggest that the tournament had become a theatrical event by the 15th and 16th centuries, others argue that jousting continued to help cavalry train for battle until the Thirty Years' War.
Transition
The decline of the armoured knight was probably linked to changing army structures and various economic factors, not to obsolescence caused by new technologies. However, some historians attribute the demise of the knight to the invention of gunpowder, or to the English longbow; some link the decline to both technologies. Others argue that these technologies actually contributed to the development of knights: plate armour was first developed to resist early medieval crossbow bolts, and the full harness worn by the early 15th century developed to resist longbow arrows. From the 14th century onwards, most plate was made from hardened steel, which resisted early musket ammunition. In addition, stronger designs did not make plate heavier; a full harness of musket-proof plate from the 17th century weighed , significantly less than 16th-century tournament armour. The move to predominantly infantry-based battles from 1300 to 1550 was linked to both improved infantry tactics and changes in weaponry. By the 16th century, the concept of a combined-arms professional army had spread throughout Europe. Professional armies emphasized training and were paid via contracts, a change from the ransom and pillaging that had reimbursed knights in the past. When coupled with the rising costs of outfitting and maintaining armour and horses, the traditional knightly classes began to abandon their profession. Light horses, or prickers, were still used for scouting and reconnaissance; they also provided a defensive screen for marching armies. Large teams of draught horses or oxen pulled the heavy early cannon. Other horses pulled wagons and carried supplies for the armies.
Early modern period
During the early modern period the shift continued from heavy cavalry and the armoured knight to unarmoured light cavalry, including hussars and chasseurs à cheval. Light cavalry facilitated better communication, using fast, agile horses to move quickly across battlefields. The ratio of footmen to horsemen also increased over the period as infantry weapons improved and footmen became more mobile and versatile, particularly once the musket bayonet replaced the more cumbersome pike. During the Elizabethan era, mounted units included cuirassiers, heavily armoured and equipped with lances; light cavalry, who wore mail and bore light lances and pistols; and "petronels", who carried an early carbine. As heavy cavalry use declined, armour was increasingly abandoned and dragoons, whose horses were rarely used in combat, became more common: mounted infantry provided reconnaissance, escort and security. However, many generals still used the heavy mounted charge, from the late 17th and early 18th centuries, when sword-wielding wedge-formation shock troops penetrated enemy lines, to the early 19th century, when armoured heavy cuirassiers were employed. Light cavalry continued to play a major role, particularly after the Seven Years' War, when hussars started to play a larger part in battles. Though some leaders preferred tall horses for their mounted troops, this was as much for prestige as for increased shock ability, and many troops used more typical horses, averaging 15 hands.
Cavalry tactics altered, with fewer mounted charges, more reliance on drilled maneuvers at the trot, and use of firearms once within range. Ever more elaborate movements, such as wheeling and the caracole, were developed to facilitate the use of firearms from horseback. These tactics were not greatly successful in battle, since pikemen protected by musketeers could deny cavalry room to manoeuvre. However, the advanced equestrianism required survives into the modern world as dressage. While restricted, cavalry was not rendered obsolete. As infantry formations developed in tactics and skills, artillery became essential to break formations; in turn, cavalry was required both to combat enemy artillery, which was susceptible to cavalry while deploying, and to charge enemy infantry formations broken by artillery fire. Thus, successful warfare depended on a balance of the three arms: cavalry, artillery and infantry. As regimental structures developed, many units selected horses of uniform type, and some, such as the Royal Scots Greys, even specified colour. Trumpeters often rode distinctive horses so that they stood out. Regional armies developed type preferences, such as British hunters, Hanoverians in central Europe, and the steppe ponies of the Cossacks, but once in the field, the lack of supplies typical of wartime meant that horses of all types were used. Since horses were such a vital component of most armies in early modern Europe, many instituted state stud farms to breed horses for the military. However, in wartime, supply rarely matched demand, resulting in some cavalry troops fighting on foot.
19th century
In the 19th century, distinctions between heavy and light cavalry became less significant; by the end of the Peninsular War, heavy cavalry were performing the scouting and outpost duties previously undertaken by light cavalry, and by the end of the 19th century the roles had effectively merged.
Most armies at the time preferred cavalry horses to stand and weigh , although cuirassiers frequently had heavier horses. Lighter horses were used for scouting and raiding. Cavalry horses were generally obtained at 5 years of age and were in service from 10 to 12 years, barring loss. However, losses of 30–40% were common during a campaign, due to the conditions of the march as well as enemy action. Mares and geldings were preferred over less easily managed stallions.
During the French Revolutionary Wars and the Napoleonic Wars, the cavalry's main offensive role was as shock troops. In defence, cavalry were used to attack and harass the flanks of the enemy's infantry as they advanced. Cavalry were frequently used prior to an infantry assault, to force an infantry line to break and reform into formations vulnerable to infantry or artillery. Infantry frequently followed behind to secure any ground won, or the cavalry could be used to break up enemy lines following a successful infantry action. Mounted charges were carefully managed. A charge's maximum speed was 20 km/h; moving faster resulted in broken formation and fatigued horses. Charges occurred across clear, rising ground and were effective against infantry both on the march and when deployed in a line or column. A foot battalion formed in line was vulnerable to cavalry and could be broken or destroyed by a well-formed charge.
Traditional cavalry functions altered by the end of the 19th century. Many cavalry units transferred in title and role to "mounted rifles": troops trained to fight on foot, but retaining mounts for rapid deployment, as well as for patrols, scouting, communications, and defensive screening. These troops differed from mounted infantry, who used horses for transport but did not perform the old cavalry roles of reconnaissance and support.
Sub-Saharan Africa
Horses were used for warfare in the central Sudan from the 9th century, where they were considered "the most precious commodity following the slave." The first conclusive evidence of horses playing a major role in the warfare of West Africa dates to the 11th century, when the region was controlled by the Almoravids, a Muslim Berber dynasty. During the 13th and 14th centuries, cavalry became an important factor in the area. This coincided with the introduction of larger breeds of horse and the widespread adoption of saddles and stirrups. Increased mobility played a part in the formation of new power centers, such as the Oyo Empire in what today is Nigeria. The authority of many African Islamic states, such as the Bornu Empire, also rested in large part on their ability to subjugate neighboring peoples with cavalry. Despite harsh climatic conditions, endemic diseases such as trypanosomiasis and African horse sickness, and unsuitable terrain that limited the effectiveness of horses in many parts of Africa, horses were continuously imported and were, in some areas, a vital instrument of war. The introduction of horses also intensified existing conflicts, such as those between the Herero and Nama people in Namibia during the 19th century. The African slave trade was closely tied to imports of war horses, and as the prevalence of slaving decreased, fewer horses were needed for raiding, which significantly decreased the amount of mounted warfare seen in West Africa. By the time of the Scramble for Africa and the introduction of modern firearms in the 1880s, the use of horses in African warfare had lost most of its effectiveness.
Nonetheless, in South Africa during the Second Boer War (1899–1902), cavalry and other mounted troops were the major combat force for the British, since the horse-mounted Boers moved too quickly for infantry to engage. The Boers presented a mobile and innovative approach to warfare, drawing on strategies that had first appeared in the American Civil War. The terrain was not well suited to the British horses, resulting in the loss of over 300,000 animals. As the campaign wore on, losses were replaced by more durable African Basuto ponies and by Waler horses from Australia.
The Americas
The horse had been extinct in the Western Hemisphere for approximately 10,000 years prior to the arrival of the Spanish Conquistadors in the early 16th century. Consequently, the Indigenous peoples of the Americas had no warfare technologies that could overcome the considerable advantage provided by European horses and gunpowder weapons. In particular, this resulted in the conquest of the Aztec and Inca empires. The speed and increased impact of cavalry contributed to a number of early victories by European fighters in open terrain, though their success was limited in more mountainous regions. The Incas' well-maintained roads in the Andes enabled quick mounted raids, such as those undertaken by the Spanish while resisting the siege of Cuzco in 1536–37.
Indigenous populations of South America soon learned to use horses. In Chile, the Mapuche began using cavalry in the Arauco War in 1586. They drove the Spanish out of Araucanía at the beginning of the 17th century. Later, the Mapuche conducted mounted raids known as malones, first on Spanish, then on Chilean and Argentine settlements, until well into the 19th century. In North America, Native Americans also quickly learned to use horses. In particular, the people of the Great Plains, such as the Comanche and the Cheyenne, became renowned horseback fighters, and by the 19th century they presented a formidable force against the United States Army.
During the American Revolutionary War (1775–1783), the Continental Army made relatively little use of cavalry, relying primarily on infantry and a few dragoon regiments. The United States Congress eventually authorized regiments specifically designated as cavalry in 1855. The newly formed American cavalry adopted tactics based on experience fighting over vast distances during the Mexican War (1846–1848) and against indigenous peoples on the western frontier, abandoning some European traditions. During the American Civil War (1861–1865), cavalry held the most important and respected role it would ever hold in the American military. Field artillery in the American Civil War was also highly mobile; both horses and mules pulled the guns, though only horses were used on the battlefield. At the beginning of the war, most of the experienced cavalry officers were from the South and thus joined the Confederacy, giving the Confederate Army its initial battlefield superiority. The tide turned at the 1863 Battle of Brandy Station, part of the Gettysburg campaign, where the Union cavalry, in the largest cavalry battle ever fought on the American continent, ended the dominance of the South. By 1865, Union cavalry were decisive in achieving victory. So important were horses to individual soldiers that the surrender terms at Appomattox allowed every Confederate cavalryman to take his horse home with him.
This was because, unlike their Union counterparts, Confederate cavalrymen had provided their own horses for service instead of drawing them from the government.
20th century
Although cavalry was used extensively throughout the world during the 19th century, horses became less important in warfare at the beginning of the 20th century. Light cavalry was still seen on the battlefield, but formal mounted cavalry began to be phased out of combat during and immediately after World War I, although units that included horses still had military uses well into World War II.
World War I
World War I saw great changes in the use of cavalry. The mode of warfare changed, and the use of trench warfare, barbed wire and machine guns rendered traditional cavalry almost obsolete. Tanks, introduced in 1917, began to take over the role of shock combat. Early in the war, cavalry skirmishes were common, and horse-mounted troops were widely used for reconnaissance. On the Western Front, cavalry were an effective flanking force during the "Race to the Sea" in 1914, but were less useful once trench warfare was established. There were a few examples of successful shock combat, and cavalry divisions also provided important mobile firepower. Cavalry played a greater role on the Eastern Front, where trench warfare was less common. On the Eastern Front, and also against the Ottomans, the "cavalry was literally indispensable." British Empire cavalry proved adaptable, since they were trained to fight both on foot and mounted, while other European cavalry relied primarily on shock action. On both fronts, the horse was also used as a pack animal. Because railway lines could not withstand artillery bombardments, horses carried ammunition and supplies between the railheads and the rear trenches, though they generally were not used in the actual trench zone. This role of horses was critical, and horse fodder was thus the single largest commodity shipped to the front by some countries. Following the war, many cavalry regiments were converted to mechanised, armoured divisions, with light tanks developed to perform many of the cavalry's original roles.
World War II
Several nations used horse units during World War II. The Polish army used mounted infantry to defend against the armies of Nazi Germany during the 1939 invasion. Both Germany and the Soviet Union maintained cavalry units throughout the war, particularly on the Eastern Front. The British Army used horses early in the war, and the final British cavalry charge took place on March 21, 1942, when the Burma Frontier Force encountered Japanese infantry in central Burma. The only American cavalry unit during World War II was the 26th Cavalry. They challenged the Japanese invaders of Luzon, holding off armoured and infantry regiments during the invasion of the Philippines, repelled a unit of tanks in Binalonan, and successfully held ground for the Allied armies' retreat to Bataan. Throughout the war, horses and mules were an essential form of transport, especially for the British in the rough terrain of Southern Europe and the Middle East. The United States Army utilised a few cavalry and supply units during the war, though there were concerns that the Americans did not use horses often enough. In the campaigns in North Africa, generals such as George S. Patton lamented their lack, saying, "had we possessed an American cavalry division with pack artillery in Tunisia and in Sicily, not a German would have escaped."
The German and Soviet armies used horses until the end of the war for the transportation of troops and supplies. The German Army, strapped for motorised transport because its factories were needed to produce tanks and aircraft, used around 2.75 million horses, more than it had used in World War I. One German infantry division in Normandy in 1944 had 5,000 horses. The Soviets used 3.5 million horses.
Recognition
While many statues and memorials have been erected to human heroes of war, often shown with horses, a few have also been created specifically to honor horses or animals in general. One example is the Horse Memorial in Port Elizabeth in the Eastern Cape province of South Africa. Both horses and mules are honored in the Animals in War Memorial in London's Hyde Park. Horses have also at times received medals for extraordinary deeds. After the Charge of the Light Brigade during the Crimean War, a surviving horse named Drummer Boy, ridden by an officer of the 8th Hussars, was given an unofficial campaign medal by his rider; identical to those awarded to British troops who served in the Crimea, it was engraved with the horse's name and an inscription of his service. A more formal award was the PDSA Dickin Medal, an animals' equivalent of the Victoria Cross, awarded by the People's Dispensary for Sick Animals charity in the United Kingdom to three horses that served in World War II.
Modern uses
Today, many of the historical military uses of the horse have evolved into peacetime applications, including exhibitions, historical reenactments, the work of peace officers, and competitive events. Formal combat units of mounted cavalry are mostly a thing of the past, with horseback units within the modern military used for reconnaissance, ceremonial, or crowd control purposes. With the rise of mechanised technology, horses in formal national militias were displaced by tanks and armored fighting vehicles, often still referred to as "cavalry".
Active military
Organised armed fighters on horseback are occasionally seen. The best-known current examples are the Janjaweed, militia groups seen in the Darfur region of Sudan, who became notorious for their attacks upon unarmed civilian populations in the Darfur conflict. Many nations still maintain small numbers of mounted military units for certain types of patrol and reconnaissance duties in extremely rugged terrain, including the conflict in Afghanistan. At the beginning of Operation Enduring Freedom, Operational Detachment Alpha 595 teams were covertly inserted into Afghanistan on October 19, 2001. Horses were the only suitable method of transport in the difficult mountainous terrain of Northern Afghanistan. They were the first U.S. soldiers to ride horses into battle since January 16, 1942, when the U.S. Army's 26th Cavalry Regiment charged an advanced guard of the 14th Japanese Army as it advanced from Manila. The only remaining operationally ready, fully horse-mounted regular regiment in the world is the Indian Army's 61st Cavalry.
Law enforcement and public safety
Mounted police have been used since the 18th century, and are still used worldwide to control traffic and crowds, patrol public parks, keep order in processions and during ceremonies, and perform general street patrol duties. Today, many cities still have mounted police units. In rural areas, horses are used by law enforcement for mounted patrols over rugged terrain, crowd control at religious shrines, and border patrol.
In rural areas, law enforcement agencies that operate outside incorporated cities may also have mounted units. These include specially deputised, paid or volunteer mounted search and rescue units sent into roadless areas on horseback to locate missing people. Law enforcement in protected areas may use horses where mechanised transport is difficult or prohibited. Horses can be an essential part of an overall team effort: they can move faster on the ground than a human on foot, can transport heavy equipment, and deliver a more rested rescue worker to the subject once found.
Ceremonial and educational uses
Many countries throughout the world maintain traditionally trained and historically uniformed cavalry units for ceremonial, exhibition, or educational purposes. One example is the Horse Cavalry Detachment of the U.S. Army's 1st Cavalry Division. This unit of active-duty soldiers approximates the weapons, tools, equipment and techniques used by the United States Cavalry in the 1880s. It is seen at change-of-command ceremonies and other public appearances. A similar detachment is the Governor General's Horse Guards, Canada's Household Cavalry regiment, the last remaining mounted cavalry unit in the Canadian Forces. Nepal's King's Household Cavalry is a ceremonial unit with over 100 horses and is the remainder of the Nepalese cavalry that has existed since the 19th century. An important ceremonial use is in military funerals, which often include a caparisoned horse in the procession, "to symbolize that the warrior will never ride again". Horses are also used in many historical reenactments, in which reenactors try to recreate the conditions of a battle or tournament with equipment that is as authentic as possible.
Equestrian sport
Modern-day Olympic equestrian events are rooted in cavalry skills and classical horsemanship. The first equestrian events at the Olympics were introduced in 1912, and through 1948 competition was restricted to active-duty officers on military horses. Only after 1952, as the mechanisation of warfare reduced the number of military riders, were civilian riders allowed to compete. Dressage traces its origins to Xenophon and his works on cavalry training methods, developing further during the Renaissance in response to a need for different tactics in battles where firearms were used. The three-phase competition known as eventing developed out of cavalry officers' need for versatile, well-schooled horses. Though show jumping developed largely from fox hunting, the cavalry considered jumping good training for their horses, and leaders in the development of modern riding techniques over fences, such as Federico Caprilli, came from military ranks. Beyond the Olympic disciplines are other events with military roots: competitions with weapons, such as mounted shooting and tent pegging, test the combat skills of mounted riders.
Signature (logic)
In logic, especially mathematical logic, a signature lists and describes the non-logical symbols of a formal language. In universal algebra, a signature lists the operations that characterize an algebraic structure. In model theory, signatures are used for both purposes. They are rarely made explicit in more philosophical treatments of logic.
Definition
Formally, a (single-sorted) signature can be defined as a 4-tuple $\sigma = (S_{\mathrm{func}}, S_{\mathrm{rel}}, S_{\mathrm{const}}, \operatorname{ar})$, where $S_{\mathrm{func}}$, $S_{\mathrm{rel}}$ and $S_{\mathrm{const}}$ are disjoint sets not containing any other basic logical symbols, called respectively function symbols (examples: $+$, $\times$), relation symbols or predicates (examples: $\leq$, $\in$), and constant symbols (examples: $0$, $1$), and where $\operatorname{ar}\colon S_{\mathrm{func}} \cup S_{\mathrm{rel}} \to \mathbb{N}$ is a function which assigns a natural number called arity to every function or relation symbol. A function or relation symbol is called $n$-ary if its arity is $n$. Some authors define a nullary ($0$-ary) function symbol as a constant symbol; otherwise constant symbols are defined separately.
A signature with no function symbols is called a relational signature, and a signature with no relation symbols is called an algebraic signature. A finite signature is a signature such that $S_{\mathrm{func}}$ and $S_{\mathrm{rel}}$ are finite. More generally, the cardinality of a signature is defined as $|\sigma| = |S_{\mathrm{func}}| + |S_{\mathrm{rel}}| + |S_{\mathrm{const}}|$.
The language of a signature is the set of all well-formed sentences built from the symbols in that signature together with the symbols in the logical system.
Other conventions
In universal algebra the word type or similarity type is often used as a synonym for "signature". In model theory, a signature $\sigma$ is often called a vocabulary, or identified with the (first-order) language $L$ to which it provides the non-logical symbols. However, the cardinality of the language $L$ will always be infinite; if $\sigma$ is finite then $|L|$ will be $\aleph_0$.
As the formal definition is inconvenient for everyday use, the definition of a specific signature is often abbreviated in an informal way, as in: "The standard signature for abelian groups is $\sigma = (+, -, 0)$, where $-$ is a unary operator." Sometimes an algebraic signature is regarded as just a list of arities, as in: "The similarity type for abelian groups is $\tau = (2, 1, 0)$." Formally this would define the function symbols of the signature as something like $f_0$ (which is binary), $f_1$ (which is unary) and $f_2$ (which is nullary), but in reality the usual names are used even in connection with this convention.
In mathematical logic, very often symbols are not allowed to be nullary, so that constant symbols must be treated separately rather than as nullary function symbols. They form a set $S_{\mathrm{const}}$ disjoint from $S_{\mathrm{func}}$, on which the arity function $\operatorname{ar}$ is not defined. However, this only serves to complicate matters, especially in proofs by induction over the structure of a formula, where an additional case must be considered. Any nullary relation symbol, which is also not allowed under such a definition, can be emulated by a unary relation symbol together with a sentence expressing that its value is the same for all elements. This translation fails only for empty structures (which are often excluded by convention). If nullary symbols are allowed, then every formula of propositional logic is also a formula of first-order logic.
An example for an infinite signature uses $S_{\mathrm{func}} = \{+\} \cup \{f_a : a \in F\}$ and $S_{\mathrm{rel}} = \{=\}$ to formalize expressions and equations about a vector space over an infinite scalar field $F$, where each $f_a$ denotes the unary operation of scalar multiplication by $a$. This way, the signature and the logic can be kept single-sorted, with vectors being the only sort.
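To make the definition concrete, here is a minimal Python sketch of a single-sorted algebraic signature and a well-formedness check for terms over it, using the abelian-group signature above; the class and function names are illustrative choices, not standard notation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Signature:
    """A single-sorted algebraic signature: function symbols with arities.
    Constant symbols are modelled as nullary function symbols here."""
    arity: dict[str, int] = field(default_factory=dict)

# The standard signature for abelian groups: binary +, unary -, nullary 0.
ABELIAN_GROUPS = Signature({"+": 2, "-": 1, "0": 0})

def is_term(sig: Signature, t, variables: set[str]) -> bool:
    """Check that t is a well-formed term: a variable, or a tuple
    (symbol, arg1, ..., argn) whose symbol has arity n in sig."""
    if isinstance(t, str):
        return t in variables
    head, *args = t
    return (sig.arity.get(head) == len(args)
            and all(is_term(sig, a, variables) for a in args))

# x + (-0) is a term over the abelian-group signature:
print(is_term(ABELIAN_GROUPS, ("+", "x", ("-", ("0",))), {"x", "y"}))  # True
```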
Use of signatures in logic and algebra
In the context of first-order logic, the symbols in a signature are also known as the non-logical symbols, because together with the logical symbols they form the underlying alphabet over which two formal languages are inductively defined: the set of terms over the signature and the set of (well-formed) formulas over the signature.
In a structure, an interpretation ties the function and relation symbols to mathematical objects that justify their names: the interpretation of an $n$-ary function symbol $f$ in a structure $\mathbf{A}$ with domain $A$ is a function $f^{\mathbf{A}}\colon A^n \to A$, and the interpretation of an $n$-ary relation symbol $R$ is a relation $R^{\mathbf{A}} \subseteq A^n$. Here $A^n = A \times A \times \cdots \times A$ denotes the $n$-fold cartesian product of the domain $A$ with itself, and so $f^{\mathbf{A}}$ is in fact an $n$-ary function, and $R^{\mathbf{A}}$ an $n$-ary relation.
Many-sorted signatures
For many-sorted logic and for many-sorted structures, signatures must encode information about the sorts. The most straightforward way of doing this is via symbol types that play the role of generalized arities.
Symbol types
Let $S$ be a set (of sorts) not containing the symbols $\times$ or $\to$. The symbol types over $S$ are certain words over the alphabet $S \cup \{\times, \to\}$: the relational symbol types $s_1 \times \cdots \times s_n$, and the functional symbol types $s_1 \times \cdots \times s_n \to s'$, for non-negative integers $n$ and $s_1, \ldots, s_n, s' \in S$. (For $n = 0$, the expression $s_1 \times \cdots \times s_n$ denotes the empty word.)
Signature
A (many-sorted) signature is a triple $\sigma = (S, P, \operatorname{type})$ consisting of a set $S$ of sorts, a set $P$ of symbols, and a map $\operatorname{type}$ which associates to every symbol in $P$ a symbol type over $S$.
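A many-sorted symbol type can likewise be written down mechanically. The sketch below, in the same illustrative Python style as before, encodes relational and functional symbol types as tuples of sorts; the names (SymbolType, the sample two-sorted signature) are ad hoc assumptions, not standard notation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SymbolType:
    """A generalized arity: argument sorts, plus a result sort for
    functional symbol types (None marks a relational symbol type)."""
    args: tuple[str, ...]
    result: str | None = None

    def __str__(self) -> str:
        word = " x ".join(self.args)  # the empty word when args == ()
        return word if self.result is None else f"{word} -> {self.result}"

# A two-sorted signature for vector spaces: sorts 'scalar' and 'vector'.
many_sorted = {
    "+v": SymbolType(("vector", "vector"), "vector"),  # vector addition
    "*":  SymbolType(("scalar", "vector"), "vector"),  # scalar multiplication
    "0v": SymbolType((), "vector"),                    # constant: empty word -> vector
    "~":  SymbolType(("vector", "vector")),            # a relational symbol type
}

for name, t in many_sorted.items():
    print(f"{name}: {t}")
```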
Geologic record
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata: that is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands, etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column that may be intruded by igneous rocks and disrupted by tectonic events. Correlating the rock record At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is in no one place entirely complete, for geologic forces that in one age provide a low-lying region accumulating deposits much like a layer cake may in the next age uplift the region, so that the same area instead weathers and is torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be and quite often is interrupted as the ancient local environment is converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep, thoroughly support the law of superposition. However, using broadly occurring deposited layers trapped within rock columns at different locations, geologists have pieced together a system of units covering most of the geologic time scale using the law of superposition: where tectonic forces have uplifted one ridge, folding and faulting the strata and newly subjecting it to erosion and weathering, they have also created a nearby trough or structural basin that lies at a relatively lower elevation and can accumulate additional deposits. By comparing overall formations, geologic structures and local strata, calibrated by those layers which are widespread, a nearly complete geologic record has been constructed since the 17th century. Discordant strata example Correcting for discordancies can be done in a number of ways, utilizing a number of technologies and field research results from studies in other disciplines. In this example, the study of layered rocks and the fossils they contain is called biostratigraphy and utilizes amassed geobiology and paleobiological knowledge. Fossils can be used to recognize rock layers of the same or different geologic ages, thereby coordinating locally occurring geologic stages to the overall geologic timeline. The pictures of the fossils of monocellular algae in this USGS figure were taken with a scanning electron microscope and have been magnified 250 times. In the U.S. state of South Carolina three marker species of fossil algae are found in a core of rock, whereas in Virginia only two of the three species are found in the Eocene Series of rock layers spanning three stages and the geologic ages from 37.2–55.8 Ma.
Comparing the discordant portion of the record to the full rock column shows the non-occurrence of the missing species there, indicating that part of the local rock record, from the early part of the middle Eocene, is missing. This is one form of discordancy and one of the means geologists use to compensate for local variations in the rock record. With the two remaining marker species it is possible to correlate rock layers of the same age (early Eocene and the latter part of the middle Eocene) in both South Carolina and Virginia, and thereby "calibrate" the local rock column into its proper place in the overall geologic record. Lithology vs paleontology Consequently, as the picture of the overall rock record emerged, and discontinuities and similarities in one place were cross-correlated to those in others, it became useful to subdivide the overall geologic record into a series of component sub-sections representing different sized groups of layers within known geologic time, from the stage, with the shortest time span, to the eonothem, the largest and thickest strata, corresponding to the longest time span, the eon. Concurrent work in other natural science fields required a time continuum be defined, and earth scientists decided to coordinate the system of rock layers and their identification criteria with that of the geologic time scale. This gives a pairing between each system of physical rock layers and the corresponding unit of geologic time.
Physical sciences
Geology: General
Earth science
91100
https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley%20experiment
Michelson–Morley experiment
The Michelson–Morley experiment was an attempt to measure the motion of the Earth relative to the luminiferous aether, a supposed medium permeating space that was thought to be the carrier of light waves. The experiment was performed between April and July 1887 by American physicists Albert A. Michelson and Edward W. Morley at what is now Case Western Reserve University in Cleveland, Ohio, and published in November of the same year. The experiment compared the speed of light in perpendicular directions in an attempt to detect the relative motion of matter, including their laboratory, through the luminiferous aether, or "aether wind" as it was sometimes called. The result was negative, in that Michelson and Morley found no significant difference between the speed of light in the direction of movement through the presumed aether, and the speed at right angles. This result is generally considered to be the first strong evidence against some aether theories, as well as initiating a line of research that eventually led to special relativity, which rules out motion against an aether. Of this experiment, Albert Einstein wrote, "If the Michelson–Morley experiment had not brought us into serious embarrassment, no one would have regarded the relativity theory as a (halfway) redemption." Michelson–Morley type experiments have been repeated many times with steadily increasing sensitivity. These include experiments from 1902 to 1905, and a series of experiments in the 1920s. More recently, in 2009, optical resonator experiments confirmed the absence of any aether wind at the 10^−17 level. Together with the Ives–Stilwell and Kennedy–Thorndike experiments, Michelson–Morley type experiments form one of the fundamental tests of special relativity. Detecting the aether Physics theories of the 19th century assumed that just as surface water waves must have a supporting substance, i.e., a "medium", to move across (in this case water), and audible sound requires a medium to transmit its wave motions (such as air or water), so light must also require a medium, the "luminiferous aether", to transmit its wave motions. Because light can travel through a vacuum, it was assumed that even a vacuum must be filled with aether. Because the speed of light is so great, and because material bodies pass through the aether without obvious friction or drag, it was assumed to have a highly unusual combination of properties. Designing experiments to investigate these properties was a high priority of 19th-century physics. Earth orbits around the Sun at a speed of around 30 km/s, or 108,000 km/h. The Earth is in motion, so two main possibilities were considered: (1) The aether is stationary and only partially dragged by Earth (proposed by Augustin-Jean Fresnel in 1818), or (2) the aether is completely dragged by Earth and thus shares its motion at Earth's surface (proposed by Sir George Stokes, 1st Baronet in 1844). In addition, James Clerk Maxwell (1865) recognized the electromagnetic nature of light and developed what are now called Maxwell's equations, but these equations were still interpreted as describing the motion of waves through an aether, whose state of motion was unknown. Eventually, Fresnel's idea of an (almost) stationary aether was preferred because it appeared to be confirmed by the Fizeau experiment (1851) and the aberration of star light. According to the stationary and the partially dragged aether hypotheses, Earth and the aether are in relative motion, implying that a so-called "aether wind" (Fig. 2) should exist.
Although it would be theoretically possible for the Earth's motion to match that of the aether at one moment in time, it was not possible for the Earth to remain at rest with respect to the aether at all times, because of the variation in both the direction and the speed of the motion. At any given point on the Earth's surface, the magnitude and direction of the wind would vary with time of day and season. By analyzing the return speed of light in different directions at various different times, it was thought to be possible to measure the motion of the Earth relative to the aether. The expected relative difference in the measured speed of light was quite small, given that the velocity of the Earth in its orbit around the Sun has a magnitude of about one hundredth of one percent of the speed of light. During the mid-19th century, measurements of aether wind effects of first order, i.e., effects proportional to v/c (v being Earth's velocity, c the speed of light) were thought to be possible, but no direct measurement of the speed of light was possible with the accuracy required. For instance, the Fizeau wheel could measure the speed of light to perhaps 5% accuracy, which was quite inadequate for measuring directly a first-order 0.01% change in the speed of light. A number of physicists therefore attempted to make measurements of indirect first-order effects not of the speed of light itself, but of variations in the speed of light (see First order aether-drift experiments). The Hoek experiment, for example, was intended to detect interferometric fringe shifts due to speed differences of oppositely propagating light waves through water at rest. The results of such experiments were all negative. This could be explained by using Fresnel's dragging coefficient, according to which the aether and thus light are partially dragged by moving matter. Partial aether-dragging would thwart attempts to measure any first order change in the speed of light. As pointed out by Maxwell (1878), only experimental arrangements capable of measuring second order effects would have any hope of detecting aether drift, i.e., effects proportional to v2/c2. Existing experimental setups, however, were not sensitive enough to measure effects of that size. 1881 and 1887 experiments Michelson experiment (1881) Michelson had a solution to the problem of how to construct a device sufficiently accurate to detect aether flow. In 1877, while teaching at his alma mater, the United States Naval Academy in Annapolis, Michelson conducted his first known light speed experiments as a part of a classroom demonstration. In 1881, he left active U.S. Naval service while in Germany concluding his studies. In that year, Michelson used a prototype experimental device to make several more measurements. The device he designed, later known as a Michelson interferometer, sent yellow light from a sodium flame (for alignment), or white light (for the actual observations), through a half-silvered mirror that was used to split it into two beams traveling at right angles to one another. After leaving the splitter, the beams traveled out to the ends of long arms where they were reflected back into the middle by small mirrors. They then recombined on the far side of the splitter in an eyepiece, producing a pattern of constructive and destructive interference whose transverse displacement would depend on the relative time it takes light to transit the longitudinal vs. the transverse arms. 
If the Earth is traveling through an aether medium, a light beam traveling parallel to the flow of that aether will take longer to reflect back and forth than would a beam traveling perpendicular to the aether, because the increase in elapsed time from traveling against the aether wind is more than the time saved by traveling with the aether wind. Michelson expected that the Earth's motion would produce a fringe shift equal to 0.04 fringes—that is, 4% of the separation between areas of the same intensity. He did not observe the expected shift; the greatest average deviation that he measured (in the northwest direction) was only 0.018 fringes; most of his measurements were much less. His conclusion was that Fresnel's hypothesis of a stationary aether with partial aether dragging would have to be rejected, and thus he confirmed Stokes' hypothesis of complete aether dragging. However, Alfred Potier (and later Hendrik Lorentz) pointed out to Michelson that he had made an error of calculation, and that the expected fringe shift should have been only 0.02 fringes. Michelson's apparatus was subject to experimental errors far too large to say anything conclusive about the aether wind. Definitive measurement of the aether wind would require an experiment with greater accuracy and better controls than the original. Nevertheless, the prototype was successful in demonstrating that the basic method was feasible. Michelson–Morley experiment (1887) In 1885, Michelson began a collaboration with Edward Morley, spending considerable time and money to confirm with higher accuracy Fizeau's 1851 experiment on Fresnel's drag coefficient, to improve on Michelson's 1881 experiment, and to establish the wavelength of light as a standard of length. At this time Michelson was professor of physics at the Case School of Applied Science, and Morley was professor of chemistry at Western Reserve University (WRU), which shared a campus with the Case School on the eastern edge of Cleveland. Michelson suffered a mental health crisis in September 1885, from which he recovered by October 1885. Morley ascribed this breakdown to the intense work of Michelson during the preparation of the experiments. In 1886, Michelson and Morley successfully confirmed Fresnel's drag coefficient – this result was also considered as a confirmation of the stationary aether concept. This result strengthened their hope of finding the aether wind. Michelson and Morley created an improved version of the Michelson experiment with more than enough accuracy to detect this hypothetical effect. The experiment was performed in several periods of concentrated observations between April and July 1887, in the basement of Adelbert Dormitory of WRU (later renamed Pierce Hall, demolished in 1962). As shown in the diagram to the right, the light was repeatedly reflected back and forth along the arms of the interferometer, increasing the path length to 11 m (36 ft). At this length, the drift would be about 0.4 fringes. To make that easily detectable, the apparatus was assembled in a closed room in the basement of the heavy stone dormitory, eliminating most thermal and vibrational effects. Vibrations were further reduced by building the apparatus on top of a large block of sandstone (Fig. 1), about a foot thick and five feet (1.5 m) square, which was then floated in a circular trough of mercury. They estimated that effects of about 0.01 fringe would be detectable.
Michelson and Morley, and other early experimentalists using interferometric techniques in an attempt to measure the properties of the luminiferous aether, used (partially) monochromatic light only for initially setting up their equipment, always switching to white light for the actual measurements. The reason is that measurements were recorded visually. Purely monochromatic light would result in a uniform fringe pattern. Lacking modern means of environmental temperature control, experimentalists struggled with continual fringe drift even when the interferometer was set up in a basement. Because the fringes would occasionally disappear due to vibrations caused by passing horse traffic, distant thunderstorms and the like, an observer could easily "get lost" when the fringes returned to visibility. The advantages of white light, which produced a distinctive colored fringe pattern, far outweighed the difficulties of aligning the apparatus due to its low coherence length. As Dayton Miller wrote, "White light fringes were chosen for the observations because they consist of a small group of fringes having a central, sharply defined black fringe which forms a permanent zero reference mark for all readings." Use of partially monochromatic light (yellow sodium light) during initial alignment enabled the researchers to locate the position of equal path length, more or less easily, before switching to white light. The mercury trough allowed the device to turn with close to zero friction, so that once having given the sandstone block a single push it would slowly rotate through the entire range of possible angles to the "aether wind", while measurements were continuously observed by looking through the eyepiece. The hypothesis of aether drift implies that because one of the arms would inevitably turn into the direction of the wind at the same time that another arm was turning perpendicularly to the wind, an effect should be noticeable even over a period of minutes. The expectation was that the effect would be graphable as a sine wave with two peaks and two troughs per rotation of the device. This result could have been expected because during each full rotation, each arm would be parallel to the wind twice (facing into and away from the wind giving identical readings) and perpendicular to the wind twice. Additionally, due to the Earth's rotation, the wind would be expected to show periodic changes in direction and magnitude during the course of a sidereal day. Because of the motion of the Earth around the Sun, the measured data were also expected to show annual variations. Most famous "failed" experiment After all this thought and preparation, the experiment became what has been called the most famous failed experiment in history. Instead of providing insight into the properties of the aether, Michelson and Morley's article in the American Journal of Science reported the measurement to be as small as one-fortieth of the expected displacement (Fig. 7), but "since the displacement is proportional to the square of the velocity" they concluded that the measured velocity was "probably less than one-sixth" of the expected velocity of the Earth's motion in orbit and "certainly less than one-fourth". Although this small "velocity" was measured, it was considered far too small to be used as evidence of speed relative to the aether, and it was understood to be within the range of an experimental error that would allow the speed to actually be zero.
For instance, Michelson wrote about the "decidedly negative result" in a letter to Lord Rayleigh in August 1887. From the standpoint of the then current aether models, the experimental results were conflicting. The Fizeau experiment and its 1886 repetition by Michelson and Morley apparently confirmed the stationary aether with partial aether dragging, and refuted complete aether dragging. On the other hand, the much more precise Michelson–Morley experiment (1887) apparently confirmed complete aether dragging and refuted the stationary aether. In addition, the Michelson–Morley null result was further substantiated by the null results of other second-order experiments of a different kind, namely the Trouton–Noble experiment (1903) and the experiments of Rayleigh and Brace (1902–1904). These problems and their solution led to the development of the Lorentz transformation and special relativity. After the "failed" experiment, Michelson and Morley ceased their aether drift measurements and started to use their newly developed technique to establish the wavelength of light as a standard of length. Light path analysis and consequences Observer resting in the aether The beam travel time in the longitudinal direction can be derived as follows: Light is sent from the source and propagates with the speed of light c in the aether. It passes through the half-silvered mirror at the origin at T = 0. The reflecting mirror is at that moment at distance L (the length of the interferometer arm) and is moving with velocity v. The beam hits the mirror at time T1 and thus travels the distance cT1. At this time, the mirror has traveled the distance vT1. Thus cT1 = L + vT1 and consequently the travel time T1 = L/(c − v). The same consideration applies to the backward journey, with the sign of v reversed, resulting in cT2 = L − vT2 and T2 = L/(c + v). The total travel time is: Tℓ = T1 + T2 = (2L/c)·1/(1 − v²/c²). Michelson obtained this expression correctly in 1881; however, in the transverse direction he obtained the incorrect expression Tt = 2L/c, because he overlooked the increase in path length in the rest frame of the aether. This was corrected by Alfred Potier (1882) and Hendrik Lorentz (1886). The derivation in the transverse direction can be given as follows (analogous to the derivation of time dilation using a light clock): The beam is propagating at the speed of light c and hits the mirror at time T3, traveling the distance cT3. At the same time, the mirror has traveled the distance vT3 in the x direction. So in order to hit the mirror, the travel path of the beam is L in the y direction (assuming equal-length arms) and vT3 in the x direction. This inclined travel path follows from the transformation from the interferometer rest frame to the aether rest frame. Therefore, the Pythagorean theorem gives the actual beam travel distance of √(L² + (vT3)²). Thus cT3 = √(L² + (vT3)²) and consequently the travel time T3 = L/√(c² − v²), which is the same for the backward journey. The total travel time is: Tt = 2T3 = (2L/c)·1/√(1 − v²/c²). The time difference between Tℓ and Tt is given by ΔT = Tℓ − Tt = (2L/c)·[1/(1 − v²/c²) − 1/√(1 − v²/c²)]. To find the path difference, simply multiply by c: Δλ = 2L·[1/(1 − v²/c²) − 1/√(1 − v²/c²)]. The path difference is denoted by Δλ because the beams are out of phase by some number of wavelengths (λ). To visualise this, consider taking the two beam paths along the longitudinal and transverse plane, and lying them straight (an animation of this is shown at minute 11:00, The Mechanical Universe, episode 41). One path will be longer than the other; this distance is Δλ. Alternatively, consider the rearrangement of the speed of light formula c·ΔT = Δλ.
If the relation v²/c² ≪ 1 is true (if the velocity of the aether is small relative to the speed of light), then the expression can be simplified using a first order binomial expansion: (1 − x)^n ≈ 1 − nx. So, rewriting the above in terms of powers: Δλ = 2L·[(1 − v²/c²)^−1 − (1 − v²/c²)^−1/2]. Applying binomial simplification: Δλ ≈ 2L·[(1 + v²/c²) − (1 + v²/2c²)]. Therefore: Δλ ≈ L·v²/c². It can be seen from this derivation that aether wind manifests as a path difference. The path difference is zero only when the interferometer is aligned with or perpendicular to the aether wind, and it reaches a maximum when it is at a 45° angle. The path difference can be any fraction of the wavelength, depending on the angle and speed of the aether wind. To prove the existence of the aether, Michelson and Morley sought to find the "fringe shift". The idea was simple: the fringes of the interference pattern should shift when rotating it by 90°, as the two beams have exchanged roles. To find the fringe shift, subtract the path difference in the first orientation from the path difference in the second, then divide by the wavelength, λ, of light: n = (Δλ1 − Δλ2)/λ ≈ 2Lv²/(λc²). Note the difference between Δλ, which is some number of wavelengths, and λ, which is a single wavelength. As can be seen by this relation, fringe shift n is a unitless quantity. Since L ≈ 11 meters and λ ≈ 500 nanometers, the expected fringe shift was n ≈ 0.44. The negative result led Michelson to the conclusion that there is no measurable aether drift. However, he never accepted this on a personal level, and the negative result haunted him for the rest of his life. Observer comoving with the interferometer If the same situation is described from the view of an observer co-moving with the interferometer, then the effect of aether wind is similar to the effect experienced by a swimmer, who tries to move with velocity c against a river flowing with velocity v. In the longitudinal direction the swimmer first moves upstream, so his velocity is diminished due to the river flow to c − v. On his way back moving downstream, his velocity is increased to c + v. This gives the beam travel times T1 and T2 as mentioned above. In the transverse direction, the swimmer has to compensate for the river flow by moving at a certain angle against the flow direction, in order to sustain his exact transverse direction of motion and to reach the other side of the river at the correct location. This diminishes his speed to √(c² − v²), and gives the beam travel time T3 as mentioned above. Mirror reflection The classical analysis predicted a relative phase shift between the longitudinal and transverse beams which in Michelson and Morley's apparatus should have been readily measurable. What is not often appreciated (since there was no means of measuring it), is that motion through the hypothetical aether should also have caused the two beams to diverge as they emerged from the interferometer by about 10^−8 radians. For an apparatus in motion, the classical analysis requires that the beam-splitting mirror be slightly offset from an exact 45° if the longitudinal and transverse beams are to emerge from the apparatus exactly superimposed. In the relativistic analysis, Lorentz-contraction of the beam splitter in the direction of motion causes it to become more perpendicular by precisely the amount necessary to compensate for the angle discrepancy of the two beams.
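The arithmetic of the preceding derivation is easy to check numerically. A short Python sketch using the figures quoted in the text (L ≈ 11 m, λ ≈ 500 nm, v ≈ 30 km/s); the variable names are mine, not the article's:

```python
import math

c = 299_792_458.0   # speed of light in m/s
v = 3.0e4           # Earth's orbital speed, roughly 30 km/s
L = 11.0            # effective arm length in m
lam = 500e-9        # wavelength of light in m

# Classical round-trip times for the longitudinal and transverse arms
T_long = (2 * L / c) / (1 - v**2 / c**2)
T_trans = (2 * L / c) / math.sqrt(1 - v**2 / c**2)

# Path difference, approximately L * v^2 / c^2
delta = c * (T_long - T_trans)

# Rotating the apparatus by 90 degrees swaps the arms, so the expected
# fringe shift is twice the path difference divided by the wavelength.
n = 2 * delta / lam
print(f"expected fringe shift n = {n:.2f}")   # about 0.44
```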
Length contraction and Lorentz transformation A first step to explaining the Michelson and Morley experiment's null result was found in the FitzGerald–Lorentz contraction hypothesis, now simply called length contraction or Lorentz contraction, first proposed by George FitzGerald (1889) in a letter to the same journal that published the Michelson–Morley paper, as "almost the only hypothesis that can reconcile" the apparent contradictions. It was independently also proposed by Hendrik Lorentz (1892). According to this law all objects physically contract by the factor √(1 − v²/c²) along the line of motion (originally thought to be relative to the aether), γ = 1/√(1 − v²/c²) being the Lorentz factor. This hypothesis was partly motivated by Oliver Heaviside's discovery in 1888 that electrostatic fields are contracting in the line of motion. But since there was no reason at that time to assume that binding forces in matter are of electric origin, length contraction of matter in motion with respect to the aether was considered an ad hoc hypothesis. If length contraction of L√(1 − v²/c²) is inserted into the above formula for Tℓ, then the light propagation time in the longitudinal direction becomes equal to that in the transverse direction: Tℓ = (2L/c)·√(1 − v²/c²)/(1 − v²/c²) = (2L/c)·1/√(1 − v²/c²) = Tt. However, length contraction is only a special case of the more general relation, according to which the transverse length is larger than the longitudinal length by the ratio γ. This can be achieved in many ways. If L1 = φL/γ is the moving longitudinal length and L2 = φL the moving transverse length, L being the rest length, then the factor φ can be arbitrarily chosen, so there are infinitely many combinations to explain the Michelson–Morley null result. For instance, if φ = 1 the relativistic value of length contraction of L1 occurs, but if φ = 1/γ then no length contraction but an elongation of L2 occurs. This hypothesis was later extended by Joseph Larmor (1897), Lorentz (1904) and Henri Poincaré (1905), who developed the complete Lorentz transformation including time dilation in order to explain the Trouton–Noble experiment, the experiments of Rayleigh and Brace, and Kaufmann's experiments. It has the form x′ = γφ(x − vt), y′ = φy, z′ = φz, t′ = γφ(t − vx/c²). It remained to define the value of φ, which was shown by Lorentz (1904) to be unity. In general, Poincaré (1905) demonstrated that only φ = 1 allows this transformation to form a group, so it is the only choice compatible with the principle of relativity, i.e., making the stationary aether undetectable. Given this, length contraction and time dilation obtain their exact relativistic values. Special relativity Albert Einstein formulated the theory of special relativity by 1905, deriving the Lorentz transformation and thus length contraction and time dilation from the relativity postulate and the constancy of the speed of light, thus removing the ad hoc character from the contraction hypothesis. Einstein emphasized the kinematic foundation of the theory and the modification of the notion of space and time, with the stationary aether no longer playing any role in his theory. He also pointed out the group character of the transformation. Einstein was motivated by Maxwell's theory of electromagnetism (in the form as it was given by Lorentz in 1895) and the lack of evidence for the luminiferous aether. This allows a more elegant and intuitive explanation of the Michelson–Morley null result. In a comoving frame the null result is self-evident, since the apparatus can be considered as at rest in accordance with the relativity principle, thus the beam travel times are the same.
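A quick numerical check of the contraction hypothesis, reusing the classical travel-time formulas from the sketch above (again an illustration, not the article's own computation): contracting the longitudinal arm by 1/γ makes the two round-trip times agree.

```python
import math

c, v, L = 299_792_458.0, 3.0e4, 11.0
gamma = 1 / math.sqrt(1 - v**2 / c**2)

# Contract the longitudinal arm to L/gamma, as FitzGerald and Lorentz proposed;
# the longitudinal round-trip time then equals the transverse one.
T_long_contracted = (2 * (L / gamma) / c) / (1 - v**2 / c**2)
T_trans = (2 * L / c) / math.sqrt(1 - v**2 / c**2)

assert math.isclose(T_long_contracted, T_trans, rel_tol=1e-12)
print("no fringe shift expected once the longitudinal arm is contracted")
```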
In a frame relative to which the apparatus is moving, the same reasoning applies as described above in "Length contraction and Lorentz transformation", except the word "aether" has to be replaced by "non-comoving inertial frame". The extent to which the null result of the Michelson–Morley experiment influenced Einstein is disputed. Alluding to some statements of Einstein, many historians argue that it played no significant role in his path to special relativity, while other statements of Einstein probably suggest that he was influenced by it. In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance. It was later shown by Howard Percy Robertson (1949) and others (see Robertson–Mansouri–Sexl test theory), that it is possible to derive the Lorentz transformation entirely from the combination of three experiments. First, the Michelson–Morley experiment showed that the speed of light is independent of the orientation of the apparatus, establishing the relationship between longitudinal (β) and transverse (δ) lengths. Then in 1932, Roy Kennedy and Edward Thorndike modified the Michelson–Morley experiment by making the path lengths of the split beam unequal, with one arm being very short. The Kennedy–Thorndike experiment took place for many months as the Earth moved around the sun. Their negative result showed that the speed of light is independent of the velocity of the apparatus in different inertial frames. In addition it established that besides length changes, corresponding time changes must also occur, i.e., it established the relationship between longitudinal lengths (β) and time changes (α). So the two experiments do not provide the individual values of these quantities. This uncertainty corresponds to the undefined factor φ as described above. It was clear due to theoretical reasons (the group character of the Lorentz transformation as required by the relativity principle) that the individual values of length contraction and time dilation must assume their exact relativistic form. But a direct measurement of one of these quantities was still desirable to confirm the theoretical results. This was achieved by the Ives–Stilwell experiment (1938), measuring α in accordance with time dilation. Combining this value for α with the Kennedy–Thorndike null result shows that β must assume the value of relativistic length contraction. Combining β with the Michelson–Morley null result shows that δ must be zero. Therefore, the Lorentz transformation with φ = 1 is an unavoidable consequence of the combination of these three experiments. Special relativity is generally considered the solution to all negative aether drift (or isotropy of the speed of light) measurements, including the Michelson–Morley null result. Many high precision measurements have been conducted as tests of special relativity and modern searches for Lorentz violation in the photon, electron, nucleon, or neutrino sector, all of them confirming relativity. Incorrect alternatives As mentioned above, Michelson initially believed that his experiment would confirm Stokes' theory, according to which the aether was fully dragged in the vicinity of the earth (see Aether drag hypothesis). However, complete aether drag contradicts the observed aberration of light and was contradicted by other experiments as well. In addition, Lorentz showed in 1886 that Stokes's attempt to explain aberration is contradictory.
Furthermore, the assumption that the aether is not carried in the vicinity, but only within matter, was very problematic as shown by the Hammar experiment (1935). Hammar directed one leg of his interferometer through a heavy metal pipe plugged with lead. If aether were dragged by mass, it was theorized that the mass of the sealed metal pipe would have been enough to cause a visible effect. Once again, no effect was seen, so aether-drag theories are considered to be disproven. Walther Ritz's emission theory (or ballistic theory) was also consistent with the results of the experiment, not requiring aether. The theory postulates that light always has the same velocity with respect to the source. However de Sitter noted that emitter theory predicted several optical effects that were not seen in observations of binary stars in which the light from the two stars could be measured in a spectrometer. If emission theory were correct, the light from the stars should experience unusual fringe shifting due to the velocity of the stars being added to the speed of the light, but no such effect could be seen. It was later shown by J. G. Fox that the original de Sitter experiments were flawed due to extinction, but in 1977 Brecher observed X-rays from binary star systems with similar null results. Furthermore, Filippas and Fox (1964) conducted terrestrial particle accelerator tests specifically designed to address Fox's earlier "extinction" objection, the results being inconsistent with source dependence of the speed of light. Subsequent experiments Although Michelson and Morley went on to different experiments after their first publication in 1887, both remained active in the field. Other versions of the experiment were carried out with increasing sophistication. Morley was not convinced of his own results, and went on to conduct additional experiments with Dayton Miller from 1902 to 1904. Again, the result was negative within the margins of error. Miller worked on increasingly larger interferometers, culminating in one with a 32 m (effective) arm length that he tried at various sites, including on top of a mountain at the Mount Wilson Observatory. To avoid the possibility of the aether wind being blocked by solid walls, his mountaintop observations used a special shed with thin walls, mainly of canvas. From noisy, irregular data, he consistently extracted a small positive signal that varied with each rotation of the device, with the sidereal day, and on a yearly basis. His measurements in the 1920s amounted to approximately 10 km/s instead of the nearly 30 km/s expected from the Earth's orbital motion alone. He remained convinced this was due to partial entrainment or aether dragging, though he did not attempt a detailed explanation. He ignored critiques demonstrating the inconsistency of his results and the refutation by the Hammar experiment. Miller's findings were considered important at the time, and were discussed by Michelson, Lorentz and others at a meeting reported in 1928. There was general agreement that more experimentation was needed to check Miller's results. Miller later built a non-magnetic device to eliminate magnetostriction, while Michelson built one of non-expanding Invar to eliminate any remaining thermal effects. Other experimenters from around the world increased accuracy, eliminated possible side effects, or both. So far, no one has been able to replicate Miller's results, and modern experimental accuracies have ruled them out.
Roberts (2006) has pointed out that the primitive data reduction techniques used by Miller and other early experimenters, including Michelson and Morley, were capable of creating apparent periodic signals even when none existed in the actual data. After reanalyzing Miller's original data using modern techniques of quantitative error analysis, Roberts found Miller's apparent signals to be statistically insignificant. Using a special optical arrangement involving a 1/20 wave step in one mirror, Roy J. Kennedy (1926) and K.K. Illingworth (1927) (Fig. 8) converted the task of detecting fringe shifts from the relatively insensitive one of estimating their lateral displacements to the considerably more sensitive task of adjusting the light intensity on both sides of a sharp boundary for equal luminance. If they observed unequal illumination on either side of the step, such as in Fig. 8e, they would add or remove calibrated weights from the interferometer until both sides of the step were once again evenly illuminated, as in Fig. 8d. The number of weights added or removed provided a measure of the fringe shift. Different observers could detect changes as little as 1/1500 to 1/300 of a fringe. Kennedy also carried out an experiment at Mount Wilson, finding only about 1/10 the drift measured by Miller and no seasonal effects. In 1930, Georg Joos conducted an experiment using an automated interferometer with arms forged from pressed quartz having a very low coefficient of thermal expansion, that took continuous photographic strip recordings of the fringes through dozens of revolutions of the apparatus. Displacements of 1/1000 of a fringe could be measured on the photographic plates. No periodic fringe displacements were found, placing an upper limit to the aether wind of 1.5 km/s. In the table below, the expected values are related to the relative speed between Earth and Sun of 30 km/s. With respect to the speed of the solar system around the galactic center of about 220 km/s, or the speed of the solar system relative to the CMB rest frame of about 370 km/s, the null results of those experiments are even more obvious. Recent experiments Optical tests Optical tests of the isotropy of the speed of light became commonplace. New technologies, including the use of lasers and masers, have significantly improved measurement precision. (In the following table, only Essen (1955), Jaseja (1964), and Shamir/Fox (1969) are experiments of Michelson–Morley type, i.e., comparing two perpendicular beams. The other optical experiments employed different methods.) Recent optical resonator experiments During the early 21st century, there has been a resurgence in interest in performing precise Michelson–Morley type experiments using lasers, masers, cryogenic optical resonators, etc. This is in large part due to predictions of quantum gravity that suggest that special relativity may be violated at scales accessible to experimental study. The first of these highly accurate experiments was conducted by Brillet & Hall (1979), in which they analyzed a laser frequency stabilized to a resonance of a rotating optical Fabry–Pérot cavity. They set a limit on the anisotropy of the speed of light resulting from the Earth's motions of Δc/c ≈ 10^−15, where Δc is the difference between the speed of light in the x- and y-directions. As of 2015, optical and microwave resonator experiments have improved this limit to Δc/c ≈ 10^−18. In some of them, the devices were rotated or remained stationary, and some were combined with the Kennedy–Thorndike experiment.
In particular, Earth's direction and velocity (ca. 370 km/s) relative to the CMB rest frame are ordinarily used as references in these searches for anisotropies. Other tests of Lorentz invariance Examples of other experiments not based on the Michelson–Morley principle, i.e., non-optical isotropy tests achieving an even higher level of precision, are clock comparison or Hughes–Drever experiments. In Drever's 1961 experiment, ⁷Li nuclei in the ground state, which has total angular momentum J = 3/2, were split into four equally spaced levels by a magnetic field. Each transition between a pair of adjacent levels should emit a photon of equal frequency, resulting in a single, sharp spectral line. However, since the nuclear wave functions for different MJ have different orientations in space relative to the magnetic field, any orientation dependence, whether from an aether wind or from a dependence on the large-scale distribution of mass in space (see Mach's principle), would perturb the energy spacings between the four levels, resulting in an anomalous broadening or splitting of the line. No such broadening was observed. Modern repeats of this kind of experiment have provided some of the most accurate confirmations of the principle of Lorentz invariance.
Physical sciences
Theory of relativity
null
91127
https://en.wikipedia.org/wiki/Fermat%20number
Fermat number
In mathematics, a Fermat number, named after Pierre de Fermat (1607–1665), the first known to have studied them, is a positive integer of the form Fn = 2^(2^n) + 1, where n is a non-negative integer. The first few Fermat numbers are: 3, 5, 17, 257, 65537, 4294967297, 18446744073709551617, 340282366920938463463374607431768211457, ... . If 2^k + 1 is prime and k > 0, then k itself must be a power of 2, so 2^k + 1 is a Fermat number; such primes are called Fermat primes. To date, the only known Fermat primes are F0 = 3, F1 = 5, F2 = 17, F3 = 257, and F4 = 65537. Basic properties The Fermat numbers satisfy the following recurrence relations: Fn = (Fn−1 − 1)^2 + 1 for n ≥ 1, and Fn = F0 F1 ⋯ Fn−1 + 2 for n ≥ 1. Each of these relations can be proved by mathematical induction. From the second equation, we can deduce Goldbach's theorem (named after Christian Goldbach): no two Fermat numbers share a common integer factor greater than 1. To see this, suppose that i < j and Fi and Fj have a common factor a > 1. Then a divides both F0 F1 ⋯ Fj−1 and Fj; hence a divides their difference, 2. Since a > 1, this forces a = 2. This is a contradiction, because each Fermat number is clearly odd. As a corollary, we obtain another proof of the infinitude of the prime numbers: for each Fn, choose a prime factor pn; then the sequence (pn) is an infinite sequence of distinct primes. Further properties No Fermat prime can be expressed as the difference of two pth powers, where p is an odd prime. With the exception of F0 and F1, the last decimal digit of a Fermat number is 7. The sum of the reciprocals of all the Fermat numbers is irrational. (Solomon W. Golomb, 1963) Primality Fermat numbers and Fermat primes were first studied by Pierre de Fermat, who conjectured that all Fermat numbers are prime. Indeed, the first five Fermat numbers F0, ..., F4 are easily shown to be prime. Fermat's conjecture was refuted by Leonhard Euler in 1732 when he showed that F5 = 2^32 + 1 = 4294967297 = 641 × 6700417. Euler proved that every factor of Fn must have the form k·2^(n+1) + 1 (later improved to k·2^(n+2) + 1 by Lucas) for n ≥ 2. That 641 is a factor of F5 can be deduced from the equalities 641 = 2^7 × 5 + 1 and 641 = 2^4 + 5^4. It follows from the first equality that 2^7 × 5 ≡ −1 (mod 641) and therefore (raising to the fourth power) that 2^28 × 5^4 ≡ 1 (mod 641). On the other hand, the second equality implies that 5^4 ≡ −2^4 (mod 641). These congruences imply that 2^32 ≡ −1 (mod 641). Fermat was probably aware of the form of the factors later proved by Euler, so it seems curious that he failed to follow through on the straightforward calculation to find the factor. One common explanation is that Fermat made a computational mistake. There are no other known Fermat primes Fn with n > 4, but little is known about Fermat numbers for large n. In fact, each of the following is an open problem: Is Fn composite for all n > 4? Are there infinitely many Fermat primes? (Eisenstein 1844) Are there infinitely many composite Fermat numbers? Does a Fermat number exist that is not square-free? It is known that Fn is composite for 5 ≤ n ≤ 32, although of these, complete factorizations of Fn are known only for 0 ≤ n ≤ 11, and there are no known prime factors for n = 20 and n = 24. The largest Fermat number known to be composite is F18233954, and its prime factor was discovered in October 2020. Heuristic arguments Heuristics suggest that F4 is the last Fermat prime. The prime number theorem implies that a random integer in a suitable interval around N is prime with probability 1/ln N.
If one uses the heuristic that a Fermat number is prime with the same probability as a random integer of its size, and that F5, ..., F32 are composite, then the expected number of Fermat primes beyond F4 (or equivalently, beyond F32) should be ∑n≥33 1/ln Fn < (1/ln 2) ∑n≥33 2^−n = 1/(2^32 ln 2) ≈ 3.36 × 10^−10. One may interpret this number as an upper bound for the probability that a Fermat prime beyond F4 exists. This argument is not a rigorous proof. For one thing, it assumes that Fermat numbers behave "randomly", but the factors of Fermat numbers have special properties. Boklan and Conway published a more precise analysis suggesting that the probability that there is another Fermat prime is less than one in a billion. Anders Bjorn and Hans Riesel estimated the number of square factors of Fermat numbers from F5 onward to be vanishingly small; in other words, there are unlikely to be any non-squarefree Fermat numbers, and in general square factors of Fn are very rare for large n. Equivalent conditions Let Fn be the nth Fermat number. Pépin's test states that for n > 0, Fn is prime if and only if 3^((Fn−1)/2) ≡ −1 (mod Fn). The expression 3^((Fn−1)/2) can be evaluated modulo Fn by repeated squaring. This makes the test a fast polynomial-time algorithm. But Fermat numbers grow so rapidly that only a handful of them can be tested in a reasonable amount of time and space. There are some tests for numbers of the form k·2^m + 1, such as factors of Fermat numbers, for primality. Proth's theorem (1878). Let N = k·2^m + 1 with odd k < 2^m. If there is an integer a such that a^((N−1)/2) ≡ −1 (mod N), then N is prime. Conversely, if the above congruence does not hold, and in addition (a/N) = −1 (see Jacobi symbol), then N is composite. If N = Fn > 3, then the above Jacobi symbol is always equal to −1 for a = 3, and this special case of Proth's theorem is known as Pépin's test. Although Pépin's test and Proth's theorem have been implemented on computers to prove the compositeness of some Fermat numbers, neither test gives a specific nontrivial factor. In fact, no specific prime factors are known for n = 20 and 24. Factorization Because of Fermat numbers' size, it is difficult to factorize or even to check primality. Pépin's test gives a necessary and sufficient condition for primality of Fermat numbers, and can be implemented by modern computers. The elliptic curve method is a fast method for finding small prime divisors of numbers. The distributed computing project Fermatsearch has found some factors of Fermat numbers. Yves Gallot's proth.exe has been used to find factors of large Fermat numbers. Édouard Lucas, improving Euler's above-mentioned result, proved in 1878 that every factor of the Fermat number Fn, with n at least 2, is of the form k·2^(n+2) + 1 (see Proth number), where k is a positive integer. By itself, this makes it easy to prove the primality of the known Fermat primes.
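Pépin's test as stated is a few lines of code on top of modular exponentiation. A minimal Python sketch, practical only for small n since Fn grows doubly exponentially:

```python
def fermat(n: int) -> int:
    """The Fermat number F_n = 2^(2^n) + 1."""
    return 2 ** (2 ** n) + 1

def pepin(n: int) -> bool:
    """Pépin's test: for n >= 1, F_n is prime iff 3^((F_n - 1)/2) = -1 (mod F_n)."""
    f = fermat(n)
    # pow() with a modulus performs the repeated squaring mentioned in the text
    return pow(3, (f - 1) // 2, f) == f - 1

# F_1..F_4 are prime; F_5 onward (as far as anyone has checked) are composite.
print([n for n in range(1, 10) if pepin(n)])   # [1, 2, 3, 4]
```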
Factorizations of the first 12 Fermat numbers are:

F0 = 2^1 + 1 = 3 (prime)
F1 = 2^2 + 1 = 5 (prime)
F2 = 2^4 + 1 = 17 (prime)
F3 = 2^8 + 1 = 257 (prime)
F4 = 2^16 + 1 = 65,537 (prime; the largest known Fermat prime)
F5 = 2^32 + 1 = 4,294,967,297 = 641 × 6,700,417 (fully factored 1732)
F6 = 2^64 + 1 = 18,446,744,073,709,551,617 (20 digits) = 274,177 × 67,280,421,310,721 (14 digits) (fully factored 1855)
F7 = 2^128 + 1 = 340,282,366,920,938,463,463,374,607,431,768,211,457 (39 digits) = 59,649,589,127,497,217 (17 digits) × 5,704,689,200,685,129,054,721 (22 digits) (fully factored 1970)
F8 = 2^256 + 1 = 115,792,089,237,316,195,423,570,985,008,687,907,853,269,984,665,640,564,039,457,584,007,913,129,639,937 (78 digits) = 1,238,926,361,552,897 (16 digits) × 93,461,639,715,357,977,769,163,558,199,606,896,584,051,237,541,638,188,580,280,321 (62 digits) (fully factored 1980)
F9 = 2^512 + 1 = 13,407,807,929,942,597,099,574,024,998,205,846,127,479,365,820,592,393,377,723,561,443,721,764,030,073,546,976,801,874,298,166,903,427,690,031,858,186,486,050,853,753,882,811,946,569,946,433,649,006,084,097 (155 digits) = 2,424,833 × 7,455,602,825,647,884,208,337,395,736,200,454,918,783,366,342,657 (49 digits) × 741,640,062,627,530,801,524,787,141,901,937,474,059,940,781,097,519,023,905,821,316,144,415,759,504,705,008,092,818,711,693,940,737 (99 digits) (fully factored 1990)
F10 = 2^1024 + 1 = 179,769,313,486,231,590,772,930...304,835,356,329,624,224,137,217 (309 digits) = 45,592,577 × 6,487,031,809 × 4,659,775,785,220,018,543,264,560,743,076,778,192,897 (40 digits) × 130,439,874,405,488,189,727,484...806,217,820,753,127,014,424,577 (252 digits) (fully factored 1995)
F11 = 2^2048 + 1 = 32,317,006,071,311,007,300,714,8...193,555,853,611,059,596,230,657 (617 digits) = 319,489 × 974,849 × 167,988,556,341,760,475,137 (21 digits) × 3,560,841,906,445,833,920,513 (22 digits) × 173,462,447,179,147,555,430,258...491,382,441,723,306,598,834,177 (564 digits) (fully factored 1988)

To date, only F0 to F11 have been completely factored. The distributed computing project Fermat Search is searching for new factors of Fermat numbers. The set of all Fermat factors is A050922 (or, sorted, A023394) in OEIS. A number of factors of Fermat numbers were already known before 1950; since then, digital computers have helped find more factors. To date, 371 prime factors of Fermat numbers are known, and 324 Fermat numbers are known to be composite. Several new Fermat factors are found each year. Pseudoprimes and Fermat numbers Like composite numbers of the form 2^p − 1, every composite Fermat number is a strong pseudoprime to base 2.
This is because all strong pseudoprimes to base 2 are also Fermat pseudoprimes, i.e., 2^(Fn−1) ≡ 1 (mod Fn) for all Fermat numbers. In 1904, Cipolla showed that the product of at least two distinct prime or composite Fermat numbers Fa Fb ⋯ Fs, a > b > ⋯ > s ≥ 1, will be a Fermat pseudoprime to base 2 if and only if 2^s > a. Other theorems about Fermat numbers A Fermat number cannot be a perfect number or part of a pair of amicable numbers. The series of reciprocals of all prime divisors of Fermat numbers is convergent. If n^n + 1 is prime, there exists an integer m such that n = 2^(2^m). The equation n^n + 1 = F(2^m + m) holds in that case. Let the largest prime factor of the Fermat number Fn be P(Fn). Then, P(Fn) ≥ 2^(n+2)·(4n + 9) + 1. Relationship to constructible polygons Carl Friedrich Gauss developed the theory of Gaussian periods in his Disquisitiones Arithmeticae and formulated a sufficient condition for the constructibility of regular polygons. Gauss stated that this condition was also necessary, but never published a proof. Pierre Wantzel gave a full proof of necessity in 1837. The result is known as the Gauss–Wantzel theorem: An n-sided regular polygon can be constructed with compass and straightedge if and only if n is either a power of 2 or the product of a power of 2 and distinct Fermat primes: in other words, if and only if n is of the form n = 2^k or n = 2^k·p1·p2⋯ps, where k, s are nonnegative integers and the pi are distinct Fermat primes. A positive integer n is of the above form if and only if its totient φ(n) is a power of 2. Applications of Fermat numbers Pseudorandom number generation Fermat primes are particularly useful in generating pseudo-random sequences of numbers in the range 1, ..., N, where N is a power of 2. The most common method used is to take any seed value between 1 and P − 1, where P is a Fermat prime. Now multiply this by a number A, which is greater than the square root of P and is a primitive root modulo P (i.e., it is not a quadratic residue). Then take the result modulo P. The result is the new value for the RNG (see linear congruential generator; a sketch follows below). This is useful in computer science, since most data structures have members with 2^X possible values. For example, a byte has 256 (2^8) possible values (0–255). Therefore, to fill a byte or bytes with random values, a random number generator that produces values 1–256 can be used, the byte taking the output value minus 1. Very large Fermat primes are of particular interest in data encryption for this reason. This method produces only pseudorandom values, as after P − 1 repetitions, the sequence repeats. A poorly chosen multiplier can result in the sequence repeating sooner than P − 1. Generalized Fermat numbers Numbers of the form a^(2^n) + b^(2^n) with a, b any coprime integers, a > b > 0, are called generalized Fermat numbers. An odd prime p is a generalized Fermat number if and only if p is congruent to 1 (mod 4). (Here we consider only the case n > 0, so 3 = 2^(2^0) + 1 is not a counterexample.) An example of a probable prime of this form is 200^262144 + 119^262144 (found by Kellen Shenton). By analogy with the ordinary Fermat numbers, it is common to write generalized Fermat numbers of the form a^(2^n) + 1 as Fn(a). In this notation, for instance, the number 100,000,001 would be written as F3(10). In the following we shall restrict ourselves to primes of this form, a^(2^n) + 1; such primes are called "Fermat primes base a". Of course, these primes exist only if a is even. If we require n > 0, then Landau's fourth problem asks if there are infinitely many generalized Fermat primes Fn(a).
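Before turning to generalized Fermat primes in detail, here is a minimal illustration of the pseudorandom-number scheme described above: a Lehmer-style linear congruential generator with the Fermat-prime modulus P = F3 = 257. The particular multiplier is my own choice satisfying the stated conditions (greater than √P and a quadratic nonresidue, hence a primitive root), not a value from the article:

```python
P = 257   # the Fermat prime F_3 = 2^8 + 1
A = 19    # > sqrt(257) and a quadratic nonresidue mod 257, hence a primitive root

def lehmer_stream(seed: int):
    """Yield pseudorandom values in 1..256; the cycle length is P - 1 = 256."""
    x = seed % P
    while True:
        x = (A * x) % P
        yield x

gen = lehmer_stream(seed=42)
first = [next(gen) for _ in range(256)]
assert sorted(first) == list(range(1, 257))   # full period: every value 1..256 once
random_bytes = bytes(v - 1 for v in first)    # map outputs 1..256 to byte values 0..255
```

Because P − 1 = 256 is a power of 2, a primitive-root multiplier guarantees the full period of 256; a poorly chosen (non-primitive-root) multiplier would cycle sooner.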
Generalized Fermat primes of the form Fn(a) Because of the ease of proving their primality, generalized Fermat primes have become in recent years a topic for research within the field of number theory. Many of the largest known primes today are generalized Fermat primes. Generalized Fermat numbers can be prime only for even a, because if a is odd then every generalized Fermat number will be divisible by 2. The smallest prime number Fn(a) with n > 4 is F5(30), or 30^32 + 1. Besides, we can define "half generalized Fermat numbers" for an odd base: a half generalized Fermat number to base a (for odd a) is (a^(2^n) + 1)/2, and it is also to be expected that there will be only finitely many half generalized Fermat primes for each odd base. Here, the generalized Fermat numbers to an even base a are a^(2^n) + 1, while for odd a they are (a^(2^n) + 1)/2. If a is a perfect power with an odd exponent, then every generalized Fermat number can be algebraically factored, so it cannot be prime. The generalized Fermat prime F14(71) is the largest known generalized Fermat prime in bases b ≤ 1000; it was proven prime by elliptic curve primality proving. The smallest even base b such that Fn(b) = b^(2^n) + 1 (for given n = 0, 1, 2, ...) is prime are 2, 2, 2, 2, 2, 30, 102, 120, 278, 46, 824, 150, 1534, 30406, 67234, 70906, 48594, 62722, 24518, 75898, 919444, ... The smallest odd base b such that Fn(b) = (b^(2^n) + 1)/2 (for given n = 0, 1, 2, ...) is prime (or probable prime) are 3, 3, 3, 9, 3, 3, 3, 113, 331, 513, 827, 799, 3291, 5041, 71, 220221, 23891, 11559, 187503, 35963, ... Conversely, the smallest k such that (2^n)^k + 1 (for given n) is prime are 1, 1, 1, 0, 1, 1, 2, 1, 1, 2, 1, 2, 2, 1, 1, 0, 4, 1, ... (The next term is unknown.) A more elaborate theory can be used to predict the number of bases for which Fn(a) will be prime for fixed n. The number of generalized Fermat primes can be roughly expected to halve as n is increased by 1. Generalized Fermat primes of the form Fn(a, b) It is also possible to construct generalized Fermat primes of the form a^(2^n) + b^(2^n). As in the case where b = 1, numbers of this form will always be divisible by 2 if a + b is even, but it is still possible to define generalized half-Fermat primes of this type. Largest known generalized Fermat primes The ten largest known generalized Fermat primes were all discovered by participants in the PrimeGrid project. On the Prime Pages one can find the current top 20 generalized Fermat primes and the current top 100 generalized Fermat primes.
Mathematics
Prime numbers
null
91173
https://en.wikipedia.org/wiki/Axial%20tilt
Axial tilt
In astronomy, axial tilt, also known as obliquity, is the angle between an object's rotational axis and its orbital axis, which is the line perpendicular to its orbital plane; equivalently, it is the angle between its equatorial plane and orbital plane. It differs from orbital inclination. At an obliquity of 0 degrees, the two axes point in the same direction; that is, the rotational axis is perpendicular to the orbital plane. The rotational axis of Earth, for example, is the imaginary line that passes through both the North Pole and South Pole, whereas the Earth's orbital axis is the line perpendicular to the imaginary plane through which the Earth moves as it revolves around the Sun; the Earth's obliquity or axial tilt is the angle between these two lines. Over the course of an orbital period, the obliquity usually does not change considerably, and the orientation of the axis remains the same relative to the background of stars. This causes one pole to be pointed more toward the Sun on one side of the orbit, and more away from the Sun on the other side—the cause of the seasons on Earth. Standards There are two standard methods of specifying a planet's tilt. One way is based on the planet's north pole, defined in relation to the direction of Earth's north pole, and the other way is based on the planet's positive pole, defined by the right-hand rule: The International Astronomical Union (IAU) defines the north pole of a planet as that which lies on Earth's north side of the invariable plane of the Solar System; under this system, Venus is tilted 3° and rotates retrograde, opposite that of most of the other planets. The IAU also uses the right-hand rule to define a positive pole for the purpose of determining orientation. Using this convention, Venus is tilted 177° ("upside down") and rotates prograde. Earth Earth's orbital plane is known as the ecliptic plane, and Earth's tilt is known to astronomers as the obliquity of the ecliptic, being the angle between the ecliptic and the celestial equator on the celestial sphere. It is denoted by the Greek letter epsilon (ε). Earth currently has an axial tilt of about 23.44°. This value remains about the same relative to a stationary orbital plane throughout the cycles of axial precession. But the ecliptic (i.e., Earth's orbit) moves due to planetary perturbations, and the obliquity of the ecliptic is not a fixed quantity. At present, it is decreasing at a rate of about 46.8″ per century (see details in Short term below). History The ancient Greeks had good measurements of the obliquity since about 350 BCE, when Pytheas of Marseilles measured the shadow of a gnomon at the summer solstice. About 830 CE, the Caliph Al-Mamun of Baghdad directed his astronomers to measure the obliquity, and the result was used in the Arab world for many years. In 1437, Ulugh Beg determined the Earth's axial tilt as 23°30′17″ (23.5047°). During the Middle Ages, it was widely believed that both precession and Earth's obliquity oscillated around a mean value, with a period of 672 years, an idea known as trepidation of the equinoxes. Perhaps the first to realize this was incorrect (during historic time) was Ibn al-Shatir in the fourteenth century, and the first to realize that the obliquity is decreasing at a relatively constant rate was Fracastoro in 1538.
The first accurate, modern, western observations of the obliquity were probably those of Tycho Brahe from Denmark, about 1584, although observations by several others, including al-Ma'mun, al-Tusi, Purbach, Regiomontanus, and Walther, could have provided similar information.
Seasons
Earth's axis remains tilted in the same direction with reference to the background stars throughout a year (regardless of where it is in its orbit) – this is known as axial parallelism. This means that one pole (and the associated hemisphere of Earth) will be directed away from the Sun at one side of the orbit, and half an orbit later (half a year later) this pole will be directed towards the Sun. This is the cause of Earth's seasons. Summer occurs in the Northern hemisphere when the north pole is directed toward and the south pole away from the Sun. Variations in Earth's axial tilt can influence the seasons and are likely a factor in long-term climatic change (also see Milankovitch cycles).
Oscillation
Short term
The exact angular value of the obliquity is found by observation of the motions of Earth and planets over many years. Astronomers produce new fundamental ephemerides as the accuracy of observation improves and as the understanding of the dynamics increases, and from these ephemerides various astronomical values, including the obliquity, are derived. Annual almanacs are published listing the derived values and methods of use. Until 1983, the Astronomical Almanac's angular value of the mean obliquity for any date was calculated based on the work of Newcomb, who analyzed positions of the planets until about 1895; the expression is a polynomial in T, where ε is the obliquity and T is tropical centuries from B1900.0 to the date in question. From 1984, the Jet Propulsion Laboratory's DE series of computer-generated ephemerides took over as the fundamental ephemeris of the Astronomical Almanac. Obliquity based on DE200, which analyzed observations from 1911 to 1979, was calculated from a polynomial of the same form, where T is hereafter Julian centuries from J2000.0. JPL's fundamental ephemerides have been continually updated. For instance, following the IAU resolution of 2006 in favor of the P03 astronomical model, the Astronomical Almanac for 2010 specifies a fifth-degree polynomial in T (its coefficients appear in the code sketch below). These expressions for the obliquity are intended for high precision over a relatively short time span, perhaps several centuries. Jacques Laskar computed a higher-order expression, good to 0.02″ over 1000 years and to several arcseconds over 10,000 years, in a time variable counting multiples of 10,000 Julian years from J2000.0. These expressions are for the so-called mean obliquity, that is, the obliquity free from short-term variations. Periodic motions of the Moon and of Earth in its orbit cause much smaller (9.2 arcseconds) short-period (about 18.6 years) oscillations of the rotation axis of Earth, known as nutation, which add a periodic component to Earth's obliquity. The true or instantaneous obliquity includes this nutation.
Long term
Using numerical methods to simulate Solar System behavior over a period of several million years, long-term changes in Earth's orbit, and hence its obliquity, have been investigated. For the past 5 million years, Earth's obliquity has varied between roughly 22° and 24.5°, with a mean period of 41,040 years. This cycle is a combination of precession and the largest term in the motion of the ecliptic. For the next 1 million years, the cycle will carry the obliquity over a slightly narrower range. The Moon has a stabilizing effect on Earth's obliquity.
Frequency map analysis conducted in 1993 suggested that, in the absence of the Moon, the obliquity could change rapidly due to orbital resonances and chaotic behavior of the Solar System, reaching as high as 90° in as little as a few million years (also see Orbit of the Moon). However, more recent numerical simulations made in 2011 indicated that even in the absence of the Moon, Earth's obliquity might not be quite so unstable, varying only by about 20–25°. To resolve this contradiction, the diffusion rate of obliquity has been calculated, and it was found that it would take many billions of years for Earth's obliquity to reach near 90°. The Moon's stabilizing effect will continue for less than two billion years. As the Moon continues to recede from Earth due to tidal acceleration, resonances may occur which will cause large oscillations of the obliquity.
Solar System bodies
All four of the innermost, rocky planets of the Solar System may have had large variations of their obliquity in the past. Since obliquity is the angle between the axis of rotation and the direction perpendicular to the orbital plane, it changes as the orbital plane changes due to the influence of other planets. But the axis of rotation can also move (axial precession), due to torque exerted by the Sun on a planet's equatorial bulge. Like Earth, all of the rocky planets show axial precession. If the precession rate were very fast the obliquity would actually remain fairly constant even as the orbital plane changes. The rate varies due to tidal dissipation and core-mantle interaction, among other things. When a planet's precession rate approaches certain values, orbital resonances may cause large changes in obliquity. The amplitude of the contribution having one of the resonant rates is divided by the difference between the resonant rate and the precession rate, so it becomes large when the two are similar. Mercury and Venus have most likely been stabilized by the tidal dissipation of the Sun. Earth was stabilized by the Moon, as mentioned above, but before its formation, Earth, too, could have passed through times of instability. Mars's obliquity is quite variable over millions of years and may be in a chaotic state; it varies by as much as 0° to 60° over some millions of years, depending on perturbations of the planets. Some authors dispute that Mars's obliquity is chaotic, and show that tidal dissipation and viscous core-mantle coupling are adequate for it to have reached a fully damped state, similar to Mercury and Venus. The occasional shifts in the axial tilt of Mars have been suggested as an explanation for the appearance and disappearance of rivers and lakes over the course of the existence of Mars. A shift could cause a burst of methane into the atmosphere, causing warming, but then the methane would be destroyed and the climate would become arid again. The obliquities of the outer planets are considered relatively stable.
Extrasolar planets
The stellar obliquity, i.e., the axial tilt of a star with respect to the orbital plane of one of its planets, has been determined for only a few systems. By 2012, the sky-projected spin-orbit misalignment had been observed for 49 stars; it serves as a lower limit on the stellar obliquity. Most of these measurements rely on the Rossiter–McLaughlin effect. Since the launch of space-based telescopes such as the Kepler space telescope, it has become possible to determine and estimate the obliquity of an extrasolar planet.
The rotational flattening of a planet, and its entourage of moons and/or rings, are traceable with high-precision photometry and provide access to the planetary obliquity. Many extrasolar planets have since had their obliquity determined, such as Kepler-186f and Kepler-413b. Astrophysicists have applied tidal theories to predict the obliquity of extrasolar planets. It has been shown that the obliquities of exoplanets in the habitable zone around low-mass stars tend to be eroded in less than 10^9 years, which means that they would not have tilt-induced seasons as Earth has.
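As a concrete illustration of the short-term mean-obliquity series discussed above, here is a small Python sketch evaluating the IAU 2006 (P03) polynomial. The function and its name are my illustrative addition, not code from any almanac; the coefficients are the published P03 values in arcseconds:

# Mean obliquity of the ecliptic per the IAU 2006 (P03) series.
# Valid to high precision only within a few centuries of J2000.0.

# P03 coefficients in arcseconds; T is Julian centuries (TT) from J2000.0.
P03 = (84381.406, -46.836769, -0.0001831, 0.00200340, -5.76e-7, -4.34e-8)

def mean_obliquity_deg(T: float) -> float:
    """Mean obliquity in degrees, ignoring the ~9.2″ nutation term."""
    eps_arcsec = sum(c * T**k for k, c in enumerate(P03))
    return eps_arcsec / 3600.0

if __name__ == "__main__":
    print(mean_obliquity_deg(0.0))   # J2000.0: ~23.4393° (23°26′21.406″)
    print(mean_obliquity_deg(0.25))  # year 2025: slightly smaller, ~23.436°

The leading linear term, −46.836769″ per century, matches the roughly 46.8″-per-century decrease quoted earlier in the article.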
Physical sciences
Planetary science
Astronomy
91191
https://en.wikipedia.org/wiki/Maintenance
Maintenance
The technical meaning of maintenance involves functional checks, servicing, repairing or replacing of necessary devices, equipment, machinery, building infrastructure and supporting utilities in industrial, business, and residential installations. Over time, this has come to include multiple wordings that describe various cost-effective practices to keep equipment operational; these activities occur either before or after a failure.
Definitions
Maintenance functions can be defined as maintenance, repair and overhaul (MRO), and MRO is also used for maintenance, repair and operations. Over time, the terminology of maintenance and MRO has begun to become standardized. The United States Department of Defense uses the following definitions:
Any activity—such as tests, measurements, replacements, adjustments, and repairs—intended to retain or restore a functional unit in or to a specified state in which the unit can perform its required functions.
All action taken to retain material in a serviceable condition or to restore it to serviceability. It includes inspections, testing, servicing, classification as to serviceability, repair, rebuilding, and reclamation.
All supply and repair action taken to keep a force in condition to carry out its mission.
The routine recurring work required to keep a facility (plant, building, structure, ground facility, utility system, or other real property) in such condition that it may be continuously used, at its original or designed capacity and efficiency for its intended purpose.
Maintenance is strictly connected to the utilization stage of the product or technical system, in which the concept of maintainability must be included. In this scenario, maintainability is considered as the ability of an item, under stated conditions of use, to be retained in or restored to a state in which it can perform its required functions, using prescribed procedures and resources.
In some domains like aircraft maintenance, the terms maintenance, repair and overhaul also include inspection, rebuilding, alteration and the supply of spare parts, accessories, raw materials, adhesives, sealants, coatings and consumables for aircraft maintenance at the utilization stage. In international civil aviation, maintenance means:
The performance of tasks required to ensure the continuing airworthiness of an aircraft, including any one or combination of overhaul, inspection, replacement, defect rectification, and the embodiment of a modification or a repair.
This definition covers all activities for which aviation regulations require issuance of a maintenance release document (aircraft certificate of return to service – CRS).
Types
The marine and air transportation, offshore structures, industrial plant and facility management industries depend on maintenance, repair and overhaul (MRO) including scheduled or preventive paint maintenance programmes to maintain and restore coatings applied to steel in environments subject to attack from erosion, corrosion and environmental pollution.
The basic types of maintenance falling under MRO include:
Preventive maintenance, where equipment is checked and serviced in a planned manner (at scheduled points in time or continuously)
Corrective maintenance, where equipment is repaired or replaced after wear, malfunction or breakdown
Reinforcement
Architectural conservation employs MRO to preserve, rehabilitate, restore, or reconstruct historical structures with stone, brick, glass, metal, and wood which match the original constituent materials where possible, or with suitable polymer technologies when not.
Preventive maintenance
Preventive maintenance (PM) is "a routine for periodically inspecting" with the goal of "noticing small problems and fixing them before major ones develop." Ideally, "nothing breaks down." The main goal behind PM is for the equipment to make it from one planned service to the next planned service without any failures caused by fatigue, extreme fluctuation in temperature (such as heat waves) during seasonal changes, neglect, or normal wear (preventable items), which Planned Maintenance and Condition Based Maintenance help to achieve by replacing worn components before they actually fail. Maintenance activities include partial or complete overhauls at specified periods, oil changes, lubrication, minor adjustments, and so on. In addition, workers can record equipment deterioration so they know to replace or repair worn parts before they cause system failure. The New York Times gave an example of "machinery that is not lubricated on schedule" that functions "until a bearing burns out." Preventive maintenance contracts are generally a fixed cost, whereas improper maintenance introduces a variable cost: replacement of major equipment. The main objectives of PM are:
Enhance capital equipment productive life.
Reduce critical equipment breakdown.
Minimize production loss due to equipment failures.
Preventive maintenance or preventative maintenance (PM) has the following meanings:
The care and servicing by personnel for the purpose of maintaining equipment in satisfactory operating condition by providing for systematic inspection, detection, and correction of incipient failures either before they occur or before they develop into major defects.
The work carried out on equipment in order to avoid its breakdown or malfunction. It is a regular and routine action taken on equipment in order to prevent its breakdown.
Maintenance, including tests, measurements, adjustments, parts replacement, and cleaning, performed specifically to prevent faults from occurring.
Other terms and abbreviations related to PM are:
scheduled maintenance
planned maintenance, which may include scheduled downtime for equipment replacement
planned preventive maintenance (PPM), another name for PM
run-to-failure maintenance: fixing things only when they break. This is also known as "a reactive maintenance strategy" and may involve "consequential damage."
Planned maintenance
Planned preventive maintenance (PPM), more commonly referred to as simply planned maintenance (PM) or scheduled maintenance, is any variety of scheduled maintenance to an object or item of equipment. Specifically, planned maintenance is a scheduled service visit carried out by a competent and suitable agent, to ensure that an item of equipment is operating correctly and to therefore avoid any unscheduled breakdown and downtime. The key factor as to when and why this work is being done is timing, and involves a service, resource or facility being unavailable.
By contrast, condition-based maintenance is not directly based on equipment age. Planned maintenance is preplanned, and can be date-based, based on equipment running hours, or on distance travelled. Parts that have scheduled maintenance at fixed intervals, usually due to wearout or a fixed shelf life, are sometimes known as time-change interval, or TCI, items.
Predictive maintenance
Predictive maintenance techniques are designed to help determine the condition of in-service equipment in order to estimate when maintenance should be performed. This approach promises cost savings over routine or time-based preventive maintenance, because tasks are performed only when warranted. Thus, it is regarded as condition-based maintenance carried out as suggested by estimations of the degradation state of an item. The main promise of predictive maintenance is to allow convenient scheduling of corrective maintenance, and to prevent unexpected equipment failures. This maintenance strategy uses sensors to monitor key parameters within a machine or system, and uses this data in conjunction with analysed historical trends to continuously evaluate the system health and predict a breakdown before it happens. This strategy allows maintenance to be performed more efficiently, since more up-to-date data is obtained about how close the product is to failure.
Predictive replacement is the replacement of an item that is still functioning properly. Usually it is a tax-benefit-based replacement policy whereby expensive equipment or batches of individually inexpensive supply items are removed and donated on a predicted/fixed shelf life schedule. These items are given to tax-exempt institutions.
Condition-based maintenance
Condition-based maintenance (CBM), briefly described, is maintenance performed when the need arises. Albeit chronologically much older, it is considered one section or practice inside the broader and newer predictive maintenance field, where new AI technologies and connectivity abilities are put into action and where the acronym CBM is more often used to describe "condition-based monitoring" rather than the maintenance itself. CBM maintenance is performed after one or more indicators show that equipment is going to fail or that equipment performance is deteriorating. This concept is applicable to mission-critical systems that incorporate active redundancy and fault reporting. It is also applicable to non-mission-critical systems that lack redundancy and fault reporting.
Condition-based maintenance was introduced to try to maintain the correct equipment at the right time. CBM is based on using real-time data to prioritize and optimize maintenance resources. Observing the state of the system is known as condition monitoring. Such a system will determine the equipment's health, and act only when maintenance is actually necessary. Developments in recent years have allowed extensive instrumentation of equipment, and together with better tools for analyzing condition data, the maintenance personnel of today are more than ever able to decide what is the right time to perform maintenance on some piece of equipment. Ideally, condition-based maintenance will allow the maintenance personnel to do only the right things, minimizing spare parts cost, system downtime and time spent on maintenance.
Challenges
Despite its usefulness, there are several challenges to the use of CBM. First and most important of all, the initial cost of CBM can be high. It requires improved instrumentation of the equipment.
Often the cost of sufficient instruments can be quite large, especially on equipment that is already installed. Wireless systems have reduced the initial cost. Therefore, it is important for the installer to weigh the importance of the investment before adding CBM to all equipment. A result of this cost is that the first generation of CBM in the oil and gas industry has only focused on vibration in heavy rotating equipment.
Secondly, introducing CBM will invoke a major change in how maintenance is performed, and potentially to the whole maintenance organization in a company. Organizational changes are in general difficult. Also, the technical side of it is not always as simple. Even if some types of equipment can easily be observed by measuring simple values such as vibration (displacement, velocity or acceleration), temperature or pressure, it is not trivial to turn this measured data into actionable knowledge about the health of the equipment.
Value potential
As systems get more costly, and instrumentation and information systems tend to become cheaper and more reliable, CBM becomes an important tool for running a plant or factory in an optimal manner. Better operations will lead to lower production cost and lower use of resources. And lower use of resources may be one of the most important differentiators in a future where environmental issues become more important by the day. Another scenario where value can be created is by monitoring the health of a car motor. Rather than changing parts at predefined intervals, the car itself can tell you when something needs to be changed based on cheap and simple instrumentation. It is Department of Defense policy that condition-based maintenance (CBM) be "implemented to improve maintenance agility and responsiveness, increase operational availability, and reduce life cycle total ownership costs".
Advantages and disadvantages
CBM has some advantages over planned maintenance:
Improved system reliability
Decreased maintenance costs
Decreased number of maintenance operations, causing a reduction in the influence of human error
Its disadvantages are:
High installation costs, for minor equipment items often more than the value of the equipment
Unpredictable maintenance periods cause costs to be divided unequally
Increased number of parts (the CBM installation itself) that need maintenance and checking
Today, due to its costs, CBM is not used for less important parts of machinery despite obvious advantages. However, it can be found wherever increased safety is required, and in the future it will be applied even more widely.
Corrective maintenance
Corrective maintenance, a type of maintenance performed after equipment breaks down or malfunctions, is often the most expensive – not only can worn equipment damage other parts and cause multiple damage, but consequential repair and replacement costs and loss of revenues due to downtime during overhaul can be significant. Rebuilding and resurfacing of equipment and infrastructure damaged by erosion and corrosion as part of corrective or preventive maintenance programmes involves conventional processes such as welding and metal flame spraying, as well as engineered solutions with thermoset polymeric materials.
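As an illustration of the condition-based idea described above — intervene only when a monitored indicator crosses a degradation limit — here is a minimal Python sketch. The sensor names and alarm limits are hypothetical, and a production system would add trend analysis over historical data rather than a bare threshold rule:

# Minimal condition-based maintenance (CBM) sketch: flag equipment for
# maintenance only when monitored indicators exceed alarm limits.
# All names and limits below are illustrative, not from any standard.
from dataclasses import dataclass

@dataclass
class Reading:
    vibration_rms_mm_s: float  # overall vibration velocity
    bearing_temp_c: float      # bearing temperature

# Hypothetical alarm limits for one machine class.
LIMITS = {"vibration_rms_mm_s": 7.1, "bearing_temp_c": 95.0}

def needs_maintenance(r: Reading) -> list[str]:
    """Return the indicators in alarm; an empty list means keep running."""
    alarms = []
    if r.vibration_rms_mm_s > LIMITS["vibration_rms_mm_s"]:
        alarms.append("vibration")
    if r.bearing_temp_c > LIMITS["bearing_temp_c"]:
        alarms.append("bearing temperature")
    return alarms

if __name__ == "__main__":
    print(needs_maintenance(Reading(3.2, 71.0)))  # [] -> healthy
    print(needs_maintenance(Reading(9.8, 88.0)))  # ['vibration'] -> schedule work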
Technology
General
null
91420
https://en.wikipedia.org/wiki/Existential%20quantification
Existential quantification
In predicate logic, an existential quantification is a type of quantifier, a logical constant which is interpreted as "there exists", "there is at least one", or "for some". It is usually denoted by the logical operator symbol ∃, which, when used together with a predicate variable, is called an existential quantifier ("∃x", "∃(x)", or "(∃x)"). Existential quantification is distinct from universal quantification ("for all"), which asserts that the property or relation holds for all members of the domain. Some sources use the term existentialization to refer to existential quantification. Quantification in general is covered in the article on quantification (logic). The existential quantifier is encoded as U+2203 ∃ in Unicode, and as \exists in LaTeX and related formula editors.
Basics
Consider the formal sentence
For some natural number n, n×n = 25.
This is a single statement using existential quantification. It is roughly analogous to the informal sentence "Either 0×0 = 25, or 1×1 = 25, or 2×2 = 25, or... and so on," but more precise, because it doesn't need us to infer the meaning of the phrase "and so on." (In particular, the sentence explicitly specifies its domain of discourse to be the natural numbers, not, for example, the real numbers.) This particular example is true, because 5 is a natural number, and when we substitute 5 for n, we produce the true statement 5×5 = 25. It does not matter that "n×n = 25" is true only for that single natural number, 5; the existence of a single solution is enough to prove this existential quantification to be true. In contrast, "For some even number n, n×n = 25" is false, because there are no even solutions. The domain of discourse, which specifies the values the variable n is allowed to take, is therefore critical to a statement's trueness or falseness. Logical conjunctions are used to restrict the domain of discourse to fulfill a given predicate. For example, the sentence
For some positive odd number n, n×n = 25
is logically equivalent to the sentence
For some natural number n, n is odd and n×n = 25.
The mathematical proof of an existential statement about "some" object may be achieved either by a constructive proof, which exhibits an object satisfying the "some" statement, or by a nonconstructive proof, which shows that there must be such an object without concretely exhibiting one.
Notation
In symbolic logic, "∃" (a turned letter "E" in a sans-serif font, Unicode U+2203) is used to indicate existential quantification. For example, the notation ∃n∈ℕ: n×n = 25 represents the (true) statement
There exists some n in the set of natural numbers such that n×n = 25.
The symbol's first usage is thought to be by Giuseppe Peano in Formulario mathematico (1896). Afterwards, Bertrand Russell popularised its use as the existential quantifier. Through his research in set theory, Peano also introduced the symbols ∩ and ∪ to respectively denote the intersection and union of sets.
Properties
Negation
A quantified propositional function is a statement; thus, like statements, quantified functions can be negated. The symbol ¬ is used to denote negation. For example, if P(x) is the predicate "x is greater than 0 and less than 1", then, for a domain of discourse X of all natural numbers, the existential quantification "There exists a natural number x which is greater than 0 and less than 1" can be symbolically stated as:
∃x∈X P(x)
This can be demonstrated to be false. Truthfully, it must be said, "It is not the case that there is a natural number x that is greater than 0 and less than 1", or, symbolically:
¬∃x∈X P(x).
If there is no element of the domain of discourse for which the statement is true, then it must be false for all of those elements. That is, the negation of
∃x∈X P(x)
is logically equivalent to "For any natural number x, x is not greater than 0 and less than 1", or:
∀x∈X ¬P(x)
Generally, then, the negation of a propositional function's existential quantification is a universal quantification of that propositional function's negation; symbolically,
¬∃x∈X P(x) ≡ ∀x∈X ¬P(x)
(This is a generalization of De Morgan's laws to predicate logic.)
A common error is stating "all persons are not married" (i.e., "there exists no person who is married"), when "not all persons are married" (i.e., "there exists a person who is not married") is intended:
¬∃x∈X P(x) ≡ ∀x∈X ¬P(x) ≢ ¬∀x∈X P(x) ≡ ∃x∈X ¬P(x)
Negation is also expressible through a statement of "for no", as opposed to "for some":
∄x∈X P(x) ≡ ¬∃x∈X P(x)
Unlike the universal quantifier, the existential quantifier distributes over logical disjunctions:
∃x∈X (P(x) ∨ Q(x)) ≡ (∃x∈X P(x)) ∨ (∃x∈X Q(x))
Rules of inference
A rule of inference is a rule justifying a logical step from hypothesis to conclusion. There are several rules of inference which utilize the existential quantifier. Existential introduction (∃I) concludes that, if the propositional function is known to be true for a particular element of the domain of discourse, then it must be true that there exists an element for which the proposition function is true. Symbolically,
P(a) → ∃x∈X P(x)
Existential instantiation, when conducted in a Fitch style deduction, proceeds by entering a new sub-derivation while substituting an existentially quantified variable for a subject—which does not appear within any active sub-derivation. If a conclusion can be reached within this sub-derivation in which the substituted subject does not appear, then one can exit that sub-derivation with that conclusion. The reasoning behind existential elimination (∃E) is as follows: If it is given that there exists an element for which the proposition function is true, and if a conclusion can be reached by giving that element an arbitrary name, that conclusion is necessarily true, as long as it does not contain the name. Symbolically, for an arbitrary c and for a proposition Q in which c does not appear:
∃x∈X P(x) → ((P(c) → Q) → Q)
must be true for all values of c over the same domain X; else, the logic does not follow: If c is not arbitrary, and is instead a specific element of the domain of discourse, then stating P(c) might unjustifiably give more information about that object.
The empty set
The formula ∃x∈∅ P(x) is always false, regardless of P(x). This is because ∅ denotes the empty set, and no x of any description – let alone an x fulfilling a given predicate P(x) – exists in the empty set.
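The proof-by-witness idea behind existential introduction can be made concrete in a proof assistant. A minimal Lean 4 sketch (my illustration, not from the article) proves the running example ∃ n, n × n = 25 by exhibiting the witness 5, then one direction of the quantifier-negation equivalence:

-- Existential introduction: supply the witness 5; `rfl` checks 5 * 5 = 25.
example : ∃ n : Nat, n * n = 25 := ⟨5, rfl⟩

-- One direction of ¬∃x P(x) ≡ ∀x ¬P(x), in plain Lean 4:
-- from a refutation of the existential, derive the universal negation.
example (α : Type) (P : α → Prop) (h : ¬ ∃ x, P x) : ∀ x, ¬ P x :=
  fun x hx => h ⟨x, hx⟩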
Mathematics
Mathematical logic
null
91983
https://en.wikipedia.org/wiki/Willow
Willow
Willows, also called sallows and osiers, of the genus Salix, comprise around 350 species (plus numerous hybrids) of typically deciduous trees and shrubs, found primarily on moist soils in cold and temperate regions. Most species are known as willow, but some narrow-leaved shrub species are called osier, and some broader-leaved species are referred to as sallow (from Old English sealh, related to the Latin word salix, willow). Some willows (particularly arctic and alpine species) are low-growing or creeping shrubs; for example, the dwarf willow (Salix herbacea) rarely exceeds a few centimetres in height, though it spreads widely across the ground.
Description
Willows all have abundant watery bark sap, which is heavily charged with salicylic acid, soft, usually pliant, tough wood, slender branches, and large, fibrous, often stoloniferous roots. The roots are remarkable for their toughness, size, and tenacity to live, and roots readily sprout from aerial parts of the plant.
Leaves
The leaves are typically elongated, but they might also be round to oval, frequently with serrated edges. Most species are deciduous; semi-evergreen willows with coriaceous leaves are rare, e.g. Salix micans and S. australior in the eastern Mediterranean. All the buds are lateral; no absolutely terminal bud is ever formed. The buds are covered by a single scale. Usually, the bud scale is fused into a cap-like shape, but in some species it wraps around and the edges overlap. The leaves are simple, feather-veined, and typically linear-lanceolate. Usually they are serrate, rounded at base, acute or acuminate. The leaf petioles are short, the stipules often very conspicuous, resembling tiny, round leaves, and sometimes remaining for half the summer. On some species, however, they are small, inconspicuous, and caducous (soon falling). In color, the leaves show a great variety of greens, ranging from yellowish to bluish. Willows are among the earliest woody plants to leaf out in spring and the last to drop their leaves in autumn. In the northern hemisphere, leafout may occur as early as February depending on the climate and is stimulated by air temperature. If daytime highs stay warm enough for a few consecutive days, a willow will attempt to put out leaves and flowers. In the northern hemisphere, leaf drop in autumn occurs when day length shortens to approximately ten hours and 25 minutes, which varies by latitude (as early as the first week of October for boreal species such as S. alaxensis and as late as the third week of December for willows growing in far southern areas).
Flowers
With the exception of Salix martiana, willows are dioecious, with male and female flowers appearing as catkins on separate plants; the catkins are produced early in the spring, often before the leaves. The staminate (male) flowers have neither calyx nor corolla; they consist simply of stamens, varying in number from two to 10, accompanied by a nectariferous gland and inserted on the base of a scale which is itself borne on the rachis of a drooping raceme called a catkin, or ament. This scale is square, entire, and very hairy. The anthers are rose-colored in the bud, but orange or purple after the flower opens; they are two-celled and the cells open latitudinally. The filaments are threadlike, usually pale brown, and often bald. The pistillate (female) flowers are also without calyx or corolla, and consist of a single ovary accompanied by a small, flat nectar gland and inserted on the base of a scale which is likewise borne on the rachis of a catkin.
The ovary is one-celled, the style two-lobed, and the ovules numerous.
Taxonomy
The scientific use of the genus name Salix originates with Carl Linnaeus in 1753. The modern concept of types did not exist at the time, so types for Linnaeus' genera had to be designated later. The type species, i.e., the species on which the genus name is based, is Salix alba, based on a conserved type. The generic name Salix comes from Latin and was already used by the Romans for various types of willow. A theory is that the word is ultimately derived from a Celtic language, sal meaning 'near' and lis meaning 'water', alluding to their habitat. Willows are classified into subgenera, though which ones these should be remains in flux. Morphological studies generally divide the species into 3 or 5 subgenera: Salix (though some split off subgenera Longifoliae and Protitae), Chamaetia, and Vetrix. Phylogenetic studies have suggested that Chamaetia and Vetrix be in one clade. The oldest fossils of the genus are known from the early Eocene of North America, with the earliest occurrences in Europe during the Early Oligocene.
Selected species
The genus Salix is made up of around 350 species of deciduous trees and shrubs. They hybridise freely, and over 160 such hybrids have been named. Examples of well-known willows include:
Salix aegyptiaca L. – musk willow
Salix alba L. – white willow
Salix amygdaloides Andersson – peachleaf willow
Salix arctica Pall. – Arctic willow
Salix babylonica L. – Babylon willow, Peking willow or weeping willow
Salix bebbiana Sarg. – beaked willow, long-beaked willow, or Bebb's willow
Salix caprea L. – goat willow or pussy willow
Salix cinerea L. – grey willow
Salix discolor Muhl. – American pussy willow or glaucous willow
Salix euxina I.V.Belyaeva – eastern crack willow
Salix exigua Nutt. – sandbar willow, narrowleaf willow, or coyote willow
Salix × fragilis L. – common crack willow
Salix glauca L. – gray willow, grayleaf willow, white willow, or glaucous willow
Salix gooddingii C.R.Ball – Goodding's black willow
Salix herbacea L. – dwarf willow, least willow or snowbed willow
Salix humboldtiana Willd. – Humboldt's willow, native to central and South America
Salix integra Thunb.
Salix laevigata Bebb – red willow or polished willow
Salix lasiolepis Benth. – arroyo willow
Salix microphylla Schltdl. & Cham.
Salix mucronata Andersson – Safsaf willow, a species from southern Africa
Salix nigra Marshall – black willow
Salix paradoxa Kunth
Salix pierotii Miq. – Korean willow
Salix purpurea L. – purple willow or purple osier
Salix scouleriana Barratt ex Hook. – Scouler's willow
Salix sepulcralis group – hybrid willows
Salix tetrasperma Roxb. – Indian willow
Salix triandra L. – almond willow or almond-leaved willow
Salix viminalis L. – common osier
Ecology
Willows are shade-intolerant and typically short-lived. They require disturbances to outcompete conifers or large deciduous species. The seeds are tiny, plentiful, carried by wind and water, and viable only for a few days; they require warm and moist conditions to take root. The plants can also reproduce vegetatively from decapitated stumps and branches. Willows produce a modest amount of nectar from which bees can make honey, and are especially valued as a source of early pollen for bees. Various animals browse the foliage or shelter amongst the plants. Beavers use willows to build dams. The trees are used as food by the larvae of some species of Lepidoptera, such as the mourning cloak butterfly.
Ants, such as wood ants, are common on willows inhabited by aphids, coming to collect aphid honeydew, as sometimes do wasps.
Pests and diseases
Willow species are hosts to more than a hundred aphid species, belonging to Chaitophorus and other genera, forming large colonies to feed on plant juices, on the underside of leaves in particular. Corythucha elegans, the willow lace bug, is a bug species in the family Tingidae found on willows in North America. Rhabdophaga rosaria is a type of gall found on willows. Rust, caused by fungi of genus Melampsora, is known to damage leaves of willows, covering them with orange spots.
Conservation
Some Native Americans allowed wildfires to burn and set fires intentionally, allowing new stands to form. A small number of willow species were widely planted in Australia, notably as erosion-control measures along watercourses. They are now regarded as invasive weeds which occupy extensive areas across southern Australia and are considered 'Weeds of National Significance'. Many catchment management authorities are removing and replacing them with native trees.
Cultivation
Almost all willows take root very readily from cuttings or where broken branches lie on the ground (an exception is the peachleaf willow (Salix amygdaloides)). One famous example of such growth from cuttings involves the poet Alexander Pope, who begged a twig from a parcel tied with twigs sent from Spain to Lady Suffolk. This twig was planted and thrived, and legend has it that all of England's weeping willows are descended from this first one. Willows are extensively cultivated around the world. They are used in hedges and landscaping. The high-end shopping district of Ginza in Tokyo, Japan, has a long history of cultivating willows, and is well known for its willow-lined streets.
Hybrids and cultivars
Willows are very cross-compatible, and numerous hybrids occur, both naturally and in cultivation. A well-known ornamental example is the weeping willow (Salix × sepulcralis), which is a hybrid of Peking willow (Salix babylonica) from China and white willow (Salix alba) from Europe. The widely planted Chinese willow Salix matsudana is now considered a synonym of S. babylonica. Numerous cultivars of Salix have been developed and named over the centuries. New selections of cultivars with superior technical and ornamental characteristics have been chosen deliberately and applied to various purposes. Many cultivars and unmodified species of Salix have gained the Royal Horticultural Society's Award of Garden Merit. Most recently, Salix has become an important source for bioenergy production and for various ecosystem services. Names of hybrids and cultivars were until recently (2021) compiled by a working party of the UN FAO, the International Cultivar Registration Authority (ICRA) for the genus Salix (willows), but it is no longer active.
Uses
The Quinault people made the bark into a twine which sometimes served as harpoon line. The wood was used by some Native American tribes to start fires by friction, the shoots to weave baskets, and both the branches and stems to build various items including fishing weirs.
Medicinal
The leaves and bark of the willow have been mentioned in ancient texts from Assyria, Sumer and Egypt, and in Ancient Greece the physician Hippocrates wrote about its medicinal properties in the fifth century BC.
Interpreting Mesopotamian cuneiform texts is a challenge, especially when looking for something as specific as a species of plant being used to treat a recognisable condition. Some 5,000 medical prescriptions have been identified from Babylonian writings of the 7th to 3rd centuries BC, involving 1,300 drugs from 340 different plants. Whether any of these relate to willow is uncertain. The seeds of the Haluppu-tree were recommended in the Sumerian narrative of Gilgameš, Enkidu and the Nether World as treatment for infertility, but the "Haluppu-tree" could have been oak, poplar or willow. The ancient Egyptian Ebers Papyrus mentions willow (of uncertain species) in three remedies. One, as part of an elaborate recipe for a poultice to "make the met supple," which involved 36 other ingredients including "fruit of the dompalm, beans and amaa grains." The meaning of met is uncertain, but it may be something to do with the nervous system. The second is as part of a treatment for the "Great Debility," when "rush from the green willow tree" is combined with ass's semen, fresh bread, herbs of the field, figs, grapes and wine. Finally, it is used as a stiffening agent in a concoction of "fat flesh, figs, dates, incense, garlic and sweet beer" to put the heart into proper working order and make it take up nourishment. The Roman author Aulus Cornelius Celsus only mentions willow once: the leaves, pounded and boiled in vinegar, were to be used as treatment for uterine prolapse, but it is unclear what he considered the therapeutic action to be; it is unlikely to have been pain relief, as he recommended cauterization in the following paragraph. Nicholas Culpeper, in The Complete Herbal, gives many uses for willow, including to staunch wounds, to "stay the heat of lust" in man or woman, and to provoke urine ("if stopped"), but he makes no mention of any supposed analgesic properties. His recommendation to use the burnt ashes of willow bark, mixed with vinegar, to "take away warts, corns, and superfluous flesh," seems to correspond with modern uses of salicylic acid. William Turner's account, written about 1597, focuses on the ability of the leaves and bark to "stay the spitting of blood, and all other fluxes of blood", if boiled in wine and drunk, but adds a treatment for fever, saying: "the green boughs with the leaves may very well be brought into chambers and set about the beds of those that be sick of fevers, for they do mightily cool the heat of the air, which thing is a wonderful refreshing to the sick patients." In 1763, Reverend Edward Stone, of Chipping Norton, Oxfordshire, England, sent a letter to the Royal Society describing his experiments with powdered bark of white willow (Salix alba). He had noticed the willow bark tasted bitter, like 'Peruvian Bark' (cinchona), which was used to treat fevers, and he speculated that the willow would have a similar effect. Over several years he tested it on as many as fifty patients and found it to be highly effective (especially when mixed with cinchona). Whether this was a real effect or not is unknown, but although Stone's remedy was experimented with by others at the time, it was never adopted by medical practitioners. During the American Civil War, Confederate forces also experimented with willow as a cure for malaria, without success. 
In his novel The Mysterious Island (1875), the French novelist Jules Verne outlined the state of scientific knowledge concerning medicinal uses of willow when one of his characters, Herbert (Harbert) Brown, was suffering from a fever induced by a bullet wound: "The bark of the willow has, indeed, been justly considered as a succedaneum for Peruvian bark, as has also that of the horse-chestnut tree, the leaf of the holly, the snake-root, etc.", he wrote. In the story, Herbert is treated with powdered willow bark to no effect, and is saved when a supply of quinine is discovered. It is clear in the novel that the causes of fevers were poorly understood, and there is no suggestion at all of any possible analgesic effect from the use of willow. The first lasting evidence that salicylate, from willow and other plant species, might have real medicinal uses came in 1876, when the Scottish physician Thomas MacLagan experimented with salicin as a treatment for acute rheumatism, with considerable success, as he reported in The Lancet. Meanwhile, German scientists tried salicylic acid in the form of sodium salicylate, a sodium salt, with less success and more severe side effects. The treatment of rheumatic fever with salicin gradually gained some acceptance in medical circles. The discovery of acetanilide, in the 1880s, gave rise to an 'acetylation' craze, where chemists experimented with adding an acetyl group to various aromatic organic compounds. Back in 1853, chemist Charles Frédéric Gerhardt treated the medicine sodium salicylate with acetyl chloride to produce acetylsalicylic acid for the first time. More than 40 years later in 1897, Felix Hoffmann created the same acid (in his case derived from the Spiraea plant), which was found in 1899 to have an analgesic effect. This acid was named "Aspirin" by Hoffmann's employer Bayer AG. The discovery of aspirin is therefore only indirectly connected to willow. In the late 1990s, Daniel Moerman reported many uses of willow by Native Americans. One modern field guide claims that Native Americans across the Americas relied on the willow as a staple of their medical treatments, using the bark to treat ailments such as sore throat and tuberculosis, and further alleging that "Several references mention chewing willow bark as an analgesic for headache and other pain, apparently presaging the development of aspirin in the late 1800s." Herbal uses of willow have continued into modern times. In the early 20th century, Maud Grieve described using the bark and the powdered root of white willow (Salix alba) for its tonic, antiperiodic and astringent qualities and recommended its use in treating dyspepsia, worms, chronic diarrhoea and dysentery. Like other herbalists, she makes no mention of it having any analgesic effect, despite widespread awareness of aspirin by this time, and she considered tannin to be the active constituent. It was long after the invention of aspirin that the idea emerged that willow bark is an effective painkiller. It may often be based on the belief that willow actually contains aspirin. Articles asserting that the ancients used willow for this purpose have been published in academic journals such as the British Journal of Haematology. There are now many papers, books and articles repeating the claim that the ancients used willow for pain relief, and numerous willow-based products can be purchased for this purpose. 
Modern research suggests that only the mildest analgesic effect could be derived from the use of willow extract, and even that may be due to flavonoids and polyphenols as much as salicylic acid.
Manufacturing
Some of humans' earliest manufactured items may have been made from willow. A fishing net made from willow dates back to 8300 BC. Basic crafts, such as baskets, fish traps, wattle fences and wattle and daub house walls, were woven from osiers or withies (rod-like willow shoots, often grown in pollards). One of the forms of Welsh coracle boat traditionally uses willow in the framework. Thin or split willow rods can be woven into wicker, which has a long history. The relatively pliable willow is less likely to split while being woven than many other woods, and can be bent around sharp corners in basketry. Willow wood is used in the manufacture of boxes, brooms, cricket bats, cradle boards, chairs and other furniture, dolls, willow flutes, poles, sweat lodges, toys, turnery, tool handles, wood veneer, wands and whistles. In addition, tannin, fibre, paper, rope and string can be produced from the wood. Willow is used in the manufacture of double basses for backs, sides and linings, and in making splines and blocks for bass repair.
Horticulture
An aqueous extract of willow bark is used as a fungicide in the European Union. The willow bark extract is approved as a 'basic substance' product in the European Union and United Kingdom for the control of scab, peach leaf curl and powdery mildew on grape, apple and peach crops.
Weeds
Willow roots spread widely and are very aggressive in seeking out moisture; for this reason, they can become problematic when planted in residential areas, where the roots are notorious for clogging French drains, drainage systems, weeping tiles, septic systems, storm drains, and sewer systems, particularly older, tile, concrete, or ceramic pipes. Newer, PVC sewer pipes are much less leaky at the joints, and are therefore less susceptible to problems from willow roots; the same is true of water supply piping.
Others
Warfare: Willow wood was used by the British to make parachute baskets throughout World War II. Being light and strong, they could be made in any shape and bounced on impact. British production of willow baskets was about 2000 tonnes per year by some 630 manufacturers employing 7000 basket makers. Lawrence Ogilvie (a plant pathologist who had studied and written his 1920s Cambridge University master's degree thesis about willow diseases) worked at Long Ashton Research Station, near Bristol, and was much involved with these willows and their diseases.
Dyeing: Willow is used to dye textiles, including fabric used to produce kimono. The kimono retailer Ginza Motoji hosts annual willow dyeing lessons with fifth grade students of Taimei Elementary School.
Art: Willow is used to make charcoal (for drawing) as well as living sculptures, woven from live willow rods into shapes such as domes and tunnels. Willow stems are used to weave baskets and three-dimensional sculptures of animals and other figures. Willow stems are also used to create garden features, such as decorative panels and obelisks.
Energy: There have been experiments or mathematical models in using willows for biomass or biofuel, in energy forestry systems, due to their fast growth. Programs in other countries are being developed through initiatives such as the Willow Biomass Project in the US, and the Energy Coppice Project in the UK. Willow may also be grown to produce charcoal.
Environment: There has been research into possibly using willows for future biofiltration of wastewater (i.e. phytoremediation and land reclamation), although this is not commercially viable. They are used for streambank stabilisation (bioengineering), slope stabilisation, soil erosion control, shelterbelt and windbreak, and wildlife habitat. Willows are often planted on the borders of streams so their interlacing roots may protect the bank against the action of the water. The roots are often larger than the stem which grows from them.
Food: Poor people at one time often ate willow catkins that had been cooked to form a mash. The inner bark can be eaten raw or cooked, as can the young leaves and underground shoots.
Culture
The willow is one of the four species associated with the Jewish festival of Sukkot, or the Feast of Tabernacles, cited in Leviticus 23:40. Willow branches are used during the synagogue service on Hoshana Rabbah, the seventh day of Sukkot. In Buddhism, a willow branch is one of the chief attributes of Guanyin, the bodhisattva of compassion. In traditional pictures of Guanyin, she is often shown seated on a rock with a willow branch in a vase of water at her side. Orthodox churches often use willow branches in place of palms in the ceremonies on Palm Sunday. In China, some people carry willow branches with them on the day of their Tomb Sweeping or Qingming Festival. Willow branches are also put up on gates and/or front doors, which they believe help ward off the evil spirits that wander on Qingming. Legend states that on Qingming Festival, the ruler of the underworld allows the spirits of the dead to return to earth. Since their presence may not always be welcome, willow branches keep them away. Taoist witches use a small carving made from willow wood for communicating with the spirits of the dead. The image is sent to the nether world, where the disembodied spirit is deemed to enter it, and give the desired information to surviving relatives on its return. The willow is a famous subject in many East Asian nations' cultures, particularly in pen and ink paintings from China and Japan. A gisaeng (Korean courtesan) named Hongrang, who lived in the middle of the Joseon Dynasty, wrote the poem "By the willow in the rain in the evening", which she gave to her parting lover (Choi Gyeong-chang). Hongrang wrote: ... I will be the willow on your bedside. In Japanese tradition, the willow is associated with ghosts. It is popularly supposed that a ghost will appear where a willow grows. Willow trees are also quite prevalent in folklore and myths. In English folklore, a willow tree is believed to be quite sinister, capable of uprooting itself and stalking travellers. The Viminal Hill, one of the Seven Hills of Rome, derives its name from the Latin word for osier, viminia (pl.). Hans Christian Andersen wrote a story called "Under the Willow Tree" (1853) in which children ask questions of a tree they call "willow-father", paired with another entity called "elder-mother". "Green Willow" is a Japanese ghost story in which a young samurai falls in love with a woman called Green Willow who has a close spiritual connection with a willow tree. "The Willow Wife" is another, not dissimilar tale. "Wisdom of the Willow Tree" is an Osage Nation story in which a young man seeks answers from a willow tree, addressing the tree in conversation as 'Grandfather'.
Biology and health sciences
Malpighiales
null
92171
https://en.wikipedia.org/wiki/Webcam
Webcam
A webcam is a video camera which is designed to record or stream to a computer or computer network. They are primarily used in video telephony, live streaming and social media, and security. Webcams can be built-in computer hardware or peripheral devices, and are commonly connected to a device using USB or a wireless protocol. Webcams have been used on the Internet since as early as 1993, and the first widespread commercial one became available in 1994. Early webcam usage on the Internet was primarily limited to stationary shots streamed to web sites. In the late 1990s and early 2000s, instant messaging clients added support for webcams, increasing their popularity in video conferencing. Computer manufacturers later started integrating webcams into laptop hardware. In 2020, the COVID-19 pandemic caused a shortage of webcams due to the increased number of people working from home.
History
Early development (early 1990s)
First developed in 1991, a webcam was pointed at the Trojan Room coffee pot in the Cambridge University Computer Science Department (initially operating over a local network instead of the web). The camera was finally switched off on August 22, 2001. The final image captured by the camera can still be viewed at its homepage. The oldest continuously operating webcam, San Francisco State University's FogCam, has run since 1994 and is still operating; it updates every 20 seconds.
The SGI Indy, released in 1993, is the first commercial computer to have a standard video camera, and the first SGI computer to have standard video inputs. The maximum supported input resolution is 640×480 for NTSC or 768×576 for PAL. A fast machine is required to capture at either of these resolutions, though; an Indy with the slower R4600PC CPU, for example, may require the input resolution to be reduced before storage or processing. However, the Vino hardware is capable of DMAing video fields directly into the frame buffer with minimal CPU overhead.
The first widespread commercial webcam, the black-and-white QuickCam, entered the marketplace in 1994, created by the U.S. computer company Connectix. QuickCam was available in August 1994 for the Apple Macintosh, connecting via a serial port, at a cost of $100. Jon Garber, the designer of the device, had wanted to call it the "Mac-camera", but was overruled by Connectix's marketing department; a version with a PC-compatible parallel port and software for Microsoft Windows was launched in October 1995. The original QuickCam provided 320×240-pixel resolution with a grayscale depth of 16 shades at 60 frames per second, or 256 shades at 15 frames per second.
Videoconferencing via computers already existed, and at the time client-server based videoconferencing software such as CU-SeeMe had started to become popular. The first widely known laptop with an integrated webcam option, at a price point starting at US$12,000, was the IBM RS/6000 860 laptop and its related ThinkPad 850, released in 1996.
Entering the mainstream (late 1990s)
One of the most widely reported-on webcam sites was JenniCam, created in 1996, which allowed Internet users to observe the life of its namesake constantly, in the same vein as the reality TV series Big Brother, launched four years later.
Other cameras are mounted overlooking bridges, public squares, and other public places, their output made available on a public web page in accordance with the original concept of a "webcam". Aggregator websites have also been created, providing thousands of live video streams or up-to-date still pictures, allowing users to find live video streams based on location or other criteria. In the late 1990s, Microsoft NetMeeting was the only videoconferencing software on PC in widespread use that made use of webcams. In the following years, instant messaging clients started adding webcam support: Yahoo Messenger introduced this with version 5.5 in 2002, allowing video calling at 20 frames per second using a webcam. MSN Messenger gained this in version 5.0 in 2003.
2000s–2019
Around the turn of the 21st century, computer hardware manufacturers began building webcams directly into laptop and desktop screens, thus eliminating the need to use an external USB or FireWire camera. Gradually webcams came to be used more for telecommunications, or videotelephony, between two people, or among several people, than for offering a view on a Web page to an unknown public. For less than US$100 in 2012, a three-dimensional webcam became available, producing videos and photos in 3D anaglyph with a resolution up to 1280×480 pixels. Viewers must use 3D glasses to see the three-dimensional effect.
2020–present
With remote work entering the mainstream, the built-in cameras of average laptops were sometimes considered inadequate. Consequently, during the COVID-19 pandemic, a shortage of external webcams in retail occurred. Most laptops before and during the pandemic were made with cameras capping out at 720p recording quality at best, compared to the industry standard of 1080p or 4K seen in smartphones and televisions from the same period. The lack of new development in built-in webcams stems from laptops being too thin to fit the roughly 7 mm camera modules used elsewhere, resorting instead to modules around 2.5 mm; better camera components are also more expensive, and demand for the feature has not been high. Smartphones started to be used as a backup option or webcam replacement, with kits including lighting and tripods or downloadable apps.
Technology
Image sensor
Image sensors can be CMOS or CCD, the former being dominant for low-cost cameras, but CCD cameras do not necessarily outperform CMOS-based cameras in the low-price range. Most consumer webcams are capable of providing VGA-resolution video at a frame rate of 30 frames per second. Many newer devices can produce video in multi-megapixel resolutions, and a few can run at high frame rates, such as the PlayStation Eye, which can produce 320×240 video at 120 frames per second. Most image sensors are sourced from Omnivision or Sony. As webcams evolved simultaneously with display technologies, USB interface speeds and broadband internet speeds, the resolution went up gradually from 320×240 to 640×480, and some now even offer 1280×720 (aka 720p) or 1920×1080 (aka 1080p) resolution. Despite the low cost, the resolution offered as of 2019 is impressive, with low-end webcams now offering resolutions of 720p, mid-range webcams offering 1080p resolution, and high-end webcams offering 4K resolution at 60 fps.
Optics
Various lenses are available, the most common in consumer-grade webcams being a plastic lens that can be manually moved in and out to focus the camera.
Fixed-focus lenses, which have no provision for adjustment, are also available. As a camera system's depth of field is greater for small image formats and is greater for lenses with a large f-number (small aperture), the systems used in webcams have a sufficiently large depth of field that the use of a fixed-focus lens does not impact image sharpness to a great extent. Most models use simple, focal-free optics (fixed focus, factory-set for the usual distance between the user and the monitor to which the camera is fastened) or manual focus. Webcams come with different presets and fields of view. Individual users can make use of less than 90° horizontal FOV for home offices and live streaming. Webcams with as much as 360° horizontal FOV can be used for small- to medium-sized rooms (sometimes even large rooms). Depending on the user's purposes, webcams on the market can display the whole room or just the general vicinity. Internal software As the Bayer filter is proprietary, any webcam contains some built-in image processing, separate from compression. Digital video streams are represented by huge amounts of data, burdening their transmission (from the image sensor, where the data is continuously created) and storage alike. Most if not all cheap webcams come with a built-in ASIC to do video compression in real time. Support electronics read the image from the sensor and transmit it to the host computer. Most webcams come with a controller from Sonix, Suyin, Ricoh, Realtek or others that translates the video over USB. Typically, each frame is transmitted uncompressed in RGB or YUV, or compressed as JPEG. Some cameras, such as mobile-phone cameras, use a CMOS sensor with supporting electronics "on die", i.e. the sensor and the support electronics are built on a single silicon chip to save space and manufacturing costs. Most webcams feature built-in microphones to make video calling and videoconferencing more convenient. Interface and external software Typical interfaces used by devices marketed as a "webcam" are USB, Ethernet and IEEE 802.11 (in which case the device is denominated an IP camera). Further interfaces such as composite video, S-Video or FireWire were also available. The USB video device class (UVC) specification allows inter-connectivity of webcams to computers without the need for proprietary device drivers. Various proprietary as well as free and open-source software is available to handle the UVC stream: for example, Guvcview, or GStreamer and GStreamer-based software. Other software, such as MotionEye, can use multiple USB cameras attached to the host computer and broadcast multiple streams at once over wired or wireless Ethernet. MotionEye can be installed onto a Raspberry Pi as MotionEyeOS, or afterwards on Raspbian; it can also be set up on Debian, of which Raspbian is a variant. MotionEye v4.1.1 (August 2021) can only run on Debian 10 "Buster" (oldstable) and Python 2.7; newer Python versions such as 3.x are not supported at this point in time, according to Ccrisan, the founder and author of MotionEye. Various software tools in wide use can be employed to take video and pictures, such as PicMaster and Microsoft's Camera app (for use with Windows operating systems), Photo Booth (Mac), or Cheese (with Unix systems). For a more complete list see Comparison of webcam software. Uses The most popular use of webcams is the establishment of video links, permitting computers to act as videophones or videoconference stations.
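To make the UVC plug-and-play model above concrete, the following is a minimal sketch (my own illustration, not from this article) that reads frames from a UVC webcam using the cross-platform OpenCV library; the device index 0 and the requested resolution are assumptions that depend on the host system:

```python
# Minimal UVC webcam capture sketch using OpenCV (pip install opencv-python).
import cv2

cap = cv2.VideoCapture(0)                  # first video device enumerated by the OS
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)    # request 720p; the driver may ignore it
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while True:
    ok, frame = cap.read()                 # one BGR frame, already demosaiced/decompressed
    if not ok:
        break
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Because the camera is UVC-compliant, the same few calls work on Windows, macOS, and Linux without any vendor driver.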
For example, Apple's iSight camera, which is built into Apple laptops, iMacs and a majority of iPhones, can be used for video chat sessions, using the Messages instant messaging program. Other popular uses include security surveillance, computer vision, video broadcasting, and the recording of social videos. Videotelephony With webcams added to instant messaging, text chat services such as AOL Instant Messenger, and VoIP services such as Skype, one-to-one live video communication over the Internet has reached millions of mainstream PC users worldwide. Improved video quality has helped webcams encroach on traditional video conferencing systems. New features such as automatic lighting controls, real-time enhancements (retouching, wrinkle smoothing and vertical stretch), automatic face tracking and autofocus assist users by providing substantial ease of use, further increasing the popularity of webcams. Webcams can also facilitate remote work, enabling people to work from home via the Internet. This usage was crucial to the survival of many businesses during the COVID-19 pandemic, when in-person office work was discouraged. Businesses, schools, and individuals have relied on video conferencing instead of spending on business travel for meetings. Moreover, the number of video conferencing cameras and software offerings has multiplied since then due to their popularity. Webcam features and performance can vary by program, computer operating system, and also by the computer's processor capabilities. Video calling support has also been added to several popular instant messaging programs. Webcams allow for inexpensive, real-time video chat and webcasting, in both amateur and professional pursuits. They are frequently used in online dating and for online personal services offered mainly by women, a practice known as camgirling. However, the ease of webcam use through the Internet for video chat has also caused issues. For example, the moderation systems of various video chat websites such as Omegle have been criticized as ineffective, with sexual content still rampant. In a 2013 case, the transmission of nude photos and videos via Omegle from a teenage girl to a schoolteacher resulted in a child pornography charge. The popularity of webcams among teenagers with Internet access has raised concern about the use of webcams for cyber-bullying. Webcam recordings of teenagers, including underage teenagers, are frequently posted on popular Web forums and imageboards such as 4chan. Monitoring Webcams can be used as security cameras. Software is available to allow PC-connected cameras to watch for movement and sound, recording both when they are detected. These recordings can then be saved to the computer, e-mailed, or uploaded to the Internet. In one well-publicised case, a computer e-mailed images of the burglar during the theft of the computer, enabling the owner to give police a clear picture of the burglar's face even after the computer had been stolen. In December 2011, Russia announced that 290,000 webcams would be installed in 90,000 polling stations to monitor the 2012 Russian presidential election. Webcams may be installed at places such as childcare centres, offices, shops and private areas to monitor security and general activity. Astrophotography A few specific models of webcams with very low-light capability are popular among astronomers and astrophotographers for photographing the night sky. Mostly, these are manual-focus cameras containing an older CCD array rather than a comparatively newer CMOS array.
The lenses of the cameras are removed and the cameras are then attached to telescopes to record images, video, still frames, or both. In newer techniques, video of a very faint object is taken for a couple of seconds and then all the frames of the video are "stacked" together to obtain a still image of respectable contrast. Laser beam profiling A webcam's CCD response is linearly proportional to the incoming light. Therefore, webcams are suitable for recording laser beam profiles, once the lens is removed. The resolution of a laser beam profiler depends on the pixel size. Commercial webcams are usually designed to record color images. The size of a webcam's color pixel depends on the model and may lie in the range of 5 to 10 μm. However, a color pixel consists of four monochrome pixels, each equipped with a color filter (for details see Bayer filter). Although these color filters work well in the visible, they may be rather transparent in the near infrared. By switching a webcam into Bayer mode it is possible to access the information of the single pixels, and a resolution below 3 μm has been achieved. Privacy concerns Many users do not wish the continuous exposure for which webcams were originally intended, but rather prefer privacy. Such privacy is lost when malware allows malicious hackers to activate the webcam without the user's knowledge, providing the hackers with a live video and audio feed. This is a particular concern on many laptop computers, as such cameras normally cannot be physically disabled if hijacked by such a Trojan horse program or other similar spyware programs. Cameras such as Apple's older external iSight cameras include lens covers to thwart this. Some webcams have built-in hardwired LED indicators that light up whenever the camera is active, sometimes only in video mode. However, it is possible, depending on the circuit design of a webcam, for malware to circumvent the indicator and activate the camera surreptitiously, as researchers demonstrated in the case of a MacBook's built-in camera in 2013. Various companies sell sliding lens covers and stickers that allow users to retrofit a computer or smartphone to close access to the camera lens as needed. One such company reported having sold more than 250,000 such items from 2013 to 2016. However, any opaque material will work just as well. The process of attempting to hack into a person's webcam and activate it without the webcam owner's permission has been called camfecting, a portmanteau of cam and infecting. The remotely activated webcam can be used to watch anything within the webcam's field of vision. Camfecting is most often carried out by infecting the victim's computer with a computer virus.
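Returning to the laser-beam-profiling use described earlier in this section: the analysis amounts to simple image arithmetic on a raw frame. Below is a sketch (my own illustration; the function and variable names are invented, and the 5 μm pixel pitch is an assumed value) of extracting the centroid and second-moment D4σ beam widths from a monochrome, background-subtracted NumPy array:

```python
# D4-sigma beam-width estimate from a monochrome, background-subtracted frame.
import numpy as np

def beam_profile(img: np.ndarray, pixel_um: float = 5.0):
    """Centroid and D4-sigma widths (ISO 11146 second-moment definition)."""
    img = np.clip(img.astype(float), 0, None)    # intensity must be non-negative
    total = img.sum()
    y, x = np.indices(img.shape)
    cx = (x * img).sum() / total                 # intensity-weighted centroid
    cy = (y * img).sum() / total
    var_x = ((x - cx) ** 2 * img).sum() / total  # second central moments
    var_y = ((y - cy) ** 2 * img).sum() / total
    d4x = 4 * np.sqrt(var_x) * pixel_um          # D4-sigma width = 4 x std. deviation
    d4y = 4 * np.sqrt(var_y) * pixel_um
    return (cx, cy), (d4x, d4y)

# Self-check with a synthetic Gaussian spot (sigma = 20 px):
yy, xx = np.mgrid[0:480, 0:640]
spot = np.exp(-((xx - 320) ** 2 + (yy - 240) ** 2) / (2 * 20.0 ** 2))
print(beam_profile(spot))   # centroid ~ (320, 240); D4-sigma ~ 4*20*5 = 400 um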
Technology
User interface
null
92236
https://en.wikipedia.org/wiki/Electron%20transport%20chain
Electron transport chain
An electron transport chain (ETC) is a series of protein complexes and other molecules which transfer electrons from electron donors to electron acceptors via redox reactions (both reduction and oxidation occurring simultaneously) and couples this electron transfer with the transfer of protons (H+ ions) across a membrane. Many of the enzymes in the electron transport chain are embedded within the membrane. The flow of electrons through the electron transport chain is an exergonic process. The energy from the redox reactions creates an electrochemical proton gradient that drives the synthesis of adenosine triphosphate (ATP). In aerobic respiration, the flow of electrons terminates with molecular oxygen as the final electron acceptor. In anaerobic respiration, other electron acceptors are used, such as sulfate. In an electron transport chain, the redox reactions are driven by the difference in the Gibbs free energy of reactants and products. The free energy released when a higher-energy electron donor and acceptor convert to lower-energy products, while electrons are transferred from a lower to a higher redox potential, is used by the complexes in the electron transport chain to create an electrochemical gradient of ions. It is this electrochemical gradient that drives the synthesis of ATP via oxidative phosphorylation by ATP synthase. In eukaryotic organisms, the electron transport chain, and site of oxidative phosphorylation, is found on the inner mitochondrial membrane. The energy released by reactions of oxygen and reduced compounds such as cytochrome c and (indirectly) NADH and FADH2 is used by the electron transport chain to pump protons into the intermembrane space, generating the electrochemical gradient over the inner mitochondrial membrane. In photosynthetic eukaryotes, the electron transport chain is found on the thylakoid membrane. Here, light energy drives electron transport through a proton pump and the resulting proton gradient causes subsequent synthesis of ATP. In bacteria, the electron transport chain can vary between species but it always constitutes a set of redox reactions that are coupled to the synthesis of ATP through the generation of an electrochemical gradient and oxidative phosphorylation through ATP synthase. Mitochondrial electron transport chains Most eukaryotic cells have mitochondria, which produce ATP from reactions of oxygen with products of the citric acid cycle, fatty acid metabolism, and amino acid metabolism. At the inner mitochondrial membrane, electrons from NADH and FADH2 pass through the electron transport chain to oxygen, which provides the energy driving the process as it is reduced to water. The electron transport chain comprises an enzymatic series of electron donors and acceptors. Each electron donor will pass electrons to an acceptor of higher redox potential, which in turn donates these electrons to another acceptor, a process that continues down the series until electrons are passed to oxygen, the terminal electron acceptor in the chain. Each reaction releases energy because a higher-energy donor and acceptor convert to lower-energy products. Via the transferred electrons, this energy is used to generate a proton gradient across the mitochondrial membrane by "pumping" protons into the intermembrane space, producing a state of higher free energy that has the potential to do work.
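The "higher free energy" stored in this gradient is conventionally quantified as the proton-motive force. As a hedged aside (these are standard bioenergetics relations and round numbers, not values taken from this article):

```latex
% Proton-motive force across the inner mitochondrial membrane:
%   \Delta\Psi          = electrical potential difference (matrix negative)
%   \Delta\mathrm{pH}   = pH difference (matrix alkaline)
\Delta p \;=\; \Delta\Psi \;-\; \frac{2.303\,RT}{F}\,\Delta\mathrm{pH}
% At 25 C, 2.303 RT/F is about 59 mV, so each pH unit of gradient
% contributes roughly 59 mV on top of the membrane potential.
```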
This entire process is called oxidative phosphorylation, since ADP is phosphorylated to ATP by using the electrochemical gradient that the redox reactions of the electron transport chain have established, driven by the energy-releasing reactions of oxygen. Mitochondrial redox carriers Energy associated with the transfer of electrons down the electron transport chain is used to pump protons from the mitochondrial matrix into the intermembrane space, creating an electrochemical proton gradient (ΔpH) across the inner mitochondrial membrane. This proton gradient is largely but not exclusively responsible for the mitochondrial membrane potential (ΔΨ). It allows ATP synthase to use the flow of H+ through the enzyme back into the matrix to generate ATP from adenosine diphosphate (ADP) and inorganic phosphate. Complex I (NADH coenzyme Q reductase; labeled I) accepts electrons from the Krebs cycle electron carrier nicotinamide adenine dinucleotide (NADH), and passes them to coenzyme Q (ubiquinone; labeled Q), which also receives electrons from Complex II (succinate dehydrogenase; labeled II). Q passes electrons to Complex III (cytochrome bc1 complex; labeled III), which passes them to cytochrome c (cyt c). Cyt c passes electrons to Complex IV (cytochrome c oxidase; labeled IV). Four membrane-bound complexes have been identified in mitochondria. Each is an extremely complex transmembrane structure that is embedded in the inner membrane. Three of them are proton pumps. The structures are electrically connected by lipid-soluble electron carriers and water-soluble electron carriers. The overall electron transport chain can be summarized as follows:

NADH, H+ → Complex I → Q → Complex III → cytochrome c → Complex IV → H2O
                       ↑
                  Complex II
                       ↑
                   Succinate

Complex I In Complex I (NADH ubiquinone oxidoreductase, Type I NADH dehydrogenase, or mitochondrial complex I), two electrons are removed from NADH and transferred to a lipid-soluble carrier, ubiquinone (Q). The reduced product, ubiquinol (QH2), freely diffuses within the membrane, and Complex I translocates four protons (H+) across the membrane, thus producing a proton gradient. Complex I is one of the main sites at which premature electron leakage to oxygen occurs, thus being one of the main sites of production of superoxide. The pathway of electrons is as follows: NADH is oxidized to NAD+, by reducing flavin mononucleotide to FMNH2 in one two-electron step. FMNH2 is then oxidized in two one-electron steps, through a semiquinone intermediate. Each electron thus transfers from the FMNH2 to an Fe–S cluster, and from the Fe–S cluster to ubiquinone (Q). Transfer of the first electron results in the free-radical (semiquinone) form of Q, and transfer of the second electron reduces the semiquinone form to the ubiquinol form, QH2. During this process, four protons are translocated from the mitochondrial matrix to the intermembrane space. As the electrons move through the complex, an electron current is produced along the 180-ångström width of the complex within the membrane. This current powers the active transport of four protons to the intermembrane space per two electrons from NADH. Complex II In Complex II (succinate dehydrogenase or succinate-CoQ reductase), additional electrons are delivered into the quinone pool (Q), originating from succinate and transferred (via flavin adenine dinucleotide, FAD) to Q.
Complex II consists of four protein subunits: succinate dehydrogenase (SDHA); succinate dehydrogenase [ubiquinone] iron–sulfur subunit, mitochondrial (SDHB); succinate dehydrogenase complex subunit C (SDHC); and succinate dehydrogenase complex subunit D (SDHD). Other electron donors (e.g., fatty acids and glycerol 3-phosphate) also direct electrons into Q (via FAD). Complex II is a parallel electron transport pathway to Complex I, but unlike Complex I, no protons are transported to the intermembrane space in this pathway. Therefore, the pathway through Complex II contributes less energy to the overall electron transport chain process. Complex III In Complex III (cytochrome bc1 complex or CoQH2-cytochrome c reductase), the Q-cycle contributes to the proton gradient by an asymmetric absorption/release of protons. Two electrons are removed from QH2 at the Qo site and sequentially transferred to two molecules of cytochrome c, a water-soluble electron carrier located within the intermembrane space. The two other electrons sequentially pass across the protein to the Qi site, where the quinone part of ubiquinone is reduced to quinol. A proton gradient is formed by the oxidation of quinol at the Qo site, which releases protons into the intermembrane space, and the reduction of quinone at the Qi site, which takes up protons from the matrix. (In total, four protons are released into the intermembrane space per two electrons passed to cytochrome c: two protons are taken up from the matrix to reduce quinone to quinol at the Qi site, while four protons are released at the Qo site from two ubiquinol molecules.)

QH2 + 2 cyt c(Fe^III) + 2 H+ (matrix) → Q + 2 cyt c(Fe^II) + 4 H+ (intermembrane space)

When electron transfer is reduced (by a high membrane potential or respiratory inhibitors such as antimycin A), Complex III may leak electrons to molecular oxygen, resulting in superoxide formation. This complex is inhibited by dimercaprol (British Anti-Lewisite, BAL), naphthoquinone and antimycin. Complex IV In Complex IV (cytochrome c oxidase), sometimes called cytochrome aa3, four electrons are removed from four molecules of cytochrome c and transferred to molecular oxygen (O2) and four protons, producing two molecules of water. The complex contains coordinated copper ions and several heme groups. At the same time, eight protons are removed from the mitochondrial matrix (although only four are translocated across the membrane), contributing to the proton gradient. The exact details of proton pumping in Complex IV are still under study. Cyanide is an inhibitor of Complex IV. Coupling with oxidative phosphorylation According to the chemiosmotic coupling hypothesis, proposed by Nobel Prize in Chemistry winner Peter D. Mitchell, the electron transport chain and oxidative phosphorylation are coupled by a proton gradient across the inner mitochondrial membrane. The efflux of protons from the mitochondrial matrix creates an electrochemical gradient (proton gradient). This gradient is used by the FoF1 ATP synthase complex to make ATP via oxidative phosphorylation. ATP synthase is sometimes described as Complex V of the electron transport chain. The Fo component of ATP synthase acts as an ion channel that provides for a proton flux back into the mitochondrial matrix. It is composed of a, b and c subunits. Protons in the intermembrane space of mitochondria first enter the ATP synthase complex through an a subunit channel. Then protons move to the c subunits. The number of c subunits determines how many protons are required to make Fo turn one full revolution. For example, in humans, there are 8 c subunits, thus 8 protons are required. Since the F1 head synthesizes roughly three ATP per full revolution, this corresponds to about 8/3 ≈ 2.7 protons per ATP. After the c subunits, protons finally enter the matrix through an a subunit channel that opens into the mitochondrial matrix.
This reflux releases free energy produced during the generation of the oxidized forms of the electron carriers (NAD+ and Q), with energy provided by O2. The free energy is used to drive ATP synthesis, catalyzed by the F1 component of the complex. Coupling with oxidative phosphorylation is a key step for ATP production. However, in specific cases, uncoupling the two processes may be biologically useful. The uncoupling protein thermogenin, present in the inner mitochondrial membrane of brown adipose tissue, provides for an alternative flow of protons back to the inner mitochondrial matrix. Thyroxine is also a natural uncoupler. This alternative flow results in thermogenesis rather than ATP production. Reverse electron flow Reverse electron flow is the transfer of electrons through the electron transport chain through the reverse redox reactions. Because it usually requires a significant amount of energy, reverse electron flow can reduce the oxidized forms of electron donors. For example, NAD+ can be reduced to NADH by Complex I. Several factors have been shown to induce reverse electron flow, though more work needs to be done to confirm this. One example is blockage of ATP synthase, resulting in a build-up of protons and therefore a higher proton-motive force, inducing reverse electron flow. Prokaryotic electron transport chains In eukaryotes, NADH is the most important electron donor. The associated electron transport chain is NADH → Complex I → Q → Complex III → cytochrome c → Complex IV → O2, where Complexes I, III and IV are proton pumps, while Q and cytochrome c are mobile electron carriers. The electron acceptor for this process is molecular oxygen. In prokaryotes (bacteria and archaea) the situation is more complicated, because there are several different electron donors and several different electron acceptors. The generalized electron transport chain in bacteria is:

  Donor                 Donor                  Donor
    ↓                     ↓                      ↓
dehydrogenase  →  quinone  →  bc1  →  cytochrome
                      ↓                          ↓
            oxidase (reductase)         oxidase (reductase)
                      ↓                          ↓
                  Acceptor                   Acceptor

Electrons can enter the chain at three levels: at the level of a dehydrogenase, at the level of the quinone pool, or at the level of a mobile cytochrome electron carrier. These levels correspond to successively more positive redox potentials, or to successively decreased potential differences relative to the terminal electron acceptor. In other words, they correspond to successively smaller Gibbs free energy changes for the overall redox reaction. Individual bacteria use multiple electron transport chains, often simultaneously. Bacteria can use a number of different electron donors, a number of different dehydrogenases, a number of different oxidases and reductases, and a number of different electron acceptors. For example, E. coli (when growing aerobically using glucose and oxygen as an energy source) uses two different NADH dehydrogenases and two different quinol oxidases, for a total of four different electron transport chains operating simultaneously. A common feature of all electron transport chains is the presence of a proton pump to create an electrochemical gradient over a membrane. Bacterial electron transport chains may contain as many as three proton pumps, like mitochondria, or they may contain only one or two. Electron donors In the current biosphere, the most common electron donors are organic molecules. Organisms that use organic molecules as an electron source are called organotrophs.
Chemoorganotrophs (animals, fungi, protists) and photolithotrophs (plants and algae) constitute the vast majority of all familiar life forms. Some prokaryotes can use inorganic matter as an electron source. Such an organism is called a (chemo)lithotroph ("rock-eater"). Inorganic electron donors include hydrogen, carbon monoxide, ammonia, nitrite, sulfur, sulfide, manganese oxide, and ferrous iron. Lithotrophs have been found growing in rock formations thousands of meters below the surface of Earth. Because of their large volume of distribution, lithotrophs may actually outnumber organotrophs and phototrophs in our biosphere. The use of inorganic electron donors such as hydrogen as an energy source is of particular interest in the study of evolution. This type of metabolism must logically have preceded the use of organic molecules and oxygen as an energy source. Dehydrogenases: equivalents to complexes I and II Bacteria can use several different electron donors. When organic matter is the electron source, the donor may be NADH or succinate, in which case electrons enter the electron transport chain via NADH dehydrogenase (similar to Complex I in mitochondria) or succinate dehydrogenase (similar to Complex II). Other dehydrogenases may be used to process different energy sources: formate dehydrogenase, lactate dehydrogenase, glyceraldehyde-3-phosphate dehydrogenase, H2 dehydrogenase (hydrogenase), etc. Some dehydrogenases are also proton pumps, while others funnel electrons into the quinone pool. Most dehydrogenases show induced expression in the bacterial cell in response to metabolic needs triggered by the environment in which the cells grow. In the case of lactate dehydrogenase in E. coli, the enzyme is used aerobically and in combination with other dehydrogenases. It is inducible and is expressed when the concentration of DL-lactate in the cell is high. Quinone carriers Quinones are mobile, lipid-soluble carriers that shuttle electrons (and protons) between large, relatively immobile macromolecular complexes embedded in the membrane. Bacteria use ubiquinone (coenzyme Q, the same quinone that mitochondria use) and related quinones such as menaquinone (vitamin K2). Archaea in the genus Sulfolobus use caldariellaquinone. The use of different quinones is due to slight changes in redox potentials caused by changes in structure. The change in redox potentials of these quinones may be suited to changes in the electron acceptors or variations of redox potentials in bacterial complexes. Proton pumps A proton pump is any process that creates a proton gradient across a membrane. Protons can be physically moved across a membrane, as seen in mitochondrial Complexes I and IV. The same effect can be produced by moving electrons in the opposite direction. The result is the disappearance of a proton from the cytoplasm and the appearance of a proton in the periplasm. Mitochondrial Complex III is this second type of proton pump, which is mediated by a quinone (the Q cycle). Some dehydrogenases are proton pumps, while others are not. Most oxidases and reductases are proton pumps, but some are not. Cytochrome bc1 is a proton pump found in many, but not all, bacteria (not in E. coli). As the name implies, bacterial bc1 is similar to mitochondrial bc1 (Complex III). Cytochrome electron carriers Cytochromes are proteins that contain iron. They are found in two very different environments.
Some cytochromes are water-soluble carriers that shuttle electrons to and from large, immobile macromolecular structures embedded in the membrane. The mobile cytochrome electron carrier in mitochondria is cytochrome c. Bacteria use a number of different mobile cytochrome electron carriers. Other cytochromes are found within macromolecules such as Complex III and Complex IV. They also function as electron carriers, but in a very different, intramolecular, solid-state environment. Electrons may enter an electron transport chain at the level of a mobile cytochrome or quinone carrier. For example, electrons from inorganic electron donors (nitrite, ferrous iron, etc.) enter the electron transport chain at the cytochrome level. When electrons enter at a redox level greater than NADH, the electron transport chain must operate in reverse to produce this necessary, higher-energy molecule. Electron acceptors and terminal oxidase/reductase As there are a number of different electron donors (organic matter in organotrophs, inorganic matter in lithotrophs), there are a number of different electron acceptors, both organic and inorganic. As with other steps of the ETC, an enzyme is required to help with the process. If oxygen is available, it is most often used as the terminal electron acceptor in aerobic bacteria and facultative anaerobes. An oxidase reduces the O2 to water while oxidizing something else. In mitochondria, the terminal membrane complex (Complex IV) is cytochrome oxidase, which oxidizes the cytochrome. Aerobic bacteria use a number of different terminal oxidases. For example, E. coli (a facultative anaerobe) does not have a cytochrome oxidase or a bc1 complex. Under aerobic conditions, it uses two different terminal quinol oxidases (both proton pumps) to reduce oxygen to water. Bacterial terminal oxidases can be split into classes according to the molecules that act as terminal electron acceptors. Class I oxidases are cytochrome oxidases and use oxygen as the terminal electron acceptor. Class II oxidases are quinol oxidases and can use a variety of terminal electron acceptors. Both of these classes can be subdivided into categories based on the redox-active components they contain; for example, heme aa3 Class I terminal oxidases are much more efficient than Class II terminal oxidases. In anaerobic environments, different electron acceptors are used, including nitrate, nitrite, ferric iron, sulfate, carbon dioxide, and small organic molecules such as fumarate. When bacteria grow in anaerobic environments, the terminal electron acceptor is reduced by an enzyme called a reductase. E. coli can use fumarate reductase, nitrate reductase, nitrite reductase, DMSO reductase, or trimethylamine-N-oxide reductase, depending on the availability of these acceptors in the environment. Most terminal oxidases and reductases are inducible. They are synthesized by the organism as needed, in response to specific environmental conditions. Photosynthetic In oxidative phosphorylation, electrons are transferred from an electron donor such as NADH to an acceptor such as O2 through an electron transport chain, releasing energy. In photophosphorylation, the energy of sunlight is used to create a high-energy electron donor which can subsequently reduce oxidized components and couple to ATP synthesis via proton translocation by the electron transport chain. Photosynthetic electron transport chains, like the mitochondrial chain, can be considered as a special case of the bacterial systems.
They use mobile, lipid-soluble quinone carriers (phylloquinone and plastoquinone) and mobile, water-soluble carriers (cytochromes). They also contain a proton pump. The proton pump in all photosynthetic chains resembles mitochondrial Complex III. The commonly-held theory of symbiogenesis proposes that both organelles descended from bacteria.
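As a worked illustration of the free-energy bookkeeping that drives every chain discussed above (textbook standard potentials, not values from this article), consider the overall transfer of two electrons from NADH to oxygen:

```latex
% \Delta G^{\circ\prime} = -\,n F\,\Delta E^{\circ\prime}, with n = 2 and
% E^{\circ\prime}(\mathrm{NAD^{+}/NADH}) = -0.32\ \mathrm{V},\quad
% E^{\circ\prime}(\tfrac12\mathrm{O_2}/\mathrm{H_2O}) = +0.82\ \mathrm{V}:
\Delta E^{\circ\prime} = 0.82\ \mathrm{V} - (-0.32\ \mathrm{V}) = 1.14\ \mathrm{V}
\qquad
\Delta G^{\circ\prime} = -2 \times 96485\ \mathrm{C\,mol^{-1}} \times 1.14\ \mathrm{V}
\approx -220\ \mathrm{kJ\,mol^{-1}}
```

This roughly 220 kJ/mol of released free energy is what the proton pumps capture, stepwise, as the electrochemical gradient.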
Biology and health sciences
Metabolic processes
Biology
699235
https://en.wikipedia.org/wiki/Square%20foot
Square foot
The square foot (abbreviated sq ft, sf, or ft2; also denoted by '2 and ⏍) is an imperial unit and U.S. customary unit (non-SI, non-metric) of area, used mainly in the United States, Canada, the United Kingdom, Bangladesh, India, Nepal, Pakistan, Ghana, Liberia, Malaysia, Myanmar, Singapore and Hong Kong. It is defined as the area of a square with sides of 1 foot. Although the pluralization is regular in the noun form, when used as an adjective, the singular is preferred. So, an apartment measuring 700 square feet could be described as a 700 square-foot apartment. This corresponds to common linguistic usage of foot. The square foot unit is commonly used in real estate. Dimensions are generally taken with a laser device, the latest in a long line of tools used to gauge the size of apartments or other spaces. Real estate agents often measure straight corner-to-corner, then deduct non-heated spaces, and add heated spaces whose footprints exceed the end-to-end measurement. Conversions of 1 square foot to other units of area:
1 square foot (ft2) = 0.0000000358701 square miles (mi2)
1 square foot (ft2) = 0.0000229568 acres (ac)
1 square foot (ft2) = 0.111111111111 square yards (yd2)
1 square foot (ft2) = 144 square inches (in2)
1 square foot (ft2) = 144,000,000,000,000 square microinches (μin2)
1 square foot (ft2) = 0.00000009290304 square kilometers (km2)
1 square foot (ft2) = 0.000009290304 hectares (ha)
1 square foot (ft2) = 0.09290304 square meters (m2)
1 square foot (ft2) = 9.290304 square decimeters (dm2) (uncommon)
1 square foot (ft2) = 929.0304 square centimeters (cm2)
1 square foot (ft2) = 92,903.04 square millimeters (mm2)
1 square foot (ft2) = 92,903,040,000 square micrometers (μm2)
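All of the factors above follow from the exact definition 1 ft = 0.3048 m. A minimal sketch (the function names are my own, not a standard library):

```python
# Exact definition: 1 ft = 0.3048 m, hence 1 ft^2 = 0.3048**2 m^2 = 0.09290304 m^2.
SQFT_IN_M2 = 0.3048 ** 2

def sqft_to_m2(sqft: float) -> float:
    """Convert square feet to square metres (exact)."""
    return sqft * SQFT_IN_M2

def sqft_to_acres(sqft: float) -> float:
    """Convert square feet to acres (1 acre = 66 ft x 660 ft = 43,560 ft^2)."""
    return sqft / 43_560

print(sqft_to_m2(700))    # 65.032128  -> a "700 square-foot apartment" in m^2
print(sqft_to_acres(1))   # 2.2956841...e-05, matching the table above
```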
Physical sciences
Area
Basics and measurement
699689
https://en.wikipedia.org/wiki/Polaron
Polaron
A polaron is a quasiparticle used in condensed matter physics to understand the interactions between electrons and atoms in a solid material. The polaron concept was proposed by Lev Landau in 1933 and Solomon Pekar in 1946 to describe an electron moving in a dielectric crystal where the atoms displace from their equilibrium positions to effectively screen the charge of the electron; the accompanying cloud of lattice distortion is known as a phonon cloud. This lowers the electron mobility and increases the electron's effective mass. The general concept of a polaron has been extended to describe other interactions between the electrons and ions in metals that result in a bound state, or a lowering of energy compared to the non-interacting system. Major theoretical work has focused on solving the Fröhlich and Holstein Hamiltonians. Finding exact numerical solutions for the case of one or two electrons in a large crystal lattice, and studying the case of many interacting electrons, remain active fields of research. Experimentally, polarons are important to the understanding of a wide variety of materials. The electron mobility in semiconductors can be greatly decreased by the formation of polarons. Organic semiconductors are also sensitive to polaronic effects, which is particularly relevant in the design of organic solar cells that effectively transport charge. Polarons are also important for interpreting the optical conductivity of these types of materials. The polaron, a fermionic quasiparticle, should not be confused with the polariton, a bosonic quasiparticle analogous to a hybridized state between a photon and an optical phonon. Polaron theory The energy spectrum of an electron moving in a periodical potential of a rigid crystal lattice is called the Bloch spectrum, which consists of allowed bands and forbidden bands. An electron with energy inside an allowed band moves as a free electron but has an effective mass that differs from the electron mass in vacuum. However, a crystal lattice is deformable and displacements of atoms (ions) from their equilibrium positions are described in terms of phonons. Electrons interact with these displacements, and this interaction is known as electron-phonon coupling. One possible scenario was proposed in the seminal 1933 paper by Lev Landau, which includes the production of a lattice defect such as an F-center and a trapping of the electron by this defect. A different scenario was proposed by Solomon Pekar that envisions dressing the electron with lattice polarization (a cloud of virtual polar phonons). Such an electron with the accompanying deformation moves freely across the crystal, but with increased effective mass. Pekar coined for this charge carrier the term polaron. Landau and Pekar constructed the basis of polaron theory. A charge placed in a polarizable medium will be screened. Dielectric theory describes the phenomenon by the induction of a polarization around the charge carrier. The induced polarization will follow the charge carrier when it is moving through the medium. The carrier together with the induced polarization is considered as one entity, which is called a polaron (see Fig. 1). While polaron theory was originally developed for electrons, there is no fundamental reason why it could not be any other charged particle interacting with phonons. Indeed, other charged particles such as (electron) holes and ions generally follow the polaron theory. For example, the proton polaron was identified experimentally in 2017, on ceramic electrolytes, after its existence had been hypothesized.
Usually, in covalent semiconductors the coupling of electrons with lattice deformation is weak and polarons do not form. In polar semiconductors the electrostatic interaction with induced polarization is strong and polarons are formed at low temperature, provided that their concentration is not large and the screening is not efficient. Another class of materials in which polarons are observed is molecular crystals, where the interaction with molecular vibrations may be strong. In the case of polar semiconductors, the interaction with polar phonons is described by the Fröhlich Hamiltonian. On the other hand, the interaction of electrons with molecular phonons is described by the Holstein Hamiltonian. Usually, the models describing polarons may be divided into two classes. The first class represents continuum models where the discreteness of the crystal lattice is neglected. In that case, polarons are weakly coupled or strongly coupled depending on whether the polaron binding energy is small or large compared to the phonon frequency. The second class of systems commonly considered are lattice models of polarons. In this case, there may be small or large polarons, depending on the size of the polaron radius relative to the lattice constant a. A conduction electron in an ionic crystal or a polar semiconductor is the prototype of a polaron. Herbert Fröhlich proposed a model Hamiltonian for this polaron through which its dynamics are treated quantum mechanically (Fröhlich Hamiltonian). The strength of the electron-phonon interaction is determined by the dimensionless coupling constant α = (e²/ħ)√(m/2ħω)(1/ε∞ − 1/ε0) (in Gaussian units). Here m is the band mass of the electron, ω is the longitudinal optical phonon frequency, and ε0, ε∞ are the static and high-frequency dielectric constants. In table 1 the Fröhlich coupling constant is given for a few solids. The Fröhlich Hamiltonian for a single electron in a crystal, using second quantization notation, is

H = p²/2m + Σq ħωq aq†aq + Σq [γ(q) aq e^{iq·r} + γ*(q) aq† e^{−iq·r}].

The exact form of γ depends on the material and the type of phonon being used in the model. In the case of a single polar mode, γ(q) is proportional to 1/q, with a prefactor involving α and the normalization volume V. In the case of a molecular crystal, γ is usually a momentum-independent constant. A detailed advanced discussion of the variations of the Fröhlich Hamiltonian can be found in J. T. Devreese and A. S. Alexandrov. The terms Fröhlich polaron and large polaron are sometimes used synonymously, since the Fröhlich Hamiltonian includes the continuum approximation and long-range forces. There is no known exact solution for the Fröhlich Hamiltonian with longitudinal optical (LO) phonons and linear electron-phonon coupling (the most commonly considered variant of the Fröhlich polaron), despite extensive investigations. Despite the lack of an exact solution, some approximations of the polaron properties are known. The physical properties of a polaron differ from those of a band-carrier. A polaron is characterized by its self-energy ΔE, an effective mass m*, and by its characteristic response to external electric and magnetic fields (e. g. dc mobility and optical absorption coefficient). When the coupling is weak (α small), the self-energy of the polaron can be approximated as ΔE/ħω ≈ −α − 0.0159α², and the polaron mass m*, which can be measured by cyclotron resonance experiments, is larger than the band mass m of the charge carrier without self-induced polarization: m*/m ≈ 1 + α/6 + 0.0236α². When the coupling is strong (α large), a variational approach due to Landau and Pekar indicates that the self-energy is proportional to α² and the polaron mass scales as α⁴.
The Landau–Pekar variational calculation yields an upper bound to the polaron self-energy of the form E ≤ −Cα²ħω, valid for all α, where C is a constant determined by solving an integro-differential equation. It was an open question for many years whether this expression was asymptotically exact as α tends to infinity. Finally, Donsker and Varadhan, applying large deviation theory to Feynman's path integral formulation for the self-energy, showed the large-α exactitude of this Landau–Pekar formula. Later, Lieb and Thomas gave a shorter proof using more conventional methods, and with explicit bounds on the lower-order corrections to the Landau–Pekar formula. Feynman introduced the variational principle for path integrals to study the polaron. He simulated the interaction between the electron and the polarization modes by a harmonic interaction between a hypothetical particle and the electron. The analysis of an exactly solvable ("symmetrical") 1D-polaron model, Monte Carlo schemes and other numerical schemes demonstrate the remarkable accuracy of Feynman's path-integral approach to the polaron ground-state energy. Experimentally more directly accessible properties of the polaron, such as its mobility and optical absorption, have been investigated subsequently. In the strong coupling limit, α ≫ 1, the spectrum of excited states of a polaron begins with polaron-phonon bound states with energies less than ħω, where ω is the frequency of optical phonons. In the lattice models the main parameter is the polaron binding energy Ep, obtained by summing |γ(q)|²/ħω(q) over the Brillouin zone. Note that this binding energy is purely adiabatic, i.e. it does not depend on the ionic masses. For polar crystals the value of the polaron binding energy is strictly determined by the dielectric constants ε0, ε∞, and is of the order of 0.3–0.8 eV. If the polaron binding energy Ep is smaller than the hopping integral t, the large polaron is formed for some types of electron-phonon interactions. In the case when Ep > t, the small polaron is formed. There are two limiting cases in lattice polaron theory. In the physically important adiabatic limit, all terms which involve ionic masses cancel, and the formation of a polaron is described by a nonlinear Schrödinger equation with a nonadiabatic correction describing phonon frequency renormalization and polaron tunneling. In the opposite limit, the theory represents an expansion in powers of the hopping integral. Polaron optical absorption The magnetooptical absorption Γ(Ω) of a polaron at the frequency Ω is expressed through the so-called "memory function" Σ(Ω), which describes the dynamics of the polaron; here ωc denotes the cyclotron frequency for a rigid-band electron. Σ(Ω) depends also on α, the inverse temperature β (where β involves the Boltzmann constant kB and the temperature T), and ωc. In the absence of an external magnetic field (ωc = 0), the optical absorption spectrum of the polaron at weak coupling is determined by the absorption of radiation energy, which is reemitted in the form of LO phonons. At larger coupling, the polaron can undergo transitions toward a relatively stable internal excited state called the "relaxed excited state" (RES) (see Fig. 2). The RES peak in the spectrum also has a phonon sideband, which is related to a Franck–Condon-type transition. A comparison of the DSG results with the optical conductivity spectra given by approximation-free numerical and approximate analytical approaches is given in ref.
Calculations of the optical conductivity for the Fröhlich polaron performed within the Diagrammatic Quantum Monte Carlo method, see Fig. 3, fully confirm the results of the path-integral variational approach at weak coupling. In the intermediate coupling regime, the low-energy behavior and the position of the maximum of the optical conductivity spectrum of ref. follow well the prediction of Devreese. There are the following qualitative differences between the two approaches in the intermediate and strong coupling regime: in ref., the dominant peak broadens and the second peak does not develop, giving instead rise to a flat shoulder in the optical conductivity spectrum. This behavior can be attributed to optical processes with the participation of two or more phonons. The nature of the excited states of a polaron needs further study. The application of a sufficiently strong external magnetic field allows one to satisfy the resonance condition, which determines the polaron cyclotron resonance frequency. From this condition the polaron cyclotron mass can also be derived. Using the most accurate theoretical polaron models to evaluate the polaron cyclotron mass, the experimental cyclotron data can be well accounted for. Evidence for the polaron character of charge carriers in AgBr and AgCl was obtained through high-precision cyclotron resonance experiments in external magnetic fields up to 16 T. The all-coupling magneto-absorption calculated in ref. leads to the best quantitative agreement between theory and experiment for AgBr and AgCl. This quantitative interpretation of the cyclotron resonance experiment in AgBr and AgCl by the theory of Peeters provided one of the most convincing and clearest demonstrations of Fröhlich polaron features in solids. Experimental data on the magnetopolaron effect, obtained using far-infrared photoconductivity techniques, have been applied to study the energy spectrum of shallow donors in polar semiconductor layers of CdTe. The polaron effect well above the LO phonon energy was studied through cyclotron resonance measurements, e. g., in II–VI semiconductors, observed in ultra-high magnetic fields. The resonant polaron effect manifests itself when the cyclotron frequency approaches the LO phonon energy in sufficiently high magnetic fields. In the lattice models the optical conductivity is given by a formula involving the activation energy of the polaron, Ea, which is of the order of the polaron binding energy Ep. This formula was derived and extensively discussed in the literature, and was tested experimentally, for example, in photodoped parent compounds of high-temperature superconductors. Polarons in two dimensions and in quasi-2D structures The great interest in the study of the two-dimensional electron gas (2DEG) has also resulted in many investigations on the properties of polarons in two dimensions. A simple model for the 2D polaron system consists of an electron confined to a plane, interacting via the Fröhlich interaction with the LO phonons of a 3D surrounding medium. The self-energy and the mass of such a 2D polaron are no longer described by the expressions valid in 3D; for weak coupling they can be approximated as ΔE ≈ −(π/2)αħω and m*/m ≈ 1 + (π/8)α. It has been shown that simple scaling relations exist, connecting the physical properties of polarons in 2D with those in 3D. An example of such a scaling relation is m*2D(α)/m2D = m*3D(3πα/4)/m3D, where m*2D (m*3D) and m2D (m3D) are, respectively, the polaron and the electron-band masses in 2D (3D). The effect of the confinement of a Fröhlich polaron is to enhance the effective polaron coupling.
However, many-particle effects tend to counterbalance this effect because of screening. Also in 2D systems, cyclotron resonance is a convenient tool to study polaron effects. Although several other effects have to be taken into account (nonparabolicity of the electron bands, many-body effects, the nature of the confining potential, etc.), the polaron effect is clearly revealed in the cyclotron mass. An interesting 2D system consists of electrons on films of liquid He. In this system the electrons couple to the ripplons of the liquid He, forming "ripplopolarons". The effective coupling can be relatively large and, for some values of the parameters, self-trapping can result. The acoustic nature of the ripplon dispersion at long wavelengths is a key aspect of the trapping. For GaAs/AlxGa1−xAs quantum wells and superlattices, the polaron effect is found to decrease the energy of the shallow donor states at low magnetic fields and leads to a resonant splitting of the energies at high magnetic fields. The energy spectra of such polaronic systems as shallow donors ("bound polarons"), e. g., the D0 and D− centres, constitute the most complete and detailed polaron spectroscopy realised in the literature. In GaAs/AlAs quantum wells with sufficiently high electron density, anticrossing of the cyclotron-resonance spectra has been observed near the GaAs transverse optical (TO) phonon frequency rather than near the GaAs LO-phonon frequency. This anticrossing near the TO-phonon frequency was explained in the framework of the polaron theory. Besides optical properties, many other physical properties of polarons have been studied, including the possibility of self-trapping, polaron transport, magnetophonon resonance, etc. Extensions of the polaron concept Also significant are the extensions of the polaron concept: acoustic polaron, piezoelectric polaron, electronic polaron, bound polaron, trapped polaron, spin polaron, molecular polaron, solvated polarons, polaronic exciton, Jahn-Teller polaron, small polaron, bipolarons and many-polaron systems. These extensions of the concept are invoked, e. g., to study the properties of conjugated polymers, colossal magnetoresistance perovskites, high-Tc superconductors, layered MgB2 superconductors, fullerenes, quasi-1D conductors, and semiconductor nanostructures. The possibility that polarons and bipolarons play a role in high-Tc superconductors has renewed interest in the physical properties of many-polaron systems and, in particular, in their optical properties. Theoretical treatments have been extended from one-polaron to many-polaron systems. A new aspect of the polaron concept has been investigated for semiconductor nanostructures: the exciton-phonon states are not factorizable into an adiabatic product Ansatz, so that a non-adiabatic treatment is needed. The non-adiabaticity of the exciton-phonon systems leads to a strong enhancement of the phonon-assisted transition probabilities (as compared to those treated adiabatically) and to multiphonon optical spectra that are considerably different from the Franck–Condon progression, even for small values of the electron-phonon coupling constant, as is the case for typical semiconductor nanostructures. In biophysics The Davydov soliton is a self-trapped amide I excitation propagating along a protein α-helix; it is a solution of the Davydov Hamiltonian. The mathematical techniques that are used to analyze Davydov's soliton are similar to some that have been developed in polaron theory.
In this context the Davydov soliton corresponds to a polaron that is (i) large, so the continuum limit approximation is justified, (ii) acoustic, because the self-localization arises from interactions with acoustic modes of the lattice, and (iii) weakly coupled, because the anharmonic energy is small compared with the phonon bandwidth. It has been shown that the system of an impurity in a Bose–Einstein condensate is also a member of the polaron family. This allows the hitherto inaccessible strong coupling regime to be studied, since the interaction strengths can be externally tuned through the use of a Feshbach resonance. This was recently realized experimentally by two research groups. The existence of the polaron in a Bose–Einstein condensate was demonstrated for both attractive and repulsive interactions, including in the strong coupling regime, and was observed dynamically.
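The coupling constant α defined earlier in this article is straightforward to evaluate numerically. A minimal sketch in SI units (the GaAs-like parameter values below are illustrative assumptions, not authoritative material constants):

```python
# Dimensionless Froehlich coupling constant, in SI units:
# alpha = e^2/(4*pi*eps0*hbar) * sqrt(m*/(2*hbar*omega_LO)) * (1/eps_inf - 1/eps_s)
import numpy as np
import scipy.constants as sc

def frohlich_alpha(m_eff_kg, omega_lo, eps_inf, eps_static):
    """Froehlich electron-phonon coupling constant (dimensionless)."""
    coulomb = sc.e**2 / (4 * np.pi * sc.epsilon_0 * sc.hbar)
    return (coulomb * np.sqrt(m_eff_kg / (2 * sc.hbar * omega_lo))
            * (1 / eps_inf - 1 / eps_static))

# Illustrative GaAs-like parameters (assumed for the example):
m_eff = 0.067 * sc.m_e               # band mass
omega_lo = 36e-3 * sc.e / sc.hbar    # ~36 meV LO phonon, converted to rad/s
print(frohlich_alpha(m_eff, omega_lo, eps_inf=10.9, eps_static=12.9))
# prints roughly 0.07, i.e. a weak-coupling Froehlich polaron
```

With α well below 1, the weak-coupling expressions quoted above apply, which is why polaron corrections in such materials are small but measurable in cyclotron resonance.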
Physical sciences
Basics_2
Physics
699928
https://en.wikipedia.org/wiki/Rainbow%20Bridge%20%28Niagara%20Falls%29
Rainbow Bridge (Niagara Falls)
The Niagara Falls International Rainbow Bridge, commonly known as the Rainbow Bridge, is a steel arch bridge across the Niagara River, connecting the cities of Niagara Falls, New York, United States, and Niagara Falls, Ontario, Canada. Construction The Rainbow Bridge was built near the site of the earlier Honeymoon Bridge, which collapsed in 1938 due to an ice jam in the Niagara Gorge. Architect Richard (Su Min) Lee designed the bridge, a design also used for the Lewiston–Queenston Bridge approximately downriver. The bridge's Rainbow Tower and Canadian-side plaza are the work of another Canadian architect, William Lyon Somerville. King George VI and Queen Elizabeth, during their visit to Niagara Falls as part of their 1939 royal tour of Canada, dedicated the future construction site of the Rainbow Bridge; a monument was later erected to commemorate the occasion. Construction began in May 1940. The bridge officially opened on November 1, 1941. The Niagara Falls Bridge Commission chose the name "Rainbow Bridge" in March 1939, because rainbows occur frequently near the falls due to water spray and mist in the air. Description and specifications The New York State Department of Transportation designates the bridge as an unsigned reference route. Roads that adjoin the bridge include New York routes 104 and 384, and the Niagara Scenic Parkway. The Ontario Ministry of Transportation designates the bridge as part of Highway 420. The Rainbow Tower, part of the plaza complex on the Canadian side, houses a carillon, which plays several times daily. The Rainbow Bridge does not permit commercial trucks; the nearest border crossing accessible to trucks is the Lewiston–Queenston Bridge. For each pedestrian or bicyclist, the toll to cross the bridge is $1.00 USD or CAD. For vehicles, the toll is only collected when leaving the United States and entering Canada. As of August 2022, the cash toll for personal vehicles is $5.00 USD or $6.50 CAD. 2023 car crash On November 22, 2023, a 2022 Bentley Flying Spur traveling at a high rate of speed left the roadway, went airborne, crashed and exploded at the Rainbow Bridge border crossing on the American side. The vehicle's two occupants died, and a U.S. Customs and Border Protection officer was injured. The crash and subsequent explosion were at first investigated as a potential terrorist attack, but the incident was later determined to have been an unintentional crash. Gallery
Technology
Bridges
null
699935
https://en.wikipedia.org/wiki/Rainbow%20Bridge%20%28Tokyo%29
Rainbow Bridge (Tokyo)
The Rainbow Bridge is a suspension bridge crossing northern Tokyo Bay between Shibaura Pier and the Odaiba waterfront development in Minato, Tokyo, Japan. Its official name in Japanese is Tōkyō Kō Renrakukyō (東京港連絡橋). It was built by Kawasaki Heavy Industries, with construction starting in 1987 and completed in 1993. The bridge is long with a main span of . Although officially called the "Shuto Expressway No. 11 Daiba Route – Port of Tokyo Connector Bridge", the name "Rainbow Bridge" was decided by the public. The towers supporting the bridge are white in color, designed to harmonize with the skyline of central Tokyo seen from Odaiba. Lamps placed on the wires supporting the bridge are illuminated in three different colors, red, white and green, every night, using solar energy obtained during the day. The bridge can be accessed by foot from Tamachi Station (JR East) or Shibaura-futō Station (Yurikamome) on the mainland side. Usage The Rainbow Bridge carries three transportation lines on two decks. The upper deck carries the Shuto Expressway's Daiba Route, while the lower deck carries the Yurikamome rapid transit system in the centre, walkways on the outer sides, and Tokyo Prefectural Route 482 in between. Light motorcycles under 50 cc are not permitted on either deck or the walkway of the bridge. Motorcycle pillion passengers are also banned. Walkway The bridge has two separate walkways on the north and south sides of the lower deck; the north side offers views of the inner Tokyo harbour and Tokyo Tower, while the south side offers views of Tokyo Bay and occasionally Mount Fuji. The walkways may only be used during certain hours (9 am to 9 pm in the summer; 10 am to 6 pm in the winter; access to the walkways closes 30 minutes before closing time). Bicycles are permitted on the condition that they are pushed rather than ridden. Panorama Gallery
Technology
Bridges
null
700154
https://en.wikipedia.org/wiki/WKB%20approximation
WKB approximation
In mathematical physics, the WKB approximation or WKB method is a method for finding approximate solutions to linear differential equations with spatially varying coefficients. It is typically used for a semiclassical calculation in quantum mechanics in which the wavefunction is recast as an exponential function, semiclassically expanded, and then either the amplitude or the phase is taken to be changing slowly. The name is an initialism for Wentzel–Kramers–Brillouin. It is also known as the LG or Liouville–Green method. Other often-used letter combinations include JWKB and WKBJ, where the "J" stands for Jeffreys. Brief history This method is named after physicists Gregor Wentzel, Hendrik Anthony Kramers, and Léon Brillouin, who all developed it in 1926. In 1923, mathematician Harold Jeffreys had developed a general method of approximating solutions to linear, second-order differential equations, a class that includes the Schrödinger equation. The Schrödinger equation itself was not developed until two years later, and Wentzel, Kramers, and Brillouin were apparently unaware of this earlier work, so Jeffreys is often not credited. Early texts in quantum mechanics contain any number of combinations of their initials, including WBK, BWK, WKBJ, JWKB and BWKJ. An authoritative discussion and critical survey has been given by Robert B. Dingle. Earlier appearances of essentially equivalent methods are: Francesco Carlini in 1817, Joseph Liouville in 1837, George Green in 1837, Lord Rayleigh in 1912 and Richard Gans in 1915. Liouville and Green may be said to have founded the method in 1837, and it is also commonly referred to as the Liouville–Green or LG method. The important contribution of Jeffreys, Wentzel, Kramers, and Brillouin to the method was the inclusion of the treatment of turning points, connecting the evanescent and oscillatory solutions at either side of the turning point. For example, this may occur in the Schrödinger equation, due to a potential energy hill. Formulation Generally, WKB theory is a method for approximating the solution of a differential equation whose highest derivative is multiplied by a small parameter ε. The method of approximation is as follows. For a differential equation, assume a solution of the form of an asymptotic series expansion

y(x) ~ exp[ (1/δ) Σ_{n=0}^∞ δⁿ Sₙ(x) ]

in the limit δ → 0. The asymptotic scaling of δ in terms of ε will be determined by the equation – see the example below. Substituting the above ansatz into the differential equation and cancelling out the exponential terms allows one to solve for an arbitrary number of terms Sₙ(x) in the expansion. WKB theory is a special case of multiple scale analysis. An example This example comes from the text of Carl M. Bender and Steven Orszag. Consider the second-order homogeneous linear differential equation

ε² d²y/dx² = Q(x) y,

where Q(x) ≠ 0. Substituting the ansatz results in the equation

ε² [ (1/δ²) (Σ δⁿ Sₙ′)² + (1/δ) Σ δⁿ Sₙ″ ] = Q(x).

To leading order in ε (assuming, for the moment, the series will be asymptotically consistent), the above can be approximated as

(ε²/δ²) S₀′² + (2ε²/δ) S₀′S₁′ + (ε²/δ) S₀″ = Q(x).

In the limit δ → 0, the dominant balance is given by

(ε²/δ²) S₀′² ≈ Q(x).

So δ is proportional to ε. Setting them equal and comparing powers yields

ε⁰: S₀′² = Q(x),

which can be recognized as the eikonal equation, with solution

S₀(x) = ± ∫ √(Q(t)) dt.

Considering first-order powers of ε fixes

ε¹: 2 S₀′ S₁′ + S₀″ = 0.

This has the solution S₁(x) = −(1/4) ln Q(x) + k₁, where k₁ is an arbitrary constant. We now have a pair of approximations to the system (a pair, because S₀ can take two signs); the first-order WKB-approximation will be a linear combination of the two:

y(x) ≈ c₁ Q^{−1/4}(x) exp[ (1/ε) ∫ √(Q(t)) dt ] + c₂ Q^{−1/4}(x) exp[ −(1/ε) ∫ √(Q(t)) dt ].

Higher-order terms can be obtained by looking at equations for higher powers of δ. Explicitly, for n ≥ 2,

2 S₀′ Sₙ′ + Sₙ₋₁″ + Σ_{j=1}^{n−1} Sⱼ′ Sₙ₋ⱼ′ = 0.
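Because the derivation above is purely formal, a quick numerical experiment is a useful sanity check. The sketch below is my own illustration (Q, the interval, and ε are arbitrary choices, not from the source): it integrates ε²y″ = Q(x)y numerically and compares against the first-order WKB formula for the growing branch.

```python
# Numerical check of the first-order WKB approximation for
# eps^2 y'' = Q(x) y  with  Q(x) = 1 + x^2  (no turning points on [0, 1]).
import numpy as np
from scipy.integrate import solve_ivp, quad

eps = 0.05
Q = lambda x: 1.0 + x**2

def rhs(x, y):
    # First-order system for the exact ODE: y0' = y1, y1' = Q(x)/eps^2 * y0.
    return [y[1], Q(x) / eps**2 * y[0]]

def wkb(x):
    # Growing WKB branch: y ~ Q^(-1/4) * exp( (1/eps) * integral of sqrt(Q) ).
    s, _ = quad(lambda t: np.sqrt(Q(t)), 0.0, x)
    return Q(x) ** -0.25 * np.exp(s / eps)

# Seed the exact solver with the WKB value and a finite-difference slope at x = 0.
y0 = [wkb(0.0), (wkb(1e-6) - wkb(0.0)) / 1e-6]
sol = solve_ivp(rhs, (0.0, 1.0), y0, rtol=1e-10, dense_output=True)

for x in (0.25, 0.5, 1.0):
    exact = sol.sol(x)[0]
    print(f"x={x}: numeric={exact:.5e}  WKB={wkb(x):.5e}  "
          f"rel.err={abs(exact - wkb(x)) / exact:.2e}")
```

The relative error should shrink roughly in proportion to ε, consistent with the error estimate discussed in the next section.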
Precision of the asymptotic series The asymptotic series for $y(x)$ is usually a divergent series, whose general term $\delta^n S_n(x)$ starts to increase after a certain value $n = n_{\max}$. Therefore, the smallest error achieved by the WKB method is at best of the order of the last included term. For the equation $\epsilon^2 \frac{d^2y}{dx^2} = Q(x)\,y$ with $Q(x) < 0$ an analytic function, the value $n_{\max}$ and the magnitude of the last term can be estimated as follows: $n_{\max} \approx 2\epsilon^{-1}\left|\int_{x_0}^{x_{*}}\sqrt{-Q(z)}\,dz\right|$ and $\delta^{n_{\max}}S_{n_{\max}}(x_0) \approx \sqrt{\frac{2\pi}{n_{\max}}}\exp(-n_{\max})$, where $x_0$ is the point at which $y(x)$ needs to be evaluated and $x_{*}$ is the (complex) turning point where $Q(x_{*}) = 0$, closest to $x_0$. The number $n_{\max}$ can be interpreted as the number of oscillations between $x_0$ and the closest turning point. If $Q(x)$ is a slowly changing function, the number $n_{\max}$ will be large, and the minimum error of the asymptotic series will be exponentially small. Application in non-relativistic quantum mechanics The above example may be applied specifically to the one-dimensional, time-independent Schrödinger equation, which can be rewritten as $\frac{d^2\psi}{dx^2} = \frac{2m}{\hbar^2}\left(V(x) - E\right)\psi$. Approximation away from the turning points The wavefunction can be rewritten as the exponential of another function $S$ (closely related to the action), which could be complex, $\psi(x) = e^{S(x)/\hbar}$, so that its substitution in Schrödinger's equation gives: $S'(x)^2 + \hbar\,S''(x) = 2m\left(V(x) - E\right)$. Next, the semiclassical approximation is used. This means that each function is expanded as a power series in $\hbar$, $S = S_0 + \hbar S_1 + \hbar^2 S_2 + \cdots$. Substituting in the equation, and only retaining terms up to first order in $\hbar$, we get: $S_0'^2 + \hbar\left(2S_0'S_1' + S_0''\right) = 2m\left(V(x) - E\right)$, which gives the following two relations: $S_0'^2 = 2m\left(V(x) - E\right) = -p(x)^2$ and $2S_0'S_1' + S_0'' = 0$, which can be solved for 1D systems, the first equation resulting in: $S_0(x) = \pm i\int p(x)\,dx$ with $p(x) = \sqrt{2m\left(E - V(x)\right)}$, and the second equation, computed for the possible values of the above, is generally expressed as: $S_1(x) = -\tfrac12\ln p(x) + \text{const}$. Thus, the resulting wavefunction in first-order WKB approximation is presented as $\psi(x) \approx \frac{C_{+}\,e^{+\frac{i}{\hbar}\int p(x)\,dx} + C_{-}\,e^{-\frac{i}{\hbar}\int p(x)\,dx}}{\sqrt{p(x)}}$. In the classically allowed region, namely the region where $V(x) < E$, the momentum $p(x)$ is real, the integrand in the exponent is imaginary and the approximate wave function is oscillatory. In the classically forbidden region $V(x) > E$, the solutions are growing or decaying. It is evident in the denominator that both of these approximate solutions become singular near the classical turning points, where $E = V(x)$, and cannot be valid. (The turning points are the points where the classical particle changes direction.) Hence, when $E > V(x)$, the wavefunction can be chosen to be expressed as: $\psi(x) \approx \frac{C_1}{\sqrt{p(x)}}\cos\!\left(\frac{1}{\hbar}\int_{x}^{x'}p(t)\,dt\right) + \frac{C_2}{\sqrt{p(x)}}\sin\!\left(\frac{1}{\hbar}\int_{x}^{x'}p(t)\,dt\right)$, and for $E < V(x)$, $\psi(x) \approx \frac{C_{+}}{\sqrt{|p(x)|}}\,e^{+\frac{1}{\hbar}\int_{x}^{x'}|p(t)|\,dt} + \frac{C_{-}}{\sqrt{|p(x)|}}\,e^{-\frac{1}{\hbar}\int_{x}^{x'}|p(t)|\,dt}$. The integration in this solution is computed between the classical turning point and the arbitrary position x'. Validity of WKB solutions From the condition $\hbar\,|S_0''(x)| \ll |S_0'(x)|^2$ (the requirement that the correction term stay small compared to the leading one), it follows that: $\hbar\left|\frac{dp}{dx}\right| \ll p(x)^2$. For this, the following two inequalities are equivalent, since the terms on either side are equivalent as used in the WKB approximation: $\hbar\left|\frac{dp}{dx}\right| \ll p^2 \iff \hbar\,m\,|V'(x)| \ll |p(x)|^3$. The first inequality can be used to show the following: $\left|\frac{d}{dx}\frac{\hbar}{p(x)}\right| \ll 1$, where $|p| = |S_0'|$ is used and $\lambda(x) = 2\pi\hbar/p(x)$ is the local de Broglie wavelength of the wavefunction. The inequality implies that the variation of the potential is assumed to be slow. This condition can also be restated as the fractional change of $E - V(x)$, or that of the momentum $p(x)$, over the wavelength $\lambda$, being much smaller than $1$. Similarly it can be shown that $\lambda(x)$ also has restrictions based on underlying assumptions for the WKB approximation, namely $\left|\frac{d\lambda}{dx}\right| \ll 1$, which implies that the de Broglie wavelength of the particle is slowly varying. Behavior near the turning points We now consider the behavior of the wave function near the turning points. For this, we need a different method. 
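As a concrete illustration of where these validity conditions fail, consider a potential that is linear near its turning point (a worked example added here; the linear form $V(x) = Fx$ is an assumption for illustration). With the turning point at $x_t = E/F$,

$$p(x) = \sqrt{2mF\,(x_t - x)}, \qquad \left|\frac{d}{dx}\frac{\hbar}{p(x)}\right| = \frac{\hbar\,mF}{p(x)^3} = \frac{\hbar}{2\sqrt{2mF}\,(x_t - x)^{3/2}},$$

so the slowly-varying condition holds only for $x_t - x \gg \left(\hbar^2/(8mF)\right)^{1/3}$. The WKB form therefore necessarily breaks down in a layer of width $\sim\left(\hbar^2/(mF)\right)^{1/3}$ around the turning point, which is precisely the region the Airy-function treatment of the next section is designed to cover.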
Near the first turning point, $x_1$, where $E = V(x_1)$, the term $\frac{2m}{\hbar^2}\left(V(x) - E\right)$ can be expanded in a power series, $\frac{2m}{\hbar^2}\left(V(x) - E\right) = U_1\,(x - x_1) + U_2\,(x - x_1)^2 + \cdots$. To first order, one finds $\frac{d^2\psi}{dx^2} = U_1\,(x - x_1)\,\psi$. This differential equation is known as the Airy equation, and the solution may be written in terms of Airy functions, $\psi(x) = C_A\,\mathrm{Ai}\!\left(\sqrt[3]{U_1}\,(x - x_1)\right) + C_B\,\mathrm{Bi}\!\left(\sqrt[3]{U_1}\,(x - x_1)\right)$. Although for any fixed value of $\hbar$ the wave function is bounded near the turning points, the wave function will be peaked there. As $\hbar$ gets smaller, the height of the wave function at the turning points grows. It also follows from this approximation that the Airy solution is needed only in a region around the turning point whose width is of order $U_1^{-1/3}$. Connection conditions It now remains to construct a global (approximate) solution to the Schrödinger equation. For the wave function to be square-integrable, we must take only the exponentially decaying solution in the two classically forbidden regions. These must then "connect" properly through the turning points to the classically allowed region. For most values of $E$, this matching procedure will not work: The function obtained by connecting the solution from the forbidden region near $+\infty$ to the classically allowed region will not agree with the function obtained by connecting the solution from the forbidden region near $-\infty$ to the classically allowed region. The requirement that the two functions agree imposes a condition on the energy $E$, which will give an approximation to the exact quantum energy levels. The wavefunction's coefficients can be calculated for a simple problem: a potential well with two turning points. Let the first turning point, where the potential is decreasing over x, occur at $x = x_1$, and the second turning point, where the potential is increasing over x, occur at $x = x_2$. Given that we expect wavefunctions to be of the following form, we can calculate their coefficients by connecting the different regions using the Airy and Bairy functions, $\mathrm{Ai}$ and $\mathrm{Bi}$. First classical turning point For $U_1 < 0$, i.e. a decreasing-potential condition, as at $x_1$ in this example, we require the exponential function to decay for negative values of x so that the wavefunction goes to zero there. Considering the Bairy function to be the required connection formula, we obtain the matching solution; we cannot use the Airy function, since it gives growing exponential behaviour for negative x. When compared to the WKB solutions and matching their behaviours at $x_1$, we conclude that the coefficients must be chosen so that the solution decays into the forbidden region. Thus, letting some normalization constant be $N$, the wavefunction is given, for the decreasing potential (with x), as: $\psi(x) \approx \frac{N}{\sqrt{|p(x)|}}\exp\!\left(-\frac{1}{\hbar}\int_{x}^{x_1}|p(t)|\,dt\right)$ for $x < x_1$ and $\psi(x) \approx \frac{2N}{\sqrt{p(x)}}\cos\!\left(\frac{1}{\hbar}\int_{x_1}^{x}p(t)\,dt - \frac{\pi}{4}\right)$ for $x_1 < x < x_2$. Second classical turning point For $U_1 > 0$, i.e. an increasing-potential condition, as at $x_2$ in this example, we require the exponential function to decay for positive values of x so that the wavefunction goes to zero there. Considering the Airy function to be the required connection formula, we obtain the matching solution; we cannot use the Bairy function, since it gives growing exponential behaviour for positive x. When compared to the WKB solutions and matching their behaviours at $x_2$, we conclude that the solution must again decay into the forbidden region. Thus, letting some normalization constant be $N'$, the wavefunction is given, for the increasing potential (with x), as: $\psi(x) \approx \frac{2N'}{\sqrt{p(x)}}\cos\!\left(\frac{1}{\hbar}\int_{x}^{x_2}p(t)\,dt - \frac{\pi}{4}\right)$ for $x_1 < x < x_2$ and $\psi(x) \approx \frac{N'}{\sqrt{|p(x)|}}\exp\!\left(-\frac{1}{\hbar}\int_{x_2}^{x}|p(t)|\,dt\right)$ for $x > x_2$. Common oscillating wavefunction Matching the two solutions for the region $x_1 < x < x_2$, it is required that the difference between the angles in these functions is $\left(n + \tfrac12\right)\pi$, where the $\tfrac{\pi}{2}$ phase difference accounts for changing cosine to sine for the wavefunction and the $n\pi$ difference because negation of the function can occur by letting $N' = (-1)^n N$. Thus: $\int_{x_1}^{x_2}p(x)\,dx = \left(n + \tfrac12\right)\pi\hbar$, where n is a non-negative integer. This condition can also be rewritten as saying that: The area enclosed by the classical energy curve is $\oint p\,dx = 2\pi\hbar\left(n + \tfrac12\right) = \left(n + \tfrac12\right)h$. Either way, the condition on the energy is a version of the Bohr–Sommerfeld quantization condition, with a "Maslov correction" equal to 1/2. 
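A standard sanity check of this quantization condition (a worked example added for illustration) is the harmonic oscillator $V(x) = \tfrac12 m\omega^2x^2$, whose turning points are $x_{1,2} = \mp\sqrt{2E/(m\omega^2)}$:

$$\int_{x_1}^{x_2}\sqrt{2m\left(E - \tfrac12 m\omega^2x^2\right)}\,dx = \frac{\pi E}{\omega} = \left(n + \tfrac12\right)\pi\hbar \quad\Longrightarrow\quad E_n = \left(n + \tfrac12\right)\hbar\omega.$$

For this particular potential the WKB condition happens to reproduce the exact spectrum; for generic potentials the WKB energies are only asymptotically accurate for large $n$.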
It is possible to show that after piecing together the approximations in the various regions, one obtains a good approximation to the actual eigenfunction. In particular, the Maslov-corrected Bohr–Sommerfeld energies are good approximations to the actual eigenvalues of the Schrödinger operator. Specifically, the error in the energies is small compared to the typical spacing of the quantum energy levels. Thus, although the "old quantum theory" of Bohr and Sommerfeld was ultimately replaced by the Schrödinger equation, some vestige of that theory remains, as an approximation to the eigenvalues of the appropriate Schrödinger operator. General connection conditions Thus, from the two cases the connection formula is obtained at a classical turning point, $x = a$: $\frac{2}{\sqrt{p(x)}}\cos\!\left(\frac{1}{\hbar}\int_{x}^{a}p(t)\,dt - \frac{\pi}{4}\right) \longleftarrow \frac{1}{\sqrt{|p(x)|}}\exp\!\left(-\frac{1}{\hbar}\int_{a}^{x}|p(t)|\,dt\right)$ and: $\frac{1}{\sqrt{p(x)}}\sin\!\left(\frac{1}{\hbar}\int_{x}^{a}p(t)\,dt - \frac{\pi}{4}\right) \longrightarrow -\frac{1}{\sqrt{|p(x)|}}\exp\!\left(+\frac{1}{\hbar}\int_{a}^{x}|p(t)|\,dt\right)$. The WKB wavefunction near the classical turning point, but away from it, is approximated by an oscillatory sine or cosine function in the classically allowed region, represented on the left, and by growing or decaying exponentials in the forbidden region, represented on the right. The one-way implication follows from the dominance of the growing exponential over the decaying exponential. Thus, the solutions of the oscillating or exponential part of the wavefunction can imply the form of the wavefunction in the other region of the potential, as well as at the associated turning point. Probability density One can then compute the probability density associated with the approximate wave function. The probability that the quantum particle will be found in the classically forbidden region is small. In the classically allowed region, meanwhile, the probability the quantum particle will be found in a given interval is approximately the fraction of time the classical particle spends in that interval over one period of motion. Since the classical particle's velocity goes to zero at the turning points, it spends more time near the turning points than in other classically allowed regions. This observation accounts for the peak in the wave function (and its probability density) near the turning points. Applications of the WKB method to Schrödinger equations with a large variety of potentials and comparison with perturbation methods and path integrals are treated in Müller-Kirsten. Examples in quantum mechanics Although the WKB approximation only applies to smoothly varying potentials, in examples where rigid walls produce infinities in the potential, the WKB approximation can still be used to approximate wavefunctions in the regions of smoothly varying potential. Since the rigid walls make the potential highly discontinuous, the connection condition cannot be used at these points, and the results obtained can also differ from those of the above treatment. Bound states for 1 rigid wall The potential of such systems can be given in the form: $V(x) = \infty$ for $x < x_1$, with $V(x)$ smooth for $x > x_1$, where $x_1$ and $x_2$ are the classical turning points at energy $E$. Finding the wavefunction in the bound region, i.e. within the classical turning points $x_1$ and $x_2$, by considering approximations far from $x_2$ and $x_1$ respectively, we have two solutions: $\psi \approx \frac{A}{\sqrt{p(x)}}\sin\!\left(\frac{1}{\hbar}\int_{x_1}^{x}p(t)\,dt\right)$ and $\psi \approx \frac{B}{\sqrt{p(x)}}\cos\!\left(\frac{1}{\hbar}\int_{x}^{x_2}p(t)\,dt - \frac{\pi}{4}\right)$. Since the wavefunction must vanish near $x_1$, we conclude that the sine form, with its phase measured from $x_1$, must be used there. For the Airy functions near $x_2$, we require the cosine form with the $-\frac{\pi}{4}$ phase. We require that the angles within these functions have a phase difference of $\left(n + \frac12\right)\pi$, where the $\frac{\pi}{2}$ phase difference accounts for changing sine to cosine and the $n\pi$ for allowing $B = (-1)^n A$; matching then gives $\int_{x_1}^{x_2}p(x)\,dx = \left(n + \frac34\right)\pi\hbar$, where n is a non-negative integer. Note that the right-hand side of this would instead be $\left(n - \frac14\right)\pi\hbar$ if n were only allowed to be a non-zero natural number. Thus we conclude that, for $V(x) = \infty$ when $x < x_1$, $\int_{x_1}^{x_2}p(x)\,dx = \left(n + \frac34\right)\pi\hbar$. In 3 dimensions with spherical symmetry, the same condition holds where the position x is replaced by the radial distance r, due to its similarity with this problem. 
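As a quick check of the one-wall condition (a worked example added for illustration), place the hard wall at the minimum of a harmonic potential, $V(x) = \tfrac12 m\omega^2x^2$ for $x > 0$ and $V = \infty$ for $x \leq 0$:

$$\int_{0}^{x_2}\sqrt{2m\left(E - \tfrac12 m\omega^2x^2\right)}\,dx = \frac{\pi E}{2\omega} = \left(n + \tfrac34\right)\pi\hbar \quad\Longrightarrow\quad E_n = \left(2n + \tfrac32\right)\hbar\omega,$$

which are exactly the odd-parity levels of the full oscillator, as expected: an infinite wall at the origin selects precisely the solutions that vanish there.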
Bound states within 2 rigid walls The potential of such systems can be given in the form: $V(x) = \infty$ for $x < x_1$ and for $x > x_2$, where $x_1$ and $x_2$ are thus the classical turning points. For $x$ between $x_1$ and $x_2$, by considering approximations far from $x_1$ and $x_2$ respectively, we have two solutions: $\psi \approx \frac{A}{\sqrt{p(x)}}\sin\!\left(\frac{1}{\hbar}\int_{x_1}^{x}p(t)\,dt\right)$ and $\psi \approx \frac{B}{\sqrt{p(x)}}\sin\!\left(\frac{1}{\hbar}\int_{x}^{x_2}p(t)\,dt\right)$. Since the wavefunctions must vanish at $x_1$ and $x_2$, both solutions are sines whose phases are measured from the respective wall. Here, the phase difference only needs to account for $n\pi$, which allows $B = (-1)^n A$. Hence the condition becomes: $\int_{x_1}^{x_2}p(x)\,dx = n\pi\hbar$, where $n = 1, 2, 3, \ldots$ but not equal to zero, since $n = 0$ makes the wavefunction zero everywhere. Quantum bouncing ball Consider the following potential a bouncing ball is subjected to: $V(x) = mgx$ for $x > 0$ and $V(x) = \infty$ for $x \leq 0$. The wavefunction solutions of the above can be solved using the WKB method by considering only the odd-parity solutions of the alternative potential $V(x) = mg|x|$. The classical turning points are identified as $x_1 = 0$ and $x_2 = E/(mg)$. Thus, applying the quantization condition obtained in WKB, $\int_{x_1}^{x_2}p(x)\,dx = \left(n - \tfrac14\right)\pi\hbar$ for $n = 1, 2, 3, \ldots$. Letting $p(x) = \sqrt{2m\left(E - mgx\right)}$, where $x_2 = E/(mg)$, and solving for $E$ with given $n$, we get the quantum mechanical energy of a bouncing ball: $E_n = \left(\frac{9\,m\,g^2\,\hbar^2\,\pi^2\left(n - \tfrac14\right)^2}{8}\right)^{1/3}$. This result is also consistent with the use of the equation from the bound state with one rigid wall, without needing to consider the alternative potential. Quantum tunneling The potential of such systems can be given in the form: $V(x) > E$ for $x_1 < x < x_2$ and $V(x) = 0$ otherwise, where $x_1$ and $x_2$ are the edges of the potential barrier. Its solution for an incident wave is given as $\psi = A e^{ikx} + B e^{-ikx}$ to the left of the barrier, $\psi \approx \frac{C}{\sqrt{|p(x)|}}\exp\!\left(-\frac{1}{\hbar}\int_{x_1}^{x}|p(t)|\,dt\right)$ inside it, and $\psi = F e^{ikx}$ to its right, where the wavefunction in the classically forbidden region is the WKB approximation with the growing exponential neglected. This is a fair assumption for wide potential barriers, through which the wavefunction is not expected to grow to high magnitudes. By the requirement of continuity of the wavefunction and its derivatives, the transmission coefficient is found to be $T = \frac{|F|^2}{|A|^2} \approx \exp\!\left(-\frac{2}{\hbar}\int_{x_1}^{x_2}|p(x)|\,dx\right)$, where $|p(x)| = \sqrt{2m\left(V(x) - E\right)}$ and $k = \sqrt{2mE}/\hbar$ is the wavenumber of the incident wave.
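The bouncing-ball spectrum just derived is a convenient numerical check on these formulas, because the exact levels are known in terms of Airy-function zeros. The sketch below (an added illustration; the unit choice $m = g = \hbar = 1$ is an assumption for convenience) compares the two:

```python
import numpy as np
from scipy.special import ai_zeros

# Exact: with m = g = hbar = 1, the levels of V(x) = x (x > 0, hard wall
# at x = 0) are E_n = -a_n * 2^(-1/3), a_n being the (negative) zeros of Ai.
n = np.arange(1, 6)
a_n = ai_zeros(5)[0]
E_exact = -a_n * 2.0 ** (-1.0 / 3.0)

# WKB with one rigid wall: ∫ p dx = (n - 1/4) π  =>  E_n = (9π²(n-¼)²/8)^(1/3)
E_wkb = (9.0 * np.pi**2 * (n - 0.25) ** 2 / 8.0) ** (1.0 / 3.0)

for k in range(5):
    print(f"n={n[k]}: exact={E_exact[k]:.6f}   WKB={E_wkb[k]:.6f}")
```

Even for the ground state the WKB value should land within about one percent of the exact level, and the agreement improves with $n$.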
Physical sciences
Quantum mechanics
Physics
700633
https://en.wikipedia.org/wiki/Dram%20%28unit%29
Dram (unit)
The dram (alternative British spelling drachm; apothecary symbol ʒ or ℨ; abbreviated dr) is a unit of mass in the avoirdupois system, and both a unit of mass and a unit of volume in the apothecaries' system. It was originally both a coin and a weight in ancient Greece. The unit of volume is more correctly called a fluid dram, fluid drachm, fluidram or fluidrachm (abbreviated fl dr, f ʒ, or fʒ). Ancient unit of mass The Attic Greek drachma (δραχμή) was a weight of 6 obols, 1⁄100 Greek mina, or about 4.37 grams. The Roman drachma was a weight of 1⁄96 Roman pounds, or about 3.41 grams. A coin weighing one drachma is known as a stater, drachm, or drachma. The Ottoman dirhem was based on the Sassanian drachm, which was itself based on the Roman dram/drachm. British unit of mass The British Weights and Measures Act 1878 introduced verification and consequent stamping of apothecary weights, making them officially recognized units of measurement. By 1900, Britain had enforced the distinction between the avoirdupois and apothecaries' versions by making the spelling different: dram now meant only avoirdupois drams, which were 1⁄16 of an avoirdupois ounce. An ounce consisted of 437.5 grains, thus making the dram approximately 27.34 grains. drachm now meant only apothecaries' drachms, which were 1⁄8 of an apothecaries' ounce of 480 grains, thus equal to 60 grains. Modern unit of mass In the avoirdupois system, the dram is the mass of 1⁄256 pound or 1⁄16 ounce. The dram weighs 27 11⁄32 grains, or exactly 1.7718451953125 grams. In the apothecaries' system, which was widely used in the United States until the middle of the 20th century, the dram is the mass of 1⁄96 pound apothecaries (lb ap), or 1⁄8 ounce apothecaries (oz ap or ℥) (the pound apothecaries and ounce apothecaries are equal to the troy pound (lb t), and troy ounce (oz t), respectively). The dram apothecaries is equal to 60 grains, or exactly 3.8879346 grams. "Dram" is also used as a measure of the powder charge in a shotgun shell, representing the equivalent amount of black powder measured in drams avoirdupois. Unit of volume The fluid dram (or fluid drachm in British spelling) is defined as 1⁄8 of a fluid ounce, and is exactly equal to: 3.6966911953125 ml in the U.S. customary system and 3.5516328125 ml in the British Imperial system. A teaspoonful has been considered equal to one fluid dram for medical prescriptions. However, by 1876 the teaspoon had grown considerably larger than it was previously, measuring 80–85 minims. As there are 60 minims in a fluid dram, using this equivalent for the dosage of medicine was no longer suitable. Today's US teaspoon is equivalent to exactly 4.92892159375 ml, which is also 1⁄6 US fluid ounce, 1 1⁄3 US fluid drams, or 80 US minims. While pharmaceuticals are measured nowadays exclusively in metric units, fluid drams are still used to measure the capacity of pill containers. Dram is used informally to mean a small amount of liquor, especially Scotch whisky. The unit is referenced by the phrase dram shop, the U.S. legal term for an establishment that serves alcoholic beverages. In popular culture The line "Where'd you get your whiskey, where'd you get your dram?" appears in some versions of the traditional pre–Civil War American song "Cindy". In Monty Python's song "The Bruces' Philosophers Song", there is the line "Hobbes was fond of his dram". In the old-time music tradition of the United States, there is a tune entitled "Gie the Fiddler a Dram", "gie" being the Scots language word for "give", brought over by immigrants and commonly used by their descendants in Appalachia at the time of writing. 
"Little Maggie," a traditional song popular in bluegrass music, refers to the title character as having a "dram glass in her hand." In the episode "Double Indecency" of the TV series Archer, the character Cheryl/Carol was carrying around 10 drams of Vole's blood and even offered to pay for a taxi ride with it. In Frank Herbert's Dune, the Fremen employ a sophisticated measurement system that involves the drachm (and fractions thereof) to accurately count and economize water, an ultra-precious resource on their home, the desert planet Arrakis. In the video game Caves of Qud water is used as the primary currency, being measured by the Dram.
Physical sciences
Mass and weight
Basics and measurement
701188
https://en.wikipedia.org/wiki/Quantum%20vacuum%20state
Quantum vacuum state
In quantum field theory, the quantum vacuum state (also called the quantum vacuum or vacuum state) is the quantum state with the lowest possible energy. Generally, it contains no physical particles. The term zero-point field is sometimes used as a synonym for the vacuum state of an individual quantized field. According to present-day understanding of what is called the vacuum state or the quantum vacuum, it is "by no means a simple empty space". According to quantum mechanics, the vacuum state is not truly empty but instead contains fleeting electromagnetic waves and particles that pop into and out of the quantum field. The QED vacuum of quantum electrodynamics (or QED) was the first vacuum of quantum field theory to be developed. QED originated in the 1930s, and in the late 1940s and early 1950s, it was reformulated by Feynman, Tomonaga, and Schwinger, who jointly received the Nobel prize for this work in 1965. Today, the electromagnetic interactions and the weak interactions are unified (at very high energies only) in the theory of the electroweak interaction. The Standard Model is a generalization of the QED work to include all the known elementary particles and their interactions (except gravity). Quantum chromodynamics (or QCD) is the portion of the Standard Model that deals with strong interactions, and QCD vacuum is the vacuum of quantum chromodynamics. It is the object of study in the Large Hadron Collider and the Relativistic Heavy Ion Collider, and is related to the so-called vacuum structure of strong interactions. Non-zero expectation value If the quantum field theory can be accurately described through perturbation theory, then the properties of the vacuum are analogous to the properties of the ground state of a quantum mechanical harmonic oscillator, or more accurately, the ground states of the collection of harmonic oscillators into which the free field decomposes, one for each mode. In this case, the vacuum expectation value (VEV) of any field operator vanishes. For quantum field theories in which perturbation theory breaks down at low energies (for example, Quantum chromodynamics or the BCS theory of superconductivity), field operators may have non-vanishing vacuum expectation values called condensates. In the Standard Model, the non-zero vacuum expectation value of the Higgs field, arising from spontaneous symmetry breaking, is the mechanism by which the other fields in the theory acquire mass. Energy The vacuum state is associated with a zero-point energy, and this zero-point energy (equivalent to the lowest possible energy state) has measurable effects. It may be detected as the Casimir effect in the laboratory. In physical cosmology, the energy of the cosmological vacuum appears as the cosmological constant. The energy of a cubic centimeter of empty space has been calculated figuratively to be one trillionth of an erg (or 0.6 eV). An outstanding requirement imposed on a potential Theory of Everything is that the energy of the quantum vacuum state must explain the physically observed cosmological constant. Symmetry For a relativistic field theory, the vacuum is Poincaré invariant, which follows from the Wightman axioms but can also be proved directly without these axioms. Poincaré invariance implies that only scalar combinations of field operators have non-vanishing VEVs. The VEV may break some of the internal symmetries of the Lagrangian of the field theory. In this case, the vacuum has less symmetry than the theory allows, and one says that spontaneous symmetry breaking has occurred. See Higgs mechanism, Standard Model. 
Non-linear permittivity Quantum corrections to Maxwell's equations are expected to result in a tiny nonlinear electric polarization term in the vacuum, resulting in a field-dependent electrical permittivity ε deviating from the nominal value ε0 of vacuum permittivity. These theoretical developments are described, for example, in Dittrich and Gies. The theory of quantum electrodynamics predicts that the QED vacuum should exhibit a slight nonlinearity, so that in the presence of a very strong electric field, the permittivity is increased by a tiny amount with respect to ε0. Subject to ongoing experimental efforts is the possibility that a strong electric field would modify the effective permeability of free space, becoming anisotropic with a value slightly below μ0 in the direction of the electric field and slightly exceeding μ0 in the perpendicular direction. The quantum vacuum exposed to an electric field exhibits birefringence for an electromagnetic wave traveling in a direction other than that of the electric field. The effect is similar to the Kerr effect but without matter being present. This tiny nonlinearity can be interpreted in terms of virtual pair production. A characteristic electric field strength for which the nonlinearities become sizable is predicted to be enormous, about 1.3 × 10^18 V/m, known as the Schwinger limit; the equivalent Kerr constant has been estimated, being about 10^20 times smaller than the Kerr constant of water. Explanations for dichroism from particle physics, outside quantum electrodynamics, have also been proposed. Experimentally measuring such an effect is challenging, and has not yet been successful. Virtual particles The presence of virtual particles can be rigorously based upon the non-commutation of the quantized electromagnetic fields. Non-commutation means that although the average values of the fields vanish in a quantum vacuum, their variances do not. The term "vacuum fluctuations" refers to the variance of the field strength in the minimal energy state, and is described picturesquely as evidence of "virtual particles". It is sometimes attempted to provide an intuitive picture of virtual particles, or variances, based upon the Heisenberg energy-time uncertainty principle: $\Delta E\,\Delta t \geq \frac{\hbar}{2}$ (with ΔE and Δt being the energy and time variations respectively; ΔE is the accuracy in the measurement of energy, Δt is the time taken in the measurement, and ħ is the reduced Planck constant), arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times. Although the phenomenon of virtual particles is accepted, this interpretation of the energy-time uncertainty relation is not universal. One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty Δt determines a "budget" for borrowing energy ΔE. Another issue is the meaning of "time" in this relation, because energy and time (unlike position q and momentum p, for example) do not satisfy a canonical commutation relation (such as $[q, p] = i\hbar$). Various schemes have been advanced to construct an observable that has some kind of time interpretation, and yet does satisfy a canonical commutation relation with energy. The many approaches to the energy-time uncertainty principle are the subject of a long and continuing literature. 
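Schematically (a compact restatement added for illustration), for the electric field operator in the vacuum state $|0\rangle$ the mean vanishes while the variance does not:

$$\langle 0|\hat{\mathbf{E}}(\mathbf{r},t)|0\rangle = 0, \qquad \langle 0|\hat{\mathbf{E}}^2(\mathbf{r},t)|0\rangle > 0.$$

It is this non-zero variance, not a non-zero mean field, that the phrase "vacuum fluctuations" denotes.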
Physical nature of the quantum vacuum According to Astrid Lambrecht (2002): "When one empties out a space of all matter and lowers the temperature to absolute zero, one produces in a Gedankenexperiment [thought experiment] the quantum vacuum state." According to Fowler & Guggenheim (1939/1965), the third law of thermodynamics may be precisely enunciated as follows: It is impossible by any procedure, no matter how idealized, to reduce any assembly to the absolute zero in a finite number of operations.
Physical sciences
Quantum mechanics
Physics
701610
https://en.wikipedia.org/wiki/Zebroid
Zebroid
A zebroid is the offspring of any cross between a zebra and any other equine to create a hybrid. In most cases, the sire is a zebra stallion, although not always. The offspring of a donkey sire and zebra dam, called a donkra, and the offspring of a horse sire and a zebra dam, called a hebra, do exist, but are rare and are usually sterile. Zebroids have been bred since the 19th century. Charles Darwin noted several zebra hybrids in his works. Types Zebroid is the term generally used for all zebra hybrids. The different hybrids are generally named using a portmanteau of the sire's name and the dam's name. Generally, no distinction is made as to which zebra species is used. Many times, when zebras are crossbred, they develop some form of dwarfism. Breeding of different branches of the equine family, which does not occur in the wild, generally results in sterile offspring. The combination of sire and dam also affects the offspring phenotype. A zorse is the offspring of a zebra stallion and a horse mare. This cross is also called a zebrose, zebrula, zebrule, or zebra mule. The rarer reverse pairing is sometimes called a hebra, horsebra, zebrinny, or zebra hinny. Like most other animal hybrids, the zorse is sterile. A zony is the offspring of a zebra stallion and a pony mare. Medium-sized pony mares are preferred to produce riding zonies, but zebras have been crossed with smaller pony breeds such as the Shetland, resulting in so-called "Zetlands". A cross between a zebra and a donkey is known as a zenkey, zonkey (a term also used for donkeys in Tijuana, Mexico, painted as zebras for tourists to pose with them in souvenir photos), zebrass, or zedonk. Donkeys are closely related to zebras and both animals belong to the horse family. These zebra–donkey hybrids are very rare. In South Africa, they occur where zebras and donkeys are found in proximity to each other. Like mules and hinnies, however, they are generally genetically unable to breed, due to an odd number of chromosomes disrupting meiosis. Genetics Living equids show wide variation in the number of chromosomes, ranging from a diploid number of 32 chromosomes in the mountain zebra to 66 in Przewalski's horse. This is due to several chromosomal fusion and fission events during the evolution of equids. The zebra has between 32 and 46 chromosomes depending on the species. In spite of this difference, viable hybrids are possible, provided the gene combination in the hybrid allows for embryonic development to birth. A hybrid will have a number of chromosomes exactly halfway between that of its parents; for example, a cross between a horse (64 chromosomes) and a plains zebra (44 chromosomes) will produce a zebroid offspring with 54 chromosomes. The chromosome difference makes female hybrids poorly fertile and male hybrids generally sterile, due to a phenomenon called Haldane's rule. The evolutionary biologist J. B. S. Haldane first recorded in 1922 that genetic hybrids are often inviable or sterile. Since none of the males are fertile, the females must be paired with either a donkey or a zebra. The difference in chromosome number is most likely due to horses having two longer chromosomes that contain similar gene content to four zebra chromosomes. Extant Equus species: chromosome number Zebras are more closely related to wild asses (a group which includes donkeys) than to horses. The horse lineage diverged from other equids an estimated 4.0–4.7 million years ago; zebras and asses diverged an estimated 1.69–1.99 million years ago. 
The cladogram of Equus below is simplified from Vilstrup et al. (2013). Physical characteristics Zebroids physically resemble their nonzebra parent, but are striped like a zebra. The stripes generally do not cover the whole body and might be confined to the legs, or spread onto parts of the body or neck. If the non-zebra parent was patterned (such as a roan, Appaloosa, pinto/paint, piebald, or skewbald), this pattern might be passed down to the zebroid, in which case the stripes are usually confined to non-white areas. The alternative name "golden zebra" relates to the interaction of zebra striping and a horse's bay or chestnut colour to give a zebra-like black-on-bay or black-on-chestnut pattern that superficially resembles the extinct quagga. Zebra–donkey hybrids usually have a dorsal (back) stripe and a ventral (belly) stripe. Zorses combine the zebra striping overlaid on coloured areas of the hybrid's coat. Zorses are most often bred using solid-colored horses. If the horse parent is piebald (black and white) or skewbald (color other than black and white), the zorse may inherit the dominant depigmentation genes for white patches. The tobiano (the most common white modifier found in the horse) directly interacts with the zorse coat to give it white markings. Only the non-depigmented areas will have zebra striping, resulting in a zorse with white patches and striped patches. This effect is seen in the zebroid named Eclyse (a hebra rather than a zorse) born in Stukenbrock, Germany, in 2007 to a zebra mare called Eclipse and a horse stallion called Ulysses. Zonkeys tend to be tan, brown or grey in colour from their donkey parent, with a lighter underside; the stripes inherited from the zebra parent appear mostly on the lighter parts of the body, such as the legs and belly. Zebroids are preferred over zebras for practical uses, such as riding, because the zebra has a different body shape from a horse or a donkey and, consequently, it is difficult to find tack to fit a zebra. However, a zebroid is usually more inclined to be temperamental than a purebred horse and can be difficult to handle. Zebras, being wild animals and not domesticated like horses and donkeys, can pass on their wild traits to their offspring. Zebras, while not usually very large, are extremely strong and aggressive. Similarly, zorses have a strong temperament and can be aggressive. Historical and notable zebroids In 1815, Lord Morton mated a quagga stallion to a chestnut Arabian mare. The result was a female hybrid which resembled both parents. This provoked the interest of Cossar Ewart, Professor of Natural History at Edinburgh (1882–1927) and a keen geneticist. Ewart crossed a zebra stallion with pony mares to investigate the theory of telegony, or paternal impression. In The Origin of Species (1859), Charles Darwin mentioned four colored drawings of hybrids between the ass and zebra. He also wrote: "In Lord Morton's famous hybrid from a chestnut mare and male quagga, the hybrid, and even the pure offspring subsequently produced from the mare by a black Arabian sire, were much more plainly barred across the legs than is even the pure quagga." In his book The Variation of Animals and Plants Under Domestication, Darwin described a hybrid ass-zebra specimen in the British Museum as being dappled on its flanks. He also mentioned a "triple hybrid, from a bay mare, by a hybrid from a male ass and female zebra" displayed at London Zoo. This would have required the zebroid sire to be fertile. 
During the South African War, the Boers crossed Chapman's zebras with ponies to produce animals for transport work, chiefly for hauling guns. A specimen was captured by British forces, presented to King Edward VII by Lord Kitchener, and photographed by W. S. Berridge. Zebras are resistant to sleeping sickness, whereas purebred horses and ponies are not, and it was hoped that zebra mules would inherit this resistance. Grévy's zebra was crossed with the Somali wild ass in the early 20th century. Zorses were bred by the U.S. government and reported in Genetics in Relation to Agriculture by E. B. Babcock and R. E. Clausen (early 20th century), in an attempt to investigate inheritance and telegony. The experiments were also reported in The Science of Life by H. G. Wells, J. Huxley and G. P. Wells (around 1929). Interest in zebra crosses continued in the 1970s. In 1973, a cross between a zebra and a donkey was foaled at the Jerusalem Zoo. They called it a "hamzab". In the 1970s, the Colchester Zoo in England bred zedonks, at first by accident and later to create a disease-resistant riding and draft animal. The experiment was discontinued when zoos became more conservation-minded. A number of hybrids were kept at the zoo after this; the last one died in 2009. As of 2010, one adult still remained at the tourist attraction of Groombridge Place near Tunbridge Wells in Kent. 21st century Today, various zebroids are bred as riding and draft animals and as curiosities in circuses and smaller zoos. A zorse (more accurately a zony) was born at Eden Ostrich World, Cumbria, England, in 2001, after a zebra was left in a field with a Shetland pony. It was referred to as a Zetland. Usually, a zebra stallion is paired with a horse mare or donkey jenny, but in 2005, a Burchell's zebra mare named Allison produced a zonkey called Alex, sired by a donkey jack at Highland Plantation in the parish of Saint Thomas, Barbados. Alex, born 21 April 2005, is apparently the first zonkey in Barbados. In 2007, a horse stallion, Ulysses, and a zebra mare, Eclipse, produced a hebra named Eclyse, displaying an unusually patchy coat pattern. In July 2010, a zonkey was born at the Chestatee Wildlife Preserve in Dahlonega, Georgia. Another zebra–donkey hybrid, like the Barbados zonkey sired by a donkey, was born 3 July 2011 in Haicang Safari Park, Haicang, Xiamen, China. A zonkey, Ippo, was born 21 July 2013 in an animal reserve in Florence, Italy. Khumba, the offspring of a zebra mare and a dwarf albino donkey jack, was born on 21 April 2014 in the zoo of Reynosa in the state of Tamaulipas, Mexico. More recently, in November 2018 at a farm in Somerset, a cross between a donkey jack and a zebra mare was born. The male foal was described as a zonkey by its owner and has been named Zippy.
Biology and health sciences
Hybrids
Animals
703058
https://en.wikipedia.org/wiki/Aircraft%20flight%20control%20system
Aircraft flight control system
A conventional fixed-wing aircraft flight control system (AFCS) consists of flight control surfaces, the respective cockpit controls, connecting linkages, and the necessary operating mechanisms to control an aircraft's direction in flight. Aircraft engine controls are also considered flight controls, as they change speed. The fundamentals of aircraft controls are explained in flight dynamics. This article centers on the operating mechanisms of the flight controls. The basic system in use on aircraft first appeared in a readily recognizable form as early as April 1908, on Louis Blériot's Blériot VIII pioneer-era monoplane design. Cockpit controls Primary controls Generally, the primary cockpit flight controls are arranged as follows: A control yoke (also known as a control column), centre stick or side-stick (the latter two also colloquially known as a control or joystick), governs the aircraft's roll and pitch by moving the ailerons (or activating wing warping on some very early aircraft designs) when turned or deflected left and right, and moves the elevators when moved backwards or forwards. Rudder pedals, or the earlier, pre-1919 "rudder bar", control yaw by moving the rudder; pushing the left pedal forward, for instance, moves the rudder left. Thrust lever or throttle, which controls engine speed or thrust for powered aircraft. The control yokes also vary greatly among aircraft. There are yokes where roll is controlled by rotating the yoke clockwise/counterclockwise (like steering a car) and pitch is controlled by moving the control column towards or away from the pilot, but in others the pitch is controlled by sliding the yoke into and out of the instrument panel (like most Cessnas, such as the 152 and 172), and in some the roll is controlled by sliding the whole yoke to the left and right (like the Cessna 162). Centre sticks also vary between aircraft. Some are directly connected to the control surfaces using cables; others (fly-by-wire airplanes) have a computer in between, which then controls electrical actuators. Even when an aircraft uses variant flight control surfaces such as a V-tail ruddervator, flaperons, or elevons, because these various combined-purpose control surfaces control rotation about the same three axes in space, the aircraft's flight control system will still be designed so that the stick or yoke controls pitch and roll conventionally, as will the rudder pedals for yaw. The basic pattern for modern flight controls was pioneered by French aviation figure Robert Esnault-Pelterie, with fellow French aviator Louis Blériot popularizing Esnault-Pelterie's control format initially on Louis' Blériot VIII monoplane in April 1908, and standardizing the format on the July 1909 Channel-crossing Blériot XI. Flight control has been taught in this fashion for many decades, as popularized in ab initio instructional books such as the 1944 work Stick and Rudder. In some aircraft, the control surfaces are not manipulated with a linkage. In ultralight aircraft and motorized hang gliders, for example, there is no mechanism at all. Instead, the pilot just grabs the lifting surface by hand (using a rigid frame that hangs from its underside) and moves it. Secondary controls In addition to the primary flight controls for roll, pitch, and yaw, there are often secondary controls available to give the pilot finer control over flight or to ease the workload. 
The most commonly available control is a wheel or other device to control elevator trim, so that the pilot does not have to maintain constant backward or forward pressure to hold a specific pitch attitude (other types of trim, for rudder and ailerons, are common on larger aircraft but may also appear on smaller ones). Many aircraft have wing flaps, controlled by a switch or a mechanical lever, or in some cases fully automatically by computer control, which alter the shape of the wing for improved control at the slower speeds used for take-off and landing. Other secondary flight control systems may include slats, spoilers, air brakes and variable-sweep wings. Flight control systems Mechanical Mechanical or manually operated flight control systems are the most basic method of controlling an aircraft. They were used in early aircraft and are currently used in small aircraft where the aerodynamic forces are not excessive. Very early aircraft, such as the Wright Flyer I, Blériot XI and Fokker Eindecker, used a system of wing warping, where no conventionally hinged control surfaces were used on the wing, and sometimes not even for pitch control, as on the Wright Flyer I and original versions of the 1909 Etrich Taube, which only had a hinged/pivoting rudder in addition to the warping-operated pitch and roll controls. A manual flight control system uses a collection of mechanical parts such as pushrods, tension cables, pulleys, counterweights, and sometimes chains to transmit the forces applied to the cockpit controls directly to the control surfaces. Turnbuckles are often used to adjust control cable tension. The Cessna Skyhawk is a typical example of an aircraft that uses this type of system. Gust locks are often used on parked aircraft with mechanical systems to protect the control surfaces and linkages from wind damage. Some aircraft have gust locks fitted as part of the control system. Increases in control surface area, and the higher airspeeds required by faster aircraft, resulted in higher aerodynamic loads on the flight control systems. As a result, the forces required to move them also became significantly larger. Consequently, complicated mechanical gearing arrangements were developed to extract maximum mechanical advantage in order to reduce the forces required from the pilots. This arrangement can be found on bigger or higher-performance propeller aircraft such as the Fokker 50. Some mechanical flight control systems use servo tabs that provide aerodynamic assistance. Servo tabs are small surfaces hinged to the control surfaces. The flight control mechanisms move these tabs; the aerodynamic forces acting on them in turn move, or assist the movement of, the control surfaces, reducing the amount of mechanical force needed. This arrangement was used in early piston-engined transport aircraft and in early jet transports. The Boeing 737 incorporates a system whereby, in the unlikely event of total hydraulic system failure, it automatically and seamlessly reverts to being controlled via servo tab. Hydro-mechanical The complexity and weight of mechanical flight control systems increase considerably with the size and performance of the aircraft. Hydraulically powered control surfaces help to overcome these limitations. With hydraulic flight control systems, the aircraft's size and performance are limited by economics rather than a pilot's muscular strength. At first, only partially boosted systems were used, in which the pilot could still feel some of the aerodynamic loads on the control surfaces (feedback). 
A hydro-mechanical flight control system has two parts: The mechanical circuit, which links the cockpit controls with the hydraulic circuits. Like the mechanical flight control system, it consists of rods, cables, pulleys, and sometimes chains. The hydraulic circuit, which has hydraulic pumps, reservoirs, filters, pipes, valves and actuators. The actuators are powered by the hydraulic pressure generated by the pumps in the hydraulic circuit. The actuators convert hydraulic pressure into control surface movements. The electro-hydraulic servo valves control the movement of the actuators. The pilot's movement of a control causes the mechanical circuit to open the matching servo valve in the hydraulic circuit. The hydraulic circuit powers the actuators, which then move the control surfaces. As the actuator moves, the servo valve is closed by a mechanical feedback linkage, one that stops movement of the control surface at the desired position. This arrangement was found in older jet transport designs and in some high-performance aircraft. Examples include the Antonov An-225 and the Lockheed SR-71. Artificial feel devices With purely mechanical flight control systems, the aerodynamic forces on the control surfaces are transmitted through the mechanisms and are felt directly by the pilot, allowing tactile feedback of airspeed. With hydromechanical flight control systems, the load on the surfaces cannot be felt and there is a risk of overstressing the aircraft through excessive control surface movement. To overcome this problem, artificial feel systems can be used. For example, for the controls of the RAF's Avro Vulcan jet bomber and the RCAF's Avro Canada CF-105 Arrow supersonic interceptor (both 1950s-era designs), the required force feedback was achieved by a spring device. The fulcrum of this device was moved in proportion to the square of the air speed (for the elevators) to give increased resistance at higher speeds. For the controls of the American Vought F-8 Crusader and the LTV A-7 Corsair II warplanes, a 'bob-weight' was used in the pitch axis of the control stick, giving force feedback that was proportional to the airplane's normal acceleration. Stick shaker A stick shaker is a device that is attached to the control column in some hydraulic aircraft. It shakes the control column when the aircraft is approaching stall conditions. Some aircraft such as the McDonnell Douglas DC-10 are equipped with a back-up electrical power supply that can be activated to enable the stick shaker in case of hydraulic failure. Power-by-wire In most current systems the power is provided to the control actuators by high-pressure hydraulic systems. In fly-by-wire systems the valves, which control these systems, are activated by electrical signals. In power-by-wire systems, electrical actuators are used in place of hydraulic pistons. The power is carried to the actuators by electrical cables. These are lighter than hydraulic pipes, easier to install and maintain, and more reliable. Elements of the F-35 flight control system are power-by-wire. The actuators in such an electro-hydrostatic actuation (EHA) system are self-contained hydraulic devices, small closed-circuit hydraulic systems. The overall aim is towards more- or all-electric aircraft, and an early example of the approach was the Avro Vulcan. Serious consideration was given to using the approach on the Airbus A380. Fly-by-wire control systems A fly-by-wire (FBW) system replaces manual flight control of an aircraft with an electronic interface. 
The movements of flight controls are converted to electronic signals transmitted by wires (hence the term fly-by-wire), and flight control computers determine how to move the actuators at each control surface to provide the expected response. Commands from the computers are also input without the pilot's knowledge to stabilize the aircraft and perform other tasks. Electronics for aircraft flight control systems are part of the field known as avionics. Fly-by-optics, also known as fly-by-light, is a further development using fiber-optic cables. Research Several technology research and development efforts exist to integrate the functions of flight control systems such as ailerons, elevators, elevons, flaps, and flaperons into wings to perform the aerodynamic purpose with the advantages of lower mass, cost, drag, inertia (for faster, stronger control response), complexity (mechanically simpler, fewer moving parts or surfaces, less maintenance), and radar cross section for stealth. These may be used in many unmanned aerial vehicles (UAVs) and 6th-generation fighter aircraft. Two promising approaches are flexible wings and fluidics. Flexible wings In flexible wings, also known as "morphing aerofoils", much or all of a wing surface can change shape in flight to deflect air flow much like an ornithopter. Adaptive compliant wings are a military and commercial effort. The X-53 Active Aeroelastic Wing was a US Air Force, NASA, and Boeing effort. Notable efforts have also been made by FlexSys, who have conducted flight tests using flexible aerofoils retrofitted to a Gulfstream III aircraft. Active flow control In active flow control systems, forces in vehicles occur via circulation control, in which larger and more complex mechanical parts are replaced by smaller, simpler fluidic systems (slots which emit air flows), where larger forces in fluids are diverted intermittently by smaller jets or flows of fluid to change the direction of vehicles. In this use, active flow control promises simplicity and lower mass and cost (up to half less), as well as lower inertia and faster response times. This was demonstrated in the Demon UAV, which flew for the first time in the UK in September 2010.
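To make the division of labour concrete, here is a deliberately toy sketch of the rate-command idea behind fly-by-wire (an added illustration; the function, gains, and limits are invented placeholders, not any real aircraft's control law):

```python
def fbw_pitch_command(stick, measured_pitch_rate_dps,
                      gain=0.8, max_rate_dps=12.0, max_elevator_deg=25.0):
    """Map stick input (-1..1) to an elevator deflection in degrees.

    The stick commands a pitch *rate*; a flight control computer, not a
    mechanical linkage, closes the loop and decides the surface position.
    """
    commanded_rate = stick * max_rate_dps
    error = commanded_rate - measured_pitch_rate_dps
    elevator = gain * error
    # Envelope protection: never command more than full surface travel.
    return max(-max_elevator_deg, min(max_elevator_deg, elevator))

print(fbw_pitch_command(stick=0.5, measured_pitch_rate_dps=2.0))  # 3.2 deg
```

Real systems add integrators, filtering, redundancy voting, and mode logic, but the structural point is the same: the pilot's input is a signal, and software determines the actuator commands.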
Technology
Aircraft components
null
703078
https://en.wikipedia.org/wiki/Twill
Twill
Twill is a type of textile weave with a pattern of parallel, diagonal ribs. It is one of three fundamental types of weave, along with plain weave and satin. It is made by passing the weft thread over one or more warp threads then under two or more warp threads and so on, with a "step", or offset, between rows to create the characteristic diagonal pattern. Because of this structure, twill generally drapes well. Classification Twill weaves can be classified from four points of view: According to the stepping: Warp-way: 3/1 warp-way twill, etc. Weft-way: 2/3 weft-way twill, etc. According to the direction of twill lines on the face of the fabric: S-twill, or left-hand twill weave: 2/1 S, etc. Z-twill, or right-hand twill weave: 3/2 Z, etc. According to the face yarn (warp or weft): Warp-face twill weave: 4/2 S, etc. Weft-face twill weave: 1/3 Z, etc. Double-face twill weave: 3/3 Z, etc. According to the nature of the produced twill line: Simple twill weave: 1/2 S, 3/1 Z, etc. Expanded twill weave: 4/3 S, 3/2 Z, etc. Multiple twill weave: 2/3/3/1 S, etc. Structure In a twill weave, each weft or filling yarn floats across the warp yarns in a progression of interlacings to the right or left, forming a pattern of distinct diagonal lines. This diagonal pattern is also known as a wale. A float is the portion of a yarn that crosses over two or more perpendicular yarns. A twill weave requires three or more harnesses, depending on its complexity, and is the second most basic weave that can be made on a fairly simple loom. Twill weave is often designated as a fraction, such as 2⁄1, in which the numerator indicates the number of harnesses that are raised (and thus threads crossed: in this example, two), and the denominator indicates the number of harnesses that are lowered when a filling yarn is inserted (in this example, one). The fraction is read as "two up, one down" (the fraction for plain weave is 1⁄1). The minimum number of harnesses needed to produce a twill can be determined by totaling the numbers in the fraction; for the example described, the number of harnesses is three. Twill weave can be identified by its diagonal lines. Characteristics Twill fabrics technically have a front and a back side, unlike plain weave, whose two sides are the same. The front side of the twill is called the "technical face", and the back the "technical back". The technical face side of a twill weave fabric is the side with the most pronounced wale; it is usually more durable and more attractive, is most often used as the fashion side of the fabric, and is the side visible during weaving. If there are warp floats on the technical face (i.e. if the warp crosses over two or more wefts), there will be filling floats (the weft will cross over two or more warps) on the technical back. If the twill wale goes up to the right on one side, it will go up to the left on the other side. Twill fabrics have no "up" and "down" as they are woven. Sheer fabrics are seldom made with a twill weave. Because a twill surface already has interesting texture and design, printed twills (where a design is printed on the cloth) are much less common than printed plain weaves. When twills are printed, this is typically done on lightweight fabrics. Soiling and stains are less noticeable on the uneven surface of twills than on a smooth surface, such as plain weaves, and as a result twills are often used for sturdy work clothing and for durable upholstery—denim, for example, is a twill. 
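The stepping rule behind these fraction designations is simple enough to state in code. The sketch below (an added illustration; the function and symbols are ours) prints the interlacing grid of an up/down twill, with '#' marking a raised warp end and '.' a lowered one:

```python
def twill(up, down, picks=8, ends=12, z=True):
    """Print the interlacing grid of an up/down twill weave."""
    repeat = up + down              # one full cycle of raised + lowered ends
    step = 1 if z else -1           # Z-twill offsets one way, S-twill the other
    for pick in range(picks):       # each row is one weft pick
        print("".join("#" if (end - step * pick) % repeat < up else "."
                      for end in range(ends)))

twill(2, 1)   # a 2/1 twill: two ends up, one down, offset by one per pick
```

Shifting the raised group by one end on every successive pick is exactly the "step" described above, and it is what produces the diagonal wale.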
In addition, twill's durability, wrinkle-resistance, and low maintenance make it ideal for a range of other items, such as jackets, pants, backpacks, and even draperies. The fewer interlacings in twills as compared to other weaves allow the yarns to move more freely, and therefore they are softer and more pliable, and drape better than plain-weave textiles. Twills also recover from creasing better than plain-weave fabrics do. When there are fewer interlacings, the yarns can be packed closer together to produce high-count fabrics. With higher counts, including high-count twills, the fabric is more durable, and is air- and water-resistant. Twills can be divided into even-sided, warp-faced, and weft-faced. Even-sided twills have the same amount of warp and weft threads visible on both sides of the fabric. Warp-faced twills have more warp threads visible on the face side, and weft-faced twills have more weft threads visible on the face side. Even-sided twills include foulard or surah, herringbone, houndstooth, serge, sharkskin, and twill flannel. Warp-faced twills include cavalry twill, chino, covert, denim, drill, fancy twill, gabardine, and lining twill.
Technology
Weaving
null
703648
https://en.wikipedia.org/wiki/Mumbai%20Metro
Mumbai Metro
The Mumbai Metro is a rapid transit system serving the city of Mumbai and the wider Mumbai Metropolitan Region in Maharashtra, India. While the Maharashtra Metro Rail Corporation Limited is responsible for all metro rail projects being developed in Maharashtra, except for those in the Mumbai Metropolitan Area, the Mumbai Metropolitan Region Development Authority is the authority responsible for maintaining the metro system in the Greater Mumbai area. The rapid transit metro system is designed to reduce traffic congestion in the city and supplement the overcrowded Mumbai Suburban Railway network. It is being built in three phases, over 15 years, with overall completion expected in October 2026. The Mumbai Metro is the fourth longest operational metro network in India with an operational length of as of January 2023. When completed, the core system will comprise sixteen high-capacity metro railway lines, spanning a total of more than (25% underground, the rest elevated, with a minuscule portion built at-grade) and serviced by 350 stations. Line 1 of the Mumbai Metro is operated by Mumbai Metro One Private Limited (MMOPL), a joint venture between Reliance Infrastructure (50%), the Mumbai Metropolitan Region Development Authority (50%), and formerly RATP Dev Transdev Asia (5%) and Hong Kong MTR. While lines 2, 4, 5, 6, 7 and their extensions will be built and operated by the Mumbai Metropolitan Region Development Authority (MMRDA), the completely underground Line 3 and Line 11 will be built by Mumbai Metro Railway Corporation Limited (MMRCL). In June 2006, Prime Minister Manmohan Singh laid the foundation stone for the first phase of the Mumbai Metro project, although construction work began in February 2008. A successful trial run was conducted in May 2013, and the system's first line commenced operations on 8 June 2014. Many metro projects were delayed because of late environmental clearances, land acquisition troubles and protests. After nearly eight years, two new metro corridors, 2A and 7, were inaugurated on 2 April 2022, and are now operational. On 5 October 2024, the 12 km underground BKC to Aarey Jogeshwari-Vikhroli Link Road section of the Aqua Line was inaugurated. Additionally, there are 8 other metro lines currently under construction in the city. History Being the capital of Maharashtra, Mumbai is among the largest cities in the world, with a total metropolitan area population of over 2 crore (20 million) as of 2011, and a population growth rate of around 2% per annum. Mumbai has the advantage of a high modal share (88%) in favour of public mass transport. The existing Mumbai Suburban Railway carries over 70 lakh (7 million) passengers per day, and is supplemented by the Brihanmumbai Electric Supply and Transport (BEST) bus system, which provides feeder services to station-going passengers to allow them to complete their journeys. Until the 1980s, transport in Mumbai was not a big problem. The discontinuation of trams resulted in a direct increase of passenger pressure on the suburban railway network. By 2010, the population of Mumbai had doubled. However, due to the city's geographical constraints and rapid population growth, road and rail infrastructure development has not been able to keep pace with growing demand over the last 4–5 decades. Moreover, the Mumbai Suburban Railway, though extensive, is not built to rapid transit specifications. 
The main objective of the Mumbai Metro is to provide mass rapid transit services to people within an approach distance of between , and to serve the areas not connected by the existing Suburban Rail network. The master plan unveiled by the MMRDA in 2004 encompassed a total of of track, of which would be underground. The Mumbai Metro was proposed to be built in three phases, at an estimated cost of 19,525 crore. In September 2009, the proposed Hutatma Chowk–Ghatkopar line was reduced to a line between Hutatma Chowk and Carnac Bunder. In 2011, the MMRDA unveiled plans for an extended Colaba-Bandra-SEEPZ metro line. According to its earlier plans, a Colaba-to-Bandra metro line was to be constructed, running underground for from Colaba to Mahalaxmi, and then on an elevated track from Mahalaxmi to Bandra. However, the MMRDA decided to increase ridership on the line by running it out past Bandra to Chhatrapati Shivaji Maharaj International Airport. The Colaba-Bandra-SEEPZ line will be built at a cost of , and will be the city's first underground metro line. It will have 27 stations. On 27 February 2012, the Union Government gave in-principle approval to the plan for Line 3. Money for the project is being borrowed from the Japan International Cooperation Agency (50%), the state government (16%), the central government (14%), and others. In April 2012, the MMRDA announced plans to grant the Mumbai Metro Rail Company increased management autonomy, in an effort to enhance the project's operational efficiency. In July 2012, the MMRDA announced plans to add more metro lines to its existing plan, including a line parallel to the Western Express Highway from Bandra to Dahisar. This line is expected to reduce the passenger load on the Western Line and vehicle traffic on the highway. Another proposed route, the , 28-station Wadala–Kasarvadavali line, received in-principle approval from the state government in 2013. The MMRDA also intends to convert the proposed Lokhandwala–SEEPZ–Kanjurmarg monorail route into a metro line. The Mumbai Metro master plan was revised by the MMRDA in 2012, increasing the total length of the proposed network to . In June 2015, two new lines were proposed: a line from Andheri West to Dahisar West, and a line from BKC to Mankhurd. The following table shows the updated master plan unveiled by the MMRDA: On 18 February 2013, the MMRDA signed a memorandum of understanding with Transport for London, the transit authority in Greater London. The arrangement will facilitate the exchange of information, personnel and technology in the transportation sector. The revised Mumbai Metro master plan had proposed a line along the Thane-Teen Haath Naka-Kaapurbavdi-Ghodbunder Road route. The feasibility report concluded that the line was not feasible, as most residents of Thane and its neighbouring areas travelled to Mumbai for work daily. On 14 June 2014, Chief Minister Prithviraj Chavan announced that the MMRDA was instead examining a proposal for a metro line along the newly proposed Wadala-Ghatkopar-Teen Haat Naka route. RITES will prepare the detailed project report and is expected to submit it by August 2014. The preliminary report proposed a line with 29 stations, to be built at an estimated cost of 22,000 crore. This would be the fourth line of the metro, after the previously proposed Charkop-Dahisar route was merged with the Charkop-Bandra-Mankhurd route to form Line 2. In May 2015, the MMRDA said that it had begun planning for the Andheri-Dahisar line and Seepz-Kanjurmarg. 
Both lines are expected to be elevated, although the latter could be constructed underground if a proposal to extend Line 3 to Kanjurmarg is undertaken. DPRs for both lines had been prepared in 2004, along with the master plan, and the MMRDA would now update the DPRs. The agency also intends to construct Line 9 of the metro as an underground corridor from Sewri to Worli. However, planning for the project will only begin after construction of the proposed Mumbai Trans Harbour Link commences. In a report on 14 November 2014 about the cancellation of the PPP agreement for Line 2, Mint quoted a senior MMRDA official: "as decided earlier, all future lines of Mumbai Metro will be constructed by the Mumbai Metro Railway Corp. Ltd (MMRCL), a joint venture between the state government and the Union government." On 20 May 2015, Chief Minister Devendra Fadnavis requested officials to consider constructing the Charkop-Bandra-Dahisar and the Wadala-Thane-Kasarvadavali lines as elevated corridors. Although both corridors had been planned as elevated lines in the Mumbai Metro master plan, the previous Congress-NCP government had decided to construct all metro lines underground, after delays and difficulties caused by acquiring land for Line 1. However, Fadnavis believed that the two proposed lines could be constructed more quickly and cheaply as elevated corridors, given the proposed alignment. The Government plans to implement all future metro lines (except Line 3) as elevated corridors. On 15 June 2015, the MMRDA announced that it would implement Line 2 of the metro in three parts. The Andheri-Dahisar line will have connectivity with the existing Line 1 and the proposed JVLR-Kanjurmarg line. In June 2015, Fadnavis announced that he would request the Delhi Metro Rail Corporation (DMRC) to assist in the implementation of the Mumbai Metro. He said that he intended to expand the metro system by before the state assembly elections in October 2019. In July 2015, UPS Madan announced that the State Government had formally appointed the DMRC to revise and update the Mumbai Metro master plan. The DMRC will prepare DPRs for the Andheri East to Dahisar East, Jogeshwari to Kanjurmarg, Andheri West to Dahisar West and Bandra Kurla Complex to Mankhurd lines. All four lines are proposed to be elevated and constructed as cash contracts. The lines are estimated to cost a total of , or about per km. In addition, the planned Line 3 and the Wadala-Ghatkopar-Thane-Kasarvadavali line of the metro would also be constructed. Fadnavis announced on 8 April 2017 that the government was considering a circular metro loop line along the Kalyan-Dombivli-Taloja route. The proposed line would link Kalyan and Shil Phata with 13 stations, bringing metro connectivity to Kalyan East, Dombivli, Ambernath and Diva. The Mumbai Metro resumed services for the general public on 19 October 2020, after being shut down since March 2020 due to the COVID-19 pandemic. Protests and delay The project has faced significant and costly legal challenges. In 2018, protestors rallied to protect trees that were to be chopped down as part of construction plans, after the construction of a metro car shed was proposed in the forest area of Aarey Colony. The protests resulted in several arrests. Protests again flared in 2022. Lines Line 1 connects Versova in the Western Suburbs to Ghatkopar in the Central Suburbs, covering a distance of .
It is fully elevated, and consists of 12 stations. Work on the Versova-Andheri-Ghatkopar corridor, a part of Phase I, began on 8 February 2008. A crucial bridge on the project was completed at the end of 2012. The line opened for service on 8 June 2014. Line 2 is being executed in two phases. Prime Minister Narendra Modi launched phase 2A in January 2023. The long 2A corridor was executed by the DMRC on behalf of the MMRDA. The corridor has 17 stations (Dahisar (West) to D N Nagar), and cost . The line had been partially operational since 2 April 2022 and became fully operational on 19 January 2023. The 2B line will be long, and is estimated to cost , including a land acquisition cost of . This section will have 22 stations (D N Nagar to Mandale), work on which began in mid-2018. A fully underground section of the metro, Line 3 is planned to be long, and will have 27 stations once fully completed. The metro line will connect the Cuffe Parade business district in the south of Mumbai with SEEPZ and Aarey in the north. It will also pass through the domestic and international airports of Mumbai, for which the airport operator (GVK) has promised an equity infusion of . The cost of this corridor was estimated at . The original deadline for the project was 2016, but it was extended due to several delays, including the COVID-19 pandemic. A section of the corridor from Aarey Colony to BKC was inaugurated on 5 October 2024 by Prime Minister Narendra Modi. This section cost ₹14,120 crore, and consists of 10 stations in a 12.44 km stretch. The remaining section from BKC to Cuffe Parade is expected to be completed by March 2025. The line will have interchanges with Line 1 at Marol Naka, the under-construction Line 6 at Aarey JVLR, the under-construction Line 7A and proposed Line 8 at CSMI Airport T-2, the under-construction Line 2B at BKC, the Central Line and proposed Line 11 at Chhatrapati Shivaji Terminus, the Mumbai Monorail at Mahalaxmi (Jacob Circle), and the Western Line at Santacruz, Dadar, Mahalaxmi, Mumbai Central, Grant Road and Churchgate. Line 4 of the Mumbai Metro is planned to be a long elevated corridor, covering 32 stations from Kasarvadavali (beyond Thane) in the north to Wadala in the south. It is estimated to cost . This project will help connect the city of Thane with Mumbai via an alternative mode of public transport. The line was approved by the Maharashtra Government on 27 September 2016, and construction work on this line began in mid-2018. The Asian Infrastructure Investment Bank has extended a multilateral loan of for this project. It is expected to be completed by 2025 at a cost of ₹15,498 crore, with an anticipated daily ridership of 13.4 lakh (1,340,000). The -long Thane-Bhiwandi-Kalyan Metro-V corridor will have 17 stations and will cost ₹8,416 crore. It will be an elevated corridor. It will connect Thane to Bhiwandi and Kalyan in the eastern suburbs, with a further extension to Taloja in Navi Mumbai (Line 12). The line was approved by Chief Minister Devendra Fadnavis on 19 October 2016. The 12.811 km Thane – Bhiwandi section is under construction, while Bhiwandi – Kalyan is on hold (route modification in progress). The corridor is being constructed by Afcons in one package from Kalyan to Bhiwandi, including 7 stations. The long Lokhandwala-Jogeshwari-Vikhroli-Kanjurmarg Metro-VI corridor will have 13 stations and cost ₹6,672 crore. It will be an elevated corridor. It will connect Lokhandwala Complex in Andheri in the western suburbs to Vikhroli and Kanjurmarg in the eastern suburbs.
The stations include Lokhandwala Complex, Adarsh Nagar, Momin Nagar, JVLR, Shyam Nagar, Mahakali Caves, SEEPZ Village, Saki Vihar Road, Ram Baug, Powai Lake, IIT Powai, Kanjurmarg (W), and Vikhroli-Eastern Express Highway. Metro 6 will provide interchange with Metro 2 at Infinity Mall in Andheri, with Metro 3 at SEEPZ, with Metro 4 and the Mumbai Suburban Railway at Jogeshwari and Kanjurmarg, and with Metro 7 at JVLR. The line was approved by Chief Minister Devendra Fadnavis on 19 October 2016. The MMRDA issued a tender to conduct a detailed aerial mapping survey of the alignment in April 2017. Authorities will also be able to determine the location of trees along the alignment to an accuracy of up to 10 cm using differential GPS (DGPS), while a digital aerial triangulation system will help determine the types of trees, their heights and diameters. Line 6's first section is expected to open in 2025. This corridor is long, and runs from Dahisar (East) in the north to Andheri (East) in the south, with a further extension to Bhayander in the north and Mumbai International Airport Terminal 2 in the south. The line is partially elevated (under construction, with completion slated for 2019), and partially underground (approved, with construction planned to begin in 2018). The elevated section is expected to cost , while the outlay for the recently approved underground section is . Civil works, including viaduct and station works, are being executed by NCC, Simplex Infrastructure, and J. Kumar Infraprojects. The corridor has been partially operational since 2 April 2022 and fully operational since 19 January 2023. A section of Line 7 from Dahisar East to Aarey (along with the section of Line 2 from Dahanukarwadi to Dahisar East) was opened on 2 April 2022 by the then Chief Minister Uddhav Thackeray. The final section of the line (along with the final section of Line 2A from Dahanukarwadi to Andheri West) was inaugurated on 19 January 2023 by Prime Minister Narendra Modi. This is a proposed metro line between Chhatrapati Shivaji Maharaj International Airport and the under-construction Navi Mumbai International Airport. It will connect Mumbai's airport to the upcoming Navi Mumbai airport over a length of approximately 32 km. It is expected to cost ₹15,000 crore, with construction finishing by October 2026. The line has an anticipated daily ridership of 3 lakh. Extension of Line 7 (Red Line) Metro Line 9 will run between Dahisar and Mira-Bhayandar. It will have 10 stations, all elevated. This under-construction line from Dahisar to Mira-Bhayandar will cut travel distance between Mira-Bhayandar and the Mumbai suburbs by approximately 30 km. It is an extension of Line 7. Phase 1 of the line is now expected to be completed by June 2025. Extension of Line 4 (Green Line) In February 2017, the MMRDA announced that the DMRC was preparing a detailed project report (DPR) on Metro 10, a proposed elevated extension of Metro 4 from Gaimukh to Shivaji Chowk. The project is estimated to cost , with an estimated ridership of 2.5 lakh (250,000). It will be completed by 2025. Extension of Line 4 (Green Line) In November 2018, the MMRDA cleared Metro 11, which connects Wadala with CSMT. This is considered an extension of Metro 4. The length of the line is , and it is projected to cost ₹8,739 crore (US$1.207 billion). The line will be partially underground, with 8 underground and 2 elevated stations. It will be completed by October 2026.
Extension of Line 5 (Orange Line) The MMRDA has planned Metro 12, which will connect Kalyan with Taloja. It is an extension of Line 5. In another boost to connectivity, the Mumbai Metropolitan Region Development Authority (MMRDA) has decided to connect Mumbai, Thane and Navi Mumbai with a Metro line. Construction started in April 2024 and is estimated to be completed in about three years. Kalyan station and Kalyan APMC station have been added to the route. A further line is proposed to connect Mira Road with Virar. The project length is and the estimated cost of the project is ₹6,900 crore. Another, approved, metro project will connect Vikhroli with Kanjurmarg and further to Ambernath-Badlapur. It will have an intersection at Kanjurmarg with Line 6, the Pink Line. This project is now at the DPR stage. The project length is and the estimated cost of the project is ₹13,500 crore. It will be completed by October 2026. Network Lines on the Mumbai Metro are currently identified by numbers. In March 2016, MMRDA Metropolitan Commissioner U.P.S. Madan announced that all lines on the system would be colour-coded after more lines are opened. Rolling stock Reliance Infrastructure consulted a number of major international rolling stock builders to provide the train fleet for the Mumbai Metro. Bidders for the contract included established metro-vehicle manufacturers such as Kawasaki, Alstom, Siemens and Bombardier, but CRRC Nanjing Puzhen of China was ultimately chosen to supply rolling stock for ₹600 crore. In May 2008, CSR Nanjing completed the first 16 trains, each comprising four cars. The first ten trains were reported to be ready for operation in January 2013. The coaches are fire retardant, air-conditioned and designed to reduce noise and vibration, and feature both high seating capacity and ample space for standing passengers. They are outfitted with a number of features for safety and convenience, including LCD screens, 3D route maps, first-aid kits, wheelchair facilities, fire-fighting equipment and intercom systems permitting communication with the train driver. Each coach furthermore features a black box to assist in accident investigations. The trains are capable of carrying over 1,100 passengers in a four-car unit, with each carriage being approximately wide. In 2018, the Mumbai Metro Rail Corporation chose Alstom to supply 31 eight-car trains for the Aqua Line (Line 3). These trains are capable of driverless operation and were built at Alstom's factory in Sri City, Andhra Pradesh. In 2018, the Mumbai Metropolitan Region Development Authority, which will operate all metro lines except Line 3, awarded a tender to Bharat Earth Movers Ltd. (BEML) to supply 63 trainsets (378 coaches) for the Yellow Line (Line 2) and Red Line (Line 7) at a cost of ₹3,015 crore (US$427.33 million). Capable of driverless operation, the trains are manufactured at BEML's factory in Bengaluru; the first trainset for the Yellow Line arrived in Mumbai on 27 January 2021, with the remaining sets due by 2022. On 17 January 2021, Bombardier won the tender to supply 234 driverless coaches in six-car configuration for the Green Line (Line 4) of the Mumbai Metro. However, the contract was cancelled in March 2022 due to delays and uncertainties of the project. Power supply The Mumbai Metro runs on alternating current (AC), which is typically more labour- and cost-intensive.
MMRDA joint project director Dilip Kawathkar stated that AC power was chosen "after a proper study by a team of experts" which found that the AC model was "a better option". Bidders for Line 3 were reportedly in favour of the DC model. Experts believe that the decision to use AC will escalate the project cost of underground lines by 15%, since more digging is required for the rail to work on AC. Signalling and communications The Mumbai Metro features an advanced signalling system, including an automatic train protection system (ATPS) and automated signalling to control train movements on Line 1. A four-minute service interval is anticipated on the route. Siemens supplied the signalling systems required for the project, while Thales Group supplied the Metro's communication systems. The network's signalling and train control systems are based on LZB 700M technology. Ridership On 21 October 2019, exactly 1,960 days (approximately 5 years) after Mumbai Metro Line 1's inception, the system crossed 60 crore (600 million) passengers, with an average daily ridership of around 4,50,000 (4.5 lakh) passengers. Railway map of Mumbai including metro List of Mumbai Metro stations
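The rolling stock and signalling figures above together imply the line's design throughput. A minimal sketch in Python, combining the four-minute service interval with the roughly 1,100-passenger four-car trains quoted above; the function and result are illustrative, not official MMRDA capacity figures:

```python
# Illustrative only: peak line throughput implied by the headway and train
# capacity quoted in the article (4-minute interval, ~1,100 passengers per
# four-car train). Real capacity also depends on dwell times and loading.

def line_capacity_per_hour(headway_min: float, passengers_per_train: int) -> float:
    """Passengers per hour per direction at a given headway."""
    trains_per_hour = 60 / headway_min
    return trains_per_hour * passengers_per_train

print(f"{line_capacity_per_hour(4, 1100):,.0f} passengers/hour/direction")  # 16,500
```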
Technology
India
null
3316408
https://en.wikipedia.org/wiki/Wattleseed
Wattleseed
Wattleseeds are the edible seeds from any of 120 species of Australian Acacia that were traditionally used as food by Aboriginal Australians, and eaten either green (and cooked) or dried (and milled to a flour) to make a type of bush bread. Acacia murrayana and A. victoriae have been studied as candidates for commercial production. Acacia seed flour has recently gained popularity in Australia due to its high nutritional content, hardiness, and low toxicity. Due to its low glycemic index, it is suitable for incorporation into diabetic foods. It is used for its chocolate, coffee and hazelnut flavour profile. It is added to ice cream, granola, bread, chocolate and coffee, and used by chefs to enhance sauces and dairy desserts. History Wattleseeds were a relatively recent addition to Aboriginal Australian diets, with specialized seed grinders first dated to 1,000 years ago. Many species require pre-treatment of seeds through parching at high temperatures, soaking, or cracking using seed grinders. Following the colonization of Australia, consumption of wattleseed decreased but was not eliminated. As early as 1814, Australian Acacia species were recognized for their tannin content and developed into plantation crops across the British Empire. Several species, including Acacia colei and A. mearnsii, have been maintained for their wattleseed in Niger and South Africa respectively. Cultivation Wattleseed Acacia are perennial woody crops of varying age and size, with some reaching 4 m tall and 5 m across. Their large size and multiple stems are an impediment to harvesting and have resulted in the development of several strategies for collecting seed pods, including 'finger stripping' of pods off foliage, 'butt shaking' of the tree to dislodge pods, and whole-biomass harvesting. The large number of species of Acacia has been seen as beneficial for the development of wattleseed as a commercial crop. Trees can be arranged in either alley or phase arrangements for plantations, but Acacia have also seen use as windbreaks and groundcover for high-salinity areas. Preparation can involve deep ripping of the soil to allow water to soak in, as well as species-specific pre-treatments. Water stress can induce flowering and increase seed production when nutrient availability is sufficient. The longevity of trees depends on spacing, climate, and species. In A. victoriae, trees do not begin flowering until 2–4 years old.
Biology and health sciences
Pseudocereals
Plants
12438990
https://en.wikipedia.org/wiki/Gas-fired%20power%20plant
Gas-fired power plant
A gas-fired power plant, sometimes referred to as a gas-fired power station, natural gas power plant, or methane gas power plant, is a thermal power station that burns natural gas to generate electricity. Gas-fired power plants generate almost a quarter of world electricity and are significant sources of greenhouse gas emissions. However, they can provide seasonal, dispatchable energy generation to compensate for variable renewable energy deficits, where hydropower or interconnectors are not available. In the early 2020s batteries became competitive with gas peaker plants. Basic concepts: heat into mechanical energy into electrical energy A gas-fired power plant is a type of fossil fuel power station in which chemical energy stored in natural gas, which is mainly methane, is converted successively into: thermal energy, mechanical energy and, finally, electrical energy. Although they cannot exceed the Carnot cycle limit for conversion of heat energy into useful work, the excess heat, i.e. the difference between the chemical energy used up and the useful work generated, may be used in cogeneration plants to heat buildings, to produce hot water, or to heat materials on an industrial scale. Plant types Gas turbine Industrial gas turbines differ from aeronautical designs in that the frames, bearings, and blading are of heavier construction. They are also much more closely integrated with the devices they power—often an electric generator—and the secondary-energy equipment that is used to recover residual energy (largely heat). They range in size from portable mobile plants to large, complex systems weighing more than a hundred tonnes housed in purpose-built buildings. When the gas turbine is used solely for shaft power, its thermal efficiency is about 30%. However, it may be cheaper to buy electricity than to generate it. Therefore, many engines are used in CHP (combined heat and power) configurations that can be small enough to be integrated into portable container configurations. Gas turbines can be particularly efficient when waste heat from the turbine is recovered by a heat recovery steam generator (HRSG) to power a conventional steam turbine in a combined cycle configuration. The 605 MW General Electric 9HA achieved a 62.22% efficiency rate with temperatures as high as . For 2018, GE offered its 826 MW HA at over 64% efficiency in combined cycle due to advances in additive manufacturing and combustion breakthroughs, up from 63.7% in 2017 orders, and on track to achieve 65% by the early 2020s. In March 2018, GE Power achieved a 63.08% gross efficiency for its 7HA turbine. Aeroderivative gas turbines can also be used in combined cycles, leading to a higher efficiency, but it will not be as high as that of a specifically designed industrial gas turbine. They can also be run in a cogeneration configuration: the exhaust is used for space or water heating, or drives an absorption chiller that cools the inlet air and increases the power output, a technology known as turbine inlet air cooling. Another significant advantage is their ability to be turned on and off within minutes, supplying power during peak, or unscheduled, demand. Since single cycle (gas turbine only) power plants are less efficient than combined cycle plants, they are usually used as peaking power plants, which operate anywhere from several hours per day to a few dozen hours per year—depending on the electricity demand and the generating capacity of the region.
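To see how the 60%-plus combined-cycle figures quoted above emerge from a gas turbine that is only about 35–40% efficient on its own, the steam cycle's recovery of exhaust heat can be sketched with round, assumed component efficiencies (not data for any specific machine):

```python
# A minimal sketch of combined-cycle efficiency: the steam (Rankine) cycle
# recovers work from the exhaust heat fraction (1 - eta_gas) that the gas
# turbine rejects. All efficiencies are assumed round numbers.

def combined_cycle_efficiency(eta_gas: float, eta_hrsg: float, eta_steam: float) -> float:
    """Gas-turbine work plus steam-cycle work recovered via the HRSG."""
    return eta_gas + (1 - eta_gas) * eta_hrsg * eta_steam

print(f"{combined_cycle_efficiency(0.38, 0.90, 0.40):.1%}")  # ~60.3%
```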
In areas with a shortage of base-load and load following power plant capacity, or with low fuel costs, a gas turbine powerplant may regularly operate most hours of the day. A large single-cycle gas turbine typically produces 100 to 400 megawatts of electric power and has 35–40% thermodynamic efficiency. Simple cycle gas-turbine In a simple cycle gas-turbine, also known as an open-cycle gas-turbine (OCGT) generator, hot gas drives a gas turbine to generate electricity. This type of plant is relatively cheap to build and can start very quickly, but due to its lower efficiency is at most only run for a few hours a day as a peaking power plant. Combined cycle gas-turbine (CCGT) CCGT power plants consist of simple cycle gas-turbines, which use the Brayton cycle, followed by a heat recovery steam generator and a steam turbine, which use the Rankine cycle. The most common configuration is two gas-turbines supporting one steam turbine. They are slightly more expensive than simple cycle plants but can achieve efficiencies up to 55% and dispatch times of around half an hour. Reciprocating engine Reciprocating internal combustion engines tend to be under 20 MW, thus much smaller than other types of natural gas-fired electricity generator, and are typically used for emergency power or to balance variable renewable energy such as wind and solar. Greenhouse gas emissions Relatively efficient gas-fired power stations – such as those based on combined cycle gas turbines – emit CO2 at roughly half the rate of coal-fired power stations per kilowatt-hour of electricity generated, but much more than nuclear power plants and renewable energy. However, the more flexible simple-cycle turbines have a significantly higher emissions intensity, and some older gas turbines can have emissions intensities comparable with even the most emissions-intensive coal power stations. Full life-cycle emissions of gas-fired power stations are further increased by methane emissions from gas leaks associated with gas production and distribution pipelines, as well as from significant venting of waste CO2 after amine gas treating if carbon capture and storage is employed. Carbon capture Very few power plants have carbon capture and storage. Hydrogen Gas-fired power plants can be modified to run on hydrogen, and according to General Electric a more economically viable option than CCS would be to use more and more hydrogen in the gas turbine fuel. Hydrogen can at first be created from natural gas through steam reforming, or by heating to precipitate carbon, as a step towards a hydrogen economy, thus eventually reducing carbon emissions. However, others think low-carbon hydrogen (such as natural hydrogen) should be used for things which are harder to decarbonize, such as making fertilizer, so there may not be enough for electricity generation. Economics New plants Sometimes a new battery storage power station together with solar power or wind power is cheaper in the long term than building a new gas plant, as the gas plant risks becoming a stranded asset. Existing plants A few gas-fired power plants are being retired because they are unable to stop and start quickly enough. Despite the falling cost of variable renewable energy, most existing gas-fired power plants remain profitable, especially in countries without a carbon price, due to their dispatchable generation and because shale gas and liquefied natural gas prices have fallen since they were built.
Even in places with a carbon price, such as the EU, existing gas-fired power stations remain economically viable, partly due to increasing restrictions on coal-fired power because of its pollution. Politics Even when replacing coal power, the decision to build a new plant may be controversial.
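As a back-of-envelope check on the emissions comparison in the greenhouse gas section above, the CO2 intensity of gas generation can be estimated from methane's combustion stoichiometry and the plant efficiency. A minimal sketch, assuming a textbook lower heating value for methane of about 50 MJ/kg; the constants and efficiencies are assumptions, not figures from this article:

```python
# Estimating g CO2 per kWh of electricity for a methane-fired plant.
# CH4 + 2 O2 -> CO2 + 2 H2O, so 16 g of CH4 yields 44 g of CO2.

LHV_CH4_MJ_PER_KG = 50.0          # lower heating value of methane (assumed)
KG_CO2_PER_KG_CH4 = 44.0 / 16.0   # from molar masses of CO2 and CH4

def co2_intensity_g_per_kwh(efficiency: float) -> float:
    """Grams of CO2 emitted per kWh of electricity at a given efficiency."""
    kwh_per_kg_fuel = LHV_CH4_MJ_PER_KG * efficiency / 3.6  # 3.6 MJ = 1 kWh
    return KG_CO2_PER_KG_CH4 * 1000 / kwh_per_kg_fuel

print(f"CCGT (55%): {co2_intensity_g_per_kwh(0.55):.0f} g CO2/kWh")  # ~360
print(f"OCGT (35%): {co2_intensity_g_per_kwh(0.35):.0f} g CO2/kWh")  # ~566
```

The roughly 360 g/kWh result for an efficient combined-cycle plant is about half of typical coal-plant intensities, consistent with the comparison made in the greenhouse gas section.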
Technology
Power generation
null
12439068
https://en.wikipedia.org/wiki/Medical%20test
Medical test
A medical test is a medical procedure performed to detect, diagnose, or monitor diseases, disease processes, susceptibility, or to determine a course of treatment. Medical tests such as physical and visual exams, diagnostic imaging, genetic testing, and chemical and cellular analysis (relating to clinical chemistry and molecular diagnostics) are typically performed in a medical setting. Types of tests By purpose Medical tests can be classified by their purposes, including diagnosis, screening or monitoring. Diagnostic A diagnostic test is a procedure performed to confirm or determine the presence of disease in an individual suspected of having a disease, usually following the report of symptoms, or based on other medical test results. This includes posthumous diagnosis. Examples of such tests are: Using nuclear medicine to examine a patient suspected of having a lymphoma. Measuring the blood sugar in a person suspected of having diabetes mellitus after periods of increased urination. Taking a complete blood count of an individual experiencing a high fever to check for a bacterial infection. Monitoring electrocardiogram readings on a patient with chest pain to diagnose or determine any heart irregularities. Screening Screening refers to a medical test or series of tests used to detect or predict the presence of disease in at-risk individuals within a defined group such as a population, family, or workforce. Screenings may be performed to monitor disease prevalence, manage epidemiology, aid in prevention, or strictly for statistical purposes. Examples of screenings include measuring the level of TSH in the blood of a newborn infant as part of newborn screening for congenital hypothyroidism, checking for lung cancer in non-smoking individuals who are exposed to second-hand smoke in an unregulated working environment, and Pap smear screening for prevention or early detection of cervical cancer. Monitoring Some medical tests are used to monitor the progress of, or response to, medical treatment. By method Most test methods can be classified into one of the following broad groups: Patient observations, which may be photographed or recorded Questions asked when taking an individual's medical history Tests performed in a physical examination Radiologic tests, in which, for example, x-rays are used to form an image of a body target. These tests often involve administration of a contrast agent. In vivo diagnostics, which test in the body, such as: Manometry Administering a diagnostic agent and measuring the body's response, as in the gluten challenge test, contraction stress test, bronchial challenge test, oral food challenge, or the ACTH stimulation test. In vitro diagnostics, which test a sample of tissue or bodily fluids, such as: Liquid biopsy Microbiological culturing, which determines the presence or absence of microbes in a sample from the body, and is usually targeted at detecting pathogenic bacteria. Genetic testing Blood sugar level Liver function testing Calcium testing Testing for electrolytes in the blood, such as sodium, potassium, creatinine, and urea By sample location In vitro tests can be classified according to the location of the sample being tested, including: Blood tests Urine tests, including naked eye exam of the urine Stool tests, including naked eye exam of the feces Sputum (phlegm), including naked eye exam of the sputum Accuracy and precision Accuracy of a laboratory test is its correspondence with the true value.
Accuracy is maximized by calibrating laboratory equipment with reference material and by participating in external quality control programs. Precision of a test is its reproducibility when it is repeated on the same sample. An imprecise test yields widely varying results on repeated measurement. Precision is monitored in the laboratory by using control material. Detection and quantification Tests performed in a physical examination are usually aimed at detecting a symptom or sign, and in these cases, a test that detects a symptom or sign is designated a positive test, and a test that indicates the absence of a symptom or sign is designated a negative test, as further detailed in a separate section below. A quantification of a target substance, a cell type or another specific entity is a common output of, for example, most blood tests. Such tests answer not only whether a target entity is present or absent, but also how much is present. In blood tests, the quantification is relatively well specified, such as being given in mass concentration, while most other tests may be quantifications as well, although less specified, such as a sign of being "very pale" rather than "slightly pale". Similarly, radiologic images are technically quantifications of the radiologic opacity of tissues. Especially in the taking of a medical history, there is no clear limit between a detecting or quantifying test and merely descriptive information about an individual. For example, questions regarding the occupation or social life of an individual may be regarded as tests that can be regarded as positive or negative for the presence of various risk factors, or they may be regarded as "merely" descriptive, although the latter may be at least as clinically important. Positive or negative The result of a test aimed at detection of an entity may be positive or negative: this has nothing to do with a bad prognosis, but rather means that the parameter that was evaluated was present or not. For example, a negative screening test for breast cancer means that no sign of breast cancer could be found (which is in fact very positive for the patient). The classification of tests into either positive or negative gives a binary classification, with the resultant ability to apply Bayesian probability and compute performance metrics of tests, including calculations of sensitivity and specificity. Continuous values Tests whose results are of continuous values, such as most blood values, can be interpreted as they are, or they can be converted to binary ones by defining a cutoff value, with test results being designated as positive or negative depending on whether the resultant value is higher or lower than the cutoff. Interpretation In the finding of a pathognomonic sign or symptom it is almost certain that the target condition is present, and in the absence of a sine qua non sign or symptom it is almost certain that the target condition is absent. In reality, however, the subjective probability of the presence of a condition is never exactly 100% or 0%, so tests are rather aimed at estimating a post-test probability of a condition or other entity. Most diagnostic tests basically use a reference group to establish performance data such as predictive values, likelihood ratios and relative risks, which are then used to interpret the post-test probability for an individual. In monitoring tests of an individual, the test results from previous tests on that individual may be used as a reference to interpret subsequent tests.
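The binary classification and post-test probability just described lend themselves to a direct Bayesian calculation. A minimal sketch, converting a pre-test probability and a positive or negative result into a post-test probability via likelihood ratios; the sensitivity, specificity and pre-test probability below are hypothetical, not values for any particular test:

```python
# Post-test probability from a binary test result, using likelihood ratios:
# LR+ = sensitivity / (1 - specificity), LR- = (1 - sensitivity) / specificity.

def post_test_probability(pre_test: float, sensitivity: float,
                          specificity: float, positive: bool) -> float:
    """Apply Bayes' theorem in odds form to a binary test result."""
    lr = (sensitivity / (1 - specificity) if positive
          else (1 - sensitivity) / specificity)
    pre_odds = pre_test / (1 - pre_test)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical test: 90% sensitive, 95% specific, 10% pre-test probability.
print(post_test_probability(0.10, 0.90, 0.95, positive=True))   # ~0.667
print(post_test_probability(0.10, 0.90, 0.95, positive=False))  # ~0.012
```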
Risks Some medical testing procedures have associated health risks, and some even require general anesthesia, such as mediastinoscopy. Other tests, such as the blood test or Pap smear, have little to no direct risks. Medical tests may also have indirect risks, such as the stress of testing, and riskier tests may be required as follow-up for a (potentially) false positive test result. Consult the health care provider (including physicians, physician assistants, and nurse practitioners) prescribing any test for further information. Indications Each test has its own indications and contraindications. An indication is a valid medical reason to perform the test. A contraindication is a valid medical reason not to perform the test. For example, a basic cholesterol test may be indicated (medically appropriate) for a middle-aged person. However, if the same test was performed on that person very recently, then the existence of the previous test is a contraindication for the test (a medically valid reason not to perform it). Information bias is the cognitive bias that causes healthcare providers to order tests that produce information that they do not realistically expect or intend to use for the purpose of making a medical decision. Medical tests are indicated when the information they produce will be used. For example, a screening mammogram is not indicated (not medically appropriate) for a woman who is dying, because even if breast cancer is found, she will die before any cancer treatment could begin. In a simplified fashion, how much a test is indicated for an individual depends largely on its net benefit for that individual. Tests are chosen when the expected benefit is greater than the expected harm. The net benefit may roughly be estimated by: bn = Λp × ri × (bi − hi) − ht, where: bn is the net benefit of performing a test Λp is the absolute difference between pre- and posttest probability of conditions (such as diseases) that the test is expected to achieve. A major factor for such an absolute difference is the power of the test itself, such as can be described in terms of, for example, sensitivity and specificity or likelihood ratio. Another factor is the pre-test probability, with a lower pre-test probability resulting in a lower absolute difference, with the consequence that even very powerful tests achieve a low absolute difference for very unlikely conditions in an individual (such as rare diseases in the absence of any other indicating sign), but on the other hand, even tests with low power can make a great difference for highly suspected conditions. The probabilities in this sense may also need to be considered in the context of conditions that are not primary targets of the test, such as profile-relative probabilities in a differential diagnostic procedure. ri is the rate of how much probability differences are expected to result in changes in interventions (such as a change from "no treatment" to "administration of low-dose medical treatment"). For example, if the only expected effect of a medical test is to make one disease more likely compared to another, but the two diseases have the same treatment (or neither can be treated), then this factor is very low and the test is probably without value for the individual in this aspect. bi is the benefit of changes in interventions for the individual hi is the harm of changes in interventions for the individual, such as side effects of medical treatment ht is the harm caused by the test itself.
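The rough estimate above can be transcribed directly into code. A minimal sketch with hypothetical inputs; in practice the benefit and harm terms would need to be expressed in a common unit (for example, a utility scale), which the article does not specify:

```python
# Net benefit of a test, b_n = Λp * r_i * (b_i - h_i) - h_t, as defined above.

def net_benefit(delta_p: float, r_i: float, b_i: float,
                h_i: float, h_t: float) -> float:
    """Probability shift, times the rate at which it changes interventions,
    times the net benefit of that change, minus the harm of the test."""
    return delta_p * r_i * (b_i - h_i) - h_t

# Hypothetical: a 0.30 probability shift that changes management half the
# time, intervention benefit 10 vs harm 2 (arbitrary units), test harm 0.5.
print(net_benefit(0.30, 0.5, 10, 2, 0.5))  # 0.7 > 0, so the test is worthwhile
```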
Some additional factors that influence a decision whether a medical test should be performed or not include: the cost of the test, the availability of additional tests, potential interference with a subsequent test (such as an abdominal palpation potentially inducing intestinal activity whose sounds interfere with a subsequent abdominal auscultation), the time taken for the test, or other practical or administrative aspects. The possible benefits of a diagnostic test may also be weighed against the costs of unnecessary tests and the resulting unnecessary follow-up and possibly even unnecessary treatment of incidental findings. In some cases, tests being performed are expected to have no benefit for the individual being tested. Instead, the results may be useful for the establishment of statistics in order to improve health care for other individuals. Patients may give informed consent to undergo medical tests that will benefit other people. Patient expectations In addition to considerations of the nature of medical testing noted above, other realities can lead to misconceptions and unjustified expectations among patients. These include: different labs have different normal reference ranges; slightly different values will result from repeating a test; "normal" is defined by a spectrum along a bell curve resulting from the testing of a population, not by "rational, science-based, physiological principles"; sometimes tests are used in the hope of turning something up to give the doctor a clue as to the nature of a given condition; and imaging tests are subject to fallible human interpretation and can show "incidentalomas", most of which "are benign, will never cause symptoms, and do not require further evaluation," although clinicians are developing guidelines for deciding when to pursue diagnoses of incidentalomas. Standards for reporting and assessment The QUADAS-2 revision of the QUADAS tool (Quality Assessment of Diagnostic Accuracy Studies) is available. List of medical tests
Biology and health sciences
Medical procedures
null
1740015
https://en.wikipedia.org/wiki/Flame%20test
Flame test
A flame test is a relatively quick test for the presence of certain elements in a sample. The technique is archaic and of questionable reliability, but it was once a component of qualitative inorganic analysis. The phenomenon is related to pyrotechnics and atomic emission spectroscopy. The color of the flames is understood through the principles of atomic electron transition and photoemission, where different elements require distinct energy levels (photons) for electron transitions. History Robert Bunsen invented the now-famous Bunsen burner in 1855, which was useful in flame tests due to its non-luminous flame that did not disrupt the colors emitted by the test materials. The Bunsen burner, combined with a prism (filtering the color interference of contaminants), led to the creation of the spectroscope, capable of resolving the spectral emissions of various elements. In 1860, the unexpected appearance of sky-blue and dark red lines was observed in spectral emissions by Robert Bunsen and Gustav Kirchhoff, leading to the discovery of two alkali metals, caesium (sky-blue) and rubidium (dark red). Today, this low-cost method is used in secondary education to teach students to detect metals in samples qualitatively. Process A flame test involves introducing a sample of the element or compound to a hot, non-luminous flame and observing the color of the flame that results. The compound can be made into a paste with concentrated hydrochloric acid, as metal halides, being volatile, give better results. Different flames can be tried to verify the accuracy of the color. Wooden splints, Nichrome wires, platinum wires, magnesia rods, cotton swabs, and melamine foam are suggested as supports. Safety precautions are crucial due to the flammability and toxicity of some substances involved. When using a splint, one must be careful to wave the splint through the flame rather than holding it in the flame for extended periods, to avoid setting the splint itself on fire. The use of a cotton swab or melamine foam (used in "eraser" cleaning sponges) as a support has also been suggested. Sodium is a common component or contaminant in many samples, and its spectrum tends to dominate those of other elements in many flame tests. The test flame is often viewed through cobalt blue glass to filter out the yellow of sodium and allow for easier viewing of other metal ions. The color of the flame also generally depends on the temperature and the oxygen supplied; see flame colors. The procedure uses different solvents and flames, with the test flame viewed through cobalt blue glass or didymium glass to filter out the interfering light of contaminants such as sodium. Flame tests are subject to a number of limitations. The range of elements positively detectable under standard conditions is small. Some elements emit weakly and others (e.g. sodium) very strongly. Gold, silver, platinum, palladium, and a number of other elements do not produce a characteristic flame color, although some may produce sparks (as do metallic titanium and iron); salts of beryllium and gold reportedly deposit pure metal on cooling. The test is highly subjective. Principle In flame tests, atoms and ions are excited thermally. These excited states then relax to the ground state with the emission of a photon. The energy of the excited state(s) and the associated emitted photon is characteristic of the element. The nature of the excited and ground states depends only on the element. Ordinarily, there are no bonds to be broken, and molecular orbital theory is not applicable.
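The element-specific photon energies described above map directly onto the observed colors. A minimal sketch computing photon energy from wavelength via E = hc/λ; the sodium D-line wavelength (~589 nm) is a standard literature value used here for illustration:

```python
# Photon energy for a given emission wavelength, E = h*c/lambda.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy in electronvolts of a photon with the given wavelength."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(f"Sodium D line (589 nm): {photon_energy_ev(589):.2f} eV")  # ~2.10 eV
```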
The emission spectrum observed in a flame test is also the basis of flame emission spectroscopy, atomic emission spectroscopy, and flame photometry. Common elements Some common elements and their corresponding flame colors are: lithium – crimson; sodium – intense yellow; potassium – lilac; calcium – brick red; strontium – crimson; barium – pale green; copper – blue-green; caesium – sky blue; rubidium – dark red.
Physical sciences
Chemical methods
Chemistry
1740813
https://en.wikipedia.org/wiki/Acid%20salt
Acid salt
Acid salts are a class of salts that produce an acidic solution after being dissolved in a solvent. The resulting solution has a greater electrical conductivity than the pure solvent. An acidic solution formed by an acid salt is made during the partial neutralization of diprotic or polyprotic acids. A half-neutralization occurs because replaceable hydrogen atoms remain from the partial dissociation of weak acids that have not reacted with hydroxide ions (OH−) to create water molecules. Formation The acid–base property of the resulting solution from a neutralization reaction depends on the remaining salt products. A salt containing reactive cations undergoes hydrolysis, by which the cations react with water molecules, causing deprotonation of the conjugate acids. For example, the acid salt ammonium chloride is the main species formed upon the half-neutralization of ammonia in an aqueous solution of hydrogen chloride: Examples of acid salts Use in food Acid salts are often used in foods as part of leavening agents. In this context, the acid salts are referred to as "leavening acids." Common leavening acids include cream of tartar and monocalcium phosphate. An acid salt can be mixed with a base salt (such as sodium bicarbonate or baking soda) to create baking powders which release carbon dioxide. Leavening agents can be slow-acting (e.g. sodium aluminum phosphate), which react when heated, or fast-acting (e.g. cream of tartar), which react immediately at low temperatures. Double-acting baking powders contain both slow- and fast-acting leavening agents and react at low and high temperatures to provide leavening throughout the baking process. Disodium phosphate, Na2HPO4, is used in foods, and monosodium phosphate, NaH2PO4, is used in animal feed, toothpaste and evaporated milk. Intensity of acid The acid with the higher Ka value dominates the chemical reaction; it serves as a better contributor of protons (H+). A comparison between the Ka and Kb indicates the acid–base property of the resulting solution, by which: The solution is acidic if Ka > Kb. It contains a greater concentration of H+ ions than of OH− ions, due to more extensive cation hydrolysis compared to that of anion hydrolysis. The solution is alkaline if Kb > Ka. Anions hydrolyze more than cations, causing an excess concentration of OH− ions. The solution is expected to be neutral only when Ka = Kb. Other possible factors that could vary the pH level of a solution are the relevant equilibrium constants and the additional amounts of any base or acid. For example, in an ammonium chloride solution, NH4+ is the main influence on the acidity of the solution; its Ka is greater than that of water. This ensures its deprotonation when reacting with water, and is responsible for the pH below 7 at room temperature. The chloride ion will have no affinity for H+ nor tendency to hydrolyze, as its Kb value is very low. Hydrolysis of ammonium at room temperature produces: NH4+(aq) + H2O(l) ⇌ NH3(aq) + H3O+(aq)
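The ammonium chloride example above can be made quantitative. A minimal sketch estimating the pH of an acid-salt solution with the weak-acid approximation [H3O+] ≈ √(Ka·C); the Ka of NH4+ (about 5.6 × 10⁻¹⁰ at 25 °C) is a standard literature value assumed here, not taken from this article:

```python
from math import log10, sqrt

def acid_salt_ph(ka: float, concentration: float) -> float:
    """Approximate pH assuming [H3O+] = sqrt(Ka * C) for a weak acidic cation."""
    return -log10(sqrt(ka * concentration))

# NH4Cl at 0.1 M, with Ka of NH4+ ~ 5.6e-10 (assumed literature value).
print(f"0.1 M NH4Cl: pH ~ {acid_salt_ph(5.6e-10, 0.1):.2f}")  # ~5.13, i.e. acidic
```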
Physical sciences
Salts and ions: General
Chemistry
1741326
https://en.wikipedia.org/wiki/Fatty%20alcohol
Fatty alcohol
Fatty alcohols (or long-chain alcohols) are usually high-molecular-weight, straight-chain primary alcohols, but can also range from as few as 4–6 carbons to as many as 22–26, derived from natural fats and oils. The precise chain length varies with the source. Some commercially important fatty alcohols are lauryl, stearyl, and oleyl alcohols. They are colourless oily liquids (for smaller carbon numbers) or waxy solids, although impure samples may appear yellow. Fatty alcohols usually have an even number of carbon atoms and a single alcohol group (–OH) attached to the terminal carbon. Some are unsaturated and some are branched. They are widely used in industry. As with fatty acids, they are often referred to generically by the number of carbon atoms in the molecule, such as "a C12 alcohol", that is, an alcohol having 12 carbons, for example dodecanol. Production and occurrence Fatty alcohols became commercially available in the early 1900s. They were originally obtained by reduction of wax esters with sodium by the Bouveault–Blanc reduction process. In the 1930s catalytic hydrogenation was commercialized, which allowed the conversion of fatty acid esters, typically from tallow, to the alcohols. In the 1940s and 1950s, petrochemicals became an important source of chemicals, and Karl Ziegler discovered the polymerization of ethylene. These two developments opened the way to synthetic fatty alcohols. From natural sources Most fatty alcohols in nature are found as waxes, which are esters of fatty acids and fatty alcohols. They are produced by bacteria, plants and animals for purposes of buoyancy, as a source of metabolic water and energy, as biosonar lenses (marine mammals) and for thermal insulation in the form of waxes (in plants and insects). The traditional sources of fatty alcohols have largely been various vegetable oils, which remain a large-scale feedstock. Animal fats (tallow), and particularly whale oil, were of historic importance; however, they are no longer used on a large scale. Tallows produce a fairly narrow range of alcohols, predominantly C16–C18, while plant sources produce a wider range of alcohols (C6–C24), making them the preferred source. The alcohols are obtained from the triglycerides (fatty acid triesters), which form the bulk of the oil. The process involves the transesterification of the triglycerides to give methyl esters, which are then hydrogenated to produce the fatty alcohols. Higher alcohols (C20–C22) can be obtained from rapeseed oil or mustard seed oil. Midcut alcohols are obtained from coconut oil (C12–C14) or palm kernel oil (C16–C18). From petrochemical sources Fatty alcohols are also prepared from petrochemical sources. In the Ziegler process, ethylene is oligomerized using triethylaluminium followed by air oxidation. This process affords even-numbered alcohols: Al(C2H5)3 + 18 C2H4 → Al(C14H29)3 Al(C14H29)3 + 3⁄2 O2 + 3⁄2 H2O → 3 HOC14H29 + 1⁄2 Al2O3 Alternatively, ethylene can be oligomerized to give mixtures of alkenes, which are subjected to hydroformylation; this process affords odd-numbered aldehydes, which are subsequently hydrogenated. For example, from 1-decene, hydroformylation gives the C11 alcohol: C8H17CH=CH2 + H2 + CO → C8H17CH2CH2CHO C8H17CH2CH2CHO + H2 → C8H17CH2CH2CH2OH In the Shell higher olefin process, the chain-length distribution in the initial mixture of alkene oligomers is adjusted so as to more closely match market demand. Shell does this by means of an intermediate metathesis reaction.
The resultant mixture is fractionated and hydroformylated/hydrogenated in a subsequent step. Applications Fatty alcohols are mainly used in the production of detergents and surfactants. Due to their amphipathic nature, fatty alcohols behave as nonionic surfactants. They find use as co-emulsifiers, emollients and thickeners in the cosmetics and food industries. About 50% of fatty alcohols used commercially are of natural origin, the remainder being synthetic. Fatty alcohols are converted to their ethoxylates by treatment with ethylene oxide: The resulting fatty alcohol ethoxylates are important surfactants. Another large class of surfactants are the sodium alkyl sulfates, such as sodium dodecyl sulfate (SDS). Five million tons of SDS and related materials are produced annually by sulfation of dodecyl alcohol and related fatty alcohols. Nutrition The metabolism of fatty alcohols is impaired in several inherited human peroxisomal disorders, including adrenoleukodystrophy and Sjögren–Larsson syndrome. Safety Human health Fatty alcohols are relatively benign materials, with LD50 (oral, rat) ranging from 3.1–4 g/kg for hexanol to 6–8 g/kg for octadecanol. For a 50 kg person, these values translate to more than 100 g. Tests of acute and repeated exposures have revealed a low level of toxicity from inhalation, oral or dermal exposure to fatty alcohols. Fatty alcohols are not very volatile, and the acute lethal concentration is greater than the saturated vapor pressure. Longer-chain (C12–C16) fatty alcohols produce fewer health effects than short-chain (smaller than C12) ones. Short-chain fatty alcohols are considered eye irritants, while long-chain alcohols are not. Fatty alcohols exhibit no skin sensitization. Repeated exposure to fatty alcohols produces low-level toxicity, and certain compounds in this category can cause local irritation on contact or low-grade liver effects (essentially linear alcohols have a slightly higher rate of occurrence of these effects). No effects on the central nervous system have been seen with inhalation and oral exposure. Tests of repeated bolus dosages of 1-hexanol and 1-octanol showed potential for CNS depression and induced respiratory distress. No potential for peripheral neuropathy has been found. In rats, the no observable adverse effect level (NOAEL) ranges from 200 mg/kg/day to 1,000 mg/kg/day by ingestion. There has been no evidence that fatty alcohols are mutagenic or cause reproductive toxicity or infertility. Fatty alcohols are effectively eliminated from the body upon exposure, limiting the possibility of retention or bioaccumulation. Margins of exposure resulting from consumer uses of these chemicals are adequate for the protection of human health, as determined by the Organisation for Economic Co-operation and Development (OECD) high production volume chemicals program. Environment Fatty alcohols up to chain length C18 are biodegradable, with lengths up to C16 biodegrading completely within 10 days. Chains C16 to C18 were found to biodegrade from 62% to 76% in 10 days. Chains greater than C18 were found to degrade by 37% in 10 days. Field studies at wastewater treatment plants have shown that 99% of fatty alcohols of lengths C12–C18 are removed. Fate prediction using fugacity modeling has shown that fatty alcohols with chain lengths of C10 and greater in water partition into sediment. Lengths C14 and above are predicted to stay in the air upon release. Modeling shows that each type of fatty alcohol will respond independently upon environmental release.
Aquatic organisms Fish, invertebrates and algae experience similar levels of toxicity from fatty alcohols, although toxicity depends on chain length, with shorter chains having greater toxicity potential. Longer chain lengths show no toxicity to aquatic organisms. This category of chemicals was evaluated under the Organisation for Economic Co-operation and Development (OECD) high production volume chemicals program. No unacceptable environmental risks were identified. Table with common names This table lists some alkyl alcohols. Note that, in general, the alcohols with even numbers of carbon atoms have common names, since they are found in nature, whereas those with odd numbers of carbon atoms generally do not.
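The carbon-number convention used throughout this article ("a C12 alcohol") also fixes the molecular formula of a saturated, straight-chain fatty alcohol as CnH2n+2O, so its molar mass follows from the carbon count alone. A minimal illustrative sketch (the helper function is hypothetical; the names and atomic masses are standard):

```python
# Molar mass of a saturated straight-chain fatty alcohol CnH(2n+1)OH = CnH(2n+2)O.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol

def fatty_alcohol_molar_mass(n_carbons: int) -> float:
    """Molar mass of CnH(2n+2)O in g/mol."""
    return (n_carbons * ATOMIC_MASS["C"]
            + (2 * n_carbons + 2) * ATOMIC_MASS["H"]
            + ATOMIC_MASS["O"])

for n, name in [(12, "dodecanol (lauryl alcohol)"),
                (18, "octadecanol (stearyl alcohol)")]:
    print(f"C{n} {name}: {fatty_alcohol_molar_mass(n):.2f} g/mol")
# C12: 186.33 g/mol; C18: 270.50 g/mol
```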
Physical sciences
Alcohols
Chemistry
1741375
https://en.wikipedia.org/wiki/Rearrangement%20reaction
Rearrangement reaction
In organic chemistry, a rearrangement reaction is a broad class of organic reactions in which the carbon skeleton of a molecule is rearranged to give a structural isomer of the original molecule. Often a substituent moves from one atom to another atom in the same molecule, hence these reactions are usually intramolecular. In the example below, the substituent R moves from carbon atom 1 to carbon atom 2: Intermolecular rearrangements also take place. A rearrangement is not well represented by simple and discrete electron transfers (represented by curved arrows in organic chemistry texts). The actual mechanism of alkyl groups moving, as in the Wagner–Meerwein rearrangement, probably involves transfer of the moving alkyl group fluidly along a bond, not ionic bond-breaking and forming. In pericyclic reactions, explanation by orbital interactions gives a better picture than simple discrete electron transfers. It is, nevertheless, possible to draw the curved arrows for a sequence of discrete electron transfers that give the same result as a rearrangement reaction, although these are not necessarily realistic. In allylic rearrangement, the reaction is indeed ionic. Three key rearrangement reactions are 1,2-rearrangements, pericyclic reactions and olefin metathesis. 1,2-rearrangements A 1,2-rearrangement is an organic reaction where a substituent moves from one atom to another atom in a chemical compound. In a 1,2-shift the movement involves two adjacent atoms, but moves over larger distances are possible. Skeletal isomerization is not normally encountered in the laboratory, but is the basis of large applications in oil refineries. In general, straight-chain alkanes are converted to branched isomers by heating in the presence of a catalyst. Examples include isomerisation of n-butane to isobutane and pentane to isopentane. Highly branched alkanes have favorable combustion characteristics for internal combustion engines. Further examples are the Wagner–Meerwein rearrangement: and the Beckmann rearrangement, which is relevant to the production of certain nylons: Pericyclic reactions A pericyclic reaction is a type of reaction with multiple carbon–carbon bond making and breaking wherein the transition state of the molecule has a cyclic geometry, and the reaction progresses in a concerted fashion. Examples are hydride shifts and the Claisen rearrangement: Olefin metathesis Olefin metathesis is a formal exchange of the alkylidene fragments in two alkenes. It is a catalytic reaction with carbene, or more accurately, transition metal carbene complex intermediates. In this example (ethenolysis), a pair of vinyl compounds forms a new symmetrical alkene with expulsion of ethylene. Other rearrangement reactions 1,3-rearrangements 1,3-rearrangements take place over 3 carbon atoms. Examples: the Fries rearrangement and a 1,3-alkyl shift of verbenone to chrysanthenone
Physical sciences
Organic reactions
Chemistry
1741584
https://en.wikipedia.org/wiki/Ageusia
Ageusia
Ageusia (from the negative prefix a- and Ancient Greek γεῦσις geûsis 'taste') is the loss of taste functions of the tongue, particularly the inability to detect sweetness, sourness, bitterness, saltiness, and umami (meaning 'savory taste'). It is sometimes confused with anosmia, a loss of the sense of smell. True ageusia is relatively rare compared to hypogeusia, a partial loss of taste, and dysgeusia, a distortion or alteration of taste. Even though ageusia is considered relatively rare, it can impact individuals of any age or demographic. There has been an increase in reported cases of ageusia due to the COVID-19 pandemic, making ageusia more commonly diagnosed than before. Symptoms The complete loss of taste. Causes Ageusia can arise from various factors: Issues with the water-soluble molecules responsible for taste, causing oral dryness or damage to taste buds. Radiation therapy treatments. Facial nerve damage due to surgery. Head traumas, or traumas to the middle ear or jaw. Sinusitis, strep throat, salivary gland infections, the common cold, influenza, and COVID-19. Bell's palsy or dental procedures like a molar extraction and tonsillectomy. Epilepsy, tumors, stroke, or multiple sclerosis. Diseases that can affect the autonomic nervous system, like diabetes. Some medications, including muscle relaxants, chemotherapy medication, anti-fungals, chemical compounds found in anti-depressants, anti-seizure medications, and blood pressure medications. Sialadenitis, gingivitis, oral infections, or glossodynia (burning mouth syndrome (BMS)). Ageusia resulting from a significant head injury is relatively uncommon, affecting only around 1% of individuals with this type of injury. COVID-19 Ageusia can be an indication of a COVID-19 infection. Ageusia and anosmia are among the prominent symptoms commonly associated with COVID-19, with symptoms that can last up to 4 weeks. However, it is noteworthy that ageusia may manifest differently from anosmia, as anosmia primarily affects the olfactory system whereas ageusia primarily affects the gustatory receptors. Emerging research indicates that the various variants of COVID-19 might be associated with differences in the severity of ageusia experienced by patients, as well as the severity of other taste and smell disorders, implying that certain strains of the virus may have differing impacts on the sensory functions of affected individuals. Studies investigating the prevalence of taste disorders stemming from the COVID-19 pandemic indicate that a wide range of individuals were impacted, with some experiencing these issues more severely than others: ageusia was observed in about 28.0% of patients, hypogeusia in about 33.5%, and dysgeusia in about 41.3% of patients. In April 2020, 88% of a series of over 400 COVID-19 patients in Europe were reported to have gustatory dysfunction (86% reported olfactory dysfunction). Additionally, in South Korea, of approximately 2,000 recorded cases of COVID-19 infection, only 30% exhibited ageusia. The duration of ageusia recovery can vary significantly depending on the cause of infection. In a COVID-19-related infection, the recovery timeline for ageusia can vary among individuals, influenced by factors such as variants or strains of the virus, individual immune responses, demographic characteristics, and other factors. Proposed mechanisms of infection Recent research has hinted at a connection between the distribution of taste cells and ACE2 receptors in ageusia.
Higher amounts of these receptors suggest an easy route for a COVID-19 infection, with ageusia as a possible outcome. Ageusia could also occur through changes in the abundance of saliva: a lack of saliva can eventually damage the cells on the tongue's surface.

Saliva and taste perception

Saliva is essential in taste sensation and perception. Studies have indicated that saliva plays a critical role in detecting a COVID-19 infection, and ageusia can serve as an indication of an infection that is affecting the salivary glands. However, there is still insufficient research to fully clarify the effects of ageusia and COVID-19 on saliva and the salivary flow rate.

Zinc deficiency

In cases of zinc deficiency, a shortage of the zinc-binding proteins that help with the growth and development of taste buds can result in taste bud issues associated with ageusia, hypogeusia, and hyposalivation. Low salivary levels of cyclic adenosine monophosphate (cAMP) and cyclic guanosine monophosphate (cGMP), which also help with the growth of taste buds, have likewise been linked to ageusia.

Diagnosis

Ageusia is usually diagnosed by an otolaryngologist, also known as an ear, nose, and throat (ENT) doctor, who can evaluate a patient's loss of taste among other things. To do this, the specialist looks into other factors that could be causing ageusia by examining the head, nose, ears, and mouth, and may order imaging of the head and neck to help identify or rule out tumors, focal lesions, or any type of injury that could be affecting taste-related networks. An otolaryngologist can also conduct a series of tests to assess the severity of ageusia, including identifying specific tastes that the patient can sense or recognize. One test used by researchers and doctors is electrogustometry, which applies mild electric currents to specific areas of the tongue to assess taste sensitivity in patients exhibiting ageusia and its symptoms. Another is the suprathreshold taste test, also known as an edible strip taste test: a strip containing various flavors is placed on the patient's tongue to determine which flavors the patient can and cannot detect.

Treatment

Treatment for ageusia varies depending on its cause, whether it stems from illness, medication, traumatic injury, or other causes. If ageusia is triggered by a medication prescribed to a patient, discontinuing the medication under the guidance of a healthcare professional may alleviate the symptoms; switching to an alternative medication can also help resolve the issue. In cases where ageusia is associated with an underlying illness or trauma, some medications can help alleviate symptoms, including antihistamines, decongestants, and antibiotics.

Complications

People experiencing ageusia can endure daily discomfort, which frequently diminishes their enjoyment of eating. This discomfort can leave many individuals affected by taste disorders with:

A sense of isolation
Feelings of depression
Social withdrawal
Unhealthy eating habits

Such eating habits may involve either insufficient food intake or excessive consumption of sour or sweet foods. This dietary pattern can pose risks, particularly for individuals with diabetes.
Ageusia and diabetes

Diabetes has been shown to sometimes lead to ageusia, often starting with fluctuations in glucose levels. When blood sugar levels fluctuate, the taste buds can be disrupted, making it difficult to detect flavors. Not everyone with diabetes will experience this, however, and the severity of an individual's ageusia or other taste dysfunction can differ from person to person.
Biology and health sciences
Symptoms and signs
Health
1741642
https://en.wikipedia.org/wiki/1%2C2-rearrangement
1,2-rearrangement
A 1,2-rearrangement or 1,2-migration or 1,2-shift or Whitmore 1,2-shift is an organic reaction where a substituent moves from one atom to another atom in a chemical compound. In a 1,2-shift the movement involves two adjacent atoms, but moves over larger distances are possible. In the example below the substituent R moves from carbon atom C2 to C3. The rearrangement is intramolecular, and the starting compound and reaction product are structural isomers. The 1,2-rearrangement belongs to a broad class of chemical reactions called rearrangement reactions. A rearrangement involving a hydrogen atom is called a 1,2-hydride shift. If the substituent being rearranged is an alkyl group, it is named according to the alkyl group's anion: e.g. a 1,2-methanide shift, a 1,2-ethanide shift, etc.

Reaction mechanism

A 1,2-rearrangement is often initiated by the formation of a reactive intermediate such as:

a carbocation, formed by heterolysis, in a nucleophilic rearrangement or anionotropic rearrangement
a carbanion, in an electrophilic rearrangement or cationotropic rearrangement
a free radical, formed by homolysis
a nitrene.

The driving force for the actual migration of a substituent in step two of the rearrangement is the formation of a more stable intermediate. For instance, a tertiary carbocation is more stable than the primary carbocation initially formed, and therefore the SN1 reaction of neopentyl bromide with ethanol yields tert-pentyl ethyl ether (see the worked scheme below).

Carbocation rearrangements are more common than their carbanion or radical counterparts. This observation can be explained on the basis of Hückel's rule: a cyclic carbocationic transition state is aromatic and stabilized because it holds 2 electrons, whereas an anionic transition state holds 4 electrons and is therefore antiaromatic and destabilized. A radical transition state is neither stabilized nor destabilized. The most important carbocation 1,2-shift is the Wagner–Meerwein rearrangement. A carbanionic 1,2-shift is involved in the benzilic acid rearrangement.

Radical 1,2-rearrangements

The first radical 1,2-rearrangement, reported by Heinrich Otto Wieland in 1911, was the conversion of bis(triphenylmethyl)peroxide 1 to the tetraphenylethane 2. The reaction proceeds through the triphenylmethoxyl radical A, its rearrangement to the diphenylphenoxymethyl radical C, and the dimerization of C. It is unclear to this day whether in this rearrangement the cyclohexadienyl radical intermediate B is a transition state or a reactive intermediate, as it (or any other such species) has thus far eluded detection by ESR spectroscopy.

An example of a less common radical 1,2-shift can be found in the gas-phase pyrolysis of certain polycyclic aromatic compounds. The energy required in an aryl radical for the 1,2-shift can be high (up to 60 kcal/mol or 250 kJ/mol) but much less than that required for hydrogen abstraction to form an aryne (82 kcal/mol or 340 kJ/mol). In alkene radicals, hydrogen abstraction to form an alkyne is preferred.
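As a worked sketch of the driving-force argument above: under the usual SN1 picture, the neopentyl case named in the text proceeds through a primary cation that rearranges to a tertiary one before being trapped by the solvent. This is a hedged reconstruction; the intermediate labels are illustrative rather than taken from the original article.

\[
(\mathrm{CH_3})_3\mathrm{C{-}CH_2Br}
\ \xrightarrow{\ -\mathrm{Br}^-\ }\
\underset{\text{primary cation}}{(\mathrm{CH_3})_3\mathrm{C{-}CH_2^{+}}}
\ \xrightarrow{\ \text{1,2-methanide shift}\ }\
\underset{\text{tertiary cation}}{(\mathrm{CH_3})_2\mathrm{C^{+}{-}CH_2CH_3}}
\ \xrightarrow{\ \mathrm{EtOH},\ -\mathrm{H}^+\ }\
(\mathrm{CH_3})_2\mathrm{C(OC_2H_5){-}CH_2CH_3}
\]

The Hückel count behind the stability argument is likewise quick to check: a cyclic transition state is aromatic when it holds \(4n+2\) electrons, so the 2-electron cationic case (\(n = 0\)) is stabilized, while the 4-electron anionic case (\(4n\) with \(n = 1\)) is antiaromatic and destabilized.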
1,2-Rearrangements

The following mechanisms involve a 1,2-rearrangement:

1,2-Wittig rearrangement
Alpha-ketol rearrangement
Beckmann rearrangement
Benzilic acid rearrangement
Brook rearrangement
Criegee rearrangement
Curtius rearrangement
Dowd–Beckwith ring expansion reaction
Favorskii rearrangement
Friedel–Crafts reaction
Fritsch–Buttenberg–Wiechell rearrangement
Halogen dance rearrangement
Hofmann rearrangement
Lossen rearrangement
Pinacol rearrangement
Seyferth–Gilbert homologation
SN1 reaction (generally)
Stevens rearrangement
Stieglitz rearrangement
Wagner–Meerwein rearrangement
Westphalen–Lettré rearrangement
Wolff rearrangement
Physical sciences
Organic reactions
Chemistry
1742315
https://en.wikipedia.org/wiki/Vaccine%20hesitancy
Vaccine hesitancy
Vaccine hesitancy is a delay in acceptance, or refusal, of vaccines despite the availability of vaccine services and supporting evidence. The term covers refusals to vaccinate, delaying vaccines, accepting vaccines but remaining uncertain about their use, or using certain vaccines but not others. Although adverse effects associated with vaccines are occasionally observed, the scientific consensus that vaccines are generally safe and effective is overwhelming. Vaccine hesitancy often results in disease outbreaks and deaths from vaccine-preventable diseases; the World Health Organization therefore characterizes vaccine hesitancy as one of the top ten global health threats.

Vaccine hesitancy is complex and context-specific, varying across time, place, and vaccines. It can be influenced by a lack of proper, scientifically based knowledge and understanding of how vaccines are made or work, as well as by psychological factors: fear of needles, distrust of public authorities, a person's lack of confidence (mistrust of the vaccine and/or healthcare provider), complacency (the person does not see a need for, or the value of, the vaccine), and convenience (access to vaccines). It has existed since the invention of vaccination and pre-dates the coining of the terms "vaccine" and "vaccination" by nearly eighty years.

"Anti-vaccinationism" refers to total opposition to vaccination. Anti-vaccinationists have been known as "anti-vaxxers" or "anti-vax". The specific hypotheses raised by anti-vaccination advocates have been found to change over time. Anti-vaccine activism has been increasingly connected to political and economic goals. Although the myths, conspiracy theories, misinformation, and disinformation spread by the anti-vaccination movement and fringe doctors lead to vaccine hesitancy and to public debates around the medical, ethical, and legal issues related to vaccines, there is no serious hesitancy or debate within mainstream medical and scientific circles about the benefits of vaccination.

Proposed laws that mandate vaccination, such as California Senate Bill 277 and Australia's No Jab No Pay, have been opposed by anti-vaccination activists and organizations. Opposition to mandatory vaccination may be based on anti-vaccine sentiment, concern that it violates civil liberties or reduces public trust in vaccination, or suspicion of profiteering by the pharmaceutical industry.

Effectiveness

Scientific evidence for the effectiveness of large-scale vaccination campaigns is well established. Two to three million deaths are prevented each year worldwide by vaccination, and an additional 1.5 million deaths could be prevented each year if all recommended vaccines were used. Vaccination campaigns helped eradicate smallpox, which once killed as many as one in seven children in Europe, and have nearly eradicated polio. As a more modest example, infections caused by Haemophilus influenzae (Hib), a major cause of bacterial meningitis and other serious diseases in children, have decreased by over 99% in the US since the introduction of a vaccine in 1988. It is estimated that full vaccination, from birth to adolescence, of all US children born in a given year would save 33,000 lives and prevent 14 million infections.

There is anti-vaccine literature that argues that reductions in infectious disease result from improved sanitation and hygiene (rather than vaccination), or that these diseases were already in decline before the introduction of specific vaccines.
These claims are not supported by scientific data; the incidence of vaccine-preventable diseases tended to fluctuate over time until the introduction of specific vaccines, at which point the incidence dropped to near zero. A Centers for Disease Control and Prevention website aimed at countering common misconceptions about vaccines argued, "Are we expected to believe that better sanitation caused the incidence of each disease to drop, just at the time a vaccine for that disease was introduced?"

Another rallying cry of the anti-vaccine movement is to call for randomized clinical trials in which an experimental group of children is vaccinated while a control group is unvaccinated. Such a study would never be approved because it would require deliberately denying children standard medical care, rendering it unethical. Studies have been done that compare vaccinated to unvaccinated people, but these studies are typically not randomized. Moreover, literature already exists that demonstrates the safety of vaccines using other experimental methods. Other critics argue that the immunity granted by vaccines is only temporary and requires boosters, whereas those who survive the disease become permanently immune. As discussed below, the philosophies of some alternative medicine practitioners are incompatible with the idea that vaccines are effective.

Population health

Incomplete vaccine coverage increases the risk of disease for the entire population, including those who have been vaccinated, because it reduces herd immunity. For example, the measles vaccine is given to children 9–12 months old, and the window between the disappearance of maternal antibody and seroconversion means that vaccinated children are frequently still vulnerable. Strong herd immunity reduces this vulnerability. Increasing herd immunity during an outbreak, or when there is a risk of an outbreak, is perhaps the most widely accepted justification for mass vaccination. When a new vaccine is introduced, mass vaccination can help increase coverage rapidly. If enough of a population is vaccinated, herd immunity takes effect, decreasing the risk to people who cannot receive vaccines because they are too young or old, immunocompromised, or severely allergic to the ingredients in the vaccine. The outcome for people with compromised immune systems who get infected is often worse than that of the general population.

Cost-effectiveness

Commonly used vaccines are a cost-effective and preventive way of promoting health, compared to the treatment of acute or chronic disease. In 2001, the United States spent approximately $2.8 billion to promote and implement routine childhood immunizations against seven diseases. The societal benefits of those vaccinations were estimated to be $46.6 billion, yielding a benefit-cost ratio of 16.5.

Necessity

When a vaccination program successfully reduces the disease threat, it may reduce the perceived risk of disease as cultural memories of the effects of that disease fade. At this point, parents may feel they have nothing to lose by not vaccinating their children. If enough people hope to become free-riders, gaining the benefits of herd immunity without vaccination, vaccination levels may drop below the level at which herd immunity is effective. According to Jennifer Reich, parents who believe vaccination to be quite effective but would still prefer their children to remain unvaccinated are the most likely to be convinced to change their minds, as long as they are approached properly.
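The free-rider threshold described above can be made concrete with a standard epidemiological sketch (the formula and the measles figures below are textbook values, not figures from this article):

\[
q_c = 1 - \frac{1}{R_0}
\]

Here \(R_0\) is the pathogen's basic reproduction number and \(q_c\) is the immune fraction of the population below which herd immunity fails. For a highly contagious disease such as measles, \(R_0 \approx 12\text{–}18\) gives \(q_c \approx 0.92\text{–}0.94\), so outbreaks can propagate again once little more than roughly 6–8% of a population opts out.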
Safety concerns

While some anti-vaccinationists openly deny the improvements vaccination has made to public health, or believe in conspiracy theories, it is much more common for them to cite concerns about safety. As with any medical treatment, there is a potential for vaccines to cause serious complications, such as severe allergic reactions, but unlike most other medical interventions, vaccines are given to healthy people, and so a higher standard of safety is demanded. While serious complications from vaccinations are possible, they are extremely rare and much less common than the corresponding risks from the diseases they prevent. As the success of immunization programs increases and the incidence of disease decreases, public attention shifts away from the risks of disease to the risk of vaccination, and it becomes challenging for health authorities to preserve public support for vaccination programs.

The overwhelming success of certain vaccinations has made certain diseases rare, and this in turn has led to incorrect heuristic thinking when weighing risks against benefits among people who are vaccine-hesitant. Once such diseases (e.g., Haemophilus influenzae B) decrease in prevalence, people may no longer appreciate how serious the illness is, owing to a lack of familiarity with it, and become complacent. The lack of personal experience with these diseases reduces the perceived danger and thus the perceived benefit of immunization. Conversely, certain illnesses (e.g., influenza) remain so common that vaccine-hesitant people mistakenly perceive the illness to be non-threatening despite clear evidence that it poses a significant threat to human health. Omission and disconfirmation biases also contribute to vaccine hesitancy.

Various concerns about immunization have been raised; they have been addressed, and the concerns are not supported by evidence. Concerns about immunization safety often follow a pattern. First, some investigators suggest that a medical condition of increasing prevalence or unknown cause is an adverse effect of vaccination. The initial study, and subsequent studies by the same group, have an inadequate methodology, typically a poorly controlled or uncontrolled case series. A premature announcement is made about the alleged adverse effect, resonating with individuals who have the condition and underestimating the potential harm of forgoing vaccination to those whom the vaccine could protect. Other groups attempt to replicate the initial study but fail to get the same results. Finally, it takes several years to regain public confidence in the vaccine. Adverse effects ascribed to vaccines typically have an unknown origin, an increasing incidence, some biological plausibility, occurrence close to the time of vaccination, and dreaded outcomes. In almost all cases, the public health effect is limited by cultural boundaries: English speakers worry about one vaccine causing autism, while French speakers worry about another vaccine causing multiple sclerosis, and Nigerians worry that a third vaccine causes infertility.

Thiomersal

Thiomersal (called "thimerosal" in the US) is an antifungal preservative used in small amounts in some multi-dose vaccines (where the same vial is opened and used for multiple patients) to prevent contamination of the vaccine. Despite thiomersal's efficacy, its use is controversial because it can be metabolized or degraded in the body to ethylmercury (C2H5Hg+) and thiosalicylate.
As a result, in 1999, the Centers for Disease Control (CDC) and the American Academy of Pediatrics (AAP) asked vaccine makers to remove thiomersal from vaccines as quickly as possible on the precautionary principle. Thiomersal is now absent from all common US and European vaccines, except for some preparations of influenza vaccine. Trace amounts remain in some vaccines due to production processes, at an approximate maximum of one microgramme, around 15% of the average daily mercury intake in the US for adults and 2.5% of the daily level considered tolerable by the WHO. The action sparked concern that thiomersal could have been responsible for autism. The idea is now considered disproven, as incidence rates for autism increased steadily even after thiomersal was removed from childhood vaccines. Currently there is no accepted scientific evidence that exposure to thiomersal is a factor in causing autism. Since 2000, parents in the United States have pursued legal compensation from a federal fund, arguing that thiomersal caused autism in their children. A 2004 Institute of Medicine (IOM) committee favored rejecting any causal relationship between thiomersal-containing vaccines and autism. The concentration of thiomersal used in vaccines as an antimicrobial agent ranges from 0.001% (1 part in 100,000) to 0.01% (1 part in 10,000). A vaccine containing 0.01% thiomersal has 25 micrograms of mercury per 0.5 mL dose, roughly the same amount of elemental mercury found in a can of tuna. There is robust peer-reviewed scientific evidence supporting the safety of thiomersal-containing vaccines.

MMR vaccine

In the UK, the MMR vaccine became the subject of controversy after the publication in The Lancet of a 1998 paper by Andrew Wakefield and others, reporting case histories of twelve children, mostly with autism spectrum disorders, with onset soon after administration of the vaccine. At a 1998 press conference, Wakefield suggested that giving children the vaccines in three separate doses would be safer than a single vaccination. This suggestion was not supported by the paper, and several subsequent peer-reviewed studies have failed to show any association between the vaccine and autism. It later emerged that Wakefield had received funding from litigants against vaccine manufacturers and had not informed colleagues or medical authorities of his conflict of interest; Wakefield reportedly stood to earn up to $43 million per year selling diagnostic kits. Had this been known, publication in The Lancet would not have taken place in the way that it did. Wakefield has been heavily criticized on scientific and ethical grounds for the way the research was conducted and for triggering a decline in vaccination rates, which fell in the UK to 80% in the years following the study. In 2004, the MMR-and-autism interpretation of the paper was formally retracted by ten of its thirteen coauthors, and in 2010 The Lancet's editors fully retracted the paper. Wakefield was struck off the UK medical register, with a statement identifying deliberate falsification in the research published in The Lancet, and is barred from practicing medicine in the UK. The CDC, the IOM of the National Academy of Sciences, Australia's Department of Health, and the UK National Health Service have all concluded that there is no evidence of a link between the MMR vaccine and autism.
A Cochrane review concluded that there is no credible link between the MMR vaccine and autism, that MMR has prevented diseases that still carry a heavy burden of death and complications, that the lack of confidence in MMR has damaged public health, and that the design and reporting of safety outcomes in MMR vaccine studies are largely inadequate. Additional reviews agree, with studies finding that vaccines are not linked to autism even in high-risk populations with autistic siblings. In 2009, The Sunday Times reported that Wakefield had manipulated patient data and misreported results in his 1998 paper, creating the appearance of a link with autism. A 2011 article in the British Medical Journal described how the data in the study had been falsified by Wakefield so that it would arrive at a predetermined conclusion. An accompanying editorial in the same journal described Wakefield's work as an "elaborate fraud" that led to lower vaccination rates, putting hundreds of thousands of children at risk and diverting energy and money away from research into the true cause of autism. A special court convened in the United States to review claims under the National Vaccine Injury Compensation Program ruled on February 12, 2009, that the evidence "failed to demonstrate that thimerosal-containing vaccines can contribute to causing immune dysfunction, or that the MMR vaccine can contribute to causing either autism or gastrointestinal dysfunction", and that parents of autistic children were therefore not entitled to compensation in their contention that certain vaccines caused autism in their children.

Vaccine overload

Vaccine overload, a non-medical term, is the notion that giving many vaccines at once may overwhelm or weaken a child's immature immune system and lead to adverse effects. Despite scientific evidence that strongly contradicts this idea, there are still parents of autistic children who believe that vaccine overload causes autism. The resulting controversy has caused many parents to delay or avoid immunizing their children; such parental misperceptions are major obstacles to the immunization of children.

The concept of vaccine overload is flawed on several levels. Despite the increase in the number of vaccines over recent decades, improvements in vaccine design have reduced the immunologic load from vaccines; the total number of immunological components in the 14 vaccines administered to US children in 2009 is less than ten percent of what it was in the seven vaccines given in 1980. A study published in 2013 found no correlation between autism and the number of antigens in the vaccines the children were administered up to the age of two. There were 1,008 children in the study, one quarter of whom were diagnosed with autism, and the whole cohort was born between 1994 and 1999, when the routine vaccine schedule could contain more than 3,000 antigens (in a single shot of DTP vaccine). The vaccine schedule in 2012 contains several more vaccines, but the number of antigens a child is exposed to by the age of two is 315. Vaccines pose a very small immunologic load compared to the pathogens naturally encountered by a child in a typical year; common childhood conditions such as fevers and middle-ear infections pose a much greater challenge to the immune system than vaccines, and studies have shown that vaccinations, even multiple concurrent vaccinations, do not weaken the immune system or compromise overall immunity.
The lack of evidence supporting the vaccine overload hypothesis, combined with these findings directly contradicting it, has led to the conclusion that currently recommended vaccine programs do not "overload" or weaken the immune system. Any experiment based on withholding vaccines from children is considered unethical, and observational studies would likely be confounded by differences in the healthcare-seeking behaviors of under-vaccinated children; thus, no study directly comparing rates of autism in vaccinated and unvaccinated children has been done. However, the concept of vaccine overload is biologically implausible: vaccinated and unvaccinated children have the same immune response to non-vaccine-related infections, and autism is not an immune-mediated disease, so claims that vaccines could cause it by overloading the immune system go against current knowledge of the pathogenesis of autism. As such, the idea that vaccines cause autism has been effectively dismissed by the weight of current evidence.

Prenatal infection

There is evidence that schizophrenia is associated with prenatal exposure to rubella, influenza, and toxoplasmosis infection. For example, one study found a sevenfold increased risk of schizophrenia when mothers were exposed to influenza in the first trimester of gestation. This may have public health implications, as strategies for preventing infection include vaccination, simple hygiene, and, in the case of toxoplasmosis, antibiotics. Based on studies in animal models, theoretical concerns have been raised about a possible link between schizophrenia and the maternal immune response activated by virus antigens; a 2009 review concluded that there was insufficient evidence to recommend routine use of trivalent influenza vaccine during the first trimester of pregnancy, but that the vaccine was still recommended outside the first trimester and in special circumstances such as pandemics or in women with certain other conditions. The CDC's Advisory Committee on Immunization Practices, the American College of Obstetricians and Gynecologists, and the American Academy of Family Physicians all recommend routine flu shots for pregnant women, for several reasons: their risk of serious influenza-related medical complications during the last two trimesters; their greater rates of flu-related hospitalizations compared to non-pregnant women; the possible transfer of maternal anti-influenza antibodies to children, protecting the children from the flu; and several studies that found no harm to pregnant women or their children from the vaccinations. Despite this recommendation, only 16% of healthy pregnant US women surveyed in 2005 had been vaccinated against the flu.

Ingredient concerns

Aluminum compounds are used as immunologic adjuvants to increase the effectiveness of many vaccines. The aluminum in vaccines simulates or causes small amounts of tissue damage, driving the body to respond more powerfully to what it sees as a serious infection and promoting the development of a lasting immune response. In some cases these compounds have been associated with redness, itching, and low-grade fever, but the use of aluminum in vaccines has not been associated with serious adverse events. In some cases, aluminum-containing vaccines are associated with macrophagic myofasciitis (MMF), localized microscopic lesions containing aluminum salts that persist for up to 8 years.
However, recent case-controlled studies have found no specific clinical symptoms in individuals with biopsies showing MMF, and there is no evidence that aluminum-containing vaccines are a serious health risk or justify changes to immunization practice. Infants are exposed to greater quantities of aluminum in daily life, in breastmilk and infant formula, than in vaccines. In general, people are exposed to low levels of naturally occurring aluminum in nearly all foods and drinking water. The amount of aluminum present in vaccines is small, less than one milligram, and such low levels are not believed to be harmful to human health.

In 2015, N. Petrovsky, summarizing the current evidence on vaccine adjuvants, wrote: "Unfortunately, adjuvant research has lagged behind other vaccine areas... the biggest remaining challenge in the adjuvant field is to decipher the potential relationship between adjuvants and rare vaccine adverse reactions, such as narcolepsy, macrophagic myofasciitis or Alzheimer's disease. While existing adjuvants based on aluminium salts have a strong safety record, there are ongoing needs for new adjuvants and more intensive research into adjuvants and their effects." In 2023, Łukasz Bryliński wrote: "Even though its toxicity is well-documented, the role of Al in the pathogenesis of several neurological diseases remains debatable...Despite poor absorption via mucosa, the biggest amount of Al comes with food, drinking water, and inhalation. Vaccines introduce negligible amounts of Al, while the data on skin absorption (which might be linked with carcinogenesis) is limited and requires further investigation...Although we know for sure that Al accumulates in the brain, it is not fully understood how it reaches it."

Vaccine-hesitant people have also voiced strong concerns about the presence of formaldehyde in vaccines. Formaldehyde is used in very small concentrations to inactivate viruses and bacterial toxins used in vaccines. Very small amounts of residual formaldehyde can be present in vaccines but are far below levels harmful to human health. The levels present in vaccines are minuscule compared to naturally occurring levels of formaldehyde in the human body and pose no significant risk of toxicity. The human body continuously produces formaldehyde naturally and contains 50–70 times the greatest amount of formaldehyde present in any vaccine. Furthermore, the human body is capable of breaking down naturally occurring formaldehyde as well as the small amount of formaldehyde present in vaccines. There is no evidence linking the infrequent exposure to small quantities of formaldehyde present in vaccines with cancer.

Sudden infant death syndrome

Sudden infant death syndrome (SIDS) is most common in infants around the time in life when they receive many vaccinations. Since the cause of SIDS has not been fully determined, this led to concerns about whether vaccines, in particular diphtheria-tetanus toxoid vaccines, were a possible causal factor. Several studies investigated this and found no evidence supporting a causal link between vaccination and SIDS. In 2003, the Institute of Medicine favored rejection of a causal link between DTwP vaccination and SIDS after reviewing the available evidence. Additional analyses of VAERS data also showed no relationship between vaccination and SIDS. Studies have shown a negative correlation between SIDS and vaccination; that is, vaccinated children are less likely to die, but no causal link has been found.
One suggestion is that infants who are less likely to develop SIDS are more likely to be presented for vaccination.

Anthrax vaccines

In the mid-1990s, media reports on vaccines discussed Gulf War syndrome, a multi-symptomatic disorder affecting returning US military veterans of the 1990–1991 Persian Gulf War. Among the first articles of the online magazine Slate was one by Atul Gawande in which the required immunizations received by soldiers, including an anthrax vaccination, were named as one of the likely culprits for the symptoms associated with Gulf War syndrome. In the late 1990s, Slate published an article on the "brewing rebellion" in the military against anthrax immunization because of "the availability to soldiers of vaccine misinformation on the Internet". Slate continued to report on concerns about the required anthrax and smallpox immunizations for US troops after the September 11 attacks, and articles on the subject also appeared on the Salon website. The 2001 anthrax attacks heightened concerns about bioterrorism, and the Federal government of the United States stepped up its efforts to store and create more vaccines for American citizens. In 2002, Mother Jones published an article that was highly skeptical of the anthrax and smallpox immunizations required by the United States Armed Forces. With the 2003 invasion of Iraq, a wider controversy ensued in the media about requiring US troops to be vaccinated against anthrax. From 2003 to 2008, a series of court cases were brought to oppose the compulsory anthrax vaccination of US troops.

Swine flu vaccine

The US swine flu immunization campaign in response to the 1976 swine flu outbreak has become known as "the swine flu fiasco", because the outbreak did not lead to the pandemic US President Gerald Ford had feared, and the hastily rolled-out vaccine was found to increase the number of Guillain–Barré syndrome cases two weeks after immunization. Government officials stopped the mass immunization campaign due to great anxiety about the safety of the swine flu vaccine. The general public was left with greater fear of the vaccination campaign than of the virus itself, and vaccination policies in general were challenged. During the 2009 flu pandemic, significant controversy broke out regarding whether the 2009 H1N1 flu vaccine was safe in, among other countries, France, where numerous groups publicly criticized the vaccine as potentially dangerous. Because of similarities between the 2009 influenza A subtype H1N1 virus and the 1976 influenza A/NJ virus, many countries established surveillance systems for vaccine-related adverse effects on human health. A possible link between the 2009 H1N1 flu vaccine and Guillain–Barré syndrome cases was studied in Europe and the United States.

Blood transfusion

After the introduction of COVID-19 vaccines, vaccine-hesitant people have at times demanded donor blood from donors who have not received the vaccine. In the US and Canada, blood centers do not keep data on whether a donor has been infected with, or vaccinated against, COVID-19, and in August 2021 it was estimated that 60–70% of US blood donors had COVID-19 antibodies. Research director Timothy Caulfield said that "This really highlights, I think, how powerful misinformation can be. It can really have an impact in a way that can be dangerous ... There is no evidence to support these concerns." As of August 2021, such demands are rare in the US.
Doctors in Alberta, Canada, warned in November 2022 that the demands were becoming more common. In Italy and New Zealand, parents have gone to court to stop their children's urgent heart surgery unless blood from unvaccinated donors was provided. In both cases the parents were ruled against, though they stated that they could provide willing donors they found acceptable. The New Zealand Blood Service does not label blood according to the donor's COVID-19 vaccine history, and as of 2022, about 90% of New Zealand's population over twelve years of age had had two COVID-19 vaccinations. In another Italian case, a blood transfusion for a sick 90-year-old man was refused by his two daughters due to vaccine-hesitancy concerns. Another New Zealand couple stated that they were trying to arrange for their child to have her next heart surgery in India, to avoid her being given blood from COVID-19-vaccinated donors.

Other safety concerns

Other safety concerns about vaccines have been promoted on the Internet, in informal meetings, in books, and at symposia. These include hypotheses that vaccination can cause epileptic seizures, allergies, multiple sclerosis, and autoimmune diseases such as type 1 diabetes, as well as hypotheses that vaccinations can transmit bovine spongiform encephalopathy, hepatitis C virus, and HIV. These hypotheses have been investigated, with the conclusion that currently used vaccines meet high safety standards and that criticism of vaccine safety in the popular press is not justified. Large, well-controlled epidemiologic studies have been conducted, and the results do not support the hypothesis that vaccines cause chronic diseases. Furthermore, some vaccines are probably more likely to prevent or modify autoimmune diseases than to cause or exacerbate them.

Another common concern of parents is the pain associated with administering vaccines during a doctor's office visit. This may lead to parental requests to space out vaccinations; however, studies have shown that a child's stress response is no different when receiving one vaccination or two, and the act of spacing out vaccinations may actually expose the child to more stressful stimuli.

Vaccine myths and misinformation

Several vaccination myths contribute to parental concerns and vaccine hesitancy. These include the alleged superiority of natural infection when compared to vaccination, questioning whether the diseases vaccines prevent are dangerous, claims that vaccines pose moral or religious dilemmas, suggestions that vaccines are not effective, proposals of unproven or ineffective approaches as alternatives to vaccines, and conspiracy theories that center on mistrust of the government and medical institutions.

Autism

The idea of a link between vaccines and autism has been extensively investigated and conclusively shown to be false. The scientific consensus is that there is no relationship, causal or otherwise, between vaccines and the incidence of autism, and vaccine ingredients do not cause autism. Nevertheless, the anti-vaccination movement continues to promote myths, conspiracy theories, and misinformation linking the two. A developing tactic appears to be the "promotion of irrelevant research [as] an active aggregation of several questionable or peripherally related research studies in an attempt to justify the science underlying a questionable claim", to quote the Skeptical Inquirer.

Vaccination during illness

Many parents are concerned about the safety of vaccination when their child is sick.
Moderate to severe acute illness, with or without a fever, is indeed a precaution when considering vaccination. Vaccines remain effective during childhood illness; the reason they may be withheld from a moderately to severely ill child is that certain expected side effects of vaccination (e.g., fever or rash) may be confused with the progression of the illness. It is safe to administer vaccines to well-appearing children who are mildly ill with the common cold.

Natural infection

Another common anti-vaccine myth is that the immune system produces better immune protection in response to natural infection than to vaccination. However, the strength and duration of the immune protection gained vary by both disease and vaccine, and some vaccines give better protection than natural infection. For example, the HPV vaccine generates better immune protection than natural infection because the vaccine contains higher concentrations of a viral coat protein while lacking the proteins the HPV viruses use to inhibit the immune response. While it is true that infection with certain illnesses may produce lifelong immunity, many natural infections do not, while carrying a higher risk of harming a person's health than vaccines. For example, natural varicella infection carries a higher risk of bacterial superinfection with Group A streptococci.

Natural measles infection carries a high risk of many serious, and sometimes lifelong, complications, all of which can be avoided by vaccination. Those infected with measles rarely have a symptomatic reinfection. Most people survive measles, though in some cases complications may occur. Among those who experience complications, about 1 in 4 individuals will be hospitalized and 1–2 in 1,000 will die. Complications are more likely in children under age 5 and adults over age 20. Pneumonia is the most common fatal complication of measles infection and accounts for 56–86% of measles-related deaths. Possible consequences of measles virus infection include laryngotracheobronchitis, sensorineural hearing loss, and, in about 1 in 10,000 to 1 in 300,000 cases, panencephalitis, which is usually fatal. Acute measles encephalitis is another serious risk of measles virus infection. It typically occurs two days to one week after the measles rash breaks out and begins with very high fever, severe headache, convulsions, and altered mentation. A person with measles encephalitis may become comatose, and death or brain injury may occur. The measles virus can also deplete previously acquired immune memory by killing the cells that make antibodies, thereby weakening the immune system, which can lead to deaths from other diseases. Suppression of the immune system by measles lasts about two years and has been epidemiologically implicated in up to 90% of childhood deaths in third-world countries; historically, it may have caused considerably more deaths in the United States, the UK, and Denmark than were directly caused by measles. Although the measles vaccine contains an attenuated strain, it does not deplete immune memory.

HPV vaccine

The idea that the HPV vaccine is linked to increased sexual behavior is not supported by scientific evidence. A review of nearly 1,400 adolescent girls found no difference in teen pregnancy, incidence of sexually transmitted infection, or contraceptive counseling regardless of whether they received the HPV vaccine. Thousands of Americans die each year from cancers preventable by the vaccine.
There remains a disproportionate rate of HPV-related cancers among LatinX populations, leading researchers to explore how messaging may be made more effective in addressing vaccine hesitancy.

Vaccine schedule

Other concerns have been raised about the vaccine schedule recommended by the Advisory Committee on Immunization Practices (ACIP). The immunization schedule is designed to protect children against preventable diseases when they are most vulnerable; the practice of delaying or spacing out these vaccinations increases the amount of time the child is susceptible to those illnesses. Receiving vaccines on the schedule recommended by the ACIP is not linked to autism or developmental delay.

Information warfare

An analysis of tweets from July 2014 through September 2017 revealed an active campaign on Twitter by the Internet Research Agency (IRA), a Russian troll farm accused of interference in the 2016 U.S. elections, to sow discord about the safety of vaccines. The campaign used sophisticated Twitter bots to amplify polarizing pro-vaccine and anti-vaccine messages containing the hashtag #VaccinateUS, posted by IRA trolls. Throughout 2020 and 2021, the United States ran a propaganda campaign to spread disinformation about the Sinovac Chinese COVID-19 vaccine, including using fake social media accounts to spread the disinformation that the Sinovac vaccine contained pork-derived ingredients and was therefore haram under Islamic law. The campaign primarily targeted people in the Philippines and used a social media hashtag for "China is the virus" in Tagalog.

Alternative medicine

Many forms of alternative medicine are based on philosophies that oppose vaccination (including germ theory denialism) and have practitioners who voice their opposition. As a consequence, the increase in popularity of alternative medicine in the 1970s planted the seeds of the modern anti-vaccination movement. More specifically, some elements of the chiropractic community, some homeopaths, and some naturopaths developed anti-vaccine rhetoric. The reasons for this negative view of vaccination are complicated and rest, at least in part, on the early philosophies that shaped the foundation of these groups.

Chiropractic

Historically, chiropractic strongly opposed vaccination based on its belief that all diseases were traceable to causes in the spine and therefore could not be affected by vaccines. Daniel D. Palmer (1845–1913), the founder of chiropractic, wrote: "It is the very height of absurdity to strive to 'protect' any person from smallpox or any other malady by inoculating them with a filthy animal poison." Vaccination remains controversial within the profession, and most chiropractic writings on vaccination focus on its negative aspects. A 1995 survey of US chiropractors found that about one third believed there was no scientific proof that immunization prevents disease. While the Canadian Chiropractic Association supports vaccination, a 2002 survey in Alberta found that 25% of chiropractors advised patients for, and 27% against, vaccinations for themselves or their children. Although most chiropractic colleges try to teach about vaccination in a manner consistent with scientific evidence, several have faculty who seem to stress negative views.
A survey of a 1999–2000 cross-section of students of the Canadian Memorial Chiropractic College (CMCC), which does not formally teach anti-vaccination views, reported that fourth-year students opposed vaccination more strongly than did first-year students, with 29.4% of fourth-year students opposing vaccination. A follow-up study of 2011–12 CMCC students found that pro-vaccination attitudes heavily predominated, with students reporting support rates ranging from 84% to 90%. One of the study's authors attributed the change in attitude to the absence of the earlier influence of a "subgroup of some charismatic students who were enrolled at CMCC at the time, students who championed the Palmer postulates that advocated against the use of vaccination".

Policy positions

The American Chiropractic Association and the International Chiropractic Association support individual exemptions to compulsory vaccination laws. In March 2015, the Oregon Chiropractic Association invited Andrew Wakefield, chief author of a fraudulent research paper, to testify against Senate Bill 442, "a bill that would eliminate nonmedical exemptions from Oregon's school immunization law". The California Chiropractic Association lobbied against a 2015 bill ending belief exemptions for vaccines; it had also opposed a 2012 bill related to vaccination exemptions.

Homeopathy

Several surveys have shown that some practitioners of homeopathy, particularly homeopaths without any medical training, advise patients against vaccination. For example, a survey of registered homeopaths in Austria found that only 28% considered immunization an important preventive measure, and 83% of homeopaths surveyed in Sydney, Australia, did not recommend vaccination. Many practitioners of naturopathy also oppose vaccination. Homeopathic "vaccines" (nosodes) are ineffective because they do not contain any active ingredients and thus do not stimulate the immune system; they can be dangerous if they take the place of effective treatments. Some medical organizations have taken action against nosodes: in Canada, the labeling of homeopathic nosodes requires the statement "This product is neither a vaccine nor an alternative to vaccination."

Financial motives

Alternative medicine proponents gain from promoting vaccine conspiracy theories through the sale of ineffective and expensive medications, supplements, and procedures, such as chelation therapy and hyperbaric oxygen therapy, sold as able to cure the 'damage' caused by vaccines. Homeopaths in particular gain through the promotion of water injections or 'nosodes' that they allege have a 'natural' vaccine-like effect. Additional bodies with a vested interest in promoting the "unsafeness" of vaccines may include lawyers and legal groups organizing court cases and class-action lawsuits against vaccine providers. Conversely, alternative medicine providers have accused the vaccine industry of misrepresenting the safety and effectiveness of vaccines, covering up and suppressing information, and influencing health policy decisions for financial gain.

In the late 20th century, vaccines were a product with low profit margins, and the number of companies involved in vaccine manufacture declined. In addition to low profits and liability risks, manufacturers complained about the low prices paid for vaccines by the CDC and other US government agencies.
In the early 21st century, the vaccine market greatly improved with the approval of the vaccine Prevnar, along with a small number of other high-priced blockbuster vaccines, such as Gardasil and Pediarix, each of which had sales revenues of over $1 billion in 2008. Despite high growth rates, vaccines represent a relatively small portion of overall pharmaceutical profits; as recently as 2010, the World Health Organization estimated vaccines to represent 2–3% of total sales for the pharmaceutical industry.

Psychological factors

The rise in vaccine hesitancy has led to research on the psychology of those who actively oppose vaccines. The largest psychological factors leading to anti-vaccination attitudes are conspiratorial thinking, reactance, disgust regarding blood or needles, and individualistic or hierarchical worldviews; in contrast, demographic variables are not significant. Researchers have also investigated the psychological roots of vaccine hesitancy with regard to specific vaccines. For instance, a 2021 study published in Nature Communications investigated psychological characteristics associated with COVID-19 vaccine hesitancy and resistance in Ireland and the UK. The study found that vaccine-hesitant or vaccine-resistant respondents in the two countries varied across socio-demographic and health-related variables; however, they were similar across a range of psychological factors. Such respondents were less likely to obtain information about the pandemic from authoritative and traditional media sources, and demonstrated similar skepticism towards those sources when compared with respondents who accepted the vaccine.

Fear of needles

Blood-injection-injury phobia and a general fear of needles and injections can lead people to avoid vaccinations. One survey conducted in January and February 2021 estimated that this was responsible for 10% of COVID-19 vaccine hesitancy in the UK at the time. A 2012 survey of American parents found that fear of needles was the most common reason for adolescents to forgo the second dose of an HPV vaccine. Various treatments for fear of needles can help overcome this problem, from offering pain reduction at the time of injection to long-term behavioral therapy. Tensing the stomach muscles can help avoid fainting, swearing can reduce perceived pain, and distraction can also improve the perceived experience, such as by pretending to cough, performing a visual task, watching a video, or playing a video game. To avoid dissuading people who have a needle phobia, vaccine-uptake researchers recommend against using pictures of needles, people getting an injection, or faces displaying negative emotions (like a crying baby) in promotional materials. Instead, they recommend medically accurate photos depicting smiling, diverse people with bandages, vaccination cards, or a rolled-up sleeve; depicting vials instead of needles; and depicting the people who develop and test vaccines. Development of vaccines that can be administered orally or with a jet injector can also avoid triggering the fear of needles.

Social factors

Beyond misinformation, social and economic conditions also influence how many people take vaccines. Factors such as income, socioeconomic status, ethnicity, age, and education can determine the uptake of vaccines and their impact, especially among vulnerable communities. Social factors such as whether one lives with others may also affect vaccine uptake.
For example, older individuals who live alone are much more likely not to take up vaccines compared to those living with other people. Other factors may be racial, with minority groups being affected by low vaccine uptake. People with weaker immune systems or chronic illness are more likely to take up a vaccine if it is recommended by their physicians.

Other reasons

Unethical human experimentation and medical racism

Some people in groups experiencing medical racism are less willing to trust doctors and modern medicine due to real historical incidents of unethical human experimentation and involuntary sterilization. Famous examples include drug trials in Africa conducted without informed consent, the Guatemala syphilis experiments, the Tuskegee Syphilis Study, the culturing of cells from Henrietta Lacks without consent, and Nazi human experimentation. To overcome this type of distrust, experts recommend including representative samples of majority and minority populations in drug trials, including minority groups in study design, being diligent about informed consent, and being transparent about the process of drug design and testing.

Malpractice and fraud

CIA fake vaccination clinic

In Pakistan, the CIA ran a fake vaccination clinic in an attempt to locate Osama bin Laden. As a direct consequence, there have been several attacks on, and deaths among, vaccination workers. Several Islamist preachers and militant groups, including some factions of the Taliban, view vaccination as a plot to kill or sterilize Muslims. Efforts to eradicate polio have furthermore been disrupted by American drone strikes. Pakistan was among the few countries where polio remained endemic as of 2015.

Fake COVID-19 vaccines

In July 2021, Indian police arrested 14 people for administering doses of saline solution instead of the AstraZeneca vaccine at nearly a dozen private vaccination sites in Mumbai. The organizers, including medical professionals, charged between $10 and $17 per dose, and more than 2,600 people paid to receive what they thought was the vaccine. The federal government downplayed the scandal, claiming these cases were isolated. McAfee stated that India was among the countries most heavily targeted by fake apps luring people with the promise of vaccines. In Bhopal, slum residents were misled into thinking they would get an approved COVID-19 vaccine but were actually part of an experimental clinical trial for the domestic vaccine Covaxin. Only 50% of participants in the trial received the vaccine, with the rest receiving a placebo. One participant stated, "...I didn't know that there was a possibility you could get a water shot."

Religion

Since most religions predate the invention of vaccines, scriptures do not specifically address the topic of vaccination. However, vaccination has been opposed on religious grounds ever since it was first introduced. When vaccination was first becoming widespread, some Christian opponents argued that preventing smallpox deaths would be thwarting God's will and that such prevention is sinful. Opposition from some religious groups continues to the present day, on various grounds, raising ethical difficulties when the number of unvaccinated children threatens harm to the entire population. Many governments allow parents to opt out of their children's otherwise mandatory vaccinations for religious reasons; some parents falsely claim religious beliefs to obtain vaccination exemptions. Many Jewish community leaders support vaccination.
Among early Hasidic leaders, Rabbi Nachman of Breslov (1772–1810) was known for his criticism of the doctors and medical treatments of his day. However, when the first vaccines were successfully introduced, he stated: "Every parent should have his children vaccinated within the first three months of life. Failure to do so is tantamount to murder. Even if they live far from the city and have to travel during the great winter cold, they should have the child vaccinated before three months."

Although gelatin can be derived from many animals, Jewish and Islamic scholars have determined that since the gelatin is cooked and not consumed as food, vaccinations containing gelatin are acceptable. However, in 2015 and again in 2020, the possible use of porcine-based gelatin in vaccines raised religious concerns among Muslims and Orthodox Jews about the halal or kosher status of several vaccinations, including, in 2020, vaccines against COVID-19. The Muslim Council of Britain raised concern about the UK's intranasal influenza vaccine deployment in 2019 due to the presence of gelatin in the vaccine. The MCB subsequently clarified that it had never advised against the vaccine, that it did not have any religious authority to issue a fatwa on the matter, and that vaccines containing porcine gelatin are generally not considered haram if alternatives are unavailable (the injectable flu vaccine was also offered in Scotland, but not in England).

In India, in 2018, a three-minute doctored clip circulated among Muslims claiming that the MR-VAC vaccine against measles and rubella was a "Modi government-RSS conspiracy" to stop the population growth of Muslims. The clip was taken from a TV show that had exposed the baseless rumors. Hundreds of madrassas in the state of Uttar Pradesh refused permission to health department teams to administer vaccines because of rumors spread using WhatsApp.

Some Christians have objected to the use of cell cultures of some viral vaccines, and to the virus of the rubella vaccine, on the grounds that they are derived from tissues taken from therapeutic abortions performed in the 1960s. The principle of double effect, originated by Thomas Aquinas, holds that actions with both good and bad consequences are morally acceptable in specific circumstances. The Vatican Curia has said that for vaccines originating from embryonic cells, Catholics have "a grave responsibility to use alternative vaccines and to make a conscientious objection", but concluded that it is acceptable for Catholics to use the existing vaccines until an alternative becomes available.

In the United States, some parents falsely claim religious exemptions when their real motivation for avoiding vaccines is supposed safety concerns. For a number of years, only Mississippi, West Virginia, and California did not provide religious exemptions. Following the 2019 measles outbreaks, Maine and New York repealed their religious exemptions, and the state of Washington did so for the measles vaccination. According to a March 2021 poll conducted by The Associated Press/NORC, vaccine skepticism is more widespread among white evangelicals than among most other blocs of Americans; forty percent of white evangelical Protestants said they were not likely to get vaccinated against COVID-19.

Countermeasures

Vaccine hesitancy is challenging, and optimal strategies for approaching it remain uncertain.
Multicomponent initiatives that target undervaccinated populations, improve the convenience of and access to vaccines, provide education, and impose mandates may improve vaccination uptake. The World Health Organization (WHO) published a paper in 2016 intended to aid experts in responding to vaccine deniers in public. The WHO recommends that experts treat the general public, rather than the vaccine denier, as their target audience when debating in a public forum. It also suggests that experts make unmasking the techniques the vaccine denier uses to spread misinformation the goal of the conversation, asserting that this will make the public audience more resilient against anti-vaccine tactics. Providing information Many interventions designed to address vaccine hesitancy have been based on the information deficit model. This model assumes that vaccine hesitancy is due to a person lacking the necessary information and attempts to provide them with that information to solve the problem. Despite many educational interventions attempting this approach, ample evidence indicates that providing more information is often ineffective in changing a vaccine-hesitant person's views and may, in fact, have the opposite of the intended effect and reinforce their misconceptions. It is unclear whether interventions intended to educate parents about vaccines improve the rate of vaccination. It is also unclear whether citing benefits to others or herd immunity improves parents' willingness to vaccinate their children. In one trial, an educational intervention designed to dispel common misconceptions about the influenza vaccine decreased parents' false beliefs about the vaccine but did not improve uptake of the influenza vaccine. In fact, parents with significant concerns about adverse effects from the vaccine were less likely to vaccinate their children with the influenza vaccine after receiving this education. Communication strategies Several communication strategies are recommended for use when interacting with vaccine-hesitant parents. These include establishing honest and respectful dialogue; acknowledging the risks of a vaccine but balancing them against the risk of disease; referring parents to reputable sources of vaccine information; and maintaining ongoing conversations with vaccine-hesitant families. The American Academy of Pediatrics recommends healthcare providers directly address parental concerns about vaccines when questioned about their efficacy and safety. Additional recommendations include asking permission to share information; maintaining a conversational tone (as opposed to lecturing); not spending excessive amounts of time debunking specific myths (this may have the opposite effect of strengthening the myth in the person's mind); focusing on the facts and simply identifying the myth as false; and keeping information as simple as possible (if the myth seems simpler than the truth, it may be easier for people to accept the simple myth). Storytelling and anecdote (e.g., about the decision to vaccinate one's own children) can be powerful communication tools for conversations about the value of vaccination. A New Zealand-based general practitioner has used a comic, Jenny & the Eddies, both to educate children about vaccines and to address his patients' concerns through open, trusting, and non-threatening conversations, concluding: "I always listen to what people have to say on any matter. That includes vaccine hesitancy. 
That's a very important opening stage to improving the therapeutic relationship. If I'm going to change anyone's attitude, first I need to listen to them and be open-minded." The perceived strength of a healthcare provider's recommendation also seems to influence uptake, with recommendations perceived as stronger resulting in higher vaccination rates than those perceived as weaker. Provider presumption and persistence Limited evidence suggests that a more paternalistic or presumptive approach ("Your son needs three shots today.") is more likely to result in patient acceptance of vaccines during a clinic visit than a participatory approach ("What do you want to do about shots?") but decreases patient satisfaction with the visit. A presumptive approach helps to establish that vaccination is the normative choice. Similarly, one study found that the way in which physicians respond to parental vaccine resistance is important: nearly half of initially vaccine-resistant parents accepted vaccinations if physicians persisted in their initial recommendation. The Centers for Disease Control and Prevention has released resources to aid healthcare providers in having more effective conversations with parents about vaccinations. Pain mitigation for children Parents may be hesitant to have their children vaccinated due to concerns about the pain of vaccination. Several strategies can be used to reduce the child's pain. Such strategies include distraction techniques (such as pinwheels); deep breathing techniques; breastfeeding the child; giving the child sweet-tasting solutions; quickly administering the vaccine without aspirating; keeping the child upright; providing tactile stimulation; applying numbing agents to the skin; and saving the most painful vaccine for last. As above, the number of vaccines offered in a particular encounter is related to the likelihood of parental vaccine refusal (the more vaccines offered, the higher the likelihood of vaccine deferral). The use of combination vaccines to protect against more diseases with fewer injections may provide reassurance to parents. Similarly, reframing the conversation with less emphasis on the number of diseases the healthcare provider is immunizing against (e.g., "we will do two injections (combined vaccinations) and an oral vaccine") may be more acceptable to parents than "we're going to vaccinate against seven diseases". Cultural sensitivity Cultural sensitivity is important to reducing vaccine hesitancy. For example, pollster Frank Luntz discovered that for conservative Americans, family is by far the "most powerful motivator" to get a vaccine (over country, economy, community, or friends). Luntz also found a very pronounced preference for the word "vaccine" over "jab". Avoiding online misinformation It is recommended that healthcare providers advise parents against performing their own web searches, since many websites contain significant misinformation. Many parents perform their own research online and are often confused, frustrated, and unsure of which sources of information are trustworthy. Additional recommendations include introducing parents to the importance of vaccination as far in advance of the initial well-child visit as possible; presenting parents with vaccine safety information while in their pediatrician's waiting room; and using prenatal open houses and postpartum maternity ward visits as opportunities to vaccinate. 
Internet advertising, especially on social networking websites, is purchased by both public health authorities and anti-vaccination groups. In the United States, the majority of anti-vaccine Facebook advertising in December 2018 and February 2019 had been paid for by one of two groups: Children's Health Defense and Stop Mandatory Vaccination. The ads targeted women and young couples and generally highlighted the alleged risks of vaccines while asking for donations. Several anti-vaccination advertising campaigns also targeted areas where measles outbreaks were underway during this period. The impact of Facebook's subsequent advertising policy changes has not been studied. Incentive programs Several countries have implemented programs to counter vaccine hesitancy, including raffles, lotteries, rewards, and mandates. In the US state of Washington, authorities allowed licensed cannabis dispensaries to offer free joints as an incentive for COVID-19 vaccination, in an effort dubbed "Joints for Jabs". Vaccine mandates Mandatory vaccination is one set of policy measures to address vaccine hesitancy by imposing penalties or burdens on those who fail to vaccinate. An example of this kind of measure is Australia's childhood vaccination mandate, the No Jab No Pay policy. This policy linked financial payments to children's vaccine status and, while studies have found significant improvements in vaccination compliance, years later there were still issues of vaccine hesitancy. In 2021, the Australian airline Qantas announced plans to mandate COVID-19 vaccination for its workforce. Geographical distribution Vaccine hesitancy is becoming an increasing concern, particularly in industrialized nations. For example, one study surveying parents in Europe found that 12–28% of surveyed parents expressed doubts about vaccinating their children. Several studies have assessed socioeconomic and cultural factors associated with vaccine hesitancy. In populations around the world, both high and low socioeconomic status, as well as high and low education levels, have been associated with vaccine hesitancy. Migrant populations Migrants and refugees arriving and living in Europe face various difficulties in getting vaccinated, and many of them are not fully vaccinated. People arriving from Africa, Eastern Europe, the Eastern Mediterranean, and Asia are more likely to be under-vaccinated (partial or delayed vaccination). Recently arrived refugees, migrants, and asylum seekers were also less likely to be fully vaccinated than others from the same groups. Those with little contact with healthcare services, no citizenship, and lower income are also more likely to be under-vaccinated. Vaccination barriers for migrants include language and literacy barriers, lack of understanding of the need for or their entitlement to vaccines, concerns about side-effects, health professionals' lack of knowledge of vaccination guidelines for migrants, and practical or legal issues, for example, having no fixed address. Vaccine uptake among migrants can be increased by customised communication, clear policies, community-guided interventions (such as vaccine advocates), and vaccine offers in local, accessible settings. 
Australia An Australian study that examined the factors associated with vaccine attitudes and uptake separately found that under-vaccination correlated with lower socioeconomic status but not with negative attitudes towards vaccines. The researchers suggested that practical barriers are more likely to explain under-vaccination among individuals with lower socioeconomic status. A 2012 Australian study found that 52% of parents had concerns about the safety of vaccines. During the COVID-19 pandemic, COVID-19 vaccine hesitancy reportedly was spreading in remote Indigenous communities, where people are typically poorer and less educated. Europe Confidence in vaccines varies over place and time and among different vaccines. The Vaccine Confidence Project in 2016 found that confidence was lower in Europe than in the rest of the world. Refusal of the MMR vaccine has increased in twelve European states since 2010. The project published a report in 2018 assessing vaccine hesitancy among the public in all 28 EU member states and among general practitioners in ten of them. Younger adults in the survey had less confidence than older people. Confidence had risen in France, Greece, Italy, and Slovenia since 2015 but had fallen in the Czech Republic, Finland, Poland, and Sweden. 36% of the GPs surveyed in the Czech Republic and 25% of those in Slovakia did not agree that the MMR vaccine was safe. Most of the GPs did not recommend the seasonal influenza vaccine. Confidence in the population correlated with confidence among GPs. Policy implications Multiple major medical societies, including the Infectious Diseases Society of America, the American Medical Association, and the American Academy of Pediatrics, support the elimination of all nonmedical exemptions for childhood vaccines. Individual liberty Compulsory vaccination policies have been controversial as long as they have existed, with opponents of mandatory vaccinations arguing that governments should not infringe on an individual's freedom to make medical decisions for themselves or their children, while proponents of compulsory vaccination cite the well-documented public health benefits of vaccination. Others argue that, for compulsory vaccination to effectively prevent disease, there must be not only available vaccines and a population willing to immunize, but also sufficient ability to decline vaccination on grounds of personal belief. Vaccination policy involves complicated ethical issues, as unvaccinated individuals are more likely to contract and spread disease to people with weaker immune systems, such as young children and the elderly, and to other individuals in whom the vaccine has not been effective. However, mandatory vaccination policies raise ethical issues regarding parental rights and informed consent. In the United States, vaccinations are not truly compulsory, but they are typically required in order for children to attend public schools. As of January 2021, five states (Mississippi, West Virginia, California, Maine, and New York) have eliminated religious and philosophical exemptions to required school immunizations. Children's rights Medical ethicist Arthur Caplan argues that children have a right to the best available medical care, including vaccines, regardless of parental feelings toward vaccines, saying "Arguments about medical freedom and choice are at odds with the human and constitutional rights of children. When parents won't protect them, governments must." 
A review of American court cases from 1905 to 2016 found that, of the nine courts that have heard cases regarding whether not vaccinating a child constitutes neglect, seven have held vaccine refusal to be a form of child neglect. To prevent the spread of disease by unvaccinated individuals, some schools and doctors' surgeries have prohibited unvaccinated children from being enrolled, even where not required by law. Refusal of doctors to treat unvaccinated children may cause harm to both the child and public health, and may be considered unethical if the parents are unable to find another healthcare provider for the child. Opinion on this is divided, with the largest professional association, the American Academy of Pediatrics, saying that exclusion of unvaccinated children may be an option under narrowly defined circumstances. History Variolation Early attempts to prevent smallpox involved deliberate inoculation with the milder form of the disease (variola minor) in the expectation that a mild case would confer immunity and avert variola major. Originally called inoculation, this technique was later called variolation to avoid confusion with cowpox inoculation (vaccination) when that was introduced by Edward Jenner. Although variolation had a long history in China and India, it was first used in North America and England in 1721. Reverend Cotton Mather introduced variolation to Boston, Massachusetts, during the 1721 smallpox epidemic. Despite strong opposition in the community, Mather convinced Zabdiel Boylston to try it. Boylston first experimented on his 6-year-old son, his slave, and his slave's son; each subject contracted the disease and was sick for several days until the sickness vanished and they were "no longer gravely ill". Boylston went on to variolate thousands of Massachusetts residents, and many places were named for him in gratitude as a result. Lady Mary Wortley Montagu introduced variolation to England. She had seen it used in Turkey and, in 1718, had her son successfully variolated in Constantinople under the supervision of Charles Maitland. When she returned to England in 1721, she had her daughter variolated by Maitland. This aroused considerable interest, and Sir Hans Sloane organized the variolation of some inmates in Newgate Prison. These were successful, and after a further short trial in 1722, two daughters of Caroline of Ansbach, Princess of Wales, were variolated without mishap. With this royal approval, the procedure became common when smallpox epidemics threatened. Religious arguments against inoculation were soon advanced. For example, in a 1722 sermon entitled "The Dangerous and Sinful Practice of Inoculation", the English theologian Reverend Edmund Massey argued that diseases are sent by God to punish sin and that any attempt to prevent smallpox via inoculation is a "diabolical operation". It was customary at the time for popular preachers to publish sermons, which reached a wide audience. This was the case with Massey, whose sermon reached North America, where there was early religious opposition, particularly by John Williams. A greater source of opposition there was William Douglass, a medical graduate of Edinburgh University and a Fellow of the Royal Society, who had settled in Boston. Smallpox vaccination After Edward Jenner introduced the smallpox vaccine in 1798, variolation declined and was banned in some countries. 
As with variolation, there was some religious opposition to vaccination, although this was balanced to some extent by support from clergymen, such as Reverend Robert Ferryman, a friend of Jenner's, and Rowland Hill, who not only preached in its favour but also performed vaccinations themselves. There was also opposition from some variolators, who saw the loss of a lucrative monopoly. William Rowley published illustrations of deformities allegedly produced by vaccination, lampooned in a famous caricature by James Gillray, and Benjamin Moseley likened cowpox to syphilis, starting a controversy that would last into the 20th century. There was legitimate concern from supporters of vaccination about its safety and efficacy, but this was overshadowed by general condemnation, particularly when legislation started to introduce compulsory vaccination. The reason for this was that vaccination was introduced before laboratory methods were developed to control its production and account for its failures. The vaccine was maintained initially through arm-to-arm transfer and later through production on the skin of animals, and bacteriological sterility was impossible. Further, identification methods for potential pathogens were not available until the late 19th to early 20th century. Diseases later shown to be caused by contaminated vaccine included erysipelas, tuberculosis, tetanus, and syphilis. This last, though rare (estimated at 750 cases in 100 million vaccinations), attracted particular attention. Much later, Charles Creighton, a leading medical opponent of vaccination, claimed that the vaccine itself was a cause of syphilis and devoted a book to the subject. As cases of smallpox started to occur in those who had been vaccinated earlier, supporters of vaccination pointed out that these were usually very mild and occurred years after the vaccination. In turn, opponents of vaccination pointed out that this contradicted Jenner's belief that vaccination conferred complete protection. The views of opponents of vaccination that it was both dangerous and ineffective led to the development of determined anti-vaccination movements in England when legislation was introduced to make vaccination compulsory. England Because of its greater risks, variolation was banned in England by the Vaccination Act 1840 (3 & 4 Vict. c. 29), which also introduced free voluntary vaccination for infants. Thereafter Parliament passed successive acts to enact and enforce compulsory vaccination. The Vaccination Act 1853 (16 & 17 Vict. c. 100) introduced compulsory vaccination, with fines for non-compliance and imprisonment for non-payment. The Vaccination Act 1867 (30 & 31 Vict. c. 84) extended the age requirement to 14 years and introduced repeated fines for repeated refusal for the same child. Initially, vaccination regulations were organised by the local Poor Law Guardians, and in towns where there was strong opposition to vaccination, sympathetic guardians were elected who did not pursue prosecutions. This was changed by the Vaccination Act 1871 (34 & 35 Vict. c. 98), which required guardians to act. This significantly changed the relationship between the government and the public, and organized protests increased. In Keighley, Yorkshire, in 1876 the guardians were arrested and briefly imprisoned in York Castle, prompting large demonstrations in support of the "Keighley Seven". The protest movements crossed social boundaries. 
The financial burden of fines fell hardest on the working class, who would provide the largest numbers at public demonstrations. Societies and publications were organized by the middle classes, and support came from celebrities such as George Bernard Shaw and Alfred Russel Wallace, doctors such as Charles Creighton and Edgar Crookshank, and parliamentarians such as Jacob Bright and James Allanson Picton. By 1885, with over 3,000 prosecutions pending in Leicester, a mass rally there was attended by over 20,000 protesters. Under increasing pressure, the government appointed a Royal Commission on Vaccination in 1889, which issued six reports between 1892 and 1896, with a detailed summary in 1898. Its recommendations were incorporated into the Vaccination Act 1898 (61 & 62 Vict. c. 49), which still required compulsory vaccination but allowed exemption on the grounds of conscientious objection on presentation of a certificate signed by two magistrates. These were not easy to obtain in towns where magistrates supported compulsory vaccination, and after continued protests, a further act in 1907 allowed exemption on a simple signed declaration. Although this solved the immediate problem, the compulsory vaccination acts remained legally enforceable, and determined opponents lobbied for their repeal; "No Compulsory Vaccination" was one of the demands of the Labour Party's 1900 general election manifesto. Repeal came as a matter of routine when the National Health Service was introduced in 1948, with "almost negligible" opposition from supporters of compulsory vaccination. Vaccination in Wales was covered by English legislation, but the Scottish legal system was separate. Vaccination was not made compulsory there until 1863, and conscientious objection was allowed only in 1907, after vigorous protest. In the late 19th century, Leicester in the UK received much attention because of how smallpox was managed there. There was particularly strong opposition to compulsory vaccination, and medical authorities had to work within this framework. They developed a system that did not use vaccination but was based on the notification of cases, the strict isolation of patients and contacts, and the provision of isolation hospitals. This proved successful but required acceptance of compulsory isolation rather than vaccination. C. Killick Millard, initially a supporter of compulsory vaccination, was appointed Medical Officer of Health in 1901. He moderated his views on compulsion but encouraged contacts and his staff to accept vaccination. This approach, developed initially due to overwhelming opposition to government policy, became known as the Leicester method. In time it became generally accepted as the most appropriate way to deal with smallpox outbreaks and was listed as one of the "important events in the history of smallpox control" by those most involved in the World Health Organization's successful Smallpox Eradication Campaign. The final stages of the campaign, generally referred to as "surveillance containment", owed much to the Leicester method. United States In the US, President Thomas Jefferson took a close interest in vaccination, alongside Benjamin Waterhouse, chief physician at Boston. Jefferson encouraged the development of ways to transport vaccine material through the Southern states, which included measures to avoid damage by heat, a leading cause of ineffective batches. 
Smallpox outbreaks were contained by the latter half of the 19th century, a development widely attributed to the vaccination of a large portion of the population. Vaccination rates fell after this decline in smallpox cases, and the disease again became epidemic in the late 19th century. After an 1879 visit to New York by the prominent British anti-vaccinationist William Tebb, the Anti-Vaccination Society of America was founded. The New England Anti-Compulsory Vaccination League formed in 1882, and the Anti-Vaccination League of New York City in 1885. Tactics in the US largely followed those used in England. Vaccination in the US was regulated by individual states, in which there followed a progression of compulsion, opposition, and repeal similar to that in England. Although generally organized on a state-by-state basis, the vaccination controversy reached the US Supreme Court in 1905. There, in the case of Jacobson v. Massachusetts, the court ruled that states have the authority to require vaccination against smallpox during a smallpox epidemic. John Pitcairn, the wealthy founder of the Pittsburgh Plate Glass Company (now PPG Industries), emerged as a major financier and leader of the American anti-vaccination movement. On March 5, 1907, in Harrisburg, Pennsylvania, he delivered an address to the Committee on Public Health and Sanitation of the Pennsylvania General Assembly criticizing vaccination. He later sponsored the National Anti-Vaccination Conference, which was held in Philadelphia in October 1908 and led to the creation of the Anti-Vaccination League of America. When the league organized later that month, members chose Pitcairn as their first president. On December 1, 1911, Pitcairn was appointed by Pennsylvania Governor John K. Tener to the Pennsylvania State Vaccination Commission and subsequently authored a detailed report strongly opposing the commission's conclusions. He remained a staunch opponent of vaccination until his death in 1916. Brazil In November 1904, in response to years of inadequate sanitation and disease, followed by a poorly explained public health campaign led by the renowned Brazilian public health official Oswaldo Cruz, citizens and military cadets in Rio de Janeiro rose up in the Revolta da Vacina, or Vaccine Revolt. Riots broke out on the day a vaccination law took effect; vaccination symbolized the most feared and most tangible aspect of a public health plan that included other features, such as urban renewal, that many had opposed for years. Later vaccines and antitoxins Opposition to smallpox vaccination continued into the 20th century and was joined by controversy over new vaccines and the introduction of antitoxin treatment for diphtheria. The injection of horse serum into humans, as used in antitoxins, can cause hypersensitivity, commonly referred to as serum sickness. Moreover, the continued production of the smallpox vaccine in animals and the production of antitoxins in horses prompted anti-vivisectionists to oppose vaccination. Diphtheria antitoxin was serum from horses that had been immunized against diphtheria, and was used to treat human cases by providing passive immunity. In 1901, antitoxin from a horse named Jim was contaminated with tetanus and killed 13 children in St. Louis, Missouri. This incident, together with nine deaths from tetanus from contaminated smallpox vaccine in Camden, New Jersey, led directly and quickly to the passing of the Biologics Control Act in 1902. The Bundaberg tragedy of 1928 saw a diphtheria antitoxin contaminated with the bacterium Staphylococcus 
aureus kill 12 children in Bundaberg, Australia, resulting in the suspension of local immunisation programs. Robert Koch developed tuberculin in 1890. Inoculated into individuals who have had tuberculosis, it produces a hypersensitivity reaction and is still used to detect those who have been infected. However, Koch also used tuberculin as a vaccine. This caused serious reactions and deaths in individuals whose latent tuberculosis was reactivated by the tuberculin, a major setback for supporters of new vaccines. Such incidents and others ensured that any untoward results concerning vaccination and related procedures received continued publicity, which grew as the number of new procedures increased. In 1955, in a tragedy known as the Cutter incident, Cutter Laboratories produced 120,000 doses of the Salk polio vaccine that inadvertently contained some live poliovirus along with inactivated virus. This vaccine caused 40,000 cases of polio, 53 cases of paralysis, and five deaths. The disease spread through the recipients' families, creating a polio epidemic that led to a further 113 cases of paralytic polio and another five deaths. It was one of the worst pharmaceutical disasters in US history. Later 20th-century events included the 1982 broadcast of DPT: Vaccine Roulette, which sparked debate over the DPT vaccine, and the 1998 publication of a fraudulent academic article by Andrew Wakefield, which sparked the MMR vaccine controversy. More recently, the HPV vaccine became controversial due to concerns that it may encourage promiscuity when given to 11- and 12-year-old girls. Arguments against vaccines in the 21st century are often similar to those of 19th-century anti-vaccinationists. Around 2014, anti-vaccine rhetoric shifted from mostly scientific and medical arguments, such as the idea that vaccines were harming children, to political arguments, such as what David Broniatowski of George Washington University has called a "don't-tell-me-what-to-do freedom movement." At the same time, according to Renée DiResta, a researcher at the Stanford Internet Observatory, anti-vaxxers began networking with Tea Party and Second Amendment activists in a "weird libertarian crossover". This happened partly because anti-vaccine medical arguments failed to stop the passage of SB277 in California. COVID-19 In mid-2020, surveys on whether people would be willing to take a potential COVID-19 vaccine estimated that 67% or 80% of people in the US would accept a new vaccination against COVID-19. In the United Kingdom, a 16 November 2020 YouGov poll showed that 42% of respondents said they were very likely to take the vaccine and 25% fairly likely (67% likely overall); 11% said they were very unlikely and 10% fairly unlikely to do so (21% unlikely overall), and 12% were unsure. A number of reasons have been expressed for why people might not wish to take a COVID-19 vaccine, such as concerns over safety, a self-perception of being "low risk", or doubts about the Pfizer-BioNTech vaccine in particular. Of those reluctant to take it, 8% said it was because they opposed vaccinations overall; this amounts to just 2% of the British public. A December 2020 Ipsos/World Economic Forum 15-country poll asked online respondents whether they agreed with the statement: "If a vaccine for COVID-19 were available, I would get it." Rates of agreement were lowest in France (40%), Russia (43%), and South Africa (53%). In the United States, 69% of those polled agreed with the statement; rates were even higher in Britain (77%) and China (80%). 
A March 2021 NPR/PBS NewsHour/Marist poll found the difference between white and black Americans to be within the margin of error, but 47% of Trump supporters said they would refuse a COVID-19 vaccine, compared to 30% of all adults. In May 2021, the Institute of Global Health Innovation at Imperial College London released a report, "Global attitudes towards a COVID-19 vaccine", based on survey data collected from March to May 2021 in 15 countries: Australia, Canada, Denmark, France, Germany, Israel, Italy, Japan, Norway, Singapore, South Korea, Spain, Sweden, the UK, and the US. It found that in 13 of the 15 countries more than 50% of people were confident in COVID-19 vaccines. In the UK, 87% of survey respondents said they trusted the vaccines, a significant increase in confidence compared with earlier, less reliable polls. The survey also found that trust in different vaccine brands varied, with the Pfizer–BioNTech COVID-19 vaccine being the most trusted across all age groups in most countries, particularly among those under 65. A January 2022 report from Time magazine noted that the anti-vaccine movement "has repositioned itself as an opposition to mandates and government overreach." A May 2022 report from The New York Times noted that "A wave of parents has been radicalized by Covid-era misinformation to reject ordinary childhood immunizations—with potentially lethal consequences." Events following reductions in vaccination In several countries, reductions in the use of some vaccines were followed by increases in the diseases' morbidity and mortality. According to the Centers for Disease Control and Prevention, continued high levels of vaccine coverage are necessary to prevent a resurgence of diseases that have been nearly eliminated. Pertussis remains a major health problem in developing countries, where mass vaccination is not practiced; the World Health Organization estimates it caused 294,000 deaths in 2002. Vaccine hesitancy has contributed to the resurgence of preventable disease: in 2019, for example, the number of measles cases increased by thirty percent worldwide, and many cases occurred in countries that had nearly eliminated measles. Stockholm, smallpox (1873–74) An anti-vaccination campaign motivated by religious objections, concerns about effectiveness, and concerns about individual rights led to the vaccination rate in Stockholm dropping to just over 40%, compared to about 90% elsewhere in Sweden. A major smallpox epidemic began there in 1873. It led to a rise in vaccine uptake and an end to the epidemic. UK, pertussis (1970s–80s) In a 1974 report ascribing 36 reactions to the whooping cough (pertussis) vaccine, a prominent public-health academic claimed that the vaccine was only marginally effective and questioned whether its benefits outweighed its risks, and extended television and press coverage caused a scare. Vaccine uptake in the UK decreased from 81% to 31%, and pertussis epidemics followed, leading to the deaths of some children. The mainstream medical opinion continued to support the effectiveness and safety of the vaccine; public confidence was restored after the publication of a national reassessment of vaccine efficacy. Vaccine uptake then increased to levels above 90%, and disease incidence declined dramatically. 
Sweden, pertussis (1979–96) During the moratorium in which Sweden suspended vaccination against whooping cough (pertussis) from 1979 to 1996, 60% of the country's children contracted the disease before the age of 10; close medical monitoring kept the death rate from whooping cough at about one per year. Netherlands, measles (1999–2000) An outbreak at a religious community and school in the Netherlands resulted in three deaths and 68 hospitalizations among 2,961 cases. The population in the several provinces affected had a high level of immunization, with the exception of one of the religious denominations, which traditionally does not accept vaccination. Ninety-five percent of those who contracted measles were unvaccinated. UK and Ireland, measles (2000) As a result of the MMR vaccine controversy, vaccination rates dropped sharply in the United Kingdom after 1996. From late 1999 until the summer of 2000, there was a measles outbreak in North Dublin, Ireland. At the time, the national immunization level had fallen below 80%, and in parts of North Dublin the level was around 60%. There were more than 100 hospital admissions from over 300 cases. Three children died and several more were gravely ill, some requiring mechanical ventilation to recover. Nigeria, polio, measles, diphtheria (2001–) In the early 2000s, conservative religious leaders in northern Nigeria, suspicious of Western medicine, advised their followers not to have their children vaccinated with the oral polio vaccine. The boycott was endorsed by the governor of Kano State, and immunization was suspended for several months. Subsequently, polio reappeared in a dozen formerly polio-free neighbors of Nigeria, and genetic tests showed the virus was the same one that originated in northern Nigeria. Nigeria had become a net exporter of the poliovirus to its African neighbors. People in the northern states were also reported to be wary of other vaccinations, and Nigeria reported over 20,000 measles cases and nearly 600 deaths from measles from January through March 2005. In northern Nigeria there is a common belief that vaccination is a strategy created by Westerners to reduce the northern population, and as a result of this belief a large number of northerners reject vaccination. In 2006, Nigeria accounted for over half of all new polio cases worldwide. Outbreaks continued thereafter; for example, at least 200 children died in a late-2007 measles outbreak in Borno State. United States, measles (2005–) In 2000, measles was declared eliminated from the United States because internal transmission had been interrupted for one year; the remaining reported cases were due to importation. A 2005 measles outbreak in the US state of Indiana was attributed to parents who had refused to have their children vaccinated. The Centers for Disease Control and Prevention (CDC) reported that the three biggest outbreaks of measles in 2013 were attributed to clusters of people who were unvaccinated due to their philosophical or religious beliefs. As of August 2013, three pockets of outbreak (New York City, North Carolina, and Texas) contributed to 64% of the 159 cases of measles reported in 16 states. The number of cases in 2014 quadrupled to 644, including transmission by unvaccinated visitors to Disneyland in California during the Disneyland measles outbreak. 
Some 97% of cases in the first half of the year were confirmed to be due directly or indirectly to importation (the remainder were unknown), with 49% originating in the Philippines. More than half the patients (165 out of 288, or 57%) during that time were confirmed to be unvaccinated by choice; 30 (10%) were confirmed to have been vaccinated. The final count of measles in 2014 was 668 cases in 27 states. From January 1 to June 26, 2015, 178 people from 24 states and the District of Columbia were reported to have measles. Most of these cases (117 cases [66%]) were part of a large multi-state outbreak linked to Disneyland in California that continued from 2014. Analysis by CDC scientists showed that the measles virus type in this outbreak (B3) was identical to the virus type that caused the large measles outbreak in the Philippines in 2014. On July 2, 2015, the first confirmed death from measles in twelve years was recorded: an immunocompromised woman in Washington State was infected and later died of pneumonia due to measles. By July 2016, a three-month measles outbreak affecting at least 22 people had been spread by unvaccinated employees of the Eloy, Arizona detention center, an Immigration and Customs Enforcement (ICE) facility owned by the for-profit prison operator CoreCivic. Pinal County's health director presumed the outbreak likely originated with a migrant, but detainees had since received vaccinations. However, convincing CoreCivic's employees to become vaccinated or to demonstrate proof of immunity was much more difficult, he said. In spring 2017, a measles outbreak occurred in Minnesota. As of June 16, 78 cases of measles had been confirmed in the state; 71 of the patients were unvaccinated, and 65 were Somali-Americans. The outbreak was attributed to low vaccination rates among Somali-American children, which can be traced back to 2008, when Somali parents began to express concern about disproportionately high numbers of Somali preschoolers in special education classes who were receiving services for autism spectrum disorder. Around the same time, the disgraced former doctor Andrew Wakefield visited Minneapolis, teaming up with anti-vaccine groups to raise concerns that vaccines were the cause of autism, despite the fact that multiple studies have shown no connection between the MMR vaccine and autism. From fall 2018 to early 2019, New York State experienced an outbreak of over 200 confirmed measles cases. Many of these cases were attributed to ultra-Orthodox Jewish communities with low vaccination rates in areas within Brooklyn and Rockland County. State Health Commissioner Howard Zucker stated that this was the worst outbreak of measles in recent memory. In January 2019, Washington state reported an outbreak of at least 73 confirmed cases of measles, most within Clark County, which has a higher rate of vaccination exemptions than the rest of the state. This led state governor Jay Inslee to declare a state of emergency, and the state legislature to introduce legislation disallowing vaccination exemptions for personal or philosophical reasons. Wales, measles (2013–) In 2013, an outbreak of measles occurred in the Welsh city of Swansea. One death was reported. Some estimates indicate that while MMR uptake for two-year-olds was at 94% in Wales in 1995, it had fallen to as low as 67.5% in Swansea by 2003, meaning the region had a "vulnerable" age group. This has been linked to the MMR vaccine controversy, which caused a significant number of parents to fear allowing their children to receive the MMR vaccine. 
On June 5, 2017, a new measles outbreak occurred in Wales, at Lliswerry High School in Newport. United States, tetanus Most cases of pediatric tetanus in the U.S. occur in unvaccinated children. In Oregon, in 2017, an unvaccinated boy had a scalp wound that his parents sutured themselves. The boy later arrived at a hospital with tetanus. He spent 47 days in the intensive care unit (ICU) and 57 days in the hospital in total, at a cost of $811,929, not including the cost of airlifting him to Oregon Health and Science University's Doernbecher Children's Hospital or the subsequent two and a half weeks of inpatient rehabilitation he required. Despite this, his parents declined the administration of subsequent tetanus boosters or other vaccinations. Romania, measles (2016–present) As of September 2017, a measles epidemic was ongoing across Europe, especially Eastern Europe. In Romania, there were about 9,300 cases, and 34 people (all unvaccinated) had died. This was preceded by a 2008 controversy regarding the HPV vaccine. In 2012, the physician Christa Todea-Gross published a free downloadable book online; it contained vaccination misinformation from abroad translated into Romanian and significantly stimulated the growth of the anti-vaccine movement. The government of Romania officially declared a measles epidemic in September 2016 and started an information campaign to encourage parents to have their children vaccinated. By February 2017, however, the stockpile of MMR vaccines was depleted, and doctors were overburdened. Around April, the vaccine stockpile had been restored. By March 2019, the death toll had risen to 62, with 15,981 cases reported. Samoa, measles (2019) The 2019 Samoa measles outbreak began in October 2019; as of December 12, there were 4,995 confirmed cases of measles and 72 deaths, out of a Samoan population of 201,316. A state of emergency was declared on November 17, ordering all schools to be closed, barring children under 17 from public events, and making vaccination mandatory. UNICEF sent 110,500 doses of vaccine to Samoa, and Tonga and Fiji also declared states of emergency. The outbreak was attributed to a sharp drop in measles vaccination from the previous year, following an incident in 2018 when two infants died shortly after receiving measles vaccinations, which led the country to suspend its measles vaccination program. The two infants died because two nurses had incorrectly prepared the vaccine, mixing the vaccine powder with expired anesthetic. As of November 30, more than 50,000 people had been vaccinated by the government of Samoa.
Biology and health sciences
Drugs and pharmacology
null
1742892
https://en.wikipedia.org/wiki/Weaver%20ant
Weaver ant
Weaver ants or green ants are eusocial insects of the family Formicidae (order Hymenoptera) belonging to the tribe Oecophyllini. Weaver ants live in trees (they are obligately arboreal) and are known for their unique nest-building behaviour, in which workers construct nests by weaving together leaves using larval silk. Colonies can be extremely large, consisting of more than a hundred nests spanning numerous trees and containing more than half a million workers. Like many other ant species, weaver ants prey on small insects and supplement their diet with carbohydrate-rich honeydew excreted by scale insects (Hemiptera). Weaver ant workers exhibit a clear bimodal size distribution, with almost no overlap between the size of the minor and major workers. The major workers are approximately 8–10 mm in length and the minors approximately half the length of the majors. Major workers forage, defend, maintain, and expand the colony, whereas minor workers tend to stay within the nests, where they care for the brood and 'milk' scale insects in or close to the nests. Weaver ants vary in color from reddish to yellowish brown depending on the species. Oecophylla smaragdina found in Australia often have bright green gasters. Weaver ants are highly territorial, and workers aggressively defend their territories against intruders. Because they prey on insects harmful to their host trees, weaver ants are sometimes used by indigenous farmers, particularly in southeast Asia, as natural biocontrol agents against agricultural pests. Although weaver ants lack a functional sting, they can inflict painful bites and often spray formic acid directly at the bite wound, resulting in intense discomfort. Taxonomy Oecophylla (subfamily Formicinae) is one group of weaver ants containing two closely related living species: O. longinoda and O. smaragdina. They are placed in a tribe of their own, Oecophyllini, together with the extinct genus Eoecophylla. The weaver ant genus Oecophylla is relatively old, and 15 fossil species have been described from Eocene to Miocene deposits. The oldest members of both Oecophyllini and Oecophylla are fossils described from the mid-Ypresian Eocene Okanagan Highlands of northwestern North America. Two other genera of weaving ants, Polyrhachis and Camponotus, also use larval silk in nest construction, but the construction and architecture of their nests are simpler than those of Oecophylla. The common features of the genus include an elongated first funicular segment, the presence of propodeal lobes, a helcium at midheight of abdominal segment 3, and a gaster capable of reflexion over the mesosoma. Males have vestigial pretarsal claws. Genera and species Oecophylla Extant species: Oecophylla kolhapurensis Oecophylla longinoda Oecophylla smaragdina Extinct species: †Oecophylla atavina †Oecophylla bartoniana †Oecophylla brischkei †Oecophylla crassinoda †Oecophylla eckfeldiana †Oecophylla grandimandibula †Oecophylla kraussei †Oecophylla leakeyi †Oecophylla longiceps †Oecophylla megarche †Oecophylla obesa †Oecophylla praeclara †Oecophylla sicula †Oecophylla superba †Eoecophylla Description Oecophylla have 12-segmented antennae, a feature shared with some other ant genera. The mandibles each have 10 or more teeth, and the fourth tooth from the tip is longer than the third and fifth teeth. The palps are short, with the maxillary palps being 5-segmented and the labial palps being 4-segmented. The mesonotum is constricted and (in dorsal view) narrower than the pronotum and propodeum. The node of the petiole is low and rounded. 
Distribution and habitat O. longinoda is distributed in the Afrotropics, and O. smaragdina from India and Sri Lanka in southern Asia, through southeastern Asia to northern Australia and Melanesia. In Australia, Oecophylla smaragdina is found in the tropical coastal areas as far south as Broome in Western Australia and across the coastal tropics of the Northern Territory down to Yeppoon in Queensland. Colony ontogeny and social organization Weaver ant colonies are founded by one or more mated females (queens). A queen lays her first clutch of eggs on a leaf and protects and feeds the larvae until they develop into mature workers. The workers then construct leaf nests and help rear new brood laid by the queen. As the number of workers increases, more nests are constructed and colony productivity and growth increase significantly. Workers perform tasks that are essential to colony survival, including foraging, nest construction, and colony defense. The exchange of information and modulation of worker behaviour that occur during worker-worker interactions are facilitated by the use of chemical and tactile communication signals. These signals are used primarily in the contexts of foraging and colony defense. Successful foragers lay down pheromone trails that help recruit other workers to new food sources. Pheromone trails are also used by patrollers to recruit workers against territorial intruders. Along with chemical signals, workers also use tactile communication signals such as antennation and body shaking to stimulate activity in signal recipients. Multimodal communication in Oecophylla weaver ants contributes importantly to colony self-organization. Like many other ant species, Oecophylla workers exhibit social carrying behavior as part of the recruitment process, in which one worker will carry another worker in its mandibles and transport it to a location requiring attention. Nest building behaviour Oecophylla weaver ants are known for the cooperative behaviour used in nest construction. Possibly the first description of weaver ants' nest building behaviour was made by the English naturalist Joseph Banks, who took part in Captain James Cook's voyage to Australia in 1768. An excerpt from Joseph Banks' journal (cited in Hölldobler and Wilson 1990) is included below: The ants...one green as a leaf, and living upon trees, where it built a nest, in size between that of a man's head and his fist, by bending the leaves together, and gluing them with whitish paperish substances which held them firmly together. In doing this their management was most curious: they bend down four leaves broader than a man's hand, and place them in such a direction as they choose. This requires a much larger force than these animals seem capable of; many thousands indeed are employed in the joint work. I have seen as many as could stand by one another, holding down such a leaf, each drawing down with all his might, while others within were employed to fasten the glue. How they had bent it down I had not the opportunity of seeing, but it was held down by main strength, I easily proved by disturbing a part of them, on which the leaf bursting from the rest, returned to its natural situation, and I had an opportunity of trying with my finger the strength of these little animals must have used to get it down. The weaver ants' ability to build capacious nests from living leaves has undeniably contributed to their ecological success. 
The first phase in nest construction involves workers surveying potential nesting leaves by pulling on the edges with their mandibles. When a few ants have successfully bent a leaf onto itself or drawn its edge toward another, other workers nearby join the effort. The probability of a worker joining the concerted effort depends on the size of the group, with workers showing a higher probability of joining when group size is large. When the span between two leaves is beyond the reach of a single ant, workers form chains with their bodies by grasping one another's petiole (waist). Multiple intricate chains working in unison are often used to ratchet together large leaves during nest construction. Once the edges of the leaves are drawn together, other workers retrieve larvae from existing nests using their mandibles. Upon reaching a seam to be joined, these workers tap the heads of the clutched larvae, which causes them to excrete silk. A larva can produce only a limited amount of silk, and having surrendered it for nest construction must pupate without a cocoon. The workers then maneuver between the leaves in a highly coordinated fashion to bind them together. Weaver ants' nests are usually elliptical in shape and range in size from a single small leaf folded and bound onto itself to large nests consisting of many leaves and measuring over half a meter in length. The time required to construct a nest varies depending on leaf type and eventual size, but a large nest can often be built in significantly less than 24 hours. Although weaver ants' nests are strong and impermeable to water, new nests are continually built by workers in large colonies to replace old, dying nests and those damaged by storms. Relationship with humans In agriculture Large colonies of Oecophylla weaver ants consume significant amounts of food, and workers continuously kill a variety of arthropods (primarily other insects) close to their nests. Insects are not only consumed by workers; this protein source is also necessary for brood development. Because weaver ant workers hunt and kill insects that are potentially harmful plant pests, trees harboring weaver ants benefit from decreased levels of herbivory. They have traditionally been used in biological control in Chinese and Southeast Asian citrus orchards since at least 400 AD. Many studies have shown the efficacy of using weaver ants as natural biocontrol agents against agricultural pests. The use of weaver ants as biocontrol agents has been especially effective for fruit agriculture, particularly in Australia and southeast Asia. Fruit trees harboring weaver ants produce higher quality fruits, show less leaf damage by herbivores, and require fewer applications of synthetic pesticides. They do, on the other hand, protect the scale insects that they 'milk' for honeydew. In several cases the use of weaver ants has nonetheless been shown to be more efficient than applying chemical insecticides and at the same time cheaper, leaving farmers with increased net incomes and more sustainable pest control. Weaver ant husbandry is often practiced in Southeast Asia, where farmers provide shelter and food and construct ropes between trees populated with weaver ants in order to protect their colonies from potential competitors. Oecophylla colonies may not be entirely beneficial to the host plants. 
Studies indicate that the presence of Oecophylla colonies may also have negative effects on the performance of host plants, by reducing fruit removal by mammals and birds (and therefore seed dispersal) and by lowering the flower-visiting rate of flying insects, including pollinators. Weaver ants also have an adverse effect on tree productivity by protecting sap-feeding insects such as scale insects and leafhoppers, from which they collect honeydew. By protecting these insects from predators, they increase their populations and the damage they cause to trees. As food, feed and medicine Weaver ants are one of the most valued types of edible insects consumed by humans (human entomophagy). In addition to being used as a biological control agent to increase plant production, weaver ants can be utilized directly as a protein and food source, since the ants (especially the ant larvae) are edible for humans and high in protein and fatty acids. In some countries the weaver ant is a highly prized delicacy harvested in vast amounts, and in this way it contributes to local economies. In northeastern Thailand the price of weaver ant larvae is twice the price of good-quality beef, and in a single Thai province ant larvae worth US$620,000 are harvested every year. It has furthermore been shown that the harvest of weaver ants can be maintained while at the same time using the ants for biocontrol of pest insects in tropical plantations, since the queen larvae and pupae, which are the primary targets of harvest, are not vital for colony survival. The larvae of weaver ants are also collected commercially as an expensive feed for insect-eating birds in Indonesia. In India and China, the worker ants are used in traditional medicine.
Biology and health sciences
Hymenoptera
Animals
1743431
https://en.wikipedia.org/wiki/Convergent%20series
Convergent series
In mathematics, a series is the sum of the terms of an infinite sequence of numbers. More precisely, an infinite sequence $(a_1, a_2, a_3, \ldots)$ defines a series $S$ that is denoted $S = a_1 + a_2 + a_3 + \cdots = \sum_{k=1}^{\infty} a_k.$ The $n$th partial sum $S_n$ is the sum of the first $n$ terms of the sequence; that is, $S_n = a_1 + a_2 + \cdots + a_n = \sum_{k=1}^{n} a_k.$ A series is convergent (or converges) if and only if the sequence $(S_1, S_2, S_3, \ldots)$ of its partial sums tends to a limit; that means that, when adding one $a_k$ after the other in the order given by the indices, one gets partial sums that become closer and closer to a given number. More precisely, a series converges, if and only if there exists a number $\ell$ such that for every arbitrarily small positive number $\varepsilon$, there is a (sufficiently large) integer $N$ such that for all $n \ge N$, $|S_n - \ell| < \varepsilon.$ If the series is convergent, the (necessarily unique) number $\ell$ is called the sum of the series. The same notation $\sum_{k=1}^{\infty} a_k$ is used for the series, and, if it is convergent, for its sum. This convention is similar to that which is used for addition: $a + b$ denotes the operation of adding $a$ and $b$ as well as the result of this addition, which is called the sum of $a$ and $b$. Any series that is not convergent is said to be divergent or to diverge. Examples of convergent and divergent series The reciprocals of the positive integers produce a divergent series (harmonic series): $\frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots \to \infty.$ Alternating the signs of the reciprocals of positive integers produces a convergent series (alternating harmonic series): $\frac{1}{1} - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \ln 2.$ The reciprocals of prime numbers produce a divergent series (so the set of primes is "large"; see divergence of the sum of the reciprocals of the primes): $\frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \cdots \to \infty.$ The reciprocals of triangular numbers produce a convergent series: $\frac{1}{1} + \frac{1}{3} + \frac{1}{6} + \frac{1}{10} + \frac{1}{15} + \cdots = 2.$ The reciprocals of factorials produce a convergent series (see e): $\frac{1}{1} + \frac{1}{1} + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \frac{1}{120} + \cdots = e.$ The reciprocals of square numbers produce a convergent series (the Basel problem): $\frac{1}{1} + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots = \frac{\pi^2}{6}.$ The reciprocals of powers of 2 produce a convergent series (so the set of powers of 2 is "small"): $\frac{1}{1} + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 2.$ The reciprocals of powers of any $n > 1$ produce a convergent series: $\frac{1}{1} + \frac{1}{n} + \frac{1}{n^2} + \frac{1}{n^3} + \cdots = \frac{n}{n-1}.$ Alternating the signs of reciprocals of powers of 2 also produces a convergent series: $\frac{1}{1} - \frac{1}{2} + \frac{1}{4} - \frac{1}{8} + \cdots = \frac{2}{3}.$ Alternating the signs of reciprocals of powers of any $n > 1$ produces a convergent series: $\frac{1}{1} - \frac{1}{n} + \frac{1}{n^2} - \frac{1}{n^3} + \cdots = \frac{n}{n+1}.$ The reciprocals of Fibonacci numbers produce a convergent series (see ψ): $\frac{1}{1} + \frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{8} + \cdots = \psi.$ Convergence tests There are a number of methods of determining whether a series converges or diverges. Comparison test. The terms of the sequence $(a_n)$ are compared to those of another sequence $(b_n)$. If, for all $n$, $0 \le a_n \le b_n$, and $\sum_{n=1}^{\infty} b_n$ converges, then so does $\sum_{n=1}^{\infty} a_n.$ However, if, for all $n$, $0 \le b_n \le a_n$, and $\sum_{n=1}^{\infty} b_n$ diverges, then so does $\sum_{n=1}^{\infty} a_n.$ Ratio test. Assume that for all $n$, $a_n$ is not zero. Suppose that there exists $r$ such that $\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = r.$ If $r < 1$, then the series is absolutely convergent. If $r > 1$, then the series diverges. If $r = 1$, the ratio test is inconclusive, and the series may converge or diverge. Root test or nth root test. Suppose that the terms of the sequence in question are non-negative. Define $r$ as follows: $r = \limsup_{n \to \infty} \sqrt[n]{|a_n|},$ where "lim sup" denotes the limit superior (possibly $\infty$; if the limit exists it is the same value). If $r < 1$, then the series converges. If $r > 1$, then the series diverges. If $r = 1$, the root test is inconclusive, and the series may converge or diverge. The ratio test and the root test are both based on comparison with a geometric series, and as such they work in similar situations. In fact, if the ratio test works (meaning that the limit exists and is not equal to 1) then so does the root test; the converse, however, is not true. The root test is therefore more generally applicable, but as a practical matter the limit is often difficult to compute for commonly seen types of series. Integral test. 
The series can be compared to an integral to establish convergence or divergence. Let $f(n) = a_n$ be a positive and monotonically decreasing function. If $\int_{1}^{\infty} f(x)\,dx = \lim_{t \to \infty} \int_{1}^{t} f(x)\,dx < \infty,$ then the series converges. But if the integral diverges, then the series does so as well. Limit comparison test. If $(a_n), (b_n) > 0$, and the limit $\lim_{n \to \infty} \frac{a_n}{b_n}$ exists and is not zero, then $\sum_{n=1}^{\infty} a_n$ converges if and only if $\sum_{n=1}^{\infty} b_n$ converges. Alternating series test. Also known as the Leibniz criterion, the alternating series test states that for an alternating series of the form $\sum_{n=1}^{\infty} (-1)^n a_n$, if $(a_n)$ is monotonically decreasing and has a limit of 0 at infinity, then the series converges. Cauchy condensation test. If $(a_n)$ is a positive monotone decreasing sequence, then $\sum_{n=1}^{\infty} a_n$ converges if and only if $\sum_{k=1}^{\infty} 2^k a_{2^k}$ converges. Dirichlet's test Abel's test Conditional and absolute convergence For any sequence $(a_n)$, $a_n \leq |a_n|$ for all $n$. Therefore, $\sum_{n=1}^{\infty} a_n \leq \sum_{n=1}^{\infty} |a_n|.$ This implies that if $\sum_{n=1}^{\infty} |a_n|$ converges, then $\sum_{n=1}^{\infty} a_n$ also converges (but not vice versa). If the series $\sum_{n=1}^{\infty} |a_n|$ converges, then the series $\sum_{n=1}^{\infty} a_n$ is said to be absolutely convergent. The Maclaurin series of the exponential function is absolutely convergent for every complex value of the variable. If the series $\sum_{n=1}^{\infty} a_n$ converges but the series $\sum_{n=1}^{\infty} |a_n|$ diverges, then the series is conditionally convergent. The Maclaurin series of the logarithm function $\ln(1+x)$ is conditionally convergent for $x = 1$. The Riemann series theorem states that if a series converges conditionally, it is possible to rearrange the terms of the series in such a way that the series converges to any value, or even diverges. Agnew's theorem characterizes rearrangements that preserve convergence for all series. Uniform convergence Let $(f_1, f_2, f_3, \ldots)$ be a sequence of functions. The series $\sum_{n=1}^{\infty} f_n$ is said to converge uniformly to $f$ if the sequence of partial sums defined by $s_n(x) = \sum_{k=1}^{n} f_k(x)$ converges uniformly to $f$. There is an analogue of the comparison test for infinite series of functions called the Weierstrass M-test. Cauchy convergence criterion The Cauchy convergence criterion states that a series $\sum_{n=1}^{\infty} a_n$ converges if and only if the sequence of partial sums is a Cauchy sequence. This means that for every $\varepsilon > 0$ there is a positive integer $N$ such that for all $n \geq m \geq N$ we have $\left| \sum_{k=m}^{n} a_k \right| < \varepsilon,$ which is equivalent to $\lim_{n \to \infty} \left( \sup_{p \geq 0} \left| \sum_{k=n}^{n+p} a_k \right| \right) = 0.$
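To see the definitions above in action numerically, here is a minimal Python sketch (the helper names are illustrative, not from any library) comparing the partial sums of the divergent harmonic series with those of the convergent Basel series:

```python
import math
from itertools import islice

def partial_sums(terms, n):
    """Yield the partial sums S_1, ..., S_n of an infinite generator of terms."""
    total = 0.0
    for a in islice(terms, n):
        total += a
        yield total

def harmonic():
    n = 1
    while True:
        yield 1.0 / n
        n += 1

def basel():
    n = 1
    while True:
        yield 1.0 / (n * n)
        n += 1

# The harmonic series diverges: its partial sums grow like ln(n), without bound.
print(list(partial_sums(harmonic(), 100000))[-1])   # ~12.09, and still climbing
# The reciprocals of the squares converge (Basel problem): the limit is pi^2/6.
print(list(partial_sums(basel(), 100000))[-1], math.pi ** 2 / 6)   # ~1.64493 vs 1.64493
```

No finite computation can prove convergence, of course; the snippet only illustrates how slowly the harmonic partial sums grow compared with how quickly the Basel sums settle.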
Mathematics
Sequences and series
null
1743750
https://en.wikipedia.org/wiki/Pace%20%28unit%29
Pace (unit)
A pace is a unit of length consisting either of one normal walking step (approximately 0.75 m) or of a double step, returning to the same foot (approximately 1.5 m). The normal pace length decreases with age and some health conditions. The word "pace" is also used for units inverse to speed, used mainly for walking and running, commonly minutes per kilometre. In addition, the word is used to translate similar formal units in other systems of measurement. Pacing is also used as an informal measure in surveying, with the "pace" equal to two of the surveyor's steps, reckoned through comparison with a standard rod or chain. Standardized units Like other traditional measurements, the pace started as an informal unit of length, but was later standardized, often with the specific length set according to a typical brisk or military marching stride. In the United States the pace is an uncommon customary unit of length denoting a brisk single step, equal to 30 inches (0.762 m). The Ancient Roman pace (passus) was notionally the distance of a full stride from the position of one heel where it was lifted off the ground to where it was set down again at the end of the step: two steps, one by each foot. Under Marcus Vipsanius Agrippa, it was standardized as the distance of two steps (gradus) or five Roman feet (pedes), about 1.48 m. One thousand paces were described simply as mille passus or milia passuum, now known as a Roman mile; this is the origin of the English term "mile". The Byzantine pace (βῆμα, bḗma) was an adaptation of the Roman step, a distance of 2½ Greek feet. The double pace (βῆμα διπλοῦν, bḗma diploûn), meanwhile, was similar to the Roman unit, comprising 5 Greek feet. The Welsh pace was reckoned as 3 Welsh feet of 9 inches and thus may be seen as similar to the English yard: 3 paces made up a leap and 9000 a Welsh mile.
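The "minutes per kilometre" sense of pace mentioned above is just the reciprocal of speed, so converting between the two is a one-line calculation. A minimal Python sketch (the function name is illustrative, not a standard library call):

```python
def speed_to_pace(kmh: float) -> str:
    """Convert a speed in km/h to a running pace in minutes per kilometre."""
    minutes_per_km = 60.0 / kmh            # pace is the reciprocal of speed
    minutes = int(minutes_per_km)
    seconds = round((minutes_per_km - minutes) * 60)
    if seconds == 60:                      # guard against rounding up to a full minute
        minutes, seconds = minutes + 1, 0
    return f"{minutes}:{seconds:02d} min/km"

print(speed_to_pace(12.0))   # 5:00 min/km
print(speed_to_pace(9.5))    # 6:19 min/km
```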
Physical sciences
Other
Basics and measurement
1744650
https://en.wikipedia.org/wiki/Dispersion%20%28water%20waves%29
Dispersion (water waves)
In fluid dynamics, dispersion of water waves generally refers to frequency dispersion, which means that waves of different wavelengths travel at different phase speeds. Water waves, in this context, are waves propagating on the water surface, with gravity and surface tension as the restoring forces. As a result, water with a free surface is generally considered to be a dispersive medium. For a certain water depth, surface gravity waves – i.e. waves occurring at the air–water interface and gravity as the only force restoring it to flatness – propagate faster with increasing wavelength. On the other hand, for a given (fixed) wavelength, gravity waves in deeper water have a larger phase speed than in shallower water. In contrast with the behavior of gravity waves, capillary waves (i.e. only forced by surface tension) propagate faster for shorter wavelengths. Besides frequency dispersion, water waves also exhibit amplitude dispersion. This is a nonlinear effect, by which waves of larger amplitude have a different phase speed from small-amplitude waves. Frequency dispersion for surface gravity waves This section is about frequency dispersion for waves on a fluid layer forced by gravity, and according to linear theory. For surface tension effects on frequency dispersion, see surface tension effects in Airy wave theory and capillary wave. Wave propagation and dispersion The simplest propagating wave of unchanging form is a sine wave. A sine wave with water surface elevation η(x, t) is given by: $\eta(x,t) = a \sin(\theta(x,t)),$ where a is the amplitude (in metres) and θ = θ(x, t) is the phase function (in radians), depending on the horizontal position (x, in metres) and time (t, in seconds): $\theta = 2\pi \left( \frac{x}{\lambda} - \frac{t}{T} \right) = kx - \omega t,$ with $k = \frac{2\pi}{\lambda}$ and $\omega = \frac{2\pi}{T},$ where: λ is the wavelength (in metres), T is the period (in seconds), k is the wavenumber (in radians per metre) and ω is the angular frequency (in radians per second). Characteristic phases of a water wave are: the upward zero-crossing at θ = 0, the wave crest at θ = ½π, the downward zero-crossing at θ = π and the wave trough at θ = 1½π. A certain phase repeats itself after an integer m multiple of 2π: sin(θ) = sin(θ+m•2π). Essential for water waves, and other wave phenomena in physics, is that free propagating waves of non-zero amplitude only exist when the angular frequency ω and wavenumber k (or equivalently the wavelength λ and period T) satisfy a functional relationship: the frequency dispersion relation $\omega^2 = \left[ \Omega(k) \right]^2.$ The dispersion relation has two solutions: ω = +Ω(k) and ω = −Ω(k), corresponding to waves travelling in the positive or negative x–direction. The dispersion relation will in general depend on several other parameters in addition to the wavenumber k. For gravity waves, according to linear theory, these are the acceleration by gravity g and the water depth h. The dispersion relation for these waves is: $\omega^2 = g k \tanh(kh),$ an implicit equation with tanh denoting the hyperbolic tangent function. An initial wave phase θ = θ0 propagates as a function of space and time. Its subsequent position is given by: $x = \frac{\lambda}{T} t + \frac{\theta_0 \lambda}{2\pi}.$ This shows that the phase moves with the velocity: $c_p = \frac{\lambda}{T} = \frac{\omega}{k},$ which is called the phase velocity. Phase velocity A sinusoidal wave, of small surface-elevation amplitude and with a constant wavelength, propagates with the phase velocity, also called celerity or phase speed. While the phase velocity is a vector and has an associated direction, celerity or phase speed refer only to the magnitude of the phase velocity. According to linear theory for waves forced by gravity, the phase speed depends on the wavelength and the water depth.
For a fixed water depth, long waves (with large wavelength) propagate faster than shorter waves. In the left figure, it can be seen that shallow water waves, with wavelengths λ much larger than the water depth h, travel with the phase velocity $c_p = \sqrt{gh},$ with g the acceleration by gravity and cp the phase speed. Since this shallow-water phase speed is independent of the wavelength, shallow water waves do not have frequency dispersion. Using another normalization for the same frequency dispersion relation, the figure on the right shows that for a fixed wavelength λ the phase speed cp increases with increasing water depth. Until, in deep water with water depth h larger than half the wavelength λ (so for h/λ > 0.5), the phase velocity cp is independent of the water depth: $c_p = \sqrt{\frac{g\lambda}{2\pi}} = \frac{g}{2\pi} T,$ with T the wave period (the reciprocal of the frequency f, T = 1/f). So in deep water the phase speed increases with the wavelength, and with the period. Since the phase speed satisfies cp = λ/T, wavelength and period (or frequency) are related. For instance in deep water: $\lambda = \frac{g}{2\pi} T^2.$ The dispersion characteristics for intermediate depth are given below. Group velocity Interference of two sinusoidal waves with slightly different wavelengths, but the same amplitude and propagation direction, results in a beat pattern, called a wave group. As can be seen in the animation, the group moves with a group velocity cg different from the phase velocity cp, due to frequency dispersion. The group velocity is depicted by the red lines (marked B) in the two figures above. In shallow water, the group velocity is equal to the shallow-water phase velocity. This is because shallow water waves are not dispersive. In deep water, the group velocity is equal to half the phase velocity: cg = ½ cp. The group velocity also turns out to be the energy transport velocity. This is the velocity with which the mean wave energy is transported horizontally in a narrow-band wave field. In the case of a group velocity different from the phase velocity, a consequence is that the number of waves counted in a wave group is different when counted from a snapshot in space at a certain moment, from when counted in time from the measured surface elevation at a fixed position. Consider a wave group of length Λg and group duration of τg. The group velocity is: $c_g = \frac{\Lambda_g}{\tau_g}.$ The number of waves in a wave group, measured in space at a certain moment, is: Λg / λ. While measured at a fixed location in time, the number of waves in a group is: τg / T. So the ratio of the number of waves measured in space to those measured in time is: $\frac{\Lambda_g / \lambda}{\tau_g / T} = \frac{\Lambda_g}{\tau_g} \cdot \frac{T}{\lambda} = \frac{c_g}{c_p}.$ So in deep water, with cg = ½ cp, a wave group has twice as many waves in time as it has in space. The water surface elevation η(x,t), as a function of horizontal position x and time t, for a bichromatic wave group of full modulation can be mathematically formulated as: $\eta(x,t) = a \sin(k_1 x - \omega_1 t) + a \sin(k_2 x - \omega_2 t),$ with: a the wave amplitude of each frequency component in metres, k1 and k2 the wave number of each wave component, in radians per metre, and ω1 and ω2 the angular frequency of each wave component, in radians per second. Both ω1 and k1, as well as ω2 and k2, have to satisfy the dispersion relation: $\omega_1^2 = g k_1 \tanh(k_1 h)$ and $\omega_2^2 = g k_2 \tanh(k_2 h).$ Using trigonometric identities, the surface elevation is written as: $\eta(x,t) = \left[ 2a \cos\!\left( \tfrac{k_1 - k_2}{2} x - \tfrac{\omega_1 - \omega_2}{2} t \right) \right] \sin\!\left( \tfrac{k_1 + k_2}{2} x - \tfrac{\omega_1 + \omega_2}{2} t \right).$ The part between square brackets is the slowly varying amplitude of the group, with group wave number ½(k1 − k2) and group angular frequency ½(ω1 − ω2).
As a result, the group velocity is, for the limit k1 → k2: $c_g = \lim_{k_1 \to k_2} \frac{\omega_1 - \omega_2}{k_1 - k_2} = \frac{\mathrm{d}\Omega(k)}{\mathrm{d}k}.$ Wave groups can only be discerned in case of a narrow-banded signal, with the wave-number difference k1 − k2 small compared to the mean wave number ½(k1 + k2). Multi-component wave patterns The effect of frequency dispersion is that the waves travel as a function of wavelength, so that spatial and temporal phase properties of the propagating wave are constantly changing. For example, under the action of gravity, water waves with a longer wavelength travel faster than those with a shorter wavelength. While two superimposed sinusoidal waves, called a bichromatic wave, have an envelope which travels unchanged, three or more sinusoidal wave components result in a changing pattern of the waves and their envelope. A sea state – that is: real waves on the sea or ocean – can be described as a superposition of many sinusoidal waves with different wavelengths, amplitudes, initial phases and propagation directions. Each of these components travels with its own phase velocity, in accordance with the dispersion relation. The statistics of such a surface can be described by its power spectrum. Dispersion relation In the table below, the dispersion relation ω2 = [Ω(k)]2 between angular frequency ω = 2π / T and wave number k = 2π / λ is given, as well as the phase and group speeds. Deep water corresponds with water depths larger than half the wavelength, which is the common situation in the ocean. In deep water, longer period waves propagate faster and transport their energy faster. The deep-water group velocity is half the phase velocity. In shallow water, for wavelengths larger than twenty times the water depth, as found quite often near the coast, the group velocity is equal to the phase velocity. History The full linear dispersion relation was first found by Pierre-Simon Laplace, although there were some errors in his solution for the linear wave problem. The complete theory for linear water waves, including dispersion, was derived by George Biddell Airy and published in about 1840. A similar equation was also found by Philip Kelland at around the same time (but making some mistakes in his derivation of the wave theory). The shallow water (with small h / λ) limit, ω2 = gh k2, was derived by Joseph Louis Lagrange. Surface tension effects In the case of gravity–capillary waves, where surface tension affects the waves, the dispersion relation becomes: $\omega^2 = \left( g k + \frac{\sigma}{\rho} k^3 \right) \tanh(kh),$ with σ the surface tension (in N/m) and ρ the density of the water. For a water–air interface (with σ ≈ 0.074 N/m and ρ ≈ 1000 kg/m³) the waves can be approximated as pure capillary waves – dominated by surface-tension effects – for wavelengths less than about 0.4 cm. For wavelengths above about 7 cm the waves are to good approximation pure surface gravity waves with very little surface-tension effects. Interfacial waves For two homogeneous layers of fluids, of mean thickness h below the interface and h′ above – under the action of gravity and bounded above and below by horizontal rigid walls – the dispersion relationship ω2 = Ω2(k) for gravity waves is provided by: $\omega^2 = \frac{g\,(\rho - \rho')\,k}{\rho \coth(kh) + \rho' \coth(kh')},$ where again ρ and ρ′ are the densities below and above the interface, while coth is the hyperbolic cotangent function. For the case where ρ′ is zero this reduces to the dispersion relation of surface gravity waves on water of finite depth h. When the depth of the two fluid layers becomes very large (h → ∞, h′ → ∞), the hyperbolic cotangents in the above formula approach the value of one.
Then: $\omega^2 = \frac{\rho - \rho'}{\rho + \rho'}\, g k.$ Nonlinear effects Shallow water Amplitude dispersion effects appear for instance in the solitary wave: a single hump of water traveling with constant velocity in shallow water with a horizontal bed. Note that solitary waves are near-solitons, but not exactly – after the interaction of two (colliding or overtaking) solitary waves, they have changed a bit in amplitude and an oscillatory residual is left behind. The single soliton solution of the Korteweg–de Vries equation, of wave height H in water depth h far away from the wave crest, travels with the velocity: $c = \sqrt{g\,(h + H)}.$ So for this nonlinear gravity wave it is the total water depth under the wave crest that determines the speed, with higher waves traveling faster than lower waves. Note that solitary wave solutions only exist for positive values of H; solitary gravity waves of depression do not exist. Deep water The linear dispersion relation – unaffected by wave amplitude – is for nonlinear waves also correct at the second order of the perturbation theory expansion, with the orders in terms of the wave steepness ka (where a is the wave amplitude). To the third order, and for deep water, the dispersion relation is $\omega^2 = g k \left( 1 + (ka)^2 \right),$ so $\omega \approx \sqrt{gk} \left( 1 + \tfrac{1}{2} (ka)^2 \right).$ This implies that large waves travel faster than small ones of the same frequency. This is only noticeable when the wave steepness ka is large. Waves on a mean current: Doppler shift Water waves on a mean flow (so a wave in a moving medium) experience a Doppler shift. Suppose the dispersion relation for a non-moving medium is: $\omega^2 = \Omega^2(k),$ with k the wavenumber. Then for a medium with mean velocity vector V, the dispersion relationship with Doppler shift becomes: $\omega = \Omega(k) + \mathbf{k} \cdot \mathbf{V},$ where k is the wavenumber vector, related to k as: k = |k|. The dot product k•V is equal to: k•V = kV cos α, with V the length of the mean velocity vector V: V = |V|. And α the angle between the wave propagation direction and the mean flow direction. For waves and current in the same direction, k•V = kV.
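Because ω² = gk tanh(kh) has no closed-form inverse for k, the wavenumber for a given period and depth is in practice found numerically. A minimal Python sketch (function names are this example's own), using Newton iteration and the standard linear-theory group-speed expression cg = ½cp(1 + 2kh/sinh 2kh):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wavenumber(T, h, tol=1e-12):
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h) for k,
    given wave period T (s) and water depth h (m), by Newton iteration."""
    omega = 2.0 * math.pi / T
    k = omega * omega / G                      # deep-water first guess: omega^2 = g*k
    for _ in range(100):
        f = G * k * math.tanh(k * h) - omega * omega
        df = G * math.tanh(k * h) + G * k * h / math.cosh(k * h) ** 2
        k_next = k - f / df
        if abs(k_next - k) < tol:
            return k_next
        k = k_next
    return k

T, h = 8.0, 10.0                               # an 8 s swell in 10 m of water
k = wavenumber(T, h)
cp = (2.0 * math.pi / T) / k                   # phase speed: omega / k
cg = 0.5 * cp * (1.0 + 2.0 * k * h / math.sinh(2.0 * k * h))  # group speed
print(f"wavelength {2 * math.pi / k:.1f} m, cp {cp:.2f} m/s, cg {cg:.2f} m/s")
```

The group-speed factor reduces to the limits described above: it tends to ½cp in deep water (kh large) and to cp in shallow water (kh small).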
Physical sciences
Waves
Physics
1744868
https://en.wikipedia.org/wiki/Damping
Damping
In physical systems, damping is the loss of energy of an oscillating system by dissipation. Damping is an influence within or upon an oscillatory system that has the effect of reducing or preventing its oscillation. Examples of damping include viscous damping in a fluid (see viscous drag), surface friction, radiation, resistance in electronic oscillators, and absorption and scattering of light in optical oscillators. Damping not based on energy loss can be important in other oscillating systems, such as those that occur in biological systems and bicycles (e.g. suspension (mechanics)). Damping is not to be confused with friction, which is a type of dissipative force acting on a system. Friction can cause or be a factor of damping. The damping ratio is a dimensionless measure describing how oscillations in a system decay after a disturbance. Many systems exhibit oscillatory behavior when they are disturbed from their position of static equilibrium. A mass suspended from a spring, for example, might, if pulled and released, bounce up and down. On each bounce, the system tends to return to its equilibrium position, but overshoots it. Sometimes losses (e.g. frictional) damp the system and can cause the oscillations to gradually decay in amplitude towards zero, i.e. attenuate. The damping ratio is a measure describing how rapidly the oscillations decay from one bounce to the next. The damping ratio is a system parameter, denoted by ζ ("zeta"), that can vary from undamped (ζ = 0), underdamped (ζ < 1) through critically damped (ζ = 1) to overdamped (ζ > 1). The behaviour of oscillating systems is often of interest in a diverse range of disciplines that include control engineering, chemical engineering, mechanical engineering, structural engineering, and electrical engineering. The physical quantity that is oscillating varies greatly, and could be the swaying of a tall building in the wind, or the speed of an electric motor, but a normalised, or non-dimensionalised approach can be convenient in describing common aspects of behavior. Oscillation cases Depending on the amount of damping present, a system exhibits different oscillatory behaviors and speeds. Where the spring–mass system is completely lossless, the mass would oscillate indefinitely, with each bounce of equal height to the last. This hypothetical case is called undamped. If the system contained high losses, for example if the spring–mass experiment were conducted in a viscous fluid, the mass could slowly return to its rest position without ever overshooting. This case is called overdamped. Commonly, the mass tends to overshoot its starting position, and then return, overshooting again. With each overshoot, some energy in the system is dissipated, and the oscillations die towards zero. This case is called underdamped. Between the overdamped and underdamped cases, there exists a certain level of damping at which the system will just fail to overshoot and will not make a single oscillation. This case is called critical damping. The key difference between critical damping and overdamping is that, in critical damping, the system returns to equilibrium in the minimum amount of time. Damped sine wave A damped sine wave or damped sinusoid is a sinusoidal function whose amplitude approaches zero as time increases. It corresponds to the underdamped case of damped second-order systems, or underdamped second-order differential equations.
Damped sine waves are commonly seen in science and engineering, wherever a harmonic oscillator is losing energy faster than it is being supplied. A true sine wave starting at time = 0 begins at the origin (amplitude = 0). A cosine wave begins at its maximum value due to its phase difference from the sine wave. A given sinusoidal waveform may be of intermediate phase, having both sine and cosine components. The term "damped sine wave" describes all such damped waveforms, whatever their initial phase. The most common form of damping, which is usually assumed, is the form found in linear systems. This form is exponential damping, in which the outer envelope of the successive peaks is an exponential decay curve. That is, when you connect the maximum point of each successive curve, the result resembles an exponential decay function. The general equation for an exponentially damped sinusoid may be represented as: $y(t) = A e^{-\lambda t} \cos(\omega t - \varphi),$ where: y(t) is the instantaneous amplitude at time t; A is the initial amplitude of the envelope; λ is the decay rate, in the reciprocal of the time units of the independent variable t; φ is the phase angle at t = 0; ω is the angular frequency. Other important parameters include: Frequency: f = ω/(2π), the number of cycles per time unit. It is expressed in inverse time units, or hertz. Time constant: τ = 1/λ, the time for the amplitude to decrease by the factor of e. Half-life is the time it takes for the exponential amplitude envelope to decrease by a factor of 2. It is equal to ln(2)/λ, which is approximately 0.693/λ. Damping ratio: ζ is a non-dimensional characterization of the decay rate relative to the frequency, approximately ζ ≈ λ/ω, or exactly ζ = λ/√(λ² + ω²). Q factor: Q = 1/(2ζ) is another non-dimensional characterization of the amount of damping; high Q indicates slow damping relative to the oscillation. Damping ratio definition The damping ratio is a parameter, usually denoted by ζ (Greek letter zeta), that characterizes the frequency response of a second-order ordinary differential equation. It is particularly important in the study of control theory. It is also important in the harmonic oscillator. In general, systems with higher damping ratios (one or greater) will demonstrate more of a damping effect. Underdamped systems have a value of ζ less than one. Critically damped systems have a damping ratio of exactly 1, or at least very close to it. The damping ratio provides a mathematical means of expressing the level of damping in a system relative to critical damping. For a damped harmonic oscillator with mass m, damping coefficient c, and spring constant k, it can be defined as the ratio of the damping coefficient in the system's differential equation to the critical damping coefficient: $\zeta = \frac{c}{c_c},$ where the system's equation of motion is $m \ddot{x} + c \dot{x} + k x = 0,$ and the corresponding critical damping coefficient is $c_c = 2\sqrt{km}$ or $c_c = 2 m \omega_n,$ where $\omega_n = \sqrt{k/m}$ is the natural frequency of the system. The damping ratio is dimensionless, being the ratio of two coefficients of identical units. Derivation Using the natural frequency of a harmonic oscillator $\omega_n = \sqrt{k/m}$ and the definition of the damping ratio above, we can rewrite this as: $\ddot{x} + 2 \zeta \omega_n \dot{x} + \omega_n^2 x = 0.$ This equation is more general than just the mass–spring system, and also applies to electrical circuits and to other domains.
It can be solved with the approach $x(t) = C e^{st},$ where C and s are both complex constants, with s satisfying $s^2 + 2 \zeta \omega_n s + \omega_n^2 = 0.$ Two such solutions, for the two values of s satisfying the equation, can be combined to make the general real solutions, with oscillatory and decaying properties in several regimes: Undamped The case ζ = 0 corresponds to the undamped simple harmonic oscillator, and in that case the solution looks like $\exp(\pm i \omega_n t)$, as expected. This case is extremely rare in the natural world, with the closest examples being cases where friction was purposefully reduced to minimal values. Underdamped If s is a pair of complex values, then each complex solution term is a decaying exponential combined with an oscillatory portion that looks like $\exp\!\left(\pm i \omega_n \sqrt{1 - \zeta^2}\, t\right)$. This case occurs for ζ < 1, and is referred to as underdamped (e.g., a bungee cable). Overdamped If s is a pair of real values, then the solution is simply a sum of two decaying exponentials with no oscillation. This case occurs for ζ > 1, and is referred to as overdamped. Situations where overdamping is practical tend to be those in which overshooting would have tragic outcomes, usually electrical rather than mechanical. For example, in landing a plane on autopilot: if the system overshoots and releases the landing gear too late, the outcome would be a disaster. Critically damped The case ζ = 1 is the border between the overdamped and underdamped cases, and is referred to as critically damped. This turns out to be a desirable outcome in many cases where engineering design of a damped oscillator is required (e.g., a door-closing mechanism). Q factor and decay rate The Q factor, damping ratio ζ, and exponential decay rate α are related such that $\zeta = \frac{1}{2Q} = \frac{\alpha}{\omega_n}.$ When a second-order system has ζ < 1 (that is, when the system is underdamped), it has two complex conjugate poles that each have a real part of −α; that is, the decay rate parameter α represents the rate of exponential decay of the oscillations. A lower damping ratio implies a lower decay rate, and so very underdamped systems oscillate for long times. For example, a high-quality tuning fork, which has a very low damping ratio, has an oscillation that lasts a long time, decaying very slowly after being struck by a hammer. Logarithmic decrement For underdamped vibrations, the damping ratio is also related to the logarithmic decrement δ. The damping ratio can be found for any two peaks, even if they are not adjacent. For adjacent peaks: $\zeta = \frac{\delta}{\sqrt{\delta^2 + 4\pi^2}},$ where $\delta = \ln \frac{x_0}{x_1},$ and x0 and x1 are amplitudes of any two successive peaks. As shown in the right figure: $\delta = \ln \frac{x_1 - x_2}{x_3 - x_4},$ where x1, x3 are amplitudes of two successive positive peaks and x2, x4 are amplitudes of two successive negative peaks. Percentage overshoot In control theory, overshoot refers to an output exceeding its final, steady-state value. For a step input, the percentage overshoot (PO) is the maximum value minus the step value divided by the step value. In the case of the unit step, the overshoot is just the maximum value of the step response minus one. The percentage overshoot (PO) is related to damping ratio (ζ) by: $\mathrm{PO} = 100\% \cdot \exp\!\left( \frac{-\zeta \pi}{\sqrt{1 - \zeta^2}} \right).$ Conversely, the damping ratio (ζ) that yields a given percentage overshoot is given by: $\zeta = \frac{-\ln(\mathrm{PO}/100\%)}{\sqrt{\pi^2 + \ln^2(\mathrm{PO}/100\%)}}.$ Examples and applications Viscous drag When an object is falling through the air, the only force opposing its freefall is air resistance. An object falling through water or oil would slow down at a greater rate, until eventually reaching a steady-state velocity as the drag force comes into equilibrium with the force from gravity. This is the concept of viscous drag, which for example is applied in automatic doors or anti-slam doors.
Damping in electrical systems Electrical systems that operate with alternating current (AC) use resistors to damp LC resonant circuits. Magnetic damping and magnetorheological damping Kinetic energy that causes oscillations is dissipated as heat by electric eddy currents, which are induced by passing through a magnet's poles, either by a coil or an aluminum plate. Eddy currents are a key component of electromagnetic induction, where they set up a magnetic flux directly opposing the oscillating movement, creating a resistive force. In other words, the resistance caused by magnetic forces slows a system down. An example of this concept being applied is the brakes on roller coasters. Magnetorheological dampers (MR dampers) use magnetorheological fluid, which changes viscosity when subjected to a magnetic field. In this case, magnetorheological damping may be considered an interdisciplinary form of damping with both viscous and magnetic damping mechanisms.
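As a numerical check on the logarithmic-decrement relations above, here is a minimal Python sketch (self-contained, with synthetic peak values rather than measured data) that recovers ζ from two successive peak amplitudes:

```python
import math

def damping_ratio_from_peaks(x0: float, x1: float) -> float:
    """Estimate zeta from two successive peak amplitudes of a decaying oscillation."""
    delta = math.log(x0 / x1)                   # logarithmic decrement
    return delta / math.sqrt(delta ** 2 + 4 * math.pi ** 2)

# Synthetic example: zeta = 0.1, natural frequency 2*pi rad/s.
zeta, wn = 0.1, 2 * math.pi
wd = wn * math.sqrt(1 - zeta ** 2)              # damped angular frequency
Td = 2 * math.pi / wd                           # damped period
x0 = 1.0                                        # first peak amplitude
x1 = x0 * math.exp(-zeta * wn * Td)             # next peak, one period later
print(damping_ratio_from_peaks(x0, x1))         # ~0.1, recovering the input zeta
```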
Physical sciences
Classical mechanics
Physics
5932107
https://en.wikipedia.org/wiki/Ulmus%20davidiana
Ulmus davidiana
Ulmus davidiana, also known as the David elm or Father David elm (named after the botanist Armand David, who collected specimens), is a small deciduous tree widely distributed across China, Mongolia, Korea, Siberia, and Japan, where it is found in wetlands along streams at elevations of 2000–2300 m (6,500–7,500 ft). The tree was first described in 1873 from the hills north of Beijing, China. Classification Two varieties of Ulmus davidiana are recognized: var. davidiana, occurring only in China, and var. japonica Rehder, the more widely ranging Japanese elm. Some authorities, however, do not consider japonica to be a variety of U. davidiana; The Illustrated Flora of the Primorsky Territory, Russian Far East (2019), for example, maintains U. japonica as a species. In 1916 the Arnold Arboretum described the two as different species. Harold Hillier originally (1973) listed and described japonica as a variety of U. davidiana (Hillier's Manual of Trees & Shrubs, 3rd ed., p. 399, 1973; David & Charles, Newton Abbot, UK), but Hillier's separated the two as distinct species in later editions of their Manual of Trees and Shrubs. Description Ulmus davidiana is considered to have a remarkable resemblance to the American elm (U. americana) in all but ultimate size. The tree grows to a maximum height of 15 m (50 ft), with a relatively slender trunk < 0.3 m (1 ft) d.b.h. supporting a dense canopy casting a heavy shade. Its bark remains smooth for a comparatively long time, before becoming longitudinally fissured. The leaves are obovate to obovate-elliptic, < 10 cm (4 in) × < 5 cm (2 in), with a petiole of about 10 mm according to the protologue (the original description); the Flora of China description gives a petiole range of 5–10 mm with an extreme of 17 mm, while Rehder described the petiole simply as 'short'; the upper surface is rough. The perfect, wind-pollinated, apetalous flowers are produced on second-year shoots in March, followed by obovate samarae < 19 mm (3/4 in) long × < 14 mm (1/2 in) wide. Pests and diseases Evaluated with other Chinese elms at the Morton Arboretum in Illinois, the tree was found to have a good resistance to Dutch elm disease (DED). The species is reputed to have a good resistance to the elm leaf beetle Xanthogaleruca luteola, elm yellows (elm phloem necrosis), and leafminers in the US. Cultivation The tree was briefly propagated and marketed by the Hillier & Sons nursery, Winchester, Hampshire, from 1971 to 1977, during which time only four were sold (Hillier & Sons sales inventory 1962 to 1977, unpublished). There are no known cultivars of this taxon, nor is it known to be in commerce beyond the United States. American testing The David elm has shown some promise as a result of testing at the Ohio State University (OSU) in Ohio (Struve, D. K. and Rhodus, T. (1990). Turning copper into gold. Amer. Nurseryman, 172: 114–123). At OSU, the plants were cultivated in copper-lined pots and planted in a wide lawn under a powerline and in small home lawns. The tree's performance has been mixed, but shows potential. Some specimens did extremely well, while others struggled. The tree seems to perform well on disturbed sites and in calcareous (alkaline) soils, and also seems to have a better tolerance for wet soil than the literature has indicated. A number of strong saplings were cultivated that show promise.
Some saplings underwent judicious pruning early on to maximize the structural stability of the plant (pruning can help the plant develop a more structurally stable branching pattern), and blue-colored tree shelters were used on some plants until the stem reached a diameter of 25–37 mm. Additional observation showed that at least 50% of emerging leaves on the trees survived a hard freeze that lasted 5 days during April 2007. Leaves were approximately 70% emerged when temperatures fell to −6°C (21°F). Temperatures fell below freezing for 5 days (April 4–8, 2007). Notable trees The UK TROBI Champion is a relatively young tree at White House Farm, Ivy Hatch, Kent, measuring 5 m high by 17 cm d.b.h. in 2009. Etymology The tree is named for Father Armand David, the French missionary and naturalist who introduced the tree to France in the 19th century. Accessions North America Arnold Arboretum, US. Acc. nos. 5957 (wild collected), 785-80 (cult. from wild material). Brenton Arboretum, US. No details available. Chicago Botanic Garden, US. Five trees, no other details available. Dawes Arboretum, Newark, US. Two trees, Acc. nos. D1997-0455.001 & D2006-0032.001. Denver Botanic Gardens, US. Acc. no. 950870. No details available. Holden Arboretum, US. Acc. no. 00-318, three specimens wild collected. Morton Arboretum, US. Acc. no. 427-84. United States National Arboretum, Washington, D.C., US. Acc. no. 64463. Europe Arboretum Poort Bulten, Losser, The Netherlands. Acc. no. LOS0243. Brighton & Hove City Council, UK. NCCPG Elm Collection. Grange Farm Arboretum, Lincolnshire, UK. Acc. no. 510. Hortus Botanicus Nationalis, Salaspils, Latvia. Acc. no. 18095. Linnaean Gardens of Uppsala, Sweden. Acc. no. 2001-1659, obtained from South Korea. Oxford University Botanic Garden, UK. Acc. no. 0004891. Royal Botanic Garden Edinburgh, UK. Acc. no. 20030905, grown from seed wild collected in Korea. Sir Harold Hillier Gardens, Hampshire, UK. Acc. nos. 2000.0018, 2004.0514. Wijdemeren City Council, Netherlands. Elm collection. Planted Smeerdijkgaarde, Kortenhoef, 2013.
Biology and health sciences
Rosales
Plants
4499533
https://en.wikipedia.org/wiki/Type-II%20superconductor
Type-II superconductor
In superconductivity, a type-II superconductor is a superconductor that exhibits an intermediate phase of mixed ordinary and superconducting properties at intermediate temperature and fields above the superconducting phases. It also features the formation of magnetic field vortices with an applied external magnetic field. This occurs above a certain critical field strength Hc1. The vortex density increases with increasing field strength. At a higher critical field Hc2, superconductivity is destroyed. Type-II superconductors do not exhibit a complete Meissner effect. History In 1935, J. N. Rjabinin and Lev Shubnikov experimentally discovered type-II superconductors. In 1950, the theory of the two types of superconductors was further developed by Lev Landau and Vitaly Ginzburg in their paper on Ginzburg–Landau theory. In their argument, a type-I superconductor had positive free energy of the superconductor–normal metal boundary. Ginzburg and Landau pointed out the possibility of type-II superconductors, which should form an inhomogeneous state in strong magnetic fields. However, at that time, all known superconductors were type-I, and they commented that there was no experimental motivation to consider the precise structure of the type-II superconducting state. The theory for the behavior of the type-II superconducting state in a magnetic field was greatly improved by Alexei Alexeyevich Abrikosov, who elaborated on the ideas of Lars Onsager and Richard Feynman about quantum vortices in superfluids. The quantum vortex solution in a superconductor is also very closely related to Fritz London's work on magnetic flux quantization in superconductors. The Nobel Prize in Physics was awarded for the theory of type-II superconductivity in 2003. Vortex state Ginzburg–Landau theory introduced the superconducting coherence length ξ in addition to the London magnetic field penetration depth λ. According to Ginzburg–Landau theory, a type-II superconductor is one for which the Ginzburg–Landau parameter satisfies κ = λ/ξ > 1/√2. Ginzburg and Landau showed that this leads to negative energy of the interface between superconducting and normal phases. The existence of the negative interface energy was also known since the mid-1930s from the early works by the London brothers. A negative interface energy suggests that the system should be unstable against maximizing the number of such interfaces. This instability was not observed until the experiments of Shubnikov in 1936, in which two critical fields were found. In 1952 an observation of type-II superconductivity was also reported by Zavaritskii. Fritz London demonstrated that a magnetic flux can penetrate a superconductor via a topological defect that has integer phase winding and carries quantized magnetic flux. Onsager and Feynman demonstrated that quantum vortices should form in superfluids. A 1957 paper by A. A. Abrikosov generalizes these ideas. In the limit of very short coherence length the vortex solution is identical to London's fluxoid, where the vortex core is approximated by a sharp cutoff rather than a gradual vanishing of the superconducting condensate near the vortex center. Abrikosov found that the vortices arrange themselves into a regular array known as a vortex lattice. Near the so-called upper critical magnetic field, the problem of a superconductor in an external field is equivalent to the problem of the vortex state in a rotating superfluid, discussed by Lars Onsager and Richard Feynman. Flux pinning In the vortex state, a phenomenon known as flux pinning becomes possible.
This is not possible with type-I superconductors, since they cannot be penetrated by magnetic fields. If a superconductor is cooled in a field, the field can be trapped, which can allow the superconductor to be suspended over a magnet, with the potential for a frictionless joint or bearing. The value of flux pinning is seen in many implementations, such as magnetic levitation, frictionless joints, and transportation. The thinner the superconducting layer, the stronger the pinning that occurs when it is exposed to magnetic fields. Materials Type-II superconductors are usually made of metal alloys or complex oxide ceramics. All high-temperature superconductors are type-II superconductors. While most elemental superconductors are type-I, niobium, vanadium, and technetium are elemental type-II superconductors. Boron-doped diamond and silicon are also type-II superconductors. Metal alloy superconductors can also exhibit type-II behavior (e.g., niobium–titanium, one of the most common superconductors in applied superconductivity), as can intermetallic compounds like niobium–tin. Other type-II examples are the cuprate–perovskite ceramic materials which have achieved the highest superconducting critical temperatures. These include La1.85Ba0.15CuO4, BSCCO, and YBCO (yttrium–barium–copper oxide), which is famous as the first material to achieve superconductivity above the boiling point of liquid nitrogen (77 K). Due to strong vortex pinning, the cuprates are close to ideally hard superconductors. Important uses Strong superconducting electromagnets (used in MRI scanners, NMR machines, and particle accelerators) often use coils wound of niobium–titanium wires or, for higher fields, niobium–tin wires. These materials are type-II superconductors with a substantial upper critical field Hc2, and in contrast to, for example, the cuprate superconductors with even higher Hc2, they can be easily machined into wires. Recently, however, second-generation superconducting tapes have begun to allow the replacement of cheaper niobium-based wires with tapes that are much more expensive, but superconductive at much higher temperatures and magnetic fields.
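The vortex-state criterion mentioned above (κ = λ/ξ > 1/√2) is easy to apply mechanically. A minimal Python sketch, in which the material numbers are rough, illustrative literature values rather than authoritative data:

```python
import math

def superconductor_type(penetration_depth_nm: float, coherence_length_nm: float) -> str:
    """Classify a superconductor by its Ginzburg-Landau parameter kappa = lambda/xi."""
    kappa = penetration_depth_nm / coherence_length_nm
    threshold = 1 / math.sqrt(2)   # type-II boundary from Ginzburg-Landau theory
    return f"kappa = {kappa:.3f}: " + ("type-II" if kappa > threshold else "type-I")

# Illustrative ballpark values in nm: (penetration depth, coherence length)
print(superconductor_type(39, 38))      # niobium-like, kappa ~ 1  -> type-II
print(superconductor_type(16, 1600))    # aluminium-like, kappa << 1 -> type-I
```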
Physical sciences
Electrical circuits
Physics
4500678
https://en.wikipedia.org/wiki/River%20morphology
River morphology
The terms river morphology and its synonym stream morphology are used to describe the shapes of river channels and how they change in shape and direction over time. The morphology of a river channel is a function of a number of processes and environmental conditions, including: the composition and erodibility of the bed and banks (e.g., sand, clay, bedrock); the power and consistency of the current, whose erosive action can affect the formation of the river's path; vegetation and the rate of plant growth; the availability of sediment; the size and composition of the sediment moving through the channel; the rate of sediment transport through the channel and the rate of deposition on the floodplain, banks, bars, and bed; and regional aggradation or degradation due to subsidence or uplift. River morphology can also be affected by human interaction, with the river changing its course in response to the new factor. An example of human-induced change in river morphology is dam construction, which alters the ebb and flow of fluvial water and sediment, thereby creating or shrinking estuarine channels. A river regime is a dynamic equilibrium system, which provides a way of classifying rivers into different categories. The four categories of river regimes are sinuous canaliform rivers, sinuous point bar rivers, sinuous braided rivers, and non-sinuous braided rivers. The study of river morphology is accomplished within the scientific field of fluvial geomorphology.
Physical sciences
Fluvial landforms
Earth science
19133401
https://en.wikipedia.org/wiki/Google%20Chrome
Google Chrome
Google Chrome is a web browser developed by Google. It was first released in 2008 for Microsoft Windows, built with free software components from Apple WebKit and Mozilla Firefox. Versions were later released for Linux, macOS, iOS, iPadOS, and also for Android, where it is the default browser. The browser is also the main component of ChromeOS, where it serves as the platform for web applications. Most of Chrome's source code comes from Google's free and open-source software project Chromium, but Chrome is licensed as proprietary freeware. WebKit was the original rendering engine, but Google eventually forked it to create the Blink engine; all Chrome variants except iOS used Blink as of 2017. StatCounter estimates that Chrome has a 65% worldwide browser market share (after peaking at 72.38% in November 2018) on personal computers (PC), is most used on tablets (having surpassed Safari), and is also dominant on smartphones. With a market share of 65% across all platforms combined, Chrome is the most used web browser in the world today. Google chief executive Eric Schmidt was previously involved in the "browser wars", a part of U.S. corporate history, and opposed the expansion of the company into such a new area. However, Google co-founders Sergey Brin and Larry Page spearheaded a software demonstration that pushed Schmidt into making Chrome a core business priority, which resulted in commercial success. Because of the proliferation of Chrome, Google has expanded the "Chrome" brand name to other products. These include not just ChromeOS but also Chromecast, Chromebook, Chromebit, Chromebox, and Chromebase. History Google chief executive Eric Schmidt opposed the development of an independent web browser for six years. He stated that "at the time, Google was a small company", and he did not want to go through "bruising browser wars". Company co-founders Sergey Brin and Larry Page hired several Mozilla Firefox developers and built a demonstration of Chrome. Afterwards, Schmidt said, "It was so good that it essentially forced me to change my mind." In September 2004, rumors of Google building a web browser first appeared. Online journals and U.S. newspapers stated at the time that Google was hiring former Microsoft web developers among others. It also came shortly after the release of Mozilla Firefox 1.0, which was surging in popularity and taking market share from Internet Explorer, which had noted security problems. Chrome is based on the open-source code of the Chromium project. Development of the browser began in 2006, spearheaded by Sundar Pichai. Chrome was "largely developed" in Google's Kitchener office. Announcement The release announcement was originally scheduled for September 3, 2008, and a comic by Scott McCloud was to be sent to journalists and bloggers explaining the features within the new browser. Copies intended for Europe were shipped early, and German blogger Philipp Lenssen of Google Blogoscoped made a scanned copy of the 38-page comic available on his website after receiving it on September 1, 2008. Google subsequently made the comic available on Google Books, and mentioned it on their official blog along with an explanation for the early release. The product was named "Chrome" as an initial development project code name, because it is associated with fast cars and speed. Google kept the development project name as the final release name, as a "cheeky" or ironic moniker, as one of the main aims was to minimize the user interface chrome.
Public release The browser was first publicly released, officially as a beta version, on September 2, 2008, for Windows XP and newer, and with support for 43 languages, and later as a "stable" public release on December 11, 2008. On that same day, a CNET news item drew attention to a passage in the Terms of Service statement for the initial beta release, which seemed to grant to Google a license to all content transferred via the Chrome browser. This passage was inherited from the general Google terms of service. Google responded to this criticism immediately by stating that the language used was borrowed from other products, and removed this passage from the Terms of Service. Chrome quickly gained about 1% usage share. After the initial surge, usage share dropped until it hit a low of 0.69% in October 2008. It then started rising again and by December 2008, Chrome again passed the 1% threshold. In early January 2009, CNET reported that Google planned to release versions of Chrome for macOS and Linux in the first half of the year. The first official macOS and Linux developer previews of Chrome were announced on June 4, 2009, with a blog post saying they were missing many features and were intended for early feedback rather than general use. In December 2009, Google released beta versions of Chrome for macOS and Linux. Google Chrome 5.0, announced on May 25, 2010, was the first stable release to support all three platforms. Chrome was one of the twelve browsers offered on BrowserChoice.eu to European Economic Area users of Microsoft Windows in 2010. Development Chrome was assembled from 25 different code libraries from Google and third parties such as Mozilla's Netscape Portable Runtime, Network Security Services, NPAPI (dropped as of version 45), Skia Graphics Engine, SQLite, and a number of other open-source projects. The V8 JavaScript virtual machine was considered a sufficiently important project to be split off (as was Adobe/Mozilla's Tamarin) and handled by a separate team in Denmark coordinated by Lars Bak. According to Google, existing implementations were designed "for small programs, where the performance and interactivity of the system weren't that important", but web applications such as Gmail "are using the web browser to the fullest when it comes to DOM manipulations and JavaScript", and therefore would significantly benefit from a JavaScript engine that could work faster. Chrome initially used the WebKit rendering engine to display web pages. In 2013, they forked the WebCore component to create their own layout engine Blink. Based on WebKit, Blink only uses WebKit's "WebCore" components, while substituting other components, such as its own multi-process architecture, in place of WebKit's native implementation. Chrome is internally tested with unit testing, automated testing of scripted user actions, fuzz testing, as well as WebKit's layout tests (99% of which Chrome is claimed to have passed), and against commonly accessed websites inside the Google index within 20–30 minutes. Google created Gears for Chrome, which added features for web developers typically relating to the building of web applications, including offline support. Google phased out Gears as the same functionality became available in the HTML5 standards. In March 2011, Google introduced a new simplified logo to replace the previous 3D logo that had been used since the project's inception. 
Google designer Steve Rura explained the company's reasoning for the change: "Since Chrome is all about making your web experience as easy and clutter-free as possible, we refreshed the Chrome icon to better represent these sentiments. A simpler icon embodies the Chrome spirit: to make the web quicker, lighter, and easier for all." On January 11, 2011, the Chrome product manager, Mike Jazayeri, announced that Chrome would remove H.264 video codec support for its HTML5 player, citing the desire to bring Google Chrome more in line with the currently available open codecs in the Chromium project, which Chrome is based on. Despite this, on November 6, 2012, Google released a version of Chrome on Windows which added hardware-accelerated H.264 video decoding. In October 2013, Cisco announced that it was open-sourcing its H.264 codecs and would cover all fees required. On February 7, 2012, Google launched Google Chrome Beta for Android 4.0 devices. On many new devices with Android 4.1 or later preinstalled, Chrome is the default browser. In May 2017, Google announced a version of Chrome for augmented reality and virtual reality devices. Features Google Chrome features a minimalistic user interface, with its user-interface principles later being implemented into other browsers — for example, the merging of the address bar and search bar into the omnibox (or omnibar). Chrome also has a reputation for strong browser performance. Web standards support The first release of Google Chrome passed both the Acid1 and Acid2 tests. Beginning with version 4.0, Chrome has passed all aspects of the Acid3 test. Chrome has very good support for JavaScript/ECMAScript according to Ecma International's ECMAScript standards conformance Test 262 (version ES5.1, May 18, 2012). This test reports as the final score the number of tests a browser failed; hence lower scores are better. In this test, Chrome version 37 scored 10 failed/11,578 passed. For comparison, Firefox 19 scored 193 failed/11,752 passed and Internet Explorer 9 had a score of 600+ failed, while Internet Explorer 10 had a score of 7 failed. In 2011, on the official CSS 2.1 test suite by standardization organization W3C, WebKit, the Chrome rendering engine, passed 89.75% (89.38% out of 99.59% covered) of CSS 2.1 tests. On the HTML5 web standards test, Chrome 41 scored 518 out of 555 points, placing it ahead of the five most popular desktop browsers. Chrome 41 on Android scored 510 out of 555 points. Chrome 44 scored 526, only 29 points less than the maximum score. User interface By default, the main user interface includes back, forward, refresh/cancel and menu buttons. A home button is not shown by default, but can be added through the Settings page to take the user to the new tab page or a custom home page. Tabs are the main component of Chrome's user interface and have been moved to the top of the window rather than below the controls. This subtle change contrasts with many existing tabbed browsers which are based on windows and contain tabs. Tabs, with their state, can be transferred between window containers by dragging. Each tab has its own set of controls, including the Omnibox. The Omnibox is a URL box that combines the functions of both the address bar and search box. If a user enters the URL of a site previously searched from, Chrome allows pressing Tab to search the site again directly from the Omnibox.
When a user starts typing in the Omnibox, Chrome provides suggestions for previously visited sites (based on the URL or in-page text), popular websites (not necessarily visited before, powered by Google Instant), and popular searches. Although Instant can be turned off, suggestions based on previously visited sites cannot be turned off. Chrome will also autocomplete the URLs of sites visited often. If a user types keywords into the Omnibox that do not match any previously visited websites and presses enter, Chrome will conduct the search using the default search engine. One of Chrome's differentiating features is the New Tab Page, which can replace the browser home page and is displayed when a new tab is created. Originally, this showed thumbnails of the nine most visited websites, along with frequent searches, recent bookmarks, and recently closed tabs, similar to Internet Explorer and Firefox with Google Toolbar, or Opera's Speed Dial. In Google Chrome 2.0, the New Tab Page was updated to allow users to hide thumbnails they did not want to appear. Starting in version 3.0, the New Tab Page was revamped to display thumbnails of the eight most visited websites. The thumbnails could be rearranged, pinned, and removed. Alternatively, a list of text links could be displayed instead of thumbnails. It also features a "Recently closed" bar that shows recently closed tabs and a "tips" section that displays hints and tricks for using the browser. Starting with Google Chrome 3.0, users can install themes to alter the appearance of the browser. Many free third-party themes are provided in an online gallery, accessible through a "Get themes" button in Chrome's options. Chrome includes a bookmarks submenu that lists the user's bookmarks, provides easy access to Chrome's Bookmark Manager, and allows the user to toggle a bookmarks bar on or off. On January 2, 2019, Google introduced a native dark theme for Chrome on Windows 10. In 2023, it was announced that Chrome would be completely revamped using Google's Material You design language; the revamp would include more rounded corners, Chrome colors being swapped out for a dynamic color system similar to the one introduced in Android 12, a revamped address bar, new icons and tabs, and a more simplified three-dot menu. Built-in tools Starting with Google Chrome 4.1, the application added a built-in translation bar using Google Translate. Language translation is currently available for 52 languages. When Chrome detects a foreign language other than the user's preferred language set at installation time, it asks the user whether or not to translate. Chrome allows users to synchronize their bookmarks, history, and settings across all devices with the browser installed by sending and receiving data through a chosen Google Account, which in turn updates all signed-in instances of Chrome. This can be authenticated either through Google credentials or a sync passphrase. For web developers, Chrome has an element inspector which allows users to look into the DOM and see what makes up the webpage. Chrome has special URLs that load application-specific pages instead of websites or files on disk. Chrome also has a built-in ability to enable experimental features. Originally called about:labs, the address was changed to about:flags to make it less obvious to casual users. The desktop edition of Chrome is able to save pages as HTML with assets in a "_files" subfolder, or as an unprocessed HTML-only document. It also offers an option to save in the MHTML format.
Desktop shortcuts and apps Chrome allows users to make local desktop shortcuts that open web applications in the browser. The browser, when opened in this way, contains none of the regular interface except for the title bar, so as not to "interrupt anything the user is trying to do". This allows web applications to run alongside local software (similar to Mozilla Prism and Fluid). This feature, according to Google, would be enhanced with the Chrome Web Store, a one-stop web-based web applications directory which opened in December 2010. In September 2013, Google started making Chrome apps "For your desktop". This meant offline access, desktop shortcuts, and less dependence on Chrome—apps launch in a window separate from Chrome, and look more like native applications. Chrome Web Store Announced on December 7, 2010, the Chrome Web Store allows users to install web applications as extensions to the browser; although most of these extensions function simply as links to popular web pages or games, some of the apps, like Springpad, do provide extra features, such as offline access. The themes and extensions have also been tightly integrated into the new store, allowing users to search the entire catalog of Chrome extras. The Chrome Web Store was opened on February 11, 2011, with the release of Google Chrome 9.0. Extensions Browser extensions are able to modify Google Chrome. They are supported by the browser's desktop edition, but not on mobile. These extensions are written using web technologies like HTML, JavaScript, and CSS. They are distributed through the Chrome Web Store, initially known as the Google Chrome Extensions Gallery. Some extensions focus on providing accessibility features. Google Tone is an extension developed by Google that, when enabled, can use a computer's speakers to exchange URLs with nearby computers with an Internet connection that have the extension enabled as well. On September 9, 2009, Google enabled extensions by default on Chrome's developer channel, and provided several sample extensions for testing. In December, the Google Chrome Extensions Gallery beta began with approximately 300 extensions. It was launched on January 25, 2010, along with Google Chrome 4.0, containing approximately 1500 extensions. In 2014, Google started preventing some Windows users from installing extensions not hosted on the Chrome Web Store. The following year Google reported a "75% drop in customer support help requests for uninstalling unwanted extensions", which led them to expand this restriction to all Windows and Mac users. Manifest V3 In October 2018, Google announced a major future update to Chrome's extension API, known as "Manifest V3" (in reference to the manifest file contained within extensions). Manifest V3 is intended to modernize the extension architecture and improve the security and performance of the browser; it adopts declarative APIs to "decrease the need for overly-broad access and enable more performant implementation by the browser", replaces background pages with feature-limited "Service Workers" to reduce resource usage, and prohibits remotely-hosted code. Google faced criticism for this change, since it limits the number of rules and types of expressions that may be checked by ad blockers. Additionally, the prohibition of remotely-hosted code will restrict the ability for ad-blocking filter lists to be updated independently of the extension itself.
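To make the Manifest V3 changes concrete, here is a minimal, hypothetical manifest.json sketch for an extension that uses a background service worker and the declarative net request API; the extension name and file paths are invented for this illustration, and a real extension would need matching background.js and rules.json files:

```json
{
  "manifest_version": 3,
  "name": "Example Filter",
  "version": "1.0",
  "background": {
    "service_worker": "background.js"
  },
  "permissions": ["declarativeNetRequest"],
  "declarative_net_request": {
    "rule_resources": [{
      "id": "ruleset_1",
      "enabled": true,
      "path": "rules.json"
    }]
  }
}
```

Note how filtering rules live in a static, declaratively evaluated ruleset rather than in imperative request-interception code, which is the architectural shift that constrains ad blockers as described above.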
Notable examples Adblock Plus (no longer available from Google due to an update on the terms of use on Chrome) Adblock for Chrome (no longer available from Google due to an update on the terms of use on Chrome) Cut the Rope Dropbox Evernote Web Facebook Messenger Ghostery Google Maps HTTPS Everywhere (discontinued) Pandora Radio Pixlr Express Privacy Badger Streamus (discontinued) TweetDeck Stop Tony Meow (discontinued) uBlock Origin (no longer available due to terms of use change on Chrome) Speed The JavaScript virtual machine used by Chrome, the V8 JavaScript engine, has features such as dynamic code generation, hidden class transitions, and precise garbage collection. In 2008, several websites performed benchmark tests using the SunSpider JavaScript Benchmark tool as well as Google's own set of computationally intense benchmarks, which include ray tracing and constraint solving. They unanimously reported that Chrome performed much faster than all competitors against which it had been tested, including Safari (for Windows), Firefox 3.0, Internet Explorer 7, Opera, and Internet Explorer 8. However, in independent tests of JavaScript performance as of October 11, 2010, Chrome had been scoring just behind Opera's Presto engine since it was updated in version 10.5. On September 3, 2008, Mozilla responded by stating that their own TraceMonkey JavaScript engine (then in beta) was faster than Chrome's V8 engine in some tests. John Resig, Mozilla's JavaScript evangelist, further commented on the performance of different browsers on Google's own suite, noting Chrome's "decimating" of the other browsers, but he questioned whether Google's suite was representative of real programs. He stated that Firefox 3.0 performed poorly on recursion-intensive benchmarks, such as those of Google, because the Mozilla team had not implemented recursion tracing yet. Two weeks after Chrome's launch in 2008, the WebKit team announced a new JavaScript engine, SquirrelFish Extreme, citing a 36% speed improvement over Chrome's V8 engine. Like most major web browsers, Chrome uses DNS prefetching to speed up website lookups; Firefox, Safari and Internet Explorer (where it is called DNS pre-resolution) do the same, while Opera supported it only through a UserScript rather than natively. Chrome formerly used the now-deprecated SPDY protocol instead of only HTTP when communicating with servers that supported it, such as Google services, Facebook, and Twitter. SPDY support was removed in Chrome version 51, as SPDY had been replaced by HTTP/2, a standard based upon it. In November 2019, Google said it was working on several "speed badging" systems that let visitors know why a page is taking time to show up. The variations include simple text warnings and more subtle signs that indicate a site is slow. No date has been given for when the badging system will be included with the Chrome browser. Chrome formerly supported a Data Saver feature for making pages load faster, called Lite Mode. Chrome engineers Addy Osmani and Scott Little had announced that Lite Mode would automatically lazy-load images and iframes for faster page loads. Lite Mode was switched off in Chrome 100, citing a decrease in mobile data costs in many countries. Security Chrome periodically retrieves updates of two blacklists (one for phishing and one for malware), and warns users when they attempt to visit a site flagged as potentially harmful. This service is also made available for use by others via a free public API called "Google Safe Browsing API".
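The DNS prefetching technique described above can be imitated in a few lines: resolve the host names of likely next navigations in background threads, so the operating system's resolver cache is already warm when the user clicks. A minimal sketch, with hypothetical host names:

```python
# Sketch: the idea behind DNS prefetching -- resolve likely-next hosts
# in the background so later connections skip the lookup latency.
# Host names are hypothetical examples.
import socket
from concurrent.futures import ThreadPoolExecutor

hosts = ["www.example.com", "cdn.example.net", "images.example.org"]

def prefetch(host):
    try:
        socket.getaddrinfo(host, 443)  # warms the resolver cache
    except socket.gaierror:
        pass  # a failed lookup is harmless for a prefetch

with ThreadPoolExecutor(max_workers=8) as pool:
    pool.map(prefetch, hosts)
```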
Chrome uses a process-allocation model to sandbox tabs. Using the principle of least privilege, each tab process cannot interact with critical memory functions (e.g. OS memory, user files) or other tab processes, similar to Microsoft's "Protected Mode" used by Internet Explorer 9 or greater. The Sandbox Team is said to have "taken this existing process boundary and made it into a jail". This enforces a computer security model whereby there are two levels of multilevel security (user and sandbox) and the sandbox can only respond to communication requests initiated by the user. On Linux, sandboxing uses seccomp mode. In January 2015, TorrentFreak reported that using Chrome when connected to the internet through a VPN can be a serious security issue due to the browser's support for WebRTC. On September 9, 2016, it was reported that starting with Chrome 56, users would be warned when they visit insecure HTTP websites, to encourage more sites to make the transition to HTTPS. On December 4, 2018, Google announced its Chrome 71 release with new security features, including a built-in ad filtering system. In addition, Google also announced its plan to crack down on websites that make people involuntarily subscribe to mobile subscription plans. On September 2, 2020, with the release of Chrome 85, Google extended support for Secure DNS in Chrome for Android. DNS-over-HTTPS (DoH) was designed to improve safety and privacy while browsing the web. Under the update, Chrome automatically switches to DNS-over-HTTPS if the current DNS provider supports the feature. Password management Windows Since 2008, Chrome has been faulted for not including a master password to prevent casual access to a user's passwords. Chrome developers have indicated that a master password does not provide real security against determined hackers and have refused to implement one. Bugs filed on this issue have been marked "WontFix". Google Chrome now asks the user to enter their Windows account password before showing saved passwords. Linux On Linux, Google Chrome/Chromium can store passwords in three ways: GNOME Keyring, KWallet, or plain text. Google Chrome/Chromium chooses which store to use automatically, based on the desktop environment in use. Passwords stored in GNOME Keyring or KWallet are encrypted on disk, and access to them is controlled by dedicated daemon software. Passwords stored in plain text are not encrypted. Because of this, when either GNOME Keyring or KWallet is in use, any unencrypted passwords that have been stored previously are automatically moved into the encrypted store. Support for using GNOME Keyring and KWallet was added in version 6, but using these (when available) was not made the default mode until version 12. macOS As of version 45, the Google Chrome password manager is no longer integrated with Keychain, since the interoperability goal is no longer possible. Security vulnerabilities No security vulnerabilities in Chrome were exploited in the three years of Pwn2Own from 2009 to 2011. At Pwn2Own 2012, Chrome was defeated by a French team who used zero-day exploits in the version of Flash shipped with Chrome to take complete control of a fully patched 64-bit Windows 7 PC, using a booby-trapped website that overcame Chrome's sandboxing. Chrome was compromised twice at the 2012 CanSecWest Pwnium.
Google's official response to the exploits was delivered by Jason Kersey, who congratulated the researchers, noting "We also believe that both submissions are works of art and deserve wider sharing and recognition." Fixes for these vulnerabilities were deployed within 10 hours of the submission. A significant number of security vulnerabilities in Chrome occurred in the Adobe Flash Player. For example, the successful 2016 Pwn2Own attack on Chrome relied on four security vulnerabilities: two in Flash, one in Chrome, and one in the Windows kernel. In 2016, Google announced that it was planning to phase out Flash Player in Chrome, starting in version 53. The first phase of the plan was to disable Flash for ads and "background analytics", with the ultimate goal of disabling it completely by the end of the year, except on specific sites that Google deemed to be broken without it. Flash would then be re-enabled, with the exclusion of ads and background analytics, on a site-by-site basis. Leaked documents from 2013 to 2016, codenamed Vault 7, detail the capabilities of the United States Central Intelligence Agency, such as the ability to compromise web browsers (including Google Chrome). Malware blocking and ad blocking Google introduced download scanning protection in Chrome 17. In February 2018, Google introduced an ad blocking feature based on recommendations from the Interactive Advertising Bureau. Sites that employ invasive ads are given a 30-day warning, after which their ads will be blocked. Consumer Reports recommended users install dedicated ad-blocking tools instead, which offer increased security against malware and tracking. Plugins Up to version 45, Chrome supported plug-ins with the Netscape Plugin Application Programming Interface (NPAPI), under which plug-ins (for example, Adobe Flash Player) ran as unrestricted separate processes outside the browser and could not be sandboxed as tabs are. ActiveX is not supported. Since 2010, Adobe Flash had been integral to Chrome and did not need to be installed separately; Flash was kept up to date as part of Chrome's own updates. Java applet support was available in Chrome with Java 6 update 12 and above. Support for Java under macOS was provided by a Java update released on May 18, 2010. On August 12, 2009, Google introduced a more portable and more secure replacement for NPAPI called the Pepper Plugin API (PPAPI). The default bundled PPAPI Flash Player (or Pepper-based Flash Player) was available on ChromeOS first, then replaced the NPAPI Flash Player on Linux from Chrome version 20, on Windows from version 21 (which also reduced Flash crashes by 20%), and eventually came to macOS at version 23. On September 23, 2013, Google announced that it would be deprecating and then removing NPAPI support. NPAPI support was removed from Linux in Chrome release 35, so NPAPI plugins like Java could no longer work in Chrome there, though Flash still could through the PPAPI Flash Player on Linux, including for Chromium. On April 14, 2015, Google released Chrome v42, disabling NPAPI by default. This made plugins without a PPAPI counterpart, such as Java, Silverlight, and Unity, incompatible with Chrome. However, NPAPI support could be enabled through the chrome://flags menu until the release of version 45 on September 1, 2015, which removed NPAPI support entirely.
Privacy Incognito mode The private browsing feature called Incognito mode prevents the browser from locally storing any history information, cookies, site data, or form inputs. Downloaded files and bookmarks will still be stored. In addition, user activity is not hidden from visited websites or the Internet service provider. Incognito mode is similar to the private browsing feature in other web browsers. It does not prevent saving in all windows: "You can switch between an incognito window and any regular windows you have open. You'll only be in incognito mode when you're using the incognito window". The iOS version of Chrome also supports the optional ability to lock incognito tabs with Face ID, Touch ID, or the device's passcode. Do Not Track In February 2012, Google announced that Chrome would implement the Do Not Track (DNT) standard to inform websites of the user's desire not to be tracked. The protocol was implemented in version 23. In line with the W3C's draft standard for DNT, it is turned off by default in Chrome. Stability A multi-process architecture is implemented in Chrome where, by default, a separate process is allocated to each site instance and plugin. This procedure is termed process isolation, and it improves security and stability by preventing tasks from interfering with each other. An attacker successfully gaining access to one process gains access to no others, and failure in one instance results in a Sad Tab screen of death, similar to the well-known Sad Mac, except that only one tab crashes instead of the whole application. This strategy exacts a fixed per-process cost up front, but results in less memory bloat over time, as fragmentation is confined to each instance and no longer needs further memory allocations. This architecture was later adopted in Safari and Firefox. Chrome includes a process management utility called Task Manager, which lets users see which sites and plugins are using the most memory, downloading the most bytes, or overusing the CPU, and provides the ability to terminate them. Chrome version 23 brought improved battery life on systems that support Chrome's GPU-accelerated video decoding. Release channels, cycles and updates The first production release on December 11, 2008, marked the end of the initial Beta test period and the beginning of production. Shortly thereafter, on January 8, 2009, Google announced an updated release system with three channels: Stable (corresponding to the traditional production channel), Beta, and Developer preview (also called the "Dev" channel). Where before there had been only two channels, Beta and Developer, now there were three. Concurrently, all Developer channel users were moved to the Beta channel along with the promoted Developer release. Google explained that the Developer channel builds would now be less stable and polished than those from Google Chrome's initial Beta period. Beta users could opt back into the Developer channel as desired. Each channel has its own release cycle and stability level. The Stable channel updated roughly quarterly, with features and fixes that passed "thorough" testing in the Beta channel. Beta updated roughly monthly, with "stable and complete" features migrated from the Developer channel. The Developer channel updated once or twice weekly and was where ideas and features were first publicly exposed "(and sometimes fail)"; these builds "can be very unstable at times". [Quoted remarks from Google's policy announcements.]
On July 22, 2010, Google announced it would ramp up the speed at which it releases new stable versions; the release cycle was shortened from quarterly to six weeks for major Stable updates. Beta channel releases now come at roughly the same rate as Stable releases, though approximately one month in advance, while Dev channel releases appear roughly once or twice weekly, allowing time for basic release-critical testing. This faster release cycle also brought a fourth channel: the "Canary" channel, updated daily from a build produced at 09:00 UTC from the most stable of the last 40 revisions. The name refers to the practice of using canaries in coal mines: if a change "kills" Chrome Canary, it is blocked from migrating down to the Developer channel, at least until fixed in a subsequent Canary build. Canary is "the most bleeding-edge official version of Chrome and somewhat of a mix between Chrome dev and the Chromium snapshot builds". Canary releases run side by side with any other channel; Canary is not linked to any other Google Chrome installation and can therefore run different synchronization profiles, themes, and browser preferences. This ensures that fallback functionality remains even when some Canary updates contain release-breaking bugs. Canary does not natively include the option to be the default browser, although on Windows and macOS it can be set as the default through the operating system's settings. Canary was Windows-only at first; a macOS version was released on May 3, 2011. The Chrome beta channel for Android was launched on January 10, 2013; like Canary, it runs side by side with the stable channel for Android. Chrome Dev for Android was launched on April 29, 2015. All Chrome channels are automatically distributed according to their respective release cycles. The mechanism differs by platform. On Windows, it uses Google Update, and auto-update can be controlled via Group Policy; alternatively, users may download a standalone installer of a version of Chrome that does not auto-update. On macOS, it uses Google Update Service, and auto-update can be controlled via the macOS "defaults" system. On Linux, updates are supplied by the system's normal package management system. This auto-updating behavior is a key difference from Chromium, the non-branded open-source browser that forms the core of Google Chrome. Because Chromium also serves as the pre-release development trunk for Chrome, its revisions are provided as source code, and buildable snapshots are produced continuously with each new commit, requiring users to manage their own browser updates. In March 2021, Google announced that starting with Chrome 94 in the third quarter of 2021, Google Chrome Stable releases would be made every four weeks, instead of every six weeks as they had been since 2010. Google also announced a new release channel for system administrators and browser embedders, with releases every eight weeks. Release version numbers Releases are identified by a four-part version number, e.g. 42.0.2311.90 (the Windows Stable release of April 14, 2015). The components are major.minor.build.patch: major.minor reflects scheduling policy, while build.patch identifies content progression. Major represents a product release; these are scheduled seven to eight times per year, unlike other software systems where the major version number updates only with substantial new content. Minor is usually 0.
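Given the four-part scheme just described, a release string splits cleanly into its components; a small sketch using the example version from the text:

```python
# Sketch: split a Chrome release string into the four components
# described above (major.minor.build.patch).
from typing import NamedTuple

class ChromeVersion(NamedTuple):
    major: int   # product release, several per year
    minor: int   # usually 0
    build: int   # content progression
    patch: int

def parse(version: str) -> ChromeVersion:
    return ChromeVersion(*map(int, version.split(".")))

v = parse("42.0.2311.90")  # the Windows Stable example from the text
print(v.major, v.build)    # -> 42 2311
```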
Technology
Browsers
null
10062061
https://en.wikipedia.org/wiki/SSPSF%20model
SSPSF model
The SSPSF (stochastic self-propagating star formation) model of star formation was proposed by Mueller & Arnett in 1976, and generalized afterward by Gerola & Seiden in 1978 and Gerola, Seiden, & Schulman in 1980. This model proposes that star formation propagates via the action of shock waves produced by stellar winds and supernovae traversing the gas that composes the interstellar medium. The Henize 206 nebula provides a clear example: in particular, 24 μm infrared (MIPS) emission shows where a new generation of stars heats the remains of the supernova remnant that induced their formation. In contrast to star formation in density-wave theories, which are limited to disk-shaped galaxies and produce global spiral patterns, SSPSF applies equally well to spirals, to irregular galaxies, and to any local concentrations of gas in elliptical galaxies. The effect may be envisioned as an "SIR infection model" in a differentially rotating disk, the host galaxy. The SIR model (perhaps most popularly familiar in the form of Conway's Game of Life) is applied to star formation propagating through the galaxy: each generation of stars in a neighborhood includes some massive ones whose stellar winds and, soon, supernovae produce shock waves in the gas (susceptible material). These shocks collapse nearby gas clouds, which produce the next generation of stars (infection propagation); but in the immediate neighborhood, all initially available gas is used up, so no further stars are born there for some period of time despite the shocks (recovery from infection). In a non-flattened galaxy, the infection would produce an outward-propagating sphere. In a non-rotating flattened (disk) environment, the infection would produce an outward-propagating ring. But in a differentially rotating flattened environment, i.e., with mass closer to the galactic center orbiting the center somewhat more quickly, the ring is sheared into an ellipse, the innermost parts moving ahead of the ring's center and the outermost parts lagging. For disk galaxies, virtually all star formation occurs in the disk. In that case, the elongated rings are likewise confined to the disk, and collectively they evolve to appear as (possibly disconnected) segments of spiral arms: see, for example, NGC 4414. In 1999, the prevailing density-wave model for the generation of spiral arms in galaxies was combined with SSPSF in a doctoral thesis by Auer (an idea first suggested by Gerola and Seiden in 1980). Auer concluded that density waves are in fact less effective in producing star formation, and more effective in simply organizing ongoing SSPSF into large-scale (spiral) patterns, ultimately into the Grand Design spiral form if conditions allow. The accompanying figure shows a simulation of a simple model for SSPSF on a circular grid. It is generated by randomly starting star formation in certain boxes of the grid, which propagates to nearby boxes as time progresses. Star formation dies out with time, and a box has a certain regeneration time that prevents it from starting new star formation just after it was active. Adding (differential) rotation to the disk during propagation creates spiral patterns of the same nature as those in actual spiral galaxies. Dark spots are areas of active star formation; lighter spots are areas of recent star formation/areas in regeneration. SSPSF processes were demonstrated in an early prototype ("Gaslight") of the 2008 video game Spore.
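The simple grid simulation described in the closing paragraph can be sketched in a few dozen lines. The following sketch, with invented parameter values, seeds random star formation on a polar grid, spreads it to neighbouring cells, enforces a regeneration time, and applies differential rotation so that sheared, spiral-like segments emerge:

```python
# Sketch of the simple SSPSF simulation described above: cells on a
# polar grid form stars, "infect" their neighbours, then must regenerate
# before forming stars again, while the disk rotates differentially.
# All parameter values are illustrative assumptions.
import random

N_RINGS, N_CELLS = 30, 120           # polar grid: rings x angular cells
P_SPREAD, P_SPONTANEOUS = 0.18, 0.0005
REGEN_TIME = 12                      # steps a cell stays infertile

# state[r][c] = 0 (ready to form stars) or >0 (regeneration steps left)
state = [[0] * N_CELLS for _ in range(N_RINGS)]
active = set()                       # cells forming stars this step

for step in range(200):
    new_active = set()
    # infection propagation: active cells may ignite ready neighbours
    for r, c in active:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, (c + dc) % N_CELLS
            if 0 <= rr < N_RINGS and state[rr][cc] == 0 \
                    and random.random() < P_SPREAD:
                new_active.add((rr, cc))
    # occasional spontaneous star formation anywhere on the grid
    for r in range(N_RINGS):
        for c in range(N_CELLS):
            if state[r][c] == 0 and random.random() < P_SPONTANEOUS:
                new_active.add((r, c))
    # count down regeneration timers, then exhaust the new active cells
    for r in range(N_RINGS):
        for c in range(N_CELLS):
            state[r][c] = max(0, state[r][c] - 1)
    for r, c in new_active:
        state[r][c] = REGEN_TIME
    # differential rotation: inner rings advance more cells per step
    shift = [int(N_CELLS / (r + 5)) for r in range(N_RINGS)]
    state = [row[-s:] + row[:-s] if s else row
             for row, s in zip(state, shift)]
    active = {(r, (c + shift[r]) % N_CELLS) for r, c in new_active}

print(f"active cells after 200 steps: {len(active)}")
```

Rendering the state array as an image each step would show the sheared arm segments the article describes; this text-only version just tracks the cell states.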
Physical sciences
Galaxy classification
Astronomy
10064152
https://en.wikipedia.org/wiki/Continental%20rise
Continental rise
The continental rise is a low-relief zone of accumulated sediments that lies between the continental slope and the abyssal plain. It is a major part of the continental margin, covering around 10% of the ocean floor. Formation This geologic structure results from deposition of sediments, mainly due to mass wasting, the gravity-driven downhill motion of sand and other sediments. Mass wasting can occur gradually, with sediments accumulating discontinuously, or in large, sudden events. Large mass wasting occurrences are often triggered by sudden events such as earthquakes or oversteepening of the continental slope. More gradual accumulation of sediments occurs when hemipelagic sediments suspended in the ocean slowly settle to the ocean basin. Slope Because the continental rise lies below the continental slope and is formed from sediment deposition, it has a very gentle slope, usually ranging from 1:50 to 1:500. As the continental rise extends seaward, the layers of sediment thin, and the rise merges with the abyssal plain, typically forming a slope of around 1:1000. Accompanying Structures Alluvial Fans Deposition of sediments at the mouth of submarine canyons may form enormous fan-shaped accumulations called submarine fans on both the continental slope and continental rise. Alluvial or sedimentary fans are shallow cone-shaped reliefs at the base of the continental slope that merge together, forming the continental rise. Erosional submarine canyons slope downward and lead to alluvial fan valleys with increasing depth. It is in this zone that sediment is deposited, forming the continental rise. Alluvial fans such as the Bengal Fan, which stretches , make up one of the largest sedimentary structures in the world. Many alluvial fans also contain critical oil and natural gas reservoirs, making them key points for the collection of seismic data. Abyssal Plain Beyond the continental rise stretches the abyssal plain, which lies on top of basaltic oceanic crust and spans the majority of the seafloor. The abyssal plain hosts life forms which are uniquely adapted to survival in its cold, high pressure, and dark conditions. The flatness of the abyssal plain is interrupted by massive underwater mountain chains near the tectonic boundaries of Earth's plates. The sediments are mostly silt and clay.
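For intuition, the gradient ratios quoted above correspond to fractions of a degree. A quick conversion, reading a "1:n" slope as a rise of one unit per n units of horizontal distance:

```python
# Convert the gradient ratios quoted above into angles in degrees.
# A "1:n" slope is read here as a rise of 1 unit per n horizontal units.
import math

for n in (50, 500, 1000):
    angle = math.degrees(math.atan(1 / n))
    print(f"1:{n}  ->  {angle:.3f} degrees")
# 1:50 is about 1.1 degrees; 1:1000 is under 0.06 degrees, which is
# why the rise merges almost imperceptibly into the abyssal plain.
```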
Physical sciences
Tectonics
Earth science
10068133
https://en.wikipedia.org/wiki/Sunda%20clouded%20leopard
Sunda clouded leopard
The Sunda clouded leopard (Neofelis diardi) is a medium-sized wild cat native to Borneo and Sumatra. It has been listed as Vulnerable on the IUCN Red List since 2015, as the total effective population probably consists of fewer than 10,000 mature individuals, with a decreasing population trend. On both Sunda Islands, it is threatened by deforestation. Based on a 2006 study, it was classified as a separate species, distinct from its close relative, the clouded leopard of mainland Southeast Asia. Its fur is darker, with a smaller cloud pattern. This cat is also known as the Sundaland clouded leopard, Enkuli clouded leopard, Diard's clouded leopard, and Diard's cat. Characteristics The Sunda clouded leopard is overall grayish yellow or gray in hue. It has a double midline on the back and is marked with small irregular cloud-like patterns on the shoulders. These cloud markings frequently have spots inside and form two or more rows that are arranged vertically from the back on the flanks. It can purr, as its hyoid bone is ossified. Its pupils contract to vertical slits. It has a stocky build and weighs around . Its canine teeth are long, which, in proportion to the skull length, are longer than those of any other living cat. Its tail can grow to be as long as its body, aiding balance. Distribution and habitat The Sunda clouded leopard is restricted to the islands of Borneo and Sumatra. In Borneo, it occurs in lowland rainforest, and at lower density in logged forest below . In Sumatra, it appears to be more abundant in hilly, montane areas. It is unknown whether it still occurs on the Batu Islands close to Sumatra. Between March and August 2005, tracks of clouded leopards were recorded during field research in the Tabin Wildlife Reserve in Sabah. The population size in the research area was estimated to be five individuals, based on a capture-recapture analysis of four confirmed animals differentiated by their tracks. The density was estimated at eight to 17 individuals per . The population in Sabah is roughly estimated at 1,500–3,200 individuals, with only 275–585 of them living in totally protected reserves that are large enough to hold a long-term viable population of more than 50 individuals. Density outside protected areas in Sabah is probably much lower, estimated at one individual per . In Sumatra, it has been recorded in Kerinci Seblat, Gunung Leuser and Bukit Barisan Selatan National Parks. It most probably occurs at much lower densities than on Borneo. One explanation for this lower density, of about 1.29 individuals per , might be that on Sumatra it is sympatric with the Sumatran tiger, whereas on Borneo it is the largest carnivore. Clouded leopard fossils have been excavated on Java, where the species perhaps became extinct in the Holocene. Ecology and behaviour The habits of the Sunda clouded leopard are largely unknown because of the animal's secretive nature. It is assumed to be generally solitary. It hunts mainly on the ground and uses its climbing skills to hide from dangers. Taxonomy and evolution Felis diardi was the scientific name proposed by Georges Cuvier in 1823 in honour of Pierre-Médard Diard, who sent a skin and a drawing from Java to the National Museum of Natural History in Paris. It was subordinated as a clouded leopard subspecies by Reginald Innes Pocock in 1917. Results of molecular genetic analysis of hair samples from mainland and Sunda clouded leopards showed differences in mtDNA, nuclear DNA sequences, and microsatellite and cytogenetic variation.
This indicates that they diverged between 2.0 and 0.9 million years ago; their last common ancestor probably crossed a now-submerged land bridge to reach Borneo and Sumatra. Results of a morphometric analysis of the pelages of 57 clouded leopards sampled throughout the genus' wide geographical range indicated that the two morphological groups differ primarily in the size of their cloud markings. The genus Neofelis was therefore reclassified as comprising two distinct species, N. nebulosa on the mainland and N. diardi in Sumatra and Borneo. Molecular, craniomandibular, and dental analysis indicates the Sunda clouded leopard has two distinct subspecies with separate evolutionary histories: Bornean clouded leopard (N. d. borneensis) Sumatran clouded leopard (N. d. diardi) Both populations are estimated to have diverged during the Middle to Late Pleistocene. This split corresponds roughly with the catastrophic super-eruption of the Toba volcano in Sumatra 69,000–77,000 years ago. A probable scenario is that Sunda clouded leopards from Borneo recolonized Sumatra during periods of low sea levels in the Pleistocene, and were later separated from their source population by rising sea levels. Threats Being strongly arboreal, Sunda clouded leopards are forest-dependent and are increasingly threatened by habitat destruction following deforestation in Indonesia as well as in Malaysia. Since the early 1970s, much of the forest cover has been cleared in southern Sumatra, in particular lowland tropical evergreen forest. Fragmentation of forest stands and agricultural encroachment have rendered wildlife particularly vulnerable to human pressure. Borneo has one of the world's highest deforestation rates: while in the mid-1980s forests still covered nearly three quarters of the island, by 2005 only 52% of Borneo was still forested. Forests are also cleared to make way for human settlement, and illegal trade in wildlife is a widespread practice. The population of Sunda clouded leopards in Sumatra and Borneo has been estimated to be decreasing due to forest loss, forest conversion, illegal logging, encroachment, and possibly hunting. In Borneo, forest fires pose an additional threat, particularly in Kalimantan and in the Sebangau National Park. There have been reports of poaching of Sunda clouded leopards in Brunei's Belait District, where locals sell the pelts at a lucrative price. In Indonesia, the Sunda clouded leopard is threatened by illegal hunting and trade: between 2011 and 2019, remains or live specimens of 32 individuals were seized, including 17 live animals, six skins, and several canines and claws. One live individual seized in Jakarta had been ordered by a Kuwaiti buyer. Conservation Neofelis diardi is listed on CITES Appendix I and is fully protected in Sumatra, Kalimantan, Sabah, Sarawak and Brunei. Sunda clouded leopards occur in most protected areas along the Sumatran mountain spine and in most protected areas on Borneo.
Since November 2006, the Bornean Wild Cat and Clouded Leopard Project, based in the Danum Valley Conservation Area and the Tabin Wildlife Reserve, has aimed to study the behaviour and ecology of the five species of Bornean wild cat — bay cat, flat-headed cat, marbled cat, leopard cat, and Sunda clouded leopard — and their prey, with a focus on the clouded leopard; investigate the effects of habitat alteration; increase awareness of the Bornean wild cats and their conservation needs, using the clouded leopard as a flagship species; and investigate threats to the Bornean wild cats from hunting and trade in Sabah. The Sunda clouded leopard is one of the focal cats of the project Conservation of Carnivores in Sabah, based in northeastern Borneo since July 2008. The project team evaluates the consequences of different forms of forest exploitation for the abundance and density of felids in three commercially used forest reserves. They intend to assess the conservation needs of these felids and develop species-specific conservation action plans together with other researchers and all local stakeholders. Names The scientific name of the genus Neofelis is a composite of the Greek word νεο- meaning "new, fresh, strange", and the Latin word feles meaning "cat", so it literally means "new cat". The Indonesian name for the clouded leopard, rimau-dahan, means "tree tiger" or "branch tiger". In Sarawak, it is known as entulu.
Biology and health sciences
Felines
Animals
7804521
https://en.wikipedia.org/wiki/Rain%20gutter
Rain gutter
A rain gutter, eavestrough, eaves-shoot or surface water collection channel is a component of a water discharge system for a building. It is necessary to prevent water dripping or flowing off roofs in an uncontrolled manner for several reasons: to prevent it damaging the walls, drenching persons standing below or entering the building, and to direct the water to a suitable disposal site where it will not damage the foundations of the building. In the case of a flat roof, removal of water is essential to prevent water ingress and to prevent a build-up of excessive weight. Water from a pitched roof flows down into a valley gutter, a parapet gutter or an eaves gutter. An eaves gutter is also known as an eavestrough (especially in Canada), spouting in New Zealand, rhone or rone (Scotland), eaves-shoot (Ireland), eaves channel, dripster, guttering, rainspouting or simply as a gutter. The word gutter derives from Latin gutta (noun), meaning "a droplet". Guttering in its earliest form consisted of lined wooden or stone troughs. Lead was a popular liner and is still used in pitched valley gutters. Many materials have been used to make guttering: cast iron, asbestos cement, UPVC (PVCu), cast and extruded aluminium, galvanized steel, wood, copper, zinc, and bamboo. Description Gutters prevent water ingress into the fabric of the building by channelling the rainwater away from the exterior of the walls and their foundations. Water running down the walls causes dampness in the affected rooms and provides a favourable environment for growth of mould and wet rot in timber. A rain gutter may be a: Roof integral trough along the lower edge of the roof slope which is fashioned from the roof covering and flashing materials. Discrete trough of metal or other material that is suspended beyond the roof edge and below the projected slope of the roof. Wall integral structure beneath the roof edge, traditionally constructed of masonry, fashioned as the crowning element of a wall. A roof must be designed with a suitable fall to allow the rainwater to discharge. The water drains into a gutter that is fed into a downpipe. A flat roof should have a watertight surface with a minimum finished fall of 1 in 80. Flat roofs can drain internally or to an eaves gutter, which has a minimum fall of 1 in 360 towards the downpipe. The pitch of a pitched roof is determined by the construction material of the covering: for slate this will be 25%, for machine-made tiles 35%. Water falls towards a parapet gutter, a valley gutter or an eaves gutter. When two pitched roofs meet at an angle, they also form a pitched valley gutter; the join is sealed with valley flashing. Parapet gutters and valley gutters discharge into internal rainwater pipes or directly into external downpipes at the end of the run. The capacity of the gutter is a significant design consideration: the area of the roof (in square metres) is multiplied by a design rainfall intensity (in litres per second per square metre), assumed to be 0.0208, giving the required discharge capacity of the outfall in litres per second. Rainfall intensity, the amount of water likely to be generated in a two-minute rainstorm, matters more than average rainfall: the British Standards Institution notes that an indicative storm in Essex (annual rainfall 500 mm) delivers 0.022 L/s/m², while one in Cumbria (annual rainfall 1800 mm) delivers 0.014 L/s/m².
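A worked version of the sizing rule just described might look like the following sketch; the 100 m² roof is an invented example, while the three intensity figures are the ones quoted in the text:

```python
# Sketch: the gutter sizing rule described above -- required outfall
# capacity (L/s) = roof area (m^2) x rainfall intensity (L/s/m^2).
# The 100 m^2 roof is a hypothetical example; the intensities are the
# design figure and the two regional storm figures quoted in the text.
roof_area_m2 = 100.0

for label, intensity in [("design figure", 0.0208),
                         ("Essex storm", 0.022),
                         ("Cumbria storm", 0.014)]:
    capacity = roof_area_m2 * intensity
    print(f"{label}: {capacity:.2f} L/s required outfall capacity")
```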
Eaves gutters can be made from a variety of materials such as cast iron, lead, zinc, galvanised steel, painted steel, copper, painted aluminium, PVC (and other plastics) and occasionally from concrete, stone, and wood. Water collected by a rain gutter is fed, usually via a downpipe (also called a leader or conductor), from the roof edge to the base of the building, where it is either discharged or collected. The downpipe can terminate in a shoe and discharge directly onto the surface, but with modern construction techniques it would be connected through an inspection chamber to a drain leading to a surface water drain or soakaway. Alternatively it would connect via a storm drain (U-bend) with a 50 mm water seal to a combined drain. Water from rain gutters may be harvested in a rain barrel or a cistern. Rain gutters can be equipped with gutter screens, micro-mesh screens, louvers or solid hoods to allow water from the roof to flow through while reducing passage of roof debris into the gutter. Clogged gutters can cause water ingress into the building as the water backs up, and can also lead to stagnant water build-up, which in some climates allows mosquitoes to breed. History The Romans brought rainwater systems to Britain. The technology was subsequently lost, but was re-introduced by the Normans. The White Tower at the Tower of London had external gutters. In March 1240 the Keeper of the Works at the Tower of London was ordered by King Henry "to have the Great Tower whitened both inside and out", according to the fashion of the time. Later that year the king wrote to the Keeper, commanding that the White Tower's lead guttering should be extended with the effect that "the wall of the tower ... newly whitened, may be in no danger of perishing or falling outwards through the trickling of the rain". In Saxon times, the thanes erected buildings with large overhanging roofs to throw the water clear of the walls, in the same way as occurs in thatched cottages. The cathedral builders used lead parapet gutters, with elaborate gargoyles, for the same purpose. With the dissolution of the monasteries, those buildings were recycled and there was plenty of lead that could be used for secular building. The yeoman would use wooden gutters or lead-lined wooden gutters. When The Crystal Palace was designed in 1851 by Joseph Paxton, with its innovative ridge-and-furrow roof, the rafters that spanned the space between the roof girders of the glass roof also served as the gutters. The wooden Paxton gutters had a deep semi-circular channel to remove the rainwater and grooves at the side to handle the condensation. They were under-trussed with an iron plate and had preformed notches for the glazing bars; they drained into a wooden box gutter that drained into and through structural cast-iron columns. The Industrial Revolution introduced new methods of casting iron, and the railways brought a means of distributing the heavy cast-iron items to building sites. The relocation into the cities created a demand for housing that needed to be compact. Drier houses reduced asthma, bronchitis and emphysema, as well as pneumonia. In 1849 Joseph Bazalgette proposed a sewerage system for London that prevented run-off from being channelled into the Thames. By the 1870s all houses were constructed with cast-iron gutters and downpipes. The Victorian gutter was an ogee, 115 mm in width, that was fitted directly to the fascia boards, eliminating the need for brackets.
Square and half-round profiles were also available. For a brief period after the First World War, asbestos-cement guttering became popular because it was maintenance-free; its disadvantages, however, kept this period short: it was bulkier than cast iron and fractured on impact. Types Cast iron Cast iron gutters were introduced in the late 18th century as an alternative to lead. Cast iron enabled eaves gutters to be mass-produced: they were rigid and non-porous, while lead could only be used as a liner within timber gutters. Installation was a single process and did not require heat. They could be attached directly to the fascia board. Cast iron gutters are still specified for restoration work in conservation areas, but are usually replaced with cast aluminium made to the same profile. Extruded aluminium gutters can be made to a variety of profiles from a roll of aluminium sheet on site, in lengths of up to 30 m. They feature internal brackets at 400 mm spacing. UPVC In UK domestic architecture, guttering is often made from UPVC sections. The first PVC pipes were introduced in the 1930s for use in sanitary drainage systems. Polyethylene was developed in 1933. The first pressurised plastic drinking water pipes were installed in the Netherlands in the 1950s. During the 1960s, rainwater pipes, guttering and downpipes using plastic materials were introduced, followed by PVC soil systems, which became viable with the introduction of ring seals. After a British Standard was launched for soil systems, local authorities started to specify PVC systems. By 1970 plastic rainwater systems accounted for over 60% of new installations. A European Standard, EN 607, has existed since 2004. UPVC guttering is easy to install, economical and lightweight, requires minimal maintenance, and has a life expectancy of 50 years. The material has a disadvantageously high coefficient of thermal expansion, 0.06 mm/m°C, so design allowances have to be made: a 4-metre gutter enduring a −5 °C to 25 °C temperature range will need space to expand by 30 × 4 × 0.06 = 7.2 mm within its end stops. As a rule of thumb, a gutter with a single downpipe will drain a roof. Stainless steel High-quality stainless steel guttering systems are available for homes and commercial projects. The advantages of stainless steel are durability, corrosion resistance, ease of cleaning, and superior aesthetics. Compared with concrete or wood, a stainless steel gutter will undergo non-negligible cycles of thermal expansion and contraction as the temperature changes; if allowance for this movement is not made during installation, the gutter may deform, which can lead to improper drainage of the gutter system. Seamless gutters Seamless gutters have the advantage of being produced on site with a portable roll-forming machine to match the specifications of the structure, and are generally installed by experienced tradesmen. Seamless gutter is 0.027 in thick and, if properly installed, will last 30 years or more. Zinc In commercial and domestic architecture, guttering is often made from zinc-coated mild steel for corrosion resistance. Metal gutters with bead-stiffened fronts are governed in the UK by BS EN 612:2005. Aluminium Aluminium gutters offer good corrosion resistance, are lightweight, and are easy to install. Additionally, aluminium gutters come in a variety of finishes and styles. Finlock gutters Finlock gutters, a proprietary name for concrete gutters, can be employed on a large range of buildings.
They were used on domestic properties in the 1950s and 1960s as a replacement for cast iron gutters, when there was a shortage of steel and a surplus of concrete. They were discredited after differential movement was found to open joints and allow damp to penetrate, but can be fitted with an aluminium and bitumastic liner. Finlock concrete gutter units are made up of two troughs: one is the visible gutter and the other sits across the cavity wall. The blocks, which can range from , can be joined using reinforcing rods and concrete to form lintels for doors and windows. Vernacular buildings Guttering can be made from any locally available material, such as stone or wood. Porous materials may be lined with pitch or bitumen. Shapes Western construction today uses mainly three types of gutter: K-style, round, and square. In the past there were twelve gutter shapes/styles; K-style gets its name from its letter designation, being the eleventh of the twelve. Gutter guards Gutter guards (also called gutter covers, gutter protection or leaf guards) are primarily aimed at preventing damage caused by clogged gutters and reducing the need for regular gutter cleaning. They are a common add-on, or included as an option, for custom-built homes. Types of gutter guards Brush gutter guards resemble pipe cleaners and are easy to install. They prevent large debris from clogging gutters, but are less effective against smaller debris. Foam gutter guards are also easy to install. They fit into gutters, so they prevent large objects from obstructing water flow, but they do not prevent algae and plant growth. A drawback of foam filters is that the pores quickly become clogged, blocking water from passing through and requiring replacement. Reverse-curve or surface-tension guards reduce clogging by narrowing the opening of the gutters. Many find them to be unattractive and difficult to maintain. Screen gutter guards are among the most common and most effective. They can be snapped on or mounted, and are made of metal or plastic. Micromesh gutter guards provide the most protection from small and large debris. PVC gutter guards are a less costly option; however, they tend to become brittle quickly due to sun exposure.
Technology
Hydraulic infrastructure
null
2422921
https://en.wikipedia.org/wiki/Black%20fly
Black fly
A black fly or blackfly (sometimes called a buffalo gnat, turkey gnat, or white socks) is any member of the family Simuliidae of the Culicomorpha infraorder. It is related to the Ceratopogonidae, Chironomidae, and Thaumaleidae. Over 2,200 species of black flies have been formally named, of which 15 are extinct. They are divided into two subfamilies: Parasimuliinae contains only one genus and four species; Simuliinae contains all the rest. Over 1,800 of the species belong to the genus Simulium. Most black flies gain nourishment by feeding on the blood of mammals, including humans, although the males feed mainly on nectar. They are usually small, black or gray, with short legs and antennae. They are a common nuisance for humans, and many U.S. states have programs to suppress the black fly population. They spread several diseases, including river blindness in Africa (Simulium damnosum and S. neavei) and the Americas (S. callidum and S. metallicum in Central America, S. ochraceum in Central and South America). Ecology Eggs are laid in running water, and the larvae attach themselves to rocks. Breeding success is highly sensitive to water pollution. The larvae use tiny hooks at the ends of their abdomens to hold on to the substrate, using silk holdfasts and threads to move or hold their place. They have foldable fans surrounding their mouths, also termed "mouth brushes". The fans expand when feeding, catching passing debris (small organic particles, algae, and bacteria). The larva scrapes the fan's catch into its mouth every few seconds. Black flies depend on lotic habitats to bring food to them. They pupate under water and then emerge in a bubble of air as flying adults. They are often preyed upon by trout during emergence. The larvae of some South African species are known to be phoretic on mayfly nymphs. Adult males feed on nectar, while females exhibit anautogeny and feed on blood before laying eggs. Some species in Africa can range as far as from aquatic breeding sites in search of their blood meals, while other species have more limited ranges. Different species prefer different host sources for their blood meals, which is sometimes reflected in the common name for the species. They feed in the daytime, preferably when wind speeds are low. Black flies may be either univoltine or multivoltine, depending on the species. The number of generations a particular pest species has each year tends to correlate with the intensity of human efforts to control those pests. Work conducted at Portsmouth University in 1986–1987 indicates that Simulium spp. create highly acidic conditions within their midguts. This acidic environment provides conditions ideally suited to bacteria that metabolise cellulose. Insects cannot metabolise cellulose independently, but the presence of these bacteria allows cellulose to be metabolised into basic sugars. This provides nutrition to the black fly larvae, as well as to the bacteria. This symbiotic relationship indicates a specific adaptation, as fresh-flowing streams could not provide sufficient nutrition to the growing larva in any other way. Regional effects of black fly populations In the wetter parts of the northern latitudes of North America, including parts of Canada, New England, Upstate New York, Minnesota, and the Upper Peninsula of Michigan, black fly populations swell from late April to July, becoming a nuisance to humans engaging in common outdoor activities, such as gardening, boating, camping, backpacking, and even walking.
They can also be a significant nuisance in mountainous areas. Black flies are a scourge to livestock in Canada, causing weight loss in cattle and sometimes death. Pennsylvania operates the largest single black fly control program in North America. The program is seen as beneficial to both the quality of life for residents and to the state's tourism industry. The Blandford fly (Simulium posticatum) in England was once a public health problem in the area around Blandford Forum, Dorset, due to its large numbers and the painful lesions caused by its bite. It was eventually controlled by carefully targeted applications of Bacillus thuringiensis israelensis. In 2010, a summer surge of insect bites blamed on the Blandford fly required many who had been bitten to be treated in a hospital. The New Zealand "sandflies" are actually black flies of the species Austrosimulium australense and A. ungulatum. In parts of Scotland, various species of black flies are a nuisance and bite humans, mainly between May and September. They are found mainly in mixed birch and juniper woodlands, and at lower levels in pine forests, moorlands, and pastures. Bites are most often found on the head, neck, and back. They also frequently land on legs and arms. In Peninsular Malaysia 35 species of preimaginal black flies were discovered in a study in 2016, including Simulium digrammicum, which had been considered locally extinct. Public health Only four genera in the family Simuliidae, Simulium, Prosimulium, Austrosimulium, and Cnephia, contain species that feed on people, though other species prefer to feed on other mammals or on birds. Simulium, the type genus, is the most widespread and is a vector for several diseases, including river blindness. Mature adults can disperse tens or hundreds of miles from their breeding grounds in fresh flowing water, under their own power and assisted by prevailing winds, complicating control efforts. Swarming behavior can make outdoor activities unpleasant or intolerable, and can affect livestock production. During the 18th century, the "Golubatz fly" (Simulium colombaschense) was a notorious pest in central Europe. Even non-biting clouds of black flies, whether composed of males or of species that do not feed on humans or do not require a blood meal before egg laying, can form a nuisance by swarming into orifices. Bites are shallow and accomplished by first stretching the skin using teeth on the labrum and then abrading it with the maxillae and mandibles, cutting the skin and rupturing its fine capillaries. Feeding is facilitated by a powerful anticoagulant in the flies' saliva, which also partially numbs the site of the bite, reducing the host's awareness of being bitten and thereby extending the flies' feeding time. Biting flies feed during daylight hours only and tend to zero in on areas of thinner skin, such as the nape of the neck or ears and ankles. Itching and localized swelling and inflammation sometimes result from a bite. Swelling can be quite pronounced depending on the species and the individual's immune response, and irritation may persist for weeks. Intense feeding can cause "black fly fever", with headache, nausea, fever, swollen lymph nodes, and aching joints; these symptoms are probably a reaction to a compound from the flies' salivary glands. Less common severe allergic reactions may require hospitalization. Repellents provide some protection against biting flies. 
Products containing the active ingredient ethyl butylacetylaminopropionate (IR3535), DEET (N,N-diethyl-meta-toluamide), or picaridin are most effective. Some beauty products have been found effective, and their use as insect repellents has been approved by the EPA (e.g., Skin So Soft). However, given the limited effectiveness of repellents, protecting oneself against biting flies requires additional measures, such as avoiding areas inhabited by the flies, avoiding peak biting times, and wearing heavy-duty, light-colored clothing, including long-sleeved shirts, long pants and hats. When black flies are numerous and unavoidable, netting that covers the head, like the "bee bonnets" used by beekeepers, can provide protection. River blindness Black flies are central to the transmission of the parasitic nematode Onchocerca volvulus, the cause of onchocerciasis or "river blindness", which is endemic in parts of South America, Africa, and the Arabian Peninsula. The fly serves as the larval host for the nematode and acts as the vector by which the disease is spread. The parasite lives on human skin and is transmitted to the black fly during feeding.
Biology and health sciences
Flies (Diptera)
null
2423712
https://en.wikipedia.org/wiki/Magic%20acid
Magic acid
Magic acid () is a superacid consisting of a mixture, most commonly in a 1:1 molar ratio, of fluorosulfuric acid () and antimony pentafluoride (). This conjugate Brønsted–Lewis superacid system was developed in the 1960s by Ronald Gillespie and his team at McMaster University, and has been used by George Olah to stabilise carbocations and hypercoordinated carbonium ions in liquid media. Magic acid and other superacids are also used to catalyze isomerization of saturated hydrocarbons, and have been shown to protonate even weak bases, including methane, xenon, halogens, and molecular hydrogen. History The term "superacid" was first used in 1927, when James Bryant Conant found that perchloric acid could protonate ketones and aldehydes to form salts in nonaqueous solution. The term itself was coined by R. J. Gillespie, after Conant combined sulfuric acid with fluorosulfuric acid and found the solution to be several million times more acidic than sulfuric acid alone. The magic acid system was developed in the 1960s by Ronald Gillespie and was used to study stable carbocations. Gillespie also used the acid system to generate electron-deficient inorganic cations. The name originated after a Christmas party in 1966, when a member of the Olah lab placed a paraffin candle into the acid and found that it dissolved quite rapidly. Examination of the solution with 1H-NMR showed a tert-butyl cation, suggesting that the paraffin chain forming the wax had been cleaved and then isomerized to the relatively stable tertiary carbocation. The name appeared in a paper published by the Olah lab. Properties Structure Although a 1:1 molar ratio of and best generates carbonium ions, the effects of the system at other molar ratios have also been documented. When the ratio : is less than 0.2, the following two equilibria, determined by 19F NMR spectroscopy, are the most prominent in solution: (In both of these structures, the sulfur has tetrahedral coordination, not planar. The double bonds between sulfur and oxygen are more properly represented as single bonds, with formal negative charges on the oxygen atoms and a formal plus-two charge on the sulfur. The antimony atoms will also have a formal charge of minus one.) In the above figure, Equilibrium I accounts for 80% of the NMR data, while Equilibrium II accounts for about 20%. As the ratio of the two compounds increases from 0.4 to 1.4, new NMR signals appear and increase in intensity with increasing concentrations of . The resolution of the signals decreases as well, because of the increasing viscosity of the liquid system. Strength All proton-producing acids stronger than 100% sulfuric acid are considered superacids, and are characterized by low values of the Hammett acidity function. For instance, sulfuric acid, , has a Hammett acidity function, H0, of −12; perchloric acid, , has a Hammett acidity function of −13; and that of the 1:1 magic acid system, , is −23. Fluoroantimonic acid, the strongest known superacid, is believed to reach extrapolated H0 values down to −28. Uses Observations of stable carbocations Magic acid has low nucleophilicity, allowing for increased stability of carbocations in solution. The "classical" trivalent carbocation can be observed in the acid medium, and has been found to be planar and sp2-hybridized. Because the carbon atom is surrounded by only six valence electrons, it is highly electron-deficient and electrophilic.
It is easily described by Lewis dot structures because it contains only two-electron single bonds to adjacent carbon atoms. Many tertiary cycloalkyl cations can also be formed in superacidic solutions. One such example is the 1-methyl-1-cyclopentyl cation, which is formed from both the cyclopentane and cyclohexane precursors. In the case of cyclohexane, the cyclopentyl cation is formed by isomerization of the secondary carbocation to the tertiary, more stable carbocation. Cyclopropylcarbenium ions, alkenyl cations, and arenium cations have also been observed. As use of the magic acid system became more widespread, however, higher-coordinate carbocations were observed. Penta-coordinate carbocations, also described as nonclassical ions, cannot be depicted using only two-electron, two-center bonds and require, instead, two-electron, three- (or more) center bonding. In these ions, two electrons are delocalized over more than two atoms, rendering these bond centers so electron-deficient that they enable saturated alkanes to participate in electrophilic reactions. The discovery of hypercoordinated carbocations fueled the nonclassical ion controversy of the 1950s and 60s. Due to the slow timescale of 1H-NMR, the rapidly equilibrating positive charges on hydrogen atoms would likely go undetected. However, IR spectroscopy, Raman spectroscopy, and 13C NMR have been used to investigate bridged carbocation systems. One controversial cation, the norbornyl cation, has been observed in several media, magic acid among them. The bridging methylene carbon atom is pentacoordinated, with three two-electron, two-center bonds and one two-electron, three-center bond with its remaining sp3 orbital. Quantum mechanical calculations have also shown that the classical model is not an energy minimum. Reactions with alkanes Magic acid is capable of protonating alkanes. For instance, methane reacts to form the ion at 140 °C and atmospheric pressure, though some hydrocarbon ions of greater molecular weight are also formed as byproducts. Hydrogen gas is another reaction byproduct. In the presence of rather than , methane has been shown to exchange hydrogen atoms for deuterium atoms, and HD is released rather than . This is evidence that in these reactions methane is indeed a base and can accept a proton from the acid medium to form . This ion is then deprotonated, explaining the hydrogen exchange, or loses a hydrogen molecule to form , the carbonium ion. This species is quite reactive and can yield several new carbocations, shown below. Larger alkanes, such as ethane, are also reactive in magic acid, and both exchange hydrogen atoms and condense to form larger carbocations, such as protonated neopentane. This ion is cleaved at higher temperatures, and at lower temperatures reacts to release hydrogen gas and form the t-amyl cation. It is on this note that George Olah suggests we no longer treat the names "alkane" and "paraffin" as synonymous. The word "paraffin" is derived from the Latin "parum affinis", meaning "lacking in affinity". He says, "It is, however, with some nostalgia that we make this recommendation, as 'inert gases' at least maintained their 'nobility' as their chemical reactivity became apparent, but referring to 'noble hydrocarbons' would seem to be inappropriate." Catalysis with hydroperoxides Magic acid catalyzes cleavage-rearrangement reactions of tertiary hydroperoxides and tertiary alcohols.
The nature of the experiments used to determine the mechanism, namely the fact that they took place in a superacid medium, allowed observation of the carbocation intermediates formed. It was determined that the mechanism depends on the amount of magic acid used. Near molar equivalency, only O–O cleavage is observed, but with an increasing excess of magic acid, C–O cleavage competes with O–O cleavage. The excess acid likely deactivates the hydrogen peroxide formed in C–O heterolysis. Magic acid also catalyzes electrophilic hydroxylation of aromatic compounds with hydrogen peroxide, resulting in high-yield preparation of monohydroxylated products. Phenols exist as completely protonated species in superacid solutions and, when produced in the reaction, are thereby deactivated toward further electrophilic attack. Protonated hydrogen peroxide is the active hydroxylating agent. Catalysis with ozone Oxygenation of alkanes can be catalyzed by a magic acid solution in the presence of ozone. The mechanism is similar to that of the protolysis of alkanes, with an electrophilic insertion into the single σ bonds of the alkane. The transition state of the hydrocarbon–ozone complex has the form of a penta-coordinated ion. Alcohols, ketones, and aldehydes are oxygenated by electrophilic insertion as well. Safety As with all strong acids, and especially superacids, proper personal protective equipment should be used. In addition to the obligatory gloves and goggles, a face shield and a full-face respirator are also recommended. Predictably, magic acid is highly toxic upon ingestion and inhalation, causes severe skin and eye burns, and is toxic to aquatic life.
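For readers unfamiliar with the Hammett scale used in the Strength section above, a short worked illustration may help; the formula below is the standard textbook definition of the acidity function, and the arithmetic simply restates the −12 and −23 values quoted in that section.

H_0 = \mathrm{p}K_{\mathrm{BH^+}} - \log\frac{[\mathrm{BH^+}]}{[\mathrm{B}]}

Because the scale is logarithmic, the gap between sulfuric acid (H_0 = -12) and the 1:1 magic acid system (H_0 = -23) is \Delta H_0 = (-12) - (-23) = 11 units, corresponding to roughly a 10^{11}-fold increase in effective protonating ability.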
Physical sciences
Specific acids
Chemistry
2426137
https://en.wikipedia.org/wiki/Tapinoma%20melanocephalum
Tapinoma melanocephalum
Tapinoma melanocephalum is a species of ant that goes by the common name ghost ant. Ghost ants are recognised by their dark head and pale or translucent legs and gaster (abdomen). This colouring makes this tiny ant seem even smaller. Description The ghost ant is small. The antennae are composed of 12 segments that thicken towards the tip, and the antennal scapes exceed the occipital border. The head and thorax are dark brown, while the gaster, legs and antennae are milky white. Due to its small size and light colour, the ghost ant is difficult to see. Ghost ants are monomorphic, and the thorax is spineless. The gaster is hairless and has a slit-like posterior opening. The abdominal pedicel consists of a single segment, which is usually hidden by the gaster, and the species lacks a stinger. During development, this species undergoes three larval instars, all of which are naked and fusiform, with reduced mouthparts. The queens are similar in appearance to workers, but the alitrunk (mesosoma) is enlarged, and the queen is the largest member of the colony. The male's head and dorsum are dark in colour, while the gaster is light and may bear several dark marks. Distribution and habitat Because the ghost ant is so widespread, its exact native range is not known. However, the species is assumed to originate from the African or Oriental regions, as it is a tropical species. Consistent with this, ghost ants cannot adapt to colder climates and in such regions are confined to greenhouses and buildings that provide conditions in which the species can thrive, although a colony of ghost ants was discovered in an apartment block in Canada. One report has even noted the presence of ghost ants in isolated regions, with a colony being found in the Galapagos Islands. The ant is found in 154 geographical areas. The species is a common pest in the United States, particularly in Hawaii and Florida, although it is expanding further north, having reached Texas by the mid-1990s. Ghost ants are commonly found in the southern parts of Florida, where they are considered a key pest, along with several other invasive ant species. The earliest record of the ghost ant in the United States was in 1887, when the species was found in Hawaii. It was then recorded in Washington, D.C., in 1894. After these two records, the ghost ant was later found in Maine, New York, Connecticut, Virginia, North Carolina, Georgia, Florida, Illinois, Michigan, Wisconsin, Minnesota, Iowa, Missouri, Louisiana, Texas, Kansas, New Mexico, Arizona, California, Oregon and Washington. Ghost ants can also be found in the U.S. territories of Puerto Rico and the U.S. Virgin Islands.
Biology and health sciences
Hymenoptera
Animals
2426926
https://en.wikipedia.org/wiki/Snow%20squall
Snow squall
A snow squall, or snowsquall, is a sudden moderately heavy snowfall with blowing snow and strong, gusty surface winds. It is often referred to as a whiteout and is similar to a blizzard, but is localized in time or in location, and snow accumulations may or may not be significant. Types There are two primary types of snow squalls: lake effect and frontal. Both types can sharply reduce visibility and sometimes produce heavy snowfall. Lake-effect snow When arctic air moves over large expanses of warmer open water in winter, convective clouds develop which cause heavy snow showers because of the large amount of moisture available. This occurs southwest of extratropical cyclones, where the curved cyclonic wind flow brings cold air across the relatively warm Great Lakes, leading to narrow lake-effect snow bands that can produce significant localized snowfall. Whiteout conditions affect narrow corridors from the shore to inland areas aligned along the prevailing wind direction, an effect enhanced where the moving air mass is uplifted by higher elevations. The name originates from the Great Lakes area of North America; however, any large body of water can produce such squalls, and regions in the lee of oceans, such as the Canadian Maritimes, can experience them as well. The areas affected by lake-effect snow are called snowbelts, and deposition rates of many inches of snow per hour are common in these situations. For lake-effect snow to form, the temperature difference between the water and the air aloft should be sufficiently large, the surface temperature should be around the freezing mark, the lake must be unfrozen, the path of the air over the lake (the fetch) must be long enough, and the directional wind shear with height should be less than 30° through the lower atmosphere. Extremely cold air over still-warm water in early winter can even produce thundersnow: snow showers accompanied by lightning and thunder. Frontal snow squall A frontal snow squall is an intense frontal convective line (similar to a squall line) that forms when the temperature is near freezing at the surface. The strong convection that develops has enough moisture to produce whiteout conditions in the places the line passes over, as the wind causes intense blowing snow. This type of snow squall generally lasts less than 30 minutes at any point along its path, but the motion of the line can cover large distances. Frontal squalls may form a short distance ahead of the surface cold front, or behind the cold front where there are other contributing factors such as dynamic lifting from a deepening low-pressure system or a series of trough lines which act similarly to a traditional cold frontal passage. In situations where squalls develop post-frontally, it is not unusual for two or three linear squall bands to pass in rapid succession, separated by only a short distance, each passing the same point roughly 30 minutes apart. This is similar to a line of thunderstorms in the summer, but the cloud tops are much lower, often making them difficult to see on radar. Forecasting these events is equivalent to summer severe-weather forecasting for squall lines: it relies on the presence of a sharp frontal trough with a wind shift and a strong low-level jet. However, the cold dome behind the trough sits at 850 millibars rather than at a higher level, and must be sufficiently cold. The presence of surface moisture from bodies of water or preexisting liquid precipitation is also a significant contributing factor, helping to raise the dew point temperature and saturate the boundary layer.
This saturation can significantly increase the amount of convective available potential energy, leading to deeper vertical growth and higher precipitable water levels and increasing the volume of snow the squall can produce. In cases where there is a large amount of vertical growth and mixing, the squall may develop embedded cumulonimbus clouds, resulting in lightning and thunder, a phenomenon dubbed thundersnow. Dangers Both types of snow squalls are very dangerous for motorists, airplanes, and other travelers, and can even be more dangerous than blizzards. The change in conditions is very sudden, with slippery surfaces and an abrupt loss of visibility due to whiteout, which often causes multiple-vehicle collisions. In the case of lake-effect snow, heavy amounts of snow can accumulate in a short period of time, possibly causing road closures and paralyzing cities. For instance, on January 9, 2015, a localized, heavy snow squall caused a 193-vehicle pile-up on Interstate 94 near Galesburg, Michigan. On very rare occasions, particularly powerful snow squalls can even become supercells and form tornadoes, even when no lightning or thundersnow is present.
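The formation ingredients listed in the lake-effect section above amount to a checklist, which the minimal Python sketch below encodes. The numeric thresholds (a lake-to-850 hPa temperature difference of about 13 °C, a fetch of roughly 100 km, a near-freezing surface temperature) are commonly cited forecasting rules of thumb supplied here as assumptions, since the converted values did not survive in the text above; only the 30° shear limit comes directly from the article.

# Minimal sketch of the lake-effect snow checklist described above.
# Thresholds are assumed standard rules of thumb, not values from this article:
#   ~13 C lake-to-850 hPa temperature difference, ~100 km fetch, <30 deg shear.
from dataclasses import dataclass

@dataclass
class Sounding:
    lake_temp_c: float        # water surface temperature
    temp_850hpa_c: float      # air temperature at the 850 hPa level
    surface_temp_c: float     # near-surface air temperature
    lake_frozen: bool
    fetch_km: float           # distance the air travels over open water
    shear_deg: float          # directional wind shear, surface to 850 hPa

def lake_effect_possible(s: Sounding) -> bool:
    """Return True when every listed ingredient for lake-effect snow is present."""
    return (
        (s.lake_temp_c - s.temp_850hpa_c) >= 13.0   # assumed threshold
        and abs(s.surface_temp_c) <= 2.0            # near the freezing mark (assumed tolerance)
        and not s.lake_frozen
        and s.fetch_km >= 100.0                     # assumed minimum fetch
        and s.shear_deg < 30.0                      # shear limit from the article
    )

# Example: arctic air over an unfrozen Lake Michigan in December.
print(lake_effect_possible(Sounding(4.0, -12.0, -1.0, False, 140.0, 15.0)))  # True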
Physical sciences
Storms
Earth science
412544
https://en.wikipedia.org/wiki/Scatter%20plot
Scatter plot
A scatter plot, also called a scatterplot, scatter graph, scatter chart, scattergram, or scatter diagram, is a type of plot or mathematical diagram using Cartesian coordinates to display values for typically two variables for a set of data. If the points are coded (color/shape/size), one additional variable can be displayed. The data are displayed as a collection of points, each having the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis. History According to Michael Friendly and Daniel Denis, the defining characteristic distinguishing scatter plots from line charts is the representation of specific observations of bivariate data, where one variable is plotted on the horizontal axis and the other on the vertical axis. The two variables are often abstracted from a physical representation, like the spread of bullets on a target or a geographic or celestial projection. While Edmund Halley created a bivariate plot of temperature and pressure in 1686, he omitted the specific data points used to demonstrate the relationship; Friendly and Denis therefore argue that his visualization was not an actual scatter plot. Friendly and Denis attribute the first scatter plot to John Herschel. In 1833, Herschel plotted the measured angle of the double star Gamma Virginis against time to determine how the angle changes, relying not on calculation but on freehand drawing and human judgment. Sir Francis Galton extended and popularized the scatter plot and many other statistical tools in pursuit of a scientific basis for eugenics. When, in 1886, Galton published a scatter plot and correlation ellipse of the heights of parents and children, he extended Herschel's mere plotting of data points by binning and averaging adjacent cells to create a smoother visualization. Karl Pearson, R. A. Fisher, and other statisticians and eugenicists built on Galton's work and formalized correlation and significance testing. Overview A scatter plot can be used either when one continuous variable is under the control of the experimenter and the other depends on it, or when both continuous variables are independent. If a parameter exists that is systematically incremented and/or decremented by the other, it is called the control parameter or independent variable and is customarily plotted along the horizontal axis. The measured or dependent variable is customarily plotted along the vertical axis. If no dependent variable exists, either type of variable can be plotted on either axis, and a scatter plot will illustrate only the degree of correlation (not causation) between two variables. A scatter plot can suggest various kinds of correlations between variables with a certain confidence interval. For example, in a plot of weight against height, weight would be placed on the y-axis and height on the x-axis. Correlations may be positive (rising), negative (falling), or null (uncorrelated). If the pattern of dots slopes from lower left to upper right, it indicates a positive correlation between the variables being studied. If the pattern of dots slopes from upper left to lower right, it indicates a negative correlation. A line of best fit (alternatively called a 'trendline') can be drawn to study the relationship between the variables, and an equation for the correlation between the variables can be determined by established best-fit procedures.
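As a concrete illustration of the plotting and best-fit procedure just described, here is a minimal Python sketch using matplotlib; the data are invented for the example, and numpy.polyfit stands in for whichever established best-fit procedure an analyst might actually use.

# Minimal scatter plot with a linear trendline; the data are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

height_cm = np.array([150, 158, 163, 170, 175, 181, 188])   # independent variable (x-axis)
weight_kg = np.array([52, 57, 61, 68, 72, 79, 85])          # dependent variable (y-axis)

plt.scatter(height_cm, weight_kg)

# Least-squares line of best fit (degree-1 polynomial).
slope, intercept = np.polyfit(height_cm, weight_kg, 1)
xs = np.linspace(height_cm.min(), height_cm.max(), 100)
plt.plot(xs, slope * xs + intercept)

plt.xlabel("Height (cm)")
plt.ylabel("Weight (kg)")
plt.title("Scatter plot with linear trendline")
plt.show()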
For a linear correlation, the best-fit procedure is known as linear regression and is guaranteed to generate a correct solution in a finite time. No universal best-fit procedure is guaranteed to generate a correct solution for arbitrary relationships. A scatter plot is also very useful when we wish to see how two comparable data sets agree and to show nonlinear relationships between variables. The ability to do this can be enhanced by adding a smooth line such as LOESS. Furthermore, if the data are represented by a mixture model of simple relationships, these relationships will be visually evident as superimposed patterns. The scatter diagram is one of the seven basic tools of quality control. Scatter charts can be built in the form of bubble, marker, and/or line charts. Example For example, to display a link between a person's lung capacity and how long that person could hold their breath, a researcher would choose a group of people to study, then measure each one's lung capacity (first variable) and how long that person could hold their breath (second variable). The researcher would then plot the data in a scatter plot, assigning "lung capacity" to the horizontal axis and "time holding breath" to the vertical axis. A person with a lung capacity of 400 cl who held their breath for 21.7 seconds would be represented by a single dot on the scatter plot at the point (400, 21.7) in Cartesian coordinates. The scatter plot of all the people in the study would enable the researcher to obtain a visual comparison of the two variables in the data set and help to determine what kind of relationship there might be between them. Scatter plot matrices For a set of data variables (dimensions) X1, X2, ..., Xk, the scatter plot matrix shows all the pairwise scatter plots of the variables in a single view with multiple scatter plots in a matrix format. For k variables, the scatter plot matrix will contain k rows and k columns. The plot located at the intersection of the i-th row and j-th column is a plot of the variables Xi versus Xj. This means that each row and column is one dimension, and each cell plots a scatter plot of two dimensions. A generalized scatter plot matrix offers a range of displays of paired combinations of categorical and quantitative variables. A mosaic plot, fluctuation diagram, or faceted bar chart may be used to display two categorical variables, and other plots are used for one categorical and one quantitative variable.
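A scatter plot matrix of the kind just described can be produced in a few lines with pandas; the variables and data below are invented for the example.

# Minimal scatter plot matrix: k variables give a k-by-k grid of pairwise plots.
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "height": rng.normal(170, 10, 200),
    "weight": rng.normal(70, 12, 200),
    "age": rng.uniform(18, 80, 200),
})

# The cell at row i, column j plots variable i against variable j;
# the diagonal shows each variable's histogram.
scatter_matrix(df, diagonal="hist")
plt.show()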
Mathematics
Statistics
null
412649
https://en.wikipedia.org/wiki/Chicago%20%22L%22
Chicago "L"
The Chicago "L" (short for "elevated") is the rapid transit system serving the city of Chicago and some of its surrounding suburbs in the U.S. state of Illinois. Operated by the Chicago Transit Authority (CTA), it is the fourth-largest rapid transit system in the United States in terms of total route length as of 2014, and the third-busiest rapid transit system in the United States, after the New York City Subway and the Washington Metro. As of January 2024, the "L" had 1,480 rail cars operating across eight different routes on 224.1 miles of track. CTA trains make about 1,888 trips each day, serving 146 train stations. The "L" provides 24-hour service on the Red and Blue Lines, making Chicago, New York City, and Copenhagen the only three cities in the world to offer 24-hour train service on some of their lines throughout their respective city limits. The oldest sections of the Chicago "L" started operations in 1892, making it the second-oldest rapid transit system in the Americas, after New York City's elevated lines. The "L" gained its name because large parts of the system run on elevated track; other portions of the network are in subway tunnels, at grade level, or in open cuts. The "L" has been credited with fostering the growth of Chicago's dense city core, one of the city's distinguishing features. According to urban engineer Christof Spieler, the system stands out in the United States because it continued to invest in service even through the post-World War II growth of the expressway; its general use of alleyways instead of streets throughout its history, and of expressway medians after the war, knit the system into the city in pioneering ways. It consists of eight rapid transit lines laid out in a spoke–hub distribution paradigm focusing transit towards the Loop. In a 2005 poll, Chicago Tribune readers voted it one of the "seven wonders of Chicago", behind the lakefront and Wrigley Field, and ahead of Willis Tower (formerly the Sears Tower), the Water Tower, the University of Chicago, and the Museum of Science and Industry. History Pre-CTA era The first "L", the Chicago and South Side Rapid Transit Railroad, began revenue service on June 6, 1892, when a steam locomotive pulling four wooden coaches, carrying more than a couple of dozen people, departed the 39th Street station and arrived at the Congress Street Terminal 14 minutes later, over tracks that are still in use by the Green Line. Over the next year, service was extended to 63rd Street and Stony Island Avenue, then to the Transportation Building of the World's Columbian Exposition in Jackson Park. In 1893, trains began running on the Lake Street Elevated Railroad, and in 1895 on the Metropolitan West Side Elevated, which had lines to Douglas Park, Garfield Park (since replaced), Humboldt Park (since demolished), and Logan Square. The Metropolitan was the United States' first non-exhibition rapid transit system powered by electric traction motors, a technology whose practicality had been demonstrated on the "intramural railway" at the World's Columbian Exposition. Two years later, the South Side "L" introduced multiple-unit control, in which the operator can control all the motorized cars in a train, not just the lead unit. Electrification and MU control remain standard features of most of the world's rapid transit systems. A drawback of early "L" service was that none of the lines entered the central business district.
Instead, trains dropped passengers at stub terminals on the periphery, owing to a state law at the time that required approval by neighboring property owners for tracks built over public streets, something not easily obtained downtown. This obstacle was overcome by the legendary traction magnate Charles Tyson Yerkes, who went on to play a pivotal role in the development of the London Underground, and who was immortalized by Theodore Dreiser as the ruthless schemer Frank Cowperwood in The Titan (1914) and other novels. Yerkes, who controlled much of the city's streetcar system, obtained the necessary signatures through cash and guile: at one point he secured a franchise to build a mile-long "L" over Van Buren Street from Wabash Avenue to Halsted Street, extracting the requisite majority from the pliable owners on the western half of the route, then building tracks chiefly over the eastern half, where property owners had opposed him. Designed by noted bridge builder John Alexander Low Waddell, the elevated tracks used a multiple close-rivet system to withstand the forces imposed by passing trains. The Union Loop opened in 1897 and greatly increased the rapid transit system's convenience. Operation on the Yerkes-owned Northwestern Elevated, which built the North Side "L" lines, began three years later, essentially completing the elevated infrastructure in the urban core, although extensions and branches continued to be constructed in outlying areas through the 1920s. After 1911, the "L" lines came under the control of Samuel Insull, president of the Chicago Edison electric utility (now Commonwealth Edison), whose interest stemmed initially from the fact that the trains were the city's largest consumer of electricity. Insull instituted many improvements, including free transfers and through routing, although he did not formally combine the original firms into the Chicago Rapid Transit Company until 1924. He also bought three other Chicago electrified railroads, the Chicago North Shore and Milwaukee Railroad, the Chicago Aurora and Elgin Railroad, and the South Shore interurban lines, and ran the trains of the first two into downtown Chicago via the "L" tracks. This period of relative prosperity ended when Insull's empire collapsed in 1932. Later in the decade, the city, with the help of the federal government, accumulated sufficient funds to begin construction of two subway lines to supplement and, some hoped, eventually permit replacement of the Loop elevated. As early as the 1920s, some city leaders had wanted to replace the "ugly" elevated tracks, and these plans advanced in the 1970s under mayors Richard J. Daley and Michael Bilandic, until a public outcry against tearing down the popular "L" began, led by Chicago Tribune columnist Paul Gapp and architect Harry Weese. Instead, the new mayor, Jane Byrne, protected the elevated lines and directed their rehabilitation. The State Street subway opened on October 17, 1943. The Dearborn subway, on which work had been suspended during World War II, opened on February 25, 1951. The subways were constructed with a secondary purpose of serving as bomb shelters, as evidenced by the close spacing of the support columns (a more extensive plan proposed replacing the entire elevated system with subways). The subways bypassed a number of tight curves and circuitous routings on the original elevated lines (Milwaukee trains, for example, originated on Chicago's northwest side but entered the Loop at the southwest corner), speeding service for many riders.
CTA assumes control By the 1940s, the financial condition of the "L", and of Chicago mass transit in general, had become too precarious to permit continued operation without subsidies, and the necessary steps were taken to enable a public takeover. In 1947, the Chicago Transit Authority (CTA) acquired the assets of the Chicago Rapid Transit Company and the Chicago Surface Lines, operator of the city's streetcars. Over the next few years the CTA modernized the "L", replacing wooden cars with new steel ones and closing lightly used branch lines and stations, many of which had been spaced only a quarter-mile apart. The CTA introduced fare cards for the first time in 1997. Rail service to O'Hare International Airport first opened in 1984, and to Midway International Airport in 1993. That same year, the CTA renamed all of its rail lines; they are now identified by color. Skip-stop service After assuming control of the "L", the CTA introduced A/B skip-stop service. Under this service, trains were designated as either "A" or "B" trains, and stations were alternately designated as "A" stations or "B" stations, with heavily used stations designated as both, "AB". "A" trains would stop only at "A" and "AB" stations, and "B" trains would stop only at "B" and "AB" stations. Station signage carried the station's skip-stop letter and was also color-coded by skip-stop type: "A" stations had red signage, "B" stations had green signage, and "AB" stations had blue signage. The system was designed to speed up lines by having trains skip stations while still allowing for frequent service at the heavily used "AB" stations. A/B skip-stop service debuted on the Lake Street Elevated in 1948, and the service proved effective, cutting travel times by a third. By the 1950s, the service was used throughout the system. All lines used A/B skip-stop service between the 1950s and the 1990s, with the exception of the Evanston and Skokie lines, which were suburban-only lines and did not justify skip-stop service. On the lines with branches, skip-stop service sent all "A" trains to one branch and all "B" trains to another. On what became the Blue Line, "A" trains were routed to the Congress branch while "B" trains were sent to the Douglas branch. On the North-South Line, "A" trains went to the Englewood branch and "B" trains to the Jackson Park branch. In both cases, individual stops were not skipped beyond the points where those branches diverged. As time went by, the periods during which skip-stop service operated gradually shrank, as the waits at "A" and "B" stations became increasingly long during non-peak service. By the 1990s, the A/B skip-stop system was used only during rush hour. Another problem was that trains skipping stations to save time still could not pass the train directly ahead, so skipping stations was not advantageous in all regards. In 1993, the CTA began to eliminate skip-stop service when it switched the southern branches of the West-South and North-South Lines to improve rider efficiency, creating the current Red and Green Lines. From this point, Green Line trains made all stops along the entire route, while Red Line trains stopped at all stations south of Harrison. The elimination of A/B skip-stop service continued with the opening of the all-stop Orange Line and the conversion of the Brown Line to all-stop service.
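The A/B stopping rule described above is mechanical enough to express as a short sketch; the station names and letter assignments here are invented for illustration.

# Sketch of the A/B skip-stop rule: "A" trains stop at "A" and "AB" stations,
# "B" trains at "B" and "AB" stations. Station assignments are invented.
stations = [
    ("Station 1", "A"),
    ("Station 2", "B"),
    ("Station 3", "AB"),   # heavily used: served by every train
    ("Station 4", "A"),
    ("Station 5", "B"),
]

def stops_for(train_type: str) -> list[str]:
    """Return the stations served by an 'A' or 'B' train."""
    return [name for name, kind in stations if train_type in kind]

print(stops_for("A"))  # ['Station 1', 'Station 3', 'Station 4']
print(stops_for("B"))  # ['Station 2', 'Station 3', 'Station 5']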
In April 1995, the last of the A/B skip-stop system was eliminated with the conversion of the O'Hare branch of the Blue Line and the Howard branch of the Red Line to all-stop service. The removal of skip-stop service resulted in some increases in travel times, but it greatly increased ridership at former "A" and "B" stations because of the increased train frequencies. Station signage highlighting the former skip-stop patterns remained into the 2000s, when it was gradually replaced across the system. New rolling stock The first air-conditioned cars were introduced in 1964. The last pre-World War II cars were retired in 1973. New lines were built in expressway medians, a technique implemented in Chicago and followed by other cities worldwide. The Congress branch, built in the median of the Eisenhower Expressway, replaced the Garfield Park "L" in 1958. The Dan Ryan branch, built in the median of the Dan Ryan Expressway, opened on September 28, 1969, followed by an extension of the Milwaukee elevated into the Kennedy Expressway in 1970. The "L" today Ridership Ridership has grown steadily since the CTA takeover despite declining mass transit usage nationwide, with an average of 594,000 riders boarding each weekday in 1960 and 759,866 in 2016 (or 47% of all CTA rides). Because of the Loop Flood in April 1992, ridership fell to 418,000 that year, as the CTA was forced to suspend operation for several weeks in both the State and Dearborn subways, which are used by the most heavily traveled lines. Growing ridership has not been uniformly distributed: use of the North Side lines is heavy and continues to grow, while that of the West Side and South Side lines tends to remain stable. Ridership on the North Side Brown Line, for instance, has increased 83% since 1979, necessitating a station reconstruction project to accommodate longer trains. Annual traffic on the Howard branch of the Red Line, which reached 38.7 million in 2010 and 40.9 million in 2011, has exceeded the 1927 prewar peak of 38.5 million. The section of the Blue Line between the Loop and Logan Square, which serves once-neglected but now bustling neighborhoods such as Wicker Park, Bucktown, and Palmer Square, has seen a 54% increase in weekday riders since 1992. On the other hand, weekday ridership on the South Side portion of the Green Line, which closed for two years for reconstruction from January 1994 to May 1996, was 50,400 in 1978 but only 13,000 in 2006. Boardings at the 95th/Dan Ryan stop on the Red Line, though still among the system's busiest at 11,100 riders per weekday as of February 2015, are less than half the peak volume of the 1980s. In 1976, three North Side "L" branches, what were then known as the Howard, Milwaukee, and Ravenswood lines, accounted for 42% of non-downtown boardings; today (with the help of the Blue Line extension to O'Hare), they account for 58%. The North Side, which has historically been the highest-density area of the city, reflects the Chicago building boom between 2000 and 2010, which focused primarily on North Side neighborhoods and downtown. This may ease somewhat in the wake of the current high level of residential construction along the south lakefront. For example, ridership at the linked Roosevelt stops on the Green, Orange, and Red Lines, which serve the burgeoning South Loop neighborhood, has tripled since 1992, with an average of 8,000 boardings per weekday.
Patronage at the Cermak-Chinatown stop on the Red Line, with 4,000 weekday boardings, is at its highest level since the station opened in 1969. The 2003 Chicago Central Area Plan proposed construction of a Green Line station at Cermak, between Chinatown and the McCormick Place convention center, in expectation of continued density growth in the vicinity. This station opened in 2015. Service Currently, the Red Line and the Blue Line provide 24-hour service, while all other lines operate from early morning to late night. Prior to 1998, the Green Line, the Purple Line and the Douglas branch of the Blue Line (the modern-day Pink Line) also had 24-hour service. In the years of private ownership, the South Side Elevated Railroad (now the South Side Elevated portion of the Green Line) provided 24-hour service, a major advantage when compared with Chicago's cable railroads, which required a daily overnight shutdown for cable maintenance. Fares In 2015, the CTA introduced a new fare payment system called Ventra. Ventra enables passengers to purchase individual tickets, passes, or transit value online, by smart phone, or at participating retail locations. Ventra also works with CTA buses, Pace (suburban buses), and Metra (commuter rail). Payment by the Ventra smartphone app or by a contactless bankcard is also possible. The "L" uses a flat fare of $2.50 for almost the entire system, the only exception being O'Hare International Airport on the Blue Line, where passengers entering the station are charged a higher fare of $5.00 (passengers leaving the system at this station are not charged the higher fare). The higher fare is charged for what the CTA considers "premium-level" service to O'Hare. Use of the Midway International Airport station does not require the higher fare; it requires only the $2.50 regular fare. The higher charge at O'Hare has been the source of some controversy in recent years because of the CTA's plan to eliminate the exemption from the premium fare for airport workers, Transportation Security Administration workers, and airline workers. After protests from those groups, the CTA extended the exemptions for six months. Lines Since 1993, "L" lines have been officially identified by color, although older route names survive to some extent in CTA publications and popular usage to distinguish branches of longer lines. Stations are found throughout Chicago, as well as in the suburbs of Forest Park, Oak Park, Evanston, Wilmette, Cicero, Rosemont, and Skokie.  Blue Line, consisting of the O'Hare, Milwaukee–Dearborn subway, and Congress branches The Blue Line extends from O'Hare International Airport through the Loop via the Milwaukee–Dearborn subway to the West Side, with trains traveling to Des Plaines Avenue in Forest Park via the Eisenhower Expressway median. The route from O'Hare to Forest Park serves 33 stations. Until 1970, the northern section of the Blue Line terminated at Logan Square; during that time, the line was called the Milwaukee route after Milwaukee Avenue, which runs parallel to it. In that year, service was extended to Jefferson Park via the Kennedy Expressway median, and in 1984 to O'Hare. The Blue Line is the second-busiest line, with 176,120 weekday boardings, and operates 24 hours a day, 7 days a week.  Brown Line or Ravenswood branch The Brown Line follows a route between the Kimball terminal in Albany Park and the Loop in downtown Chicago. In 2013, the Brown Line had an average weekday ridership of 108,529.
Green Line, consisting of the Lake Street Elevated, South Side Elevated, Ashland, and East 63rd branches A completely elevated route utilizing the system's oldest segments (dating back to 1892), the Green Line extends, with 30 stops, between Forest Park and Oak Park (Harlem/Lake), through the Loop, to the South Side. South of the Garfield station the line splits into two branches, with trains terminating at Ashland/63rd in West Englewood or at Cottage Grove/63rd in Woodlawn. The East 63rd branch formerly extended to Jackson Park, but the portion of the line east of Cottage Grove, which ran above 63rd Street, was demolished in the 1980s and 1997 due to structural problems and was never rebuilt, owing to community demands. The average number of weekday boardings in 2013 was 68,230.  Orange Line or Midway Line The Orange Line was constructed from 1987 until 1993 on existing railroad embankments and new concrete and steel elevated structure. It runs from a station adjacent to Midway International Airport on the Southwest Side to the Loop in downtown Chicago. Average weekday ridership in 2013 was 58,765.  Pink Line, consisting of the Cermak branch and Paulina Connector The Pink Line is a rerouting of former Blue Line branch trains from 54th/Cermak in Cicero via the previously non-revenue Paulina Connector and the Green Line on Lake Street to the Loop. Its average weekday ridership in 2013 was 31,572. The branch formerly ran to Oak Park Avenue in Berwyn, west of its current terminal. In 1952, the portion of the line west of 54th Avenue closed, and over the next decade the stations and tracks were demolished. The street-level right-of-way is used to this day as parking, locally known as the "'L' Strip".  Purple Line, consisting of the Evanston Shuttle and Evanston Express The Purple Line is a branch serving north suburban Evanston and Wilmette, with express service to the Loop during weekday rush hours. The local service operates from the Linden terminal in Wilmette through Evanston to the Howard terminal on the north side of Chicago, where it connects with the Red and Yellow Lines. The weekday rush-hour express service continues from Howard to the Loop, running nonstop on the four-track line used by the Red Line to Wilson station, then serving Belmont station, followed by all Brown Line stops to the Loop. 2013 average weekday ridership was 42,673 passenger boardings. The stops from Belmont to Chicago Avenue were added in the 1990s to relieve crowding on the Red and Brown Lines. The name "Purple Line" is a reference to nearby Northwestern University, with four stops (Davis, Foster, Noyes, and Central) located just two blocks west of the university campus.  Red Line, consisting of the North Side main line, State Street subway, and Dan Ryan branch The Red Line is the busiest route, with 234,232 passenger boardings on an average weekday in 2013. It includes 33 stations on its route, traveling from the Howard terminal on the city's north side, through downtown Chicago via the State Street subway, then down the Dan Ryan Expressway median to 95th/Dan Ryan on the South Side. Despite its length, the Red Line stops short of the city's southern border; extension plans to 130th are currently being considered. The Red Line is one of two lines operating 24 hours a day, seven days a week, and is the only CTA "L" line that serves both Wrigley Field and Rate Field, the homes of Chicago's Major League Baseball teams, the Chicago Cubs and Chicago White Sox.
Rail cars are stored at Howard Yard at the north end of the line and at 98th Yard at the south end.  Yellow Line, or Skokie Swift The Yellow Line is a three-station line that runs from the Howard Street terminal to the Dempster–Skokie terminal in north suburban Skokie, and is the only "L" route that does not provide direct service to the Loop. The line was originally part of the North Shore Line's rail service and was acquired by the CTA in the 1960s. The Yellow Line previously operated as a nonstop shuttle, until the downtown Skokie station, Oakton–Skokie, opened on April 30, 2012. Its average weekday ridership in 2013 was 6,338 passenger boardings. The Loop Brown, Green, Orange, Pink, and Purple Line Express trains serve downtown Chicago via the Loop elevated, whose eight stations average 72,843 weekday boardings. The Orange, Purple, and Pink Lines run clockwise; the Brown Line runs counter-clockwise. The Green Line runs in both directions through the north and east sides; the other four lines circle the Loop and return to their starting points. The Loop forms a rectangle encircling the central business district, and the crossing at Lake and Wells has been described in the Guinness Book of World Records as the world's busiest railroad crossing. Rolling stock The CTA operates over 1,350 "L" cars, divided among four series, all of which are semi-permanently coupled into married pairs. All cars on the system use 600-volt direct current power delivered through a third rail. The 2600-series was built from 1981 until 1987 by the Budd Company of Philadelphia, Pennsylvania. After completing the order for the 2600-series cars, Budd changed its name to Transit America and ceased production of railcars. With 509 cars in operation, the 2600-series is the largest series of "L" cars in operation. The cars were rebuilt by Alstom of Hornell, New York, from 1999 until 2002. The 3200-series was built from 1992 until 1994 by Morrison-Knudsen of Hornell, New York. These cars have fluted stainless-steel sides similar to those of the now-retired 2200-series. The 5000-series cars are equipped with AC propulsion; interior security cameras; aisle-facing seating, which allows for greater passenger capacity; LED destination signs, interior readouts, and interior maps; GPS; glow-in-the-dark evacuation signs; and operator-controlled ventilation systems, among other features. AC propulsion allows for smoother acceleration, lower operating costs, less wear and tear, and greater energy efficiency, and it can take advantage of regenerative braking, meaning the train returns excess energy to the third rail as it slows down. The DC-propelled earlier series instead use dynamic braking, which converts the excess kinetic energy into heat within a resistor bank. Next-generation cars, the 7000-series, have been ordered and are beginning to enter service. Each 7000-series rail car will feature LED lighting, 37 to 38 seats, and a design that is a hybrid of the 3200- and 5000-series. The design and arrangement of the seats were modified to improve ergonomics and increase leg room. Enhanced air conditioning will circulate air more efficiently during hot summer days, and laser sensors above the doors will count the number of passengers, allowing the CTA to track passenger volumes and adjust its schedules accordingly.
State-owned manufacturer CRRC Sifang America (China Rail Rolling Stock Corporation) won the contract, besting the other major competitor, Bombardier of Canada, by $226 million. Concerns have been raised over possible malware, cyber attacks, and mass surveillance by the Chinese government; the computer and software components and the automatic train control system will be made by U.S. and Canadian firms. The cars are being built at a new CRRC Sifang America rail car manufacturing plant at 13535 South Torrence Avenue in Chicago's Hegewisch neighborhood. Production of the 7000-series cars commenced in June 2019, the first time in more than 50 years that CTA rail cars have been manufactured in Chicago. Ten cars in the 7000-series began testing in revenue service on April 21, 2021. The base order is for 400 cars, which will be used to replace the 2600-series cars; if the CTA orders the additional 446 cars, those would also replace the 3200-series cars. In May 2023, the CTA announced it had received $200 million in funding from the Federal Transit Administration; this money will go towards the development of the 9000-series rail cars, with a plan to acquire up to 300 new train sets. Nickname Chicago's rapid-transit system is officially nicknamed the "L". This name for the CTA rail system applies to the whole system: its elevated, subway, at-grade, and open-cut segments. The use of the nickname dates from the earliest days of the elevated railroads: newspapers of the late 1880s referred to proposed elevated railroads in Chicago as "'L' roads". The first route to be constructed, the Chicago and South Side Rapid Transit Railroad, gained the nickname "Alley Elevated", or "Alley L", during its planning and construction, a term that was widely used by 1893, less than a year after the line opened. In discussing various stylings of "Loop" and "L" in Destination Loop: The Story of Rapid Transit Railroading in and around Chicago (1982), author Brian J. Cudahy quotes a passage from The Neon Wilderness (1947) by Chicago author Nelson Algren: "beneath the curved steel of the El, beneath the endless ties." Cudahy then comments, "Note that in the quotation above ... it says 'El' to mean 'elevated rapid transit railroad.' We trust that this usage can be ascribed to a publisher's editor in New York or some other east coast city; in Chicago the same expression is routinely rendered 'L'." As used by the CTA, the name is rendered as the capital letter 'L' in single quotation marks. "L" (with double quotation marks) was often used by CTA predecessors such as the Chicago Rapid Transit Company; the CTA, however, uses single quotation marks (') on some printed materials and signs rather than double. In Chicago, the term "subway" applies only to the State Street and Milwaukee–Dearborn subways and is not applied to the system as a whole, as it is in New York City, where both the elevated and underground portions make up the New York City Subway. Renovation and expansion plans Like other large and aging rapid transit systems, the Chicago "L" faces problems of delays, breakdowns, and a multi-billion-dollar backlog of deferred maintenance. The CTA is currently focused on eliminating slow zones, modernizing the Red, Blue, and Purple Lines, and improving "L" stations. In addition, the CTA has studied numerous other proposals for expanded rail service and renovations, some of which may be implemented in the future.
Recent service improvements and capital projects 2000–2010 During the 2000s and 2010s, the CTA completed several renovation and new-construction projects. Pink Line service began on June 25, 2006, though it did not include any new tracks or stations. The Pink Line travels over what was formerly a branch of the Blue Line from the 54th/Cermak terminal in Cicero to the Polk station in Chicago; Pink Line trains then proceed via the Paulina Connector to the Lake Street branch of the Green Line and then clockwise around the Loop elevated via Lake, Wabash, Van Buren, and Wells. Douglas trains used the same route between April 4, 1954, and June 22, 1958, after the old Garfield Park "L" line was demolished to make way for the Eisenhower Expressway. The new route, which serves 22 stations, offered more frequent service for riders on both the Congress and Douglas branches: Pink Line trains could be scheduled independently of Blue Line trains, and ran more frequently than the Douglas branch of the Blue Line had. In late 2007, trains were forced to operate at reduced speed over more than 22% of the system due to deteriorated track, structure, and other problems. By October 2008, system-wide slow zones had been reduced to 9.1%, and by January 2010 to 6.3%. The CTA's Slow Zone Elimination Project is an ongoing effort to restore track to conditions under which trains no longer have to reduce speed through deteriorating areas. The Loop received track work in 2012–2013. The Purple Line in Evanston received track work and viaduct replacement in 2011–2013. The Green Line Ashland branch received track work in 2013, prior to the Red Line Dan Ryan branch reconstruction. The Brown Line Capacity Expansion Project enabled the CTA to run eight-car trains on the Brown Line and rebuilt stations to modern standards, including accessibility. Before the project, Brown Line platforms could accommodate only six-car trains, and increasing ridership led to uncomfortably crowded trains. After several years of construction, eight-car trains began to run at rush hour on the Brown Line in April 2008. The project was completed in December 2009, on time and on budget, with only minor punch-list work remaining; its total cost was expected to be around $530 million. 2010–present While various mayors of Chicago had recognized the importance of reliable public transit, Rahm Emanuel received credit for making improved service a top priority; in addition to local funding, he managed to secure federal dollars through lobbying. One of the largest reconstruction projects in the CTA's history, at a cost of $425 million, was the Red Line South reconstruction project. From May 19, 2013, through October 20, 2013, the project closed and rebuilt the entire Dan Ryan branch from Cermak-Chinatown to 95th/Dan Ryan, replacing and rebuilding all the tracks, ties, ballast and drainage systems. The station work involved renewing and improving eight stations, including new paint and lights, bus-bridge improvements, new elevators at the Garfield, 63rd, and 87th stations, and new roofs and canopies at some stations. "We are looking forward to providing our south Red Line customers with improved stations that are cleaner, brighter and better than they have been in years," said CTA President Forrest Claypool. Shutting down a portion of the railway instead of relegating work to the weekends enabled the project to be completed in months rather than years.
In 2014, the CTA initiated station and track upgrades on the Blue Line between Grand and O'Hare. This $492 million project has modernized stations (some of which were originally built in 1895), rebuilt tracks, replaced station platforms, upgraded water utilities, installed public art, and improved access to some stations by adding elevators. The project, nearing completion as of 2022, is expected to cut travel time between the Loop and O'Hare by ten minutes. In late 2015, extensive 4G wireless coverage was added to both the Blue and Red Line subways, with the $32.5 million installation cost paid by T-Mobile, Sprint, AT&T and Verizon. Upon the project's completion, Chicago became the largest American city with 4G Internet service in all of its subways and tunnels. Besides adding to passenger convenience, the coverage improved security by allowing CTA personnel and first responders to communicate more easily in an emergency. The new Wilson station officially reopened in October 2017. The century-old station now includes accessible elevators, escalators, new security cameras, three entrances, wider stairwells, additional turnstiles, larger platforms, new lights and signage, as well as bus and train trackers. FastTracks is a program intended to address the slow zones and to make train rides smoother and more reliable. To achieve this, crews would replace worn tracks, rail ties, and ballast, and an upgraded power system along the O'Hare branch of the Blue Line would enable more trains to operate during peak periods. The Blue, Brown, Green, and Red Lines would be worked on. The program was set to begin in early 2018 and continue through 2021. Funding for this $179 million project comes from a fee increase imposed on mobile-app-based vehicle-for-hire companies in Chicago, the first of its kind in the country. In Mayor Emanuel's 2018 budget proposal, the fee went from 52 cents to 67 cents per trip; first introduced in 2015, the fee rose by another five cents in 2019. In December 2018, the CTA Board approved $2.1 billion worth of contracts for the modernization of the Red and Purple Lines. The largest and most expensive project in CTA history, it includes the reconstruction of the junction between the Brown Line and the Red/Purple Lines into a flying junction to reduce delays, and the reconstruction of the Lawrence, Argyle, Berwyn, and Bryn Mawr stations. Construction began on October 2, 2019, and is scheduled for completion in 2025. $100 million in federal funding for the reconstruction of the Red Line was approved in September 2019. In the final days of the Barack Obama administration, the federal government agreed to provide $957 million in funding in total; the rest would come from a tax increase on property owners living near the Red Line. Google announced in November 2024 that it would pay for the redesign of the Clark/Lake station in a project related to the company's new Chicago headquarters at the former Thompson Center. Planned project This new rail service proposal, under active consideration by the CTA, is currently undergoing Alternatives Analysis Studies. These studies are the first step in a five-step process required by the federal New Starts program, an essential source of funding for the CTA's expansion projects. The CTA uses a series of "screens" to develop a "Locally Preferred Alternative", which is submitted to the federal New Starts program.
Red Line Extension An extension of the Red Line was first proposed by then-Mayor Richard J. Daley in 1969. It would provide service from the current terminus at 95th/Dan Ryan to 130th Street, decreasing transit times for Far South Side residents and relieving crowding and congestion at the current terminus. The CTA presented its locally preferred alternative at meetings in August 2009: a new elevated rail line between 95th/Dan Ryan and a new terminal station at 130th Street, paralleling the Union Pacific Railroad and the South Shore Line through the Far South Side neighborhoods of Roseland, Washington Heights, West Pullman, and Riverdale. In addition to the terminal station at 130th, three new stations would be built at 103rd, 111th, and Michigan. Basic engineering, along with an environmental impact statement, was underway in 2010, and public comment on the alignment was opened in 2016. The CTA announced the route and four new stations on January 26, 2018. Construction of the extension would begin in late 2025 and is scheduled to be completed in 2030. Contracts for preliminary work were approved in December 2018. In August 2022, the Red Line Extension reached the federal funding phase. In December 2022, the City Council approved the creation of a district that will send nearly $1 billion in tax revenue over the next few decades to extend the Red Line south of 95th Street, a major step toward completing the project after a half-century of false starts. In March 2023, President Biden's proposed 2024 budget included $350 million in federal funding for the Red Line Extension project. In September 2023, the Federal Transit Administration announced $1.973 billion for the extension, and in October 2023 the CTA received another $100 million. In August 2024, the CTA awarded a $2.3 billion construction contract to Walsh-VINCI Transit Community Partners for the four-stop extension, with a total project cost (including financing) of $5.3 billion. Advance construction work began in fall 2024. In January 2025, the CTA secured $1.9 billion for the extension. Previously studied projects These projects are not currently being pursued by the CTA because of their cost and the concerns of residents. Circle Line The proposed Circle Line would form an "outer loop", traversing downtown via the State Street subway, then going southwest on the Orange Line and north along Ashland, before rejoining the subway at North/Clybourn or Clark/Division. The Circle Line would connect several different Metra lines with the "L" system and would facilitate transfers between existing CTA lines; these connections would be situated near the existing Metra and "L" lines' maximum load points. The CTA initiated official "Alternatives Analysis" planning for the Circle Line in 2005, and the concept garnered significant public interest and media coverage. Early conceptual planning divided the Circle Line into three segments. Phase 1 would be a restoration of the dilapidated "Paulina Connector", a short track segment that links Ashland/Lake with Polk; this track section has since been restored, and service on the 54th/Cermak branch was transferred to the Pink Line. Phase 2 would link 18th on the Pink Line to Ashland on the Orange Line, with a new elevated structure running through a large industrial area. Phase 3, the final phase, would link Ashland/Lake to North/Clybourn with a new subway running through the dense neighborhoods of West Town and Wicker Park.
With the completion of all three phases, the perimeter area would be served by Circle Line trains. In 2009, the CTA released the results of its Alternatives Analysis Screen 3, in which it decided to begin early engineering work on Phase 2, owing to its simple alignment through unpopulated areas and its relatively low cost (estimated to be $1.1 billion). Preliminary engineering work was performed on Phase 2. In addition to the new line, the CTA planned to build four new stations as part of Phase 2, three of which would be located along existing lines that the Circle Line would utilize. The four stations would be at 18th/Clark, Cermak/Blue Island, Roosevelt/Paulina, and Congress/Paulina. 18th/Clark would be along the Orange Line in the Chinatown neighborhood and would include a direct transfer connection to the Cermak/Chinatown station on the Red Line. Cermak/Blue Island would be located on the newly built elevated tracks in the Pilsen neighborhood. Roosevelt/Paulina would be located on the Pink Line in the Illinois Medical District. Congress/Paulina would be built above the Eisenhower Expressway, with a direct transfer connection to the Illinois Medical District station on the Blue Line. Existing stations would provide service near the United Center. Phase 3 was not implemented, and planning stopped after 2009. Phase 3 ran through dense residential areas, so its alignment would have had to be considered carefully to avoid adversely impacting those neighborhoods, and the CTA estimated that Phase 3 would be far more costly than Phase 2 because it would be underground. After a number of alternate plans were evaluated, in 2009 the CTA adopted the "locally preferred alternative" (LPA), which stopped Circle Line work after Phase 2 and relegated Phase 3 to a "long-term vision". Orange Line Extension A proposed extension of the Orange Line would have provided transit service from the current terminus, Midway International Airport, to the Ford City Mall, which was originally meant to be the Orange Line's southern terminus when the line was planned in the 1980s. This would have alleviated congestion at the current Midway terminal. The CTA presented its locally preferred alternative at meetings in August 2009: a new elevated rail line running south from the Midway terminal along Belt Railway tracks, crossing the Clearing Yard while heading southwest to Cicero Avenue, then running south in the median of Cicero to a terminal on the east side of Cicero near 76th Street. Basic engineering, along with an environmental impact statement, was underway in 2010. This extension was canceled. Yellow Line Extension A proposed extension of the Yellow Line would have provided transit service from the current terminus at Dempster Street to the corner of Old Orchard Road and the Edens Expressway, just west of the Westfield Old Orchard shopping center. The CTA presented its locally preferred alternative at meetings in August 2009: a new elevated rail line from Dempster north along a former rail right-of-way to the Edens Expressway, where the line would have turned north and run along the east side of the expressway to a terminus at Old Orchard Road. Basic engineering, along with an environmental impact statement, was underway in 2010. Unlike the extensions to the Red and Orange Lines, the Yellow Line extension attracted significant community opposition from residents of Skokie, as well as from parents of students at Niles North High School, on whose land the new line would have been constructed.
Residents and parents cited concerns about noise, visual pollution, and crime. As a result, the extension was canceled.
Canceled projects
Numerous plans have been advanced over the years to reorganize downtown Chicago rapid transit service, originally with the intention of replacing the elevated Loop lines with subways. That idea has been largely abandoned, as the city appears committed to keeping a mix of elevated and subway lines, but there have been continued calls to improve transit within the city's greatly enlarged central core. At present the "L" does not provide direct service between the Metra commuter rail terminals in the West Loop and Michigan Avenue, the principal shopping district, nor does it offer convenient access to popular downtown destinations such as Navy Pier, Soldier Field, and McCormick Place. Plans for the Central Area Circulator, a $700 million downtown light rail system meant to remedy this, were shelved in 1995 for lack of funding. An underground line running along the lake shore would connect some of the city's major tourist destinations, but this plan has not been widely discussed. Recognizing the cost and difficulty of implementing an all-rail solution, planners introduced the Chicago Central Area Plan, which proposed a mix of rail and bus improvements whose centerpiece was the West Loop Transportation Center. The top level would be a pedestrian mezzanine, buses would operate on the second level, rapid transit trains on the third level, and commuter and high-speed intercity trains on the bottom level. The rapid transit level would connect to the existing Blue Line subway at its north and south ends, making possible the "Blue Line loop", envisioned as an underground counterpart to the Loop elevated. Alternatively, this level might be occupied by the Clinton Street subway. Among other advantages, the West Loop Transportation Center would provide a direct link between the "L" and the city's two busiest commuter rail terminals, Ogilvie Transportation Center and Union Station. The plan also proposed transitways along Carroll Avenue, a former rail right-of-way north of the main branch of the Chicago River, and under Monroe Street in the Loop, both of which earlier transit schemes had proposed as rail routes. The Carroll Avenue route would provide faster bus service between the commuter stations and the rapidly redeveloping Near North Side, with possible rail service later. These new busways would tie into the bus level of the West Loop Transportation Center. The projects below were identified in various city and regional planning studies but have since been canceled; the CTA never began official studies of these expansions.
Clinton Street Subway
The Clinton Street subway would run through the West Loop, connecting the Red Line near North/Clybourn to the Red Line again near Cermak-Chinatown. From North/Clybourn, the subway would run south along Larrabee Street, then under the Chicago River to Clinton Street in the West Loop. Running south under Clinton, the subway would pass Ogilvie Transportation Center and Union Station, with short connections to Metra trains. It would then continue south on Clinton until 16th Street, where it would turn east, cross the river again, and rejoin the Red Line just north of the current Cermak-Chinatown stop. The estimated cost of this line was $3 billion, with no local funding source identified.
Airport Express
A proposed Airport Express would have provided premium service between a downtown terminal on State Street and both O'Hare International Airport and Midway International Airport. On the 3200-series cars, black Midway and O'Hare destination signs exist, suggesting provision for a possible Airport Express service, since express trains are indicated by destination signs with a black background. A business plan prepared for the CTA called for a private firm to manage the venture, with service starting in 2008. The project has been criticized as a boondoggle. The custom-equipped, premium-fare trains would offer nonstop service at faster speeds than the current Blue and Orange Lines. Although the trains would not run on dedicated tracks (construction of such tracks could cost more than $1.5 billion), several short sections of passing track built at stations would allow the express trains to pass Blue and Orange Line trains while those trains sit at the stations. The CTA has pledged $130 million, and the city of Chicago $42 million, toward the cost of the downtown station. In comments posted to her blog in 2006, CTA chair Carole Brown said, "I would support premium rail service only if it brought significant new operating dollars, capital funding, or other efficiencies to CTA… The most compelling reason to proceed with the project is the opportunity to connect the Blue and Red subway tunnels," which are one block apart downtown. In the meantime, the CTA announced that due to cost overruns, it would complete only the shell of the Block 37 station; its president said "it would not make sense to completely build out the station or create the final tunnel connections until a partner is selected because final layout, technology and finishes are dependent on an operating plan."
Mid-City Transitway
The Mid-City Transitway would run around, rather than through, the Chicago Loop. The line would follow the Cicero Avenue/Belt Line corridor (the former Crosstown Expressway alignment) between the O'Hare branch of the Blue Line at Montrose and the Dan Ryan branch of the Red Line at 87th Street. It could be built as an "L" line, but busway and other options have also been considered.
Security and safety
Violent crime on the CTA has increased since the COVID-19 pandemic; as of March 2, 2022, it was up 17% compared to the year before. The CTA's union has called for conductors, which were dropped in 1997, to be brought back, while the city has promised more security guards. The CTA has had incidents where operators apparently overrode automatic train stops on red signals. These include the 1977 collision at Wabash and Lake, when four cars of a Lake-Dan Ryan train fell from the elevated structure, killing 11. There were two minor incidents in 2001 and two more in 2008: the more serious involved a Green Line train that derailed and straddled the split in the elevated structure at the 59th Street junction between the Ashland and East 63rd Street branches; the other was a minor incident near 95th Street on the Red Line. In 2014, the O'Hare station train crash occurred when a Blue Line train overran a bumper at the airport station and ascended an escalator. In 2002, 25-year-old Joseph Konopka, self-styled as "Dr. Chaos", was arrested by Chicago police for hoarding potassium cyanide and sodium cyanide in a Chicago Transit Authority storeroom in the Chicago "L" Blue Line subway. Konopka had picked the original locks on several doors in the tunnels, then changed the locks so that he could access the rarely used storage rooms freely.
Studies from 2017 have highlighted the Belmont and 95th stops on the Red Line as the "most dangerous". As of 2018, the Chicago Police Department (CPD)'s Public Transportation Unit and the police departments of Evanston, Skokie, Forest Park, and Oak Park patrol the CTA system. The CPD provides random screenings for explosives. In addition, the CTA has its own K-9 patrol units and has installed more than 23,000 surveillance cameras. From 2008 to 2019, there were a total of 27 derailments. Most were minor, but some resulted in serious injuries for passengers. On September 24, 2019, a Brown Line train collided with a Purple Line train near Sedgwick on the North Side Main Line during the morning rush hour. Fourteen injuries were reported, and the cause was placed under investigation. According to the Chicago Fire Department, the injured were later found to be in stable condition. No derailments occurred, and the CTA said service resumed about an hour after the incident. On October 3, 2019, a train operating Pink Line service on the Cermak branch struck a car that had driven past and around the gates at the level crossing on 47th Avenue near Cermak Road. Six injuries were reported. On June 7, 2021, a Red Line train derailed near Bryn Mawr; 24 passengers were aboard, and no injuries were reported. On November 16, 2023, a Yellow Line train collided with a CTA snowplow at Howard Yard, resulting in 16 injuries. On September 2, 2024, four people were killed in a mass shooting aboard a Blue Line train as it traveled between the Oak Park and Harlem stations.
In popular culture
Movies and television shows use establishing shots to orient audiences to the location. For media set in Chicago, the "L" is a common feature because it is a distinctive part of the city.
Films
Some of the more prominent films which have used footage of the "L" include:
The Sting (1973)
Cooley High (1975)
Sheba, Baby (1975)
The Blues Brothers (1980)
The Hunter (1980)
Risky Business (1983), which features the "L" in several erotic sequences
Code of Silence (1985)
Ferris Bueller's Day Off (1986)
Running Scared (1986), which shows a car chase taking place on the "L" tracks; the sounds of the "L" are also distinctive and are therefore also used to establish location
Adventures in Babysitting (1987)
Planes, Trains and Automobiles (1987)
Child's Play (1988)
Above the Law (1988)
Next of Kin (1989)
The Package (1989)
Only the Lonely (1991)
Gladiator (1992)
Rapid Fire (1992)
Straight Talk (1992)
The Fugitive (1993), which also contains a short scene inside an "L" train
Blue Chips (1994)
Hoop Dreams (1994)
While You Were Sleeping (1995), whose main character, Lucy, works as an "L" fare collector
Stir of Echoes (1999)
On the Line (2001)
Barbershop 2: Back in Business (2004)
Spider-Man 2 (2004), which features a fight between Doctor Octopus (Alfred Molina) and Spider-Man (Tobey Maguire) on top of and inside a New York subway train that strongly resembles the "L"
Shall We Dance (2004), in which Richard Gere learns to dance with Jennifer Lopez by the light of the "L"
Divergent (2014), which features the "L" as the method of transportation around the city; it plays a key role in certain situations. In Allegiant (2016), several of the main characters take the "L" to a point near the John Hancock Tower to scatter the ashes of another character. Both films portray futuristic versions of current train models.
Television series
Dick Wolf's Chicago franchise of four TV shows is set in and filmed on site in the city, and features the "L" in various episodes. In the U.S.
version of the TV series Shameless, set on the South Side of Chicago, several main characters take the "L" for transportation. Many scenes take place underneath or adjacent to the elevated tracks, and the trains frequently appear in the background and in establishing shots. On the CBS sitcom The Bob Newhart Show, which was set in Chicago, the opening credits sequence shows various CTA "L" trains. The CBS sitcom Good Times features the "L" in its opening and closing credits. On the CBS sitcom Mike & Molly, an "L" train can be seen passing by Mike and Carl's preferred eatery, Abe's. Another CBS series, Early Edition, also features the "L" in its opening credits and in several scenes of its episodes. On the NBC medical drama ER, set in Chicago, the "L" is often shown. Disney Channel's family sitcom Raven's Home features the "L" in many establishing shots. On the Showtime series The Chi, the "L" is featured in various episodes.
Rheumatic fever
Rheumatic fever (RF) is an inflammatory disease that can involve the heart, joints, skin, and brain. The disease typically develops two to four weeks after a streptococcal throat infection. Signs and symptoms include fever, multiple painful joints, involuntary muscle movements, and occasionally a characteristic non-itchy rash known as erythema marginatum. The heart is involved in about half of the cases. Damage to the heart valves, known as rheumatic heart disease (RHD), usually occurs after repeated attacks but can sometimes occur after one. The damaged valves may result in heart failure, atrial fibrillation, and infection of the valves. Rheumatic fever may occur following an infection of the throat by the bacterium Streptococcus pyogenes. If the infection is left untreated, rheumatic fever occurs in up to three percent of people. The underlying mechanism is believed to involve the production of antibodies against a person's own tissues. Due to their genetics, some people are more likely to get the disease when exposed to the bacteria than others. Other risk factors include malnutrition and poverty. Diagnosis of RF is often based on the presence of signs and symptoms in combination with evidence of a recent streptococcal infection. Treating people who have strep throat with antibiotics, such as penicillin, decreases the risk of developing rheumatic fever. In order to avoid antibiotic misuse, this often involves testing people with sore throats for the infection; however, testing might not be available in the developing world. Other preventive measures include improved sanitation. In those with rheumatic fever and rheumatic heart disease, prolonged periods of antibiotics are sometimes recommended. Gradual return to normal activities may occur following an attack. Once RHD develops, treatment is more difficult; occasionally valve replacement surgery or valve repair is required, and other complications are treated in the usual manner. Rheumatic fever occurs in about 325,000 children each year, and about 33.4 million people currently have rheumatic heart disease. Those who develop RF are most often between the ages of 5 and 14, with 20% of first-time attacks occurring in adults. The disease is most common in the developing world and among indigenous peoples in the developed world. In 2015 it resulted in 319,400 deaths, down from 374,000 deaths in 1990. Most deaths occur in the developing world, where as many as 12.5% of people affected may die each year. Descriptions of the condition are believed to date back to at least the 5th century BCE in the writings of Hippocrates. The disease is so named because its symptoms are similar to those of some rheumatic disorders.
Signs and symptoms
The disease typically develops two to four weeks after a throat infection. Symptoms include fever, painful joints (with the joints affected changing over time), involuntary muscle movements, and occasionally a characteristic non-itchy rash known as erythema marginatum. The heart is involved in about half of the cases. Damage to the heart valves usually occurs only after several attacks but may occasionally occur after a single case of RF. The damaged valves may result in heart failure and also increase the risk of atrial fibrillation and infection of the valves.
Pathophysiology
Rheumatic fever is a systemic disease affecting the connective tissue around arterioles, and can occur after an untreated strep throat infection, specifically one due to group A streptococcus (GAS), Streptococcus pyogenes.
The similarity between antigens of Streptococcus pyogenes and multiple cardiac proteins can cause a life-threatening type II hypersensitivity reaction. Usually, self-reactive B cells remain anergic in the periphery without T cell co-stimulation. During a streptococcal infection, mature antigen-presenting cells such as B cells present the bacterial antigen to CD4+ T cells, which differentiate into helper T2 cells. Helper T2 cells subsequently activate the B cells to become plasma cells and induce the production of antibodies against the cell wall of Streptococcus. However, the antibodies may also react against the myocardium and joints, producing the symptoms of rheumatic fever. S. pyogenes is a species of aerobic, gram-positive cocci that is non-motile and non-spore-forming and forms chains and large colonies. S. pyogenes has a cell wall composed of branched polymers which sometimes contain M protein, a virulence factor that is highly antigenic. The antibodies which the immune system generates against the M protein may cross-react with the heart muscle cell protein myosin, heart muscle glycogen, and smooth muscle cells of arteries, inducing cytokine release and tissue destruction. However, the only proven cross-reaction is with perivascular connective tissue. This inflammation occurs through direct attachment of complement and Fc receptor-mediated recruitment of neutrophils and macrophages. Characteristic Aschoff bodies, composed of swollen eosinophilic collagen surrounded by lymphocytes and macrophages, can be seen on light microscopy. The larger macrophages may become Anitschkow cells or Aschoff giant cells. Rheumatic valvular lesions may also involve a cell-mediated immunity reaction, as these lesions predominantly contain T-helper cells and macrophages. In rheumatic fever, these lesions can be found in any layer of the heart, causing different types of carditis. The inflammation may cause a serofibrinous pericardial exudate described as "bread-and-butter" pericarditis, which usually resolves without sequelae. Involvement of the endocardium typically results in fibrinoid necrosis and wart formation along the lines of closure of the left-sided heart valves. Warty projections arise from these deposits, while subendocardial lesions may induce irregular thickenings called MacCallum plaques.
Rheumatic heart disease
Chronic rheumatic heart disease (RHD) is characterized by repeated inflammation with fibrinous repair. The cardinal anatomic changes of the valve include leaflet thickening, commissural fusion, and shortening and thickening of the tendinous cords. It is caused by an autoimmune reaction to group A β-hemolytic streptococci (GAS) that results in valvular damage. Fibrosis and scarring of valve leaflets, commissures, and cusps leads to abnormalities that can result in valve stenosis or regurgitation. The inflammation caused by rheumatic fever, usually during childhood, is referred to as rheumatic valvulitis. About half of patients with rheumatic fever develop inflammation involving the valvular endothelium. The majority of morbidity and mortality associated with rheumatic fever is caused by its destructive effects on cardiac valve tissue. The complicated pathogenesis of RHD is not fully understood, though it is thought to involve molecular mimicry via group A streptococcal carbohydrates and a genetic predisposition involving HLA class II genes that trigger autoimmune reactions. Molecular mimicry occurs when epitopes are shared between host antigens and Streptococcus antigens.
This causes an autoimmune reaction against native tissues in the heart that are incorrectly recognized as "foreign" due to the cross-reactivity of antibodies generated as a result of epitope sharing. The valvular endothelium is a prominent site of lymphocyte-induced damage. CD4+ T cells are the major effectors of heart tissue autoimmune reactions in RHD. Normally, T cell activation is triggered by the presentation of bacterial antigens. In RHD, molecular mimicry results in incorrect T cell activation, and these T lymphocytes can go on to activate B cells, which will begin to produce self-antigen-specific antibodies. This leads to an immune attack mounted against tissues in the heart that have been misidentified as pathogens. Rheumatic valves display increased expression of VCAM-1, a protein that mediates the adhesion of lymphocytes. Self-antigen-specific antibodies generated via molecular mimicry between human proteins and streptococcal antigens up-regulate VCAM-1 after binding to the valvular endothelium. This leads to the inflammation and valve scarring observed in rheumatic valvulitis, mainly due to CD4+ T cell infiltration. While the mechanisms of genetic predisposition remain unclear, a few genetic factors have been found to increase susceptibility to autoimmune reactions in RHD. The dominant contributors are a component of MHC class II molecules, found on lymphocytes and antigen-presenting cells, specifically the DR and DQ alleles on human chromosome 6. Certain allele combinations appear to increase RHD autoimmune susceptibility. Human leukocyte antigen (HLA) class II allele DR7 (HLA-DR7) is most often associated with RHD, and its combination with certain DQ alleles is seemingly associated with the development of valvular lesions. The mechanism by which MHC class II molecules increase a host's susceptibility to autoimmune reactions in RHD is unknown, but it is likely related to the role HLA molecules play in presenting antigens to T cell receptors, thus triggering an immune response. The gene for the cytokine TNF-α, also located on human chromosome 6, is likewise associated with RHD. High expression levels of TNF-α may exacerbate valvular tissue inflammation, because as this cytokine circulates in the bloodstream, it triggers the activation of multiple pathways that stimulate further pro-inflammatory cytokine secretion. Mannose-binding lectin (MBL) is an inflammatory protein involved in pathogen recognition. Different variants of MBL2 gene regions are associated with RHD. RHD-induced mitral valve stenosis has been associated with MBL2 alleles encoding high production of MBL, while aortic valve regurgitation in RHD patients has been associated with different MBL2 alleles that encode low production of MBL. In addition, the allele IGHV4-61, located on chromosome 14, which helps code for the immunoglobulin heavy chain (IgH), is linked to greater susceptibility to RHD because it may affect the protein structure of the IgH. Other genes are also being investigated to better understand the complexity of autoimmune reactions that occur in RHD.
Diagnosis
The original method of diagnosing rheumatic heart disease was heart auscultation, specifically listening for the sound of blood regurgitating through possibly dysfunctional valves. However, studies have shown that echocardiography is much more efficient in detecting RHD due to its high sensitivity. An echocardiogram can detect signs of RHD before the development of more obvious changes such as tissue scarring and stenosis.
The Jones criteria were first published in 1944 by T. Duckett Jones, MD, and have been periodically revised (as the "modified Jones criteria") by the American Heart Association in collaboration with other groups. According to the revised Jones criteria, the diagnosis of rheumatic fever can be made when two of the major criteria, or one major criterion plus two minor criteria, are present along with evidence of streptococcal infection: an elevated or rising antistreptolysin O titre or anti-DNase B. A recurrent episode can be diagnosed when three minor criteria are present. Exceptions are chorea and indolent carditis, each of which by itself can indicate rheumatic fever. An April 2013 review article in the Indian Journal of Medical Research stated that echocardiographic and Doppler (E & D) studies, despite some reservations about their utility, have identified a massive burden of rheumatic heart disease, which suggests the inadequacy of the 1992 Jones criteria. E & D studies have identified subclinical carditis in patients with rheumatic fever, as well as in follow-ups of rheumatic heart disease patients who initially presented as having isolated cases of Sydenham's chorea. Signs of a preceding streptococcal infection include recent scarlet fever, a raised antistreptolysin O or other streptococcal antibody titre, or a positive throat culture. The last revision, of 2015, suggested variable diagnostic criteria in low-risk and high-risk populations, to avoid overdiagnosis in the first category and underdiagnosis in the second. Low-risk populations were defined as those with an acute rheumatic fever annual incidence of ≤2 per 100,000 school-aged children or an all-age rheumatic heart disease prevalence of ≤1 per 1,000. All other populations were categorised as having a moderate or high risk.
Major criteria
Joint manifestations are the one clinical sign with different implications for different population-risk categories: only polyarthritis (a temporary migrating inflammation of the large joints, usually starting in the legs and migrating upwards) is considered a major criterion in low-risk populations, whereas monoarthritis, polyarthritis, and polyarthralgia (joint pain without swelling) are all included as major criteria in high-risk populations.
Carditis: Carditis can involve the pericardium (pericarditis, which resolves without sequelae), some regions of the myocardium (which might not provoke systolic dysfunction), and, most consistently, the endocardium in the form of valvulitis. Carditis is diagnosed clinically (palpitations, shortness of breath, heart failure, or a new heart murmur) or by echocardiography/Doppler studies revealing mitral or aortic valvulitis. Both clinical and subclinical carditis are now considered a major criterion.
Subcutaneous nodules: Painless, firm collections of collagen fibers over bones or tendons. They commonly appear on the back of the wrist, the outside of the elbow, and the front of the knees.
Erythema marginatum: A long-lasting reddish rash that begins on the trunk or arms as macules, which spread outward and clear in the middle to form rings, which continue to spread and coalesce with other rings, ultimately taking on a snake-like appearance. This rash typically spares the face and is made worse by heat.
Sydenham's chorea (St. Vitus' dance): A characteristic series of involuntary rapid movements of the face and arms. This can occur very late in the disease, at least three months from the onset of infection.
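The overall decision rule described above (two major criteria, or one major plus two minor, together with evidence of a preceding streptococcal infection; three minor criteria for a recurrence; chorea or indolent carditis sufficing alone) is mechanical enough to sketch in code. The following Python fragment is an illustrative sketch only, not a clinical tool; the function and parameter names are invented for this example, and the minor criteria it counts are enumerated in the next section.

```python
def jones_diagnosis(n_major: int, n_minor: int,
                    strep_evidence: bool,
                    prior_rf: bool = False,
                    chorea_or_indolent_carditis: bool = False) -> bool:
    """Sketch of the revised Jones criteria decision rule.

    n_major / n_minor -- counts of major and minor criteria present
    strep_evidence    -- elevated or rising ASO titre, anti-DNase B,
                         recent scarlet fever, or positive throat culture
    prior_rf          -- True when evaluating a possible recurrence
    chorea_or_indolent_carditis -- either alone can indicate RF
    """
    # Chorea and indolent carditis are stand-alone exceptions.
    if chorea_or_indolent_carditis:
        return True
    # All other diagnoses require evidence of streptococcal infection.
    if not strep_evidence:
        return False
    # Initial attack: two major, or one major plus two minor criteria.
    if n_major >= 2 or (n_major >= 1 and n_minor >= 2):
        return True
    # Recurrent episode: three minor criteria suffice.
    if prior_rf and n_minor >= 3:
        return True
    return False
```

Note that, per the 2015 revision, which manifestations count as major or minor depends on whether the patient comes from a low-risk or high-risk population, so the counts fed into such a rule are themselves population-dependent.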
Minor criteria
Arthralgia: Polyarthralgia in low-risk populations and monoarthralgia in others. However, joint manifestations cannot be counted in both the major and minor categories in the same patient.
Fever: ≥38.5 °C (101.3 °F) in low-incidence populations and ≥38 °C (100.4 °F) in high-risk populations.
Raised erythrocyte sedimentation rate (≥60 mm in the first hour in low-risk populations and ≥30 mm/h in others) or C-reactive protein (>3.0 mg/dL).
ECG showing a prolonged PR interval after accounting for age variability (cannot be included if carditis is present as a major criterion).
Prevention
Rheumatic fever can be prevented by effectively and promptly treating strep throat with antibiotics. Globally, rheumatic fever is seen in populations that are socioeconomically disadvantaged and have limited access to health care. Overcrowding and exposure to domestic air pollution have been cited as associated risk factors. In those who have previously had rheumatic fever, antibiotics may be used preventatively as secondary prophylaxis. Antibiotic prophylaxis after an episode of acute rheumatic fever is recommended owing to the high likelihood of recurrence: streptococcal pharyngitis may occur asymptomatically, and rheumatic fever may recur even after a treated infection. The American Heart Association recommends, based on low-quality evidence but with high predicted efficacy, that people with mitral stenosis due to rheumatic heart disease receive prophylactic antibiotics for 10 years or until age 40, whichever is longer. The AHA also supports good dental hygiene in people with RHD, and antibiotics for the prevention of infective endocarditis during dental procedures are recommended in high-risk patients.
Vaccine
No vaccines are currently available to protect against S. pyogenes infection, although research is underway to develop one. Difficulties in developing a vaccine include the wide variety of strains of S. pyogenes present in the environment and the large amount of time and number of people needed for appropriate trials of the safety and efficacy of the vaccine.
Treatment
The management of rheumatic fever is directed toward the reduction of inflammation with anti-inflammatory medications such as aspirin or corticosteroids. Individuals with positive cultures for strep throat should also be treated with antibiotics.
Infection
People with positive cultures for Streptococcus pyogenes should be treated with penicillin as long as allergy is not present. The use of antibiotics will not alter cardiac involvement in the development of rheumatic fever. Some suggest the use of benzathine benzylpenicillin. Monthly injections of long-acting penicillin must be given for a period of five years in patients who have had one attack of rheumatic fever. If there is evidence of carditis, the length of therapy may be up to 40 years. Another important cornerstone in treating rheumatic fever is the continual use of low-dose antibiotics (such as penicillin, sulfadiazine, or erythromycin) to prevent recurrence.
Inflammation
Aspirin at high doses has historically been used for treatment of rheumatic fever. However, alternatives to aspirin have been sought, especially in children, because of side effects like gastritis and salicylate poisoning (which necessitates serum monitoring of salicylate levels) and the risk of Reye syndrome, a serious and potentially deadly condition that may arise in children treated with aspirin or aspirin-containing products.
While evidence suggests that treatment of rheumatic fever–associated arthritis with naproxen may be as effective as treatment with aspirin, naproxen's role in managing carditis has not been established. Management of carditis in acute rheumatic fever is controversial and based on dated literature. Corticosteroids may be considered, especially in people with allergies to NSAIDs or severe disease, although use of steroids may cause tissue atrophy, which could present challenges during future cardiac surgery for valve repair.
Heart failure
Some patients develop significant carditis, which manifests as congestive heart failure. This requires the usual treatment for heart failure: ACE inhibitors, diuretics, beta blockers, and digoxin. Unlike typical heart failure, rheumatic heart failure responds well to corticosteroids.
Epidemiology
About 33 million people are affected by rheumatic heart disease, with an additional 47 million having asymptomatic damage to their heart valves. As of 2010, it resulted in 345,000 deaths globally, down from 463,000 in 1990. In Western countries, rheumatic fever has become fairly rare since the 1960s, probably due to the widespread use of antibiotics to treat streptococcal infections. While it has been far less common in the United States since the beginning of the 20th century, there have been a few outbreaks since the 1980s. The disease is most common among Indigenous Australians (particularly in central and northern Australia), Māori, and Pacific Islanders, and is also common in Sub-Saharan Africa, Latin America, the Indian subcontinent, and North Africa. Rheumatic fever primarily affects children between ages 5 and 17 years and occurs approximately 20 days after strep throat. In up to a third of cases, the underlying strep infection may not have caused any symptoms. The rate of development of rheumatic fever in individuals with untreated strep infection is estimated to be 3%. The incidence of recurrence with a subsequent untreated infection is substantially greater (about 50%). The rate of development is far lower in individuals who have received antibiotic treatment. Persons who have had a case of rheumatic fever have a tendency to develop flare-ups with repeated strep infections. Recurrence of rheumatic fever is relatively common in the absence of maintenance low-dose antibiotics, especially during the first three to five years after the first episode, and recurrent bouts can lead to valvular heart disease. Heart complications may be long-term and severe, particularly if the valves are involved. In countries in Southeast Asia, sub-Saharan Africa, and Oceania, the rate of rheumatic heart disease detected by listening to the heart was 2.9 per 1,000 children, while the rate detected by echocardiography was 12.9 per 1,000 children. To assist in the identification of RHD in low-resource settings where the prevalence of GAS infections is high, the World Heart Federation has developed criteria for RHD diagnosis using echocardiography, supported by clinical history if available. The WHF additionally defines criteria for use in people younger than age 20 to diagnose "borderline" RHD, as identification of cases of RHD among children is a priority to prevent complications and progression. However, spontaneous regression is more likely in borderline RHD than in definite cases, and its natural history may vary between populations.
Echocardiographic screening of children and timely initiation of secondary antibiotic prophylaxis in children with evidence of early-stage rheumatic heart disease may be effective in reducing the burden of rheumatic heart disease in endemic regions. The efficacy of treating latent RHD in populations with high prevalence must be balanced against the potential development of antibiotic resistance, which might be offset through use of narrow-spectrum antibiotics like benzathine benzylpenicillin. Public health research is ongoing to determine whether screening is beneficial and cost-effective.
Allergic rhinitis
Allergic rhinitis, of which the seasonal type is called hay fever, is a type of inflammation in the nose that occurs when the immune system overreacts to allergens in the air. It is classified as a type I hypersensitivity reaction. Signs and symptoms include a runny or stuffy nose, sneezing, red, itchy, and watery eyes, and swelling around the eyes. The fluid from the nose is usually clear. Symptom onset is often within minutes following allergen exposure, and can affect sleep and the ability to work or study. Some people may develop symptoms only during specific times of the year, often as a result of pollen exposure. Many people with allergic rhinitis also have asthma, allergic conjunctivitis, or atopic dermatitis. Allergic rhinitis is typically triggered by environmental allergens such as pollen, pet hair, dust, or mold. Inherited genetics and environmental exposures contribute to the development of allergies. Growing up on a farm and having multiple siblings decreases this risk. The underlying mechanism involves IgE antibodies that attach to an allergen and subsequently trigger the release of inflammatory chemicals such as histamine from mast cells. These chemicals cause the mucous membranes in the nose, eyes, and throat to become inflamed and itchy as they work to eject the allergen. Diagnosis is typically based on a combination of symptoms and a skin prick test or blood tests for allergen-specific IgE antibodies. These tests, however, can give false positives. The symptoms of allergies resemble those of the common cold; however, they often last for more than two weeks and, despite the common name, typically do not include a fever. Exposure to animals early in life might reduce the risk of developing these specific allergies. Several different types of medications reduce allergic symptoms, including nasal steroids; intranasal antihistamines such as olopatadine or azelastine; second-generation oral antihistamines such as loratadine, desloratadine, cetirizine, or fexofenadine; the mast cell stabilizer cromolyn sodium; and leukotriene receptor antagonists such as montelukast. Often, medications do not completely control symptoms, and they may also have side effects. Exposing people to larger and larger amounts of allergen, known as allergen immunotherapy, is often effective and is used when first-line treatments fail to control symptoms. The allergen can be given as an injection under the skin or as a tablet under the tongue. Treatment typically lasts three to five years, after which benefits may be prolonged. Allergic rhinitis is the type of allergy that affects the greatest number of people. In Western countries, between 10 and 30% of people are affected in a given year. It is most common between the ages of twenty and forty. The first accurate description is from the 10th-century physician Abu Bakr al-Razi. In 1859, Charles Blackley identified pollen as the cause. In 1906, the mechanism was determined by Clemens von Pirquet. The link with hay came about due to an early (and incorrect) theory that the symptoms were brought about by the smell of new hay.
Signs and symptoms
The characteristic symptoms of allergic rhinitis are rhinorrhea (excess nasal secretion), itching, sneezing fits, and nasal congestion or obstruction. Characteristic physical findings include conjunctival swelling and erythema, eyelid swelling with Dennie–Morgan folds, lower eyelid venous stasis (rings under the eyes known as "allergic shiners"), swollen nasal turbinates, and middle ear effusion.
Nasal endoscopy may show findings such as pale and boggy inferior turbinates from mucosal edema, stringy mucus throughout the nasal cavities, and cobblestoning. There can also be behavioral signs; in order to relieve the irritation or flow of mucus, people may wipe or rub their nose with the palm of their hand in an upward motion, an action known as the "nasal salute" or the "allergic salute". This may result in a crease running across the nose (or above each nostril if only one side of the nose is wiped at a time), commonly referred to as the "transverse nasal crease", and can lead to permanent physical deformity if repeated enough. People might also find that cross-reactivity occurs. For example, people allergic to birch pollen may also find that they have an allergic reaction to the skin of apples or potatoes. A clear sign of this is the occurrence of an itchy throat after eating an apple or sneezing when peeling potatoes or apples. This occurs because of similarities in the proteins of the pollen and the food. There are many cross-reacting substances. Hay fever is not a true fever: it does not raise core body temperature into the fever range.
Cause
Pollen is often considered a cause of allergic rhinitis, hence the name hay fever (see the subsection below). Predisposing factors to allergic rhinitis include eczema (atopic dermatitis) and asthma. These three conditions often occur together, which is referred to as the atopic triad. Additionally, environmental exposures such as air pollution and maternal tobacco smoking can increase an individual's chances of developing allergies.
Pollen-related causes
Allergic rhinitis triggered by the pollens of specific seasonal plants is commonly known as "hay fever", because it is most prevalent during haying season. However, it is possible to have allergic rhinitis throughout the year. The pollen that causes hay fever varies between individuals and from region to region; in general, the tiny, hardly visible pollens of wind-pollinated plants are the predominant cause. The study of the dispersion of these bioaerosols is called aerobiology. Pollens of insect-pollinated plants are too large to remain airborne and pose no risk. Examples of plants commonly responsible for hay fever include:
Trees: such as pine (Pinus), mulberry (Morus), birch (Betula), alder (Alnus), cedar (Cedrus), hazel (Corylus), hornbeam (Carpinus), horse chestnut (Aesculus), willow (Salix), poplar (Populus), plane (Platanus), linden/lime (Tilia), and olive (Olea). In northern latitudes, birch is considered to be the most common allergenic tree pollen, with an estimated 15–20% of people with hay fever sensitive to birch pollen grains. A major antigen in these is a protein called Bet v 1. Olive pollen is most predominant in Mediterranean regions. Hay fever in Japan is caused primarily by sugi (Cryptomeria japonica) and hinoki (Chamaecyparis obtusa) tree pollen. "Allergy friendly" trees include female ash, red maple, yellow poplar, dogwood, magnolia, double-flowered cherry, fir, spruce, and flowering plum.
Grasses (family Poaceae): especially ryegrass (Lolium sp.) and timothy (Phleum pratense). An estimated 90% of people with hay fever are allergic to grass pollen.
Weeds: ragweed (Ambrosia), plantain (Plantago), nettle/parietaria (Urticaceae), mugwort (Artemisia vulgaris), fat hen (Chenopodium), and sorrel/dock (Rumex).
Allergic rhinitis may also be caused by allergy to balsam of Peru, which is present in various fragrances and other products.
Genetic factors
The causes and pathogenesis of allergic rhinitis are hypothesized to be affected by both genetic and environmental factors, with many recent studies focusing on specific loci that could be potential therapeutic targets for the disease. Genome-wide association studies (GWAS) have identified a number of different loci and genetic pathways that seem to mediate the body's response to allergens and promote the development of allergic rhinitis, with some of the most promising results coming from studies involving single-nucleotide polymorphisms (SNPs) in the interleukin-33 (IL-33) gene. The IL-33 protein encoded by the IL-33 gene is part of the interleukin family of cytokines that interact with T-helper 2 (Th2) cells, a specific type of T cell. Th2 cells contribute to the body's inflammatory response to allergens, and specific ST2 receptors (also known as IL1RL1) on these cells bind the IL-33 protein as a ligand. This IL-33/ST2 signaling pathway has been found to be one of the main genetic determinants in bronchial asthma pathogenesis, and because of the pathological link between asthma and rhinitis, the experimental focus on IL-33 has now turned to its role in the development of allergic rhinitis in humans and mouse models. It was recently found that allergic rhinitis patients expressed higher levels of IL-33 in their nasal epithelium and had higher concentrations of ST2 in their nasal passageways following exposure to pollen and other allergens, indicating that this gene and its associated receptor are expressed at a higher rate in allergic rhinitis patients. In a 2020 study on polymorphisms of the IL-33 gene and their link to allergic rhinitis within the Han Chinese population, researchers found that five SNPs specifically contributed to the pathogenesis of allergic rhinitis, with three of those five SNPs previously identified as genetic determinants for asthma. Another study focusing on Han Chinese children found that certain SNPs in the protein tyrosine phosphatase non-receptor 22 (PTPN22) gene and the cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) gene can be associated with childhood allergic rhinitis and allergic asthma. The encoded PTPN22 protein, which is found primarily in lymphoid tissue, acts as a post-translational regulator by removing phosphate groups from targeted proteins. Importantly, PTPN22 can affect the phosphorylation involved in T cell responses, and thus the subsequent proliferation of T cells. As mentioned earlier, T cells contribute to the body's inflammatory response in a variety of ways, so any changes to the cells' structure and function can have potentially deleterious effects on the body's inflammatory response to allergens. To date, one SNP in the PTPN22 gene has been found to be significantly associated with allergic rhinitis onset in children. CTLA-4, on the other hand, is an immune-checkpoint protein that helps mediate and control the body's immune response to prevent overactivation. It is expressed only in T cells as a glycoprotein of the immunoglobulin (Ig) protein family, whose members are also known as antibodies. Two SNPs in CTLA-4 have been found to be significantly associated with childhood allergic rhinitis; both most likely affect the associated protein's shape and function, causing the body to mount an overactive immune response to the allergen.
The polymorphisms in both genes are only beginning to be examined; therefore, more research is needed to determine the severity of the impact of polymorphisms in these genes. Finally, epigenetic alterations and associations are of particular interest to the study and ultimate treatment of allergic rhinitis. Specifically, microRNAs (miRNAs) are hypothesized to be important in the pathogenesis of allergic rhinitis due to their post-transcriptional regulation and repression of translation of their mRNA complements. Both miRNAs and their common carrier vessels, exosomes, have been found to play a role in the body's immune and inflammatory responses to allergens. miRNAs are housed and packaged inside exosomes until they are released into the part of the cell where they are coded to reside and act. Repressing the translation of proteins can ultimately repress parts of the body's immune and inflammatory responses, thus contributing to the pathogenesis of allergic rhinitis and other autoimmune disorders. Many miRNAs have been deemed potential therapeutic targets for the treatment of allergic rhinitis, with the most widely studied being miR-133, miR-155, miR-205, miR-498, and let-7e.
Pathophysiology
The pathophysiology of allergic rhinitis involves Th2 helper T cell and IgE-mediated inflammation with overactive function of the adaptive and innate immune systems. The process begins when an aeroallergen penetrates the nasal mucosal barrier; this barrier may be more permeable in susceptible individuals. The allergen is then engulfed by an antigen-presenting cell (APC), such as a dendritic cell. The APC then presents the antigen to a naive CD4+ helper T cell, stimulating it to differentiate into a Th2 helper T cell. The Th2 helper T cell then secretes inflammatory cytokines including IL-4, IL-5, IL-13, IL-14, and IL-31. These inflammatory cytokines stimulate B cells to differentiate into plasma cells and release allergen-specific IgE immunoglobulins, which attach to mast cells. The inflammatory cytokines also recruit inflammatory cells such as basophils, eosinophils, and fibroblasts to the area. The person is now sensitized, and upon re-exposure to the allergen, mast cells bearing allergen-specific IgE will bind the allergen and release inflammatory molecules including histamine, leukotrienes, platelet-activating factor, prostaglandins, and thromboxane. The local effects of these inflammatory molecules on blood vessels (dilation), mucous glands (mucus secretion), and sensory nerves (activation) lead to the clinical signs and symptoms of allergic rhinitis. Disruption of the nasal mucosal epithelial barrier may also release alarmins (a type of damage-associated molecular pattern (DAMP) molecule) such as thymic stromal lymphopoietin, IL-25, and IL-33, which activate group 2 innate lymphoid cells (ILC2); these in turn release inflammatory cytokines, leading to activation of immune cells.
Diagnosis
Allergy testing may reveal the specific allergens to which an individual is sensitive. Skin testing is the most common method of allergy testing. This may include a patch test to determine if a particular substance is causing the rhinitis, or an intradermal, scratch, or other test. Less commonly, the suspected allergen is dissolved and dropped onto the lower eyelid as a means of testing for allergies. This test should be done only by a physician, since it can be harmful if done improperly.
In some individuals not able to undergo skin testing (as determined by the doctor), the RAST blood test may be helpful in determining specific allergen sensitivity. Peripheral eosinophilia can be seen on a differential leukocyte count. Allergy testing is not definitive: at times, these tests can give positive results for allergens that are not actually causing symptoms, and can also fail to pick up allergens that do cause an individual's symptoms. The intradermal allergy test is more sensitive than the skin prick test, but is also more often positive in people who do not have symptoms to that allergen. Even if a person has negative skin-prick, intradermal, and blood tests for allergies, they may still have allergic rhinitis from a local allergy in the nose. This is called local allergic rhinitis. Specialized testing is necessary to diagnose local allergic rhinitis.
Classification
Seasonal allergic rhinitis (hay fever): caused by seasonal peaks in the airborne load of pollens.
Perennial allergic rhinitis (nonseasonal allergic rhinitis; atopic rhinitis): caused by allergens present throughout the year (e.g., dander).
Allergic rhinitis may be seasonal, perennial, or episodic. Seasonal allergic rhinitis occurs in particular during pollen seasons and does not usually develop until after 6 years of age. Perennial allergic rhinitis occurs throughout the year and is commonly seen in younger children. Allergic rhinitis may also be classified as mild-intermittent, moderate-severe intermittent, mild-persistent, or moderate-severe persistent. Intermittent means the symptoms occur fewer than 4 days per week or for fewer than 4 consecutive weeks; persistent means symptoms occur more than 4 days per week and for more than 4 consecutive weeks. Symptoms are considered mild with normal sleep, no impairment of daily activities, no impairment of work or school, and symptoms that are not troublesome. Severe symptoms result in sleep disturbance and impairment of daily activities, school, or work. (A schematic of this classification appears in the sketch at the end of this section.)
Local allergic rhinitis
Local allergic rhinitis is an allergic reaction in the nose to an allergen, without systemic allergies. So skin-prick and blood tests for allergy are negative, but IgE antibodies that react to a specific allergen are produced in the nose. Intradermal skin testing may also be negative. The symptoms of local allergic rhinitis are the same as those of allergic rhinitis, including symptoms in the eyes. Just as with allergic rhinitis, people can have either seasonal or perennial local allergic rhinitis, and the symptoms can be mild, moderate, or severe. Local allergic rhinitis is associated with conjunctivitis and asthma. In one study, about 25% of people with rhinitis had local allergic rhinitis. In several studies, over 40% of people who had been diagnosed with nonallergic rhinitis were found to actually have local allergic rhinitis. Steroid nasal sprays and oral antihistamines have been found to be effective for local allergic rhinitis. As of 2014, local allergic rhinitis had mostly been investigated in Europe; in the United States, the nasal provocation testing necessary to diagnose the condition was not widely available.
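As a compact illustration of the intermittent/persistent and mild/moderate-severe scheme described in the Classification subsection above, here is a small Python sketch. It is purely illustrative: the thresholds come from that subsection, while the function and field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class RhinitisPresentation:
    # Illustrative fields only; the names are invented for this sketch.
    days_per_week: float          # symptomatic days per week
    consecutive_weeks: float      # duration of the current episode
    sleep_disturbed: bool
    daily_activities_impaired: bool
    work_or_school_impaired: bool
    symptoms_troublesome: bool

def classify(p: RhinitisPresentation) -> str:
    # Persistent: more than 4 days/week AND more than 4 consecutive
    # weeks; anything else counts as intermittent.
    duration = ("persistent"
                if p.days_per_week > 4 and p.consecutive_weeks > 4
                else "intermittent")
    # Mild only when none of the severity markers are present.
    severity = ("mild"
                if not (p.sleep_disturbed or p.daily_activities_impaired
                        or p.work_or_school_impaired or p.symptoms_troublesome)
                else "moderate-severe")
    return f"{severity} {duration}"

# Example: symptoms 5 days/week for 6 weeks with disturbed sleep
# classify as "moderate-severe persistent".
print(classify(RhinitisPresentation(5, 6, True, False, False, False)))
```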
Prevention
Prevention often focuses on avoiding the specific allergens that cause an individual's symptoms. These methods include not having pets, not having carpets or upholstered furniture in the home, and keeping the home dry. Anti-allergy zippered covers on household items like pillows and mattresses have also proven effective in preventing dust mite allergies. Studies have shown that growing up on a farm and having many older siblings can decrease an individual's risk of developing allergic rhinitis, while studies in young children have shown a higher risk in those with early exposure to foods or formula, or heavy exposure to cigarette smoke, within the first year of life.
Treatment
The goal of rhinitis treatment is to prevent or reduce the symptoms caused by the inflammation of affected tissues. Measures that are effective include avoiding the allergen. Intranasal corticosteroids are the preferred medical treatment for persistent symptoms, with other options used if this is not effective. Second-line therapies include antihistamines, decongestants, cromolyn, leukotriene receptor antagonists, and nasal irrigation. Antihistamines by mouth are suitable for occasional use with mild intermittent symptoms. Mite-proof covers, air filters, and withholding certain foods in childhood do not have evidence supporting their effectiveness.
Antihistamines
Antihistamine drugs can be taken orally or nasally to control symptoms such as sneezing, rhinorrhea, itching, and conjunctivitis. It is best to take oral antihistamine medication before exposure, especially for seasonal allergic rhinitis. With nasal antihistamines like azelastine nasal spray, relief from symptoms is experienced within 15 minutes, allowing a more immediate, as-needed approach to dosage. There is not enough evidence of antihistamine efficacy as an add-on therapy with nasal steroids in the management of intermittent or persistent allergic rhinitis in children, so the adverse effects and additional costs must be considered. Ophthalmic antihistamines (such as azelastine in eye-drop form and ketotifen) are used for conjunctivitis, while intranasal forms are used mainly for sneezing, rhinorrhea, and nasal pruritus. Antihistamine drugs can have undesirable side effects, the most notable one being drowsiness in the case of oral antihistamine tablets. First-generation antihistamine drugs such as diphenhydramine cause drowsiness, while second- and third-generation antihistamines such as fexofenadine and loratadine are less likely to. Pseudoephedrine is also indicated for vasomotor rhinitis. It is used only when nasal congestion is present and can be used with antihistamines. In the United States, oral decongestants containing pseudoephedrine must be purchased behind the pharmacy counter in an effort to prevent the manufacture of methamphetamine. Desloratadine/pseudoephedrine can also be used for this condition.
Steroids
Intranasal corticosteroids are used to control symptoms associated with sneezing, rhinorrhea, itching, and nasal congestion. Steroid nasal sprays are effective and safe, and may be effective without oral antihistamines. They take several days to act and so must be taken continually for several weeks, as their therapeutic effect builds up with time. In 2013, a study compared the efficacy of mometasone furoate nasal spray to betamethasone oral tablets for the treatment of people with seasonal allergic rhinitis and found that the two have virtually equivalent effects on nasal symptoms.
Systemic steroids, such as prednisone tablets and intramuscular injections of triamcinolone acetonide or a glucocorticoid such as betamethasone, are effective at reducing nasal inflammation, but their use is limited by their short duration of effect and the side effects of prolonged steroid therapy.
Other
Other measures that may be used second line include decongestants, cromolyn, leukotriene receptor antagonists, and nonpharmacologic therapies such as nasal irrigation. Topical decongestants may also be helpful in reducing symptoms such as nasal congestion, but should not be used for long periods, as stopping them after protracted use can lead to a rebound nasal congestion called rhinitis medicamentosa. For nocturnal symptoms, intranasal corticosteroids can be combined with nightly oxymetazoline, an adrenergic alpha-agonist, or an antihistamine nasal spray without risk of rhinitis medicamentosa. Nasal saline irrigation (a practice where salt water is poured into the nostrils) may have benefits in both adults and children in relieving the symptoms of allergic rhinitis, and it is unlikely to be associated with adverse effects.
Allergen immunotherapy
Allergen immunotherapy, also called desensitization, involves administering doses of allergens to accustom the body to substances that are generally harmless (pollen, house dust mites), thereby inducing specific long-term tolerance. It is the only treatment that alters the disease mechanism. Immunotherapy can be administered orally (as sublingual tablets or sublingual drops) or by injections under the skin (subcutaneously). Subcutaneous immunotherapy is the most common form and has the largest body of evidence supporting its effectiveness.
Alternative medicine
There are no forms of complementary or alternative medicine that are evidence-based for allergic rhinitis. The therapeutic efficacy of alternative treatments such as acupuncture and homeopathy is not supported by available evidence. While some evidence suggests that acupuncture, specifically targeting the sphenopalatine ganglion acupoint, is effective for rhinitis, the trials are still limited. Overall, the quality of evidence for complementary and alternative medicine is not strong enough for it to be recommended by the American Academy of Allergy, Asthma and Immunology.
Epidemiology
Allergic rhinitis is the type of allergy that affects the greatest number of people. In Western countries, between 10 and 30 percent of people are affected in a given year. It is most common between the ages of twenty and forty.
History
The first accurate description is from the 10th-century physician Rhazes. Pollen was identified as the cause in 1859 by Charles Blackley, and in 1906 the mechanism was determined by Clemens von Pirquet. The link with hay came about due to an early (and incorrect) theory that the symptoms were brought about by the smell of new hay. Although the scent per se is irrelevant, the correlation with hay holds: peak hay-harvesting season overlaps with peak pollen season, and hay-harvesting work puts people in close contact with seasonal allergens.
Biology and health sciences
Specific diseases
Health
412903
https://en.wikipedia.org/wiki/Gourami
Gourami
Gouramis, or gouramies, are a group of freshwater anabantiform fish that comprise the family Osphronemidae. The fish are native to Asia—from the Indian subcontinent to Southeast Asia and northeasterly towards Korea. The name "gourami", of Indonesian origin, is also used for fish of the families Helostomatidae and Anabantidae. Many gouramis have an elongated, feeler-like ray at the front of each of their pelvic fins. All living species show parental care until the fry are free-swimming: some are mouthbrooders, like the Krabi mouth-brooding betta (Betta simplex), and others, like the Siamese fighting fish (Betta splendens), build bubble nests. Currently, about 133 species are recognised, placed in four subfamilies and about 15 genera. The name Polyacanthidae has also been used for this family. Some fish now classified as gouramis were previously placed in the family Anabantidae. The subfamily Belontiinae was recently demoted from full family status (Belontiidae). As labyrinth fishes, gouramis have a lung-like labyrinth organ that allows them to gulp air and use atmospheric oxygen. This organ is a vital adaptation for fish that often inhabit warm, shallow, oxygen-poor water. Gouramis can live for 1–5 years. The earliest fossil gourami is Ombilinichthys from the early-mid Eocene Sangkarewang Formation of Sumatra, Indonesia. A second fossil taxon from the same formation, known from several specimens and tentatively assigned to Osphronemus goramy when analyzed in the 1930s, is now lost.
Subfamilies and genera
The family Osphronemidae is divided into the following subfamilies and genera:
Family Osphronemidae van der Hoeven, 1832
Genus †Ombilinichthys Murray, Zaim, Rizal, Aswan, Gunnell & Ciochon, 2015
Subfamily Belontiinae Liem, 1962
Belontia Myers, 1923
Subfamily Osphroneminae van der Hoeven, 1832
Osphronemus Lacepède, 1801
Subfamily Luciocephalinae Bleeker, 1852
Luciocephalus Bleeker, 1851
Sphaerichthys Canestrini, 1860
Ctenops McClelland, 1845
Parasphaerichthys Prashad & Mukerji, 1929
Trichopodus Lacepède, 1801
Subfamily Macropodusinae Hoedeman, 1948
Betta Bleeker, 1850
Parosphromenus Bleeker, 1877
Macropodus Lacepède, 1801
Malpulutta Deraniyagala, 1937
Pseudosphromenus Bleeker, 1879
Trichopsis Canestrini, 1860
Subfamily Trichogastrinae Bleeker, 1879
Trichogaster Bloch & Schneider, 1801
As food
Giant gouramis, Osphronemus goramy, or Kaloi in Malay, are eaten in some parts of the world. In Maritime Southeast Asian countries, they are often deep-fried and served in sweet-sour sauce, chili sauce, and other spices. The paradise fish, Macropodus opercularis, and other members of that genus are the target of a cannery industry in China, the products of which are available in Asian supermarkets around the world. Gouramis are particularly found in Sundanese cuisine. In Malaysia, Indonesia, Thailand, the Philippines, and Brunei, gouramis are readily fished from streams, brooks, canals, rivers, and other large bodies of water.
In the aquarium
Numerous gourami species, such as the dwarf gourami and pearl gourami, are popular aquarium fish widely kept throughout the world. They are sought after for their bright colours and relative intelligence: they are able to recognise their owners and "greet" them, are eager to explore the plants and rocks placed across their aquarium, and display extensive paternal care, with the males protecting the eggs until they hatch and building a foam raft to keep them afloat. As labyrinth fish, they will often swim near the top of the tank in order to breathe air.
As with other tropical freshwater fish, an aquarium heater is often used. Gouramis will eat either prepared or live foods. Some species can grow quite large and are unsuitable for the general hobbyist. Big gouramis may become territorial with fish that are colourful and of a comparable size to them; however, this generally depends on the individual's temperament, as some gouramis will be more tolerant of tankmates than others. Gouramis may nip at other fish, and males should never be kept together as they will become aggressive.
Compatibility
Generally regarded as peaceful, gouramis are still capable of harassing or killing smaller or long-finned fish. Depending on the species, adult and juvenile males have been known to spar with one another. Aggression can also occur as a result of overcrowding. Gouramis have been housed with many species, such as danios, mollies, silver dollars, neon tetras, and plecostomus catfish. Compatibility depends on the species of gourami and the fish it is housed with. Some species (e.g., Macropodus or Belontia) are highly aggressive or predatory and may harass or kill smaller or less aggressive fish, whereas others (Parosphromenus and Sphaerichthys, for instance) are very shy or have specific water requirements and thus will be outcompeted by typical community fish.
Gallery
Biology and health sciences
Acanthomorpha
Animals
412916
https://en.wikipedia.org/wiki/Inkscape
Inkscape
Inkscape is a free and open-source vector graphics editor for traditional Unix-compatible systems such as GNU/Linux, BSD derivatives and Illumos, as well as Windows and macOS. It offers a rich set of features and is widely used for both artistic and technical illustrations such as cartoons, clip art, logos, typography, diagramming and flowcharting. It uses vector graphics to allow for sharp printouts and renderings at unlimited resolution and is not bound to a fixed number of pixels like raster graphics. Inkscape uses the standardized Scalable Vector Graphics (SVG) file format as its main format, which is supported by many other applications including web browsers. It can import and export various other file formats, including SVG, AI, EPS, PDF, PS and PNG. Inkscape can render primitive vector shapes (e.g. rectangles, ellipses, polygons, arcs, spirals, stars and 3D boxes) and text. These objects may be filled with solid colors, patterns, radial or linear color gradients, and their borders may be stroked, both with adjustable transparency. Embedding and optional tracing of raster graphics is also supported, enabling the editor to create vector graphics from photos and other raster sources. Created shapes can be further manipulated with transformations, such as moving, rotating, scaling and skewing.
History
Inkscape began in 2003 as a code fork of the Sodipodi project. Sodipodi, developed since 1999, was itself based on Raph Levien's Gill (GNOME Illustration Application). One of the main priorities of the Inkscape project was interface consistency and usability by following the GNOME human interface guidelines. The Inkscape FAQ interprets the word Inkscape as a compound of ink and scape (as in landscape). Four former Sodipodi developers (Ted Gould, Bryce Harrington, Nathan Hurst, and MenTaLguY) led the fork, citing differences over project objectives, openness to third-party contributions, and technical disagreements. They said that Inkscape would focus development on implementing the complete SVG standard, whereas Sodipodi development emphasized developing a general-purpose vector graphics editor, possibly at the expense of SVG. Following the fork, Inkscape's developers changed the programming language from C to C++; adopted the GTK (formerly GIMP Toolkit) toolkit C++ bindings (gtkmm); redesigned the user interface, and added a number of new features. Inkscape's implementation of the SVG standard, although incomplete, has shown gradual improvement. Since 2005, Inkscape has participated in the Google Summer of Code program. Until the end of November 2007, Inkscape's source code repository was hosted by SourceForge. Thereafter it moved to Launchpad. In June 2017, it moved to GitLab.
Features
Object creation
The Inkscape workflow is based around vector objects. Tools allow manipulating primitive vector shapes: simple ones like rectangles, ellipses and arcs, as well as more complex objects like 3D boxes with adjustable perspectives, stars, polygons and spirals. A rendering feature can create objects like barcodes, calendars, grids, gears and roulette curves (using the spirograph tool). These objects may be filled with solid colors, patterns, radial or linear color gradients, and their borders may be stroked, both with adjustable transparency. All of these can be further edited by transformations—such as moving, rotating, scaling and skewing—or by editing paths. Other tools allow creating Bézier curves, freehand drawing of lines (pencil), or calligraphic (brush-like) strokes which support a graphics tablet.
Inkscape is able to write and edit text, with tools available for changing font, spacing, kerning and rotation, and for flowing text along a path or into a shape. Text can be converted to paths for further editing. The program also has a layers feature that, together with object ordering, allows the user to organize objects in a preferred stacking order on the canvas. Both layers and objects can be made visible/invisible and locked/unlocked through these features. Symbol libraries enable Inkscape to use existing symbols like logic-gate symbols or DOT pictograms. Additional libraries can be included by the user. Inkscape supports image tracing, the process of extracting vector graphics from raster sources. Clones are child objects of an original parent object. Different transformations can be applied to them, such as size, position, rotation, blur, opacity, color and symmetry. Clones are updated live whenever the parent object changes.
Object manipulation
Every object in the drawing can be subjected to arbitrary affine transformations: moving, rotating, scaling, skewing and a configurable matrix. Transformation parameters can be specified numerically. Transformations can snap to angles, grids, guidelines and nodes of other objects, or objects can be aligned in a specified direction, spaced equally, or scattered at random. Objects can be grouped together. Groups of objects behave similarly to objects. Objects in a group can be edited without having to ungroup them first. The Z-order determines the order in which objects are drawn on the canvas. Objects with a high Z-order are drawn on top of objects lower in the Z-order. The order of objects can be managed either using layers, or by manually moving an object up and down in the Z-order. Layers can be locked or hidden, preventing modification and accidental selection. The appearance of objects can be further changed by using masks and clipping paths, which can be created from arbitrary objects, including groups. The Create Tiled Clones tool allows symmetrical or grid-like drawings using various plane symmetries. Style attributes are 'attached' to the source object, so after cutting/copying an object onto the clipboard, the style's attributes can be pasted to another object. Objects can also be moved by manually entering the location coordinates in the top toolbar; simple additions and subtractions can even be entered directly in these fields.
Operations on paths
Inkscape has a comprehensive tool set to edit paths, as they are the basic element of a vector file:
Edit Path by Node tool: allows for the editing of single or multiple paths and their associated nodes. There are four types of path nodes: cusp (corner), smooth, symmetric and auto-smooth. Editing is available for the positioning of nodes and their associated handles (angle and length) for linear and Bézier paths or Spiro curves. A path segment can also be adjusted. When multiple nodes are selected, they can be moved, scaled and rotated using keyboard shortcuts or mouse controls. Additional nodes can be inserted into paths at arbitrary placements, and an effect can be used to insert nodes at predefined intervals. When nodes are deleted, the handles on the remaining ones are adjusted to preserve the original shape as closely as possible.
Tweak tool (sculpting/painting): edits whole objects or regions (parts) of an object. It can push, repel/attract, randomize the positioning of, shrink/enlarge, rotate, and copy/delete selected whole objects. On parts of a path it can push, shrink/enlarge, repel/attract, roughen edges, blur and color.
Nodes are dynamically created and deleted when needed while using this tool, so it can also be used on simple paths without pre-processing.
Path-Offsets (Outset, Inset, Linked or Dynamic): can create a Linked or Dynamic (unlinked) Inset and/or Outset of an existing path, which can then be fine-tuned using the given Shape or Node tool. A Linked Offset of a path updates whenever the original is modified, making symmetrical graphics (e.g., picture frames) easier to edit.
Path-Conversion (Object to Path): converts objects such as shapes (square, circle, etc.) or text into paths.
Path-Conversion (Stroke to Path): converts the stroke of a shape into a path.
Path-Simplify: reduces a given path's node count while preserving the shape.
Path-Operations (Boolean operations): use multiple objects for Union, Difference, Intersection, Exclusion, Division and Cut Path.
Inkscape includes a feature called Live Path Effects (LPE), which can apply various modifiers to a path. Envelope Deformation is available via the Path Effects and provides a perspective effect. There are more than a dozen of these live path effects. LPEs can be stacked onto a single object and offer interactive live on-canvas and menu-based editing of the effects.
File formats
Inkscape's primary format is SVG 1.1, meaning that it can create and edit with the abilities and within the constraints of this format. Any other format must either be imported (converted to SVG) or exported (converted from SVG). The SVG format uses the Cascading Style Sheets (CSS) standard internally. Inkscape's implementation of the SVG and CSS standards is incomplete. Most notably, it does not support animation natively. Inkscape has multilingual support, particularly for complex scripts. Formats that used the UniConvertor library are not supported beyond the 1.0 release. A workaround is to have a parallel installation of version 0.92.x.
Other features
XML Editor for direct manipulation of the SVG XML structure
Support for SVG filter effects
Editing of Resource Description Framework (RDF), a World Wide Web Consortium (W3C) metadata information model
Command-line interface, which exposes format conversion functions and full-featured GUI scripting (see the illustrative sketch after the Platform support section)
More than sixty interface languages
Extensible to new file formats, effects and other features
Mathematical diagramming, with various uses of LaTeX
Experimental support for scripting
lib2Geom is now also usable externally. 2Geom is a computational geometry library originally developed for Inkscape, but it can be used from any application. It provides support for basic geometric algebra, paths, distortions, Boolean operations, plotting implicit functions, non-uniform rational B-splines (NURBS) and more. 2Geom is free software released under LGPL 2.1 or MPL 1.1.
Platform support
The latest version of Inkscape 1.0.x (and the older 0.92.x line) is available for Linux, Windows 7+, and macOS 10.11–10.15 platforms. Inkscape is packaged as AppImage, Flatpak, PPA, Snap and source by all major Linux distributions (including Debian, Ubuntu, Fedora, OpenSUSE) with GTK+ 3.24+ (0.92.x with GTK+ 2.20+ for older Linux). Inkscape can also be installed via FreeBSD ports and pkgsrc, the latter being native to NetBSD but well-supported on most POSIX platforms, including GNU/Linux, Illumos, and macOS. Wacom tablet support for GTK 3 is being revived. Version 1.0.x includes GTK 3 and Wacom support, depending on the necessary Wacom Linux or Unix driver.
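The command-line interface mentioned under Other features can also be driven from other languages. The following is a minimal sketch in Python of one such format conversion, assuming an Inkscape 1.x binary is available on the PATH; the file names and pixel width are illustrative placeholders, not part of Inkscape itself.

import subprocess

def svg_to_png(svg_path: str, png_path: str, width: int = 512) -> None:
    # Render an SVG file to PNG by invoking the Inkscape 1.x command-line
    # interface; the --export-type and --export-filename options replaced
    # the older -e/--export-png style flags of the 0.92 series.
    subprocess.run(
        [
            "inkscape",
            svg_path,
            "--export-type=png",
            f"--export-filename={png_path}",
            f"--export-width={width}",
        ],
        check=True,  # raise CalledProcessError if Inkscape exits with an error
    )

svg_to_png("drawing.svg", "drawing.png", width=1024)  # hypothetical file names

The same pattern applies to the other export formats listed above (e.g., PDF or EPS) by changing the --export-type value.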
macOS
An issue affected all GTK3-based apps on macOS Ventura (macOS 13), making them unresponsive to certain mouse events. GTK, a free and open-source cross-platform widget toolkit for creating graphical user interfaces (GUIs), is used by many different programs. Inkscape 1.2.2 was also affected, and the Inkscape website recommended not installing it on Ventura as long as a stable solution was not available. These issues were fixed from version 1.3. Most of the compatibility issues with Apple silicon processors (M1, M2 and M3 families) also appear to have been resolved from version 1.3, and the macOS download site for Inkscape offers two options: an Intel version and an arm64 version for the Apple silicon M family.
Release history
Reception
In its 2012 Best of Open Source Software Awards, InfoWorld gave Inkscape an award for being one of the best open-source desktop applications, commending its typographic controls and ability to directly edit the XML text of its documents. PC Magazine's February 2019 review was rather mixed, giving the application three out of five. It criticized the interface's graphics and lack of optimization for stylus support, the application's poor interoperability with other graphics editors, unwieldy text formatting controls, and the quality of the Mac version. However, it did praise the ability to add custom filters and extensions, the Inkscape community's passion for creating and sharing them, and the precise path and placement tools. The review concluded that whilst Inkscape "boasts outstanding features and a passionate user base for a free program ... it's not suitable for busy professionals." In January 2020, TechRadar gave Inkscape a positive rating of four stars out of five. It lauded the wide range of editing tools and support for many file formats, but noted that the application's processing can be slow. It considered Inkscape to be a good free alternative to proprietary graphics editors such as Adobe Illustrator. According to It's FOSS in July 2023, the 1.3 release of Inkscape mainly focuses on making the user's workflow more organized and efficient, with some new features making it a better alternative to Adobe Illustrator.
Gallery
Technology
Multimedia_2
null
412942
https://en.wikipedia.org/wiki/Lipopolysaccharide
Lipopolysaccharide
Lipopolysaccharide, now more commonly known as endotoxin, is a collective term for components of the outermost membrane of the cell envelope of gram-negative bacteria, such as E. coli and Salmonella, with a common structural architecture. Lipopolysaccharides (LPS) are large molecules consisting of three covalently linked parts: an outer polysaccharide termed the O-antigen, an inner core oligosaccharide, and lipid A (from which toxicity is largely derived). In current terminology, the term endotoxin is often used synonymously with LPS, although there are a few endotoxins (in the original sense of toxins that are inside the bacterial cell and are released when the cell disintegrates) that are not related to LPS, such as the so-called delta endotoxin proteins produced by Bacillus thuringiensis. Lipopolysaccharides can have substantial impacts on human health, primarily through interactions with the immune system. LPS is a potent activator of the immune system and is a pyrogen (an agent that causes fever). In severe cases, LPS can trigger a brisk host response and multiple types of acute organ failure, which can lead to septic shock. At lower levels and over a longer time period, there is evidence LPS may play an important and harmful role in autoimmunity, obesity, depression, and cellular senescence.
Discovery
The toxic activity of LPS was first discovered and termed endotoxin by Richard Friedrich Johannes Pfeiffer. He distinguished between exotoxins, toxins that are released by bacteria into the surrounding environment, and endotoxins, which are toxins "within" the bacterial cell and released only after destruction of the bacterial outer membrane. Subsequent work showed that release of LPS from gram-negative microbes does not necessarily require the destruction of the bacterial cell wall; rather, LPS is secreted as part of the normal physiological activity of membrane vesicle trafficking in the form of bacterial outer membrane vesicles (OMVs), which may also contain other virulence factors and proteins.
Functions in bacteria
LPS is a major component of the outer cell membrane of gram-negative bacteria, contributing greatly to the structural integrity of the bacteria and protecting the membrane from certain kinds of chemical attack. LPS is the most abundant antigen on the cell surface of most gram-negative bacteria, contributing up to 80% of the outer membrane of E. coli and Salmonella. LPS increases the negative charge of the cell membrane and helps stabilize the overall membrane structure. It is of crucial importance to many gram-negative bacteria, which die if the genes coding for it are mutated or removed. However, it appears that LPS is nonessential in at least some gram-negative bacteria, such as Neisseria meningitidis, Moraxella catarrhalis, and Acinetobacter baumannii. It has also been implicated in non-pathogenic aspects of bacterial ecology, including surface adhesion, bacteriophage sensitivity, and interactions with predators such as amoebae. LPS is also required for the functioning of omptins, a class of bacterial protease.
Composition
LPS are amphipathic and composed of three parts: the O antigen (or O polysaccharide), which is hydrophilic; the core oligosaccharide (also hydrophilic); and lipid A, the hydrophobic domain.
O-antigen
The repetitive glycan polymer contained within an LPS is referred to as the O antigen, O polysaccharide, or O side-chain of the bacteria.
The O antigen is attached to the core oligosaccharide and comprises the outermost domain of the LPS molecule. The structure and composition of the O chain is highly variable from strain to strain, determining the serological specificity of the parent bacterial strain; there are over 160 different O antigen structures produced by different E. coli strains. The presence or absence of O chains determines whether the LPS is considered "rough" or "smooth": full-length O-chains render the LPS smooth, whereas the absence or reduction of O-chains makes the LPS rough. Bacteria with rough LPS usually have cell membranes that are more penetrable to hydrophobic antibiotics, since a rough LPS is more hydrophobic. The O antigen is exposed on the very outer surface of the bacterial cell and, as a consequence, is a target for recognition by host antibodies.
Core
The core domain always contains an oligosaccharide component that attaches directly to lipid A and commonly contains sugars such as heptose and 3-deoxy-D-manno-oct-2-ulosonic acid (also known as KDO, keto-deoxyoctulosonate). The core oligosaccharide is less variable in its structure and composition, a given core structure being common to large groups of bacteria. The LPS cores of many bacteria also contain non-carbohydrate components, such as phosphate, amino acid, and ethanolamine substituents.
Lipid A
Lipid A is, in normal circumstances, a phosphorylated glucosamine disaccharide decorated with multiple fatty acids. These hydrophobic fatty acid chains anchor the LPS into the bacterial membrane, and the rest of the LPS projects from the cell surface. The lipid A domain is the most bioactive part and is responsible for much of the toxicity of Gram-negative bacteria. When bacterial cells are lysed by the immune system, fragments of membrane containing lipid A may be released into the circulation, causing fever, diarrhea, and possibly fatal endotoxic septic shock (a form of septic shock). The lipid A moiety is a highly conserved component of the LPS; however, its structure varies among bacterial species, and this structure largely defines the degree and nature of the overall host immune activation.
Lipooligosaccharides
The "rough form" of LPS has a lower molecular weight due to the absence of the O polysaccharide. In its place is a short oligosaccharide: this form is known as lipooligosaccharide (LOS), and is a glycolipid found in the outer membrane of some types of Gram-negative bacteria, such as Neisseria spp. and Haemophilus spp. LOS plays a central role in maintaining the integrity and functionality of the outer membrane of the Gram-negative cell envelope. LOS also plays an important role in the pathogenesis of certain bacterial infections because LOS molecules are capable of acting as immunostimulators and immunomodulators. Furthermore, LOS molecules are responsible for the ability of some bacterial strains to display molecular mimicry and antigenic diversity, aiding in the evasion of host immune defenses and thus contributing to the virulence of these bacterial strains. In the case of Neisseria meningitidis, the lipid A portion of the molecule has a symmetrical structure, and the inner core is composed of 3-deoxy-D-manno-2-octulosonic acid (KDO) and heptose (Hep) moieties. The outer core oligosaccharide chain varies depending on the bacterial strain.
LPS detoxification
A highly conserved host enzyme called acyloxyacyl hydrolase (AOAH) may detoxify LPS when it enters, or is produced in, animal tissues. It may also convert LPS in the intestine into an LPS inhibitor.
Neutrophils, macrophages and dendritic cells produce this lipase, which inactivates LPS by removing the two secondary acyl chains from lipid A to produce tetraacyl LPS. If mice are given LPS parenterally, those that lack AOAH develop high titers of non-specific antibodies, develop prolonged hepatomegaly, and experience prolonged endotoxin tolerance. LPS inactivation may be required for animals to restore homeostasis after parenteral LPS exposure. Although mice have many other mechanisms for inhibiting LPS signaling, none is able to prevent these changes in animals that lack AOAH. Dephosphorylation of LPS by intestinal alkaline phosphatase can reduce the severity of Salmonella typhimurium and Clostridioides difficile infection, restoring normal gut microbiota. Alkaline phosphatase prevents intestinal inflammation (and "leaky gut") from bacteria by dephosphorylating the lipid A portion of LPS.
Biosynthesis and transport
The entire process of making LPS starts with a molecule called lipid A-Kdo2, which is first created on the surface of the bacterial cell's inner membrane. Additional sugars are then added to this molecule on the inner membrane before it is moved to the space between the inner and outer membranes (the periplasmic space) with the help of a protein called MsbA. The O-antigen, another part of LPS, is made by special enzyme complexes on the inner membrane. It is then moved to the outer membrane through three different systems: one is Wzy-dependent, another relies on ABC transporters, and the third involves a synthase-dependent process. Ultimately, LPS is transported to the outer membrane by a membrane-to-membrane bridge of lipopolysaccharide transport (Lpt) proteins. This transporter is a potential antibiotic target.
Biological effects on hosts infected with Gram-negative bacteria
LPS storage in the body
The human body carries endogenous stores of LPS. The epithelial surfaces are colonized by a complex microbial flora (including gram-negative bacteria), which outnumber human cells by a factor of 10 to 1, and gram-negative bacteria shed endotoxins. This host-microbial interaction is a symbiotic relationship which plays a critical role in systemic immunologic homeostasis. When it is disrupted, it can lead to diseases such as endotoxemia and endotoxic septic shock.
Immune response
LPS acts as the prototypical endotoxin because it binds the CD14/TLR4/MD2 receptor complex in many cell types, but especially in monocytes, dendritic cells, macrophages and B cells, which promotes the secretion of pro-inflammatory cytokines, nitric oxide, and eicosanoids. Bruce Beutler was awarded a portion of the 2011 Nobel Prize in Physiology or Medicine for his work demonstrating that TLR4 is the LPS receptor. As part of the cellular stress response, superoxide is one of the major reactive oxygen species induced by LPS in various cell types that express TLR (toll-like receptor). LPS is also an exogenous pyrogen (fever-inducing substance). LPS function has been under experimental research for several years due to its role in activating many transcription factors. LPS also produces many types of mediators involved in septic shock. Among mammals, humans are much more sensitive to LPS than other primates and other animals (e.g., mice). A dose of 1 μg/kg induces shock in humans, but mice will tolerate a dose up to a thousand times higher. This may relate to differences in the level of circulating natural antibodies between the two species.
It may also be linked to multiple immune tactics against pathogens, part of a multi-faceted anti-microbial strategy shaped by human behavioral changes over our species' evolution (e.g., meat eating, agricultural practices, and smoking). Said et al. showed that LPS causes an IL-10-dependent inhibition of CD4 T-cell expansion and function by up-regulating PD-1 levels on monocytes, which leads to IL-10 production by monocytes after binding of PD-1 by PD-L1. Endotoxins are in large part responsible for the dramatic clinical manifestations of infections with pathogenic Gram-negative bacteria, such as Neisseria meningitidis, the pathogen that causes meningococcal disease, including meningococcemia, Waterhouse–Friderichsen syndrome, and meningitis. Portions of the LPS from several bacterial strains have been shown to be chemically similar to human host cell surface molecules; the ability of some bacteria to present molecules on their surface which are chemically identical or similar to the surface molecules of some types of host cells is termed molecular mimicry. For example, in Neisseria meningitidis L2,3,5,7,9, the terminal tetrasaccharide portion of the oligosaccharide (lacto-N-neotetraose) is the same tetrasaccharide as that found in paragloboside, a precursor for ABH glycolipid antigens found on human erythrocytes. In another example, the terminal trisaccharide portion (lactotriaose) of the oligosaccharide from pathogenic Neisseria spp. LOS is also found in lactoneoseries glycosphingolipids from human cells. Most meningococci from groups B and C, as well as gonococci, have been shown to have this trisaccharide as part of their LOS structure. The presence of these human cell surface 'mimics' may, in addition to acting as a 'camouflage' from the immune system, play a role in the abolishment of immune tolerance when infecting hosts with certain human leukocyte antigen (HLA) genotypes, such as HLA-B35. LPS can be sensed directly by hematopoietic stem cells (HSCs) through binding to TLR4, causing them to proliferate in reaction to a systemic infection. This response activates TLR4-TRIF-ROS-p38 signaling within the HSCs, and sustained TLR4 activation can cause proliferative stress, impairing their competitive repopulating ability. Infection of mice with S. typhimurium showed similar results, validating the experimental model in vivo as well.
Effect of variability on immune response
O-antigens (the outer carbohydrates) are the most variable portion of the LPS molecule, imparting antigenic specificity. In contrast, lipid A is the most conserved part. However, lipid A composition may also vary (e.g., in the number and nature of acyl chains even within or between genera). Some of these variations may impart antagonistic properties to these LPS. For example, diphosphoryl lipid A of Rhodobacter sphaeroides (RsDPLA) is a potent antagonist of LPS in human cells, but is an agonist in hamster and equine cells. It has been speculated that conical lipid A (e.g., from E. coli) is more agonistic, while less conical lipid A like that of Porphyromonas gingivalis may activate a different signal (TLR2 instead of TLR4), and completely cylindrical lipid A like that of Rhodobacter sphaeroides is antagonistic to TLRs. In general, LPS gene clusters are highly variable between different strains, subspecies, and species of bacterial pathogens of plants and animals.
Normal human blood serum contains anti-LOS antibodies that are bactericidal, and patients that have infections caused by serotypically distinct strains possess anti-LOS antibodies that differ in their specificity compared with normal serum. These differences in humoral immune response to different LOS types can be attributed to the structure of the LOS molecule, primarily within the structure of its oligosaccharide portion. In Neisseria gonorrhoeae it has been demonstrated that the antigenicity of LOS molecules can change during an infection due to the ability of these bacteria to synthesize more than one type of LOS, a characteristic known as phase variation. Additionally, Neisseria gonorrhoeae, as well as Neisseria meningitidis and Haemophilus influenzae, are capable of further modifying their LOS in vitro, for example through sialylation (modification with sialic acid residues), and as a result are able to increase their resistance to complement-mediated killing, or even down-regulate complement activation or evade the effects of bactericidal antibodies. Sialylation may also contribute to hindered neutrophil attachment and phagocytosis by immune system cells, as well as a reduced oxidative burst. Haemophilus somnus, a pathogen of cattle, has also been shown to display LOS phase variation, a characteristic which may help in the evasion of bovine host immune defenses. Taken together, these observations suggest that variations in bacterial surface molecules such as LOS can help the pathogen evade both the humoral (antibody and complement-mediated) and the cell-mediated (killing by neutrophils, for example) host immune defenses.
Non-canonical pathways of LPS recognition
Recently, it was shown that in addition to TLR4-mediated pathways, certain members of the family of transient receptor potential ion channels recognize LPS. LPS-mediated activation of TRPA1 was shown in mice and Drosophila melanogaster flies. At higher concentrations, LPS activates other members of the sensory TRP channel family as well, such as TRPV1, TRPM3 and, to some extent, TRPM8. LPS is recognized by TRPV4 on epithelial cells. TRPV4 activation by LPS was necessary and sufficient to induce nitric oxide production with a bactericidal effect.
Testing
Lipopolysaccharide is a significant factor that makes bacteria harmful, and it helps categorize them into different groups based on their structure and function. This makes LPS a useful marker for telling apart various Gram-negative bacteria. Swiftly identifying and understanding the types of pathogens involved is crucial for promptly managing and treating infections. Since LPS is the main trigger for the immune response in our cells, it acts as an early signal of an acute infection. Therefore, LPS testing is more specific and meaningful than many other serological tests. The current methods for testing LPS are quite sensitive, but many of them struggle to differentiate between different LPS groups. Additionally, the nature of LPS, which has both water-attracting and water-repelling properties (amphiphilic), makes it challenging to develop sensitive and user-friendly tests. The typical detection methods rely on identifying the lipid A part of LPS, because lipid A is very similar among different bacterial species and serotypes. LPS testing techniques fall into several often-overlapping categories, including in vivo tests, in vitro tests, modified immunoassays, biological assays, and chemical assays.
Endotoxin Activity Assay
Because LPS is very difficult to measure in whole blood, and because most LPS is bound to proteins and complement, the Endotoxin Activity Assay (EAA) was developed and cleared by the US FDA in 2003. EAA is a rapid in vitro chemiluminescent immunodiagnostic test. It utilizes a specific monoclonal antibody to measure the endotoxin activity in EDTA whole blood specimens. This assay uses the biological response of the neutrophils in a patient's blood to an immunological complex of endotoxin and exogenous antibody; the resulting chemiluminescent reaction creates an emission of light. The amount of chemiluminescence is proportional to the logarithm of the LPS concentration in the sample and is a measure of the endotoxin activity in the blood. The assay reacts specifically with the lipid A moiety of LPS of Gram-negative bacteria and does not cross-react with cell wall constituents of Gram-positive bacteria or other microorganisms.
Pathophysiology
LPS is a powerful toxin that, once in the body, triggers inflammation by binding to cell receptors. Excessive LPS in the blood, endotoxemia, may cause a highly lethal form of sepsis known as endotoxic septic shock. This condition includes symptoms that fall along a continuum of pathophysiologic states, starting with a systemic inflammatory response syndrome (SIRS) and ending in multiorgan dysfunction syndrome (MODS) before death. Early symptoms include rapid heart rate, quick breathing, temperature changes, and blood clotting issues, resulting in blood vessels widening and reduced blood volume, leading to cellular dysfunction. Recent research indicates that even small LPS exposure is associated with autoimmune diseases and allergies. High levels of LPS in the blood can lead to metabolic syndrome, increasing the risk of conditions like diabetes, heart disease, and liver problems. LPS also plays a crucial role in symptoms caused by infections from harmful bacteria, including severe conditions like Waterhouse–Friderichsen syndrome, meningococcemia, and meningitis. Certain bacteria can adapt their LPS to cause long-lasting infections in the respiratory and digestive systems. Recent studies have shown that LPS disrupts cell membrane lipids, affecting cholesterol and metabolism, potentially leading to high cholesterol, abnormal blood lipid levels, and non-alcoholic fatty liver disease. In some cases, LPS can interfere with toxin clearance, which may be linked to neurological issues.
Health effects
In general, the health effects of LPS are due to its abilities as a potent activator and modulator of the immune system, especially its inducement of inflammation. LPS is directly cytotoxic and is highly immunostimulatory: as host immune cells recognize LPS, complement is strongly activated. Complement activation and a rising anti-inflammatory response can lead to immune cell dysfunction, immunosuppression, widespread coagulopathy, and serious tissue damage, and can progress to multi-system organ failure and death.
Endotoxemia
The presence of endotoxins in the blood is called endotoxemia. A high level of endotoxemia can lead to septic shock, or more specifically endotoxic septic shock, while a lower concentration of endotoxins in the bloodstream is called metabolic endotoxemia. Endotoxemia is associated with obesity, diet, cardiovascular diseases, and diabetes, and host genetics may also have an effect.
Moreover, endotoxemia of intestinal origin, especially at the host-pathogen interface, is considered to be an important factor in the development of alcoholic hepatitis, which is likely to develop on the basis of small bowel bacterial overgrowth syndrome and increased intestinal permeability. Lipid A may cause uncontrolled activation of mammalian immune systems, with production of inflammatory mediators that may lead to endotoxic septic shock. This inflammatory reaction is primarily mediated by Toll-like receptor 4, which is responsible for immune system cell activation. Damage to the endothelial layer of blood vessels caused by these inflammatory mediators can lead to capillary leak syndrome, dilation of blood vessels and a decrease in cardiac function, and can further worsen shock. LPS is also a potent activator of complement. Uncontrolled complement activation may trigger destructive endothelial damage leading to disseminated intravascular coagulation (DIC) or atypical hemolytic uremic syndrome (aHUS), with injury to various organs such as the kidneys and lungs. The skin can show the effects of vascular damage, often coupled with depletion of coagulation factors, in the form of petechiae, purpura and ecchymoses. The limbs can also be affected, sometimes with devastating consequences such as the development of gangrene, requiring subsequent amputation. Loss of function of the adrenal glands can cause adrenal insufficiency, and additional hemorrhage into the adrenals causes Waterhouse–Friderichsen syndrome, both of which can be life-threatening. It has also been reported that gonococcal LOS can cause damage to human fallopian tubes.
Treatment of endotoxemia
Toraymyxin is a widely used extracorporeal endotoxin removal therapy based on direct hemoadsorption (also referred to as hemoperfusion). It is a polystyrene-derived cartridge with molecules of polymyxin B (PMX-B) covalently bound to mesh fibers contained within it. Polymyxins are cyclic cationic polypeptide antibiotics derived from Bacillus polymyxa with effective antimicrobial activity against Gram-negative bacteria, but their intravenous clinical use has been limited by their nephrotoxic and neurotoxic side effects. The extracorporeal use of the Toraymyxin cartridge allows PMX-B to bind lipid A in a very stable interaction with its hydrophobic residues, neutralizing endotoxins as the blood is filtered through the extracorporeal circuit inside the cartridge, thus reversing endotoxemia and avoiding its toxic systemic effects.
Auto-immune disease
The molecular mimicry of some LOS molecules is thought to cause autoimmune-based host responses, such as flareups of multiple sclerosis. Other examples of bacterial mimicry of host structures via LOS are found with the bacteria Helicobacter pylori and Campylobacter jejuni, organisms which cause gastrointestinal disease in humans, and Haemophilus ducreyi, which causes chancroid. Certain C. jejuni LPS serotypes (attributed to certain tetra- and pentasaccharide moieties of the core oligosaccharide) have also been implicated in Guillain–Barré syndrome and a variant of Guillain–Barré called Miller Fisher syndrome.
Link to obesity
Epidemiological studies have shown that increased endotoxin load, which can be a result of increased populations of endotoxin-producing bacteria in the intestinal tract, is associated with certain obesity-related patient groups.
Other studies have shown that purified endotoxin from Escherichia coli can induce obesity and insulin resistance when injected into germ-free mouse models. A more recent study has uncovered a potentially contributing role for Enterobacter cloacae B29 in obesity and insulin resistance in a human patient. The presumed mechanism for the association of endotoxin with obesity is that endotoxin induces an inflammation-mediated pathway accounting for the observed obesity and insulin resistance. Bacterial genera associated with endotoxin-related obesity effects include Escherichia and Enterobacter.
Depression
There is experimental and observational evidence that LPS might play a role in depression. Administration of LPS in mice can lead to depressive symptoms, and there seem to be elevated levels of LPS in some people with depression. Inflammation may sometimes play a role in the development of depression, and LPS is pro-inflammatory.
Cellular senescence
Inflammation induced by LPS can induce cellular senescence, as has been shown for lung epithelial cells and microglial cells (the latter leading to neurodegeneration).
Role as contaminant in biotechnology and research
Lipopolysaccharides are frequent contaminants in plasmid DNA prepared from bacteria, or in proteins expressed from bacteria, and must be removed from the DNA or protein to avoid contaminating experiments and to avoid toxicity of products manufactured using industrial fermentation. Ovalbumin is frequently contaminated with endotoxins. Ovalbumin is one of the most extensively studied proteins in animal models and also an established model allergen for airway hyper-responsiveness (AHR). Commercially available ovalbumin that is contaminated with LPS can falsify research results, as it does not accurately reflect the effect of the protein antigen on animal physiology. In pharmaceutical production, it is necessary to remove all traces of endotoxin from drug product containers, as even small amounts of endotoxin will cause illness in humans. A depyrogenation oven is used for this purpose; temperatures in excess of 300 °C are required to fully break down LPS. The standard assay for detecting the presence of endotoxin is the Limulus amebocyte lysate (LAL) assay, utilizing blood from the horseshoe crab (Limulus polyphemus). Very low levels of LPS can cause coagulation of the limulus lysate due to powerful amplification through an enzymatic cascade. However, due to the dwindling population of horseshoe crabs and the existence of factors that interfere with the LAL assay, efforts have been made to develop alternative assays, the most promising being ELISA tests using a recombinant version of a protein in the LAL assay, Factor C.
Biology and health sciences
Lipids
Biology
413274
https://en.wikipedia.org/wiki/Vehicle%20for%20hire
Vehicle for hire
A vehicle for hire is a vehicle providing private transport or shared transport for a fee, in which passengers are generally free to choose their points (or approximate points) of origin and destination, unlike public transport, and which they do not drive themselves, as in car rental and carsharing. They may be offered via a ridesharing company.
Vehicles
Vehicles for hire include taxicabs, pulled rickshaws, cycle rickshaws, auto rickshaws, motorcycle taxis, Zémidjans, okadas, boda bodas, sedan services, limousines, party buses, carriages (including hackney carriages, fiacres, and caleches), pet taxis, water taxis, and air charters. Share taxis, paratransit, dollar vans, marshrutkas, dolmuş, nanny vans, demand-responsive transport, public light buses, and airport buses operate along fixed routes, but offer some flexibility in the point of origin and/or destination.
Notable companies
Some of the largest vehicle-for-hire companies include Uber, Ola Cabs, Bolt, DiDi, and Grab.
Technology
Motorized road transport
null
413340
https://en.wikipedia.org/wiki/Sporophyte
Sporophyte
A sporophyte is the diploid multicellular stage in the life cycle of a plant or alga which produces asexual spores. This stage alternates with a multicellular haploid gametophyte phase.
Life cycle
The sporophyte develops from the zygote produced when a haploid egg cell is fertilized by a haploid sperm, and each sporophyte cell therefore has a double set of chromosomes, one set from each parent. All land plants, and most multicellular algae, have life cycles in which a multicellular diploid sporophyte phase alternates with a multicellular haploid gametophyte phase. In the seed plants, the largest groups of which are the gymnosperms and flowering plants (angiosperms), the sporophyte phase is more prominent than the gametophyte, and is the familiar green plant with its roots, stem, leaves and cones or flowers. In flowering plants, the gametophytes are very reduced in size, and are represented by the germinated pollen and the embryo sac. The sporophyte produces spores (hence the name) by meiosis, a process also known as "reduction division" that reduces the number of chromosomes in each spore mother cell by half. The resulting meiospores develop into gametophytes. Both the spores and the resulting gametophyte are haploid, meaning they only have one set of chromosomes. The mature gametophyte produces male or female gametes (or both) by mitosis. The fusion of male and female gametes produces a diploid zygote which develops into a new sporophyte. This cycle is known as alternation of generations or alternation of phases.
Examples
Bryophytes (mosses, liverworts and hornworts) have a dominant gametophyte phase on which the adult sporophyte is dependent for nutrition. The embryo sporophyte develops by cell division of the zygote within the female sex organ, or archegonium, and in its early development is therefore nurtured by the gametophyte. Because this embryo-nurturing feature of the life cycle is common to all land plants, they are known collectively as the embryophytes. Most algae have dominant gametophyte generations, but in some species the gametophytes and sporophytes are morphologically similar (isomorphic). An independent sporophyte is the dominant form in all clubmosses, horsetails, ferns, gymnosperms, and angiosperms that have survived to the present day. Early land plants had sporophytes that produced identical spores (isosporous or homosporous), but the ancestors of the gymnosperms evolved complex heterosporous life cycles, in which the spores producing male and female gametophytes were of different sizes: the female megaspores tended to be larger, and fewer in number, than the male microspores.
Evolutionary history
During the Devonian period, several plant groups independently evolved heterospory and subsequently the habit of endospory, in which the gametophytes develop in miniaturized form inside the spore wall. By contrast, in exosporous plants, including modern ferns, the gametophytes break the spore wall open on germination and develop outside it. The megagametophytes of endosporic plants such as the seed ferns developed within the sporangia of the parent sporophyte, producing a miniature multicellular female gametophyte complete with female sex organs, or archegonia. The oocytes were fertilized in the archegonia by free-swimming flagellate sperm produced by windborne miniaturized male gametophytes in the form of pre-pollen.
The resulting zygote developed into the next sporophyte generation while still retained within the pre-ovule, the single large female meiospore or megaspore contained in the modified sporangium or nucellus of the parent sporophyte. The evolution of heterospory and endospory were among the earliest steps in the evolution of seeds of the kind produced by gymnosperms and angiosperms today. The rRNA genes seem to escape the global methylation machinery in bryophytes, unlike in seed plants.
Biology and health sciences
Plant reproduction
null
413522
https://en.wikipedia.org/wiki/Box
Box
A box (plural: boxes) is a container with rigid sides used for the storage or transportation of its contents. Most boxes have flat, parallel, rectangular sides (typically rectangular prisms). Boxes can be very small (like a matchbox) or very large (like a shipping box for furniture) and can be used for a variety of purposes, from functional to decorative. Boxes may be made of a variety of materials, both durable (such as wood and metal) and non-durable (such as corrugated fiberboard and paperboard). Corrugated metal boxes are commonly used as shipping containers. Boxes may be closed and shut with flaps, doors, or a separate lid. They can be secured shut with adhesives, tapes, or string, or with more decorative or elaborately functional mechanisms, such as catches, clasps or locks.
Packaging
Several types of boxes are used in packaging and storage. A corrugated box is a shipping container made from corrugated fiberboard, most commonly used to transport products from a warehouse during distribution. Corrugated boxes are also known as cartons, cases, and cardboard boxes in various regions. Corrugated boxes are rated based on the strength of their material or their carrying capacity. Corrugated boxes are also used as product packaging or in point-of-sale displays. Folding cartons (sometimes known as a box) are paperboard boxes manufactured with a folding lid. These are used to package a wide range of goods, and can be used either for one-time (non-resealable) usage or as a storage box for more permanent use. Folding cartons are first printed (if necessary) before being die-cut and scored to form a blank; these are then transported and stored flat, before being constructed at the point of use. A gift box is a variant on the folding carton, used for birthday or Christmas gifts. Gable boxes are paperboard cartons used for liquids. Setup boxes (also known as rigid paperboard boxes) are made of stiff paperboard and are permanently glued together with paper skins that can be printed or colored. Unlike folding cartons, these are assembled at the point of manufacture and transported already constructed ("set up"). Setup boxes are more expensive than folding boxes and are typically used for protecting high-value items such as cosmetics, watches or smaller consumer electronics. Crates are heavy-duty shipping containers. Originally made of wood, crates are distinct from wooden boxes, which are also used as heavy-duty shipping containers: a wooden crate must have all six of its sides put in place to achieve its rated strength, whereas the strength of a wooden box is rated based on the weight it can carry before the top or opening is installed. A wooden wine box, or wine crate, originally used for shipping and storing expensive wines, is a variant of the wooden box now used for decorative or promotional purposes, or as a storage box during shipping. Bulk boxes are large boxes often used in industrial environments, sized to fit on a pallet. An ammunition box is a metal can or box for ammunition. Depending on locale and usage, the terms carton and box are sometimes used interchangeably. The invention of large steel intermodal shipping containers has helped advance the globalization of commerce.
Storage
Boxes for storing various items can often be very decorative, as they are intended for permanent use and are sometimes put on display in certain locations.
The following are some types of storage boxes:
A jewelry (American English) or jewellery (British English) box is a box for trinkets or jewels. It can take a very modest form, with paper covering and lining, be covered in leather and lined with satin, or be larger and more highly decorated.
A hat box is used for storing or transporting a hat. Hat boxes are often cylindrical or oval.
A humidor is a special box for storing cigars at the proper humidity.
A "strong box", or safe, is a secure lockable box for storing money or other valuable items. The term "strong box" is sometimes used for safes that are not portable but installed in a wall or floor.
A toolbox is used for carrying tools of various kinds. Toolboxes are usually intended for portability rather than just storage.
A toy box is a box for storing toys.
A box file is used in offices for storing papers and smaller files.
Gallery
Technology
Containers
null
413928
https://en.wikipedia.org/wiki/Ascariasis
Ascariasis
Ascariasis is a disease caused by the parasitic roundworm Ascaris lumbricoides. Infections cause no symptoms in more than 85% of cases, especially if the number of worms is small. Symptoms increase with the number of worms present and may include shortness of breath and fever at the beginning of the disease. These may be followed by symptoms of abdominal swelling, abdominal pain, and diarrhea. Children are most commonly affected, and in this age group the infection may also cause poor weight gain, malnutrition, and learning problems. Infection occurs by ingesting food or drink contaminated with Ascaris eggs from feces. The eggs hatch in the intestines, and the larvae burrow through the gut wall and migrate to the lungs via the blood. There they break into the alveoli and pass up the trachea, where they are coughed up and may be swallowed. The larvae then pass through the stomach a second time into the intestine, where they become adult worms. It is a type of soil-transmitted helminthiasis and part of a group of diseases called helminthiases. Prevention is by improved sanitation, which includes improving access to toilets and proper disposal of feces. Handwashing with soap appears protective. In areas where more than 20% of the population is affected, treating everyone at regular intervals is recommended. Recurring infections are common. There is no vaccine. Treatments recommended by the World Health Organization are the medications albendazole, mebendazole, levamisole, or pyrantel pamoate. Other effective agents include tribendimidine and nitazoxanide. About 0.8 to 1.2 billion people globally have ascariasis, with the most heavily affected populations being in sub-Saharan Africa, Latin America, and Asia. This makes ascariasis the most common form of soil-transmitted helminthiasis. As of 2010 it caused about 2,700 deaths a year, down from 3,400 in 1990. Another type of Ascaris infects pigs. Ascariasis is classified as a neglected tropical disease.
Signs and symptoms
In populations where worm infections are widespread, it is common to find that most people are infected by a small number of worms, while a small number of people are heavily infected. This is characteristic of many types of worm infections. Those people who are infected with only a small number of worms usually have no symptoms.
Migrating larvae
As larval stages travel through the body, they may cause visceral damage, peritonitis and inflammation, enlargement of the liver or spleen, and inflammation of the lungs. Pulmonary manifestations take place during larval migration and may present as Loeffler's syndrome, a transient respiratory illness associated with blood eosinophilia and pulmonary infiltrates with radiographic shadowing.
Intestinal blockage
The worms can occasionally cause intestinal blockage when large numbers get tangled into a bolus, or they may migrate from the small intestine, which may require surgery. More than 796 A. lumbricoides worms weighing up to were recovered at autopsy from a two-year-old South African girl. The worms had caused torsion and gangrene of the ileum, which was interpreted as the cause of death. The worms lack teeth; however, they can rarely cause bowel perforations by inducing volvulus and closed-loop obstruction.
Bowel obstruction
Bowel obstruction may occur in up to 0.2 per 1000 people per year. A worm may block the ampulla of Vater or enter the main pancreatic duct, resulting in acute pancreatitis with raised serum levels of amylase and lipase.
Occasionally, a worm can travel through the biliary tree and even into the gallbladder, causing acute cholangitis or acute cholecystitis.

Allergies

Ascariasis may result in allergies to shrimp and dust mites due to the shared antigen tropomyosin; this has not been confirmed in the laboratory.

Malnutrition

The worms in the intestine may cause malabsorption and anorexia, which contribute to malnutrition. The malabsorption may be due to a loss of brush border enzymes, erosion and flattening of the villi, and inflammation of the lamina propria.

Others

Ascaris have an aversion to some general anesthetics and may exit the body, sometimes through the mouth, when an infected individual is put under general anesthesia.

Cause

Transmission

The source of infection is from objects contaminated with fecal matter containing eggs. Ingestion of infective eggs from soil contaminated with human feces, or from contaminated vegetables and water, is the primary route of infection. Infectious eggs may occur on objects such as hands, money, and furniture. Transmission from human to human by direct contact is impossible. Transmission also occurs through the use of municipal recycled wastewater on crop fields. This is quite common in emerging industrial economies and poses serious risks for local crop sales and exports of contaminated vegetables. A 1986 outbreak of ascariasis in Italy was traced to irresponsible wastewater recycling used to grow Balkan vegetable exports. The number of ova (eggs) in sewage, or in crops irrigated with raw or partially treated sewage, is a measure of the degree of ascariasis incidence. For example, a study published in 1992 detected over 100 eggs per litre in municipal wastewater in Riyadh, Saudi Arabia, while counts in Czechoslovakia were as high as 240–1050 eggs per litre. In one field study in Marrakech, Morocco, where raw sewage is used to fertilize crop fields, Ascaris eggs were detected at the rate of 0.18 eggs/kg in potatoes, 0.27 eggs/kg in turnips, 4.63 eggs/kg in mint, 0.7 eggs/kg in carrots, and 1.64 eggs/kg in radishes. A similar study in the same area showed that 73% of children working on these farms were infected with helminths, particularly Ascaris, probably as a result of exposure to the raw sewage.

Lifecycle

The first appearance of eggs in stools is 60–70 days after infection. In larval ascariasis, symptoms occur 4–16 days after infection. The final symptoms are gastrointestinal discomfort, colic and vomiting, fever, and observation of live worms in stools. Some patients may have pulmonary symptoms or neurological disorders during the migration of the larvae. There are generally few or no symptoms. A bolus of worms may obstruct the intestine; migrating larvae may cause pneumonitis and eosinophilia. Adult worms have a lifespan of 1–2 years, which means that individuals may be infected all their lives as worms die and new worms are acquired. Eggs can survive potentially for 15 years, and a single worm may produce 200,000 eggs a day. The worms maintain their position by swimming against the intestinal flow.

Mechanism

Ascaris takes most of its nutrients from the partially digested host food in the intestine. There is some evidence that it can secrete enzyme inhibitors, presumably to protect itself from digestion by the host's enzymes. Children are often more severely affected.

Diagnosis

Most diagnoses are made by identifying the appearance of the worm or eggs in feces. Due to the large quantity of eggs laid, diagnosis can generally be made using only one or two fecal smears.
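The arithmetic behind this is straightforward. The sketch below is a rough illustration rather than a clinical calculation: it combines the article's figure of 200,000 eggs per day per worm with a worm burden and daily stool mass that are invented for illustration, to show why even a tiny smear contains detectable eggs.

```python
# Rough illustration of why fecal smears readily detect ascariasis.
# The egg-output figure (200,000 eggs/day per worm) is from the text;
# the worm burden and daily stool mass are assumptions for illustration.

EGGS_PER_WORM_PER_DAY = 200_000  # from the article (female worms)
worm_burden = 10                 # assumed light-to-moderate infection
stool_grams_per_day = 150        # assumed typical adult stool output

eggs_per_gram = worm_burden * EGGS_PER_WORM_PER_DAY / stool_grams_per_day
smear_grams = 0.0417             # mass of a standard Kato-Katz smear

print(f"{eggs_per_gram:,.0f} eggs per gram of stool")        # ~13,333
print(f"{eggs_per_gram * smear_grams:,.0f} eggs per smear")  # ~556
```

Even under these conservative assumptions, a single smear would contain hundreds of eggs, which is consistent with the claim that one or two smears usually suffice.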
The diagnosis is usually incidental, when the host passes a worm in the stool or vomit. The eggs can be seen in a smear of fresh feces examined on a glass slide under a microscope, and there are various techniques to concentrate them first or increase their visibility, such as the ether sedimentation method or the Kato technique. The eggs have a characteristic shape: they are oval with a thick, mamillated shell (covered with rounded mounds or lumps), measuring 35–50 micrometres in diameter and 40–70 micrometres in length. During pulmonary disease, larvae may be found in fluids aspirated from the lungs. White blood cell counts may demonstrate peripheral eosinophilia; this is common in many parasitic infections and is not specific to ascariasis. On X-ray, worms may appear as filling defects 15–35 cm long, sometimes with a whirled appearance indicating a bolus of worms.

Prevention

Prevention is by improved access to sanitation, which includes the use of properly functioning and clean toilets by all community members as one important aspect. Handwashing with soap may be protective; however, there is no evidence it affects the severity of the disease. Eliminating the use of untreated human faeces as fertilizer is also important. In areas where more than 20% of the population is affected, treating everyone is recommended. This has a cost of about 2 to 3 cents per person per treatment. This is known as mass drug administration and is often carried out among school-age children. For this purpose, broad-spectrum benzimidazoles such as mebendazole and albendazole are the drugs of choice recommended by WHO.

Treatment

Medications

Medications that are used to kill roundworms are called ascaricides. Those recommended by the World Health Organization for ascariasis are albendazole, mebendazole, levamisole, and pyrantel pamoate. Single doses of albendazole, mebendazole, and ivermectin are effective against ascariasis. They are effective at removing parasites and eggs from the intestines. Other effective agents include tribendimidine and nitazoxanide. Pyrantel pamoate may induce intestinal obstruction in patients with a heavy worm load. Albendazole is contraindicated during pregnancy and in children under two years of age. Thiabendazole may cause migration of the worm into the esophagus, so it is usually combined with piperazine. Piperazine is a flaccid paralyzing agent that blocks the response of Ascaris muscle to acetylcholine, which immobilizes the worm. It prevents migration when treatment is accomplished with weak drugs such as thiabendazole. If used by itself, it causes the worm to be passed out in the feces, and it may be used when worms have caused blockage of the intestine or the biliary duct. Corticosteroids can treat some of the symptoms, such as inflammation.

Other medications

Hexylresorcinol, effective in a single dose. During the 1940s this compound, as Crystoids brand pills, was the treatment of choice; patients were instructed not to chew the Crystoids to prevent burns to the mucous membranes. A saline cathartic would be administered several hours later.
Santonin, more toxic than hexylresorcinol and often only partly effective.
Oil of chenopodium, more toxic than hexylresorcinol.

Surgery

In some cases with severe infestation, the worms may cause bowel obstruction, requiring emergency surgery. The bowel obstruction may be due to the number of worms in the bowel or twisting of the bowel. During the surgery the worms may be manually removed.

Prognosis

It is rare for infections to be life-threatening.
Epidemiology

Regions

Ascariasis is common in tropical and subtropical regions, as well as in regions that lack proper sanitation. It is rare to find traces of the infection in developed or urban regions.

Infection estimates

Roughly 0.8–1.3 billion individuals are infected with this intestinal worm, primarily in Africa and Asia. About 120 to 220 million of these cases are symptomatic.

Deaths

As of 2010, ascariasis caused about 2,700 directly attributable deaths, down from 3,400 in 1990. The indirectly attributable deaths due to the malnutrition link may be much higher.

Research

Mouse and pig animal models are used to study Ascaris infection.

Other animals

Ascariasis is more common in young animals than mature ones, with signs including unthriftiness, potbelly, rough hair coat, and slow growth. In pigs, the infection is caused by Ascaris suum. It is characterized by poor weight gain, leading to financial losses for the farmer. In horses and other equines, the equine roundworm is Parascaris equorum.

Society and culture

The English kings Richard III and Henry VIII both had ascariasis.
Biology and health sciences
Helminthic diseases and infestations
Health
414144
https://en.wikipedia.org/wiki/Sterilization%20%28microbiology%29
Sterilization (microbiology)
Sterilization () refers to any process that removes, kills, or deactivates all forms of life (particularly microorganisms such as fungi, bacteria, spores, and unicellular eukaryotic organisms) and other biological agents (such as prions or viruses) present in fluid or on a specific surface or object. Sterilization can be achieved through various means, including heat, chemicals, irradiation, high pressure, and filtration. Sterilization is distinct from disinfection, sanitization, and pasteurization, in that those methods reduce rather than eliminate all forms of life and biological agents present. After sterilization, fluid or an object is referred to as being sterile or aseptic.

Applications

Foods

One of the first steps toward modernized sterilization was made by Nicolas Appert, who discovered that application of heat over a suitable period of time slowed the decay of foods and various liquids, preserving them for safe consumption for a longer time than was typical. Canning of foods is an extension of the same principle and has helped to reduce foodborne illness ("food poisoning"). Other methods of sterilizing foods include ultra-high temperature processing (which uses a shorter duration of heating), food irradiation, and high pressure (pascalization). In the context of food, sterility typically refers to commercial sterility, "the absence of microorganisms capable of growing in the food at normal non-refrigerated conditions at which the food is likely to be held during distribution and storage", according to the Codex Alimentarius.

Medicine and surgery

In general, surgical instruments and medications that enter an already aseptic part of the body (such as the bloodstream, or penetrating the skin) must be sterile. Examples of such instruments include scalpels, hypodermic needles, and artificial pacemakers. This is also essential in the manufacture of parenteral pharmaceuticals. Preparation of injectable medications and intravenous solutions for fluid replacement therapy requires not only sterility but also well-designed containers to prevent entry of adventitious agents after initial product sterilization. Most medical and surgical devices used in healthcare facilities are made of materials that are able to undergo steam sterilization. However, since 1950, there has been an increase in medical devices and instruments made of materials (e.g., plastics) that require low-temperature sterilization. Ethylene oxide gas has been used since the 1950s for heat- and moisture-sensitive medical devices. Within the past 15 years, a number of new, low-temperature sterilization systems (e.g., vaporized hydrogen peroxide, peracetic acid immersion, ozone) have been developed and are being used to sterilize medical devices.

Spacecraft

There are strict international rules to protect Solar System bodies from contamination by biological material from Earth. Standards vary depending on both the type of mission and its destination; the more likely a planet is considered to be habitable, the stricter the requirements are. Many components of instruments used on spacecraft cannot withstand very high temperatures, so techniques not requiring excessive temperatures are used as tolerated, including moderate heating, chemical sterilization, oxidation, ultraviolet, and irradiation.

Quantification

The aim of sterilization is the reduction of initially present microorganisms or other potential pathogens.
The degree of sterilization is commonly expressed by multiples of the decimal reduction time, or D-value, denoting the time needed to reduce the initial number of microorganisms to one tenth of its original value. The number of microorganisms remaining after sterilization time t is then given by N(t) = N0 × 10^(−t/D), where N0 is the initial number. The D-value is a function of sterilization conditions and varies with the type of microorganism, temperature, water activity, pH, and other factors. For steam sterilization (see below), the temperature in degrees Celsius is typically given as an index (for example, D121 denotes the D-value at 121 °C). Theoretically, the likelihood of the survival of an individual microorganism is never zero. To compensate for this, the overkill method is often used. Using the overkill method, sterilization is performed by sterilizing for longer than is required to kill the bioburden present on or in the item being sterilized. This provides a sterility assurance level (SAL) equal to the probability of a non-sterile unit. For high-risk applications, such as medical devices and injections, a sterility assurance level of at least 10^−6 is required by the United States Food and Drug Administration (FDA).

Heat

Steam

Steam sterilization, also known as moist heat sterilization, uses heated saturated steam under pressure to inactivate or kill microorganisms via denaturation of macromolecules, primarily proteins. This method is a faster process than dry heat sterilization. Steam sterilization is performed using an autoclave, sometimes called a converter or steam sterilizer. The object or liquid is placed in the autoclave chamber, which is then sealed and heated using pressurized steam to a temperature set point for a defined period of time. Steam sterilization cycles can be categorized as either pre-vacuum or gravity displacement. Gravity displacement cycles rely on the lower density of the injected steam to force cooler, denser air out of the chamber drain. In comparison, pre-vacuum cycles create a vacuum in the chamber to remove cool dry air prior to injecting saturated steam, resulting in faster heating and shorter cycle times. Typical steam sterilization cycles are between 3 and 30 minutes, but adjustments may be made depending on the bioburden of the article being sterilized, its resistance (D-value) to steam sterilization, the article's heat tolerance, and the required sterility assurance level. Following the completion of a cycle, liquids in a pressurized autoclave must be cooled slowly to avoid boiling over when the pressure is released. This may be achieved by gradually depressurizing the sterilization chamber and allowing liquids to evaporate under a negative pressure, while cooling the contents. Proper autoclave treatment will inactivate all resistant bacterial spores in addition to fungi, bacteria, and viruses, but is not expected to eliminate all prions, which vary in their heat resistance. For prion elimination, various recommendations specify cycles of 60 minutes, or of at least 18 minutes at a higher temperature. The 263K scrapie prion is inactivated relatively quickly by such sterilization procedures; however, other strains of scrapie, and strains of Creutzfeldt-Jakob disease (CJD) and bovine spongiform encephalopathy (BSE), are more resistant. Using mice as test animals, one experiment showed that heating BSE-positive brain tissue for 18 minutes resulted in only a 2.5 log decrease in prion infectivity.
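The decimal-reduction arithmetic above is easy to make concrete. The following Python sketch, using an illustrative bioburden and D-value (both invented for this example), computes the expected survivors over time and the exposure needed to reach a target sterility assurance level.

```python
import math

def survivors(n0: float, t: float, d_value: float) -> float:
    """Expected survivors after time t: N(t) = N0 * 10**(-t / D)."""
    return n0 * 10 ** (-t / d_value)

def time_for_sal(n0: float, sal: float, d_value: float) -> float:
    """Exposure time needed to bring a bioburden of N0 down to SAL."""
    return d_value * math.log10(n0 / sal)

# Illustrative assumptions: 10^6 resistant spores, D121 = 1.5 minutes.
n0, d121 = 1e6, 1.5
t = time_for_sal(n0, sal=1e-6, d_value=d121)        # 12 log reductions
print(f"{t:.0f} minutes for SAL 10^-6")             # -> 18 minutes
print(f"expected survivors: {survivors(n0, t, d121):.0e}")  # -> 1e-06
```

Going from 10^6 organisms to a 10^−6 probability of survival is a 12-log reduction, i.e. 12 D-values; sterilizing past the point where the bioburden is nominally zero is exactly what the overkill method describes.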
Most autoclaves have meters and charts that record or display information, particularly temperature and pressure as a function of time. The information is checked to ensure that the conditions required for sterilization have been met. Indicator tape is often placed on the packages of products prior to autoclaving, and some packaging incorporates indicators. The indicator changes color when exposed to steam, providing a visual confirmation. Biological indicators can also be used to independently confirm autoclave performance. Simple biological indicator devices are commercially available, based on microbial spores. Most contain spores of the heat-resistant microbe Geobacillus stearothermophilus (formerly Bacillus stearothermophilus), which is extremely resistant to steam sterilization. Biological indicators may take the form of glass vials of spores and liquid media, or of spores on strips of paper inside glassine envelopes. These indicators are placed in locations where it is difficult for steam to reach, to verify that steam is penetrating that area. For autoclaving, cleaning is critical. Extraneous biological matter or grime may shield organisms from steam penetration. Proper cleaning can be achieved through physical scrubbing, sonication, ultrasound, or pulsed air. Pressure cooking and canning are analogous to autoclaving, and when performed correctly render food sterile. To sterilize waste materials that are chiefly composed of liquid, a purpose-built effluent decontamination system can be utilized. These devices can function using a variety of sterilants, although using heat via steam is most common.

Dry

Dry heat was the first method of sterilization and is a longer process than moist heat sterilization. The destruction of microorganisms through the use of dry heat is a gradual phenomenon. With longer exposure to lethal temperatures, the number of killed microorganisms increases. Forced ventilation of hot air can be used to increase the rate at which heat is transferred to an organism and reduce the temperature and amount of time needed to achieve sterility. At higher temperatures, shorter exposure times are required to kill organisms. This can reduce heat-induced damage to food products. The standard setting for a hot air oven is at least two hours at the set sterilization temperature. A rapid method heats air to a higher temperature for 6 minutes for unwrapped objects and 12 minutes for wrapped objects. Dry heat has the advantage that it can be used on powders and other heat-stable items that are adversely affected by steam (e.g., it does not cause rusting of steel objects).

Flaming

Flaming is done to inoculation loops and straight wires in microbiology labs for streaking. Leaving the loop in the flame of a Bunsen burner or alcohol burner until it glows red ensures that any infectious agent is inactivated or killed. This is commonly used for small metal or glass objects, but not for large objects (see Incineration below). However, during the initial heating, infectious material may be sprayed from the wire surface before it is killed, contaminating nearby surfaces and objects. Therefore, special heaters have been developed that surround the inoculating loop with a heated cage, ensuring that such sprayed material does not further contaminate the area. Another problem is that gas flames may leave carbon or other residues on the object if the object is not heated enough. A variation on flaming is to dip the object in a solution of 70% or more ethanol, then briefly hold the object in the flame of a Bunsen burner.
The ethanol ignites and burns off rapidly, leaving less residue than a gas flame.

Incineration

Incineration is a waste treatment process that involves the combustion of organic substances contained in waste materials. This method also burns any organism to ash. It is used to sterilize medical and other biohazardous waste before it is discarded with non-hazardous waste. Bacteria incinerators are mini furnaces that incinerate and kill off any microorganisms that may be on an inoculating loop or wire.

Tyndallization

Named after John Tyndall, tyndallization is an obsolete and lengthy process designed to reduce the level of activity of sporulating microbes that are left by a simple boiling water method. The process involves boiling for a period of time (typically 20 minutes) at atmospheric pressure, cooling, incubating for a day, and then repeating the process a total of three to four times. The incubation allows heat-resistant spores that survived the previous boiling period to germinate and form the heat-sensitive vegetative (growing) stage, which can be killed by the next boiling step. This is effective because many spores are stimulated to grow by the heat shock. The procedure only works for media that can support bacterial growth, and will not sterilize non-nutritive substrates like water. Tyndallization is also ineffective against prions.

Glass bead sterilizers

Glass bead sterilizers work by heating glass beads to a high set temperature. Instruments are then quickly doused in these glass beads, which heat the object while physically scraping contaminants off its surface. Glass bead sterilizers were once a common sterilization method employed in dental offices as well as biological laboratories, but have not been approved by the U.S. Food and Drug Administration (FDA) and Centers for Disease Control and Prevention (CDC) for use as sterilizers since 1997. They are still popular in European and Israeli dental practices, although there are no current evidence-based guidelines for using this sterilizer.

Chemical sterilization

Chemicals are also used for sterilization. Heating provides a reliable way to rid objects of all transmissible agents, but it is not always appropriate if it will damage heat-sensitive materials such as biological materials, fiber optics, electronics, and many plastics. In these situations, chemicals, either in a gaseous or liquid form, can be used as sterilants. While the use of gas and liquid chemical sterilants avoids the problem of heat damage, users must ensure that the article to be sterilized is chemically compatible with the sterilant being used, and that the sterilant is able to reach all surfaces that must be sterilized (it typically cannot penetrate packaging). In addition, the use of chemical sterilants poses new challenges for workplace safety, as the properties that make chemicals effective sterilants usually make them harmful to humans. The procedure for removing sterilant residue from the sterilized materials varies depending on the chemical and process that is used.

Ethylene oxide

Ethylene oxide (EO, EtO) gas treatment is one of the common methods used to sterilize, pasteurize, or disinfect items because of its wide range of material compatibility. It is also used to process items that are sensitive to processing with other methods, such as radiation (gamma, electron beam, X-ray), heat (moist or dry), or other chemicals.
Ethylene oxide treatment is the most common chemical sterilization method, used for approximately 70% of total sterilizations and for over 50% of all disposable medical devices. Ethylene oxide treatment is generally carried out at moderate temperatures, with relative humidity above 30% and a gas concentration between 200 and 800 mg/L. Typically, the process lasts for several hours. Ethylene oxide is highly effective, as it penetrates all porous materials, and it can penetrate through some plastic materials and films. Ethylene oxide kills all known microorganisms, such as bacteria (including spores), viruses, and fungi (including yeasts and moulds), and is compatible with almost all materials even when used repeatedly. It is flammable, toxic, and carcinogenic, although adverse health effects have been reported only when it is not used in compliance with published requirements. Ethylene oxide sterilizers and processes require biological validation after sterilizer installation, significant repairs, or process changes. The traditional process consists of a preconditioning phase (in a separate room or cell), a processing phase (more commonly in a vacuum vessel and sometimes in a pressure-rated vessel), and an aeration phase (in a separate room or cell) to remove EO residues and by-products such as ethylene chlorohydrin (EC or ECH) and, of lesser importance, ethylene glycol (EG). An alternative process, known as all-in-one processing, also exists for some products, whereby all three phases are performed in the vacuum or pressure-rated vessel. This latter option can facilitate faster overall processing time and residue dissipation. The most common EO processing method is the gas chamber. To benefit from economies of scale, EO has traditionally been delivered by filling a large chamber with gaseous EO, either as pure EO or with other gases used as diluents; diluents include chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), and carbon dioxide. Ethylene oxide is still widely used by medical device manufacturers. Since EO is explosive at concentrations above 3%, EO was traditionally supplied with an inert carrier gas, such as a CFC or HCFC. The use of CFCs or HCFCs as the carrier gas was banned because of concerns about ozone depletion. These halogenated hydrocarbons are being replaced by systems using 100% EO, because of regulations and the high cost of the blends. In hospitals, most EO sterilizers use single-use cartridges because of the convenience and ease of use compared to the former plumbed gas cylinders of EO blends. It is important to adhere to government-specified limits for patients and healthcare personnel on EO residues in and/or on processed products, on operator exposure after processing and during storage and handling of EO gas cylinders, and on environmental emissions produced when using EO. The U.S. Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit (PEL) at 1 ppm, calculated as an 8-hour time-weighted average (TWA), and 5 ppm as a 15-minute excursion limit (EL). The National Institute for Occupational Safety and Health's (NIOSH) immediately dangerous to life and health limit (IDLH) for EO is 800 ppm. The odor threshold is around 500 ppm, so EO is imperceptible until concentrations are well above the OSHA PEL. Therefore, OSHA recommends that continuous gas monitoring systems be used to protect workers using EO for processing.
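A time-weighted average of the kind the OSHA PEL refers to is simply exposure summed over the shift and divided by its 8-hour length. The sketch below illustrates the calculation; the sample readings are invented purely for illustration and do not come from any real monitoring data.

```python
# Compute an 8-hour time-weighted average (TWA) exposure, the basis of
# the OSHA PEL for ethylene oxide (1 ppm). The (hours, ppm) readings
# below are invented purely for illustration.
samples = [(2.0, 0.4), (3.0, 1.2), (1.5, 2.0), (1.5, 0.1)]  # 8 h total

twa = sum(hours * ppm for hours, ppm in samples) / 8.0
print(f"8-hour TWA: {twa:.2f} ppm")                 # 0.94 ppm here
print("within PEL" if twa <= 1.0 else "exceeds 1 ppm PEL")
# Note: each short-term reading must also stay under the 5 ppm
# 15-minute excursion limit, which this shift average alone cannot show.
```

As the example shows, a shift can pass the TWA test while still containing excursions that need to be checked separately against the short-term limit, which is why continuous monitoring is recommended.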
Nitrogen dioxide

Nitrogen dioxide (NO2) gas is a rapid and effective sterilant for use against a wide range of microorganisms, including common bacteria, viruses, and spores. The unique physical properties of NO2 gas allow for sterilant dispersion in an enclosed environment at room temperature and atmospheric pressure. The mechanism for lethality is the degradation of DNA in the spore's core through nitration of the phosphate backbone, which kills the exposed organism as it absorbs NO2. This degradation occurs at even very low concentrations of the gas. NO2 has a boiling point near room temperature at sea level, which results in a relatively high saturated vapour pressure at ambient temperature. Because of this, liquid NO2 may be used as a convenient source for the sterilant gas. Liquid NO2 is often referred to by the name of its dimer, dinitrogen tetroxide (N2O4). Additionally, the low concentrations required, coupled with the high vapour pressure, ensure that no condensation occurs on the devices being sterilized. This means that no aeration of the devices is required immediately following the sterilization cycle. NO2 is also less corrosive than other sterilant gases, and is compatible with most medical materials and adhesives. The most-resistant organism (MRO) to sterilization with NO2 gas is the spore of Geobacillus stearothermophilus, which is the same MRO for both steam and hydrogen peroxide sterilization processes. The spore form of G. stearothermophilus has been well characterized over the years as a biological indicator in sterilization applications. Microbial inactivation of G. stearothermophilus with NO2 gas proceeds rapidly in a log-linear fashion, as is typical of other sterilization processes. This has been demonstrated in multiple studies in the lab of Noxilizer, Inc., which has commercialized the technology to offer contract sterilization services for medical devices at its Baltimore, Maryland (USA) facility, and is supported by published reports from other labs. The same properties also allow for quicker removal of the sterilant and residual gases through aeration of the enclosed environment. The combination of rapid lethality and easy removal of the gas allows for shorter overall cycle times during the sterilization (or decontamination) process and a lower level of sterilant residuals than are found with other sterilization methods. Eniware, LLC has developed a portable, power-free sterilizer that uses no electricity, heat, or water. The 25-liter unit makes sterilization of surgical instruments possible for austere forward surgical teams, in health centers throughout the world with intermittent or no electricity, and in disaster relief and humanitarian crisis situations. The 4-hour cycle uses a single-use gas generation ampoule and a disposable scrubber to remove NO2 gas.

Ozone

Ozone is used in industrial settings to sterilize water and air, as well as a disinfectant for surfaces. It has the benefit of being able to oxidize most organic matter. On the other hand, it is a toxic and unstable gas that must be produced on-site, so it is not practical to use in many settings. Ozone offers many advantages as a sterilant gas; it is a very efficient sterilant because of its strong oxidizing properties (E° = 2.076 V vs. SHE), capable of destroying a wide range of pathogens, including prions, without the need for handling hazardous chemicals, since the ozone is generated within the sterilizer from medical-grade oxygen.
The high reactivity of ozone means that waste ozone can be destroyed by passing it over a simple catalyst that reverts it to oxygen, and ensures that the cycle time is relatively short. The disadvantage of using ozone is that the gas is very reactive and very hazardous. NIOSH's IDLH for ozone is smaller than the IDLH for ethylene oxide. NIOSH and OSHA have set a PEL for ozone, calculated as an 8-hour time-weighted average. The sterilant gas manufacturers include many safety features in their products, but prudent practice is to provide continuous monitoring of exposure to ozone, in order to provide a rapid warning in the event of a leak. Monitors for determining workplace exposure to ozone are commercially available.

Glutaraldehyde and formaldehyde

Glutaraldehyde and formaldehyde solutions (also used as fixatives) are accepted liquid sterilizing agents, provided that the immersion time is sufficiently long. Killing all spores in a clear liquid can take up to 22 hours with glutaraldehyde and even longer with formaldehyde. The presence of solid particles may lengthen the required period or render the treatment ineffective. Sterilization of blocks of tissue can take much longer, due to the time required for the fixative to penetrate. Glutaraldehyde and formaldehyde are volatile, and toxic by both skin contact and inhalation. Glutaraldehyde has a short shelf life (<2 weeks) and is expensive. Formaldehyde is less expensive and has a much longer shelf life if some methanol is added to inhibit polymerization to paraformaldehyde, but is much more volatile. Formaldehyde is also used as a gaseous sterilizing agent; in this case, it is prepared on-site by depolymerization of solid paraformaldehyde. Many vaccines, such as the original Salk polio vaccine, are sterilized with formaldehyde.

Hydrogen peroxide

Hydrogen peroxide, both in liquid form and as vaporized hydrogen peroxide (VHP), is another chemical sterilizing agent. Hydrogen peroxide is a strong oxidant, which allows it to destroy a wide range of pathogens. It is used to sterilize heat- or temperature-sensitive articles, such as rigid endoscopes. In medical sterilization, hydrogen peroxide is used at higher concentrations, ranging from around 35% up to 90%. The biggest advantage of hydrogen peroxide as a sterilant is the short cycle time. Whereas the cycle time for ethylene oxide may be 10 to 15 hours, some modern hydrogen peroxide sterilizers have a cycle time as short as 28 minutes. Drawbacks of hydrogen peroxide include limited material compatibility, a lower capability for penetration, and operator health risks. Products containing cellulose, such as paper, cannot be sterilized using VHP, and products containing nylon may become brittle. The penetrating ability of hydrogen peroxide is not as good as that of ethylene oxide, so there are limitations on the length and diameter of the lumen of objects that can be effectively sterilized. Hydrogen peroxide is a primary irritant, and contact of the liquid solution with skin will cause bleaching or ulceration depending on the concentration and contact time. It is relatively non-toxic when diluted to low concentrations, but is a dangerous oxidizer at high concentrations (>10% w/w). The vapour is also hazardous, primarily affecting the eyes and respiratory system. Even short-term exposures can be hazardous, and NIOSH has set the IDLH at 75 ppm, less than one tenth of the IDLH for ethylene oxide (800 ppm).
Prolonged exposure to lower concentrations can cause permanent lung damage, and consequently OSHA has set the permissible exposure limit to 1.0 ppm, calculated as an 8-hour time-weighted average. Sterilizer manufacturers go to great lengths to make their products safe through careful design and incorporation of many safety features, though there are still workplace exposures to hydrogen peroxide from gas sterilizers documented in the FDA Manufacturer and User Facility Device Experience (MAUDE) database. When using any type of gas sterilizer, prudent work practices should include good ventilation, a continuous gas monitor for hydrogen peroxide, and good work practices and training. Vaporized hydrogen peroxide (VHP) is used to sterilize large enclosed and sealed areas, such as entire rooms and aircraft interiors. Although toxic, VHP breaks down in a short time to water and oxygen.

Peracetic acid

Peracetic acid (0.2%) is a sterilant recognized by the FDA for use in sterilizing medical devices such as endoscopes. Peracetic acid, also known as peroxyacetic acid, is a chemical compound often used in disinfectants such as sanitizers. It is most commonly produced by the reaction of acetic acid with hydrogen peroxide using an acid catalyst. Peracetic acid is never sold in unstabilized solutions, which is why it is considered environmentally friendly. It is a colorless liquid with the molecular formula C2H4O3 or CH3COOOH. More recently, peracetic acid has been used throughout the world in fumigation to decontaminate surfaces and reduce the risk of COVID-19 and other diseases.

Potential for chemical sterilization of prions

Prions are highly resistant to chemical sterilization. Treatment with aldehydes, such as formaldehyde, has actually been shown to increase prion resistance. Hydrogen peroxide (3%) used for 1 hour was shown to be ineffective, providing less than 3 logs (10^−3) reduction in contamination. Iodine, formaldehyde, glutaraldehyde, and peracetic acid also fail this test (1-hour treatment). Only chlorine, phenolic compounds, guanidinium thiocyanate, and sodium hydroxide reduce prion levels by more than 4 logs; chlorine (too corrosive to use on certain objects) and sodium hydroxide are the most consistent. Many studies have shown the effectiveness of sodium hydroxide.

Radiation sterilization

Sterilization can be achieved using electromagnetic radiation, such as ultraviolet light (UV), X-rays, and gamma rays, or irradiation by subatomic particles such as electron beams. Electromagnetic or particulate radiation can be energetic enough to ionize atoms or molecules (ionizing radiation), or less energetic (non-ionizing radiation).

Non-ionizing radiation sterilization

UV irradiation (from a germicidal lamp) is useful for sterilization of surfaces and some transparent objects. Many objects that are transparent to visible light absorb UV. UV irradiation is routinely used to sterilize the interiors of biological safety cabinets between uses, but is ineffective in shaded areas, including areas under dirt (which may become polymerized after prolonged irradiation, so that it is very difficult to remove). It also damages some plastics, such as polystyrene foam, if they are exposed for prolonged periods of time.
Ionizing radiation sterilization

The safety of irradiation facilities is regulated by the United Nations' International Atomic Energy Agency and monitored by national nuclear regulatory commissions. Radiation exposure accidents that have occurred in the past are documented by the agency and thoroughly analyzed to determine the cause and the potential for improvement. Such improvements are then mandated in retrofits of existing facilities and in future designs. Gamma radiation is very penetrating and is commonly used for sterilization of disposable medical equipment, such as syringes, needles, cannulas, and IV sets, and of food. It is emitted by a radioisotope, usually cobalt-60 (60Co) or caesium-137 (137Cs), which have photon energies of up to 1.3 and 0.66 MeV, respectively. Use of a radioisotope requires shielding for the safety of the operators while in use and in storage. With most designs, the radioisotope is lowered into a water-filled source storage pool, which absorbs radiation and allows maintenance personnel to enter the radiation shield. One variant keeps the radioisotope under water at all times and lowers the product to be irradiated into the water in hermetically sealed bells; no further shielding is required for such designs. Other, uncommonly used designs use dry storage, providing movable shields that reduce radiation levels in areas of the irradiation chamber. An incident in Decatur, Georgia, USA, where water-soluble caesium-137 leaked into the source storage pool, required Nuclear Regulatory Commission (NRC) intervention and led to the use of this radioisotope being almost entirely discontinued in favor of the more costly, non-water-soluble cobalt-60. Cobalt-60 gamma photons have about twice the energy, and hence greater penetrating range, of caesium-137-produced radiation. Electron beam processing is also commonly used for sterilization. Electron beams use an on-off technology and provide a much higher dosing rate than gamma or X-rays. Due to the higher dose rate, less exposure time is needed, thereby reducing any potential degradation of polymers. Because electrons carry a charge, electron beams are less penetrating than both gamma rays and X-rays. Facilities rely on substantial concrete shields to protect workers and the environment from radiation exposure. High-energy X-rays (produced by bremsstrahlung) allow irradiation of large packages and pallet loads of medical devices. They are sufficiently penetrating to treat multiple pallet loads of low-density packages with very good dose uniformity ratios. X-ray sterilization does not require chemical or radioactive material: high-energy X-rays are generated at high intensity by an X-ray generator that does not require shielding when not in use. X-rays are generated by bombarding a dense material (the target), such as tantalum or tungsten, with high-energy electrons, in a process known as bremsstrahlung conversion. These systems are energy-inefficient, requiring much more electrical energy than other systems for the same result. Irradiation with X-rays, gamma rays, or electrons does not make materials radioactive, because the energy used is too low. Generally an energy of at least 10 MeV is needed to induce radioactivity in a material. Neutrons and very high-energy particles can make materials radioactive but have good penetration, whereas lower-energy particles (other than neutrons) cannot make materials radioactive but have poorer penetration.
Sterilization by irradiation with gamma rays may, however, affect material properties. Irradiation is used by the United States Postal Service to sterilize mail in the Washington, D.C. area. Some foods (e.g., spices and ground meats) are sterilized by irradiation. Subatomic particles may be more or less penetrating and may be generated by a radioisotope or a device, depending upon the type of particle.

Sterile filtration

Fluids that would be damaged by heat, irradiation, or chemical sterilization, such as drug solutions, can be sterilized by microfiltration using membrane filters. This method is commonly used for heat-labile pharmaceuticals and protein solutions in medicinal drug processing. A microfilter with a pore size of 0.22 μm, the usual choice, will effectively remove microorganisms. Some Staphylococcus species have, however, been shown to be flexible enough to pass through 0.22 μm filters. In the processing of biologics, viruses must be removed or inactivated, requiring the use of nanofilters with a smaller pore size (20–50 nm). Smaller pore sizes lower the flow rate, so in order to achieve higher total throughput or to avoid premature blockage, pre-filters might be used to protect small-pore membrane filters. Tangential flow filtration (TFF) and alternating tangential flow (ATF) systems also reduce particulate accumulation and blockage. Membrane filters used in production processes are commonly made from materials such as mixed cellulose ester or polyethersulfone (PES). The filtration equipment and the filters themselves may be purchased as pre-sterilized disposable units in sealed packaging, or they must be sterilized by the user, generally by autoclaving at a temperature that does not damage the fragile filter membranes. To ensure proper functioning of the filter, the membrane filters are integrity tested after use and sometimes before use. The nondestructive integrity test assures that the filter is undamaged and is a regulatory requirement. Typically, terminal pharmaceutical sterile filtration is performed inside a cleanroom to prevent contamination.

Preservation of sterility

Instruments that have undergone sterilization can be maintained in such condition by containment in sealed packaging until use. Aseptic technique is the act of maintaining sterility during procedures.
Technology
Food, water and health
null
414192
https://en.wikipedia.org/wiki/Ovarian%20cancer
Ovarian cancer
Ovarian cancer is a cancerous tumor of an ovary. It may originate from the ovary itself or, more commonly, from communicating nearby structures such as the fallopian tubes or the inner lining of the abdomen. The ovary is made up of three different cell types: epithelial cells, germ cells, and stromal cells. When these cells become abnormal, they have the ability to divide and form tumors. These cells can also invade or spread to other parts of the body. When this process begins, there may be no or only vague symptoms. Symptoms become more noticeable as the cancer progresses. These symptoms may include bloating, vaginal bleeding, pelvic pain, abdominal swelling, constipation, and loss of appetite, among others. Common areas to which the cancer may spread include the lining of the abdomen, lymph nodes, lungs, and liver. The risk of ovarian cancer increases with age. Most cases of ovarian cancer develop after menopause. It is also more common in women who have ovulated more over their lifetime. This includes those who have never had children, those who began ovulation at a younger age, and those who reach menopause at an older age. Other risk factors include hormone therapy after menopause, fertility medication, and obesity. Factors that decrease risk include hormonal birth control, tubal ligation, pregnancy, and breastfeeding. About 10% of cases are related to inherited genetic risk; women with mutations in the genes BRCA1 or BRCA2 have about a 50% chance of developing the disease. Some family cancer syndromes such as hereditary nonpolyposis colon cancer and Peutz–Jeghers syndrome also increase the risk of developing ovarian cancer. Epithelial ovarian carcinoma is the most common type of ovarian cancer, comprising more than 95% of cases. There are five main subtypes of ovarian carcinoma, of which high-grade serous carcinoma (HGSC) is the most common. Less common types of ovarian cancer include germ cell tumors and sex cord stromal tumors. A diagnosis of ovarian cancer is confirmed through a biopsy of tissue, usually removed during surgery. Screening is not recommended in women who are at average risk, as evidence does not support a reduction in death, and the high rate of false positive tests may lead to unneeded surgery, which is accompanied by its own risks. Those at very high risk may have their ovaries removed as a preventive measure. If caught and treated in an early stage, ovarian cancer is often curable. Treatment usually includes some combination of surgery, radiation therapy, and chemotherapy. Outcomes depend on the extent of the disease, the subtype of cancer present, and other medical conditions. The overall five-year survival rate in the United States is 49%. Outcomes are worse in the developing world. In 2020, new cases occurred in approximately 313,000 women. In 2019 it resulted in 13,445 deaths in the United States. Deaths from ovarian cancer increased globally by 84.2% between 1990 and 2017. Ovarian cancer is the second-most common gynecologic cancer in the United States. It causes more deaths than any other cancer of the female reproductive system. Among women it ranks fifth in cancer-related deaths. The typical age of diagnosis is 63. Death from ovarian cancer is more common in North America and Europe than in Africa and Asia. In the United States, it is more common in White and Hispanic women than in Black or American Indian women.

Signs and symptoms

Early symptoms

Early signs and symptoms of ovarian cancer may be absent or subtle.
In most cases, symptoms exist for several months before being recognized and diagnosed. Symptoms are often misdiagnosed as irritable bowel syndrome. The early stages of ovarian cancer tend to be painless, which makes early detection difficult. Symptoms can vary based on the subtype. Ovarian borderline tumors, also known as low malignant potential (LMP) ovarian tumors, do not cause an increase in CA125 levels and are not identifiable with an ultrasound. The typical symptoms of an LMP tumor can include abdominal distension or pelvic pain. Particularly large masses tend to be benign or borderline. The most typical symptoms of ovarian cancer include bloating, abdominal or pelvic pain or discomfort, back pain, irregular menstruation or postmenopausal vaginal bleeding, pain or bleeding after or during sexual intercourse, loss of appetite, fatigue, diarrhea, indigestion, heartburn, constipation, nausea, feeling full, and possibly urinary symptoms (including frequent urination and urgent urination).

Later symptoms

Later symptoms of ovarian cancer are due to the growing mass causing pain by pressing on other abdominopelvic organs, or to metastases. Because of the anatomic location of the ovaries deep in the pelvis, most masses are large and advanced at the time of diagnosis. The growing mass may cause pain if ovarian torsion develops. If these symptoms start to occur more often or more severely than usual, especially after no significant history of such symptoms, ovarian cancer is considered. Metastases may cause a Sister Mary Joseph nodule. Rarely, teratomas can cause growing teratoma syndrome or peritoneal gliomatosis. Some experience menometrorrhagia or abnormal vaginal bleeding after menopause. Other common symptoms include hirsutism, abdominal pain, virilization, and an adnexal mass.

Children

In adolescents or children with ovarian tumors, symptoms can include severe abdominal pain, irritation of the peritoneum, or bleeding. Sex cord-stromal tumors produce hormones which can lead to the premature development of secondary sex characteristics. Sex cord-stromal tumors in prepubertal children may be manifested by signs of early puberty; abdominal pain and distension are also common. Adolescents with sex cord-stromal tumors may experience amenorrhea. As the cancer becomes more advanced, it can cause an accumulation of fluid in the abdomen and lead to distension. If the malignancy has not been diagnosed by the time it causes ascites, it is typically diagnosed shortly thereafter. Advanced cancers can also cause abdominal masses, lymph node masses, or pleural effusion.

Risk factors

There are many known risk factors that may increase a woman's risk of developing ovarian cancer. The risk of developing ovarian cancer is related to the amount of time a woman spends ovulating. Factors that increase the number of ovulatory cycles a woman undergoes may increase the risk of developing ovarian cancer. During ovulation, cells are stimulated to divide. If this division is abnormally regulated, tumors may form which can be malignant. Early menarche and late menopause increase the number of ovulatory cycles a woman undergoes in her lifetime and so increase the risk of developing ovarian cancer. Since ovulation is suppressed during pregnancy, not having children also increases the risk of ovarian cancer; women who have not borne children are at twice the risk of ovarian cancer of those who have. Both obesity and hormone replacement therapy also raise the risk.
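To make the "number of ovulatory cycles" reasoning concrete, the sketch below tallies rough lifetime cycle counts under different reproductive histories. Every number in it (cycles per year, anovulatory time per pregnancy, the example ages) is an illustrative assumption, not a clinical value.

```python
# Rough tally of lifetime ovulatory cycles, illustrating why early
# menarche, late menopause, and nulliparity are linked to higher risk.
# All parameter values are illustrative assumptions only.

def lifetime_cycles(menarche_age: int, menopause_age: int,
                    pregnancies: int = 0, years_on_pill: float = 0.0,
                    cycles_per_year: int = 13) -> int:
    # Assume roughly one anovulatory year per pregnancy (gestation
    # plus postpartum); oral contraceptives also suppress ovulation.
    anovulatory_years = pregnancies * 1.0 + years_on_pill
    years = max(menopause_age - menarche_age - anovulatory_years, 0)
    return round(years * cycles_per_year)

print(lifetime_cycles(12, 55))                                   # 559
print(lifetime_cycles(14, 50, pregnancies=3, years_on_pill=10))  # 299
```

Under these toy assumptions, the second history involves roughly half as many ovulations as the first, which is the direction of the risk differences the text describes.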
The risk of developing ovarian cancer is lower for women who have fewer or no menstrual cycles, breastfeed, take oral contraceptives, have multiple pregnancies, or have a pregnancy at an early age. The risk is also reduced in women who have had tubal ligation (colloquially known as having one's "tubes tied"), both ovaries removed, or a hysterectomy (an operation in which the uterus is removed). Age is also a risk factor. Non-genetic factors such as diabetes mellitus, high body mass index, and tobacco use are also risk factors for ovarian cancer.

Hormones

The use of fertility medication may contribute to ovarian borderline tumor formation, but the link between the two is disputed and difficult to study. Fertility drugs may be associated with a higher risk of borderline tumors. Those who have been treated for infertility but remain nulliparous are at higher risk for epithelial ovarian cancer due to hormonal exposure that may lead to proliferation of cells. However, those who are successfully treated for infertility and subsequently give birth are at no higher risk. This may be due to shedding of precancerous cells during pregnancy, but the cause remains unclear. The risk factor may instead be infertility itself, not the treatment. Hormonal conditions such as polycystic ovary syndrome and endometriosis are associated with ovarian cancer, but the link is not completely confirmed. Postmenopausal hormone replacement therapy (HRT) with estrogen likely increases the risk of ovarian cancer. The association has not been confirmed in a large-scale study, but notable studies including the Million Women Study have supported this link. Postmenopausal HRT with combined estrogen and progesterone may increase contemporaneous risk if used for over 5 years, but this risk returns to normal after cessation of therapy. Estrogen HRT with or without progestins increases the risk of endometrioid and serous tumors but lowers the risk of mucinous tumors. Higher doses of estrogen increase this risk. Endometriosis is another risk factor for ovarian cancer, as is pain with menstruation. Endometriosis is associated with clear-cell and endometrioid subtypes, low-grade serous tumors, stage I and II tumors, grade 1 tumors, and lower mortality. Before menopause, obesity can increase a person's risk of ovarian cancer, but this risk is not present after menopause. This risk is also relevant in those who are both obese and have never used HRT. A similar association with ovarian cancer appears in taller women.

Genetics

A family history of ovarian cancer is a risk factor for ovarian cancer. Women with hereditary nonpolyposis colon cancer (Lynch syndrome), and those with BRCA1 and BRCA2 genetic abnormalities, are at increased risk. The major genetic risk factor for ovarian cancer is a mutation in the BRCA1 or BRCA2 genes, or in DNA mismatch repair genes, which is present in 10% of ovarian cancer cases. Only one allele needs to be mutated to place a person at high risk. The gene can be inherited through either the maternal or paternal line, but has variable penetrance. Though mutations in these genes are usually associated with increased risk of breast cancer, they also carry a substantial lifetime risk of ovarian cancer, a risk that peaks in a person's 40s and 50s. The lowest risk cited is 30% and the highest 60%. Mutations in BRCA1 carry a lifetime risk of developing ovarian cancer of 15–45%.
Mutations in BRCA2 are less risky than those in BRCA1, with a lifetime risk of 10% (lowest risk cited) to 40% (highest risk cited). On average, BRCA-associated cancers develop 15 years before their sporadic counterparts, because people who inherit the mutations on one copy of their gene need only one further mutation to start the process of carcinogenesis, whereas people with two normal genes would need to acquire two mutations. In the United States, five of 100 women with a first-degree relative with ovarian cancer will eventually get ovarian cancer themselves, placing those with affected family members at triple the risk of women with unaffected family members. Seven of 100 women with two or more relatives with ovarian cancer will eventually get ovarian cancer. In general, 5–10% of ovarian cancer cases have a genetic cause. BRCA mutations are associated with high-grade serous nonmucinous epithelial ovarian cancer. A strong family history of endometrial cancer, colon cancer, or other gastrointestinal cancers may indicate the presence of a syndrome known as hereditary nonpolyposis colorectal cancer (also known as Lynch syndrome), which confers a higher risk for developing a number of cancers, including ovarian cancer. Lynch syndrome is caused by mutations in mismatch repair genes, including MSH2, MLH1, MSH6, PMS1, and PMS2. The risk of ovarian cancer for an individual with Lynch syndrome is between 10 and 12 percent. Women of Icelandic descent, European Jewish/Ashkenazi Jewish descent, and Hungarian descent are at higher risk for epithelial ovarian cancer. The estrogen receptor beta gene (ESR2) seems to be a key to pathogenesis and response to therapy. Other genes that have been associated with ovarian cancer are BRIP1, MSH6, RAD51C, and RAD51D. CDH1, CHEK2, PALB2, and RAD50 have also been associated with ovarian cancer. Several rare genetic disorders are associated with specific subtypes of ovarian cancer. Peutz–Jeghers syndrome, a rare genetic disorder, predisposes women to sex cord tumour with annular tubules. Ollier disease and Maffucci syndrome are associated with granulosa cell tumors in children and may also be associated with Sertoli–Leydig tumors. Benign fibromas are associated with nevoid basal cell carcinoma syndrome.

Diet

Alcohol consumption does not appear to be related to ovarian cancer. The American Cancer Society recommends a healthy eating pattern that includes plenty of fruits, vegetables, and whole grains, and that avoids or limits red and processed meats and processed sugar. High consumption of total, saturated, and trans fats increases ovarian cancer risk. A 2021 umbrella review found that coffee, egg, and fat intake significantly increase the risk of ovarian cancer. There is mixed evidence from studies on ovarian cancer risk and consumption of dairy products.

Environmental factors

Industrialized nations, with the exception of Japan, have high rates of epithelial ovarian cancer, which may be due to diet in those countries. White women are at a 30–40% higher risk for ovarian cancer when compared to Black women and Hispanic women, likely due to socioeconomic factors; white women tend to have fewer children and different rates of gynecologic surgeries that affect risk for ovarian cancer. Tentative evidence suggests that talc, pesticides, and herbicides increase the risk of ovarian cancer.
The American Cancer Society notes that as of now, no study has been able to accurately link any single chemical in the environment, or in the human diet, directly to mutations that cause ovarian cancer.

Other

Other factors that have been investigated, such as smoking, low levels of vitamin D in the blood, presence of inclusion ovarian cysts, and infection with human papilloma virus (the cause of some cases of cervical cancer), have been disproven as risk factors for ovarian cancer. The carcinogenicity of perineal talc is controversial, because it can act as an irritant if it travels through the reproductive tract to the ovaries. Case-control studies have shown that use of perineal talc does increase the risk of ovarian cancer, but using talc more often does not create a greater risk. Use of talc elsewhere on the body is unrelated to ovarian cancer. Sitting regularly for prolonged periods is associated with higher mortality from epithelial ovarian cancer. The risk is not negated by regular exercise, though it is lowered. Increased age (up to the 70s) is a risk factor for epithelial ovarian cancer because more mutations in cells can accumulate and eventually cause cancer. Those over 80 are at slightly lower risk. Smoking tobacco is associated with a higher risk of mucinous ovarian cancer; after smoking cessation, the risk eventually returns to normal. Higher levels of C-reactive protein are associated with a higher risk of developing ovarian cancer.

Protective factors

Suppression of ovulation, which would otherwise cause damage to the ovarian epithelium and, consequently, inflammation, is generally protective. This effect can be achieved by having children, taking combined oral contraceptives, and breastfeeding, all of which are protective factors. A longer period of breastfeeding correlates with a larger decrease in the risk of ovarian cancer. Each birth decreases the risk of ovarian cancer further, an effect seen for up to five births. Combined oral contraceptives reduce the risk of ovarian cancer by up to 50%, and their protective effect can last 25–30 years after they are discontinued. Regular use of aspirin, as in the MALOVA (MALignant OVArian cancer) study, or of acetaminophen (paracetamol) may be associated with a lower risk of ovarian cancer; other NSAIDs do not seem to have a similar protective effect. Tubal ligation is protective because carcinogens are unable to reach the ovary and fimbriae via the vagina, uterus, and Fallopian tubes. Tubal ligation is also protective in women with the BRCA1 mutation, but not the BRCA2 mutation. Hysterectomy reduces the risk, and removal of both Fallopian tubes and ovaries (bilateral salpingo-oophorectomy) dramatically reduces the risk of not only ovarian cancer but breast cancer as well. This is still a topic of research, as the link between hysterectomy and lower ovarian cancer risk is controversial. The reasons that hysterectomy may be protective had not been elucidated as of 2015. A diet that includes large amounts of carotene, fiber, and vitamins with low amounts of fat, and specifically a diet with non-starchy vegetables (e.g., broccoli and onions), may be protective. Dietary fiber is associated with a significantly reduced risk of ovarian cancer. A 2021 review found that intake of green leafy vegetables, allium vegetables, fiber, flavonoids, and green tea can significantly reduce ovarian cancer risk.

Pathophysiology

Ovarian cancer forms when errors in normal ovarian cell growth occur.
Usually, when cells grow old or get damaged, they die, and new cells take their place. Cancer starts when new cells form when they are not needed, and old or damaged cells do not die as they should. The buildup of extra cells often forms a mass of tissue called an ovarian tumor or growth. These abnormal cancer cells have many genetic abnormalities that cause them to grow excessively. When an ovary releases an egg, the egg follicle bursts open and becomes the corpus luteum. This structure needs to be repaired by dividing cells in the ovary. Continuous ovulation for a long time means more repair of the ovary by dividing cells, which can acquire mutations in each division. Overall, the most common gene mutations in ovarian cancer occur in NF1, BRCA1, BRCA2, and CDK12. Type I ovarian cancers, which tend to be less aggressive, tend to have microsatellite instability in several genes, including both oncogenes (most notably BRAF and KRAS) and tumor suppressors (most notably PTEN). The most common mutations in Type I cancers are KRAS, BRAF, ERBB2, PTEN, PIK3CA, and ARID1A. Type II cancers, the more aggressive type, have different genes mutated, including p53, BRCA1, and BRCA2. Low-grade cancers tend to have mutations in KRAS, whereas cancers of any grade that develop from low malignant potential tumors tend to have mutations in p53. Type I cancers tend to develop from precursor lesions, whereas Type II cancers can develop from a serous tubal intraepithelial carcinoma. Serous cancers that have BRCA mutations also inevitably have p53 mutations, indicating that the removal of both functional genes is important for cancer to develop. In 50% of high-grade serous cancers, homologous recombination DNA repair is dysfunctional, as are the Notch and FOXM1 signaling pathways. They also almost always have p53 mutations. Other than this, mutations in high-grade serous carcinoma are hard to characterize beyond their high degree of genomic instability. BRCA1 and BRCA2 are essential for homologous recombination DNA repair, and germline mutations in these genes are found in about 15% of women with ovarian cancer. The most common mutations in BRCA1 and BRCA2 are the frameshift mutations that originated in a small founding population of Ashkenazi Jews. Almost 100% of rare mucinous carcinomas have mutations in KRAS and amplifications of ERBB2 (also known as Her2/neu). Overall, 20% of ovarian cancers have mutations in Her2/neu. Serous carcinomas may develop from serous tubal intraepithelial carcinoma, rather than developing spontaneously from ovarian tissue. Other carcinomas develop from cortical inclusion cysts, which are groups of epithelial ovarian cells inside the stroma. Diagnosis Examination Diagnosis of ovarian cancer starts with a physical examination (including a pelvic examination), a blood test (for CA-125 and sometimes other markers), and transvaginal ultrasound. Sometimes a rectovaginal examination is used to help plan a surgery. The diagnosis must be confirmed with surgery to inspect the abdominal cavity, take biopsies (tissue samples for microscopic analysis), and look for cancer cells in the abdominal fluid. This helps to determine if an ovarian mass is benign or malignant. Ovarian cancer's early stages (I/II) are difficult to diagnose because most symptoms are nonspecific and thus of little use in diagnosis; as a result, it is rarely diagnosed until it spreads and advances to later stages (III/IV). Additionally, symptoms of ovarian cancer may appear similar to those of irritable bowel syndrome.
In women in whom pregnancy is a possibility, the beta-hCG level can be measured during the diagnosis process. Serum alpha-fetoprotein, neuron-specific enolase, and lactate dehydrogenase can be measured in young girls and adolescents with suspected ovarian tumors, as younger women with ovarian cancer are more likely to have malignant germ cell tumors. A physical examination, including a pelvic examination, and a pelvic ultrasound (transvaginal or otherwise) are both essential for diagnosis: physical examination may reveal increased abdominal girth and/or ascites (fluid within the abdominal cavity), while pelvic examination may reveal an ovarian or abdominal mass. An adnexal mass is a significant finding that often indicates ovarian cancer, especially if it is fixed, nodular, irregular, solid, and/or bilateral. 13–21% of adnexal masses are caused by malignancy; however, adnexal masses also have many benign causes, including ovarian follicular cyst, leiomyoma, endometriosis, ectopic pregnancy, hydrosalpinx, tubo-ovarian abscess, ovarian torsion, dermoid cyst, cystadenoma (serous or mucinous), diverticular or appendiceal abscess, nerve sheath tumor, pelvic kidney, ureteral or bladder diverticulum, benign cystic mesothelioma of the peritoneum, peritoneal tuberculosis, and paraovarian cyst. A palpable ovary is also a sign of ovarian cancer in postmenopausal women. Other parts of a physical examination for suspected ovarian cancer can include a breast examination and a digital rectal exam. Palpation of the supraclavicular, axillary, and inguinal lymph nodes may reveal lymphadenopathy, which can be indicative of metastasis. Another indicator may be the presence of a pleural effusion, which can be noted on auscultation. When an ovarian malignancy is included in a list of diagnostic possibilities, a limited number of laboratory tests are indicated. A complete blood count and serum electrolyte test are usually obtained; when an ovarian cancer is present, these tests often show a high number of platelets (20–25% of patients) and low blood sodium levels due to chemical signals secreted by the tumor. A positive test for inhibin A and inhibin B can indicate a granulosa cell tumor. A blood test for a marker molecule called CA-125 is useful in differential diagnosis and in follow-up of the disease, but by itself it has not been shown to be an effective method to screen for early-stage ovarian cancer due to its unacceptably low sensitivity and specificity. In premenopausal women, CA-125 levels over 200 U/mL may indicate ovarian cancer, as may any elevation above 35 U/mL in postmenopausal women. CA-125 levels are not accurate in early-stage ovarian cancer, as half of stage I ovarian cancer patients have a normal CA-125 level. CA-125 may also be elevated in benign (non-cancerous) conditions, including endometriosis, pregnancy, uterine fibroids (leiomyomas), menstruation, ovarian cysts, systemic lupus erythematosus, liver disease, inflammatory bowel disease, and pelvic inflammatory disease. HE4 is another candidate for ovarian cancer testing, though it has not been extensively tested. Other tumor markers for ovarian cancer include CA19-9, CA72-4, CA15-3, immunosuppressive acidic protein, haptoglobin-alpha, OVX1, mesothelin, lysophosphatidic acid, osteopontin, and fibroblast growth factor 23. Use of blood test panels may help in diagnosis. The OVA1 panel includes CA-125, beta-2 microglobulin, transferrin, apolipoprotein A1, and transthyretin.
An OVA1 score above 5.0 in premenopausal women, or above 4.4 in postmenopausal women, indicates a high risk for cancer. A different set of laboratory tests is used for detecting sex cord-stromal tumors (SCSTs). High levels of testosterone or dehydroepiandrosterone sulfate, combined with other symptoms and high levels of inhibin A and inhibin B, can be indicative of an SCST of any type. Current research is looking at ways to consider tumor marker proteomics in combination with other indicators of disease (i.e. radiology and/or symptoms) to improve diagnostic accuracy. The challenge in such an approach is that the low prevalence of ovarian cancer means that even testing with very high sensitivity and specificity will still lead to a number of false positive results; for example, at a prevalence of one in 2,000, a test with 99% sensitivity and 99% specificity would yield roughly 20 false positives for every true case. These false positives in turn may lead to issues such as performing surgical procedures in which cancer is not found intraoperatively. Genomics approaches have not yet been developed for ovarian cancer. CT scanning is preferred to assess the extent of the tumor in the abdominopelvic cavity, though magnetic resonance imaging can also be used. CT scanning can also be useful for finding omental caking or differentiating fluid from solid tumor in the abdomen, especially in low malignant potential tumors. However, it may not detect smaller tumors. Sometimes, a chest x-ray is used to detect metastases in the chest or pleural effusion. Another test for metastatic disease, though it is infrequently used, is a barium enema, which can show if the rectosigmoid colon is involved in the disease. Positron emission tomography, bone scans, and paracentesis are of limited use; in fact, paracentesis can cause metastases to form at the needle insertion site and may not provide useful results. However, paracentesis can be used in cases where there is no pelvic mass and ascites is still present. A physician suspecting ovarian cancer may also perform mammography or an endometrial biopsy (in the case of abnormal bleeding) to assess the possibility of breast malignancies and endometrial malignancy, respectively. Vaginal ultrasonography is often the first-line imaging study performed when an adnexal mass is found. Several characteristics of an adnexal mass indicate ovarian malignancy; they usually are solid, irregular, multilocular, and/or large; and they typically have papillary features, central vessels, and/or irregular internal septations. However, SCSTs have no definitive characteristics on radiographic study. To definitively diagnose ovarian cancer, a surgical procedure to inspect the abdomen is required. This can be an open procedure (laparotomy, incision through the abdominal wall) or keyhole surgery (laparoscopy). During this procedure, suspicious tissue is removed and sent for microscopic analysis. Usually, this includes a unilateral salpingo-oophorectomy, removal of a single affected ovary and Fallopian tube. Fluid from the abdominal cavity can also be analyzed for cancerous cells. If cancer is found, this procedure can also be used to determine the extent of its spread (which is a form of tumor staging). Pafolacianine is indicated for use in adults with ovarian cancer to help identify cancerous lesions during surgery. It is a diagnostic agent that is administered in the form of an intravenous injection prior to surgery. Risk scoring A widely recognized method of estimating the risk of malignant ovarian cancer is the risk of malignancy index (RMI), calculated based on an initial workup. An RMI score of over 200 or 250 is generally felt to indicate high risk for ovarian cancer.
The RMI is calculated as: RMI = ultrasound score × menopausal score × CA-125 level in U/ml. Two methods can be used to determine the ultrasound score and menopausal score, with the resulting scores referred to as RMI 1 and RMI 2, depending on which method is used (a worked sketch of this calculation appears below). Another method for quantifying risk of ovarian cancer is the Risk of Ovarian Cancer Algorithm (ROCA), which observes CA-125 levels over time and determines if they are increasing rapidly enough to warrant transvaginal ultrasound. The Risk of Ovarian Malignancy Algorithm (ROMA) uses CA-125 levels and HE4 levels to calculate the risk of ovarian cancer; it may be more effective than RMI. The IOTA models can be used to estimate the probability that an adnexal tumor is malignant. They include the LR2 risk model, the Simple Rules risk (SRrisk) calculation, and the Assessment of Different Neoplasias in the Adnexa (ADNEX) model, which can be used to assess the risk of malignancy in an adnexal mass based on its characteristics and risk factors. The QCancer (Ovary) algorithm is used to predict the likelihood of ovarian cancer from risk factors. The Ovarian-Adnexal Reporting and Data System (ORADS) is a standardized system developed by the American College of Radiology to improve the management and diagnosis of ovarian and adnexal masses. It provides a consistent framework for interpreting imaging findings, particularly from ultrasound, and assigns risk stratification categories that guide clinical decision-making. By using a clear set of criteria and terminology, ORADS aims to enhance communication among healthcare providers, increase diagnostic accuracy, and ultimately improve patient outcomes in the evaluation of ovarian and adnexal pathologies. Additionally, a specialized ORADS calculator is available to facilitate reporting, helping radiologists and clinicians quickly and accurately classify findings according to the system's guidelines. Pathology Ovarian cancers are classified according to the microscopic appearance of their structures (histology or histopathology). Histology dictates many aspects of clinical treatment, management, and prognosis. The gross pathology of ovarian cancers is very similar regardless of histologic type: ovarian tumors have solid and cystic masses. The types of ovarian cancer seen in women age 20 and over, as tabulated by SEER, are described below. Ovarian cancers are histologically and genetically divided into type I or type II. Type I cancers are of low histological grade and include endometrioid, mucinous, and clear-cell carcinomas. Type II cancers are of higher histological grade and include serous carcinoma and carcinosarcoma. Epithelial carcinoma Epithelial ovarian cancer typically presents at an advanced stage and is derived from the malignant transformation of the epithelium of the ovarian surface, peritoneum, or fallopian tube. It is the most common cause of gynecologic cancer death. There are various types of epithelial ovarian cancer, including serous tumor, endometrioid tumor, clear-cell tumor, mucinous tumor, and undifferentiated or unclassified tumors. Annually worldwide, about 230,000 women are diagnosed and 150,000 die. It has a five-year survival rate of 46%, reflecting the advanced stage of the disease at the time of diagnosis. Typically, around 75% of patients are diagnosed as having an advanced stage of the disease because of the asymptomatic nature of its presentation.
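As a concrete illustration of the RMI introduced under Risk scoring above, the following is a minimal sketch, not a clinical tool. It assumes the widely used RMI 1 scoring convention (an ultrasound score of 0, 1, or 3 depending on the number of suspicious features, and a menopausal score of 1 for premenopausal or 3 for postmenopausal women), which is not spelled out in this article; the feature list and function name are illustrative.

```python
# Minimal sketch of RMI = ultrasound score x menopausal score x CA-125 (U/ml).
# The 0/1/3 ultrasound scoring and the 1/3 menopausal scoring below follow
# the common RMI 1 convention, an assumption not taken from this article.

ULTRASOUND_FEATURES = [
    "multilocular cyst",
    "solid areas",
    "bilateral lesions",
    "ascites",
    "intra-abdominal metastases",
]

def rmi(num_features: int, postmenopausal: bool, ca125_u_ml: float) -> float:
    """Return the risk of malignancy index for the given workup values."""
    if not 0 <= num_features <= len(ULTRASOUND_FEATURES):
        raise ValueError("number of ultrasound features out of range")
    ultrasound_score = 0 if num_features == 0 else 1 if num_features == 1 else 3
    menopausal_score = 3 if postmenopausal else 1
    return ultrasound_score * menopausal_score * ca125_u_ml

# Example: a postmenopausal woman with two suspicious ultrasound features
# and a CA-125 level of 100 U/ml scores 3 x 3 x 100 = 900, well above the
# 200-250 threshold generally taken to indicate high risk.
print(rmi(num_features=2, postmenopausal=True, ca125_u_ml=100.0))  # 900.0
```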
There is a genomic predisposition to epithelial ovarian cancer: the BRCA1 and BRCA2 genes have been found to be the causative genes in 65–75% of hereditary epithelial ovarian cancers. Serous carcinoma Serous ovarian cancer is the most common type of epithelial ovarian cancer, accounting for about two-thirds of cases. Low-grade serous carcinoma is less aggressive than high-grade serous carcinoma, though it does not typically respond well to chemotherapy or hormonal treatments. Serous carcinomas are thought to begin in the Fallopian tube. High-grade serous carcinoma accounts for 75% of all epithelial ovarian cancers. About 15–20% of high-grade serous carcinomas have germline BRCA1 or BRCA2 mutations. Histologically, the growth pattern of high-grade serous carcinoma is heterogeneous, with papillary or solid areas. The tumor cells are atypical, with large, irregular nuclei, and the tumor has a high proliferation rate. 50% of the time, serous carcinomas are bilateral, and in 85% of cases, they have spread beyond the ovary at the time of diagnosis. Serous tubal intraepithelial carcinoma (STIC) is now recognized to be the precursor lesion of most so-called ovarian high-grade serous carcinomas. STIC is characterized by abnormal p53 staining, a Ki67 proliferation index in excess of 10%, and positive WT1 staining (used to exclude metastases). Small-cell carcinoma Small-cell ovarian carcinoma is rare and aggressive, with two main subtypes: hypercalcemic and pulmonary. This rare malignancy most commonly affects young women under the age of 40, with reported ages ranging from 14 months to 58 years; the mean age at diagnosis is 24 years. Approximately two-thirds of patients present with paraneoplastic hypercalcemia, meaning high blood calcium levels caused by the tumor rather than by a parathyroid disorder. The tumor secretes parathyroid hormone-related protein (PTHrP), which acts similarly to parathyroid hormone (PTH) and binds PTH receptors in the bone and kidney, causing hypercalcemia. Recent research has found inactivating germline and somatic mutations of the SMARCA4 gene. The hypercalcemic subtype is very aggressive and has an overall survival rate of 16%, with a recurrence rate of 65% in patients who receive treatment. Patients whose disease has spread to other parts of the body tend to die within 2 years of diagnosis. Extra-ovarian spread is involved in 50% of cases and lymph node spread is present in 55% of cases. The most common initial presentation is a rapidly growing unilateral pelvic mass with a mean size of 15 cm. Histologically, it is characterized by many sheets of small, round, tightly packed cells with clusters, nests, and cords. Immunohistochemistry is typically positive for vimentin, cytokeratin, CD10, p53, and WT-1. Small cell ovarian carcinoma of the pulmonary subtype presents differently from the hypercalcemic subtype: it usually affects both ovaries of older women and looks like oat-cell carcinoma of the lung. The average age of disease onset is 59 years, and approximately 45% of cases are bilateral in the pulmonary subtype. Additionally, several hormones can be elevated in the pulmonary subtype, including serotonin, somatostatin, insulin, gastrin, and calcitonin. Primary peritoneal carcinoma Primary peritoneal carcinomas develop from the peritoneum, a membrane that covers the abdominal cavity and has the same embryonic origin as the ovary. They are often discussed and classified with ovarian cancers when they affect the ovary.
They can develop even after the ovaries have been removed and may appear similar to mesothelioma. Clear-cell carcinoma Ovarian clear-cell carcinoma is a rare subtype of epithelial ovarian cancer. Those diagnosed with ovarian clear-cell carcinoma are typically younger at diagnosis, and diagnosed at earlier stages, than those with other subtypes of epithelial ovarian cancer. The highest incidence of clear-cell carcinoma of the ovary has been observed among young Asian women, especially those of Korean, Taiwanese, and Japanese background. Endometriosis is the main known risk factor for the development of clear-cell carcinoma of the ovary and has been found to be present in 50% of women diagnosed with the disease. The rate of blood clots in the legs (deep vein thrombosis) or in the lungs (pulmonary embolism) is reported to be 40% higher in patients with clear-cell carcinoma than in those with other epithelial ovarian cancer subtypes. Mutations in genes such as ARID1A and PIK3CA, which affect molecular signaling pathways, have been linked to clear-cell carcinoma. These tumors typically present as a large, unilateral mass, with a mean size between 13 and 15 cm; 90% of cases are unilateral. Ovarian clear-cell carcinoma does not typically respond well to chemotherapy due to intrinsic chemoresistance, so treatment is typically aggressive cytoreductive surgery with platinum-based chemotherapy. Clear-cell adenocarcinoma Clear-cell adenocarcinomas are histopathologically similar to other clear-cell carcinomas, with clear cells and hobnail cells. They represent approximately 5–10% of epithelial ovarian cancers and are associated with endometriosis in the pelvic cavity. They are typically early-stage and therefore curable by surgery, but advanced clear-cell adenocarcinomas (approximately 20%) have a poor prognosis and are often resistant to platinum chemotherapy. Endometrioid Endometrioid adenocarcinomas make up approximately 13–15% of all ovarian cancers. Because they are typically low-grade, endometrioid adenocarcinomas have a good prognosis. The median age at diagnosis is around 53 years. These tumors frequently co-occur with endometriosis or endometrial cancer. Cancer antigen 125 levels are typically elevated, and a family history of a first-degree relative with endometrioid ovarian cancer is associated with an increased risk of developing the disease. The average tumor size is larger than 10 cm. Malignant mixed müllerian tumor (carcinosarcoma) Mixed müllerian tumors make up less than 1% of ovarian cancers. They have epithelial and mesenchymal cells visible and tend to have a poor prognosis. Mucinous Mucinous tumors include mucinous adenocarcinoma and mucinous cystadenocarcinoma. Mucinous adenocarcinoma Mucinous adenocarcinomas make up 5–10% of epithelial ovarian cancers. Histologically, they are similar to intestinal or cervical adenocarcinomas, and many are in fact metastases of appendiceal or colon cancers. Advanced mucinous adenocarcinomas, though rare, have a poor prognosis, generally worse than serous tumors, and are often resistant to platinum chemotherapy. Pseudomyxoma peritonei Pseudomyxoma peritonei refers to a collection of encapsulated mucus or gelatinous material in the abdominopelvic cavity, which is very rarely caused by a primary mucinous ovarian tumor. More commonly, it is associated with ovarian metastases of intestinal cancer.
Undifferentiated epithelial Undifferentiated cancers (those where the cell type cannot be determined) make up about 10% of epithelial ovarian cancers and have a comparatively poor prognosis. When examined under the microscope, these tumors have very abnormal cells that are arranged in clumps or sheets. Usually there are recognizable clumps of serous cells inside the tumor. Malignant Brenner tumor Malignant Brenner tumors are rare. Histologically, they have dense fibrous stroma with areas of transitional epithelium and some squamous differentiation. To be classified as a malignant Brenner tumor, a tumor must have Brenner tumor foci and transitional cell carcinoma. The transitional cell carcinoma component is typically poorly differentiated and resembles urinary tract cancer. Transitional cell carcinoma Transitional cell carcinomas represent less than 5% of ovarian cancers. Histologically, they appear similar to bladder carcinoma. The prognosis is intermediate: better than most epithelial cancers but worse than malignant Brenner tumors. Sex cord-stromal tumor Sex cord-stromal tumors, including the estrogen-producing granulosa cell tumor, the benign thecoma, and the virilizing Sertoli-Leydig cell tumor or arrhenoblastoma, account for 7% of ovarian cancers. They occur most frequently in women between 50 and 69 years of age but can occur in women of any age, including young girls. They are not typically aggressive and are usually unilateral; they are therefore usually treated with surgery alone. Sex cord-stromal tumors are the main hormone-producing ovarian tumors. Several different cells from the mesenchyme can give rise to sex-cord or stromal tumors. These include fibroblasts and endocrine cells. The symptoms of a sex-cord or stromal ovarian tumor can differ from those of other types of ovarian cancer. Common signs and symptoms include ovarian torsion, hemorrhage from or rupture of the tumor, an abdominal mass, and hormonal disruption. In children, isosexual precocious pseudopuberty may occur with granulosa cell tumors since they produce estrogen. These tumors cause abnormalities in menstruation (excessive bleeding, infrequent menstruation, or no menstruation) or postmenopausal bleeding. Because these tumors produce estrogen, they can cause or occur at the same time as endometrial cancer or breast cancer. Other sex-cord/stromal tumors present with distinct symptoms. Sertoli-Leydig cell tumors cause virilization and excessive hair growth due to the production of testosterone and androstenedione, which can also cause Cushing's syndrome in rare cases. Some sex-cord stromal tumors do not cause a hormonal imbalance, including benign fibromas, which cause ascites and hydrothorax. Together with germ cell tumors, sex cord-stromal tumors are the most common ovarian cancers diagnosed in women under 20. Granulosa cell tumor Granulosa cell tumors are the most common sex-cord stromal tumors, making up 70% of cases, and are divided into two histologic subtypes: adult granulosa cell tumors, which develop in women over 50, and juvenile granulosa tumors, which develop before puberty or before the age of 30. Both develop in the ovarian follicle from a population of cells that surrounds germinal cells. Adult granulosa cell tumor Adult granulosa cell tumors are characterized by later onset (30+ years; 50 on average). These tumors produce high levels of estrogen, which causes their characteristic symptoms: menometrorrhagia; endometrial hyperplasia; tender, enlarged breasts; postmenopausal bleeding; and secondary amenorrhea.
The mass of the tumor can cause other symptoms, including abdominal pain and distension, or symptoms similar to an ectopic pregnancy if the tumor bleeds and ruptures. Juvenile granulosa cell tumor Sertoli-Leydig cell tumor Sertoli-Leydig tumors are most common in women before the age of 30, and particularly common before puberty. Sclerosing stromal tumors Sclerosing stromal tumors typically occur in girls before puberty or women before the age of 30. Germ cell tumor Germ cell tumors of the ovary develop from the ovarian germ cells. Germ cell tumors account for about 30% of ovarian tumors but only 5% of ovarian cancers, because most germ-cell tumors are teratomas and most teratomas are benign. Malignant teratomas tend to occur in older women, when one of the germ layers in the tumor develops into a squamous cell carcinoma. Germ-cell tumors tend to occur in young women (20s–30s) and girls, making up 70% of the ovarian cancer seen in that age group. When they arise in the ovary, germ-cell tumors can include dysgerminomas, teratomas, yolk sac tumors/endodermal sinus tumors, and choriocarcinomas. Some germ-cell tumors have an isochromosome 12, where one arm of chromosome 12 is deleted and replaced with a duplicate of the other. Most germ-cell cancers have a better prognosis than other subtypes and are more sensitive to chemotherapy. They are more likely to be stage I at diagnosis. Overall, they metastasize more frequently than epithelial ovarian cancers. In addition, the cancer markers used vary with tumor type: choriocarcinomas are monitored with beta-hCG and endodermal sinus tumors with alpha-fetoprotein. Germ-cell tumors are typically discovered when they become large, palpable masses. However, like sex cord tumors, they can cause ovarian torsion or hemorrhage and, in children, isosexual precocious puberty. They frequently metastasize to nearby lymph nodes, especially para-aortic and pelvic lymph nodes. The most common symptom of germ cell tumors is subacute abdominal pain caused by the tumor bleeding, necrotizing, or stretching the ovarian capsule. If the tumor ruptures, causes significant bleeding, or twists the ovary (torsion), it can cause acute abdominal pain, which occurs in less than 10% of those with germ-cell tumors. They can also secrete hormones which change the menstrual cycle. In 25% of germ-cell tumors, the cancer is discovered during a routine examination and does not cause symptoms. Diagnosing germ cell tumors may be difficult because the normal menstrual cycle and puberty can cause pain and pelvic symptoms, and a young woman may even believe these symptoms to be those of pregnancy, and not seek treatment due to the stigma of teen pregnancy. Blood tests for alpha-fetoprotein, karyotype, human chorionic gonadotropin, and liver function are used to diagnose germ cell tumors and potential co-occurring gonadal dysgenesis. A germ cell tumor may be initially mistaken for a benign ovarian cyst. Dysgerminoma Dysgerminoma accounts for 35% of ovarian cancer in young women and is the most likely germ cell tumor to metastasize to the lymph nodes; nodal metastases occur in 25–30% of cases. These tumors may have mutations in the KIT gene, which is also known for its role in gastrointestinal stromal tumors.
People with an XY karyotype and ovaries (gonadal dysgenesis) or an X,0 karyotype and ovaries (Turner syndrome) who develop a unilateral dysgerminoma are at risk for a gonadoblastoma in the other ovary; in this case, both ovaries are usually removed when a unilateral dysgerminoma is discovered to avoid the risk of another malignant tumor. Gonadoblastomas in people with Swyer or Turner syndrome become malignant in approximately 40% of cases. However, in general, dysgerminomas are bilateral 10–20% of the time. They are composed of cells that cannot differentiate further and develop directly from germ cells or from gonadoblastomas. Dysgerminomas contain syncytiotrophoblasts in approximately 5% of cases and can therefore cause elevated hCG levels. On gross appearance, dysgerminomas are typically pink to tan-colored, have multiple lobes, and are solid. Microscopically, they appear identical to seminomas and very close to embryonic primordial germ cells, having large, polyhedral, rounded clear cells. The nuclei are uniform and round or square with prominent nucleoli, and the cytoplasm has high levels of glycogen. Inflammation is another prominent histologic feature of dysgerminomas. Choriocarcinoma Choriocarcinoma can occur as a primary ovarian tumor developing from a germ cell, though it is usually a gestational disease that metastasizes to the ovary. Primary ovarian choriocarcinoma has a poor prognosis and can occur without a pregnancy. These tumors produce high levels of hCG and can cause early puberty in children or menometrorrhagia (irregular, heavy menstruation) after menarche. Immature (solid) teratoma Immature, or solid, teratomas are the most common type of ovarian germ cell tumor, making up 40–50% of cases. Teratomas are characterized by the presence of disorganized tissues arising from all three embryonic germ layers: ectoderm, mesoderm, and endoderm; immature teratomas also have undifferentiated stem cells that make them more malignant than mature teratomas (dermoid cysts). The different tissues are visible on gross pathology and often include bone, cartilage, hair, mucus, or sebum, but these tissues are not visible from the outside; the tumor appears as a solid mass with lobes and cysts. Histologically, they have large amounts of neuroectoderm organized into sheets and tubules along with glia; the amount of neural tissue determines the histologic grade. Immature teratomas usually only affect one ovary (10% co-occur with dermoid cysts) and usually metastasize throughout the peritoneum. They can also cause mature teratoma implants to grow throughout the abdomen in a disease called growing teratoma syndrome; these are usually benign but will continue to grow during chemotherapy and often necessitate further surgery. Unlike mature teratomas, immature teratomas form many adhesions, making them less likely to cause ovarian torsion. There is no specific marker for immature teratomas, but carcinoembryonic antigen (CEA), CA-125, CA19-9, or AFP can sometimes indicate an immature teratoma. Stage I teratomas make up the majority (75%) of cases and have the best prognosis, with 98% of patients surviving five years; if a stage I tumor is also grade 1, it can be treated with unilateral surgery only. Stage II through IV tumors make up the remaining quarter of cases and have a worse prognosis, with 73–88% of patients surviving five years. Mature teratoma (dermoid cyst) Mature teratomas, or dermoid cysts, are rare tumors consisting of mostly benign tissue that develop after menopause.
The tumors consist of disorganized tissue with nodules of malignant tissue, which can be of various types. The most common malignancy is squamous cell carcinoma, but adenocarcinoma, basal-cell carcinoma, carcinoid tumor, neuroectodermal tumor, malignant melanoma, sarcoma, sebaceous tumor, and struma ovarii can also be part of the dermoid cyst. They are treated with surgery and adjuvant platinum chemotherapy or radiation. Yolk sac tumor/endodermal sinus tumor Yolk sac tumors, formerly called endodermal sinus tumors, make up approximately 10–20% of ovarian germ cell malignancies, and have the worst prognosis of all ovarian germ cell tumors. They occur both before menarche (in one-third of cases) and after menarche (the remaining two-thirds of cases). Half of the people with yolk sac tumors are diagnosed in stage I. Typically, they are unilateral until metastasis, which occurs within the peritoneal cavity and via the bloodstream to the lungs. Yolk sac tumors grow quickly and recur easily, and are not easily treatable once they have recurred. Stage I yolk sac tumors are highly treatable, with a 5-year disease-free survival rate of 93%, but stage II-IV tumors are less treatable, with survival rates of 64–91%. Their gross appearance is solid, friable, and yellow, with necrotic and hemorrhagic areas. They also often contain cysts that can degenerate or rupture. Histologically, yolk sac tumors are characterized by the presence of Schiller-Duval bodies (which are pathognomonic for yolk sac tumors) and a reticular pattern. Yolk sac tumors commonly secrete alpha-fetoprotein and can be immunohistochemically stained for its presence; the level of alpha-fetoprotein in the blood is a useful marker of recurrence. Embryonal carcinoma Embryonal carcinomas, a rare tumor type usually found in mixed tumors, develop directly from germ cells but are not terminally differentiated; in rare cases, they may develop in dysgenetic gonads. They can develop further into a variety of other neoplasms, including choriocarcinoma, yolk sac tumor, and teratoma. They occur in younger people, with an average age at diagnosis of 14, and secrete both alpha-fetoprotein (in 75% of cases) and hCG. Histologically, embryonal carcinoma appears similar to the embryonic disc, made up of epithelial, anaplastic cells in disorganized sheets, with gland-like spaces and papillary structures. Polyembryoma Polyembryomas, the most immature form of teratoma and very rare ovarian tumors, are histologically characterized by having several embryo-like bodies with structures resembling a germ disk, yolk sac, and amniotic sac. Syncytiotrophoblast giant cells also occur in polyembryomas. Squamous cell carcinoma Primary ovarian squamous cell carcinomas are rare and have a poor prognosis when advanced. More typically, ovarian squamous cell carcinomas are cervical metastases, areas of differentiation in an endometrioid tumor, or derived from a mature teratoma. Mixed tumors Mixed tumors contain elements of more than one of the above classes of tumor histology. To be classed as a mixed tumor, the minor type must make up more than 10% of the tumor. Though mixed carcinomas can have any combination of cell types, mixed ovarian cancers are typically serous/endometrioid or clear-cell/endometrioid. Mixed germ cell tumors make up approximately 25–30% of all germ cell ovarian cancers, with combinations of dysgerminoma, yolk sac tumor, and/or immature teratoma. The prognosis and treatment vary based on the component cell types. 
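The 10% threshold that defines a mixed tumor can be read as a simple decision rule. The sketch below is a hypothetical illustration of that rule only; the function name and example proportions are invented, and actual classification is performed by a pathologist on histologic review.

```python
# Minimal sketch of the mixed-tumor rule described above: a tumor counts as
# mixed only when a minor histologic type makes up more than 10% of it.

def classify_histology(fractions: dict[str, float]) -> str:
    """fractions maps histologic type to its share of the tumor (summing to 1)."""
    if abs(sum(fractions.values()) - 1.0) > 1e-6:
        raise ValueError("fractions must sum to 1")
    # Components above the 10% threshold count toward the diagnosis.
    significant = [name for name, frac in fractions.items() if frac > 0.10]
    if len(significant) > 1:
        return "mixed tumor: " + "/".join(sorted(significant))
    # Otherwise the tumor is classed by its dominant component alone.
    return max(fractions, key=fractions.get)

print(classify_histology({"serous": 0.85, "endometrioid": 0.15}))
# -> mixed tumor: endometrioid/serous
print(classify_histology({"serous": 0.95, "clear-cell": 0.05}))
# -> serous
```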
Secondary ovarian cancer Ovarian cancer can also be a secondary cancer, the result of metastasis from a primary cancer elsewhere in the body. About 5–30% of ovarian cancers are due to metastases, while the rest are primary cancers. Common primary cancers are breast cancer, colon cancer, appendiceal cancer, and stomach cancer (primary gastric cancers that metastasize to the ovary are called Krukenberg tumors). Krukenberg tumors have signet ring cells and mucinous cells. Endometrial cancer and lymphomas can also metastasize to the ovary. Borderline tumors Ovarian borderline tumors, sometimes called low malignant potential (LMP) ovarian tumors, have some benign and some malignant features. LMP tumors make up approximately 10–15% of all ovarian tumors. They develop earlier than epithelial ovarian cancer, around the age of 40–49. They typically do not have extensive invasion; 10% of LMP tumors have areas of stromal microinvasion (<3 mm, <5% of the tumor). LMP tumors have other abnormal features, including increased mitosis, changes in cell size or nucleus size, abnormal nuclei, cell stratification, and small projections on cells (papillary projections). Serous and/or mucinous characteristics can be seen on histological examination, and serous histology makes up the overwhelming majority of advanced LMP tumors. More than 80% of LMP tumors are stage I; 15% are stage II or III, and less than 5% are stage IV. Implants of LMP tumors are often non-invasive. Staging Ovarian cancer is staged using the FIGO staging system and uses information obtained after surgery, which can include a total abdominal hysterectomy via midline laparotomy, removal of (usually) both ovaries and Fallopian tubes, (usually) the omentum, pelvic (peritoneal) washings, assessment of retroperitoneal lymph nodes (including the pelvic and para-aortic lymph nodes), appendectomy in suspected mucinous tumors, and pelvic/peritoneal biopsies for cytopathology. Around 30% of ovarian cancers that appear confined to the ovary have metastasized microscopically, which is why even stage-I cancers must be staged completely. 22% of cancers presumed to be stage I are observed to have lymphatic metastases. The AJCC stage is the same as the FIGO stage. The AJCC staging system describes the extent of the primary tumor (T), the absence or presence of metastasis to nearby lymph nodes (N), and the absence or presence of distant metastasis (M). The most common stage at diagnosis is stage IIIc, with over 70% of diagnoses. FIGO AJCC/TNM The AJCC/TNM staging system indicates the extent of the primary tumor, spread to nearby lymph nodes, and distant metastasis, and its stages can be correlated with the FIGO stages. Grading Grade 1 tumors have well-differentiated cells (they look very similar to the normal tissue) and have the best prognosis. Grade 2 tumors, also called moderately well-differentiated, are made up of cells that resemble the normal tissue. Grade 3 tumors have the worst prognosis; their cells are abnormal, referred to as poorly differentiated. Metastasis in ovarian cancer is very common in the abdomen and occurs via exfoliation, where cancer cells burst through the ovarian capsule and are able to move freely throughout the peritoneal cavity. Ovarian cancer metastases usually grow on the surface of organs rather than the inside; they are also common on the omentum and the peritoneal lining. Cancer cells can also travel through the lymphatic system and metastasize to lymph nodes connected to the ovaries via blood vessels, i.e.
the lymph nodes along the infundibulopelvic ligament, the broad ligament, and the round ligament. The most commonly affected groups include the paraaortic, hypogastric, external iliac, obturator, and inguinal lymph nodes. Usually, ovarian cancer does not metastasize to the liver, lung, brain, or kidneys unless it is a recurrent disease; this differentiates ovarian cancer from many other forms of cancer. Prevention Women with strong genetic risk for ovarian cancer may consider the surgical removal of their ovaries as a preventive measure. This is often done after completion of childbearing years. This reduces the chances of developing both breast cancer (by around 50%) and ovarian cancer (by about 96%) in women at high risk. Women with BRCA gene mutations usually also have their Fallopian tubes removed at the same time (salpingo-oophorectomy), since they also have an increased risk of Fallopian tube cancer. However, these statistics may overestimate the risk reduction because of how they have been studied. Because a large fraction of ovarian cancers originate in the fallopian tubes, the Ovarian Cancer Research Alliance and the Society of Gynecologic Oncology now recommend that women who are not planning on having additional children and who are undergoing surgical procedures such as tubal ligation (having one's "tubes tied") undergo opportunistic salpingectomy, i.e. having their fallopian tubes removed during the same operation. OVCARE, BC Cancer's multi-institutional and multidisciplinary ovarian cancer research group, began recommending salpingectomy at the time of hysterectomy and in place of tubal ligation in 2010. Women with a significant family history of ovarian cancer are often referred to a genetic counselor to see if testing for BRCA mutations would be beneficial. The use of oral contraceptives, the absence of periods during the menstrual cycle, and tubal ligation reduce the risk. There may be an association between ovarian stimulation during infertility treatments and the development of ovarian cancer. Endometriosis has been linked to ovarian cancers. Human papillomavirus infection, smoking, and talc have not been identified as increasing the risk for developing ovarian cancer. Screening There is no simple and reliable way to test for ovarian cancer in women who do not have any signs or symptoms. Screening is not recommended in women who are at average risk, as evidence does not support a reduction in death, and the high rate of false positive tests may lead to unneeded surgery, which is accompanied by its own risks. Women at high risk of ovarian cancer, currently identified based on family history and genetic testing, may benefit from screening; this high-risk group has benefited from earlier detection. The Pap test does not screen for ovarian cancer. Ovarian cancer is usually only palpable in advanced stages. Screening is not recommended using CA-125 measurements, HE4 levels, ultrasound, or adnexal palpation in women who are at average risk. Currently there is no national screening programme in the UK for ovarian cancer. CA125 and transvaginal ultrasound can be used, but there is minimal evidence to suggest this decreases mortality. More recently, the Risk of Ovarian Cancer Algorithm (ROCA) has been shown to detect earlier cancers using CA125 and age, but it does not at present provide a robust measure to decrease mortality.
Ovarian cancer has low prevalence, even in the high-risk group of women aged 50 to 60 (about one in 2,000), and screening of women with average risk is more likely to give ambiguous results than to detect a problem that requires treatment. Because ambiguous results are more likely than detection of a treatable problem, and because the usual response to ambiguous results is invasive interventions, in women of average risk the potential harms of screening without an indication outweigh the potential benefits. The purpose of screening is to diagnose ovarian cancer at an early stage, when it is more likely to be treated successfully. Screening with transvaginal ultrasound, pelvic examination, and CA-125 levels can be used instead of preventive surgery in women who have BRCA1 or BRCA2 mutations. This strategy has shown some success. Screening for CA125, a chemical released by ovarian tumors, with follow-up using ultrasound, was shown to be ineffective in reducing mortality in a large-scale UK study. There have been some screening trials that have used age, family history of ovarian cancer, and mutation status to identify target populations for screening. Management Once it is determined that ovarian, fallopian tube, or primary peritoneal cancer is present, treatment is scheduled by a gynecologic oncologist (a physician trained to treat cancers of a woman's reproductive system). Gynecologic oncologists can perform surgery on and give chemotherapy to women with ovarian cancer. A treatment plan is developed. Treatment usually involves surgery and chemotherapy, and sometimes radiotherapy, regardless of the subtype of ovarian cancer. Surgical treatment may be sufficient for well-differentiated malignant tumors that are confined to the ovary. Addition of chemotherapy may be required for more aggressive tumors confined to the ovary. For patients with advanced disease, surgical reduction combined with a combination chemotherapy regimen is standard. Since 1980, platinum-based drugs have had an important role in treating ovarian cancer. Borderline tumors, even following spread outside of the ovary, are managed well with surgery, and chemotherapy is not seen as useful. Second-look surgery and maintenance chemotherapy have not been shown to provide benefit. Surgery Surgery has been the standard of care for decades and may be necessary for obtaining a specimen for diagnosis. The surgery depends upon the extent of nearby invasion of other tissues by the cancer when it is diagnosed. The extent of the cancer is described by assigning it a stage, a presumed type, and a grade. The gynecological surgeon may remove one (unilateral oophorectomy) or both ovaries (bilateral oophorectomy). The Fallopian tubes (salpingectomy), uterus (hysterectomy), and the omentum (omentectomy) may also be removed. Typically, all of these organs are removed. For those who test positive for faulty BRCA1 or BRCA2 genes, risk-reducing surgery is an option, and an increasing number of women choose it. At the same time, the average waiting time for undergoing the procedure is two years, which is much longer than recommended. For low-grade, unilateral stage-IA cancers, only the involved ovary (which must be unruptured) and Fallopian tube will be removed. This can be done especially in young people who wish to preserve their fertility. However, a risk of microscopic metastases exists and staging must be completed.
If any metastases are found, a second surgery to remove the remaining ovary and uterus is needed. Tranexamic acid can be administered prior to surgery to reduce the need for blood transfusions due to blood loss during the surgery. If a tumor in a premenopausal woman is determined to be a low malignant potential tumor during surgery, and it is clearly stage I cancer, only the affected ovary is removed. For postmenopausal women with low malignant potential tumors, hysterectomy with bilateral salpingo-oophorectomy is still the preferred option. During staging, the appendix can be examined or removed. This is particularly important with mucinous tumors. In children or adolescents with ovarian cancer, surgeons typically attempt to preserve one ovary to allow for the completion of puberty, but if the cancer has spread, this is not always possible. Dysgerminomas, in particular, tend to affect both ovaries: 8–15% of dysgerminomas are present in both ovaries. People with low-grade (well-differentiated) tumors are typically treated only with surgery, which is often curative. In general, germ cell tumors can be treated with unilateral surgery unless the cancer is widespread or fertility is not a factor. In women with surgically staged advanced epithelial ovarian cancer (stages III and IV), studies suggest all attempts should be made to reach complete cytoreduction (surgical efforts to remove the bulk of the tumor). In advanced cancers, where complete removal is not an option, as much tumor as possible is removed in a procedure called debulking surgery. This surgery is not always successful, and is less likely to be successful in women with extensive metastases in the peritoneum, stage-IV disease, cancer in the transverse fissure of the liver, mesentery, or diaphragm, and large areas of ascites. Debulking surgery has usually been done only once, but a recent study has shown longer overall survival in recurrent ovarian cancer when surgery combined with chemotherapy was performed, compared to treatment with chemotherapy alone. Computed tomography (abdominal CT) is often used to assess whether primary debulking surgery is possible, but low-certainty evidence also suggests fluorodeoxyglucose-18 (FDG) PET/CT and MRI may be useful as an addition for assessing macroscopic incomplete debulking. More complete debulking is associated with better outcomes: women with no macroscopic evidence of disease after debulking have a median survival of 39 months, as opposed to 17 months with less complete surgery. By removing metastases, many cells that are resistant to chemotherapy are removed, and any clumps of cells that have died are also removed. This allows chemotherapy to better reach the remaining cancer cells, which are more likely to be fast-growing and therefore chemosensitive. Interval debulking surgery is another protocol, in which neoadjuvant chemotherapy is given, debulking surgery is performed, and chemotherapy is finished after debulking. Though no definitive studies have been completed, it appears to be approximately equivalent to primary debulking surgery in terms of survival, with slightly lower morbidity. Previous studies have shown different results from primary debulking versus interval debulking. The ongoing TRUST study may clarify selection criteria for each surgical approach. There are several different surgical procedures that can be employed to treat ovarian cancer. For stage I and II cancer, laparoscopic (keyhole) surgery can be used, but metastases may not be found.
For advanced cancer, laparoscopy is not used, since debulking metastases requires access to the entire peritoneal cavity. Depending on the extent of the cancer, procedures may include a bilateral salpingo-oophorectomy, biopsies throughout the peritoneum and abdominal lymphatic system, omentectomy, splenectomy, bowel resection, diaphragm stripping or resection, appendectomy, or even a posterior pelvic exenteration. To fully stage ovarian cancer, lymphadenectomy can be included in the surgery. However, it has not offered survival benefits in either high-grade serous ovarian cancer (HGSOC) or low-grade serous ovarian cancer (LGSOC). Lymph node assessment is particularly important in germ cell tumors because they frequently metastasize to nearby lymph nodes. If ovarian cancer recurs, secondary surgery is sometimes a treatment option. This depends on how easily the tumor can be removed, how much fluid has accumulated in the abdomen, and overall health. Effectiveness of this surgery depends on surgical technique, completeness of cytoreduction, and extent of disease. It also can be helpful in people who had their first surgery done by a generalist and in epithelial ovarian cancer. Secondary surgery can be effective in dysgerminomas and immature teratomas. Evidence suggests surgery in recurrent epithelial ovarian cancer may be associated with prolonged survival in some women with platinum-sensitive disease. The major side effect of oophorectomy in younger women is early menopause, which can cause osteoporosis. After surgery, hormone replacement therapy can be considered, especially in younger women. This therapy can consist of a combination of estrogen and progesterone, or estrogen alone. Estrogen alone is safe after hysterectomy; when the uterus is still present, unopposed estrogen dramatically raises the risk of endometrial cancer. Estrogen therapy after surgery does not change survival rates. People having ovarian cancer surgery are typically hospitalized afterwards for 3–4 days and spend around a month recovering at home. Surgery outcomes are best at hospitals that do a large number of ovarian cancer surgeries. It is unclear whether laparoscopy or laparotomy is better or worse for FIGO stage I ovarian cancer. There is also no apparent difference between total abdominal hysterectomy and supracervical hysterectomy for advanced cancers. Approximately 2.8% of people having a first surgery for advanced ovarian cancer die within two weeks of the surgery (a 2.8% perioperative mortality rate). More aggressive surgeries are associated with better outcomes in advanced (stage III or IV) ovarian cancer. Chemotherapy Chemotherapy has been a general standard of care for ovarian cancer for decades, although with variable protocols. Chemotherapy is used after surgery to treat any residual disease, if appropriate. In some cases, there may be reason to perform chemotherapy first, followed by surgery. This is called "neoadjuvant chemotherapy", and is common when a tumor cannot be completely removed or optimally debulked via surgery. Though it has not been shown to increase survival, it can reduce the risk of complications after surgery. If a unilateral salpingo-oophorectomy or other surgery is performed, additional chemotherapy, called "adjuvant chemotherapy", can be given. Adjuvant chemotherapy is used in stage I cancer typically if the tumor is of a high histologic grade (grade 3) or the highest substage (stage Ic), provided the cancer has been optimally staged during surgery.
Bevacizumab may be used as an adjuvant chemotherapy if the tumor is not completely removed during surgery or if the cancer is stage IV; it can extend progression-free survival but has not been shown to extend overall survival. Chemotherapy is curative in approximately 20% of advanced ovarian cancers; it is more often curative with malignant germ cell tumors than epithelial tumors. Adjuvant chemotherapy has been found to improve survival and reduce the risk of ovarian cancer recurring, compared to no adjuvant therapy, in women with early-stage epithelial ovarian cancer. Chemotherapy in ovarian cancer typically consists of platins, a group of platinum-based drugs, combined with non-platins. Platinum-based drugs have been used since 1980. Common therapies can include paclitaxel, cisplatin, topotecan, doxorubicin, epirubicin, and gemcitabine. Carboplatin is typically given in combination with either paclitaxel or docetaxel; the typical combination is carboplatin with paclitaxel. Carboplatin is superior to cisplatin in that it is less toxic and has fewer side effects, generally allowing for an improved quality of life in comparison, though both are similarly effective. Three-drug regimens have not been found to be more effective, and platins alone or nonplatins alone are less effective than platins and nonplatins in combination. There is a small benefit in platinum-based chemotherapy compared with non-platinum therapy, and platinum combinations can offer improved survival over single-agent platinum. In people with relapsed ovarian cancer, evidence suggests topotecan has a similar effect on overall survival as paclitaxel and as topotecan plus thalidomide, while it is superior to treosulfan and not as effective as pegylated liposomal doxorubicin in platinum-sensitive people. Chemotherapy can be given intravenously or into the peritoneal cavity. Though intraperitoneal chemotherapy is associated with longer progression-free survival and overall survival, it also causes more adverse side effects than intravenous chemotherapy. It is mainly used when the cancer has been optimally debulked. Intraperitoneal chemotherapy can be highly effective because ovarian cancer mainly spreads inside the peritoneal cavity, and higher doses of the drugs can reach the tumors this way. Chemotherapy can cause anemia; intravenous iron has been found to be more effective than oral iron supplements in reducing the need for blood transfusions. Typical treatment involves one cycle every 3 weeks for 6 or more cycles; fewer than 6 cycles of treatment is less effective than 6 or more. Germ-cell malignancies are treated differently than other ovarian cancers: a regimen of bleomycin, etoposide, and cisplatin (BEP) is used, with 5 days of chemotherapy administered every 3 weeks for 3 to 4 cycles. Chemotherapy for germ cell tumors has not been shown to cause amenorrhea, infertility, birth defects, or miscarriage. Maintenance chemotherapy has not been shown to be effective. In people with BRCA mutations, platinum chemotherapy is more effective. Germ-cell tumors and malignant sex-cord/stromal tumors are treated with chemotherapy, though dysgerminomas and sex-cord tumors are not typically very responsive.
Platinum-sensitive or platinum-resistant If ovarian cancer recurs, it is considered partially platinum-sensitive or platinum-resistant based on the time since the last recurrence treated with platins: partially platinum-sensitive cancers recur 6–12 months after the last treatment, and platinum-resistant cancers have an interval of less than 6 months (a small sketch of this classification appears below). Second-line chemotherapy can be given after the cancer becomes symptomatic, because no difference in survival is seen between treating asymptomatic (elevated CA-125) and symptomatic recurrences. For platinum-sensitive tumors, platins are the drugs of choice for second-line chemotherapy, in combination with other cytotoxic agents. Regimens include carboplatin combined with pegylated liposomal doxorubicin, gemcitabine, or paclitaxel. Carboplatin-doublet therapy can be combined with paclitaxel for increased efficacy in some cases. Another potential adjuvant therapy for platinum-sensitive recurrences is olaparib, which may improve progression-free survival but has not been shown to improve overall survival. (Olaparib, a PARP inhibitor, was approved by the US FDA for use in BRCA-associated ovarian cancer that had previously been treated with chemotherapy.) For recurrent germ cell tumors, an additional 4 cycles of BEP chemotherapy is the first-line treatment for those who have been treated with surgery or platins. If the tumor is determined to be platinum-resistant, vincristine, dactinomycin, and cyclophosphamide (VAC) or some combination of paclitaxel, gemcitabine, and oxaliplatin may be used as a second-line therapy. For platinum-resistant tumors, there are no high-efficacy chemotherapy options. Single-drug regimens (doxorubicin or topotecan) do not have high response rates, but single-drug regimens of topotecan, pegylated liposomal doxorubicin, or gemcitabine are used in some cases. Topotecan cannot be used in people with an intestinal blockage. Paclitaxel used alone is another possible regimen, or it may be combined with liposomal doxorubicin, gemcitabine, cisplatin, topotecan, etoposide, or cyclophosphamide.
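The recurrence classification at the start of this section reduces to a simple interval test, sketched below. The text above defines only the platinum-resistant (under 6 months) and partially platinum-sensitive (6–12 months) categories; treating longer intervals as fully platinum-sensitive follows the usual clinical convention and is an assumption here.

```python
# Minimal sketch of recurrence classification by platinum-free interval.

def platinum_status(months_since_last_platinum: float) -> str:
    """Classify a recurrence by months since the last platinum treatment."""
    if months_since_last_platinum < 0:
        raise ValueError("interval cannot be negative")
    if months_since_last_platinum < 6:
        return "platinum-resistant"
    if months_since_last_platinum <= 12:
        return "partially platinum-sensitive"
    # Assumption: intervals over 12 months are fully platinum-sensitive.
    return "platinum-sensitive"

print(platinum_status(4))   # platinum-resistant
print(platinum_status(9))   # partially platinum-sensitive
print(platinum_status(18))  # platinum-sensitive
```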
Biology and health sciences
Cancer
Health
414259
https://en.wikipedia.org/wiki/Trichomonas%20vaginalis
Trichomonas vaginalis
Trichomonas vaginalis is an anaerobic, flagellated protozoan parasite and the causative agent of a sexually transmitted disease called trichomoniasis. It is the most common pathogenic protozoan that infects humans in industrialized countries. Infection rates in men and women are similar, but women are usually symptomatic, while infections in men are usually asymptomatic. Transmission usually occurs via direct, skin-to-skin contact with an infected individual, most often through vaginal intercourse. It is estimated that 160 million cases of infection are acquired annually worldwide. The estimates for North America alone are between 5 and 8 million new infections each year, with an estimated rate of asymptomatic cases as high as 50%. Treatment usually consists of metronidazole or tinidazole. Clinical History Alfred François Donné (1801–1878) was the first to describe a procedure to diagnose trichomoniasis, through "the microscopic observation of motile protozoa in vaginal or cervical secretions", in 1836. He published this in the article entitled "Animalcules observés dans les matières purulentes et le produit des sécrétions des organes génitaux de l'homme et de la femme" in the journal Comptes rendus de l'Académie des sciences. In it, he gave the parasite the binomial name Trichomonas vaginalis. Eighty years after the initial discovery of the parasitic protozoan, Hohne recognized trichomoniasis as a clinical entity, in 1916. Signs and symptoms Most women (85%) and men (77%) infected with T. vaginalis do not have symptoms. Half of these women will develop symptoms within 6 months, which can include vaginal erythema, dyspareunia, dysuria, and a vaginal discharge that is often diffuse, malodorous, and yellow-green, along with itching in the genital region. A "strawberry cervix" occurs in about 5% of women. In men, the parasite can cause urethritis, epididymitis, and prostatitis. Complications Some of the complications of Trichomonas vaginalis infection in women include preterm delivery, low birth weight, and increased mortality, as well as predisposition to human immunodeficiency virus infection, AIDS, and cervical cancer. Trichomonas vaginalis can be found in diverse locations within the body, such as the urinary tract, fallopian tubes, and pelvis, and can cause pneumonia, bronchitis, and oral lesions. Diagnosis Classically, on a cervical smear, infected women may have a transparent "halo" around their superficial cell nuclei, but more typically the organism itself is seen, with a slight cyanophilic tinge, a faint eccentric nucleus, and fine acidophilic granules. Detection by studying genital discharge or by cervical smear is unreliable because of low sensitivity. Trichomonas vaginalis is also routinely diagnosed via a wet mount, in which motility is observed. Currently, the most common method of diagnosis is overnight culture, with a sensitivity range of 75–95%. Newer methods, such as rapid antigen testing and transcription-mediated amplification, have even greater sensitivity, but are not in widespread use. Treatment Infection is treated and cured with metronidazole or tinidazole. The CDC recommends a one-time dose of 2 grams of either metronidazole or tinidazole as the first-line treatment; the recommended alternative, if the single-dose regimen fails, is 500 milligrams of metronidazole twice daily for seven days. Medication should be prescribed to any sexual partner(s) as well, because they may be asymptomatic carriers.
Morphology Trichomonas vaginalis exists in only one morphological stage, a trophozoite, and cannot encyst (form cysts). The protozoan does not keep to a single shape; it changes form under different conditions. When in culture, separate from a host, it usually displays a more "pear"-shaped or oval morphology, but when present in a living host, specifically on the epithelial cells of the vaginal wall, its shape is more "amoeboid". It is slightly larger than a white blood cell, measuring 9 × 7 μm. In both forms, Trichomonas vaginalis has five flagella: four protruding from the front, or anterior, of the parasite and a fifth on the back, or posterior, end. The function of the fifth flagellum is not known, but it is associated with an "undulating" membrane. In addition, a barb-like axostyle projects opposite the four-flagella bundle. The axostyle may be used for attachment to surfaces and may also cause the tissue damage seen in trichomoniasis. The nucleus is usually elongated and is located near the anterior end of the protozoan, within a cytoplasm that contains many hydrogenosomes (closed-membrane organelles able to produce both adenosine triphosphate and hydrogen under anaerobic conditions). While Trichomonas vaginalis does not have a cyst form, the organism can survive for up to 24 hours in urine, semen, or even water samples. A nonmotile, round, pseudocystic form with internalized flagella has been observed under unfavorable conditions. This form is generally regarded as a degenerate stage rather than a resistant form, although viability of pseudocystic cells has occasionally been reported. The ability to revert to the trophozoite form, reproduce, and sustain infection has been described, along with a microscopic cell-staining technique to visually discern this elusive form. Metabolism Trichomonas vaginalis is an anaerobe. It lacks cytochrome c and mitochondria, making oxygen uptake and synthesis of adenosine triphosphate via oxidative phosphorylation difficult. Although it contains no mitochondria, an analogous structure called a hydrogenosome, which is the site of fermentative oxidation of pyruvate, carries out many of the same metabolic processes. Carbohydrates, specifically those with alpha-1,4-glycosidic linkages, are metabolized and eventually fermented to products such as acetate, lactate, malate, glycerol, and CO2 under aerobic conditions; hydrogen is produced under anaerobic conditions. Outside the hydrogenosome, carbohydrate metabolism also occurs freely in the cytoplasm. The Embden–Meyerhof–Parnas pathway is used to convert glucose into phosphoenolpyruvate, which ultimately becomes pyruvate. Virulence factors Although Trichomonas vaginalis exists as a trophozoite in its infective form, its amoeboid form is also an important characteristic that adds to how well it is able to infect its host. The amoeboid form, which is pancake-shaped, allows for greater surface-area contact with epithelial cells of the vagina, cervix, urethra, and prostate. The pseudocyst form is also a way in which the microbe can infect more efficiently, but this is only induced by exposure to cold and other stressors. These various forms are accompanied by differing protein phosphorylation profiles, which are triggered by environmental pressures. One of the hallmark features of Trichomonas vaginalis is the set of adherence factors that allow colonization of the cervicovaginal epithelium in women.
The adherence that this organism exhibits is specific to vaginal epithelial cells and is pH-, time-, and temperature-dependent. A variety of virulence factors mediate this process, some of which are the microtubules, microfilaments, four adhesins, and cysteine proteinases. The adhesins are four trichomonad enzymes, called AP65, AP51, AP33, and AP23, that mediate the interaction of the parasite with receptor molecules on vaginal epithelial cells. The best-characterized surface molecule associated with adhesion is Trichomonas vaginalis lipoglycan. This molecule is the most abundant on the surface of Trichomonas vaginalis, aids in sticking to vaginal epithelial cells, and can also influence how the human immune system responds, affecting inflammatory responses and macrophages in the body. Cysteine proteinases may be another virulence factor, because these 30 kDa proteins not only bind to host cell surfaces but may also degrade host proteins such as hemoglobin, fibronectin, and collagen IV. Genome sequencing and statistics The Trichomonas vaginalis genome is approximately 160 megabases in size – ten times larger than predicted from earlier gel-based chromosome sizing. (The human genome is ~3.5 gigabases by comparison.) As much as two-thirds of the Trichomonas vaginalis sequence consists of repetitive and transposable elements, indicative of a recent, drastic evolutionary expansion of the genome. The total number of predicted protein-coding genes is ~60,000, with the genome being around 65% repetitive (virus-like, transposon-like, retrotransposon-like, and unclassified repeats, all with high copy number and low polymorphism). Approximately 26,000 of the protein-coding genes have been classed as 'evidence-supported' (similar either to known proteins or to expressed sequence tags), while the remainder have no known function. These extraordinary genome statistics are likely to change downward as the genome sequence, currently very fragmented due to the difficulty of ordering repetitive DNA, is assembled into chromosomes, and as more transcription data (expressed sequence tags, microarrays) accumulate. TrichDB.org was launched as a free, public genomic data repository and retrieval service devoted to genome-scale trichomonad data. The site currently contains all of the Trichomonas vaginalis sequence project data, several expressed sequence tag libraries, and tools for data mining and display. TrichDB is part of the EuPathDB functional genomics database project funded by the National Institutes of Health and the National Institute of Allergy and Infectious Diseases. Genetic diversity High levels of genetic diversity were detected in Trichomonas vaginalis after phenotypic differences were discovered during clinical presentations. Studies of the genetic diversity of Trichomonas vaginalis have shown that there are two distinct lineages of the parasite worldwide; both lineages are represented evenly in field isolates. The two lineages differ in whether or not Trichomonas vaginalis virus infection is present. Trichomonas vaginalis virus infection is clinically relevant in that it affects parasite resistance to metronidazole, a first-line drug treatment for human trichomoniasis.
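As a back-of-the-envelope check on the genome statistics quoted above, the repeat content and the 'evidence-supported' share follow from simple arithmetic (a minimal Python sketch; all figures are the approximate values given in the text, not new data):

genome_size_mb = 160          # ~160 megabases
repetitive_fraction = 0.65    # genome is around 65% repetitive
predicted_genes = 60_000      # ~60,000 predicted protein-coding genes
supported_genes = 26_000      # ~26,000 classed as 'evidence-supported'

repetitive_mb = genome_size_mb * repetitive_fraction
supported_share = supported_genes / predicted_genes

print(f"Repetitive sequence: ~{repetitive_mb:.0f} Mb of {genome_size_mb} Mb")
print(f"Evidence-supported genes: ~{supported_share:.0%} of predictions")

This works out to roughly 104 Mb of repetitive sequence and about 43% of predicted genes being evidence-supported, which underscores why these statistics are expected to change as the assembly improves.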
Increased susceptibility to human immunodeficiency virus In addition to the inflammation that Trichomonas vaginalis causes, the parasite also causes lysis of epithelial cells and red blood cells in the area, leading to more inflammation and disruption of the protective barrier usually provided by the epithelium. Having Trichomonas vaginalis may also increase the chance of an infected woman transmitting human immunodeficiency virus to her sexual partner(s). Evolution The biology of Trichomonas vaginalis has implications for understanding the origin of sexual reproduction in eukaryotes. Trichomonas vaginalis is not known to undergo meiosis, a key stage of the eukaryotic sexual cycle. However, when Malik et al. examined Trichomonas vaginalis for the presence of 29 genes known to function in meiosis, they found 27 genes homologous to those found in animals, fungi, plants, and other protists, including eight of nine genes that are specific to meiosis in model organisms. These findings suggest that Trichomonas vaginalis has the capability for meiotic recombination, and hence "parasexual" reproduction. Twenty-one of the 27 meiosis genes were also found in another parasite, Giardia lamblia (also called Giardia intestinalis), indicating that these meiotic genes were present in a common ancestor of Trichomonas vaginalis and G. intestinalis. Since these two species are descendants of lineages that are highly divergent among eukaryotes, these meiotic genes were likely present in a common ancestor of all eukaryotes.
Biology and health sciences
Excavata
Plants
414262
https://en.wikipedia.org/wiki/Fecal%E2%80%93oral%20route
Fecal–oral route
The fecal–oral route (also called the oral–fecal route or orofecal route) describes a particular route of transmission of a disease wherein pathogens in fecal particles pass from one person to the mouth of another person. Main causes of fecal–oral disease transmission include lack of adequate sanitation (leading to open defecation) and poor hygiene practices. If soil or water bodies are polluted with fecal material, humans can be infected with waterborne diseases or soil-transmitted diseases. Fecal contamination of food is another form of fecal–oral transmission. Washing hands properly after changing a baby's diaper or after performing anal hygiene can prevent foodborne illness from spreading. The common factors in the fecal–oral route can be summarized as five Fs: fingers, flies, fields, fluids, and food. Diseases caused by fecal–oral transmission include typhoid, cholera, polio, hepatitis, and many other infections, especially ones that cause diarrhea. Background Although fecal–oral transmission is usually discussed as a route of transmission, it is actually a specification of the entry and exit portals of the pathogen, and it can operate across several of the other routes of transmission. Fecal–oral transmission is primarily considered an indirect contact route, through contaminated food or water. However, it can also operate through direct contact with feces or contaminated body parts, such as through anal sex. It can likewise operate by droplet or airborne transmission via the toilet plume from contaminated toilets. F-diagram The foundations for the "F-diagram" used today were laid down in a publication by the World Health Organization (WHO) in 1958. This publication explained transmission routes and barriers to the transmission of diseases from the focal point of human feces. Modifications have been made over the course of history to derive modern-looking F-diagrams. These diagrams are used in many sanitation publications. They are set up so that the fecal–oral transmission pathways are shown to take place via water, hands, arthropods, and soil. To make them easier to remember, words starting with the letter "F" are used for each of these pathways, namely fluids, fingers, flies, food, fields, and fomites (objects and household surfaces). Rather than concentrating only on human feces, feces from other animals should also be included in the F-diagram. Sanitation and hygiene barriers, when placed correctly, prevent the transmission of an infection through hands, water, and food. The F-diagram can be used to show how proper sanitation (in particular toilets, hygiene, and handwashing) can act as an effective barrier to stop transmission of diseases via fecal–oral pathways. Examples Transmission The process of transmission may be simple or involve multiple steps.
Some examples of routes of fecal–oral transmission include: water that has come in contact with feces (for example, due to groundwater pollution from pit latrines) and is then not treated properly before drinking; shaking the hand of someone whose hands have been contaminated by stool, from changing a child's diapers, working in the garden, or dealing with domestic animals; food that has been prepared in the presence of fecal matter; eating soil (geophagia); disease vectors, like houseflies, spreading contamination from inadequate fecal disposal such as open defecation; poor or absent hand washing after using the toilet or changing diapers; poor or absent cleaning of anything that has been in contact with feces; sexual practices that may involve oral contact with feces, such as anilingus, coprophilia, or A2M; and eating feces, whether in young children or in the mental disorder called coprophagia. Prevention One approach to changing people's behaviors and stopping open defecation is the community-led total sanitation approach. In this process, "live demonstrations" of flies moving from food to fresh human feces and back are used. This can "trigger" villagers into action. Diseases The list below shows the main diseases that can be passed via the fecal–oral route, grouped by the type of pathogen involved in disease transmission. Bacteria Vibrio cholerae (cholera) Clostridioides difficile (pseudomembranous enterocolitis) Shigella (shigellosis / bacillary dysentery) Salmonella typhi (typhoid fever) Vibrio parahaemolyticus Escherichia coli Campylobacter Viruses Hepatitis A Hepatitis E Enteroviruses Norovirus (acute gastroenteritis) Poliovirus (poliomyelitis) Some coronaviruses, such as feline coronavirus, are transmitted via the fecal–oral route, and there have also been reports of SARS-CoV-2 being found in stool samples Rotavirus (gastroenteritis) Adenovirus (gastroenteritis) Protozoans Entamoeba histolytica (amoebiasis / amoebic dysentery) Giardia (giardiasis) Cryptosporidium (cryptosporidiosis) Toxoplasma gondii (toxoplasmosis) Helminths Tapeworms Soil-transmitted helminths Related diseases Waterborne diseases are diseases caused by pathogenic microorganisms that are most commonly transmitted in contaminated fresh water. This is one particular type of fecal–oral transmission. Neglected tropical diseases also include many diseases transmitted via the fecal–oral route.
Biology and health sciences
Concepts
Health
414350
https://en.wikipedia.org/wiki/Tooth%20decay
Tooth decay
Dental cavity, also known as tooth decay or dental caries ('caries' is a mass noun, not a plural of 'carie'), is the breakdown of teeth due to acids produced by bacteria. The resulting cavities may be a number of different colors, from yellow to black. Symptoms may include pain and difficulty eating. Complications may include inflammation of the tissue around the tooth, tooth loss, and infection or abscess formation. Tooth regeneration is an ongoing stem cell–based field of study that aims to find methods to reverse the effects of decay; current methods are based on easing symptoms. The cause of cavities is acid from bacteria dissolving the hard tissues of the teeth (enamel, dentin, and cementum). The acid is produced by the bacteria when they break down food debris or sugar on the tooth surface. Simple sugars in food are these bacteria's primary energy source, and thus a diet high in simple sugar is a risk factor. If mineral breakdown is greater than buildup from sources such as saliva, caries results. Risk factors include conditions that result in less saliva, such as diabetes mellitus, Sjögren syndrome, and some medications. Medications that decrease saliva production include antihistamines and antidepressants. Dental caries is also associated with poverty, poor cleaning of the mouth, and receding gums resulting in exposure of the roots of the teeth. Prevention of dental caries includes regular cleaning of the teeth, a diet low in sugar, and small amounts of fluoride. Brushing one's teeth twice per day and flossing between the teeth once a day is recommended. Fluoride may be acquired from water, salt, or toothpaste, among other sources. Treating a mother's dental caries may decrease the risk in her children by decreasing the number of certain bacteria she may spread to them. Screening can result in earlier detection. Depending on the extent of destruction, various treatments can be used to restore the tooth to proper function, or the tooth may be removed. There is no known method to grow back large amounts of tooth. The availability of treatment is often poor in the developing world. Paracetamol (acetaminophen) or ibuprofen may be taken for pain. Worldwide, approximately 3.6 billion people (48% of the population) have dental caries in their permanent teeth as of 2016. The World Health Organization estimates that nearly all adults have dental caries at some point in time. In baby teeth, it affects about 620 million people, or 9% of the population. Caries has become more common in both children and adults in recent years. The disease is most common in the developed world, due to greater simple-sugar consumption, and less common in the developing world. Caries is Latin for "rottenness". Signs and symptoms A person experiencing caries may not be aware of the disease. The earliest sign of a new carious lesion is the appearance of a chalky white spot on the surface of the tooth, indicating an area of demineralization of enamel. This is referred to as a white spot lesion, an incipient carious lesion, or a "micro-cavity". As the lesion continues to demineralize, it can turn brown, but will eventually turn into a cavitation ("cavity"). Before the cavity forms, the process is reversible, but once a cavity forms, the lost tooth structure cannot be regenerated. A lesion that appears dark brown and shiny suggests dental caries was once present, but the demineralization process has stopped, leaving a stain. Active decay is lighter in color and dull in appearance.
As the enamel and dentin are destroyed, the cavity becomes more noticeable. The affected areas of the tooth change color and become soft to the touch. Once the decay passes through the enamel, the dentinal tubules, which have passages to the nerve of the tooth, become exposed, resulting in pain that can be transient, temporarily worsening with exposure to heat, cold, or sweet foods and drinks. A tooth weakened by extensive internal decay can sometimes suddenly fracture under normal chewing forces. When the decay has progressed enough to allow the bacteria to overwhelm the pulp tissue in the center of the tooth, a toothache can result, and the pain will become more constant. Death of the pulp tissue and infection are common consequences. The tooth will no longer be sensitive to hot or cold but can be very tender to pressure. Dental caries can also cause bad breath and foul tastes. In highly progressed cases, an infection can spread from the tooth to the surrounding soft tissues. Complications such as cavernous sinus thrombosis and Ludwig angina can be life-threatening. Cause Four things are required for caries to form: a tooth surface (enamel or dentin), caries-causing bacteria, fermentable carbohydrates (such as sucrose), and time. This involves adherence of food to the teeth and acid creation by the bacteria that make up the dental plaque. However, these four criteria are not always enough to cause the disease; a sheltered environment promoting development of a cariogenic biofilm is also required. The caries disease process does not have an inevitable outcome, and different individuals will be susceptible to different degrees depending on the shape of their teeth, oral hygiene habits, and the buffering capacity of their saliva. Dental caries can occur on any surface of a tooth that is exposed to the oral cavity, but not on the structures that are retained within the bone. Tooth decay is caused by biofilm (dental plaque) lying on the teeth and maturing to become cariogenic (decay-causing). Certain bacteria in the biofilm produce acids, primarily lactic acid, in the presence of fermentable carbohydrates such as sucrose, fructose, and glucose. Caries occurs more often in people from the lower end of the socioeconomic scale than in people from the upper end, due to less education about dental care and less access to professional dental care, which may be expensive. Bacteria The most common bacteria associated with dental cavities are the mutans streptococci, most prominently Streptococcus mutans and Streptococcus sobrinus, and lactobacilli. Cariogenic bacteria (the ones that can cause the disease) are present in dental plaque, but they are usually at too low a concentration to cause problems unless there is a shift in the balance. This is driven by local environmental change, such as frequent sugar intake or inadequate biofilm removal (toothbrushing). If left untreated, the disease can lead to pain, tooth loss, and infection. The mouth contains a wide variety of oral bacteria, but only a few specific species are believed to cause dental caries, Streptococcus mutans and Lactobacillus species among them. Streptococcus mutans are gram-positive bacteria which form biofilms on the surface of teeth. These organisms can produce high levels of lactic acid following fermentation of dietary sugars and are resistant to the adverse effects of low pH, properties essential for cariogenic bacteria.
As the cementum of root surfaces is more easily demineralized than enamel surfaces, a wider variety of bacteria can cause root caries, including Lactobacillus acidophilus, Actinomyces spp., Nocardia spp., and Streptococcus mutans. Bacteria collect around the teeth and gums in a sticky, creamy-coloured mass called plaque, which serves as a biofilm. Some sites collect plaque more commonly than others, for example, sites with a low rate of salivary flow (molar fissures). Grooves on the occlusal surfaces of molar and premolar teeth provide microscopic retention sites for plaque bacteria, as do the interproximal sites. Plaque may also collect above or below the gingiva, where it is referred to as supra- or sub-gingival plaque, respectively. These bacterial strains, most notably S. mutans, can be inherited by a child from a caretaker's kiss or through feeding pre-masticated food. Dietary sugars Bacteria in a person's mouth convert glucose, fructose, and most commonly sucrose (table sugar) into acids, mainly lactic acid, through a glycolytic process called fermentation (as an overall stoichiometry, homolactic fermentation converts one glucose into two lactic acid molecules: C6H12O6 → 2 C3H6O3). If left in contact with the tooth, these acids may cause demineralization, which is the dissolution of its mineral content. The process is dynamic, however, as remineralization can also occur if the acid is neutralized by saliva or mouthwash. Fluoride toothpaste or dental varnish may aid remineralization. If demineralization continues over time, enough mineral content may be lost that the soft organic material left behind disintegrates, forming a cavity or hole. The impact such sugars have on the progress of dental caries is called cariogenicity. Sucrose, although a bound glucose and fructose unit, is in fact more cariogenic than a mixture of equal parts of glucose and fructose. This is due to the bacteria using the energy in the saccharide bond between the glucose and fructose subunits. S. mutans adheres to the biofilm on the tooth by converting sucrose into an extremely adhesive substance called dextran polysaccharide, using the enzyme dextransucrase. Exposure The frequency with which teeth are exposed to cariogenic (acidic) environments affects the likelihood of caries development. After meals or snacks, the bacteria in the mouth metabolize sugar, resulting in an acidic by-product that decreases pH. As time progresses, the pH returns to normal due to the buffering capacity of saliva and the dissolved mineral content of tooth surfaces. During every exposure to the acidic environment, portions of the inorganic mineral content at the surface of teeth dissolve and can remain dissolved for two hours. Since teeth are vulnerable during these acidic periods, the development of dental caries relies heavily on the frequency of acid exposure. The carious process can begin within days of a tooth's erupting into the mouth if the diet is sufficiently rich in suitable carbohydrates. Evidence suggests that the introduction of fluoride treatments has slowed the process. Proximal caries take an average of four years to pass through enamel in permanent teeth. Because the cementum enveloping the root surface is not nearly as durable as the enamel encasing the crown, root caries tend to progress much more rapidly than decay on other surfaces. The progression and loss of mineralization on the root surface is 2.5 times faster than caries in enamel. In very severe cases, where oral hygiene is very poor and the diet is very rich in fermentable carbohydrates, caries may cause cavities within months of tooth eruption.
This can occur, for example, when children continuously drink sugary drinks from baby bottles (see later discussion). Teeth There are certain diseases and disorders affecting teeth that may leave an individual at greater risk for cavities. Molar incisor hypomineralization seems to be increasingly common. While the cause is unknown, it is thought to be a combination of genetic and environmental factors. Possible contributing factors that have been investigated include systemic factors such as high levels of dioxins or polychlorinated biphenyls (PCBs) in the mother's milk, premature birth and oxygen deprivation at birth, and certain disorders during the child's first 3 years, such as mumps, diphtheria, scarlet fever, measles, hypoparathyroidism, malnutrition, malabsorption, hypovitaminosis D, chronic respiratory diseases, or undiagnosed and untreated coeliac disease, which usually presents with mild or absent gastrointestinal symptoms. Amelogenesis imperfecta, which occurs in between 1 in 718 and 1 in 14,000 individuals, is a disease in which the enamel does not fully form or forms in insufficient amounts and can fall off a tooth. In both cases, teeth may be left more vulnerable to decay because the enamel is not able to protect the tooth. In most people, however, disorders or diseases affecting teeth are not the primary cause of dental caries. Approximately 96% of tooth enamel is composed of minerals. These minerals, especially hydroxyapatite, become soluble when exposed to acidic environments. Enamel begins to demineralize at a pH of 5.5. Dentin and cementum are more susceptible to caries than enamel because they have lower mineral content. Thus, when root surfaces of teeth are exposed from gingival recession or periodontal disease, caries can develop more readily. Even in a healthy oral environment, however, the tooth is susceptible to dental caries. The evidence linking malocclusion and/or crowding to dental caries is weak; however, the anatomy of teeth may affect the likelihood of caries formation. Where the deep developmental grooves of teeth are more numerous and exaggerated, pit and fissure caries is more likely to develop (see next section). Also, caries is more likely to develop when food is trapped between teeth. Other factors Reduced salivary flow rate is associated with increased caries, since the buffering capability of saliva is not present to counterbalance the acidic environment created by certain foods. As a result, medical conditions that reduce the amount of saliva produced by the salivary glands, in particular the submandibular gland and parotid gland, are likely to lead to dry mouth and thus to widespread tooth decay. Examples include Sjögren syndrome, diabetes mellitus, diabetes insipidus, and sarcoidosis. Medications, such as antihistamines and antidepressants, can also impair salivary flow. Stimulants, most notoriously methamphetamine, also suppress the flow of saliva to an extreme degree; this is known as meth mouth. Tetrahydrocannabinol (THC), the active chemical substance in cannabis, also causes a nearly complete suppression of salivation, known in colloquial terms as "cotton mouth". Moreover, 63% of the most commonly prescribed medications in the United States list dry mouth as a known side effect.
Radiation therapy of the head and neck may also damage the cells in salivary glands, somewhat increasing the likelihood of caries formation. Susceptibility to caries can be related to altered metabolism in the tooth, in particular to fluid flow in the dentin. Experiments on rats have shown that a high-sucrose, cariogenic diet "significantly suppresses the rate of fluid motion" in dentin. The use of tobacco may also increase the risk for caries formation. Some brands of smokeless tobacco contain high sugar content, increasing susceptibility to caries. Tobacco use is a significant risk factor for periodontal disease, which can cause the gingiva to recede. As the gingiva loses attachment to the teeth due to gingival recession, the root surface becomes more visible in the mouth. If this occurs, root caries is a concern, since the cementum covering the roots of teeth is more easily demineralized by acids than enamel. Currently, there is not enough evidence to support a causal relationship between smoking and coronal caries, but evidence does suggest a relationship between smoking and root-surface caries. Exposure of children to secondhand tobacco smoke is associated with tooth decay. Intrauterine and neonatal lead exposure promote tooth decay. Besides lead, all atoms with electrical charge and ionic radius similar to bivalent calcium, such as cadmium, mimic the calcium ion, and therefore exposure to them may promote tooth decay. Poverty is also a significant social determinant for oral health. Dental caries has been linked with lower socio-economic status and can be considered a disease of poverty. Forms are available for caries risk assessment when treating dental cases; one such system uses the evidence-based Caries Management by Risk Assessment (CAMBRA) approach. It is still unknown whether the identification of high-risk individuals can lead to more effective long-term patient management that prevents caries initiation and arrests or reverses the progression of lesions. Saliva also contains iodine and epidermal growth factor (EGF). EGF is effective in promoting cellular proliferation, differentiation, and survival. Salivary EGF, which also seems to be regulated by dietary inorganic iodine, plays an important physiological role in the maintenance of oral (and gastro-oesophageal) tissue integrity, while iodine itself is effective in the prevention of dental caries and in oral health. Pathophysiology Teeth are bathed in saliva and have a coating of bacteria on them (biofilm) that continually forms. The development of biofilm begins with pellicle formation. Pellicle is an acellular proteinaceous film which covers the teeth. Bacteria colonize the teeth by adhering to the pellicle-coated surface. Over time, a mature biofilm is formed, creating a cariogenic environment on the tooth surface. The minerals in the hard tissues of the teeth (enamel, dentin, and cementum) are constantly undergoing demineralization and remineralization. Dental caries results when the demineralization rate is faster than the remineralization, producing net mineral loss. This occurs when there is an ecologic shift within the dental biofilm, from a balanced population of microorganisms to a population that produces acids and can survive in an acid environment. Enamel Tooth enamel is a highly mineralized acellular tissue, and caries act upon it through a chemical process brought on by the acidic environment produced by bacteria.
As the bacteria consume the sugar and use it for their own energy, they produce lactic acid. The effects of this process include the demineralization of crystals in the enamel, caused by acids, over time until the bacteria physically penetrate the dentin. Enamel rods, which are the basic unit of the enamel structure, run perpendicularly from the surface of the tooth to the dentin. Since demineralization of enamel by caries follows the direction of the enamel rods, the different triangular patterns of pit-and-fissure and smooth-surface caries develop in the enamel, because the orientation of the enamel rods differs in the two areas of the tooth. As the enamel loses minerals and dental caries progresses, the enamel develops several distinct zones, visible under a light microscope. From the deepest layer of the enamel to the enamel surface, the identified areas are the translucent zone, the dark zone, the body of the lesion, and the surface zone. The translucent zone is the first visible sign of caries and coincides with a one to two percent loss of minerals. A slight remineralization of enamel occurs in the dark zone, which serves as an example of how the development of dental caries is an active process with alternating changes. The area of greatest demineralization and destruction is in the body of the lesion itself. The surface zone remains relatively mineralized and is present until the loss of tooth structure results in a cavitation. Dentin Unlike enamel, the dentin reacts to the progression of dental caries. The ameloblasts, which produce enamel, are destroyed once enamel formation is complete and thus cannot later regenerate enamel after its destruction. On the other hand, dentin is produced continuously throughout life by odontoblasts, which reside at the border between the pulp and dentin. Since odontoblasts are present, a stimulus, such as caries, can trigger a biologic response. These defense mechanisms include the formation of sclerotic and tertiary dentin. In dentin, from the deepest layer to the enamel, the distinct areas affected by caries are the advancing front, the zone of bacterial penetration, and the zone of destruction. The advancing front represents a zone of demineralized dentin due to acid and has no bacteria present. The zones of bacterial penetration and destruction are the locations of invading bacteria and, ultimately, the decomposition of dentin. The zone of destruction has a more mixed bacterial population where proteolytic enzymes have destroyed the organic matrix. The innermost dentin caries has been reversibly attacked because the collagen matrix is not severely damaged, giving it potential for repair. Sclerotic dentin The structure of dentin is an arrangement of microscopic channels, called dentinal tubules, which radiate outward from the pulp chamber to the exterior cementum or enamel border. The diameter of the dentinal tubules is largest near the pulp (about 2.5 μm) and smallest (about 900 nm) at the junction of dentin and enamel. The carious process continues through the dentinal tubules, which are responsible for the triangular patterns resulting from the progression of caries deep into the tooth. The tubules also allow caries to progress faster. In response, the fluid inside the tubules brings immunoglobulins from the immune system to fight the bacterial infection. At the same time, there is an increase of mineralization of the surrounding tubules.
This results in a constriction of the tubules, which is an attempt to slow the bacterial progression. In addition, as the acid from the bacteria demineralizes the hydroxyapatite crystals, calcium and phosphorus are released, allowing for the precipitation of more crystals, which fall deeper into the dentinal tubule. These crystals form a barrier and slow the advancement of caries. After these protective responses, the dentin is considered sclerotic. According to hydrodynamic theory, fluids within dentinal tubules are believed to be the mechanism by which pain receptors are triggered within the pulp of the tooth. Since sclerotic dentin prevents the passage of such fluids, pain that would otherwise serve as a warning of the invading bacteria may not develop at first. Tertiary dentin In response to dental caries, more dentin may be produced toward the direction of the pulp. This new dentin is referred to as tertiary dentin. Tertiary dentin is produced to protect the pulp for as long as possible from the advancing bacteria. As more tertiary dentin is produced, the size of the pulp decreases. This type of dentin has been subdivided according to the presence or absence of the original odontoblasts. If the odontoblasts survive long enough to react to the dental caries, then the dentin produced is called "reactionary" dentin. If the odontoblasts are killed, the dentin produced is called "reparative" dentin. In the case of reparative dentin, other cells are needed to assume the role of the destroyed odontoblasts. Growth factors, especially TGF-β, are thought to initiate the production of reparative dentin by fibroblasts and mesenchymal cells of the pulp. Reparative dentin is produced at an average rate of 1.5 μm/day, but this can increase to 3.5 μm/day. The resulting dentin contains irregularly shaped dentinal tubules that may not line up with the existing dentinal tubules, which diminishes the ability of dental caries to progress within them. Cementum The incidence of cemental caries increases in older adults as gingival recession occurs from either trauma or periodontal disease. It is a chronic condition that forms a large, shallow lesion and slowly invades first the root's cementum and then dentin to cause a chronic infection of the pulp (see further discussion under classification by affected hard tissue). Because dental pain is a late finding, many lesions are not detected early, resulting in restorative challenges and increased tooth loss. Diagnosis The presentation of caries is highly variable. However, the risk factors and stages of development are similar. Initially, it may appear as a small chalky area (smooth-surface caries), which may eventually develop into a large cavitation. Sometimes caries may be directly visible. However, other methods of detection, such as X-rays, are used for less visible areas of teeth and to judge the extent of destruction. Lasers allow detection of caries without ionizing radiation and are now used for detection of interproximal decay (between the teeth). Primary diagnosis involves inspection of all visible tooth surfaces using a good light source, dental mirror, and explorer. Dental radiographs (X-rays) may show dental caries before it is otherwise visible, in particular caries between the teeth. Large areas of dental caries are often apparent to the naked eye, but smaller lesions can be difficult to identify.
Visual and tactile inspection along with radiographs are employed frequently among dentists, in particular to diagnose pit-and-fissure caries. Early, uncavitated caries is often diagnosed by blowing air across the suspect surface, which removes moisture and changes the optical properties of the unmineralized enamel. Some dental researchers have cautioned against the use of dental explorers to find caries, in particular sharp-ended explorers. In cases where a small area of tooth has begun demineralizing but has not yet cavitated, the pressure from the dental explorer could cause a cavity. Since the carious process is reversible before a cavity is present, it may be possible to arrest caries with fluoride and remineralize the tooth surface. When a cavity is present, a restoration will be needed to replace the lost tooth structure. At times, pit-and-fissure caries may be difficult to detect. Bacteria can penetrate the enamel to reach dentin, but then the outer surface may remineralize, especially if fluoride is present. These caries, sometimes referred to as "hidden caries", will still be visible on X-ray radiographs, but visual examination of the tooth would show the enamel intact or minimally perforated. The differential diagnosis for dental caries includes dental fluorosis and developmental defects of the tooth, including hypomineralization of the tooth and hypoplasia of the tooth. The early carious lesion is characterized by demineralization of the tooth surface, altering the tooth's optical properties. Technology using laser speckle image (LSI) techniques may provide a diagnostic aid to detect early carious lesions. Classification Caries can be classified by location, etiology, rate of progression, and affected hard tissues. These forms of classification can be used to characterize a particular case of tooth decay, to more accurately represent the condition to others and also indicate the severity of tooth destruction. In some instances, caries is described in other ways that might indicate the cause. The G. V. Black classification is as follows: Class I: occlusal surfaces of posterior teeth, buccal or lingual pits on molars, lingual pit near cingulum of maxillary incisors; Class II: proximal surfaces of posterior teeth; Class III: interproximal surfaces of anterior teeth without incisal edge involvement; Class IV: interproximal surfaces of anterior teeth with incisal edge involvement; Class V: cervical third of facial or lingual surface of tooth; Class VI: incisal or occlusal edge worn away due to attrition. Early childhood caries Early childhood caries (ECC), also known as "baby bottle caries", "baby bottle tooth decay", or "bottle rot", is a pattern of decay found in young children's deciduous (baby) teeth. By definition, ECC requires the presence of at least one carious lesion on a primary tooth in a child under the age of 6 years. The teeth most likely affected are the maxillary anterior teeth, but all teeth can be affected. The name for this type of caries comes from the fact that the decay usually is a result of allowing children to fall asleep with sweetened liquids in their bottles, or feeding children sweetened liquids multiple times during the day. Another pattern of decay is "rampant caries", which signifies advanced or severe decay on multiple surfaces of many teeth. Rampant caries may be seen in individuals with xerostomia, poor oral hygiene, stimulant use (due to drug-induced dry mouth), and/or large sugar intake.
If rampant caries is a result of previous radiation to the head and neck, it may be described as radiation-induced caries. Problems can also be caused by the self-destruction of roots and whole-tooth resorption when new teeth erupt, or later from unknown causes. Children at 6–12 months of age are at increased risk of developing dental caries. A range of studies have reported a correlation between caries in primary teeth and caries in permanent teeth. Rate of progression Temporal descriptions can be applied to caries to indicate the progression rate and previous history. "Acute" signifies a quickly developing condition, whereas "chronic" describes a condition that has taken an extended time to develop, in which thousands of meals and snacks, many causing some acid demineralization that is not remineralized, eventually result in cavities. Recurrent caries, also described as secondary caries, are caries that appear at a location with a previous history of caries. This is frequently found on the margins of fillings and other dental restorations. On the other hand, incipient caries describes decay at a location that has not experienced previous decay. Arrested caries describes a lesion on a tooth that was previously demineralized but was remineralized before causing a cavitation. Fluoride treatment, as well as the use of amorphous calcium phosphate, can help the recalcification of tooth enamel. Micro-invasive interventions (such as dental sealants or resin infiltration) have been shown to slow down the progression of proximal decay. Affected hard tissue Depending on which hard tissues are affected, it is possible to describe caries as involving enamel, dentin, or cementum. Early in its development, caries may affect only enamel. Once the extent of decay reaches the deeper layer of dentin, the term "dentinal caries" is used. Since cementum is the hard tissue that covers the roots of teeth, it is not often affected by decay unless the roots of teeth are exposed to the mouth. Although the term "cementum caries" may be used to describe decay on the roots of teeth, caries very rarely affects the cementum alone. Prevention Oral hygiene The primary approach to dental hygiene care consists of tooth-brushing and flossing. The purpose of oral hygiene is to remove and prevent the formation of plaque or dental biofilm, although studies have shown this effect on caries is limited. While there is no evidence that flossing prevents tooth decay, the practice is still generally recommended. A toothbrush can be used to remove plaque on accessible surfaces, but not between teeth or inside pits and fissures on chewing surfaces. When used correctly, dental floss removes plaque from areas that could otherwise develop proximal caries, but only if the depth of the sulcus has not been compromised. Additional aids include interdental brushes, water picks, and mouthwashes. The use of rotational electric toothbrushes might reduce plaque and gingivitis, though it is unclear whether this is of clinical importance. However, oral hygiene is effective at preventing gum disease (gingivitis / periodontal disease). Food is forced inside pits and fissures under chewing pressure, leading to carbohydrate-fuelled acid demineralisation where the brush, fluoride toothpaste, and saliva have no access to remove trapped food, neutralise acid, or remineralise tooth enamel. (Occlusal caries accounts for between 80 and 90% of caries in children (Weintraub, 2001).)
Unlike brushing, fluoride leads to a proven reduction in caries incidence of approximately 25%; higher concentrations of fluoride (>1,000 ppm) in toothpaste also help prevent tooth decay, with the effect increasing with concentration up to a plateau. A randomized clinical trial demonstrated that toothpastes containing arginine give greater protection against tooth cavitation than regular fluoride toothpastes containing 1,450 ppm fluoride alone. A Cochrane review has confirmed that the use of fluoride gels, normally applied by a dental professional from once to several times a year, assists in the prevention of tooth decay in children and adolescents, reiterating the importance of fluoride as the principal means of caries prevention. Another review concluded that the supervised regular use of a fluoride mouthwash greatly reduced the onset of decay in the permanent teeth of children. Professional hygiene care consists of regular dental examinations and professional prophylaxis (cleaning). Sometimes, complete plaque removal is difficult, and a dentist or dental hygienist may be needed. Along with oral hygiene, radiographs may be taken at dental visits to detect possible dental caries development in high-risk areas of the mouth (e.g. "bitewing" X-rays, which visualize the crowns of the back teeth). Alternative methods of oral hygiene also exist around the world, such as the use of teeth-cleaning twigs such as miswaks in some Middle Eastern and African cultures. There is some limited evidence demonstrating the efficacy of these alternative methods of oral hygiene. Dietary modification People who eat more free sugars get more cavities, with cavities increasing exponentially with increasing sugar intake. Populations with less sugar intake have fewer cavities. In one population, in Nigeria, where sugar consumption was about 2 g/day, only two percent of the population, of any age, had had a cavity. Chewy and sticky foods (such as candy, cookies, potato chips, and crackers) tend to adhere to teeth longer. However, dried fruits such as raisins and fresh fruit such as apples and bananas disappear from the mouth quickly, and do not appear to be a risk factor. Consumers are not good at guessing which foods stick around in the mouth. For children, the American Dental Association and the European Academy of Paediatric Dentistry recommend limiting the frequency of consumption of drinks with sugar, and not giving baby bottles to infants during sleep (see earlier discussion). (Oral Health Topics: Baby Bottle Tooth Decay, American Dental Association website; page accessed August 14, 2006.) Parents are also advised to avoid sharing utensils and cups with their infants, to prevent transferring bacteria from the parent's mouth. Xylitol is a naturally occurring sugar alcohol that is used in different products as an alternative to sucrose (table sugar). As of 2015, the evidence concerning the use of xylitol in chewing gum was insufficient to determine whether it is effective at preventing caries. Other measures The use of dental sealants is a means of prevention. A sealant is a thin plastic-like coating applied to the chewing surfaces of the molars to prevent food from being trapped inside pits and fissures. This deprives resident plaque bacteria of carbohydrate, preventing the formation of pit-and-fissure caries. Sealants are usually applied to children's teeth as soon as the teeth erupt, but adults may also receive them if not previously placed.
Sealants can wear out and fail to prevent access of food and plaque bacteria to pits and fissures, so they must be checked regularly by dental professionals and replaced when necessary. Dental sealants have been shown to be more effective at preventing occlusal decay than fluoride varnish applications. Calcium, as found in food such as milk and green vegetables, is often recommended to protect against dental caries. Fluoride helps prevent decay of a tooth by binding to the hydroxyapatite crystals in enamel. Streptococcus mutans is the leading cause of tooth decay. At low concentrations, fluoride ions act as a bacteriostatic therapeutic agent; at high concentrations, fluoride ions are bactericidal. The incorporated fluoride makes enamel more resistant to demineralization and, thus, resistant to decay. Fluoride can be delivered in either topical or systemic form. Topical fluoride is more highly recommended than systemic intake to protect the surface of the teeth. Topical fluoride is used in toothpaste, mouthwash, and fluoride varnish. Standard fluoride toothpaste (1,000–1,500 ppm) is more effective than low-fluoride toothpaste (<600 ppm) at preventing dental caries. It is recommended that all adults use fluoridated toothpaste with at least 1,350 ppm fluoride content, brushing at least twice per day, including right before bed. For children and young adults, fluoridated toothpaste with 1,350 ppm to 1,500 ppm fluoride content is recommended, brushing twice per day, including right before bed. The American Dental Association Council suggests that for children under 3 years old, caregivers should begin brushing their teeth using fluoridated toothpaste in an amount no more than a smear. Toothbrushing should also be supervised in children below 8 years of age, to prevent swallowing of toothpaste. After brushing with fluoride toothpaste, rinsing should be avoided and the excess spat out. Many dental professionals include application of topical fluoride solutions as part of routine visits and recommend the use of xylitol and amorphous calcium phosphate products. Silver diammine fluoride (SDF) may work better than fluoride varnish to prevent cavities. Systemic fluoride is delivered as lozenges, tablets, drops, and water fluoridation; these are ingested orally to provide fluoride systemically. Water fluoridation has been shown to be beneficial in preventing tooth decay, especially in low socioeconomic areas, where other forms of fluoride are not available. However, a Cochrane systematic review found no evidence to suggest that daily systemic fluoride taken by pregnant women was effective in preventing dental decay in their offspring. While some products containing chlorhexidine have been shown to limit the progression of existing tooth decay, there is currently no evidence suggesting that chlorhexidine gels and varnishes can prevent dental caries or reduce the population of Streptococcus mutans in the mouth. An oral health assessment carried out before a child reaches the age of one may help with management of caries. The oral health assessment should include checking the child's history, a clinical examination, checking the risk of caries in the child (including the state of their occlusion), and assessing how well equipped the child's parent or carer is to help the child prevent caries. To further increase a child's cooperation in caries management, good communication by the dentist and the rest of the staff of a dental practice should be used.
This communication can be improved by calling the child by their name, using eye contact, and including them in any conversation about their treatment. Vaccines are also under development. Treatment Most importantly, whether the carious lesion is cavitated or non-cavitated dictates the management. Clinical assessment of whether the lesion is active or arrested is also important. Noncavitated lesions can be arrested, and remineralization can occur under the right conditions. However, this may require extensive changes to the diet (reduction in frequency of refined sugars), improved oral hygiene (toothbrushing twice per day with fluoride toothpaste and daily flossing), and regular application of topical fluoride. More recently, immunoglobulin Y specific to Streptococcus mutans has been used to suppress the growth of S. mutans. Such management of a carious lesion is termed "non-operative", since no drilling is carried out on the tooth. Non-operative treatment requires excellent understanding and motivation from the individual; otherwise the decay will continue. Once a lesion has cavitated, especially if dentin is involved, remineralization is much more difficult and a dental restoration is usually indicated ("operative treatment"). Before a restoration can be placed, all of the decay must be removed; otherwise, it will continue to progress underneath the filling. Sometimes a small amount of decay can be left if it is entombed and there is a seal which isolates the bacteria from their substrate. This can be likened to placing a glass container over a candle, which burns itself out once the oxygen is used up. Techniques such as stepwise caries removal are designed to avoid exposure of the dental pulp and to reduce the overall amount of tooth substance which requires removal before the final filling is placed. Often, enamel which overlies decayed dentin must also be removed, as it is unsupported and susceptible to fracture. The modern decision-making process considers the activity of the lesion and whether it is cavitated. Destroyed tooth structure does not fully regenerate, although remineralization of very small carious lesions may occur if dental hygiene is kept at an optimal level. For small lesions, topical fluoride is sometimes used to encourage remineralization. For larger lesions, the progression of dental caries can be stopped by treatment. The goal of treatment is to preserve tooth structures and prevent further destruction of the tooth. Aggressive treatment, by filling, of incipient carious lesions (places where there is superficial damage to the enamel) is controversial, as they may heal themselves, while once a filling is placed it will eventually have to be redone, and the site serves as a vulnerable site for further decay. In general, early treatment is quicker and less expensive than treatment of extensive decay. Local anesthetics, nitrous oxide ("laughing gas"), or other prescription medications may be required in some cases to relieve pain during or following treatment or to relieve anxiety during treatment. A dental handpiece ("drill") is used to remove large portions of decayed material from a tooth. A spoon excavator, a dental instrument used to carefully remove decay, is sometimes employed when the decay in dentin reaches near the pulp. Some dentists remove dental caries using a laser rather than the traditional dental drill.
A Cochrane review of this technique looked at Er:YAG (erbium-doped yttrium aluminium garnet), Er,Cr:YSGG (erbium, chromium: yttrium-scandium-gallium-garnet) and Nd:YAG (neodymium-doped yttrium aluminium garnet) lasers and found that although people treated with lasers (compared to a conventional dental "drill") experienced less pain and had less need for dental anaesthesia, overall there was little difference in caries removal. Another alternative to drilling or lasers for small caries is the use of air abrasion, in which small abrasive particles are blasted at decay using pressurized air (similar to sand blasting). Once the decay is removed, the missing tooth structure requires a dental restoration of some sort to return the tooth to functional and aesthetic condition. Restorative materials include dental amalgam, composite resin, glass ionomer cement, porcelain, and gold. Composite resin and porcelain can be made to match the color of a patient's natural teeth and are thus used more frequently when aesthetics are a concern. Composite restorations are not as strong as dental amalgam and gold; some dentists consider the latter to be the only advisable restorations for posterior areas where chewing forces are great. When the decay is too extensive, there may not be enough tooth structure remaining to allow a restorative material to be placed within the tooth. Thus, a crown may be needed. This restoration appears similar to a cap and is fitted over the remainder of the natural crown of the tooth. Crowns are often made of gold, porcelain, or porcelain fused to metal. For children, preformed crowns are available to place over the tooth. These are usually made of metal (usually stainless steel, though aesthetic materials are increasingly available). Traditionally teeth are shaved down to make room for the crown but, more recently, stainless steel crowns have been used to seal decay into the tooth and stop it progressing. This is known as the Hall Technique and works by depriving the bacteria in the decay of nutrients and making their environment less favorable for them. It is a minimally invasive method of managing decay in children and does not require local anesthetic injections in the mouth. In certain cases, endodontic therapy may be necessary for the restoration of a tooth. Endodontic therapy, also known as a "root canal", is recommended if the pulp in a tooth dies from infection by decay-causing bacteria or from trauma. In root canal therapy, the pulp of the tooth, including the nerve and vascular tissues, is removed along with decayed portions of the tooth. The canals are instrumented with endodontic files to clean and shape them, and they are then usually filled with a rubber-like material called gutta percha. The tooth is filled and a crown can be placed. Upon completion of root canal therapy, the tooth is non-vital, as it is devoid of any living tissue. An extraction can also serve as treatment for dental caries. Removal of the decayed tooth is performed if the tooth is too badly destroyed by the decay process to be effectively restored. Extractions are sometimes considered if the tooth lacks an opposing tooth or will probably cause further problems in the future, as may be the case for wisdom teeth. Extractions may also be preferred by people unable or unwilling to undergo the expense or difficulties of restoring the tooth. Recent studies from the University of Michigan demonstrated that silver diamine fluoride (SDF) is effective in stopping tooth decay when applied to the teeth of young children.
The silver ions in SDF denature bacterial proteins and enzymes, effectively killing cariogenic bacteria such as Streptococcus mutans. Epidemiology Worldwide, approximately 3.6 billion people have dental caries in their permanent teeth. In baby teeth it affects about 620 million people, or 9% of the population. The disease is most common in Latin American countries, countries in the Middle East, and South Asia, and least prevalent in China. In the United States, dental caries is the most common chronic childhood disease, being at least five times more common than asthma. It is the primary pathological cause of tooth loss in children. Between 29% and 59% of adults over the age of 50 experience caries. Treating dental cavities costs 5–10% of health-care budgets in industrialized countries, and can easily exceed budgets in lower-income countries. The number of cases has decreased in some developed countries, and this decline is usually attributed to increasingly better oral hygiene practices and preventive measures such as fluoride treatment. Nonetheless, countries that have experienced an overall decrease in cases of tooth decay continue to have a disparity in the distribution of the disease. Among children in the United States and Europe, twenty percent of the population endures sixty to eighty percent of cases of dental caries. A similarly skewed distribution of the disease is found throughout the world, with some children having none or very few caries and others having a high number. Australia, Nepal, and Sweden (where children receive dental care paid for by the government) have a low incidence of cases of dental caries among children, whereas cases are more numerous in Costa Rica and Slovakia. The classic DMF (decayed/missing/filled) index is one of the most common methods for assessing caries prevalence as well as dental treatment needs among populations; a minimal sketch of how such a score can be tallied appears at the end of this article. This index is based on in-field clinical examination of individuals using a probe, mirror and cotton rolls. Because the DMF index is done without X-ray imaging, it underestimates real caries prevalence and treatment needs. Bacteria typically associated with dental caries have been isolated from vaginal samples from females who have bacterial vaginosis. History There is a long history of dental caries. Over a million years ago, hominins such as Paranthropus had cavities. The largest increases in the prevalence of caries have been associated with dietary changes. Archaeological evidence shows that tooth decay is an ancient disease dating far into prehistory. Skulls dating from a million years ago through the Neolithic period show signs of caries, including those from the Paleolithic and Mesolithic ages. The increase of caries during the Neolithic period may be attributed to the increased consumption of plant foods containing carbohydrates. The beginning of rice cultivation in South Asia is also believed to have caused an increase in caries, especially for women, although there is also some evidence from sites in Thailand, such as Khok Phanom Di, that shows a decrease in the overall percentage of dental caries with the increase in dependence on rice agriculture. A Sumerian text from 5000 BC describes a "tooth worm" as the cause of caries. Evidence of this belief has also been found in India, Egypt, Japan, and China. Unearthed ancient skulls show evidence of primitive dental work. In Pakistan, teeth dating from around 7000 BC to 5500 BC show nearly perfect holes from primitive dental drills.
The Ebers Papyrus, an Egyptian text from 1550 BC, mentions diseases of teeth. During the Sargonid dynasty of Assyria, during 668 to 626 BC, writings from the king's physician specify the need to extract a tooth due to spreading inflammation. In the Roman Empire, wider consumption of cooked foods led to a small increase in caries prevalence. The Greco-Roman civilization, in addition to the Egyptian civilization, had treatments for pain resulting from caries. The rate of caries remained low through the Bronze Age and Iron Age, but sharply increased during the Middle Ages. Earlier periodic increases in caries prevalence had been small in comparison to the increase around 1000 AD, when sugar cane became more accessible to the Western world. Treatment consisted mainly of herbal remedies and charms, but sometimes also included bloodletting. The barber surgeons of the time provided services that included tooth extractions. Trained through apprenticeships, these health providers were quite successful in ending tooth pain and likely prevented systemic spread of infections in many cases. Among Roman Catholics, prayers to Saint Apollonia, the patroness of dentistry, were meant to heal pain derived from tooth infection. There is also evidence of an increase in caries when Indigenous people in North America changed from a strictly hunter-gatherer diet to a diet with maize. Rates also increased after contact with colonizing Europeans, implying an even greater dependence on maize. During the European Age of Enlightenment, the belief that a "tooth worm" caused caries ceased to be accepted in the European medical community. Pierre Fauchard, known as the father of modern dentistry, was one of the first to reject the idea that worms caused tooth decay, and noted that sugar was detrimental to the teeth and gingiva. In 1850, another sharp increase in the prevalence of caries occurred, believed to be a result of widespread dietary changes. Prior to this time, cervical caries was the most frequent type of caries, but increased availability of sugar cane, refined flour, bread, and sweetened tea corresponded with a greater number of pit and fissure caries. In the 1890s, W. D. Miller conducted a series of studies that led him to propose an explanation for dental caries that remains influential in current theories. He found that bacteria inhabited the mouth and that they produced acids that dissolved tooth structures in the presence of fermentable carbohydrates. This explanation is known as the chemoparasitic caries theory. Miller's contribution, along with the research on plaque by G. V. Black and J. L. Williams, served as the foundation for the current explanation of the etiology of caries. Several specific strains of lactobacilli were identified in 1921 by Fernando E. Rodríguez Vargas. In 1924 in London, Killian Clarke described a spherical bacterium in chains isolated from carious lesions, which he called Streptococcus mutans. Although Clarke proposed that this organism was the cause of caries, the discovery was not followed up. Later, in 1954 in the US, Frank Orland, working with hamsters, showed that caries was transmissible and caused by acid-producing Streptococcus, thus ending the debate over whether dental caries result from bacteria. It was not until the late 1960s that it became generally accepted that the Streptococcus isolated from hamster caries was the same as S. mutans. Tooth decay has been present throughout human history, from early hominids millions of years ago to modern humans.
The prevalence of caries increased dramatically in the 19th century, as the Industrial Revolution made certain items, such as refined sugar and flour, readily available. The diet of the "newly industrialized English working class" then became centered on bread, jam, and sweetened tea, greatly increasing both sugar consumption and caries. Etymology and usage Naturalized from Latin into English (a loanword), caries in its English form originated as a mass noun that means 'rottenness', that is, 'decay'. The word is an uncountable noun. Cariesology or cariology is the study of dental caries. Society and culture It is estimated that untreated dental caries results in worldwide productivity losses of about US$27 billion yearly. Other animals Dental caries are uncommon among companion animals.
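As a companion to the DMF index described in the Epidemiology section above, the sketch below shows how a DMFT score might be tallied in code. It is purely illustrative: the Tooth record and its field names are hypothetical, and a real survey would follow formal WHO coding conventions rather than simple booleans.

from dataclasses import dataclass

@dataclass
class Tooth:
    # Hypothetical per-tooth record for this example.
    decayed: bool = False
    missing_due_to_caries: bool = False
    filled: bool = False

def dmft_score(teeth):
    """DMFT for one person: the count of permanent teeth that are
    Decayed, Missing due to caries, or Filled."""
    return sum(
        1 for t in teeth
        if t.decayed or t.missing_due_to_caries or t.filled
    )

def mean_dmft(population):
    """Mean DMFT across a surveyed population, a common summary
    statistic for caries prevalence."""
    scores = [dmft_score(person) for person in population]
    return sum(scores) / len(scores)

Because the underlying examination is visual and tactile only, scores computed this way inherit the index's known tendency to underestimate true prevalence.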
Biology and health sciences
Dentistry
null
414721
https://en.wikipedia.org/wiki/Forehead
Forehead
In human anatomy, the forehead is an area of the head bounded by three features, two of the skull and one of the scalp. The top of the forehead is marked by the hairline, the edge of the area where hair on the scalp grows. The bottom of the forehead is marked by the supraorbital ridge, the bone feature of the skull above the eyes. The two sides of the forehead are marked by the temporal ridge, a bone feature that links the supraorbital ridge to the coronal suture line and beyond. The eyebrows, however, do not form part of the forehead. In Terminologia Anatomica, sinciput is given as the Latin equivalent to "forehead" (etymology of sinciput: from semi- "half" and caput "head"). Structure The bone of the forehead is the squamous part of the frontal bone. The overlying muscles are the occipitofrontalis, procerus, and corrugator supercilii muscles, all of which are controlled by the temporal branch of the facial nerve. The sensory nerves of the forehead connect to the ophthalmic branch of the trigeminal nerve and to the cervical plexus, and lie within the subcutaneous fat. The motor nerves of the forehead connect to the facial nerve. The ophthalmic branch of the trigeminal nerve, the supraorbital nerve, divides at the orbital rim into two parts in the forehead. One part, the superficial division, runs over the surface of the occipitofrontalis muscle. This provides sensation for the skin of the forehead and for the front edge of the scalp. The other part, the deep division, runs into the occipitofrontalis muscle and provides frontoparietal sensation. Blood supply to the forehead is via the left and right supraorbital, supratrochlear, and anterior branches of the superficial temporal artery. Function Expression The muscles of the forehead help to form facial expressions. There are four basic motions, which can occur individually or in combination to form different expressions. The occipitofrontalis muscles can raise the eyebrows, either together or individually, forming expressions of surprise and quizzicality. The corrugator supercilii muscles can pull the eyebrows inwards and down, forming a frown. The procerus muscles can pull down the centre portions of the eyebrows. Wrinkles The movements of the muscles in the forehead produce characteristic wrinkles in the skin. The occipitofrontalis muscles produce the transverse wrinkles across the width of the forehead, and the corrugator supercilii muscles produce vertical wrinkles between the eyebrows above the nose. The procerus muscles cause the nose to wrinkle. Society and culture In physiognomy and phrenology, the shape of the forehead was taken to symbolise intellect and intelligence. "Animals, even the most intelligent of them," wrote Samuel R. Wells in 1866, "can hardly be said to have any forehead at all, and in natural total idiots it is very diminished". Pseudo-Aristotle, in Physiognomica, stated that the forehead is governed by Mars. A low and little forehead denoted magnanimity, boldness, and confidence; a fleshy and wrinkle-free forehead, litigiousness, vanity, deceit, and contentiousness; a sharp forehead, weakness and fickleness; a wrinkled forehead, great spirit and wit yet poor fortune; a round forehead, virtue and good understanding; a full large forehead, boldness, malice, boundary issues, and high spirit; and a long high forehead, honesty, weakness, simplicity, and poor fortune. In fighting, slamming one's forehead into one's opponent is termed a headbutt.
Biology and health sciences
Human anatomy
Health
1142120
https://en.wikipedia.org/wiki/Candlestick%20chart
Candlestick chart
A candlestick chart (also called Japanese candlestick chart or K-line) is a style of financial chart used to describe price movements of a security, derivative, or currency. While similar in appearance to a bar chart, each candlestick represents four important pieces of information for that day: open and close in the thick body, and high and low in the "candle wick". Being densely packed with information, it tends to represent trading patterns over short periods of time, often a few days or a few trading sessions. Candlestick charts are most often used in technical analysis of equity and currency price patterns. Traders use them to determine possible price movement based on past patterns, drawing on the opening price, closing price, high and low of each time period. They are visually similar to box plots, though box plots show different information. History Candlestick charts are thought to have been developed in the 18th century by Munehisa Homma, a Japanese rice trader. They were introduced to the Western world by Steve Nison in his book Japanese Candlestick Charting Techniques, first published in 1991. They are often used today in stock analysis along with other analytical tools such as Fibonacci analysis. In Beyond Candlesticks, Nison says: "However, based on my research, it is unlikely that Homma used candle charts. As will be seen later, when I discuss the evolution of the candle charts, it was more likely that candle charts were developed in the early part of the Meiji period in Japan (in the late 1800s)." Description The area between the open and the close is called the real body; price excursions above and below the real body are shadows (also called wicks). Wicks illustrate the highest and lowest traded prices of an asset during the time interval represented. The body illustrates the opening and closing trades. The price range is the distance between the top of the upper shadow and the bottom of the lower shadow moved through during the time frame of the candlestick. The range is calculated by subtracting the low price from the high price. The fill or the color of the candle's body represents the price change during the period. Normally, if the asset closed higher than it opened, the body is displayed as hollow (or the green color is used), with the opening price at the bottom of the body and the closing price at the top. Conversely, if the asset closed lower than it opened, the body is displayed as filled (or the red color is used), with the opening price at the top and the closing price at the bottom. Modern charting software permits unrestricted customization of candle looks and colors, so the actual look of rising or falling price candles may vary. A version of a candlestick chart is a hollow candlestick chart, where both fill and color are used to represent different price relationships: Solid candles show that the current close price is less than the current open price. Hollow candles show that the current close price is greater than the current open price. Red candles show that the current close price is less than the previous close price. Green candles show that the current close price is greater than the previous close price. A candlestick need not have either a body or a wick. Generally, the longer the body of the candle, the more intense the trading. Candlesticks can also show the current price as they form, whether the price moved up or down over the time frame, and the price range of the asset covered in that time.
Rather than using the open, high, low, and close values for a given time interval, candlesticks can also be constructed using the open, high, low, and close of a specified volume range (for example, 1,000; 100,000; 1 million shares per candlestick). In modern charting software, volume can be incorporated into candlestick charts by increasing or decreasing candlestick width according to the relative volume for a given time period. Usage Candlestick charts are a visual aid for decision making in stock, foreign exchange, commodity, and option trading. By looking at a candlestick, one can identify an asset's opening and closing prices, highs and lows, and overall range for a specific time frame. Candlestick charts serve as a cornerstone of technical analysis. For example, when the bar is white and high relative to other time periods, it means buyers are very bullish. The opposite is true when there is a black bar. A candlestick pattern is a particular sequence of candlesticks on a candlestick chart, which is mainly used to identify trends. Heikin-Ashi candlesticks Heikin-Ashi (平均足, Japanese for 'average bar') candlesticks are a weighted version of candlesticks calculated in the following way: Close = (real open + real high + real low + real close) / 4 Open = (previous Heikin-Ashi open + previous Heikin-Ashi close) / 2 High = max(real high, Heikin-Ashi open, Heikin-Ashi close) Low = min(real low, Heikin-Ashi open, Heikin-Ashi close) The body of a Heikin-Ashi candle does not always represent the actual open/close. Unlike with regular candlesticks, a long wick shows more strength, whereas the same period on a standard chart might show a long body with little or no wick.
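Read as pseudocode, the four Heikin-Ashi rules above translate directly into a short program. The following Python sketch is purely illustrative and not taken from any charting package; the Candle container and the choice to seed the first Heikin-Ashi open from the first real bar's own open/close midpoint are assumptions of this example (charting software may seed the series differently).

from dataclasses import dataclass

@dataclass
class Candle:
    # Hypothetical OHLC container for this example.
    open: float
    high: float
    low: float
    close: float

def heikin_ashi(candles):
    """Convert real OHLC candles into Heikin-Ashi candles
    using the averaging rules quoted above."""
    result = []
    for i, c in enumerate(candles):
        ha_close = (c.open + c.high + c.low + c.close) / 4.0
        if i == 0:
            # Seeding choice for the first bar (an assumption of this
            # sketch): use the real bar's open/close midpoint.
            ha_open = (c.open + c.close) / 2.0
        else:
            prev = result[-1]
            ha_open = (prev.open + prev.close) / 2.0
        ha_high = max(c.high, ha_open, ha_close)
        ha_low = min(c.low, ha_open, ha_close)
        result.append(Candle(ha_open, ha_high, ha_low, ha_close))
    return result

Because each Heikin-Ashi open averages the previous Heikin-Ashi open and close, the series smooths out single-bar noise, which is why its bodies and wicks read differently from those of a standard chart.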
Mathematics
Statistics
null
1145328
https://en.wikipedia.org/wiki/Metal%20fabrication
Metal fabrication
Metal fabrication is the creation of metal structures by cutting, bending and assembling processes. It is a value-added process involving the creation of machines, parts, and structures from various raw materials. Typically, a fabrication shop bids on a job, usually based on engineering drawings, and if awarded the contract, builds the product. Large fab shops employ a multitude of value-added processes, including welding, cutting, forming and machining. As with other manufacturing processes, both human labor and automation are commonly used. A fabricated product may be called a fabrication, and shops specializing in this type of work are called fab shops. The end products of other common types of metalworking, such as machining, metal stamping, forging, and casting, may be similar in shape and function, but those processes are not classified as fabrication. Processes Cutting is done by sawing, shearing, or chiselling (all with manual and powered variants); torching with handheld torches (such as oxy-fuel torches or plasma torches); and via computer numerical control (CNC) cutters (using a laser, mill bits, torch, or water jet). Bending is done by hammering (manual or powered) or via press brakes, tube benders and similar tools. Modern metal fabricators use press brakes to coin or air-bend metal sheet into form. CNC-controlled backgauges use hard stops to position cut parts so that bend lines fall in specific positions. Assembling (joining of pieces) is done by welding, binding with adhesives, riveting, threaded fasteners, or further bending in the form of crimped seams. Structural steel and sheet metal are the usual materials for fabrication; welding wire, flux and/or fasteners are used to join the cut pieces. Fabrication comprises or overlaps with various metalworking specialties: Fabrication shops and machine shops have overlapping capabilities, but fabrication shops generally concentrate on metal preparation and assembly (as described above). Machine shops cut metal, but focus primarily on the machining of parts on machine tools. Some firms do both fab work and machining. Blacksmithing has always involved fabrication, although that term has not always been used. Welder-produced products, often referred to as weldments, are examples of fabrication. Boilermakers originally specialized in fabricating boilers, but the term is now used more broadly. Millwrights originally specialized in setting up grain mills and saw mills, but now perform a wide range of fabrication. Ironworkers, also known as steel erectors, also engage in fabrication. They often work with prefabricated segments, produced in fab shops, that are delivered to the site. Raw materials Standard metal fabrication materials are: Plate metal Formed and expanded metal Tube stock Welding wire/welding rod Casting Cutting and burning A variety of tools are used to cut raw material. The most common cutting method is shearing. Special band saws for cutting metal have hardened blades and feed mechanisms for even cutting. Abrasive cut-off saws, also known as chop saws, are similar to miter saws but have steel-cutting abrasive disks. Cutting torches can cut large sections of steel with little effort. Burn tables are CNC (computer-operated) cutting torches, usually powered by natural gas. Plasma and laser cutting tables, and water jet cutters, are also common. Plate steel is loaded on the table and the parts are cut out as programmed. The support table consists of a grid of bars that can be replaced when worn.
Higher-end burn tables may include CNC punch capability using a carousel of punches and taps. In fabrication of structural steel by plasma and laser cutting, robots move the cutting head in three dimensions around the cut material. Forming Forming converts flat sheet metal into 3-D parts by applying force without adding or removing material. The force must be great enough to change the metal's initial shape. Forming can be controlled with tools such as punches and dies. Machinery can regulate force magnitude and direction. Machine-based forming can combine forming and welding to produce lengths of fabricated sheeting (e.g. linear grating for water drainage). Most metallic materials, being at least somewhat ductile and capable of considerable permanent deformation without cracking or breaking, lend themselves particularly well to these techniques. Proper design and use of tools with machinery creates a repeatable form that can be used to create products for many industries, including jewelry, aerospace, automotive, construction, civil and architectural. Machining Machining is a specialized trade of removing material from a block of metal to give it a desired shape. Fab shops generally have some machining capability, using metal lathes, mills, drills, and other portable machining tools. Most solid components, such as gears, bolts, screws and nuts, are machined. Welding Welding is the main focus of steel fabrication. Formed and machined parts are assembled and tack-welded in place, then rechecked for accuracy. If multiple weldments have been ordered, a fixture may be used to locate parts for welding. A welder then finishes the work according to engineering drawings (for detailed welding) or by their own experience and judgement (if no details are provided). Special measures may be needed to prevent or correct warping of weldments due to heat. These may include redesigning the piece to require less welding, employing staggered welding, using a stout fixture, covering the weldment in sand as it cools, and post-weld straightening. Straightening of warped steel weldments is done with an oxyacetylene torch. In this highly specialized work, heat is selectively applied to the steel in a slow, linear sweep, causing the steel to contract in the direction of the sweep as it cools. A highly skilled welder can remove significant warpage this way. Steel weldments are occasionally annealed in a low-temperature oven to relieve residual stresses. Such weldments, particularly those for engine blocks, may be line-bored after heat treatment. After the weldment has cooled, seams are usually ground clean, and the assembly can be sandblasted, primed and painted. Any additional manufacturing is then performed, and the finished product is inspected and shipped. Specialties Many fabrication shops offer specialty processes, including: Casting Powder coating Powder metallurgy Welding Machining CNC machining
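As a concrete illustration of the press-brake bending described in the Processes and Forming sections above: before bending, fabricators compute a flat-pattern length so that the material stretched around each bend yields the right finished dimensions. The sketch below uses the common bend-allowance formula; it is a generic illustration rather than a description of any particular shop's method, and the K-factor default of 0.44 and the example dimensions are assumptions (real values depend on material, tooling, and bending method).

import math

def bend_allowance(angle_deg, inner_radius, thickness, k_factor=0.44):
    """Arc length of the neutral axis through one bend.

    k_factor locates the neutral axis as a fraction of the sheet
    thickness measured from the inside face; 0.44 is a typical
    default for air bending, but real values vary."""
    return math.radians(angle_deg) * (inner_radius + k_factor * thickness)

def flat_length(leg_a, leg_b, angle_deg, inner_radius, thickness):
    """Flat-pattern length for a part with a single bend, with the
    two legs measured to the tangent points of the bend."""
    return leg_a + leg_b + bend_allowance(angle_deg, inner_radius, thickness)

# Example: 2 mm mild steel, 90-degree bend, 2 mm inside radius.
print(round(flat_length(50.0, 30.0, 90.0, 2.0, 2.0), 2))  # about 84.52 mm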
Technology
Metallurgy
null
1145707
https://en.wikipedia.org/wiki/Cervical%20collar
Cervical collar
A cervical collar, also known as a neck brace, is a medical device used to support and immobilize a person's neck. It is also applied by emergency personnel to those who have had traumatic head or neck injuries, although it should not be routinely used in prehospital care. Cervical collars can also be used to treat chronic medical conditions. Whenever people have a traumatic head or neck injury, they may have a cervical fracture. This puts them at high risk of spinal cord injury, which could be exacerbated by movement of the person and could lead to paralysis or death. A common scenario for this injury would be a person suspected of having whiplash because of a car accident. In order to prevent further injury, such people may have a collar placed by medical professionals until X-rays can be taken to determine whether a cervical spine fracture exists. Medical professionals will often use the NEXUS criteria and/or the Canadian C-spine rules to clear a cervical collar and determine the need for imaging. The routine use of a cervical collar is not recommended. Cervical collars are also used therapeutically to help realign the spinal cord and relieve pain, although they are usually not worn for long periods of time. Another use of the cervical collar is for strains, sprains, or whiplash. If pain is persistent, the collar might be required to remain attached to help in the healing process. A person may also need a cervical collar, or may require a halo fixation device, to support the neck during recovery after surgery such as cervical spinal fusion. Types A soft collar is fairly flexible and is the least limiting, but can carry a high risk of further breakage, especially in people with osteoporosis. Soft collars are usually made of felt. They can be used for minor injuries or after healing has allowed the neck to become more stable. A range of manufactured rigid collars are also used, usually comprising (a) a firm plastic bi-valved shell secured with Velcro straps and (b) removable padded liners. They also contain a back pad, back panel, front pad, front panel, and chin pad. There are air holes throughout the device to provide ventilation to the area, but also to allow access for a tracheostomy if needed. Rigidity is provided by plexiglass in some models. The most frequently prescribed are the Aspen, Malibu, Miami J, and Philadelphia collars. All of these can be used with additional chest and head extension pieces to increase stability. Cervical collars are incorporated into rigid braces that constrain the head and chest together. Examples include the Sterno-Occipital Mandibular Immobilization Device (SOMI), Lerman Minerva and Yale types. Special cases, such as very young children or non-cooperative adults, are sometimes still immobilized in medical plaster of paris casts, such as the Minerva cast. Compared to soft collars, rigid collars are most restrictive of neck flexion and least restrictive of lateral rotation. Despite this, wearers have a similar range of motion when asked to perform activities of daily living. It is thought that these collars provide a proprioceptive guide on how much to move one's neck, and that when patients are preoccupied with performing an activity they are able to move their neck more. This is why, in more minor injuries, cervical collars are still placed: to remind patients of their injury so they can restrict any activities that may worsen their condition.
Application and care When applying a cervical collar, it must be tight enough to immobilize the neck but loose enough to avoid pressure on the vasculature of the neck, strangulation, and pressure ulcers. Ideally, any clothing or jewelry in the neck area should be removed before applying the collar. Next, a collar size must be chosen according to the patient's size and build. The practitioner will then measure the length of the neck. The collar is then placed by one practitioner while another holds the neck still. Then, the collar should be locked to the ideal neck length according to the specific manufacturer's manual. The chin must be in the chin piece and the collar must extend down to the sternal notch. If the patient has a tracheostomy hole, medical professionals must ensure that the hole is midline and accessible in a patient with a cervical collar. Common errors include choosing the wrong collar size, incorrect placement technique, and incorrect measurement of neck length. Cervical collars and patients' necks should be evaluated and cleaned frequently, for hygienic purposes as well as to avoid pressure ulcers. When the neck area is being cleaned, it is again important for two people to help remove the collar. One person must help hold the neck and keep it aligned while the other unfastens the straps and removes the collar. The area is then cleaned with soap, water, and washcloths. If there is evidence of skin breakdown, other topical agents may be used, and even antibiotics if there is evidence of infection. History The cervical collar was invented in 1966 by George Cottrell during the Vietnam War as a way to provide neck immobilization in American soldiers with potentially unstable neck injuries. Its use in the prehospital setting in the United States was popularized by the orthopedic surgeon Dr. J. D. Farrington. In his paper "Death in a Ditch", Farrington described seeing "sloppy and inefficient removal of victim[s] from their vehicle." He explained how a standardized approach of applying cervical collars before extracting motor vehicle collision victims from their vehicles is necessary to prevent this. Use over time As a result of several small randomized clinical trials over the last decade, hospitals and ambulance staff have seen a significant reduction in the number of patients being immobilized. This has been driven by complications such as increased intracranial pressure in traumatic brain injury, along with access issues for airway management in obtunded patients. Other risks and complications include increased testing and imaging, increased incidence of displacement of spinal fractures in the elderly, limited physical examination of patients, neck pain, pressure ulcers, and increased length of hospital stay. Because of these potential complications, cervical collars are not recommended in trauma patients with isolated penetrating injury and no neurological deficits. This is because the benefit of preventing a potential secondary cervical injury with a cervical collar is much smaller than the associated risks, the most concerning being trouble accessing a patient's airway. Some medical professionals have even been calling for a ban on cervical collars, stating that they cause more harm than good. There is also very little evidence that cervical collars actually make a difference in traumatic cervical spine injury.
Other uses Cervical collars are used much less commonly for conditions other than cervical injury and injury precaution. These uses include cervical radiculopathy, sleep apnea, and patients on CPAP ventilation. Most studies of these conditions are small-scale and limited. In a 2009 study, patients with a confirmed recent diagnosis of cervical radiculopathy who had a cervical collar applied showed a greater decrease in pain after 6 weeks compared to patients who did not have one applied. When these patients were followed up after six months, almost all of the subjects had complete or near-complete resolution of any pain and/or disability, regardless of whether they had a cervical collar applied. Sleep apnea can be worsened by anterior flexion of the neck or posterior movement of the mandible when sleeping supine. Cervical collars are used to prevent these movements when such patients sleep. Small-scale studies have failed to show any improvement in oxygenation, snoring, and/or apneic episodes with the use of cervical collars at night. These patients can experience discomfort and feelings of strangulation at night if the collar is not fastened properly. Despite this, some practitioners still apply cervical collars for sleep apnea. Patients on CPAP ventilation can often have suboptimal positioning due to pain, discomfort, or lack of knowledge. Similarly to patients with sleep apnea, patients on CPAP need optimization of their neck position to keep their airway clear of any obstruction. Specifically, posterior movement of the mandible is to be avoided so as not to cause the strap of the CPAP to come off. Also, limited head movement while on CPAP is desired to optimize oxygen flow in and out of the device. Soft cervical collars are used to try to achieve both of these goals. In a small study analyzing the use of cervical collars in patients on CPAP ventilation with a history of sleep apnea, a significant benefit was observed. Sport In high-risk motorsports such as motocross, go-kart racing and speed-boat racing, racers often wear a protective collar to avoid whiplash and other neck injuries. Designs range from simple foam collars to complex composite devices.
Technology
Devices
null
1146115
https://en.wikipedia.org/wiki/Astronomical%20radio%20source
Astronomical radio source
An astronomical radio source is an object in outer space that emits strong radio waves. Radio emission comes from a wide variety of sources. Such objects represent some of the most extreme and energetic physical processes in the universe. History In 1932, American physicist and radio engineer Karl Jansky detected radio waves coming from an unknown source in the center of the Milky Way galaxy. Jansky was studying the origins of radio frequency interference for Bell Laboratories. He found "...a steady hiss type static of unknown origin", which he eventually concluded had an extraterrestrial origin. This was the first time that radio waves were detected from outer space. The first radio sky survey was conducted by Grote Reber and was completed in 1941. In the 1970s, some stars in the Milky Way were found to be radio emitters, one of the strongest being the unique binary MWC 349. Sources: Solar System The Sun As the nearest star, the Sun is the brightest radiation source in most frequencies, down to the radio spectrum at 300 MHz (1 m wavelength). When the Sun is quiet, the galactic background noise dominates at longer wavelengths. During geomagnetic storms, the Sun will dominate even at these low frequencies. Jupiter Oscillation of electrons trapped in the magnetosphere of Jupiter produces strong radio signals, particularly bright in the decimeter band. The magnetosphere of Jupiter is responsible for intense episodes of radio emission from the planet's polar regions. Volcanic activity on Jupiter's moon Io injects gas into Jupiter's magnetosphere, producing a torus of particles about the planet. As Io moves through this torus, the interaction generates Alfvén waves that carry ionized matter into the polar regions of Jupiter. As a result, radio waves are generated through a cyclotron maser mechanism, and the energy is transmitted out along a cone-shaped surface. When Earth intersects this cone, the radio emissions from Jupiter can exceed the solar radio output. Ganymede In 2021, news outlets reported that scientists, using the Juno spacecraft, which has orbited Jupiter since 2016, had detected an FM radio signal from the moon Ganymede at a location where the planet's magnetic field lines connect with those of its moon. According to the reports, these emissions were caused by cyclotron maser instability and were similar to both WiFi signals and Jupiter's radio emissions. A study about the radio emissions was published in September 2020 but did not describe them as being of FM nature or similar to WiFi signals. Sources: Galactic The Galactic Center The center of the Milky Way was the first radio source to be detected. It contains a number of radio sources, including Sagittarius A, the compact region around the supermassive black hole Sagittarius A*, as well as the black hole itself. When flaring, the accretion disk around the supermassive black hole lights up and is detectable in radio waves. In the 2000s, three Galactic Center Radio Transients (GCRTs) were detected: GCRT J1746–2757, GCRT J1745–3009, and GCRT J1742–3001. In addition, ASKAP J173608.2-321635, which was detected six times in 2020, may be a fourth GCRT. Region around the Galactic Center In 2021, astronomers reported the detection of peculiar, highly circularly polarized intermittent radio waves from near the Galactic Center, whose unidentified source could represent a new class of astronomical objects, with a GCRT so far not "fully explain[ing] the observations". Supernova remnants Supernova remnants often show diffuse radio emission.
Examples include Cassiopeia A, the brightest extrasolar radio source in the sky, and the Crab Nebula. Neutron stars Pulsars Supernovae sometimes leave behind dense spinning neutron stars called pulsars. They emit jets of charged particles which emit synchrotron radiation in the radio spectrum. Examples include the Crab Pulsar, one of the first pulsars to be discovered. Pulsars and quasars (dense central cores of extremely distant galaxies) were both discovered by radio astronomers. In 2003, astronomers using the Parkes radio telescope discovered two pulsars orbiting each other, the first such system known. Rotating Radio Transient (RRAT) Sources Rotating radio transients (RRATs) are a type of neutron star discovered in 2006 by a team led by Maura McLaughlin from the Jodrell Bank Observatory at the University of Manchester in the UK. RRATs are believed to produce radio emissions which are very difficult to locate because of their transient nature. Early efforts have been able to detect radio emissions (sometimes called RRAT flashes) for less than one second a day, and, as with other single-burst signals, one must take great care to distinguish them from terrestrial radio interference. Distributed computing and the Astropulse algorithm may thus lend themselves to further detection of RRATs. Star forming regions Short radio waves are emitted from complex molecules in dense clouds of gas where stars are being born. Spiral galaxies contain clouds of neutral hydrogen and carbon monoxide which emit radio waves. The radio frequencies of these two substances were used to map a large portion of the Milky Way galaxy. Sources: extra-galactic Radio galaxies Many galaxies are strong radio emitters, called radio galaxies. Some of the more notable are Centaurus A and Messier 87. Quasars (short for "quasi-stellar radio source") were one of the first point-like radio sources to be discovered. Quasars' extreme redshift led astronomers to conclude that they are distant active galactic nuclei, believed to be powered by black holes. Active galactic nuclei have jets of charged particles which emit synchrotron radiation. One example is 3C 273, the optically brightest quasar in the sky. Merging galaxy clusters often show diffuse radio emission. Cosmic microwave background The cosmic microwave background is blackbody background radiation left over from the Big Bang (the rapid expansion, roughly 13.8 billion years ago, that was the beginning of the universe). Extragalactic pulses - Fast Radio Bursts D. R. Lorimer and others analyzed archival survey data and found a 30-jansky dispersed burst, less than 5 milliseconds in duration, located 3° from the Small Magellanic Cloud. They reported that the burst properties argue against a physical association with our Galaxy or the Small Magellanic Cloud. In a recent paper, they argue that current models for the free electron content in the universe imply that the burst is less than 1 gigaparsec distant. The fact that no further bursts were seen in 90 hours of additional observations implies that it was a singular event, such as a supernova or a coalescence (fusion) of relativistic objects. It is suggested that hundreds of similar events could occur every day and, if detected, could serve as cosmological probes. Radio pulsar surveys such as Astropulse-SETI@home offer one of the few opportunities to monitor the radio sky for impulsive burst-like events with millisecond durations. Because of the isolated nature of the observed phenomenon, the nature of the source remains speculative.
Possibilities include a black hole-neutron star collision, a neutron star-neutron star collision, a black hole-black hole collision, or some phenomenon not yet considered. In 2010 there was a new report of 16 similar pulses from the Parkes Telescope which were clearly of terrestrial origin, but in 2013 four pulse sources were identified that supported the likelihood of a genuine extragalactic pulsing population. These pulses are known as fast radio bursts (FRBs). The first observed burst has become known as the Lorimer burst. Blitzars are one proposed explanation for them. Sources: not yet observed Primordial black holes According to the Big Bang model, during the first few moments after the Big Bang, pressure and temperature were extremely great. Under these conditions, simple fluctuations in the density of matter may have resulted in local regions dense enough to create black holes. Although most regions of high density would be quickly dispersed by the expansion of the universe, a primordial black hole would be stable, persisting to the present. One goal of Astropulse is to detect postulated mini black holes that might be evaporating due to "Hawking radiation". Such mini black holes are postulated to have been created during the Big Bang, unlike currently known black holes. Martin Rees has theorized that a black hole exploding via Hawking radiation might produce a signal that is detectable in the radio. The Astropulse project hopes that this evaporation would produce radio waves that Astropulse can detect. The evaporation would not create radio waves directly. Instead, it would create an expanding fireball of high-energy gamma rays and particles. This fireball would interact with the surrounding magnetic field, pushing it out and generating radio waves. ET Previous searches by various "search for extraterrestrial intelligence" (SETI) projects, starting with Project Ozma, have looked for extraterrestrial communications in the form of narrow-band signals, analogous to our own radio stations. The Astropulse project argues that since we know nothing about how ET might communicate, this might be a bit closed-minded. Thus, the Astropulse survey can be viewed as complementary to the narrow-band SETI@home survey as a by-product of the search for physical phenomena. Other undiscovered phenomena Explaining their discovery in 2005 of a powerful bursting radio source, NRL astronomer Dr. Joseph Lazio stated: "Amazingly, even though the sky is known to be full of transient objects emitting at X- and gamma-ray wavelengths, very little has been done to look for radio bursts, which are often easier for astronomical objects to produce." The use of coherent dedispersion algorithms and the computing power provided by the SETI network may lead to the discovery of previously undiscovered phenomena.
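The dedispersion just mentioned corrects for the frequency-dependent delay that free electrons along the line of sight impose on a radio pulse: lower frequencies arrive later, in proportion to the dispersion measure (DM). A minimal Python sketch of the delay calculation follows. The dispersion constant is the standard cold-plasma value; the example DM is the published value for the Lorimer burst, and the band edges are approximate figures for the Parkes receiver, used here only for illustration.

K_DM = 4.148808e-3  # dispersion constant, in s GHz^2 per (pc cm^-3)

def dispersion_delay(dm, f_low_ghz, f_high_ghz):
    """Extra arrival delay, in seconds, of the low-frequency edge of
    a burst relative to the high-frequency edge, for a dispersion
    measure dm in pc cm^-3 (the column density of free electrons
    along the line of sight)."""
    return K_DM * dm * (f_low_ghz ** -2 - f_high_ghz ** -2)

# Example: the Lorimer burst had a DM of about 375 pc cm^-3; across a
# roughly 1.23-1.52 GHz observing band this corresponds to a sweep of
# about 0.36 seconds, which is why dedispersion is needed to recover
# a millisecond-duration pulse.
print(dispersion_delay(375.0, 1.23, 1.52))

A search pipeline that does not know the DM in advance must repeat this correction over many trial DM values, which is the computationally expensive step that distributed projects such as Astropulse spread across volunteer machines.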
Physical sciences
Radio astronomy
Astronomy