293667
https://en.wikipedia.org/wiki/Mesothelae
Mesothelae
The Mesothelae are a suborder of spiders (order Araneae). Two extant families are accepted by the World Spider Catalog, Liphistiidae and Heptathelidae. Alternatively, the Heptathelidae can be treated as a subfamily of a more broadly circumscribed Liphistiidae. There are also a number of extinct families. This suborder is thought to form the sister group to all other living spiders, and to retain ancestral characters, such as a segmented abdomen with spinnerets in the middle and two pairs of book lungs. Extant members of the Mesothelae are medium to large spiders with eight eyes grouped on a tubercle. They are found only in China, Japan, and southeast Asia. The oldest known Mesothelae spiders are from the Carboniferous, over 300 million years ago.

Taxonomy

Reginald Innes Pocock in 1892 was the first to realize that the exceptional characters of the genus Liphistius (the only member of the group then known) meant that it was more different from the remaining spiders than they were among themselves. Accordingly, he proposed dividing spiders into two subgroups, Mesothelae for Liphistius, and Opisthothelae for all other spiders. The names refer to the position of the spinning organs, which are in the middle of the abdomen in Liphistius and nearer the end in all other spiders. In Greek, μέσος (mesos) means "middle", and θήλα (thēla) "teat".

Phylogeny and classification

Pocock divided his Opisthothelae into two groups, which he called Mygalomorphae and Arachnomorphae (now Araneomorphae), implicitly adopting a phylogeny in which these two groups together form the sister group of Mesothelae. Pocock's approach was criticized by other arachnologists. Thus in 1923, Alexander Petrunkevitch rejected grouping mygalomorphs and araneomorphs into Opisthothelae, treating Liphistiomorphae (i.e. Mesothelae), Mygalomorphae and Arachnomorphae (Araneomorphae) as three separate groups. Others, such as W. S.
Bristowe in 1933, put Liphistiomorphae and Mygalomorphae into one group, called Orthognatha, with Araneomorphae as Labidognatha. In 1976, Platnick and Gertsch argued for a return to Pocock's classification, drawing on morphological evidence. Subsequent phylogenetic studies based on molecular data have vindicated this view. The accepted classification of spiders is now:

Order Araneae (spiders)
  Suborder Mesothelae Pocock, 1892
  Suborder Opisthothelae Pocock, 1892
    Infraorder Mygalomorphae Pocock, 1892
    Infraorder Araneomorphae Smith, 1902 (syn. Arachnomorphae Pocock, 1892)

Extant families

Initially the Mesothelae consisted of a single family, Liphistiidae. In 1923, the new genus Heptathela was described and placed in a separate tribe within Liphistiidae, Heptatheleae. In 1939, Alexander Petrunkevitch raised the tribe to a separate family, Heptathelidae. In 1985, Robert Raven reunited the two families, a view supported by Breitling in 2022. Other authors have maintained two separate families, a position accepted by the World Spider Catalog.

Description

Members of Mesothelae have paraxial chelicerae, two pairs of coxal glands on the legs, eight eyes grouped on a nodule, two pairs of book lungs, and no endites on the base of the pedipalp. Most have at least seven or eight spinnerets near the middle of the abdomen. Lateral spinnerets are multi-segmented. Recent Mesothelae are characterized by the narrow sternum on the ventral side of the cephalothorax (prosoma). Several plesiomorphic characteristics may be useful in recognizing these spiders: tergite plates on the dorsal side and the almost median position of the spinnerets on the ventral side of the opisthosoma. Although it has been claimed that they lack the venom glands and ducts that almost all other spiders have, subsequent work has demonstrated that at least some, possibly all, do in fact have both the glands and the ducts. All Mesothelae have eight spinnerets in four pairs.
Like mygalomorph spiders, they have two pairs of book lungs. Unlike all other extant mesothelians, heptathelids do not have fishing lines in front of the entrances to the burrows that they construct, making them more difficult to find. They also have a paired receptaculum (unpaired in other liphistiids), and have a conductor in their palpal bulb. These long palps can confusingly look like an extra pair of legs, a mistake also made with some solifugids.

Distribution

Liphistiidae spiders are mainly distributed in Laos, Malaysia, Myanmar, Sumatra, and Thailand, with two species native to China. Heptathelidae are found in Vietnam, the eastern provinces of China, and southern Japan, including the Ryukyu Islands.

Fossils

A number of families and genera of fossil arthropods have been assigned to the Mesothelae, particularly by Alexander Petrunkevitch. However, Paul A. Selden has shown that most only have "the general appearance of spiders", with segmented abdomens (opisthosomae), but no definite spinnerets. These families include:

†Arthrolycosidae Frič, 1904
†Arthromygalidae Petrunkevitch, 1923
†Pyritaraneidae Petrunkevitch, 1953
†Palaeothele Selden, 2000 (unplaced in a family)

Between 2015 and 2019, six genera of mesothele spider in four families were described from Late Cretaceous (Cenomanian) aged Burmese amber from Myanmar: Cretaceothele (Cretaceothelidae); Burmathele (Burmathelidae); Parvithele and Pulvillothele (Parvithelidae); and Intermesothele and Eomesothele (Eomesothelidae).
Biology and health sciences
Spiders
Animals
503201
https://en.wikipedia.org/wiki/Ring-tailed%20lemur
Ring-tailed lemur
The ring-tailed lemur (Lemur catta) is a medium- to larger-sized strepsirrhine (wet-nosed) primate and the most internationally recognized lemur species, owing to its long, black-and-white, ringed tail. It belongs to Lemuridae, one of five lemur families, and is the only member of the genus Lemur. Like all lemurs, it is endemic to the island of Madagascar, where it is endangered. Known locally by several Malagasy names, it ranges from gallery forests to spiny scrub in the southern regions of the island. It is omnivorous, as well as the most terrestrially adapted of the extant lemurs. The ring-tailed lemur is highly social, living in groups known as "troops" of up to 30 individuals. It is also a female-dominant species, a commonality among lemurs. To keep warm and reaffirm social bonds, groups will huddle together. Mutual grooming is another vital aspect of lemur socialization (as with all primates), reaffirming social and familial connections while also helping rid each other of insects. Ring-tailed lemurs are strictly diurnal, active exclusively during daylight hours. They also sunbathe: the lemurs can be observed sitting upright on their tails, exposing their soft, white belly fur towards the sun, often with their palms open and eyes gently closed. Like other lemurs, this species relies strongly on its sense of smell, and territorial marking with scent glands provides communication signals throughout a group's home range. The glands are located near the eyes, as well as near the anus. The males perform a unique scent-marking behavior called spur-marking and will participate in stink fights by dousing their tails with their pheromones and "wafting" them at opponents. Additionally, lemurs of both sexes will scent-mark trees, logs, rocks, or other objects by simply rubbing their faces and bodies onto them, not unlike a domestic cat.
As one of the most vocal primates, the ring-tailed lemur uses numerous vocalizations, including calls for group cohesion and predator alarm calls. Experiments have shown that the ring-tailed lemur, despite lacking a large brain (relative to simiiform primates), can organize sequences, understand basic arithmetic operations, and preferentially select tools based on functional qualities. Despite adapting to and breeding easily under captive care (and being the most popular species of lemur in zoos worldwide, with more than 2,000 captive-raised individuals), the wild population of the ring-tailed lemur is listed as endangered by the IUCN Red List, due to habitat destruction, local hunting for bushmeat, and the exotic pet trade. As of early 2017, the population in the wild was believed to have crashed to as low as 2,000 individuals for these reasons, making the species far more threatened than previously recognized. Local Malagasy farmers and logging industries frequently make use of slash-and-burn deforestation techniques, with smoke visible on the horizon on most days in Madagascar, in an effort to accommodate livestock and to cultivate larger fields of crops.

Etymology

Although the term "lemur" was first intended for slender lorises, it was soon limited to the endemic Malagasy primates, which have been known as "lemurs" ever since. The name derives from the Latin term lemures, which refers to specters or ghosts that were exorcised during the Lemuria festival of ancient Rome. According to Carl Linnaeus's own explanation, the name was selected because of the nocturnal activity and slow movements of the slender loris. Being familiar with the works of Virgil and Ovid and seeing an analogy that fit with his naming scheme, Linnaeus adapted the term "lemur" for these nocturnal primates. However, it has commonly and falsely been assumed that Linnaeus was referring to the ghost-like appearance, reflective eyes, and ghostly cries of lemurs.
It has also been speculated that Linnaeus may have known that some Malagasy people hold legends that lemurs are the souls of their ancestors, but this is unlikely given that the name was selected for slender lorises from India. The species name, catta, refers to the ring-tailed lemur's cat-like appearance. Its purring vocalization is similar to that of the domestic cat. Following Linnaeus's species description, the common name "ring-tailed maucauco" was first penned in 1771 by Welsh naturalist Thomas Pennant, who noted its characteristic long, banded tail. (The term "maucauco" was a very common term for lemurs at this time.) The now universal English name "ring-tailed lemur" was first used by George Shaw in his illustrated scientific publication covering the Leverian collection, which was published between 1792 and 1796.

Evolutionary history

All mammalian fossils from Madagascar come from recent times. Thus, little is known about the evolution of the ring-tailed lemur, let alone the rest of the lemur clade, which comprises the entire endemic primate population of the island. However, chromosomal and molecular evidence suggests that lemurs are more closely related to each other than to other strepsirrhine primates. For this to have happened, it is thought that a very small ancestral population came to Madagascar via a single rafting event between 50 and 80 million years ago. Subsequent evolutionary radiation and speciation have created the diversity of Malagasy lemurs seen today. According to analysis of amino acid sequences, the branching of the family Lemuridae has been dated to 26.1 ± 3.3 mya, while rRNA sequences of mtDNA place the split at 24.9 ± 3.6 mya. The ruffed lemurs were the first genus to split away (most basal) in the family, a view further supported by analysis of DNA sequences and karyotypes.
Additionally, molecular data suggest a deep genetic divergence and sister-group relationship between the true lemurs (Eulemur) and the other two genera, Lemur and Hapalemur. The ring-tailed lemur is thought to share closer affinities with the bamboo lemurs of the genus Hapalemur than with the other two genera in its family. This has been supported by comparisons of communication, chromosomes, genetics, and several morphological traits, such as scent gland similarities. However, other data concerning immunology and other morphological traits fail to support this close relationship. For example, Hapalemur species have short snouts, while the ring-tailed lemur and the rest of Lemuridae have long snouts. However, differences in the relationship between the orbit (eye socket) and the muzzle suggest that the ring-tailed lemur and the true lemurs evolved their elongated faces independently. The relationship between the ring-tailed lemur and the bamboo lemurs is the least understood. Molecular analysis suggests either that the bamboo lemurs diverged from the ring-tailed lemur, making the group monophyletic and supporting the current two-genera taxonomy, or that the ring-tailed lemur is nested within the bamboo lemurs, requiring Hapalemur simus to be split off into its own genus, Prolemur. The karyotype of the ring-tailed lemur has 56 chromosomes, of which four are metacentric (arms of nearly equal length), four are submetacentric (arms of unequal length), and 46 are acrocentric (the short arm is hardly observable). The X chromosome is metacentric and the Y chromosome is acrocentric.

Taxonomic classification

Linnaeus first used the genus name Lemur to describe "Lemur tardigradus" (the red slender loris, now known as Loris tardigradus) in his 1754 catalog of the Museum of King Adolf Frederick.
In 1758, his 10th edition of Systema Naturae listed the genus Lemur with three included species, only one of which is still considered to be a lemur, while another is no longer considered to be a primate. These species were: Lemur tardigradus, Lemur catta (the ring-tailed lemur), and Lemur volans (the Philippine colugo, now known as Cynocephalus volans). In 1911, Oldfield Thomas made Lemur catta the type species for the genus, despite the term initially being used to describe lorises. On January 10, 1929, the International Commission on Zoological Nomenclature (ICZN) formalized this decision in its publication of Opinion 122. The ring-tailed lemur shares many similarities with ruffed lemurs (genus Varecia) and true lemurs (genus Eulemur), and its skeleton is nearly indistinguishable from that of the true lemurs. Consequently, the three genera were once grouped together in the genus Lemur and more recently are sometimes referred to as subfamily Lemurinae (within family Lemuridae). However, ruffed lemurs were reassigned to the genus Varecia in 1962, and due to similarities between the ring-tailed lemur and the bamboo lemurs, particularly in regard to molecular evidence and scent gland similarities, the true lemurs were moved to the genus Eulemur by Yves Rumpler and Elwyn L. Simons (1988) as well as Colin Groves and Robert H. Eaglen (1988). In 1991, Ian Tattersall and Jeffrey H. Schwartz reviewed the evidence and came to a different conclusion, instead favoring a return of the members of Eulemur and Varecia to the genus Lemur. However, this view was not widely accepted, and the genus Lemur remained monotypic, containing only the ring-tailed lemur. Because the differences in molecular data are so minute between the ring-tailed lemur and both genera of bamboo lemurs, it has been suggested that all three genera be merged.
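The karyotype figures given earlier (56 chromosomes: four metacentric, four submetacentric, and 46 acrocentric, plus a metacentric X and an acrocentric Y) can be cross-checked with a few lines of Python. Reading the 4 + 4 + 46 counts as the 54 autosomes, with X and Y as the remaining two, is an inference from those numbers rather than an explicit statement in the text:

```python
# Karyotype tally for Lemur catta as described in the text (2n = 56).
# Assumption: the 4 + 4 + 46 figures cover the 54 autosomes, and the
# X (metacentric) and Y (acrocentric) account for the remaining two.
autosomes = {"metacentric": 4, "submetacentric": 4, "acrocentric": 46}
sex_chromosomes = {"X": "metacentric", "Y": "acrocentric"}

total = sum(autosomes.values()) + len(sex_chromosomes)
print(total)  # 56, matching the reported diploid number
```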
Because of the difficulty in discerning the relationships within family Lemuridae, not all authorities agree on the taxonomy, although the majority of the primatological community favors the current classification. In 1996, researchers Steven Goodman and Olivier Langrand suggested that the ring-tailed lemur may demonstrate regional variations, particularly a high-mountain population at Andringitra Massif that has a thicker coat, lighter coloration, and variations in its tail rings. In 2001, primatologist Colin Groves concluded that this does not represent a locally occurring subspecies. This decision was later supported by further fieldwork showing that the differences fell within the normal range of variation for the species. The thicker coat was considered a local adaptation to the extreme low temperatures of the region, and the fading of the fur was attributed to increased exposure to solar radiation. Additional genetic studies in 2000 further supported the conclusion that the population did not vary significantly from the other ring-tailed lemur populations on the island.

Anatomy and physiology

The ring-tailed lemur is a relatively large lemur. Its average weight is . Its head–body length ranges between , its tail length is , and its total length is . Other measurements include a hind foot length of , ear length of , and cranium length of . The species has a slender frame, a narrow face, and a fox-like muzzle. The ring-tailed lemur's trademark, a long, bushy tail, is ringed in alternating black and white transverse bands, numbering 12 or 13 white rings and 13 or 14 black rings, and always ending in a black tip. The total number of rings nearly matches the approximate number of caudal vertebrae (~25). Its tail is longer than its body and is not prehensile; instead, it is used only for balance, communication, and group cohesion. The pelage (fur) is so dense that it can clog electric clippers. The ventral (chest) coat and throat are white or cream.
The dorsal (back) coat varies from gray to rosy-brown, sometimes with a brown pygal patch around the tail region, where the fur grades to pale gray or grayish brown. The dorsal coloration is slightly darker around the neck and crown. The hair on the throat, cheeks, and ears is white or off-white and also less dense, allowing the dark skin underneath to show through. The muzzle is dark grayish and the nose is black, and the eyes are encompassed by black triangular patches. Facial vibrissae (whiskers) are developed and found above the lips (mystacal), on the cheeks (genal), and on the eyebrow (superciliary). Vibrissae are also found slightly above the wrist on the underside of the forearm. The ears are relatively large compared to other lemurs and are covered in hair, which has only small tufts if any. Although slight pattern variations in the facial region may be seen between individuals, there are no obvious differences between the sexes. Unlike most diurnal primates, but like all strepsirrhine primates, the ring-tailed lemur has a tapetum lucidum, or reflective layer behind the retina of the eye, that enhances night vision. The tapetum is highly visible in this species because the pigmentation of the ocular fundus (back surface of the eye), which is present in—but varies between—all lemurs, is very spotty. The ring-tailed lemur also has a rudimentary foveal depression on the retina. Another shared characteristic with the other strepsirrhine primates is the rhinarium, a moist, naked, glandular nose supported by the upper jaw and protruding beyond the chin. The rhinarium continues down where it divides the upper lip. The upper lip is attached to the premaxilla, preventing the lip from protruding and thus requiring the lemur to lap water rather than using suction. The skin of the ring-tailed lemur is dark gray or black in color, even in places where the fur is white. It is exposed on the nose, palms, soles, eyelids, lips, and genitalia. 
The skin is smooth, but the leathery texture of the hands and feet facilitates terrestrial movement. The anus, located at the joint of the tail, is covered when the tail is lowered. The area around the anus (circumanal area) and the perineum are covered in fur. In males, the scrotum lacks fur, is covered in small, horny spines, and the two sacs of the scrotum are divided. The penis is nearly cylindrical in shape and is covered in small spines, as well as having two pairs of larger spines on both sides. Males have a relatively small baculum (penis bone) compared to their size. The scrotum, penis, and prepuce are usually coated with a foul-smelling secretion. Females have a vulva with a thick, elongated clitoris that protrudes from the labia. The opening of the urethra is closer to the clitoris than to the vagina, forming a "drip tip". Females have two pairs of mammary glands (four nipples), but only one pair is functional. The anterior pair (closest to the head) are very close to the axillae (armpit). Furless scent glands are present in both males and females. Both sexes have small, dark antebrachial (forearm) glands measuring 1 cm long and located on the inner surface of the forearm nearly above the wrist joint. (This trait is shared between the Lemur and Hapalemur genera.) The gland is soft and compressible, bears fine dermal ridges (like fingerprints), and is connected to the palm by a fine, 2 mm-high, hairless strip. However, only the male has a horny spur that overlays this scent gland. The spur develops with age through the accumulation of secretions from an underlying gland that may connect to the skin through as many as a thousand minuscule ducts. The males also have brachial (arm) glands on the axillary surface of their shoulders (near the armpit). The brachial gland is larger than the antebrachial gland, covered in short hair around the periphery, and has a naked crescent-shaped orifice near the center.
The gland secretes a foul-smelling, brown, sticky substance. The brachial gland is barely developed, if present at all, in females. Both sexes also have apocrine and sebaceous glands in their genital or perianal regions, which are covered in fur. Its fingers are slender, padded, mostly lacking webbing, and semi-dexterous, with flat, human-like nails. The thumb is both short and widely separated from the other fingers. Despite being set at a right angle to the palm, the thumb is not opposable, since the ball of the joint is fixed in place. As with all strepsirrhines, the hand is ectaxonic (the axis passes through the fourth digit) rather than mesaxonic (the axis passing through the third digit) as seen in monkeys and apes. The fourth digit is the longest, and only slightly longer than the second digit. Likewise, the fifth digit is only slightly longer than the second. The palms are long and leathery, and like those of other primates, they have dermal ridges to improve grip. The feet are semi-digitigrade and more specialized than the hands. The big toe is opposable and is smaller than the big toe of other lemurs, which are more arboreal. The second toe is short, has a small terminal pad, and bears a toilet-claw (sometimes referred to as a grooming claw) specialized for personal grooming, specifically to rake through fur that is unreachable by the mouth. The toilet-claw is a trait shared among nearly all living strepsirrhine primates. Unlike other lemurs, the ring-tailed lemur's heel is not covered by fur.

Dentition

The ring-tailed lemur has a dentition of 2.1.3.3, meaning that on each side of the jaw it has two incisors, one canine tooth, three premolars, and three molar teeth. Its deciduous dentition is . The permanent teeth erupt in the following order: m 1/1 (first molars), i 2/2 (first incisors), i 3/3 (second incisors), C1 (upper canines), m 2/2 (second molars), c1 (lower canines), m 3/3 (third molars), p 4/4 (third premolars), p 3/3 (second premolars), p 2/2 (first premolars).
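The dental formula described above implies a fixed total tooth count, which a few lines of Python make explicit. The per-quadrant counts come straight from the sentence above; the multiplication by four quadrants (left/right of the upper and lower jaws) is the standard reading of a dental formula, not something stated in the text:

```python
# Tooth count implied by the ring-tailed lemur's dental formula:
# 2 incisors, 1 canine, 3 premolars, 3 molars in each quadrant of the jaw.
quadrant = {"incisors": 2, "canines": 1, "premolars": 3, "molars": 3}

per_quadrant = sum(quadrant.values())  # 9 teeth per quadrant
total = per_quadrant * 4               # left/right x upper/lower
print(total)  # 36 permanent teeth
```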
Its lower incisors (i1 and i2) are long, narrow, finely spaced, and point almost straight forward in the mouth (procumbent). Together with the incisor-shaped (incisiform) lower canines (c1), which are slightly larger and also procumbent, they form a structure called a toothcomb, a trait shared by nearly all strepsirrhine primates. The toothcomb is used during oral grooming, which involves licking and tooth-scraping. It may also be used for grasping small fruits, removing leaves from the stem when eating, and possibly scraping sap and gum from tree bark. The toothcomb is kept clean by a sublingual organ, a thin, flat, fibrous plate that covers a large part of the base of the tongue. The first lower premolar (p2) following the toothcomb is shaped like a canine (caniniform) and occludes with the upper canine, essentially filling the role of the incisiform lower canine. There is also a diastema (gap) between the second and third premolars (p2 and p3). The upper incisors are small, with the first incisors (I1) spaced widely from each other yet close to the second incisors (I2). Both are compressed buccolingually (between the cheek and the tongue). The upper canines (C1) are long, have a broad base, and curve down and back (recurved). The upper canines exhibit slight sexual dimorphism, with males having slightly larger canines than females. Both sexes use them in combat by slashing with them. There is a small diastema between the upper canine and the first premolar (P2), which is smaller and more caniniform than the other premolars. Unlike in other lemurs, the first two upper molars (M1 and M2) have prominent lingual cingulae, yet do not have a protostyle.

Ecology

The ring-tailed lemur is diurnal and semi-terrestrial. It is the most terrestrial of lemur species, spending as much as 33% of its time on the ground.
However, it is still considerably arboreal, spending 23% of its time in the mid-level canopy, 25% in the upper-level canopy, 6% in the emergent layer, and 13% in small bushes. Troop travel is 70% terrestrial. Troop size, home range, and population density vary by region and food availability. Troops typically range in size from 6 to 25 individuals, although troops with over 30 individuals have been recorded. The average troop contains 13 to 15 individuals. Home range size varies between . Troops of the ring-tailed lemur will maintain a territory, but overlap between neighboring ranges is often high. When encounters occur, they are agonistic, or hostile, in nature. A troop will usually occupy the same part of its range for three or four days before moving. When it does move, the average traveling distance is . Population density ranges from 100 individuals per km² in dry forests to 250–600 individuals per km² in gallery and secondary forests. The ring-tailed lemur has both native and introduced predators. Native predators include the fossa (Cryptoprocta ferox), the Madagascar harrier-hawk (Polyboroides radiatus), the Madagascar buzzard (Buteo brachypterus), and the Madagascar ground boa (Acrantophis madagascariensis). Introduced predators include the small Indian civet (Viverricula indica), the domestic cat, and the domestic dog.

Geographic range and habitat

Endemic to southern and southwestern Madagascar, the ring-tailed lemur ranges further into highland areas than other lemurs. It inhabits deciduous forests, dry scrub, montane humid forests, and gallery forests (forests along riverbanks). It strongly favors gallery forests, but such forests have now been cleared from much of Madagascar in order to create pasture for livestock. Depending on location, temperatures within its geographic range can vary from at Andringitra Massif to in the spiny forests of Beza Mahafaly Special Reserve.
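The activity-budget percentages quoted above (33% on the ground, 23% in the mid-level canopy, 25% in the upper-level canopy, 6% in the emergent layer, and 13% in small bushes) can be sanity-checked with a trivial Python tally, which shows that the reported strata partition the observed time completely:

```python
# Time-budget figures for the ring-tailed lemur, as quoted in the text.
budget = {
    "ground": 33,
    "mid-level canopy": 23,
    "upper-level canopy": 25,
    "emergent layer": 6,
    "small bushes": 13,
}
print(sum(budget.values()))  # 100: the reported strata account for all time
```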
This species is found as far east as Tôlanaro, inland towards the mountains of Andringitra on the southeastern plateau, among the spiny forests of the southern part of the island, and north along the west coast to the town of Belo sur Mer. Historically, the northern limits of its range in the west extended to the Morondava River near Morondava. It can still be found in Kirindy Mitea National Park, just south of Morondava, though at very low densities. It does not occur in Kirindy Forest Reserve, north of Morondava. Its distribution throughout the rest of its range is very spotty, with population densities varying widely. The ring-tailed lemur can be easily seen in five national parks in Madagascar: Andohahela National Park, Andringitra National Park, Isalo National Park, Tsimanampetsotse National Park, and Zombitse-Vohibasia National Park. It can also be found in Beza-Mahafaly Special Reserve, Kalambatritra Special Reserve, Pic d'Ivohibe Special Reserve, Amboasary Sud, Berenty Private Reserve, Anja Community Reserve, and marginally at Kirindy Mitea National Park. Unprotected forests that the species has been reported in include Ankoba, Ankodida, Anjatsikolo, Anbatotsilongolongo, Mahazoarivo, Masiabiby, and Mikea. Within the protected regions it is known to inhabit, the ring-tailed lemur is sympatric (shares its range) with as many as 24 species of lemur, covering every living genus except Allocebus, Indri, and Varecia. Historically, the species used to be sympatric with the critically endangered southern black-and-white ruffed lemur (Varecia variegata editorum), which was once found at Andringitra National Park; however, no sightings of the ruffed lemur have been reported in recent years. In western Madagascar, sympatric ring-tailed lemurs and red-fronted lemurs (Eulemur rufifrons) have been studied together. Little interaction takes place between the two species. 
While the diets of the two species overlap, they eat in different proportions, since the ring-tailed lemur has a more varied diet and spends more time on the ground.

Diet

The ring-tailed lemur is an opportunistic omnivore, primarily eating fruits and leaves, particularly those of the tamarind tree (Tamarindus indica), known natively as kily. When available, tamarind makes up as much as 50% of the diet, especially during the dry winter season. The ring-tailed lemur eats from as many as three dozen different plant species, and its diet includes flowers, herbs, bark, and sap. It has been observed eating decayed wood, earth, spider webs, insect cocoons, arthropods (spiders, caterpillars, cicadas, and grasshoppers), and small vertebrates (birds and chameleons). During the dry season it becomes increasingly opportunistic.

Behavior

Social systems

Ring-tailed lemurs live in groups known as "troops", which are classified as multi-male groups with a matriline as the core group. As with most lemurs, females socially dominate males in all circumstances, including feeding priority. Dominance is enforced by lunging, chasing, cuffing, grabbing, and biting. Young females do not always inherit their mother's rank, and young males leave the troop between three and five years of age. The sexes have separate dominance hierarchies: females have a distinct hierarchy, while male rank is correlated with age. Each troop has one to three central, high-ranking adult males who interact with females more than other group males do and lead the troop procession with high-ranking females. Recently transferred males, old males, and young adult males that have not yet left their natal group are often lower-ranking. Staying at the periphery of the group, they tend to be marginalized from group activity. For males, social structure changes can be seasonal. During the six-month period between December and May, a few males migrate between groups.
Established males transfer on average every 3.5 years, although young males may transfer approximately every 1.4 years. Group fission occurs when groups get too large and resources become scarce. In the mornings the ring-tailed lemur sunbathes to warm itself. It faces the sun sitting in what is frequently described as a "sun-worshipping" posture or lotus position. However, it sits with its legs extended outward, not cross-legged, and will often support itself on nearby branches. Sunning is often a group activity, particularly during cold mornings. At night, troops split into sleeping parties that huddle closely together to keep warm. Despite being quadrupedal, the ring-tailed lemur can rear up and balance on its hind legs, usually for aggressive displays. When threatened, the ring-tailed lemur may jump in the air and strike out with its short nails and sharp upper canine teeth in a behavior termed jump fighting. This is extremely rare outside of the breeding season, when tensions are high and competition for mates is intense. Other aggressive behaviors include a threat-stare, used to intimidate or start a fight, and a submissive gesture known as pulled-back lips. Border disputes with rival troops occur occasionally, and it is the dominant female's responsibility to defend the troop's home range. Agonistic encounters include staring, lunging approaches, and occasional physical aggression, and conclude with troop members retreating toward the center of the home range.

Olfactory communication

Olfactory communication is critically important for strepsirrhines like the ring-tailed lemur. Males and females scent-mark both vertical and horizontal surfaces at the overlaps in their home ranges using their anogenital scent glands. The ring-tailed lemur will perform a handstand to mark vertical surfaces, grasping the highest point with its feet while it applies its scent. Use of scent marking varies by age, sex, and social status.
Male lemurs use their antebrachial and brachial glands to demarcate territories and maintain intragroup dominance hierarchies. The thorny spur that overlays the antebrachial gland on each wrist is scraped against tree trunks to create grooves anointed with their scent. This is known as spur-marking. In displays of aggression, males engage in a social display behaviour called stink fighting, which involves impregnating their tails with secretions from the antebrachial and brachial glands and waving the scented tail at male rivals. Ring-tailed lemurs have also been shown to mark using urine. Behaviorally, there is a difference between regular urination, where the tail is slightly raised and a stream of urine is produced, and the urine-marking behavior, where the tail is held up in display and only a few drops of urine are used. The urine-marking behavior is typically used by females to mark territory, and has been observed primarily at the edges of the troop's territory and in areas where other troops may frequent. The urine-marking behavior is also most frequent during the mating season, and may play a role in reproductive communication between groups. Auditory communication The ring-tailed lemur is one of the most vocal primates and has a complex array of distinct vocalizations used to maintain group cohesion during foraging and alert group members to the presence of a predator. Calls range from simple to complex. An example of a simple call is the purr, which expresses contentment. A complex call is the sequence of clicks, close-mouth click series (CMCS), open-mouth click series (OMCS) and yaps used during predator mobbing. Some calls have variants and undergo transitions between variants, such as an infant "whit" (distress call) transitioning from one variant to another.
The most commonly heard vocalizations are the moan (low-to-moderate arousal, group cohesion), early-high wail (moderate-to-high arousal, group cohesion), and clicks ("location marker" to draw attention). Breeding and reproduction The ring-tailed lemur is polygynandrous, although the dominant male in the troop typically breeds with more females than other males. Fighting is most common during the breeding season. A receptive female may initiate mating by presenting her backside, lifting her tail, and looking at the desired male over her shoulder. Males may inspect the female's genitals to determine receptiveness. Females typically mate within their troop, but may seek outside males. The breeding season runs from mid-April to mid-May. Estrus lasts 4 to 6 hours, and females mate with multiple males during this period. Within a troop, females stagger their receptivity so that each female comes into season on a different day during the breeding season, reducing competition for male attention. Females lactate during the wet season, from December through April, when resources are readily available. Females gestate during the dry season, from May through September, when resources are low. Females give birth during seasons when resources, such as flowers, are at their peak. Gestation lasts for about 135 days, and parturition occurs in September or occasionally October. In the wild, one offspring is the norm, although twins may occur. Ring-tailed lemur infants have a birth weight of and are carried ventrally (on the chest) for the first 1 to 2 weeks, then dorsally (on the back). The young lemurs begin to eat solid food after two months and are fully weaned after five months. Sexual maturity is reached between 2.5 and 3 years. Male involvement in infant rearing is limited, although the entire troop, regardless of age or sex, can be seen caring for the young. Alloparenting between troop females has been reported.
Kidnapping by females and infanticide by males also occur occasionally. Due to harsh environmental conditions, predation and accidents such as falls, infant mortality can be as high as 50% within the first year and as few as 30% may reach adulthood. The longest-lived ring-tailed lemur in the wild was a female at the Berenty Reserve who lived for 20 years. In the wild, females rarely live past the age of 16, whereas the life expectancy of males is not known due to their social structure. The longest-lived male was reported to be 15 years old. The maximum lifespan reported in captivity was 27 years. Cognitive abilities and tool use Historically, the studies of learning and cognition in non-human primates have focused on simians (monkeys and apes), while strepsirrhine primates, such as the ring-tailed lemur and its allies, have been overlooked and popularly dismissed as unintelligent. A couple of factors stemming from early experiments have played a role in the development of this assumption. First, the experimental design of older tests may have favored the natural behavior and ecology of simians over that of strepsirrhines, making the experimental tasks inappropriate for lemurs. For example, simians are known for their manipulative play with non-food objects, whereas lemurs are only known to manipulate non-food objects in captivity. This behavior is usually connected with food association. Also, lemurs are known to displace objects with their nose or mouth more so than with their hands. Therefore, an experiment requiring a lemur to manipulate an object without prior training would favor simians over strepsirrhines. Second, individual ring-tailed lemurs accustomed to living in a troop may not respond well to isolation for laboratory testing. Past studies have reported hysterical behavior in such scenarios. The notion that lemurs are unintelligent has been perpetuated by the view that the neocortex ratio (as a measure of brain size) indicates intelligence. 
In fact, primatologist Alison Jolly noted early in her academic career that some lemur species, such as the ring-tailed lemur, have evolved a social complexity similar to that of cercopithecine monkeys, but not the corresponding intelligence. After years of observing wild ring-tailed lemur populations at the Berenty Reserve in Madagascar, as well as baboons in Africa, she more recently concluded that this highly social lemur species does not demonstrate the equivalent social complexity of cercopithecine monkeys, despite general appearances. Regardless, research has continued to illuminate the complexity of the lemur mind, with emphasis on the cognitive abilities of the ring-tailed lemur. As early as the mid-1970s, studies had demonstrated that they could be trained through operant conditioning using standard schedules of reinforcement. The species has been shown to be capable of learning pattern, brightness, and object discrimination, skills common among vertebrates. The ring-tailed lemur has also been shown to learn a variety of complex tasks, often equaling, if not exceeding, the performance of simians. More recently, research at the Duke Lemur Center has shown that the ring-tailed lemur can organize sequences in memory and retrieve ordered sequences without language. The experimental design demonstrated that the lemurs were using an internal representation of the sequence to guide their responses and not simply following a trained sequence, where one item in the sequence cues the selection of the next. But this is not the limit of the ring-tailed lemur's reasoning skills. Another study, performed at the Myakka City Lemur Reserve, suggests that this species, along with several other closely related lemur species, understands simple arithmetic operations.
Since tool use is considered to be a key feature of primate intelligence, the apparent lack of this behavior in wild lemurs, as well as the lack of non-food object play, has helped reinforce the perception that lemurs are less intelligent than their simian cousins. However, another study at the Myakka City Lemur Reserve examined the representation of tool functionality in both the ring-tailed lemur and the common brown lemur and discovered that, like monkeys, they used tools with functional properties (e.g., tool orientation or ease of use) instead of tools with nonfunctional features (e.g., color or texture). Although the ring-tailed lemur may not use tools in the wild, it can not only be trained to use a tool, but will preferentially select tools based on their functional qualities. Therefore, the conceptual competence to use a tool may have been present in the common primate ancestor, even though the use of tools may not have appeared until much later. Conservation status In addition to being listed as endangered in 2014 by the IUCN, the ring-tailed lemur has been listed since 1977 by CITES under Appendix I, which makes trade of wild-caught specimens illegal. Although there are more endangered species of lemur, the ring-tailed lemur is considered a flagship species due to its recognizability. As of 2017, only about 2,000 ring-tailed lemurs are estimated to be left in the wild, making the threat of extinction far more serious for them than previously believed. Three factors threaten ring-tailed lemurs. First and foremost is habitat destruction. Starting nearly 2,000 years ago with the introduction of humans to the island, forests have been cleared to produce pasture and agricultural land. Extraction of hardwoods for fuel and lumber, as well as mining and overgrazing, have also taken their toll. Today, it is estimated that 90% of Madagascar's original forest cover has been lost.
Rising populations have created even greater demand in the southwest portion of the island for fuel wood, charcoal, and lumber. Fires from the clearing of grasslands, as well as slash-and-burn agriculture, destroy forests. Another threat to the species is harvesting for food (bushmeat), fur clothing or pets. Finally, periodic drought common to southern Madagascar can impact populations already in decline. In 1991 and 1992, for example, a severe drought caused an abnormally high mortality rate among infants and females at the Beza Mahafaly Special Reserve. Two years later, the population had declined by 31% and took nearly four years to start to recover. The ring-tailed lemur resides in several protected areas within its range, each offering varying levels of protection. At the Beza Mahafaly Special Reserve, a holistic approach to in-situ conservation has been taken: not only do field research and resource management involve international students and local people (including school children), but livestock management is also used at the peripheral zones of the reserve and ecotourism benefits the local people. Outside of its diminishing habitat and other threats, the ring-tailed lemur reproduces readily and has fared well in captivity. For this reason, along with its popularity, it has become the most populous lemur in zoos worldwide, with more than 2,500 in captivity as of 2009. It is also the most common of all captive primates. Ex situ facilities actively involved in the conservation of the ring-tailed lemur include the Duke Lemur Center in Durham, North Carolina, the Lemur Conservation Foundation in Myakka City, Florida, and the Madagascar Fauna Group headquartered at the Saint Louis Zoo. Due to the high success of captive breeding, reintroduction is a possibility if wild populations were to crash. Although experimental releases have met with success on St.
Catherines Island in Georgia, demonstrating that captive lemurs can readily adapt to their environment and exhibit a full range of natural behaviors, captive release is not currently being considered. Ring-tailed lemur populations can also benefit from drought intervention, due to the availability of watering troughs and introduced fruit trees, as seen at the Berenty Private Reserve in southern Madagascar. However, these interventions are not always seen favorably, since natural population fluctuations are not permitted. The species is thought to have evolved its high fecundity due to its harsh environment. Cultural references The ring-tailed lemur is known locally in Malagasy as (pronounced , and spelled maki in French) or (pronounced or colloquially ). Being the most widely recognized endemic primate on the island, it has been selected as the symbol for Madagascar National Parks (formerly known as ANGAP). The Maki brand, which started by selling T-shirts in Madagascar and now sells clothing across the Indian Ocean islands, is named after this lemur due to its popularity, even though the company's logo portrays the face of a sifaka and its name uses the French spelling. The first mention of the ring-tailed lemur in Western literature came in 1625 when English traveller and writer Samuel Purchas described them as being comparable in size to a monkey and having a fox-like long tail with black and white rings. Charles Catton included the species in his 1788 book Animals Drawn from Nature and Engraved in Aqua-tinta, calling it the "Maucauco" and regarding it as a type of monkey. The species was further popularized by the Animal Planet television series Lemur Street, as well as by the character King Julien in the animated Madagascar film and TV franchise. The ring-tailed lemur was also the focus of the 1996 Nature documentary A Lemur's Tale, which was filmed at the Berenty Reserve and followed a troop of lemurs. 
The troop included a special infant named Sapphire, who was nearly albino, with white fur, bright blue eyes, and the characteristic ringed tail. A ring-tailed lemur played a role in the 1997 comedy film Fierce Creatures, starring John Cleese, who has a passion for lemurs. Cleese later hosted the 1998 BBC documentary In the Wild: Operation Lemur with John Cleese, which tracked the progress of a reintroduction of black-and-white ruffed lemurs back into the Betampona Reserve in Madagascar. The project had been partly funded by Cleese's donation of the proceeds from the London premiere of Fierce Creatures.
Biology and health sciences
Strepsirrhini
Animals
503782
https://en.wikipedia.org/wiki/Streptococcus%20pneumoniae
Streptococcus pneumoniae
Streptococcus pneumoniae, or pneumococcus, is a Gram-positive, spherical, alpha-hemolytic bacterium of the genus Streptococcus. S. pneumoniae cells are usually found in pairs (diplococci), do not form spores, and are nonmotile. As a significant human pathogenic bacterium, S. pneumoniae was recognized as a major cause of pneumonia in the late 19th century, and is the subject of many humoral immunity studies. Streptococcus pneumoniae resides asymptomatically in healthy carriers, typically colonizing the respiratory tract, sinuses, and nasal cavity. However, in susceptible individuals with weaker immune systems, such as the elderly and young children, the bacterium may become pathogenic and spread to other locations to cause disease. It spreads by direct person-to-person contact via respiratory droplets and by autoinoculation in persons carrying the bacteria in their upper respiratory tracts. It can be a cause of neonatal infections. Streptococcus pneumoniae is the main cause of community-acquired pneumonia and meningitis in children and the elderly, and of sepsis in those infected with HIV. The organism also causes many types of pneumococcal infections other than pneumonia. These invasive pneumococcal diseases include bronchitis, rhinitis, acute sinusitis, otitis media, conjunctivitis, meningitis, sepsis, osteomyelitis, septic arthritis, endocarditis, peritonitis, pericarditis, cellulitis, and brain abscess. Streptococcus pneumoniae can be differentiated from the viridans streptococci, some of which are also alpha-hemolytic, using an optochin test, as S. pneumoniae is optochin-sensitive. S. pneumoniae can also be distinguished based on its sensitivity to lysis by bile, the so-called "bile solubility test". The encapsulated, Gram-positive, coccoid bacteria have a distinctive morphology on Gram stain: lancet-shaped diplococci.
They have a polysaccharide capsule that acts as a virulence factor for the organism; more than 100 different serotypes are known, and these types differ in virulence, prevalence, and extent of drug resistance. The capsular polysaccharide (CPS) serves as a critical defense mechanism against the host immune system. It forms the outermost layer of encapsulated strains of S. pneumoniae and is commonly attached to the peptidoglycan of the cell wall. It consists of a viscous substance derived from a high-molecular-weight polymer composed of repeating oligosaccharide units linked by covalent bonds to the cell wall. The virulence and invasiveness of various strains of S. pneumoniae vary according to their serotypes, determined by their chemical composition and the quantity of CPS they produce. Variations among different S. pneumoniae strains significantly influence pathogenesis, determining bacterial survival and likelihood of causing invasive disease. Additionally, the CPS inhibits phagocytosis by preventing granulocytes' access to the cell wall. History In 1881, the organism, known later in 1886 as the pneumococcus for its role as a cause of pneumonia, was first isolated simultaneously and independently by the U.S. Army physician George Sternberg and the French chemist Louis Pasteur. The organism was termed Diplococcus pneumoniae from 1920 because of its characteristic appearance in Gram-stained sputum. It was renamed Streptococcus pneumoniae in 1974 because it was very similar to streptococci. Streptococcus pneumoniae played a central role in demonstrating that genetic material consists of DNA. In 1928, Frederick Griffith demonstrated transformation of bacteria, turning a harmless pneumococcus into a lethal form, by co-inoculating live harmless pneumococci into a mouse along with heat-killed virulent pneumococci.
In 1944, Oswald Avery, Colin MacLeod, and Maclyn McCarty demonstrated that the transforming factor in Griffith's experiment was not protein, as was widely believed at the time, but DNA. Avery's work marked the birth of the molecular era of genetics. Genetics The genome of S. pneumoniae is a closed, circular DNA structure that contains between 2.0 and 2.1 million base pairs depending on the strain. It has a core set of 1553 genes, plus 154 genes in its virulome, which contribute to virulence and 176 genes that maintain a noninvasive phenotype. Genetic information can vary up to 10% between strains. The pneumococcal genome is known to contain a large and diverse repertoire of antimicrobial peptides, including 11 different lantibiotics. Transformation Natural bacterial transformation involves the transfer of DNA from one bacterium to another through the surrounding medium. Transformation is a complex developmental process requiring energy and is dependent on expression of numerous genes. In S. pneumoniae, at least 23 genes are required for transformation. For a bacterium to bind, take up, and recombine exogenous DNA into its chromosome, it must enter a special physiological state called competence. Competence in S. pneumoniae is induced by DNA-damaging agents such as mitomycin C, fluoroquinolone antibiotics (norfloxacin, levofloxacin and moxifloxacin), and topoisomerase inhibitors. Transformation protects S. pneumoniae against the bactericidal effect of mitomycin C. Michod et al. summarized evidence that induction of competence in S. pneumoniae is associated with increased resistance to oxidative stress and increased expression of the RecA protein, a key component of the recombinational repair machinery for removing DNA damage. On the basis of these findings, they suggested that transformation is an adaptation for repairing oxidative DNA damage. S. 
pneumoniae infection stimulates polymorphonuclear leukocytes (granulocytes) to produce an oxidative burst that is potentially lethal to the bacteria. The ability of S. pneumoniae to repair oxidative DNA damage in its genome caused by this host defense likely contributes to the pathogen's virulence. Consistent with this premise, Li et al. reported that, among different highly transformable S. pneumoniae isolates, nasal colonization fitness and virulence (lung infectivity) depend on an intact competence system. Infection Streptococcus pneumoniae is part of the normal upper respiratory tract flora. As with many natural flora, it can become pathogenic under the right conditions, typically when the immune system of the host is suppressed. Invasins, such as pneumolysin, an antiphagocytic capsule, various adhesins, and immunogenic cell wall components are all major virulence factors. After S. pneumoniae colonizes the air sacs of the lungs, the body responds by stimulating the inflammatory response, causing plasma, blood, and white blood cells to fill the alveoli. This condition is called bacterial pneumonia. S. pneumoniae undergoes spontaneous phase variation, changing between transparent and opaque colony phenotypes. The transparent phenotype has a thinner capsule and expresses large amounts of phosphorylcholine (ChoP) and choline-binding protein A (CbpA), contributing to the bacteria's ability to adhere and colonize in the nasopharynx. The opaque phenotype is characterized by a thicker capsule, resulting in increased resistance to host clearance. It expresses large amounts of capsule and pneumococcal surface protein A (PspA) which help the bacteria survive in the blood. Phase-variation between these two phenotypes allows S. pneumoniae to survive in different human body systems. Diseases and symptoms Pneumonia is the most prevalent disease caused by Streptococcus pneumoniae. 
Pneumonia is a lung infection characterized by symptoms such as fever, chills, coughing, rapid or labored breathing, and chest pain. Elderly patients who contract pneumonia may show fewer of these specific symptoms, but tend to develop tachypnea a few days before the bacterial illness can be clinically confirmed. Tachypnea is characterized by rapid, shallow breathing and can cause difficulty sleeping, chest pain, and decreased appetite. While a few different bacterial infections can lead to meningitis, S. pneumoniae is one of the leading causes of this infection. Pneumococcal meningitis occurs when the bacteria spread from the blood to the central nervous system, which is made up of the brain and the spinal cord. Here, the infection spreads and causes inflammation, which can lead to severe outcomes such as brain damage, hearing loss, limb amputation, or death. Symptoms include common problems such as headaches, fevers, and nausea, but the more telling signs that a bacterial infection may have reached the brain are sensitivity to light, seizures, limited range of neck movement, and easy bruising all over the body. Osteomyelitis, or bone infection, is a rare occurrence but has been seen in patients whose S. pneumoniae infection went untreated for too long. Sepsis is caused by an overwhelming response to an infection and leads to tissue damage, organ failure, and even death. The symptoms include confusion, shortness of breath, elevated heart rate, pain or discomfort, over-perspiration, fever, shivering, or feeling cold. Less severe illnesses that can be caused by pneumococcal infection are conjunctivitis (pink eye), otitis media (middle ear infection), bronchitis (airway inflammation), and sinusitis (sinus infection). Vaccine Due to the importance of disease caused by S. pneumoniae, several vaccines have been developed to protect against invasive infection.
The World Health Organization recommends routine childhood pneumococcal vaccination; it is incorporated into the childhood immunization schedule in a number of countries, including the United Kingdom, the United States, Greece, and South Africa. Currently, there are two vaccines available for S. pneumoniae: the pneumococcal polysaccharide vaccine (PPSV23) and the pneumococcal conjugate vaccine (PCV13). PPSV23 functions by utilizing CPS to stimulate the production of type-specific antibodies, initiating processes such as complement activation, opsonization, and phagocytosis to combat bacterial infections. It elicits a humoral immune response targeting the CPS present on the bacterial surface. PPSV23 offers T-cell-independent immunity and requires revaccination 5 years after the first vaccination because of its temporary nature. PCV13 was developed after the low efficacy of the polysaccharide vaccine in infants and young children was determined. PCV13 elicits a T-cell-dependent response and provides enduring immunity by promoting interaction between B and T cells, leading to an enhanced and prolonged immune response. Biotechnology Components from S. pneumoniae have been harnessed for a range of applications in biotechnology. Through engineering of surface molecules from this bacterium, proteins can be irreversibly linked using the sortase enzyme or using the SnoopTag/SnoopCatcher reaction. Various glycoside hydrolases have also been cloned from S. pneumoniae to help analysis of cell glycosylation. Interaction with Haemophilus influenzae Historically, Haemophilus influenzae has been a significant cause of infection, and both H. influenzae and S. pneumoniae can be found in the human upper respiratory system. A study of competition in vitro revealed S. pneumoniae overpowered H. influenzae by attacking it with hydrogen peroxide. There is also evidence that S. pneumoniae uses hydrogen peroxide as a virulence factor. However, in a study adding both bacteria to the nasal cavity of a mouse, within two weeks only H.
influenzae survives; further analysis showed that neutrophils (a type of phagocyte) exposed to dead H. influenzae were more aggressive in attacking S. pneumoniae. Diagnosis Diagnosis is generally made based on clinical suspicion along with a positive culture from a sample from virtually any place in the body. S. pneumoniae is, in general, optochin-sensitive, although optochin resistance has been observed. Recent advances in next-generation sequencing and comparative genomics have enabled the development of robust and reliable molecular methods for the detection and identification of S. pneumoniae. For instance, the Xisco gene was recently described as a biomarker for PCR-based detection of S. pneumoniae and differentiation from closely related species. Atromentin and leucomelone possess antibacterial activity, inhibiting the enzyme enoyl-acyl carrier protein reductase (essential for the biosynthesis of fatty acids) in S. pneumoniae. Resistance Resistant pneumococcal strains are called penicillin-resistant pneumococci (PRP), penicillin-resistant Streptococcus pneumoniae (PRSP), Streptococcus pneumoniae penicillin resistant (SPPR) or drug-resistant Streptococcus pneumoniae (DRSP). In 2015, in the US, there were an estimated 30,000 cases, and in 30% of them the strains were resistant to one or more antibiotics.
Biology and health sciences
Gram-positive bacteria
Plants
503839
https://en.wikipedia.org/wiki/Semnopithecus
Semnopithecus
Semnopithecus is a genus of Old World monkeys native to the Indian subcontinent, all but two of whose species are commonly known as gray langurs. Traditionally only the species Semnopithecus entellus was recognized, but since about 2001 additional species have been recognized. The taxonomy has been in flux, but currently eight species are recognized. Members of the genus Semnopithecus are terrestrial, inhabiting forest, open lightly wooded habitats, and urban areas on the Indian subcontinent. Most species are found at low to moderate altitudes, but the Nepal gray langur and Kashmir gray langur occur up to in the Himalayas. Characteristics These langurs are largely gray (some more yellowish), with a black face and ears. Externally, the various species mainly differ in the darkness of the hands and feet, the overall color and the presence or absence of a crest. Typically, all north Indian gray langurs carry their tail tips looping towards their head during a casual walk, whereas all south Indian and Sri Lankan gray langurs have an inverted "U"-shaped or an "S"-shaped tail carriage pattern. There is also significant variation in size depending on the sex, with the male always larger than the female. The head-and-body length is from . Their tails, at , are always longer than their bodies. Langurs from the southern part of their range are smaller than those from the north. At , the heaviest langur ever recorded was a male Nepal gray langur. The larger gray langurs are rivals for the largest species of monkey found in Asia. The average weight of gray langurs is in the males and in the females. Langurs mostly walk quadrupedally and spend half of their time on the ground and the other half in trees. They will also make bipedal hops, climbing and descending supports with the body upright, and leaps. Langurs can leap horizontally and in descending.
Taxonomy Traditionally, only Semnopithecus entellus was recognized as a species, the remainder all being treated as subspecies. In 2001, it was proposed that seven species should be recognized. This was followed in Mammal Species of the World in 2005, though several of the seven species intergrade, and alternative treatments exist where only two species (a northern and a southern) are recognized. Phylogenetic evidence supports at least three species: a north Indian, a south Indian and a Sri Lankan one. It has been suggested that Semnopithecus priam thersites is worthy of treatment as a species rather than a subspecies, but at present this is based on limited evidence. A study based on external morphology and ecological niche modelling in Peninsular India found six main types, but continued to label all of them as subspecies. Coat color is highly variable, possibly due to phenotypic plasticity, and is therefore of questionable value in species delimitation. It has been suggested that Trachypithecus should be considered only a subgenus of Semnopithecus. If maintaining the two as separate monophyletic genera, the purple-faced langur and Nilgiri langur belong in Semnopithecus instead of their former genus Trachypithecus. At present it is unclear where the T. pileatus species group (consisting of the capped langur, Shortridge's langur and Gee's golden langur) belongs, as available mtDNA data place it in Semnopithecus, while Y chromosome data place it in Trachypithecus. A possible explanation for this is that the T. pileatus species group is the result of fairly recent hybridization between Semnopithecus and Trachypithecus.
As of 2005, the authors of Mammal Species of the World recognized the following seven Semnopithecus species:
Nepal gray langur, Semnopithecus schistaceus
Kashmir gray langur, Semnopithecus ajax
Tarai gray langur, Semnopithecus hector
Northern plains gray langur, Semnopithecus entellus
Black-footed gray langur, Semnopithecus hypoleucos
Southern plains gray langur, Semnopithecus dussumieri
Tufted gray langur, Semnopithecus priam
Results of analysis of the mitochondrial cytochrome b gene and two nuclear DNA-encoded genes of several colobine species revealed that Nilgiri and purple-faced langurs cluster with gray langurs, while Trachypithecus species form a distinct clade. Since then, two other species have been moved from Trachypithecus to Semnopithecus:
Purple-faced langur, Semnopithecus vetulus
Nilgiri langur, Semnopithecus johnii
In addition, Semnopithecus dussumieri has been determined to be invalid. Most of the range that had been considered S. dussumieri is now considered S. entellus. Thus the current generally accepted species within the genus Semnopithecus are:
A 2013 genetic study indicated that while S. entellus, S. hypoleucos, S. priam and S. johnii are all valid taxa, there has been hybridization between S. priam and S. johnii. It also indicated that there has been some hybridization between S. entellus and S. hypoleucos where their ranges overlap, and a small amount of hybridization between S. hypoleucos and S. priam. It also suggested that S. priam and S. johnii diverged from each other fairly recently. Distribution and habitat The entire distribution of all gray langur species stretches from the Himalayas in the north to Sri Lanka in the south, and from Bangladesh in the east to Pakistan in the west. They possibly occur in Afghanistan. The bulk of the gray langur distribution is within India, and all seven currently recognized species have at least a part of their range in this country. Gray langurs can adapt to a variety of habitats.
They inhabit arid habitats like deserts, tropical habitats like tropical rainforests, and temperate habitats like coniferous forests, deciduous forests and mountain habitats. They are found at sea level to altitudes up to . They can adapt well to human settlements, and are found in villages, towns and areas with housing or agriculture. They live in densely populated cities like Jodhpur, which has a population numbering up to a million. Ecology and behavior Gray langurs are diurnal. They sleep during the night in trees but also on man-made structures like towers and electric poles when in human settlements. When resting in trees, they generally prefer the highest branches. Ungulates such as cattle and deer will eat food dropped by foraging langurs. Langurs are preyed upon by leopards, dholes and tigers. Wolves, jackals, Asian black bears and pythons may also prey on langurs. Diet Gray langurs are primarily herbivores. However, unlike some other colobines, they do not depend solely on leaves and leaf buds of herbs, but also eat coniferous needles and cones, fruits and fruit buds, evergreen petioles, shoots and roots, seeds, grass, bamboo, fern rhizomes, mosses, and lichens. Leaves of trees and shrubs rank at the top of preferred foods, followed by herbs and grasses. Non-plant material consumed includes spider webs, termite mounds and insect larvae. They forage on agricultural crops and other human foods, and even accept handouts. Although they occasionally drink, langurs get most of their water from the moisture in their food. Social structure Gray langurs exist in three types of groups: one-male groups, comprising one adult male, several females and offspring; multiple-male groups, comprising males and females of all ages; and all-male groups. All-male groups tend to be the smallest of the groups and can consist of adults, subadults, and juveniles. Some populations have only multiple-male groups as mixed-sex groups, while others have only one-male groups as mixed-sex groups.
Some evidence suggests multiple-male groups are temporary and exist only after a takeover, subsequently splitting into one-male and all-male groups. Social hierarchies exist for all group types. In all-male groups, dominance is attained through aggression and mating success. Among sexually mature females, rank is based on physical condition and age: the younger the female, the higher the rank. Dominance rituals are most common among high-ranking langurs. Most changes in social rank among males take place during changes in group membership. An adult male may remain in a one-male group for 45 months. The rate of male replacement can occur quickly or slowly depending on the group. Females within a group are matrilineally related. Female membership is also stable, but less so in larger groups. Relationships between the females tend to be friendly. They will do various activities with each other, such as foraging, traveling and resting. They will also groom each other regardless of their rank. However, higher-ranking females give out and receive grooming the most. In addition, females groom males more often than the other way around. Male and female relationships are usually positive. Relationships between males can range from peaceful to violent. While females remain in their natal groups, males will leave when they reach adulthood. Relationships between groups tend to be hostile. High-ranking males from different groups will display, vocalize, and fight among themselves. Reproduction and parenting In one-male groups, the resident male is usually the sole breeder of the females and sires all the young. In multiple-male groups, the highest-ranking male fathers most of the offspring, followed by the next-ranking males; even outside males will father young. Higher-ranking females are more reproductively successful than lower-ranking ones. Female gray langurs do not make it obvious that they are in estrus.
However, males are still somehow able to deduce the reproductive state of females. Females signal that they are ready to mate by shuddering the head, lowering the tail, and presenting their anogenital regions. Such solicitations do not always lead to copulation. When langurs mate, they are sometimes disrupted by other group members. Females have even been recorded mounting other females. The gestation period of the gray langur lasts around 200 days, at least at Jodhpur, India. In some areas, reproduction is year-round. Year-round reproduction appears to occur in populations that capitalize on human-made foods. Other populations reproduce seasonally. Infanticide is common among gray langurs. Most infanticidal langurs are males that have recently immigrated to a group and driven out the prior male. These males only kill infants that are not their own. Infanticide is more commonly reported in one-male groups, perhaps because one male monopolizing matings drives the evolution of this trait. In multiple-male groups, the costs for infanticidal males are likely to be high, as the other males may protect the infants, and an infanticidal male cannot ensure that he will sire young while other males are around. Nevertheless, infanticide does occur in these groups, and it has been suggested that such killings serve to return a female to estrus and give the male an opportunity to mate. Females usually give birth to a single infant, although twins do occur. Most births occur during the night. Infants are born with thin, dark brown or black hair and pale skin. Infants spend their first week attached to their mothers' chests and mostly just suckle or sleep. They do not move much for the first two weeks of their life. As they approach their sixth week of life, infants vocalize more, using squeaks and shrieks to communicate stress. In the following months they become capable of quadrupedal locomotion, and can walk, run and jump by their second or third month.
Alloparenting occurs among langurs, starting when the infants reach two days of age. The infant will be given to the other females of the group. However, if the mother dies, the infant usually dies as well. Langurs are weaned by 13 months. Vocalizations Gray langurs are recorded to make a number of vocalizations: loud calls or whoops made only by adult males during displays; harsh barks made by adult and subadult males when surprised by a predator; cough barks made by adults and subadults during group movements; grunt barks made mostly by adult males during group movements and agonistic interactions; rumble screams made in agonistic interactions; pant barks made with loud calls when groups are interacting; grunts made in many different situations, usually agonistic ones; honks made by adult males when groups are interacting; rumbles made during approaches, embraces, and mounts; and hiccups made by most members of a group when they find another group. Status and conservation Gray langurs have stable populations in some areas and declining ones in others. Both the black-footed gray langur and the Kashmir gray langur are considered threatened. The latter is the rarest species of gray langur, with fewer than 250 mature individuals remaining. In India, gray langurs number around 300,000. India has laws prohibiting the capture or killing of langurs, but they are still hunted in some parts of the country. Enforcement of these laws has proven difficult, and it seems most people are unaware of their protection. Populations are also threatened by mining, forest fires and deforestation for wood. Langurs can be found near roads and can become victims of automobile accidents. This happens even in protected areas, with deaths from automobile collisions making up nearly a quarter of mortality in Kumbhalgarh Wildlife Sanctuary in Rajasthan, India.
Langurs are considered sacred in the Hindu religion and are sometimes kept for religious purposes by Hindu priests and for roadside performances. However, some religious groups use langurs as food and medicine, and parts of gray langurs are sometimes kept as amulets for good luck. Because of their sacred status and their less aggressive behavior compared to other primates, langurs are generally not considered pests in many parts of India. Despite this, research in some areas shows high levels of support for the removal of langurs from villages, with their sacred status counting for little. Langurs will raid crops and steal food from houses, and this causes people to persecute them. While people may feed them in temples, they do not extend such care to monkeys at their homes. Langurs stealing food and biting people in urban areas may also contribute to further persecution.
Biology and health sciences
Old World monkeys
Animals
503948
https://en.wikipedia.org/wiki/Elasmotherium
Elasmotherium
Elasmotherium is an extinct genus of large rhinoceros that lived in Eastern Europe, Central Asia and East Asia from the Late Miocene through the Late Pleistocene, with the youngest reliable dates of at least 39,000 years ago. It was the last surviving member of Elasmotheriinae, a distinctive group of rhinoceroses separate from the group that contains living rhinoceroses (Rhinocerotinae). Five species are recognised. The genus first appeared in the Late Miocene in present-day China, likely having evolved from Sinotherium, before spreading to the Pontic–Caspian steppe, the Caucasus and Central Asia. The best known Elasmotherium species, E. sibiricum, sometimes called the Siberian unicorn, was among the largest known rhinoceroses, with an estimated body mass of around , comparable to an elephant, and is often conjectured to have borne a single very large horn. However, no horn has ever been found, and other authors have conjectured that the horn was likely much smaller. Like all rhinoceroses, elasmotheres were herbivorous. Unlike those of any other rhinoceros, and of any other ungulate aside from some notoungulates, its high-crowned molars were ever-growing, and it was likely adapted for a grazing diet. Its legs were longer than those of other rhinos and were adapted for galloping, giving it a horse-like gait. Taxonomy Elasmotherium was first described in 1808–1809 by the German-Russian palaeontologist Gotthelf Fischer von Waldheim based on a left lower jaw, four molars, and the tooth root of the third premolar, which had been gifted to Moscow University by Princess Ekaterina Dashkova in 1807. He first announced the genus name at an 1808 presentation before the Moscow Society of Naturalists, and named the type species E. sibiricum a year later in 1809.
The genus name derives from Ancient Greek elasmos "laminated" and therion "beast", in reference to the laminated folding of the tooth enamel; the species name sibiricum is probably a reference to the predominantly Siberian origin of Princess Dashkova's collection. However, the specimen's exact origins are unknown. In 1877, the German naturalist Johann Friedrich von Brandt placed it into the newly erected subfamily Elasmotheriinae, separate from modern rhinos. The genus is known from hundreds of find sites, mainly of cranial fragments and teeth, but in some cases nearly complete skeletons of post-cranial bones, scattered over Eurasia from Eastern Europe to China. Dozens of crania have been reconstructed and given archaeological identifiers. The division into species is based mainly on fine distinctions of the teeth and jaws and the shape of the skull. Evolution Elasmotherium belongs to the subfamily Elasmotheriinae, distinct from the subfamily which includes all living rhinoceroses, Rhinocerotinae. The depth of the split between Elasmotheriinae and Rhinocerotinae is disputed. Older estimates place the age of divergence around 47 million years ago, during the Eocene, while younger estimates place the split around 35 million years ago, near the Eocene–Oligocene boundary. Unambiguous members of Elasmotheriinae first appeared during the Early Miocene, and were widespread across Europe, Africa and Asia during the Miocene epoch. Elasmotherium is the only known member of Elasmotheriinae from after the Miocene, with elasmotheriines declining as part of a broader decline of rhinocerotids and many other mammal species during the Late Miocene. The oldest known species of Elasmotherium is Elasmotherium primigenium from the Late Miocene of Dingbian County in Shaanxi, China. Elasmotherium likely evolved from Sinotherium, a genus of elasmothere also found in China.
Elasmotherium arrived in Eastern Europe around 2.5 million years ago, during the earliest part of the Pleistocene epoch. Hypsodonty, a dentition pattern in which the molars have high crowns and the enamel extends below the gum line, is thought to be a characteristic of Elasmotheriinae, perhaps as an adaptation to the heavier grains found in riparian zones along rivers. Species There are four chronospecies of Elasmotherium aside from the aforementioned E. primigenium, which are, from oldest to youngest, E. chaprovicum, E. peii, E. caucasicum and E. sibiricum, and which together span from the Late Pliocene to the Late Pleistocene. An elasmotherian species turned up in the preceding Khaprovian or Khaprov Faunal Complex, which was at first taken to be E. caucasicum, and then on the basis of the dentition was redefined as a new species, E. chaprovicum (Shvyreva, 2004), named after the Khaprov Faunal Complex. The Khaprov is in the Middle Villafranchian, MN17, which spans the Piacenzian of the Late Pliocene and the Gelasian of the Early Pleistocene of the Northern Caucasus, Moldova and Asia, and has been dated to 2.6–2.2 Ma. E. peii was first described by Chow (1958) from remains found in Shaanxi, China. The species is also known from numerous remains from the classical range of Elasmotherium; some sources have considered it a synonym of E. caucasicum, but it is currently considered distinct. It is mainly found in the Psekups faunal complex between 2.2 and 1.6 Ma, and additional remains from Shaanxi were described in 2018. E. caucasicum was first described by the Russian palaeontologist Aleksei Borissiak in 1914, who said it apparently flourished in the Black Sea region as a member of the Early Pleistocene Tamanian Faunal Unit (1.1–0.8 Ma, Taman Peninsula). It is the most commonly found mammal of the assemblage. E. caucasicum is thought to be more primitive than E. sibiricum and perhaps represents an ancestral stock.
It is also known in northern China from the Early Pleistocene Nihewan faunal assemblage, and became extinct there at approximately 1.6 Ma. This suggests Elasmotherium developed separately in Russia and China. E. sibiricum, described by Fischer von Waldheim in 1809 and chronologically the latest species of the sequence, appeared in the Middle Pleistocene, ranging from southwestern Russia to western Siberia and southward into Ukraine and Moldova. Description Elasmotherium is typically reconstructed as a woolly animal, generally based on the woolliness exemplified in contemporary megafauna such as mammoths and the woolly rhinoceros. However, it is sometimes depicted as bare-skinned like modern rhinos. In 1948, the Russian palaeontologist Valentin Teryaev suggested it was semi-aquatic with a dome-like horn and resembled a hippopotamus, because the animal had four toes like a wetland tapir rather than the three toes of other rhinos, but Elasmotherium has since been shown to have had only three functional toes, and Teryaev's reconstruction has not garnered much scientific attention. The known specimens of E. sibiricum reach up to in length, with shoulder heights up to , while E. caucasicum reaches at least in body length with an estimated mass of , making Elasmotherium the largest rhinoceros of the Quaternary. Both species were among the largest rhinoceroses, comparable in size to the woolly mammoth and larger than the contemporary woolly rhinoceros. The feet were unguligrade, the front larger than the rear, with three digits at front and rear and a vestigial fifth metacarpal. Dentition Like other rhinos, Elasmotherium had two premolars and three molars for chewing, and lacked incisors and canines, relying instead on a prehensile lip to strip food. Elasmotherium were euhypsodonts, with large tooth crowns and enamel extending below the gum line, and continuously growing teeth. Elasmotherium fossils rarely show evidence of tooth roots.
Horn Elasmotherium is traditionally thought to have had a keratinous horn, indicated by a circular dome on the forehead with a deep, furrowed surface and a circumference of . The furrows are interpreted as the seats of blood vessels for horn-generating tissue. In rhinos, the horn is not attached to bone, but grows from the surface of a dense skin tissue, anchoring itself by creating bone irregularities and rugosities. The outermost layer cornifies. As the layers age, the horn loses diameter by degradation of the keratin due to ultraviolet light, drying out, and continual wearing. However, melanin and calcium deposits in the centre harden the keratin there, which gives the horn its distinctive shape. There was likely a large hump of muscle on the back, which is generally thought to have supported a heavy horn. A 2021 study found that the cranial dome was quite fragile and ill-suited for a large horn, being more indicative of a smaller one, and that the dome could have functioned as a resonating chamber of some sort, akin to the nasal crests of Rusingoryx and hadrosaurs. Palaeobiology Diet Modern hypsodont hoofed mammals are generally grazers of open environments, with hypsodonty possibly an adaptation to chewing tough, fibrous grass. Elasmotherium dental wear is similar to that of the grazing white rhinoceros, and both animals' heads have a downward orientation, indicating a similar lifestyle and an ability to reach only low-lying plants. In fact, the head of Elasmotherium had the most obtuse angle of any rhinoceros and could reach only the lowest levels, and therefore the animal must have grazed habitually. Elasmotherium also displays euhypsodonty (evergrowing teeth), which is typically seen in rodents, and its dental physiology could have been influenced by pulling up food from moist, grainy soil. Therefore, they may have inhabited both mammoth steppeland and riparian riversides, similar to contemporary mammoths.
Movement Elasmotherium had running limbs similar to those of the white rhinoceros, which runs at with a top speed of . However, Elasmotherium had double the weight, about , and consequently had a more restricted gait and mobility, likely achieving much slower speeds. Elephants, weighing , cannot exceed a walking speed of . Extinction Elasmotherium was previously thought to have gone extinct around 200,000 years ago as part of normal background extinction, but E. sibiricum skull fragments from the Pavlodar Region, Kazakhstan, show its persistence on the Western Siberian Plain until about 39,000–35,000 years ago. Isolated remains dating to 50,000 years ago are known from the Siberian Smelovskaya and Batpak Caves, likely dragged there by a predator. This timing is roughly coincident with the Late Pleistocene extinctions, during which many mammal species with body weights greater than died out. It coincided with a shift to a cooler climate, which resulted in the replacement of grasses and herbs by lichens and mosses, and with the migration of modern humans into the area.
Biology and health sciences
Perissodactyla
Animals
504290
https://en.wikipedia.org/wiki/Terrace%20%28earthworks%29
Terrace (earthworks)
A terrace in agriculture is a flat surface that has been cut into hills or mountains to provide areas for the cultivation of crops, as a method of more effective farming. Terrace agriculture or cultivation is when these platforms are created successively down the terrain in a pattern that resembles the steps of a staircase. As a type of landscaping, it is called terracing. Terraced fields decrease both erosion and surface runoff, and may be used to support growing crops that require irrigation, such as rice. The Rice Terraces of the Philippine Cordilleras have been designated as a UNESCO World Heritage Site because of the significance of this technique. Uses Terraced paddy fields are used widely in rice, wheat and barley farming in east, south, southwest, and southeast Asia, as well as the Mediterranean Basin, Africa, and South America. Drier-climate terrace farming is common throughout the Mediterranean Basin, where terraces are used for vineyards, olive trees, cork oak, and other crops. Ancient history The Yemen Highlands are known for their terrace systems, which were constructed at the beginning of the Bronze Age in the 3rd millennium BC. Terracing is also used for sloping terrain; the Hanging Gardens of Babylon may have been built on an artificial mountain with stepped terraces, such as those on a ziggurat. At the seaside Villa of the Papyri in Herculaneum, the villa gardens of Julius Caesar's father-in-law were designed in terraces to give pleasant and varied views of the Bay of Naples. Intensive terrace farming is believed to have been practiced before the early 15th century AD in West Africa. Terraces were used by many groups, notably the Mafa, Ngas, Gwoza, and the Dogon. Recent history It was long held that steep mountain landscapes are not conducive to, or do not even permit, agricultural mechanization.
In the 1970s in the European Alps, pasture farms began mechanizing the management of alpine pastures and the harvesting of forage grasses through the use of single-axle two-wheel tractors (2WTs) and very low center-of-gravity articulated-steering four-wheel tractors. Their designs by various European manufacturers were initially quite simple but effective, allowing them to cross slopes approaching 20%. In the 2000s, new designs of wheels and tires, tracks, etc., and the incorporation of electronics for better and safer control, allowed these machines to operate on slopes greater than 20% with various implements such as reaper-harvesters, rakes, balers, and transport trailers. In Asian sub-tropical countries, a similar process has begun with the introduction of smaller, lower-tech and much lower-priced 2WTs in the 4-9 horsepower range that can be safely operated on the small, narrow terraces, and are light enough to be lifted and lowered from one terrace to the next. What is different from the Alpine use is that these 2WTs are being used for tillage and crop establishment of maize, wheat, and potato crops, and with their small 60-70 cm-wide rotovators and special cage wheels are puddling the terraces for transplanted and broadcast rice. Farmers are also using the engines as stationary power sources for powering water pumps and threshers. Even more recently, farmers have been experimenting with small reaper-harvester attachments. In Nepal, the low cost of these mostly Chinese-made machines and the productivity gains they bring have meant that this scale-appropriate machinery is spreading across Nepal's Himalaya Mountains, and likely into the other countries of the Himalaya and Hindu Kush. In specific areas South America In the South American Andes, farmers have used terraces, known as andenes, for over a thousand years to farm potatoes, maize, and other native crops.
Terraced farming was developed by the Wari culture and other peoples of the south-central Andes before 1000 AD, centuries before the Inca, who later adopted the technique. The terraces were built to make the most efficient use of shallow soil and to enable irrigation of crops by allowing runoff to occur through the outlet. The Inca people built on these, developing a system of canals, aqueducts, and puquios to direct water through dry land and increase fertility levels and growth. These terraced farms are found wherever mountain villages have existed in the Andes. They provided the food necessary to support the populations of great Inca cities and religious centres such as Machu Picchu. Myanmar In mountainous areas of Myanmar, terrace farming is known locally as staircase or ladder farming (in Burmese: ‌လှေခါးထစ်‌တောင်ယာ), and the agricultural technique itself is known as လှေခါးထစ်စိုက်ပျိုးနည်း. Japan In Japan, some of the 100 Selected Terraced Rice Fields (in Japanese: 日本の棚田百選一覧), from Iwate in the north to Kagoshima in the south, are slowly disappearing, but volunteers are helping the farmers both to maintain their traditional methods and for sightseeing purposes. Canary Islands Terraced fields are common on islands with steep slopes. The Canary Islands present a complex system of terraces covering the landscape from the coastal irrigated plantations to the dry fields in the highlands. These terraces, which are named cadenas (chains), are built with stone walls of skillful design, which include attached stairs and channels. England In Old English, a terrace was also called a "lynch" (lynchet). An example of an ancient Lynch Mill is in Lyme Regis. The water is directed from a river by a duct along a terrace. This set-up was used in steep hilly areas in the UK.
Israel Ancient terraces are a common feature in the Jerusalem Mountains, often found in conjunction with ancient rock-cut agricultural structures including quarries, winepresses, olive oil presses, water holes, lime kilns, roads, and agricultural watchtowers. According to Zvi Ron's estimation, these terraces encompass approximately 56% of the open grounds in the area. Despite their prevalence, there is a lack of consensus among scholars regarding their construction date. Various theories have been proposed, with Zvi Ron suggesting that their origins date back to ancient times, Finkelstein proposing the Middle Bronze Age, and Feig, Stager, and Harel suggesting the Iron Age. Archaeologists Gibson and Edelstein conducted research on terrace systems in the Rephaim valley, proposing that the ones in Khirbet er-Ras were built during the Iron Age II, whereas those in Ein Yael were linked to the Second Temple and Roman periods. Seligman suggested that while some terraces were established in ancient times, the majority of them are more likely to have originated during the Roman and Byzantine periods. A 2014 research study on terraces near Ramat Rachel, using Optically Stimulated Luminescence (OSL), yielded dates ranging from the Hellenistic period to Mamluk and Ottoman times. The majority of the samples fell within the latter periods. However, the study's ability to precisely determine the original construction date remains uncertain, as the results could also reflect subsequent agricultural modifications that affected exposure to sunlight.
Technology
Buildings and infrastructure
null
849619
https://en.wikipedia.org/wiki/Mastiff
Mastiff
A mastiff is a large and powerful type of dog. Mastiffs are among the largest dogs, and typically have a short coat, a long low-set tail and large feet; the skull is large and bulky, the muzzle broad and short (brachycephalic) and the ears drooping and pendant-shaped. European and Asian records dating back 3,000 years show dogs of the mastiff type. Mastiffs have historically been guard dogs, protecting homes and property, although throughout history they have been used as hunting dogs, war dogs and for blood sports, such as fighting each other and other animals, including bulls, bears and even lions. History Historical and archaeological evidence suggests that mastiffs have long been distinct in both form and function from the similarly large livestock guardian dogs from which they were most likely developed; they also form separate genetic populations. The Fédération Cynologique Internationale and some kennel clubs group the two types together as molossoid dogs; some modern livestock guardian breeds, such as the Pyrenean Mastiff, the Spanish Mastiff and the Tibetan Mastiff, and an extinct draught dog called the Belgian Mastiff, have the word "mastiff" in their name, but are not considered true mastiffs. Many older English sources refer to mastiffs as bandogs or bandogges, although technically the term "bandog" meant a dog that was tethered by a chain (or "bande") that would be released at night; the terms "mastiff" and "bandog" were often used interchangeably. One of the most famous "bandog" breeding programs in England led to the establishment of a recognized breed known today as the Bullmastiff.
In the twentieth century, the term "bandog" was revived in the United States to describe large mastiff-type fighting dogs crossed with bulldogs. List of mastiff breeds Extant breeds Extinct breeds
Biology and health sciences
Dogs
null
849815
https://en.wikipedia.org/wiki/Kepler%20space%20telescope
Kepler space telescope
The Kepler space telescope is a defunct space telescope launched by NASA in 2009 to discover Earth-sized planets orbiting other stars. Named after astronomer Johannes Kepler, the spacecraft was launched into an Earth-trailing heliocentric orbit. The principal investigator was William J. Borucki. After nine and a half years of operation, the telescope's reaction control system fuel was depleted, and NASA announced its retirement on October 30, 2018. Designed to survey a portion of Earth's region of the Milky Way to discover Earth-size exoplanets in or near habitable zones and to estimate how many of the billions of stars in the Milky Way have such planets, Kepler's sole scientific instrument is a photometer that continually monitored the brightness of approximately 150,000 main sequence stars in a fixed field of view. These data were transmitted to Earth, then analyzed to detect periodic dimming caused by exoplanets that cross in front of their host star. Only planets whose orbits are seen edge-on from Earth could be detected. Kepler observed 530,506 stars, and had detected 2,778 confirmed planets as of June 16, 2023. History Pre-launch development The Kepler space telescope was part of NASA's Discovery Program of relatively low-cost science missions. The telescope's construction and initial operation were managed by NASA's Jet Propulsion Laboratory, with Ball Aerospace responsible for developing the Kepler flight system. In January 2006, the project's launch was delayed eight months because of budget cuts and consolidation at NASA. It was delayed again by four months in March 2006 due to fiscal problems. During this time, the high-gain antenna was changed from a design using a gimbal to one fixed to the frame of the spacecraft to reduce cost and complexity, at the cost of one observation day per month. Post launch The Ames Research Center was responsible for the ground system development, mission operations since December 2009, and scientific data analysis. 
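The transit detection described above admits a simple back-of-the-envelope check. The sketch below is illustrative only (the constants and function name are mine, not mission code): the fractional dimming a transiting planet produces is roughly the square of the planet-to-star radius ratio.

```python
# Transit depth: the fraction of starlight blocked when a planet
# crosses its host star's disc is approximately (R_planet / R_star)^2,
# ignoring limb darkening and grazing geometries.
R_SUN_KM = 695_700      # nominal solar radius
R_EARTH_KM = 6_371      # mean Earth radius
R_JUPITER_KM = 69_911   # mean Jupiter radius

def transit_depth_ppm(r_planet_km, r_star_km=R_SUN_KM):
    """Approximate flux drop in parts per million."""
    return (r_planet_km / r_star_km) ** 2 * 1e6

# An Earth analogue transiting a Sun-like star dims it by only ~84 ppm,
# which is why Kepler's photometry had to reach the tens-of-ppm level;
# a Jupiter analogue produces a much easier ~1% dip.
print(round(transit_depth_ppm(R_EARTH_KM)))    # 84
print(round(transit_depth_ppm(R_JUPITER_KM)))  # ~10000 (about 1%)
```

The edge-on requirement mentioned above follows from the same picture: only orbits aligned with the line of sight ever produce this dimming at all.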
The initial planned lifetime was three and a half years, but greater-than-expected noise in the data, from both the stars and the spacecraft, meant additional time was needed to fulfill all mission goals. Initially, in 2012, the mission was expected to be extended until 2016, but on July 14, 2012, one of the four reaction wheels used for pointing the spacecraft stopped turning, and completing the mission would only be possible if the other three all remained reliable. Then, on May 11, 2013, a second one failed, disabling the collection of science data and threatening the continuation of the mission. On August 15, 2013, NASA announced that they had given up trying to fix the two failed reaction wheels. This meant the current mission needed to be modified, but it did not necessarily mean the end of planet hunting. NASA had asked the space science community to propose alternative mission plans "potentially including an exoplanet search, using the remaining two good reaction wheels and thrusters". On November 18, 2013, the K2 "Second Light" proposal was reported. This would include utilizing the disabled Kepler in a way that could detect habitable planets around smaller, dimmer red dwarfs. On May 16, 2014, NASA announced the approval of the K2 extension. By January 2015, Kepler and its follow-up observations had found 1,013 confirmed exoplanets in about 440 star systems, along with a further 3,199 unconfirmed planet candidates. Four planets have been confirmed through Kepler's K2 mission. In November 2013, astronomers estimated, based on Kepler space mission data, that there could be as many as 40 billion rocky Earth-size exoplanets orbiting in the habitable zones of Sun-like stars and red dwarfs within the Milky Way. It is estimated that 11 billion of these planets may be orbiting Sun-like stars. The nearest such planet may be away, according to the scientists. On January 6, 2015, NASA announced the 1,000th confirmed exoplanet discovered by the Kepler space telescope. 
Four of the newly confirmed exoplanets were found to orbit within habitable zones of their related stars: three of the four, Kepler-438b, Kepler-442b and Kepler-452b, are almost Earth-size and likely rocky; the fourth, Kepler-440b, is a super-Earth. On May 10, 2016, NASA verified 1,284 new exoplanets found by Kepler, the single largest finding of planets to date. Kepler data have also helped scientists observe and understand supernovae; measurements were collected every half-hour, so the light curves were especially useful for studying these types of astronomical events. On October 30, 2018, after the spacecraft ran out of fuel, NASA announced that the telescope would be retired. The telescope was shut down the same day, bringing an end to its nine-year service. Kepler observed 530,506 stars and discovered 2,662 exoplanets over its lifetime. A newer NASA mission, TESS, launched in 2018, is continuing the search for exoplanets. Spacecraft design The telescope contains a Schmidt camera with a 0.95-meter aperture front corrector plate (lens) feeding a 1.4-meter primary mirror—at the time of its launch, the largest mirror on any telescope outside Earth orbit, though the Herschel Space Observatory took this title a few months later. Its telescope has a 115 deg2 (about 12-degree diameter) field of view (FoV), roughly equivalent to the size of one's fist held at arm's length. Of this, 105 deg2 is of science quality, with less than 11% vignetting. The photometer has a soft focus to provide excellent photometry, rather than sharp images. The mission goal was a combined differential photometric precision (CDPP) of 20 ppm for a m(V)=12 Sun-like star for a 6.5-hour integration, though the observations fell short of this objective (see mission status). Camera The focal plane of the spacecraft's camera is made out of forty-two CCDs at 2200×1024 pixels each, possessing a total resolution of 94.6 megapixels, which at the time made it the largest camera system launched into space.
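The 94.6-megapixel figure follows directly from the CCD layout described above; a quick arithmetic check (illustrative only, variable names are mine):

```python
# Total pixel count of Kepler's focal plane, from the figures quoted above.
n_ccds = 42
pixels_per_ccd = 2200 * 1024

total_pixels = n_ccds * pixels_per_ccd
print(total_pixels)                   # 94617600
print(round(total_pixels / 1e6, 1))   # 94.6 megapixels
```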
The array was cooled by heat pipes connected to an external radiator. The CCDs were read out every 6.5 seconds (to limit saturation) and co-added on board for 58.89 seconds for short cadence targets, and 1765.5 seconds (29.4 minutes) for long cadence targets. Due to the larger bandwidth requirements for the former, these were limited in number to 512 compared to 170,000 for long cadence. However, even though at launch Kepler had the highest data rate of any NASA mission, the 29-minute sums of all 95 million pixels constituted more data than could be stored and sent back to Earth. Therefore, the science team pre-selected the relevant pixels associated with each star of interest, amounting to about 6 percent of the pixels (5.4 megapixels). The data from these pixels was then requantized, compressed and stored, along with other auxiliary data, in the on-board 16 gigabyte solid-state recorder. Data that was stored and downlinked included science stars, p-mode stars, smear, black level, background and full field-of-view images. Primary mirror The Kepler primary mirror is 1.4 meters in diameter. Manufactured by glass maker Corning using ultra-low expansion (ULE) glass, the mirror is specifically designed to have a mass only 14% that of a solid mirror of the same size. To produce a space telescope system with sufficient sensitivity to detect relatively small planets, as they pass in front of stars, a very high reflectance coating on the primary mirror was required. Using ion assisted evaporation, Surface Optics Corp. applied a protective nine-layer silver coating to enhance reflection and a dielectric interference coating to minimize the formation of color centers and atmospheric moisture absorption. Photometric performance In terms of photometric performance, Kepler worked well, much better than any Earth-bound telescope, but short of design goals.
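The cadence and pixel-selection figures above are internally consistent, as a short back-of-the-envelope check shows (illustrative arithmetic only; variable names are mine):

```python
short_cadence_s = 58.89    # short-cadence co-add interval, from the text
long_cadence_s = 1765.5    # long-cadence co-add interval, from the text

# One long-cadence sum spans about 30 short cadences, i.e. ~29.4 minutes.
print(round(long_cadence_s / short_cadence_s))   # 30
print(round(long_cadence_s / 60, 1))             # 29.4

# Storing ~5.4 of the 94.6 million pixels is indeed "about 6 percent".
stored_fraction = 5.4e6 / 94.6e6
print(round(stored_fraction * 100, 1))           # 5.7
```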
The objective was a combined differential photometric precision (CDPP) of 20 parts per million (PPM) on a magnitude 12 star for a 6.5-hour integration. This estimate was developed allowing 10 ppm for stellar variability, roughly the value for the Sun. The obtained accuracy for this observation has a wide range, depending on the star and position on the focal plane, with a median of 29 ppm. Most of the additional noise appears to be due to a larger-than-expected variability in the stars themselves (19.5 ppm as opposed to the assumed 10.0 ppm), with the rest due to instrumental noise sources slightly larger than predicted. Because the decrease in brightness from an Earth-size planet transiting a Sun-like star is so small, only 80 ppm, the increased noise means each individual transit is only a 2.7 σ event, instead of the intended 4 σ. This, in turn, means more transits must be observed to be sure of a detection. Scientific estimates indicated that a mission lasting 7 to 8 years, as opposed to the originally planned 3.5 years, would be needed to find all transiting Earth-sized planets. On April 4, 2012, the Kepler mission was approved for extension through the fiscal year 2016, but this also depended on all remaining reaction wheels staying healthy, which turned out not to be the case (see Reaction wheel issues below). Orbit and orientation Kepler orbits the Sun, which avoids Earth occultations, stray light, and gravitational perturbations and torques inherent in an Earth orbit. NASA has characterized Kepler's orbit as "Earth-trailing". With an orbital period of 372.5 days, Kepler is slowly falling farther behind Earth (about 16 million miles per annum). This means that after about 26 years Kepler will reach the other side of the Sun and will get back to the neighborhood of the Earth after 51 years.
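The per-transit significance and the revised mission length quoted above can be reproduced with a few lines of arithmetic (a sketch; the noise model is simplified to a single CDPP value per transit):

```python
import math

transit_depth_ppm = 80.0   # Earth-size planet crossing a Sun-like star
design_cdpp_ppm = 20.0     # design-goal noise per 6.5-hour integration
actual_cdpp_ppm = 29.0     # median achieved noise

print(round(transit_depth_ppm / design_cdpp_ppm, 1))  # 4.0 sigma (design)
print(round(transit_depth_ppm / actual_cdpp_ppm, 1))  # 2.8 sigma (achieved)

# Combined significance grows as sqrt(N) over N transits. Matching the
# combined significance of 3 transits at the design precision requires:
target_sigma = math.sqrt(3) * 4.0
per_transit_sigma = transit_depth_ppm / actual_cdpp_ppm
n_needed = math.ceil((target_sigma / per_transit_sigma) ** 2)
print(n_needed)   # 7 transits -- roughly 7 years for an Earth-like orbit
```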
Until 2013 the photometer pointed to a field in the northern constellations of Cygnus, Lyra and Draco, which is well out of the ecliptic plane, so that sunlight never enters the photometer as the spacecraft orbits. This is also the direction of the Solar System's motion around the center of the galaxy. Thus, the stars which Kepler observed are roughly the same distance from the Galactic Center as the Solar System, and also close to the galactic plane. This fact is important if position in the galaxy is related to habitability, as suggested by the Rare Earth hypothesis. Orientation is three-axis stabilized by sensing rotations using fine-guidance sensors located on the instrument focal plane (instead of rate-sensing gyroscopes, as used on Hubble, for example), and using reaction wheels and hydrazine thrusters to control the orientation. Operations Kepler was operated out of Boulder, Colorado, by the Laboratory for Atmospheric and Space Physics (LASP) under contract to Ball Aerospace & Technologies. The spacecraft's solar array was rotated to face the Sun at the solstices and equinoxes, so as to optimize the amount of sunlight falling on the solar array and to keep the heat radiator pointing towards deep space. Together, LASP and Ball Aerospace controlled the spacecraft from a mission operations center located on the research campus of the University of Colorado. LASP performs essential mission planning and the initial collection and distribution of the science data. The mission's initial life-cycle cost was estimated at US$600 million, including funding for 3.5 years of operation. In 2012, NASA announced that the Kepler mission would be funded until 2016 at a cost of about $20 million per year. Communications NASA contacted the spacecraft using the X band communication link twice a week for command and status updates. Scientific data are downloaded once a month using the Ka band link at a maximum data transfer rate of approximately 550 kB/s.
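As a rough consistency check of the downlink figures (using the ~12 GB monthly data volume quoted elsewhere in the article; a simplified estimate that ignores protocol overhead):

```python
# Time to downlink one month of science data over the Ka-band link.
data_bytes = 12e9            # ~12 GB downlinked per month
rate_bytes_per_s = 550e3     # ~550 kB/s maximum Ka-band rate

hours = data_bytes / rate_bytes_per_s / 3600
print(round(hours, 1))       # ~6.1 hours of pure transmission time
```

This fits within the roughly one-day communications interruption described below, which also includes reorienting the spacecraft.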
The high gain antenna is not steerable, so data collection is interrupted for a day to reorient the whole spacecraft and the high gain antenna for communications to Earth. The Kepler space telescope conducted its own partial analysis on board and only transmitted scientific data deemed necessary to the mission in order to conserve bandwidth. Data management Science data telemetry collected during mission operations at LASP is sent for processing to the Kepler Data Management Center (DMC), which is located at the Space Telescope Science Institute on the campus of Johns Hopkins University in Baltimore, Maryland. The science data telemetry is decoded and processed into uncalibrated FITS-format science data products by the DMC, which are then passed along to the Science Operations Center (SOC) at NASA Ames Research Center, for calibration and final processing. The SOC at NASA Ames Research Center (ARC) develops and operates the tools needed to process scientific data for use by the Kepler Science Office (SO). Accordingly, the SOC develops the pipeline data processing software based on scientific algorithms developed jointly by the SO and SOC. During operations, the SOC:
Receives uncalibrated pixel data from the DMC
Applies the analysis algorithms to produce calibrated pixels and light curves for each star
Performs transit searches for detection of planets (threshold-crossing events, or TCEs)
Performs data validation of candidate planets by evaluating various data products for consistency as a way to eliminate false positive detections
The SOC also evaluates the photometric performance on an ongoing basis and provides the performance metrics to the SO and Mission Management Office. Finally, the SOC develops and maintains the project's scientific databases, including catalogs and processed data.
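The SOC stages described above can be caricatured as a simple pipeline; the following toy sketch is purely illustrative (the function names, flux threshold, and data shapes are invented for the example and bear no relation to NASA's actual pipeline code):

```python
def calibrate(pixels):
    # Toy stand-in for per-pixel calibration.
    return pixels

def extract_light_curves(calibrated):
    # Toy stand-in: assume one relative-flux series per star.
    return calibrated

def transit_search(light_curves, threshold=0.999):
    # Flag stars whose flux dips below the threshold: a crude stand-in
    # for detecting threshold-crossing events (TCEs).
    return {star: flux for star, flux in light_curves.items()
            if min(flux) < threshold}

def soc_pipeline(uncalibrated):
    return transit_search(extract_light_curves(calibrate(uncalibrated)))

# Star "a" shows a small dip; star "b" does not.
tces = soc_pipeline({"a": [1.0, 0.9985, 1.0], "b": [1.0, 1.0, 1.0]})
print(sorted(tces))   # ['a']
```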
The SOC finally returns calibrated data products and scientific results back to the DMC for long-term archiving, and distribution to astronomers around the world through the Multimission Archive at STScI (MAST). Reaction wheel failures On July 14, 2012, one of the four reaction wheels used for fine pointing of the spacecraft failed. While Kepler requires only three reaction wheels to accurately aim the telescope, another failure would leave the spacecraft unable to aim at its original field. After showing some problems in January 2013, a second reaction wheel failed on May 11, 2013, ending Kepler's primary mission. The spacecraft was put into safe mode, then from June to August 2013 a series of engineering tests were done to try to recover either failed wheel. By August 15, 2013, it was decided that the wheels were unrecoverable, and an engineering report was ordered to assess the spacecraft's remaining capabilities. This effort ultimately led to the "K2" follow-on mission observing different fields near the ecliptic. Operational timeline In January 2006, the project's launch was delayed eight months because of budget cuts and consolidation at NASA. It was delayed again by four months in March 2006 due to fiscal problems. At this time, the high-gain antenna was changed from a gimballed design to one fixed to the frame of the spacecraft to reduce cost and complexity, at the cost of one observation day per month. The Kepler observatory was launched on March 7, 2009, at 03:49:57 UTC aboard a Delta II rocket from Cape Canaveral Air Force Station, Florida. The launch was a success and all three stages were completed by 04:55 UTC. The cover of the telescope was jettisoned on April 7, 2009, and the first light images were taken on the next day. On April 20, 2009, it was announced that the Kepler science team had concluded that further refinement of the focus would dramatically increase the scientific return. 
On April 23, 2009, it was announced that the focus had been successfully optimized by moving the primary mirror 40 micrometers (1.6 thousandths of an inch) towards the focal plane and tilting the primary mirror 0.0072 degree. On May 13, 2009, at 00:01 UTC, Kepler successfully completed its commissioning phase and began its search for planets around other stars. On June 19, 2009, the spacecraft successfully sent its first science data to Earth. It was discovered that Kepler had entered safe mode on June 15. A second safe mode event occurred on July 2. In both cases the event was triggered by a processor reset. The spacecraft resumed normal operation on July 3 and the science data that had been collected since June 19 was downlinked that day. On October 14, 2009, the cause of these safing events was determined to be a low voltage power supply that provides power to the RAD750 processor. On January 12, 2010, one portion of the focal plane transmitted anomalous data, suggesting a problem with the focal plane MOD-3 module, covering two of Kepler's 42 CCDs. The module was subsequently described as "failed", but the coverage still exceeded the science goals. Kepler downlinked roughly twelve gigabytes of data about once per month. Field of view Kepler has a fixed field of view (FOV) against the sky. The diagram to the right shows the celestial coordinates and where the detector fields are located, along with the locations of a few bright stars with celestial north at the top left corner. The mission website has a calculator that will determine if a given object falls in the FOV, and if so, where it will appear in the photo detector output data stream. Data on exoplanet candidates is submitted to the Kepler Follow-up Program, or KFOP, to conduct follow-up observations. Kepler's field of view covers 115 square degrees, around 0.25 percent of the sky, or "about two scoops of the Big Dipper". Thus, it would require around 400 Kepler-like telescopes to cover the whole sky.
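The sky-coverage figures quoted above can be checked against the total solid angle of the celestial sphere, about 41,253 square degrees; the exact quotients land close to the rounded "0.25 percent" and "around 400" figures in the text:

```python
full_sky_deg2 = 41253      # total solid angle of the celestial sphere
kepler_fov_deg2 = 115

fraction = kepler_fov_deg2 / full_sky_deg2
print(round(fraction * 100, 2))                 # 0.28 (percent of the sky)
print(round(full_sky_deg2 / kepler_fov_deg2))   # 359 such fields tile the sky
```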
The Kepler field contains portions of the constellations Cygnus, Lyra, and Draco. The nearest star system in Kepler's field of view is the trinary star system Gliese 1245, 15 light years from the Sun. The brown dwarf WISE J2000+3629, 22.8 ± 1 light years from the Sun, is also in the field of view, but is invisible to Kepler because it emits light primarily at infrared wavelengths. Objectives and methods The scientific objective of the Kepler space telescope was to explore the structure and diversity of planetary systems. The spacecraft observed a large sample of stars to achieve several key goals:
To determine how many Earth-size and larger planets there are in or near the habitable zone (often called "Goldilocks planets") of a wide variety of spectral types of stars.
To determine the range of size and shape of the orbits of these planets.
To estimate how many planets there are in multiple-star systems.
To determine the range of orbit size, brightness, size, mass and density of short-period giant planets.
To identify additional members of each discovered planetary system using other techniques.
To determine the properties of those stars that harbor planetary systems.
Most of the exoplanets previously detected by other projects were giant planets, mostly the size of Jupiter and bigger. Kepler was designed to look for planets 30 to 600 times less massive, closer to the order of Earth's mass (Jupiter is 318 times more massive than Earth). The method used, the transit method, involves observing repeated transits of planets in front of their stars, which cause a slight reduction in the star's apparent magnitude, on the order of 0.01% for an Earth-size planet.
The degree of this reduction in brightness can be used to deduce the diameter of the planet, and the interval between transits can be used to deduce the planet's orbital period, from which estimates of its orbital semi-major axis (using Kepler's laws) and its temperature (using models of stellar radiation) can be calculated. The probability of a random planetary orbit being along the line-of-sight to a star is the diameter of the star divided by the diameter of the orbit. For an Earth-size planet at 1 AU transiting a Sun-like star the probability is 0.47%, or about 1 in 210. For a planet like Venus orbiting a Sun-like star the probability is slightly higher, at 0.65%. If the host star has multiple planets, the probability of additional detections is higher than the probability of initial detection, assuming planets in a given system tend to orbit in similar planes—an assumption consistent with current models of planetary system formation. For instance, if a Kepler-like mission conducted by aliens observed Earth transiting the Sun, there is a 7% chance that it would also see Venus transiting. Kepler's 115 deg2 field of view gives it a much higher probability of detecting Earth-sized planets than the Hubble Space Telescope, which has a field of view of only 10 sq. arc-minutes. Moreover, Kepler is dedicated to detecting planetary transits, while the Hubble Space Telescope is used to address a wide range of scientific questions, and rarely looks continuously at just one starfield. Of the approximately half-million stars in Kepler's field of view, around 150,000 stars were selected for observation. More than 90,000 are G-type stars on, or near, the main sequence. Thus, Kepler was designed to be sensitive to wavelengths of 400–865 nm, where the brightness of those stars peaks. Most of the stars observed by Kepler have apparent visual magnitude between 14 and 16, but the brightest observed stars have apparent visual magnitude of 8 or lower.
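Both the geometric transit probabilities and the period-to-orbit conversion above follow from simple formulas; a sketch (constants rounded, and the stellar radius taken as exactly one solar radius, so the Venus figure comes out at 0.64% rather than the text's 0.65%):

```python
R_SUN_KM = 696_000
AU_KM = 149.6e6

def transit_probability(a_au, r_star_km=R_SUN_KM):
    # Chance that a randomly inclined orbit transits: ~ R_star / a.
    return r_star_km / (a_au * AU_KM)

def semi_major_axis_au(period_days, m_star_solar=1.0):
    # Kepler's third law, with the star's mass in solar units.
    return (m_star_solar * (period_days / 365.25) ** 2) ** (1 / 3)

print(round(transit_probability(1.0) * 100, 2))    # 0.47 -- Earth at 1 AU
print(round(transit_probability(0.723) * 100, 2))  # 0.64 -- Venus's orbit
print(round(semi_major_axis_au(365.25), 3))        # 1.0 AU for a 1-year period
```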
Most of the planet candidates were initially not expected to be confirmed due to being too faint for follow-up observations. All the selected stars are observed simultaneously, with the spacecraft measuring variations in their brightness every thirty minutes. This provides a better chance of seeing a transit. The mission was designed to maximize the probability of detecting planets orbiting other stars. Because Kepler must observe at least three transits to confirm that the dimming of a star was caused by a transiting planet, and because larger planets give a signal that is easier to check, scientists expected the first reported results to be larger Jupiter-size planets in tight orbits. The first of these were reported after only a few months of operation. Smaller planets, and planets farther from their star, would take longer, and discovering planets comparable to Earth was expected to take three years or longer. Data collected by Kepler is also being used for studying variable stars of various types and performing asteroseismology, particularly on stars showing solar-like oscillations. Planet finding process Finding planet candidates Once Kepler has collected and sent back the data, raw light curves are constructed. Brightness values are then adjusted to take the brightness variations due to the rotation of the spacecraft into account. The next step is processing (folding) light curves into a more easily observable form and letting software select signals that seem potentially transit-like. At this point, any signal that shows potential transit-like features is called a threshold crossing event. These signals are individually inspected in two inspection rounds, with the first round taking only a few seconds per target. This inspection eliminates erroneously selected non-signals, signals caused by instrumental noise and obvious eclipsing binaries.
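The "folding" step described above can be sketched in a few lines of NumPy (illustrative only; the synthetic signal and thresholds are invented, and the real pipeline is far more sophisticated):

```python
import numpy as np

def fold(times, period):
    # Map each timestamp to an orbital phase in [0, 1).
    return np.mod(times, period) / period

# Synthetic light curve: 90 days at ~30-minute cadence, with a 100 ppm
# dip lasting 0.2 days every 10 days (a toy transit signal).
times = np.arange(0.0, 90.0, 0.02)
flux = np.ones_like(times)
flux[np.mod(times, 10.0) < 0.2] -= 1e-4

# Folding on the correct 10-day period stacks all the transits together,
# so the in-transit mean sits measurably below the overall mean.
phase = fold(times, 10.0)
print(flux[phase < 0.02].mean() < flux.mean())   # True
```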
Threshold crossing events that pass these tests are called Kepler Objects of Interest (KOI), receive a KOI designation and are archived. KOIs are inspected more thoroughly in a process called dispositioning. Those which pass the dispositioning are called Kepler planet candidates. The KOI archive is not static, meaning that a Kepler candidate could end up in the false-positive list upon further inspection. In turn, KOIs that were mistakenly classified as false positives could end up back in the candidates list. Not all the planet candidates go through this process. Circumbinary planets do not show strictly periodic transits, and have to be inspected through other methods. In addition, third-party researchers use different data-processing methods, or even search for planet candidates in the unprocessed light curve data. As a consequence, those planets may lack a KOI designation. Confirming planet candidates Once suitable candidates have been found from Kepler data, it is necessary to rule out false positives with follow-up tests. Usually, Kepler candidates are imaged individually with more-advanced ground-based telescopes in order to resolve any background objects which could contaminate the brightness signature of the transit signal. Another method to rule out planet candidates is astrometry, for which Kepler can collect good data even though doing so was not a design goal. While Kepler cannot detect planetary-mass objects with this method, it can be used to determine if the transit was caused by a stellar-mass object. Through other detection methods There are a few different exoplanet detection methods which help to rule out false positives by giving further proof that a candidate is a real planet. One of the methods, called Doppler spectroscopy, requires follow-up observations from ground-based telescopes. This method works well if the planet is massive or is located around a relatively bright star.
While current spectrographs are insufficient for confirming planetary candidates with small masses around relatively dim stars, this method can be used to discover additional massive non-transiting planet candidates around targeted stars. In multiplanetary systems, planets can often be confirmed through transit timing variation by looking at the time between successive transits, which may vary if planets are gravitationally perturbed by each other. This helps to confirm relatively low-mass planets even when the star is relatively distant. Transit timing variations indicate that two or more planets belong to the same planetary system. In some cases, a non-transiting planet has even been discovered in this way. Circumbinary planets show much larger transit timing variations between transits than planets gravitationally disturbed by other planets. Their transit duration times also vary significantly. Transit timing and duration variations for circumbinary planets are caused by the orbital motion of the host stars, rather than by other planets. In addition, if the planet is massive enough, it can cause slight variations of the host stars' orbital periods. Although circumbinary planets are harder to find due to their non-periodic transits, they are much easier to confirm, as the timing patterns of their transits cannot be mimicked by an eclipsing binary or a background star system. In addition to transits, planets orbiting around their stars undergo reflected-light variations—like the Moon, they go through phases from full to new and back again. Because Kepler cannot resolve the planet from the star, it sees only the combined light, and the brightness of the host star seems to change over each orbit in a periodic manner.
Although the effect is small—the photometric precision required to see a close-in giant planet is about the same as to detect an Earth-sized planet in transit across a solar-type star—Jupiter-sized planets with an orbital period of a few days or less are detectable by sensitive space telescopes such as Kepler. In the long run, this method may help find more planets than the transit method, because the reflected light variation with orbital phase is largely independent of the planet's orbital inclination, and does not require the planet to pass in front of the disk of the star. In addition, the phase function of a giant planet is also a function of its thermal properties and atmosphere, if any. Therefore, the phase curve may constrain other planetary properties, such as the particle size distribution of the atmospheric particles. Kepler's photometric precision is often high enough to observe a star's brightness changes caused by Doppler beaming or a star's shape deformation by a companion. These can sometimes be used to rule out hot Jupiter candidates as false positives caused by a star or a brown dwarf when these effects are too noticeable. However, in some cases such effects are produced even by planetary-mass companions, such as TrES-2b. Through validation If a planet cannot be detected through at least one of the other detection methods, it can be confirmed by determining that the probability of a Kepler candidate being a real planet is significantly larger than that of all false-positive scenarios combined. One of the first methods was to see if other telescopes could see the transit as well. The first planet confirmed through this method was Kepler-22b, which was also observed with the Spitzer Space Telescope in addition to analyzing any other false-positive possibilities. Such confirmation is costly, as small planets can generally be detected only with space telescopes. In 2014, a new confirmation method called "validation by multiplicity" was announced.
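The claim above that a close-in giant's reflected-light signal rivals an Earth-size transit depth can be checked with an order-of-magnitude estimate (the geometric albedo of 0.5 and the 0.04 AU orbit are my illustrative assumptions, not figures from the text):

```python
# Reflected-light amplitude: delta F / F ~ A_g * (R_p / a)^2.
R_JUP_KM = 71_492
AU_KM = 149.6e6

albedo = 0.5                 # assumed geometric albedo
a_km = 0.04 * AU_KM          # hot Jupiter on a few-day orbit (assumed)

amplitude_ppm = albedo * (R_JUP_KM / a_km) ** 2 * 1e6
print(round(amplitude_ppm))  # ~71 ppm, comparable to an ~80 ppm Earth transit
```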
From the planets previously confirmed through various methods, it was found that planets in most planetary systems orbit in a relatively flat plane, similar to the planets found in the Solar System. This means that if a star has multiple planet candidates, it is very likely a real planetary system. Transit signals still need to meet several criteria which rule out false-positive scenarios. For instance, a signal has to have a considerable signal-to-noise ratio, at least three observed transits, a dynamically stable orbital configuration, and a transit-curve shape that a partially eclipsing binary could not mimic. In addition, its orbital period needs to be 1.6 days or longer to rule out common false positives caused by eclipsing binaries. The validation-by-multiplicity method is very efficient and allows hundreds of Kepler candidates to be confirmed in a relatively short amount of time. A new validation method using a tool called PASTIS has been developed. It makes it possible to confirm a planet even when only a single candidate transit event for the host star has been detected. A drawback of this tool is that it requires a relatively high signal-to-noise ratio from Kepler data, so it can mainly confirm only larger planets or planets around quiet and relatively bright stars. Currently, the analysis of Kepler candidates through this method is underway. PASTIS was first used successfully to validate the planet Kepler-420b. K2 Extension In April 2012, an independent panel of senior NASA scientists recommended that the Kepler mission be continued through 2016. According to the senior review, Kepler observations needed to continue until at least 2015 to achieve all the stated scientific goals. On November 14, 2012, NASA announced the completion of Kepler's primary mission, and the beginning of its extended mission, which ended in 2018 when it ran out of fuel.
Reaction wheel issues In July 2012, one of Kepler's four reaction wheels (wheel 2) failed. On May 11, 2013, a second wheel (wheel 4) failed, jeopardizing the continuation of the mission, as three wheels are necessary for its planet hunting. Kepler had not collected science data since May because it was not able to point with sufficient accuracy. On July 18 and 22 reaction wheels 4 and 2 were tested respectively; wheel 4 only rotated counter-clockwise but wheel 2 ran in both directions, albeit with significantly elevated friction levels. A further test of wheel 4 on July 25 managed to achieve bi-directional rotation. Both wheels, however, exhibited too much friction to be useful. On August 2, NASA put out a call for proposals to use the remaining capabilities of Kepler for other scientific missions. Starting on August 8, a full systems evaluation was conducted. It was determined that wheel 2 could not provide sufficient precision for scientific missions and the spacecraft was returned to a "rest" state to conserve fuel. Wheel 4 was previously ruled out because it exhibited higher friction levels than wheel 2 in previous tests. Sending astronauts to fix Kepler is not an option because it orbits the Sun and is millions of kilometers from Earth. On August 15, 2013, NASA announced that Kepler would not continue searching for planets using the transit method after attempts to resolve issues with two of the four reaction wheels failed. An engineering report was ordered to assess the spacecraft's capabilities, its two good reaction wheels and its thrusters. Concurrently, a scientific study was conducted to determine whether enough knowledge can be obtained from Kepler's limited scope to justify its $18 million per year cost. Possible ideas included searching for asteroids and comets, looking for evidence of supernovas, and finding huge exoplanets through gravitational microlensing. 
Another proposal was to modify the software on Kepler to compensate for the disabled reaction wheels. Instead of the stars being fixed and stable in Kepler's field of view, they would drift. The proposed software would track this drift and largely recover the mission goals despite the inability to hold the stars in a fixed view. Previously collected data continued to be analyzed. Second Light (K2) In November 2013, a new mission plan named K2 "Second Light" was presented for consideration. K2 would involve using Kepler's remaining capability, photometric precision of about 300 parts per million, compared with about 20 parts per million earlier, to collect data for the study of "supernova explosions, star formation and Solar-System bodies such as asteroids and comets, ... " and for finding and studying more exoplanets. In this proposed mission plan, Kepler would search a much larger area in the plane of Earth's orbit around the Sun. Celestial objects, including exoplanets, stars and others, detected by the K2 mission would be given designations with the EPIC prefix, standing for Ecliptic Plane Input Catalog. In early 2014, the spacecraft underwent successful testing for the K2 mission. From March to May 2014, data from a new field called Field 0 was collected as a testing run. On May 16, 2014, NASA announced the approval of extending the Kepler mission to the K2 mission. Kepler's photometric precision for the K2 mission was estimated to be 50 ppm on a magnitude 12 star for a 6.5-hour integration. In February 2014, photometric precision for the K2 mission using two-wheel, fine-point precision operations was measured as 44 ppm on magnitude 12 stars for a 6.5-hour integration. The analysis of these measurements by NASA suggests the K2 photometric precision approaches that of the Kepler archive of three-wheel, fine-point precision data. On May 29, 2014, campaign fields 0 to 13 were reported and described in detail.
Field 1 of the K2 mission is set towards the Leo-Virgo region of the sky, while Field 2 is towards the "head" area of Scorpius and includes two globular clusters, Messier 4 and Messier 80, and part of the Scorpius–Centaurus association, which is only about 11 million years old and probably has over 1,000 members. On December 18, 2014, NASA announced that the K2 mission had detected its first confirmed exoplanet, a super-Earth named HIP 116454 b. Its signature was found in a set of engineering data meant to prepare the spacecraft for the full K2 mission. Radial velocity follow-up observations were needed as only a single transit of the planet was detected. During a scheduled contact on April 7, 2016, Kepler was found to be operating in emergency mode, the lowest operational and most fuel-intensive mode. Mission operations declared a spacecraft emergency, which afforded them priority access to NASA's Deep Space Network. By the evening of April 8 the spacecraft had been upgraded to safe mode, and on April 10 it was placed into point-rest state, a stable mode which provides normal communication and the lowest fuel burn. At that time, the cause of the emergency was unknown, but it was not believed that Kepler's reaction wheels or a planned maneuver to support K2 Campaign 9 were responsible. Operators downloaded and analyzed engineering data from the spacecraft, with the priority of returning to normal science operations. Kepler was returned to science mode on April 22. The emergency caused the first half of Campaign 9 to be shortened by two weeks. In June 2016, NASA announced a K2 mission extension of three additional years, beyond the expected exhaustion of on-board fuel in 2018.
In August 2018, NASA roused the spacecraft from sleep mode, applied a modified configuration to deal with thruster problems that had degraded pointing performance, and began collecting scientific data for the 19th observation campaign, finding that the onboard fuel was not yet fully exhausted. On October 30, 2018, NASA announced that the spacecraft was out of fuel and its mission was officially ended.
Mission results
The Kepler space telescope was in active operation from 2009 through 2013, with the first main results announced on January 4, 2010. As expected, the initial discoveries were all short-period planets. As the mission continued, additional longer-period candidates were found. Over the course of the mission, Kepler discovered 5,011 exoplanet candidates and 2,662 confirmed exoplanets. As of August 2022, 2,056 exoplanet candidates remain to be confirmed and 2,711 are now confirmed exoplanets.
2009
NASA held a press conference to discuss early science results of the Kepler mission on August 6, 2009. At this press conference, it was revealed that Kepler had confirmed the existence of the previously known transiting exoplanet HAT-P-7b, and was functioning well enough to discover Earth-size planets. Because Kepler's detection of planets depends on seeing very small changes in brightness, stars that vary in brightness by themselves (variable stars) are not useful in this search. From the first few months of data, Kepler scientists determined that about 7,500 stars from the initial target list were such variable stars. These were dropped from the target list and replaced by new candidates. On November 4, 2009, the Kepler project publicly released the light curves of the dropped stars. The first new planet candidate observed by Kepler was originally marked as a false positive because of uncertainties in the mass of its parent star. However, it was confirmed ten years later and is now designated Kepler-1658b.
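The scale of the brightness changes Kepler had to detect can be illustrated with the standard transit-photometry relation, in which the fractional dip in starlight is approximately (R_planet / R_star)². The radii below are round published values used for illustration, not figures from this article.

```python
# Illustrative sketch: approximate transit depth, in parts per million,
# for a planet crossing a Sun-like star. The dip is roughly the ratio
# of the planet's disc area to the star's disc area.

R_SUN_KM = 695_700      # solar radius (approximate)
R_EARTH_KM = 6_371      # Earth radius (approximate)
R_JUPITER_KM = 69_911   # Jupiter radius (approximate)

def transit_depth_ppm(r_planet_km, r_star_km=R_SUN_KM):
    """Approximate transit depth in parts per million (ppm)."""
    return (r_planet_km / r_star_km) ** 2 * 1e6

print(f"Earth-size planet:   ~{transit_depth_ppm(R_EARTH_KM):.0f} ppm")
print(f"Jupiter-size planet: ~{transit_depth_ppm(R_JUPITER_KM):.0f} ppm")
```

An Earth-size transit of a Sun-like star dims it by only about 84 ppm, which is why part-per-million photometric precision, and the exclusion of intrinsically variable stars, mattered so much for the search.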
The first six weeks of data revealed five previously unknown planets, all very close to their stars. Among the notable results are one of the least dense planets yet found, two low-mass white dwarfs that were initially reported as being members of a new class of stellar objects, and Kepler-16b, a well-characterized planet orbiting a binary star.
2010
On June 15, 2010, the Kepler mission released data on all but 400 of the ~156,000 planetary target stars to the public. 706 targets from this first data set have viable exoplanet candidates, with sizes ranging from as small as Earth to larger than Jupiter. The identity and characteristics of 306 of the 706 targets were given. The released targets included five candidate multi-planet systems, including six additional exoplanet candidates. Only 33.5 days of data were available for most of the candidates. NASA also announced that data for another 400 candidates were being withheld to allow members of the Kepler team to perform follow-up observations. The data for these candidates were published on February 2, 2011. (See the Kepler results for 2011 below.) The Kepler results, based on the candidates in the list released in 2010, implied that most candidate planets have radii less than half that of Jupiter. The results also imply that small candidate planets with periods less than thirty days are much more common than large candidate planets with periods less than thirty days, and that the ground-based discoveries were sampling the large-size tail of the size distribution. This contradicted older theories, which had suggested that small and Earth-size planets would be relatively infrequent. Based on extrapolations from the Kepler data, an estimate of around 100 million habitable planets in the Milky Way may be realistic. Some media reports of the TED talk in which this estimate was presented led to the misunderstanding that Kepler had actually found these planets.
This was clarified in a letter to the Director of the NASA Ames Research Center, written for the Kepler Science Council and dated August 2, 2010, which states, "Analysis of the current Kepler data does not support the assertion that Kepler has found any Earth-like planets." In 2010, Kepler identified two systems containing objects which are smaller and hotter than their parent stars: KOI 74 and KOI 81. These objects are probably low-mass white dwarfs produced by previous episodes of mass transfer in their systems.
2011
On February 2, 2011, the Kepler team announced the results of analysis of the data taken between May 2 and September 16, 2009. They found 1235 planetary candidates circling 997 host stars. (The numbers that follow assume the candidates are really planets, though the official papers called them only candidates. Independent analysis indicated that at least 90% of them are real planets and not false positives.) 68 planets were approximately Earth-size, 288 super-Earth-size, 662 Neptune-size, 165 Jupiter-size, and 19 up to twice the size of Jupiter. In contrast to previous work, roughly 74% of the planets are smaller than Neptune, most likely as a result of previous work finding large planets more easily than smaller ones. The February 2, 2011 release of 1235 exoplanet candidates included 54 that may be in the "habitable zone", including five less than twice the size of Earth. There were previously only two planets thought to be in the "habitable zone", so these new findings represent an enormous expansion of the potential number of "Goldilocks planets" (planets of the right temperature to support liquid water). All of the habitable zone candidates found thus far orbit stars significantly smaller and cooler than the Sun (habitable candidates around Sun-like stars will take several additional years to accumulate the three transits required for detection).
Of all the new planet candidates, 68 are 125% of Earth's size or smaller, i.e., smaller than any previously discovered exoplanet. "Earth-size" and "super-Earth-size" are defined as "less than or equal to 2 Earth radii (Re)" [(or, Rp ≤ 2.0 Re) – Table 5]. Six such planet candidates [namely: KOI 326.01 (Rp=0.85), KOI 701.03 (Rp=1.73), KOI 268.01 (Rp=1.75), KOI 1026.01 (Rp=1.77), KOI 854.01 (Rp=1.91), KOI 70.03 (Rp=1.96) – Table 6] are in the "habitable zone". A more recent study found that one of these candidates (KOI 326.01) is in fact much larger and hotter than first reported. The frequency of planet observations was highest for exoplanets two to three times Earth-size, and then declined in inverse proportion to the area of the planet. The best estimate (as of March 2011), after accounting for observational biases, was: 5.4% of stars host Earth-size candidates, 6.8% host super-Earth-size candidates, 19.3% host Neptune-size candidates, and 2.55% host Jupiter-size or larger candidates. Multi-planet systems are common; 17% of the host stars have multi-candidate systems, and 33.9% of all the planets are in multiple planet systems. By December 5, 2011, the Kepler team had announced the discovery of 2,326 planetary candidates, of which 207 are similar in size to Earth, 680 are super-Earth-size, 1,181 are Neptune-size, 203 are Jupiter-size and 55 are larger than Jupiter. Compared to the February 2011 figures, the numbers of Earth-size and super-Earth-size planets increased by 200% and 140% respectively. Moreover, 48 planet candidates were found in the habitable zones of surveyed stars, marking a decrease from the February figure; this was due to the more stringent criteria in use in the December data. On December 20, 2011, the Kepler team announced the discovery of the first Earth-size exoplanets, Kepler-20e and Kepler-20f, orbiting a Sun-like star, Kepler-20.
Based on Kepler's findings, astronomer Seth Shostak estimated in 2011 that "within a thousand light-years of Earth", there are "at least 30,000" habitable planets. Also based on the findings, the Kepler team estimated that there are "at least 50 billion planets in the Milky Way", of which "at least 500 million" are in the habitable zone. In March 2011, astronomers at NASA's Jet Propulsion Laboratory (JPL) reported that about "1.4 to 2.7 percent" of all Sun-like stars are expected to have Earth-size planets "within the habitable zones of their stars". This means there are "two billion" of these "Earth analogs" in the Milky Way alone. The JPL astronomers also noted that there are "50 billion other galaxies", potentially yielding more than one sextillion "Earth analog" planets if all galaxies have similar numbers of planets to the Milky Way.
2012
In January 2012, an international team of astronomers reported that each star in the Milky Way may host "on average...at least 1.6 planets", suggesting that over 160 billion star-bound planets may exist in the Milky Way. Kepler also recorded distant stellar super-flares, some of which were 10,000 times more powerful than the 1859 Carrington event. The superflares may be triggered by close-orbiting Jupiter-sized planets. The Transit Timing Variation (TTV) technique, which was used to discover Kepler-9d, gained popularity for confirming exoplanet discoveries. A planet in a system with four stars was also confirmed, the first time such a system had been discovered. By then, there were a total of 2,321 candidates. Of these, 207 are similar in size to Earth, 680 are super-Earth-size, 1,181 are Neptune-size, 203 are Jupiter-size and 55 are larger than Jupiter. Moreover, 48 planet candidates were found in the habitable zones of surveyed stars. The Kepler team estimated that 5.4% of all stars host Earth-size planet candidates, and that 17% of all stars have multiple planets.
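The JPL figure quoted earlier in this section can be checked with simple arithmetic. The star count below is an assumed, illustrative input (roughly 10^11 Sun-like stars in the Milky Way), not a number taken from the article.

```python
# Back-of-the-envelope check of the JPL "Earth analog" estimate.
# Assumed input (illustrative): ~1e11 Sun-like stars in the Milky Way.

SUN_LIKE_STARS = 1e11
low_frac, high_frac = 0.014, 0.027   # "1.4 to 2.7 percent" of Sun-like stars

low = low_frac * SUN_LIKE_STARS
high = high_frac * SUN_LIKE_STARS
print(f"{low:.2g} to {high:.2g} Earth analogs in the Milky Way")
```

With that assumed star count, the 1.4–2.7 percent range brackets the quoted figure of roughly two billion Earth analogs.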
2013
According to a study by Caltech astronomers published in January 2013, the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The study, based on planets orbiting the star Kepler-32, suggests that planetary systems may be common around stars in the Milky Way. The discovery of 461 more candidates was announced on January 7, 2013. The longer Kepler watches, the more planets with long periods it can detect. Among the candidates announced on January 7, 2013, was Kepler-69c (formerly KOI-172.02), an Earth-size exoplanet orbiting a star similar to the Sun in the habitable zone and possibly habitable. In April 2013, a white dwarf was discovered bending the light of its companion red dwarf in the KOI-256 star system. In April 2013, NASA announced the discovery of three new Earth-size exoplanets—Kepler-62e, Kepler-62f, and Kepler-69c—in the habitable zones of their respective host stars, Kepler-62 and Kepler-69. The new exoplanets are considered prime candidates for possessing liquid water and thus a habitable environment. A more recent analysis has shown that Kepler-69c is likely more analogous to Venus, and thus unlikely to be habitable. On May 15, 2013, NASA announced that the space telescope had been crippled by the failure of a reaction wheel that keeps it pointed in the right direction. A second wheel had previously failed, and the telescope required three wheels (out of four total) to be operational for the instrument to function properly. Further testing in July and August determined that, while Kepler was capable of using its damaged reaction wheels to prevent itself from entering safe mode and of downlinking previously collected science data, it was not capable of collecting further science data as previously configured. Scientists working on the Kepler project said there was a backlog of data still to be looked at, and that more discoveries would be made in the following couple of years, despite the setback.
Although no new science data from the Kepler field had been collected since the problem, an additional sixty-three candidates were announced in July 2013 based on the previously collected observations. In November 2013, the second Kepler science conference was held. The discoveries presented included a decrease in the median size of planet candidates compared with early 2013, and preliminary results on the discovery of several circumbinary planets and planets in the habitable zone.
2014
On February 13, over 530 additional planet candidates were announced, residing in single-planet systems. Several of them were nearly Earth-sized and located in the habitable zone. This number was further increased by about 400 in June 2014. On February 26, scientists announced that data from Kepler had confirmed the existence of 715 new exoplanets. A new statistical method of confirmation was used, called "verification by multiplicity", which is based on how many planets around multiple stars were found to be real planets. This allowed much quicker confirmation of numerous candidates which are part of multiplanetary systems. 95% of the discovered exoplanets were smaller than Neptune, and four, including Kepler-296f, were less than 2.5 times the size of Earth and were in habitable zones where surface temperatures are suitable for liquid water. In March, a study found that small planets with orbital periods of less than one day are usually accompanied by at least one additional planet with an orbital period of 1–50 days. This study also noted that ultra-short-period planets are almost always smaller than 2 Earth radii unless they are misaligned hot Jupiters. On April 17, the Kepler team announced the discovery of Kepler-186f, the first nearly Earth-sized planet located in the habitable zone. This planet orbits a red dwarf. In May 2014, K2 observation fields 0 to 13 were announced and described in detail. K2 observations began in June 2014.
In July 2014, the first discoveries from K2 field data were reported, in the form of eclipsing binaries. These discoveries were derived from a Kepler engineering data set collected prior to campaign 0 in preparation for the main K2 mission. On September 23, 2014, NASA reported that the K2 mission had completed campaign 1, the first official set of science observations, and that campaign 2 was underway. Campaign 3 lasted from November 14, 2014, to February 6, 2015, and included "16,375 standard long cadence and 55 standard short cadence targets".
2015
In January 2015, the number of confirmed Kepler planets exceeded 1000. At least two (Kepler-438b and Kepler-442b) of the discovered planets announced that month were likely rocky and in the habitable zone. Also in January 2015, NASA reported that five confirmed sub-Earth-sized rocky exoplanets, all smaller than the planet Venus, had been found orbiting the 11.2-billion-year-old star Kepler-444, making this star system, at 80% of the age of the universe, the oldest yet discovered. In April 2015, campaign 4 was reported to have lasted from February 7, 2015, to April 24, 2015, and to have included observations of nearly 16,000 target stars and two notable open star clusters, the Pleiades and the Hyades. In May 2015, Kepler observed a newly discovered supernova, KSN 2011b (Type Ia), before, during and after explosion. Details of the pre-explosion moments may help scientists better understand dark energy. On July 24, 2015, NASA announced the discovery of Kepler-452b, a confirmed exoplanet that is near-Earth in size and found orbiting the habitable zone of a Sun-like star. The seventh Kepler planet candidate catalog was released, containing 4,696 candidates, an increase of 521 candidates since the previous catalog release in January 2015. On September 14, 2015, astronomers reported unusual light fluctuations of KIC 8462852, an F-type main-sequence star in the constellation Cygnus, as detected by Kepler while searching for exoplanets.
Various hypotheses have been presented, including comets, asteroids, and an alien civilization.
2016
By May 10, 2016, the Kepler mission had verified 1,284 new planets. Based on their size, about 550 could be rocky planets. Nine of these orbit in their stars' habitable zone: Kepler-560b, Kepler-705b, Kepler-1229b, Kepler-1410b, Kepler-1455b, Kepler-1544b, Kepler-1593b, Kepler-1606b, and Kepler-1638b.
Data releases
The Kepler team originally promised to release data within one year of observations. However, this plan was changed after launch, with data being scheduled for release up to three years after its collection. This resulted in considerable criticism, leading the Kepler science team to release the third quarter of their data one year and nine months after collection. The data through September 2010 (quarters 4, 5, and 6) was made public in January 2012.
Follow-ups by others
Periodically, the Kepler team releases a list of candidates (Kepler Objects of Interest, or KOIs) to the public. Using this information, a team of astronomers collected radial velocity data using the SOPHIE échelle spectrograph to confirm the existence of the candidate KOI-428b in 2010, later named Kepler-40b. In 2011, the same team confirmed candidate KOI-423b, later named Kepler-39b.
Citizen scientist participation
Since December 2010, Kepler mission data has been used for the Planet Hunters project, which allows volunteers to look for transit events in the light curves of Kepler images to identify planets that computer algorithms might miss. By June 2011, users had found sixty-nine potential candidates that were previously unrecognized by the Kepler mission team. The team has plans to publicly credit amateurs who spot such planets. In January 2012, the BBC program Stargazing Live aired a public appeal for volunteers to analyse Planethunters.org data for potential new exoplanets.
This led two amateur astronomers—one in Peterborough, England—to discover a new Neptune-sized exoplanet, to be named Threapleton Holmes B. One hundred thousand other volunteers were also engaged in the search by late January, analyzing over one million Kepler images by early 2012. One such exoplanet, PH1b (or Kepler-64b from its Kepler designation), was discovered in 2012. A second exoplanet, PH2b (Kepler-86b), was discovered in 2013. In April 2017, ABC Stargazing Live, a variation of BBC Stargazing Live, launched the Zooniverse project "Exoplanet Explorers". While Planethunters.org worked with archived data, Exoplanet Explorers used recently downlinked data from the K2 mission. On the first day of the project, 184 transit candidates were identified that passed simple tests. On the second day, the research team identified a star system, later named K2-138, with a Sun-like star and four super-Earths in a tight orbit. In the end, volunteers helped to identify 90 exoplanet candidates. The citizen scientists who helped discover the new star system will be added as co-authors of the research paper when it is published.
Confirmed exoplanets
Exoplanets discovered using Kepler data, but confirmed by outside researchers, include Kepler-39b, Kepler-40b, Kepler-41b, Kepler-43b, Kepler-44b, Kepler-45b, as well as the planets orbiting Kepler-223 and Kepler-42. The "KOI" acronym indicates that the star is a Kepler Object of Interest.
Kepler Input Catalog
The Kepler Input Catalog is a publicly searchable database of roughly 13.2 million targets used for the Kepler Spectral Classification Program and the Kepler mission. The catalog alone is not used for finding Kepler targets, because only a portion of the listed stars (about one-third of the catalog) can be observed by the spacecraft.
Solar System observations
Kepler was assigned an observatory code in order to report its astrometric observations of small Solar System bodies to the Minor Planet Center.
In 2013, the alternative NEOKepler mission was proposed: a search for near-Earth objects, in particular potentially hazardous asteroids (PHAs). Kepler's unique orbit and larger field of view than existing survey telescopes would allow it to look for objects inside Earth's orbit. It was predicted that a 12-month survey could make a significant contribution to the hunt for PHAs, as well as potentially locating targets for NASA's Asteroid Redirect Mission. Kepler's first discovery in the Solar System, however, was a 200-kilometer cold classical Kuiper belt object located beyond the orbit of Neptune.
Retirement
On October 30, 2018, NASA announced that the Kepler space telescope, having run out of fuel after nine years of service and the discovery of over 2,600 exoplanets, had been officially retired, and would maintain its current, safe orbit away from Earth. The spacecraft was deactivated with a "goodnight" command sent from the mission's control center at the Laboratory for Atmospheric and Space Physics on November 15, 2018. Kepler's retirement coincided with the 388th anniversary of Johannes Kepler's death in 1630.
Gas lighting
Gas lighting is the production of artificial light from combustion of a fuel gas such as methane, propane, butane, acetylene, ethylene, hydrogen, carbon monoxide, coal gas (town gas) or natural gas. The light is produced either directly by the flame, generally by using special mixes (typically propane or butane) of illuminating gas to increase brightness, or indirectly with other components such as the gas mantle or the limelight, with the gas primarily functioning to heat the mantle or the lime to incandescence. Before electricity became sufficiently widespread and economical to allow for general public use, gas lighting was prevalent for outdoor and indoor use in cities and suburbs where the infrastructure for distribution of gas was practical. At that time, the most common fuels for gas lighting were wood gas, coal gas and, in limited cases, water gas. Early gas lights were ignited manually by lamplighters, although many later designs are self-igniting. Gas lighting is now frequently used for camping, for which the high energy density of the hydrocarbon fuel, and the modular canisters on which camping lights are built, bring bright and long-lasting light without complex equipment. In addition, some urban historical districts retain gas street lighting, and gas lighting is used indoors or outdoors to create or preserve a nostalgic effect.
History of gas lighting
Background
Prior to the use of gaseous fuels for lighting, the early lighting fuels consisted of olive oil, beeswax, fish oil, whale oil, sesame oil, nut oil, and other similar substances, which were all liquid fuels. These were the most commonly used fuels until the late 18th century. Whale oil was especially widely used for lighting in European cities such as London through the early 19th century. Chinese records dating back 1,700 years indicate the use of natural gas in homes for lighting and heating. The natural gas was transported by means of bamboo pipes to homes.
The ancient Chinese of the Spring and Autumn period made the first practical use of natural gas for lighting purposes around 500 B.C., using bamboo pipelines to transport both brine and natural gas for many miles, such as those in the Zigong salt mines. Public illumination preceded by centuries the development and widespread adoption of gas lighting. In 1417, Sir Henry Barton, Lord Mayor of London, ordained "Lanthornes with lights to bee hanged out on the Winter evening betwixt Hallowtide and Candlemassee." Paris was first illuminated by an order issued in 1524, and, in the beginning of the 16th century, the inhabitants were ordered to keep lights burning in the windows of all houses that faced streets. In 1668, when some regulations were made for improving the streets of London, the residents were reminded to hang out their lanterns at the usual time, and, in 1690, an order was issued to hang out a light, or lamp, every night at nightfall, from Michaelmas to Christmas. By an Act of the Common Council in 1716, all housekeepers whose houses faced any street, lane, or passage were required to hang out, every dark night, one or more lights, to burn from six to eleven o'clock, under the penalty of one shilling as a fine for failing to do so. Accumulating and escaping gases were known originally among coal miners for their adverse effects rather than their useful characteristics. Coal miners described two types of gases, one called the choke damp and the other the fire damp. In 1667, a paper detailing the effects of these gases was published under the title "A Description of a Well and Earth in Lancashire taking Fire, by a Candle approaching to it. Imparted by Thomas Shirley, Esq an eye-witness." British clergyman and scientist Stephen Hales experimented with the actual distillation of coal, thereby obtaining a flammable liquid. He reported his results in the first volume of his Vegetable Statics, published in 1726.
From the distillation of "one hundred and fifty-eight grains [10.2 g] of Newcastle coal", he stated that he obtained "180 cubic inches [2.9 L] of gas, which weighed 51 grains [3.3 g], being nearly one third of the whole." Hales's results garnered attention decades later as the unique chemical properties of various gases became understood through the work of Joseph Black, Henry Cavendish, Alessandro Volta, and others. A 1733 publication by Sir James Lowther in the Philosophical Transactions of the Royal Society detailed some properties of coal gas, including its flammability. Lowther demonstrated the principal properties of coal gas to different members of the Royal Society. He showed that the gas retained its flammability after storage for some time. The demonstrations, however, did not lead to any recognized practical use. Minister and experimentalist John Clayton referred to coal gas as the "spirit" of coal. He discovered its flammability by accident: the "spirit" he isolated from coal caught fire by coming in contact with a candle as it escaped from a fracture in one of his distillation vessels. He stored the coal gas in bladders, and at times he entertained his friends by demonstrating the flammability of the gas. Clayton published his findings in Philosophical Transactions.
Early technology
It took nearly 200 years for gas to become accessible for commercial use. A Flemish alchemist, Jan Baptista van Helmont, was the first person to formally recognize gas as a state of matter. He went on to identify several types of gases, including carbon dioxide. Over one hundred years later, in 1733, Sir James Lowther had some of his miners working on a water pit for his mine. While digging the pit, they hit a pocket of gas. Lowther took a sample of the gas home to do some experiments.
He noted, "The said air being put into a bladder … and tied close, may be carried away, and kept some days, and being afterwards pressed gently through a small pipe into the flame of a candle, will take fire, and burn at the end of the pipe as long as the bladder is gently pressed to feed the flame, and when taken from the candle after it is so lighted, it will continue burning till there is no more air left in the bladder to supply the flame." Lowther had essentially discovered the principle behind gas lighting. Later in the 18th century, William Murdoch (sometimes spelled "Murdock") stated: "the gas obtained by distillation from coal, peat, wood and other inflammable substances burnt with great brilliancy upon being set fire to … by conducting it through tubes, it might be employed as an economical substitute for lamps and candles." Murdoch's first invention was a lantern with a gas-filled bladder attached to a jet, which he used to light his way home at night. After seeing how well this worked, he decided to light his home with gas. In 1797, Murdoch installed gas lighting in his new home as well as the workshop in which he worked. "This work was of a large scale, and he next experimented to find better ways of producing, purifying, and burning the gas." The foundation had been laid for companies to start producing gas and for other inventors to experiment with ways of using the new technology. Murdoch was the first to exploit the flammability of gas for the practical application of lighting. He worked for Matthew Boulton and James Watt at their Soho Foundry steam engine works in Birmingham, England. In the early 1790s, while overseeing the use of his company's steam engines in tin mining in Cornwall, Murdoch began experimenting with various types of gas, finally settling on coal gas as the most effective. He first lit his own house in Redruth, Cornwall in 1792.
In 1798, he used gas to light the main building of the Soho Foundry, and in 1802 lit the outside in a public display of gas lighting, the lights astonishing the local population. One of the employees at the Soho Foundry, Samuel Clegg, saw the potential of this new form of lighting. Clegg left his job to set up his own gas lighting business, the Gas Light and Coke Company. A "thermolampe" using gas distilled from wood was patented in 1799, while German inventor Friedrich Winzer (Frederick Albert Winsor) was the first person to patent coal-gas lighting in 1804. In 1801, Philippe Lebon of Paris had also used gas lights to illuminate his house and gardens, and was considering how to light all of Paris. In 1820, Paris adopted gas street lighting. In 1804, Dr Henry delivered a course of lectures on chemistry at Manchester, in which he showed the mode of producing gas from coal, and the facility and advantage of its use. Dr Henry analysed the composition and investigated the properties of carburetted hydrogen gas (i.e. methane). His experiments were numerous and accurate, and made upon a variety of substances; having obtained the gas from wood, peat, different kinds of coal, oil, wax, etc., he quantified the intensity of the light from each source. In 1806, the Philips and Lee factory and a portion of Chapel Street in Salford, Lancashire were lit by gas, thought to be the first use of gas street lighting in the world. Josiah Pemberton, an inventor, had for some time been experimenting on the nature of gas. A resident of Birmingham, his attention may have been roused by the exhibition at Soho. About 1806, he exhibited gas lights in a variety of forms and with great brilliance at the front of his factory in Birmingham. In 1808, he constructed an apparatus, applicable for several uses, for Benjamin Cooke, a manufacturer of brass tubes, gilt toys, and other articles.
In 1808, Murdoch presented to the Royal Society a paper entitled "Account of the Application of Gas from Coal to Economical Purposes", in which he described his successful application of coal gas to light the extensive establishment of Messrs. Phillips and Lea. For this paper he was awarded Count Rumford's gold medal. Murdoch's statements threw great light on the comparative advantage of gas and candles, and contained much useful information on the expenses of production and management. Although the history is uncertain, David Melville has been credited with the first house and street lighting in the United States, in either 1805 or 1806 in Newport, Rhode Island. In 1809, the first application was made to Parliament to incorporate a company in order to accelerate the process, but the bill failed to pass. In 1810, however, the application was renewed by the same parties, and though some opposition was encountered and considerable expense incurred, the bill passed, but not without great alterations; and the London and Westminster Gas Light and Coke Company was established. Less than two years later, on 31 December 1813, Westminster Bridge was lit by gas. By 1816, Samuel Clegg had obtained patents for his horizontal rotative retort, his apparatus for purifying coal gas with cream of lime, and for his rotative gas meter and self-acting governor.
Widespread use
Among the economic impacts of gas lighting were much longer work hours in factories. This was particularly important in Great Britain during the winter months, when nights are significantly longer. Factories could even work continuously over 24 hours, resulting in increased production. Following successful commercialization, gas lighting spread to other countries. In England, the first place outside London to have gas lighting was Preston, Lancashire, in 1816; this was due to the Preston Gaslight Company run by revolutionary Joseph Dunn, who developed an improved method of brighter gas lighting.
The parish church there was the first religious building to be lit by gas lighting. In Bristol, a Gas Light Company was founded on 15 December 1815. Under the supervision of the engineer, John Brelliat, extensive works were conducted in 1816-17 to build a gasholder, mains and street lights. Many of the principal streets in the centre of the city, as well as nearby houses, had switched to gas lighting by the end of 1817. In America, Seth Bemis lit his factory with gas illumination from 1812 to 1813. The use of gas lights in Rembrandt Peale's Museum in Baltimore in 1816 was a great success. Baltimore was the first American city with gas street lights; Peale's Gas Light Company of Baltimore on 7 February 1817 lit its first street lamp at Market and Lemon Streets (currently Baltimore and Holliday Streets). The first private residence in the US illuminated by gas has been variously identified as that of David Melville (c. 1806), as described above, or of William Henry, a coppersmith, at 200 Lombard Street, Philadelphia, Pennsylvania, in 1816. In 1817, at the three stations of the Chartered Gas Company in London, 25 chaldrons (24 m3) of coal were carbonized daily, producing 300,000 cubic feet (8,500 m3) of gas. This supplied gas lamps equal to 75,000 Argand lamps each yielding the light of six candles. At the City Gas Works, in Dorset Street, Blackfriars, three chaldrons of coal were carbonized each day, providing the gas equivalent of 9,000 Argand lamps. So 28 chaldrons of coal were carbonized daily, and 84,000 lights supplied by those two companies only. At this period the principal difficulty in gas manufacture was purification. Mr. D. Wilson, of Dublin, patented a method for purifying coal gas by means of the chemical action of ammoniacal gas. Another plan was devised by Reuben Phillips, of Exeter, who patented the purification of coal gas by the use of dry lime. G. 
Holworthy, in 1818, patented a method of purifying gas by passing it, in a highly condensed state, through iron retorts heated to a dark red. In 1820, the Swedish inventor Johan Patrik Ljungström developed gas lighting with copper apparatus and chandeliers of ink, brass and crystal, reportedly one of the first public installations of gas lighting in the region; it was arranged as a triumphal arch at the city gate for a royal visit of Charles XIV John of Sweden in 1820. By 1823, numerous towns and cities throughout Britain were lit by gas. Gas light cost up to 75% less than oil lamps or candles, which helped to accelerate its development and deployment. By 1859, gas lighting was to be found all over Britain, and about a thousand gas works had sprung up to meet the demand for the new fuel. The brighter lighting which gas provided allowed people to read more easily and for longer, helping to stimulate literacy and learning and speeding up the Second Industrial Revolution. In 1824 the English Association for Gas Lighting on the Continent, a sizeable business producing gas for several cities in mainland Europe, including Berlin, was established, with Sir William Congreve, 2nd Baronet, as general manager. The Bude-Light, invented in 1839, provided a brighter and more economical lamp. Oil-gas appeared in the field as a rival of coal gas. In 1815, John Taylor patented an apparatus for the decomposition of oil and other animal substances. Public attention was attracted to "oil-gas" by the display of the patent apparatus at Apothecaries' Hall by Taylor & Martineau. In 1891 the gas mantle was invented by the Austrian chemist Carl Auer von Welsbach. This eliminated the need for special illuminating gas (a synthetic mixture of hydrogen and hydrocarbon gases produced by destructive distillation of bituminous coal or peat) to obtain bright flames. Acetylene was also used from about 1898 for gas lighting on a smaller scale.
Illuminating gas was used for gas lighting, as it produces a much brighter light than natural gas or water gas. Illuminating gas was much less toxic than other forms of coal gas, but less could be produced from a given quantity of coal. Experiments with distilling coal were described by John Clayton in 1684. George Dixon's pilot plant exploded in 1760, setting back the production of illuminating gas by a few years. The first commercial application was in a Manchester cotton mill in 1806. In 1901, studies of the defoliant effect of leaking gas pipes led to the discovery that ethylene is a plant hormone. Throughout the 19th century and into the first decades of the 20th, the gas was manufactured by the gasification of coal. Later in the 19th century, natural gas began to replace coal gas, first in the US and then in other parts of the world. In the United Kingdom, coal gas was used until the early 1970s.

Russia

The history of the Russian gas industry began with retired Lieutenant Pyotr Sobolevsky (1782–1841), who improved Philippe Lebon's design for a "thermolamp" and presented it to Emperor Alexander I in 1811; in January 1812, Sobolevsky was instructed to draw up a plan for gas street lighting for St. Petersburg. The French invasion of Russia delayed implementation, but St. Petersburg's Governor General Mikhail Miloradovich, who had seen the gas lighting of Vienna, Paris and other European cities, initiated experimental work on gas lighting for the capital, using British apparatus for obtaining gas from pit coal, and by the autumn of 1819, Russia's first gas street light was lit on one of the streets of Aptekarsky Island. In February 1835, the Company for Gas Lighting St. Petersburg was founded; towards the end of that year, a factory for the production of lighting gas was constructed near the Obvodny Canal, using pit coal brought in by ship from Cardiff; and 204 gas lamps were ceremonially lit in St. Petersburg on 27 September 1839.
Over the next 10 years, their numbers almost quadrupled, reaching 800. By the middle of the 19th century, the central streets and buildings of the capital were illuminated: Palace Square, Bolshaya and Malaya Morskaya streets, Nevsky and Tsarskoselsky Avenues, the Passage Arcade, the Noblemen's Assembly, the Technical Institute and the Peter and Paul Fortress.

Theatrical use

It took many years of development and testing before gas lighting for the stage was commercially available. Gas technology was then installed in just about every major theatre in the world, but gas stage lighting was short-lived, because the electric light bulb soon followed. In the 19th century, gas stage lighting went from a crude experiment to the most popular way of lighting theatrical stages. In 1804, Frederick Albert Winsor first demonstrated the use of gas to light the stage at the Lyceum Theatre in London. Although the demonstration and most of the leading research took place in London, the Chestnut Street Theatre in Philadelphia became, in 1816, the first gas-lit theatre in the world. In 1817 the Lyceum, Drury Lane, and Covent Garden theatres were all lit by gas. Gas was brought into the building by "miles of rubber tubing from outlets in the floor called 'water joints'" which "carried the gas to border-lights and wing lights". Before it was distributed, the gas passed through a central distribution point called a "gas table", which varied the brightness by regulating the gas supply and allowed separate control of different parts of the stage. It thus became the first stage 'switchboard'. By the 1850s, gas lighting in theatres had spread practically all over the United States and Europe. Some of the largest installations of gas lighting were in large auditoriums, like the Théâtre du Châtelet, built in 1862. In 1875, the new Paris Opera was constructed.
"Its lighting system contained more than twenty-eight miles of gas piping, and its gas table had no fewer than eighty-eight stopcocks, which controlled nine hundred and sixty gas jets." The theatre that used the most gas lighting was Astley's Equestrian Amphitheatre in London. According to the Illustrated London News, "Everywhere white and gold meets the eye, and about 200,000 gas jets add to the glittering effect of the auditorium … such a blaze of light and splendour has scarcely ever been witnessed, even in dreams." Theatres switched to gas lighting because it was more economical than using candles and also required less labour to operate. With gas lighting, theatres no longer needed people tending to candles during a performance, or to light each candle individually: "It was easier to light a row of gas jets than a greater quantity of candles high in the air." Theatres also no longer needed to worry about wax dripping on the actors during a show. Gas lighting also had an effect on the actors. As the stage was brighter, they could use less make-up and their motions did not have to be as exaggerated. Half-lit stages had become fully lit stages. Production companies were so impressed with the new technology that one said, "This light is perfect for the stage. One can obtain gradation of brightness that is really magical." The best result was the improved respect from the audience: there was no more shouting or rioting. The light pushed the actors further upstage, behind the proscenium, helping the audience concentrate on the action taking place on stage rather than what was going on in the house. Management had more authority over what went on during the show because they could see it. Gaslight was the leading cause of behaviour change in theatres: they were no longer places for mingling and orange-selling, but places of respected entertainment.
Types of lighting instruments

There were six types of burners, but only four were seriously experimented with. The first burner used was the single-jet burner, which produced a small flame. The tip of the burner was made of lead, which absorbed heat, causing the flame to be smaller; it was discovered that the flame would burn brighter if the metal was mixed with other components, such as porcelain. Flat burners were invented mainly to distribute gas and light evenly. The fishtail burner was similar to the flat burner, but it produced a brighter flame and conducted less heat. The last burner that was experimented with was the Welsbach burner. Around this time the Bunsen burner was in use, along with some forms of electricity. The Welsbach was based on the idea of the Bunsen burner, still using gas: a cotton mesh impregnated with cerium and thorium was embedded into it. This source of light was named the gas mantle; it produced three times more light than the naked flame. Several different instruments were used for stage lighting in the 19th century; these included footlights, border lights, groundrows, lengths, bunch lights, conical reflector floods, and limelight spots. Footlights sat directly on the stage, shining into the eyes of the audience, and caused the actors' costumes to catch fire if they got too close. These lights also produced bothersome heat that affected both audience members and actors. Again, the actors had to adapt to these changes: they started fireproofing their costumes and placing wire mesh in front of the footlights. Border lights, also known as striplights, were a row of lights that hung horizontally in the flies. Colour was added later by dyeing cotton, wool, and silk cloth. Lengths were constructed the same way as border lights, but mounted vertically in the rear where the wings were. Bunch lights were a cluster of burners that sat on a vertical base and were fuelled directly from the gas line.
The conical reflector can be compared to the Fresnel lens used today: an adjustable box of light that reflected a beam whose size could be altered by a barndoor. Limelight spots are similar to today's spotlights. This instrument was used in scene shops as well as on the stage. Gas lighting did have some disadvantages. "Several hundred theatres are said to have burned down in America and Europe between 1800 and the introduction of electricity in the late 1800s. The increased heat was objectionable, and the border lights and wing lights had to be lighted by a long stick with a flaming wad of cotton at the end. For many years, an attendant or gas boy moved along the long row of jets, lighting them individually while gas was escaping from the whole row. Both actors and audiences complained of the escaping gas, and explosions sometimes resulted from its accumulation." These problems with gas lighting led to the rapid adoption of electric lighting. By 1881, the Savoy Theatre in London was using incandescent lighting. Even as electric lighting was being introduced to theatre stages, the gas mantle was developed in 1885 for gas-lit theatres. "This was a beehive-shaped mesh of knitted thread impregnated with lime that, in miniature, converted the naked gas flame into, in effect, a lime-light." Electric lighting slowly took over in theatres. In the 20th century, it enabled better and safer theatre productions, with no smell, relatively little heat, and more freedom for designers.

Decline

In the early 20th century, most cities in North America and Europe had gaslit streets, and most railway station platforms had gas lights too. However, around 1880 gas lighting for streets and train stations began giving way to high-voltage (3,000–6,000 volt) direct current and alternating current arc lighting systems. This period also saw the development of the first electric power utility designed for indoor use.
The new system by inventor Thomas Edison was designed to function similarly to gas lighting. For reasons of safety and simplicity it used direct current (DC) at a relatively low 110 volts to light incandescent light bulbs. Voltage in wires steadily declines as distance increases, and at this low voltage power plants needed to be close to the lamps. This voltage-drop problem made DC distribution relatively expensive, and gas lighting retained widespread usage, with new buildings sometimes constructed with dual systems of gas piping and electrical wiring connected to each room, to diversify the power sources for lighting. The development of new alternating current power transmission systems in the 1880s and 1890s by companies such as Ganz and AEG in Europe and Westinghouse Electric and Thomson-Houston in the US solved the voltage and distance problem by using high transmission-line voltages and transformers to drop the voltage for distribution for indoor lighting. Alternating current technology overcame many of the limitations of direct current, enabling the rapid growth of reliable, low-cost electrical power networks, which finally spelled the end of widespread usage of gas lighting.

Modern usage

Outdoors

In some cities, gas lighting is preserved or restored as a vintage nostalgic feature to support the atmosphere of their historic centres. In the 20th century, most cities with gas streetlights replaced them with new electric streetlights. For example, Baltimore, the first US city to install gas streetlights, removed nearly all of them. A sole, token gas lamp is located at N. Holliday Street and E. Baltimore Street as a monument to the first gas lamp in America, erected at that location. However, gas lighting of streets has not disappeared completely, and the few municipalities that retained gas lighting now find that it provides a pleasing nostalgic effect.
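The distance limit follows from Ohm's law: resistive loss in a feeder grows with the square of the current, and delivering the same power at a lower voltage requires proportionally more current. A rough sketch of this trade-off follows; all the loads, lengths, and wire sizes below are illustrative assumptions, not figures from the historical record.

```python
# Illustrative sketch: why low-voltage DC distribution was limited to
# short distances. Doubling the supply voltage halves the current for
# the same power, and quarters the resistive loss in the wires.
RESISTIVITY_CU = 1.68e-8  # ohm-metres, copper at room temperature

def line_loss_fraction(power_w, volts, length_m, wire_area_m2):
    """Fraction of delivered power lost in a two-wire feeder."""
    current = power_w / volts                                # I = P / V
    resistance = RESISTIVITY_CU * (2 * length_m) / wire_area_m2  # out and back
    return current**2 * resistance / power_w                 # I^2 R / P

# Same hypothetical 10 kW lighting load over a 1 km feeder of
# 50 mm^2 copper, at 110 V and at 20x that voltage:
low = line_loss_fraction(10_000, 110, 1_000, 50e-6)
high = line_loss_fraction(10_000, 2_200, 1_000, 50e-6)
print(low, high)  # the loss fraction falls by the square of the voltage ratio
```

With these (hypothetical) numbers the 110 V feeder loses over half its power in the wires, while the higher-voltage line loses a small fraction of a percent, which is why the AC systems described below used high transmission voltages and transformers.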
Gas lighting is also seeing a resurgence in the luxury home market for those in search of historical authenticity. The largest gas lighting network in the world is that of Berlin. With about 23,000 lamps (2022), it holds more than half of all working gas street lamps in the world, followed by Düsseldorf with 14,000 lamps (2020), of which at least 10,000 are to be retained. In London there were about 1,500 working gas street lamps, although there were plans to replace 299 of those in Westminster (the first city in the world lit by gas) with LED lighting by 2023, which sparked public opposition. In the United States, more than 2,800 gas lights in Boston operate in the historic districts of Beacon Hill, Back Bay, Bay Village, Charlestown, and parts of other neighbourhoods. In Cincinnati, Ohio, more than 1,100 gas lights operate in areas that have been named historic districts. In New Orleans, gas lights operate in parts of the famed French Quarter and outside historic homes throughout the city. Zagreb, the capital of Croatia, has used gas candelabras since 1863. At the time, Zagreb was illuminated by 60,000 lamps, but as of 1987, only 248 street lamps illuminate the old parts of the city. Zagreb's gas lamps are managed manually by lamplighters in historic uniforms ("nažigači"). Prague, where gas lighting was introduced on 15 September 1847, had about 10,000 gas streetlamps in the 1940s. The last historic gas candelabras were electrified in 1985. However, in 2002–2014, streetlamps along the Royal Route and some other streets in the centre were rebuilt to use gas (using replicas of the historic poles and lanterns), several historic candelabras (Hradčanské náměstí, Loretánská street, Dražického náměstí etc.) were also converted back to gas lamps, and five new gas lamps were installed in the Michle Gasworks as a promotion. In 2018, there were 417 points (about 650 lanterns) of street gas lighting in Prague.
During Advent and Christmas, lanterns on the Charles Bridge are lit manually by a lamplighter in historic uniform. The plan to reintroduce gas lights in Old Prague was proposed in 2002, and adopted by the Municipality of Prague in January 2004.

Indoors

The use of natural gas (methane) for indoor lighting is nearly extinct. Besides producing a lot of heat, the combustion of methane tends to release significant amounts of carbon monoxide, a colourless and odourless gas that is more readily absorbed by the blood than oxygen, and can be deadly. Historically, lamps of all types were used for shorter periods than is usual with electric lights, and in the far draughtier buildings of the time this was of less concern and danger. There are suppliers of new mantle gas lamps set up for use with natural gas; some old homes still have fixtures installed, and some period restorations have salvaged fixtures installed, more for decoration than use. New fixtures are still made and available for propane (sometimes called "bottled gas"), a product of oil refining, which under most circumstances burns more completely to carbon dioxide and water vapour. In some locations where public utility electricity or kerosene are not readily accessible or desirable, propane gas mantle lamps are still used, although the increased availability of alternative energy sources, such as solar panels and small-scale wind turbines, combined with the increasing efficiency of lighting products such as compact fluorescent lamps and LEDs, is steadily displacing them.

Other uses

Perforated tubes bent into the shape of letters were used to form gas-lit advertising signs, prior to the introduction of neon lights, as early as 1857 in Grand Rapids, Michigan. Gas lighting is still in common use for camping. Small portable gas lamps, connected to a portable gas cylinder, are a common item on camping trips. Mantle lamps powered by vaporized petrol, such as the Coleman lantern, are also available.
Seed drill
A seed drill is a device used in agriculture that sows seeds for crops by positioning them in the soil and burying them to a specific depth while being dragged by a tractor. This ensures that seeds will be distributed evenly. The seed drill sows the seeds at the proper seeding rate and depth, ensuring that the seeds are covered by soil. This saves them from being eaten by birds and animals, or drying up due to exposure to the sun. With seed drill machines, seeds are distributed in rows; this allows plants to get sufficient sunlight and nutrients from the soil. Before the introduction of the seed drill, most seeds were planted by hand broadcasting, an imprecise and wasteful process with poor distribution of seeds and low productivity. Use of a seed drill can improve the ratio of crop yield (seeds harvested per seed planted) by as much as eight times. The use of a seed drill also saves time and labor. Some machines for metering out seeds for planting are called planters. The concepts evolved from ancient Chinese practice and later developed into mechanisms that pick up seeds from a bin and deposit them down a tube. Seed drills of earlier centuries included single-tube seed drills in Sumer and multi-tube seed drills in China, and later the seed drill developed in 1701 by Jethro Tull, which was influential in the growth of farming technology in recent centuries. Even for a century after Tull, hand-sowing of grain remained common.

Function

Many seed drills consist of a hopper filled with seeds arranged above a series of tubes that can be set at selected distances from each other to allow optimum growth of the resulting plants. Seeds are spaced out using fluted paddles which rotate using a geared drive from one of the drill's land wheels. The seeding rate is altered by changing gear ratios. Most modern drills use air to convey seeds in plastic tubes from the seed hopper to the colters.
This arrangement enables seed drills to be much wider than the seed hopper—as much as 12 m wide in some cases. The seed is metered mechanically into an air stream created by a hydraulically powered onboard fan and conveyed initially to a distribution head which subdivides the seeds into the pipes taking them to the individual colters. Before the operation of a conventional seed drill, hard ground has to be plowed and harrowed to soften it enough to get the seeds to the right depth and make a good "seedbed", providing the right mix of moisture, stability, space and air for seed germination and root development. The plow digs up the earth and the harrow smooths the soil and breaks up any clumps. If the soil is not so compacted as to need a plow, it can instead be tilled with shallower, less disruptive tools before drilling. The least disruption of soil structure and soil fauna occurs when a drilling machine outfitted to "direct drill" is used: "direct" refers to sowing into narrow rows opened by single teeth placed in front of every seed-dispensing tube, directly into or between the partly composted remains (stubble) of the last crop, that is, into an untilled field. The drill must be set for the size of the seed used. After this the grain is put in the hopper on top, from which the seed grains flow down to the drill, which spaces and plants the seed. This system is still used today, but has been updated and modified over time in many aspects, the most visible example being very wide machines with which one farmer can plant many rows of seed at the same time. A seed drill can be pulled across the field, depending on the type, by draft animals such as bullocks, or by a power engine, usually a tractor. Seeds sown using a seed drill are distributed evenly and placed at the correct depth in the soil.
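The land-wheel metering described above can be sketched as a simple calculation: seed discharged per metre travelled follows from the wheel circumference, the gear ratio, and the output of the fluted metering unit per revolution, so changing the gear ratio changes the seeding rate proportionally. All numbers in this sketch are hypothetical illustrations, not figures from any manufacturer or from the article.

```python
import math

def seeding_rate_kg_per_ha(wheel_diameter_m, gear_ratio,
                           kg_per_metering_rev, working_width_m):
    """kg of seed per hectare for a land-wheel-driven metering unit."""
    wheel_circumference = math.pi * wheel_diameter_m
    wheel_revs_per_metre = 1 / wheel_circumference
    metering_revs_per_metre = wheel_revs_per_metre * gear_ratio
    kg_per_metre = metering_revs_per_metre * kg_per_metering_rev
    metres_per_hectare = 10_000 / working_width_m  # travel to cover 1 ha
    return kg_per_metre * metres_per_hectare

# Hypothetical drill: 1.2 m land wheel, 0.5 gear ratio,
# 60 g of seed per metering-shaft revolution, 3 m working width.
rate = seeding_rate_kg_per_ha(1.2, 0.5, 0.06, 3.0)
print(round(rate, 1))  # about 26.5 kg/ha under these assumptions
```

Because the metering shaft is geared to the land wheel, the rate depends only on distance travelled, not on forward speed, and doubling the gear ratio doubles the computed rate.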
Precursors

In older methods of planting, a field is initially prepared with a plow, producing a series of linear cuts known as furrows. The field is then seeded by throwing the seeds over it, a method known as manual broadcasting. The seeds may not be sown to the right depth nor at the proper distance from one another. Seeds that land in the furrows have better protection from the elements, and natural erosion or manual raking will cover them while leaving some exposed. The result is a field planted roughly in rows, but with a large number of plants outside the furrow lanes. There are several downsides to this approach. The most obvious is that seeds that land outside the furrows will not show the growth of the plants sown in the furrow, since they sit too near the surface; because of this, they are lost to the elements. Many of the seeds remain on the surface, where they are vulnerable to being eaten by birds or carried away on the wind. Surface seeds commonly never germinate at all, or germinate prematurely, only to be killed by frost. Since the furrows represent only a portion of the field's area, and broadcasting distributes seeds fairly evenly, this results in considerable wastage of seeds. Less obvious are the effects of overseeding: all crops grow best at a certain density, which varies depending on the soil and weather conditions. Additional seeding above this will actually reduce crop yields, in spite of more plants being sown, as there will be competition among the plants for the available minerals, water, and soil. Another reason is that the mineral resources of the soil will also deplete at a much faster rate, thereby directly affecting the growth of the plants.

History

While the Babylonians used primitive seed drills around 1400 BCE, the invention never reached Europe. Multi-tube iron seed drills were invented by the Chinese in the 2nd century BCE.
This multi-tube seed drill has been credited with giving China an efficient food production system that allowed it to support its large population for millennia. It may have been introduced into Europe following contacts with China. In the Indian subcontinent, the seed drill was in widespread use among peasants by the time of the Mughal Empire in the 16th century. The first known European seed drill was attributed to Camillo Torello and patented by the Venetian Senate in 1566. A seed drill was described in detail by Tadeo Cavalina of Bologna in 1602. In England, the seed drill was further refined by Jethro Tull in 1701, during the Agricultural Revolution. However, seed drills of this and successive types were expensive, unreliable, and fragile. Seed drills would not come into widespread use in Europe until the mid-to-late 19th century, when manufacturing advances such as machine tools, die forging and metal stamping allowed large-scale precision manufacturing of metal parts. Early drills were small enough to be pulled by a single horse, and many of these remained in use into the 1930s. The availability of steam, and later gasoline, tractors, however, saw the development of larger and more efficient drills that allowed farmers to seed ever-larger tracts in a single day. Recent improvements to drills allow seed-drilling without prior tilling. This means that soils subject to erosion or moisture loss are protected until the seed germinates and grows enough to keep the soil in place; it also helps prevent soil loss by avoiding erosion after tilling. The development of the press drill was one of the major innovations in pre-1900 farming technology.

Impact

The invention of the seed drill dramatically improved germination. The seed drill employed a series of runners spaced at the same distance as the plowed furrows. These runners, or drills, opened the furrow to a uniform depth before the seed was dropped.
Behind the drills were a series of presses, metal discs which cut down the sides of the trench into which the seeds had been planted, covering them over. This innovation permitted farmers to have precise control over the depth at which seeds were planted. This greater measure of control meant that fewer seeds germinated early or late and that seeds were able to take optimum advantage of available soil moisture in a prepared seedbed. The result was that farmers were able to use less seed and at the same time experience larger yields than under the broadcast methods. The seed drill allows farmers to sow seeds in well-spaced rows at specific depths at a specific seed rate; each tube creates a hole of a specific depth, drops in one or more seeds, and covers it over. This invention gives farmers much greater control over the depth that the seed is planted and the ability to cover the seeds without back-tracking. The result is an increased rate of germination, and a much-improved crop yield (up to eight times compared to broadcast seeding). The use of a seed drill also facilitates weed control. Broadcast seeding results in a random array of growing crops, making it difficult to control weeds using any method other than hand weeding. A field planted using a seed drill is much more uniform, typically in rows, allowing weeding with a hoe during the growing season. Weeding by hand is laborious and inefficient. Poor weeding reduces crop yield, so this benefit is extremely significant.
Therizinosaurus
Therizinosaurus (meaning 'scythe lizard') is a genus of very large therizinosaurid that lived in Asia during the Late Cretaceous period, in what is now the Nemegt Formation, around 70 million years ago. It contains a single species, Therizinosaurus cheloniformis. The first remains of Therizinosaurus were found in 1948 by a Mongolian field expedition in the Gobi Desert and later described by Evgeny Maleev in 1954. The genus is only known from a few bones, including gigantic manual unguals (claw bones), from which it gets its name, and additional findings comprising forelimb and hindlimb elements that were discovered from the 1960s through the 1980s. Therizinosaurus was a colossal therizinosaurid that could grow up to long and tall, and weigh possibly over . Like other therizinosaurids, it would have been a slow-moving, long-necked, high browser equipped with a rhamphotheca (horny beak) and a wide torso for food processing. Its forelimbs were particularly robust and had three fingers that bore unguals which, unlike those of other relatives, were very stiffened, elongated, and significantly curved only at the tips. Therizinosaurus had the longest known manual unguals of any land animal, reaching above in length. Its hindlimbs ended in four functionally weight-bearing toes, differing from other theropod groups, in which the first toe was reduced to a dewclaw, and resembling the very distantly related sauropodomorphs. It was one of the last and largest representatives of its unique group, the Therizinosauria (formerly known as Segnosauria; the segnosaurs). During and after its original description in 1954, Therizinosaurus had rather complex relationships due to the lack of complete specimens and relatives at the time. Maleev thought the remains of Therizinosaurus belonged to a large turtle-like reptile, and also named a separate family for the genus: Therizinosauridae.
Later on, with the discovery of more complete relatives, Therizinosaurus and its kin were thought to represent some kind of Late Cretaceous sauropodomorphs or transitional ornithischians, even though at some point it was suggested that it may have been a theropod. After years of taxonomic debate, they are now placed in one of the major dinosaur clades, Theropoda, specifically as maniraptorans. Therizinosaurus is widely recovered within Therizinosauridae by most analyses. The unusual arms and body anatomy (extrapolated from relatives) of Therizinosaurus have been cited as an example of convergent evolution with chalicotheriines and other primarily herbivorous mammals, suggesting similar feeding habits. The elongated hand claws of Therizinosaurus were more useful for pulling vegetation within reach than for active attack or defense, because of their fragility; however, they may have had some role in intimidation. Its arms were also particularly resistant to stress, which suggests a robust use of these limbs. Therizinosaurus was a very tall animal, likely facing reduced competition for foliage in its habitat and outmatching predators like the tyrannosaurid Tarbosaurus.

History of discovery

In 1948, several Mongolian paleontological expeditions organized by the USSR Academy of Sciences were conducted in the Nemegt Formation of the Gobi Desert, southwestern Mongolia, with the main objective of finding new fossils. The expeditions unearthed numerous dinosaur and turtle fossil remains from the stratotype locality Nemegt (also known as the Nemegt Valley), but the most notable elements collected were three partial manual unguals (claw bones) of considerable size. This set of unguals was found in a subdivision of the Nemegt locality designated Quarry V, near the skeleton of a large theropod, but also in association with other elements including a metacarpal fragment and several rib fragments.
It was labelled under the specimen number PIN 551-483, and these fossils were later described by the Russian paleontologist Evgeny Maleev in 1954, who used them to name the new genus and type species Therizinosaurus cheloniformis, making them the holotype specimen. The generic name, Therizinosaurus, is derived from the Greek (, meaning scythe, reap or cut) and (, meaning lizard) in reference to the enormous manual unguals, and the specific name, cheloniformis, is taken from the Greek (, meaning turtle) and the Latin formis, as the remains were thought to belong to a turtle-like reptile. Maleev also coined a separate family for this new and enigmatic taxon: Therizinosauridae. Since little was known of Therizinosaurus at the time of the original description, Maleev thought PIN 551-483 belonged to a large, long turtle-like reptile that relied on its giant hand claws to harvest seaweed. Though it was not fully understood to what general kind of animal these fossils belonged, in 1970 the Russian paleontologist Anatoly K. Rozhdestvensky became one of the first authors to suggest that Therizinosaurus was a theropod and not a turtle. He made comparisons between Chilantaisaurus and the holotype unguals of Therizinosaurus to propose that the appendages actually came from a carnosaurian dinosaur, thereby interpreting Therizinosaurus as a theropod. Rozhdestvensky also illustrated the three holotypic manual unguals and re-identified the metacarpal fragment as a metatarsal bone, and based on the unusual shape of both the metatarsal and the rib fragments he listed them as sauropod remains. These theropodan affinities were also followed by the Polish paleontologist Halszka Osmólska and co-author Ewa Roniewicz in 1970 during their naming and description of Deinocheirus—another large and enigmatic theropod from the formation that was initially known from partial arms.
Similar to Rozhdestvensky, they suggested that the holotype unguals were more likely to have belonged to a carnosaurian theropod than to a large marine turtle. Additional specimens Further expeditions in the Nemegt Formation unearthed more fossils of Therizinosaurus. In 1968, prior to the statements of Rozhdestvensky, Osmólska and Roniewicz, the upper portion of a manual ungual was found in the Altan Uul locality and labeled as MPC-D 100/17 (formerly IGM or GIN). In 1972, another fragmented ungual (specimen MPC-D 100/16) was discovered at the Upper White Beds of the Hermiin Tsav locality, preserving only its lower portion. In 1973, a much more complete, larger, and articulated specimen was also collected from Hermiin Tsav. This specimen was labelled as MPC-D 100/15 and consists of both left and right arms including the scapulocoracoids, both humeri (upper arm bones), the right ulna with radius and the left ulna, two right carpals, the right metacarpus including a complete digit II, and some ribs with gastralia (belly ribs). As is common with fossils, some elements were not entirely preserved, such as the scapulocoracoids with broken ends, and the left arm is less complete than the right one. All of these specimens were first described and referred to Therizinosaurus by the Mongolian paleontologist Rinchen Barsbold in 1976. In this new monograph, he pointed out that the rib fragments in MPC-D 100/15 were more slender than the ones from the holotype, and identified MPC-D 100/16 and 100/17 as pertaining to digits I and III, respectively. It was clear to Barsbold that MPC-D 100/15 represented Therizinosaurus, as the ungual in this specimen shared the elongation and flattened morphology of all previous specimens. He concluded that Therizinosaurus was a theropod taxon since MPC-D 100/15 matched multiple theropodan characters.
Also in 1973, the specimen MPC-D 100/45 was discovered by the Joint Soviet-Mongolian Paleontological Expedition at the Hermiin Tsav locality. Unlike the previous findings, MPC-D 100/45 is represented by a right hindlimb composed of a very fragmented femur with the lower end of the tibia, astragalus, calcaneum, tarsal IV, and a functional tetradactyl (four-toed) foot comprising four partial metatarsals, partially preserved digits I and III, and nearly complete digits II and IV. These newer remains were described by the Mongolian paleontologist Altangerel Perle in 1982. He regarded the referral of Therizinosaurus and Therizinosauridae to Chelonia (the turtle order) as unlikely, and hypothesized Therizinosaurus and Segnosaurus—at the time of this description regarded as a theropod dinosaur—to be particularly similar based on their respective scapulocoracoid morphology, differing only in size. Perle referred MPC-D 100/45 to Therizinosaurus given that this specimen was found near the location of MPC-D 100/15 and was virtually identical to the pes described for Segnosaurus. In 1990, Barsbold and Teresa Maryańska agreed with Perle that the hindlimb material from Hermiin Tsav he described in 1982 was therizinosaurian (then called segnosaurian), given that the metatarsus was stocky and the astragalus had a laterally arched ascending process (bony extension), but cast doubt on his referral of it to Therizinosaurus, and on a segnosaurian identity for this taxon, since it was otherwise only known from the pectoral girdle and other forelimb elements, making direct comparisons between specimens impossible. They considered this specimen to represent a Late Cretaceous representative of the Segnosauria, but not Therizinosaurus. In 2010, however, the American paleontologist Lindsay E.
Zanno, in her large taxonomic reevaluation of Therizinosauria, considered the referral of MPC-D 100/45 to Therizinosaurus likely, on the rationale that it was collected in the same stratigraphic context (Nemegt Formation) as the holotype and shared the robust and four-toed morphology of other therizinosaurids such as Segnosaurus. She also excluded the rib material from the holotype, as it had been re-identified by Rozhdestvensky as likely coming from a sauropod dinosaur and not Therizinosaurus itself. Description By maniraptoran standards, Therizinosaurus reached an enormous size, estimated to have reached in length with estimated heights from and weights from to possibly over . These dimensions make Therizinosaurus the largest therizinosaur known and the largest known maniraptoran. Along with the contemporaneous ornithomimosaur Deinocheirus, it was the largest maniraptoriform. Though the body remains of Therizinosaurus are relatively incomplete, inferences can be made about its physical characteristics based on more complete, related therizinosaurids. Like other members of its family, Therizinosaurus had a proportionally small skull bearing a rhamphotheca (horny beak) atop its long neck; a bipedal gait; a large belly for foliage processing; and sparse feathering. Other traits that were likely present in Therizinosaurus include a heavily pneumatized (air-filled) vertebral column and a robustly-built, opisthopubic (backwards oriented) pelvis. In 2010, Senter and James used hindlimb length equations to predict the total length of the hindlimbs in Therizinosaurus and Deinocheirus. They concluded that an average Therizinosaurus may have had approximately long legs. More recently, Mike Taylor and Matt Wedel suggested that the whole neck would have been 2.9 times the length of the humerus, which was , resulting in a long neck based on comparisons with the cervical series of Nanshiungosaurus.
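Taylor and Wedel's neck-to-humerus ratio lends itself to a quick back-of-the-envelope check. The sketch below applies their 2.9:1 scaling; the humerus length used is purely hypothetical for illustration, since the measured value is not reproduced here.

```python
# Sketch of Taylor & Wedel's scaling: neck length ≈ 2.9 × humerus length,
# based on comparison with Nanshiungosaurus. The input below is a
# hypothetical humerus length, used only to illustrate the arithmetic.

NECK_TO_HUMERUS_RATIO = 2.9

def estimate_neck_length(humerus_length_m: float) -> float:
    """Estimate neck length (m) from humerus length (m) via the 2.9x ratio."""
    return NECK_TO_HUMERUS_RATIO * humerus_length_m

# A hypothetical 0.8 m humerus gives an estimated neck of 2.32 m:
print(f"{estimate_neck_length(0.8):.2f} m")
```

The point of the scaling is simply that even a modest humerus implies a neck roughly three times as long, hence the reconstruction of Therizinosaurus as a notably long-necked, high-browsing animal.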
The most distinctive feature of Therizinosaurus was the presence of gigantic unguals on each of the three digits of its hands. These were common among therizinosaurs but particularly large and stiffened in Therizinosaurus, and they are considered the longest known from any terrestrial animal. Forelimbs The arm of Therizinosaurus covered in total length (combined lengths of the humerus, radius, and second metacarpal with phalanges). The scapula measured long with a stocky and flattened dorsal blade, a wide acromial process (bony extension) and a very widened ventral surface. Near the anterior edge of the scapular widening and near the suture (bone joint), a foramen was located; it likely functioned as a channel for blood vessels and nerves in life. The posterior edge of the scapula was robust and the anterior one was lightly built, likely fused into a cartilaginous system with its periphery in life. The coracoid measured in length; it had a broad and convex lateral surface that formed a slightly inclined concavity near the scapulocoracoid suture. This concavity bent down towards the scapular widening. Near the scapulocoracoid suture, this edge turned very thin and possibly merged into cartilage along with the periphery of the coracoid in life, as in the case of the scapular edge. A large foramen was also present on the coracoid. The glenoid was broad and deep, pointing slightly to the outer lateral side. It had robust and convex crest-like borders. The supraglenoid thickening was developed in a convex crest-shaped form and was divided across the top of the scapulocoracoid suture. The attachment for the biceps muscle was prominently developed as a large tubercle with a stocky top, indicating powerful muscles in life. The humerus was robustly built, measuring long, and had a broad upper end. The deltopectoral crest (deltoid muscle attachment) was particularly long and thick, with its top located approximately one-third of the way from the upper end. The length of the crest was no less than two-thirds the length of the whole bone.
The lower end of the humerus was very expanded and flared. The condyles were developed onto the anterior side of the lower expansion, while the epicondyles were very broad and projected over the limits of the articular areas. The ulna measured and most of its length was occupied by its straight shaft. The ulnar process was very wide. The upper articular area was divided into inner and outer lateral sides. The lateral side had a triangular-shaped border and was slightly concave; it was limited in top view by the depression for the upper articulation of the radius. The inner side formed a semilunar-shaped depression that covered the lunar-shaped condyle of the humerus. The radius was long and slightly S-curved. Its upper end was flattened in a lateral direction and very wide, and the distal end was highly robust. The first lower carpal measured tall and wide and had two articulation surfaces on its lowermost end. The upper surface of this carpal was divided by a broad depression that formed the articulation of the carpus. On its inner side, it had a triangular-shaped outline that attached to the upper surface of metacarpal I, occupying slightly less area than the lateral side, which articulated with metacarpal II. These areas were separated by an oblique bony projection. The second lower carpal was smaller than the first one, measuring tall and wide. Its lower surface was flattened, and the articular surface of the carpus extended from the first carpal to the second carpal over the articulation of the two bones. The metacarpal I was long and, compared to the others, stockier. Its lateral side was broad, especially on the uppermost area; the inner border was thin and narrow. The upper articulation was configured into three parts. The lower articular surface was somewhat asymmetric and bent towards the inner side, along with a wide and deep opening.
The total length of this metacarpal was greater than two-thirds the length of metacarpal III, which may have been a unique trait of Therizinosaurus. The metacarpal II measured in length and was the most elongated and robust metacarpal. It had an inclined, square-shaped, and flattened upper articulation. The articulation on the lower head had very symmetrical condyles, divided by a broad, deep depression. The lateral connecting openings were poorly developed. The metacarpal III covered in length and had a very thin shaft compared to the other metacarpals. Its upper articulation was divided into three parts. The lower articular head was asymmetrical with deep and broad openings. As in metacarpal II, the lateral connecting openings were poorly developed. Only the second digit of the manus is known in Therizinosaurus. It consisted of two phalanges and a large ungual. The first and second phalanges were somewhat equal in shape and length ( and , respectively), and shared a robust and stocky structure. The upper articular facets were very symmetrical and had a crest—particularly taller in the first phalanx. The top border of this crest was very pointed and thick; it likely served as the site for attachment of the extensor tendons in life. The lower heads were nearly symmetrical, but the central depression was considerably wider and deeper in the first phalanx. The manual unguals of Therizinosaurus were especially enormous and long, estimated to have covered approximately in length. Unlike those of other therizinosaurs, they were very straight, flattened from side to side, and had sharp curvatures only at the tips, a unique feature of Therizinosaurus. The lower tubercle, where the flexor tendons attached to the ungual, was thick and robust, indicating a large pad in life. The articulation surface that connected with the preceding phalanx was slightly concave and divided into two by a central ridge. Hindlimbs Therizinosaurus had a rather stocky and robust tibia that was very wide on its lower end.
The metatarsus was robust and short (almost sauropodomorph-like), and composed of five metatarsals. The first four were functional and terminated in weight-bearing digits, hence the tetradactyl (four-toed) condition. The last or fifth metatarsal was a highly reduced bone located at the lateral side of the metatarsus and had no functional significance. Unlike in most other theropod groups, the first pedal digit was—though shorter than the others—functional and weight-bearing. The second and third were equally long, while the fourth was smaller and somewhat thinner. The pedal unguals were flattened from side to side and likely sharp. The morphology of the feet of Therizinosaurus and other therizinosaurids was unique, as the general theropod formula includes tridactyl (three-toed) feet in which the first toe is reduced to a dewclaw and held off the ground. Classification Maleev originally classified Therizinosaurus as a giant marine turtle, and, given how enigmatic the specimen was, assigned the genus to a separate family, Therizinosauridae. The affinities of the fossils remained uncertain among the scientific community; however, in 1970 Rozhdestvensky was one of the first paleontologists to suggest that Therizinosaurus was actually a theropod dinosaur instead of a turtle. He also suggested that the supposed ribs of the holotype were likely from a different dinosaur, possibly a sauropodomorph. In 1976 Barsbold concluded that Therizinosaurus was a theropod because MPC-D 100/15 matched numerous theropodan characters, and that Therizinosauridae and Deinocheiridae were probably synonyms. With the discovery and description of Segnosaurus in 1979, Perle named a new family of dinosaurs, the Segnosauridae. He tentatively placed the family within Theropoda given the similarities of the mandible and dentition to those of other members. A year later, in 1980, the new genus Erlikosaurus was named by Barsbold and Perle. They named a new infraorder, the Segnosauria, composed of Erlikosaurus and Segnosaurus.
They also noted that while aberrant and having ornithischian-like pelves, segnosaurs featured traits similar to those of other theropods. With his description of the hindlimb referred to Therizinosaurus in 1982, Perle concluded that Segnosaurus was very similar to the latter based on their morphology, and that they possibly belonged to a single, if not the same, group. In 1983, Barsbold named a new genus of segnosaur, Enigmosaurus. He analyzed the pelvis of the new genus and pointed out that segnosaurids were so different from other theropods that they could be outside the group or represent a different lineage of theropod dinosaurs. Later in the same year, he further argued for the exclusion of segnosaurs from theropods by noting that their pelves resembled those of sauropod dinosaurs. Consequently, the assignment of segnosaurs started to shift towards sauropodomorphs. In 1984, Gregory S. Paul claimed that segnosaurs, rather than being theropods, were indeed sauropodomorphs that had managed to survive into the Cretaceous period. He based the idea on anatomical traits such as the similar configuration of the skull. He maintained his position in 1988 by placing the Segnosauria into the now obsolete Phytodinosauria, and was one of the first to suggest a segnosaur assignment for the enigmatic Therizinosaurus. Other prominent paleontologists such as Jacques Gauthier and Paul Sereno supported this view. In 1990, Barsbold and Teresa Maryańska agreed that the hindlimb material from Hermiin Tsav referred to Therizinosaurus in 1982 was segnosaurian, since it matched several traits, but considered it unlikely to belong to the genus and species as there was no overlapping material among specimens. Barsbold and Maryańska also disagreed with previous researchers who classified Deinocheirus as a segnosaur. In the same year, David B. Norman considered Therizinosaurus to be a theropod of uncertain classification.
However, with the unexpected discovery and description of Alxasaurus in 1993, the widely accepted sauropodomorph affinities of segnosaurs were questioned by paleontologists Dale Russell and Dong Zhiming. This new genus was far more complete than any other segnosaur, and multiple anatomical features indicated that it was related to Therizinosaurus. With this, they identified Therizinosauridae and Segnosauridae as the same group, the former name having taxonomic priority. Due to some primitive characters present in Alxasaurus, they coined a new taxonomic rank, the Therizinosauroidea, containing the new taxon and Therizinosauridae. All of the new information shed light on the affinities of the newly named therizinosauroids. Russell and Dong concluded that they were theropods with unusual features. In 1994, Clark and colleagues redescribed the very complete skull of Erlikosaurus, and even more theropod traits were found this time. They also validated the synonymy of the Segnosauridae with Therizinosauridae and considered therizinosauroids to be maniraptoran dinosaurs. In 1997, Russell coined the infraorder Therizinosauria in order to contain all segnosaurs. This new infraorder was composed of Therizinosauroidea and the more advanced Therizinosauridae. Consequently, Segnosauria became a synonym of Therizinosauria. Though some uncertainties remained, a small and feathered therizinosauroid from China was described in 1999 by Xu Xing and colleagues: the new genus Beipiaosaurus. It confirmed the placement of therizinosaurs among theropods and also their taxonomic place within the Coelurosauria. The discovery also indicated that feathers were widely distributed among theropod dinosaurs. In 2010, Lindsay Zanno revised the taxonomy of therizinosaurs in extensive detail.
She found that many parts of therizinosaur holotype and referred specimens were lost or damaged, and that sparse specimens with no overlapping elements made it difficult to resolve the relationships among the group's members. Zanno accepted the referral of the specimen IGM 100/45 to Therizinosaurus since it matched multiple therizinosaurid traits, but decided not to include the specimen in her taxonomic analysis due to the lack of comparative forelimb remains. She also excluded the supposed ribs that were present on the holotype since they likely came from a different animal and not Therizinosaurus. In 2019, Hartman and colleagues also performed a large phylogenetic analysis of Therizinosauria based on the characters provided by Zanno in her revision. They found results similar to Zanno's regarding the family Therizinosauridae, but this time with the inclusion of more taxa and specimens. The cladogram below shows the placement of Therizinosaurus within Therizinosauria according to Hartman and colleagues in 2019: Paleobiology Feeding In 1993, Dale A. Russell and Donald E. Russell analyzed Therizinosaurus and Chalicotherium and noted similarities in their respective body plans, even though they belong to different groups. Both genera had large, well-developed, and relatively strong arms; the pelvic girdle was robust and suited to a sitting posture; and the hindlimb (particularly the foot) structure was robust and shortened. They considered these adaptations to represent an example of convergent evolution—a condition where organisms evolve similar traits without necessarily being related—between extinct mammal and dinosaur genera. Moreover, this body plan is exhibited to some extent by modern-day gorillas. Because animals with this type of body plan are herbivores, the authors suggested this lifestyle for Therizinosaurus.
Russell and Russell reconstructed the feeding behavior of Therizinosaurus as being able to sit while consuming foliage from large shrubs and trees. The plant material would have been harvested with its hands, an action likely aided by its elongated neck, which reduced the force and effort required. As its arms were long enough to have touched the ground during certain stances, they could have helped the dinosaur rise from a prone position. If browsing in a bipedal stance, Therizinosaurus may have been able to reach even higher vegetation, supported by its short and robust feet. Whereas Chalicotherium was more suited to hooking branches, Therizinosaurus was better at pushing large clumps of foliage because of its long claws. It is also possible that Therizinosaurus was less capable of great precision in its movements than was Chalicotherium, due to the latter having more developed brain, dental and muscular capacities. Anthony R. Fiorillo and colleagues in 2018 suggested that Therizinosaurus had a reduced bite force that may have been useful for cropping vegetation or foraging, based on related therizinosaurids such as Erlikosaurus and Segnosaurus. As bite force decreased from primitive to derived therizinosaurians, Therizinosaurus, being a derived member, would have followed this evolutionary trend. Arms and claws function When the genus was first described by Maleev in 1954, he considered that the unusually large claws were used to harvest seaweed. This was, however, based on the assumption that the animal was a giant marine turtle. In 1970, Rozhdestvensky re-examined the claws and suggested they may have been specialized for opening termite mounds, or indicative of a frugivorous diet. Barsbold in 1976 suggested that the unusual claws of Therizinosaurus may have been employed to impale or to dig up loose terrain; however, he pointed out their notable fragility upon impact. In 1995, Lev A.
Nessov suggested the elongated claws were used for defense against predators, and that juveniles could have used their claws for arboreal locomotion, in a similar way to modern-day sloths or hoatzin chicks. In 2014, Lautenschlager tested the function of various therizinosaur hand claws—including those of Therizinosaurus—through digital simulations. Three different functional scenarios were simulated for each claw morphology with a force of 400 N applied in each scenario: scratch/digging; hook-and-pull; and piercing. Though the stocky claws of Alxasaurus resulted in low stress magnitudes, the stress was greater with the curvature and elongation of the claws in Falcarius, Nothronychus and Therizinosaurus. Some of the highest stress, deformation, and strain magnitudes were obtained in the scratch/digging scenario; the hook-and-pull scenario, in contrast, resulted in lower magnitudes, and the lowest were found in the piercing scenario. In particular, the overall stress was most pronounced in the unusual claws of Therizinosaurus, which may represent an exceptional case of elongation specialization. Lautenschlager noted that the more strongly curved and elongate claws of some therizinosaurian taxa were poorly functional in a scratch/digging fashion, indicating this as the most unlikely function. Though fossorial (digging) behavior has been reported in several dinosaur species, the large body size largely rules out the possibility of burrow digging in therizinosaurs. Nevertheless, any digging would more likely have been done with the foot claws because, as in other maniraptorans, feathers on the arms would have interfered with this function. Instead of being used for fossorial behavior, it is more likely that Therizinosaurus made use of its hands in a hook-and-pull fashion to pull or grasp vegetation within reach. This behavior would make therizinosaurs most similar to the extant anteaters and the extinct ground sloths.
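Lautenschlager's study relied on finite-element simulations, which are not reproduced here, but the qualitative result—that elongation drives up stress under the same 400 N load—can be illustrated with a first-order cantilever-beam approximation. All claw dimensions below are hypothetical.

```python
import math

# First-order cantilever approximation of claw bending stress, illustrating
# why a longer, side-to-side flattened claw sees higher stress under the
# same 400 N tip load used in Lautenschlager's simulations. This is NOT
# the study's finite-element model; all dimensions are hypothetical.

def bending_stress(force_n: float, length_m: float,
                   width_m: float, depth_m: float) -> float:
    """Peak bending stress (Pa) at the base of an elliptical-section beam
    loaded perpendicular to its axis at the tip (sigma = M * c / I)."""
    a = depth_m / 2.0                          # semi-axis in the bending plane
    b = width_m / 2.0                          # semi-axis side to side
    second_moment = math.pi * b * a**3 / 4.0   # I for an elliptical section
    moment = force_n * length_m                # bending moment at the base
    return moment * a / second_moment

# A short, stocky claw versus an elongated, flattened one:
stocky = bending_stress(400, 0.15, 0.03, 0.05)
elongated = bending_stress(400, 0.50, 0.02, 0.05)
print(f"stress ratio (elongated/stocky): {elongated / stocky:.1f}")
```

With these illustrative numbers the elongated claw experiences five times the basal stress of the stocky one, mirroring the study's finding that claw elongation, not load, is the main driver of fragility.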
Lautenschlager could neither confirm nor rule out that the hand claws could have been used for defense, intraspecific competition, stabilization by grasping tree trunks during high browsing, sexual dimorphism, or gripping mates during mating, given the lack of more specimens. He clarified that there is no evidence that the long claws of Therizinosaurus were used in active defense or attack; however, it is possible that these appendages had some role when facing a threat, such as intimidation. In 2018, Scott A. Lee and Zachary Richards, based on bending resistance measurements of several dinosaur humeri, found the humeri of carnosaur, therizinosaur, and tyrannosaur dinosaurs to be relatively resilient to stress. This increased ability to withstand stress supports the idea that Therizinosaurus and other therizinosaurians used their arms in a robust fashion that generated significant forces. They also suggested that the prominent claws of some members could have been used as a defense against predators, among various other functions. Unlike the generally light and agile ornithomimosaurs, which avoided predation with speed, Therizinosaurus and its relatives—generally slow runners to begin with—relied on their arms and claws to face threats. A 2023 study by Qin, Rayfield, Benton and colleagues on claw function in therizinosaurids and alvarezsaurids, which represent the extremes of theropod claw morphology, identified no mechanical function for the claws of Therizinosaurus, suggesting that its forelimb claws were merely decorative rather than functional, a result of peramorphic growth driven by increased body size. Paleoenvironment The remains of Therizinosaurus have been found in the well-known Nemegt Formation of the Gobi Desert.
Although this formation has never been dated radiometrically because of the discontinuity of exposures and the absence of datable volcanic rock facies, the vertebrate fossil assemblage suggests an early Maastrichtian stage, possibly about 70 million years ago. The Nemegt Formation is separated into three informal members. The lower member is mainly composed of fluvial sediments, while the middle and upper members consist of alluvial plain, paludal, lacustrine, and fluvial sedimentation. The environments that Therizinosaurus inhabited have been determined from the sedimentation across the formation, the δ13C levels preserved in the tooth enamel of many herbivorous dinosaurs, and the abundant petrified wood found across the formation. They consisted of large meandering and braided rivers with extensive woodlands composed of large, enclosed, canopy-like forests of araucarias that supported diverse herbivorous dinosaurs like Therizinosaurus. The climate of the formation was relatively temperate (mean annual temperatures between 7.6 and 8.7 °C), characterized by monsoons with cold, dry winters and hot, rainy summers, with mean annual precipitation between 775 mm and 835 mm that was subject to prominent seasonal fluctuations. The wet environments of the Nemegt Formation may have acted as an oasis-like area that attracted oviraptorids from arid neighbouring localities such as the Barun Goyot Formation, as evidenced by the presence of Nemegtomaia in both regions. It has previously been suggested that the Nemegt Formation may have been similar to the modern-day Okavango Delta, which is also composed of mesic (well-watered) surroundings.
The paleofauna of the Nemegt Formation was diverse and rich, composed of other dinosaurs such as the alvarezsaurs Mononykus and Nemegtonykus; the deinonychosaurs Adasaurus and Zanabazar; the ornithomimosaurs Anserimimus and Gallimimus; the oviraptorosaurs Avimimus, Gigantoraptor, Rinchenia and Elmisaurus; the tyrannosaurids Alectrosaurus, Alioramus and possibly Bagaraatan; the ankylosaurids Saichania and Tarchia; and the pachycephalosaurids Homalocephale and Prenocephale. The Nemegt megafauna included the ornithomimosaur Deinocheirus; the hadrosaurids Barsboldia and Saurolophus; the titanosaurs Nemegtosaurus and Opisthocoelicaudia; and the apex predator Tarbosaurus. Additional paleofauna includes birds like Judinornis and Teviornis; abundant freshwater ostracods at numerous localities; fish; terrestrial and aquatic turtles such as Mongolochelys and Nemegtemys; and the crocodylomorph Paralligator. As the sediments in which Therizinosaurus remains have been found are fluvial-based, it is suggested that it may have preferred to forage in riparian areas. Therizinosaurus, due to its prominent height and high-browsing lifestyle, was one of the tallest dinosaurs in the Nemegt Formation paleofauna. It probably had no significant competition with other herbivores over foliage; however, niche partitioning with the titanosaurs of the formation—also long-necked dinosaurs—could have occurred. If Therizinosaurus was a grazer, on the other hand, it would have competed with contemporary grazers such as Saurolophus. Although small predators like dromaeosaurids and troodontids did not represent a threat to Therizinosaurus, the only other predator rivaling it in size was Tarbosaurus. Because of the greater height of Therizinosaurus, a large Tarbosaurus may not have been able to bite any higher than the thighs or belly of a standing adult Therizinosaurus. The elongated claws may have been useful for self-defense or to intimidate the predator in such a situation.
It is also possible that Therizinosaurus competed for various other resources with Deinocheirus, Saurolophus, Nemegtosaurus and Opisthocoelicaudia.
Helianthus
Helianthus () is a genus comprising around 70 species of annual and perennial flowering plants in the daisy family Asteraceae, commonly known as sunflowers. Except for three South American species, the species of Helianthus are native to North America and Central America. The best-known species is the common sunflower (Helianthus annuus). This and other species, notably the Jerusalem artichoke (H. tuberosus), are cultivated in temperate regions and some tropical regions, as food crops for humans, cattle, and poultry, and as ornamental plants. The species H. annuus typically grows during the summer and into early fall, with the peak growth season being mid-summer. Several perennial Helianthus species are grown in gardens, but have a tendency to spread rapidly and can become aggressive. On the other hand, the whorled sunflower, Helianthus verticillatus, was listed as an endangered species in 2014 when the U.S. Fish and Wildlife Service issued a final rule protecting it under the Endangered Species Act. The primary threats to this species are industrial forestry and pine plantations in Alabama, Georgia, and Tennessee. They grow to and are primarily found in woodlands, adjacent to creeks, and in moist, prairie-like areas. The common sunflower is the national flower of Ukraine, where it has been cultivated for several centuries. Description Sunflowers are usually tall annual or perennial plants that in some species can grow to a height of or more. Each "flower" is actually a disc made up of tiny flowers that together form a larger false flower, which better attracts pollinators. The plants bear one or more wide, terminal capitula (flower heads made up of many tiny flowers), with bright yellow ray florets (small flowers at the edge of a flower head) at the outside and yellow or maroon (brownish-red) disc florets inside. Several ornamental cultivars of H. annuus have red-colored ray florets; all of them stem from a single original mutant.
While the majority of sunflowers are yellow, there are branching varieties in other colors, including orange, red, and purple. The petiolate leaves are dentate and often sticky. The lower leaves are opposite, ovate, or often heart-shaped. The rough and hairy stem is branched in the upper part in wild plants, but is usually unbranched in domesticated cultivars. This genus is distinguished technically by the fact that the ray florets (when present) are sterile, and by the presence on the disk flowers of a pappus of two awn-like scales that are caducous (that is, easily detached and falling at maturity). Some species also have additional shorter scales in the pappus, and one species lacks a pappus entirely. Another technical feature that distinguishes the genus more reliably, but requires a microscope to see, is the presence of a prominent, multicellular appendage at the apex of the style. Further, the florets of a sunflower are arranged in a natural spiral. Variability is seen among the perennial species that make up the bulk of those in the genus. Some have most or all of the large leaves in a rosette at the base of the plant and produce a flowering stem that has leaves that are reduced in size. Most of the perennials have disk flowers that are entirely yellow, but a few have disk flowers with reddish lobes. One species, H. radula, lacks ray flowers altogether. Overall, the macroevolution of Helianthus has been driven by multiple biotic and abiotic factors and has influenced various aspects of floral morphology. Helianthus species are used as food plants by the larvae of many lepidopterans. Growth stages The growth of a sunflower depends strictly on its genetic makeup and background. Additionally, the season in which it is planted affects its development; planting seasons tend to be in the middle of summer and the beginning of fall.
Sunflower development is classified by a series of vegetative stages and reproductive stages that can be determined by identifying the heads or main branch of a single head or branched head. Facing the Sun (heliotropism) Before blooming, Helianthus plant heads tilt upwards during the day to face the Sun. This movement is referred to as heliotropism, which continues for a short time when flower buds form and young Helianthus heads track the Sun. At night, the flower heads reorient themselves to face east in anticipation of sunrise. Sunflowers move back to their original position between 3 a.m. and 6 a.m., and the leaves follow about an hour later. By the time they are mature and reach anthesis, Helianthus generally stop moving and remain facing east, which lets them be warmed by the rising Sun. Historically, this has led to controversy over whether or not Helianthus is heliotropic, as many scientists have failed to observe movement when studying plants that have already bloomed. This is notably different from heliotropism in leaves, as the moving mechanism for leaves lies in the pulvinus. Since flowers do not have pulvini, the movement is caused instead by differences in the growth rate of the stem. Faster growth on the east side of the stem gradually pushes the flower from east to west during the daytime, matching the Sun as it rises in the east and sets in the west. At night, the growth rate is higher on the west side of the stem, which gradually pushes the flower back from west to east. In addition, it is not the whole plant that changes direction to face the Sun, but the flower itself that bends to be illuminated by the Sun's rays. The heliotropic movement is caused by growth on the side of the stem facing away from the Sun, driven by the accumulation of growth hormones during Sun exposure. 
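The day/night growth asymmetry described above can be illustrated with a toy simulation: daytime elongation of the east side of the stem tips the head westward, and nighttime elongation of the west side tips it back east. All rates, limits, and day lengths below are illustrative assumptions, not measured values from the source.

```python
# Toy model of stem-growth-driven heliotropism (all numbers illustrative).
# Orientation angle: negative = leaning east, positive = leaning west.

def simulate_orientation(hours=48, day_rate=5.0, night_rate=5.0):
    """Track flower orientation (degrees) over repeated 14 h days
    and 10 h nights, clamped to a plausible +/-45 degree range."""
    angle = -45.0  # start the morning facing east
    history = []
    for h in range(hours):
        if h % 24 < 14:  # daytime: east side grows faster -> drift west
            angle = min(angle + day_rate, 45.0)
        else:            # nighttime: west side grows faster -> drift east
            angle = max(angle - night_rate, -45.0)
        history.append(angle)
    return history

track = simulate_orientation()
```

The resulting trace oscillates west by day and back east overnight, mirroring the observed daily cycle of young flower heads.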
Heliotropism persists on cloudy days when the sun is not shining brightly, meaning that the movement is endogenous as a trained and continuous process. However, flower movement does not occur during long periods of rain or clouds. It also does not occur in a growth chamber when exposed to 16 hours of light or in greenhouses, suggesting that the plants require a directional, moving light source. Helianthus can also discriminate between different types of light. When exposed to different light frequencies, the hypocotyls will bend toward blue light but not red light, depending on the quality of the light source. It is the circadian rhythms and the differences of the stem growth rate that work together and cause the heliotropism of the Helianthus. This is important for attracting pollinators and increasing growth metabolism. Future studies are required to identify the exact physiological basis and cellular mechanism for this behavior. Taxonomy Helianthus is derived from Greek hēlios "sun" and ánthos "flower", because its round flower heads in combination with the ligules look like the Sun. There are many species recognized in the genus: Helianthus agrestis Pollard – southeastern sunflower – Florida, Georgia Helianthus ambiguus Britt. – Wisconsin, Michigan, Ohio, New York Helianthus angustifolius L. – swamp sunflower – Texas, northern Florida to southern Illinois, Long Island, New York Helianthus annuus L. – common sunflower, girasol – most of United States + Canada Helianthus anomalus S.F.Blake – western sunflower – Nevada, Utah, Arizona, New Mexico Helianthus argophyllus Torr. & A.Gray – silverleaf sunflower – Texas, North Carolina, Florida Helianthus arizonensis R.C.Jacks. – Arizona sunflower – Arizona, New Mexico Helianthus atrorubens L. 
– purpledisk sunflower – Louisiana, Alabama, Georgia, Florida, South Carolina, North Carolina, Tennessee, Kentucky, Virginia Helianthus bolanderi A.Gray – serpentine sunflower – California, Oregon Helianthus × brevifolius E.Watson – Texas, Indiana, Ohio Helianthus californicus DC. – California sunflower – California Helianthus carnosus Small – lakeside sunflower – Florida Helianthus ciliaris DC. – Texas blueweed – United States: Washington, California, Arizona, New Mexico, Nevada, Utah, Texas, Oklahoma, Colorado, Kansas, Illinois; Mexico: Tamaulipas, Coahuila, Chihuahua, Sonora Helianthus cinereus Small – Missouri, Kentucky, Indiana, Ohio Helianthus coloradensis Cockerell – prairie sunflower – Colorado, New Mexico Helianthus cusickii A.Gray – Cusick's sunflower – Washington, Oregon, California, Idaho, Nevada Helianthus debilis Nutt. – cucumberleaf sunflower – Texas to Maine, Mississippi Helianthus decapetalus L. – thinleaf sunflower – eastern United States; Ontario, Quebec Helianthus deserticola Heiser – desert sunflower – Arizona, Nevada, Utah Helianthus devernii T.M.Draper – red rock sunflower – Nevada †Helianthus diffusus Sims – Missouri† Helianthus dissectifolius R.C.Jacks. – Chihuahua, Durango Helianthus divaricatus L. – woodland sunflower or rough woodland sunflower – eastern United States; Ontario, Quebec Helianthus × divariserratus R.W.Long Michigan, Indiana, Ohio, Connecticut Helianthus × doronicoides Lam. – Texas, Oklahoma, Arkansas, Missouri, Iowa, Minnesota, Illinois, Kentucky, Indiana, Ohio, Pennsylvania, Michigan, New Jersey, Virginia Helianthus eggertii Small – Alabama, Kentucky, and Tennessee Helianthus exilis A.Gray – California Helianthus floridanus A.Gray ex Chapm. – Florida sunflower – Louisiana, Alabama, Georgia, Florida, South Carolina, North Carolina Helianthus giganteus L. 
– giant sunflower – eastern United States; most of Canada Helianthus glaucophyllus D.M.Sm – whiteleaf sunflower – Tennessee, South Carolina, North Carolina Helianthus × glaucus Small – scattered locales in southeastern United States Helianthus gracilentus A.Gray – slender sunflower – California Helianthus grosseserratus M.Martens – sawtooth sunflower – Great Plains, Great Lakes, Ontario, Quebec Helianthus heterophyllus Nutt. – variableleaf sunflower – Coastal plain of Texas to North Carolina Helianthus hirsutus Raf. – hairy sunflower – central and eastern United States, Ontario Helianthus × intermedius R.W.Long – intermediate sunflower – scattered locales in United States Helianthus laciniatus A.Gray – alkali sunflower – United States: Arizona, New Mexico, Texas; Mexico: Coahuila, Nuevo León Helianthus × laetiflorus Pers. – cheerful sunflower, mountain sunflower – scattered in eastern and central United States; Canada Helianthus laevigatus Torr. & A.Gray – smooth sunflower – Georgia, South Carolina, North Carolina, Virginia, Maryland, West Virginia Helianthus lenticularis Douglas ex Lindl. Minnesota to North Dakota, Idaho, Missouri, Texas Helianthus longifolius Pursh – longleaf sunflower – Alabama, Georgia, North Carolina Helianthus × luxurians (E.Watson) E.Watson – Great Lakes region Helianthus maximiliani Schrad. – Maximillian sunflower – much of United States and Canada Helianthus membranifolius Poir. – Cayenne Island French Guiana Helianthus microcephalus Torr. & A.Gray – eastern United States Helianthus mollis Lam. – downy sunflower, ashy sunflower – Ontario, eastern and central United States Helianthus multiflorus L. – manyflower sunflower – Ohio Helianthus navarri Phil. – Chile Helianthus neglectus Heiser – neglected sunflower – New Mexico, Texas Helianthus niveus (Benth.) Brandegee – showy sunflower – United States: California, Arizona; Mexico: Baja California, Baja California Sur Helianthus nuttallii Torr. 
& A.Gray – western and central United States, Canada Helianthus occidentalis Riddell – fewleaf sunflower, western sunflower – Great Lakes region, scattered in southeastern United States Helianthus × orgyaloides Cockerell – Colorado, Kansas Helianthus paradoxus Heiser – paradox sunflower – Utah, New Mexico, Texas Helianthus pauciflorus Nutt. – stiff sunflower – central United States, Canada Helianthus petiolaris Nutt. – prairie sunflower, lesser sunflower – much of United States, Canada Helianthus porteri (A.Gray) Pruski – Porter's sunflower – Alabama, Georgia, South Carolina, North Carolina Helianthus praecox Engelm. & A.Gray Texas sunflower – Texas †Helianthus praetermissus  – New Mexico sunflower – New Mexico† Helianthus pumilus Nutt. – little sunflower – Colorado, Wyoming, Montana, Utah, Idaho Helianthus radula (Pursh) Torr. & A.Gray – rayless sunflower – Louisiana, Mississippi, Alabama, Georgia, South Carolina, Florida Helianthus resinosus Small – resindot sunflower – Mississippi, Alabama, Georgia, South Carolina, North Carolina, Florida Helianthus salicifolius A.Dietr. – willowleaf sunflower – Texas, Oklahoma, Kansas, Missouri, Illinois, Wisconsin, Ohio, Pennsylvania, New York Helianthus sarmentosus Rich. – French Guiana Helianthus scaberrimus Elliott – South Carolina Helianthus schweinitzii Torr. & A.Gray – Schweinitz's sunflower – South Carolina, North Carolina Helianthus silphioides Nutt. – rosinweed sunflower – Lower Mississippi Valley Helianthus simulans E.Watson – muck sunflower – southeastern United States Helianthus smithii Heiser – Smith's sunflower – Alabama, Georgia, Tennessee Helianthus speciosus Hook. – Michoacán Helianthus strumosus L. – eastern and central United States, Canada Helianthus subcanescens (A.Gray) E.Watson – Manitoba, north-central United States Helianthus subtuberosus Bourg. Helianthus tuberosus L. 
– Jerusalem artichoke, sunchoke, earth-apple, topinambur – much of United States and Canada Helianthus verticillatus Small – whorled sunflower – Alabama, Georgia, Tennessee Formerly included The following species were previously included in the genus Helianthus. Flourensia thurifera (Molina) DC. (as H. thurifer Molina) Flourensia thurifera (Molina) DC. (as H. navarri) Phil. Helianthella quinquenervis (Hook.) A.Gray (as H. quinquenervis Hook.) Helianthella uniflora (Nutt.) Torr. & A.Gray (as H. uniflorus Nutt.) Pappobolus imbaburensis (Hieron.) Panero (as H. imbaburensis Hieron.) Viguiera procumbens (Pers.) S.F.Blake (as H. procumbens Pers.) Uses The seeds of H. annuus are used as human food. Most cultivars of sunflower are variants of H. annuus, but four other species (all perennials) are also domesticated. This includes H. tuberosus, the Jerusalem artichoke, which produces edible tubers. There are many species in the sunflower genus Helianthus, and many species in other genera that may be called sunflowers. The Maximillian sunflower (Helianthus maximiliani) is one of 38 species of perennial sunflower native to North America. The Land Institute and other breeding programs are currently exploring the potential for these as a perennial seed crop. The sunchoke (Jerusalem artichoke or Helianthus tuberosus) is related to the sunflower, another example of perennial sunflower. The Mexican sunflower is Tithonia rotundifolia. It is only very distantly related to North American sunflowers. False sunflower refers to plants of the genus Heliopsis. Ecology Sunflowers have been proven to be excellent plants to attract beneficial insects, including pollinators. Helianthus spp. are a nectar producing flowering plant that attract pollinators and parasitoids which reduce the pest populations in nearby crop vegetation. 
Sunflowers attract beneficial pollinators (e.g., honey bees) as well as predatory insects that feed on and control populations of parasitic pests that could be harmful to the crops. Predacious insects are attracted to sunflowers soon after planting; once Helianthus spp. reach six inches and produce flowers, they begin to attract more pollinators. Distance between sunflower rows and crop vegetation plays an important role in this phenomenon; closer proximity to the crops is hypothesized to increase insect attraction. In addition to pollinators of Helianthus spp., other factors such as abiotic stress, florivory, and disease also contribute to the evolution of floral traits. These selective pressures, which stem from several biotic and abiotic factors, are associated with habitat environmental conditions, all of which play a role in the overall morphology of the sunflowers' floral traits. An ecosystem is composed of both biotic factors (living elements of an ecosystem, such as plants, animals, fungi, protists, and bacteria) and abiotic factors (non-living elements, such as air, soil, water, light, salinity, and temperature). Two biotic factors are thought to explain the evolution of larger sunflowers and their presence in drier environments. First, selection by pollinators is thought to have increased sunflower size in drier environments: drier environments typically have fewer pollinators, so sunflowers had to increase their floral display size in order to attract them. Second, pressure from florivory and disease favors smaller flowers in habitats that have a more moderate supply of moisture (mesic habitats). 
Wetter environments usually have denser vegetation, more herbivores, and more surrounding pathogens. As larger flowers are typically more susceptible to disease and florivory, smaller flowers may have evolved in wetter environments, which in turn explains the evolution of larger sunflowers in drier environments.
https://en.wikipedia.org/wiki/Tugtupite
Tugtupite
Tugtupite is a beryllium aluminium tectosilicate. It also contains sodium and chlorine and has the formula Na4AlBeSi4O12Cl. Tugtupite is a member of the silica-deficient feldspathoid mineral group. It occurs in highly alkaline intrusive igneous rocks. Tugtupite is tenebrescent, sharing much of its crystal structure with sodalite, and the two minerals are occasionally found together in the same sample. Tugtupite occurs as vitreous, transparent to translucent masses of tetragonal crystals and is commonly found in white, pink, and crimson, and even blue and green. It has a Mohs hardness of 4 and a specific gravity of 2.36. It fluoresces crimson under ultraviolet radiation. It was first found in 1962 at Tugtup agtakôrfia in the Ilimaussaq intrusive complex of southwest Greenland. It has also been found at Mont-Saint-Hilaire in Quebec and in the Lovozero Massif of the Kola Peninsula in Russia. The name is derived from the Greenlandic Inuit word for reindeer (tuttu), and means "reindeer blood". The U.S. Geological Survey reports that in Nepal, tugtupite (as well as jasper and nephrite) has been found extensively in most of the rivers from the Bardia to the Dang. It is used as a gemstone.
https://en.wikipedia.org/wiki/Curry%20tree
Curry tree
The curry tree or Bergera koenigii (syn. Murraya koenigii) is a tropical and sub-tropical tree in the family Rutaceae (the rue family, which includes rue, citrus, and satinwood), native to Asia. The plant is also sometimes called sweet neem, though M. koenigii is in a different family from neem, Azadirachta indica, which is in the related family Meliaceae. Its leaves, known as curry leaves and also referred to as sweet neem, are used in many dishes of the Indian subcontinent. Description It is a small tree, growing tall, with a trunk up to in diameter. The aromatic leaves are pinnate, with 11–21 leaflets, each leaflet long and broad. The plant produces small white flowers which can self-pollinate to produce small shiny-black drupes containing a single, large viable seed. The berry pulp is edible, with a sweet flavor. Distribution and habitat The tree is native to the Indian subcontinent. Commercial plantations have been established in India, and more recently in Australia and southern Spain (Costa del Sol). It grows best in well-drained soil that does not dry out, in areas with full sun or partial shade, preferably away from the wind. Growth is more robust when temperatures are at least . Etymology and common names The word "curry" is borrowed from the Tamil word kari (கறி, literally "blackened"), the name of the plant being associated with the perceived blackness of the tree's leaves. Records of the leaves being used are found in Tamil literature dating back to the 1st to 4th centuries CE. Britain had spice trades with the ancient Tamil region. The tree was introduced to England in the late 16th century. The species Bergera koenigii was first published by Carl Linnaeus in Mantissa Plantarum vol. 2 on page 563 in 1767. It was formerly known as Murraya koenigii, which was first published in Syst. Veg., ed. 16. 2: 315 in 1825. Some sources still recognise this as the accepted name. 
The former generic name, Murraya, derives from Johan Andreas Murray (1740–1791), who studied botany under Carl Linnaeus and became a professor of medicine with an interest in medicinal plants at the University of Göttingen, Germany. The specific name, koenigii, derives from the last name of botanist Johann Gerhard König. The curry tree is also called curry leaf tree or curry bush, among numerous local names, depending on the country. It is known by a variety of names in the Indian subcontinent and South Asia itself. Some of its alternative names are: Hindi: करी/करीयापत्ता का पेड़ (kari/kariyāpattā ka peṛ) Punjabi: ਕਡੀ/ਕੜੀ ਪੱਤੀ ਦਾ ਰੁਖ (kaḍi/kaṛi patti dā rukh) Gujarati: મીઠો લીંબડો નુ બૃક્ષ/ઝાડ (miṭho limbḍo nu bruksh/jhāḍ) Marathi: कढीपानाचे/कढीलिंबाचे झाड (kaḍhīpānache /kaḍhīlimbāche jhāḍ) Bengali: করীফুুলীর/কারীপাতার গাছ (kariphulir /kāripātār gāchh) Odia: ଭୃଷଙ୍ଗର/ଭୃଷମର ଗଛ (bhrusungara/bhrusamara gachha) Assamese: নৰসিংহৰ গাছ (narahingor gās) Nepali: करीपात को रूख (karipāt ko rūkh) Meitei: ꯀꯔꯤ ꯄꯥꯝꯕꯤ (kari pambi) Kannada: ಕರಿಬೇವಿನ ಮರ (karivēvina mara) Tamil: கறிவேப்பிலை மரம் (karivēppilai maram) Telugu: కరివేపాకు చెట్టు (karivēpāku cheṭṭu) Malayalam: കറിവേപ്പില മരം (karivēppila maram) Tulu: ಬೇವುಡಿರೇ ಮರ (bēvudirae mara) Sinhala: කරපිංච ගස (karapincha gasa) Burmese:ဟင်းရွက်သစ်ပင် (hainnrwat saitpain) Uses Culinary The fresh leaves are an indispensable part of Indian cuisine and Indian traditional medicines. They are most widely used in southern and west coast Indian cooking, usually fried along with vegetable oil, mustard seeds and chopped onions in the first stage of the preparation. They are also used to make thoran, vada, rasam, and kadhi; additionally, they are often dry-roasted (and then ground) in the preparation of various powdered spice blends (masalas), such as South Indian sambar masala, the main seasoning in the ubiquitous vegetable stew sambar. 
The curry leaves are also added as flavoring to masala dosa, the South Indian potato-filled crepes made with a mildly probiotic, fermented lentil and rice batter. The fresh leaves are valued as a seasoning in the cuisines of South and Southeast Asia. In Cambodia, curry leaves (, ) are roasted and used as an ingredient for samlor machu kroeung. In Java, the leaves are often stewed to flavor gulai. Though the leaves are available dried, their aroma and flavor are greatly inferior. In almost all cases, the leaves are freshly plucked from a garden only a few hours or even minutes before they are used. The oil can be extracted and used to make scented soaps. The leaves of Murraya koenigii are also used as a herb in Ayurvedic and Siddha medicine, in which they are believed to possess anti-disease properties, but there is no high-quality clinical evidence for such effects. The berries are edible, but the seeds may be toxic to humans. Propagation Seeds must be ripe and fresh to plant; dried or shriveled fruits are not viable. It is recommended to peel the skin off before planting. One can plant the whole fruit, but it is best to remove the pulp before planting in a potting mix that is kept moist but not wet. Stem cuttings can also be used for propagation. In the Indian subcontinent, the plant is a fixture in almost every household. It is mainly planted privately, but is also cultivated commercially to a small extent. Because the leaves must be fresh when used, they are often traded through small neighborhood or citywide networks of farmers, who regularly supply fresh leaves to stall vendors. Chemical constituents Compounds found in curry tree leaves, stems, bark, and seeds include cinnamaldehyde and numerous carbazole alkaloids, including mahanimbine, girinimbine, and mahanine. Nutritionally, the leaves are a rich source of carotenoids, beta-carotene, calcium, and iron.
https://en.wikipedia.org/wiki/Pumping%20station
Pumping station
Pumping stations, also called pumphouses, are public utility buildings containing pumps and equipment for pumping fluids from one place to another. They are critical in a variety of infrastructure systems, such as water supply, drainage of low-lying land, canals and removal of sewage to processing sites. A pumping station is an integral part of a pumped-storage hydroelectricity installation. Pumping stations are designed to move water or sewage from one location to another, overcoming gravitational challenges, and are essential for maintaining navigable canal levels, supplying water, and managing sewage and floodwaters. In canal systems, pumping stations help replenish water lost through lock usage and leakage, ensuring navigability. Similarly, in land drainage, stations pump water to prevent flooding in areas below sea level, a concept pioneered during the Victorian era in places like The Fens in the UK. The introduction of "package pumping stations" has modernized drainage systems, allowing a compact, efficient solution for areas where gravity drainage is impractical. Water pumping stations are differentiated by their applications, such as sourcing from wells, raw water pumping, and high service pumping, each designed to meet specific demand projections and customer needs. Wastewater pumping stations, on the other hand, are engineered to handle sewage, with designs that ensure reliability and safety, minimizing environmental impacts from overflows. Innovations in pump technology and station design have led to the development of submersible pump stations, which are more compact and safer, effectively reducing the footprint and visibility of sewage management infrastructure. Electronic controllers have enhanced the efficiency and monitoring capabilities of pumping stations, essential for modern systems. 
Pumped-storage schemes represent a critical use of pumping stations, providing a method for energy storage and generation by moving water between reservoirs at different elevations, highlighting the versatility and importance of pumping stations across sectors. Some pumping stations have been recognized for their architectural and historical significance; the Claverton and Crofton pumping stations, for example, are preserved as museum attractions. Examples such as land drainage in the Netherlands, water supply in Hong Kong, and agricultural drainage in Iraq underscore the vital role these facilities play in supporting modern infrastructure, environmental management, and energy storage. Canal water supply In countries with canal systems, pumping stations are also frequent. Because of the way the system of canal locks works, water is lost from the upper part of a canal each time a vessel passes through. Also, most lock gates are not watertight, so some water leaks from the higher levels of the canal to those lower down. The water has to be replaced, or eventually the upper levels of the canal would not hold enough water to be navigable. Canals are usually fed by diverting water from streams and rivers into the upper parts of the canal, but if no suitable source is available, a pumping station can be used to maintain the water level. An example of a canal pumping station is the Claverton Pumping Station on the Kennet and Avon Canal in southern England, United Kingdom. This pumps water from the nearby River Avon to the canal using pumps driven by a waterwheel which is itself powered by the river. Where no external water supply is available, back pumping systems may be employed. Water is extracted from the canal below the lowest lock of a flight and is pumped back to the top of the flight, ready for the next boat to pass through. Such installations are usually small. Land drainage When low-lying areas of land are drained, the general method is to dig drainage ditches. 
However, if the area is below sea level then it is necessary to pump the water upwards into water channels that finally drain into the sea. The Victorians understood this concept, and in the United Kingdom they built pumping stations with water pumps, powered by steam engines to accomplish this task. In Lincolnshire, large areas of wetland at sea level, called The Fens, were turned into rich arable farmland by this method. The land is full of nutrients because of the accumulation of sedimentary mud that created the land initially. Elsewhere, pumping stations are used to remove water that has found its way into low-lying areas as a result of leakage or flooding (in New Orleans, for example). Package pumping station In more recent times, a "package pumping station" provides an efficient and economic way of installing a drainage system. They are suitable for mechanical building services collection and pumping of liquids like surface water, wastewater or sewage from areas where drainage by gravity is not possible. A package pumping station is an integrated system, built in a housing manufactured from strong, impact-resistant materials such as precast concrete, polyethylene, or glass-reinforced plastic. The unit is supplied with internal pipework fitted, pre-assembled ready for installation into the ground, after which the submersible pumps and control equipment are fitted. Features may include controls for fully automatic operation; a high-level alarm indication, in the event of pump failure; and possibly a guide-rail/auto-coupling/pedestal system, to permit easy removal of pumps for maintenance. Traditional site constructed systems have the valve vault components installed in a separate structure. Having two structural components can lead to potentially serious site problems such as uneven settling between components which results in stress on, and failure of the pipes and connections between components. 
The development of a packaged pump station system combined all components into a single housing which not only eliminates uneven settling issues, but pre-plumbing and outfitting each unit prior to installation can reduce the cost and time involved with civil work and site labor. Water pumping stations Water pumping stations are differentiated from wastewater pumping stations in that they do not have to be sized to account for high peak flow rates. They have five general categories: Source (such as a well) pump discharging into an elevated tank Raw water pumping from a river or lake In-line booster pumping into an elevated tank High service pumping of finished water at high pressure Distributed system booster without a storage tank in the piping system Water pumping stations are constructed in areas in which the demand or projected demand is reasonably defined, and is dependent on a combination of customer needs and fire flow requirements. Average annual per-capita water consumption, peak hour, and maximum daily can vary greatly due to factors such as climate, income levels, population, and the proportions of residential, commercial, and industrial users. Wastewater pumping stations Pumping stations in sewage collection systems are normally designed to handle raw sewage that is fed from underground gravity pipelines (pipes that are sloped so that a liquid can flow in one direction under gravity). Sewage is fed into and stored in a pit, commonly known as a wet well. The well is equipped with electrical instrumentation to detect the level of sewage present. When the sewage level rises to a predetermined point, a pump will be started to lift the sewage upward through a pressurized pipe system called a sewer force main if the sewage is transported some significant distance. The pumping station may be called a lift station if the pump merely discharges into a nearby gravity manhole. 
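The "pump on" and "pump off" level settings described above amount to simple hysteresis control: the pump starts when the wet well is nearly full and runs until the level is drawn down, avoiding rapid motor cycling. A minimal sketch follows; the threshold values and function name are illustrative assumptions, not figures from the source.

```python
# Sketch of wet-well level control with hysteresis (thresholds assumed).
PUMP_ON_LEVEL = 2.5   # metres of sewage in the wet well (illustrative)
PUMP_OFF_LEVEL = 0.5  # metres (illustrative)

def update_pump(level_m, pump_running):
    """Return the new pump state for the measured wet-well level."""
    if not pump_running and level_m >= PUMP_ON_LEVEL:
        return True           # well nearly full: start lifting sewage
    if pump_running and level_m <= PUMP_OFF_LEVEL:
        return False          # well drawn down: stop to limit starts
    return pump_running       # in between: keep state (hysteresis band)

# The pump stays off while the level rises, switches on at 2.5 m, and
# stays on until the level falls back to 0.5 m.
state = False
for level in (0.4, 1.8, 2.6, 1.2, 0.5, 0.3):
    state = update_pump(level, state)
```

The wide band between the two thresholds is what minimizes pump starts and stops, as the text notes, while keeping retention time short enough that the sewage does not go septic.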
From here the cycle starts all over again until the sewage reaches its point of destination—usually a treatment plant. By this method, pumping stations are used to move waste to higher elevations. In the case of high sewage flows into the well (for example during peak flow periods and wet weather) additional pumps will be used. If this is insufficient, or in the case of failure of the pumping station, a backup in the sewer system can occur, leading to a sanitary sewer overflow—the discharge of raw sewage into the environment. Sewage pumping stations are typically designed so that one pump or one set of pumps will handle normal peak flow conditions. Redundancy is built into the system so that in the event that any one pump is out of service, the remaining pump or pumps will handle the designed flow. The storage volume of the wet well between the "pump on" and "pump off" settings is designed to minimize pump starts and stops, but is not so long a retention time as to allow the sewage in the wet well to go septic. Sewage pumps are almost always end-suction centrifugal pumps with open impellers and are specially designed with a large open passage so as to avoid clogging with debris or winding stringy debris onto the impeller. A four pole or six pole AC induction motor normally drives the pump. Rather than provide large open passages, some pumps, typically smaller sewage pumps, also macerate any solids within the sewage breaking them down into smaller parts which can more easily pass through the impeller. The interior of a sewage pump station is a very dangerous place. Poisonous gases, such as methane and hydrogen sulfide, can accumulate in the wet well; an ill-equipped person entering the well would be overcome by fumes very quickly. Any entry into the wet well requires the correct confined space entry method for a hazardous environment. 
To minimize the need for entry, the facility is normally designed to allow pumps and other equipment to be removed from outside the wet well. Traditional sewage pumping stations incorporate both a wet well and a "dry well". Often these are the same structure separated by an internal divide. In this configuration pumps are installed below ground level on the base of the dry well so that their inlets are below water level on pump start, priming the pump and also maximising the available NPSH. Although nominally isolated from the sewage in the wet well, dry wells are underground, confined spaces and require appropriate precautions for entry. Further, any failure or leakage of the pumps or pipework can discharge sewage directly into the dry well with complete flooding not an uncommon occurrence. As a result, the electric motors are normally mounted above the overflow, top water level of the wet well, usually above ground level, and drive the sewage pumps through an extended vertical shaft. To protect the above ground motors from weather, small pump houses are normally built, which also incorporate the electrical switchgear and control electronics. These are the visible parts of a traditional sewage pumping station although they are typically smaller than the underground wet and dry wells. More modern pumping stations do not require a dry well or pump house and usually consist only of a wet well. In this configuration, submersible sewage pumps with closely coupled electric motor are mounted within the wet well itself, submerged within the sewage. Submersible pumps are mounted on two vertical guide rails and seal onto a permanently fixed "duckfoot", which forms both a mount and also a vertical bend for the discharge pipe. For maintenance or replacement, submersible pumps are raised by a chain off of the duckfoot and up the two guide rails to the maintenance (normally ground) level. 
Reinstalling the pumps simply reverses this process, with the pump being remounted on the guide rails and lowered onto the duckfoot, where the weight of the pump reseals it. As the motors are sealed and weather is not a concern, no above-ground structures are required, excepting a small kiosk to contain the electrical switchgear and control systems. Due to the much reduced health and safety concerns, and their smaller footprint and visibility, submersible-pump sewage pumping stations have almost completely superseded traditional sewage pumping stations. Further, a refit of a traditional pumping station usually involves converting it into a modern pumping station by installing submersibles in the wet well, demolishing the pump house and retiring the dry well by either stripping it, or knocking down the internal partition and merging it with the wet well. Electronic controllers Pump manufacturers have long designed and manufactured electronic devices to control and supervise pumping stations. Today it is also very common to use a programmable logic controller (PLC) or remote terminal unit (RTU) for such work, but the specialised experience needed to solve certain problems can make a purpose-built pump controller the easier choice. RTUs are very helpful for remote monitoring of each pumping station from a centralized control room with SCADA (Supervisory Control and Data Acquisition) systems. This setup can be used to monitor pump faults, levels, and other alarms and parameters, making operation more efficient. Pumped-storage schemes A pumped-storage scheme is a type of power station for storing and producing electricity to supply high peak demands by moving water between reservoirs at different elevations. Typically, water is channeled from a high-level reservoir to a low-level reservoir, through turbine generators that generate electricity. This is done when the station is required to generate power.
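The amount of electricity such a scheme can return is set by the head between the reservoirs, the volume of water moved, and the round-trip efficiency. A back-of-the-envelope sketch, with all figures hypothetical:

```python
rho = 1000.0       # density of water, kg/m^3
g = 9.81           # gravitational acceleration, m/s^2
head = 300.0       # height difference between reservoirs, m (hypothetical)
volume = 5.0e6     # usable volume of the upper reservoir, m^3 (hypothetical)
efficiency = 0.75  # assumed round-trip efficiency of the pump/generate cycle

# Potential energy of the raised water, derated by the cycle losses
energy_j = rho * g * head * volume * efficiency
energy_gwh = energy_j / 3.6e12  # joules -> gigawatt-hours
print(energy_gwh)               # roughly 3 GWh recoverable
```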
During low-demand periods, such as overnight, the generators are reversed to become pumps that move the water back up to the top reservoir. List of pumping stations There are countless thousands of pumping stations throughout the world. The following is a list of those described in this encyclopedia. United Kingdom In the UK, during the Victorian Era, there was a fashion for public buildings to feature highly ornate architecture. Consequently, a considerable number of former pumping stations have been listed and preserved. The majority were originally steam-powered, and where the steam engines are still in situ, many of the sites have since re-opened as museum attractions. Canal water supply Claverton Pumping Station, on the Kennet and Avon Canal, near Bath, water-powered Cobb's Engine House, ruin near southern portal of Netherton Tunnel Crofton Pumping Station, on the Kennet and Avon Canal, near Great Bedwyn Leawood Pump House, on the Cromford Canal in Derbyshire Smethwick Engine, now removed from original site to Birmingham Thinktank New Smethwick Pumping Station (now part of Galton Valley Canal Heritage Centre) Groundwater supply Used to pump water from a well into a reservoir Bestwood Pumping Station, Nottinghamshire Boughton Pumping Station, Nottinghamshire Bratch Pumping Station, Staffordshire Mill Meece Pumping Station, in Staffordshire Papplewick Pumping Station, Nottinghamshire (pumped from a deep well) Selly Oak Pumping Station, Birmingham (building converted to an electricity substation) Twyford Pumping Station, Hampshire Hydraulic power station Wapping Hydraulic Power Station, London (converted to electricity, now an arts centre and restaurant) Land drainage Pinchbeck Engine, near Spalding (preserved beam engine and scoop wheel) Pode Hole pumping station, near Spalding, Lincolnshire (formerly steam beam engines, no longer present) Prickwillow Engine House, near Ely, Cambridgeshire (now the Museum of Fenland Drainage) Stretham Old Engine, Stretham, 
Cambridgeshire Westonzoyland Pumping Station, Somerset Public water supply Used to pump drinking water from a reservoir into a water supply system. Blagdon Pumping Station, Chew Valley, Somerset Edgbaston Waterworks, Birmingham Kempton Park Pumping Station, London Kew Bridge Pumping Station, Kew Bridge, London Langford Pumping Station ("Museum of Power"), Essex Ryhope Engines Museum, Sunderland Tees Cottage Pumping Station, Darlington Sewage Abbey Pumping Station, Leicester Abbey Mills Pumping Station, in North London. (steam engines no longer present) Cheddars Lane Pumping Station, Cambridge Claymills Pumping Station, near Burton upon Trent Coleham Pumping Station, Coleham, near Shrewsbury Crossness Pumping Station, in South London Dock Road Edwardian Pumping Station, in Northwich, Cheshire (Gas engines. Built 1913) Low Hall Pumping Station, Walthamstow, North London Markfield Beam Engine, Tottenham, London Old Brook Pumping Station, Chatham, Kent Underground railway Brunel Engine House (now Brunel Museum), Rotherhithe, East London (extracted water from Thames Tunnel; engine no longer present) Shore Road Pumping Station, Birkenhead, Wirral (originally steam, now electric; extracts water from the rail tunnel under the River Mersey) Hong Kong Public water supply Engineer's Office of the Former Pumping Station, Hong Kong Iraq Agricultural drainage Nasiriyah Drainage Pump Station, Dhi Qar Province Canada Hamilton Museum of Steam and Technology, Hamilton, Ontario's first Water Works, powered by two 1859 steam engines Netherlands Land drainage Cruquius pumping station (Operational, but no longer steam-powered.) – an 8-beam Cornish engine with the largest cylinder (144 in (3.5m) diameter) in the world. ir.D.F. Woudagemaal, (ir. Wouda pumping station) (world's largest steam-powered pumping station) Spain Stations for public water supply in Barcelona. One of them is a Barcelona City History Museum heritage site (MUHBA Casa de l'aigua). 
Another is a museum itself: Museu Agbar de les Aigües (Agbar water museum). United States Chicago Avenue Pumping Station in Chicago, built in 1869, still in use (with modern pumps) but also serves as a theater. Pumping Station No. 2 San Francisco Fire Department Auxiliary Water Supply System, San Francisco, California, listed on the National Register of Historic Places
Technology
Food, water and health
null
852063
https://en.wikipedia.org/wiki/Campanulaceae
Campanulaceae
The family Campanulaceae (also bellflower family), of the order Asterales, contains nearly 2400 species in 84 genera of herbaceous plants, shrubs, and rarely small trees, often with milky sap. Among them are several familiar garden plants belonging to the genera Campanula (bellflower), Lobelia, and Platycodon (balloonflower). Campanula rapunculus (rampion or rampion bellflower) and Codonopsis lanceolata are eaten as vegetables. Lobelia inflata (Indian tobacco), L. siphilitica and L. tupa (devil's tobacco) and others have been used as medicinal plants. Campanula rapunculoides (creeping bellflower) may be a troublesome weed, particularly in gardens, while Legousia spp. may occur in arable fields. Most current classifications include the segregate family Lobeliaceae in Campanulaceae as subfamily Lobelioideae. A third subfamily, Cyphioideae, includes the genus Cyphia, and sometimes also the genera Cyphocarpus, Nemacladus, Parishella and Pseudonemacladus. Alternatively, the last three genera are placed in Nemacladoideae, while Cyphocarpus is placed in its own subfamily, Cyphocarpoideae. This family is almost cosmopolitan, occurring on all continents except Antarctica. In addition, species of the family are native to many remote oceanic islands and archipelagos. Hawaii is particularly rich, with well over 100 endemic species of Hawaiian lobelioids. Continental areas with high diversity are South Africa, California and the northern Andes. Habitats range from extreme deserts to rainforests and lakes, from the tropics to the high Arctic (Campanula uniflora), and from sea cliffs to high alpine habitats. Description Although most Campanulaceae are perennial herbs (sometimes climbing, as in Codonopsis), there is also a large number of annuals, e.g. species of Legousia. Isotoma hypocrateriformis is a succulent annual from Australia's dry interior. There are also biennials, e.g. the commonly cultivated ornamental Campanula medium (Canterbury bells).
Many perennial campanuloids grow in rock crevices, such as Musschia aurea (Madeira) and Petromarula pinnata (Crete). Some lobelioids also grow on rocks, e.g. the peculiar perennial stem succulent Brighamia rockii in Hawaii. Insular and tropical montane species in particular are often more or less woody and may bear the leaves in a dense rosette. When, in addition, the plant is unbranched, the result may be a palm- or treefern-like habit, as in species of the Hawaiian genus Cyanea, which comprises the tallest of Campanulaceae, C. leptostegia (up to 14 m). Lysipomia are minute cushion plants of the high Andes, while giant rosette-forming lobelias (e.g., Lobelia deckenii) are a characteristic component of the vegetation in the alpine zone on the tropical African volcanoes. In the Himalaya, Campanula modesta and Cyananthus microphyllus reach even higher, probably setting the altitudinal record for the family at 4800 m. Several species are associated with freshwater, such as Lobelia dortmanna, an isoetid common in oligotrophic lakes in the boreal zone of North America and Europe, and Howellia aquatilis, an elodeid growing in ponds in SW North America. There is usually abundant, white latex, but occasionally the exudate is clear and/or very sparse, as in Jasione. Tubers occur in several genera, e.g. Cyphia. Leaves are often alternate, more rarely opposite (e.g. Codonopsis) or whorled (Ostrowskia). They are simple (Petromarula is one of very few exceptions) and usually entire (though repeatedly divided in some species of Cyanea), often with a dentate margin. Stipules are absent. Inflorescences are quite diverse, including both cymose and racemose types. In Jasione they are strongly condensed and resemble asteraceous capitula. In a few species, e.g. Cyananthus lobatus, flowers are solitary. Flowers are bisexual (dioecious in Dialypetalum) and protandrous. Petals are fused into a corolla with 3 to 8 lobes.
It may be bell- or star-shaped in subfamily Campanuloideae, while tubular and bilaterally symmetric in most Lobelioideae. Blue of various shades is the most common petal colour, but purple, red, pink, orange, yellow, white, and green also occur. The corolla may be down to 1 mm wide and long in some species of Wahlenbergia. At the other extreme, it reaches a width of 15 cm in Ostrowskia. Stamens are equal in number to, and alternating with, the petals. Anthers may be fused into a tube, as in all species of Lobelioideae and some Campanuloideae (e.g. Symphyandra). Within the family, pollen grains are often tricolporate, less commonly triporate, tricolpate, or pantoporate. Carpel number is usually 2, 3 or 5 (8 in Ostrowskia), and corresponds to the number of stigmatic lobes. The style is in various ways involved in the "presentation" of the pollen, as in several other families of the order Asterales. In Lobelioideae, pollen is, already in the bud stage, released into the tube formed by the anthers. During flowering, it is pushed up by the elongating style and "presented" to visiting pollinators at the apex of the tube, a mechanism described as a pollen pump. The style eventually protrudes through the anther tube, and becomes receptive to pollen. In Campanuloideae, the pollen is instead packed between hairs on the style, gradually being released as the hairs invaginate. Subsequently, the stigmatic lobes unfold, and become receptive. Bees and birds (particularly hummingbirds and Hawaiian honeycreepers) are probably the most common pollinators of Campanulaceae. A few confirmed and many probable cases of bat-pollination are known, particularly in the genus Burmeistera. Brighamia and Hippobroma have pale or white flowers with a long-tubed corolla, and are pollinated by hawkmoths. Pollination by lizards has been reported for Musschia aurea and Nesocodon mauritianus. The ovary is usually inferior or, in some species, semi-inferior. Very rarely is it completely superior (e.g.
Cyananthus). In Campanumoea javanica, calyx and corolla diverge from the ovary at different levels. Berries are a common fruit-type in Lobelioideae (Burmeistera, Clermontia, Centropogon, Cyanea etc.), whilst rare in Campanuloideae (Canarina being one of few examples). Capsules, with very varying modes of dehiscence, are otherwise the predominating fruit type in the family. Seeds are mostly small (<2 mm) and numerous. Subfamilies and genera 95 genera are accepted. The Angiosperm Phylogeny Website divides the family into five subfamilies. Campanuloideae Adenophora – Europe and Asia Asyneuma – southern Europe and Asia Berenice – Réunion Campanula (synonyms Azorina and Theodorovia ) – mostly northern hemisphere Campanulastrum Small Canarina – Canary Islands and East Africa Codonopsis – eastern Asia Craterocapsa – South Africa Cryptocodon – C Asia Cyananthus – E Asia Cyclocodon Griff. ex Hook.f. & Thomson – tropical and subtropical Asia and New Guinea Cylindrocarpa – Central Asia Eastwoodiella Morin – California Echinocodon – China Edraianthus – SE Europe and W Asia Favratia Feer – Alps Feeria – Morocco × Fockeanthus – central Europe Githopsis – W N America Gunillaea – Tropical Africa and Madagascar Hanabusaya – Korea Heterochaenia – Réunion Heterocodon – SW N America Himalacodon D.Y.Hong & Qiang Wang – Himalayas Homocodon – China Jasione – Europe and SW Asia Kericodon Cupido – Cape Provinces Legousia – Europe and N Africa Melanocalyx (Fed.) 
Morin – subarctic Eurasia and North America and Rocky Mountains Merciera – South Africa Michauxia – Middle East Microcodon – South Africa Muehlbergella Feer Musschia – Madeira Namacodon – southwestern Africa Nesocodon – Mauritius Ostrowskia – Central Asia Palustricodon Morin – central and eastern North America Pankycodon D.Y.Hong & X.T.Ma – Himalayas Peracarpa – Southeast Asia Petromarula – Crete Physoplexis – Alps Phyteuma – Europe and Asia Platycodon – eastern Asia Poolea Morin – Texas Prismatocarpus – Southern Africa Protocodon Morin – Florida Pseudocodon D.Y.Hong & H.Sun Ravenella Morin – California Rhigiophyllum – South Africa Roella – South Africa Rotanthella Morin – Florida Sergia – C Asia Siphocodon - South Africa Smithiastrum Morin – California and Oregon Theilera – South Africa Trachelium – SE Europe, Middle East and C Asia Treichelia – South Africa Triodanis – Americas and southern Europe Wahlenbergia (synonym Hesperocodon Eddie & Cupido) – mostly Southern Hemisphere Zeugandra – Iran Lobelioideae Brighamia – Hawaii Burmeistera – N Andes and C America Centropogon – Neotropics Clermontia – Hawaii Cyanea – Hawaii Delissea – Hawaii Dialypetalum – Madagascar Diastatea – Neotropics Dielsantha – Tropical Africa Downingia – W N America and S S America Grammatotheca – South Africa Heterotoma – Mexico Hippobroma – W Indies Howellia – SW N America Isotoma – Australia Legenere – California Lithotoma E.B.Knox – Australia Lobelia (synonyms include Hypsela, Laurentia, Pratia) – cosmopolitan Lysipomia – Andes Monopsis – tropical and southern Africa and Comoros Palmerella – California Porterella – SW N America Ruthiella – New Guinea Sclerotheca (synonym Apetahia ) – Society Islands Siphocampylus – Neotropics Solenopsis – S Europe and N Africa Trematolobelia – Hawaii Trimeris – St. Helena Unigenes – South Africa Wimmeranthus Rzed. 
– southwestern Mexico Wimmerella – South Africa Cyphioideae Cyphia – Africa Cyphocarpoideae Cyphocarpus – northern Chile Nemacladoideae Nemacladus (syn. Parishella) – SW N America Pseudonemacladus – Mexico Fossil record The earliest known occurrence of Campanulaceae pollen is from Oligocene strata. The earliest dated Campanulaceae macrofossils are seeds of †Campanula paleopyramidalis from 17–16-million-year-old Miocene deposits near Nowy Sącz in the Carpathians, Poland. It is a close relative of the extant Campanula pyramidalis. Chemical compounds Members of subfamily Lobelioideae contain the alkaloid lobeline. The principal storage carbohydrate of Campanulaceae is inulin, a fructan also occurring in the related Asteraceae. Literature
Biology and health sciences
Asterales
null
852089
https://en.wikipedia.org/wiki/Gravitational%20time%20dilation
Gravitational time dilation
Gravitational time dilation is a form of time dilation, an actual difference of elapsed time between two events, as measured by observers situated at varying distances from a gravitating mass. The lower the gravitational potential (the closer the clock is to the source of gravitation), the slower time passes, speeding up as the gravitational potential increases (the clock moving away from the source of gravitation). Albert Einstein originally predicted this in his theory of relativity, and it has since been confirmed by tests of general relativity. This effect has been demonstrated by noting that atomic clocks at differing altitudes (and thus different gravitational potential) will eventually show different times. The effects detected in such Earth-bound experiments are extremely small, with differences being measured in nanoseconds. Relative to Earth's age in billions of years, Earth's core is in effect 2.5 years younger than its surface. Demonstrating larger effects would require measurements at greater distances from the Earth, or a larger gravitational source. Gravitational time dilation was first described by Albert Einstein in 1907 as a consequence of special relativity in accelerated frames of reference. In general relativity, it is considered to be a difference in the passage of proper time at different positions as described by a metric tensor of spacetime. The existence of gravitational time dilation was first confirmed directly by the Pound–Rebka experiment in 1959, and later refined by Gravity Probe A and other experiments. Gravitational time dilation is closely related to gravitational redshift, in which the closer a body emitting light of constant frequency is to a gravitating body, the more its time is slowed by gravitational time dilation, and the lower (more "redshifted") would seem to be the frequency of the emitted light, as measured by a fixed observer. 
Definition Clocks that are far from massive bodies (or at higher gravitational potentials) run more quickly, and clocks close to massive bodies (or at lower gravitational potentials) run more slowly. For example, considered over the total time-span of Earth (4.6 billion years), a clock set in a geostationary position at an altitude of 9,000 meters above sea level, such as perhaps at the top of Mount Everest (prominence 8,848 m), would be about 39 hours ahead of a clock set at sea level. This is because gravitational time dilation is manifested in accelerated frames of reference or, by virtue of the equivalence principle, in the gravitational field of massive objects. According to general relativity, inertial mass and gravitational mass are the same, and all accelerated reference frames (such as a uniformly rotating reference frame with its proper time dilation) are physically equivalent to a gravitational field of the same strength. Consider a family of observers along a straight "vertical" line, each of whom experiences a distinct constant g-force directed along this line (e.g., a long accelerating spacecraft, a skyscraper, a shaft on a planet). Let g(h) be the dependence of g-force on "height" h, a coordinate along the aforementioned line. The equation with respect to a base observer at h = 0 is T_d(h) = exp[(1/c²) ∫₀ʰ g(h′) dh′], where T_d(h) is the total time dilation at a distant position h, g(h) is the dependence of g-force on "height" h, c is the speed of light, and exp denotes exponentiation by e. For simplicity, in a Rindler's family of observers in a flat spacetime, the dependence would be g(h) = c²/(H + h) with constant H, which yields T_d(h) = 1 + h/H. On the other hand, when g is nearly constant and gh is much smaller than c², the linear "weak field" approximation T_d = 1 + gh/c² can also be used. See Ehrenfest paradox for application of the same formula to a rotating reference frame in flat spacetime.
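The "weak field" approximation can be checked numerically against the Mount Everest figure quoted above. A minimal sketch, assuming a constant g over the 9,000 m of altitude:

```python
g = 9.80665        # standard surface gravity, m/s^2
c = 299_792_458.0  # speed of light, m/s
h = 9_000.0        # altitude of the elevated clock, m
age = 4.6e9 * 365.25 * 86400  # Earth's age, in seconds

# Weak-field approximation: the clocks' rates differ by a fraction g*h/c^2
fractional = g * h / c**2
lead_hours = fractional * age / 3600
print(fractional)  # about 9.8e-13
print(lead_hours)  # about 40 hours, matching the "about 39 hours" above
```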
Outside a non-rotating sphere A common equation used to determine gravitational time dilation is derived from the Schwarzschild metric, which describes spacetime in the vicinity of a non-rotating massive spherically symmetric object. The equation is t₀ = t_f √(1 − 2GM/(rc²)) = t_f √(1 − r_s/r) = t_f √(1 − v_e²/c²) = t_f √(1 − β_e²), where t₀ is the proper time between two events for an observer close to the massive sphere, i.e. deep within the gravitational field, t_f is the coordinate time between the events for an observer at an arbitrarily large distance from the massive object (this assumes the far-away observer is using Schwarzschild coordinates, a coordinate system where a clock at infinite distance from the massive sphere would tick at one second per second of coordinate time, while closer clocks would tick at less than that rate), G is the gravitational constant, M is the mass of the object creating the gravitational field, r is the radial coordinate of the observer within the gravitational field (this coordinate is analogous to the classical distance from the center of the object, but is actually a Schwarzschild coordinate; the equation in this form has real solutions for r > r_s), c is the speed of light, r_s = 2GM/c² is the Schwarzschild radius of M, v_e = √(2GM/r) is the escape velocity, and β_e = v_e/c is the escape velocity expressed as a fraction of the speed of light c. To illustrate then, without accounting for the effects of rotation, proximity to Earth's gravitational well will cause a clock on the planet's surface to accumulate around 0.0219 fewer seconds over a period of one year than would a distant observer's clock. In comparison, a clock on the surface of the Sun will accumulate around 66.4 fewer seconds in one year. Circular orbits In the Schwarzschild metric, free-falling objects can be in circular orbits if the orbital radius is larger than 1.5 r_s (the radius of the photon sphere). The formula for a clock at rest is given above; the formula below gives the general relativistic time dilation for a clock in a circular orbit: t₀ = t_f √(1 − (3/2)(r_s/r)). Both dilations are shown in the figure below.
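Plugging rounded constants into the Schwarzschild dilation factor roughly reproduces the Earth and Sun figures quoted above (a numerical sketch; the exact values depend on the constants chosen):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0  # speed of light, m/s
year = 365.25 * 86400

def dilation_factor(M, r):
    """dtau/dt for a clock at rest at Schwarzschild radial coordinate r."""
    return math.sqrt(1 - 2 * G * M / (r * c**2))

def seconds_lost_per_year(M, r):
    return (1 - dilation_factor(M, r)) * year

earth_loss = seconds_lost_per_year(5.972e24, 6.371e6)  # Earth mass, mean radius
sun_loss = seconds_lost_per_year(1.989e30, 6.957e8)    # Sun mass, radius
print(earth_loss)  # about 0.022 s lost per year at Earth's surface
print(sun_loss)    # about 67 s lost per year at the Sun's surface
```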
Important features of gravitational time dilation According to the general theory of relativity, gravitational time dilation is copresent with the existence of an accelerated reference frame. Additionally, all physical phenomena in similar circumstances undergo time dilation equally according to the equivalence principle used in the general theory of relativity. The speed of light in a locale is always equal to c according to the observer who is there. That is, every infinitesimal region of spacetime may be assigned its own proper time, and the speed of light according to the proper time at that region is always c. This is the case whether or not a given region is occupied by an observer. A time delay can be measured for photons which are emitted from Earth, bend near the Sun, travel to Venus, and then return to Earth along a similar path. There is no violation of the constancy of the speed of light here, as any observer observing the speed of photons in their region will find the speed of those photons to be c, while the speed at which we observe light travel finite distances in the vicinity of the Sun will differ from c. If an observer is able to track the light in a remote, distant locale which intercepts a remote, time dilated observer nearer to a more massive body, that first observer tracks that both the remote light and that remote time dilated observer have a slower time clock than other light which is coming to the first observer at c, like all other light the first observer really can observe (at their own location). If the other, remote light eventually intercepts the first observer, it too will be measured at c by the first observer. Gravitational time dilation in a gravitational well is equal to the velocity time dilation for a speed that is needed to escape that gravitational well (given that the metric is of the form ds² = g_tt dt² + g_xx dx² + g_yy dy² + g_zz dz², i.e. it is time invariant and there are no "movement" terms dx dt).
To show that, one can apply Noether's theorem to a body that freely falls into the well from infinity. Then the time invariance of the metric implies conservation of the quantity g_tt v^t, where v^t is the time component of the 4-velocity of the body. At infinity g_tt v^t = 1, so g_tt v^t = 1 everywhere along the fall, or, in coordinates adjusted to the local time dilation, the local Lorentz factor is γ = 1/√g_tt; that is, time dilation due to acquired velocity (as measured at the falling body's position) equals the gravitational time dilation in the well the body fell into. Applying this argument more generally one gets that (under the same assumptions on the metric) the relative gravitational time dilation between two points equals the time dilation due to the velocity needed to climb from the lower point to the higher. Experimental confirmation Gravitational time dilation has been experimentally measured using atomic clocks on airplanes, such as in the Hafele–Keating experiment. The clocks aboard the airplanes were slightly faster than clocks on the ground. The effect is significant enough that the Global Positioning System's artificial satellites need to have their clocks corrected. Additionally, time dilations due to height differences of less than one metre have been experimentally verified in the laboratory. Gravitational time dilation in the form of gravitational redshift has also been confirmed by the Pound–Rebka experiment and observations of the spectra of the white dwarf Sirius B. Gravitational time dilation has been measured in experiments with time signals sent to and from the Viking 1 Mars lander.
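The GPS clock correction mentioned above combines a gravitational term (satellite clocks run fast, being higher in Earth's potential well) with a special-relativistic velocity term (orbital speed slows them). A rough numerical sketch with rounded constants, ignoring Earth's rotation and orbital eccentricity:

```python
GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0    # speed of light, m/s
R_earth = 6.371e6    # mean Earth radius, m
r_gps = 2.6561e7     # GPS orbital radius (~20,200 km altitude), m
day = 86400.0

# Gravitational term: potential difference between orbit and ground
grav_us = GM * (1 / R_earth - 1 / r_gps) / c**2 * day * 1e6

# Velocity term: v^2 = GM/r for a circular orbit slows the satellite clock
vel_us = (GM / r_gps) / (2 * c**2) * day * 1e6

print(grav_us)           # about +45.7 microseconds/day (clock runs fast)
print(vel_us)            # about 7.2 microseconds/day of slowing
print(grav_us - vel_us)  # net: about +38 microseconds/day, to be corrected
```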
Physical sciences
Theory of relativity
Physics
24044333
https://en.wikipedia.org/wiki/Sulfate%20mineral
Sulfate mineral
The sulfate minerals are a class of minerals that include the sulfate ion () within their structure. The sulfate minerals occur commonly in primary evaporite depositional environments, as gangue minerals in hydrothermal veins and as secondary minerals in the oxidizing zone of sulfide mineral deposits. The chromate and manganate minerals have a similar structure and are often included with the sulfates in mineral classification systems. Sulfate minerals include: Anhydrous sulfates Barite BaSO4 Celestite SrSO4 Anglesite PbSO4 Anhydrite CaSO4 Hanksite Na22K(SO4)9(CO3)2Cl Hydroxide and hydrous sulfates Gypsum CaSO4·2H2O Chalcanthite CuSO4·5H2O Kieserite MgSO4·H2O Starkeyite MgSO4·4H2O Hexahydrite MgSO4·6H2O Epsomite MgSO4·7H2O Meridianiite MgSO4·11H2O Melanterite FeSO4·7H2O Antlerite Cu3SO4(OH)4 Brochantite Cu4SO4(OH)6 Alunite KAl3(SO4)2(OH)6 Jarosite KFe3(SO4)2(OH)6 Nickel–Strunz classification -07- sulfates IMA-CNMNC proposes a new hierarchical scheme (Mills et al., 2009). This list uses it to modify the Classification of Nickel–Strunz (mindat.org, 10 ed, pending publication). Abbreviations: "*" – discredited (IMA/CNMNC status). "?" – questionable/doubtful (IMA/CNMNC status). "REE" – Rare-earth element (Sc, Y, La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu) "PGE" – Platinum-group element (Ru, Rh, Pd, Os, Ir, Pt) 03.C Aluminofluorides, 06 Borates, 08 Vanadates (04.H V[5,6] Vanadates), 09 Silicates: Neso: insular (from Greek νησος nēsos, island) Soro: grouping (from Greek σωροῦ sōros, heap, mound (especially of corn)) Cyclo: ring Ino: chain (from Greek ις [genitive: ινος inos], fibre) Phyllo: sheet (from Greek φύλλον phyllon, leaf) Tekto: three-dimensional framework Nickel–Strunz code scheme: NN.XY.##x NN: Nickel–Strunz mineral class number X: Nickel–Strunz mineral division letter Y: Nickel–Strunz mineral family letter ##x: Nickel–Strunz mineral/group number, x add-on letter Class: sulfates, selenates, tellurates 07.A Sulfates (selenates, etc.) 
without Additional Anions, without H2O 07.AB With medium-sized cations: 05 Millosevichite, 05 Mikasaite; 10 Chalcocyanite, 10 Zincosite* 07.AC With medium-sized and large cations: IMA2008-029, 05 Vanthoffite; 10 Efremovite, 10 Manganolangbeinite, 10 Langbeinite; 15 Eldfellite, 15 Yavapaiite; 20 Godovikovite, 20 Sabieite; 25 Thenardite, 35 Aphthitalite 07.AD With only large cations: 05 Arcanite, 05 Mascagnite; 10 Mercallite, 15 Misenite, 20 Letovicite, 25 Glauberite, 30 Anhydrite; 35 Anglesite, 35 Barite, 35 Celestine, 35 Radiobarite*, 35 Olsacherite; 40 Kalistrontite, 40 Palmierite 07.B Sulfates (selenates, etc.) with additional anions, without H2O 07.BB With medium-sized cations: 05 Caminite, 10 Hauckite, 15 Antlerite, 20 Dolerophanite, 25 Brochantite, 30 Vergasovaite, 35 Klebelsbergite, 40 Schuetteite, 45 Paraotwayite, 50 Xocomecatlite, 55 Pauflerite 07.BC With medium-sized and large cations: 05 Dansite; 10 Alunite, 10 Ammonioalunite, 10 Ammoniojarosite, 10 Beaverite, 10 Argentojarosite, 10 Huangite, 10 Dorallcharite, 10 Jarosite, 10 Hydroniumjarosite, 10 Minamiite, 10 Natrojarosite, 10 Natroalunite, 10 Osarizawaite, 10 Plumbojarosite, 10 Walthierite, 10 Schlossmacherite; 15 Yeelimite; 20 Atlasovite, 20 Nabokoite; 25 Chlorothionite; 30 Fedotovite, 30 Euchlorine; 35 Kamchatkite, 40 Piypite; 45 Klyuchevskite-Duplicate, 45 Klyuchevskite, 45 Alumoklyuchevskite; 50 Caledonite, 55 Wherryite, 60 Mammothite; 65 Munakataite, 65 Schmiederite, 65 Linarite; 70 Chenite, 75 Krivovichevite 07.BD With only large cations: 05 Sulphohalite; 10 Galeite, 10 Schairerite; 15 Kogarkoite; 20 Cesanite, 20 Caracolite; 25 Burkeite, 30 Hanksite, 35 Cannonite, 40 Lanarkite, 45 Grandreefite, 50 Itoite, 55 Chiluite, 60 Hectorfloresite, 65 Pseudograndreefite, 70 Sundiusite 07.C Sulfates (selenates, etc.) 
without additional anions, with H2O 07.CB With only medium-sized cations: 05 Gunningite, 05 Dwornikite, 05 Kieserite, 05 Szomolnokite, 05 Szmikite, 05 Poitevinite, 05 Cobaltkieserite; 07 Sanderite, 10 Bonattite, 15 Boyleite, 15 Aplowite, 15 Ilesite, 15 Rozenite, 15 Starkeyite, 15 IMA2002-034; 20 Chalcanthite, 20 Jokokuite, 20 Pentahydrite, 20 Siderotil; 25 Bianchite, 25 Ferrohexahydrite, 25 Chvaleticeite, 25 Hexahydrite, 25 Moorhouseite, 25 Nickelhexahydrite; 30 Retgersite; 35 Bieberite, 35 Boothite, 35 Mallardite, 35 Melanterite, 35 Zincmelanterite, 35 Alpersite; 40 Epsomite, 40 Goslarite, 40 Morenosite; 45 Alunogen, 45 Meta-alunogen; 50 Coquimbite, 50 Paracoquimbite; 55 Rhomboclase, 60 Kornelite, 65 Quenstedtite, 70 Lausenite; 75 Lishizhenite, 75 Romerite; 80 Ransomite; 85 Bilinite, 85 Apjohnite, 85 Dietrichite, 85 Halotrichite, 85 Pickeringite, 85 Redingtonite, 85 Wupatkiite; 90 Meridianiite, 95 Caichengyunite 07.CC With medium-sized and large cations: 05 Krausite, 10 Tamarugite; 15 Mendozite, 15 Kalinite; 20 Lonecreekite, 20 Alum-(K), 20 Alum-(Na), 20 Lanmuchangite, 20 Tschermigite; 25 Pertlikite, 25 Monsmedite?, 25 Voltaite, 25 Zincovoltaite; 30 Krohnkite, 35 Ferrinatrite, 40 Goldichite, 45 Loweite; 50 Blodite, 50 Changoite, 50 Nickelblodite; 55 Mereiterite, 55 Leonite; 60 Boussingaultite, 60 Cyanochroite, 60 Mohrite, 60 Picromerite, 60 Nickelboussingaultite; 65 Polyhalite; 70 Leightonite, 75 Amarillite, 80 Konyaite, 85 Wattevilleite 07.CD With only large cations: 05 Matteuccite, 10 Mirabilite, 15 Lecontite, 20 Hydroglauberite, 25 Eugsterite, 30 Gorgeyite; 35 Koktaite, 35 Syngenite; 40 Gypsum, 45 Bassanite, 50 Zircosulfate, 55 Schieffelinite, 60 Montanite, 65 Omongwaite 07.D Sulfates (selenates, etc.) 
with additional anions, with H2O 07.DB With only medium-sized cations; insular octahedra and finite groups: 05 Svyazhinite, 05 Aubertite, 05 Magnesioaubertite; 10 Rostite, 10 Khademite; 15 Jurbanite; 20 Minasragrite, 20 Anorthominasragrite, 20 Orthominasragrite; 25 Bobjonesite; 30 Amarantite, 30 Hohmannite, 30 Metahohmannite; 35 Aluminocopiapite, 35 Copiapite, 35 Calciocopiapite, 35 Cuprocopiapite, 35 Ferricopiapite, 35 Magnesiocopiapite, 35 Zincocopiapite 07.DC With only medium-sized cations; chains of corner-sharing octahedra: 05 Aluminite, 05 Meta-aluminite; 10 Butlerite, 10 Parabutlerite; 15 Fibroferrite, 20 Xitieshanite; 25 Botryogen, 25 Zincobotryogen; 30 Chaidamuite, 30 Guildite 07.DD With only medium-sized cations; sheets of edge-sharing octahedra: 05 Basaluminite?, 05 Felsobanyaite, 07.5 Kyrgyzstanite, 08.0 Zn-Schulenbergite; 10 Langite, 10 Posnjakite, 10 Wroewolfeite; 15 Spangolite, 20 Ktenasite, 25 Christelite; 30 Campigliaite, 30 Devilline, 30 Orthoserpierite, 30 Niedermayrite, 30 Serpierite; 35 Motukoreaite, 35 Mountkeithite, 35 Glaucocerinite, 35 Honessite, 35 Hydrowoodwardite, 35 Hydrohonessite, 35 Shigaite, 35 Natroglaucocerinite, 35 Wermlandite, 35 Nikischerite, 35 Zincaluminite, 35 Woodwardite, 35 Carrboydite, 35 Zincowoodwardite, 35 Zincowoodwardite-3R, 35 Zincowoodwardite-1T; 40 Lawsonbauerite, 40 Torreyite, 45 Mooreite, 50 Namuwite, 55 Bechererite, 60 Ramsbeckite, 65 Vonbezingite, 70 Redgillite; 75 Chalcoalumite, 75 Nickelalumite*; 80 Guarinoite, 80 Theresemagnanite, 80 Schulenbergite; 85 Montetrisaite 07.DE With only medium-sized cations; unclassified: 05 Mangazeite; 10 Carbonatecyanotrichite, 10 Cyanotrichite; 15 Schwertmannite, 20 Tlalocite, 25 Utahite, 35 Coquandite, 40 Osakaite, 45 Wilcoxite, 50 Stanleyite, 55 Mcalpineite, 60 Hydrobasaluminite, 65 Zaherite, 70 Lautenthalite, 75 Camérolaite, 80 Brumadoite 07.DF With large and medium-sized cations: 05 Uklonskovite, 10 Kainite, 15 Natrochalcite; 20 Metasideronatrite, 20 Sideronatrite; 25 
Despujolsite, 25 Fleischerite, 25 Schaurteite, 25 Mallestigite; 30 Slavikite, 35 Metavoltine; 40 Lannonite, 40 Vlodavetsite; 45 Peretaite, 50 Gordaite, 55 Clairite, 60 Arzrunite, 65 Elyite, 70 Yecoraite, 75 Riomarinaite, 80 Dukeite, 85 Xocolatlite 07.DG With large and medium-sized cations; with NO3, CO3, B(OH)4, SiO4 or IO3: 05 Darapskite; 10 Clinoungemachite, 10 Ungemachite, 10 Humberstonite; 15 Bentorite, 15 Charlesite, 15 Ettringite, 15 Jouravskite, 15 Sturmanite, 15 Thaumasite, 15 Carraraite, 15 Buryatite; 20 Rapidcreekite, 25 Tatarskite, 30 Nakauriite, 35 Chessexite; 40 Carlosruizite, 40 Fuenzalidaite; 45 Chelyabinskite* 07.E Uranyl Sulfates 07.EA Without cations: 05 Uranopilite, 05 Metauranopilite, 10 Jachymovite 07.EB With medium-sized cations: 05 Johannite, 10 Deliensite 07.EC With medium-sized and large cations: 05 Cobaltzippeite, 05 Magnesiozippeite, 05 Nickelzippeite, 05 Natrozippeite, 05 Zinc-zippeite, 05 Zippeite; 10 Rabejacite, 15 Marecottite, 20 Pseudojohannite 07.J Thiosulfates 07.JA Thiosulfates with Pb: 05 Sidpietersite 07.X Unclassified Strunz Sulfates (Selenates, Tellurates) 07.XX Unknown: 00 Aiolosite, 00 Steverustite, 00 Grandviewite, 00 IMA2009-008, 00 Adranosite, 00 Blakeite Class: chromates 07.F Chromates 07.FA Without additional anions: 05 Tarapacaite, 10 Chromatite, 15 Hashemite, 20 Crocoite 07.FB With additional O, V, S, Cl: 05 Phoenicochroite, 10 Santanaite, 15 Wattersite, 20 Deanesmithite, 25 Edoylerite 07.FC With PO4, AsO4, SiO4: 05 Vauquelinite; 10 Fornacite, 10 Molybdofornacite; 15 Hemihedrite, 15 Iranite; 20 Embreyite, 20 Cassedanneite; 07.FD Dichromates: 05 Lópezite Class: molybdates, wolframates and niobates 07.G Molybdates, wolframates and niobates 07.GA Without additional anions or H2O: 05 Fergusonite-(Ce), 05 Fergusonite-(Nd)N, 05 Fergusonite-(Y), 05 Powellite, 05 Wulfenite, 05 Stolzite, 05 Scheelite; 10 Formanite-(Y), 10 Iwashiroite-(Y); 15 Paraniite-(Y) 07.GB With additional anions and/or H2O: 05 Lindgrenite, 10 Szenicsite, 
15 Cuprotungstite, 20 Phyllotungstite, 25 Rankachite, 30 Ferrimolybdite, 35 Anthoinite, 35 Mpororoite, 40 Obradovicite-KCu, 45 Mendozavilite-NaFe, 45 Paramendozavilite, 50 Tancaite-(Ce) 07.H Uranium and uranyl molybdates and wolframates 07.HA With U4+: 05 Sedovite, 10 Cousinite, 15 Moluranite 07.HB With U6+: 15 Calcurmolite, 20 Tengchongite, 25 Uranotungstite
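The identifiers running through the list above follow the Nickel–Strunz scheme: a two-digit class number ("07" for sulfates and relatives), a division letter, a family letter, and then a (sometimes fractional) group number before each mineral name. As a rough illustration of how such codes decompose — the function name and dictionary layout here are illustrative, not part of any standard library — a short Python sketch:

```python
import re

def parse_strunz(code: str) -> dict:
    """Split a Nickel-Strunz identifier such as '07.DB' into its parts:
    two-digit class, division letter, and (optional) family letter."""
    m = re.fullmatch(r"(\d{2})\.([A-Z])([A-Z])?", code)
    if not m:
        raise ValueError(f"not a Nickel-Strunz identifier: {code!r}")
    mineral_class, division, family = m.groups()
    return {"class": mineral_class, "division": division, "family": family}

# Example identifiers taken from the sulfate list above:
print(parse_strunz("07.CB"))  # hydrated sulfates, only medium-sized cations
print(parse_strunz("07.DG"))  # large and medium cations with extra anion groups
```

The group number and mineral name that follow each code in the list would be parsed separately; only the class/division/family prefix is handled here.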
Physical sciences
Minerals
Earth science
1307896
https://en.wikipedia.org/wiki/Chemically%20peculiar%20star
Chemically peculiar star
In astrophysics, chemically peculiar stars (CP stars) are stars with distinctly unusual metal abundances, at least in their surface layers. Classification Chemically peculiar stars are common among hot main-sequence (hydrogen-burning) stars. These hot peculiar stars have been divided into 4 main classes on the basis of their spectra, although two classification systems are sometimes used: non-magnetic metallic-lined (Am, CP1) magnetic (Ap, CP2) non-magnetic mercury-manganese (HgMn, CP3) helium-weak (He-weak, CP4). The class names provide a good idea of the peculiarities that set them apart from other stars on or near the main sequence. The Am stars (CP1 stars) show weak lines of singly ionized Ca and/or Sc, but show enhanced abundances of heavy metals. They also tend to be slow rotators and have an effective temperature between 7000 and . The Ap stars (CP2 stars) are characterized by strong magnetic fields, enhanced abundances of elements such as Si, Cr, Sr and Eu, and are also generally slow rotators. The effective temperature of these stars is stated to be between 8000 and , but the issue of calculating effective temperatures in such peculiar stars is complicated by atmospheric structure. The HgMn stars (CP3 stars) are also classically placed within the Ap category, but they do not show the strong magnetic fields associated with classical Ap stars. As the name implies, these stars show increased abundances of singly ionized mercury and manganese. These stars are also very slow rotators, even by the standards of CP stars. The effective temperature range for these stars is quoted at between and . The He-weak stars (CP4 stars) show weaker He lines than would be expected classically from their observed Johnson UBV colours. A rare class of He-weak stars are, paradoxically, the helium-rich stars, with temperatures of –. 
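The four-way classification above can be captured as a small lookup table; the summaries below paraphrase the descriptions given in the text, and the table and helper names are illustrative rather than any standard nomenclature:

```python
# Paraphrase of the CP1-CP4 scheme described above; names are illustrative.
CP_CLASSES = {
    "CP1": ("Am", "non-magnetic metallic-lined; weak singly ionized Ca/Sc, "
                  "enhanced heavy metals; slow rotators"),
    "CP2": ("Ap", "strongly magnetic; enhanced Si, Cr, Sr, Eu; "
                  "generally slow rotators"),
    "CP3": ("HgMn", "non-magnetic; enhanced singly ionized Hg and Mn; "
                    "very slow rotators"),
    "CP4": ("He-weak", "He lines weaker than expected from Johnson UBV colours"),
}

def describe(cp: str) -> str:
    """Return a one-line summary for a CP class label."""
    spectral, summary = CP_CLASSES[cp]
    return f"{cp} ({spectral}): {summary}"

print(describe("CP3"))
```

A lookup of this kind only mirrors the prose definitions; real classification requires spectral analysis.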
Cause of the peculiarities It is generally thought that the peculiar surface compositions observed in these hot main-sequence stars have been caused by processes that happened after the star formed, such as diffusion or magnetic effects in the outer layers of the stars. These processes cause some elements, particularly He, N and O, to "settle" out in the atmosphere into the layers below, while other elements such as Mn, Sr, Y and Zr are "levitated" out of the interior to the surface, resulting in the observed spectral peculiarities. It is assumed that the centers of the stars, and the bulk compositions of the entire star, have more normal chemical abundance mixtures which reflect the compositions of the gas clouds from which they formed. In order for such diffusion and levitation to occur and the resulting layers to remain intact, the atmosphere of such a star must be stable enough against convection that convective mixing does not occur. The proposed mechanism causing this stability is the unusually large magnetic field that is generally observed in stars of this type. Approximately 5–10% of hot main-sequence stars show chemical peculiarities. Of these, the vast majority are Ap (or Bp) stars with strong magnetic fields. Non-magnetic, or only weakly magnetic, chemically peculiar stars mostly fall into the Am or HgMn categories. A much smaller percentage show stronger peculiarities, such as the dramatic under-abundance of iron-peak elements in λ Boötis stars. sn stars Another group of stars sometimes considered to be chemically peculiar are the 'sn' stars. These hot stars, usually of spectral classes B2 to B9, show Balmer lines with sharp (s) cores, sharp metallic absorption lines, and contrasting broad (nebulous, n) neutral helium absorption lines. These may be combined with the other chemical peculiarities more commonly seen in B-type stars. 
It was originally proposed that the unusual helium lines were created in a weak shell of material around the star, but they are now thought to be caused by the Stark effect. Other stars There are also classes of chemically peculiar cool stars (that is, stars with spectral type G or later), but these stars are typically not main-sequence stars. These are usually identified by the name of their class or some further specific label. The phrase chemically peculiar star without further specification usually means a member of one of the hot main-sequence types described above. Many of the cooler chemically peculiar stars are the result of the mixing of nuclear fusion products from the interior of the star to its surface; these include most of the carbon stars and S-type stars. Others are the result of mass transfer in a binary star system; examples of these include the barium stars and some S stars. Companions There are very few reports of exoplanets whose host stars are chemically peculiar stars. The young variable star HR 8799, which hosts four directly imaged massive planets, belongs to the group of λ Boötis stars. Similarly, the binary star HIP 79098, whose primary is a mercury-manganese star, was found via direct imaging to have a substellar companion, possibly a brown dwarf or a gas giant.
Physical sciences
Stellar astronomy
Astronomy
1309936
https://en.wikipedia.org/wiki/Iron%28II%2CIII%29%20oxide
Iron(II,III) oxide
Iron(II,III) oxide, or black iron oxide, is the chemical compound with formula Fe3O4. It occurs in nature as the mineral magnetite. It is one of a number of iron oxides, the others being iron(II) oxide (FeO), which is rare, and iron(III) oxide (Fe2O3), which also occurs naturally as the mineral hematite. It contains both Fe2+ and Fe3+ ions and is sometimes formulated as FeO ∙ Fe2O3. This iron oxide is encountered in the laboratory as a black powder. It exhibits permanent magnetism and is ferrimagnetic, but is sometimes incorrectly described as ferromagnetic. Its most extensive use is as a black pigment (see: Mars Black). For this purpose, it is synthesized rather than being extracted from the naturally occurring mineral, as the particle size and shape can be varied by the method of production. Preparation Heated iron metal interacts with steam to form iron oxide and hydrogen gas: 3 Fe + 4 H2O → Fe3O4 + 4 H2 Under anaerobic conditions, ferrous hydroxide (Fe(OH)2) can be oxidized by water to form magnetite and molecular hydrogen. This process is described by the Schikorr reaction: 3 Fe(OH)2 → Fe3O4 + H2 + 2 H2O This works because crystalline magnetite (Fe3O4) is thermodynamically more stable than amorphous ferrous hydroxide (Fe(OH)2). The Massart method of preparation of magnetite as a ferrofluid is convenient in the laboratory: mix iron(II) chloride and iron(III) chloride in the presence of sodium hydroxide. A more efficient method of preparing magnetite without troublesome residues of sodium is to use ammonia to promote chemical co-precipitation from the iron chlorides: first mix solutions of 0.1 M FeCl3·6H2O and FeCl2·4H2O with vigorous stirring at about 2000 rpm. The molar ratio of FeCl3:FeCl2 should be about 2:1. Heat the mix to 70 °C, then raise the speed of stirring to about 7500 rpm and quickly add a solution of NH4OH (10 volume %). 
A dark precipitate of nanoparticles of magnetite forms immediately. In both methods, the precipitation reaction relies on rapid transformation of acidic iron ions into the spinel iron oxide structure at pH 10 or higher. Controlling the formation of magnetite nanoparticles presents challenges: the reactions and phase transformations necessary for the creation of the magnetite spinel structure are complex. The subject is of practical importance because magnetite particles are of interest in bioscience applications such as magnetic resonance imaging (MRI), in which iron oxide magnetite nanoparticles potentially present a non-toxic alternative to the gadolinium-based contrast agents currently in use. However, difficulties in controlling the formation of the particles still frustrate the preparation of superparamagnetic magnetite particles, that is, magnetite nanoparticles with a coercivity of 0 A/m, meaning that they completely lose their permanent magnetisation in the absence of an external magnetic field. The smallest coercivity currently reported for nanosized magnetite particles is Hc = 8.5 A m−1, whereas the largest reported magnetization value is 87 A m2 kg−1 for synthetic magnetite. Pigment-quality Fe3O4, so-called synthetic magnetite, can be prepared using processes that use industrial wastes, scrap iron or solutions containing iron salts (e.g. those produced as by-products in industrial processes such as the acid vat treatment (pickling) of steel): Oxidation of Fe metal in the Laux process, where nitrobenzene is treated with iron metal using FeCl2 as a catalyst to produce aniline: 4 C6H5NO2 + 9 Fe + 4 H2O → 4 C6H5NH2 + 3 Fe3O4 Oxidation of FeII compounds, e.g. the precipitation of iron(II) salts as hydroxides followed by oxidation by aeration, where careful control of the pH determines the oxide produced. 
Reduction of Fe2O3 with hydrogen: 3 Fe2O3 + H2 → 2 Fe3O4 + H2O Reduction of Fe2O3 with CO: 3 Fe2O3 + CO → 2 Fe3O4 + CO2 Production of nanoparticles can be performed chemically by taking, for example, mixtures of FeII and FeIII salts and mixing them with alkali to precipitate colloidal Fe3O4. The reaction conditions are critical to the process and determine the particle size. Iron(II) carbonate can also be thermally decomposed into iron(II,III) oxide. Reactions Reduction of magnetite ore by CO in a blast furnace is used to produce iron as part of the steel production process: Fe3O4 + 4 CO → 3 Fe + 4 CO2 Controlled oxidation of Fe3O4 is used to produce brown pigment-quality γ-Fe2O3 (maghemite): 4 Fe3O4 + O2 → 6 γ-Fe2O3 More vigorous calcining (roasting in air) gives red pigment-quality α-Fe2O3 (hematite): 4 Fe3O4 + O2 → 6 α-Fe2O3 Structure Fe3O4 has a cubic inverse spinel structure which consists of a cubic close-packed array of oxide ions, where all of the Fe2+ ions occupy half of the octahedral sites and the Fe3+ ions are split evenly across the remaining octahedral sites and the tetrahedral sites. Both FeO and γ-Fe2O3 have a similar cubic close-packed array of oxide ions, and this accounts for the ready interchangeability between the three compounds on oxidation and reduction, as these reactions entail a relatively small change to the overall structure. Fe3O4 samples can be non-stoichiometric. The ferrimagnetism of Fe3O4 arises because the electron spins of the FeII and FeIII ions in the octahedral sites are coupled and the spins of the FeIII ions in the tetrahedral sites are coupled but anti-parallel to the former. The net effect is that the magnetic contributions of both sets are not balanced and there is a permanent magnetism. In the molten state, experimentally constrained models show that the iron ions are coordinated to 5 oxygen ions on average. 
There is a distribution of coordination sites in the liquid state, with the majority of both FeII and FeIII being 5-coordinated to oxygen and minority populations of both 4- and 6-fold coordinated iron. Properties Fe3O4 is ferrimagnetic with a Curie temperature of . There is a phase transition at , called the Verwey transition, where there is a discontinuity in the structure, conductivity and magnetic properties. This effect has been extensively investigated and, whilst various explanations have been proposed, it does not appear to be fully understood. While it has much higher electrical resistivity than iron metal (96.1 nΩ m), Fe3O4's electrical resistivity (0.3 mΩ m) is significantly lower than that of Fe2O3 (approx kΩ m). This is ascribed to electron exchange between the FeII and FeIII centres in Fe3O4. Uses Fe3O4 is used as a black pigment and is known as C.I. Pigment Black 11 (C.I. No. 77499) or Mars Black. Fe3O4 is used as a catalyst in the Haber process and in the water-gas shift reaction. The latter uses an HTS (high-temperature shift catalyst) of iron oxide stabilised by chromium oxide. This iron–chrome catalyst is reduced at reactor start-up to generate Fe3O4 from α-Fe2O3 and Cr2O3 from CrO3. Bluing is a passivation process that produces a layer of Fe3O4 on the surface of steel to protect it from rust. Along with sulfur and aluminium, it is an ingredient in steel-cutting thermite. Medical uses Nanoparticles of Fe3O4 are used as contrast agents in MRI scanning. Ferumoxytol, sold under the brand names Feraheme and Rienso, is an intravenous Fe3O4 preparation for treatment of anemia resulting from chronic kidney disease. Ferumoxytol is manufactured and globally distributed by AMAG Pharmaceuticals. Biological occurrence Magnetite has been found as nano-crystals in magnetotactic bacteria (42–45 nm) and in the beak tissue of homing pigeons.
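The 2:1 FeCl3:FeCl2 molar ratio specified for the ammonia co-precipitation route under Preparation translates directly into reagent masses. A back-of-envelope sketch (the batch volumes below are illustrative assumptions, not from the source; molar masses are computed from standard atomic weights):

```python
# Molar masses (g/mol) of the hydrated chlorides, from standard atomic weights
M_FECL3_6H2O = 55.845 + 3 * 35.453 + 6 * 18.015   # ~270.29 g/mol
M_FECL2_4H2O = 55.845 + 2 * 35.453 + 4 * 18.015   # ~198.81 g/mol

def reagent_masses(v_fe3_L: float, v_fe2_L: float, conc_M: float = 0.1):
    """Masses (g) of FeCl3*6H2O and FeCl2*4H2O needed to make the two
    stock solutions at the given molarity. Volumes are assumptions."""
    return (v_fe3_L * conc_M * M_FECL3_6H2O,
            v_fe2_L * conc_M * M_FECL2_4H2O)

# At equal 0.1 M concentration, a 2:1 molar ratio means 2:1 volumes,
# e.g. 200 mL of the FeIII solution to 100 mL of the FeII solution:
m3, m2 = reagent_masses(0.200, 0.100)
print(f"FeCl3·6H2O: {m3:.2f} g, FeCl2·4H2O: {m2:.2f} g")
```

Scaling the two volumes together preserves the 2:1 iron(III):iron(II) ratio that the spinel product Fe3O4 (one FeII, two FeIII per formula unit) requires.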
Physical sciences
Oxide salts
Chemistry
1310829
https://en.wikipedia.org/wiki/Ilish
Ilish
The ilish (Tenualosa ilisha) (), also known as the ilishi, hilsa fish, hilsa herring or hilsa shad, is a species of fish related to the herring, in the family Clupeidae. It is a very popular and sought-after food in the Bengali region, and is the national fish of Bangladesh and the state fish of the Indian state of West Bengal. As of 2023, 97% of the world's total ilish supply originates in Bangladesh. The fish contributes about 12% of the total fish production and about 1.15% of GDP in Bangladesh. On 6 August 2017, the Department of Patents, Designs and Trademarks under the Ministry of Industries declared ilish a Geographical Indication of Bangladesh. About 450,000 people are directly involved in catching the fish as a large part of their livelihood; around four to five million people are indirectly involved in the trade. Common names Other names include jatka, illi, ilish, ellis, palla fish, hilsha, ilih etc. (ilih/ilihi, Modar or Palva, Sindhī: پلو مڇي pallo machhi, Telugu: పులస pulasa). The name ilish is also used in India's Assamese, Bengali, and Odia communities. In Iraq it is called sboor (صبور). In Malaysia and Indonesia, it is commonly known as terubok. Due to its distinguishing features of being oily and tender, some Malays, especially in northern Johore, call it '' (to distinguish it from the toli, a species that is rich in tiny bones and not so oily). In Myanmar, it is called () in Burmese, which derives from the Mon language word ကသလံက်, with က in Mon and in Burmese meaning fish. Description Females of the species grow larger than males, with male individuals not reaching over 46 cm. Females can reach lengths of up to 55 cm. Maturity is generally attained by the end of the first year or the start of the second, with males maturing at sizes of 26–29 cm and females at 31–33 cm. It has no dorsal spines but 18–21 dorsal soft rays and anal soft rays. The belly has 30 to 33 scutes. There is a distinct median notch in the upper jaw. 
The gill rakers are fine and numerous, about 100 to 250 on the lower part of the arch, and the fins are hyaline. The fish shows a dark blotch behind the gill opening, followed in juveniles by a series of small spots along the flank. While alive, the fish is silver shot with gold and purple. Habitat and breeding The fish occurs in marine, brackish and fresh waters; it is pelagic-neritic and anadromous, with a depth range of about 200 m, within a tropical range (34°N–5°N, 42°E–97°E). It is found in rivers and estuaries in Bangladesh, India, Pakistan, Myanmar (also known as Burma) and the Persian Gulf area, where it can be found in the Tigris and Euphrates rivers in and around Iran and southern Iraq. The fish schools in coastal waters and ascends rivers (it is anadromous) for around 50–100 km to spawn during the southwest monsoons (June to September and January to April). The actual peak breeding season of the fish is a topic of debate among researchers. After spawning, the adults return to the sea, although some stocks remain resident in rivers; the juveniles (up to about 9 cm in size) are known as jatka in Bangladesh. Since the 1900s, numerous efforts have been made to breed and cultivate hilsa across South Asia, especially in India and Bangladesh. However, no significant success has been achieved in completing the fish's life cycle in captivity. Feeding habit The species filter feeds on plankton and forages in muddy bottoms. Its diet primarily consists of Bacillariophyceae (diatoms), Chlorophyceae (green algae), and crustaceans (Copepoda and Cladocera). While adults generally feed on Chlorophyceae and Bacillariophyceae, juveniles primarily depend on crustaceans. Production The fish is found in 11 countries: Bangladesh, India, Myanmar, Pakistan, Iran, Iraq, Kuwait, Bahrain, Indonesia, Malaysia and Thailand. Bangladesh is the top hilsa-producing country in the world, followed by Myanmar and then India. An estimated 97% of the total hilsa catch comes from Bangladesh. 
Ilish production in the country increased by 92% from 2008 to 2023. Food value The fish is a popular food amongst the people of South Asia and the Middle East, but especially with Bengalis, Odias and the Telugus of Coastal Andhra. Bengali fish curry is a popular dish made with mustard oil or seed. The Bengalis popularly call this dish Shorshe Ilish. It is very popular in Bengal (Bangladesh and India's West Bengal), as well as in Odisha, Tripura, Assam, Gujarat and Andhra Pradesh. It is also exported globally. Ilish collected from Bangladesh is regarded as the finest of all, celebrated for its size and subtle taste. In North America (where ilish is not always readily available) other shad fish are sometimes used as an ilish substitute, especially in Bengali cuisine. This typically occurs near the east coast of North America, where fresh shad fish, which taste similar to ilish, can be found. In Bangladesh, the fish are caught in the Meghna–Jamuna delta, which flows into the Bay of Bengal, and in the Meghna (lower Brahmaputra) and Jamuna rivers. In India, the Rupnarayan (which has the Kolaghater hilsa), Hooghly, Mahanadi, Narmada and Godavari rivers and the Chilika Lake are famous for their fish yields. In the Indian state of Andhra Pradesh, hilsa takes on a special significance. Here, the term "pulasa" refers specifically to the larger, mature hilsa that migrate upstream along the Godavari River. This migratory journey is crucial, as it is believed that the Godavari's unique muddy waters contribute to the development of a richer flavour and firmer texture in the fish, compared to hilsa caught elsewhere. Due to this perceived superior quality and its limited seasonal availability (typically the monsoon season), pulasa commands a significantly higher price and cultural importance in Andhra Pradesh. It is considered a rich delicacy, often referred to as the "king of fish" in the Godavari areas, and features in celebratory meals and as a prized gift. 
The upstream migration itself is seen as a vital natural process, and the pulasa a reward for the patient fishermen who wait for its arrival. In Pakistan, most hilsa fish are caught in the Indus River Delta in Sindh. They are also caught in the sea, but some consider the marine stage of the fish not so tasty. The fish has very sharp and tough bones, making it problematic to eat for some. Ilish is an oily fish rich in omega-3 fatty acids. Recent experiments have shown its beneficial effects in decreasing cholesterol and insulin levels in rats. In Bengal and Odisha, ilish can be smoked, fried, steamed or baked in young plantain leaves, prepared with mustard seed paste, curd, aubergine, different condiments like jira (cumin) and so on. It is said that people can cook ilish in more than 50 ways. Ilish roe is also popular as a side dish. Ilish can be cooked in very little oil since the fish itself is very oily. Ilish in culture Ilish is the national fish of Bangladesh. In Andhra Pradesh, the saying goes "Pustelu ammi ayina Pulasa tinocchu", meaning roughly "It's worth eating Pulasa/Ilish even if you have to sell your mangala sutra." Hilsa is also known as pulasa in the Godavari districts of the state. The name pulasa stays with the fish for a limited period between July and September, when floods swell the Godavari River. At this time the fish is in high demand and sometimes fetches $100 per kilo. In many Bengali Hindu families a pair of ilish fishes (Bengali: Jora Ilish) are bought on auspicious days, for example for special prayers or puja days, such as Saraswati Puja, for the Hindu goddess of music, art and knowledge, which takes place at the beginning of spring, or Lakshmi Puja, for the goddess of wealth and prosperity, which takes place in autumn. Some people offer the fish to the goddess Lakshmi, without which the puja is sometimes thought to be incomplete. It is often given as a gift (Bengali: tattwa) in Bengali weddings. 
Hilsa is also known in Sindh as Pallo Machi and is an important part of Sindhi cuisine, prepared with numerous cooking methods. It can be deep fried and garnished with local spices, cooked with onions and potatoes into a traditional fish meal, or barbecued. The fish often has roe, which is called "aani" in Sindhi and is enjoyed as a delicacy, often fried alongside the palla and served with the fish fillets. The rivalry of East Bengal and Mohun Bagan, two football clubs of Kolkata, is celebrated with food. When East Bengal wins, an ilish (hilsha) dish is cooked by the East Bengal supporters. Similarly, when Mohun Bagan wins, a chingri (prawn) dish is prepared by the Mohun Bagan supporters. These items often feature in the tifos of these respective clubs. Overfishing Due to the demand and popularity of this species, overfishing is rampant. Fish weighing around 2 to 3 kilograms have become rare in India, as even the smaller fish are caught using finer fishing nets, even as production in Bangladesh has increased. As a consequence, prices of the fish have risen. In the past, ilish were not harvested between Vijaya Dashami and Saraswati Puja owing to informal customs of Odia and Bengali Hindus, as this is the breeding period of the fish. But as disposable incomes grew, wealthier consumers abandoned the old traditions. The advent of finer fishing nets and advanced trawling techniques, and environmental degradation of the rivers, has worsened the situation. Fishermen have been ignoring calls to at least leave the juvenile "jatka" alone to repopulate the species. The fishing of the young jatka is now illegal in Bangladesh. This ban, however, has resulted in a rise in unemployment, as around 83,000 fishermen are unable to pursue their former livelihood for eight months every year. It has also led to the creation of a black market where jatka are sold for exorbitant prices. 
Furthermore, the changes brought about by global warming have led to a gradual depletion of the ilish's breeding grounds, reducing populations of the fish even further. Pollution in rivers has worsened the situation, though thanks to slightly better waters the fish are found more often near the Bangladesh delta. Owing to this situation, ilish has been used as a diplomatic trade item, most recently in the distribution of COVID-19 vaccines. Bangladesh has regularly imposed restrictions on the export of ilish abroad, citing its scarcity. Despite this, former Prime Minister Sheikh Hasina periodically lifted the ban to allow the annual export and gifting of 3,000–5,000 tonnes of fish to India during Durga Puja, popularly known as "Hilsa Diplomacy". Since the fall of the Hasina government, the interim government of Bangladesh has imposed a ban on ilish exports, which was partially lifted on 21 September 2024 to allow for the export of 3,000 tonnes of fish for Durga Puja.
Biology and health sciences
Clupeiformes
null
138789
https://en.wikipedia.org/wiki/Flintlock
Flintlock
Flintlock is a general term for any firearm that uses a flint-striking ignition mechanism, the first of which appeared in Western Europe in the early 16th century. The term may also apply to a particular form of the mechanism itself, also known as the true flintlock, that was introduced in the early 17th century, and gradually replaced earlier firearm-ignition technologies, such as the matchlock, the wheellock, and the earlier flintlock mechanisms such as the snaplock and snaphaunce. The true flintlock continued to be in common use for over two centuries, replaced by percussion cap and, later, the cartridge-based systems in the early-to-mid 19th century. Although long superseded by modern firearms, flintlock weapons enjoy continuing popularity with black-powder shooting enthusiasts. History French court gunsmith Marin le Bourgeoys made a firearm incorporating a flintlock mechanism for King Louis XIII shortly after his accession to the throne in 1610. However, firearms using some form of flint ignition mechanism had already been in use for over half a century. The first proto-flintlock was the snaplock, which was probably invented shortly before 1517 and was inarguably in use by 1547. Their cost and delicacy limited their use; for example around 1662, only one in six firearms used by the British royal army was a snaphaunce, the rest being matchlocks. The development of firearm lock mechanisms had proceeded from the matchlock to wheellock to the earlier flintlocks (snaplock, snaphance, miquelet, and doglock) in the previous two centuries, and each type had been an improvement, contributing design features to later firearms which were useful. Le Bourgeoys fitted these various features together to create what became known as the flintlock or true flintlock. 
Flintlock firearms differed from the then more common and cheaper to manufacture matchlock arms in that they were fired by the spark of the flint against the powder charge rather than by the direct application of a lighted length of cord or (as it was then called) "match". This was particularly important with men armed with muskets guarding artillery trains where a lighted cord ("match") would have been a dangerous fire hazard. Such men armed with these flintlocks were called "fusiliers" as flintlocks were then called "fusils" from the French word for such. Various types were in use by elite infantry, scouts, artillery guards (as noted), and private individuals in European armies throughout most of the 16th and 17th centuries, though matchlocks continued to overwhelmingly outnumber them. The early Dutch States Army used flintlocks on an unusually large scale, issuing snaphances to its infantry in the 1620s and true flintlocks by 1640. While it is known that the Dutch were the first power to adopt the flintlock as the standard infantry weapon, the exact chronology of the transition is uncertain. The new flintlock system quickly became popular and was known and used in various forms throughout Europe by 1630, although older flintlock systems continued to be used for some time. Examples of early flintlock muskets can be seen in the painting "Marie de' Medici as Bellona" by Rubens (painted around 1622–1625). These flintlocks were in use alongside older firearms such as matchlocks, wheellocks, and miquelet locks for nearly a hundred years. The last major European power to standardize the flintlock was the Holy Roman Empire, when in 1702 the Emperor instituted a new regulation that all matchlocks were to be converted or scrapped. The "true" flintlock was less expensive to manufacture than earlier flintlocks, which along with general economic development allowed every European soldier to have one by the 18th century. 
Compared to the earlier matchlock, flintlocks could be reloaded roughly twice as fast, misfired far less often, and were easier to use in various environments because they did not require a lit match. This instantly changed the calculus of infantry combat; by one calculation, a formation equipped entirely with flintlocks (with paper cartridges) could output ten times as many shots in an equivalent period of time as a typical early 17th-century pike and shot formation equipped with matchlocks (pike:shot ratio of 3:2). Various breech-loading flintlocks were developed starting around 1650. The most popular action had a barrel that could be unscrewed from the rest of the gun. This is more practical on pistols because of the shorter barrel length. This type is known as a Queen Anne pistol because it was during her reign that it became popular (although it was actually introduced in the reign of King William III). Another type has a removable screw plug set into the side, top or bottom of the barrel. A large number of sporting rifles were made with this system, as it allowed easier loading compared with muzzle loading with a tight-fitting bullet and patch. One of the more successful was the system built by Isaac de la Chaumette starting in 1704. The barrel could be opened by three revolutions of the triggerguard, to which it was attached. The plug stayed attached to the barrel and the ball and powder were loaded from the top. This system was improved in the 1770s by Colonel Patrick Ferguson, and 100 experimental rifles were used in the American Revolutionary War. The only two flintlock breech loaders to be produced in quantity were the Hall and the Crespi. The first was invented by John Hall and patented c. 1817. It was issued to the U.S. Army as the Model 1819 Hall Breech Loading Rifle. The Hall rifles and carbines were loaded using a combustible paper cartridge inserted into the upward-tilting breechblock. Hall rifles leaked gas from the often poorly fitted action. 
The same problem affected the muskets produced by Giuseppe Crespi and adopted by the Austrian Army in 1771. Nonetheless, the Crespi system was experimented with by the British during the Napoleonic Wars, and percussion Hall guns saw service in the American Civil War. Flintlock weapons were commonly used until the mid-19th century, when they were replaced by percussion lock systems. Even though they have long been considered obsolete, flintlock weapons continue to be produced today by manufacturers such as Pedersoli, Euroarms, and Armi Sport. Not only are these weapons used by modern re-enactors, but they are also used for hunting, as many U.S. states have dedicated hunting seasons for black-powder weapons, which include both flintlock and percussion lock weapons. Even after it became dominant in Europe, the flintlock did not proliferate globally. Flintlocks were far more complicated to manufacture than simple matchlocks, thus less-developed countries continued to use the latter into the mid-19th century, long after Europe had made the switch in the late 17th. In the Indian subcontinent, the natively manufactured toradar matchlock was the most common firearm type until about 1830. The Sinhalese kingdoms locally produced flintlock mechanisms for long-barreled muskets known as the bondikula, noted for their unique bifurcated butts and heavy ornamentation. These were widely used during the 17th and 18th centuries. In China, some flintlocks had been acquired and illustrated by 1635, but they were not adopted by the army. An 1836 British report about the Qing dynasty's military strength noted that all Chinese firearms were "ill-made" matchlocks, with no flintlocks or any of the other "tribes of firearm." Southeast Asia was in a similar position to China and India. The Vietnamese were introduced to flintlocks by the Dutch in the 1680s, and bought some from European merchants. 
Flintlocks began to appear in Javanese arsenals in the first decade of the eighteenth century and the Dutch began to supply flintlocks to the rulers of Surabaya in the 1710s and 1720s. But matchlocks remained prominent until the mid-19th century, and the Southeast Asian states generally lacked the ability to natively produce the flintlock. The Jiaozhi arquebus was still the main firearm of Nguyễn dynasty musketeers at the end of the 18th century. The Burmese only obtained a majority of flintlocks in their armed forces by the 1860s (the Burmese kings demanded to be paid in surplus European muskets instead of currency), at which point the European powers had already moved on to percussion cap firearms. Subtypes Flintlocks may be any type of small arm: long gun or pistol, smoothbore or rifle, muzzleloader or breechloader. Pistols Flintlock pistols were used as self-defense weapons and as a military arm. Their effective range was short, and they were frequently used as an adjunct to a sword or cutlass. Pistols were usually smoothbore although some rifled pistols were produced. Flintlock pistols came in a variety of sizes and styles which often overlap and are not well defined, many of the names we use having been applied by collectors and dealers long after the pistols were obsolete. The smallest were less than long and the largest were over . From around the beginning of the 1700s the larger pistols got shorter, so that by the late 1700s the largest would be around long. The smallest would fit into a typical pocket or a hand warming muff and could easily be carried by women. The largest sizes would be carried in holsters across a horse's back just ahead of the saddle. In-between sizes included the coat pocket pistol, or coat pistol, which would fit into a large pocket, the coach pistol, meant to be carried on or under the seat of a coach in a bag or box, and belt pistols, sometimes equipped with a hook designed to slip over a belt or waistband. 
Larger pistols were called horse pistols. Arguably the most elegant of the pistol designs was the Queen Anne pistol, which was made in all sizes. The high point of the mechanical development of the flintlock pistol was arguably the British duelling pistol; it was highly reliable, water resistant and accurate. External decoration was minimal but craftsmanship was evident, and the internal works were often finished to a higher standard than the exterior. Duelling pistols were the size of the horse pistols of the late 1700s, around long, and were usually sold in pairs along with accessories in a wooden case with compartments for each piece. Muskets Flintlock muskets were the mainstay of European armies between 1660 and 1840. A musket was a muzzle-loading smoothbore long gun that was loaded with a round lead ball, but it could also be loaded with shot for hunting. For military purposes, the weapon was loaded with ball, or a mixture of ball with several large shot (called buck and ball), and had an effective range of about . Smoothbore weapons that were designed for hunting birds were called "fowlers." Flintlock muskets tended to be of large caliber and usually had no choke, allowing them to fire full-caliber balls. Military flintlock muskets tended to weigh approximately 10 pounds (4.53 kg), as heavier weapons were found to be too cumbersome, and lighter weapons were not rugged or heavy enough to be used in hand-to-hand combat. They were usually designed to be fitted with a bayonet. On flintlocks, the bayonet played primarily a deterrent role - casualty lists from several battles in the 18th century showed that fewer than 2% of wounds were caused by bayonets. Antoine-Henri Jomini, a celebrated military author who served in numerous armies during the Napoleonic period, stated that the majority of bayonet charges in the open resulted in one side fleeing before any contact was made. Flintlock weapons were not used like modern rifles. 
They tended to be fired in mass volleys, followed by bayonet charges in which the weapons were used much like the pikes that they replaced. Because they were also used as pikes, military flintlocks tended to be approximately in length (without the bayonet attached), and used bayonets that were approximately in length. Rifles In Germany, the Jäger rifle was developed by the late 18th century. It was used for hunting, and in a military context, skirmishing and by specialist marksmen. In the United States, the small game hunting long rifle ("Pennsylvania rifle" or "Kentucky rifle") was developed in southeastern Pennsylvania in the early 1700s. Based on the Jäger rifle, but with a much longer barrel, these were exceptionally accurate for their time, and had an effective range of approximately . They tended to fire smaller caliber rounds, with calibers in the range of being the most common - hence being sometimes referred to as a "pea rifle." The Jezail was a military long flintlock rifle, developed near and popular throughout Afghanistan, India, Central Asia and parts of the Middle East. However, while European military tactics remained based on loosely-aimed mass volleys, most of their flintlocks were still smoothbore - as the spiral grooves of rifling made rifles take more time to load, and after repeated shots black powder tended to foul the barrels. Rifled flintlocks saw most military use by sharpshooters, skirmishers, and other support units. While by the late 18th century there were increasing efforts to take advantage of the rifle for military purposes, with specialist rifle units such as the King's Royal Rifle Corps of 1756 and Rifle Brigade (Prince Consort's Own), smoothbores predominated until the advent of the Minié ball – by which time the percussion cap had made the flintlock obsolete. 
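The tenfold rate-of-fire advantage cited earlier for all-flintlock formations can be sketched with a back-of-the-envelope calculation. The per-man firing rates below are illustrative assumptions, not historical measurements; only the 3:2 pike:shot ratio comes from the text.

```python
def shots_per_minute(musketeers, rate_per_man):
    """Total shots a body of musketeers can deliver per minute."""
    return musketeers * rate_per_man

# A 1,000-man pike-and-shot formation at a pike:shot ratio of 3:2
# fields only 400 matchlock musketeers; assume 0.5 shots per minute
# per man (illustrative figure).
matchlock = shots_per_minute(400, 0.5)

# A 1,000-man all-flintlock formation with paper cartridges: every
# man shoots, at an assumed 2 shots per minute (illustrative figure).
flintlock = shots_per_minute(1000, 2.0)

print(flintlock / matchlock)  # -> 10.0, the tenfold figure cited earlier
```

The multiplier comes from two factors compounding: every man in the formation carries a firearm (2.5x) and each man fires faster (4x under these assumed rates).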
Multishot flintlock weapons Multiple barrels Because of the time needed to reload (even experts needed 15 seconds to reload a smooth-bore, muzzle-loading musket), flintlocks were sometimes produced with two, three, four or more barrels for multiple shots. These designs tended to be costly to make and were often unreliable and dangerous. While weapons like double barreled shotguns were reasonably safe, weapons like the pepperbox revolver would sometimes fire all barrels simultaneously, or would sometimes just explode in the user's hand. It was often cheaper, safer, and more reliable to carry several single-shot weapons instead. Single barrel Some repeater rifles, multishot single barrel pistols, and multishot single barrel revolvers were also made. Notable are the Puckle gun, Mortimer, Kalthoff, Michele Lorenzoni, Abraham Hill, Cookson pistols, the Jennings repeater and the Elisha Collier revolver. Drawbacks Flintlocks were prone to many problems compared to modern weapons. Misfires were common. The flint had to be properly maintained, as a dull or poorly knapped piece of flint would not make as much of a spark and would increase the misfire rate dramatically. Moisture was a problem, since moisture on the frizzen or damp powder would prevent the weapon from firing. This rendered flintlock weapons unusable in rainy or damp weather. Some armies attempted to remedy this by using a leather cover over the lock mechanism, but this proved to have only limited success. Accidental firing was also a problem for flintlocks. A burning ember left in the barrel could ignite the next powder charge as it was loaded. This could be avoided by waiting between shots for any leftover residue to completely burn. Running a lubricated cleaning patch down the barrel with the ramrod would also extinguish any embers, and would clean out some of the barrel fouling as well. Soldiers on the battlefield could not take these precautions. 
They had to fire as quickly as possible, often firing three to four rounds per minute. Loading and firing at such a pace dramatically increased the risk of an accidental discharge. When a flintlock was fired it sprayed a shower of sparks forwards from the muzzle and another sideways out of the flash-hole. One reason for firing in volleys was to ensure that one man's sparks did not ignite the next man's powder as he was in the act of loading. An accidental frizzen strike could also ignite the main powder charge, even if the pan had not yet been primed. Some modern flintlock users will still place a leather cover over the frizzen while loading as a safety measure to prevent this from happening. However, this slows down the loading time, which prevented safety practices such as this from being used on the battlefields of the past. The black powder used in flintlocks would quickly foul the barrel, which was a problem for rifles and for smoothbore weapons that fired a tighter-fitting round for greater accuracy. Each shot would add more fouling to the barrel, making the weapon more and more difficult to load. Even if the barrel was badly fouled, the flintlock user still had to properly seat the round all the way to the breech of the barrel. Leaving an air gap between the powder and the round (known as "short starting") was very dangerous, and could cause the barrel to explode. Handling loose black powder was also dangerous, for obvious reasons. Powder measures, funnels, and other pieces of equipment were usually made out of brass to reduce the risk of creating a spark, which could ignite the powder. Soldiers often used pre-made paper cartridges, which unlike modern cartridges were not inserted whole into the weapon. Instead, they were tubes of paper that contained a pre-measured amount of powder and a lead ball. Although paper cartridges were safer to handle than loose powder, their primary purpose was not safety related at all. 
Instead, paper cartridges were used mainly because they sped up the loading process. A soldier did not have to take the time to measure out powder when using a paper cartridge. He simply tore open the cartridge, used a small amount of powder to prime the pan, then dumped the remaining powder from the cartridge into the barrel. The black powder used in flintlocks contained sulfur. If the weapon was not cleaned after use, the sulfurous powder residue would absorb moisture from the air, producing corrosive acids that would erode the inside of the gun barrel and the lock mechanism. Flintlock weapons that were not properly cleaned and maintained would corrode to the point of being destroyed. Most flintlocks were produced at a time before modern manufacturing processes became common. Even in mass-produced weapons, parts were often handmade. If a flintlock became damaged, or parts wore out due to age, the damaged parts were not easily replaced. Parts would often have to be filed down, hammered into shape, or otherwise modified so that they would fit, making repairs much more difficult. Machine-made, interchangeable parts began to be used only shortly before flintlocks were replaced by caplocks. Method of operation A cock tightly holding a sharp piece of flint is rotated to half-cock, where the sear falls into a safety notch on the tumbler, preventing an accidental discharge. The operator loads the gun, usually from the muzzle end, with black powder from a powder flask, followed by lead shot or a round lead ball, usually wrapped in a piece of paper or a cloth patch, all rammed down with a ramrod that is usually stored on the underside of the barrel. Wadding between the charge and the ball was often used in earlier guns. The flash pan is primed with a small amount of very finely ground gunpowder, and the flashpan lid or frizzen is closed. The gun is now in a "primed and loaded" state, and this is how it would typically be carried while hunting or if going into battle. 
To fire: The cock is further rotated from half-cock to full-cock, releasing the safety lock on the cock. The gun is leveled and the trigger is pulled, releasing the cock holding the flint. The flint strikes the frizzen, a piece of steel on the priming pan lid, opening it and exposing the priming powder. The contact between flint and frizzen produces a shower of sparks (burning pieces of the metal) that is directed into the gunpowder in the flashpan. The powder ignites, and the flash passes through a small hole in the barrel (called a vent or touchhole) that leads to the combustion chamber where it ignites the main powder charge, and the gun discharges. The British Army and the Continental Army both used paper cartridges to load their weapons. The powder charge and ball were instantly available to the soldier inside this small paper envelope. To load a flintlock weapon using a paper cartridge, a soldier would move the cock to the half-cock position; tear the cartridge open with his teeth; fill the flashpan half-full with powder, directing it toward the vent; close the frizzen to keep the priming charge in the pan; pour the rest of the powder down the muzzle and stuff the cartridge in after it; take out the ramrod and ram the ball and cartridge all the way to the breech; replace the ramrod; shoulder the weapon. The weapon can then be fully cocked and fired. Cultural impact Firearms using some form of flintlock mechanism were the main form of firearm for over 200 years. It was not until Reverend Alexander John Forsyth invented a rudimentary percussion cap system in 1807 that the flintlock system began to decline in popularity. The percussion ignition system was more weatherproof and reliable than the flintlock, but the transition from flintlock to percussion cap was a slow one, and the percussion system was not widely used until around 1830. The Model 1840 U.S. musket was the last flintlock firearm produced for the U.S. military. 
However, obsolete flintlocks saw action in the earliest days of the American Civil War. For example, in 1861, the Army of Tennessee had over 2,000 flintlock muskets in service. As a result of the flintlock's long active life, it left lasting marks on the language and on drill and parade. Terms such as "lock, stock and barrel", "going off half-cocked" and "flash in the pan" remain current in English. In addition, the weapon positions and drill commands that were originally devised to standardize carrying, loading and firing a flintlock weapon remain the standard for drill and display (see manual of arms).
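The half-cock safety and firing sequence described under "Method of operation" can be modeled as a small state machine. This is a sketch for illustration only; the class and method names are invented, not part of any historical or standard terminology.

```python
# Minimal model of the flintlock lock positions: the sear's safety
# notch at half-cock means the gun cannot fire from that position.

class FlintlockLock:
    # Allowed transitions between lock positions.
    TRANSITIONS = {
        "uncocked": {"half-cock"},
        "half-cock": {"full-cock"},   # sear rests in the safety notch
        "full-cock": {"fired"},       # trigger releases the cock
        "fired": {"half-cock"},       # re-cock to half-cock to reload
    }

    def __init__(self):
        self.state = "uncocked"

    def move_to(self, new_state):
        # Refuse any transition not listed above, e.g. half-cock -> fired.
        if new_state not in self.TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

lock = FlintlockLock()
lock.move_to("half-cock")   # load and prime at half-cock
lock.move_to("full-cock")
lock.move_to("fired")       # flint strikes frizzen; pan flash fires the charge
```

Note that "fired" is unreachable from "half-cock", which is the mechanical origin of the phrase "going off half-cocked": a worn sear that fired from half-cock was a dangerous malfunction, not normal operation.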
Defensive wall
A defensive wall is a fortification usually used to protect a city, town or other settlement from potential aggressors. The walls can range from simple palisades or earthworks to extensive military fortifications such as curtain walls with towers, bastions and gates for access to the city. From ancient to modern times, they were used to enclose settlements. Generally, these are referred to as city walls or town walls, although there were also walls, such as the Great Wall of China, Walls of Benin, Hadrian's Wall, Anastasian Wall, and the Atlantic Wall, which extended far beyond the borders of a city and were used to enclose regions or mark territorial boundaries. In mountainous terrain, defensive walls such as letzis were used in combination with castles to seal valleys from potential attack. Beyond their defensive utility, many walls also had important symbolic functions representing the status and independence of the communities they embraced. Existing ancient walls are almost always masonry structures, although brick and timber-built variants are also known. Depending on the topography of the area surrounding the city or the settlement the wall is intended to protect, elements of the terrain such as rivers or coastlines may be incorporated in order to make the wall more effective. Walls may only be crossed by entering the appropriate city gate and are often supplemented with towers. The practice of building these massive walls, though having its origins in prehistory, was refined during the rise of city-states, and energetic wall-building continued into the medieval period and beyond in certain parts of Europe. Simpler defensive walls of earth or stone, thrown up around hillforts, ringworks, early castles and the like, tend to be referred to as ramparts or banks. History Mesopotamia From very early history to modern times, walls have been a near necessity for every city. Uruk in ancient Sumer (Mesopotamia) is one of the world's oldest known walled cities. 
Before that, the proto-city of Jericho in the West Bank had a wall surrounding it as early as the 8th millennium BC. The earliest known town wall in Europe is that of Solnitsata, built in the 6th or 5th millennium BC. The Assyrians deployed large labour forces to build new palaces, temples and defensive walls. Babylon was one of the most famous cities of the ancient world, especially as a result of the building program of Nebuchadnezzar, who expanded the walls and built the Ishtar Gate. The Persians built defensive walls to protect their territories, notably the Derbent Wall and the Great Wall of Gorgan, built on either side of the Caspian Sea against nomadic nations. South Asia Some settlements in the Indus Valley civilization were also fortified. By about 3500 BC, hundreds of small farming villages dotted the Indus floodplain. Many of these settlements had fortifications and planned streets. The stone and mud brick houses of Kot Diji were clustered behind massive stone flood dykes and defensive walls, for neighboring communities quarreled constantly about the control of prime agricultural land. Mundigak, in present-day south-east Afghanistan, has defensive walls and square bastions of sun-dried bricks. Southeast Asia The concept of a city fully enclosed by walls was not fully developed in Southeast Asia until the arrival of Europeans. However, Burma was an exception, having a longer tradition of fortified walled towns: Burmese towns had city walls by 1566. In addition, Rangoon in 1755 had stockades made of teak logs on a ground rampart. The city was fortified with six city gates, each flanked by massive brick towers. In other areas of Southeast Asia, city walls spread in the 16th and 17th century along with the rapid growth of cities in this period, as a need to defend against European naval attack. Ayutthaya built its walls in 1550, and Banten, Jepara, Tuban and Surabaya all had theirs by 1600, while Makassar had theirs by 1634. 
A sea wall was the main defense for Gelgel. For cities that did not have city walls, the least they would have had was a stockaded citadel. This wooden walled area housed the royal citadel or aristocratic compounds, as in Surakarta and Aceh. China Large rammed earth walls had been built in China since the Shang dynasty (–1050 BC), as the capital at ancient Ao had enormous walls built in this fashion (see siege for more info). Although stone walls were built in China during the Warring States period (481–221 BC), mass conversion to stone architecture did not begin in earnest until the Tang dynasty (618–907 AD). Sections of the Great Wall had been built prior to the Qin dynasty (221–207 BC) and were subsequently connected and fortified during the Qin dynasty, although its present form is mostly an engineering feat and remodeling of the Ming dynasty (1368–1644 AD). The large walls of Pingyao serve as one example. Likewise, the walls of the Forbidden City in Beijing were established in the early 15th century by the Yongle Emperor. According to Tonio Andrade, the immense thickness of Chinese city walls prevented larger cannons from being developed, since even industrial-era artillery had trouble breaching Chinese walls. Korea Eupseongs (Hangul: 읍성), 'city fortresses', which served both military and administrative functions, were constructed from the time of Silla until the end of the Joseon dynasty. Throughout the period of the Joseon dynasty eupseongs were modified and renovated, and new eupseongs were built, but in 1910 Japan (the occupying power of Korea) issued an order for their demolition, resulting in most being destroyed. Studies of the ruins and reconstructions of the ancient city walls are currently being undertaken at some sites. Europe In ancient Greece, large stone walls had been built in Mycenaean Greece, such as at the ancient site of Mycenae (famous for the huge stone blocks of its 'cyclopean' walls). 
In classical era Greece, the city of Athens built a long set of parallel stone walls called the Long Walls that reached their guarded seaport at Piraeus. Exceptions were few; neither ancient Sparta nor ancient Rome had walls for a long time, choosing to rely on their militaries for defense instead. Initially, these fortifications were simple constructions of wood and earth, which were later replaced by mixed constructions of stones piled on top of each other without mortar. The Romans later fortified their cities with massive, mortar-bound stone walls. Among these are the largely extant Aurelian Walls of Rome and the Theodosian Walls of Constantinople, together with partial remains elsewhere. These are mostly city gates, like the Porta Nigra in Trier or Newport Arch in Lincoln. In Central Europe, the Celts built large fortified settlements which the Romans called oppida, whose walls seem partially influenced by those built in the Mediterranean. The fortifications were continuously expanded and improved. Apart from these, the early Middle Ages also saw the creation of some towns built around castles. These cities were only rarely protected by simple stone walls and more usually by a combination of both walls and ditches. From the 12th century AD hundreds of settlements of all sizes were founded all across Europe, which very often obtained the right of fortification soon afterwards. Several medieval town walls have survived into the modern age, such as the walled towns of Austria, the walls of Tallinn, or the town walls of York and Canterbury in England, as well as Nördlingen, Dinkelsbühl and Rothenburg ob der Tauber in Germany. In Spain, Ávila and Tossa de Mar host surviving medieval walls, while Lugo has an intact Roman wall. The founding of urban centers was an important means of territorial expansion and many cities, especially in central and eastern Europe, were founded for this purpose during the period of Eastern settlement. 
These cities are easy to recognise due to their regular layout and large market spaces. The fortifications of these settlements were continuously improved to reflect the current level of military development. Gunpowder era Chinese city walls While gunpowder and cannons were invented in China, China never developed wall breaking artillery to the same extent as other parts of the world. Part of the reason is probably because Chinese walls were already highly resistant to artillery and discouraged increasing the size of cannons. In the mid-twentieth century a European expert in fortification commented on their immensity: "in China ... the principal towns are surrounded to the present day by walls so substantial, lofty, and formidable that the medieval fortifications of Europe are puny in comparison." Chinese walls were thick. The eastern wall of Ancient Linzi, established in 859 BC, had a maximum thickness of 43 metres and an average thickness of 20–30 metres. Ming prefectural and provincial capital walls were thick at the base and at the top. In Europe the height of wall construction was reached under the Roman Empire, whose walls often reached in height, the same as many Chinese city walls, but were only thick. Rome's Servian Walls reached in thickness and in height. Other fortifications also reached these specifications across the empire, but all these paled in comparison to contemporary Chinese walls, which could reach a thickness of at the base in extreme cases. Even the walls of Constantinople which have been described as "the most famous and complicated system of defence in the civilized world," could not match up to a major Chinese city wall. Had both the outer and inner walls of Constantinople been combined they would have only reached roughly a bit more than a third the width of a major wall in China. According to Philo the width of a wall had to be thick to be able to withstand ancient (non-gunpowder) siege engines. 
European walls of the 1200s and 1300s could reach the Roman equivalents but rarely exceeded them in length, width, and height, remaining around thick. When referring to a very thick wall in medieval Europe, what is usually meant is a wall of in width, which would have been considered thin in a Chinese context. There are some exceptions such as the Hillfort of Otzenhausen, a Celtic ringfort with a thickness of in some parts, but Celtic fort-building practices died out in the early medieval period. Andrade goes on to note that the walls of the marketplace of Chang'an were thicker than the walls of major European capitals. Aside from their immense size, Chinese walls were also structurally different from the ones built in medieval Europe. Whereas European walls were mostly constructed of stone interspersed with gravel or rubble filling and bonded by limestone mortar, Chinese walls had tamped earthen cores which absorbed the energy of artillery shots. Walls were constructed using wooden frameworks which were filled with layers of earth tamped down to a highly compact state, and once that was completed the frameworks were removed for use in the next wall section. Starting from the Song dynasty these walls were improved with an outer layer of bricks or stone to prevent erosion, and during the Ming, earthworks were interspersed with stone and rubble. Most Chinese walls were also sloped rather than vertical to better deflect projectile energy. The Chinese Wall Theory essentially rests on a cost benefit hypothesis, where the Ming recognized the highly resistant nature of their walls to structural damage, and could not imagine any affordable development of the guns available to them at the time to be capable of breaching said walls. Even as late as the 1490s a Florentine diplomat considered the French claim that "their artillery is capable of creating a breach in a wall of eight feet in thickness" to be ridiculous and the French "braggarts by nature". 
Very rarely did cannons blast breaches in city walls in Chinese warfare. This may have been partly due to cultural tradition. Famous military commanders such as Sun Tzu and Zheng Zhilong recommended not to directly attack cities and storm their walls. Even when direct assaults were made with cannons, it was usually by focusing on the gates rather than the walls. There were instances where cannons were used against walled fortifications, such as by Koxinga, but only in the case of small villages. During Koxinga's career, there is only one recorded case of capturing a settlement by bombarding its walls: the siege of Taizhou in 1658. In 1662, the Dutch found that bombarding the walls of a town in Fujian Province had no effect, and they focused on the gates instead, just as in Chinese warfare. In 1841, a 74-gun British warship bombarded a Chinese coastal fort near Guangzhou and found that it was "almost impervious to the efforts of horizontal fire." In fact, twentieth-century explosive shells had some difficulty creating a breach in tamped earthen walls. Bastions and star forts As a response to gunpowder artillery, European fortifications began displaying architectural principles such as lower and thicker walls in the mid-1400s. Cannon towers were built with artillery rooms from which cannons could fire through slits in the walls. However, this proved problematic, as the slow rate of fire, reverberating concussions, and noxious fumes produced greatly hindered defenders. Gun towers also limited the size and number of cannon placements, because the rooms could only be built so big. Notable surviving artillery towers include a seven-layer defensive structure built in 1480 at Fougères in Brittany, and a four-layer tower built in 1479 at Querfurth in Saxony. The star fort, also known as the bastion fort, trace italienne, or renaissance fortress, was a style of fortification that became popular in Europe during the 16th century. 
The bastion and star fort was developed in Italy, where the Florentine engineer Giuliano da Sangallo (1445–1516) compiled a comprehensive defensive plan using the geometric bastion and full trace italienne that became widespread in Europe. The main distinguishing features of the star fort were its angle bastions, each placed to support its neighbors with lethal crossfire, covering all angles and making them extremely difficult to engage and attack. Angle bastions consisted of two faces and two flanks. Artillery positioned at the flanks could fire parallel into the opposite bastion's line of fire, thus providing two lines of cover fire against an armed assault on the wall, and preventing mining parties from finding refuge. Meanwhile, artillery positioned on the bastion platform could fire frontally from the two faces, also providing overlapping fire with the opposite bastion. Overlapping, mutually supporting defensive fire was the greatest advantage enjoyed by the star fort. As a result, sieges lasted longer and became more difficult affairs. By the 1530s the bastion fort had become the dominant defensive structure in Italy. Outside Europe, the star fort became an "engine of European expansion," and acted as a force multiplier so that small European garrisons could hold out against numerically superior forces. Wherever star forts were erected, the natives experienced great difficulty in uprooting European invaders. In China, Sun Yuanhua advocated for the construction of angled bastion forts in his Xifashenji, so that their cannons could better support each other. The officials Han Yun and Han Lin noted that cannons on square forts could not support each side as well as bastion forts could. Their efforts to construct bastion forts, and the results, were limited. Ma Weicheng built two bastion forts in his home county, which helped fend off a Qing incursion in 1638. By 1641, there were ten bastion forts in the county. 
Before bastion forts could spread any further, the Ming dynasty fell in 1644, and they were largely forgotten, as the Qing dynasty was on the offensive most of the time and had no use for them. Decline In the wake of city growth and the ensuing change of defensive strategy, which focused more on the defense of forts around cities, many city walls were demolished. The invention of gunpowder also rendered walls less effective, as siege cannons could then be used to blast through walls, allowing armies to simply march through. Today, the presence of former city fortifications can often only be deduced from the presence of ditches, ring roads or parks. Furthermore, some street names hint at the presence of fortifications in times past, for example when words such as "wall" or "glacis" occur. In the 19th century, less emphasis was placed on preserving fortifications for the sake of their architectural or historical value: on the one hand, complete fortifications were restored (Carcassonne); on the other hand, many structures were demolished in an effort to modernize the cities. One exception to this is the "monument preservation" law of King Ludwig I of Bavaria, which led to the nearly complete preservation of many monuments such as Rothenburg ob der Tauber, Nördlingen and Dinkelsbühl. The countless small fortified towns in the Franconia region were also preserved as a consequence of this edict. Modern era Walls and fortified wall structures were still built in the modern era. They did not, however, have the original purpose of being a structure able to resist a prolonged siege or bombardment. Modern examples of defensive walls include: Berlin's city wall, from the 1730s to the 1860s, which was partially made of wood. Its primary purpose was to enable the city to impose tolls on goods and, secondarily, to prevent the desertion of soldiers from the garrison in Berlin. 
- The Berlin Wall (1961 to 1989), which did not exclusively serve the purpose of protecting an enclosed settlement: one of its purposes was to prevent the crossing of the Berlin border between the German Democratic Republic and the West German exclave of West Berlin.
- The Korean Demilitarized Zone that divides North Korea and South Korea near the 38th parallel north.
- The Nicosia Wall along the Green Line, which divides North and South Cyprus.
- Many enclaved Jewish settlements in Israeli-occupied territory in the West Bank, which in the 20th century and after were and are surrounded by fortified walls.
- The Mexico–United States barrier, a wall advocated by U.S. President Donald Trump for the Mexico–United States border to prevent illegal immigration, drug smuggling, human trafficking, and entry of potential terrorists.
- The "peace lines" in Belfast, Northern Ireland.
- Gated communities, modern residential neighborhoods where access is controlled, often prohibiting through-travelers or non-residents via a wall and guards.

Additionally, in some countries, different embassies may be grouped together in a single "embassy district", enclosed by a fortified complex with walls and towers; this usually occurs in regions where the embassies run a high risk of being targets of attacks. An early example of such a compound was the Legation Quarter in Beijing in the late 19th and early 20th centuries. Most of these modern city walls are made of steel and concrete. Vertical concrete plates are put together so as to allow the least space in between them, and are rooted firmly in the ground. The top of the wall often protrudes and is beset with barbed wire in order to make climbing more difficult. These walls are usually built in straight lines and covered by watchtowers at the corners. Double walls with an interstitial "zone of fire", as the former Berlin Wall had, are now rare.
In September 2014, Ukraine announced the construction of the "European Rampart" along its border with Russia, in order to be able to successfully apply for visa-free movement with the European Union.

Composition

At its simplest, a defensive wall consists of a wall enclosure and its gates. For the most part, the top of the walls was accessible, with the outside of the walls having tall parapets with embrasures or merlons. North of the Alps, this passageway at the top of the walls occasionally had a roof. In addition to this, many different enhancements were made over the course of the centuries:
- City ditch: a ditch dug in front of the walls, occasionally filled with water to form a moat.
- Gate tower: a tower built next to, or on top of, the city gates to better defend them.
- Wall tower: a tower built on top of a segment of the wall, usually extending outwards slightly, so as to be able to observe the exterior of the walls on either side. In addition to arrow slits, ballistae, catapults and cannons could be mounted on top for extra defence.
- Pre-wall: a wall built outside the wall proper, usually of lesser height; the space in between was usually further subdivided by additional walls.
- Additional obstacles in front of the walls.

The defensive towers of west and south European fortifications in the Middle Ages were often very regularly and uniformly constructed (cf. Ávila, Provins), whereas Central European city walls tend to show a variety of different styles. In these cases the gate and wall towers often reach considerable heights, and gates equipped with two towers on either side are much rarer. Apart from having a purely military and defensive purpose, towers also played a representative and artistic role in the conception of a fortified complex. The architecture of the city thus competed with that of the castle of the noblemen, and city walls were often a manifestation of the pride of a particular city.
Urban areas outside the city walls, so-called Vorstädte, were often enclosed by their own set of walls and integrated into the defense of the city. These areas were often inhabited by the poorer population and held the "noxious trades". In many cities, a new wall was built once the city had grown outside of the old wall. This can often still be seen in the layout of the city, for example in Nördlingen, and sometimes even a few of the old gate towers are preserved, such as the White Tower in Nuremberg. Additional constructions prevented the circumvention of the city, through which many important trade routes passed, thus ensuring that tolls were paid when the caravans passed through the city gates, and that the local market was visited by the trade caravans. Furthermore, additional signaling and observation towers were frequently built outside the city, and were sometimes fortified in a castle-like fashion. The border of the area of influence of the city was often partially or fully defended by elaborate ditches, walls and hedges. The crossing points were usually guarded by gates or gate houses. These defenses were regularly checked by riders, who often also served as the gate keepers. Long stretches of these defenses can still be seen to this day, and even some gates are still intact. To further protect their territory, rich cities also established castles in their area of influence. An example of this practice is the Romanian Bran Castle, which was intended to protect nearby Kronstadt (today's Braşov). The city walls were often connected to the fortifications of hill castles via additional walls. Thus the defenses were made up of city and castle fortifications taken together. Several examples of this are preserved, for example, in Germany: Hirschhorn on the Neckar, Königsberg and Pappenheim in Franconia, Burghausen in Upper Bavaria, and many more. A few castles were more directly incorporated into the defensive strategy of the city (e.g.
Nuremberg, Zons, Carcassonne), or the cities were directly outside the castle as a sort of "pre-castle" (Coucy-le-Château, Conwy and others). Larger cities often had multiple stewards; for example, Augsburg was divided into a Reichsstadt and a clerical city. These different parts were often separated by their own fortifications.
https://en.wikipedia.org/wiki/Homological%20algebra
Homological algebra
Homological algebra is the branch of mathematics that studies homology in a general algebraic setting. It is a relatively young discipline, whose origins can be traced to investigations in combinatorial topology (a precursor to algebraic topology) and abstract algebra (theory of modules and syzygies) at the end of the 19th century, chiefly by Henri Poincaré and David Hilbert. Homological algebra is the study of homological functors and the intricate algebraic structures that they entail; its development was closely intertwined with the emergence of category theory. A central concept is that of chain complexes, which can be studied through their homology and cohomology. Homological algebra affords the means to extract information contained in these complexes and present it in the form of homological invariants of rings, modules, topological spaces, and other "tangible" mathematical objects. A spectral sequence is a powerful tool for this. It has played an enormous role in algebraic topology. Its influence has gradually expanded and presently includes commutative algebra, algebraic geometry, algebraic number theory, representation theory, mathematical physics, operator algebras, complex analysis, and the theory of partial differential equations. K-theory is an independent discipline which draws upon methods of homological algebra, as does the noncommutative geometry of Alain Connes.

History

Homological algebra began to be studied in its most basic form in the 1800s as a branch of topology, and in the 1940s it became an independent subject with the study of objects such as the Ext functor and the Tor functor, among others.

Chain complexes and homology

The notion of a chain complex is central in homological algebra.
An abstract chain complex is a sequence of abelian groups and group homomorphisms, with the property that the composition of any two consecutive maps is zero:

... → Cn+1 → Cn → Cn−1 → ... , with dn ∘ dn+1 = 0 for all n,

where dn: Cn → Cn−1. The elements of Cn are called n-chains and the homomorphisms dn are called the boundary maps or differentials. The chain groups Cn may be endowed with extra structure; for example, they may be vector spaces or modules over a fixed ring R. The differentials must preserve the extra structure if it exists; for example, they must be linear maps or homomorphisms of R-modules. For notational convenience, restrict attention to abelian groups (more correctly, to the category Ab of abelian groups); a celebrated theorem by Barry Mitchell implies the results will generalize to any abelian category. Every chain complex defines two further sequences of abelian groups, the cycles Zn = Ker dn and the boundaries Bn = Im dn+1, where Ker d and Im d denote the kernel and the image of d. Since the composition of two consecutive boundary maps is zero, these groups are embedded into each other as

Bn ⊆ Zn ⊆ Cn.

Subgroups of abelian groups are automatically normal; therefore we can define the nth homology group Hn(C) as the factor group of the n-cycles by the n-boundaries,

Hn(C) = Zn/Bn.

A chain complex is called acyclic or an exact sequence if all its homology groups are zero. Chain complexes arise in abundance in algebra and algebraic topology. For example, if X is a topological space then the singular chains Cn(X) are formal linear combinations of continuous maps from the standard n-simplex into X; if K is a simplicial complex then the simplicial chains Cn(K) are formal linear combinations of the n-simplices of K; if A = F/R is a presentation of an abelian group A by generators and relations, where F is a free abelian group spanned by the generators and R is the subgroup of relations, then letting C1(A) = R, C0(A) = F, and Cn(A) = 0 for all other n defines a sequence of abelian groups.
In all these cases, there are natural differentials dn making Cn into a chain complex, whose homology reflects the structure of the topological space X, the simplicial complex K, or the abelian group A. In the case of topological spaces, we arrive at the notion of singular homology, which plays a fundamental role in investigating the properties of such spaces, for example, manifolds. On a philosophical level, homological algebra teaches us that certain chain complexes associated with algebraic or geometric objects (topological spaces, simplicial complexes, R-modules) contain a lot of valuable algebraic information about them, with the homology being only the most readily available part. On a technical level, homological algebra provides the tools for manipulating complexes and extracting this information. Here are two general illustrations. Two objects X and Y are connected by a map f between them. Homological algebra studies the relation, induced by the map f, between chain complexes associated with X and Y and their homology. This is generalized to the case of several objects and maps connecting them. Phrased in the language of category theory, homological algebra studies the functorial properties of various constructions of chain complexes and of the homology of these complexes. An object X admits multiple descriptions (for example, as a topological space and as a simplicial complex) or the complex is constructed using some 'presentation' of X, which involves non-canonical choices. It is important to know the effect of change in the description of X on chain complexes associated with X. Typically, the complex and its homology are functorial with respect to the presentation; and the homology (although not the complex itself) is actually independent of the presentation chosen, thus it is an invariant of X. 
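To make the definitions above concrete, here is a minimal sketch (not part of the original article) that computes Betti numbers, i.e. the ranks of the homology groups over the rationals, using the identity rank Hn = dim Ker dn − rank dn+1. The chain complex used is the simplicial circle: three vertices, three edges, no 2-simplices.

```python
import numpy as np

# Boundary map d1 for the simplicial circle: vertices v0, v1, v2 and
# edges e0 = [v0,v1], e1 = [v1,v2], e2 = [v2,v0]; d(edge) = head - tail.
# Rows index vertices, columns index edges.
d1 = np.array([[-1,  0,  1],
               [ 1, -1,  0],
               [ 0,  1, -1]])
d2 = np.zeros((3, 0))  # no 2-simplices, so d2 : C2 = 0 -> C1
d0 = np.zeros((0, 3))  # d0 : C0 -> C_{-1} = 0

def betti(d_n, d_np1):
    """Rank of H_n = dim Ker d_n - rank d_{n+1}, computed over Q."""
    dim_Cn = d_n.shape[1]
    rank_dn = np.linalg.matrix_rank(d_n) if d_n.size else 0
    rank_dnp1 = np.linalg.matrix_rank(d_np1) if d_np1.size else 0
    return (dim_Cn - rank_dn) - rank_dnp1

print(betti(d0, d1))  # b0 = 1: the circle is connected
print(betti(d1, d2))  # b1 = 1: one independent 1-cycle, and no boundaries to kill it
```

Filling in the triangle adds a 2-simplex with boundary column d2 = (1, 1, 1)ᵀ; one can check that d1 ∘ d2 = 0 still holds and that b1 then drops to 0, reflecting that the disk has no 1-dimensional hole.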
Standard tools

Exact sequences

In the context of group theory, a sequence of groups and group homomorphisms

G1 → G2 → ... → Gn

is called exact if the image of each homomorphism is equal to the kernel of the next: Im(fk) = Ker(fk+1). Note that the sequence of groups and homomorphisms may be either finite or infinite. A similar definition can be made for certain other algebraic structures. For example, one could have an exact sequence of vector spaces and linear maps, or of modules and module homomorphisms. More generally, the notion of an exact sequence makes sense in any category with kernels and cokernels.

Short

The most common type of exact sequence is the short exact sequence. This is an exact sequence of the form

A → B → C,

where the first map ƒ is a monomorphism and the second map g is an epimorphism. In this case, A is a subobject of B, and the corresponding quotient is isomorphic to C:

C ≅ B/f(A)

(where f(A) = im(f)). A short exact sequence of abelian groups may also be written as an exact sequence with five terms:

0 → A → B → C → 0,

where 0 represents the zero object, such as the trivial group or a zero-dimensional vector space. The placement of the 0's forces ƒ to be a monomorphism and g to be an epimorphism (see below).

Long

A long exact sequence is an exact sequence indexed by the natural numbers.

Five lemma

Consider the following commutative diagram in any abelian category (such as the category of abelian groups or the category of vector spaces over a given field) or in the category of groups. The five lemma states that, if the rows are exact, m and p are isomorphisms, l is an epimorphism, and q is a monomorphism, then n is also an isomorphism.

Snake lemma

In an abelian category (such as the category of abelian groups or the category of vector spaces over a given field), consider a commutative diagram in which the rows are exact sequences and 0 is the zero object.
Then there is an exact sequence relating the kernels and cokernels of a, b, and c:

ker a → ker b → ker c → coker a → coker b → coker c.

Furthermore, if the morphism f is a monomorphism, then so is the morphism ker a → ker b, and if g is an epimorphism, then so is coker b → coker c.

Abelian categories

In mathematics, an abelian category is a category in which morphisms and objects can be added and in which kernels and cokernels exist and have desirable properties. The motivating prototypical example of an abelian category is the category of abelian groups, Ab. The theory originated in a tentative attempt to unify several cohomology theories by Alexander Grothendieck. Abelian categories are very stable categories; for example, they are regular and they satisfy the snake lemma. The class of abelian categories is closed under several categorical constructions; for example, the category of chain complexes of an abelian category, and the category of functors from a small category to an abelian category, are abelian as well. These stability properties make them inevitable in homological algebra and beyond; the theory has major applications in algebraic geometry, cohomology and pure category theory. Abelian categories are named after Niels Henrik Abel. More concretely, a category is abelian if:
- it has a zero object,
- it has all binary products and binary coproducts,
- it has all kernels and cokernels, and
- all monomorphisms and epimorphisms are normal.

Derived functor

Suppose we are given a covariant left exact functor F : A → B between two abelian categories A and B. If 0 → A → B → C → 0 is a short exact sequence in A, then applying F yields the exact sequence 0 → F(A) → F(B) → F(C), and one could ask how to continue this sequence to the right to form a long exact sequence. Strictly speaking, this question is ill-posed, since there are always numerous different ways to continue a given exact sequence to the right.
But it turns out that (if A is "nice" enough) there is one canonical way of doing so, given by the right derived functors of F. For every i ≥ 1, there is a functor RiF: A → B, and the above sequence continues like so: 0 → F(A) → F(B) → F(C) → R1F(A) → R1F(B) → R1F(C) → R2F(A) → R2F(B) → ... . From this we see that F is an exact functor if and only if R1F = 0; so in a sense the right derived functors of F measure "how far" F is from being exact.

Ext functor

Let R be a ring and let ModR be the category of modules over R. Let B be in ModR and set T(B) = HomR(A,B), for fixed A in ModR. This is a left exact functor and thus has right derived functors RnT. The Ext functor is defined by

ExtnR(A,B) = (RnT)(B).

This can be calculated by taking any injective resolution

0 → B → I0 → I1 → ...

and computing

0 → HomR(A,I0) → HomR(A,I1) → ... .

Then (RnT)(B) is the cohomology of this complex. Note that HomR(A,B) is excluded from the complex. An alternative definition is given using the functor G(A) = HomR(A,B). For a fixed module B, this is a contravariant left exact functor, and thus we also have right derived functors RnG, and can define

ExtnR(A,B) = (RnG)(A).

This can be calculated by choosing any projective resolution

... → P1 → P0 → A → 0

and proceeding dually by computing

0 → HomR(P0,B) → HomR(P1,B) → ... .

Then (RnG)(A) is the cohomology of this complex. Again note that HomR(A,B) is excluded. These two constructions turn out to yield isomorphic results, and so both may be used to calculate the Ext functor.

Tor functor

Suppose R is a ring, and denote by R-Mod the category of left R-modules and by Mod-R the category of right R-modules (if R is commutative, the two categories coincide). Fix a module B in R-Mod. For A in Mod-R, set T(A) = A⊗RB. Then T is a right exact functor from Mod-R to the category of abelian groups Ab (in the case when R is commutative, it is a right exact functor from Mod-R to Mod-R) and its left derived functors LnT are defined.
We set

TornR(A,B) = (LnT)(A),

i.e., we take a projective resolution

... → P2 → P1 → P0 → A → 0,

then remove the A term and tensor the projective resolution with B to get the complex

... → P2⊗RB → P1⊗RB → P0⊗RB → 0

(note that A⊗RB does not appear and the last arrow is just the zero map) and take the homology of this complex.

Spectral sequence

Fix an abelian category, such as a category of modules over a ring. A spectral sequence is a choice of a nonnegative integer r0 and a collection of three sequences:
- For all integers r ≥ r0, an object Er, called a sheet (as in a sheet of paper), or sometimes a page or a term,
- Endomorphisms dr : Er → Er satisfying dr ∘ dr = 0, called boundary maps or differentials,
- Isomorphisms of Er+1 with H(Er), the homology of Er with respect to dr.

A doubly graded spectral sequence has a tremendous amount of data to keep track of, but there is a common visualization technique which makes the structure of the spectral sequence clearer. We have three indices, r, p, and q. For each r, imagine that we have a sheet of graph paper. On this sheet, we take p to be the horizontal direction and q to be the vertical direction. At each lattice point we have the object Erp,q. It is very common for n = p + q to be another natural index in the spectral sequence; n runs diagonally, northwest to southeast, across each sheet. In the homological case, the differentials have bidegree (−r, r − 1), so they decrease n by one. In the cohomological case, n is increased by one. When r is zero, the differential moves objects one space down or up. This is similar to the differential on a chain complex. When r is one, the differential moves objects one space to the left or right. When r is two, the differential moves objects just like a knight's move in chess. For higher r, the differential acts like a generalized knight's move.

Functoriality

A continuous map of topological spaces gives rise to a homomorphism between their nth homology groups for all n.
This basic fact of algebraic topology finds a natural explanation through certain properties of chain complexes. Since it is very common to study several topological spaces simultaneously, in homological algebra one is led to simultaneous consideration of multiple chain complexes. A morphism F between two chain complexes C and D is a family of homomorphisms of abelian groups Fn: Cn → Dn that commute with the differentials, in the sense that Fn−1 ∘ dn = dn ∘ Fn for all n. A morphism of chain complexes induces a morphism of their homology groups, consisting of the homomorphisms Hn(F): Hn(C) → Hn(D) for all n. A morphism F is called a quasi-isomorphism if it induces an isomorphism on the nth homology for all n.

Many constructions of chain complexes arising in algebra and geometry, including singular homology, have the following functoriality property: if two objects X and Y are connected by a map f, then the associated chain complexes are connected by a morphism F = C(f), and moreover, the composition of maps f: X → Y and g: Y → Z induces the morphism C(g ∘ f) that coincides with the composition C(g) ∘ C(f). It follows that the homology groups are functorial as well, so that morphisms between algebraic or topological objects give rise to compatible maps between their homology.

The following definition arises from a typical situation in algebra and topology. A triple consisting of three chain complexes L, M, N and two morphisms between them, f: L → M and g: M → N, is called an exact triple, or a short exact sequence of complexes, and written as

0 → L → M → N → 0,

if for any n, the sequence

0 → Ln → Mn → Nn → 0

is a short exact sequence of abelian groups. By definition, this means that fn is an injection, gn is a surjection, and Im fn = Ker gn. One of the most basic theorems of homological algebra, sometimes known as the zig-zag lemma, states that, in this case, there is a long exact sequence in homology

... → Hn(L) → Hn(M) → Hn(N) → Hn−1(L) → Hn−1(M) → ... ,

where the homology groups of L, M, and N cyclically follow each other, and δn: Hn(N) → Hn−1(L) are certain homomorphisms determined by f and g, called the connecting homomorphisms.
Topological manifestations of this theorem include the Mayer–Vietoris sequence and the long exact sequence for relative homology.

Foundational aspects

Cohomology theories have been defined for many different objects such as topological spaces, sheaves, groups, rings, Lie algebras, and C*-algebras. The study of modern algebraic geometry would be almost unthinkable without sheaf cohomology. Central to homological algebra is the notion of exact sequence; these can be used to perform actual calculations. A classical tool of homological algebra is that of derived functor; the most basic examples are the functors Ext and Tor. With a diverse set of applications in mind, it was natural to try to put the whole subject on a uniform basis. There were several attempts before the subject settled down. An approximate history can be stated as follows:
- Cartan–Eilenberg: in their 1956 book Homological Algebra, these authors used projective and injective module resolutions.
- 'Tohoku': the approach in a celebrated paper by Alexander Grothendieck which appeared in the Second Series of the Tohoku Mathematical Journal in 1957, using the abelian category concept (to include sheaves of abelian groups).
- The derived category of Grothendieck and Verdier. Derived categories date back to Verdier's 1967 thesis. They are examples of triangulated categories used in a number of modern theories.

These move from computability to generality. The computational sledgehammer par excellence is the spectral sequence; spectral sequences are essential in the Cartan–Eilenberg and Tohoku approaches, where they are needed, for instance, to compute the derived functors of a composition of two functors. Spectral sequences are less essential in the derived category approach, but still play a role whenever concrete computations are necessary. There have been attempts at 'non-commutative' theories which extend first cohomology as torsors (important in Galois cohomology).
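As the article notes, exact sequences "can be used to perform actual calculations". For finite-dimensional vector spaces this is literally a rank computation: a sequence A → B → C is exact at B when the composite vanishes and rank f equals dim Ker g. A small illustrative sketch (the function name and the example maps are our own, not from the article):

```python
import numpy as np

def is_exact_at(f, g, tol=1e-9):
    """Exactness of  A --f--> B --g--> C  at B, over the rationals:
    Im f = Ker g  iff  g∘f = 0  and  rank f = dim B - rank g."""
    composite_is_zero = np.allclose(g @ f, 0, atol=tol)
    dim_B = f.shape[0]  # f maps into B, so dim B is f's row count
    return bool(
        composite_is_zero
        and np.linalg.matrix_rank(f) == dim_B - np.linalg.matrix_rank(g)
    )

# The short exact sequence 0 -> Q --f--> Q^2 --g--> Q -> 0
# with f(x) = (x, x) and g(x, y) = x - y:
f = np.array([[1.0],
              [1.0]])
g = np.array([[1.0, -1.0]])
print(is_exact_at(f, g))      # True: Im f is exactly the kernel of g

# Replacing g by (x, y) |-> x + y breaks exactness, since g∘f is no longer 0:
g_bad = np.array([[1.0, 1.0]])
print(is_exact_at(f, g_bad))  # False
```

The same rank bookkeeping underlies the Betti-number computation for chain complexes: exactness everywhere is precisely the vanishing of all homology groups.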
https://en.wikipedia.org/wiki/Late%20Pleistocene%20extinctions
Late Pleistocene extinctions
The Late Pleistocene to the beginning of the Holocene saw the extinction of the majority of the world's megafauna (typically defined as animal species having body masses over ), which resulted in a collapse in faunal density and diversity across the globe. The extinctions during the Late Pleistocene are differentiated from previous extinctions by their extreme size bias towards large animals (with small animals being largely unaffected), the widespread absence of ecological succession to replace the extinct megafaunal species, and the consequent regime shift of previously established faunal relationships and habitats. The timing and severity of the extinctions varied by region and are thought to have been driven by varying combinations of human and climatic factors. Human impact on megafauna populations is thought to have operated through hunting ("overkill"), as well as possibly through environmental alteration. The relative importance of human versus climatic factors in the extinctions has been the subject of long-running controversy. Major extinctions occurred in Australia-New Guinea (Sahul) beginning approximately 50,000 years ago and in the Americas about 13,000 years ago, coinciding in time with the early human migrations into these regions. Extinctions in northern Eurasia were staggered over tens of thousands of years, between 50,000 and 10,000 years ago, while extinctions in the Americas were virtually simultaneous, spanning only 3,000 years at most. Overall, during the Late Pleistocene about 65% of all megafaunal species worldwide became extinct, rising to 72% in North America, 83% in South America and 88% in Australia, with all mammals over becoming extinct in Australia and the Americas, and around 80% globally. Africa, South Asia and Southeast Asia experienced more moderate extinctions than other regions.
Extinctions by biogeographic realm

Introduction

The Late Pleistocene saw the extinction of many mammals weighing more than , including around 80% of mammals over 1 tonne. The proportion of megafauna extinctions is progressively larger the further the human migratory distance from Africa, with the highest extinction rates in Australia, and North and South America. The increased extent of extinction mirrors the migration pattern of modern humans: the further away from Africa, the more recently humans inhabited the area, and the less time those environments (including their megafauna) had to become accustomed to humans (and vice versa). There are two main hypotheses to explain these extinctions:
- Climate change associated with the advance and retreat of major ice caps or ice sheets, causing a reduction in favorable habitat.
- Human hunting causing attrition of megafauna populations, commonly known as "overkill".

There are some inconsistencies between the currently available data and the prehistoric overkill hypothesis. For instance, there are ambiguities around the timing of Australian megafauna extinctions. Evidence supporting the prehistoric overkill hypothesis includes the persistence of megafauna on some islands for millennia past the disappearance of their continental cousins. For instance, ground sloths survived on the Antilles long after North and South American ground sloths were extinct, woolly mammoths died out on remote Wrangel Island 6,000 years after their extinction on the mainland, and Steller's sea cows persisted off the isolated and uninhabited Commander Islands for thousands of years after they had vanished from the continental shores of the north Pacific. The later disappearance of these island species correlates with the later colonization of those islands by humans. Still, there are some arguments that species responded differently to environmental changes, and that no one factor by itself explains the large variety of extinctions.
The causes may involve the interplay of climate change, competition between species, unstable population dynamics, and hunting as well as competition by humans. The original debates as to whether human arrival times or climate change constituted the primary cause of megafaunal extinctions were necessarily based on paleontological evidence coupled with geological dating techniques. Recently, genetic analyses of surviving megafaunal populations have contributed new evidence, leading to the conclusion: "The inability of climate to predict the observed population decline of megafauna, especially during the past 75,000 years, implies that human impact became the main driver of megafauna dynamics around this date."

Africa

Although Africa was one of the least affected regions, it still suffered extinctions, particularly around the Late Pleistocene-Holocene transition. These extinctions were likely predominantly climatically driven, by changes to grassland habitats.

Ungulates
- Even-toed ungulates
  - Suidae (swine): Metridiochoerus (ssp.), Kolpochoerus (ssp.)
  - Bovidae (bovines, antelope): Giant buffalo (Syncerus antiquus), Megalotragus, Rusingoryx, Southern springbok (Antidorcas australis), Bond's springbok (Antidorcas bondi), Damaliscus hypsodon, Damaliscus niro, Atlantic gazelle (Gazella atlantica), Gazella tingitana
  - Caprinae: Makapania?
  - Cervidae (deer): Megaceroides algericus (North Africa)
- Odd-toed ungulates
  - Rhinocerotidae (rhinoceroses): Narrow-nosed rhinoceros (Stephanorhinus hemitoechus, North Africa), Ceratotherium mauritanicum
  - Wild Equus spp.
    - Caballine horses: Equus algericus (North Africa)
    - Subgenus Asinus (asses): Equus melkiensis (North Africa)
    - Zebras: Giant zebra (Equus capensis), Saharan zebra (Equus mauritanicus)

Proboscidea
- Elephantidae (elephants): Palaeoloxodon iolensis? (other authors suggest that this taxon went extinct at the end of the Middle Pleistocene)

Rodentia
- Paraethomys filfilae?
South Asia and Southeast Asia

The timing of extinctions on the Indian subcontinent is uncertain due to a lack of reliable dating. Similar issues have been reported for Chinese sites, though there is no evidence for any of the megafaunal taxa having survived into the Holocene in that region. Extinctions in Southeast Asia and South China have been proposed to be the result of an environmental shift from open to closed forested habitats.

Ungulates
- Even-toed ungulates
  - Several Bovidae spp.: Bos palaesondaicus (ancestor to the banteng), Cebu tamaraw (Bubalus cebuensis), Bubalus grovesi, Short-horned water buffalo (Bubalus mephistopheles), Bubalus palaeokerabau
  - Hippopotamidae: Hexaprotodon (Indian subcontinent and Southeast Asia)
- Odd-toed ungulates
  - Equus spp.: Equus namadicus (Indian subcontinent), Yunnan horse (Equus yunanensis)
  - Tapirs: Giant tapir (Tapirus augustus, Southeast Asia and Southern China), Tapirus sinensis

Pholidota
- Giant Asian pangolin (Manis palaeojavanica)

Carnivora
- Bears (Caniformia, Arctoidea): Ailuropoda baconi (ancestor to the giant panda)

Afrotheria
- Aardvark (Orycteropus afer; extirpated in South Asia circa 13,000 BCE)
- Proboscideans
  - Stegodontidae: Stegodon spp. (including Stegodon florensis on Flores, Stegodon orientalis in East and Southeast Asia, and Stegodon sp. in the Indian subcontinent)
  - Elephantidae: Palaeoloxodon spp., including Palaeoloxodon namadicus (Indian subcontinent, possibly also Southeast Asia)

Birds
- Japanese flightless duck (Shiriyanetta hasegawai)
- Leptoptilos robustus
- Ostriches (Struthio) (Indian subcontinent)

Reptiles
- Crocodilia: Alligator munensis?
- Testudines (turtles and tortoises): Manouria oyamai

Primates
- Several simian (Simiiformes) spp.
- Pongo (orangutans): Pongo weidenreichi (South China)
- Various Homo spp. (archaic humans): Homo erectus soloensis (Java), Homo floresiensis (Flores), Homo luzonensis (Luzon, Philippines), Denisovans (Homo sp.)
Europe, Northern and East Asia

The Palearctic realm spans the entirety of the European continent and stretches into northern Asia, through the Caucasus and central Asia to northern China, Siberia and Beringia. Extinctions were more severe in Northern Eurasia than in Africa or South and Southeast Asia. These extinctions were staggered over tens of thousands of years, spanning from around 50,000 years Before Present (BP) to around 10,000 years BP, with temperate-adapted species like the straight-tusked elephant and the narrow-nosed rhinoceros generally going extinct earlier than cold-adapted species like the woolly mammoth and woolly rhinoceros. Climate change has been considered a probable major factor in the extinctions, possibly in combination with human hunting.

Ungulates
  Even-toed hoofed mammals
    Various Bovidae spp.
      Steppe bison (Bison priscus)
      Baikal yak (Bos baikalensis)
      European water buffalo (Bubalus murrensis)
      Bubalus wansijocki (extinct buffalo native to North China)
      Bubalus teilhardi
      European tahr (Hemitragus cedrensis)
      Giant muskox (Praeovibos priscus)
      Northern saiga antelope (Saiga borealis)
      Twisted-horned antelope (Spirocerus kiakhtensis)
      Goat-horned antelope (Parabubalis capricornis)
    Various deer (Cervidae) spp.
      Giant deer/Irish elk (Megaloceros giganteus)
      Cretan deer (Candiacervus spp.)
      Haploidoceros mediterraneus
      Sinomegaceros spp. (including Sinomegaceros yabei in Japan, and Sinomegaceros ordosianus and possibly Sinomegaceros pachyosteus in China)
      Dwarf Ryukyu deer (Cervus astylodon)
    All native Hippopotamus spp.
      Hippopotamus amphibius (European range, still extant in Africa)
      Maltese dwarf hippopotamus (Hippopotamus melitensis)
      Cyprus dwarf hippopotamus (Hippopotamus minor)
      Sicilian dwarf hippopotamus (Hippopotamus pentlandi)
    Camelus knoblochi and other Camelus spp.
  Odd-toed hoofed mammals
    Various Equus spp., e.g.
      Various wild horse subspecies (e.g. Equus c. gallicus, Equus c. latipes, Equus c. uralensis)
      Equus dalianensis (wild horse species known from North China)
      European wild ass (Equus hydruntinus) (survived in refugia in Anatolia until the late Holocene)
      Equus ovodovi (survived in refugia in North China until the late Holocene)
    All native rhinoceros (Rhinocerotidae) spp.
      Elasmotherium
      Woolly rhinoceros (Coelodonta antiquitatis)
      Stephanorhinus spp.
        Merck's rhinoceros (Stephanorhinus kirchbergensis)
        Narrow-nosed rhinoceros (Stephanorhinus hemitoechus)
Carnivora
  Caniformia
    Canidae
      Caninae
        Wolves
          Cave wolf (Canis lupus spelaeus)
          Dire wolf (Aenocyon dirus)
        Dholes
          European dhole (Cuon alpinus europaeus)
          Sardinian dhole (Cynotherium sardous)
    Arctoidea
      Various Ursus spp.
        Steppe brown bear (Ursus arctos "priscus")
        Gamssulzen cave bear (Ursus ingressus)
        Pleistocene small cave bear (Ursus rossicus)
        Cave bear (Ursus spelaeus)
        Giant polar bear (Ursus maritimus tyrannus)
      Musteloidea
        Mustelidae
          Several otter (Lutrinae) spp.
            Robust Pleistocene European otter (Cyrnaonyx)
            Algarolutra
            Sardinian giant otter (Megalenhydris barbaricina)
            Sardinian dwarf otter (Sardolutra)
            Cretan otter (Lutrogale cretensis)
  Feliformia
    Various Felidae (cats) spp.
      Homotherium latidens (sometimes called the scimitar-toothed cat)
      Cave lynx (Lynx pardinus spelaeus)
      Issoire lynx (Lynx issiodorensis)
      Panthera spp.
        Cave lion (Panthera spelaea)
        European ice age leopard (Panthera pardus spelaea)
    Hyaenidae (hyenas)
      Cave hyena (Crocuta crocuta spelaea and Crocuta crocuta ultima)
      "Hyaena" prisca
All native elephant (Elephantidae) spp.
  Mammoths
    Woolly mammoth (Mammuthus primigenius)
    Dwarf Sardinian mammoth (Mammuthus lamarmorai)
  Straight-tusked elephant (Palaeoloxodon antiquus) (Europe)
  Palaeoloxodon naumanni (Japan, possibly also Korea and northern China)
  Palaeoloxodon huaihoensis (China)
  Dwarf elephants
    Palaeoloxodon creutzburgi (Crete)
    Cyprus dwarf elephant (Palaeoloxodon cypriotes)
    Palaeoloxodon mnaidriensis (Sicily)
Rodents
  Allocricetus bursae
  Cricetus major (alternatively Cricetus cricetus major)
  Dicrostonyx gulielmi (ancestor to the Arctic lemming)
  Giant Eurasian porcupine (Hystrix refossa)
  Leithia spp. (Maltese and Sicilian giant dormouse)
  Marmota paleocaucasica
  Microtus grafi
  Mimomys spp.
    Mimomys pyrenaicus
    Mimomys chandolensis
  Pliomys lenki
  Spermophilus citelloides
  Spermophilus severskensis
  Spermophilus superciliosus
  Trogontherium cuvieri (large beaver)
Lagomorpha
  Lepus tanaiticus (alternatively Lepus timidus tanaiticus)
  Pika (Ochotona) spp., e.g.
    Giant pika (Ochotona whartoni)
  Tonomochota spp.
    T. khasanensis
    T. sikhotana
    T. major
Birds
  Yakutian goose (Anser djuktaiensis)
  East Asian ostrich (Struthio anderssoni)
  Various European crane spp. (genus Grus)
    Grus primigenia
    Grus melitensis
  Cretan owl (Athene cretensis)
Primates
  Homo
    Denisovans (Homo sp.)
    Neanderthals (Homo (sapiens) neanderthalensis; survived until about 40,000 years ago on the Iberian peninsula)
Reptiles
  Solitudo sicula (survived in Sicily until about 12,500 years ago)
  Lacerta siculimelitensis (from Malta)

North America

Extinctions in North America were concentrated at the end of the Late Pleistocene, around 13,800–11,400 years Before Present, coincident with the onset of the Younger Dryas cooling period and the emergence of the hunter-gatherer Clovis culture. The relative importance of human and climatic factors in the North American extinctions has been the subject of significant controversy. Extinctions totalled around 35 genera.
The radiocarbon record for North America south of the Alaska-Yukon region has been described as "inadequate" to construct a reliable chronology. North American extinctions (noted as herbivores (H) or carnivores (C)) included:

Ungulates
  Even-toed hoofed mammals
    Various Bovidae spp.
      Most forms of Pleistocene bison (only Bison bison in North America, and Bison bonasus in Eurasia, survived)
        Ancient bison (Bison antiquus) (H)
        Long-horned/giant bison (Bison latifrons) (H)
        Steppe bison (Bison priscus) (H)
        Bison occidentalis (H)
      Several members of Caprinae (the muskox survived)
        Giant muskox (Praeovibos priscus) (H)
        Shrub-ox (Euceratherium collinum) (H)
        Harlan's muskox (Bootherium bombifrons) (H)
        Soergel's ox (Soergelia mayfieldi) (H)
        Harrington's mountain goat (Oreamnos harringtoni; smaller and more southern distribution than its surviving relative) (H)
        Saiga antelope (Saiga tatarica; extirpated) (H)
    Deer
      Stag-moose (Cervalces scotti) (H)
      American mountain deer (Odocoileus lucasi) (H)
      Torontoceros hypnogeos (H)
    Various Antilocapridae genera (pronghorns survived)
      Capromeryx (H)
      Stockoceros (H)
      Tetrameryx (H)
      Pacific pronghorn (Antilocapra pacifica) (H)
    Several peccary (Tayassuidae) spp.
      Flat-headed peccary (Platygonus) (H)
      Long-nosed peccary (Mylohyus) (H)
      Collared peccary (Dicotyles tajacu; extirpated, range semi-recolonised) (H) (Muknalia minimus is a junior synonym)
    Various members of Camelidae
      Western camel (Camelops hesternus) (H)
      Stilt-legged llamas (Hemiauchenia spp.) (H)
      Stout-legged llamas (Palaeolama spp.) (H)
  Odd-toed hoofed mammals
    All native forms of Equidae
      Caballine true horses (Equus cf. ferus): Late Pleistocene North American horses have historically been assigned to many different species, including Equus fraternus, Equus scotti and Equus lambei, but the taxonomy of these horses is unclear, and many of these species may be synonymous with each other, perhaps representing only a single species.
      Stilt-legged horse (Haringtonhippus francisci / Equus francisci) (H)
    Tapirs (Tapirus; three species)
      California tapir (Tapirus californicus) (H)
      Merriam's tapir (Tapirus merriami) (H)
      Vero tapir (Tapirus veroensis) (H)
  Order Notoungulata
    Mixotoxodon (H)
Carnivora
  Feliformia
    Several Felidae spp.
      Sabertooths (Machairodontinae)
        Smilodon fatalis (sabertooth cat) (C)
        Homotherium serum (scimitar-toothed cat) (C)
      American cheetah (Miracinonyx trumani; not a true cheetah) (C)
      Cougar (Puma concolor; megafaunal ecomorph extirpated from North America, South American populations recolonised former range) (C)
      Jaguarundi (Herpailurus yagouaroundi; extirpated, range semi-recolonised) (C)
      Margay (Leopardus wiedii; extirpated) (C)
      Ocelot (Leopardus pardalis; extirpated, range marginally recolonised) (C)
      Jaguars
        Pleistocene North American jaguar (Panthera onca augusta; range semi-recolonised by other subspecies) (C)
        Panthera balamoides (dubious, suggested to be a junior synonym of the short-faced bear Arctotherium)
      Lions
        American lion (Panthera atrox) (C)
        Cave lion (Panthera spelaea; present only in Alaska and Yukon) (C)
  Caniformia
    Canidae
      Dire wolf (Aenocyon dirus) (C)
      Pleistocene coyote (Canis latrans orcutti) (C)
      Megafaunal wolves, e.g. Beringian wolf (Canis lupus ssp.) (C)
      Dhole (Cuon alpinus; extirpated) (C)
      Protocyon troglodytes (C)
    Arctoidea
      Musteloidea
        Mephitidae
          Short-faced skunk (Brachyprotoma obtusata) (C)
        Mustelidae
          Steppe polecat (Mustela eversmanii; extirpated) (C)
      Various bear (Ursidae) spp.
        Arctodus simus (C)
        Florida spectacled bear (Tremarctos floridanus) (C)
        South American short-faced bear (Arctotherium wingei) (C)
        Giant polar bear (Ursus maritimus tyrannus; a possible inhabitant) (C)
Afrotheria
  Paenungulata
    Tethytheria
      All native spp. of Proboscidea
        Mastodons
          American mastodon (Mammut americanum) (H)
          Pacific mastodon (Mammut pacificus) (H) (validity uncertain)
        Gomphotheriidae spp.
          Cuvieronius (H)
        Mammoth (Mammuthus) spp.
          Columbian mammoth (Mammuthus columbi) (H)
          Pygmy mammoth (Mammuthus exilis) (H)
          Woolly mammoth (Mammuthus primigenius) (H)
Sirenia
  Dugongidae
    Steller's sea cow (Hydrodamalis gigas; extirpated from North America, survived in Beringia into the 18th century) (H)
Bats
  Stock's vampire bat (Desmodus stocki) (C)
  Pristine mustached bat (Pteronotus (Phyllodia) pristinus) (C)
Euarchontoglires
  Rodents
    Giant beaver (Castoroides) spp.
      Castoroides ohioensis (H)
      Castoroides leiseyorum (H)
    Klein's porcupine (Erethizon kleini) (H)
    Giant island deer mouse (Peromyscus nesodytes) (C)
    Neochoerus spp., e.g.
      Pinckney's capybara (Neochoerus pinckneyi) (H)
      Neochoerus aesopi (H)
    Neotoma findleyi
    Neotoma pygmaea
    Synaptomys australis
    All giant hutia (Heptaxodontidae) spp.
      Blunt-toothed giant hutia (Amblyrhiza inundata; could grow as large as an American black bear) (H)
      Plate-toothed giant hutia (Elasmodontomys obliquus) (H)
      Twisted-toothed mouse (Quemisia gravis) (H)
      Osborn's key mouse (Clidomys osborni) (H)
      Xaymaca fulvopulvis (H)
  Lagomorphs
    Aztlan rabbit (Aztlanolagus sp.) (H)
    Giant pika (Ochotona whartoni) (H)
Eulipotyphla
  Notiosorex dalquesti
  Notiosorex harrisi
Xenarthra
  Pilosa
    Giant anteater (Myrmecophaga tridactyla; extirpated, range partially recolonised) (C)
    All remaining ground sloth spp.
      Eremotherium (megatheriid giant ground sloth) (H)
      Nothrotheriops (nothrotheriid ground sloth) (H)
      Megalonychid ground sloth spp.
        Megalonyx (H)
        Nohochichak (H)
        Xibalbaonyx (H)
        Meizonyx
      Mylodontid ground sloth spp.
        Paramylodon (H)
  Cingulata
    All members of Glyptodontinae
      Glyptotherium (H)
    Beautiful armadillo (Dasypus bellus) (H)
    Pachyarmatherium
    All Pampatheriidae spp.
      Holmesina (H)
      Pampatherium (H)
Birds
  Waterfowl
    Ducks
      Bermuda flightless duck (Anas pachyscelus) (H)
      Californian flightless sea duck (Chendytes lawi) (C)
      Mexican stiff-tailed duck (Oxyura zapatima) (H)
      Neochen barbadiana (H)
  Turkey (Meleagris) spp.
    Californian turkey (Meleagris californica) (H)
    Meleagris crassipes (H)
  Various Gruiformes spp.
    All cave rail (Nesotrochis) spp., e.g. Antillean cave rail (Nesotrochis debooyi) (C)
    Barbados rail (incertae sedis) (C)
    Cuban flightless crane (Antigone cubensis) (H)
    La Brea crane (Grus pagei) (H)
  Various flamingo (Phoenicopteridae) spp.
    Minute flamingo (Phoenicopterus minutus) (C)
    Cope's flamingo (Phoenicopterus copei) (C)
  Dow's puffin (Fratercula dowi) (C)
  Pleistocene Mexican diver spp.
    Plyolimbus baryosteus (C)
    Podiceps spp.
      Podiceps parvus (C)
  Storks
    La Brea/asphalt stork (Ciconia maltha) (C)
    Wetmore's stork (Mycteria wetmorei) (C)
  Pleistocene Mexican cormorant spp. (genus Phalacrocorax)
    Phalacrocorax goletensis (C)
    Phalacrocorax chapalensis (C)
  All remaining teratorn (Teratornithidae) spp.
    Aiolornis incredibilis (C)
    Cathartornis gracilis (C)
    Oscaravis olsoni (C)
    Teratornis merriami (C)
    Teratornis woodburnensis (C)
  Several New World vulture (Cathartidae) spp.
    Pleistocene black vulture (Coragyps occidentalis ssp.) (C)
    Megafaunal Californian condor (Gymnogyps amplus) (C)
    Clark's condor (Breagyps clarki) (C)
    Cuban condor (Gymnogyps varonai) (C)
  Several Accipitridae spp.
    American neophron vulture (Neophrontops americanus) (C)
    Woodward's eagle (Amplibuteo woodwardi) (C)
    Cuban great hawk (Buteogallus borrasi) (C)
    Daggett's eagle (Buteogallus daggetti) (C)
    Fragile eagle (Buteogallus fragilis) (C)
    Cuban giant hawk (Gigantohierax suarezi) (C)
    Errant eagle (Neogyps errans) (C)
    Grinnell's crested eagle (Spizaetus grinnelli) (C)
    Willett's hawk-eagle (Spizaetus willetti) (C)
    Caribbean titan hawk (Titanohierax) (C)
  Several owl (Strigiformes) spp.
    Brea miniature owl (Asphaltoglaux) (C)
    Kurochkin's pygmy owl (Glaucidium kurochkini) (C)
    Brea owl (Oraristix brea) (C)
    Cuban giant owl (Ornimegalonyx) (C)
  Bermuda flicker (Colaptes oceanicus) (C)
  Several caracara (Caracarinae) spp.
    Bahaman terrestrial caracara (Caracara sp.) (C)
    Puerto Rican terrestrial caracara (Caracara sp.) (C)
    Jamaican caracara (Caracara tellustris) (C)
    Cuban caracara (Milvago sp.) (C)
    Hispaniolan caracara (Milvago sp.) (C)
  Psittacopasserae
    Psittaciformes
      Mexican thick-billed parrot (Rhynchopsitta phillipsi) (H)
Several giant tortoise spp.
  Hesperotestudo (H)
  Gopherus spp.
    Gopherus donlaloi (H)
  Chelonoidis spp.
    Chelonoidis marcanoi (H)
    Chelonoidis alburyorum (H)

The survivors are in some ways as significant as the losses: bison (H), grey wolf (C), lynx (C), grizzly bear (C), American black bear (C), deer (e.g. caribou, moose, wapiti (elk), Odocoileus spp.) (H), pronghorn (H), white-lipped peccary (H), muskox (H), bighorn sheep (H), and mountain goat (H). The list of survivors also includes species which were extirpated during the Quaternary extinction event but recolonised at least part of their ranges during the mid-Holocene from South American relict populations, such as the cougar (C), jaguar (C), giant anteater (C), collared peccary (H), ocelot (C) and jaguarundi (C). All save the pronghorns and giant anteaters were descended from Asian ancestors that had evolved with human predators. Pronghorns are the second-fastest land mammal (after the cheetah), which may have helped them elude hunters. More difficult to explain in the context of overkill is the survival of bison: these animals first appeared in North America less than 240,000 years ago, and so were geographically removed from human predators for a sizeable period of time. Because ancient bison evolved into living bison, there was no continent-wide extinction of bison at the end of the Pleistocene (although the genus was regionally extirpated in many areas). The survival of bison into the Holocene and recent times is therefore inconsistent with the overkill scenario. By the end of the Pleistocene, when humans first entered North America, these large animals had been geographically separated from intensive human hunting for more than 200,000 years.
Given this enormous span of geologic time, bison would almost certainly have been nearly as naive as native North American large mammals. The culture that has been connected with the wave of extinctions in North America is the Paleo-American culture associated with the Clovis people, who were thought to use spear throwers to kill large animals. The chief criticism of the "prehistoric overkill hypothesis" has been that the human population at the time was too small and/or not sufficiently widespread geographically to have been capable of such ecologically significant impacts. This criticism does not mean that climate change scenarios explaining the extinction are automatically to be preferred by default, however, any more than weaknesses in climate change arguments can be taken as supporting overkill. Some combination of both factors is plausible: overkill could more easily have produced large-scale extinctions among populations already stressed by climate change.

South America

South America suffered among the worst losses of the continents, with around 83% of its megafauna going extinct. These extinctions postdate the arrival of modern humans in South America around 15,000 years ago. Both human and climatic factors have been attributed as causes of the extinctions by various authors. Although some megafauna have historically been suggested to have survived into the early Holocene based on radiocarbon dates, this may be the result of dating errors due to contamination. The extinctions are coincident with the end of the Antarctic Cold Reversal (a cooling period earlier and less severe than the Northern Hemisphere Younger Dryas) and the emergence of Fishtail projectile points, which became widespread across South America.
Fishtail projectile points are thought to have been used in big-game hunting. Direct evidence of human exploitation of extinct megafauna is rare, though it has been documented at a number of sites. Fishtail points rapidly disappeared after the extinction of the megafauna, and were replaced by other styles more suited to hunting smaller prey. Some authors have proposed the "Broken Zig-Zag" model, in which human hunting and a climate-driven reduction in the open habitats preferred by megafauna were synergistic factors in megafaunal extinction in South America.

Ungulates
  Even-toed hoofed mammals
    Several Cervidae (deer) spp.
      Morenelaphus
      Antifer
      Agalmaceros blicki (potentially a synonym of the modern white-tailed deer)
      Odocoileus salinae
    Various Camelidae spp.
      Eulamaops
      Stilt-legged llama (Hemiauchenia)
      Stout-legged llama (Palaeolama)
  Odd-toed hoofed mammals
    Several species of tapirs (Tapiridae)
      Tapirus cristatellus
    All Pleistocene wild horse genera (Equidae)
      Equus neogeus
      Hippidion
        Hippidion devillei
        Hippidion principale
        Hippidion saldiasi
  All remaining Meridiungulata genera
    Order Litopterna
      Macraucheniidae
        Macrauchenia
        Macraucheniopsis
        Xenorhinotherium
      Proterotheriidae
        Neolicaphrium recens
    Order Notoungulata
      Toxodontidae
        Piauhytherium (some authors regard this taxon as a synonym of Trigodonops)
        Mixotoxodon
        Toxodon
        Trigodonops
Primates
  Platyrrhini (New World monkeys)
    Atelidae
      Protopithecus
      Caipora
      Cartelles
      Alouatta mauroi
Carnivora
  Feliformia
    Several Felidae spp.
      Saber-toothed cat (Smilodon) spp.
        Smilodon fatalis (northwestern South America)
        Smilodon populator (eastern and southern South America)
      Patagonian jaguar (Panthera onca mesembrina) (some authors have suggested that these remains actually belong to the American lion instead)
  Caniformia
    Canidae
      Dire wolf (Aenocyon dirus)
      Nehring's wolf (Canis nehringi)
      Protocyon
      Pleistocene bush dog (Speothos pacivorus)
    Ursidae (bears)
      South American short-faced bears (Arctotherium spp.)
        Arctotherium bonairense
        Arctotherium tarijense
        Arctotherium wingei
Rodents
  Neochoerus
Bats
  Giant vampire bat (Desmodus draculae)
Proboscidea (elephants and relatives)
  Gomphotheriidae
    Cuvieronius
    Notiomastodon
Xenarthrans
  All remaining ground sloth genera
    Megatheriidae spp.
      Eremotherium
      Megatherium
    Nothrotheriidae spp.
      Nothropus
      Nothrotherium
    Megalonychidae spp.
      Ahytherium
      Australonyx
      Diabolotherium
      Megistonyx
    Mylodontidae spp. (including Scelidotheriinae)
      Catonyx
      Glossotherium
      Lestodon
      Mylodon
      Scelidotherium
      Scelidodon
      Mylodonopsis
      Ocnotherium
      Valgipes
  All remaining Glyptodontinae spp.
    Doedicurus
    Glyptodon/Chlamydotherium
    Heteroglyptodon
    Hoplophorus
    Lomaphorus
    Neosclerocalyptus
    Neuryurus
    Panochthus
    Parapanochthus
    Plaxhaplous
    Sclerocalyptus
  Several Dasypodidae spp.
    Beautiful armadillo (Dasypus bellus)
    Eutatus
    Pachyarmatherium
    Propraopus
  All Pampatheriidae spp.
    Holmesina (et Chlamytherium occidentale)
    Pampatherium
    Tonnicinctus
Birds
  Various Caracarinae spp.
    Venezuelan caracara (Caracara major)
    Seymour's caracara (Caracara seymouri)
    Peruvian caracara (Milvago brodkorbi)
  Various Cathartidae spp.
    Pampagyps imperator
    Geronogyps reliquus
    Wingegyps cartellei
    Pleistovultur nevesi
  Various Tadorninae spp.
    Neochen debilis
    Neochen pugil
  Psilopterus (small terror bird remains dated to the Late Pleistocene, but these are disputed)
Reptiles
  Crocodilians
    Caiman venezuelensis
  Testudines
    Chelonoidis lutzae (Argentina)
    Peltocephalus maturin

Sahul (Australia-New Guinea) and the Pacific

A scarcity of reliably dated megafaunal bone deposits has made it difficult to construct timelines for megafaunal extinctions in certain areas, leading to a divide among researchers about when and how megafaunal species went extinct. There are at least three hypotheses regarding the extinction of the Australian megafauna: that they went extinct with the arrival of the Aboriginal Australians on the continent, that they went extinct due to natural climate change, or that a combination of both factors was responsible.
The climate change hypothesis is based on evidence of megafauna surviving until 40,000 years ago, a full 30,000 years after Homo sapiens first landed in Australia, implying that the two groups coexisted for a long time. Evidence of these animals existing at that time comes from fossil records and ocean sediment. A sediment core drilled in the Indian Ocean off the southwest coast of Australia indicates the presence of spores of the fungus Sporormiella, which grows on the dung of plant-eating mammals. The abundance of these spores in the sediment prior to 45,000 years ago indicates that many large mammals existed in the southwest Australian landscape until that point. The sediment data also indicate that the megafauna population collapsed within a few thousand years, around 45,000 years ago, suggesting a rapid extinction event. In addition, fossils found at South Walker Creek, the youngest megafauna site in northern Australia, indicate that at least 16 species of megafauna survived there until 40,000 years ago. Furthermore, there is no firm evidence of Homo sapiens living at South Walker Creek 40,000 years ago, so no human cause can be attributed to the extinction of these megafauna. There is, however, evidence of major environmental deterioration at South Walker Creek 40,000 years ago, which may have caused the extinction event. These changes include increased fire, a reduction in grasslands, and the loss of fresh water. The same environmental deterioration is seen across Australia at the time, further strengthening the climate change argument. Australia's climate at the time could best be described as an overall drying of the landscape due to lower precipitation, resulting in less freshwater availability and more drought conditions. Overall, this led to changes in vegetation, increased fires, an overall reduction in grasslands, and greater competition for already scarce fresh water.
These environmental changes proved too much for the Australian megafauna to cope with, causing the extinction of 90% of megafauna species. The third hypothesis, shared by some scientists, is that a combination of human impacts and natural climate change led to the extinction of the Australian megafauna. About 75% of Australia is semi-arid or arid, so megafauna species likely used the same freshwater resources as humans; this competition could have led to more hunting of megafauna. Furthermore, Homo sapiens used fire to burn impassable land, which further diminished the already disappearing grasslands containing plants that were a key dietary component of herbivorous megafauna. While there is no scientific consensus on this, it is plausible that Homo sapiens and natural climate change had a combined impact. Overall, there is a great deal of evidence for humans being the culprit, but ruling out climate change completely as a cause of the Australian megafauna extinction would give an incomplete picture. The climate change in Australia 45,000 years ago destabilized the ecosystem, making it particularly vulnerable to hunting and burning by humans; this combination is probably what led to the extinction of the Australian megafauna. Several studies provide evidence that climate change contributed to megafaunal extinction during the Pleistocene in Australia. One group of researchers analyzed fossilized teeth found at Cuddie Springs in southeastern Australia. By analyzing oxygen isotopes, they measured aridity, and by analyzing carbon isotopes and dental microwear textures, they assessed megafaunal diets and vegetation. During the middle Pleistocene, southeastern Australia was dominated by browsers, including fauna that consumed C4 plants. By the late Pleistocene, the C4 plant dietary component had decreased considerably. This shift may have been caused by increasingly arid conditions, which may have caused dietary restrictions.
Other isotopic analyses of eggshells and wombat teeth also point to a decline of C4 vegetation after 45 ka, coincident with increasing aridity. Increasingly arid conditions in southeastern Australia during the late Pleistocene may have stressed megafauna and contributed to their decline. In Sahul (a former continent composed of Australia and New Guinea), the sudden and extensive spate of extinctions occurred earlier than in the rest of the world. Most evidence points to a 20,000-year period after human arrival circa 63,000 BCE, but scientific argument continues as to the exact date range. In the rest of the Pacific (other Australasian islands such as New Caledonia, and Oceania), although in some cases far later, endemic fauna also usually perished quickly upon the arrival of humans in the late Pleistocene and early Holocene.

Marsupials
  Various members of Diprotodontidae
    Diprotodon (largest known marsupial)
    Hulitherium tomasetti
    Maokopia ronaldi
    Zygomaturus
    Palorchestes ("marsupial tapir")
  Various members of Vombatidae
    Lasiorhinus angustidens (giant wombat)
    Phascolonus (giant wombat)
    Ramsayia magna (giant wombat)
    Vombatus hacketti (Hackett's wombat)
    Warendja wakefieldi (dwarf wombat)
    Sedophascolomys (giant wombat)
  Phascolarctos stirtoni (giant koala)
  Marsupial lion (Thylacoleo carnifex)
  Borungaboodie (giant potoroo)
  Various members of Macropodidae (kangaroos, wallabies, etc.)
    Procoptodon (short-faced kangaroos), e.g. Procoptodon goliah
    Sthenurus (giant kangaroo)
    Simosthenurus (giant kangaroo)
    Various Macropus (giant kangaroo) spp., e.g.
      Macropus ferragus
      Macropus titan
      Macropus pearsoni
    Protemnodon spp. (giant wallabies)
    Troposodon (wallaby)
    Bohra (giant tree kangaroo)
    Propleopus oscillans (omnivorous giant musky rat-kangaroo)
    Nombe
    Congruus
  Various forms of Sarcophilus (Tasmanian devil)
    Sarcophilus laniarius (25% larger than the modern species; unclear if it is actually distinct from the living Tasmanian devil)
    Sarcophilus moornaensis
Monotremes (egg-laying mammals)
  Echidnas
    Murrayglossus hacketti (giant echidna)
    Megalibgwilia ramsayi
Birds
  Pygmy cassowary (Casuarius lydekkeri)
  Genyornis (a dromornithid)
  Giant malleefowl (Progura gallinacea)
  Cryptogyps lacertosus
  Dynatoaetus gaffae
  Several Phoenicopteridae spp.
    Xenorhynchopsis spp. (Australian flamingos)
      Xenorhynchopsis minor
      Xenorhynchopsis tibialis
Reptiles
  Crocodilians
    Ikanogavialis (the last fully marine crocodilian)
    Paludirex (Australian freshwater mekosuchine crocodilian)
    Quinkana (Australian terrestrial mekosuchine crocodilian, apex predator)
    Volia (a two-to-three-metre-long mekosuchine crocodylian, apex predator of Pleistocene Fiji)
    Mekosuchus
      Mekosuchus inexpectatus (New Caledonian land crocodile)
      Mekosuchus kalpokasi (Vanuatu land crocodile)
  Varanus sp. (Pleistocene and Holocene New Caledonia)
  Megalania (Varanus priscus) (a giant predatory monitor lizard comparable to or larger than the Komodo dragon)
  Snakes
    Wonambi (a five-to-six-metre-long Australian constrictor snake)
  Several spp. of Meiolaniidae (giant armoured turtles)
    Meiolania
    Ninjemys

Causes

History of research

The megafaunal extinctions were already recognized as a distinct phenomenon by some scientists in the 19th century, among them Alfred Russel Wallace. Several decades later, in his 1911 book The World of Life (published two years before his death), Wallace revisited the issue of the Pleistocene megafauna extinctions, concluding that the extinctions were at least in part the result of human agency in combination with other factors.
Discussion of the topic became more widespread during the 20th century, particularly following the proposal of the "overkill hypothesis" by Paul Schultz Martin during the 1960s. By the end of the 20th century, two "camps" of researchers had emerged on the topic, one supporting climate change and the other supporting human hunting as the primary cause of the extinctions.

Hunting

The hunting hypothesis suggests that humans hunted megaherbivores to extinction, which in turn caused the extinction of the carnivores and scavengers that had preyed upon those animals. This hypothesis holds Pleistocene humans responsible for the megafaunal extinction. One variant, known as blitzkrieg, portrays this process as relatively quick. Some of the direct evidence for this includes fossils of some megafauna found in conjunction with human remains, embedded projectile points and tool cut marks found in megafaunal bones, and European cave paintings that depict such hunting. Biogeographical evidence is also suggestive: the areas of the world where humans evolved currently retain more of their Pleistocene megafaunal diversity (the elephants and rhinos of Asia and Africa) than areas such as Australia, the Americas, Madagascar and New Zealand, which lacked the earliest humans. The overkill hypothesis, a variant of the hunting hypothesis, was proposed in 1966 by Paul S. Martin, Professor of Geosciences Emeritus at the Desert Laboratory of the University of Arizona. Circumstantially, the close correlation in time between the appearance of humans in an area and extinction there provides weight for this scenario. Radiocarbon dating has supported the plausibility of this correlation being reflective of causation. The megafaunal extinctions covered a vast period of time and highly variable climatic situations. The earliest extinctions in Australia were complete approximately 50,000 BP, well before the Last Glacial Maximum and before rises in temperature.
The most recent extinction in New Zealand was complete no earlier than 500 BP and during a period of cooling. In between these extremes megafaunal extinctions have occurred progressively in such places as North America, South America and Madagascar with no climatic commonality. The only common factor that can be ascertained is the arrival of humans. This phenomenon appears even within regions. The mammal extinction wave in Australia about 50,000 years ago coincides not with known climatic changes, but with the arrival of humans. In addition, large mammal species like the giant kangaroo Protemnodon appear to have succumbed sooner on the Australian mainland than on Tasmania, which was colonised by humans a few thousand years later. A study published in 2015 supported the hypothesis further by running several thousand scenarios that correlated the time windows in which each species is known to have become extinct with the arrival of humans on different continents or islands. This was compared against climate reconstructions for the last 90,000 years. The researchers found correlations of human spread and species extinction indicating that the human impact was the main cause of the extinction, while climate change exacerbated the frequency of extinctions. The study, however, found an apparently low extinction rate in the fossil record of mainland Asia. A 2020 study published in Science Advances found that human population size and/or specific human activities, not climate change, caused rapidly rising global mammal extinction rates during the past 126,000 years. Around 96% of all mammalian extinctions over this time period are attributable to human impacts. According to Tobias Andermann, lead author of the study, "these extinctions did not happen continuously and at constant pace. Instead, bursts of extinctions are detected across different continents at times when humans first reached them. 
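The scenario-correlation logic described above can be illustrated with a toy calculation. This is a purely illustrative sketch, not the 2015 study's method or data: the study correlated species-level extinction windows across thousands of scenarios, while this sketch uses four regional windows with approximate dates drawn from this article (the New Zealand and South American figures are rounded assumptions), and a hypothetical `max_lag` cutoff.

```python
# Toy illustration of correlating extinction windows with human arrival.
# Dates are in years Before Present and are illustrative approximations,
# not the 2015 study's species-level data.

regions = {
    # region: (human_arrival_BP, extinction_window_start_BP, extinction_window_end_BP)
    "Australia":     (65_000, 50_000, 40_000),
    "North America": (15_000, 13_800, 11_400),
    "South America": (15_000, 13_000, 11_000),  # assumed window near the ACR's end
    "New Zealand":   (   750,    700,    500),
}

def follows_arrival(arrival, start, end, max_lag=20_000):
    """True if the extinction window opens after human arrival, within max_lag years."""
    return arrival >= start and (arrival - start) <= max_lag

matches = sum(follows_arrival(*dates) for dates in regions.values())
print(f"{matches}/{len(regions)} extinction windows follow human arrival")  # 4/4
```

The point the pattern makes is the same one the text draws: the windows share no climatic commonality, but each opens shortly after humans arrive.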
More recently, the magnitude of human-driven extinctions has picked up the pace again, this time on a global scale." On a related note, the population declines of still-extant megafauna during the Pleistocene have also been shown to correlate with human expansion rather than climate change. The extinction's extreme bias towards larger animals further supports a relationship with human activity rather than climate change. There is evidence that the average size of mammalian fauna declined over the course of the Quaternary, a phenomenon likely linked to disproportionate hunting of large animals by humans. Extinction through human hunting has been supported by archaeological finds of mammoths with projectile points embedded in their skeletons, by observations of modern naive animals allowing hunters to approach easily, and by computer models by Mosimann and Martin, Whittington and Dyke, and most recently Alroy. In 2024 a paper published in Science Advances added support to the overkill hypothesis in North America: the skull of an 18-month-old child, dated to 12,800 years ago, was analyzed for chemical signatures attributable to both maternal milk and solid food. Specific isotopes of carbon and nitrogen most closely matched those expected from a diet of mammoth, and secondarily elk or bison. A number of objections have been raised regarding the hunting hypothesis. Notable among them is the sparsity of evidence of human hunting of megafauna. There is no archaeological evidence that North American megafauna other than mammoths, mastodons, gomphotheres and bison were hunted, despite the fact that, for example, camels and horses are very frequently reported in the fossil record. Overkill proponents, however, say this is due to the fast extinction process in North America and the low probability that animals with signs of butchery would be preserved.
The majority of North American taxa have too sparse a fossil record to accurately assess the frequency of human hunting of them. A study by Surovell and Grund concluded "archaeological sites dating to the time of the coexistence of humans and extinct fauna are rare. Those that preserve bone are considerably more rare, and of those, only a very few show unambiguous evidence of human hunting of any type of prey whatsoever." Eugene S. Hunn suggests that the birthrate in hunter-gatherer societies is generally too low, that too much effort is involved in bringing down a large animal by a hunting party, and that in order for hunter-gatherers to have brought about the extinction of megafauna simply by hunting them to death, an extraordinary amount of meat would have had to have been wasted. Proponents of hunting as a cause of the extinctions argue that statistical modelling shows that relatively low-level hunting can have a significant effect on megafauna populations because of their slow life cycles, and that hunting can cause top-down trophic cascades that destabilize ecosystems. Second-order predation The second-order predation hypothesis says that as humans entered the New World, they continued their practice of killing predators, which had been successful in the Old World. Because they were more efficient hunters, and because the fauna, both herbivores and carnivores, were more naive, they killed off enough carnivores to upset the ecological balance of the continent, causing overpopulation, environmental exhaustion, and environmental collapse. The hypothesis accounts for changes in animal, plant, and human populations. The scenario is as follows: After the arrival of H. sapiens in the New World, existing predators must share the prey populations with this new predator. Because of this competition, populations of original, or first-order, predators cannot find enough food; they are in direct competition with humans. 
Second-order predation begins as humans begin to kill predators. Prey populations are no longer well controlled by predation. Killing of nonhuman predators by H. sapiens reduces their numbers to a point where these predators no longer regulate the size of the prey populations. Lack of regulation by first-order predators triggers boom-and-bust cycles in prey populations. Prey populations expand and consequently overgraze and over-browse the land. Soon the environment is no longer able to support them. As a result, many herbivores starve. Species that rely on the slowest-recruiting food become extinct, followed by species that cannot extract the maximum benefit from every bit of their food. Boom-bust cycles in herbivore populations change the nature of the vegetative environment, with consequent climatic impacts on relative humidity and continentality. Through overgrazing and overbrowsing, mixed parkland becomes grassland, and climatic continentality increases. The second-order predation hypothesis has been supported by a computer model, the Pleistocene extinction model (PEM), which uses the same assumptions and values for all variables (herbivore population, herbivore recruitment rates, food needed per human, herbivore hunting rates, etc.) other than those for the hunting of predators, and compares the overkill hypothesis (predator hunting = 0) with second-order predation (predator hunting varied between 0.01 and 0.05 for different runs). The findings are that second-order predation is more consistent with extinction than is overkill. The Pleistocene extinction model is the only test of multiple hypotheses and is the only model to specifically test combination hypotheses by artificially introducing sufficient climate change to cause extinction. When overkill and climate change are combined, they balance each other out: climate change reduces the number of plants, overkill removes animals, and therefore fewer plants are eaten. 
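The mechanism can be illustrated with a toy predator–prey simulation. The sketch below is not the actual PEM, whose equations and parameter values are not reproduced here; every number in it is an invented placeholder, chosen only to show how removing predators releases prey populations from regulation:

```python
# Toy discrete-time model illustrating the logic behind the second-order
# predation hypothesis. This is NOT the actual Pleistocene extinction model
# (PEM); all parameter values below are invented purely for illustration.

def simulate(predator_hunting, steps=500):
    """Return (prey, predators) after `steps` time steps."""
    prey, predators = 1000.0, 20.0
    human_prey_take = 0.02  # humans always hunt some prey (the overkill term)
    for _ in range(steps):
        growth = 0.1 * prey * (1 - prey / 2000.0)   # logistic prey growth
        eaten = 0.002 * prey * predators            # losses to predators
        new_prey = max(prey + growth - eaten - human_prey_take * prey, 0.0)
        # Predators convert food into offspring, die naturally, and are
        # additionally killed by humans at rate `predator_hunting`.
        new_predators = max(predators
                            + 0.00005 * prey * predators
                            - 0.05 * predators
                            - predator_hunting * predators, 0.0)
        prey, predators = new_prey, new_predators
    return prey, predators

# Overkill only: predators are untouched and keep regulating prey numbers.
prey_overkill, pred_overkill = simulate(predator_hunting=0.0)
# Second-order predation: predators collapse and prey booms toward the
# vegetation-limited carrying capacity.
prey_second, pred_second = simulate(predator_hunting=0.05)
```

In this simplified sketch, hunting predators drives them toward extinction and the prey population settles far higher than in the overkill-only run, capturing only the "boom"; the PEM additionally models vegetation exhaustion, which produces the subsequent bust.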
Second-order predation combined with climate change exacerbates the effect of climate change. The second-order predation hypothesis is further supported by the observation above that there was a massive increase in bison populations. However, this hypothesis has been criticised on the grounds that the multispecies model produces a mass extinction through indirect competition between herbivore species: small species with high reproductive rates subsidize predation on large species with low reproductive rates, whereas all prey species are lumped together in the Pleistocene extinction model. Also, the control of population sizes by predators is not fully supported by observations of modern ecosystems. The hypothesis further assumes decreases in vegetation due to climate change, but deglaciation doubled the habitable area of North America. Any vegetational changes that did occur failed to cause almost any extinctions of small vertebrates, even though they are more narrowly distributed on average, which detractors cite as evidence against the hypothesis. Competition for water In southeastern Australia, the scarcity of water during the interval in which humans arrived in Australia suggests that human competition with megafauna for precious water sources may have played a role in the extinction of the latter. Landscape alteration One consequence of the colonisation by humans of lands previously uninhabited by them may have been the introduction of new fire regimes because of extensive fire use by humans. There is evidence that anthropogenic fire use had major impacts on the local environments in both Australia and North America. 
Climate change At the end of the 19th and beginning of the 20th centuries, when scientists first realized that there had been glacial and interglacial ages, and that they were somehow associated with the prevalence or disappearance of certain animals, they surmised that the termination of the Pleistocene ice age might be an explanation for the extinctions. The most obvious change associated with the termination of an ice age is the increase in temperature. Between 15,000 BP and 10,000 BP, a 6 °C increase in global mean annual temperatures occurred. This was generally thought to be the cause of the extinctions. According to this hypothesis, a temperature increase sufficient to melt the Wisconsin ice sheet could have placed enough thermal stress on cold-adapted mammals to cause them to die. Their heavy fur, which helps conserve body heat in the glacial cold, might have prevented the shedding of excess heat, causing the mammals to die of heat exhaustion. Large mammals, with their reduced surface area-to-volume ratio, would have fared worse than small mammals. A study covering the past 56,000 years indicates that rapid warming events with large temperature changes had an important impact on the extinction of megafauna. Ancient DNA and radiocarbon data indicate that local genetic populations were replaced by others within the same species or by others within the same genus. Survival of populations was dependent on the existence of refugia and long-distance dispersals, which may have been disrupted by human hunters. Other scientists have proposed that increasingly extreme weather—hotter summers and colder winters—referred to as "continentality", or related changes in rainfall, caused the extinctions. It has been shown that vegetation changed from mixed woodland-parkland to separate prairie and woodland. This may have affected the kinds of food available. Shorter growing seasons may have caused the extinction of large herbivores and the dwarfing of many others. 
In this case, as observed, bison and other large ruminants would have fared better than horses, elephants and other monogastrics, because ruminants are able to extract more nutrition from limited quantities of high-fiber food and are better able to deal with anti-herbivory toxins. So, in general, when vegetation becomes more specialized, herbivores with less dietary flexibility may be less able to find the mix of vegetation they need to sustain life and reproduce within a given area. Increased continentality resulted in reduced and less predictable rainfall, limiting the availability of plants necessary for energy and nutrition. It has been suggested that this change in rainfall restricted the amount of time favorable for reproduction. This could disproportionately harm large animals, since they have longer, more inflexible mating periods, and so may have produced young at unfavorable seasons (i.e., when sufficient food, water, or shelter was unavailable because of shifts in the growing season). In contrast, small mammals, with their shorter life cycles, shorter reproductive cycles, and shorter gestation periods, could have adjusted to the increased unpredictability of the climate, both as individuals and as species, which allowed them to synchronize their reproductive efforts with conditions favorable for offspring survival. If so, smaller mammals would have lost fewer offspring and would have been better able to repeat the reproductive effort when circumstances once more favored offspring survival. A study looking at the environmental conditions across Europe, Siberia and the Americas from 25,000 to 10,000 YBP found that prolonged warming events leading to deglaciation and maximum rainfall occurred just prior to the transformation of the rangelands that supported megaherbivores into widespread wetlands that supported herbivore-resistant plants. 
The study proposes that moisture-driven environmental change led to the megafaunal extinctions and that Africa's trans-equatorial position allowed rangeland to continue to exist between the deserts and the central forests, so fewer megafauna species became extinct there. Evidence in Southeast Asia, in contrast to Europe, Australia, and the Americas, suggests that climate change and a rising sea level were significant factors in the extinction of several herbivorous species. Alterations in vegetation growth and new access routes for early humans and mammals to previously isolated, localized ecosystems were detrimental to select groups of fauna. Some evidence from Europe also suggests climatic changes were responsible for extinctions there, as the individual extinctions tended to occur during times of environmental change and did not correlate particularly well with human migrations. In Australia, some studies have suggested that extinctions of megafauna began before the peopling of the continent, favouring climate change as the driver. In Beringia, megafauna may have gone extinct because of particularly intense paludification and because the land connection between Eurasia and North America flooded before the Cordilleran Ice Sheet retreated far enough to reopen the corridor between Beringia and the remainder of North America. Woolly mammoths were extirpated from Beringia because of climatic factors, although human activity also played a synergistic role in their decline. In North America, a Radiocarbon-dated Event-Count (REC) modelling study found that megafaunal declines correlated with climatic changes instead of human population expansion. In the North American Great Lakes region, the population declines of mastodons and mammoths have been found to correlate with climatic fluctuations during the Younger Dryas rather than human activity. 
In the Argentine Pampas, the flooding of vast swathes of the once much larger Pampas grasslands may have played a role in the extinctions of its megafaunal assemblages. Critics object that since there were multiple glacial advances and withdrawals in the evolutionary history of many of the megafauna, it is rather implausible that only after the last glacial maximum would there be such extinctions. Proponents of climate change as the extinction event's cause, such as David J. Meltzer, suggest that the last deglaciation may have been markedly different from previous ones. Also, one study suggests that the Pleistocene megafaunal composition may have differed markedly from that of earlier interglacials, making the Pleistocene populations particularly vulnerable to changes in their environment. Studies propose that the annual mean temperature of the current interglacial, seen for the last 10,000 years, is no higher than that of previous interglacials, yet most of the same large mammals survived similar temperature increases. In addition, numerous species such as mammoths on Wrangel Island and St. Paul Island survived in human-free refugia despite changes in climate. This would not be expected if climate change were responsible (unless their maritime climates offered some protection against climate change not afforded to coastal populations on the mainland). Under normal ecological assumptions, island populations should be more vulnerable to extinction due to climate change because of small populations and an inability to migrate to more favorable climes. Critics have also identified a number of problems with the continentality hypotheses. Megaherbivores have prospered at other times of continental climate. For example, megaherbivores thrived in Pleistocene Siberia, which had and has a more continental climate than Pleistocene or modern (post-Pleistocene, interglacial) North America. 
The animals that became extinct actually should have prospered during the shift from mixed woodland-parkland to prairie, because their primary food source, grass, was increasing rather than decreasing. Although the vegetation did become more spatially specialized, the amount of prairie and grass available increased, which would have been good for horses and for mammoths, and yet they became extinct. This criticism ignores the increased abundance and broad geographic extent of Pleistocene bison at the end of the Pleistocene, which would have increased competition for these resources in a manner not seen in any earlier interglacials. Although horses became extinct in the New World, they were successfully reintroduced by the Spanish in the 16th century—into a modern post-Pleistocene, interglacial climate. Today there are feral horses still living in those same environments. They find a sufficient mix of food to avoid toxins, they extract enough nutrition from forage to reproduce effectively, and the timing of their gestation is not an issue. This criticism ignores the fact that present-day horses are not competing for resources with ground sloths, mammoths, mastodons, camels, llamas, and bison. Similarly, mammoths survived the Pleistocene–Holocene transition on isolated, uninhabited islands in the Mediterranean Sea until 4,000 to 7,000 years ago, as well as on Wrangel Island in the Siberian Arctic. Additionally, large mammals should have been able to migrate, permanently or seasonally, if they found the temperature too extreme, the breeding season too short, or the rainfall too sparse or unpredictable. Seasons vary geographically. By migrating away from the equator, herbivores could have found areas with growing seasons more favorable for finding food and breeding successfully. Modern-day African elephants migrate during periods of drought to places where there is apt to be water. 
Large animals also store more fat in their bodies than do medium-sized animals, and this should have allowed them to compensate for extreme seasonal fluctuations in food availability. Some evidence weighs against climate change as a valid hypothesis as applied to Australia. It has been shown that the prevailing climate at the time of extinction (40,000–50,000 BP) was similar to that of today, and that the extinct animals were strongly adapted to an arid climate. The evidence indicates that all of the extinctions took place in the same short time period, which was the time when humans entered the landscape. The main mechanism for extinction was probably fire (started by humans) in a then much less fire-adapted landscape. Isotopic evidence shows sudden changes in the diet of surviving species, which could correspond to the stress they experienced before extinction. Some evidence obtained from analysis of the tusks of mastodons from the American Great Lakes region appears inconsistent with the climate change hypothesis. Over a span of several thousand years prior to their extinction in the area, the mastodons show a trend of declining age at maturation. This is the opposite of what one would expect if they were experiencing stresses from deteriorating environmental conditions, but is consistent with a reduction in intraspecific competition that would result from a population being reduced by human hunting. It may be observed that neither the overkill nor the climate change hypothesis can fully explain events: browsers, mixed feeders and non-ruminant grazer species suffered most, while relatively more ruminant grazers survived. However, a broader variation of the overkill hypothesis may predict this, because changes in vegetation wrought by either second-order predation (see above) or anthropogenic fire preferentially select against browse species. Disease The hyperdisease hypothesis, as advanced by Ross D. E. MacPhee and Preston A. 
Marx, attributes the extinction of large mammals during the late Pleistocene to indirect effects of the newly arrived aboriginal humans. In more recent times, disease has driven many vulnerable species to extinction; the introduction of avian malaria and avipoxvirus, for example, has greatly decreased the populations of the endemic birds of Hawaii, with some going extinct. The hyperdisease hypothesis proposes that humans or animals traveling with them (e.g., chickens or domestic dogs) introduced one or more highly virulent diseases into vulnerable populations of native mammals, eventually causing extinctions. The extinction was biased toward larger-sized species because smaller species have greater resilience owing to their life history traits (e.g., shorter gestation time, greater population sizes, etc.). Humans are thought to be the cause because other earlier immigrations of mammals into North America from Eurasia did not cause extinctions. A similar suggestion is that pathogens were transmitted by the expanding humans via the domesticated dogs they brought with them. A related theory proposes that the culprit was a highly contagious prion disease, similar to chronic wasting disease or scrapie, that was capable of infecting a large number of species. Animals weakened by this "superprion" would also have easily become reservoirs of viral and bacterial diseases as they succumbed to neurological degeneration from the prion, causing a cascade of different diseases to spread among various mammal species. 
This theory could potentially explain the prevalence of heterozygosity at codon 129 of the prion protein gene in humans, which has been speculated to be the result of natural selection against homozygous genotypes that were more susceptible to prion disease, and thus potentially a tell-tale sign of a major prion pandemic in the distant past that disproportionately killed individuals with homozygous genotypes at codon 129 before they could reproduce. If a disease was indeed responsible for the end-Pleistocene extinctions, then there are several criteria it must satisfy (see Table 7.3 in MacPhee & Marx 1997). First, the pathogen must have a stable carrier state in a reservoir species. That is, it must be able to sustain itself in the environment when there are no susceptible hosts available to infect. Second, the pathogen must have a high infection rate, such that it is able to infect virtually all individuals of all ages and sexes encountered. Third, it must be extremely lethal, with a mortality rate of c. 50–75%. Finally, it must have the ability to infect multiple host species without posing a serious threat to humans. Humans may be infected, but the disease must not be highly lethal or able to cause an epidemic. As with other hypotheses, a number of counterarguments to the hyperdisease hypothesis have been put forth. Generally speaking, a disease has to be very virulent to kill off all the individuals in a genus or species. Even such a virulent disease as West Nile fever is unlikely to have caused extinction. The disease would need to be implausibly selective while being simultaneously implausibly broad. Such a disease would need to be capable of killing off wolves such as Canis dirus or goats such as Oreamnos harringtoni while leaving other very similar species (Canis lupus and Oreamnos americanus, respectively) unaffected. 
It would need to be capable of killing off flightless birds while leaving closely related flighted species unaffected. Yet while remaining sufficiently selective to afflict only individual species within genera, it must be capable of fatally infecting across such clades as birds, marsupials, placentals, testudines, and crocodilians. No disease with such a broad scope of fatal infectivity is known, much less one that remains simultaneously incapable of infecting numerous closely related species within those disparate clades. On the other hand, this objection does not account for the possibility of a variety of different diseases being introduced around the same era. Numerous species including wolves, mammoths, camelids, and horses had migrated continually between Asia and North America over the past 100,000 years. For the disease hypothesis to be applicable there, it would require that the populations remained immunologically naive despite this constant transmission of genetic and pathogenic material. The dog-specific hypothesis in particular cannot account for several major extinction events, notably the Americas (for reasons already covered) and Australia. Dogs did not arrive in Australia until approximately 35,000 years after the first humans arrived there, and approximately 30,000 years after the Australian megafaunal extinction was complete. Extraterrestrial impact An extraterrestrial impact, which has occasionally been proposed as a cause of the Younger Dryas, has been suggested by some authors as a potential cause of the extinction of North America's megafauna, given the temporal proximity between a proposed date for such an impact and the megafaunal extinctions that followed. However, the Younger Dryas impact hypothesis lacks widespread support among scholars due to various inconsistencies in the hypothesis, and another group of researchers has published a review contesting the arguments for it point by point. 
Geomagnetic field weakening Around 41,500 years ago, the Earth's magnetic field weakened in an event known as the Laschamp event. This weakening may have increased the flux of UV-B radiation reaching the surface, and has been suggested by a few authors as a cause of megafaunal extinctions in the Late Quaternary. The full effects of such events on the biosphere are poorly understood; however, these explanations have been criticized because they do not account for the population bottlenecks seen in many megafaunal species, nor is there evidence for extreme radio-isotopic changes during the event. Considering these factors, causation is unlikely. Effects The extinction of the megafauna has been argued by some authors to be a cause of the disappearance of the mammoth steppe, rather than the other way around. Alaska now has low-nutrient soil unable to support bison, mammoths, and horses. R. Dale Guthrie has claimed this as a cause of the extinction of the megafauna there; however, he may be interpreting it backwards: the loss of the large herbivores that once broke up the permafrost may be what allows the cold soils that are unable to support large herbivores today. Today, in parts of the Arctic where trucks have broken up the permafrost, grasses and diverse flora and fauna can be supported. In addition, Chapin (1980) showed that simply adding fertilizer to the soil in Alaska could make grasses grow again as they did in the era of the mammoth steppe. Possibly, the extinction of the megafauna and the corresponding loss of dung is what led to low nutrient levels in modern-day soil, and is therefore why the landscape can no longer support megafauna. However, more recent authors have viewed it as more likely that the collapse of the mammoth steppe was driven by climatic warming, which in turn impacted the megafauna, rather than the other way around. Megafauna play a significant role in the lateral transport of mineral nutrients in an ecosystem, tending to translocate them from areas of high to those of lower abundance. 
They do so by their movement between the time they consume a nutrient and the time they release it through elimination (or, to a much lesser extent, through decomposition after death). In South America's Amazon Basin, it is estimated that such lateral diffusion was reduced by over 98% following the megafaunal extinctions that occurred roughly 12,500 years ago. Given that phosphorus availability is thought to limit productivity in much of the region, the decrease in its transport from the western part of the basin and from floodplains (both of which derive their supply from the uplift of the Andes) to other areas is thought to have significantly impacted the region's ecology, and the effects may not yet have reached their limits. The extinction of the mammoths allowed grasslands they had maintained through grazing habits to become birch forests. The new forest and the resulting forest fires may have induced climate change. Such disappearances might be the result of the proliferation of modern humans. Large populations of megaherbivores have the potential to contribute greatly to the atmospheric concentration of methane, which is an important greenhouse gas. Modern ruminant herbivores produce methane as a byproduct of foregut fermentation in digestion, and release it through belching or flatulence. Today, around 20% of annual methane emissions come from livestock methane release. In the Mesozoic, it has been estimated that sauropods could have emitted 520 million tons of methane to the atmosphere annually, contributing to the warmer climate of the time (up to 10 °C warmer than at present). This large emission follows from the enormous estimated biomass of sauropods and from the assumption that methane production by individual herbivores is almost proportional to their mass. Recent studies have indicated that the extinction of megafaunal herbivores may have caused a reduction in atmospheric methane. 
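The scaling logic behind such estimates is simple to sketch: total annual output is population size multiplied by a per-individual rate that is taken to scale with body mass. The numbers below are illustrative assumptions for a hypothetical bison-like herd, not values from the studies cited here:

```python
# Back-of-the-envelope estimate of annual methane output from a herbivore
# population, assuming per-individual methane production roughly
# proportional to body mass. All numbers are illustrative assumptions,
# not values taken from the studies cited in the text.

def annual_methane_tons(population, body_mass_kg, ch4_per_kg_body_per_year):
    """Total CH4 output in metric tons per year for one population."""
    per_animal_kg = body_mass_kg * ch4_per_kg_body_per_year  # kg CH4/animal/yr
    return population * per_animal_kg / 1000.0               # kg -> metric tons

# Hypothetical herd: 30 million animals of ~600 kg each, with each kilogram
# of body mass producing ~0.12 kg of CH4 per year (a cattle-like order of
# magnitude).
tons = annual_methane_tons(30_000_000, 600, 0.12)
print(f"{tons / 1e6:.2f} million tons of CH4 per year")  # prints "2.16 million tons of CH4 per year"
```

With these invented inputs, the total lands in the low millions of tons per year; the same proportionality to mass is why the vastly larger estimated biomass of sauropods yields figures in the hundreds of millions of tons.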
One study examined the methane emissions from the bison that occupied the Great Plains of North America before contact with European settlers. The study estimated that the removal of the bison caused a decrease of as much as 2.2 million tons per year. Another study examined the change in the methane concentration in the atmosphere at the end of the Pleistocene epoch after the extinction of megafauna in the Americas. After early humans migrated to the Americas about 13,000 BP, their hunting and other associated ecological impacts led to the extinction of many megafaunal species there. Calculations suggest that this extinction decreased methane production by about 9.6 million tons per year. This suggests that the absence of megafaunal methane emissions may have contributed to the abrupt climatic cooling at the onset of the Younger Dryas. The decrease in atmospheric methane that occurred at that time, as recorded in ice cores, was 2–4 times more rapid than any other decrease in the last half million years, suggesting that an unusual mechanism was at work. The extermination of megafauna left many niches vacant, which has been cited as an explanation for the vulnerability and fragility of many ecosystems to destruction in the later Holocene extinction. The comparative lack of megafauna in modern ecosystems has reduced high-order interactions among surviving species, reducing ecological complexity. This depauperate, post-megafaunal ecological state has been associated with diminished ecological resilience to stressors. Many extant species of plants have adaptations that were advantageous in the presence of megafauna but are now useless in their absence. The demise of megafaunal ecosystem engineers in the Arctic that maintained open grassland environments has been highly detrimental to shorebirds of the genus Numenius. 
Relationship to later extinctions There is no general agreement on where the Quaternary extinction event ends and the Holocene, or anthropogenic, extinction begins, or whether they should be considered separate events at all. Some authors have argued that the activities of earlier archaic humans also resulted in extinctions, though the evidence for this is equivocal. The human-driven hypothesis is supported by the rapid megafaunal extinctions that followed recent human colonisation of Australia, New Zealand and Madagascar, much as would be expected of any large, adaptable predator moving into a new ecosystem. In many cases, it is suggested that even minimal hunting pressure was enough to wipe out large fauna, particularly on geographically isolated islands. Only during the most recent parts of the extinction have plants also suffered large losses.
Physical sciences
Events
Earth science
18787752
https://en.wikipedia.org/wiki/Baurusuchidae
Baurusuchidae
Baurusuchidae is a Gondwanan family of mesoeucrocodylians that lived during the Late Cretaceous. It is a group of terrestrial hypercarnivorous crocodyliforms from South America (Argentina and Brazil) and possibly Pakistan. Baurusuchidae has been, in accordance with the PhyloCode, officially defined as the least inclusive clade containing Cynodontosuchus rothi, Pissarrachampsa sera, and Baurusuchus pachecoi. Baurusuchids have been placed in the suborder Baurusuchia, and two subfamilies have been proposed: Baurusuchinae and Pissarrachampsinae. Genera Several genera have been assigned to Baurusuchidae. Baurusuchus was the first, being the namesake of the family. Remains of Baurusuchus have been found from the Late Cretaceous Bauru Group of Brazil in deposits that are Turonian–Santonian in age. In addition to Baurusuchus, five other South American crocodyliforms have been assigned to Baurusuchidae: Campinasuchus, Cynodontosuchus, Pissarrachampsa, Stratiotosuchus, and Wargosuchus. Cynodontosuchus was the first known baurusuchid, named in 1896 by English paleontologist Arthur Smith Woodward, although it was only recently assigned to Baurusuchidae. Wargosuchus was described in 2008. Cynodontosuchus and Wargosuchus are known only from fragmentary remains. Both genera are from the Santonian of Argentina. A fourth genus, Stratiotosuchus, was assigned to Baurusuchidae in 2001. Pabwehshi is the youngest genus that has been assigned to Baurusuchidae, and is from the Maastrichtian of Pakistan. It was named in 2001 but has since been reassigned as a basal member of Sebecia. A new genus, Campinasuchus, was assigned to the family in May 2011. It is known from the Turonian–Santonian Adamantina Formation of the Bauru Basin of Brazil. Soon after, the new genus Pissarrachampsa was named from the Campanian–Maastrichtian Vale do Rio do Peixe Formation, also in the Bauru Basin. 
Phylogeny The family Baurusuchidae was named by Brazilian paleontologist Llewellyn Ivor Price in 1945 to include Baurusuchus. In 1946, American paleontologist Edwin Harris Colbert erected the group Sebecosuchia, which united Baurusuchidae with the family Sebecidae (represented by the genus Sebecus). Both Baurusuchus and Sebecus have deep snouts and ziphodont dentition (teeth that are serrated and laterally compressed). Other forms were later found that closely resembled these two genera, among them Cynodontosuchus, Stratiotosuchus, and Wargosuchus. Several features were used to unite these groups: a deep snout, a ziphodont dentition, a curved tooth row, an enlarged canine-like dentary tooth that fits into a deep notch in the upper jaw, and a groove on the lower jaw. Many phylogenetic analyses within the past decades have supported a close relationship between the two families. Baurusuchids and sebecosuchids are both early members of the clade Metasuchia, which includes the subgroups Notosuchia (mainly terrestrial crocodyliforms) and Neosuchia (larger, often semiaquatic crocodyliforms, including living crocodylians). Sebecosuchians, which include both baurusuchids and sebecosuchids, were found to be closely related to notosuchians in several studies. The new genera Iberosuchus and Eremosuchus were later assigned to Baurusuchidae, and phylogenetic analyses encompassing these taxa continued to find Baurusuchidae to be closely related to Sebecidae. Both families were allied with notosuchians in the larger group Ziphosuchia, composed of ziphodont crocodyliforms. More recently, sebecosuchians, including baurusuchids, have been placed within Notosuchia as derived members of the clade. A modified cladogram from Ortega et al. (2000) placed baurusuchids within Notosuchia. In 2004, the superfamily Baurusuchoidea was established to include baurusuchids and sebecids. 
Phylogenetically, Baurusuchoidea was defined as the most recent common ancestor of Baurusuchus and Sebecus and all of its descendants, while Baurusuchidae was defined as the most recent common ancestor of Baurusuchus and Stratiotosuchus and all of its descendants. In a 2005 analysis, Sebecidae was found to be a paraphyletic grouping, or a grouping that includes some descendants of a common ancestor but not all. Sebecids formed an assemblage of basal sebecosuchians, while baurusuchids remained a valid grouping of derived sebecosuchians. Below is a modified cladogram from Turner and Calvo (2005): Later studies noted many features that distinguished baurusuchids from sebecids. Sebecids were often considered to be more closely related to Neosuchia, a group that includes modern crocodylians, while baurusuchids were thought to be a more distantly related clade. In a 1999 phylogenetic analysis, Baurusuchus formed a clade with notosuchians to the exclusion of other ziphosuchians. This placement has been upheld by recent analyses, which place Baurusuchus within Notosuchia. In 2007, a new clade called Sebecia was erected. Sebecia included sebecids and peirosaurids. Peirosauridae, a family of small terrestrial crocodyliforms, had often been placed in or near Neosuchia in previous studies. The assignment of sebecids to Sebecia placed the family closer to Neosuchia than Notosuchia. In this study, baurusuchids were split up, with Baurusuchus placed as a more basal metasuchian and the remaining baurusuchids (Bretesuchus and Pabwehshi) placed as sebecians. Therefore, the family Baurusuchidae was paraphyletic. Below is a modified cladogram from Larsson and Sues (2007): More recent studies have nested Baurusuchus deep within Notosuchia, just as the larger group Sebecosuchia once was, while the remaining sebecosuchian genera have been placed more distantly in Metasuchia. 
A new baurusuchid called Pissarrachampsa was named in 2011, and a comprehensive phylogenetic analysis of baurusuchids was conducted along with its description. Montefeltro et al. (2011) found Baurusuchidae to be a monophyletic group with the genera Baurusuchus, Cynodontosuchus, Pissarrachampsa, Stratiotosuchus, and Wargosuchus. They adopted the name Baurusuchia in a phylogenetic sense to distinguish baurusuchids from related crocodyliforms. Baurusuchia was first erected as an infraorder in 1968, but in the 2011 analysis it was found to be in an identical position to Baurusuchidae in the final tree. The only difference between Baurusuchidae and Baurusuchia is that the former is a node-based taxon and the latter is a stem-based taxon. Baurusuchidae is defined as the least inclusive clade containing Cynodontosuchus rothi, Pissarrachampsa sera, and Baurusuchus pachecoi. As in all node-based clades, there is a most recent common ancestor; these genera are all of its known descendants. Baurusuchia is formally defined as the most inclusive clade containing Baurusuchus pachecoi but not Sebecus icaeorhinus, Sphagesaurus huenei, Araripesuchus gomesi, Montealtosuchus arrudacamposi, or Crocodylus niloticus. In contrast to the node-based Baurusuchidae, the stem-based Baurusuchia does not include a common ancestor and all its descendants, but rather all forms more closely related to a specific baurusuchid than a non-baurusuchid. As a stem-based taxon, Baurusuchia is more inclusive than Baurusuchidae; a new taxon could potentially be placed outside Baurusuchidae because it is not a descendant of the most recent common ancestor of baurusuchids, but would still be a baurusuchian because it is more closely related to baurusuchids than it is to other crocodyliforms. For now, however, Baurusuchidae and Baurusuchia are almost identical in scope, with Baurusuchia also including Pabwehshi, based on their reference phylogenies. 
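The node-based versus stem-based distinction above can be made concrete with a small script. The sketch below is purely illustrative: the tree topology, the internal node labels, and the helper names are hypothetical, not taken from any published phylogeny. It shows why a stem-based clade is always at least as inclusive as the node-based clade anchored on the same internal taxa.

```python
# parent-pointer representation of a made-up cladogram (hypothetical topology)
parent = {
    "Baurusuchus": "Baurusuchidae_node",
    "Pissarrachampsa": "Baurusuchidae_node",
    "Baurusuchidae_node": "Baurusuchia_stem",
    "Pabwehshi": "Baurusuchia_stem",
    "Baurusuchia_stem": "Root",
    "Sebecus": "Root",
}

def ancestors(taxon):
    """Path from a taxon up to the root (inclusive)."""
    path = [taxon]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def mrca(taxa):
    """Most recent common ancestor of a set of taxa."""
    paths = [ancestors(t) for t in taxa]
    common = set(paths[0]).intersection(*map(set, paths[1:]))
    # the first common node on any path is the most recent one
    return next(node for node in paths[0] if node in common)

def node_clade(taxa):
    """Node-based clade: the MRCA of `taxa` and all its descendants."""
    anc = mrca(taxa)
    return {t for t in all_taxa if anc in ancestors(t)}

def stem_clade(inside, outside):
    """Stem-based clade: everything closer to `inside` than to `outside`."""
    return {t for t in all_taxa
            if mrca([t, inside]) != mrca([t, inside, outside])}

all_taxa = set(parent) | set(parent.values())

node = node_clade(["Baurusuchus", "Pissarrachampsa"])
stem = stem_clade("Baurusuchus", "Sebecus")
# Pabwehshi falls outside the node-based clade but inside the stem-based one
assert "Pabwehshi" not in node and "Pabwehshi" in stem
assert node <= stem  # a stem-based clade is at least as inclusive
```

On this toy tree, a taxon branching off below the MRCA of the name-bearing genera (here, the made-up position of Pabwehshi) is a baurusuchian but not a baurusuchid, mirroring the relationship described in the text.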
Other analyses, however, have recovered additional taxa within Baurusuchia but outside Baurusuchidae (Pakasuchus and Comahuesuchus). Montefeltro et al. (2011) also divided Baurusuchidae into two subfamilies, Pissarrachampsinae and Baurusuchinae. Pissarrachampsinae includes Pissarrachampsa and Wargosuchus, while Baurusuchinae includes Stratiotosuchus and Baurusuchus. Cynodontosuchus is not a member of either of these subfamilies, but the most basal baurusuchid. Many of the unique features that separate Cynodontosuchus may also be associated with a juvenile individual. The material that Cynodontosuchus is based on has been suggested to be a juvenile form of Wargosuchus, and the two taxa may be synonymous. Below is a cladogram from Montefeltro et al. (2011): A sixth genus of baurusuchid, Campinasuchus, was named just a few months before Pissarrachampsa, and was not included in the analysis. Darlim et al. (2021) described a new baurusuchid, Aphaurosuchus, and proposed formal definitions for the clades Baurusuchia, Baurusuchidae, Baurusuchinae, and Pissarrachampsinae. In addition to this, the study conducted a phylogenetic analysis to resolve the affinities of the new taxon and provide a reference phylogeny for the newly defined clades. The cladogram of this analysis is shown below. Paleobiology In 2011, fossilized eggs were described from the Late Cretaceous Adamantina Formation of Brazil that may have been laid by a baurusuchid, most probably Baurusuchus. A new oospecies called Bauruoolithus fragilis was named on the basis of these remains. The eggs are about twice as long as they are wide and have blunt ends. At about a quarter of a millimeter in thickness, the shells are relatively thin. Some eggs may have already hatched by the time they were buried, but none show extensive degradation. In living crocodilians (the closest living relatives of baurusuchids), eggs undergo extrinsic degradation to allow hatchlings to easily break through their shells. 
The fossils indicate that baurusuchid hatchlings probably broke through thin egg shells rather than shells that had been degraded over their incubation period.
https://en.wikipedia.org/wiki/Molecular%20graphics
Molecular graphics
Molecular graphics is the discipline and philosophy of studying molecules and their properties through graphical representation. IUPAC limits the definition to representations on a "graphical display device". Ever since Dalton's atoms and Kekulé's benzene, there has been a rich history of hand-drawn atoms and molecules, and these representations have had an important influence on modern molecular graphics. Colour molecular graphics are often used artistically on chemistry journal covers. History Prior to the use of computer graphics in representing molecular structure, Robert Corey and Linus Pauling developed a system for representing atoms or groups of atoms, machined from hardwood on a scale of 1 inch = 1 angstrom and connected by a clamping device to maintain the molecular configuration. These early models also established the CPK coloring scheme that is still used today to differentiate the different types of atoms in molecular models (e.g. carbon = black, oxygen = red, nitrogen = blue, etc.). These early models were improved upon in 1966 by W.L. Koltun and are now known as Corey–Pauling–Koltun (CPK) models. The earliest efforts to produce models of molecular structure were made by Project MAC using wire-frame models displayed on a cathode ray tube in the mid 1960s. In 1965, Carroll Johnson distributed the Oak Ridge thermal ellipsoid plot (ORTEP) program that visualized molecules as a ball-and-stick model, with lines representing the bonds between atoms and ellipsoids representing the probability of thermal motion. Thermal ellipsoid plots quickly became the de facto standard used in the display of X-ray crystallography data, and are still in wide use today. The first practical use of molecular graphics was a simple display of the protein myoglobin using a wireframe representation in 1966 by Cyrus Levinthal and Robert Langridge working at Project MAC. 
Among the milestones in high-performance molecular graphics was the work of Nelson Max in "realistic" rendering of macromolecules using reflecting spheres. Initially much of the technology concentrated on high-performance 3D graphics. During the 1970s, methods for displaying 3D graphics using cathode ray tubes were developed using continuous tone computer graphics in combination with electro-optic shutter viewing devices. The first devices used an active shutter 3D system, generating different perspective views for the left and right channel to provide the illusion of three-dimensional viewing. Stereoscopic viewing glasses were designed using lead lanthanum zirconate titanate (PLZT) ceramics as electronically-controlled shutter elements. Active 3D glasses require batteries and work in concert with the display to actively change the presentation by the lenses to the wearer's eyes. Many modern 3D glasses use a passive, polarized 3D system that enables the wearer to visualize 3D effects based on their own perception. Passive 3D glasses are more common today since they are less expensive. The requirements of macromolecular crystallography also drove molecular graphics because the traditional techniques of physical model-building could not scale. The first two protein structures solved by molecular graphics without the aid of the Richards' Box were built with Stan Swanson's program FIT on the Vector General graphics display in the laboratory of Edgar Meyer at Texas A&M University: first, Marge Legg in Al Cotton's lab at A&M solved a second, higher-resolution structure of staph. nuclease (1975), and then Jim Hogle solved the structure of monoclinic lysozyme in 1976. A full year passed before other graphics systems were used to replace the Richards' Box for modelling into density in 3-D. Alwyn Jones' FRODO program (and later "O") was developed to overlay the molecular electron density determined from X-ray crystallography and the hypothetical molecular structure. 
Timeline Types Ball-and-stick models In the ball-and-stick model, atoms are drawn as small spheres connected by rods representing the chemical bonds between them. Space-filling models In the space-filling model, atoms are drawn as solid spheres to suggest the space they occupy, in proportion to their van der Waals radii. Atoms that share a bond overlap with each other. Surfaces In some models, the surface of the molecule is approximated and shaded to represent a physical property of the molecule, such as electronic charge density. Ribbon diagrams Ribbon diagrams are schematic representations of protein structure and are one of the most common methods of protein depiction used today. The ribbon shows the overall path and organization of the protein backbone in 3D, and serves as a visual framework on which to hang details of the full atomic structure, such as the balls for the oxygen atoms bound to the active site of myoglobin in the adjacent image. Ribbon diagrams are generated by interpolating a smooth curve through the polypeptide backbone. α-helices are shown as coiled ribbons or thick tubes, β-strands as arrows, and non-repetitive coils or loops as lines or thin tubes. The direction of the polypeptide chain is shown locally by the arrows, and may be indicated overall by a colour ramp along the length of the ribbon.
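A ball-and-stick renderer needs a bond list as well as atom positions. When a structure file supplies only coordinates, bonds are commonly inferred by comparing interatomic distances against summed covalent radii. A minimal Python sketch of that heuristic (the radii and the 0.4 Å tolerance are typical illustrative values, not a fixed standard):

```python
import math

# Rough single-bond covalent radii in angstroms (approximate values)
COVALENT_RADIUS = {"H": 0.31, "C": 0.76, "N": 0.71, "O": 0.66}

def infer_bonds(atoms, tolerance=0.4):
    """Guess bonds for a ball-and-stick model from 3-D coordinates:
    two atoms are treated as bonded if their distance is at most the
    sum of their covalent radii plus a tolerance (a simple, common heuristic)."""
    bonds = []
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            (el_i, pos_i), (el_j, pos_j) = atoms[i], atoms[j]
            cutoff = COVALENT_RADIUS[el_i] + COVALENT_RADIUS[el_j] + tolerance
            if math.dist(pos_i, pos_j) <= cutoff:
                bonds.append((i, j))
    return bonds

# Water: O at the origin, two H atoms at ~0.96 angstroms
water = [
    ("O", (0.000, 0.000, 0.000)),
    ("H", (0.757, 0.586, 0.000)),
    ("H", (-0.757, 0.586, 0.000)),
]
print(infer_bonds(water))  # two O-H bonds; the H...H pair is too far apart
```

Real toolkits refine this with per-element tolerances and valence checks, but the distance cutoff above is the core of automated bond perception.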
https://en.wikipedia.org/wiki/Stationary%20state
Stationary state
A stationary state is a quantum state with all observables independent of time. It is an eigenvector of the energy operator (instead of a quantum superposition of different energies). It is also called energy eigenvector, energy eigenstate, energy eigenfunction, or energy eigenket. It is very similar to the concept of atomic orbital and molecular orbital in chemistry, with some slight differences explained below. Introduction A stationary state is called stationary because the system remains in the same state as time elapses, in every observable way. For a single-particle Hamiltonian, this means that the particle has a constant probability distribution for its position, its velocity, its spin, etc. (This is true assuming the particle's environment is also static, i.e. the Hamiltonian is unchanging in time.) The wavefunction itself is not stationary: It continually changes its overall complex phase factor, so as to form a standing wave. The oscillation frequency of the standing wave, multiplied by the Planck constant, is the energy of the state according to the Planck–Einstein relation. Stationary states are quantum states that are solutions to the time-independent Schrödinger equation

Ĥ|Ψ⟩ = E_Ψ|Ψ⟩,

where Ĥ is the Hamiltonian operator, |Ψ⟩ is the quantum state, and E_Ψ is the energy of that state. This is an eigenvalue equation: Ĥ is a linear operator on a vector space, |Ψ⟩ is an eigenvector of Ĥ, and E_Ψ is its eigenvalue. If a stationary state |Ψ⟩ is plugged into the time-dependent Schrödinger equation, the result is

iħ (∂/∂t)|Ψ(t)⟩ = E_Ψ|Ψ(t)⟩.

Assuming that Ĥ is time-independent (unchanging in time), this equation holds for any time t. Therefore, this is a differential equation describing how |Ψ(t)⟩ varies in time. Its solution is

|Ψ(t)⟩ = e^(−iE_Ψt/ħ)|Ψ(0)⟩.

Therefore, a stationary state is a standing wave that oscillates with an overall complex phase factor, and its oscillation angular frequency is equal to its energy divided by ħ. Stationary state properties As shown above, a stationary state is not mathematically constant, since it carries the time-dependent phase factor e^(−iE_Ψt/ħ). However, all observable properties of the state are in fact constant in time. 
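The claim that an energy eigenstate evolves only by an overall phase can be checked numerically. The sketch below (assuming ħ = m = 1, a finite-difference particle in a box, and an arbitrary grid size) builds a discretized Hamiltonian, applies the phase factor to its ground state, and confirms that the probability density is unchanged:

```python
import numpy as np

# Finite-difference particle in a box on [0, L] with hbar = m = 1.
n, L = 200, 1.0
dx = L / (n + 1)

# Discretized H = -(1/2) d^2/dx^2 with hard walls (Dirichlet boundaries):
# diagonal 1/dx^2, off-diagonals -1/(2 dx^2).
H = (np.diag(np.full(n, 1.0 / dx**2))
     - np.diag(np.full(n - 1, 0.5 / dx**2), 1)
     - np.diag(np.full(n - 1, 0.5 / dx**2), -1))

E, vecs = np.linalg.eigh(H)       # eigenvalues in ascending order
psi0 = vecs[:, 0]                 # ground state, energy E[0]

t = 3.7                           # an arbitrary time
psi_t = np.exp(-1j * E[0] * t) * psi0   # solution of i d|psi>/dt = H|psi>

# |psi(t)|^2 equals |psi(0)|^2 at every grid point: the state is stationary
assert np.allclose(np.abs(psi_t)**2, np.abs(psi0)**2)

# The ground energy is close to the exact continuum value pi^2/2 ~ 4.935
assert abs(E[0] - np.pi**2 / 2) < 0.01
```

Because |e^(−iEt/ħ)| = 1, every probability density, and more generally every expectation value, computed from psi_t matches the one computed from psi0.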
For example, if Ψ(x, t) represents a simple one-dimensional single-particle wavefunction, the probability that the particle is at location x is

|Ψ(x, t)|² = |e^(−iE_Ψt/ħ)|² |Ψ(x, 0)|² = |Ψ(x, 0)|²,

which is independent of the time t. The Heisenberg picture is an alternative mathematical formulation of quantum mechanics where stationary states are truly mathematically constant in time. As mentioned above, these equations assume that the Hamiltonian is time-independent. This means simply that stationary states are only stationary when the rest of the system is fixed and stationary as well. For example, a 1s electron in a hydrogen atom is in a stationary state, but if the hydrogen atom reacts with another atom, then the electron will of course be disturbed. Spontaneous decay Spontaneous decay complicates the question of stationary states. For example, according to simple (nonrelativistic) quantum mechanics, the hydrogen atom has many stationary states: 1s, 2s, 2p, and so on, are all stationary states. But in reality, only the ground state 1s is truly "stationary": An electron in a higher energy level will spontaneously emit one or more photons to decay into the ground state. This seems to contradict the idea that stationary states should have unchanging properties. The explanation is that the Hamiltonian used in nonrelativistic quantum mechanics is only an approximation to the Hamiltonian from quantum field theory. The higher-energy electron states (2s, 2p, 3s, etc.) are stationary states according to the approximate Hamiltonian, but not stationary according to the true Hamiltonian, because of vacuum fluctuations. On the other hand, the 1s state is truly a stationary state, according to both the approximate and the true Hamiltonian. Comparison to "orbital" in chemistry An orbital is a stationary state (or approximation thereof) of a one-electron atom or molecule; more specifically, an atomic orbital for an electron in an atom, or a molecular orbital for an electron in a molecule. 
For a molecule that contains only a single electron (e.g. atomic hydrogen or H2+), an orbital is exactly the same as a total stationary state of the molecule. However, for a many-electron molecule, an orbital is completely different from a total stationary state, which is a many-particle state requiring a more complicated description (such as a Slater determinant). In particular, in a many-electron molecule, an orbital is not the total stationary state of the molecule, but rather the stationary state of a single electron within the molecule. This concept of an orbital is only meaningful under the approximation that if we ignore the electron–electron instantaneous repulsion terms in the Hamiltonian as a simplifying assumption, we can decompose the total eigenvector of a many-electron molecule into separate contributions from individual electron stationary states (orbitals), each of which is obtained under the one-electron approximation. (Luckily, chemists and physicists can often, but not always, use this "single-electron approximation".) In this sense, in a many-electron system, an orbital can be considered as the stationary state of an individual electron in the system. In chemistry, calculation of molecular orbitals typically also assumes the Born–Oppenheimer approximation.
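The way one-electron orbitals combine into a many-electron state can be illustrated with a toy Slater determinant. The two "orbitals" below are arbitrary one-dimensional functions chosen only to demonstrate the antisymmetry and Pauli-exclusion properties of the determinant; this is a sketch, not a quantum-chemistry calculation:

```python
import math
import numpy as np

def slater_amplitude(orbitals, positions):
    """Amplitude of the determinant state for electrons at `positions`,
    given one-electron orbital functions (toy 1-D example)."""
    M = np.array([[phi(x) for phi in orbitals] for x in positions])
    return np.linalg.det(M) / math.sqrt(math.factorial(len(orbitals)))

# Two toy "orbitals" on a 1-D coordinate (shapes chosen arbitrarily)
phi1 = lambda x: math.exp(-x**2)
phi2 = lambda x: x * math.exp(-x**2)

a = slater_amplitude([phi1, phi2], [0.3, -1.2])
b = slater_amplitude([phi1, phi2], [-1.2, 0.3])   # the two electrons exchanged
assert math.isclose(a, -b)                        # antisymmetry under exchange

# Pauli exclusion: putting both electrons in the same orbital gives
# a determinant with two identical columns, i.e. zero amplitude.
c = slater_amplitude([phi1, phi1], [0.3, -1.2])
assert math.isclose(c, 0.0, abs_tol=1e-12)
```

The determinant structure is exactly what makes the many-electron state irreducible to a single orbital: the amplitude depends jointly on all electron coordinates, not on each one separately.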
https://en.wikipedia.org/wiki/Capacitor
Capacitor
In electrical engineering, a capacitor is a device that stores electrical energy by accumulating electric charges on two closely spaced surfaces that are insulated from each other. The capacitor was originally known as the condenser, a term still encountered in a few compound names, such as the condenser microphone. It is a passive electronic component with two terminals. The utility of a capacitor depends on its capacitance. While some capacitance exists between any two electrical conductors in proximity in a circuit, a capacitor is a component designed specifically to add capacitance to some part of the circuit. The physical form and construction of practical capacitors vary widely and many types of capacitor are in common use. Most capacitors contain at least two electrical conductors, often in the form of metallic plates or surfaces separated by a dielectric medium. A conductor may be a foil, thin film, sintered bead of metal, or an electrolyte. The nonconducting dielectric acts to increase the capacitor's charge capacity. Materials commonly used as dielectrics include glass, ceramic, plastic film, paper, mica, air, and oxide layers. When an electric potential difference (a voltage) is applied across the terminals of a capacitor, for example when a capacitor is connected across a battery, an electric field develops across the dielectric, causing a net positive charge to collect on one plate and net negative charge to collect on the other plate. No current actually flows through a perfect dielectric. However, there is a flow of charge through the source circuit. If the condition is maintained sufficiently long, the current through the source circuit ceases. If a time-varying voltage is applied across the leads of the capacitor, the source experiences an ongoing current due to the charging and discharging cycles of the capacitor. Capacitors are widely used as parts of electrical circuits in many common electrical devices. 
Unlike a resistor, an ideal capacitor does not dissipate energy, although real-life capacitors do dissipate a small amount (see Non-ideal behavior). The earliest forms of capacitors were created in the 1740s, when European experimenters discovered that electric charge could be stored in water-filled glass jars that came to be known as Leyden jars. Today, capacitors are widely used in electronic circuits for blocking direct current while allowing alternating current to pass. In analog filter networks, they smooth the output of power supplies. In resonant circuits they tune radios to particular frequencies. In electric power transmission systems, they stabilize voltage and power flow. The property of energy storage in capacitors was exploited as dynamic memory in early digital computers, and still is in modern DRAM. History Natural capacitors have existed since prehistoric times. The most common example of natural capacitance are the static charges accumulated between clouds in the sky and the surface of the Earth, where the air between them serves as the dielectric. This results in bolts of lightning when the breakdown voltage of the air is exceeded. In October 1745, Ewald Georg von Kleist of Pomerania, Germany, found that charge could be stored by connecting a high-voltage electrostatic generator by a wire to a volume of water in a hand-held glass jar. Von Kleist's hand and the water acted as conductors and the jar as a dielectric (although details of the mechanism were incorrectly identified at the time). Von Kleist found that touching the wire resulted in a powerful spark, much more painful than that obtained from an electrostatic machine. The following year, the Dutch physicist Pieter van Musschenbroek invented a similar capacitor, which was named the Leyden jar, after the University of Leiden where he worked. He also was impressed by the power of the shock he received, writing, "I would not take a second shock for the kingdom of France." 
Daniel Gralath was the first to combine several jars in parallel to increase the charge storage capacity. Benjamin Franklin investigated the Leyden jar and came to the conclusion that the charge was stored on the glass, not in the water as others had assumed. He also adopted the term "battery", (denoting the increase of power with a row of similar units as in a battery of cannon), subsequently applied to clusters of electrochemical cells. In 1747, Leyden jars were made by coating the inside and outside of jars with metal foil, leaving a space at the mouth to prevent arcing between the foils. The earliest unit of capacitance was the jar, equivalent to about 1.11 nanofarads. Leyden jars or more powerful devices employing flat glass plates alternating with foil conductors were used exclusively up until about 1900, when the invention of wireless (radio) created a demand for standard capacitors, and the steady move to higher frequencies required capacitors with lower inductance. More compact construction methods began to be used, such as a flexible dielectric sheet (like oiled paper) sandwiched between sheets of metal foil, rolled or folded into a small package. Early capacitors were known as condensers, a term that is still occasionally used today, particularly in high power applications, such as automotive systems. The term condensatore was used by Alessandro Volta in 1780 to refer to a device, similar to his electrophorus, he developed to measure electricity, and translated in 1782 as condenser, where the name referred to the device's ability to store a higher density of electric charge than was possible with an isolated conductor. The term became deprecated because of the ambiguous meaning of steam condenser, with capacitor becoming the recommended term in the UK from 1926, while the change occurred considerably later in the United States. 
Since the beginning of the study of electricity, non-conductive materials like glass, porcelain, paper and mica have been used as insulators. Decades later, these materials were also well-suited for use as the dielectric for the first capacitors. Paper capacitors, made by sandwiching a strip of impregnated paper between strips of metal and rolling the result into a cylinder, were commonly used in the late 19th century; their manufacture started in 1876, and they were used from the early 20th century as decoupling capacitors in telephony. Porcelain was used in the first ceramic capacitors. In the early years of Marconi's wireless transmitting apparatus, porcelain capacitors were used for high voltage and high frequency application in the transmitters. On the receiver side, smaller mica capacitors were used for resonant circuits. Mica capacitors were invented in 1909 by William Dubilier. Prior to World War II, mica was the most common dielectric for capacitors in the United States. Charles Pollak (born Karol Pollak), the inventor of the first electrolytic capacitors, found out that the oxide layer on an aluminum anode remained stable in a neutral or alkaline electrolyte, even when the power was switched off. In 1896 he was granted U.S. Patent No. 672,913 for an "Electric liquid capacitor with aluminum electrodes". Solid electrolyte tantalum capacitors were invented by Bell Laboratories in the early 1950s as a miniaturized and more reliable low-voltage support capacitor to complement their newly invented transistor. With the development of plastic materials by organic chemists during the Second World War, the capacitor industry began to replace paper with thinner polymer films. One very early development in film capacitors was described in British Patent 587,953 in 1944. Electric double-layer capacitors (now supercapacitors) were invented in 1957 when H. Becker developed a "Low voltage electrolytic capacitor with porous carbon electrodes". 
He believed that the energy was stored as a charge in the carbon pores used in his capacitor as in the pores of the etched foils of electrolytic capacitors. Because the double layer mechanism was not known by him at the time, he wrote in the patent: "It is not known exactly what is taking place in the component if it is used for energy storage, but it leads to an extremely high capacity." The MOS capacitor was later widely adopted as a storage capacitor in memory chips, and as the basic building block of the charge-coupled device (CCD) in image sensor technology. In 1966, Dr. Robert Dennard invented modern DRAM architecture, pairing a single MOS transistor with each storage capacitor. Theory of operation Overview A capacitor consists of two conductors separated by a non-conductive region. The non-conductive region can either be a vacuum or an electrical insulator material known as a dielectric. Examples of dielectric media are glass, air, paper, plastic, ceramic, and even a semiconductor depletion region chemically identical to the conductors. From Coulomb's law a charge on one conductor will exert a force on the charge carriers within the other conductor, attracting opposite polarity charge and repelling like polarity charges, thus an opposite polarity charge will be induced on the surface of the other conductor. The conductors thus hold equal and opposite charges on their facing surfaces, and the dielectric develops an electric field. An ideal capacitor is characterized by a constant capacitance C, in farads in the SI system of units, defined as the ratio of the positive or negative charge Q on each conductor to the voltage V between them:

C = Q/V.

A capacitance of one farad (F) means that one coulomb of charge on each conductor causes a voltage of one volt across the device. 
Because the conductors (or plates) are close together, the opposite charges on the conductors attract one another due to their electric fields, allowing the capacitor to store more charge for a given voltage than when the conductors are separated, yielding a larger capacitance. In practical devices, charge build-up sometimes affects the capacitor mechanically, causing its capacitance to vary. In this case, capacitance is defined in terms of incremental changes:

C = dQ/dV.

Hydraulic analogy In the hydraulic analogy, voltage is analogous to water pressure and electrical current through a wire is analogous to water flow through a pipe. A capacitor is like an elastic diaphragm within the pipe. Although water cannot pass through the diaphragm, it moves as the diaphragm stretches or un-stretches. Capacitance is analogous to diaphragm elasticity. In the same way that the ratio of charge differential to voltage would be greater for a larger capacitance value (C = dQ/dV), the ratio of water displacement to pressure would be greater for a diaphragm that flexes more readily. In an AC circuit, a capacitor behaves like a diaphragm in a pipe, allowing the charge to move on both sides of the dielectric while no electrons actually pass through. For DC circuits, a capacitor is analogous to a hydraulic accumulator, storing the energy until pressure is released. Similarly, they can be used to smooth the flow of electricity in rectified DC circuits in the same way an accumulator damps surges from a hydraulic pump. Charged capacitors and stretched diaphragms both store potential energy. The more a capacitor is charged, the higher the voltage across the plates (V = Q/C). Likewise, the greater the displaced water volume, the greater the elastic potential energy. Electrical current affects the charge differential across a capacitor just as the flow of water affects the volume differential across a diaphragm. 
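The distinction between the large-signal ratio Q/V and the incremental capacitance dQ/dV only matters when the charge-voltage relation is nonlinear. A small numeric check, using a made-up cubic Q(V) law purely for illustration:

```python
# Hypothetical nonlinear charge-voltage law: Q(V) = a*V + b*V**3.
# The coefficients are invented example values, not from a real device.
a, b = 1e-6, 2e-8   # a in farads; b is a made-up cubic coefficient

def Q(V):
    return a * V + b * V**3

V = 2.0
ratio = Q(V) / V                                 # "large-signal" Q/V = a + b*V^2
h = 1e-6
incremental = (Q(V + h) - Q(V - h)) / (2 * h)    # numerical dQ/dV = a + 3*b*V^2

# The two definitions agree only in the linear (b = 0) case.
assert abs(ratio - (a + b * V**2)) < 1e-12
assert abs(incremental - (a + 3 * b * V**2)) < 1e-9
assert incremental > ratio
```

For an ideal linear capacitor the two definitions coincide, which is why the simple C = Q/V suffices in most circuit analysis.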
Just as capacitors experience dielectric breakdown when subjected to high voltages, diaphragms burst under extreme pressures. Just as capacitors block DC while passing AC, diaphragms displace no water unless there is a change in pressure. Circuit equivalence at short-time limit and long-time limit In a circuit, a capacitor can behave differently at different time instants. However, it is usually easy to think about the short-time limit and long-time limit: In the long-time limit, after the charging/discharging current has saturated the capacitor, no current would come into (or get out of) either side of the capacitor; therefore, the long-time equivalence of a capacitor is an open circuit. In the short-time limit, if the capacitor starts with a certain voltage V, since the voltage drop on the capacitor is known at this instant, we can replace it with an ideal voltage source of voltage V. Specifically, if V = 0 (capacitor is uncharged), the short-time equivalence of a capacitor is a short circuit. Parallel-plate capacitor The simplest model of a capacitor consists of two thin parallel conductive plates, each with an area A, separated by a uniform gap of thickness d filled with a dielectric of permittivity ε. It is assumed the gap d is much smaller than the dimensions of the plates. This model applies well to many practical capacitors which are constructed of metal sheets separated by a thin layer of insulating dielectric, since manufacturers try to keep the dielectric very uniform in thickness to avoid thin spots which can cause failure of the capacitor. Since the separation between the plates is uniform over the plate area, the electric field between the plates is constant, and directed perpendicularly to the plate surface, except for an area near the edges of the plates where the field decreases because the electric field lines "bulge" out of the sides of the capacitor. 
This "fringing field" area is approximately the same width as the plate separation, d, and assuming d is small compared to the plate dimensions, it is small enough to be ignored. Therefore, if a charge of +Q is placed on one plate and −Q on the other plate (the situation for unevenly charged plates is discussed below), the charge on each plate will be spread evenly in a surface charge layer of constant charge density σ = Q/A coulombs per square meter, on the inside surface of each plate. From Gauss's law the magnitude of the electric field between the plates is E = σ/ε = Q/(εA). The voltage (difference) V between the plates is defined as the line integral of the electric field over a line (in the z-direction) from one plate to another:

V = ∫₀^d E dz = Ed = Qd/(εA).

The capacitance is defined as C = Q/V. Substituting the expression for V above into this equation gives

C = εA/d.

Therefore, in a capacitor the highest capacitance is achieved with a high permittivity dielectric material, large plate area, and small separation between the plates. Since the area of the plates increases with the square of the linear dimensions and the separation increases linearly, the capacitance scales with the linear dimension of a capacitor (C ∝ L), or as the cube root of the volume. A parallel plate capacitor can only store a finite amount of energy before dielectric breakdown occurs. The capacitor's dielectric material has a dielectric strength U_d which sets the capacitor's breakdown voltage at V_bd = U_d·d. The maximum energy that the capacitor can store is therefore

E_max = ½CV_bd² = ½(εA/d)(U_d·d)² = ½εAdU_d².

The maximum energy is a function of dielectric volume, permittivity, and dielectric strength. Changing the plate area and the separation between the plates while maintaining the same volume causes no change of the maximum amount of energy that the capacitor can store, so long as the distance between plates remains much smaller than both the length and width of the plates. In addition, these equations assume that the electric field is entirely concentrated in the dielectric between the plates. 
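The parallel-plate formulas are easy to check numerically. In the sketch below the dielectric values are illustrative (roughly polypropylene-like film), and the invariance of the maximum stored energy under area/gap trade-offs at fixed dielectric volume follows from E_max = ½εAdU_d²:

```python
EPS0 = 8.854e-12                    # vacuum permittivity, F/m

def capacitance(eps_r, A, d):
    """Ideal parallel-plate model, C = eps*A/d (fringing fields ignored)."""
    return EPS0 * eps_r * A / d

def max_energy(eps_r, A, d, U_d):
    """Energy at breakdown: 1/2 * C * (U_d*d)^2 = 1/2 * eps * A * d * U_d^2."""
    C = capacitance(eps_r, A, d)
    V_bd = U_d * d                  # breakdown voltage from dielectric strength
    return 0.5 * C * V_bd**2

# Illustrative film-capacitor numbers (polypropylene-like, approximate)
eps_r, U_d = 2.2, 650e6             # relative permittivity; strength in V/m

E1 = max_energy(eps_r, A=1e-2, d=1e-5, U_d=U_d)   # dielectric volume 1e-7 m^3
E2 = max_energy(eps_r, A=2e-2, d=5e-6, U_d=U_d)   # double area, half gap: same volume
assert abs(E1 - E2) / E1 < 1e-12    # max energy depends only on the volume
```

Halving the gap doubles the capacitance and halves the breakdown voltage; since energy goes as CV², the two effects cancel exactly at fixed dielectric volume.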
In reality there are fringing fields outside the dielectric, for example between the sides of the capacitor plates, which increase the effective capacitance of the capacitor. This is sometimes called parasitic capacitance. For some simple capacitor geometries this additional capacitance term can be calculated analytically. It becomes negligibly small when the ratios of plate width to separation and length to separation are large.

For unevenly charged plates:

If one plate is charged with Q1 while the other is charged with Q2, and if both plates are separated from other materials in the environment, then the inner surface of the first plate will have charge (Q1 − Q2)/2, and the inner surface of the second plate will have charge −(Q1 − Q2)/2. Therefore, the voltage between the plates is V = (Q1 − Q2)/(2C). Note that the outer surface of both plates will have charge (Q1 + Q2)/2, but those charges do not affect the voltage between the plates.

If one plate is charged with Q1 while the other is charged with Q2, and if the second plate is connected to ground, then the inner surface of the first plate will have charge Q1, and the inner surface of the second plate will have charge −Q1. Therefore, the voltage between the plates is V = Q1/C. Note that the outer surface of both plates will have zero charge.

Interleaved capacitor

For n plates in a capacitor, the total capacitance would be C = (n − 1)C1, where C1 is the capacitance of a single plate pair and n is the number of interleaved plates. As shown in the figure on the right, the interleaved plates can be seen as parallel plates connected to each other. Every pair of adjacent plates acts as a separate capacitor; the number of pairs is always one less than the number of plates, hence the (n − 1) multiplier.

Energy stored in a capacitor

To increase the charge and voltage on a capacitor, work must be done by an external power source to move charge from the negative to the positive plate against the opposing force of the electric field.
If the voltage on the capacitor is V, the work dW required to move a small increment of charge dq from the negative to the positive plate is dW = V dq. The energy is stored in the increased electric field between the plates. The total energy W stored in a capacitor (expressed in joules) is equal to the total work done in establishing the electric field from an uncharged state:

W = ∫0^Q V dq = ∫0^Q (q/C) dq = Q^2/(2C) = (1/2)CV^2 = (1/2)QV

where Q is the charge stored in the capacitor, V is the voltage across the capacitor, and C is the capacitance. This potential energy will remain in the capacitor until the charge is removed. If charge is allowed to move back from the positive to the negative plate, for example by connecting a circuit with resistance between the plates, the charge moving under the influence of the electric field will do work on the external circuit. If the gap between the capacitor plates is constant, as in the parallel-plate model above, the electric field between the plates will be uniform (neglecting fringing fields) and will have a constant value E = V/d. In this case the stored energy can be calculated from the electric field strength:

W = (1/2)CV^2 = (1/2)(εA/d)(Ed)^2 = (1/2)εE^2·(Ad)

The last formula above is equal to the energy density per unit volume in the electric field, (1/2)εE^2, multiplied by the volume of field between the plates, Ad, confirming that the energy in the capacitor is stored in its electric field.

Current–voltage relation

The current I(t) through any component in an electric circuit is defined as the rate of flow of the charge Q(t) passing through it. Actual charges – electrons – cannot pass through the dielectric of an ideal capacitor. Rather, one electron accumulates on the negative plate for each one that leaves the positive plate, resulting in an electron depletion and consequent positive charge on one electrode that is equal and opposite to the accumulated negative charge on the other. Thus the charge on the electrodes is equal to the integral of the current as well as proportional to the voltage, as discussed above.
As with any antiderivative, a constant of integration is added to represent the initial voltage V(t0). This is the integral form of the capacitor equation:

V(t) = Q(t)/C = (1/C)∫_{t0}^{t} I(τ) dτ + V(t0)

Taking the derivative of this and multiplying by C yields the derivative form:

I(t) = dQ(t)/dt = C dV(t)/dt

for C independent of time, voltage, and electric charge. The dual of the capacitor is the inductor, which stores energy in a magnetic field rather than an electric field. Its current–voltage relation is obtained by exchanging current and voltage in the capacitor equations and replacing C with the inductance L.

RC circuits

A series circuit containing only a resistor, a capacitor, a switch and a constant DC source of voltage V0 is known as a charging circuit. If the capacitor is initially uncharged while the switch is open, and the switch is closed at t = 0, it follows from Kirchhoff's voltage law that

V0 = i(t)R + (1/C)∫0^t i(τ) dτ

Taking the derivative and multiplying by C gives a first-order differential equation:

RC di(t)/dt + i(t) = 0

At t = 0, the voltage across the capacitor is zero and the voltage across the resistor is V0. The initial current is then I(0) = V0/R. With this assumption, solving the differential equation yields

i(t) = (V0/R)·e^(−t/τ0)
vC(t) = V0·(1 − e^(−t/τ0))
vR(t) = V0·e^(−t/τ0)

where τ0 = RC is the time constant of the system. As the capacitor reaches equilibrium with the source voltage, the voltages across the resistor and the current through the entire circuit decay exponentially. In the case of a discharging capacitor, the capacitor's initial voltage VCi replaces V0. The equations become

i(t) = (VCi/R)·e^(−t/τ0)
vC(t) = VCi·e^(−t/τ0)

AC circuits

Impedance, the vector sum of reactance and resistance, describes the phase difference and the ratio of amplitudes between sinusoidally varying voltage and sinusoidally varying current at a given frequency. Fourier analysis allows any signal to be constructed from a spectrum of frequencies, whence the circuit's reaction to the various frequencies may be found. The reactance and impedance of a capacitor are respectively

X = −1/(ωC)
Z = 1/(jωC) = −j/(ωC)

where j is the imaginary unit and ω is the angular frequency of the sinusoidal signal.
The phase indicates that the AC voltage lags the AC current by 90°: the positive current phase corresponds to increasing voltage as the capacitor charges; zero current corresponds to instantaneous constant voltage, etc. Impedance decreases with increasing capacitance and increasing frequency. This implies that a higher-frequency signal or a larger capacitor results in a lower voltage amplitude per current amplitude – an AC "short circuit" or AC coupling. Conversely, for very low frequencies, the reactance is high, so that a capacitor is nearly an open circuit in AC analysis – those frequencies have been "filtered out". Capacitors are different from resistors and inductors in that the impedance is inversely proportional to the defining characteristic; i.e., capacitance.

A capacitor connected to an alternating voltage source has a displacement current flowing through it. In the case that the voltage source is V0cos(ωt), the displacement current can be expressed as

I = C dV/dt = −ωCV0 sin(ωt)

At sin(ωt) = −1, the capacitor has a maximum (or peak) current whereby I0 = ωCV0. The ratio of peak voltage to peak current is due to capacitive reactance (denoted XC):

XC = V0/I0 = V0/(ωCV0) = 1/(ωC)

XC approaches zero as ω approaches infinity. If XC approaches 0, the capacitor resembles a short wire that strongly passes current at high frequencies. XC approaches infinity as ω approaches zero. If XC approaches infinity, the capacitor resembles an open circuit that poorly passes low frequencies.

The current of the capacitor may be expressed in the form of cosines to better compare with the voltage of the source:

I = −I0 sin(ωt) = I0 cos(ωt + 90°)

In this situation, the current is out of phase with the voltage by +π/2 radians or +90 degrees, i.e. the current leads the voltage by 90°.

Laplace circuit analysis (s-domain)

When using the Laplace transform in circuit analysis, the impedance of an ideal capacitor with no initial charge is represented in the s-domain by

Z(s) = 1/(sC)

where C is the capacitance and s is the complex frequency.
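As a concrete sketch of the relations in the two preceding sections — the RC charging solution vC(t) = V0(1 − e^(−t/τ)) and the capacitive reactance XC = 1/(ωC) — here is a short Python example; the component values are illustrative only, not taken from the article:

```python
import math

def rc_charging_voltage(v0, r, c, t):
    """Capacitor voltage while charging from a DC source V0 through R,
    switch closed at t = 0 with the capacitor initially uncharged:
    vC(t) = V0 * (1 - exp(-t/tau)), where tau = R*C."""
    tau = r * c
    return v0 * (1.0 - math.exp(-t / tau))

def capacitive_reactance(freq_hz, c):
    """XC = 1/(omega*C): falls toward zero at high frequency (an AC
    "short circuit") and grows without bound at low frequency."""
    return 1.0 / (2.0 * math.pi * freq_hz * c)

# 5 V source, 1 kOhm, 1 uF -> tau = 1 ms; after one time constant the
# capacitor has reached about 63% of the source voltage.
v_after_tau = rc_charging_voltage(5.0, 1e3, 1e-6, 1e-3)

# A 10 uF capacitor at 50 Hz mains presents roughly 318 ohms of reactance.
xc_mains = capacitive_reactance(50.0, 10e-6)
```

The same `capacitive_reactance` helper makes the filtering behavior described above directly visible: evaluating it at a higher frequency returns a smaller value.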
Circuit analysis

Capacitors in parallel

Capacitors in a parallel configuration each have the same applied voltage. Their capacitances add up:

Ceq = C1 + C2 + ⋯ + Cn

Charge is apportioned among them by size. Using the schematic diagram to visualize parallel plates, it is apparent that each capacitor contributes to the total surface area.

Capacitors in series

Connected in series, the schematic diagram reveals that the separation distance, not the plate area, adds up. The capacitors each store instantaneous charge build-up equal to that of every other capacitor in the series. The total voltage difference from end to end is apportioned to each capacitor according to the inverse of its capacitance:

1/Ceq = 1/C1 + 1/C2 + ⋯ + 1/Cn

The entire series acts as a capacitor smaller than any of its components. Capacitors are combined in series to achieve a higher working voltage, for example for smoothing a high-voltage power supply. The voltage ratings, which are based on plate separation, add up, if capacitance and leakage currents for each capacitor are identical. In such an application, on occasion, series strings are connected in parallel, forming a matrix. The goal is to maximize the energy storage of the network without overloading any capacitor. For high-energy storage with capacitors in series, some safety considerations must be applied to ensure that one capacitor failing and leaking current does not apply too much voltage to the other series capacitors. Series connection is also sometimes used to adapt polarized electrolytic capacitors for bipolar AC use.

Voltage distribution in parallel-to-series networks

To model the distribution of voltages from a single charged capacitor connected in parallel to a chain of capacitors in series:

(Note: this is only correct if all capacitance values are equal.)

The power transferred in this arrangement is:

Non-ideal behavior

In practice, capacitors deviate from the ideal capacitor equation in several aspects.
Some of these, such as leakage current and parasitic effects, are linear, or can be analyzed as nearly linear, and can be accounted for by adding virtual components to form an equivalent circuit. The usual methods of network analysis can then be applied. In other cases, such as with breakdown voltage, the effect is non-linear and ordinary (normal, e.g., linear) network analysis cannot be used; the effect must be considered separately. Yet another group of artifacts may exist, including temperature dependence, that may be linear but invalidate the assumption in the analysis that capacitance is a constant. Finally, combined parasitic effects such as inherent inductance, resistance, or dielectric losses can exhibit non-uniform behavior at varying frequencies of operation.

Breakdown voltage

Above a particular electric field strength, known as the dielectric strength Eds, the dielectric in a capacitor becomes conductive. The voltage at which this occurs is called the breakdown voltage of the device, and is given by the product of the dielectric strength and the separation between the conductors:

Vbd = Eds·d

The maximum energy that can be stored safely in a capacitor is limited by the breakdown voltage. Exceeding this voltage can result in a short circuit between the plates, which can often cause permanent damage to the dielectric, plates, or both. Due to the scaling of capacitance and breakdown voltage with dielectric thickness, all capacitors made with a particular dielectric have approximately equal maximum energy density, to the extent that the dielectric dominates their volume. For air dielectric capacitors the breakdown field strength is of the order of 2–5 MV/m (or kV/mm); for mica the breakdown is 100–300 MV/m; for oil, 15–25 MV/m; it can be much less when other materials are used for the dielectric. The dielectric is used in very thin layers and so the absolute breakdown voltage of capacitors is limited.
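Tying the breakdown discussion back to the parallel-plate formulas earlier in the article (C = εA/d, stored energy (1/2)CV^2, breakdown voltage Eds·d), the following sketch checks that the maximum stored energy depends only on the dielectric volume, permittivity, and strength; the geometry and dielectric numbers are illustrative, not from any datasheet:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area, gap, rel_permittivity):
    """C = eps_r * eps0 * A / d for a parallel-plate capacitor."""
    return rel_permittivity * EPS0 * area / gap

def breakdown_voltage(dielectric_strength, gap):
    """Vbd = Eds * d: dielectric strength times plate separation."""
    return dielectric_strength * gap

def stored_energy(c, v):
    """W = 1/2 * C * V^2."""
    return 0.5 * c * v * v

# Illustrative 1 cm^2 plates, 1 um gap, eps_r = 1000, Eds = 20 MV/m.
c = plate_capacitance(1e-4, 1e-6, 1000.0)
v_bd = breakdown_voltage(2e7, 1e-6)
w_max = stored_energy(c, v_bd)

# Halving the gap while doubling the area keeps the dielectric volume
# constant, so the maximum storable energy is unchanged.
w_alt = stored_energy(plate_capacitance(2e-4, 0.5e-6, 1000.0),
                      breakdown_voltage(2e7, 0.5e-6))
```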
Typical ratings for capacitors used for general electronics applications range from a few volts to 1 kV. As the voltage increases, the dielectric must be thicker, making high-voltage capacitors larger per capacitance than those rated for lower voltages. The breakdown voltage is critically affected by factors such as the geometry of the capacitor's conductive parts; sharp edges or points increase the electric field strength at that point and can lead to a local breakdown. Once this starts to happen, the breakdown quickly tracks through the dielectric until it reaches the opposite plate, leaving carbon behind and causing a short (or relatively low-resistance) circuit. The results can be explosive, as the short in the capacitor draws current from the surrounding circuitry and dissipates the energy. However, in capacitors with particular dielectrics and thin metal electrodes, shorts are not formed after breakdown. This is because the metal melts or evaporates in the vicinity of the breakdown, isolating it from the rest of the capacitor. The usual breakdown route is that the field strength becomes large enough to pull electrons in the dielectric from their atoms, thus causing conduction. Other scenarios are possible, such as impurities in the dielectric, and, if the dielectric is of a crystalline nature, imperfections in the crystal structure can result in an avalanche breakdown as seen in semiconductor devices. Breakdown voltage is also affected by pressure, humidity and temperature.

Equivalent circuit

An ideal capacitor only stores and releases electrical energy, without dissipation. In practice, capacitors have imperfections within the capacitor's materials that result in the following parasitic components:

The equivalent series inductance (ESL), due to the leads. This is usually significant only at relatively high frequencies.
Two resistances that add a real-valued component to the total impedance, which wastes power:
A small series resistance (ESR) in the leads.
This becomes more relevant as frequency increases.
A small conductance (or reciprocally, a large resistance) in parallel with the capacitance, to account for imperfect dielectric material. This causes a small leakage current across the dielectric (see Leakage below) that slowly discharges the capacitor over time. This conductance dominates the total resistance at very low frequencies. Its value varies greatly depending on the capacitor material and quality.

Simplified RLC series model

As frequency increases, the capacitive impedance (a negative reactance) reduces, so the dielectric's conductance becomes less important and the series components become more significant. Thus, a simplified RLC series model valid for a large frequency range simply treats the capacitor as being in series with an equivalent series inductance (ESL) and a frequency-dependent equivalent series resistance (ESR), which varies little with frequency. Unlike the previous model, this model is not valid at DC and very low frequencies, where the parallel conductance is relevant. Inductive reactance increases with frequency. Because its sign is positive, it counteracts the capacitance. At the RLC circuit's natural frequency ω0 = 1/√(LC), the inductance perfectly cancels the capacitance, so total reactance is zero. Since the total impedance at ω0 is just the real-valued ESR, average power dissipation reaches its maximum of V^2/ESR, where V is the root mean square (RMS) voltage across the capacitor. At even higher frequencies, the inductive impedance dominates, so the capacitor undesirably behaves instead like an inductor. High-frequency engineering involves accounting for the inductance of all connections and components.
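The simplified series-RLC behavior just described can be sketched in a few lines: the total reactance X(ω) = ωL − 1/(ωC) is negative (capacitive) below the natural frequency ω0 = 1/√(LC), zero at it, and positive (inductive) above it. The 100 nF / 10 nH values below are illustrative assumptions:

```python
import math

def natural_frequency(c, esl):
    """omega0 = 1/sqrt(L*C): the frequency at which the series
    inductance cancels the capacitance, leaving only the ESR."""
    return 1.0 / math.sqrt(esl * c)

def total_reactance(omega, c, esl):
    """X(omega) = omega*L - 1/(omega*C): capacitive (negative) below
    resonance, inductive (positive) above it."""
    return omega * esl - 1.0 / (omega * c)

# Illustrative part: a 100 nF capacitor with 10 nH of lead inductance
# self-resonates at about 3.16e7 rad/s (roughly 5 MHz).
w0 = natural_frequency(100e-9, 10e-9)
```

Above `w0` the part "undesirably behaves like an inductor", which is exactly what the sign change of `total_reactance` shows.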
Q factor

For a simplified model of a capacitor as an ideal capacitor in series with an equivalent series resistance (ESR), the capacitor's quality factor (or Q) is the ratio of the magnitude of its capacitive reactance to its resistance at a given frequency ω:

Q = |XC|/ESR = 1/(ωC·ESR)

The Q factor is a measure of its efficiency: the higher the Q factor of the capacitor, the closer it approaches the behavior of an ideal capacitor. The dissipation factor is its reciprocal.

Ripple current

Ripple current is the AC component of an applied source (often a switched-mode power supply) whose frequency may be constant or varying. Ripple current causes heat to be generated within the capacitor due to the dielectric losses caused by the changing field strength, together with the current flow across the slightly resistive supply lines or the electrolyte in the capacitor. The equivalent series resistance (ESR) is the amount of internal series resistance one would add to a perfect capacitor to model this. Some types of capacitors, primarily tantalum and aluminum electrolytic capacitors, as well as some film capacitors, have a specified rating value for maximum ripple current. Tantalum electrolytic capacitors with solid manganese dioxide electrolyte are limited by ripple current and generally have the highest ESR ratings in the capacitor family. Exceeding their ripple limits can lead to shorts and burning parts. Aluminum electrolytic capacitors, the most common type of electrolytic, suffer a shortening of life expectancy at higher ripple currents. If ripple current exceeds the rated value of the capacitor, it tends to result in explosive failure. Ceramic capacitors generally have no ripple current limitation and have some of the lowest ESR ratings. Film capacitors have very low ESR ratings, but exceeding rated ripple current may cause degradation failures.

Capacitance instability

The capacitance of certain capacitors decreases as the component ages.
In ceramic capacitors, this is caused by degradation of the dielectric. The type of dielectric and the ambient operating and storage temperatures are the most significant aging factors, while the operating voltage usually has a smaller effect, i.e., usual capacitor design is to minimize the voltage coefficient. The aging process may be reversed by heating the component above the Curie point. Aging is fastest near the beginning of life of the component, and the device stabilizes over time. Electrolytic capacitors age as the electrolyte evaporates. In contrast with ceramic capacitors, this occurs towards the end of life of the component. Temperature dependence of capacitance is usually expressed in parts per million (ppm) per °C. It can usually be taken as a broadly linear function but can be noticeably non-linear at the temperature extremes. The temperature coefficient may be positive or negative, depending mostly on the dielectric material. Some, designated C0G/NP0 (but often called NPO), have a somewhat negative coefficient at one temperature, positive at another, and zero in between. Such components may be specified for temperature-critical circuits. Capacitors, especially ceramic capacitors, and older designs such as paper capacitors, can absorb sound waves, resulting in a microphonic effect. Vibration moves the plates, causing the capacitance to vary, in turn inducing AC current. Some dielectrics also generate piezoelectricity. The resulting interference is especially problematic in audio applications, potentially causing feedback or unintended recording. In the reverse microphonic effect, the varying electric field between the capacitor plates exerts a physical force, moving them as a speaker. This can generate audible sound, but drains energy and stresses the dielectric and the electrolyte, if any.

Current and voltage reversal

Current reversal occurs when the current changes direction. Voltage reversal is the change of polarity in a circuit.
Reversal is generally described as the percentage of the maximum rated voltage that reverses polarity. In DC circuits, this is usually less than 100%, often in the range of 0 to 90%, whereas AC circuits experience 100% reversal. In DC circuits and pulsed circuits, current and voltage reversal are affected by the damping of the system. Voltage reversal is encountered in RLC circuits that are underdamped. The current and voltage reverse direction, forming a harmonic oscillator between the inductance and capacitance. The current and voltage tend to oscillate and may reverse direction several times, with each peak being lower than the previous, until the system reaches an equilibrium. This is often referred to as ringing. In comparison, critically damped or overdamped systems usually do not experience a voltage reversal. Reversal is also encountered in AC circuits, where the peak current is equal in each direction. For maximum life, capacitors usually need to be able to handle the maximum amount of reversal that a system may experience. An AC circuit experiences 100% voltage reversal, while underdamped DC circuits experience less than 100%. Reversal creates excess electric fields in the dielectric, causes excess heating of both the dielectric and the conductors, and can dramatically shorten the life expectancy of the capacitor. Reversal ratings often affect the design considerations for the capacitor, from the choice of dielectric materials and voltage ratings to the types of internal connections used.

Dielectric absorption

Capacitors made with any type of dielectric material show some level of "dielectric absorption" or "soakage". On discharging a capacitor and disconnecting it, after a short time it may develop a voltage due to hysteresis in the dielectric. This effect is objectionable in applications such as precision sample-and-hold circuits or timing circuits.
The level of absorption depends on many factors, from design considerations to charging time, since the absorption is a time-dependent process. However, the primary factor is the type of dielectric material. Capacitors such as tantalum electrolytic or polysulfone film exhibit relatively high absorption, while polystyrene or Teflon allow very small levels of absorption. In some capacitors where dangerous voltages and energies exist, such as in flashtubes, television sets, microwave ovens and defibrillators, the dielectric absorption can recharge the capacitor to hazardous voltages after it has been shorted or discharged. Any capacitor containing over 10 joules of energy is generally considered hazardous, while 50 joules or higher is potentially lethal. A capacitor may regain anywhere from 0.01 to 20% of its original charge over a period of several minutes, allowing a seemingly safe capacitor to become surprisingly dangerous.

Leakage

No material is a perfect insulator, thus all dielectrics allow some small level of current to leak through, which can be measured with a megohmmeter (Robinson's Manual of Radio Telegraphy and Telephony, S. S. Robinson, US Naval Institute, 1924, p. 170). Leakage is equivalent to a resistor in parallel with the capacitor. Constant exposure to factors such as heat, mechanical stress, or humidity can cause the dielectric to deteriorate, resulting in excessive leakage, a problem often seen in older vacuum tube circuits, particularly where oiled paper and foil capacitors were used. In many vacuum tube circuits, interstage coupling capacitors are used to conduct a varying signal from the plate of one tube to the grid circuit of the next stage. A leaky capacitor can cause the grid circuit voltage to be raised from its normal bias setting, causing excessive current or signal distortion in the downstream tube. In power amplifiers this can cause the plates to glow red, or current-limiting resistors to overheat, or even fail.
Similar considerations apply to solid-state (transistor) amplifiers, but, owing to lower heat production and the use of modern polyester dielectric barriers, this once-common problem has become relatively rare.

Electrolytic failure from disuse

Aluminum electrolytic capacitors are conditioned when manufactured by applying a voltage sufficient to initiate the proper internal chemical state. This state is maintained by regular use of the equipment. If a system using electrolytic capacitors is unused for a long period of time, it can lose its conditioning. Sometimes they fail with a short circuit when next operated.

Lifespan

All capacitors have varying lifespans, depending upon their construction, operational conditions, and environmental conditions. Solid-state ceramic capacitors generally have very long lives under normal use, which has little dependency on factors such as vibration or ambient temperature, but factors like humidity, mechanical stress, and fatigue play a primary role in their failure. Failure modes may differ. Some capacitors may experience a gradual loss of capacitance, increased leakage or an increase in equivalent series resistance (ESR), while others may fail suddenly or even catastrophically. For example, metal-film capacitors are more prone to damage from stress and humidity, but will self-heal when a breakdown in the dielectric occurs. The formation of a glow discharge at the point of failure prevents arcing by vaporizing the metallic film in that spot, neutralizing any short circuit with minimal loss in capacitance. When enough pinholes accumulate in the film, a total failure occurs in a metal-film capacitor, generally happening suddenly without warning. Electrolytic capacitors generally have the shortest lifespans.
Electrolytic capacitors are affected very little by vibration or humidity, but factors such as ambient and operational temperatures play a large role in their failure, which gradually occurs as an increase in ESR (up to 300%) and as much as a 20% decrease in capacitance. The capacitors contain electrolytes which will eventually diffuse through the seals and evaporate. An increase in temperature also increases internal pressure, and increases the reaction rate of the chemicals. Thus, the life of an electrolytic capacitor is generally defined by a modification of the Arrhenius equation, which is used to determine chemical-reaction rates. Manufacturers often use this equation to supply an expected lifespan, in hours, for electrolytic capacitors when used at their designed operating temperature, which is affected by ambient temperature, ESR, and ripple current. However, these ideal conditions may not exist in every use. The rule of thumb for predicting lifespan under different conditions of use is:

L = L0 · 2^((T0 − TA)/10)

This says that the capacitor's life decreases by half for every 10 degrees Celsius that the temperature is increased, where:

L0 is the rated life under rated conditions, e.g. 2000 hours
T0 is the rated max/min operational temperature
TA is the average operational temperature
L is the expected lifespan under given conditions

Capacitor types

Practical capacitors are available commercially in many different forms. The type of internal dielectric, the structure of the plates and the device packaging all strongly affect the characteristics of the capacitor and its applications. Values available range from very low (picofarad range; while arbitrarily low values are in principle possible, stray (parasitic) capacitance in any circuit is the limiting factor) to about 5 kF supercapacitors.
Above approximately 1 microfarad, electrolytic capacitors are usually used because of their small size and low cost compared with other types, unless their relatively poor stability, life and polarised nature make them unsuitable. Very high capacity supercapacitors use a porous carbon-based electrode material.

Dielectric materials

Most capacitors have a dielectric spacer, which increases their capacitance compared to air or a vacuum. In order to maximise the charge that a capacitor can hold, the dielectric material needs to have as high a permittivity as possible, while also having as high a breakdown voltage as possible. The dielectric also needs to have as low a loss with frequency as possible. However, low-value capacitors are available with a high vacuum between their plates to allow extremely high voltage operation and low losses. Variable capacitors with their plates open to the atmosphere were commonly used in radio tuning circuits. Later designs use polymer foil dielectric between the moving and stationary plates, with no significant air space between the plates. Several solid dielectrics are available, including paper, plastic, glass, mica and ceramic. Paper was used extensively in older capacitors and offers relatively high voltage performance. However, paper absorbs moisture, and has been largely replaced by plastic film capacitors. Most of the plastic films now used offer better stability and ageing performance than older dielectrics such as oiled paper, which makes them useful in timer circuits, although they may be limited to relatively low operating temperatures and frequencies, because of the limitations of the plastic film being used. Large plastic film capacitors are used extensively in suppression circuits, motor start circuits, and power-factor correction circuits.
Ceramic capacitors are generally small, cheap and useful for high frequency applications, although their capacitance varies strongly with voltage and temperature and they age poorly. They can also suffer from the piezoelectric effect. Ceramic capacitors are broadly categorized as class 1 dielectrics, which have predictable variation of capacitance with temperature or class 2 dielectrics, which can operate at higher voltage. Modern multilayer ceramics are usually quite small, but some types have inherently wide value tolerances, microphonic issues, and are usually physically brittle. Glass and mica capacitors are extremely reliable, stable and tolerant to high temperatures and voltages, but are too expensive for most mainstream applications. Electrolytic capacitors and supercapacitors are used to store small and larger amounts of energy, respectively, ceramic capacitors are often used in resonators, and parasitic capacitance occurs in circuits wherever the simple conductor-insulator-conductor structure is formed unintentionally by the configuration of the circuit layout. Electrolytic capacitors use an aluminum or tantalum plate with an oxide dielectric layer. The second electrode is a liquid electrolyte, connected to the circuit by another foil plate. Electrolytic capacitors offer very high capacitance but suffer from poor tolerances, high instability, gradual loss of capacitance especially when subjected to heat, and high leakage current. Poor quality capacitors may leak electrolyte, which is harmful to printed circuit boards. The conductivity of the electrolyte drops at low temperatures, which increases equivalent series resistance. While widely used for power-supply conditioning, poor high-frequency characteristics make them unsuitable for many applications. 
Electrolytic capacitors suffer from self-degradation if unused for a period (around a year), and when full power is applied may short circuit, permanently damaging the capacitor and usually blowing a fuse or causing failure of rectifier diodes. For example, in older equipment, this may cause arcing in rectifier tubes. They can be restored before use by gradually applying the operating voltage, often performed on antique vacuum tube equipment over a period of thirty minutes by using a variable transformer to supply AC power. The use of this technique may be less satisfactory for some solid-state equipment, which may be damaged by operation below its normal power range, requiring that the power supply first be isolated from the consuming circuits. Such remedies may not be applicable to modern high-frequency power supplies, as these produce full output voltage even with reduced input. Tantalum capacitors offer better frequency and temperature characteristics than aluminum, but higher dielectric absorption and leakage. Polymer capacitors (OS-CON, OC-CON, KO, AO) use solid conductive polymer (or polymerized organic semiconductor) as electrolyte and offer longer life and lower ESR at higher cost than standard electrolytic capacitors. A feedthrough capacitor is a component that, while not serving as its main use, has capacitance and is used to conduct signals through a conductive sheet. Several other types of capacitor are available for specialist applications. Supercapacitors store large amounts of energy. Supercapacitors made from carbon aerogel, carbon nanotubes, or highly porous electrode materials offer extremely high capacitance (up to 5 kF) and can be used in some applications instead of rechargeable batteries. Alternating current capacitors are specifically designed to work on line (mains) voltage AC power circuits. They are commonly used in electric motor circuits and are often designed to handle large currents, so they tend to be physically large.
They are usually ruggedly packaged, often in metal cases that can be easily grounded/earthed. They are also designed with direct current breakdown voltages of at least five times the maximum AC voltage.

Voltage-dependent capacitors

The dielectric constant for a number of very useful dielectrics changes as a function of the applied electrical field, for example ferroelectric materials, so the capacitance for these devices is more complex. For example, in charging such a capacitor the differential increase in voltage with charge is governed by:

dQ = C(V) dV

where the voltage dependence of capacitance, C(V), suggests that the capacitance is a function of the electric field strength, which in a large area parallel plate device is given by E = V/d. This field polarizes the dielectric, which polarization, in the case of a ferroelectric, is a nonlinear S-shaped function of the electric field, which, in the case of a large area parallel plate device, translates into a capacitance that is a nonlinear function of the voltage. Corresponding to the voltage-dependent capacitance, to charge the capacitor to voltage V an integral relation is found:

Q = ∫₀^V C(V′) dV′

which agrees with Q = CV only when C does not depend on voltage V. By the same token, the energy stored in the capacitor now is given by

W = ∫₀^Q V(Q′) dQ′

Integrating:

W = ∫₀^Q (Q − Q′) / C(Q′) dQ′

where interchange of the order of integration is used. The nonlinear capacitance of a microscope probe scanned along a ferroelectric surface is used to study the domain structure of ferroelectric materials. Another example of voltage dependent capacitance occurs in semiconductor devices such as semiconductor diodes, where the voltage dependence stems not from a change in dielectric constant but from a voltage dependence of the spacing between the charges on the two sides of the capacitor. This effect is intentionally exploited in diode-like devices known as varicaps.
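The integral relations for a voltage-dependent capacitance can be checked numerically. The sketch below (Python) integrates Q = ∫C dV and W = ∫V·C dV by the trapezoidal rule; the falling C(V) in the second example is an assumed, purely illustrative ferroelectric-like curve. For a constant capacitance the sketch reproduces Q = CV and W = ½CV².

```python
def charge_and_energy(c_of_v, v_final, steps=10_000):
    """Trapezoidal integration of Q = ∫ C(V') dV' and W = ∫ V' C(V') dV'."""
    dv = v_final / steps
    q = w = 0.0
    for i in range(steps):
        v0, v1 = i * dv, (i + 1) * dv
        c0, c1 = c_of_v(v0), c_of_v(v1)
        q += 0.5 * (c0 + c1) * dv            # charge increment dQ = C dV
        w += 0.5 * (v0 * c0 + v1 * c1) * dv  # energy increment dW = V dQ
    return q, w

# Constant 100 nF capacitor charged to 10 V: Q = CV = 1 µC, W = ½CV² = 5 µJ.
q_lin, w_lin = charge_and_energy(lambda v: 100e-9, 10.0)

# Hypothetical ferroelectric-like capacitance that falls with voltage (assumed).
q_nl, w_nl = charge_and_energy(lambda v: 100e-9 / (1.0 + 0.05 * v), 10.0)
print(q_lin, w_lin)                # ≈ 1e-06 C, ≈ 5e-06 J
print(q_nl < q_lin, w_nl < w_lin)  # the nonlinear device stores less
```

For the constant-C case the trapezoidal rule is exact, which makes the closed-form values a convenient self-check.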
Frequency-dependent capacitors

If a capacitor is driven with a time-varying voltage that changes rapidly enough, at some frequency the polarization of the dielectric cannot follow the voltage. As an example of the origin of this mechanism, the internal microscopic dipoles contributing to the dielectric constant cannot move instantly, and so as the frequency of an applied alternating voltage increases, the dipole response is limited and the dielectric constant diminishes. A changing dielectric constant with frequency is referred to as dielectric dispersion, and is governed by dielectric relaxation processes, such as Debye relaxation. Under transient conditions, the displacement field can be expressed as (see electric susceptibility):

D(t) = ε₀ ∫₋∞^t εᵣ(t − t′) E(t′) dt′

indicating the lag in response by the time dependence of εᵣ, calculated in principle from an underlying microscopic analysis, for example, of the dipole behavior in the dielectric. See, for example, linear response function. The integral extends over the entire past history up to the present time. A Fourier transform in time then results in:

D(ω) = ε₀ εᵣ(ω) E(ω)

where εᵣ(ω) is now a complex function, with an imaginary part related to absorption of energy from the field by the medium. See permittivity. The capacitance, being proportional to the dielectric constant, also exhibits this frequency behavior. Fourier transforming Gauss's law with this form for the displacement field:

I(ω) = jω Q(ω) = [G(ω) + jω C(ω)] V(ω) = V(ω) / Z(ω)

where j is the imaginary unit, V(ω) is the voltage component at angular frequency ω, G(ω) is the real part of the current, called the conductance, and C(ω) determines the imaginary part of the current and is the capacitance. Z(ω) is the complex impedance.
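A short numeric sketch of this complex-capacitance picture (Python; the permittivity and capacitance values are assumed for illustration): the admittance Y = jωC₀εᵣ(ω) splits into a real conductance term contributed by the imaginary part ε″ and an imaginary term whose coefficient is the effective capacitance C₀ε′.

```python
import math

def capacitor_admittance(c0, eps_r_real, eps_r_imag, freq_hz):
    """Split Y = jωC0·(ε' − jε'') into conductance G and effective capacitance C."""
    omega = 2 * math.pi * freq_hz
    y = 1j * omega * c0 * complex(eps_r_real, -eps_r_imag)
    return y.real, y.imag / omega  # (G in siemens, C in farads)

# 10 pF empty-cell capacitance, assumed dielectric with ε' = 4, ε'' = 0.08 at 1 MHz:
g, c = capacitor_admittance(10e-12, 4.0, 0.08, 1e6)
print(c * 1e12)     # effective capacitance ≈ 40 pF
print(0.08 / 4.0)   # loss tangent tan δ = ε''/ε' = 0.02
```

The loss tangent ε″/ε′ is the usual single-number summary of dielectric absorption at a given frequency.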
When a parallel-plate capacitor is filled with a dielectric, the measurement of dielectric properties of the medium is based upon the relation:

εᵣ(ω) = εᵣ′(ω) − j εᵣ″(ω) = 1 / (jω Z(ω) C₀) = C_cmplx(ω) / C₀

where a single prime denotes the real part and a double prime the imaginary part, Z(ω) is the complex impedance with the dielectric present, C_cmplx(ω) is the so-called complex capacitance with the dielectric present, and C₀ is the capacitance without the dielectric. (Measurement "without the dielectric" in principle means measurement in free space, an unattainable goal inasmuch as even the quantum vacuum is predicted to exhibit nonideal behavior, such as dichroism. For practical purposes, when measurement errors are taken into account, often a measurement in terrestrial vacuum, or simply a calculation of C₀, is sufficiently accurate.) Using this measurement method, the dielectric constant may exhibit a resonance at certain frequencies corresponding to characteristic response frequencies (excitation energies) of contributors to the dielectric constant. These resonances are the basis for a number of experimental techniques for detecting defects. The conductance method measures absorption as a function of frequency. Alternatively, the time response of the capacitance can be used directly, as in deep-level transient spectroscopy. Another example of frequency dependent capacitance occurs with MOS capacitors, where the slow generation of minority carriers means that at high frequencies the capacitance measures only the majority carrier response, while at low frequencies both types of carrier respond. At optical frequencies, in semiconductors the dielectric constant exhibits structure related to the band structure of the solid. Sophisticated modulation spectroscopy measurement methods based upon modulating the crystal structure by pressure or by other stresses and observing the related changes in absorption or reflection of light have advanced our knowledge of these materials.
Styles

The arrangement of plates and dielectric has many variations in different styles depending on the desired ratings of the capacitor. For small values of capacitance (microfarads and less), ceramic disks use metallic coatings, with wire leads bonded to the coating. Larger values can be made by multiple stacks of plates and disks. Larger value capacitors usually use a metal foil or metal film layer deposited on the surface of a dielectric film to make the plates, and a dielectric film of impregnated paper or plastic; these are rolled up to save space. To reduce the series resistance and inductance for long plates, the plates and dielectric are staggered so that connection is made at the common edge of the rolled-up plates, not at the ends of the foil or metalized film strips that comprise the plates. The assembly is encased to prevent moisture entering the dielectric; early radio equipment used a cardboard tube sealed with wax. Modern paper or film dielectric capacitors are dipped in a hard thermoplastic. Large capacitors for high-voltage use may have the roll form compressed to fit into a rectangular metal case, with bolted terminals and bushings for connections. The dielectric in larger capacitors is often impregnated with a liquid to improve its properties. Capacitors may have their connecting leads arranged in many configurations, for example axially or radially. "Axial" means that the leads are on a common axis, typically the axis of the capacitor's cylindrical body; the leads extend from opposite ends. Radial leads are rarely aligned along radii of the body's circle, so the term is conventional. The leads (until bent) are usually in planes parallel to that of the flat body of the capacitor, and extend in the same direction; they are often parallel as manufactured. Small, cheap discoidal ceramic capacitors have existed since the 1930s, and remain in widespread use. Since the 1980s, surface mount packages for capacitors have been widely used.
These packages are extremely small and lack connecting leads, allowing them to be soldered directly onto the surface of printed circuit boards. Surface mount components avoid undesirable high-frequency effects due to the leads and simplify automated assembly, although manual handling is made difficult by their small size. Mechanically controlled variable capacitors allow the plate spacing to be adjusted, for example by rotating or sliding a set of movable plates into alignment with a set of stationary plates. Low cost variable capacitors squeeze together alternating layers of aluminum and plastic with a screw. Electrical control of capacitance is achievable with varactors (or varicaps), which are reverse-biased semiconductor diodes whose depletion region width varies with applied voltage. They are used in phase-locked loops, amongst other applications.

Capacitor markings

Marking codes for larger parts

Most capacitors have designations printed on their bodies to indicate their electrical characteristics. Larger capacitors, such as electrolytic types, usually display the capacitance as a value with an explicit unit, for example, 220 μF. For typographical reasons, some manufacturers print MF on capacitors to indicate microfarads (μF).

Three-/four-character marking code for small capacitors

Smaller capacitors, such as ceramic types, often use a shorthand notation consisting of three digits and an optional letter, where the digits (XYZ) denote the capacitance in picofarads (pF), calculated as XY × 10^Z, and the letter indicates the tolerance. Common tolerances are ±5%, ±10%, and ±20%, denoted as J, K, and M, respectively. A capacitor may also be labeled with its working voltage, temperature, and other relevant characteristics. Example: A capacitor labeled 473K 330V has a capacitance of 47 × 10^3 pF = 47 nF (±10%) with a maximum working voltage of 330 V.
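The three-digit code is easy to decode programmatically. A minimal sketch (Python; only the three common tolerance letters are handled):

```python
def parse_marking(code):
    """Decode a 3-digit (+ optional tolerance letter) capacitor marking.

    Digits XYZ mean XY × 10^Z picofarads; J/K/M give the tolerance."""
    tolerances = {"J": "±5%", "K": "±10%", "M": "±20%"}
    digits, letter = code[:3], code[3:4]
    picofarads = int(digits[:2]) * 10 ** int(digits[2])
    return picofarads, tolerances.get(letter)

print(parse_marking("473K"))  # (47000, '±10%')  i.e. 47 nF ±10%
print(parse_marking("104"))   # (100000, None)   i.e. 100 nF, tolerance unmarked
```

The "473K" case reproduces the worked example in the text.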
The working voltage of a capacitor is nominally the highest voltage that may be applied across it without undue risk of breaking down the dielectric layer.

Two-character marking code for small capacitors

For capacitances following the E3, E6, E12 or E24 series of preferred values, the former ANSI/EIA-198-D:1991, ANSI/EIA-198-1-E:1998 and ANSI/EIA-198-1-F:2002 as well as the amendment IEC 60062:2016/AMD1:2019 to IEC 60062 define a special two-character marking code for capacitors for very small parts which leave no room to print the above-mentioned three-/four-character code onto them. The code consists of an uppercase letter denoting the two significant digits of the value followed by a digit indicating the multiplier. The EIA standard also defines a number of lowercase letters to specify a number of values not found in E24.

RKM code

The RKM code following IEC 60062 and BS 1852 is a notation to state a capacitor's value in a circuit diagram. It avoids using a decimal separator and replaces the decimal separator with the SI prefix symbol for the particular value (and the letter F for weight 1). The code is also used for part markings. Example: 4n7 for 4.7 nF or 2F2 for 2.2 F.

Historical

In texts prior to the 1960s, and on some capacitor packages until more recently, obsolete capacitance units were used in electronic books, magazines, and electronics catalogs. The old units "mfd" and "mf" meant microfarad (μF), and the old units "mmfd", "mmf", "uuf", "μμf", and "pfd" meant picofarad (pF); they are rarely used any more. Also, "micromicrofarad" or "micro-microfarad" are obsolete units found in some older texts that are equivalent to the picofarad (pF).
Summary of obsolete capacitance units (upper/lower case variations are not shown): μF (microfarad) = mf, mfd; pF (picofarad) = mmf, mmfd, pfd, μμF.

Applications

Energy storage

A capacitor can store electric energy when disconnected from its charging circuit, so it can be used like a temporary battery, or like other types of rechargeable energy storage system. Capacitors are commonly used in electronic devices to maintain power supply while batteries are being changed. (This prevents loss of information in volatile memory.) A capacitor can facilitate conversion of kinetic energy of charged particles into electric energy and store it. There are tradeoffs between capacitors and batteries as storage devices. Without external resistors or inductors, capacitors can generally release their stored energy in a very short time compared to batteries. Conversely, batteries can hold a far greater charge for their size. Conventional capacitors provide less than 360 joules per kilogram of specific energy, whereas a conventional alkaline battery has a specific energy of 590 kJ/kg. There is an intermediate solution: supercapacitors, which can accept and deliver charge much faster than batteries, and tolerate many more charge and discharge cycles than rechargeable batteries. They are, however, 10 times larger than conventional batteries for a given charge. On the other hand, it has been shown that the amount of charge stored in the dielectric layer of a thin film capacitor can be equal to, or can even exceed, the amount of charge stored on its plates. In car audio systems, large capacitors store energy for the amplifier to use on demand. Also, for a flash tube, a capacitor is used to hold the high voltage.

Digital memory

In the 1930s, John Atanasoff applied the principle of energy storage in capacitors to construct dynamic digital memories for the first binary computers that used electron tubes for logic.
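The specific-energy comparison above is easy to make concrete with the ideal-capacitor relation E = ½CV². In the sketch below (Python), the 1 F / 2.7 V supercapacitor figures are assumed illustrative values, while the 360 J/kg and 590 kJ/kg numbers come from the text.

```python
def stored_energy(capacitance_f, voltage_v):
    """Energy in an ideal capacitor: E = ½ C V²."""
    return 0.5 * capacitance_f * voltage_v ** 2

# An illustrative 1 F supercapacitor charged to 2.7 V:
print(stored_energy(1.0, 2.7))  # ≈ 3.645 J

# Ratio of the specific energies quoted in the text:
print(590_000 / 360)  # an alkaline battery stores ~1600× more energy per kg
```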
Pulsed power and weapons

Pulsed power is used in many applications to increase the power intensity (watts) of a volume of energy (joules) by releasing that volume within a very short time. Pulses in the nanosecond range and powers in the gigawatts are achievable. Short pulses often require specially constructed, low-inductance, high-voltage capacitors that are often used in large groups (capacitor banks) to supply huge pulses of current for many pulsed power applications. These include electromagnetic forming, Marx generators, pulsed lasers (especially TEA lasers), pulse forming networks, radar, fusion research, and particle accelerators. Large capacitor banks (reservoir) are used as energy sources for the exploding-bridgewire detonators or slapper detonators in nuclear weapons and other specialty weapons. Experimental work is under way using banks of capacitors as power sources for electromagnetic armour and electromagnetic railguns and coilguns.

Power conditioning

Reservoir capacitors are used in power supplies where they smooth the output of a full or half wave rectifier. They can also be used in charge pump circuits as the energy storage element in the generation of higher voltages than the input voltage. Capacitors are connected in parallel with the power circuits of most electronic devices and larger systems (such as factories) to shunt away and conceal current fluctuations from the primary power source to provide a "clean" power supply for signal or control circuits. Audio equipment, for example, uses several capacitors in this way, to shunt away power line hum before it gets into the signal circuitry. The capacitors act as a local reserve for the DC power source, and bypass AC currents from the power supply. This is used in car audio applications, when a stiffening capacitor compensates for the inductance and resistance of the leads to the lead–acid car battery.
Power-factor correction

In electric power distribution, capacitors are used for power-factor correction. Such capacitors often come as three capacitors connected as a three-phase load. Usually, the values of these capacitors are not given in farads but rather as a reactive power in volt-amperes reactive (var). The purpose is to counteract inductive loading from devices like electric motors and transmission lines, to make the load appear to be mostly resistive. Individual motor or lamp loads may have capacitors for power-factor correction, or larger sets of capacitors (usually with automatic switching devices) may be installed at a load center within a building or in a large utility substation.

Suppression and coupling

Signal coupling

Because capacitors pass AC but block DC signals (when charged up to the applied DC voltage), they are often used to separate the AC and DC components of a signal. This method is known as AC coupling or "capacitive coupling". Here a capacitor of large value is employed; its value need not be accurately controlled, but its reactance must be small at the signal frequency.

Decoupling

A decoupling capacitor is a capacitor used to protect one part of a circuit from the effect of another, for instance to suppress noise or transients. Noise caused by other circuit elements is shunted through the capacitor, reducing the effect it has on the rest of the circuit. It is most commonly used between the power supply and ground. An alternative name is bypass capacitor, as it is used to bypass the power supply or other high impedance component of a circuit. Decoupling capacitors need not always be discrete components. Capacitors used in these applications may be built into a printed circuit board, between the various layers. These are often referred to as embedded capacitors.
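The power-factor-correction sizing described above can be sketched numerically. A capacitor bank must supply reactive power Q_c = P(tan φ₁ − tan φ₂), with φ = arccos(power factor); the 100 kW / 0.70 → 0.95 load figures below are assumed for illustration.

```python
import math

def correction_kvar(real_power_kw, pf_initial, pf_target):
    """Reactive power a capacitor bank must supply: Qc = P·(tan φ1 − tan φ2)."""
    tan1 = math.tan(math.acos(pf_initial))
    tan2 = math.tan(math.acos(pf_target))
    return real_power_kw * (tan1 - tan2)

# Raising a 100 kW inductive load from 0.70 to 0.95 power factor:
print(round(correction_kvar(100, 0.70, 0.95), 1))  # ≈ 69.2 kvar
```

This is why such banks are rated in var rather than farads: the var figure is what the sizing calculation produces directly.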
The layers in the board contributing to the capacitive properties also function as power and ground planes, and have a dielectric in between them, enabling them to operate as a parallel plate capacitor.

High-pass and low-pass filters

Noise suppression, spikes, and snubbers

When an inductive circuit is opened, the current through the inductance collapses quickly, creating a large voltage across the open circuit of the switch or relay. If the inductance is large enough, the energy may generate a spark, causing the contact points to oxidize, deteriorate, or sometimes weld together, or may destroy a solid-state switch. A snubber capacitor across the newly opened circuit creates a path for this impulse to bypass the contact points, thereby preserving their life; these were commonly found in contact breaker ignition systems, for instance. Similarly, in smaller scale circuits, the spark may not be enough to damage the switch but may still radiate undesirable radio frequency interference (RFI), which a filter capacitor absorbs. Snubber capacitors are usually employed with a low-value resistor in series, to dissipate energy and minimize RFI. Such resistor-capacitor combinations are available in a single package. Capacitors are also used in parallel with the interrupting units of a high-voltage circuit breaker to equally distribute the voltage between these units. These are called "grading capacitors". In schematic diagrams, a capacitor used primarily for DC charge storage is often drawn vertically with the lower, more negative, plate drawn as an arc. The straight plate indicates the positive terminal of the device, if it is polarized (see electrolytic capacitor).

Motor starters

In single phase squirrel cage motors, the primary winding within the motor housing is not capable of starting a rotational motion on the rotor, but is capable of sustaining one.
To start the motor, a secondary "start" winding has a series non-polarized starting capacitor to introduce a lead in the sinusoidal current. When the secondary (start) winding is placed at an angle with respect to the primary (run) winding, a rotating electric field is created. The force of the rotational field is not constant, but is sufficient to start the rotor spinning. When the rotor comes close to operating speed, a centrifugal switch (or current-sensitive relay in series with the main winding) disconnects the capacitor. The start capacitor is typically mounted to the side of the motor housing. These are called capacitor-start motors, which have relatively high starting torque. Typically they can have up to four times as much starting torque as a split-phase motor, and are used on applications such as compressors, pressure washers, and any small device requiring high starting torque. Capacitor-run induction motors have a permanently connected phase-shifting capacitor in series with a second winding. The motor is much like a two-phase induction motor. Motor-starting capacitors are typically non-polarized electrolytic types, while running capacitors are conventional paper or plastic film dielectric types.

Signal processing

The energy stored in a capacitor can be used to represent information, either in binary form, as in DRAMs, or in analogue form, as in analog sampled filters and CCDs. Capacitors can be used in analog circuits as components of integrators or more complex filters and in negative feedback loop stabilization. Signal processing circuits also use capacitors to integrate a current signal.

Tuned circuits

Capacitors and inductors are applied together in tuned circuits to select information in particular frequency bands. For example, radio receivers rely on variable capacitors to tune the station frequency. Speakers use passive analog crossovers, and analog equalizers use capacitors to select different audio bands.
The resonant frequency f of a tuned circuit is a function of the inductance (L) and capacitance (C) in series, and is given by:

f = 1 / (2π √(LC))

where L is in henries and C is in farads.

Sensing

Most capacitors are designed to maintain a fixed physical structure. However, various factors can change the structure of the capacitor; the resulting change in capacitance can be used to sense those factors.

Changing the dielectric

The effects of varying the characteristics of the dielectric can be used for sensing purposes. Capacitors with an exposed and porous dielectric can be used to measure humidity in air. Capacitors are used to accurately measure the fuel level in airplanes; as the fuel covers more of a pair of plates, the circuit capacitance increases. Squeezing the dielectric can change a capacitor at a few tens of bar pressure sufficiently that it can be used as a pressure sensor. A selected, but otherwise standard, polymer dielectric capacitor, when immersed in a compatible gas or liquid, can work usefully as a very low cost pressure sensor up to many hundreds of bar.

Changing the distance between the plates

Capacitors with a flexible plate can be used to measure strain or pressure. Industrial pressure transmitters used for process control use pressure-sensing diaphragms, which form a capacitor plate of an oscillator circuit. Capacitors are used as the sensor in condenser microphones, where one plate is moved by air pressure, relative to the fixed position of the other plate. Some accelerometers use MEMS capacitors etched on a chip to measure the magnitude and direction of the acceleration vector. They are used to detect changes in acceleration, in tilt sensors, or to detect free fall, as sensors triggering airbag deployment, and in many other applications. Some fingerprint sensors use capacitors. Additionally, a user can adjust the pitch of a theremin musical instrument by moving their hand, since this changes the effective capacitance between the user's hand and the antenna.
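The tuned-circuit resonance formula f = 1/(2π√(LC)) can be evaluated directly; a quick sketch (Python, with assumed AM-band component values):

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency of an ideal LC circuit: f = 1 / (2π √(LC))."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# A 240 µH coil with a 100 pF variable capacitor:
f = resonant_frequency(240e-6, 100e-12)
print(round(f / 1e3))  # ≈ 1027 kHz, inside the AM broadcast band
```

Sweeping the variable capacitor from roughly 50 pF to 500 pF moves the resonance across the band, which is how a variable capacitor tunes a receiver.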
Changing the effective area of the plates

Capacitive touch switches are now used on many consumer electronic products.

Oscillators

A capacitor can possess spring-like qualities in an oscillator circuit. In the image example, a capacitor acts to influence the biasing voltage at the npn transistor's base. The resistance values of the voltage-divider resistors and the capacitance value of the capacitor together control the oscillatory frequency.

Producing light

A light-emitting capacitor is made from a dielectric that uses phosphorescence to produce light. If one of the conductive plates is made with a transparent material, the light is visible. Light-emitting capacitors are used in the construction of electroluminescent panels, for applications such as backlighting for laptop computers. In this case, the entire panel is a capacitor used for the purpose of generating light.

Hazards and safety

The hazards posed by a capacitor are usually determined, foremost, by the amount of energy stored, which is the cause of things like electrical burns or heart fibrillation. Factors such as voltage and chassis material are of secondary consideration, being more related to how easily a shock can be initiated than to how much damage can occur. Under certain conditions, including conductivity of the surfaces, preexisting medical conditions, the humidity of the air, or the pathways the current takes through the body (i.e. shocks that travel across the core of the body and, especially, the heart are more dangerous than those limited to the extremities), shocks as low as one joule have been reported to cause death, although in most instances they may not even leave a burn. Shocks over ten joules will generally damage skin, and are usually considered hazardous. Any capacitor that can store 50 joules or more should be considered potentially lethal.
Capacitors may retain a charge long after power is removed from a circuit; this charge can cause dangerous or even potentially fatal shocks or damage connected equipment. For example, even a seemingly innocuous device such as the flash of a disposable camera has a photoflash capacitor which may contain over 15 joules of energy and be charged to over 300 volts. This is easily capable of delivering a shock. Service procedures for electronic devices usually include instructions to discharge large or high-voltage capacitors, for instance using a Brinkley stick. Larger capacitors, such as those used in microwave ovens, HVAC units and medical defibrillators, may also have built-in discharge resistors to dissipate stored energy to a safe level within a few seconds after power is removed. High-voltage capacitors are stored with the terminals shorted, as protection from potentially dangerous voltages due to dielectric absorption or from transient voltages the capacitor may pick up from static charges or passing weather events. Some old, large oil-filled paper or plastic film capacitors contain polychlorinated biphenyls (PCBs). It is known that waste PCBs can leak into groundwater under landfills. Capacitors containing PCBs were labelled as containing "Askarel" and several other trade names. PCB-filled paper capacitors are found in very old (pre-1975) fluorescent lamp ballasts, and other applications. Capacitors may catastrophically fail when subjected to voltages or currents beyond their rating, or, in the case of polarized capacitors, applied in reverse polarity. Failures may create arcing that heats and vaporizes the dielectric fluid, causing a build-up of pressurized gas that may result in swelling, rupture, or an explosion. Larger capacitors may have vents or a similar mechanism to allow the release of such pressures in the event of failure. Capacitors used in RF or sustained high-current applications can overheat, especially in the center of the capacitor rolls.
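To see why the photoflash figures quoted above are plausible, the E = ½CV² relation can be inverted for capacitance; a minimal sketch (Python):

```python
def capacitance_for_energy(energy_j, voltage_v):
    """Invert E = ½ C V² to get the capacitance that holds a given energy."""
    return 2.0 * energy_j / voltage_v ** 2

# A photoflash capacitor holding 15 J at 300 V, as in the example above:
c = capacitance_for_energy(15, 300)
print(round(c * 1e6))  # ≈ 333 µF, a realistic photoflash part size
```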
Capacitors used within high-energy capacitor banks can violently explode when a short in one capacitor causes sudden dumping of energy stored in the rest of the bank into the failing unit. High voltage vacuum capacitors can generate soft X-rays even during normal operation. Proper containment, fusing, and preventive maintenance can help to minimize these hazards. High-voltage capacitors may benefit from a pre-charge to limit in-rush currents at power-up of high voltage direct current (HVDC) circuits. This extends the life of the component and may mitigate high-voltage hazards.
https://en.wikipedia.org/wiki/Test%20particle
Test particle
In physical theories, a test particle, or test charge, is an idealized model of an object whose physical properties (usually mass, charge, or size) are assumed to be negligible except for the property being studied, which is considered to be insufficient to alter the behaviour of the rest of the system. The concept of a test particle often simplifies problems, and can provide a good approximation for physical phenomena. In addition to its uses in the simplification of the dynamics of a system in particular limits, it is also used as a diagnostic in computer simulations of physical processes.

Electrostatics

In simulations with electric fields the most important characteristics of a test particle are its electric charge and its mass. In this situation it is often referred to as a test charge. The electric field created by a point charge q is

E = q / (4π ε₀ r²) r̂

where ε₀ is the vacuum electric permittivity. Multiplying this field by a test charge gives an electric force (Coulomb's law) exerted by the field on the test charge. Note that both the force and the electric field are vector quantities, so a positive test charge will experience a force in the direction of the electric field.

Classical gravity

The easiest case for the application of a test particle arises in Newton's law of universal gravitation. The general expression for the gravitational force between any two point masses m₁ and m₂ is

F = −G m₁ m₂ / |r₁ − r₂|² r̂

where r₁ and r₂ represent the position of each particle in space and r̂ is the unit vector along the line joining them. In the general solution for this equation, both masses rotate around their center of mass:

R = (m₁ r₁ + m₂ r₂) / (m₁ + m₂)

In the case where one of the masses is much larger than the other (m₁ ≫ m₂), one can assume that the smaller mass moves as a test particle in a gravitational field generated by the larger mass, which does not accelerate. We can define the gravitational field as

g(r) = −(G M / r²) r̂

with r as the distance between the massive object and the test particle, and r̂ the unit vector in the direction going from the massive object to the test mass.
Newton's second law of motion of the smaller mass then reduces to

a = g(r)

and thus only contains one variable, for which the solution can be calculated more easily. This approach gives very good approximations for many practical problems, e.g. the orbits of satellites, whose mass is relatively small compared to that of the Earth.

General relativity

In metric theories of gravitation, particularly general relativity, a test particle is an idealized model of a small object whose mass is so small that it does not appreciably disturb the ambient gravitational field. According to the Einstein field equations, the gravitational field is locally coupled not only to the distribution of non-gravitational mass–energy, but also to the distribution of momentum and stress (e.g. pressure, viscous stresses in a perfect fluid). In the case of test particles in a vacuum solution or electrovacuum solution, this turns out to imply that in addition to the tidal acceleration experienced by small clouds of test particles (with spin or not), test particles with spin may experience additional accelerations due to spin–spin forces.
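Returning to the Newtonian case: the reduced equation of motion a = g(r) is exactly what computer simulations of test particles integrate. A minimal sketch (Python) propagates a test particle on a circular low-Earth orbit using a semi-implicit Euler step; GM is Earth's standard gravitational parameter, while the orbit radius is an assumed illustrative value.

```python
import math

GM = 3.986004418e14      # m^3/s^2, Earth's gravitational parameter
r0 = 6.771e6             # m, ~400 km altitude orbit radius (illustrative)
v0 = math.sqrt(GM / r0)  # circular-orbit speed, from GM/r^2 = v^2/r

x, y = r0, 0.0
vx, vy = 0.0, v0
dt = 1.0                                    # s
period = 2 * math.pi * math.sqrt(r0**3 / GM)
for _ in range(int(period)):                # integrate for about one orbit
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3  # a = g(r) = -(GM/r^2) r-hat
    vx += ax * dt; vy += ay * dt             # semi-implicit (symplectic) Euler
    x += vx * dt; y += vy * dt

drift = abs(math.hypot(x, y) / r0 - 1)
print(drift < 1e-2)  # orbit radius conserved to better than 1%
```

The symplectic update (velocity first, then position) keeps the orbit radius bounded over many revolutions, which is why it is preferred over plain Euler for test-particle diagnostics.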
https://en.wikipedia.org/wiki/Image%20file%20format
Image file format
An image file format is a file format for a digital image. There are many formats that can be used, such as JPEG, PNG, and GIF. Most formats up until 2022 were for storing 2D images, not 3D ones. The data stored in an image file format may be compressed or uncompressed. If the data is compressed, this may be done using lossy compression or lossless compression. For graphic design applications, vector formats are often used. Some image file formats support transparency. Raster formats are for 2D images. A 3D image can be represented within a 2D format, as in a stereogram or autostereogram, but this 3D image will not be a true light field, and thereby may cause the vergence-accommodation conflict. Image files are composed of digital data in one of these formats so that the data can be displayed on a digital (computer) display or printed out using a printer. A common method for displaying digital image information has historically been rasterization.

Image file sizes

The size of raster image files is positively correlated with the number of pixels in the image and the color depth (bits per pixel). Images can be compressed in various ways, however. A compression algorithm stores either an exact representation or an approximation of the original image in a smaller number of bytes that can be expanded back to its uncompressed form with a corresponding decompression algorithm. Images with the same number of pixels and color depth can have very different compressed file sizes. Considering exactly the same compression, number of pixels, and color depth for two images, different graphical complexity of the original images may also result in very different file sizes after compression due to the nature of compression algorithms. With some compression formats, images that are less complex may result in smaller compressed file sizes. This characteristic sometimes results in a smaller file size for some lossless formats than lossy formats.
For example, graphically simple images (i.e. images with large continuous regions, like line art or animation sequences) may be losslessly compressed into the GIF or PNG format and result in a smaller file size than a lossy JPEG. For instance, a 640 × 480 pixel image with 24-bit color would occupy almost a megabyte of space: 640 × 480 × 24 = 7,372,800 bits = 921,600 bytes = 900 KiB. With vector images, the file size increases only with the addition of more vectors. Image file compression There are two types of image file compression algorithms: lossless and lossy. Lossless compression algorithms reduce file size while preserving a perfect copy of the original uncompressed image. Lossless compression generally, but not always, results in larger files than lossy compression. Lossless compression should be used to avoid accumulating stages of re-compression when editing images. Lossy compression algorithms preserve a representation of the original uncompressed image that may appear to be a perfect copy, but is not. Lossy compression is often able to achieve smaller file sizes than lossless compression, and most lossy compression algorithms allow for variable compression that trades image quality for file size. Major graphic file formats Including proprietary types, there are hundreds of image file types. The PNG, JPEG, and GIF formats are most often used to display images on the Internet. Some of these graphic formats are listed and briefly described below, separated into the two main families of graphics: raster and vector. Raster images are further divided into formats primarily aimed at (web) delivery (i.e. supporting relatively strong compression) versus formats primarily aimed at authoring or interchange (uncompressed, or with only relatively weak compression). In addition to straight image formats, metafile formats are portable formats which can include both raster and vector information. Examples are application-independent formats such as WMF and EMF. 
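The raw-size arithmetic for the 640 × 480 example above can be reproduced in a few lines of code (a minimal sketch; the helper name is ours):

```python
# Raw (uncompressed) size of a raster image: pixel count times color depth.
def raw_size_bytes(width, height, bits_per_pixel):
    """Return the uncompressed size in bytes of a width x height image."""
    return width * height * bits_per_pixel // 8

size = raw_size_bytes(640, 480, 24)
print(size)          # 921600 bytes
print(size / 1024)   # 900.0 KiB
```

Compression algorithms then reduce this raw size, by exactly the ratios the formats below trade off against quality.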
The metafile format is an intermediate format. Most applications open metafiles and then save them in their own native format. Page description language refers to formats used to describe the layout of a printed page containing text, objects and images. Examples are PostScript, PDF and PCL. Raster formats (2D) Delivery formats JPEG JPEG (Joint Photographic Experts Group) is a lossy compression method; JPEG-compressed images are usually stored in the JFIF (JPEG File Interchange Format) or the Exif (Exchangeable image file format) file format. The JPEG filename extension is JPG or JPEG. Nearly every digital camera can save images in the JPEG format, which supports eight-bit grayscale images and 24-bit color images (eight bits each for red, green, and blue). JPEG applies lossy compression to images, which can result in a significant reduction of the file size. Applications can determine the degree of compression to apply, and the amount of compression affects the visual quality of the result. When not too great, the compression does not noticeably affect or detract from the image's quality, but JPEG files suffer generational degradation when repeatedly edited and saved. (JPEG also provides lossless image storage, but the lossless version is not widely supported.) GIF The GIF (Graphics Interchange Format) is in normal use limited to an 8-bit palette, or 256 colors (while 24-bit color depth is technically possible). GIF is most suitable for storing graphics with few colors, such as simple diagrams, shapes, logos, and cartoon style images, as it uses LZW lossless compression, which is more effective when large areas have a single color, and less effective for photographic or dithered images. Due to GIF's simplicity and age, it achieved almost universal software support. Due to its animation capabilities, it is still widely used to provide image animation effects, despite its low compression ratio compared to modern video formats. 
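Because each of these delivery formats begins with a fixed signature, a file's format can usually be identified from its first few bytes alone. A minimal sketch (the function name is ours; the signature values follow the published PNG, JPEG/JFIF, GIF, and WebP specifications):

```python
# Identify a raster file's format from its leading signature ("magic") bytes.
def sniff_format(data: bytes) -> str:
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "PNG"
    if data.startswith(b"\xff\xd8\xff"):
        return "JPEG"
    if data[:6] in (b"GIF87a", b"GIF89a"):
        return "GIF"
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        return "WebP"
    return "unknown"

with_png_header = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
print(sniff_format(with_png_header))  # PNG
```

This signature-based sniffing is why browsers and image viewers can open a file correctly even when its filename extension is wrong.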
PNG The PNG (Portable Network Graphics) file format was created as a free, open-source alternative to GIF. The PNG file format supports 8-bit (256 colors) paletted images (with optional transparency for all palette colors) and 24-bit truecolor (16 million colors) or 48-bit truecolor with and without alpha channel – while GIF supports only 8-bit palettes with a single transparent color. Compared to JPEG, PNG excels when the image has large, uniformly colored areas. Even for photographs – where JPEG is often the choice for final distribution since its lossy compression typically yields smaller file sizes – PNG is still well-suited to storing images during the editing process because of its lossless compression. PNG provides a patent-free replacement for GIF (though GIF is itself now patent-free) and can also replace many common uses of TIFF. Indexed-color, grayscale, and truecolor images are supported, plus an optional alpha channel. The Adam7 interlacing allows an early preview, even when only a small percentage of the image data has been transmitted — useful in online viewing applications like web browsers. PNG can store gamma and chromaticity data, as well as ICC profiles, for accurate color matching on heterogeneous platforms. Animated formats derived from PNG are MNG and APNG, which is backwards compatible with PNG and supported by most browsers. JPEG 2000 JPEG 2000 is a compression standard enabling both lossless and lossy storage. The compression methods used are different from the ones in standard JFIF/JPEG; they improve quality and compression ratios, but also require more computational power to process. JPEG 2000 also adds features that are missing in JPEG. It is not nearly as common as JPEG, but it is used currently in professional movie editing and distribution (some digital cinemas, for example, use JPEG 2000 for individual movie frames). WebP WebP is an open image format released in 2010 that uses both lossless and lossy compression. 
It was designed by Google to reduce image file size to speed up web page loading: its principal purpose is to supersede JPEG as the primary format for photographs on the web. WebP is based on VP8's intra-frame coding and uses a container based on RIFF. In 2011, Google added an "Extended File Format" allowing WebP support for animation, ICC profile, XMP and Exif metadata, and tiling. The support for animation allowed for converting older animated GIFs to animated WebP. The WebP container (i.e., RIFF container for WebP) allows feature support over and above the basic use case of WebP (i.e., a file containing a single image encoded as a VP8 key frame). The WebP container provides additional support for: Lossless compression – An image can be losslessly compressed, using the WebP Lossless Format. Metadata – An image may have metadata stored in EXIF or XMP formats. Transparency – An image may have transparency, i.e., an alpha channel. Color Profile – An image may have an embedded ICC profile as described by the International Color Consortium. Animation – An image may have multiple frames with pauses between them, making it an animation. HDR raster formats Most typical raster formats cannot store HDR data (32 bit floating point values per pixel component), which is why some relatively old or complex formats are still predominant here, and worth mentioning separately. Newer alternatives are showing up, though. RGBE is the format for HDR images originating from Radiance and also supported by Adobe Photoshop. JPEG-HDR is a file format from Dolby Labs similar to RGBE encoding, standardized as JPEG XT Part 2. JPEG XT Part 7 includes support for encoding floating point HDR images in the base 8-bit JPEG file using enhancement layers encoded with four profiles (A-D); Profile A is based on the RGBE format and Profile B on the XDepth format from Trellis Management. 
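The shared-exponent idea behind RGBE can be sketched in a few lines: three 8-bit mantissas share a single 8-bit exponent with a bias of 128. This follows Greg Ward's classic Radiance encoding; the function names are ours, and production encoders add rounding refinements omitted here.

```python
import math

# Encode a floating-point RGB triple as four bytes (R, G, B, E):
# mantissas for each channel plus one shared, biased exponent.
def float_to_rgbe(r, g, b):
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    m, e = math.frexp(v)              # v = m * 2**e, with 0.5 <= m < 1
    scale = m * 256.0 / v
    return (int(r * scale), int(g * scale), int(b * scale), e + 128)

def rgbe_to_float(rm, gm, bm, e):
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - 128 - 8)  # undo the bias and mantissa scaling
    return (rm * f, gm * f, bm * f)

print(float_to_rgbe(1.0, 0.5, 0.25))    # (128, 64, 32, 129)
print(rgbe_to_float(128, 64, 32, 129))  # (1.0, 0.5, 0.25)
```

The shared exponent is what lets RGBE cover a huge dynamic range in only 32 bits per pixel, at the cost of some precision in channels much dimmer than the brightest one.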
HEIF The High Efficiency Image File Format (HEIF) is an image container format that was standardized by MPEG on the basis of the ISO base media file format. While HEIF can be used with any image compression format, the HEIF standard specifies the storage of HEVC intra-coded images and HEVC-coded image sequences taking advantage of inter-picture prediction. AVIF The AV1 Image File Format (AVIF) was standardized by the Alliance for Open Media (AOMedia), the video consortium behind the AV1 video format, to take advantage of modern compression algorithms in a completely royalty-free image format. It uses AV1 coding and the HEIF container (see AV1 in HEIF). JPEG XL JPEG XL is a royalty-free raster-graphics file format that supports both lossy and lossless compression. It supports reversible recompression of existing JPEG files, as well as high-precision HDR (up to 32-bit floating point values per pixel component). It is designed to be usable for both delivery and authoring use cases. Authoring / Interchange formats TIFF The TIFF (Tag Image File Format) format is a flexible format usually using either the TIFF or TIF filename extension. The tag structure was designed to be easily extendible, and many vendors have introduced proprietary special-purpose tags – with the result that no one reader handles every flavor of TIFF file. TIFFs can be lossy or lossless, depending on the technique chosen for storing the pixel data. Some offer relatively good lossless compression for bi-level (black-and-white) images. Some digital cameras can save images in TIFF format, using the LZW compression algorithm for lossless storage. TIFF image format is not widely supported by web browsers, but it remains widely accepted as a photograph file standard in the printing business. TIFF can handle device-specific color spaces, such as the CMYK defined by a particular set of printing press inks. 
OCR (Optical Character Recognition) software packages commonly generate some form of TIFF image (often monochromatic) for scanned text pages. BMP The BMP file format (Windows bitmap) is a raster-based device-independent file type designed in the early days of computer graphics. It handles graphic files within the Microsoft Windows OS. Typically, BMP files are uncompressed, and therefore large and lossless; their advantage is their simple structure and wide acceptance in Windows programs. PPM, PGM, PBM, and PNM Netpbm format is a family including the portable pixmap file format (PPM), the portable graymap file format (PGM) and the portable bitmap file format (PBM). These are either pure ASCII files or raw binary files with an ASCII header that provide very basic functionality and serve as a lowest common denominator for converting pixmap, graymap, or bitmap files between different platforms. Several applications refer to them collectively as PNM ("Portable aNy Map"). Container formats of raster graphics editors These image formats contain various images, layers and objects, out of which the final image is to be composed:
AFPhoto (Affinity Photo Document)
CD5 (Chasys Draw Image)
CLIP (Clip Studio Paint)
CPT (Corel Photo Paint)
KRA (Krita)
MDP (Medibang and FireAlpaca)
PDN (Paint Dot Net)
PLD (PhotoLine Document)
PSD (Adobe Photoshop Document)
PSP (Corel Paint Shop Pro)
SAI (Paint Tool SAI)
XCF (eXperimental Computing Facility format) — native GIMP format
Other raster formats
BPG (Better Portable Graphics) — an image format from 2014. Its purpose is to replace JPEG when quality or file size is an issue. To that end, it features a high data compression ratio, based on a subset of the HEVC video compression standard, including lossless compression. In addition, it supports various metadata (such as EXIF). 
DEEP — IFF-style format used by TVPaint
DRW (Drawn File)
ECW (Enhanced Compression Wavelet)
FITS (Flexible Image Transport System)
FLIF (Free Lossless Image Format) — a discontinued lossless image format which claims to outperform PNG, lossless WebP, lossless BPG and lossless JPEG 2000 in terms of compression ratio. It uses the MANIAC (Meta-Adaptive Near-zero Integer Arithmetic Coding) entropy encoding algorithm, a variant of the CABAC (context-adaptive binary arithmetic coding) entropy encoding algorithm.
ICO — container for one or more icons (subsets of BMP and/or PNG)
ILBM — IFF-style format for up to 32 bit in planar representation, plus optional 64 bit extensions
IMG (ERDAS IMAGINE Image)
IMG (Graphics Environment Manager (GEM) image file) — planar, run-length encoded
JPEG XR — JPEG standard based on Microsoft HD Photo
Layered Image File Format — for microscope image processing
Nrrd (Nearly raw raster data)
PAM (Portable Arbitrary Map) — late addition to the Netpbm family
PCX (PiCture eXchange) — obsolete
PGF (Progressive Graphics File)
PLBM (Planar Bitmap) — proprietary Amiga format
SGI (Silicon Graphics Image) — native raster graphics file format for Silicon Graphics workstations
SID (multiresolution seamless image database, MrSID)
Sun Raster — obsolete
TGA (TARGA) — obsolete
VICAR file format — NASA/JPL image transport format
XISF (Extensible Image Serialization Format)
Vector formats As opposed to the raster image formats above (where the data describes the characteristics of each individual pixel), vector image formats contain a geometric description which can be rendered smoothly at any desired display size. At some point, all vector graphics must be rasterized in order to be displayed on digital monitors. Vector images may also be displayed with analog CRT technology such as that used in some electronic test equipment, medical monitors, radar displays, laser shows and early video games. 
Plotters are printers that use vector data rather than pixel data to draw graphics. CGM CGM (Computer Graphics Metafile) is a file format for 2D vector graphics, raster graphics, and text, and is defined by ISO/IEC 8632. All graphical elements can be specified in a textual source file that can be compiled into a binary file or one of two text representations. CGM provides a means of graphics data interchange for computer representation of 2D graphical information independent from any particular application, system, platform, or device. It has been adopted to some extent in the areas of technical illustration and professional design, but has largely been superseded by formats such as SVG and DXF. Gerber format (RS-274X) The Gerber format (aka Extended Gerber, RS-274X) is a 2D bi-level image description format developed by Ucamco. It is the de facto standard format for printed circuit board (PCB) software. SVG SVG (Scalable Vector Graphics) is an open standard created and developed by the World Wide Web Consortium to address the need (and attempts of several corporations) for a versatile, scriptable and all-purpose vector format for the web and otherwise. The SVG format does not have a compression scheme of its own, but due to the textual nature of XML, an SVG graphic can be compressed using a program such as gzip. Because of its scripting potential, SVG is a key component in web applications: interactive web pages that look and act like applications. 
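Since an SVG file is ordinary XML text, compressing it with an external tool is trivial; gzip-compressed SVG is conventionally given the .svgz extension. A minimal sketch:

```python
import gzip

# A tiny hand-written SVG document (plain XML text).
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
    '<circle cx="50" cy="50" r="40" fill="red"/>'
    '</svg>'
)

# SVG has no compression scheme of its own, but a generic compressor
# such as gzip works well on its textual content (the ".svgz" convention).
compressed = gzip.compress(svg.encode("utf-8"))
restored = gzip.decompress(compressed).decode("utf-8")
assert restored == svg
```

Real-world SVG documents, with their long runs of repeated element and attribute names, typically compress far better than this tiny example.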
Other 2D vector formats
AFDesign (Affinity Designer document)
AI (Adobe Illustrator Artwork) — proprietary file format developed by Adobe Systems
CDR — proprietary format for the CorelDRAW vector graphics editor
!DRAW — a native vector graphic format (in several backward compatible versions) for the RISC-OS computer system begun by Acorn in the mid-1980s and still present on that platform today
DrawingML — used in Office Open XML documents
GEM — metafiles interpreted and written by the Graphics Environment Manager VDI subsystem
GLE (Graphics Layout Engine) — graphics scripting language
HP-GL (Hewlett-Packard Graphics Language) — introduced on Hewlett-Packard plotters, but generalized into a printer language
HVIF (Haiku Vector Icon Format)
Lottie — format for vector graphics animation
MathML (Mathematical Markup Language) — an application of XML for describing mathematical notations
NAPLPS (North American Presentation Layer Protocol Syntax)
ODG (OpenDocument Graphics)
PGML (Precision Graphics Markup Language) — a W3C submission that was not adopted as a recommendation
PSTricks and PGF/TikZ — languages for creating graphics in TeX documents
QCC — used by Quilt Manager (by Quilt EZ) for designing quilts
ReGIS (Remote Graphic Instruction Set) — used by DEC computer terminals
Remote imaging protocol — system for sending vector graphics over low-bandwidth links
TinyVG — binary, simpler alternative to SVG
VML (Vector Markup Language) — obsolete XML-based format
Xar — format used in vector applications from Xara
XPS (XML Paper Specification) — page description language and a fixed-document format
3D vector formats
AMF — Additive Manufacturing File Format
Asymptote — a language that lifts TeX to 3D. 
.blend — Blender
COLLADA
DGN
.dwf
.dwg
.dxf
eDrawings
.flt — OpenFlight
FVRML and FX3D — function-based extensions of VRML and X3D
glTF — 3D asset delivery format (.glb binary version)
HSF
IGES
IMML — Immersive Media Markup Language
IPA
JT
.MA (Maya ASCII format)
.MB (Maya Binary format)
.OBJ — Wavefront
OpenGEX — Open Game Engine Exchange
PLY
POV-Ray scene description language
PRC
STEP
SKP
STL — a stereolithography format
U3D — Universal 3D file format
VRML — Virtual Reality Modeling Language
XAML
XGL
XVL
xVRML
X3D
.3D
3DF
.3DM
.3ds — Autodesk 3D Studio
3DXML
X3D — vector format used in 3D applications from Xara
Compound formats These are formats containing both pixel and vector data, and possibly other data, e.g. the interactive features of PDF.
EPS (Encapsulated PostScript)
MODCA (Mixed Object: Document Content Architecture)
PDF (Portable Document Format)
PostScript — a page description language with strong graphics capabilities
PICT (Classic Macintosh QuickDraw file)
WMF / EMF (Windows Metafile / Enhanced Metafile)
SWF (Shockwave Flash)
XAML — user interface language using vector graphics for images
Stereo formats MPO The Multi Picture Object (.mpo) format consists of multiple JPEG images (Camera & Imaging Products Association, CIPA). PNS The PNG Stereo (.pns) format consists of a side-by-side image based on PNG (Portable Network Graphics). JPS The JPEG Stereo (.jps) format consists of a side-by-side image format based on JPEG.
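The side-by-side layout used by JPS and PNS can be illustrated with the ASCII PPM format described earlier, since PPM is simple enough to emit by hand. The helper name is ours, and real JPS/PNS files are JPEG- or PNG-encoded rather than PPM; only the row layout is the same.

```python
# A side-by-side stereo layout simply places the left and right views
# next to each other: each output row is the left row followed by the
# right row, so the combined image is twice as wide.
def side_by_side_ppm(left, right, width, height):
    # left/right: lists of (r, g, b) tuples in row-major order
    header = f"P3\n{width * 2} {height}\n255\n"
    rows = []
    for y in range(height):
        row = left[y * width:(y + 1) * width] + right[y * width:(y + 1) * width]
        rows.append(" ".join(f"{r} {g} {b}" for r, g, b in row))
    return header + "\n".join(rows) + "\n"

left = [(255, 0, 0)] * 4    # 2x2 solid red "left eye" view
right = [(0, 0, 255)] * 4   # 2x2 solid blue "right eye" view
print(side_by_side_ppm(left, right, 2, 2))
```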
Technology
File formats
https://en.wikipedia.org/wiki/Blesmol
Blesmol
The blesmols, also known as mole-rats, or African mole-rats, are burrowing rodents of the family Bathyergidae. They represent a distinct evolution of a subterranean life among rodents, much like the pocket gophers of North America, the tuco-tucos in South America, and the Spalacidae from Eurasia. Distribution Modern blesmols are found strictly in sub-Saharan Africa. Fossil forms are also restricted almost exclusively to Africa, although a few specimens of the Pleistocene species Cryptomys asiaticus have been found in Israel. Nowak (1999) also reports that †Gypsorhychus has been found in fossil deposits of Mongolia. Anatomy Blesmols are somewhat mole-like animals with cylindrical bodies and short limbs. Their body length and weight vary considerably depending on the species. Blesmols, like many other fossorial mammals, have greatly reduced eyes and ear pinnae, a relatively short tail, loose skin, and (aside from the hairless naked mole rat) velvety fur. Blesmols have very poor vision, although they may use the surfaces of their eyes for sensing air currents. Despite their small or absent pinnae, they have a good sense of hearing, although their most important sense appears to be that of touch. Like other rodents, they have an excellent sense of smell, and they are also able to close their nostrils during digging to prevent them from clogging with dirt. The eyes of blesmols are structurally normal, despite their relatively small size, and include normal light-sensitive cells. However, the visual centres of their brains are reduced in certain respects, especially in those centres concerned with localising objects in the visual field. Research has shown that at least two species of blesmol (Fukomys mechowii and Heliophobius argenteocinereus) are not blind, as commonly believed, and will actively avoid blue or green-yellow light. They do not appear able to detect the presence of red light, and can probably not distinguish between different colours. 
The ability to sense the presence of light is probably useful in allowing them to detect breaches in their tunnel systems and repair them promptly. Most blesmol species dig using their powerful incisors and, to a lesser extent, the foreclaws, although dune blesmols dig primarily with their feet, restricting them to soft, sandy soil. Dune blesmols aside, some species have been reported to be able to extend their burrows by an inch into the walls of concrete enclosures. Their unique skull shape is associated with delivering sheer power to the lateral masseter muscle, which is responsible for the powerful bite of the anterior portion of the mouth. The incisors of blesmols are projected forward and protrude from the mouth even when the mouth is closed. This condition allows the animals to burrow with their teeth without getting dirt in their mouths. The number of cheek teeth varies greatly between species, an unusual feature among rodents, so that the dental formula for the family is variable. Technical characteristics The skull morphology of blesmols sets them apart from all other rodents. As with all members of their suborder, their jaws are hystricognathous, but, unlike their relatives, they have a highly reduced infraorbital foramen. The medial masseter muscle shows only minimal passage through the infraorbital foramen, leading most authorities to consider them protrogomorphous. They are therefore the only protrogomorphous hystricognaths. Behavior Blesmols live in elaborate burrow systems and different species exhibit varying degrees of sociality. Most species are solitary, but one species, the damaraland blesmol (Fukomys damarensis) is one of only two eusocial mammals, the other being the naked mole rat. These species are characterized by having a single reproductively active male and female in a colony where the remaining animals are sterile. These animals prefer loose, sandy soils and are often associated with arid habitats. 
They rarely come to the surface, spending their entire life underground. Blesmols are herbivorous, and primarily eat roots, tubers, and bulbs. They are even able to pull smaller plants underground by their roots, without having to leave their burrows, enabling them to eat leaves, stems, and other parts of the plant that would otherwise be inaccessible. Blesmols burrow in search of food, and the great majority of their tunnel complex consists of these foraging burrows, surrounding a smaller number of storage areas, nests, and latrine chambers. Most species breed only once or twice during the year, although some breed all year round. They generally have small litters of two to five young, perhaps because their environment is sufficiently safe that they do not need to rapidly replace their population as many other rodents do. However, some species have much larger litters, averaging twelve young in the naked mole rat, and sometimes much larger. Classification The Bathyergidae are monophyletic, with all taxa tracing back to a single common ancestor. Although there is some controversy, the closest living relatives of the blesmols appear to be other African hystricognaths in the families Thryonomyidae (cane rats) and Petromuridae (dassie rats). Together these three living families along with their fossil relatives represent the infraorder Phiomorpha. At present 21 species of blesmols from 5 genera are accepted, but this number is likely to increase. Like other fossorial rodents such as pocket gophers, tuco-tucos, and blind mole rats, blesmols appear to speciate rapidly. They become geographically isolated easily, leading to various chromosomal forms and genetically distinct races. Some studies have suggested that the genus Bathyergus represents the basal-most lineage; while many researchers had posited that the Naked mole-rat, Heterocephalus, held that position, more recent investigation has placed that genus in a separate family, Heterocephalidae. 
Family Bathyergidae
Subfamily Bathyerginae
Georychus - cape blesmol
 Georychus capensis - cape mole-rat
Cryptomys
 Cryptomys hottentotus - common mole-rat
  subspecies: C. h. natalensis - Natal mole-rat
  subspecies: C. h. nimrodi - Matabeleland mole-rat
  subspecies: C. h. pretoriae - highveld mole-rat
Fukomys
 Fukomys amatus - Zambian mole-rat
 Fukomys anselli - Ansell's mole-rat
 Fukomys bocagei - Bocage's mole-rat
 Fukomys damarensis - Damaraland mole-rat
 Fukomys darlingi - Mashona mole-rat
 Fukomys foxi - Nigerian mole-rat
 Fukomys ilariae - Somali striped mole-rat
 Fukomys kafuensis - Kafue mole-rat
 Fukomys mechowii - Mechow's mole-rat
 Fukomys micklemi - Kataba mole-rat
 Fukomys ochraceocinereus - Ochre mole-rat
 Fukomys whytei - Malawian mole-rat
  subspecies: F. w. occlusus
 Fukomys zechi - Ghana mole-rat
Heliophobius - Silvery mole-rat
 Heliophobius argenteocinereus - Silvery mole-rat
Bathyergus - Dune blesmols
 Bathyergus janetta - Namaqua dune mole-rat
 Bathyergus suillus - Cape dune mole-rat
Biology and health sciences
Rodents
Animals
https://en.wikipedia.org/wiki/Ursid%20hybrid
Ursid hybrid
An ursid hybrid is an animal with parents from two different species or subspecies of the bear family (Ursidae). Species and subspecies of bear known to have produced offspring with another bear species or subspecies include American black bears, grizzly bears, and polar bears, all of which are members of the genus Ursus. Bears not included in Ursus, such as the giant panda, are expected to be unable to produce hybrids with other bears. The giant panda bear belongs to the genus Ailuropoda. A recent study found genetic evidence of multiple instances and species combinations where genetic material has passed the species boundary in bears (a process called introgression by geneticists). Specifically, species with evidence of past intermingling were (1) brown bear and American black bear, (2) brown bear and polar bear, (3) American black bear and Asian black bear. Overall, this study shows that evolution in the bear family (Ursidae) has not been strictly bifurcating, but instead showed complex evolutionary relationships. All the Ursinae species (i.e., all bears except the giant panda and the spectacled bear) appear able to crossbreed. Brown × black bear hybrids In 1859, a black bear and a European brown bear were bred together in the London Zoological Gardens, but the three cubs did not reach maturity. In The Variation of Animals and Plants Under Domestication Darwin noted: In the nine-year Report it is stated that the bears had been seen in the Zoological Gardens to couple freely, but previously to 1848 most had rarely conceived. In the Reports published since this date three species have produced young (hybrids in one case), ... A bear shot in autumn 1986 in Alaska was thought by some to be a grizzly × black bear hybrid, due to its unusually large size and its proportionately larger braincase and skull. DNA testing was unable to determine whether it was a large American black bear or a grizzly bear. 
Intercontinental brown bear hybrids Although Eurasian brown bears and North American brown bears are isolated, they are listed as a single species, so technically mating between the two sub-species is not hybridization, even though it cannot possibly occur in the wild. However, cross-breeding between the European brown bear and the North American grizzly bear has occurred in Cologne, Germany. Brown × polar bear hybrids Since 1874, at Halle, a series of successful matings of polar and brown bears was made. Some of the hybrid offspring were exhibited by the London Zoological Society. The Halle hybrid bears proved to be fertile, both with one of the parent species and with one another. Polar × brown bear hybrids are white at birth but later turn blue-brown or yellow-white. An adult polar × brown bear hybrid bred in the 19th century is now displayed at the Rothschild Zoological Museum, Tring, England. Crandall reported the first polar × brown bear crosses as occurring at a small zoo in Stuttgart, Germany in 1876 rather than Halle in 1874. A female European brown bear mated with a male polar bear, resulting in twin cubs in 1876. Three further births were recorded. The young were fertile among themselves and when mated back to European brown bears and to polar bears. DNA studies indicate that the ABC Islands bears have mixed brown and polar bear ancestry. Kodiak × polar bear hybrids "Kodiak" or "Kodiak brown" is a term now applied to brown bears found in coastal regions of North America. In the far north, these bears feed on salmon and often attain especially large size. "Alaskan brown" is sometimes used for Alaskan bears, but the main distinction is how far the bear is found from the coast. "Grizzly bear" is the term used for the brown bear of the North American interior. In 1936, a male polar bear accidentally got into an enclosure with a female Kodiak (Alaskan brown) bear at the U.S. National Zoo, resulting in three hybrid offspring. 
The hybrid offspring were fertile and able to breed successfully with each other, indicating that the two species of bear are closely related. The Kodiak is also considered by many to be a variant or subspecies of the basic Arctic brown bear. In 1943, Clara Helgason described a bear shot by hunters during her childhood. This was a large, off-white bear with hair all over his paws. The presence of hair on the bottom of the feet suggests it was not an unusually colored Kodiak brown bear, but a natural hybrid with a polar bear. In a 1970 National Geographic article Elizabeth C. Reed mentions being foster mother to 4 hybrid bear cubs from the National Zoological Park in Washington, where her husband was director. Grizzly × polar bear hybrids The grizzly bear is now regarded by most taxonomists as a variety of brown bear, Ursus arctos horribilis. Clinton Hart Merriam, taxonomist of grizzly bears, described an animal killed in 1864 at Rendezvous Lake, Barren Grounds, Canada as "buffy whitish" with a golden brown muzzle. This is considered to be a natural hybrid between a grizzly bear and polar bear. On 16 April 2006, a polar bear of unusual appearance was shot by a sports hunter on Banks Island in the Northwest Territories. DNA testing released 11 May 2006, proved the kill was a grizzly×polar bear hybrid. This is thought to be the first recorded case of interbreeding in the wild. The bear was proven to have a polar mother and a grizzly father. The DNA testing also spared the hunter the C$1000 fine for killing a grizzly bear, as well as the risk of being imprisoned for up to a year. The hunter had bought a license to hunt polar bears; he did not have a license to hunt grizzly at that time. The animal had dark rings around its eyes, similar to a panda's, but not as wide. It also had remarkably long claws, a slight hump on its back, brown spots in its white coat, and a slightly indented face — the nasal "stop" between the eyes which polar bears lack. 
The guide leading the hunt, Roger Kuptana of Sachs Harbour in the Northwest Territories, was the first to note the oddities. Several names were suggested for this specimen. The Idaho hunter who killed it, Jim Martell, suggested "polargrizz". The biologists of the Canadian Wildlife Service suggested "grolar" or "pizzly", as well as "nanulak", an elision of the Inuit nanuk (polar bear) and aklak (grizzly or brown bear). Both "grolar" and "pizzly" were used by the Canadian Broadcasting Corporation in widely distributed stories. Presently, though the mating seasons overlap, the polar bears' season begins slightly earlier than the grizzly bears'. A blog columnist for the Seattle Post-Intelligencer suggested that more hybrids may be seen as global warming progresses and alters normal mating periods. The Canadian Wildlife Service noted that grizzly-polar hybrids born of zoo matings have proven fertile. Grizzly bears have been sighted in what is usually polar bear territory in the Western Arctic near the Beaufort Sea, Banks Island, Victoria Island, and Melville Island. A "light chocolate colored" bear, possibly a hybrid, is reported to have been seen with polar bears near Kugluktuk in western Nunavut. Asian black bear known or suspected hybrids In 1975, within Venezuela's "Las Delicias" Zoo, a female Asian black bear shared its enclosure with a spectacled bear, and produced several hybrid descendants. In 2005, a possible Asiatic black bear × sun bear hybrid cub was captured in the Mekong River watershed of eastern Cambodia. The bear's mane was relatively slight, forming a crest on each side of the neck, as is typical in sun bears and some black bears. The appearance of its face was intermediate between that of a sun bear and a black bear, though its ears and large stout canines closely resembled those of the sun bear. Overall, the hybrid resembled an Asiatic black bear with an unusually glossy fur and an unusual head. 
In 2010, a male hybrid between an Asiatic black bear and a brown bear, named Emma, was rescued from a bile farm and taken into the Animals Asia Foundation's China Moon Bear Rescue. Sloth bear hybrids Hybrids between the sloth bear (Melursus ursinus) and the Asiatic black bear (Ursus thibetanus, or Selenarctos thibetanus) are known. Hybrids have also been produced between the sloth bear (Melursus ursinus) and the Malayan sun bear (Helarctos malayanus) at Tama Zoo in Tokyo.
Biology and health sciences
Hybrids
Animals
2682758
https://en.wikipedia.org/wiki/Hylonomus
Hylonomus
Hylonomus (; hylo- "forest" + nomos "dweller") is an extinct genus of reptile that lived during the Bashkirian stage of the Late Carboniferous. It is the earliest known crown group amniote and the oldest known unquestionable reptile, with the only known species being Hylonomus lyelli. Despite being amongst the oldest known reptiles, it is not the most primitive member of the group, being a eureptile more derived than either parareptiles or captorhinids. Discovery and naming Hylonomus lyelli was first described by John William Dawson in 1860. The species is named after Dawson's teacher, the geologist Sir Charles Lyell. While it has traditionally been included in the group Protorothyrididae, it has since been recovered outside this group. Formerly assigned species Dawson also assigned two other species, H. aciedentatus and H. wymani, to the genus when he described H. lyelli in 1860, and later described two more species, H. multidens and H. latidens, in 1882. In 1966, Robert L. Carroll suggested that H. latidens is synonymous with the type species H. lyelli and that H. multidens belongs to a different genus of 'microsaur', which he named Novascoticus. Both H. aciedentatus (also known as Smilerpeton aciedentatum) and H. wymani (RM 3061-9) were later reclassified as specimens of Dendrerpeton acadianum. Description Hylonomus was about 20 cm long (including the tail) and probably would have looked rather similar to modern lizards. It had small, sharp teeth and likely ate small invertebrates such as millipedes or early insects. Specimens of Hylonomus indicate that the body was covered with horny scales.
They are also described as having slender and lightweight leg and arm bones, long and slim hands and feet, a narrow and tongue-shaped part in the roof of the mouth, a deep groove on a certain bone in the skull, a bumpy structure on the back bones, changes in the height of certain back bone parts, a hole in a specific place on the skull, arm and leg bones that are the same length, a short fourth toe bone compared to the shin bone, a short fifth toe bone compared to the fourth toe bone, long neck bones, and a well-developed opening below the eye. Fossils of the basal pelycosaur Archaeothyris and the basal diapsid Petrolacosaurus are also found in the same region of Nova Scotia, although from a higher stratum, dated approximately 6 million years later. Fossilized footprints found in New Brunswick have been attributed to Hylonomus, at an estimated age of 315 million years. Paleoecology Fossils of Hylonomus have been found in the remains of fossilized club moss stumps in the Joggins Formation, Joggins, Nova Scotia, Canada. It is supposed that, after harsh weather, the club mosses would crash down, with the stumps eventually rotting and hollowing out. Small animals such as Hylonomus, seeking shelter, would enter and become trapped, starving to death. An alternative hypothesis is that the animals made their nests in the hollow tree stumps. In popular culture Hylonomus lyelli was named the Provincial Fossil of Nova Scotia in 2002.
Biology and health sciences
Other prehistoric reptiles
Animals
2683080
https://en.wikipedia.org/wiki/Halobacterium
Halobacterium
Halobacterium (common abbreviation Hbt.) is a genus in the family Halobacteriaceae. The genus Halobacterium ("salt" or "ocean bacterium") consists of several species of Archaea with an aerobic metabolism which requires an environment with a high concentration of salt; many of their proteins will not function in low-salt environments. They grow on amino acids under aerobic conditions. Their cell walls are also quite different from those of bacteria, as ordinary lipoprotein membranes fail in high salt concentrations. In shape, they may be either rods or cocci, and in color, either red or purple. They reproduce using binary fission (by constriction), and are motile. Halobacterium grows best in a 42 °C environment. The genome of an unspecified Halobacterium species, sequenced by Shiladitya DasSarma, comprises 2,571,010 bp (base pairs) of DNA compiled into three circular replicons: one large chromosome with 2,014,239 bp, and two smaller ones with 191,346 and 365,425 bp. This species, called Halobacterium sp. NRC-1, has been extensively used for postgenomic analysis. Halobacterium species can be found in the Great Salt Lake, the Dead Sea, Lake Magadi, and other waters with high salt concentrations. Purple Halobacterium species owe their color to bacteriorhodopsin, a light-sensitive protein which provides chemical energy for the cell by using sunlight to pump protons out of the cell. The resulting proton gradient across the cell membrane is used to drive the synthesis of the energy carrier ATP: when the protons flow back in, they power ATP synthesis (this proton flow can be emulated with a decrease in pH outside the cell, causing an inward flow of H+ ions). The bacteriorhodopsin protein is chemically very similar to the light-detecting pigment rhodopsin, found in the vertebrate retina.
Species of Halobacterium Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). Unassigned species: "H. yunchengense" Cui et al. 2024 Synonyms Halobacterium cutirubrum > Halobacterium salinarum Halobacterium denitrificans > Haloferax denitrificans Halobacterium distributum > Halorubrum distributum Halobacterium halobium > Halobacterium salinarum Halobacterium lacusprofundi > Halorubrum lacusprofundi Halobacterium mediterranei > Haloferax mediterranei Halobacterium pharaonis > Natronomonas pharaonis Halobacterium piscisalsi > Halobacterium salinarum Halobacterium saccharovorum > Halorubrum saccharovorum Halobacterium sodomense > Halorubrum sodomense Halobacterium trapanicum > Halorubrum trapanicum Halobacterium vallismortis > Haloarcula vallismortis Halobacterium volcanii > Haloferax volcanii Genome structure The Halobacterium NRC-1 genome is 2,571,010 bp compiled into three circular replicons. More specifically, it is divided into one large chromosome with 2,014,239 bp and two small replicons, pNRC100 (191,346 bp) and pNRC200 (365,425 bp). While much smaller than the large chromosome, the two plasmids account for most of the 91 insertion sequences and include genes for a DNA polymerase, seven transcription factors, potassium and phosphate uptake, and cell division. The genome has a high G+C content: 67.9% on the large chromosome, and 57.9% and 59.2% on the two plasmids. It also contains 91 insertion sequence elements constituting 12 families: 29 on pNRC100, 40 on pNRC200, and 22 on the large chromosome. This helps explain the genetic plasticity that has been observed in Halobacterium. Among the archaea, halobacteria are regarded as among the most active in lateral gene transfer (including transfer between domains), and as proof that such transfer does take place.
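The replicon sizes quoted above can be cross-checked with a few lines of Python. The figures come from the text; the pairing of the two plasmid G+C values with pNRC100 and pNRC200 is an assumption (the text does not say which plasmid has which value), so the weighted mean is only a sketch:

```python
# Check that the three replicon sizes quoted above sum to the
# reported total genome size for Halobacterium sp. NRC-1.
replicons = {
    "chromosome": 2_014_239,  # bp
    "pNRC100": 191_346,       # bp
    "pNRC200": 365_425,       # bp
}

total = sum(replicons.values())
print(total)  # 2571010, matching the reported genome size

# Genome-wide G+C content, weighting each replicon's quoted figure
# by its length (plasmid pairing assumed, as noted above).
gc_percent = {"chromosome": 67.9, "pNRC100": 57.9, "pNRC200": 59.2}
avg_gc = sum(gc_percent[r] * bp for r, bp in replicons.items()) / total
print(round(avg_gc, 1))  # ≈ 65.9% genome-wide
```

Because the chromosome dominates the genome, the weighted mean sits close to its 67.9% figure regardless of which plasmid carries which value.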
Cell structure and metabolism Halobacterium species are rod-shaped and enveloped by a single lipid bilayer membrane surrounded by an S-layer made from the cell-surface glycoprotein. They grow on amino acids in aerobic conditions. Although Halobacterium NRC-1 contains genes for glucose degradation, as well as genes for enzymes of a fatty acid oxidation pathway, it does not seem able to use these as energy sources. Though the cytoplasm retains an osmotic equilibrium with the hypersaline environment, the cell maintains a high potassium concentration using many active transporters. Many Halobacterium species possess proteinaceous organelles called gas vesicles. Ecology Halobacteria can be found in highly saline lakes such as the Great Salt Lake, the Dead Sea, and Lake Magadi. Halobacterium can be identified in bodies of water by the light-detecting pigment bacteriorhodopsin, which not only provides the archaeon with chemical energy, but adds to its reddish hue as well. An optimal temperature for growth has been observed at 37 °C. Halobacterium may be a candidate for a life form present on Mars. One of the problems associated with the survival on Mars is the destructive ultraviolet light. These microorganisms develop a thin crust of salt that can moderate some of the ultraviolet light. Sodium chloride is the most common salt and chloride salts are opaque to short-wave ultraviolet. Their photosynthetic pigment, bacteriorhodopsin, is actually opaque to the longer-wavelength ultraviolet (its red color). In addition, Halobacterium makes pigments called bacterioruberins that are thought to protect cells from damage by ultraviolet light. The obstacle they need to overcome is being able to grow at a low temperature during a presumably short time when a pool of water could be liquid. Applications Food Industry There is potential for Halobacterium species to be used in the food industry. 
One example is beta-carotene, a pigment in halophilic bacteria that contributes to their red coloration and is used in the food industry as a natural food dye. Halophiles also produce degradative enzymes such as lipases, amylases, proteases, and xylanases that are used in various food processing methods. Notable applications of these enzymes include enhancing the fermentation of salty foods, improving dough quality in bread baking, and contributing to the production of coffee. Bioremediation Many species of halophilic bacteria produce exopolysaccharides (EPS), which are used industrially as bioremediation agents. Biosurfactants are also released by many halophilic bacteria, and these amphiphilic compounds have been used for soil remediation. Many halophiles are highly tolerant of heavy metals, making them potentially useful in the bioremediation of xenobiotic compounds and heavy metals released into the environment from mining and metal plating. Halophiles contribute to the bioremediation of contaminants by converting xenobiotics into less toxic compounds. Some Halobacterium species have been shown to be effective in the bioremediation of pollutants, including aliphatic hydrocarbons such as those found in crude oil, and aromatic hydrocarbons such as 4-hydroxybenzoic acid, a contaminant in some high-salinity industrial runoffs. Pharmaceuticals Some strains of Halobacterium, including Halobacterium salinarum, are being explored for medical applications of their radiation-resistance mechanisms. Bacterioruberin is a carotenoid pigment found in Halobacterium which decreases the bacterium's sensitivity to γ-radiation and UV radiation. Knockout studies have shown that the absence of bacterioruberin increases the sensitivity of the bacterium to oxidative DNA-damaging agents.
Hydrogen peroxide, for example, reacts with bacterioruberin, which prevents the production of reactive oxygen species and thus protects the bacterium by reducing the oxidative capacity of the DNA-damaging agent. H. salinarum also exhibits high intracellular concentrations of potassium chloride, which has also been shown to confer radiation resistance. Halobacterium species are also being explored for the pharmaceutical applications of bioactive compounds they produce, including anticancer agents, antimicrobial biosurfactants, and antimicrobial metabolites. Significance and applications Halobacteria are halophilic microorganisms that are currently being studied for their uses in scientific research and biotechnology. For instance, genomic sequencing of the Halobacterium species NRC-1 revealed its use of eukaryotic-like RNA polymerase II and translational machinery related to those of Escherichia coli and other Gram-negative bacteria. In addition, it possesses genes for DNA replication, repair, and recombination that are similar to those present in bacteriophages, yeasts, and bacteria. The ability of this Halobacterium species to be easily cultured and genetically modified allows it to be used as a model organism in biological studies. Furthermore, Halobacterium NRC-1 has also been employed as a potential vector for delivering vaccines. In particular, it produces gas vesicles that can be genetically engineered to display specific epitopes. Additionally, the gas vesicles can function as natural adjuvants, helping to evoke stronger immune responses. Because of the requirement of halobacteria for a high-salt environment, the preparation of these gas vesicles is inexpensive and efficient, needing only tap water for their isolation. Halobacteria also contain a protein called bacteriorhodopsin, a light-driven proton pump found in the cell membrane.
Although most proteins in halophiles need high salt concentrations for proper structure and functioning, this protein has shown potential for biotechnological use because of its stability even outside these extreme environments. Bacteriorhodopsin isolated from Halobacterium salinarum has been especially studied for its applications in electronics and optics; in particular, it has been used in holographic storage, optical switching, motion detection, and nanotechnology. Although numerous uses of this protein have been proposed, no large-scale commercial applications have yet been established. Recombination and mating UV irradiation of Halobacterium sp. strain NRC-1 induces several gene products employed in homologous recombination. For instance, a homolog of the rad51/recA gene, which plays a key role in recombination, is induced 7-fold by UV. Homologous recombination may rescue stalled replication forks, and/or facilitate recombinational repair of DNA damage. In its natural habitat, homologous recombination is likely induced by the UV irradiation in sunlight. Halobacterium volcanii has a distinctive mating system in which cytoplasmic bridges between cells appear to be used for transfer of DNA from one cell to another. In wild populations of Halorubrum, genetic exchange and recombination occur frequently. This exchange may be a primitive form of sexual interaction, similar to the better-studied process of bacterial transformation, which also transfers DNA between cells, leading to homologous recombinational repair of DNA damage.
Biology and health sciences
Archaea
Plants
2683414
https://en.wikipedia.org/wiki/Perseus%20Cluster
Perseus Cluster
The Perseus cluster (Abell 426) is a cluster of galaxies in the constellation Perseus. It has a recession speed of 5,366 km/s and a diameter of 863. It is one of the most massive objects in the known universe, containing thousands of galaxies immersed in a vast cloud of multimillion-degree gas. X-radiation from the cluster The Perseus galaxy cluster is the brightest cluster in the sky when observed in the X-ray band. The cluster contains the radio source 3C 84, which is currently blowing bubbles of relativistic plasma into the core of the cluster. These are seen as holes in an X-ray image of the cluster, as they push away the X-ray emitting gas. They are known as radio bubbles because they appear as emitters of radio waves due to the relativistic particles in the bubble. The galaxy NGC 1275 is located at the centre of the cluster, where the X-ray emission is brightest. The first detection of X-ray emission from the Perseus cluster (astronomical designation Per XR-1) occurred during an Aerobee rocket flight on March 1, 1970. The X-ray source may be associated with NGC 1275 (Per A, 3C 84), and was reported in 1971. If the source is NGC 1275, then Lx is about 4 × 10^45 erg/s. More detailed observations from Uhuru confirmed the earlier detection and its source within the Perseus cluster. Perseus galaxy cluster's Cosmic music note In 2003, a team of astronomers led by Andrew Fabian at Cambridge University discovered one of the deepest notes ever detected, after 53 hours of Chandra observations. No human will actually hear the note, because the period between its oscillations is 9.6 million years, which corresponds to a note 57 octaves below the keys in the middle of a piano. The sound waves appear to be generated by the inflation of bubbles of relativistic plasma by the central active galactic nucleus in NGC 1275.
The bubbles are visible as ripples in the X-ray band since the X-ray brightness of the intracluster medium that fills the cluster is strongly dependent on the density of the plasma. In May 2022, NASA reported the sonification (converting astronomical data associated with pressure waves into sound) of the black hole at the center of the Perseus galaxy cluster. A similar case also happens in the nearby Virgo Cluster, generated by an even larger supermassive black hole in the galaxy Messier 87, also detected by Chandra. Like the former, no human will hear the note. The tone is variable, and even lower than those generated by NGC 1275, from 56 octaves below middle C on minor eruptions, to as low as 59 octaves below middle C on major eruptions. Image gallery
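The octave figures quoted above can be roughly cross-checked by converting the oscillation period into a frequency and expressing it in octaves below middle C. A minimal sketch, where the year length and the middle-C pitch (~261.63 Hz) are standard values, not from the article:

```python
import math

# Period of the Perseus "note" from the article: 9.6 million years.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
period_s = 9.6e6 * SECONDS_PER_YEAR
freq_hz = 1.0 / period_s

# Octaves below middle C: each octave halves the frequency,
# so the count is log2(f_middle_C / f_note).
octaves_below_middle_c = math.log2(261.63 / freq_hz)
print(round(octaves_below_middle_c, 1))  # ≈ 56, close to the quoted 57 octaves
```

The small discrepancy against the quoted 57 octaves reflects rounding in the period and the exact pitch (B♭) the observers assigned to the note.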
Physical sciences
Notable galaxy clusters
Astronomy
25432202
https://en.wikipedia.org/wiki/Butane
Butane
Butane () is an alkane with the formula C4H10. Butane exists as two isomers: n-butane, with the connectivity CH3CH2CH2CH3, and iso-butane, with the formula (CH3)3CH. Both isomers are highly flammable, colorless, easily liquefied gases that quickly vaporize at room temperature and pressure. Butanes are trace components of natural gas (NG). The other hydrocarbons in NG include propane, ethane, and especially methane, which are more abundant. Liquefied petroleum gas is a mixture of propane and some butanes. The name butane comes from the root but- (from butyric acid, named after the Greek word for butter) and the suffix -ane (for organic compounds). History The first synthesis of butane was accidentally achieved by the British chemist Edward Frankland in 1849 from ethyl iodide and zinc, but he did not realize that the ethyl radical had dimerized, and he misidentified the substance. Butane was discovered in crude petroleum in 1864 by Edmund Ronalds, who was the first to describe its properties and who named it "hydride of butyl", based on the naming of the then-known butyric acid, which had been named and described by the French chemist Michel Eugène Chevreul 40 years earlier. Other names arose in the 1860s: "butyl hydride", "hydride of tetryl" and "tetryl hydride", "diethyl" or "ethyl ethylide", and others. August Wilhelm von Hofmann, in his 1866 systematic nomenclature, proposed the name "quartane", and the modern name was introduced to English from German around 1874. Butane did not have much practical use until the 1910s, when W. Snelling identified butane and propane as components in gasoline. He found that if they were cooled, they could be stored in a volume-reduced liquefied state in pressurized containers. In 1911, Snelling's liquefied petroleum gas was publicly available, and his process for producing the mixture was patented in 1913.
Butane is one of the most produced industrial chemicals in the 21st century, with around 80-90 billion lbs (40 million US tons, 36 million metric tons) produced by the United States every year. Density The density of butane is highly dependent on temperature and pressure in the reservoir. For example, the density of liquid butane is 571.8±1 kg/m3 (for pressures up to 2 MPa and temperature 27±0.2 °C), while the density of liquid butane is 625.5±0.7 kg/m3 (for pressures up to 2 MPa and temperature −13±0.2 °C). Isomers Rotation about the central C−C bond produces two different conformations (trans and gauche) for n-butane. Reactions When oxygen is plentiful, butane undergoes complete combustion to form carbon dioxide and water vapor; when oxygen is limited, due to incomplete combustion, carbon (soot) or carbon monoxide may be formed instead of carbon dioxide. Butane is denser than air. When there is sufficient oxygen: 2 C4H10 + 13 O2 → 8 CO2 + 10 H2O When oxygen is limited: 2 C4H10 + 9 O2 → 8 CO + 10 H2O By weight, butane contains about or by liquid volume . The maximum adiabatic flame temperature of butane with air is . n-Butane is the feedstock for DuPont's catalytic process for the preparation of maleic anhydride: 2 CH3CH2CH2CH3 + 7 O2 → 2 C2H2(CO)2O + 8 H2O n-Butane, like all hydrocarbons, undergoes free radical chlorination providing both 1-chloro- and 2-chlorobutanes, as well as more highly chlorinated derivatives. The relative rates of the chlorinations are partially explained by the differing bond dissociation energies: 425 and 411 kJ/mol for the two types of C-H bonds. Uses Normal butane can be used for gasoline blending, as a fuel gas, fragrance extraction solvent, either alone or in a mixture with propane, and as a feedstock for the manufacture of ethylene and butadiene, a key ingredient of synthetic rubber. Isobutane is primarily used by refineries to enhance (increase) the octane number of motor gasoline. 
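The two combustion equations above can be verified mechanically by counting atoms on each side. A minimal sketch, with each species written as an element-count dictionary:

```python
from collections import Counter

def atoms(side):
    """Total atom counts for a reaction side given as (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in side:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

butane = {"C": 4, "H": 10}
o2, co2, co, h2o = {"O": 2}, {"C": 1, "O": 2}, {"C": 1, "O": 1}, {"H": 2, "O": 1}

# Complete combustion: 2 C4H10 + 13 O2 -> 8 CO2 + 10 H2O
assert atoms([(2, butane), (13, o2)]) == atoms([(8, co2), (10, h2o)])

# Incomplete combustion: 2 C4H10 + 9 O2 -> 8 CO + 10 H2O
assert atoms([(2, butane), (9, o2)]) == atoms([(8, co), (10, h2o)])
print("both equations balance")
```

Both assertions pass: each side of the complete-combustion equation carries 8 C, 20 H, and 26 O, and each side of the incomplete-combustion equation carries 8 C, 20 H, and 18 O.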
For gasoline blending, n-butane is the main component used to manipulate the Reid vapor pressure (RVP). Since winter fuels require much higher vapor pressure for engines to start, refineries raise the RVP by blending more butane into the fuel. n-Butane has a relatively high research octane number (RON) and motor octane number (MON), which are 93 and 92, respectively. When blended with propane and other hydrocarbons, the mixture may be referred to commercially as liquefied petroleum gas (LPG). It is used as a petrol component, as a feedstock for the production of base petrochemicals in steam cracking, as fuel for cigarette lighters, and as a propellant in aerosol sprays such as deodorants. Pure forms of butane, especially isobutane, are used as refrigerants and have largely replaced the ozone-layer-depleting halomethanes in refrigerators, freezers, and air conditioning systems. The operating pressure for butane is lower than operating pressures for halomethanes such as Freon-12 (R-12). Hence, R-12 systems, such as those in automotive air conditioning systems, when converted to pure butane, will function poorly. Instead, a mixture of isobutane and propane is used to give cooling system performance comparable to R-12. Butane is also used as lighter fuel for common lighters or butane torches, and is sold bottled as a fuel for cooking, barbecues and camping stoves. In the 20th century, the Braun company of Germany made a cordless hair-styling device that used butane as its heat source to produce steam. As fuel, butane is often mixed with small amounts of mercaptans to give the unburned gas an offensive smell easily detected by the human nose. In this way, butane leaks can easily be identified. While hydrogen sulfide and mercaptans are toxic, they are present in levels so low that suffocation and the fire hazard posed by the butane become concerns long before toxicity does. Most commercially available butane also contains some contaminant oil, which can be removed by filtration.
If not removed, it leaves a deposit at the point of ignition and may eventually block the uniform flow of gas. The butane used as a solvent for fragrance extraction does not contain these contaminants. Butane gas can cause gas explosions in poorly ventilated areas if leaks go unnoticed and are ignited by a spark or flame. Purified butane is used as a solvent in the industrial extraction of cannabis oils. Health effects Inhalation of butane can cause euphoria, drowsiness, unconsciousness, asphyxia, cardiac arrhythmia, fluctuations in blood pressure and temporary memory loss, when abused directly from a highly pressurized container, and can result in death from asphyxiation and ventricular fibrillation. Butane enters the blood supply and, within seconds, leads to intoxication. Butane is the most commonly abused volatile substance in the UK, and was the cause of 52% of solvent-related deaths in 2000. By spraying butane directly into the throat, the jet of fluid can cool rapidly to by expansion, causing prolonged laryngospasm. "Sudden sniffer's death" syndrome, first described by Bass in 1970, is the most common single cause of solvent-related deaths, resulting in 55% of known fatal cases.
Physical sciences
Hydrocarbons
null
3640328
https://en.wikipedia.org/wiki/Ceratophryidae
Ceratophryidae
The Ceratophryidae, also known as common horned frogs, are a family of frogs found in South America. It is a relatively small family with three extant genera and 12 species. Despite the common name, not all species in the family have the horn-like projections at the eyes. They have a relatively large head with a big mouth, and they are ambush predators able to consume large prey, including lizards, other frogs, and small mammals. They inhabit arid areas and are seasonal breeders, depositing many small eggs in aquatic habitats. Tadpoles are free-living and carnivorous (Ceratophrys and Lepidobatrachus) or grazers (Chacophrys). Some species (especially from the genera Ceratophrys and Lepidobatrachus) are popular in herpetoculture. The oldest fossils of the family are known from the Miocene epoch. The fossil giant frog Beelzebufo from the Late Cretaceous of Madagascar was formerly considered to belong to this family but is now excluded, though it is possibly closely related, alongside Baurubatrachus from the Late Cretaceous of Brazil. Wawelia from the Miocene of Argentina is no longer considered closely related. Taxonomy Placement of this clade has varied considerably over time, having long been treated as a subfamily within the Leptodactylidae. It was later raised to family level, either broadly defined, including the Telmatobiidae and Batrachylidae (as subfamilies Telmatobiinae and Batrachylinae, respectively), or, as is now commonly accepted, as a smaller family with three genera. Genera The extant genera are: Ceratophrys Wied-Neuwied, 1824 (8 species) Chacophrys Reig & Limeses, 1963 (monotypic: Chacophrys pierottii (Vellard, 1948)) Lepidobatrachus Budgett, 1899 (3 species, 1 fossil species) In addition, a number of fossil taxa have been considered to be closely related, including: †Beelzebufo Evans, Jones, & Krause, 2008 (monotypic: Beelzebufo ampinga (Evans, Jones & Krause, 2008)) †Baurubatrachus Báez and Perí, 1990 (2 species)
Biology and health sciences
Frogs and toads
Animals
3642692
https://en.wikipedia.org/wiki/Ulmus%20pumila
Ulmus pumila
Ulmus pumila, the Siberian elm, is a tree native to Asia. It is also known as the Asiatic elm and dwarf elm, but sometimes miscalled the 'Chinese elm' (Ulmus parvifolia). U. pumila has been widely cultivated throughout Asia, North America, Argentina, and southern Europe, becoming naturalized in many places, notably across much of the United States. Description The Siberian elm is usually a small to medium-sized, often bushy, deciduous tree growing to tall, the diameter at breast height to . The bark is dark gray, irregularly longitudinally fissured. The branchlets are yellowish gray, glabrous or pubescent, unwinged and without a corky layer, with scattered lenticels. The winter buds are dark brown to red-brown, globose to ovoid. The petiole is , pubescent, the leaf blade elliptic-ovate to elliptic-lanceolate, , the colour changing from dark green to yellow in autumn. The perfect, apetalous wind-pollinated flowers bloom for one week in early spring, before the leaves emerge, in tight fascicles (bundles) on the previous year's branchlets. Flowers emerging in early February are often damaged by frost (causing the species to be dropped from the Dutch elm breeding programme). Each flower is about across and has a green calyx with 4–5 lobes, 4–8 stamens with brownish-red anthers, and a green pistil with a two-lobed style. Unlike most elms, the Siberian elm is able to self-pollinate successfully. The wind-dispersed samarae are whitish tan, orbicular to rarely broadly obovate or elliptical, , glabrous except for pubescence on stigmatic surface; the stalk , the perianth persistent. The seed is at centre of the samara or occasionally slightly toward apex but not reaching the apical notch. Flowering and fruiting occur March to May. Ploidy: 2n = 28. The tree also suckers readily from its roots. The tree is short-lived in temperate climates, rarely reaching more than 60 years of age, but in its native environment may live to between 100 and 150 years.
A giant specimen, southeast of Khanbogt in the south Gobi, with a girth of in 2009, may exceed 250 years (based on average annual ring widths of other U. pumila in the area). Taxonomy The species was described by Peter Simon Pallas in the 18th century from specimens from Transbaikal. Two varieties were traditionally recognized: var. pumila and var. arborea, the latter now treated as a cultivar, U. pumila 'Pinnato-ramosa'. Distribution and habitat The tree is native to Central Asia, eastern Siberia, the Russian Far East, Mongolia, Tibet, northern China, India (northern Kashmir) and Korea. It is the last tree species encountered in the semi-desert regions of Central Asia. Ecology Pests and diseases The tree has considerable variability in resistance to Dutch elm disease; for example, trees from north-western and north-eastern China exhibit significantly higher tolerance than those from central and southern China. Moreover, it is highly susceptible to damage from many insects and parasites, including the elm leaf beetle Xanthogaleruca luteola, the Asian 'zigzag' sawfly Aproceros leucopoda, Elm Yellows, powdery mildew, cankers, aphids, leaf spot and, in the Netherlands, coral spot fungus Nectria cinnabarina. U. pumila is the most resistant of all the elms to verticillium wilt. Invasiveness and spontaneous hybridization In North America, Ulmus pumila has become an invasive species in much of the region from central Mexico northward across the eastern and central United States to Ontario, Canada. It also hybridizes in the wild with the native U. rubra (slippery elm) in the central United States, prompting conservation concerns for the latter species. In South America, the tree has spread across much of the Argentine pampas. In Europe it has spread widely in Spain, and hybridizes extensively there with the native field elm (U. minor), contributing to conservation concerns for the latter species. Research is ongoing into the extent of hybridisation with U. minor in Italy. 
In the Netherlands, 1700 Siberian elms planted in error for field elm in 2016 in the Zalkerbos near Kampen, Overijssel, were grubbed up because of invasiveness concerns and replaced in 2023 with native species. Ulmus pumila is often found in abundance along railroads and in abandoned lots and on disturbed ground. The gravel along railroad beds provides ideal conditions for its growth: well-drained, nutrient poor soil, and high light conditions; these beds provide corridors which facilitate its spread. It is found as high as 8000 feet in the Sandia Mountains in New Mexico and is invading coniferous forest there. New Mexico may be a center of genetic diversity in North America. Owing to its high sunlight requirements, it seldom invades mature forests, and is primarily a problem in cities and open areas, as well as along transportation corridors. The species is now listed in Japan as an alien species recognized as established in Japan or found in the Japanese wild. Cultivation U. pumila was introduced into Spain as an ornamental, probably during the reign of Philip II (1556–98), and from the 1930s into Italy. In these countries it has naturally hybridized with the field elm (U. minor). In Italy it was widely used in viniculture, notably in the Po valley, to support the grape vines until the 1950s, when the demands of mechanization made it unsuitable. Three specimens were supplied by the Späth nursery of Berlin to the Royal Botanic Garden Edinburgh (RBGE) in 1902 as U. pumila, in addition to specimens of the narrow-leaved U. pumila cultivar 'Pinnato-ramosa'. One was planted in RBGE; the two not planted in the Garden may survive in Edinburgh, as it was the practice of the Garden to distribute trees about the city. Kew Gardens obtained specimens of U. pumila from the Arnold Arboretum in 1908 and, as U. pekinensis, via the Veitch Nurseries in 1910 from William Purdom in northern China. 
A specimen obtained from Späth and planted in 1914 stood in the Ryston Hall arboretum, Norfolk, in the early 20th century. The tree was propagated and marketed by the Hillier & Sons nursery, Winchester, Hampshire, from 1962 to 1977, during which time over 500 were sold. More recently, the popularity of U. pumila in Great Britain has been almost exclusively as a bonsai subject, and mature trees are largely restricted to arboreta. In the UK the TROBI Champions grow at Thorp Perrow Arboretum, Yorkshire, × in 2004, and at St Ann's Well Gardens, Hove, Sussex, × in 2009. U. pumila is said to have been introduced to the US in 1905 by Prof. John George Jack, and later by Frank Nicholas Meyer, though 'Siberian elm' appears in some 19th-century US nursery catalogues. The tree was cultivated at the United States Department of Agriculture (USDA) Experimental Station at Mandan, North Dakota, where it flourished. It was consequently selected by the USDA for planting in shelter belts across the prairies in the aftermath of the Dust Bowl disasters, where its rapid growth and tolerance for drought and cold initially made it a great success. However, the species later proved susceptible to numerous maladies. Attempts to find a more suitable cultivar were initiated in 1997 by the Plant Materials Center of the USDA, which established experimental plantations at Akron, Colorado, and Sidney, Nebraska. The study, no. 201041K, will conclude in 2020. The US National Champion, measuring high in 2011, grows in Berrien County, Michigan. The seeds lose their viability rapidly after maturity unless sown under suitable germination conditions or dried and stored at low temperatures. The species has a high sunlight requirement and is not shade-tolerant; with adequate light it exhibits rapid growth. The tree is also fairly intolerant of wet ground conditions, growing better on well-drained soils.
While it is very resistant to drought and severe cold, and able to grow on poor soils, its short period of dormancy, flowering early in spring followed by continuous growth until the first frosts of autumn, renders it vulnerable to frost damage. As an ornamental U. pumila is a very poor tree, tending to be short-lived, with brittle wood and poor crown shape, but it has nevertheless enjoyed some popularity owing to its rapid growth and provision of shade. The Siberian elm has been described as "one of the world's worst... ornamental trees that does not deserve to be planted anywhere". Yet in the US during the 1950s, the tree was also widely promoted as a fast-growing hedging substitute for privet, and as a consequence is now commonly found in nearly all states. Cultivars Because some clones are highly resistant to Dutch elm disease, over a dozen selections have been made to produce hardy ornamental cultivars, although several may no longer be in cultivation: A variegated weeping elm, with cream, dark green and light green variegation, is cultivated in China as Ulmus pumila 'Variegata'. Some authorities consider the cultivar 'Berardii' a form of U. pumila. Nottingham elm, considered an Ulmus × hollandica by Richens, was marketed from the 19th century as 'Siberian elm'. Hybrid cultivars Androssowii, U. × arbuscula, Fuente Umbria, Karagatch, Toledo The species has been widely hybridized in the United States and Italy to create robust trees of more native appearance with high levels of resistance to Dutch elm disease: Arno, Cathedral, Coolshade, Fiorente, Homestead, Lincoln, Morton Plainsman = , Morton Stalwart = , New Horizon, Plinio, Rebona, Regal, Recerta, Rosehill, San Zanobi, Urban, Willis, Dutch clone '260' (not released to commerce). Other hybrid cultivars involving crossings with U.
pumila: Den Haag, Sapporo Autumn Gold Uses The unripe seeds have long been eaten by the peoples of Manchuria, and during the Great Chinese Famine they also became one of the most important foodstuffs in the Harbin region. The leaves were also gathered, to the detriment of the trees, prompting a prohibition order by the authorities, which was largely ignored. The leaves eaten raw are not very palatable, but stewed and prepared with kaoliang or foxtail millet they make a better-tasting and more filling meal. Ulmus pumila in literature and travel writing The "dwarf-" or "shrub-elms" of the North Caucasus, along with other local flora, appear in the opening description of Tolstoy's story 'The Raid' (1853). Nicholas Roerich describes a specimen discovered on his travels through Mongolia: We are in the deserts of Mongolia. It was hot and dusty yesterday. From faraway thunder was approaching. Some of our friends became tired from climbing the stony holy hills of Shiret Obo. While already returning to the camp, we noticed in the distance a huge elm tree – 'karagatch' – lonely, towering amidst the surrounding endless desert. The size of the tree, its somewhat familiar outlines attracted us into its shadow. Botanical considerations led us to believe that in the wide shade of the giant there might be some interesting herbs. Soon, all the co-workers gathered around the two mighty stems of the karagatch. The deep, deep shadow of the tree covered about 50 feet across. The powerful tree-stems were covered with fantastic burr growths. In the rich foliage, birds were singing and the beautiful branches were stretched out in all directions, as if wishing to give shelter to all pilgrims. Accessions North America Arnold Arboretum, US. Acc. nos. 17923, 638-79, 673-87. Denver Botanic Gardens, US. Acc. no. 900534. Dominion Arboretum, Ottawa, Ontario, Canada. No acc. details available. Holden Arboretum, US. Acc. nos. 99-868, 72-218. Longwood Gardens, US. Acc. no. 1962-0512.
Morton Arboretum, US. Acc. nos. 542-49, 325-70, 53-74, 172-U. UBC Botanical Garden and Centre for Plant Research, Canada. Acc. no. 027560-0284-1989. Europe Arboretum of Warsaw University of Life Sciences, Warsaw, Poland. 2 trees, no accession details available. Brighton & Hove City Council, UK. NCCPG Elm Collection. Dubrava Arboretum, Lithuania. No details available. Grange Farm Arboretum, Lincolnshire, UK. Acc. no. 521. Hergest Croft Gardens, Herefordshire, UK. One tree, no accession details available. Hortus Botanicus Nationalis, Salaspils, Latvia. Acc. nos. 18162,3,4. Royal Botanic Gardens, Wakehurst Place, UK. Acc. no. 2000-4449. Sir Harold Hillier Gardens. Acc. no. 2016.0386, grown from seed of tree in Utah, US. Tallinn Botanic Garden, Estonia. No accession details available. Thorp Perrow Arboretum, Yorkshire, UK. British Champion tree, 19 m high, 70 cm d.b.h. in 2004. Westonbirt Arboretum, Tetbury, Glos., UK. Two trees planted 1981, no acc. details. Wijdemeren City Council, Netherlands. Elm Arboretum. U. pumila 'Puszta' planted Smeerdijkgaarde, Kortenhoef 2013; Dammerweg, Nederhorst den Berg 2015. 5 'Aurescens' planted 2015 Overmeerseweg, 'Pinnato-ramosa' planted 2015 Dammerweg, 'Mierenbos' and 'Poort Bulten' planted Brilhoek and cemetery Hornhof, Nederhorst den Berg in 2019 Australasia Alma Park, St Kilda, Victoria, Australia. One specimen, listed on the National Trust of Victoria's Significant Tree Register. Eastwoodhill Arboretum, Gisborne, New Zealand. 2 trees, details not known. Africa Arboretum of Haramaya University, Haramaya, Ethiopia Nurseries Europe Van Den Berk (UK) Ltd., London, UK
Biology and health sciences
Rosales
Plants
https://en.wikipedia.org/wiki/Cleaner%20shrimp
Cleaner shrimp
Cleaner shrimp is a common name for a number of swimming decapod crustaceans that clean other organisms of parasites. Most are found in the families Hippolytidae (including the Pacific cleaner shrimp, Lysmata amboinensis) and Palaemonidae (including the spotted Periclimenes magnificus), though the families Alpheidae, Pandalidae, and Stenopodidae (including the banded coral shrimp, Stenopus hispidus) each contain at least one species of cleaner shrimp. The term "cleaner shrimp" is sometimes used more specifically for the family Hippolytidae and the genus Lysmata. Cleaner shrimp are so called because they exhibit a cleaning symbiosis with client fish where the shrimp clean parasites from the fish. The fish benefit by having parasites removed from them, and the shrimp gain the nutritional value of the parasites. The shrimp also eat the mucus and parasites around the wounds of injured fish, which reduces infections and helps healing. The action of cleansing further aids the health of client fish by reducing their stress levels. In many coral reefs, cleaner shrimp congregate at cleaning stations. In this behaviour cleaner shrimps are similar to cleaner fish, and sometimes may join with cleaner wrasse and other cleaner fish attending to client fish. Shrimp of the genus Urocaridella are often cryptic or live in caves on the reef and are not associated commensally with other animals. These shrimp assemble around cleaning stations where up to 25 shrimp live in proximity. When a potential client fish swims close to a station with shrimp present, several shrimp perform a "rocking dance," a side-to-side movement. Frequency of rocking increases with hunger. This increase in frequency suggests competition between hungry and sated shrimp. To avoid competition with other cleaners during the day, the shrimp Urocaridella antonbruunii was observed cleaning a sleeping fish at night. 
Cleaner shrimps are often included in saltwater aquaria partly due to their cleansing function and partly due to their brightly colored appearance.
Biology and health sciences
Shrimps and prawns
Animals
https://en.wikipedia.org/wiki/Micropropagation
Micropropagation
Micropropagation or tissue culture is the practice of rapidly multiplying plant stock material to produce many progeny plants, using modern plant tissue culture methods. Micropropagation is used to multiply a wide variety of plants, such as those that have been genetically modified or bred through conventional plant breeding methods. It is also used to provide a sufficient number of plantlets for planting from seedless plants, plants that do not respond well to vegetative reproduction or where micropropagation is the cheaper means of propagating (e.g. orchids). Cornell University botanist Frederick Campion Steward discovered and pioneered micropropagation and plant tissue culture in the late 1950s and early 1960s. Steps In short, micropropagation can be divided into four stages: Selection of mother plant Multiplication Rooting and acclimatizing Transfer of new plants to soil Selection of mother plant Micropropagation begins with the selection of plant material to be propagated. The plant tissues are removed from an intact plant under sterile conditions. Clean stock materials that are free of viruses and fungi are important in the production of the healthiest plants. Once the plant material is chosen for culture, the collection of explant(s) begins and is dependent on the type of tissue to be used, including stem tips, anthers, petals, pollen and other plant tissues. The explant material is then surface sterilized, usually in multiple courses of bleach and alcohol washes, and finally rinsed in sterilized water. This small portion of plant tissue, sometimes only a single cell, is placed on a growth medium, typically containing macro- and micronutrients, water, sucrose as an energy source and one or more plant growth regulators (plant hormones). Usually, the medium is thickened with a gelling agent, such as agar, to create a gel which supports the explant during growth.
Some plants are easily grown on simple media, but others require more complicated media for successful growth; the plant tissue grows and differentiates into new tissues depending on the medium. For example, media containing cytokinin are used to create branched shoots from plant buds. Multiplication Multiplication is the taking of tissue samples produced during the first stage and increasing their number. Following the successful introduction and growth of plant tissue, the establishment stage is followed by multiplication. Through repeated cycles of this process, a single explant sample may be increased from one to hundreds or thousands of plants. Depending on the type of tissue grown, multiplication can involve different methods and media. If the plant material grown is callus tissue, it can be placed in a blender and cut into smaller pieces and recultured on the same type of culture medium to grow more callus tissue. If the tissue is grown as small plants called plantlets, hormones are often added that cause the plantlets to produce many small offshoots. After the formation of multiple shoots, these shoots are transferred to rooting medium with a high auxin/cytokinin ratio. After the development of roots, plantlets can be used for hardening. Pretransplant This stage involves treating the plantlets/shoots produced to encourage root growth and "hardening." It is performed in vitro, or in a sterile "test tube" environment. "Hardening" refers to the preparation of the plants for a natural growth environment. Until this stage, the plantlets have been grown in "ideal" conditions, designed to encourage rapid growth. Due to the controlled nature of their maturation, the plantlets often do not have fully functional dermal coverings. This causes them to be highly susceptible to disease and inefficient in their use of water and energy.
In vitro conditions are high in humidity, and plants grown under these conditions often do not form a working cuticle and stomata that keep the plant from drying out. When taken out of culture, the plantlets need time to adjust to more natural environmental conditions. Hardening typically involves slowly weaning the plantlets from a high-humidity, low light, warm environment to what would be considered a normal growth environment for the species in question. Transfer from culture In the final stage of plant micropropagation, the plantlets are removed from the plant media and transferred to soil or (more commonly) potting compost for continued growth by conventional methods. This stage is often combined with the "pretransplant" stage. Methods There are many methods of plant micropropagation. Meristem culture In meristem culture, the meristem and a few subtending leaf primordia are placed into a suitable growing medium, where they are induced to form new meristems. These meristems are then divided and further grown and multiplied. To produce plantlets the meristems are taken from their proliferation medium and placed on a regeneration medium. When an elongated rooted plantlet is produced after some weeks, it can be transferred to the soil. A disease-free plant can be produced by this method. Experimental results also suggest that this technique can be successfully utilized for rapid multiplication of various plant species, e.g. coconut, strawberry and sugarcane. Callus culture A callus is a mass of undifferentiated parenchymatous cells. When a living plant tissue is placed in an artificial growing medium with other conditions favorable, callus is formed. The growth of callus varies with the levels of auxin and cytokinin and can be manipulated by the exogenous supply of these growth regulators in the culture medium. The callus growth and its organogenesis or embryogenesis can be divided into three stages.
Stage I: Rapid production of callus after placing the explants in culture medium. Stage II: The callus is transferred to another medium containing growth regulators for the induction of adventitious organs. Stage III: The new plantlet is then exposed gradually to the environmental conditions. Embryo culture In embryo culture, the embryo is excised and placed into a culture medium with proper nutrients under aseptic conditions. Once it has grown quickly and optimally into a plantlet, it is transferred to soil. It is particularly important for the production of interspecific and intergeneric hybrids and to overcome seed dormancy. Protoplast culture In protoplast culture, the plant cell can be isolated with the help of wall-degrading enzymes and grown in a suitable culture medium under controlled conditions for regeneration of plantlets. Under suitable conditions the protoplast develops a cell wall followed by an increase in cell division and differentiation and grows into a new plant. The protoplast is first cultured in liquid medium at 25 to 28 °C with a light intensity of 100 to 500 lux or in the dark, and after undergoing substantial cell division, it is transferred onto a solid medium congenial to morphogenesis. Many horticultural crops respond well to protoplast culture. Advantages Micropropagation has a number of advantages over traditional plant propagation techniques: The main advantage of micropropagation is the production of many plants that are clones of each other. Micropropagation can be used to produce disease-free plants. It can have an extraordinarily high fecundity rate, producing thousands of propagules while conventional techniques might only produce a fraction of this number. It is the only viable method of regenerating genetically modified cells or cells after protoplast fusion.
It is useful in multiplying plants which produce seeds in uneconomical amounts, or when plants are sterile and do not produce viable seeds or when seed cannot be stored (see recalcitrant seeds). Micropropagation often produces more robust plants, leading to accelerated growth compared to similar plants produced by conventional methods, such as seeds or cuttings. Some plants with very small seeds, including most orchids, are most reliably grown from seed in sterile culture. A greater number of plants can be produced per square meter and the propagules can be stored longer and in a smaller area. Disadvantages Micropropagation is not always the perfect means of multiplying plants. Conditions that limit its use include: Labour may make up 50–69% of operating costs. All plants produced via micropropagation are genetically identical clones, leading to a lack of overall disease resilience, as all progeny plants may be vulnerable to the same infections. An infected plant sample can produce infected progeny. This is uncommon as the stock plants are carefully screened and vetted to prevent culturing plants infected with virus or fungus. Not all plants can be successfully tissue cultured, often because the proper medium for growth is not known or the plants produce secondary metabolic chemicals that stunt or kill the explant. Sometimes plants or cultivars do not come true to type after being tissue cultured. This is often dependent on the type of explant material utilized during the initiation phase or the result of the age of the cell or propagule line. Some plants are very difficult to disinfect of fungal organisms. The major limitation in the use of micropropagation for many plants is the cost of production; for many plants the use of seeds, which are normally disease-free and produced in good numbers, readily produces plants (see orthodox seed) at a lower cost.
For this reason, many plant breeders do not utilize micropropagation because the cost is prohibitive. Other breeders use it to produce stock plants that are then used for seed multiplication. Mechanisation of the process could reduce labour costs, but has proven difficult to achieve, despite active attempts to develop technological solutions. Applications Micropropagation facilitates the growth, storage, and maintenance of a large number of plants in small spaces, which makes it a cost-effective process. Micropropagation is used for germplasm storage and the protection of endangered species. Micropropagation is widely used in ornamental plants to efficiently produce large quantities of uniform, disease-free specimens, significantly enhancing commercial horticulture operations. Among the species broadly propagated in vitro, one can mention chrysanthemum, damask rose, Saintpaulia ionantha, Zamioculcas zamiifolia and bleeding heart. Micropropagation can also be used with fruit trees, e.g. Pyrus communis. In order to reduce expenditures, natural plant extracts can be used to substitute traditional plant growth regulators.
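The multiplication stage described earlier compounds geometrically: each subculture cycle multiplies the shoot count, which is how a single explant can yield hundreds or thousands of plants. A minimal sketch of that arithmetic, using a hypothetical figure of five usable shoots per explant per cycle (real rates vary widely by species, medium and hormone balance, and the sketch ignores losses to contamination or failed subcultures):

```python
def plants_after_cycles(initial_explants: int, shoots_per_explant: int, cycles: int) -> int:
    """Plantlet count after repeated subculture cycles, assuming every
    shoot survives and is recultured (an idealisation)."""
    return initial_explants * shoots_per_explant ** cycles

# Hypothetical rate: 5 usable shoots per explant per cycle.
for cycles in range(1, 6):
    print(cycles, plants_after_cycles(1, 5, cycles))
```

Under these assumed figures a single explant passes the hundreds after four cycles and the thousands after five, which is consistent with the "hundreds and thousands" scale described in the multiplication section.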
Technology
Biotechnology
https://en.wikipedia.org/wiki/Extremely%20Large%20Telescope
Extremely Large Telescope
The Extremely Large Telescope (ELT) is an astronomical observatory under construction. When completed, it will be the world's largest optical and near-infrared telescope. Part of the European Southern Observatory (ESO) agency, it is located on top of Cerro Armazones in the Atacama Desert of northern Chile. The design consists of a reflecting telescope with a segmented primary mirror and a diameter secondary mirror. The telescope is equipped with adaptive optics, six laser guide star units, and various large-scale scientific instruments. The observatory's design will gather 100 million times more light than the human eye, and about 10 times more than the largest optical telescopes in existence as of 2023, with the ability to correct for atmospheric distortion. It has around 250 times the light-gathering area of the Hubble Space Telescope and, according to the ELT's specifications, will provide images 16 times sharper than those from Hubble. The project was originally called the European Extremely Large Telescope (E-ELT), but the name was shortened in 2017. The ELT is intended to advance astrophysical knowledge by enabling detailed studies of planets around other stars, the first galaxies in the Universe, supermassive black holes and the nature of the Universe's dark sector, and by detecting water and organic molecules in protoplanetary disks around other stars. As planned in 2011, the facility was expected to take 11 years to construct, from 2014 to 2025. On 11 June 2012, the ESO Council approved the ELT programme's plans to begin civil works at the telescope site, with the construction of the telescope itself pending final agreement with governments of some member states. Construction work on the ELT site started in June 2014. By December 2014, ESO had secured over 90% of the total funding and authorized construction of the telescope to start, estimated to cost around one billion euros for the first construction phase.
The first stone of the telescope was ceremonially laid on 26 May 2017, initiating the construction of the dome's main structure and telescope. The telescope passed the halfway point in its development and construction in July 2023, with the expected completion and first light set for 2028. History On 26 April 2010, the European Southern Observatory (ESO) Council selected Cerro Armazones, Chile, as the baseline site for the planned ELT. Other sites that were under discussion included Cerro Macon, Salta, in Argentina; Roque de los Muchachos Observatory, on the Canary Islands; and sites in North Africa, Morocco, and Antarctica. Early designs included a segmented primary mirror with a diameter of and an area of about , with a secondary mirror with a diameter of . However, in 2011 a proposal was put forward to reduce overall size by 13% to 978 m², with a diameter primary mirror and a diameter secondary mirror. This reduced projected costs from 1.275 billion to 1.055 billion euros and should allow the telescope to be finished sooner. The smaller secondary is a particularly important change: it falls within the capabilities of multiple manufacturers, and the lighter mirror unit avoids the need for high-strength materials in the secondary mirror support spider. ESO's Director General commented in a 2011 press release that "With the new E-ELT design we can still satisfy the bold science goals and also ensure that the construction can be completed in only 10–11 years." The ESO Council endorsed the revised baseline design in June 2011 and expected a construction proposal for approval in December 2011. Funding was subsequently included in the 2012 budget for initial work to begin in early 2012. The project received preliminary approval in June 2012. ESO approved the start of construction in December 2014, with over 90% funding of the nominal budget secured. The design phase of the 5-mirror anastigmat was fully funded within the ESO budget.
With the 2011 changes in the baseline design (such as a reduction in the size of the primary mirror from 42 m to 39.3 m), in 2017 the construction cost was estimated to be €1.15 billion (including first generation instruments). In 2014, the start of operations was planned for 2024. Actual construction officially began in early 2017, and a technical first light is planned for 2028. Planning ESO focused on the current design after a feasibility study concluded that the proposed Overwhelmingly Large Telescope would cost €1.5 billion (£1 billion) and be too complex. Both current fabrication technology and road transportation constraints limit single mirrors to being roughly per piece. The next-largest telescopes currently in use are the Keck Telescopes, the Gran Telescopio Canarias and the Southern African Large Telescope, which each use small hexagonal mirrors fitted together to make a composite mirror slightly over across. The ELT uses a similar design, as well as techniques to work around atmospheric distortion of incoming light, known as adaptive optics. A 40-metre-class mirror will allow the study of the atmospheres of extrasolar planets. The ELT is the highest priority in the European planning activities for research infrastructures, such as the Astronet Science Vision and Infrastructure Roadmap and the ESFRI Roadmap. The telescope underwent a Phase B study in 2014 that included "contracts with industry to design and manufacture prototypes of key elements like the primary mirror segments, the adaptive fourth mirror or the mechanical structure (...) [and] concept studies for eight instruments". Design The ELT will use a novel design with a total of five mirrors. The first three mirrors are curved (non-spherical) and form a three-mirror anastigmat design for excellent image quality over the 10-arcminute field of view (one-third of the width of the full Moon).
The fourth and fifth mirrors are (almost) flat, and respectively provide adaptive optics correction for atmospheric distortions (mirror 4) and tip-tilt correction for image stabilization (mirror 5). The fourth and fifth mirrors also send the light sideways to one of two Nasmyth focal stations at either side of the telescope structure, allowing multiple large instruments to be mounted simultaneously. ELT mirror and sensors contracts Primary mirror The primary mirror will be composed of 798 hexagonal segments, each approximately across and with a thickness of . Two segments will be re-coated and replaced each working day, to keep the mirror always clean and highly reflective. Edge sensors constantly measure the positions of the primary mirror segments relative to their immediate neighbours. 2394 position actuators (3 for each segment) use this information to adjust the system, keeping the overall surface shape unchanged against deformations caused by external factors such as wind, gravity, temperature changes and vibrations. In January 2017, ESO awarded the contract for the fabrication of the 4608 edge sensors to the FAMES consortium, which is composed of French company Fogale and German company Micro-Epsilon. These sensors can measure relative positions to an accuracy of a few nanometres, the most accurate ever used in a telescope. In May 2017, ESO awarded two additional contracts. One was awarded to the German company Schott AG who manufactures the blanks of the 798 segments, as well as a maintenance set of 133 additional segments. This maintenance set allows segments to be removed, replaced, and recoated on a rotating basis once the ELT is in operation. The mirror is being cast from the same low-expansion ceramic Zerodur as the existing Very Large Telescope mirrors in Chile. The other contract was awarded to the French company, Safran Reosc, a subsidiary of Safran Electronics & Defense. 
Safran Reosc receives the mirror blanks from Schott and polishes one mirror segment per day to meet the 7-year deadline. During this process, each segment is polished until it has no surface irregularity greater than 7.5 nm root mean square. Afterward, Safran Reosc mounts each segment and completes all optical testing before delivery. This is the second-largest contract for ELT construction and the third-largest contract ESO has ever signed. The segment support system units for the primary mirror were designed and are produced by CESA (Spain) and VDL (the Netherlands). The contracts signed with ESO also include the delivery of detailed and complete instructions and engineering drawings for their production. Additionally, they include the development of the procedures required to integrate the supports with the ELT glass segments; to handle and transport the segment assemblies; and to operate and maintain them. As of July 2023, over 70% of the mirror segment blanks and their supporting structures had been manufactured, and by early 2024 tens of segments had been polished. Secondary mirror Making the secondary mirror is a major challenge as it is highly convex and aspheric. It is also very large; at in diameter and weighing , it will be the largest secondary mirror ever employed on an optical telescope and the largest convex mirror ever produced. In January 2017, ESO awarded a contract for the mirror blank to Schott AG, who cast it later the same year from Zerodur. In May 2017, Schott AG was also awarded the much larger contract for the primary mirror segment blanks. Complex support cells are also necessary to ensure the flexible secondary and tertiary mirrors retain their correct shape and position; these support cells will be provided by SENER. Like the tertiary mirror, the secondary mirror will be mounted on 32 points, with 14 along its edges and 18 on the back.
The entire assembly will be mounted on a hexapod, allowing its position to be aligned every few minutes to sub-micrometer precision. Deformations on the secondary mirror have a much smaller effect on the final image compared to errors on the tertiary, quaternary, or quinary mirrors. The pre-formed glass-ceramic blank of the secondary mirror is being polished and tested by Safran Reosc. The mirror will be shaped and polished to a precision of 15 nanometres (15 millionths of a millimetre) over the optical surface. By early 2024 this mirror was reported to be close to final accuracy. Tertiary mirror The concave tertiary mirror, also cast from Zerodur, will be an unusual feature of the telescope. Most current large telescopes, including the VLT and the NASA/ESA Hubble Space Telescope, use two curved mirrors to form an image. In these cases, a small, flat tertiary mirror is sometimes introduced to divert the light to a convenient focus. However, in the ELT the tertiary mirror also has a curved surface, as the use of three mirrors delivers a better final image quality over a larger field of view than would be possible with a two-mirror design. Much like the secondary mirror (with which it shares many design characteristics), the tertiary mirror will be slightly deformable to regularly allow deviations to be corrected. Both mirrors will be mounted on 32 points, with 18 on their backside and 14 along their edges. As of July 2023, the tertiary mirror has been cast and is in polishing. Quaternary mirror The quaternary mirror is a flat, thick adaptive mirror. With up to 8,000 actuators, the surface can be readjusted one thousand times per second. The deformable mirror will be the largest adaptive mirror ever made, and consists of six component petals, control systems, and voice-coil actuators. The image distortion caused by the turbulence of the Earth's atmosphere can be corrected in real-time, as well as deformations caused by the wind upon the main telescope. 
The ELT's adaptive optics system will provide an improvement of about a factor of 500 in the resolution compared to the best seeing conditions achieved so far without adaptive optics. The AdOptica consortium, partnered with INAF (Istituto Nazionale di Astrofisica) as subcontractors, is responsible for the design and manufacture of the quaternary mirror. The six petals were cast by Schott in Germany and polished by Safran Reosc. As of July 2023, all six petals are completed and in the process of being integrated into their support structure. The six laser sources for the adaptive optics system, which will work hand-in-hand with the quaternary mirror, have also been completed and are in testing. Quinary mirror The quinary mirror is a tip-tilt mirror used to refine the image using adaptive optics. The mirror will include a fast tip-tilt system for image stabilization that will compensate for perturbations caused by wind, atmospheric turbulence, and the telescope itself before reaching the ELT instruments. As of early 2024 the six component petals had been fabricated and were being brazed into a single unit. ELT dome and structure Dome construction The ELT dome will have a height of nearly from the ground and a diameter of , making it the largest dome ever built for a telescope. The dome will have a total mass of around , and the telescope mounting and tube structure will have a total moving mass of around . For the observing slit, two main designs were under study: one with two sets of nested doors, and the current baseline design, i.e. a single pair of large sliding doors. This pair of doors has a total width of . ESO signed a contract for its construction, together with the main structure of the telescope, with the Italian ACe Consortium, consisting of Astaldi and Cimolai and the nominated subcontractor, Italy's EIE Group. The signature ceremony took place on 25 May 2016 at ESO's Headquarters in Garching bei München, Germany.
The dome will provide the telescope with needed protection in inclement weather and during the day. A number of concepts for the dome were evaluated. The baseline concept for the 40-metre-class ELT dome is a nearly hemispherical dome, rotating atop a concrete pier, with curved laterally-opening doors. This is a re-optimisation of the previous design, aimed at reducing costs, and it is being revalidated to be ready for construction. One year after signing the contract, and after the laying of the first stone ceremony in May 2017, the site was handed over to ACe, signifying the beginning of the construction of the dome's main structure. Astronomical performance In terms of astronomical performance the dome is required to be able to track about the 1-degree zenithal avoidance locus as well as preset to a new target within 5 minutes. This requires the dome to be able to accelerate and move at angular speeds of 2 degrees/s (the linear speed is approximately ). The dome is designed to allow the telescope complete freedom of movement, so that the telescope can position itself whether the dome is open or closed. It will also permit observations from the zenith down to 20 degrees above the horizon. Windscreen With such a large opening, the ELT dome requires a windscreen to protect the telescope's mirrors (apart from the secondary) from direct exposure to the wind. The baseline design of the windscreen minimises the volume required to house it. Two spherical blades, one on either side of the observing slit doors, slide in front of the telescope aperture to restrict the wind. Ventilation and air-conditioning The dome is designed to provide sufficient ventilation for the telescope not to be limited by dome seeing. For this the dome is also equipped with louvers, and the windscreen is designed to allow them to fulfill their function.
Computational fluid dynamic simulations and wind tunnel work are being carried out to study the airflow in and around the dome, as well as the effectiveness of the dome and windscreen in protecting the telescope. Besides water-tightness, air-tightness is also a requirement, as it is critical to minimising the air-conditioning load. The air-conditioning of the dome is necessary not only to thermally prepare the telescope for the forthcoming night but also to keep the telescope optics clean. The air-conditioning of the telescope during the day is critical, and the current specifications permit the dome to cool the telescope and internal volume by over 12 hours. Science goals The ELT will search for extrasolar planets—planets orbiting other stars. This will include not only the discovery of planets down to Earth-like masses through indirect measurements of the wobbling motion of stars perturbed by the planets that orbit them, but also the direct imaging of larger planets and possibly even the characterisation of their atmospheres. The telescope will attempt to image Earthlike exoplanets. Furthermore, the ELT's suite of instruments will allow astronomers to probe the earliest stages of the formation of planetary systems and to detect water and organic molecules in protoplanetary discs around stars in the making. Thus, the ELT will answer fundamental questions regarding planet formation and evolution. By probing the most distant objects the ELT will provide clues to understanding the formation of the first objects: primordial stars, primordial galaxies and black holes, and their relationships. Studies of extreme objects like black holes will benefit from the power of the ELT to gain more insight into time-dependent phenomena linked with the various processes at play around compact objects. The ELT is designed to make detailed studies of the first galaxies.
Observations of these early galaxies with the ELT will give clues that will help understand how these objects form and evolve. In addition, the ELT will be a unique tool for making an inventory of the changing content of the various elements in the Universe with time, and for understanding star formation history in galaxies. One of the goals of the ELT is the possibility of making a direct measurement of the acceleration of the Universe's expansion. Such a measurement would have a major impact on our understanding of the Universe. The ELT will also search for possible variations in the fundamental physical constants with time. An unambiguous detection of such variations would have far-reaching consequences for our comprehension of the general laws of physics. Instrumentation The telescope will have several science instruments and will be able to switch from one instrument to another within minutes. The telescope and dome will also be able to change position on the sky and start a new observation in a short time. Four of its instruments, the first generation, will be available at or shortly after first light, while two others will begin operations later. Further instruments can be installed over the course of its operation. The first generation includes four instruments: MICADO, HARMONI and METIS, along with the adaptive optics system MORFEO. HARMONI: The High Angular Resolution Monolithic Optical and Near-infrared Integral field spectrograph (HARMONI) will function as the telescope's workhorse instrument for spectroscopy. METIS: The Mid-infrared ELT Imager and Spectrograph (METIS) will be a mid-infrared imager and spectrograph. MICADO: The Multi-AO (adaptive optics) Imaging Camera for Deep Observations (MICADO) will be the first dedicated imaging camera for the ELT and will work with the Multiconjugate adaptive Optics Relay For ELT Observations (MORFEO, formerly MAORY). The second generation of instruments consists of MOSAIC and ANDES.
MOSAIC: A proposed multi-object spectrograph which will allow astronomers to trace the growth of galaxies and the distribution of matter from shortly after the Big Bang to the present day. ANDES (formerly HIRES): The ArmazoNes high Dispersion Echelle Spectrograph will be used to search for indications of life on Earth-like exoplanets, find the first-born stars of the universe, test for possible variations of the fundamental constants of physics, and measure the acceleration of the Universe's expansion. Comparison One of the largest optical telescopes operating today is the Gran Telescopio Canarias, with a aperture and a light-collecting area of . Other planned extremely large telescopes include the Giant Magellan Telescope with a mirror diameter of and area of , and the Thirty Meter Telescope with a diameter of and an area of . Both of these are also targeting the second half of the 2020s for completion. These two other telescopes roughly belong to the same next generation of optical ground-based telescopes. Each design is much larger than previous telescopes. The size of the ELT has been reduced from its original design. Even with that reduction, the ELT is significantly larger than both other planned extremely large telescopes. It has the aim of observing the universe in greater detail than the Hubble Space Telescope by taking images 15 times sharper, although it is designed to be complementary to space telescopes, which typically have very limited observing time available. The ELT's 4.2-metre secondary mirror is the same size as the primary mirror on the William Herschel Telescope, the second largest optical telescope in Europe. The ELT under ideal conditions has an angular resolution of 0.005 arcsecond, which corresponds to separating two light sources 1 AU apart from distance, or two light sources apart from roughly distance. At 0.03 arcseconds, the contrast is expected to be 10⁸, sufficient to search for exoplanets.
The unaided human eye has an angular resolution of 1 arcminute which corresponds to separating two light sources 30 cm apart from 1 km distance.
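The resolution figures quoted above follow from the Rayleigh diffraction criterion and the small-angle approximation; the short check below makes the arithmetic explicit. The 39 m aperture and 550 nm wavelength are assumed values for the "40-metre-class" telescope, not taken from the text:

```python
import math

ARCSEC = math.pi / (180 * 3600)   # radians per arcsecond

def rayleigh_limit(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution (Rayleigh criterion), in arcseconds."""
    return 1.22 * wavelength_m / aperture_m / ARCSEC

def separation(angle_arcsec, distance_m):
    """Linear separation resolved at a given distance (small-angle approximation)."""
    return angle_arcsec * ARCSEC * distance_m

# ~39 m aperture at visible wavelengths gives a limit of a few milliarcseconds.
print(round(rayleigh_limit(550e-9, 39.0), 4))

# Sanity check of the naked-eye claim: 1 arcminute at 1 km resolves ~0.3 m.
print(round(separation(60.0, 1_000), 2))
```

The same `separation` helper reproduces the article's style of comparison: pick an angular resolution and a distance, and out comes the smallest gap between two light sources that can be told apart.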
Bathypterois grallator
The tripod fish or tripod spiderfish, Bathypterois grallator, is a deep-sea benthic fish in the family Ipnopidae found at lower latitudes. It is now relatively well known from photographs and submersible observations, and seems to prefer to perch on the ooze using very elongated fin rays in the tail and two pelvic fins to stand, facing upstream with the pectoral fins turned forward so the outthrust projecting fin rays resemble multiple antennae, and are also used as tactile organs. B. grallator is hermaphroditic. At least 18 species are placed in the genus Bathypterois, several of which have similar appearance and behavior to B. grallator. B. grallator is the largest member of its genus, commonly exceeding a standard length of and reaching up to . Characteristics The tripodfish, sometimes referred to as the abyssal spiderfish, has long, bony rays that stick out below its tail fin and both pelvic fins. The fish's head-and-body is up to long, but its fins can be more than . Most of the time, the tripodfish stands on its three fins on the bottom of the ocean, hunting for food. Even though the fins are presumably quite stiff, researchers have been successful in surprising the fish into swimming, and then the fins seem flexible. Scientists have suggested that fluids are pumped into these fins when the fish is 'standing' to make them more rigid. Habitat Bathypterois grallator has been found relatively widely in the Atlantic, Pacific, and Indian oceans from 40°N to 40°S. It is a wide-ranging eurybathic fish found from deep. Food The tripodfish uses tactile and mechanosensory cues to identify food; it apparently does not have special visual adaptations to help it find food in the low-light environment. When the fish is perched with its long rays on the ocean floor, it can get food without even seeing it. The tripodfish's mouth ends up at just the right height to catch shrimp, tiny fish, and small crustaceans swimming by. 
They seem to prefer to perch on the mud using much elongated fin rays in their tails and two pelvic fins to stand, facing upstream into the current to ambush prey, with the pectoral fins turned forward so the outthrust projecting fins resemble multiple antennae. The fish senses objects in the water with its front fins. These fins act like hands. Once they feel prey and the fish realizes it is edible, the fins knock the food into the fish's mouth. The fish faces into the current, waiting for prey to drift by. Reproduction Each individual has male and female reproductive organs. If two tripodfish happen to meet, they mate. However, if a tripodfish does not find a partner, it makes both sperm and eggs to produce offspring by itself. Related and similar species At least 18 species are included in the genus Bathypterois. Similar species are often observed in the same areas. A 2001 report included observations of Bathypterois dubius as far as 50°N in the Bay of Biscay. A striking parallel exists between some icefishes and the tripodfishes. The stance of Chionodraco is an even more striking parallel. Both icefishes and the tripodfish use a similar strategy of sitting motionless above the substrate, with the attendant benefits that motionlessness brings to a nonvisual, particularly mechanosensory, function. The tripodfish is closely related to the spider fish Bathypterois longifilis, which is similar in appearance and habits but is smaller and has much shorter fin extensions. They are often found standing very close to each other. The family to which both fish belong, Ipnopidae, is called the family of tripod fishes or spiderfishes interchangeably.
S wave
In seismology and other areas involving elastic waves, S waves, secondary waves, or shear waves (sometimes called elastic S waves) are a type of elastic wave and are one of the two main types of elastic body waves, so named because they move through the body of an object, unlike surface waves. S waves are transverse waves, meaning that the direction of particle movement of an S wave is perpendicular to the direction of wave propagation, and the main restoring force comes from shear stress. Therefore, S waves cannot propagate in liquids with zero (or very low) viscosity; however, they may propagate in liquids with high viscosity. The name secondary wave comes from the fact that they are the second type of wave to be detected by an earthquake seismograph, after the compressional primary wave, or P wave, because S waves travel more slowly in solids. Unlike P waves, S waves cannot travel through the molten outer core of the Earth, and this causes a shadow zone for S waves opposite to their origin. They can still propagate through the solid inner core: when a P wave strikes the boundary of molten and solid cores at an oblique angle, S waves will form and propagate in the solid medium. When these S waves hit the boundary again at an oblique angle, they will in turn create P waves that propagate through the liquid medium. This property allows seismologists to determine some physical properties of the Earth's inner core. History In 1830, the mathematician Siméon Denis Poisson presented to the French Academy of Sciences an essay ("memoir") with a theory of the propagation of elastic waves in solids. In his memoir, he states that an earthquake would produce two different waves: one having a certain speed and the other having a speed . 
At a sufficient distance from the source, when they can be considered plane waves in the region of interest, the first kind consists of expansions and compressions in the direction perpendicular to the wavefront (that is, parallel to the wave's direction of motion), while the second consists of stretching motions occurring in directions parallel to the front (perpendicular to the direction of motion). Theory Isotropic medium For the purpose of this explanation, a solid medium is considered isotropic if its strain (deformation) in response to stress is the same in all directions. Let be the displacement vector of a particle of such a medium from its "resting" position due to elastic vibrations, understood to be a function of the rest position and time . The deformation of the medium at that point can be described by the strain tensor , the 3×3 matrix whose elements are where denotes partial derivative with respect to position coordinate . The strain tensor is related to the 3×3 stress tensor by the equation Here is the Kronecker delta (1 if , 0 otherwise) and and are the Lamé parameters ( being the material's shear modulus). It follows that From Newton's law of inertia, one also gets where is the density (mass per unit volume) of the medium at that point, and denotes partial derivative with respect to time. Combining the last two equations one gets the seismic wave equation in homogeneous media Using the nabla operator notation of vector calculus, , with some approximations, this equation can be written as Taking the curl of this equation and applying vector identities, one gets This formula is the wave equation applied to the vector quantity , which is the material's shear strain. Its solutions, the S waves, are linear combinations of sinusoidal plane waves of various wavelengths and directions of propagation, but all with the same speed .
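The equations elided from this passage are the standard relations of linear elasticity; the key steps can be reconstructed in their textbook form (a reconstruction following the surrounding prose, not the article's original typesetting):

```latex
% Strain tensor and isotropic stress-strain relation:
\varepsilon_{ij} = \tfrac{1}{2}\left(\partial_j u_i + \partial_i u_j\right),
\qquad
\tau_{ij} = \lambda\,\delta_{ij}\sum_k \varepsilon_{kk} + 2\mu\,\varepsilon_{ij}.

% Newton's second law per unit volume, combined with the relation above,
% gives the seismic wave equation in a homogeneous medium:
\rho\,\partial_t^2 \mathbf{u}
  = (\lambda + \mu)\,\nabla(\nabla\cdot\mathbf{u}) + \mu\,\nabla^2\mathbf{u}.

% Taking the curl eliminates the dilatational term, leaving a wave
% equation for the shear strain with the S-wave speed:
\partial_t^2 (\nabla\times\mathbf{u})
  = \frac{\mu}{\rho}\,\nabla^2(\nabla\times\mathbf{u}),
\qquad
\beta = \sqrt{\mu/\rho},

% while taking the divergence gives the P-wave equation with the faster speed
\alpha = \sqrt{(\lambda + 2\mu)/\rho}.
```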
Assuming that the medium of propagation is linear, elastic, isotropic, and homogeneous, this equation can be rewritten as where ω is the angular frequency and is the wavenumber. Thus, . Taking the divergence of the seismic wave equation in homogeneous media, instead of the curl, yields a wave equation describing propagation of the quantity , which is the material's compression strain. The solutions of this equation, the P waves, travel at the faster speed . The steady state SH waves are defined by the Helmholtz equation where is the wave number. S waves in viscoelastic materials As in an elastic medium, the speed of a shear wave in a viscoelastic material is described by a similar relationship ; however, here is a complex, frequency-dependent shear modulus and is the frequency-dependent phase velocity. One common approach to describing the shear modulus in viscoelastic materials is the Voigt model, which states: , where is the stiffness of the material and is the viscosity. S wave technology Magnetic resonance elastography Magnetic resonance elastography (MRE) is a method for studying the properties of biological materials in living organisms by propagating shear waves at desired frequencies throughout the desired organic tissue. This method uses a vibrator to send the shear waves into the tissue and magnetic resonance imaging to view the response in the tissue. The wave speed and wavelength are then measured to determine elastic properties such as the shear modulus. MRE has seen use in studies of a variety of human tissues including liver, brain, and bone tissues.
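A numerical sketch of the Voigt description above: the complex modulus G(ω) = μ + iωη makes the shear-wave phase velocity frequency-dependent (dispersive), unlike the purely elastic case. The material numbers below are illustrative soft-tissue-like assumptions, not values from the text:

```python
import numpy as np

def voigt_modulus(mu, eta, omega):
    """Complex shear modulus of a Voigt solid: G(omega) = mu + i*omega*eta."""
    return mu + 1j * omega * eta

def shear_speeds(mu, eta, omega, rho):
    """Return (elastic speed, viscoelastic phase speed) for comparison."""
    c_elastic = np.sqrt(mu / rho)                      # sqrt(mu/rho), the elastic case
    c_complex = np.sqrt(voigt_modulus(mu, eta, omega) / rho)
    # Phase velocity from the real part of the complex slowness 1/c:
    c_phase = 1.0 / np.real(1.0 / c_complex)
    return c_elastic, c_phase

# Illustrative numbers (assumptions): mu in Pa, eta in Pa*s, rho in kg/m^3.
mu, eta, rho = 3e3, 1.0, 1000.0
for f in (50.0, 200.0):                                 # typical MRE drive frequencies, Hz
    omega = 2 * np.pi * f
    c0, c = shear_speeds(mu, eta, omega, rho)
    print(f, round(c0, 2), round(c, 2))
```

Running this shows the phase speed rising with frequency while the elastic speed stays fixed, which is exactly the dispersion MRE exploits when fitting μ and η to measured wave speeds.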
Sports motorcycle
A sports motorcycle, sports bike, or sport bike is a motorcycle designed and optimized for speed, acceleration, braking, and cornering on asphalt concrete race tracks and roads. They are mainly designed for performance at the expense of comfort, fuel economy, safety, noise reduction and storage in comparison with other motorcycles. Sport bikes are typically equipped with fairings and a windscreen to deflect wind from the rider and improve aerodynamics. Soichiro Honda wrote in the owner's manual of the 1959 Honda CB92 Benly Super Sport that, "Primarily, essentials of the motorcycle consists in the speed and the thrill," while Cycle World's Kevin Cameron says that, "A sportbike is a motorcycle whose enjoyment consists mainly from its ability to perform on all types of paved highway – its cornering ability, its handling, its thrilling acceleration and braking power, even (dare I say it?) its speed." Motorcycles are versatile and may be put to many uses as the rider sees fit. In the past there were few if any specialized types of motorcycles, but the number of types and sub-types has proliferated, particularly in the period since the 1950s. The introduction of the Honda CB750 in 1969 marked a dramatic increase in the power and speed of practical and affordable sport bikes available to the general public. This was followed in the 1970s by improvements in suspension and braking commensurate with the power of the large inline fours that had begun to dominate the sport bike world. In the 1980s sport bikes again took a leap ahead, becoming almost indistinguishable from racing motorcycles. Since the 1990s sport bikes have become more diverse, adding new variations like the naked bike and streetfighter to the more familiar road racing style of sport bike. Design elements With the emphasis of a sport bike being on speed, acceleration, braking, and maneuverability, there are certain design elements that most motorcycles of this type will share.
Rider ergonomics favor function. This generally means higher foot pegs that move the legs closer to the body and more of a reach to a lower set of hand controls, such as clip-on handlebars, which positions the body and weight forward and over the tank. Sport bikes have comparatively high-performance engines resting inside a lightweight frame. High tech and expensive materials are often used on sport bikes to reduce weight. Braking systems combine higher performance brake pads and disc brakes with multi-piston calipers that clamp onto oversized vented rotors. Suspension systems are advanced in terms of adjustments and materials for increased stability and durability. Front and rear tires are larger and wider than tires found on other types of motorcycles to allow higher cornering speeds and greater lean angles. Fairings may or may not be used on a sport bike; when used, fairings are shaped to reduce aerodynamic drag as much as possible and provide wind protection for the rider. The combination of rider position, location of the engine and other heavy components, and the motorcycle's geometry helps maintain structural integrity and chassis rigidity, and determines how it will behave under acceleration, braking, and cornering. Correct front-to-rear weight distribution is of particular importance to the handling of sport bikes, and the changing position of the rider's body dynamically changes the handling of the motorcycle. Because of the complexity of modeling all the possible movements of different-sized riders, approaching perfect tuning of a motorcycle's weight distribution and suspension is often possible only by having a bike customized, or at least adjusted, to fit a specific rider.
Generally, road racing style sport bikes have shorter wheelbases than those intended for more comfortable touring, and the current trend in sport bike design is towards shorter wheelbases, giving quicker turning at the expense of a greater tendency for unintentional wheelies and stoppies under hard acceleration and braking, respectively. Some motorcycles have anti-wheelie systems, with various designs including computerized traction and suspension settings controls or mechanical suspension features, which are intended to reduce the lift and loss of traction of the front wheel under acceleration. Classes There is no universal authority defining the terminology of sport bikes or any other motorcycle classes. Legal definitions are limited by local jurisdiction, and race sanctioning bodies like the American Motorcyclist Association (AMA) and the Fédération Internationale de Motocyclisme (FIM) set rules that only apply to those who choose to participate in their competitions. Nonetheless, by present day standards in Europe, North America and the rest of the developed world, sport bikes are usually divided into three, four, or five rough categories, reflecting vaguely similar engine displacement, horsepower, price and intended use, with a good measure of subjective opinion and simplification. Marketing messages about a model from the manufacturer can diverge from the consensus of the motorcycling media and the public. Sometimes the classes used in motorcycle racing are approximated in production models, often but not always in connection with homologation. The sport bike classes in common usage are: Lightweight: also called entry level, small or beginner bikes. Some two strokes in this class have dramatically higher performance than the four strokes, being likened to miniature superbikes. Sport bikes with engine displacements of up to about are usually in this class. Middleweight: mid-sized, mid-level, or supersport. 
Some of the models in this range qualify for racing in the classes AMA Supersport Championship, British Supersport Championship and the Supersport World Championship, but many middleweights do not have a significant presence in racing. Displacements of are typical. Superbike: liter-class, literbike, or heavyweight i.e. or higher. As with supersport, many of the models in this class compete in superbike racing. Open class, hypersport or hyperbike, are terms sometimes used in lieu of superbike as a catch-all for everything larger than middleweight. Alternatively, these terms mark a class above the superbikes for the largest displacement sport bikes with the highest top speeds, with weights somewhat greater than the superbike class. Hyperbike was in use by 1979. The terms supersport and superbike are sometimes applied indiscriminately to all high-performance motorcycles. Categorization by engine displacement alone is a crude measure, particularly when comparing engines with different numbers of cylinders like inline or V fours with parallel and V twins, not to mention the greater power for a given displacement of two-stroke engines over four strokes. In the less developed world, smaller engine sizes are the norm, and relative terms like small, mid-sized and large displacement can have different meanings. For example, in India in 2002 there were about 37 million two-wheelers, but as of 2008, there were only about 3,000 motorcycles, or fewer than one in 12,000, of displacement or more. Similarly, the perception of relative sizes has shifted over time in developed countries, from smaller to larger displacements. When the original superbike, the Honda CB750, appeared in 1969, it was called a "big four," while today an inline four of would be classed in the middle range. 
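As the passage stresses, these classes are only rough displacement bands, and displacement alone is a crude measure. A toy classifier makes the simplification explicit; the cut-off values here are commonly cited figures chosen for illustration, since the text's own numbers did not survive extraction:

```python
def sport_bike_class(displacement_cc: int) -> str:
    """Rough sport bike class from engine displacement alone.

    The thresholds are illustrative assumptions (commonly cited figures),
    not definitions from the text, which notes that cylinder count,
    two-stroke vs four-stroke, and market all blur these bands.
    """
    if displacement_cc <= 500:
        return "lightweight"
    if displacement_cc <= 750:
        return "middleweight"
    return "superbike"

print(sport_bike_class(250), sport_bike_class(600), sport_bike_class(1000))
```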
Besides having product lines that span from entry-level through high-end sport bikes, many manufacturers add depth to that line by having pairs, or several pairs, of similar sport bikes aimed at riders of different levels. These are designed to appeal to riders seeking more or less extreme performance features. The more expensive model will be in the vein of a race replica, offering the latest technology updated with frequent design revisions, while the lower cost model typically relies on older technology, can have a more relaxed riding position, is generally more practical for non-road racing tasks such as urban commuting and carrying passengers or baggage, and offers lower fuel, insurance and maintenance costs. Examples of these paired models are Buell's Firebolt and Lightning, Ducati's 916/748 through 1198/848 paired series, Honda's CBR600RR and F4i middleweights and RC51 and CBR1000RR liter-class, several different concurrent models in Kawasaki's Ninja line, and Yamaha's R6 and 600R. Variations Sport touring motorcycles share many features of sport bikes, but they are generally considered a class all their own. These are mid- to large-sized motorcycles that offer more carrying capacity, more relaxed ergonomics, and more versatility than specialized sport bikes, while being lighter and more agile than touring motorcycles. Some sport bikes are marketed as race replicas, implying that the model sold to the public is identical to the one used in racing, or at least is closer to the racing version than non-replica models. The suffixes R or RR applied to model codes can be interpreted as standing for replica or race replica. The term race replica was used in the UK in the late 1970s, where 250 cc models customized with full bodykits, race-styled in factory team colors to match the machines of top-level sponsored riders of the time, were marketed towards 'learner' riders who had not yet passed the driving test that would allow them to progress to large-capacity machines.
In 1982 Yamaha described their 1983 RD350 YPVS, launched at the Cologne motorcycle show, as "the nearest thing to a road going racer ever produced". The term race replica was then also used to describe the period of sport bike production from Japan and Europe since the mid-1980s with integrated race-styled bodywork, representing an evolution from the superbike period that began in 1969. The sport bike, or race replica, era began with the 1983 Suzuki RG250 Gamma, the 1984 Honda VF750F and the 1985 Suzuki GSX-R750, all of which had full fairings. Sport bikes with small or no fairings have proliferated since the mid-1990s. These are called naked bikes or streetfighters, and they retain many of the performance features of other sport bikes, but besides abbreviated bodywork, they give the rider a more upright posture by using, for example, higher handlebars instead of clip-ons. The streetfighter name, associated with motorcycle stunt riding and perhaps hooliganism on public roads, can imply higher performance than the sometimes more tame naked bike, which in some cases is a synonym for a standard motorcycle. Others define naked bikes as equal in power and performance to sport bikes, merely absent the bodywork. The same period that saw the naked and streetfighter variants of the sport bike theme also had a resurgence of the versatile standard in response to demand for a return of the Universal Japanese Motorcycle. Supermoto-style street bikes, constructed with a completely different set of priorities than a road racing style sport bike, have also entered the mainstream, offering another option for riders seeking a spirited riding experience. The nickname muscle bike has been applied to sport bikes that give engine output a disproportionate priority over braking, handling or aerodynamics, harking back to the Japanese superbikes of the 1970s.
A similar sensibility drives the so-called power cruiser motorcycles, based on cruiser class machines but with horsepower numbers in league with superbikes.
Shuttle (weaving)
A shuttle is a tool designed to neatly and compactly store a holder that carries the thread of the weft yarn while weaving with a loom. Shuttles are thrown or passed back and forth through the shed, between the yarn threads of the warp in order to weave in the weft. The simplest shuttles, known as "stick shuttles", are made from a flat, narrow piece of wood with notches on the ends to hold the weft yarn. More complicated shuttles incorporate bobbins or pirns. In the United States, shuttles are often made of wood from the flowering dogwood, because it is hard, resists splintering, and can be polished to a very smooth finish. In the United Kingdom shuttles were usually made of boxwood, cornel, or persimmon. Gallery
Guano
Guano (Spanish from ) is the accumulated excrement of seabirds or bats. Guano is a highly effective fertilizer due to its high content of nitrogen, phosphate, and potassium, all key nutrients essential for plant growth. Guano was also, to a lesser extent, sought for the production of gunpowder and other explosive materials. The 19th-century seabird guano trade played a pivotal role in the development of modern input-intensive farming. The demand for guano spurred the human colonization of remote bird islands in many parts of the world. Unsustainable seabird guano mining processes can result in permanent habitat destruction and the loss of millions of seabirds. Bat guano is found in caves throughout the world. Many cave ecosystems are wholly dependent on bats to provide nutrients via their guano, which supports bacteria, fungi, invertebrates, and vertebrates. The loss of bats from a cave can result in the extinction of species that rely on their guano. Unsustainable harvesting of bat guano may cause bats to abandon their roost. Demand for guano rapidly declined after 1910 with the development of the Haber–Bosch process for extracting nitrogen from the atmosphere. Composition and properties Seabird guano Seabird guano is the fecal excrement from marine birds; it has an organic matter content greater than 40% and is a source of nitrogen (N) and available phosphate (P2O5). Unlike most mammals, birds do not excrete urea but uric acid, so the amount of nitrogen per volume is much higher than in other animal excrement. Seabird guano contains plant nutrients including nitrogen, phosphorus, calcium and potassium. Bat guano Bat guano is partially decomposed bat excrement and has an organic matter content greater than 40%; it is a source of nitrogen, and may contain up to 6% available phosphate (P2O5). The feces of insectivorous bats consist of fine particles of insect exoskeleton, which are largely composed of chitin.
Elements found in large concentrations include nitrogen, phosphorus, potassium and trace elements needed for plant growth. Bat guano is slightly alkaline with an average pH of 7.25. Chitin from insect exoskeletons is an essential compound needed by soil fungi to grow and expand. Chitin is a major component of fungal cell wall membranes. The growth of beneficial fungi adds to soil fertility. Bat guano composition varies between species with different diets. Insectivorous bats are the only species that congregate in large enough numbers to produce sufficient guano for sustainable harvesting. History of human use Bird guano Indigenous use The word "guano" originates from the Andean indigenous language Quechua, where it refers to any form of dung used as an agricultural fertilizer. Archaeological evidence suggests that Andean people collected seabird guano from small islands and points off the desert coast of Peru for use as a soil amendment for well over 1,500 years and perhaps as long as 5,000 years. Spanish colonial documents suggest that the rulers of the Inca Empire greatly valued guano, restricted access to it, and punished any disturbance of the birds with death. The guanay cormorant is historically the most abundant and important producer of guano. Other important guano-producing bird species off the coast of Peru are the Peruvian pelican and the Peruvian booby. Western discovery (1548–1800) The earliest European records noting the use of guano as fertilizer date back to 1548. Although the first shipments of guano reached Spain as early as 1700, it did not become a popular product in Europe until the 19th century. The Guano Age (1802–1884) In November 1802, Prussian geographer and explorer Alexander von Humboldt first encountered guano and began investigating its fertilizing properties at Callao in Peru, and his subsequent writings on this topic made the subject well known in Europe. 
Although Europeans knew of its fertilizing properties, guano was not widely used before this time. Cornish chemist Humphry Davy delivered a series of lectures which he compiled into an 1813 bestselling book about the role of nitrogenous manure as a fertilizer, Elements of Agricultural Chemistry. It highlighted the special efficacy of Peruvian guano, noting that it made the "sterile plains" of Peru fruitful. Though Europe had marine seabird colonies, and thus guano, it was of poorer quality because its potency was leached away by high levels of rainfall and humidity. Elements of Agricultural Chemistry was translated into German, Italian, and French; American historian Wyndham D. Miles said that it was likely "the most popular book ever written on the subject, outselling the works of Dundonald, Chaptal, Liebig..." He also said that "No other work on agricultural chemistry was read by as many English-speaking farmers." The arrival of commercial whaling on the Pacific coast of South America contributed to the scaling up of its guano industry. Whaling vessels carried consumer goods to Peru such as textiles, flour, and lard; unequal trade meant that ships returning north were often half empty, leaving entrepreneurs in search of profitable goods that could be exported. In 1840, Peruvian politician and entrepreneur negotiated a deal to commercialize guano export among a merchant house in Liverpool, a group of French businessmen, and the Peruvian government. This agreement resulted in the abolition of all preexisting claims to Peruvian guano; thereafter, it was the exclusive resource of the State. By nationalizing its guano resources, the Peruvian government was able to collect royalties on its sale, becoming the country's largest source of revenue. Some of this income was used by the State to free its more than 25,000 black slaves. Peru also used guano revenue to abolish the head tax on its indigenous citizens.
This export of guano from Peru to Europe has been suggested as the vehicle that brought a virulent strain of potato blight from the Andean highlands that began the Great Famine of Ireland. Soon guano was sourced from regions besides Peru. By 1846, of guano had been exported from Ichaboe Island, off the coast of Namibia, and surrounding islands to Great Britain. Guano pirating took off in other regions as well, causing prices to plummet and more consumers to try it. The biggest markets for guano from 1840 to 1879 were in Great Britain, the Low Countries, Germany, and the United States. By the late 1860s, it became apparent that Peru's most productive guano site, the Chincha Islands, was nearing depletion. This caused guano mining to shift to other islands north and south of the Chincha Islands. Despite this near exhaustion, Peru achieved its greatest ever export of guano in 1870 at more than . Concern over exhaustion was ameliorated by the discovery of a new Peruvian resource: sodium nitrate, also called Chile saltpetre. After 1870, the use of Peruvian guano as a fertilizer was eclipsed by Chile saltpetre in the form of caliche (a sedimentary rock) extracted from the interior of the Atacama Desert, close to the guano areas. The Guano Age ended with the War of the Pacific (1879–1883), which saw Chilean marines invade coastal Bolivia to claim its guano and saltpetre resources. Knowing that Bolivia and Peru had a mutual defense agreement, Chile mounted a preemptive strike on Peru, resulting in its occupation of the Tarapacá region, which included Peru's guano islands. With the Treaty of Ancón of 1884, the War of the Pacific ended. Bolivia ceded its entire coastline to Chile, which also gained half of Peru's guano income from the 1880s and its guano islands. The conflict ended with Chilean control over the most valuable nitrogen resources in the world. Chile's national treasury grew by 900% between 1879 and 1902 thanks to taxes coming from the newly acquired lands.
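The treasury figure implies a steep annualized growth rate. A minimal sketch of the arithmetic — the 900% growth and the 1879–1902 span are from the text; the annualization itself is illustrative, assuming smooth compound growth:

```python
# Chile's treasury grew by 900% between 1879 and 1902 (figure from the text).
# A 900% increase means the ending value is 10x the starting value.
start_year, end_year = 1879, 1902
growth_multiple = 1 + 900 / 100  # 10x

years = end_year - start_year  # 23 years
# Compound annual growth rate implied by a 10x increase over 23 years.
cagr = growth_multiple ** (1 / years) - 1

print(f"{cagr:.1%} compound annual growth over {years} years")  # roughly 10.5%/year
```

This is only a back-of-the-envelope check; actual year-to-year revenue from the nitrate lands would not have grown smoothly.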
Imperialism

The demand for guano led the United States to pass the Guano Islands Act in 1856, which gave U.S. citizens discovering a source of guano on an unclaimed island exclusive rights to the deposits. In 1857, the U.S. began annexing uninhabited islands in the Pacific and Caribbean, totaling nearly 100, though some islands claimed under the Act did not end up having guano mining operations established on them. Several of these islands are still officially U.S. territories. Conditions on annexed guano islands were poor for workers, resulting in a rebellion on Navassa Island in 1889 where black workers killed their white overseers. In defending the workers, lawyer Everett J. Waring argued that the men could not be tried by U.S. law because the guano islands were not legally part of the country. The case went to the Supreme Court of the United States where it was decided in Jones v. United States (1890). The Court decided that Navassa Island and other guano islands were legally part of the U.S. American historian Daniel Immerwahr claimed that by establishing these land claims as constitutional, the Court laid the "basis for the legal foundation for the U.S. empire". Other countries also used their desire for guano as a reason to expand their empires. The United Kingdom claimed Kiritimati and Malden Island for the British Empire. Other nations that claimed guano islands included Australia, France, Germany, Hawaii, Japan, and Mexico.

Decline and resurgence

In 1913, a factory in Germany began the first large-scale synthesis of ammonia using German chemist Fritz Haber's catalytic process. The scaling of this energy-intensive process meant that farmers could cease practices such as crop rotation with nitrogen-fixing legumes or the application of naturally derived fertilizers such as guano. The international trade of guano and nitrates such as Chile saltpetre declined as artificially synthesized fertilizers became more widely used.
With the rising popularity of organic food in the twenty-first century, the demand for guano has started to rise again.

Bat guano

In the U.S., bat guano was harvested from caves as early as the 1780s to manufacture gunpowder. During the American Civil War (1861–1865), the Union's blockade of the southern Confederate States of America meant that the Confederacy resorted to mining guano from caves to produce saltpetre. One Confederate guano kiln in New Braunfels, Texas, had a daily output of of saltpetre, produced from of guano from two area caves. From the 1930s, Bat Cave mine in Arizona was used for guano extraction, though it cost more to develop than it was worth. U.S. Guano Corporation bought the property in 1958 and invested $3.5 million to make it operational; actual guano deposits in the cave were one percent of the predicted amount and the mine was abandoned in 1960. In Australia, the first documented claim on Naracoorte's Bat Cave guano deposits was in 1867. Guano mining in the country remained a localized and small industry. In modern times, bat guano is used only at low levels in developed countries. It remains an important resource in developing countries, particularly in Asia.

Paleoenvironment reconstruction

Coring accumulations of bat guano can be useful in determining past climate conditions. The level of rainfall, for example, impacts the relative frequency of nitrogen isotopes. In times of higher rainfall, 15N is more common. Bat guano also contains pollen, which can be used to identify prior plant assemblages. A layer of charcoal recovered from a guano core in the U.S. state of Alabama was seen as evidence that a Woodlands tribe inhabited the cave for some time, leaving charcoal via the fires they lit. Stable isotope analysis of bat guano was also used to support the conclusion that the climate of the Grand Canyon was cooler and wetter during the Pleistocene epoch than it is now in the Holocene. Additionally, the climatic conditions were more variable in the past.
Mining

Process

Mining seabird guano from Peruvian islands has remained largely the same since the industry began, relying on manual labor. First, picks, brooms, and shovels are used to loosen the guano. The use of excavation machinery is not only impractical due to the terrain but also prohibited because it would frighten the seabirds. The guano is then placed in sacks and carried to sieves, where impurities are removed. Similarly, harvesting bat guano in caves was and is manual. In Puerto Rico, cave entrances were enlarged to facilitate access and extraction. Guano was freed from the rocky substrate by explosives. Then, it was shoveled into carts and removed from the cave. From there, the guano was taken to kilns to dry. The dried guano would then be loaded into sacks, ready for transport via ship. Today, bat guano is usually harvested in the developing world, using "strong backs and shovels".

Ecological impacts and mitigation

Bird guano

Peru's guano islands experienced severe ecological effects as a result of unsustainable mining. In the late 1800s, approximately 53 million seabirds lived on the twenty-two islands. As of 2011, only 4.2 million seabirds lived there. After realizing the depletion of guano in the Guano Age, the Peruvian government recognized that it needed to conserve the seabirds. In 1906, American zoologist Robert Ervin Coker was hired by the Peruvian government to create management plans for its marine species, including the seabirds. Specifically, he made five recommendations:

1. The government should turn its coastal islands into a state-run bird sanctuary; private use of the islands for hunting or egg collecting should be prohibited.
2. To eliminate unhealthy competition, each island should be assigned only one state contractor for guano extraction.
3. Guano mining should cease entirely from November to March so that the birds' breeding season is undisturbed.
4. In rotation, each island should be closed to guano mining for an entire year.
5. The Peruvian government should monopolize all processes related to guano production and distribution.

The final recommendation was made with the belief that a single entity with a vested interest in the long-term success of the guano industry would manage the resource most responsibly. Despite these policies, the seabird population continued to decline, which was exacerbated by the 1911 El Niño–Southern Oscillation. In 1913, Scottish ornithologist Henry Ogg Forbes authored a report on behalf of the Peruvian Corporation focusing on how human actions harmed the birds and subsequent guano production. Forbes suggested additional policies to conserve the seabirds, including keeping unauthorized visitors a mile away from guano islands at all times, eliminating all the birds' natural predators, armed patrols of the islands, and decreasing the frequency of harvest on each island to once every three to four years. In 2009, these conservation efforts culminated in the establishment of the Guano Islands, Isles, and Capes National Reserve System, which consists of twenty-two islands and eleven capes. This Reserve System was the first marine protected area in South America, encompassing .

Bat guano

Unlike bird guano, which is deposited on the surface of islands, bat guano can be deep within caves. Cave structure is often altered via explosives or excavation to facilitate extraction of the guano, which changes the cave's microclimate. Bats are sensitive to cave microclimate, and such changes can cause them to abandon the cave as a roost, as happened when Robertson Cave in Australia had a hole opened in its ceiling for guano harvesting. Guano harvesting may also introduce artificial light into caves; one cave in the U.S. state of New Mexico was abandoned by its bat colony after the installation of electric lights. In addition to harming bats by forcing them to find another roost, guano harvesting techniques can ultimately harm human livelihood as well.
Harming or killing bats means that less guano will be produced, resulting in unsustainable harvesting practices. In contrast, sustainable harvesting practices do not negatively impact bat colonies or other cave fauna. The International Union for Conservation of Nature's (IUCN) 2014 recommendations for sustainable guano harvesting include extracting guano when the bats are not present, such as when migratory bats are gone for the season or when non-migratory bats are out foraging at night.

Work conditions

Guano mining in Peru was at first done with black slaves. After Peru formally ended slavery, it sought another source of cheap labor. In the 1840s and 1850s, thousands of men were blackbirded (coerced or kidnapped) from the Pacific islands and southern China. Thousands of coolies from South China worked as "virtual slaves" mining guano. By 1852, Chinese laborers comprised two-thirds of Peru's guano miners; others who mined guano included convicts and forced laborers paying off debts. Chinese laborers agreed to work for eight years in exchange for passage from China, though many were misled that they were headed to California's gold mines. Conditions on the guano islands were very poor, commonly resulting in floggings, unrest, and suicide. Workers experienced lung damage from inhaling guano dust, were buried alive by falling piles of guano, and risked falling into the ocean. After visiting the guano islands, U.S. politician George Washington Peck wrote:

Hundreds or thousands of Pacific Islanders, especially Native Hawaiians, traveled or were blackbirded to the U.S.-held and Peruvian guano islands for work, including Howland Island, Jarvis Island, and Baker Island. While most Hawaiians were literate, they usually could not read English; the contract they received in their own language lacked key amendments that the English version had.
Because of this, the Hawaiian language contract was often missing key information, such as the departure date, the length of the contract, and the name of the company for which they would be working. When they arrived at their destination to begin mining, they learned that both contracts were largely meaningless in terms of work conditions. Instead, their overseer (commonly referred to as a luna), who was usually white, had nearly unlimited power over them. Wages varied from lows of $5/month to highs of $14/month. Native Hawaiian laborers of Jarvis Island referred to the island as Paukeaho, meaning "out of breath" or "exhausted", due to the strain of loading heavy bags of guano onto ships. Pacific Islanders also risked death: one in thirty-six laborers from Honolulu died before completing their contract. Slaves blackbirded from Easter Island in 1862 were repatriated by the Peruvian government in 1863; only twelve of 800 slaves survived the journey. On Navassa Island, the guano mining company switched from white convicts to largely black laborers after the American Civil War. Black laborers from Baltimore claimed that they were misled into signing contracts with stories of mostly fruit-picking, not guano mining, and "access to beautiful women". Instead, the work was exhausting and punishments were brutal. Laborers were frequently placed in stocks or tied up and dangled in the air. A labor revolt ensued, where the workers attacked their overseers with stones, axes, and even dynamite, killing five overseers. Although the process for mining guano is mostly the same today, worker conditions have improved. As of 2018, guano miners in Peru made US$750 per month, which is more than twice the average national monthly income of $300. Workers also have health insurance, meals, and eight-hour shifts.

Human health

Guano is one of the habitats of the fungus Histoplasma capsulatum, which can cause the disease histoplasmosis in humans, cats, and dogs. H.
capsulatum grows best in the nitrogen-rich conditions present in guano. In the United States, histoplasmosis affects 3.4 per 100,000 adults over age 65, with higher rates in the Midwestern United States (6.1 cases per 100,000). In addition to the United States, H. capsulatum is found in Central and South America, Africa, Asia, and Australia. Of 105 outbreaks in the U.S. from 1938 to 2013, seventeen occurred after exposure to a chicken coop while nine occurred after exposure to a cave. Birds or their droppings were present in 56% of outbreaks, while bats or their droppings were present in 23%. Developing any symptoms after exposure to H. capsulatum is very rare; less than 1% of those infected develop symptoms. Only patients with more severe cases require medical attention, and only about 1% of acute cases are fatal. It is a much more serious illness for the immunocompromised, however. Histoplasmosis is the first symptom of HIV/AIDS in 50–75% of patients, and results in death for 39–58% of those with HIV/AIDS. The Centers for Disease Control and Prevention recommends that the immunocompromised avoid exploring caves or old buildings, cleaning chicken coops, or disturbing soil where guano is present. Rabies, which can affect humans who have been bitten by infected mammals including bats, cannot be transmitted through bat guano. A 2011 study of bat guano viromes in the U.S. states of Texas and California recovered no viruses that are pathogenic to humans, nor any close relatives of pathogenic viruses. It is hypothesized that Egyptian fruit bats, which are native to Africa and the Middle East, can spread Marburg virus to each other through contact with infected secretions such as guano, but a 2018 review concluded that more studies are necessary to determine the specific mechanisms of exposure that cause Marburg virus disease in humans. Exposure to guano could be a route of transmission to humans.
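The per-100,000 incidence figures above translate directly into expected case counts for a given population. A minimal illustrative sketch — the two rates are from the text, while the population size is hypothetical:

```python
# Incidence figures from the text: 3.4 cases per 100,000 adults over age 65
# nationally, 6.1 per 100,000 in the Midwestern United States.
def expected_cases(population: int, rate_per_100k: float) -> float:
    """Expected annual histoplasmosis cases for a population at a given rate."""
    return population * rate_per_100k / 100_000

# Hypothetical population of 2 million adults over age 65:
print(expected_cases(2_000_000, 3.4))  # national rate -> 68.0 cases
print(expected_cases(2_000_000, 6.1))  # Midwest rate  -> 122.0 cases
```

The point of the comparison is that even the elevated Midwestern rate corresponds to a small absolute number of diagnosed cases, consistent with the text's note that symptomatic infection is rare.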
There are reports from as early as the 18th century of travellers complaining about the unhealthy air of Arica and Iquique resulting from abundant bird droppings.

Ecological importance

Colonial birds and their guano deposits play an outsized role in the surrounding ecosystem. Bird guano stimulates productivity, though species richness may be lower on guano islands than on islands without the deposits. Guano islands have a greater abundance of detritivorous beetles than islands without guano. The intertidal zone is inundated by the guano's nutrients, causing algae to grow more rapidly and coalesce into algal mats. These algal mats are in turn colonized by invertebrates. The abundance of nutrients offshore of guano islands also supports coral reef ecosystems. Cave ecosystems are often limited by nutrient availability. Bats, however, bring nutrients into these ecosystems via their excretions, which are often the dominant energy resource of a cave. Many cave species depend on bat guano for sustenance, directly or indirectly. Because cave-roosting bats are often highly colonial, they can deposit substantial quantities of nutrients into caves. The largest colony of bats in the world, at Bracken Cave (about 20 million individuals), deposits of guano into the cave every year. Even smaller colonies have relatively large impacts, with one colony of 3,000 gray bats annually depositing of guano into their cave. Invertebrates inhabit guano piles, including fly larvae, nematodes, springtails, beetles, mites, pseudoscorpions, thrips, silverfish, moths, harvestmen, spiders, isopods, millipedes, centipedes, and barklice. The invertebrate communities associated with the guano depend on the bat species' feeding guild: frugivorous bat guano has the greatest invertebrate diversity. Some invertebrates feed directly on the guano, while others consume the fungi that use it as a growth medium. Predators such as spiders depend on guano to support their prey base.
Vertebrates consume guano as well, including the bullhead catfish and larvae of the grotto salamander. Bat guano is integral to the existence of endangered cave fauna. The critically endangered Shelta Cave crayfish feeds on guano and other detritus. The Ozark cavefish, a U.S. federally listed species, also consumes guano. The loss of bats from a cave can result in declines or extinctions of other species that rely on their guano. A 1987 flood of one such cave resulted in the death of its bat colony; the Valdina Farms salamander is now likely extinct as a result. Bat guano also has a role in shaping caves by making them larger. It has been estimated that 70–95% of the total volume of Gomantong cave in Borneo is due to biological processes such as guano excretion, as the acidity of the guano weathers the rocky substrate. The presence of high densities of bats in a cave is predicted to cause the erosion of of rock over 30,000 years.

Cultural significance

There are several references to guano in the arts. In his 1845 poem "Guanosong", German author Joseph Victor von Scheffel used a humorous verse to take a position in the popular polemic against Hegel's Naturphilosophie. The poem starts with an allusion to Heinrich Heine's Lorelei and may be sung to the same tune. The poem ends, however, with the blunt statement of a Swabian rapeseed farmer from Böblingen who praises the seagulls of Peru as providing better manure even than his fellow countryman Hegel. This refuted the widespread Enlightenment belief that nature in the New World was inferior to that of the Old World. The poem has been translated by, among others, Charles Godfrey Leland. English author Robert Smith Surtees parodied the obsession of wealthy landowners with the "religion of progress" in 1843. In one of his works featuring the character John Jorrocks, Surtees has the character develop an obsession with trying all the latest farming experiments, including guano.
In an effort to impress the upper class around him and disguise his low-class origins, Jorrocks references guano in conversation at every chance he gets. At one point, he exclaims, "Guano!" along with two other varieties of fertilizer, to which the Duke replies, "I see you understand it all!" Guano is also the namesake of one of the nucleobases in RNA and DNA: guanine, a purine base consisting of a fused pyrimidine-imidazole planar ring system with conjugated double bonds. Guanine was first obtained from guano by , who first described it, incorrectly, as xanthine, a closely related purine, in 1844. After he was corrected by Einbrodt two years later, Bodo Unger agreed and published the compound under the new name "guanine" in 1846.
https://en.wikipedia.org/wiki/Vulva
Vulva
In mammals, the vulva (plural: vulvas or vulvae) comprises the mostly external, visible structures of the female genitalia leading into the interior of the female reproductive tract. For humans, it includes the mons pubis, labia majora, labia minora, clitoris, vestibule, urinary meatus, vaginal introitus, hymen, and openings of the vestibular glands (Bartholin's and Skene's). The folds of the outer and inner labia provide a double layer of protection for the vagina (which leads to the uterus). Pelvic floor muscles support the structures of the vulva. Other muscles of the urogenital triangle also give support. Blood supply to the vulva comes from the three pudendal arteries. The internal pudendal veins give drainage. Afferent lymph vessels carry lymph away from the vulva to the inguinal lymph nodes. The nerves that supply the vulva are the pudendal nerve, perineal nerve, ilioinguinal nerve and their branches. Blood and nerve supply to the vulva contribute to the stages of sexual arousal that are helpful in the reproduction process. Following its initial development, the vulva undergoes changes at birth, in childhood, at puberty, at menopause and post-menopause. There is a great deal of variation in the appearance of the vulva, particularly in relation to the labia minora. The vulva can be affected by many disorders, which may often result in irritation. Vulvovaginal health measures can prevent many of these. Other disorders include a number of infections and cancers. There are several vulval restorative surgeries known as genitoplasties, and some of these are also used as cosmetic surgery procedures. Different cultures have held different views of the vulva. Some ancient religions and societies have worshipped the vulva and revered the female as a goddess. Major traditions in Hinduism continue this. In Western societies, there has been a largely negative attitude typified by the medical terminology of , meaning parts to be ashamed of.
There has been an artistic reaction to this in various attempts to bring about a more positive and natural outlook. While the vagina is a separate part of the anatomy, it has often been used synonymously with vulva.

Structure

The human vulva is made up of the following:

Mons pubis

The mons pubis is a soft mound of fatty tissue in the pubic region covering the pubic bone. Mons pubis is Latin for "pubic mound"; the structure is present in both sexes to act as a cushion during sexual intercourse, and is more pronounced in the female. The variant term mons veneris ('mound of Venus') is used specifically for females.

Labia

The labia minora are the small inner pair of skin folds that protect the openings. The large outer pair of folds are the labia majora, which contain and protect the labia minora and other structures of the vulva. The labia majora meet at the front of the mons pubis, and meet posteriorly at the urogenital triangle (the anterior part of the perineum) below the anus. The labia minora are often pink or brownish black, depending on the person's skin color. The grooves between the labia majora and minora are called the interlabial sulci, or interlabial folds. The labia minora meet posteriorly as the frenulum (fourchette).

Clitoris

Located at the anterior junction of the labia minora is the clitoris, a highly erogenous sexual organ. The visible portions of the clitoris are the glans and frenulum. Typically, the glans is roughly the size and shape of a pea, and can vary in size from about 6 mm to 25 mm (less than an inch). The size can also vary when the clitoris is erect, which happens when two regions of erectile tissue known as the corpora cavernosa (along with the bulbs and crura, which together constitute the root of the clitoris) fill with blood, making the shaft engorged. The glans contains many nerve endings, which makes it highly sensitive. The only known function of the clitoris is sexual sensation.
The clitoral hood is a protective fold of skin and it may partially or completely cover the shaft and glans. The hood may be partially or completely hidden within the pudendal cleft.

Vestibule

The area between the labia minora where the vaginal introitus and the urinary meatus (the openings of the vagina and urethra respectively) are located is the vestibule. The meatus is below the clitoris and above the introitus. The introitus is sometimes partly covered by a membrane called the hymen. The hymen will usually rupture during the first episode of vigorous sex, and the blood produced by this rupture has historically been seen as a sign of virginity. However, the hymen may also rupture spontaneously during exercise or be stretched by normal activities such as the use of tampons and menstrual cups, or be so minor as to be unnoticeable, or be absent. In some rare cases, the hymen may completely cover the introitus, requiring a surgical procedure called a hymenotomy. Two greater vestibular glands known as Bartholin's glands open into either side of the introitus and secrete a mucous vaginal lubricant. The openings of the lesser vestibular glands, known as Skene's glands, are found on either side of the urethral meatus.

Muscles

Pelvic floor muscles help to support the vulvar structures. The voluntary pubococcygeus muscle, part of the levator ani muscle, partially constricts the vaginal opening. Other muscles of the urogenital triangle support the vulvar area; they include the transverse perineal muscles, the bulbospongiosus, and the ischiocavernosus muscles. The bulbospongiosus muscle decreases the vaginal opening. These muscles play a role in the vaginal contractions of orgasm by causing the vestibular bulbs to contract.

Blood, lymph and nerve supply

The tissues of the vulva are highly vascularised and blood supply is provided by the three pudendal arteries. Venous return is via the external and internal pudendal veins.
The organs and tissues of the vulva are drained by a chain of superficial inguinal lymph nodes located along the blood vessels. The ilioinguinal nerve originates from the first lumbar nerve and gives branches that include the anterior labial nerves, which supply the skin of the mons pubis and the labia majora. The perineal nerve is one of the terminal branches of the pudendal nerve, and it branches into the posterior labial nerves to supply the labia. The pudendal nerve branches include the dorsal nerve, which gives sensation to the clitoris. The clitoral glans is populated by a large number of small nerves, a number that decreases as the tissue changes towards the urethra. The density of nerves at the glans indicates that it is the center of heightened sensation. Cavernous nerves from the uterovaginal plexus supply the erectile tissue of the clitoris. These are joined underneath the pubic arch by the dorsal nerve of the clitoris. The pudendal nerve enters the pelvis through the lesser sciatic foramen and continues medial to the internal pudendal artery. The point where the nerve circles the ischial spine is the location where a pudendal block of local anesthetic can be administered to inhibit sensation to the vulva. A number of smaller nerves split off from the pudendal nerve. The deep branch of the perineal nerve supplies the muscles of the perineum, and a branch of this supplies the bulb of the vestibule.

Variations

There is a great deal of variation in the appearance of the vulva. Much of this variation lies in the significant differences in the size, shape, and color of the labia minora. Though called the smaller lips, they can often be of considerable size and may protrude outside the labia majora. This variation has also been evidenced in a large display of 400 vulval casts called the Great Wall of Vagina, created by Jamie McCartney to address the lack of information about what a normal vulva looks like.
The casts, taken from a large and varied group of women, showed clearly that there is much variation. Other variations of the vulva include the appearance of Fordyce spots and clitoral phimosis (when the clitoral hood cannot retract past the glans). Researchers from the Elizabeth Garrett Anderson Hospital, London, measured multiple genital dimensions of 50 women between the ages of 18 and 50, with a mean age of 35.6:

Development

Prenatal development

In week three of the development of the embryo, mesenchyme cells from the primitive streak migrate around the cloacal membrane. Early in the fifth week, the cells form two swellings called the cloacal folds. The cloacal folds meet in front of the cloacal membrane and form a raised area known as the genital tubercle. The urorectal septum fuses with the cloacal membrane to form the perineum. This division creates two areas: one surrounded by the urethral folds and the other by the anal folds. These areas become the urogenital triangle and the anal triangle. The area between the vulva and the anus is known as the clinical perineum. At the same time, a pair of swellings on either side of the urethral folds known as the genital swellings develop into the labioscrotal swellings. Sexual differentiation takes place, and at the end of week six in the female, hormones stimulate further development and the genital tubercle bends and forms the clitoris. The urogenital sinus persists as the vulval vestibule, vestibular glands and urethra. The urethral folds form the labia minora and the labioscrotal swellings form the labia majora. The uterovaginal canal, or genital canal, forms in the third month of the development of the urogenital system. The lower part of the canal is blocked off by a plate of tissue, the vaginal plate. This tissue develops and lengthens during the third to fifth months, and the lower part of the vaginal canal is formed by a process of desquamation, or cell shedding.
The end of the vaginal canal is blocked off by an endodermal membrane, which separates the opening from the vestibule. In the fifth month, the membrane degenerates but leaves a remnant called the hymen. Childhood The newborn's vulva may be swollen or enlarged as a result of having been exposed, via the placenta, to her mother's increased levels of hormones. The labia majora are closed. These changes disappear over the first few months. During childhood before puberty, the lack of estrogen can cause the labia to become sticky and to ultimately join firmly together. This condition is known as labial fusion and is rarely found after puberty, when estrogen production has increased. Puberty Puberty is the onset of the ability to reproduce, and takes place over two to three years, producing a number of changes. The structures of the vulva become proportionately larger and may become more pronounced. Pubarche, the first appearance of pubic hair, develops first on the labia majora, and later spreads to the mons pubis, and sometimes to the inner thighs and perineum. Pubic hair is much coarser than other body hair, and is considered a secondary sex characteristic. Pubarche can occur independently of puberty. Premature pubarche may sometimes indicate a later metabolic-endocrine disorder seen at adolescence. The disorder, sometimes known as a polyendocrine disorder, is marked by elevated levels of androgen, insulin, and lipids, and may originate in the fetus. Instead of being seen as a normal variant, it is proposed that premature pubarche be seen as a marker for these later endocrine disorders. Apocrine sweat glands secrete sweat into the pubic hair follicles. This is broken down by bacteria on the skin and produces an odor, which some consider to act as an attractant sex pheromone. The labia minora may grow more prominent and undergo changes in color. At puberty, the first monthly period, known as menarche, marks the onset of menstruation.
In prepubertal girls, the skin of the vulva is thin and delicate, and its neutral pH makes it prone to irritation. The production of the female sex hormone estradiol (an estrogen) at puberty causes the perineal skin to thicken by keratinising, and this reduces the risk of infection. Estrogen also causes the laying down of fat in the development of the secondary sex characteristics. This contributes to the maturation of the vulva, with increases in the size of the mons pubis and the labia majora, and the enlargement of the labia minora. Pregnancy In pregnancy, the vulva and vagina take on a bluish coloring due to venous congestion. This appears between the eighth and twelfth week and continues to darken as the pregnancy continues. Estrogen is produced in large quantities during pregnancy and this causes the vulva to become enlarged. The vaginal opening and the vagina are also enlarged. After childbirth, a vaginal discharge known as lochia is produced and continues for about ten days. Menopause During menopause, hormone levels decrease, which causes changes in the vulva known as vulvovaginal atrophy. The decreased estrogen affects the mons, the labia, and the vaginal opening and can cause pale, itchy, and sore skin. Other visible changes are a thinning of the pubic hair, a loss of fat from the labia majora, a thinning of the labia minora, and a narrowing of the vaginal opening. This condition has been renamed by some bodies as the genitourinary syndrome of menopause as a more comprehensive term. Function and physiology The vulva has a major role to play in the reproductive system. It provides entry to, and protection for, the uterus, and the right conditions in terms of warmth and moisture that aid in its sexual and reproductive functions. The vulva is richly innervated and provides pleasure when properly stimulated. The mons pubis provides cushioning against the pubic bone during intercourse.
A number of different secretions are associated with the vulva, including urine (from the urethral opening during urination, through control of the external sphincter muscle), sweat (from the apocrine glands), menses (leaving the vagina via the introitus), sebum (from the sebaceous glands), alkaline fluid (from the Bartholin's glands), mucus (from the Skene's glands), vaginal lubrication from the vaginal wall, and smegma. Smegma is a white substance formed from a combination of dead cells, skin oils, moisture and naturally occurring bacteria, that forms in the genitalia. In females, this thickened secretion collects around the clitoris and labial folds. It can cause discomfort during sexual activity as it can cause the clitoral glans to stick to the hood, and is easily removed by bathing. Aliphatic acids known as copulins are also secreted in the vagina. These are believed to act as pheromones. Their fatty acid composition, and consequently their odor, changes in relation to the stages of the menstrual cycle. Sexual stimulation and arousal The clitoris and the labia minora are the most erogenous areas of the vulva. The labia majora are also somewhat erogenous. Local stimulation can involve the clitoris, vagina and other perineal regions. The clitoris (especially the glans) is the human female's most sensitive erogenous zone and generally the primary anatomical source of human female sexual pleasure. Sexual stimulation of the clitoris (by a number of means) can result in widespread sexual arousal and, if maintained, can result in orgasm. Stimulation to vulvar orgasm is optimally achieved by a massaging sensation, such as oral sex (cunnilingus), fingering, and tribadism (two women rubbing vulvas together). Sexual arousal results in a number of physical changes in the vulva. During arousal, the Bartholin's glands produce more vaginal lubrication.
Vulval tissue is highly vascularised; arterioles dilate in response to sexual arousal and the smaller veins compress after arousal, so that the clitoris and labia minora increase in size. Increased vasocongestion in the vagina causes it to swell, decreasing the size of the vaginal opening by about 30%. Clitoral erection takes place, which retracts the clitoral hood, causing the glans to appear. The labia majora swell from increased blood flow and separate slightly, revealing the thickened and engorged labia minora. The labia minora sometimes change considerably in color, going from pink to red in lighter skinned women who have not borne a child, or red to dark red in those who have. During orgasm, rhythmic muscle contractions occur in the outer third of the vagina, as well as the uterus and anus. Contractions become less intense and more randomly spaced as the orgasm continues. The number of contractions that accompany an orgasm varies depending on its intensity. An orgasm may be accompanied by female ejaculation, causing liquid from the Skene's glands to be expelled through the urethra. The pooled blood begins to dissipate, although at a much slower rate if an orgasm has not occurred. The vagina and its opening return to their normal relaxed state, and the rest of the vulva returns to its normal size, position and color. Clinical significance Irritation Irritation and itching of the vulva is called pruritus vulvae. This can be a symptom of many disorders, some of which may be determined by a patch test. The most common cause of irritation is thrush, a fungal infection. Vulvovaginal health measures can help to prevent many disorders including thrush. Infections of the vagina such as vaginosis and of the uterus may produce vaginal discharge, which can be an irritant when it comes into contact with the vulvar tissue. Inflammation such as vaginitis, vulvovaginitis, and vulvitis can result from this, causing irritation and pain.
Ingrown hairs resulting from pubic hair shaving can cause folliculitis, where the hair follicle becomes infected, or give rise to an inflammatory response known as pseudofolliculitis pubis. A less common cause of irritation is genital lichen planus, another inflammatory disorder. A severe variant of this is vulvovaginal-gingival syndrome, which can lead to narrowing of the vagina, or destruction of the vulva. Many types of infection and other diseases including some cancers may cause irritation. Sexually transmitted infections Vulvar organs and tissues can become affected by different infectious agents such as bacteria and viruses, or infested by parasites such as lice and mites. Over thirty types of pathogen can be sexually transmitted, and many of these affect the genitals. Most STIs produce no symptoms, or only mild symptoms that may not be indicative of an STI. The practice of safe sex can greatly reduce the risk of infection from many sexually transmitted pathogens. The use of condoms (either male or female condoms) is one of the most effective methods of protection. Bacterial infections include: chancroid – characterised by genital ulcers known as chancres; granuloma inguinale – showing as inflammatory granulomas often described as nodules; syphilis – the primary stage classically presents with a single chancre, a firm, painless, non-itchy ulcer, but there may be multiple sores; and gonorrhea, which very often presents no symptoms but can result in discharge. Viral infections include human papillomavirus infection (HPV) – this is the most common STI and has many types. Genital HPV can cause genital warts. There have been links made between HPV and vulvar cancer, though HPV most often causes cervical cancer. Genital herpes is mostly asymptomatic but can present with small blisters that break open into ulcers. HIV/AIDS is mostly transmitted through sexual activity, and the vulva in some cases can be affected by sores.
A highly contagious viral infection is molluscum contagiosum, which is transmissible through close contact and causes water warts. Parasitic infections include trichomoniasis, pediculosis pubis, and scabies. Trichomoniasis is transmitted by a parasitic protozoan and is the most common non-viral STI. Most cases are asymptomatic but may present symptoms of irritation and a discharge of unusual odor. Pediculosis pubis, commonly called crabs, is a disease caused by the crab louse, an ectoparasite. When the pubic hair is infested, the irritation produced can be intense. Scabies, also known as the "seven year itch", is caused by another ectoparasite, the mite Sarcoptes scabiei, and causes intense irritation. Cancer Malignancies can develop in the glabrous and hair-bearing parts of the vulva. Based on the cellular origin and histology, vulvar cancers are classified into squamous cell carcinomas, melanomas, basal cell carcinomas, adenocarcinomas, sarcomas and invasive extramammary Paget's disease. Squamous cell carcinomas represent the most common variant of vulvar cancers and account for approximately 75%. These are usually found in the labia, particularly the labia majora. The second most common vulvar cancer is basal cell carcinoma, which rarely spreads to regional lymph nodes or distant organs. The third most common subtype is vulvar melanoma. Studies have shown that vulvar melanomas appear to have a different tumor biology and mutational characteristics compared to skin melanomas, which has a direct impact on the medical treatment of vulvar melanomas. Signs and symptoms of vulvar cancer can include itching or bleeding; skin changes including rashes, sores, lumps or ulcers; and changes in vulvar skin coloration. Pelvic pain might also occur, especially during urination and sex. However, a significant proportion of cases remain asymptomatic in early disease stages, often delaying diagnosis.
As such, 32% of women with vulvar melanoma already have regional involvement or distant metastases at the time of diagnosis, which significantly impacts prognosis. Surgery (with or without removal of regional lymph nodes) is usually the primary treatment modality. Typically, a wide-local excision is performed, in which the tumor is excised including a safety-margin of healthy tissue to ensure its entire removal, which is confirmed by a pathologist. In more advanced disease, a (partial) vulvectomy may need to be performed in order to remove some or all of the vulva. Advanced-stage melanomas can be treated with checkpoint inhibitors. Other Labial fusion, also called labial adhesion, is the fusion of the labia minora. This affects a number of young girls and is not considered unduly problematic. The condition can usually be treated using creams, or it may right itself with the release of hormones at the onset of puberty. Clitoromegaly is an enlarged clitoris caused by either anabolic steroids or an intersex condition. Vulvodynia is chronic pain in the vulvar region. There is no single identifiable cause. A subtype of this is vulvar vestibulitis but since this is not thought to be an inflammatory condition it is more usually referred to as vestibulodynia. Vulvar vestibulitis usually affects pre-menopausal women. Pudendal nerve entrapment can cause sharp pain or numbness in the vulva. This condition can be caused by activities such as cycling, giving birth, or prolonged sitting. A number of skin disorders such as lichen sclerosus, and lichen simplex chronicus can affect the vulva. Crohn's disease of the vulva is an uncommon form of metastatic Crohn's disease, which manifests as a skin condition showing as hypertrophic lesions or vulvar abscesses. Papillary hidradenomas are nodules that can ulcerate and are mostly found on the skin of the labia or of the interlabial folds. 
Another more complex ulcerative condition is hidradenitis suppurativa, which is characterised by painful cysts that can ulcerate and recur, and can become chronic, lasting for many years. Chronic cases can develop into squamous cell carcinomas. An asymptomatic skin disorder of the vulval vestibule is vestibular papillomatosis, which is characterised by fine, pink projections from either the epithelium of the vulva or from the labia minora. Dermatoscopy can distinguish this condition from genital warts. A subtype of psoriasis, an autoimmune disease, is inverse psoriasis, in which red patches can appear in the skin folds of the labia. Childbirth The vulvar region is at risk for trauma during childbirth. During childbirth, the vagina and vulva must stretch to accommodate the baby's head. This can result in tears known as perineal tears in the vaginal opening, and in other structures within the perineum. An episiotomy (a pre-emptive surgical cutting of the perineum) is sometimes performed to facilitate delivery and limit tearing. A tear takes longer to heal than an incision. Tears and incisions may be repaired using sutures that may be layered. In evaluations of methods of hair removal before surgery, pubic hair shaving, known as prepping, was seen to increase the risk of surgical site infections. No advantages have been demonstrated in the routine shaving of pubic hair prior to childbirth. Surgery Genitoplasties are plastic surgeries that can be carried out to repair, restore or alter vulvar tissues, particularly following damage caused by injury or cancer treatment. These procedures include vaginoplasty and vulvoplasty, which can also be performed as a cosmetic surgery. Other cosmetic surgeries to change the appearance of external structures include labiaplasties. Some of these procedures, vaginoplasties and vulvoplasties, are also carried out as sex reassignment surgeries. The use of cosmetic surgeries has been criticized by clinicians.
The American College of Obstetricians and Gynecologists recommends that women be informed of the risks of these surgeries. They refer to the lack of data relevant to their safety and effectiveness and to the potential associated risks such as infection, altered sensation, dyspareunia, adhesions, and scarring. Some people seeking cosmetic surgery may be suffering from body dysmorphic disorder, and surgery in these cases can be counterproductive. Society and culture Altering the female genitalia In some cultural practices, particularly in the African Khoikhoi and Rwanda cultures, the labia minora are purposefully stretched by repeated pulling on them and sometimes by attaching weights. Labia stretching is a recognised, familial cultural practice in parts of Eastern and Southern Africa. The practice is desired and encouraged by the women (starting at puberty) in order to promote better sexual satisfaction for both parties. The achieved extensions can hang down below the labia majora for up to seven inches. Children in the African diaspora practise this too, so it occurs within immigrant communities in, for example, Britain, where a BBC News report labelled it a hidden form of child abuse. The girls are subject to familial and social pressure to conform. In some cultures, including modern Western culture, women have shaved or otherwise removed the hair from part or all of the vulva. When high-cut swimsuits became fashionable, women who wished to wear them would remove the hair on either side of their pubic triangles, to avoid exhibiting pubic hair. Other women prefer to retain their vulva hair.
The removal of hair from the vulva is a fairly recent phenomenon in the United States, Canada, and Western Europe, usually in the form of bikini waxing or Brazilian waxing, but has been prevalent in many Eastern European and Middle Eastern cultures for centuries, usually due to the idea that it may be more hygienic, or originating in prostitution and pornography. Hair removal may include all, most, or some of the hair. French waxing leaves a small amount of hair on either side of the labia or a strip directly above and in line with the pudendal cleft, called a landing strip. Islamic teaching includes Muslim hygienical jurisprudence, a practice of which is the removal of pubic hair. Several forms of genital piercings can be made in the vulva, and include the Christina, Princess Albertina, Isabella, Nefertiti, fourchette, and labia piercings. Piercings are usually performed for aesthetic purposes, but some forms, like the clitoral hood piercing (or rarely glans piercing), might also enhance pleasure during sex. Though they are common in traditional cultures, intimate piercings are a fairly recent trend in Western society. Other forms of permanent modification of the vulva for cultural, decorative or aesthetic reasons are genital tattoos or scarification (so-called "Hanabira"). Female genital surgery includes laser resurfacing of the labia to remove wrinkles, labiaplasty (reducing the size of the labia) and vaginoplasty. In September 2007, the American College of Obstetricians and Gynecologists (ACOG) issued a committee opinion on these and other female genital surgeries, including "vaginal rejuvenation", "designer vaginoplasty", "revirgination", and "G-spot amplification". This opinion states that the safety of these procedures has not been documented.
The ACOG and the ISSVD recommend that women seeking these surgeries be informed about the lack of data supporting these procedures and the potential associated risks such as infection, altered sensation, dyspareunia, adhesions, and scarring. With the growing popularity of female cosmetic genital surgeries, the practice increasingly draws criticism from an opposition movement of cyberfeminist activist groups and platforms, called the labia pride movement. The major point of contention is that heavy advertising for these procedures, in combination with a lack of public education, fosters body insecurities in women with larger labia, in spite of the fact that there is normal and pronounced individual variation in the size of labia. The preference for smaller labia is a matter of fashion and is without clinical or functional significance. Female genital mutilation The most prevalent form of non-consensual genital alteration is female genital mutilation. This mostly involves the partial or complete removal of the vulva. Female genital mutilation is carried out in thirty countries in Africa and Asia, with more than 200 million girls and women affected (as of 2018). Nearly all of the procedures are carried out on young girls. The practices are also carried out globally among migrants from these areas. Female genital mutilation is claimed to be mostly carried out for traditional cultural reasons. According to the research conducted under In the Name of Tradition, FGM/C is more common in Sunni countries and less common in Shia societies. FGM/C can have harmful effects on women's physical and mental health. Various official and unofficial research reports also confirm these complications. In its various reports, the World Health Organization has considered FGM/C as an action that endangers women's health in various ways.
This organization stated in a report published in January 2023 that FGM/C has no health benefits, and it harms girls and women in many ways. It involves removing and damaging healthy and normal female genital tissue, and it interferes with the natural functions of girls' and women's bodies. Although all forms of FGM/C are associated with increased risk of health complications, the risk is greater with more severe forms of FGM/C. The United States National Library of Medicine also stated in an article in 2018 that the consequences of FGM/C include both physiological and psychological complications, both short- and long-term. The method in which the procedure is performed may determine the extent of the short-term complications. If the process was completed using unsterile equipment, no antiseptics, and no antibiotics, the victim may have increased risk of complications. Primary complications include staphylococcus infections, urinary tract infections, excessive and uncontrollable pain, and hemorrhaging. Infections such as human immunodeficiency virus (HIV), Chlamydia trachomatis, Clostridium tetani, and herpes simplex virus (HSV) 2 are significantly more common among women who underwent Type 3 mutilation compared with other categories. Etymology The word vulva is Latin for "womb". Its use in English dates from the 1540s, referring to the womb and female sexual organs; it derives from the Latin volvere, meaning to turn, roll or revolve, with derivatives used in volvox and volvulus (twisted bowel). The naming of the female (and male) genitals as pudenda, meaning parts to be ashamed of, dates from the mid-17th century. The naming influenced the general perception of the vulva, and this is shown in depictions of gynaecological procedures. The examiner shown in the Obstetrical examination, dated 1822, adopts a compromise procedure in which the woman's genitals cannot be seen.
Terminology In 2021, a study in the UK showed that few people are able to label the structure of the vulva correctly. There are many sexual slang terms used for the vulva. "Cunt", a medieval word for the vulva and once the standard term, has become a vulgarism, and in other uses one of the strongest offensive and abusive swear words in English-speaking cultures. The word has been replaced in normal usage by a few euphemisms, including "pussy" (vulgar slang) and "fanny" (UK), which used to be a common pet name. In the UK, these terms have other non-sexual meanings that lend themselves to double entendres, such as "pussy", which is used as a term of endearment for a pet cat, "pussy cat". In North American informal use, the term "pussy" can also refer to a weak or effeminate man, and "fanny" is a term used for the buttocks. Other slang terms are "muff", "snatch", and "twat". "Vagina" is often incorrectly used as a synonym for vulva, although the vagina is a distinct anatomical structure. Religion and art Some cultures have long celebrated and even worshipped the vulva. During the Uruk period (c. 4000–3100 BC), the ancient Sumerians regarded the vulva as sacred, and a vast number of Sumerian poems praising the vulva of Inanna, the goddess of love, sex, and fertility, have survived. In Sumerian religion, the goddess Ninimma is the divine personification of the vulva. Vaginal fluid is always described in Sumerian texts as tasting "sweet" and, in a Sumerian bridal hymn, a young maiden rejoices that her vulva has grown hair. Clay models of vulvas were discovered in the temple of Inanna at Ashur. Some major Hindu traditions such as Shaktism, a goddess-centered tradition, revere the vulva and vagina under the name yoni. The goddess as Devi is worshipped as the supreme deity. The yoni is a representation of the female deity and is found in many temples as a focus for prayer and offerings. It is also represented symbolically as a mudra in spiritual practices, including yoga.
Sheela na gigs are figurative carvings of naked women displaying an exaggerated vulva. They are found in ancient and medieval European contexts. They are displayed on many churches, but their origin and significance are debated. A main line of thinking is that they were used to ward off evil spirits. Another view is that the sheela na gig was a divine assistant in childbirth. Starr Goode explores the image and possible meanings of the Sheela na gig and Baubo images in particular, but writes also about the recurring image worldwide. Through hundreds of photographs, she demonstrates that the image of a female displaying her vulva is not specific to European religious art or architecture, but that similar images are found in the visual arts and in mythical narratives of goddesses and heroines parting their thighs to reveal what she calls "sacred powers". Her theory is that "the image is so rooted in our psyches that it seems as if the icon is the original cosmological center of the human imagination". L'Origine du monde (Origin of the World), painted by Gustave Courbet in 1866, was an early Realist painting of a vulva that was only exhibited many years later. The painting was commissioned by the Ottoman diplomat Halil Şerif Paşa. The woman used as the model for the painting was probably Halil's lover Constance Quéniaux. However, another potential model is Marie-Anne Detourbay, who was also a lover of Halil Şerif Paşa. Japanese sculptor and manga artist Megumi Igarashi has focused much of her work on painting and modelling vulvas and vulva-themed works. She has used molds to create dioramas – three-dimensional models of her vulva – with the hope of demystifying the female genitals. An art installation called The Dinner Party by feminist artist Judy Chicago portrays a symbolic history of famous women. The dinner plates each depict an elaborate vulval form and they are arranged in a triangular vulva shape.
Another installation was made by British artist Jamie McCartney, who used the casts of four hundred vulvas to create The Great Wall of Vagina in 2011. The casts are life-size. Explanations written by the project's sexual health adviser accompany them. The purpose of the artist was to "address some of the stigmas and misconceptions that are commonplace". Other animals As a rule, only the external female genitals of placental mammals are referred to as the "vulva", although the term is also used in the scientific literature for functionally comparable structures in other animal groups such as marsupials and roundworms (Nematoda). For comparison, birds, reptiles, amphibians, and monotremes have a cloaca; an organ system like the vulva does not exist in these groups. The vulva of a placental consists of the following, along with its variations: Clitoris: made up of the root, glans and body, and usually retracted into a prepuce. Inside the clitoris of many non-human placentals is the baubellum, a small bone that possibly has origins in copulation. In horses and dogs, the clitoris is contained in the clitoral fossa, which is a small pouch of tissue. Labia: a small, thin pair of lip-like structures that protect the vestibule. They are known as the labia vulvae in carnivorans and ungulates and as the labia minora in primates. The labia majora only exist in primates (including humans). Afrotherians do not have distinguishable labia. Vestibule/vulvar opening: in humans, other great apes, and some rodents, the vestibule is a flat and short external space that contains separate urethral and vaginal openings. In most other placentals, the urethra and vagina join as an internal vestibule (urogenital sinus), hence both urine and offspring exit through an orifice called the vulvar opening. During estrus, the clitoris of a mare (female horse) everts as the labia open and close. This is colloquially known as "winking".
Throughout the menstrual cycle, some female primates' vulvar and anal regions will swell (sexual swelling) to attract a male, though the fundamental reason for this is debated. The vulva of a spotted hyena has a large clitoris known as a pseudo-penis, used for copulating, giving birth and urinating, as well as fused labia (pseudo-scrotum). This can make it difficult to correctly sex the species.
Biology and health sciences
Human anatomy
Health
25441497
https://en.wikipedia.org/wiki/Stokes%27%20theorem
Stokes' theorem
Stokes' theorem, also known as the Kelvin–Stokes theorem after Lord Kelvin and George Stokes, the fundamental theorem for curls, or simply the curl theorem, is a theorem in vector calculus on $\mathbb{R}^3$. Given a vector field, the theorem relates the integral of the curl of the vector field over some surface, to the line integral of the vector field around the boundary of the surface. The classical theorem of Stokes can be stated in one sentence: The line integral of a vector field over a loop is equal to the surface integral of its curl over the enclosed surface. Stokes' theorem is a special case of the generalized Stokes theorem. In particular, a vector field on $\mathbb{R}^3$ can be considered as a 1-form, in which case its curl is its exterior derivative, a 2-form. Theorem Let $\Sigma$ be a smooth oriented surface in $\mathbb{R}^3$ with boundary $\partial\Sigma$. If a vector field $\mathbf{F}(x,y,z) = (F_x(x,y,z), F_y(x,y,z), F_z(x,y,z))$ is defined and has continuous first order partial derivatives in a region containing $\Sigma$, then
$$\iint_\Sigma (\nabla \times \mathbf{F}) \cdot \mathrm{d}\mathbf{\Sigma} = \oint_{\partial\Sigma} \mathbf{F} \cdot \mathrm{d}\mathbf{\Gamma}.$$
More explicitly, the equality says that
$$\iint_\Sigma \left( \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right) \mathrm{d}y\,\mathrm{d}z + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right) \mathrm{d}z\,\mathrm{d}x + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right) \mathrm{d}x\,\mathrm{d}y \right) = \oint_{\partial\Sigma} \left( F_x\,\mathrm{d}x + F_y\,\mathrm{d}y + F_z\,\mathrm{d}z \right).$$
The main challenge in a precise statement of Stokes' theorem is in defining the notion of a boundary. Surfaces such as the Koch snowflake, for example, are well known not to exhibit a Riemann-integrable boundary, and the notion of surface measure in Lebesgue theory cannot be defined for a non-Lipschitz surface. One (advanced) technique is to pass to a weak formulation and then apply the machinery of geometric measure theory; for that approach see the coarea formula. In this article, we instead use a more elementary definition, based on the fact that a boundary can be discerned for full-dimensional subsets of $\mathbb{R}^2$. A more detailed statement will be given for subsequent discussions. Let $\gamma : [a, b] \to \mathbb{R}^2$ be a piecewise smooth Jordan plane curve. The Jordan curve theorem implies that $\gamma$ divides $\mathbb{R}^2$ into two components, a compact one and another that is non-compact. Let $D$ denote the compact part; then $D$ is bounded by $\gamma$. It now suffices to transfer this notion of boundary along a continuous map to our surface in $\mathbb{R}^3$.
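The equality of the surface integral of the curl and the boundary line integral can be checked on a concrete example. The sketch below (a minimal illustration assuming SymPy is available; the field F and the choice of the upper unit hemisphere are arbitrary) computes both sides symbolically and should find that they agree:

```python
import sympy as sp

x, y, z, u, v, t = sp.symbols('x y z u v t', real=True)

def curl(F):
    """Curl of a 3-component field in Cartesian coordinates."""
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

# An arbitrary smooth vector field, chosen only for illustration
F = sp.Matrix([-y, x * z, x + y])
C = curl(F)

# Surface: upper unit hemisphere, parametrized with outward normal
psi = sp.Matrix([sp.sin(u) * sp.cos(v), sp.sin(u) * sp.sin(v), sp.cos(u)])
normal = psi.diff(u).cross(psi.diff(v))  # psi_u x psi_v points outward here
C_on_surf = C.subs({x: psi[0], y: psi[1], z: psi[2]}, simultaneous=True)
surface_integral = sp.integrate(C_on_surf.dot(normal),
                                (v, 0, 2 * sp.pi), (u, 0, sp.pi / 2))

# Boundary: the equator, counterclockwise as seen from +z
r = sp.Matrix([sp.cos(t), sp.sin(t), 0])
F_on_curve = F.subs({x: r[0], y: r[1], z: r[2]}, simultaneous=True)
line_integral = sp.integrate(F_on_curve.dot(r.diff(t)), (t, 0, 2 * sp.pi))

print(surface_integral, line_integral)  # the two values coincide
```

The orientation convention matters: the boundary must be traversed so that the surface lies to its left when viewed from the side the normal points to; reversing either the normal or the traversal flips the sign of one side.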
But we already have such a map: the parametrization of $\Sigma$. Suppose $\psi : D \to \mathbb{R}^3$ is piecewise smooth in a neighborhood of $D$, with $\Sigma = \psi(D)$. If $\Gamma$ is the space curve defined by $\Gamma(t) = \psi(\gamma(t))$, then we call $\Gamma$ the boundary of $\Sigma$, written $\partial\Sigma$. With the above notation, if $\mathbf{F}$ is any smooth vector field on $\mathbb{R}^3$, then
$$\oint_{\partial\Sigma} \mathbf{F} \cdot \mathrm{d}\mathbf{\Gamma} = \iint_\Sigma (\nabla \times \mathbf{F}) \cdot \mathrm{d}\mathbf{\Sigma}.$$
Here, the "$\cdot$" represents the dot product in $\mathbb{R}^3$. Special case of a more general theorem Stokes' theorem can be viewed as a special case of the following identity:
$$\oint_{\partial\Sigma} \left( \mathbf{F} \cdot \mathrm{d}\mathbf{\Gamma} \right) g = \iint_\Sigma \left( \mathrm{d}\mathbf{\Sigma} \cdot \left( \nabla \times \mathbf{F} - \mathbf{F} \times \nabla \right) \right) g,$$
where $g$ is any smooth vector or scalar field in $\mathbb{R}^3$. When $g$ is a uniform scalar field, the standard Stokes' theorem is recovered. Proof The proof of the theorem consists of 4 steps. We assume Green's theorem, so what is of concern is how to boil down the three-dimensional complicated problem (Stokes' theorem) to a two-dimensional rudimentary problem (Green's theorem). When proving this theorem, mathematicians normally deduce it as a special case of a more general result, which is stated in terms of differential forms, and proved using more sophisticated machinery. While powerful, these techniques require substantial background, so the proof below avoids them, and does not presuppose any knowledge beyond a familiarity with basic vector calculus and linear algebra. At the end of this section, a short alternative proof of Stokes' theorem is given, as a corollary of the generalized Stokes' theorem. Elementary proof First step of the elementary proof (parametrization of integral) As in the previous section, we reduce the dimension by using the natural parametrization of the surface. Let $\psi$ and $\gamma$ be as in that section, and note that by change of variables
$$\oint_{\partial\Sigma} \mathbf{F}(\mathbf{x}) \cdot \mathrm{d}\mathbf{\Gamma} = \oint_\gamma \mathbf{F}(\psi(\mathbf{y})) \cdot \mathrm{d}\psi(\mathbf{y}) = \oint_\gamma \mathbf{F}(\psi(\mathbf{y})) \cdot J_{\mathbf{y}}(\psi)\, \mathrm{d}\gamma,$$
where $J_{\mathbf{y}}(\psi)$ stands for the Jacobian matrix of $\psi$ at $\mathbf{y}$. Now let $\{\mathbf{e}_u, \mathbf{e}_v\}$ be an orthonormal basis in the coordinate directions of $\mathbb{R}^2$.
Recognizing that the columns of are precisely the partial derivatives of at , we can expand the previous equation in coordinates as Second step in the elementary proof (defining the pullback) The previous step suggests we define the function Now, if the scalar-valued functions and are defined as follows, then, This is the pullback of along , and, by the above, it satisfies We have successfully reduced one side of Stokes' theorem to a 2-dimensional formula; we now turn to the other side. Third step of the elementary proof (second equation) First, calculate the partial derivatives appearing in Green's theorem, via the product rule: Conveniently, the second term vanishes in the difference, by equality of mixed partials. So, But now consider the matrix in that quadratic form—that is, . We claim this matrix in fact describes a cross product. Here the superscript "" represents the transposition of matrices. To be precise, let be an arbitrary matrix and let Note that is linear, so it is determined by its action on basis elements. But by direct calculation Here, represents an orthonormal basis in the coordinate directions of . Thus for any . Substituting for , we obtain We can now recognize the difference of partials as a (scalar) triple product: On the other hand, the definition of a surface integral also includes a triple product—the very same one! So, we obtain Fourth step of the elementary proof (reduction to Green's theorem) Combining the second and third steps and then applying Green's theorem completes the proof. Green's theorem asserts the following: for any region D bounded by the Jordan closed curve γ and two scalar-valued smooth functions defined on D; We can substitute the conclusion of Step 2 into the left-hand side of Green's theorem above, and substitute the conclusion of Step 3 into the right-hand side. Q.E.D.
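Since the formulas in the proof above were stripped in extraction, a numerical sanity check can stand in for a worked example. The sketch below (plain Python; the names `F`, `curl_F`, `flux`, and `circ` are my own choices, not from the source) verifies Stokes' theorem for the field F = (−y, x, z) on the unit disk in the plane z = 0: the flux of the numerically computed curl through the disk and the circulation of F around the unit circle should both come out close to 2π.

```python
import math

# A sample smooth vector field F : R^3 -> R^3 (chosen for illustration).
def F(x, y, z):
    return (-y, x, z)

# Numerical curl of F via central differences.
def curl_F(x, y, z, h=1e-5):
    def partial(comp, var):
        args = [x, y, z]
        args[var] += h
        plus = F(*args)[comp]
        args[var] -= 2 * h
        minus = F(*args)[comp]
        return (plus - minus) / (2 * h)
    return (partial(2, 1) - partial(1, 2),   # dFz/dy - dFy/dz
            partial(0, 2) - partial(2, 0),   # dFx/dz - dFz/dx
            partial(1, 0) - partial(0, 1))   # dFy/dx - dFx/dy

# Surface integral of (curl F) . dS over the unit disk in the plane z = 0
# (oriented by +z), using the midpoint rule in polar coordinates.
n_r = n_t = 200
flux = 0.0
for i in range(n_r):
    r = (i + 0.5) / n_r
    for j in range(n_t):
        t = 2 * math.pi * (j + 0.5) / n_t
        cz = curl_F(r * math.cos(t), r * math.sin(t), 0.0)[2]
        flux += cz * r * (1.0 / n_r) * (2 * math.pi / n_t)

# Line integral of F . dr around the boundary circle, counterclockwise.
n = 2000
circ = 0.0
for j in range(n):
    t = 2 * math.pi * (j + 0.5) / n
    fx, fy, _ = F(math.cos(t), math.sin(t), 0.0)
    # dr/dt = (-sin t, cos t, 0)
    circ += (fx * (-math.sin(t)) + fy * math.cos(t)) * (2 * math.pi / n)

# Stokes' theorem predicts flux == circ; both should be close to 2*pi here.
```

For this particular field the curl is (0, 0, 2), so both integrals equal 2π exactly; the quadrature recovers this to high accuracy.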
Proof via differential forms The functions can be identified with the differential 1-forms on via the map Write the differential 1-form associated to a function as . Then one can calculate that where is the Hodge star and is the exterior derivative. Thus, by generalized Stokes' theorem, Applications Irrotational fields In this section, we will discuss the irrotational field (lamellar vector field) based on Stokes' theorem. Definition 2-1 (irrotational field). A smooth vector field on an open is irrotational (lamellar vector field) if . This concept is very fundamental in mechanics; as we'll prove later, if is irrotational and the domain of is simply connected, then is a conservative vector field. Helmholtz's theorem In this section, we will introduce a theorem that is derived from Stokes' theorem and characterizes vortex-free vector fields. In classical mechanics and fluid dynamics it is called Helmholtz's theorem. Theorem 2-1 (Helmholtz's theorem in fluid dynamics). Let be an open subset with a lamellar vector field and let be piecewise smooth loops. Suppose there is a function such that [TLH0] is piecewise smooth, [TLH1] for all , [TLH2] for all , [TLH3] for all . Then, Some textbooks such as Lawrence describe the relationship between and stated in theorem 2-1 as "homotopic", and call the function a "homotopy between and ". However, "homotopic" and "homotopy" in the above-mentioned sense are stronger than the typical definitions, which omit condition [TLH3]. So from now on we refer to homotopy (homotope) in the sense of theorem 2-1 as a tubular homotopy (resp. tubular-homotopic). Proof of Helmholtz's theorem In what follows, we abuse notation and use "" for concatenation of paths in the fundamental groupoid and "" for reversing the orientation of a path. Let , and split into four line segments so that By our assumption that and are piecewise smooth homotopic, there is a piecewise smooth homotopy Let be the image of under .
That follows immediately from Stokes' theorem. is lamellar, so the left side vanishes, i.e. As is tubular (satisfying [TLH3]), and . Thus the line integrals along and cancel, leaving On the other hand, , , so that the desired equality follows almost immediately. Conservative forces The Helmholtz theorem above explains why the work done by a conservative force in changing an object's position is path independent. First, we introduce Lemma 2-2, which is both a corollary and a special case of Helmholtz's theorem. Lemma 2-2. Let be an open subset, with a lamellar vector field and a piecewise smooth loop . Fix a point . If there is a homotopy such that [SC0] is piecewise smooth, [SC1] for all , [SC2] for all , [SC3] for all . Then, Lemma 2-2 above follows from theorem 2-1. In Lemma 2-2, the existence of satisfying [SC0] to [SC3] is crucial; the question is whether such a homotopy can be taken for arbitrary loops. If is simply connected, such exists. The definition of simply connected space follows: Definition 2-2 (simply connected space). Let be non-empty and path-connected. is called simply connected if and only if for any continuous loop, there exists a continuous tubular homotopy from to a fixed point ; that is, [SC0'] is continuous, [SC1] for all , [SC2] for all , [SC3] for all . The claim that "for a conservative force, the work done in changing an object's position is path independent" might seem to follow immediately if M is simply connected. However, recall that simple connectedness only guarantees the existence of a continuous homotopy satisfying [SC1-3]; we seek a piecewise smooth homotopy satisfying those conditions instead. Fortunately, the gap in regularity is resolved by Whitney's approximation theorem. In other words, the possibility of finding a continuous homotopy that cannot be integrated over is eliminated with the benefit of higher mathematics. We thus obtain the following theorem. Theorem 2-2.
Let be open and simply connected with an irrotational vector field . For all piecewise smooth loops Maxwell's equations In the physics of electromagnetism, Stokes' theorem provides the justification for the equivalence of the differential form of the Maxwell–Faraday equation and the Maxwell–Ampère equation and the integral form of these equations. For Faraday's law, Stokes' theorem is applied to the electric field, : For Ampère's law, Stokes' theorem is applied to the magnetic field, :
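The displayed equations for Faraday's and Ampère's laws were stripped in extraction. As a hedged reconstruction in standard SI notation (E the electric field, B the magnetic field, J the current density, Σ a fixed surface with boundary ∂Σ), applying Stokes' theorem to each curl equation converts the differential form into the integral form:

```latex
% Maxwell–Faraday equation: differential and integral forms
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
\quad\Longleftrightarrow\quad
\oint_{\partial\Sigma} \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell}
  = -\frac{\mathrm{d}}{\mathrm{d}t} \iint_{\Sigma} \mathbf{B} \cdot \mathrm{d}\mathbf{S}

% Maxwell–Ampère equation (with the displacement-current term), in vacuum
\nabla \times \mathbf{B}
  = \mu_0 \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right)
\quad\Longleftrightarrow\quad
\oint_{\partial\Sigma} \mathbf{B} \cdot \mathrm{d}\boldsymbol{\ell}
  = \mu_0 \iint_{\Sigma} \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) \cdot \mathrm{d}\mathbf{S}
```

In each case Stokes' theorem turns the surface integral of the curl on the left of the integral form into the circulation around the boundary, which is what justifies the equivalence stated above.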
Mathematics
Multivariable and vector calculus
null
26924511
https://en.wikipedia.org/wiki/Wedgefish
Wedgefish
Wedgefishes are rays of the family Rhinidae, comprising eleven species in three genera. Classified in the order Rhinopristiformes along with guitarfishes and sawfishes, they have also been known as giant guitarfishes or sharkfin guitarfishes. Taxonomy
Rhina Bloch & Schneider, 1801
 Rhina ancylostoma Bloch & Schneider, 1801 (Shark ray)
Rhynchobatus J. P. Müller & Henle, 1837
 Rhynchobatus australiae Whitley, 1939 (Bottlenose wedgefish)
 Rhynchobatus cooki Last, Kyne & Compagno, 2016 (Roughnose wedgefish)
 Rhynchobatus djiddensis (Forsskål, 1775) (Whitespotted wedgefish)
 Rhynchobatus immaculatus Last, H.-C. Ho & R.-R. Chen, 2013 (Taiwanese wedgefish)
 Rhynchobatus laevis (Bloch & Schneider, 1801) (Smoothnose wedgefish)
 Rhynchobatus luebberti Ehrenbaum, 1915 (African wedgefish)
 Rhynchobatus mononoke Koeda, Itou, Yamada & Motomura, 2020 (Japanese wedgefish)
 Rhynchobatus palpebratus Compagno & Last, 2008 (Eyebrow wedgefish)
 Rhynchobatus springeri Compagno & Last, 2010 (Broadnose wedgefish)
Rhynchorhina Séret & Naylor, 2016
 Rhynchorhina mauritaniensis Séret & Naylor, 2016 (False shark ray)
Biology and health sciences
Batoidea
Animals
21051888
https://en.wikipedia.org/wiki/Parrot
Parrot
Parrots (Psittaciformes), also known as psittacines, are birds with a strong curved beak, upright stance, and clawed feet. They are classified in four families that contain roughly 410 species in 101 genera, found mostly in tropical and subtropical regions. The four families are the Psittaculidae (Old World parrots), Psittacidae (African and New World parrots), Cacatuidae (cockatoos), and Strigopidae (New Zealand parrots). One-third of all parrot species are threatened by extinction, with a higher aggregate extinction risk (IUCN Red List Index) than any other comparable bird group. Parrots have a generally pantropical distribution with several species inhabiting temperate regions as well. The greatest diversity of parrots is in South America and Australasia. Parrots, along with ravens, crows, jays, and magpies, are among the most intelligent birds, and the ability of some species to imitate human speech enhances their popularity as pets. They form the most variably sized bird order in terms of length; many are vividly coloured and some, multi-coloured. Most parrots exhibit little or no sexual dimorphism in the visual spectrum. The most important components of most parrots' diets are seeds, nuts, fruit, buds, and other plant material. A few species sometimes eat animals and carrion, while the lories and lorikeets are specialised for feeding on floral nectar and soft fruits. Almost all parrots nest in tree hollows (or nest boxes in captivity), and lay white eggs from which hatch altricial (helpless) young. Trapping wild parrots for the pet trade, as well as hunting, habitat loss, and competition from invasive species, has diminished wild populations, with parrots being subjected to more exploitation than any other group of wild birds. As of 2021, about 50 million parrots (half of all parrots) live in captivity, with the vast majority of these living as pets in people's homes.
Measures taken to conserve the habitats of some high-profile charismatic species have also protected many of the less charismatic species living in the same ecosystems. Parrots are the only creatures that display true tripedalism, using their necks and beaks as limbs with propulsive forces equal to or greater than those forces generated by the forelimbs of primates when climbing vertical surfaces. They can travel with cyclical tripedal gaits when climbing. Taxonomy Origins and evolution Psittaciform diversity in South America and Australasia suggests that the order may have evolved in Gondwana, centred in Australasia. The scarcity of parrots in the fossil record, however, presents difficulties in confirming the hypothesis. There is currently a higher number of fossil remains from the northern hemisphere in the early Cenozoic. Molecular studies suggest that parrots evolved approximately 59 million years ago (Mya) (range 66–51 Mya) in Gondwana. The Neotropical Parrots are monophyletic, and the three major clades originated about 50 Mya (range 57–41 Mya). A single fragment from a large lower bill (UCMP 143274), found in deposits from the Lance Creek Formation in Niobrara County, Wyoming, had been thought to be the oldest parrot fossil and is presumed to have originated from the Late Cretaceous period, which makes it about 70 million years old. However, other studies suggest that this fossil is not from a bird, but from a caenagnathid oviraptorosaur (a non-avian dinosaur with a birdlike beak), as several details of the fossil used to support its identity as a parrot are not actually exclusive to parrots, and it is dissimilar to the earliest-known unequivocal parrot fossils. It is generally assumed that the Psittaciformes were present during the Cretaceous–Paleogene extinction event (K-Pg extinction), 66 mya. They were probably generalised arboreal birds, and did not have the specialised crushing bills of modern species. 
Genomic analysis provides strong evidence that parrots are the sister group of passerines, forming the clade Psittacopasserae, which is the sister group of the falcons. The first uncontroversial parrot fossils date to tropical Eocene Europe around 50 mya. Initially, a neoavian named Mopsitta tanta, uncovered in Denmark's Early Eocene Fur Formation and dated to 54 mya, was assigned to the Psittaciformes. However, the rather nondescript bone is not unequivocally psittaciform, and it may rather belong to the ibis genus Rhynchaeites, whose fossil legs were found in the same deposits. Several fairly complete skeletons of parrot-like birds have been found in England and Germany. These are probably not transitional fossils between ancestral and modern parrots, but rather lineages that evolved parallel to true parrots and cockatoos:
Psittacopes
Serudaptus
Halcyornithidae
 Cyrilavis
 Halcyornis
 Pulchrapollia
 Pseudasturides
Vastanavidae
 Vastanavis
Quercypsittidae
 Quercypsitta
Messelasturidae
 Messelastur
 Tynskya
The earliest records of modern parrots date to around 23–20 mya. The fossil record—mainly from Europe—consists of bones clearly recognisable as belonging to anatomically modern parrots. The Southern Hemisphere contains no known parrot-like remains earlier than the Early Miocene around 20 mya. Etymology The name 'Psittaciformes' comes from the ancient Greek for parrot, whose origin is unclear. Ctesias (5th century BCE) recorded the name after the Indian name for a bird, most likely a parakeet (now placed in the genus Psittacula). Pliny the Elder (23/24–79 CE) in his Natural History (book 10, chapter 58) noted that the Indians called the bird "siptaces"; however, no matching Indian name has been traced. Popinjay is an older term for parrots, first used in English in the 1500s.
Phylogeny Molecular phylogenetic studies have shown that Psittaciformes form a monophyletic clade that is sister to the Passeriformes: The time calibrated phylogeny indicates that the Australaves diverged around 65 Ma (million years ago) and the Psittaciformes diverged from the Passeriformes around 62 Ma. Most taxonomists now divide Psittaciformes into four families: Strigopidae (New Zealand parrots), Cacatuidae (Cockatoos), Psittacidae (African and New World parrots) and Psittaculidae (Old World parrots). In 2012, Leo Joseph and collaborators proposed that the parrots should be divided into six families. The New Zealand parrots in the genus Nestor were placed in a separate family Nestoridae and the two basal genera in the family Psittaculidae (Psittrichas and Coracopsis) were placed in a separate family Psittrichasiidae. The two additional families have not been recognised by taxonomists involved in curating lists of world birds and instead only four families are recognised. The following cladogram shows the phylogenetic relationships between the four families. The species numbers are taken from the list maintained by Frank Gill, Pamela Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC), now the International Ornithologists' Union. The Psittaciformes comprise three main lineages: Strigopoidea, Psittacoidea and Cacatuoidea. The Strigopoidea were considered part of the Psittacoidea, but the former is now placed at the base of the parrot tree next to the remaining members of the Psittacoidea, as well as all members of the Cacatuoidea. The Cacatuoidea are quite distinct, having a movable head crest, a different arrangement of the carotid arteries, a gall bladder, differences in the skull bones, and lack the Dyck texture feathers that—in the Psittacidae—scatter light to produce the vibrant colours of so many parrots. 
Colourful feathers with high levels of psittacofulvin resist the feather-degrading bacterium Bacillus licheniformis better than white ones. Lorikeets were previously regarded as a third family, Loriidae, but are now considered a tribe (Loriini) within the subfamily Loriinae, family Psittaculidae. The two other tribes in the subfamily are the closely related fig parrots (two genera in the tribe Cyclopsittini) and budgerigar (tribe Melopsittacini). Systematics The order Psittaciformes consists of four families containing roughly 410 species belonging to 101 genera.
Superfamily Strigopoidea: New Zealand parrots
 Family Strigopidae
  Subfamily Nestorinae: two genera with two living (kea and New Zealand kākā) and several extinct species of the New Zealand region
  Subfamily Strigopinae: the flightless, critically endangered kākāpō of New Zealand
Superfamily Cacatuoidea: cockatoos
 Family Cacatuidae
  Subfamily Nymphicinae: one genus with one species, the cockatiel
  Subfamily Calyptorhynchinae: the black cockatoos
  Subfamily Cacatuinae
   Tribe Microglossini: one genus with one species, the black palm cockatoo
   Tribe Cacatuini: four genera of white, pink, and grey species
Superfamily Psittacoidea: true parrots
 Family Psittacidae
  Subfamily Psittacinae: two African genera, Psittacus and Poicephalus
  Subfamily Arinae
   Tribe Arini: 18 genera
   Tribe Androglossini: seven genera
Family Psittaculidae
 Subfamily Psittrichasinae: two genera, Psittrichas (Pesquet's parrot), Coracopsis
 Subfamily Platycercinae
  Tribe Pezoporini: ground parrots and allies
  Tribe Platycercini: broad-tailed parrots
 Subfamily Psittacellinae: one genus (Psittacella) with several species
 Subfamily Loriinae
  Tribe Loriini: lories and lorikeets
  Tribe Melopsittacini: one genus with one species, the budgerigar
  Tribe Cyclopsittini: fig parrots
 Subfamily Agapornithinae: three genera
 Subfamily Psittaculinae
  Tribe Polytelini: three genera
  Tribe Psittaculini: Asian psittacines
  Tribe Micropsittini: pygmy parrots
Morphology Living species range in size from the buff-faced pygmy parrot, at under in weight and in length, to the hyacinth macaw, at in length, and the kākāpō, at in weight. Among the superfamilies, the three extant Strigopoidea species are all large parrots, and the cockatoos tend to be large birds, as well. The Psittacoidea parrots are far more variable, ranging the full spectrum of sizes shown by the family. The most obvious physical characteristic is the strong, curved, broad bill. The upper mandible is prominent, curves downward, and comes to a point. It is not fused to the skull, which allows it to move independently, and contributes to the tremendous biting pressure the birds are able to exert. A large macaw, for example, has a bite force of , close to that of a large dog. The lower mandible is shorter, with a sharp, upward-facing cutting edge, which moves against the flat part of the upper mandible in an anvil-like fashion. Touch receptors occur along the inner edges of the keratinised bill, which are collectively known as the "bill tip organ", allowing for highly dexterous manipulations. Seed-eating parrots have a strong tongue (containing similar touch receptors to those in the bill tip organ), which helps to manipulate seeds or position nuts in the bill so that the mandibles can apply an appropriate cracking force.
The head is large, with eyes positioned high and laterally in the skull, so the visual field of parrots is unlike that of any other bird. Without turning its head, a parrot can see from just below its bill tip, all above its head, and quite far behind its head. Parrots also have quite a wide frontal binocular field for a bird, although this is nowhere near as large as primate binocular visual fields. Unlike humans, the vision of parrots is also sensitive to ultraviolet light. Parrots have strong zygodactyl feet (two toes facing forward and two back) with sharp, elongated claws, which are used for climbing and swinging. Most species are capable of using their feet to manipulate food and other objects with a high degree of dexterity, in a similar manner to a human using their hands. A study conducted with Australian parrots has demonstrated that they exhibit "handedness", a distinct preference with regards to the foot used to pick up food, with adult parrots being almost exclusively "left-footed" or "right-footed", and with the prevalence of each preference within the population varying by species. Cockatoo species have a mobile crest of feathers on the top of their heads, which they can raise for display, and retract. No other parrots can do so, but the Pacific lorikeets in the genera Vini and Phigys can ruffle the feathers of the crown and nape, and the red-fan parrot (or hawk-headed parrot) has a prominent feather neck frill that it can raise and lower at will. The predominant colour of plumage in parrots is green, though most species have some red or another colour in small quantities. Cockatoos, however, are predominantly black or white with some red, pink, or yellow. Strong sexual dimorphism in plumage is not typical among parrots, with some notable exceptions, the most striking being the eclectus parrot. However, it has been shown that some parrot species exhibit sexually dimorphic plumage in the ultraviolet spectrum, normally invisible to humans.
Distribution and habitat Parrots are found on all tropical and subtropical continents and regions including Australia and Oceania, South Asia, Southeast Asia, Central America, South America, and Africa. Some Caribbean and Pacific islands are home to endemic species. By far the greatest number of parrot species come from Australasia and South America. The lories and lorikeets range from Sulawesi and the Philippines in the north to Australia and across the Pacific as far as French Polynesia, with the greatest diversity being found in and around New Guinea. The subfamily Arinae encompasses all the neotropical parrots, including the amazons, macaws, and conures, and ranges from northern Mexico and the Bahamas to Tierra del Fuego in the southern tip of South America. The pygmy parrots, tribe Micropsittini, form a small genus restricted to New Guinea and the Solomon Islands. The superfamily Strigopoidea contains three living species of aberrant parrots from New Zealand. The broad-tailed parrots, subfamily Platycercinae, are restricted to Australia, New Zealand, and the Pacific islands as far eastwards as Fiji. The true parrot superfamily, Psittacoidea, includes a range of species from Australia and New Guinea to South Asia and Africa. The centre of cockatoo biodiversity is Australia and New Guinea, although some species reach the Solomon Islands (and one formerly occurred in New Caledonia), Wallacea and the Philippines. Several parrots inhabit the cool, temperate regions of South America and New Zealand. Three species—the thick-billed parrot, the green parakeet, and the now-extinct Carolina parakeet—have lived as far north as the southern United States. Many parrots, especially monk parakeets, have been introduced to areas with temperate climates, and have established stable populations in parts of the United States (including New York City), the United Kingdom, Belgium, Spain, and Greece. 
These birds can be quite successful in introduced areas, such as the non-native population of red-crowned amazons in the U.S. which may rival that of their native Mexico. The only parrot to inhabit alpine climates is the kea, which is endemic to the Southern Alps mountain range on New Zealand's South Island. Few parrots are wholly sedentary or fully migratory. Most fall somewhere between the two extremes, making poorly understood regional movements, with some adopting an entirely nomadic lifestyle. Only three species are migratory – the orange-bellied, blue-winged and swift parrots. Behaviour Numerous challenges are found in studying wild parrots, as they are difficult to catch and once caught, they are difficult to mark. Most wild bird studies rely on banding or wing tagging, but parrots chew off such attachments. Parrots also tend to range widely, and consequently many gaps occur in knowledge of their behaviour. Some parrots have a strong, direct flight. Most species spend much of their time perched or climbing in tree canopies. They often use their bills for climbing by gripping or hooking on branches and other supports. Researchers at the New York Institute of Technology published findings that showed parrots used their beaks as a "third limb" to propel themselves. On the ground, parrots often walk with a rolling gait. Diet The diet of parrots consists of seeds, fruit, nectar, pollen, buds, and sometimes arthropods and other animal prey. The most important of these for most true parrots and cockatoos are seeds; the large and powerful bill has evolved to open and consume tough seeds. All true parrots, except the Pesquet's parrot, employ the same method to obtain the seed from the husk; the seed is held between the mandibles and the lower mandible crushes the husk, whereupon the seed is rotated in the bill and the remaining husk is removed. They may use their foot sometimes to hold large seeds in place. 
Parrots are granivores rather than seed dispersers, and in many cases where they are seen consuming fruit, they are only eating the fruit to get at the seed. As seeds often have poisons that protect them, parrots carefully remove seed coats and other chemically defended fruit parts prior to ingestion. Many species in the Americas, Africa, and Papua New Guinea consume clay, which releases minerals and absorbs toxic compounds from the gut. Geographical range and body size, rather than phylogeny, predominantly explain the diet composition of Neotropical parrots. Lories, lorikeets, hanging parrots, and swift parrots are primarily nectar and pollen consumers, and have tongues with brush tips to collect these foods, as well as some specialised gut adaptations. Many other species also consume nectar when it becomes available. Some parrot species prey on animals, especially invertebrate larvae. Golden-winged parakeets prey on water snails, the New Zealand kea can, though uncommonly, hunt adult sheep, and the Antipodes parakeet, another New Zealand parrot, enters the burrows of nesting grey-backed storm petrels and kills the incubating adults. Some cockatoos and the New Zealand kākā excavate branches and wood to feed on grubs; the bulk of the yellow-tailed black cockatoo's diet is made up of insects. Some extinct parrots had carnivorous diets. Pseudasturids were probably cuckoo- or puffbird-like insectivores, while messelasturids were raptor-like carnivores. Breeding With few exceptions, parrots are monogamous breeders who nest in cavities and hold no territories other than their nesting sites. The pair bonds of the parrots and cockatoos are strong and a pair remains close during the nonbreeding season, even if they join larger flocks. As with many birds, pair bond formation is preceded by courtship displays; these are relatively simple in the case of cockatoos.
In Psittacidae parrots' common breeding displays, usually undertaken by the male, include slow, deliberate steps known as a "parade" or "stately walk" and the "eye-blaze", where the pupil of the eye constricts to reveal the edge of the iris. Allopreening is used by the pair to help maintain the bond. Cooperative breeding, in which birds other than the breeding pair help raise the young, is common in some bird families but extremely rare in parrots, and has only unambiguously been demonstrated in the El Oro parakeet and the golden parakeet (which may also exhibit polygamous, or group breeding, behaviour with multiple females contributing to the clutch). Only the monk parakeet and five species of lovebirds build nests in trees, and three Australian and New Zealand ground parrots nest on the ground. All other parrots and cockatoos nest in cavities, either tree hollows or cavities dug into cliffs, banks, or the ground. The use of holes in cliffs is more common in the Americas. Many species use termite nests, possibly to reduce the conspicuousness of the nesting site or to create a favourable microclimate. In most cases, both parents participate in nest excavation. The length of the burrow varies with species, but is usually between in length. The nests of cockatoos are often lined with sticks, wood chips, and other plant material. In the larger species of parrots and cockatoos, the availability of nesting hollows may be limited, leading to intense competition for them both within the species and between species, as well as with other bird families. The intensity of this competition can limit breeding success in some cases. Hollows created artificially by arborists have proven successful in boosting breeding rates in these areas. Some species are colonial, with the burrowing parrot nesting in colonies up to 70,000 strong. Coloniality is not as common in parrots as might be expected, possibly because most species adopt old cavities rather than excavate their own.
The eggs of parrots are white. In most species, the female undertakes all the incubation, although incubation is shared in cockatoos, the blue lorikeet, and the vernal hanging parrot. The female remains in the nest for almost all of the incubation period and is fed both by the male and during short breaks. Incubation varies from 17 to 35 days, with larger species having longer incubation periods. The newly born young are altricial, either lacking feathers or with sparse white down. The young spend three weeks to four months in the nest, depending on species, and may receive parental care for several months thereafter. As typical of K-selected species, the macaws and other larger parrot species have low reproductive rates. They require several years to reach maturity, produce one or very few young per year, and do not necessarily breed every year. Intelligence and learning Some grey parrots have shown an ability to associate words with their meanings and form simple sentences. Along with crows, ravens, and jays (family Corvidae), parrots are considered the most intelligent of birds. The brain-to-body size ratio of psittacines and corvines is comparable to that of higher primates. Instead of using the cerebral cortex like mammals, birds use the mediorostral HVC for cognition. Not only have parrots demonstrated intelligence through scientific testing of their language-using ability, but also some species of parrots, such as the kea, are also highly skilled at using tools and solving puzzles. Learning in early life is apparently important to all parrots, and much of that learning is social learning. Social interactions are often practised with siblings, and in several species, crèches are formed with several broods. Foraging behaviour is generally learnt from parents, and can be a very protracted affair. 
Generalists and specialists generally become independent of their parents much quicker than partly specialised species who may have to learn skills over long periods as various resources become seasonally available. Play forms a large part of learning in parrots; play can be solitary or social. Species may engage in play fights or wild flights to practice predator evasion. An absence of stimuli can delay the development of young birds, as demonstrated by a group of vasa parrots kept in tiny cages with domesticated chickens from the age of three months; at nine months, these birds still behaved in the same way as three-month-olds, but had adopted some chicken behaviour. In a similar fashion, captive birds in zoo collections or pets can, if deprived of stimuli, develop stereotyped and harmful behaviours like self-plucking. Aviculturists working with parrots have identified the need for environmental enrichment to keep parrots stimulated. Sound imitation and speech Many parrots can imitate human speech or other sounds. A study by scientist Irene Pepperberg suggested a high learning ability in a grey parrot named Alex. Alex was trained to use words to identify objects, describe them, count them, and even answer complex questions such as "How many red squares?" with over 80% accuracy. N'kisi, another grey parrot, has been shown to have a vocabulary of around a thousand words, and has displayed an ability to invent and use words in context in correct tenses. Parrots do not have vocal cords, so sound is accomplished by expelling air across the mouth of the trachea in the organ called the syrinx. Different sounds are produced by changing the depth and shape of the trachea. Grey parrots are known for their superior ability to imitate sounds and human speech, which has made them popular pets since ancient times. Although most parrot species are able to imitate, some of the amazon parrots are generally regarded as the next-best imitators and speakers of the parrot world. 
The question of why birds imitate remains open, but those that do often score very high on tests designed to measure problem-solving ability. Wild grey parrots have been observed imitating other birds. Besides imitation, it is possible that parrots could be trained to use simple communication tools, e.g., to request food or a favourite activity by pushing a button. Song Parrots are unusual among birds due to their learned vocalizations, a trait they share only with hummingbirds and songbirds. The syrinx (vocal organ) of parrots, which aids in their ability to produce song, is located at the base of the trachea and consists of two complex syringeal muscles that allow for the production of sound vibrations, and a pair of lateral tympaniform membranes that control sound frequency. The position of the syrinx in birds allows for directed air flow into the interclavicular air sacs according to air sac pressure, which in turn creates a higher and louder tone in birds' singing. Cooperation A 2011 study stated that some African grey parrots preferred to work alone, while others preferred to work together. When tested in pairs, the parrots knew the order of tasks and when they should act simultaneously, but had trouble exchanging roles. In groups of three, one parrot usually preferred to cooperate with one of the other two, though all three cooperated to solve the task. Longevity The heightened longevity of parrots appears to involve increased expression of several genomic features, including genes employed in cell division, cell cycle regulation, RNA binding/processing, repair of DNA damage, and oxidative stress response pathways. Relationship with humans Pets Parrots may not make good pets for most people because of their natural wild instincts, such as screaming and chewing. Although parrots can be very affectionate and cute when immature, they often become aggressive when mature (partly due to mishandling and poor training) and may bite, causing serious injury. 
For this reason, parrot rescue groups estimate that most parrots are surrendered and rehomed through at least five homes before reaching their permanent destinations or before dying prematurely from unintentional or intentional neglect and abuse. The parrots' ability to mimic human words and their bright colours and beauty prompt impulse buying from unsuspecting consumers. The domesticated budgerigar, a small parrot, is the most popular of all pet bird species. In 1992, the newspaper USA Today reported that there were 11 million pet birds in the United States alone, many of them parrots. Europeans kept birds matching the description of the rose-ringed parakeet (also called the ring-necked parrot), documented particularly in a first-century account by Pliny the Elder. As they have been prized for thousands of years for their beauty and ability to talk, they have also often been misunderstood. For example, author Wolfgang de Grahl says in his 1987 book The Grey Parrot that some importers had parrots drink only coffee while they were shipped by boat, believing that pure water was detrimental and that their actions would increase survival rates during shipping. Nowadays, it is commonly accepted that the caffeine in coffee is toxic to birds. Pet parrots may be kept in a cage or aviary, though generally, tame parrots should be allowed out regularly on a stand or gym. Depending on locality, parrots may be either wild-caught or captive-bred, though in most areas without native parrots, pet parrots are captive-bred. Parrot species that are commonly kept as pets include conures, macaws, amazon parrots, cockatoos, greys, lovebirds, cockatiels, budgerigars, caiques, parakeets, and Eclectus, Pionus, and Poicephalus species. Temperaments and personalities vary even within a species, just as with dog breeds. Grey parrots are thought to be excellent talkers, but not all grey parrots want to talk, though they have the capability to do so. 
Noise level, talking ability, cuddliness with people, and care needs can sometimes depend on how the bird is cared for and the attention it regularly receives. Parrots invariably require an enormous amount of attention, care, and intellectual stimulation to thrive, akin to that required by a three-year-old child, which many people find themselves unable to provide in the long term. Parrots that are bred for pets may be hand-fed or otherwise accustomed to interacting with people from a young age to help ensure they become tame and trusting. However, even when hand-fed, parrots can revert to biting and aggression during hormonal surges and if mishandled or neglected. Parrots are not low-maintenance pets; they require feeding, grooming, veterinary care, training, and environmental enrichment through the provision of toys, exercise, and social interaction (with other parrots or humans) for good health. Some large parrot species, including large cockatoos, amazons, and macaws, have very long lifespans, with 80 years being reported, and record ages of over 100. Small parrots, such as lovebirds, hanging parrots, and budgies, have shorter lifespans of up to 15–20 years. Some parrot species can be quite loud, and many of the larger parrots can be destructive and require a very large cage, and a regular supply of new toys, branches, or other items to chew up. The intelligence of parrots means they are quick to learn tricks and other behaviours—both good and bad—that get them what they want, such as attention or treats. The popularity, longevity, and intelligence of many of the larger kinds of pet parrots, together with their wild traits such as screaming, have led to many birds needing to be rehomed during the course of their long lifespans. A common problem is that large parrots that are cuddly and gentle as juveniles mature into intelligent, complex, often demanding adults who can outlive their owners, and can also become aggressive or even dangerous. 
As the number of homeless parrots increases, they are being euthanised like dogs and cats, and parrot adoption centres and sanctuaries are becoming more common. Parrots often do not do well in captivity; some become psychologically disturbed and develop repetitive behaviours, such as swaying and screaming, or become riddled with intense fear. Feather destruction and self-mutilation, although not commonly seen in the wild, occur often in captivity. Some owners have offered their pet parrots mobile apps for entertainment. Scientists Rébecca Kleinberger of Northeastern University and Ilyena Hirskyj-Douglas of the University of Glasgow performed a pilot study to tailor apps to parrots' preferences. The birds tended to use rapid tongue movements to interact with screens, possibly mimicking movements used to manipulate seeds. To motivate parrots participating in the pilot study, researchers used treats such as peanut butter, yoghurt, and pine nuts; one bird responded better to "cheering and praise". Trade The popularity of parrots as pets has led to a thriving—and often illegal—trade in the birds, and some species are now threatened with extinction. A combination of trapping of wild birds and damage to parrot habitats makes survival difficult or even impossible for some species of parrot. Importation of wild-caught parrots into the US and Europe is illegal; in the US, it was banned when the Wild Bird Conservation Act was passed in 1992. The scale of the problem can be seen in the Tony Silva case of 1996, in which a parrot expert and former director at Tenerife's Loro Parque (Europe's largest parrot park) was jailed in the United States for 82 months and fined $100,000 for smuggling hyacinth macaws (such birds command a very high price). Different nations have different methods of handling internal and international trade. Australia has banned the export of its native birds since 1960. 
In July 2007, following years of campaigning by NGOs and outbreaks of avian flu, the European Union (EU) permanently banned the importation of all wild birds. Before an earlier temporary ban, which started in late October 2005, the EU was importing about two million live birds a year, about 90% of the international market; hundreds of thousands of these were parrots. No national laws protect feral parrot populations in the U.S. Mexico has a licensing system for capturing and selling native birds. According to a 2007 report, 65,000 to 78,500 parrots are captured annually, but the mortality rate before reaching a buyer is over 75%, meaning around 50,000 to 60,000 birds die each year. Culture Parrots have featured in human writings, stories, art, humour, religion, and music for thousands of years; examples include Aesop's fable "The parrot and the cat", the remark "The parrot can speak, and yet is nothing more than a bird" in the Book of Rites of ancient China, and "The Merchant and the Parrot" in the Masnavi, written by Rumi of Persia in 1250. Recent books about parrots in human culture include Parrot Culture. From ancient times to the present, parrot feathers have been used in ceremonies and for decoration. Parrots also have a long history as pets, stretching back thousands of years, and were often kept as a symbol of royalty or wealth. Parrots are used as symbols of nations and nationalism. A parrot is found on the flag of Dominica, and two parrots appear on its coat of arms. The St. Vincent parrot is the national bird of St. Vincent and the Grenadines, a Caribbean nation. Sayings about parrots colour the modern English language. The verb "parrot" in the dictionary means "to repeat by rote". Clichés such as the British expression "sick as a parrot" also persist; although this refers to extreme disappointment rather than illness, it may originate from the disease of psittacosis, which can be passed to humans. 
The first occurrence of a related expression is in Aphra Behn's 1681 play The False Count. Fans of Jimmy Buffett are known as parrotheads. Parrots feature in many media. Magazines are devoted to parrots as pets, and to the conservation of parrots. Fictional media include Monty Python's "Dead Parrot sketch", Home Alone 3, and Rio; documentaries include The Wild Parrots of Telegraph Hill. Parrots have been a food source to several groups. Australian settlers made parrot pies, while the Māori hunted the kākāpō for its meat and feathers. Mythology As early as the ancient Chinese Shang dynasty (c. 1600 BCE – 1045 BCE), jade artifacts crafted in the shape of parrots were burned over wood along with other jade objects and livestock, likely as part of ritual sacrifices known as 'Liao' sacrifices, generating smoke offerings to the heavens, gods, and ancestors. This ritual is believed to have been inherited from previous worship practices and continued into the Zhou dynasty. A jade parrot, among other artifacts, recovered from the tomb of Fu Hao at Yinxu provides significant evidence of this practice. In a Polynesian legend current in the Marquesas Islands, the hero Laka/Aka is said to have undertaken a long and dangerous voyage to Aotona, in what are now the Cook Islands, to obtain the highly prized feathers of a red parrot as gifts for his son and daughter. On the voyage, 100 of his 140 rowers died of hunger, but the survivors reached Aotona and captured enough parrots to fill 140 bags with their feathers. Parrots have also been considered sacred. The Moche people of ancient Peru worshipped birds and often depicted parrots in their art. Parrots are popular in Buddhist scripture, and many writings about them exist. For example, Amitābha once changed himself into a parrot to aid in converting people. 
Another old story tells how, after a forest caught fire, a parrot was so concerned that it carried water to try to put out the flames. The ruler of heaven was so moved upon seeing the parrot's act that he sent rain to put out the fire. In Chinese Buddhist iconography, a parrot is sometimes depicted hovering on the upper right side of Guan Yin, clasping a pearl or prayer beads in its beak. In Hindu mythology, the parrot is the mount of the god of love, Kamadeva. The bird is also associated with the goddess Meenakshi and the poet-saint Andal. Feral populations Escaped parrots of several species have become established in the wild outside their natural ranges, and in some cases outside the natural range of parrots. Among the earliest instances were pet red shining-parrots from Fiji, which established a population on the islands of southern Tonga. These introductions were prehistoric, and red shining-parrots were recorded in Tonga by Captain Cook in the 1770s. Escapees first began breeding in cities in California, Texas, and Florida in the 1950s (with unproven earlier claims dating to the 1920s in Texas and Florida). They have proved surprisingly hardy in adapting to conditions in Europe and North America. They sometimes even multiply to the point of becoming a nuisance or pest, and a threat to local ecosystems, and control measures have been used on some feral populations. Feral parrot flocks can be formed after mass escapes of newly imported, wild-caught parrots from airports or quarantine facilities. Large groups of escapees have the protection of a flock and possess the skills to survive and breed in the wild. Some feral parakeets may have descended from escaped zoo birds. Escaped or released pets rarely contribute to establishing feral populations, as such releases usually involve only a few birds, and most captive-born birds do not possess the necessary survival skills to find food or avoid predators and often do not survive long without human caretakers. 
However, in areas where there are existing feral parrot populations, escaped pets may sometimes successfully join these flocks. Feral parrots were most commonly released into non-native environments from the 1890s to the 1940s, during the wild-caught parrot era. In the "parrot fever" panic of 1930, a city health commissioner urged everyone who owned a parrot to put it down, but some owners abandoned their parrots on the streets. Threats and conservation The principal threats to parrots are habitat loss and degradation, hunting, and, for certain species, the wild-bird trade. Parrots are persecuted because, in some areas, they are (or have been) hunted for food and feathers, and as agricultural pests. For a time, Argentina offered a bounty on monk parakeets for that reason, resulting in hundreds of thousands of birds being killed, though apparently this did not greatly affect the overall population. Parrots, being cavity nesters, are vulnerable to the loss of nesting sites and to competition with introduced species for those sites. The loss of old trees is a particular problem in some areas, particularly in Australia, where suitable nesting trees must be centuries old. Many parrots occur only on islands and are vulnerable to introduced species such as rats and feral cats, as they lack the appropriate antipredator behaviours needed to deal with such predators. Island species, such as the Puerto Rican amazon, which have small populations in restricted habitats, are also vulnerable to natural events, such as hurricanes. Due to deforestation, the Puerto Rican amazon is one of the world's rarest birds despite conservation efforts. One of the largest parrot conservation groups is the World Parrot Trust, an international organisation. The group gives assistance to worthwhile projects, as well as producing a magazine (PsittaScene) and raising funds through donations and memberships, often from pet parrot owners. 
On a smaller scale, local parrot clubs raise money to donate to a conservation cause. Zoo and wildlife centres usually provide public education to change habits that cause damage to wild populations. Measures to conserve the habitats of some of the high-profile charismatic parrot species have also protected many of the less charismatic species living in the same ecosystems. A popular attraction that many zoos employ is a feeding station for lories and lorikeets, where visitors feed them with cups of liquid food. This is usually done in association with educational signs and lectures. Birdwatching-based ecotourism can be beneficial to economies. Several projects aimed specifically at parrot conservation have met with success. Translocation of vulnerable kākāpō, followed by intensive management and supplementary feeding, has increased the population from 50 individuals to 123 in 2010 and 247 in 2024. In New Caledonia, the Ouvea parakeet was threatened by trapping for the pet trade and loss of habitat. Community-based conservation, which eliminated the threat of poaching, has allowed the population to increase from around 600 birds in 1993 to over 2,000 birds in 2009. As of 2009, the IUCN recognises 19 species of parrot as extinct since 1500 (the date used to denote modern extinctions). This does not include species like the New Caledonian lorikeet, which has not been officially seen for 100 years, yet is still listed as critically endangered. Trade, export, and import of all wild-caught parrots is regulated and only permitted under special licensed circumstances in countries party to the Convention on International Trade in Endangered Species (CITES), which came into force in 1975 to regulate the international trade of all endangered, wild-caught animal and plant species. In 1975, 24 parrot species were included in Appendix I, thus prohibiting commercial international trade in these birds. 
Since that initial listing, continuing threats from international trade have led CITES to add an additional 32 parrot varieties to Appendix I. All other parrot species, aside from the rosy-faced lovebird, budgerigar, cockatiel, and rose-ringed parakeet (which are not included in the appendices), are protected under Appendix II of CITES. In addition, individual countries may have laws to regulate trade in certain species; for example, the EU has banned the parrot trade, whereas Mexico has a licensing system for capturing parrots. World Parrot Day Every year on 31 May, World Parrot Day is celebrated.
https://en.wikipedia.org/wiki/Typhoid%20vaccine
Typhoid vaccine
Typhoid vaccines are vaccines that prevent typhoid fever. Several types are widely available: typhoid conjugate vaccine (TCV), Ty21a (a live oral vaccine), and Vi capsular polysaccharide vaccine (ViPS) (an injectable subunit vaccine). They are about 30 to 70% effective in the first two years, depending on the specific vaccine in question. The Vi-rEPA vaccine is efficacious in children. The World Health Organization (WHO) recommends vaccinating all children in areas where the disease is common. Otherwise, it recommends vaccinating those at high risk. Vaccination campaigns can also be used to control outbreaks of disease. Depending on the vaccine, additional doses are recommended every three to seven years. In the United States, the vaccine is recommended only for those at high risk, such as travelers to areas of the world where the disease is common. The vaccines available as of 2018 are very safe. Minor side effects may occur at the site of injection. The injectable vaccine is safe in people with HIV/AIDS, and the oral vaccine can be used as long as symptoms are not present. While it has not been studied during pregnancy, the non-live vaccines are believed to be safe; the live vaccine is not recommended. The first typhoid vaccines were developed in 1896 by Almroth Edward Wright, Richard Pfeiffer, and Wilhelm Kolle. Due to their side effects, newer formulations are recommended as of 2018. It is on the World Health Organization's List of Essential Medicines. Medical uses Ty21a, the Vi capsular polysaccharide vaccine, and Vi-rEPA are effective in reducing typhoid fever with low rates of adverse effects. Newer vaccines such as Vi-TT (PedaTyph) are awaiting field trials to demonstrate efficacy against natural exposure. The oral Ty21a vaccine prevents around one-half of typhoid cases in the first three years after vaccination. 
The injectable Vi polysaccharide vaccine prevented about two-thirds of typhoid cases in the first year and had a cumulative efficacy of 55% by the third year. The efficacy of these vaccines has only been demonstrated in children older than two years. Vi-rEPA vaccine, a new conjugate form of the injectable Vi vaccine, may be more effective and prevents the disease in many children under the age of five years. In a trial in 2-to-5-year-old children in Vietnam, the vaccine had more than 90 percent efficacy in the first year, and protection lasted at least four years. Schedule Depending on the formulation, the vaccine can be given starting at the age of two years (ViPS), six years (Ty21a), or six months (TCV). Types
Vi capsular polysaccharide vaccine: Typhim VI (Sanofi Pasteur); Typherix (GSK)
Ty21a oral vaccine: Vivotif (Emergent BioSolutions)
Typhoid conjugate vaccine: Typbar-TCV (Bharat Biotech)
Combined hepatitis A and Vi polysaccharide vaccine: ViVaxim and ViATIM (Sanofi Pasteur); Hepatyrix (GSK)
An inactivated whole-cell vaccine remains available in some parts of the developing world.
https://en.wikipedia.org/wiki/Yellow%20fever%20vaccine
Yellow fever vaccine
Yellow fever vaccine is a vaccine that protects against yellow fever. Yellow fever is a viral infection that occurs in Africa and South America. Most people begin to develop immunity within ten days of vaccination; 99% are protected within one month, and this protection appears to be lifelong. The vaccine can be used to control outbreaks of disease. It is given either by injection into a muscle or just under the skin. The World Health Organization (WHO) recommends routine immunization in all countries where the disease is common. This should typically occur between nine and twelve months of age. Those traveling to areas where the disease occurs should also be immunized. Additional doses after the first are generally not needed. The yellow fever vaccine is generally safe, including in those with HIV infection but without symptoms. Mild side effects may include headache, muscle pains, pain at the injection site, fever, and rash. Severe allergies occur in about eight per million doses, serious neurological problems in about four per million doses, and organ failure in about three per million doses. It appears to be safe in pregnancy and is therefore recommended for those who may potentially be exposed. It should not be given to those with very poor immune function. Yellow fever vaccine came into use in 1938. It is on the World Health Organization's List of Essential Medicines. The vaccine is made from weakened yellow fever virus. Some countries require a yellow fever vaccination certificate before entry from a country where the disease is common. Medical uses Targeting Medical experts recommend vaccinating people most at risk of contracting the virus, such as woodcutters working in tropical areas. Insecticides, protective clothing, and screening of houses are helpful, but not always sufficient for mosquito control; medical experts recommend using personal insecticide spray in endemic areas. 
In affected areas, mosquito control methods have proven effective in decreasing the number of cases. Travellers need to have the vaccine ten days before being in an endemic area to ensure full immunity. Duration and effectiveness For most people, the vaccine remains effective permanently. People who are HIV positive at vaccination can benefit from a booster after ten years. On 17 May 2013, the World Health Organization (WHO) Strategic Advisory Group of Experts on immunization (SAGE) announced that a booster dose of yellow fever (YF) vaccine, ten years after a primary dose, is not necessary. Since yellow fever vaccination began in the 1930s, only 12 known cases of yellow fever post-vaccination have been identified, after 600 million doses had been dispensed. Evidence showed that among this small number of "vaccine failures", all cases developed the disease within five years of vaccination. This suggests that immunity does not decrease with time. Schedule The World Health Organization recommends the vaccine between the ages of 9 and 12 months in areas where the disease is common. Anyone over the age of nine months who has not been previously immunized and either lives in or is traveling to an area where the disease occurs should also be immunized. Side effects The yellow fever 17D vaccine is considered safe, with over 500 million doses given and very few documented cases of vaccine-associated illness (62 confirmed cases and 35 deaths as of January 2019). In no case of vaccine-related illness has there been evidence of the virus reverting to a virulent phenotype. The majority of adverse reactions to the 17D vaccine result from allergic reactions to the eggs in which the vaccine is grown. Persons with known egg allergy should discuss this with their physician before vaccination. In addition, there is a small risk of neurologic disease and encephalitis, particularly in individuals with compromised immune systems and very young children. 
The 17D vaccine is contraindicated in (among others) infants between zero and six months, people with thymus disorders associated with abnormal immune cell function, people with primary immunodeficiencies, and anyone with a diminished immune capacity, including those taking immunosuppressant drugs. There is a small risk of more severe yellow fever-like disease associated with the vaccine. This reaction, known as yellow fever vaccine-associated acute viscerotropic disease (YEL-AVD), causes a fairly severe disease closely resembling yellow fever caused by virulent strains of the virus. The risk factors for YEL-AVD are not known, although it has been suggested that it may be genetic. The 2'-5'-oligoadenylate synthase (OAS) component of the innate immune response is particularly important in protection from Flavivirus infection. Another reaction to the yellow fever vaccine is known as yellow fever vaccine-associated acute neurotropic disease (YEL-AND). The Canadian Medical Association published a 2001 CMAJ article entitled "Yellow fever vaccination: be sure the patient needs it". The article begins by stating that of the seven people who developed system failure within two to five days of vaccination in 1996–2001, six died, "including 2 who were vaccinated even though they were planning to travel to countries where yellow fever has never been reported." The article cites that "3 demonstrated histopathologic changes consistent with wild yellow fever virus." The author recommends vaccination only for non-contraindicated travelers (see the article's list) and those travelers going where yellow fever activity is reported or in the endemic zone, which can be found mapped at the CDC website cited below. 
In addition, the 2010 online edition of the Centers for Disease Control and Prevention's Travelers' Health Yellow Book states that between 1970 and 2002 only "nine cases of yellow fever were reported in unvaccinated travelers from the United States and Europe who traveled" to West Africa and South America, and 8 of the 9 died. However, it goes on to cite "only 1 documented case of yellow fever in a vaccinated traveler. This nonfatal case occurred in a traveler from Spain who visited several West African countries in 1988". History African tropical cultures had adopted burial traditions in which the deceased were buried near their habitation, including those who died of yellow fever. This ensured that people within these cultures gained immunity through a childhood case of "endemic" yellow fever. This led to a lasting misperception, first by colonial authorities and foreign medical experts, that Africans have a "natural immunity" to the illness. In the nineteenth century, health provisioners forced the abandonment of these traditional burial practices, leading to local populations dying of yellow fever as frequently as populations without such customs, such as settlers. The first modern attempts to develop a yellow fever vaccine followed the opening of the Panama Canal in 1912, which increased global exposure to the disease. The Japanese bacteriologist Hideyo Noguchi led investigations for the Rockefeller Foundation in Ecuador that resulted in a vaccine based on his theory that the disease was caused by a leptospiral bacterium. However, other investigators could not duplicate his results, and the ineffective vaccine was eventually abandoned. Another vaccine was developed from the "French strain" of the virus, obtained by Pasteur Institute scientists from a man in Dakar, Senegal, who survived his bout with the disease. 
This vaccine could be administered by scarification, like the smallpox vaccine, and was given in combination with it to produce immunity to both diseases, but it also had severe systemic and neurologic complications in a few cases. Attempts to attenuate the virus used in the vaccine failed. Scientists at the Rockefeller Foundation developed another vaccine derived from the serum of an African man named Asibi in 1927, the first isolation of the virus from a human. It was safer but involved the use of large amounts of human serum, which limited widespread use. Both vaccines were in use for several years, the Rockefeller vaccine in the Western hemisphere and England, and the Pasteur Institute vaccine in France and its African colonies. In 1937, Max Theiler, working with Hugh Smith and Eugen Haagen at the Rockefeller Foundation to improve the vaccine from the "Asibi" strain, discovered that a favorable chance mutation in the attenuated virus had produced a highly effective strain that was named 17D. Following the work of Ernest Goodpasture, Theiler used chicken eggs to culture the virus. After field trials in Brazil, over one million people were vaccinated by 1939, without severe complications. This vaccine was widely used by the U.S. Army during World War II. For his work on the yellow fever vaccine, Theiler received the 1951 Nobel Prize in Physiology or Medicine. Only the 17D vaccine remains in use today. Theiler's vaccine was responsible for the largest outbreak of hepatitis B in history, infecting 330,000 soldiers and giving 50,000 of them jaundice between 1941 and 1942. At the time, chronic infectious hepatitis was not known, so when human serum was used in vaccine preparation, serum drawn from chronic hepatitis B virus (HBV) carriers contaminated the yellow fever vaccine. In 1941, researchers at Rocky Mountain Laboratories developed a safer alternative, an "aqueous-base" version of the 17D vaccine using distilled water combined with the virus grown in chicken eggs. 
Since 1971, screening technology for HBV has been available and is routinely used in situations where HBV contamination is possible including vaccine preparation. Also in the 1930s, a French team developed the French neurotropic vaccine (FNV), which was extracted from mouse brain tissue. Since this vaccine was associated with a higher incidence of encephalitis, FNV was not recommended after 1961. Vaccine 17D is still in use, and more than 400 million doses have been distributed. Little research has been done to develop new vaccines. Newer vaccines, based on vero cells, are in development (as of 2018). Manufacture and global supply Increases in cases of yellow fever in endemic areas of Africa and South America in the 1980s were addressed by the WHO Yellow Fever Initiative launched in the mid-2000s. The initiative was supported by the Gavi Alliance, a collaboration of the WHO, UNICEF, vaccine manufacturers, and private philanthropists such as the Bill & Melinda Gates Foundation. Gavi-supported vaccination campaigns since 2011 have covered 88 million people in 14 countries considered at "high-risk" of a yellow fever outbreak (Angola was considered "medium risk"). As of 2013, there were four WHO-qualified manufacturers: Bio-Manguinhos in Brazil (with the Oswaldo Cruz Foundation), Institute Pasteur in Dakar, Senegal, the Federal State Unitary Enterprise of Chumakov Institute in Russia, and Sanofi Pasteur, the French pharmaceutical company. Two other manufacturers supply domestic markets: Wuhan Institute of Biological Products in China and Sanofi Pasteur in the United States. Demand for yellow fever vaccine for preventive campaigns has increased from about five million doses per year to a projected 62 million per year by 2014. UNICEF reported in 2013 that supplies were insufficient. Manufacturers are producing about 35 million of the 64 million doses needed per year. 
Demand for the yellow fever vaccine has continued to increase due to the growing number of countries implementing yellow fever vaccination as part of their routine immunization programmes. The 2016 outbreak of yellow fever in Angola and the Democratic Republic of the Congo raised concerns about whether the global supply of the vaccine is adequate to meet the need during a large epidemic or pandemic of the disease. Routine childhood immunization was suspended in other African countries to ensure an adequate supply for the vaccination campaign against the outbreak in Angola. Emergency stockpiles of vaccine diverted to Angola, which consisted of about 10 million doses at the end of March 2016, had become exhausted but were being replenished by May 2016. However, in August it was reported that about one million of six million doses shipped in February had been sent to the wrong place or not kept cold enough to ensure efficacy, resulting in shortages to fight the spreading epidemic in the DR Congo. As an emergency measure, experts suggested fractional-dose vaccination, using a fraction (1/5 or 1/10) of the usual dose to extend existing supplies of vaccine. Others have noted that switching manufacturing processes to modern cell-culture technology might improve vaccine supply shortfalls, as the manufacture of the current vaccine in chicken eggs is slow and laborious. On 17 June 2016, the WHO agreed to the use of 1/5 of the usual dose as an emergency measure during the ongoing outbreak in Angola and the DR Congo. The fractional dose would not qualify for a yellow fever certificate of vaccination for travelers. Later studies found that the fractional dose was just as protective as the full dose, even 10 years after vaccination. As of February 2021, UNICEF reported awarded contract prices per dose under multi-year contracts with various suppliers. 
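The dose-sparing arithmetic behind the fractional-dose measure can be sketched as follows (the stockpile figure is taken from the text; the calculation itself is illustrative, not official supply data):

```python
# Sketch of the dose-sparing arithmetic behind fractional vaccination:
# splitting each full dose into n fractional doses multiplies the number
# of people a fixed stockpile can cover by n. Figures are illustrative.

def people_covered(stockpile_doses: int, split: int) -> int:
    """People covered when each full dose is split into `split` fractional doses."""
    return stockpile_doses * split

stockpile = 10_000_000  # roughly the emergency stockpile size cited above

full_dose = people_covered(stockpile, 1)    # 10,000,000 people
fifth_dose = people_covered(stockpile, 5)   # 50,000,000 people (the 2016 WHO emergency measure)
tenth_dose = people_covered(stockpile, 10)  # 100,000,000 people
```

The same stockpile thus stretches five- or ten-fold, which is why fractional dosing was attractive when egg-based manufacturing could not ramp up quickly.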
Travel requirements Travellers who wish to enter certain countries or territories must be vaccinated against yellow fever at least 10 days before crossing the border, and be able to present a vaccination record/certificate at border checks. In most cases, this travel requirement depends on whether the country they are travelling from has been designated by the World Health Organization as a 'country with risk of yellow fever transmission'. In a few countries, it does not matter which country the traveller comes from: everyone who wants to enter these countries must be vaccinated against yellow fever. There are exemptions for newborn children; in most cases, any child who is at least 9 months or 1 year old needs to be vaccinated.
Biology and health sciences
Vaccines
Health
21054623
https://en.wikipedia.org/wiki/Mosquito-borne%20disease
Mosquito-borne disease
Mosquito-borne diseases or mosquito-borne illnesses are diseases caused by bacteria, viruses or parasites transmitted by mosquitoes. Nearly 700 million people contract mosquito-borne illnesses each year, resulting in more than a million deaths. Diseases transmitted by mosquitoes include malaria, dengue, West Nile virus, chikungunya, yellow fever, filariasis, tularemia, dirofilariasis, Japanese encephalitis, Saint Louis encephalitis, Western equine encephalitis, Eastern equine encephalitis, Venezuelan equine encephalitis, Ross River fever, Barmah Forest fever, La Crosse encephalitis, and Zika fever, as well as the newly detected Keystone virus and Rift Valley fever. A preprint by an Australian research group argues that Mycobacterium ulcerans, the causative pathogen of Buruli ulcer, is also transmitted by mosquitoes. There is no evidence as of April 2020 that COVID-19 can be transmitted by mosquitoes, and it is extremely unlikely this could occur. Types Protozoa The female mosquito of the genus Anopheles may carry the malaria parasite. Five different species of Plasmodium cause malaria in humans: Plasmodium falciparum, Plasmodium malariae, Plasmodium ovale, Plasmodium knowlesi and Plasmodium vivax (see Plasmodium). Worldwide, malaria is a leading cause of premature mortality, particularly in children under the age of five, with an estimated 207 million cases and more than half a million deaths in 2012, according to the World Malaria Report 2013 published by the World Health Organization (WHO). The death toll had increased to one million as of 2018, according to the American Mosquito Control Association. Bacterial In January 2024, a publication by an Australian research group demonstrated significant genetic similarity between Mycobacterium ulcerans from humans and possums and M. ulcerans detected by PCR screening of trapped Aedes notoscriptus mosquitoes, and concluded that Mycobacterium ulcerans, the causative pathogen of Buruli ulcer, is transmitted by mosquitoes. 
Myiasis Botflies are known to parasitize humans and other mammals, causing myiasis, and to use mosquitoes as intermediate vector agents to deposit eggs on a host. The human botfly Dermatobia hominis attaches its eggs to the underside of a mosquito, and when the mosquito takes a blood meal from a human or an animal, the body heat of the mammalian host induces hatching of the larvae. Helminthiasis Some species of mosquito can carry the filariasis worm, a parasite that causes a disfiguring condition (often referred to as elephantiasis) characterized by great swelling of several parts of the body; worldwide, around 40 million people are living with a filariasis disability. Virus The viral diseases yellow fever, dengue fever, Zika fever and chikungunya are transmitted mostly by Aedes aegypti mosquitoes. Other viral diseases such as epidemic polyarthritis, Rift Valley fever, Ross River fever, St. Louis encephalitis, West Nile fever, Japanese encephalitis, La Crosse encephalitis and several other encephalitic diseases are carried by several different mosquitoes. Eastern equine encephalitis (EEE) and Western equine encephalitis (WEE) occur in the United States, where they cause disease in humans, horses, and some bird species. Because of their high mortality rates, EEE and WEE are regarded as two of the most serious mosquito-borne diseases in the United States. Symptoms range from mild flu-like illness to encephalitis, coma, and death. Viruses carried by arthropods such as mosquitoes or ticks are known collectively as arboviruses. West Nile virus was accidentally introduced into the US in 1999 and by 2003 had spread to almost every state, with over 3,000 cases in 2006. Other species of Aedes as well as Culex and Culiseta are also involved in the transmission of disease. Myxomatosis is spread by biting insects, including mosquitoes. Transmission A mosquito's period of feeding is often undetected; the bite only becomes apparent because of the immune reaction it provokes. 
When a mosquito bites a human, it injects saliva and anti-coagulants. With the initial bite, there is no reaction, but with subsequent bites, the body's immune system develops antibodies and the bites become inflamed and itchy within 24 hours. This is the usual reaction in young children. With more bites, the sensitivity of the human immune system increases, and an itchy red hive appears within minutes where the immune response has broken capillary blood vessels and fluid has collected under the skin. This type of reaction is common in older children and adults. Some adults can become desensitized to mosquitoes and have little or no reaction to their bites, while others can become hyper-sensitive, with bites causing blistering, bruising, and large inflammatory reactions, a response known as skeeter syndrome. One study found that dengue virus and Zika virus altered the skin bacteria of rats in a way that made their body odor more attractive to mosquitoes. Signs and symptoms Symptoms of illness are specific to the type of viral infection and vary in severity depending on the individuals infected. Zika virus Symptoms vary in severity, from mild unnoticeable symptoms to more common symptoms such as fever, rash, headache, achy muscles and joints, and conjunctivitis. Symptoms can last several days to weeks, but death resulting from this infection is rare. West Nile virus, dengue fever Most people infected with the West Nile virus do not develop symptoms. However, some individuals can develop severe fatigue, weakness, headaches, body aches, joint and muscle pain, vomiting, diarrhea, and rash, which can last for weeks or months. More serious symptoms have a greater risk of appearing in people over 60 years of age, or those with cancer, diabetes, hypertension, or kidney disease. Dengue fever is mostly characterized by high fever, headaches, joint pain, and rash. 
However, more severe instances can lead to hemorrhagic fever, internal bleeding, and breathing difficulty, which can be fatal. Chikungunya People infected with this virus can develop sudden-onset fever along with debilitating joint and muscle pain, rash, headache, nausea, and fatigue. Symptoms can last a few days or be prolonged to weeks and months. Although patients can recover completely, there have been cases in which joint pain has persisted for several months and can extend beyond that for years. Other people can develop heart complications, eye problems, and even neurological complications. Mechanism Mosquitoes carrying such arboviruses are able to stay healthy because their immune systems recognize the virions as foreign particles and "chop off" the virus's genetic coding, rendering it inert. A human becomes infected with a mosquito-borne virus when a female mosquito carrying the virus, along with viral particles that have yet to be destroyed by the mosquito, bites a human, penetrating the skin and releasing the virus into the bloodstream. It is not completely known how mosquitoes handle eukaryotic parasites so as to carry them without being harmed. Data have shown that the malaria parasite Plasmodium falciparum alters the mosquito vector's feeding behavior by increasing the frequency of biting in infected mosquitoes, thus increasing the chance of transmitting the parasite. The mechanism of transmission of this disease starts with the injection of the parasite into the victim's blood when a malaria-infected female Anopheles mosquito bites a human being. The parasite uses human liver cells as hosts for maturation, where it continues to replicate and grow, moving into other areas of the body via the bloodstream. The infection cycle then continues when other mosquitoes bite the same individual. 
Such a mosquito ingests the parasite and can transmit malaria to another person through the same mode of bite injection. Flaviviridae viruses transmissible via vectors like mosquitoes include West Nile virus and yellow fever virus, which are single-stranded, positive-sense RNA viruses enveloped in a protein coat. Once inside the host's body, the virus attaches itself to a cell's surface and enters through receptor-mediated endocytosis; the proteins and RNA of the virus are taken up into the host cell. The viral RNA then undergoes several changes and processes inside the host cell so that more viral RNA is released, which can be replicated and assembled to infect neighboring host cells. Mosquito-borne flaviviruses also encode viral antagonists of the innate immune system in order to cause persistent infection in mosquitoes and a broad spectrum of diseases in humans. The data on transmissibility via insect vectors of hepatitis C virus, also belonging to the family Flaviviridae (as well as for hepatitis B virus, belonging to the family Hepadnaviridae), are inconclusive. The WHO states that "There is no insect vector or animal reservoir for HCV", while there are experimental data supporting at least the presence of PCR-detectable hepatitis C viral RNA in Culex mosquitoes for up to 13 days. Currently, there are no specific vaccine therapies for West Nile virus approved for humans; however, vaccines are available for animals and some show promise as a means to intervene in the mechanism of spreading such pathogens. Diagnosis Doctors can typically identify a mosquito bite by sight. A doctor will perform a physical examination and ask about medical and travel history, including details of any international trips, such as the dates of travel, the countries visited and any contact with mosquitoes. 
Dengue fever Diagnosing dengue fever can be difficult, as its symptoms often overlap with those of many other diseases, such as malaria and typhoid fever. Laboratory tests can detect evidence of the dengue viruses; however, the results often come back too late to assist in directing treatment. West Nile virus Medical testing can confirm the presence of West Nile fever or a West Nile-related illness, such as meningitis or encephalitis. If a person is infected, a blood test may show a rising level of antibodies to the West Nile virus. A lumbar puncture (spinal tap) is the most common way to diagnose meningitis, by analyzing the cerebrospinal fluid surrounding the brain and spinal cord. The fluid sample may show an elevated white cell count and antibodies to the West Nile virus if the person was exposed. In some cases, electroencephalography (EEG) or a magnetic resonance imaging (MRI) scan can help detect brain inflammation. Zika virus A Zika virus infection might be suspected if symptoms are present and an individual has traveled to an area with known Zika virus transmission. Zika virus can only be confirmed by a laboratory test of body fluids, such as urine or saliva, or by a blood test. Chikungunya Laboratory blood tests can identify evidence of chikungunya or other similar viruses such as dengue and Zika. A blood test may confirm the presence of IgM and IgG anti-chikungunya antibodies. IgM antibody levels are highest 3 to 5 weeks after the onset of symptoms and continue to be present for about 2 months. Prevention There has been a re-emergence of mosquito-vectored viruses (arthropod-borne viruses), called arboviruses, carried by the Aedes aegypti mosquito. Examples are the Zika virus, chikungunya virus, yellow fever and dengue fever. The re-emergence of the viruses has occurred at a faster rate, and over a wider geographic area, than in the past. 
The rapid re-emergence is due to expanding global transportation networks, the mosquito's increasing ability to adapt to urban settings, the disruption of traditional land use and the inability to control expanding mosquito populations. Like malaria, most arboviral diseases do not have a vaccine. (The only exception is yellow fever.) Prevention is focused on reducing adult mosquito populations, controlling mosquito larvae and protecting individuals from mosquito bites. Depending on the mosquito vector and the affected community, a variety of prevention methods may be deployed at one time. Mosquito-borne diseases are indirectly contagious: a mosquito must first become infected by biting a patient and then transfer the pathogen to the next person, so both must be in the same general area. Mosquito control measures during the Panama Canal construction provide the only successful case study of reducing outbreak-level malaria and yellow fever to zero: among the measures applied, the authorities achieved zero yellow fever and zero malaria status in part by aggressively treating patients in off-site facilities. Most current testing for mosquito-borne diseases is extremely costly, often requiring expensive equipment, resources, and laboratory staff. There is an increasing need for low-cost, accessible, easily detectable and dispensable assays that can detect the presence of these mosquito-borne diseases. Further research into these point-of-care detection methods, especially in rural areas where dengue is most prevalent, would allow for increased monitoring, detection and prevention of mosquito-borne viruses. Insecticidal nets and indoor residual spraying The use of insecticide-treated mosquito nets (ITNs) is at the forefront of preventing mosquito bites that cause malaria. 
The prevalence of ITNs in sub-Saharan Africa grew from 3% of households in 2000 to 50% of households in 2010, with over 254 million insecticide-treated nets distributed throughout sub-Saharan Africa for use against the mosquito vectors Anopheles gambiae and Anopheles funestus, which carry malaria. Because Anopheles gambiae feeds indoors (endophagic) and rests indoors after feeding (endophilic), insecticide-treated nets (ITNs) interrupt the mosquito's feeding pattern. ITNs continue to offer protection even after holes develop in the nets, because their excito-repellency properties reduce the number of mosquitoes that enter the home. The World Health Organization (WHO) recommends treating ITNs with the pyrethroid class of insecticides. There is an emerging concern over mosquito resistance to the insecticides used in ITNs: twenty-seven (27) sub-Saharan African countries have reported Anopheles vector resistance to pyrethroid insecticides. Indoor spraying of insecticides is another prevention method widely used to control mosquito vectors. To help control the Aedes aegypti mosquito, homes are sprayed indoors with residual insecticide applications. Indoor residual spraying (IRS) reduces the female mosquito population and mitigates the risk of dengue virus transmission. Indoor residual spraying is usually completed once or twice a year. Mosquitoes rest on walls and ceilings after feeding and are killed by the insecticide. Indoor spraying can be combined with spraying the exterior of the building to help reduce the number of mosquito larvae and, subsequently, the number of adult mosquitoes. This measure works well in cities and urban areas with running water, where people do not need indoor water containers for their daily consumption, for two reasons. First, according to mosquito-rearing protocols, one larval mosquito habitat could release 1,000 adult mosquitoes in 6–10 days. 
That means about 100 mosquitoes could emerge from a 1-liter habitat per day; where people must store their water in much larger volumes, these containers become at-home mosquito habitats, and the adults emerge not all at once but gradually throughout the day. At best, spraying kills the insects present at the time, not the newly emerging ones. Second, people are wary of, and think twice about, any introduction of poison into their own homes. Therefore, for prevention to be effective, mosquito larvae and pupae in people's houses need to be killed without contaminating the water, for example by suffocating them. Female mosquito trap Only female mosquitoes bite, and only warm-blooded animals; they have the capability to identify and target their hosts from 1–3 miles away in real time. Humans can identify targets miles away only through vision, by the rays they emit; similarly, mosquitoes are thought to see the warmth, or thermal images, of their hosts, because warmth is an obligatory condition of their hunt and electromagnetic radiation is the only medium with a range of miles through the atmosphere. For a trap to target only female mosquitoes, it can exploit this capacity to see thermal images by using warmth as an attractant, or a warm lure: given side-by-side thermal-image footprints at 37 °C, 40 °C and 42 °C, mosquitoes show a distinct preference, going to the warmer one first. A 42 °C trap in front of a house could keep its front yard free of mosquito bites for humans and mammal pets, but not for birds, whose body temperatures are also around 42 °C. Personal protection methods There are other methods that individuals can use to protect themselves from mosquito bites, such as limiting exposure to mosquitoes from dusk to dawn, when the majority of mosquitoes are active, and wearing long sleeves and long pants during the period mosquitoes are most active. 
Placing screens on windows and doors is a simple and effective means of reducing the number of mosquitoes indoors. Anticipating mosquito contact and using a topical mosquito repellent with icaridin or DEET is also recommended. Draining or covering water receptacles, both indoors and outdoors, is also a simple but effective prevention method. Removing debris and tires, cleaning drains, and cleaning gutters help larval control and reduce the number of adult mosquitoes. Vaccines There is a vaccine for yellow fever, the 17D vaccine, which was developed in the 1930s and is still in use today. The initial yellow fever vaccination provides lifelong protection for most people and provides immunity within 30 days of the vaccine. Reactions to the yellow fever vaccine have included mild headache, fever, and muscle aches. There are rare cases of individuals presenting with symptoms that mirror the disease itself. The risk of complications from the vaccine is greater for individuals over 60 years of age. In addition, the vaccine is not usually administered to babies under nine months of age, pregnant women, people with allergies to egg protein, or individuals living with HIV/AIDS. The World Health Organization (WHO) reports that 105 million people were vaccinated for yellow fever in West Africa from 2000 to 2015. To date, there are relatively few vaccines against mosquito-borne diseases; this is largely because most of the viruses and bacteria transmitted by mosquitoes mutate rapidly. The National Institute of Allergy and Infectious Diseases (NIAID) began Phase 1 clinical trials of a new vaccine that would be nearly universal in protecting against the majority of mosquito-borne diseases. Dengvaxia Dengvaxia, developed by Sanofi Pasteur, was the first dengue vaccine available in the United States. 
Dengvaxia (CYD-TDV) is a live attenuated vaccine, meaning it consists of a weakened pathogen which provides the human immune system with protective antigens and greater long-term immunity. In order to receive the vaccine, a previous laboratory-confirmed positive dengue infection is required. Three doses of the vaccine are required for full protection against dengue, with dose 1 given immediately after confirmation of a previous dengue infection, dose 2 given six months after the first dose, and dose 3 given six months after the second dose. Statistics have shown Dengvaxia to protect against dengue illness in 8 out of 10 children who contracted dengue virus prior to receiving the vaccine. However, Sanofi Pasteur, the manufacturer of Dengvaxia, has recently begun to discontinue the vaccine, citing a lack of demand. TAK-003 In May 2024, TAK-003 became the second dengue vaccine to be prequalified by the World Health Organization (WHO). This live-attenuated vaccine, developed by Takeda, is similar to the Dengvaxia vaccine in that it contains a weakened version of the four variants of dengue virus. The difference between the two vaccines is that TAK-003 can be administered without a prior dengue infection, and it also induces cellular immunity against dengue virus along with host immunity. This vaccine is administered in two doses, with three months between the doses. Education and community involvement The arboviruses have expanded their geographic range and infected populations that had no recent community knowledge of the diseases carried by the Aedes aegypti mosquito. Education and community awareness campaigns are necessary for prevention to be effective. Communities are educated on how the disease is spread, how they can protect themselves from infection and the symptoms of infection. 
Community health education programs can identify and address the social, economic and cultural issues that can hinder preventative measures. Community outreach and education programs can identify which preventative measures a community is most likely to employ, leading to a targeted prevention method with a higher chance of success in that particular community. Community outreach and education includes engaging community health workers, local healthcare providers, local schools and community organizations to educate the public on mosquito vector control and disease prevention. Treatments Yellow fever Numerous drugs have been used to treat yellow fever disease, with minimal satisfaction to date. Patients with multisystem organ involvement will require critical care support, such as possible hemodialysis or mechanical ventilation. Rest, fluids, and acetaminophen are also known to relieve milder symptoms of fever and muscle pain. Due to hemorrhagic complications, aspirin should be avoided. Infected individuals should avoid mosquito exposure by staying indoors or using a mosquito net. Dengue fever Therapeutic management of dengue infection is simple, cost-effective and successful in saving lives when timely institutional interventions are adequately performed. Treatment options are restricted, and no effective antiviral drugs for this infection have been available to date. Patients in the early phase of dengue virus infection may recover without hospitalization. However, ongoing clinical research is underway to find specific anti-dengue drugs. Dengue fever is transmitted by the Aedes aegypti mosquito, which acts as a vector. Zika virus Clinical trials of Zika virus vaccines have yet to be conducted and established. Efforts are being put toward advancing antiviral therapeutics against Zika virus for swift control. Present-day Zika virus treatment is symptomatic, through antipyretics and analgesics. Currently, there are no publications regarding viral drug screening. 
Nevertheless, therapeutics for this infection have been used. Chikungunya No specific treatment modalities for acute or chronic chikungunya currently exist. Most treatment plans use supportive and symptomatic care, such as analgesics for pain and anti-inflammatories for inflammation caused by arthritis. In the acute stages of this virus, rest, antipyretics and analgesics are used to subside symptoms. Most use non-steroidal anti-inflammatory drugs (NSAIDs). In some cases, joint pain may resolve with treatment but stiffness remains. Latest treatment The sterile insect technique (SIT) uses irradiation to sterilize insect pests before releasing them in large numbers to mate with wild females. Since they do not produce any offspring, the population, and consequently the disease incidence, is reduced over time. Used successfully for decades to combat fruit flies and livestock pests such as screwworm and tsetse flies, the technique can also be adapted for some disease-transmitting mosquito species. Pilot projects are being initiated or are under way in different parts of the world. Epidemiology Mosquito-borne diseases, such as dengue fever and malaria, typically affect developing countries and areas with tropical climates. Mosquito vectors are sensitive to climate changes and tend to follow seasonal patterns. Between years, there are often dramatic shifts in incidence rates. The occurrence of this phenomenon in endemic areas makes mosquito-borne viruses difficult to treat. Dengue fever is caused by infection with viruses of the family Flaviviridae. The illness is most commonly transmitted by Aedes aegypti mosquitoes in tropical and subtropical regions. Dengue virus has four different serotypes, each of which is antigenically related but offers limited cross-immunity to reinfection. Although dengue fever has a global incidence of 50–100 million cases, only several hundred thousand of these cases are life-threatening. 
The geographic prevalence of the disease can be examined by the spread of Aedes aegypti. Over the last twenty years, there has been a geographic spread of the disease. Dengue incidence rates have risen sharply within urban areas, which have recently become endemic hot spots for the disease. The recent spread of dengue can also be attributed to rapid population growth, increased crowding in urban areas, and global travel. Without sufficient vector control, the dengue virus has evolved rapidly over time, posing challenges to both government and public health officials. Malaria is caused by protozoan parasites such as Plasmodium falciparum. P. falciparum parasites are transmitted mainly by the Anopheles gambiae complex in rural Africa. In this region alone, P. falciparum infections comprise an estimated 200 million clinical cases and 1 million annual deaths. 75% of individuals affected in this region are children. As with dengue, changing environmental conditions have led to novel disease characteristics. Due to increased illness severity, treatment complications, and mortality rates, many public health officials concede that malaria patterns are rapidly transforming in Africa. Scarcity of health services, rising instances of drug resistance, and changing vector migration patterns are factors that public health officials believe contribute to malaria's dissemination. Climate heavily affects mosquito vectors of malaria and dengue. Climate patterns influence the lifespan of mosquitoes as well as the rate and frequency of reproduction. Climate change impacts have been of great interest to those studying these diseases and their vectors. Additionally, climate impacts mosquito blood-feeding patterns as well as extrinsic incubation periods. Climate consistency gives researchers an ability to accurately predict annual cycling of the disease, but recent climate unpredictability has eroded researchers' ability to track the disease with such precision. 
Advances in biological control of arboviruses In many insect species, such as Drosophila melanogaster, researchers found that a natural infection with the bacteria strain Wolbachia pipientis increases the fitness of the host by increasing resistance to RNA viral infections. Robert L. Glaser and Mark A. Meola investigated Wolbachia-induced resistance to West Nile virus (WNV) in Drosophila melanogaster (fruit flies). Two groups of fruit flies were naturally infected with Wolbachia. Glaser and Meola then cured one group of fruit flies of Wolbachia using tetracycline. Both the infected group and the cured groups were then infected with WNV. Flies infected with Wolbachia were found to have a changed phenotype that caused resistance to WNV. The phenotype was found to be caused by a "dominant, maternally transmitted, cytoplasmic factor". The WNV-resistance phenotype was then reversed by curing the fruit flies of Wolbachia. Since Wolbachia is also maternally transmitted, it was found that the WNV-resistant phenotype is directly related to the Wolbachia infection. West Nile virus is transmitted to humans and animals through the Southern house mosquito, Culex quinquefasciatus. Glaser and Meola knew vector compatibility could be reduced through Wolbachia infection due to studies done with other species of mosquitoes, mainly, Aedes aegypti. Their goal was to transfer WNV resistance to Cx. quinquefasciatus by inoculating the embryos of the mosquito with the same strain of Wolbachia that naturally occurred in the fruit flies. Upon infection, Cx. quinquefasciatus showed an increased resistance to WNV that was transferable to offspring. The ability to genetically modify mosquitoes in the lab and then have the infected mosquitoes transmit it to their offspring showed that it was possible to transmit the bacteria to wild populations to decrease human infections. 
In 2011, Ary Hoffmann and associates produced the first case of Wolbachia-induced arbovirus resistance in wild populations of Aedes aegypti through a small project called Eliminate Dengue: Our Challenge. This was made possible by a strain of Wolbachia termed wMel that came from D. melanogaster. The transfer of wMel from D. melanogaster into field-caged populations of the mosquito Aedes aegypti induced resistance to dengue, yellow fever, and chikungunya viruses. Although other strains of Wolbachia also reduced susceptibility to dengue infection, they also put a greater demand on the fitness of Ae. aegypti. wMel was different in that it was thought to impose only a small fitness cost on the organism. wMel-infected Ae. aegypti were released into two residential areas of the city of Cairns, Australia, over a 14-week period. Hoffmann and associates released a total of 141,600 infected adult mosquitoes in the Yorkeys Knob suburb and 157,300 in the Gordonvale suburb. After release, the populations were monitored for three years to record the spread of wMel. Population monitoring was gauged by measuring larvae laid in traps. At the beginning of the monitoring period, but still within the release period, it was found that wMel-infected Ae. aegypti had doubled in Yorkeys Knob and increased 1.5-fold in Gordonvale, while uninfected Ae. aegypti populations were in decline. By the end of the three years, wMel-infected Ae. aegypti had stable populations of about 90%. However, these populations were isolated to the Yorkeys Knob and Gordonvale suburbs due to unsuitable habitat surrounding the neighborhoods. Although populations flourished in these areas with nearly 100% transmission, no signs of spread were noted, proving disappointing for some. Following this experiment, Tom L. Schmidt and his colleagues conducted an experiment releasing Wolbachia-infected Aedes aegypti using different site-selection methods in different areas of Cairns during 2013. 
The release sites were monitored over two years. This time the release was done in urban areas adjacent to adequate habitat, to encourage mosquito dispersal. Over the two years, the population doubled and spatial spread increased, unlike in the first release, giving satisfactory results. By increasing the spread of the Wolbachia-infected mosquitoes, the researchers were able to establish that invading a large city was possible if the mosquitoes were given adequate habitat to spread into upon release at different locations throughout the city. In both of these studies, no adverse effects on public health or the natural ecosystem occurred. This made the approach an extremely attractive alternative to traditional insecticide methods, given the increasing pesticide resistance caused by heavy use. From the success seen in Australia, the researchers were able to begin operating in more threatened parts of the world. The Eliminate Dengue program spread to 10 countries throughout Asia, Latin America, and the Western Pacific, growing into the non-profit World Mosquito Program as of September 2017. They still use the same technique of infecting wild populations of Ae. aegypti as they did in Australia, but their target diseases now include Zika, chikungunya and yellow fever as well as dengue. Although not alone in their efforts to use Wolbachia-infected mosquitoes to reduce mosquito-borne disease, the World Mosquito Program method is praised for being self-sustaining, in that it causes a permanent phenotype change rather than suppressing mosquito populations through cytoplasmic incompatibility via male-only releases. Researchers working with dengue virus have also tried to introduce anti-dengue genes into the mosquito population through a gene drive mechanism. The result would be that any female mosquitoes not inheriting the anti-dengue gene would die.
However, this mechanism has only been shown in Drosophila melanogaster and has not yet been successful in Aedes aegypti. One possible answer could be the CRISPR/Cas9 gene-editing system, which could potentially introduce anti-dengue genes into the offspring's genome.
Biology and health sciences
Concepts
Health
27290438
https://en.wikipedia.org/wiki/Geomathematics
Geomathematics
Geomathematics (also: mathematical geosciences, mathematical geology, mathematical geophysics) is the application of mathematical methods to solve problems in geosciences, including geology and geophysics, and particularly geodynamics and seismology. Applications Geophysical fluid dynamics Geophysical fluid dynamics develops the theory of fluid dynamics for the atmosphere, ocean and Earth's interior. Applications include geodynamics and the theory of the geodynamo. Geophysical inverse theory Geophysical inverse theory is concerned with analyzing geophysical data to get model parameters. It is concerned with the question: What can be known about the Earth's interior from measurements on the surface? Generally there are limits on what can be known even in the ideal limit of exact data. The goal of inverse theory is to determine the spatial distribution of some variable (for example, density or seismic wave velocity). The distribution determines the values of an observable at the surface (for example, gravitational acceleration for density). There must be a forward model predicting the surface observations given the distribution of this variable. Applications include geomagnetism, magnetotellurics and seismology. Fractals and complexity Many geophysical data sets have spectra that follow a power law, meaning that the frequency of an observed magnitude varies as some power of the magnitude. An example is the distribution of earthquake magnitudes; small earthquakes are far more common than large earthquakes. This is often an indicator that the data sets have an underlying fractal geometry. Fractal sets have a number of common features, including structure at many scales, irregularity, and self-similarity (they can be split into parts that look much like the whole). The manner in which these sets can be divided determines the Hausdorff dimension of the set, which is generally different from the more familiar topological dimension.
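The box-counting estimate of fractal dimension can be illustrated on the middle-thirds Cantor set, whose Hausdorff dimension log 2 / log 3 ≈ 0.63 differs from its topological dimension of 0. This is a self-contained sketch, not tied to any particular geophysical data set:

```python
import math

def cantor_points(depth):
    """Left endpoints of the 2**depth intervals of the middle-thirds Cantor set."""
    xs = [0.0]
    width = 1.0
    for _ in range(depth):
        width /= 3.0
        xs = xs + [x + 2.0 * width for x in xs]
    return xs

def box_count(points, eps):
    """Number of boxes of size eps needed to cover the point set."""
    return len({math.floor(x / eps + 1e-9) for x in points})

points = cantor_points(10)
scales = [3.0 ** -m for m in range(1, 9)]      # box sizes, coarse to fine
log_n = [math.log(box_count(points, eps)) for eps in scales]
log_inv = [math.log(1.0 / eps) for eps in scales]

# Box-counting dimension = least-squares slope of log N(eps) vs log(1/eps)
n = len(scales)
mx, my = sum(log_inv) / n, sum(log_n) / n
dim = (sum((x - mx) * (y - my) for x, y in zip(log_inv, log_n))
       / sum((x - mx) ** 2 for x in log_inv))
print(dim)   # ~0.6309, i.e. log 2 / log 3
```

The same log-log slope fit underlies power-law analyses of geophysical data such as earthquake magnitude-frequency statistics.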
Fractal phenomena are associated with chaos, self-organized criticality and turbulence. Fractal Models in the Earth Sciences by Gabor Korvin was one of the earlier books on the application of fractals in the Earth sciences. Data assimilation Data assimilation combines numerical models of geophysical systems with observations that may be irregular in space and time. Many of the applications involve geophysical fluid dynamics. Fluid dynamic models are governed by a set of partial differential equations. For these equations to make good predictions, accurate initial conditions are needed. However, often the initial conditions are not very well known. Data assimilation methods allow the models to incorporate later observations to improve the initial conditions. Data assimilation plays an increasingly important role in weather forecasting. Geophysical statistics Some statistical problems come under the heading of mathematical geophysics, including model validation and quantifying uncertainty. Terrestrial tomography An important research area that utilises inverse methods is seismic tomography, a technique for imaging the subsurface of the Earth using seismic waves. Traditionally, seismic waves produced by earthquakes or anthropogenic seismic sources (e.g., explosives, marine air guns) have been used. Crystallography Crystallography is one of the traditional areas of geology that use mathematics. Crystallographers make use of linear algebra through the metrical matrix. The metrical matrix uses the basis vectors of the unit cell to find the volume of the unit cell, d-spacings, the angle between two planes, the angle between atoms, and bond lengths. Miller indices are also helpful in applying the metrical matrix. Bragg's law is also useful when using an electron microscope, as it relates diffraction angles, wavelength, and the d-spacings within a sample.
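The metrical matrix computations described above can be sketched as follows; the cubic cell parameters and the Cu K-alpha wavelength used at the end are chosen purely as an illustrative check, not taken from the text:

```python
import math
import numpy as np

def metrical_matrix(a, b, c, alpha, beta, gamma):
    """Metrical (metric) tensor G, with G[i][j] = a_i . a_j, built from
    cell parameters; angles in degrees (alpha between b and c, etc.)."""
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return np.array([[a * a,      a * b * cg, a * c * cb],
                     [a * b * cg, b * b,      b * c * ca],
                     [a * c * cb, b * c * ca, c * c]])

def cell_volume(G):
    """Unit-cell volume: V = sqrt(det G)."""
    return math.sqrt(np.linalg.det(G))

def d_spacing(G, h, k, l):
    """Interplanar spacing: 1/d^2 = (h k l) G^-1 (h k l)^T."""
    hkl = np.array([h, k, l], dtype=float)
    return 1.0 / math.sqrt(hkl @ np.linalg.inv(G) @ hkl)

def bragg_angle(d, wavelength, n=1):
    """Bragg's law, n*lambda = 2*d*sin(theta); returns theta in degrees."""
    return math.degrees(math.asin(n * wavelength / (2.0 * d)))

# Hypothetical cubic cell, a = 4 angstroms, probed with Cu K-alpha X-rays
G = metrical_matrix(4.0, 4.0, 4.0, 90.0, 90.0, 90.0)
print(cell_volume(G))                               # ~64 (a^3)
print(d_spacing(G, 1, 1, 0))                        # ~2.83 = a / sqrt(2)
print(bragg_angle(d_spacing(G, 1, 1, 0), 1.5406))   # ~15.8 degrees
```

For the cubic case these reduce to the familiar V = a³ and d = a/√(h²+k²+l²), but the matrix form handles triclinic cells with no extra code.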
Geophysics Geophysics is one of the most mathematics-heavy disciplines of Earth science. Its many applications include gravity, magnetic, seismic, electric, electromagnetic, resistivity, radioactivity, induced polarization, and well logging. Gravity and magnetic methods share similar characteristics because both measure small variations in a potential field caused by the properties of the rocks in an area (density in the case of gravity, magnetization in the case of magnetics). That said, gravity fields tend to be more uniform and smooth than magnetic fields. Gravity is often used for oil exploration; seismic methods can also be used, but they are often significantly more expensive. Seismic methods are used more than most other geophysical techniques because of their penetration ability, resolution, and accuracy. Geomorphology Many applications of mathematics in geomorphology are related to water. In soils, concepts such as Darcy's law, Stokes' law, and porosity are used. Darcy's law describes how fluid flows through a saturated, uniform soil; this type of work falls under hydrogeology. Stokes' law gives how quickly different-sized particles settle out of a fluid. It is used in pipette analysis of soils to find the percentages of sand, silt, and clay. A potential source of error is that it assumes perfectly spherical particles, which do not exist. Stream power is used to find the ability of a river to incise into the river bed. This is applicable for seeing where a river is likely to fail and change course, or when assessing the damage of losing stream sediments in a river system (as downstream of a dam). Differential equations can be used in multiple areas of geomorphology, including the exponential growth equation, the distribution of sedimentary rocks, diffusion of gas through rocks, and crenulation cleavages. Glaciology Mathematics in glaciology consists of theoretical, experimental, and modeling work. It usually covers glaciers, sea ice, water flow, and the land under the glacier.
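The Darcy's law and Stokes' law calculations from the geomorphology discussion above are straightforward to sketch; the particle, fluid, and conductivity values below are typical illustrative numbers (quartz grains in water, a sandy aquifer), not taken from any particular study:

```python
def stokes_settling_velocity(d, rho_p=2650.0, rho_f=1000.0,
                             mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in a fluid.

    d     -- particle diameter (m)
    rho_p -- particle density (kg/m^3), ~2650 for quartz
    rho_f -- fluid density (kg/m^3)
    mu    -- dynamic viscosity (Pa*s), ~1e-3 for water at 20 C
    Assumes perfectly spherical particles and laminar flow -- the same
    idealisation noted in the text as a source of error.
    """
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

def darcy_flux(K, dh, dl):
    """Darcy's law: specific discharge q = -K * dh/dl (m/s)."""
    return -K * dh / dl

print(stokes_settling_velocity(10e-6))   # 10 um silt: ~9e-5 m/s
print(stokes_settling_velocity(2e-6))    # 2 um clay boundary: 25x slower
print(darcy_flux(K=1e-5, dh=-2.0, dl=100.0))  # sandy aquifer: 2e-7 m/s
```

The quadratic dependence on diameter is what makes pipette analysis work: halving the grain size quarters the settling velocity, so sand, silt, and clay fractions separate cleanly in time.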
Polycrystalline ice deforms more slowly than single-crystal ice, because in it the stress falls on basal planes that are already blocked by other ice crystals. Its elastic behaviour can be mathematically modeled with Hooke's law, using the Lamé constants. Generally, the ice has its linear-elasticity constants averaged over one dimension of space to simplify the equations while still maintaining accuracy. Viscoelastic polycrystalline ice is considered to experience low stresses, usually below one bar; this type of ice system is where one tests for creep or vibrations from tension on the ice. One of the more important equations in this area of study is the relaxation function, a stress-strain relationship independent of time. This area is usually applied to transportation on, or building upon, floating ice. The shallow-ice approximation is useful for glaciers that have variable thickness, a small amount of stress, and variable velocity. One of the main goals of the mathematical work is to predict the stress and velocity, which can be affected by changes in the properties of the ice and its temperature. This is an area in which the basal shear-stress formula can be used. Academic journals International Journal on Geomathematics Mathematical Geosciences
Physical sciences
Geophysics
Earth science
27294985
https://en.wikipedia.org/wiki/Andean%20orogeny
Andean orogeny
The Andean orogeny is an ongoing process of orogeny that began in the Early Jurassic and is responsible for the rise of the Andes mountains. The orogeny is driven by a reactivation of a long-lived subduction system along the western margin of South America. On a continental scale, the Cretaceous (90 Ma) and Oligocene (30 Ma) were periods of re-arrangement in the orogeny. The details of the orogeny vary depending on the segment and the geological period considered. Overview Subduction orogeny has been occurring in what is now western South America since the break-up of the supercontinent Rodinia in the Neoproterozoic. The Paleozoic Pampean, Famatinian and Gondwanan orogenies are the immediate precursors to the later Andean orogeny. The first phases of Andean orogeny in the Jurassic and Early Cretaceous were characterized by extensional tectonics, rifting, the development of back-arc basins and the emplacement of large batholiths. This development is presumed to have been linked to the subduction of cold oceanic lithosphere. During the mid to Late Cretaceous (ca. 90 million years ago) the Andean orogeny changed significantly in character. Warmer and younger oceanic lithosphere is believed to have started to be subducted beneath South America around this time. This kind of subduction is held responsible not only for the intense contractional deformation that different lithologies were subjected to, but also for the uplift and erosion known to have occurred from the Late Cretaceous onward. Plate tectonic reorganization since the mid-Cretaceous might also have been linked to the opening of the South Atlantic Ocean. Another change related to mid-Cretaceous plate tectonic changes was the change in subduction direction of the oceanic lithosphere, which went from south-east motion to north-east motion at about 90 million years ago.
While the subduction direction changed, it remained oblique (and not perpendicular) to the coast of South America, and the direction change affected several subduction-zone-parallel faults, including the Atacama, Domeyko and Liquiñe-Ofqui faults. Low-angle subduction, or flat-slab subduction, has been common during the Andean orogeny, leading to crustal shortening and deformation and the suppression of arc volcanism. Flat-slab subduction has occurred at different times in various parts of the Andes, with northern Colombia (6–10° N), Ecuador (0–2° S), northern Peru (3–13° S) and north-central Chile (24–30° S) experiencing these conditions at present. The tectonic growth of the Andes and the regional climate have evolved simultaneously and have influenced each other. The topographic barrier formed by the Andes blocked the inflow of humid air into the present Atacama Desert. This aridity, in turn, changed the normal superficial redistribution of mass via erosion and river transport, modifying the later tectonic deformation. In the Oligocene the Farallon Plate broke up, forming the modern Cocos and Nazca plates and ushering in a series of changes in the Andean orogeny. The new Nazca Plate was then directed into orthogonal subduction beneath South America, causing uplift in the Andes ever since, with the greatest impact in the Miocene. While the various segments of the Andes have their own uplift histories, as a whole the Andes have risen significantly in the last 30 million years (Oligocene–present). Orogeny by segment Colombia, Ecuador and Venezuela (12° N–3° S) Tectonic blocks of continental crust that had separated from northwestern South America in the Jurassic re-joined the continent in the Late Cretaceous by colliding obliquely with it. This episode of accretion occurred in a complex sequence. The accretion of island arcs against northwestern South America in the Early Cretaceous led to the development of a magmatic arc caused by subduction.
The Romeral Fault in Colombia forms the suture between the accreted terranes and the rest of South America. Around the Cretaceous–Paleogene boundary (ca. 65 million years ago) the oceanic plateau of the Caribbean large igneous province collided with South America. The subduction of the lithosphere as the oceanic plateau approached South America led to the formation of a magmatic arc now preserved in the Cordillera Real of Ecuador and the Cordillera Central of Colombia. In the Miocene an island arc and terrane (Chocó terrane) collided with northwestern South America. This terrane forms parts of what is now Chocó Department and western Panama. The Caribbean Plate collided with South America in the Early Cenozoic but then shifted its movement eastward. Dextral fault movement between the South American and Caribbean plates started 17–15 million years ago. This movement was canalized along a series of strike-slip faults, but these faults alone do not account for all the deformation. The northern part of the Dolores-Guayaquil Megashear forms part of the dextral fault systems, while in the south the megashear runs along the suture between the accreted tectonic blocks and the rest of South America. Northern Peru (3–13° S) Long before the Andean orogeny, the northern half of Peru was subject to the accretion of terranes in the Neoproterozoic and Paleozoic. Andean orogenic deformation in northern Peru can be traced to the Albian (Early Cretaceous). This first phase of deformation, the Mochica Phase, is evidenced in the folding of Casma Group sediments near the coast. Sedimentary basins in western Peru changed from marine to continental conditions in the Late Cretaceous as a consequence of a generalized vertical uplift. The uplift in northern Peru is thought to be associated with the contemporaneous accretion of the Piñón terrane in Ecuador. This stage of orogeny is called the Peruvian Phase.
Besides coastal Peru, the Peruvian Phase affected or caused crustal shortening along the Cordillera Oriental and the tectonic inversion of Santiago Basin in the Sub-Andean zone. The bulk of the Sub-Andean zone was, however, unaffected by the Peruvian Phase. After a period without much tectonic activity in the Early Eocene, the Incaic Phase of orogeny occurred in the Mid and Late Eocene. No other tectonic event in the western Peruvian Andes compares with the Incaic Phase in magnitude. Horizontal shortening during the Incaic Phase resulted in the formation of the Marañón fold and thrust belt. An unconformity cutting across the Marañón fold and thrust belt shows that the Incaic Phase ended no later than 33 million years ago, in the earliest Oligocene. In the period after the Eocene, the northern Peruvian Andes were subject to the Quechua Phase of orogeny. The Quechua Phase is divided into the sub-phases Quechua 1, Quechua 2 and Quechua 3. The Quechua 1 Phase lasted from 17 to 15 million years ago and included a reactivation of Incaic Phase structures in the Cordillera Occidental. 9–8 million years ago, in the Quechua 2 Phase, the older parts of the Andes in northern Peru were thrust to the northeast. Most of the Sub-Andean zone of northern Peru deformed 7–5 million years ago (Late Miocene) during the Quechua 3 Phase, when the Sub-Andean zone was stacked into a thrust belt. The Miocene rise of the Andes in Peru and Ecuador led to increased orographic precipitation along their eastern parts and to the birth of the modern Amazon River. One hypothesis links these two changes by assuming that increased precipitation led to increased erosion, and that this erosion filled the Andean foreland basins beyond their capacity, so that it would have been the basin over-sedimentation rather than the rise of the Andes that made drainage basins flow to the east. Previously the interior of northern South America drained to the Pacific.
Bolivian Orocline (13–26° S) Early Andean subduction in the Jurassic formed a volcanic arc in northern Chile known as La Negra Arc. The remnants of this arc are now exposed in the Chilean Coast Range. Several plutons were emplaced in the Chilean Coast Range in the Jurassic and Early Cretaceous, including the Vicuña Mackenna Batholith. Further east at similar latitudes, in Argentina and Bolivia, the Salta rift system developed during the Late Jurassic and the Early Cretaceous. Salar de Atacama Basin, which is thought to be the western arm of the rift system, accumulated a >6,000 m thick pile of sediments during the Late Cretaceous and Early Paleogene, now known as the Purilactis Group. Pisco Basin, around latitude 14° S, was subject to a marine transgression in the Oligocene and Early Miocene epochs (25–16 Ma). In contrast, Moquegua Basin to the southeast and the coast south of Pisco Basin saw no transgression during this time but rather a steady rise of the land. From the Late Miocene onward, the region that would become the Altiplano rose from low elevations to more than 3,000 m.a.s.l. It is estimated that the region rose 2,000 to 3,000 meters in the last ten million years. Together with this uplift, several valleys incised into the western flank of the Altiplano. In the Miocene the Atacama Fault moved, uplifting the Chilean Coast Range and creating sedimentary basins east of it. At the same time the Andes around the Altiplano region broadened to exceed any other Andean segment in width. Possibly about 1,000 km of lithosphere has been lost to lithospheric shortening. During subduction the western end of the forearc region flexed downward, forming a giant monocline. Somewhat to the south, tectonic inversion during the "Incaic Phase" (Eocene?) tilted the strata of the Purilactis Group and in some localities also thrust younger strata on top of it. The region east of the Altiplano is characterized by deformation and tectonics along a complex fold and thrust belt.
Overall, the region surrounding the Altiplano and Puna plateaux has been horizontally shortened since the Eocene. In southern Bolivia, lithospheric shortening has made the Andean foreland basin move eastward relative to the continent at an average rate of ca. 12–20 mm per year during most of the Cenozoic. Along the Argentine Northwest, the Andean uplift has caused the Andean foreland basins to separate into several minor isolated intermontane sedimentary basins. Towards the east, the piling up of crust in Bolivia and the Argentine Northwest caused a north-south forebulge, known as the Asunción arch, to develop in Paraguay. The uplift of the Altiplano is attributed to a combination of horizontal shortening of the crust and increased temperatures in the mantle (thermal thinning). The bend in the Andes and the west coast of South America known as the Bolivian Orocline was enhanced by Cenozoic horizontal shortening, but existed already independently of it. Meso-scale tectonic processes aside, the particular characteristics of the Bolivian Orocline–Altiplano region have been attributed to a variety of deeper causes. These causes include a local steepening of the subduction angle of the Nazca Plate, increased crustal shortening and plate convergence between the Nazca and South American plates, an acceleration in the westward drift of the South American Plate, and a rise in the shear stress between the Nazca and South American plates. This increase in shear stress could in turn be related to the scarcity of sediments in the Atacama trench, which is caused by the arid conditions along the Atacama Desert. Capitanio et al. attribute the rise of the Altiplano and the bending of the Bolivian Orocline to the varying ages of the subducted Nazca Plate, with the older parts of the plate subducting at the centre of the orocline. As Andrés Tassara puts it, the rigidity of the Bolivian Orocline crust derives from its thermal conditions.
The crust of the western region (forearc) of the orocline has been cold and rigid, resisting and damming up the westward flow of warmer and weaker ductile crustal material from beneath the Altiplano. The Cenozoic orogeny at the Bolivian Orocline has produced significant anatexis of crustal rocks, including metasediments and gneisses, resulting in the formation of peraluminous magmas. These characteristics imply that the Cenozoic tectonics and magmatism in parts of the Bolivian Andes are similar to those seen in collisional orogens. The peraluminous magmatism in the Cordillera Oriental is the cause of the world-class mineralization of the Bolivian tin belt. The rise of the Altiplano is thought by scientist Adrian Hartley to have enhanced an already prevailing aridity or semi-aridity in the Atacama Desert by casting a rain shadow over the region. Central Chile and Western Argentina (26–39° S) At the latitudes between 17 and 39° S, the Late Cretaceous and Cenozoic development of the Andean orogeny is characterized by an eastward migration of the magmatic belt and the development of several foreland basins. The eastward migration of the arc is thought to be caused by subduction erosion. At the latitudes of 32–36° S (that is, Central Chile and most of Mendoza Province) the Andean orogeny proper began in the Late Cretaceous when backarc basins were inverted. Immediately east of the early Andes, foreland basins developed, and their flexural subsidence caused the ingression of waters from the Atlantic all the way to the front of the orogen in the Maastrichtian. The Andes at the latitudes of 32–36° S experienced a sequence of uplift in the Cenozoic that started in the west and spread to the east. Beginning about 20 million years ago in the Miocene, the Principal Cordillera (east of Santiago) began an uplift that lasted until about 8 million years ago.
From the Eocene to the early Miocene, sediments accumulated in the Abanico Extensional Basin, a north-south elongated basin in Chile that spanned from 29° to 38° S. Tectonic inversion from 21 to 16 million years ago caused the basin to collapse and its sediments to be incorporated into the Andean cordillera. Lavas and volcanic material that are now part of the Farellones Formation accumulated while the basin was being inverted and uplifted. The Miocene continental divide was about 20 km west of the modern water divide that makes up the Argentina–Chile border. Subsequent river incision shifted the divide to the east, leaving old flattish surfaces hanging. Compression and uplift in this part of the Andes have continued into the present. The Principal Cordillera had risen to heights that allowed for the development of valley glaciers about 1 million years ago. Before the Miocene uplift of the Principal Cordillera was over, the Frontal Cordillera to the east started a period of uplift that lasted from 12 to 5 million years ago. Further east, the Precordillera was uplifted in the last 10 million years and the Sierras Pampeanas have experienced a similar uplift in the last 5 million years. The more eastern parts of the Andes at these latitudes had their geometry controlled by ancient faults dating to the San Rafael orogeny of the Paleozoic. The Sierras de Córdoba (part of the Sierras Pampeanas), where the effects of the ancient Pampean orogeny can be observed, owe their modern uplift and relief to the Andean orogeny in the late Cenozoic. Similarly, the San Rafael Block east of the Andes and south of the Sierras Pampeanas was raised in the Miocene during the Andean orogeny. In broad terms, the most active phase of orogeny in the area of southern Mendoza Province and northern Neuquén Province (34–38° S) happened in the Late Miocene while arc volcanism occurred east of the Andes.
At more southern latitudes (36–39° S) various Jurassic and Cretaceous marine transgressions from the Pacific are recorded in the sediments of Neuquén Basin. In the Late Cretaceous conditions changed: a marine regression occurred and the fold and thrust belts of Malargüe (36°00 S), Chos Malal (37° S) and Agrio (38° S) started to develop in the Andes, and did so until Eocene times. This meant an advance of orogenic deformation since the Late Cretaceous that caused the western part of Neuquén Basin to be stacked into the Malargüe and Agrio fold and thrust belts. In the Oligocene the western part of the fold and thrust belt was subject to a short period of extensional tectonics whose structures were inverted in the Miocene. After a period of quiescence the Agrio fold and thrust belt resumed limited activity in the Eocene and then again in the Late Miocene. In the south of Mendoza Province the Guañacos fold and thrust belt (36.5° S) appeared and grew in the Pliocene and Pleistocene, consuming the western fringes of the Neuquén Basin. Northern Patagonian Andes (39–48° S) Southern Patagonian Andes (48–55° S) The early development of the Andean orogeny in southernmost South America also affected the Antarctic Peninsula. In southern Patagonia at the onset of the Andean orogeny in the Jurassic, extensional tectonics created the Rocas Verdes Basin, a back-arc basin whose southeastern extension survives as the Weddell Sea in Antarctica. In the Late Cretaceous the tectonic regime of Rocas Verdes Basin changed, leading to its transformation into a compressional foreland basin, the Magallanes Basin, in the Cenozoic. This change was associated with an eastward shift of the basin depocenter and the obduction of ophiolites. The closure of Rocas Verdes Basin in the Cretaceous is linked to the high-grade metamorphism of the Cordillera Darwin Metamorphic Complex in southern Tierra del Fuego.
As the Andean orogeny went on, South America drifted away from Antarctica during the Cenozoic, leading first to the formation of an isthmus and then to the opening of the Drake Passage 45 million years ago. The separation from Antarctica changed the tectonics of the Fuegian Andes into a transpressive regime with transform faults. About 15 million years ago in the Miocene, the Chile Ridge began to subduct beneath the southern tip of Patagonia (55° S). The point of subduction, the triple junction, has gradually moved to the north and lies at present at 47° S. The subduction of the ridge has created a northward-moving "window" or gap in the asthenosphere beneath South America.
Physical sciences
Geologic features
Earth science
27298083
https://en.wikipedia.org/wiki/Neanderthal
Neanderthal
Neanderthals ( ; Homo neanderthalensis or H. sapiens neanderthalensis) are an extinct group of archaic humans (generally regarded as a distinct species, though some regard it as a subspecies of Homo sapiens) who lived in Eurasia until about 40,000 years ago. The type specimen, Neanderthal 1, was found in 1856 in the Neander Valley in present-day Germany. It is not clear when the line of Neanderthals split from that of modern humans; studies have produced various times ranging from 315,000 to more than 800,000 years ago. The date of divergence of Neanderthals from their ancestor H. heidelbergensis is also unclear. The oldest potential Neanderthal bones date to 430,000 years ago, but the classification remains uncertain. Neanderthals are known from numerous fossils, especially from after 130,000 years ago. The reasons for Neanderthal extinction are disputed. Theories for their extinction include demographic factors such as small population size and inbreeding, competitive replacement, interbreeding and assimilation with modern humans, change of climate, disease, or a combination of these factors. Neanderthals lived in a high-stress environment with high trauma rates, and about 80% died before the age of 40. The total population of Neanderthals remained low, and interbreeding with modern humans tended toward a loss of Neanderthal genes over time. They lacked effective long-distance networks. Despite this, there is evidence of regional cultures and regular communication between communities, possibly moving between caves seasonally. For much of the early 20th century, European researchers depicted Neanderthals as primitive, unintelligent and brutish. Although knowledge and perception of them has markedly changed since then in the scientific community, the image of the unevolved caveman archetype remains prevalent in popular culture. In truth, Neanderthal technology was quite sophisticated. 
It includes the Mousterian stone-tool industry as well as the abilities to create fire, build cave hearths (placed at the centre of their homes to cook food, keep warm and defend against animals), make adhesive birch bark tar, craft at least simple clothes similar to blankets and ponchos, weave, possibly go seafaring through the Mediterranean, make use of medicinal plants, treat severe injuries, store food, and use various cooking techniques such as roasting, boiling, and smoking. Neanderthals consumed a wide array of food, mainly hoofed mammals, but also megafauna, plants, small mammals, birds, and aquatic and marine resources. Although they were probably apex predators, they still competed with cave lions, cave hyenas and other large predators. A number of examples of symbolic thought and Palaeolithic art have been inconclusively attributed to Neanderthals, namely possible ornaments made from bird claws and feathers, shells, collections of unusual objects including crystals and fossils, engravings, music production (possibly indicated by the Divje Babe flute), and Spanish cave paintings contentiously dated to before 65,000 years ago. Some claims of religious beliefs have been made. Neanderthals were likely capable of speech, possibly articulate, although the complexity of their language is not known. Compared with modern humans, Neanderthals had a more robust build and proportionally shorter limbs. Researchers often explain these features as adaptations to conserve heat in a cold climate, but they may also have been adaptations for sprinting in the warmer, forested landscape that Neanderthals often inhabited. They had cold-specific adaptations, such as specialised body-fat storage and an enlarged nose to warm air (although the nose could have been caused by genetic drift). Average Neanderthal men stood around and women tall, similar to pre-industrial modern Europeans.
The braincases of Neanderthal men and women averaged about and , respectively, which is considerably larger than the modern human average ( and , respectively). The Neanderthal skull was more elongated and the brain had smaller parietal lobes and cerebellum, but larger temporal, occipital and orbitofrontal regions. The 2010 Neanderthal genome project's draft report presented evidence for interbreeding between Neanderthals and modern humans. Neanderthals also appear to have interbred with Denisovans, a different group of archaic humans, in Siberia. Around 1–4% of genomes of Eurasians, Indigenous Australians, Melanesians, Native Americans and North Africans is of Neanderthal ancestry, while most inhabitants of sub-Saharan Africa have around 0.3% of Neanderthal genes, save possible traces from early sapiens-to-Neanderthal gene flow and/or more recent back-migration of Eurasians to Africa. In all, about 20% of distinctly Neanderthal gene variants survive in modern humans. Although many of the gene variants inherited from Neanderthals may have been detrimental and selected out, Neanderthal introgression appears to have affected the modern human immune system, and is also implicated in several other biological functions and structures, but a large portion appears to be non-coding DNA. Taxonomy Etymology Neanderthals are named after the Neander Valley in which the first identified specimen was found. The valley was spelled Neanderthal and the species was spelled Neanderthaler in German until the spelling reform of 1901. The spelling Neandertal for the species is occasionally seen in English, even in scientific publications, but the scientific name, H. neanderthalensis, is always spelled with th according to the principle of priority. The vernacular name of the species in German is always Neandertaler ("inhabitant of the Neander Valley"), whereas Neandertal always refers to the valley. 
The valley itself was named after the late 17th century German theologian and hymn writer Joachim Neander, who often visited the area. His name in turn means 'new man', being a learned Graecisation of the German surname Neumann. Neanderthal can be pronounced with a /t/ sound or with the standard English fricative /θ/; the latter pronunciation, however, has no basis in the original German word, which is always pronounced with a t regardless of the historical spelling. Neanderthal 1, the type specimen, was known as the "Neanderthal cranium" or "Neanderthal skull" in anthropological literature, and the individual reconstructed on the basis of the skull was occasionally called "the Neanderthal man". The binomial name Homo neanderthalensis—extending the name "Neanderthal man" from the individual specimen to the entire species, and formally recognising it as distinct from humans—was first proposed by Irish geologist William King in a paper read to the 33rd meeting of the British Science Association in 1863. However, in 1864, he recommended that Neanderthals and modern humans be classified in different genera, as he compared the Neanderthal braincase to that of a chimpanzee and argued that they were "incapable of moral and [theistic] conceptions". Research history The first Neanderthal remains—Engis 2 (a skull)—were discovered in 1829 by Dutch/Belgian prehistorian Philippe-Charles Schmerling in the Grottes d'Engis, Belgium. He concluded that these "poorly developed" human remains must have been buried at the same time and by the same causes as the co-existing remains of extinct animal species. In 1848, Gibraltar 1 from Forbes' Quarry was presented to the Gibraltar Scientific Society by their Secretary Lieutenant Edmund Henry Réné Flint, but was thought to be a modern human skull.
In 1856, local schoolteacher Johann Carl Fuhlrott recognised bones from Kleine Feldhofer Grotte in Neander Valley—Neanderthal 1 (the holotype specimen)—as distinct from modern humans, and gave them to German anthropologist Hermann Schaaffhausen to study in 1857. The find comprised the cranium, thigh bones, right arm, left humerus and ulna, left ilium (hip bone), part of the right shoulder blade, and pieces of the ribs. Following Charles Darwin's On the Origin of Species, Fuhlrott and Schaaffhausen argued that the bones represented an ancient modern human form; Schaaffhausen, a social Darwinist, believed that humans linearly progressed from savage to civilised, and so concluded that Neanderthals were barbarous cave-dwellers. Fuhlrott and Schaaffhausen met opposition, most notably from the prolific pathologist Rudolf Virchow, who argued against defining new species on the basis of a single find. In 1872, Virchow erroneously interpreted Neanderthal characteristics as evidence of senility, disease and malformation instead of archaic traits, which stalled Neanderthal research until the end of the century. By the early 20th century, numerous other Neanderthal discoveries had been made, establishing H. neanderthalensis as a legitimate species. The most influential specimen was La Chapelle-aux-Saints 1 ("The Old Man") from La Chapelle-aux-Saints, France. French palaeontologist Marcellin Boule authored several publications detailing the specimen—among the first to establish palaeontology as a science—but reconstructed him as slouching, ape-like, and only remotely related to modern humans. The 1912 'discovery' of Piltdown Man (a hoax), appearing much more similar to modern humans than Neanderthals, was used as evidence that multiple different and unrelated branches of primitive humans existed, and supported Boule's reconstruction of H. neanderthalensis as a far distant relative and an evolutionary dead-end.
He fuelled the popular image of Neanderthals as barbarous, slouching, club-wielding primitives; this image was reproduced for several decades and popularised in science fiction works, such as the 1911 The Quest for Fire by J.-H. Rosny aîné and the 1927 The Grisly Folk by H. G. Wells, in which they are depicted as monsters. In 1911, Scottish anthropologist Arthur Keith reconstructed La Chapelle-aux-Saints 1 as an immediate precursor to modern humans, sitting next to a fire, producing tools, wearing a necklace, and having a more humanlike posture, but this failed to garner much scientific support, and Keith abandoned his thesis in 1915. By the middle of the century, based on the exposure of Piltdown Man as a hoax as well as a reexamination of La Chapelle-aux-Saints 1 (who had osteoarthritis, which caused slouching in life) and new discoveries, the scientific community began to rework its understanding of Neanderthals. Ideas such as Neanderthal behaviour, intelligence and culture were being discussed, and a more humanlike image of them emerged. In 1939, American anthropologist Carleton Coon reconstructed a Neanderthal in a modern business suit and hat to emphasise that they would be, more or less, indistinguishable from modern humans had they survived into the present. William Golding's 1955 novel The Inheritors depicts Neanderthals as much more emotional and civilised. However, Boule's image continued to influence works until the 1960s. Today, Neanderthal reconstructions are often very humanlike. Hybridisation between Neanderthals and early modern humans had been suggested early on, such as by English anthropologist Thomas Huxley in 1890, Danish ethnographer Hans Peder Steensby in 1907, and Coon in 1962. In the early 2000s, supposed hybrid specimens were discovered: Lagar Velho 1 and Muierii 1. However, similar anatomy could also have been caused by adaptation to a similar environment rather than interbreeding.
Neanderthal admixture was found to be present in modern populations in 2010 with the mapping of the first Neanderthal genome sequence. This was based on three specimens in Vindija Cave, Croatia, which contained almost 4% archaic DNA (allowing for near complete sequencing of the genome). However, there was approximately 1 error for every 200 letters (base pairs), inferred from an implausibly high apparent mutation rate, probably due to poor preservation of the samples. In 2012, British-American geneticist Graham Coop hypothesised that the researchers had instead found evidence of a different archaic human species interbreeding with modern humans; this was disproven in 2013 by the sequencing of a high-quality Neanderthal genome preserved in a toe bone from Denisova Cave, Siberia. Classification Neanderthals are hominids in the genus Homo, humans, and generally classified as a distinct species, H. neanderthalensis, although sometimes as a subspecies of modern humans, Homo sapiens neanderthalensis. The latter would necessitate the classification of modern humans as H. sapiens sapiens. A large part of the controversy stems from the vagueness of the term "species", as it is generally used to distinguish two genetically isolated populations, but admixture between modern humans and Neanderthals is known to have occurred. However, the absence of Neanderthal-derived patrilineal Y-chromosome and matrilineal mitochondrial DNA (mtDNA) in modern humans, along with the underrepresentation of Neanderthal X-chromosome DNA, could imply reduced fertility or frequent sterility of some hybrid crosses, representing a partial biological reproductive barrier between the groups, and therefore species distinction. In 2014, geneticist Svante Pääbo summarised the controversy, describing such "taxonomic wars" as unresolvable, "since there is no definition of species perfectly describing the case". Neanderthals are thought to have been more closely related to Denisovans than to modern humans.
Likewise, Neanderthals and Denisovans share a more recent last common ancestor (LCA) with each other than with modern humans, based on nuclear DNA (nDNA). However, Neanderthals and modern humans share a more recent mitochondrial LCA (observable by studying mtDNA) and Y-chromosome LCA. This likely resulted from an interbreeding event subsequent to the Neanderthal/Denisovan split: either introgression from an unknown archaic human into Denisovans, or introgression from an earlier unidentified modern human wave out of Africa into Neanderthals. The fact that the mtDNA of a ~430,000-year-old early Neanderthal-line archaic human from Sima de los Huesos in Spain is more closely related to those of Denisovans than to other Neanderthals or modern humans has been cited as evidence in favour of the latter hypothesis. Evolution It is largely thought that H. heidelbergensis was the last common ancestor of Neanderthals, Denisovans and modern humans before the populations became isolated in Europe, Asia and Africa, respectively. The taxonomic distinction between H. heidelbergensis and Neanderthals is mostly based on a fossil gap in Europe between 300,000 and 243,000 years ago during marine isotope stage 8. "Neanderthals", by convention, are fossils which date to after this gap. DNA from archaic humans from the 430,000-year-old Sima de los Huesos site in Spain indicates that they are more closely related to Neanderthals than to Denisovans, and thus that the split between Neanderthals and Denisovans must predate this time. The 400,000-year-old Aroeira 3 skull may also represent an early member of the Neanderthal line. It is possible that gene flow between Western Europe and Africa during the Middle Pleistocene obscured Neanderthal characteristics in some Middle Pleistocene European hominin specimens, such as those from Ceprano, Italy, and Sićevo Gorge, Serbia.
The fossil record is much more complete from 130,000 years ago onwards, and specimens from this period make up the bulk of known Neanderthal skeletons. Dental remains from the Italian Visogliano and Fontana Ranuccio sites indicate that Neanderthal dental features had evolved by around 450–430,000 years ago during the Middle Pleistocene. There are two main hypotheses regarding the evolution of Neanderthals following the Neanderthal/human split: two-phase and accretion. Two-phase argues that a single major environmental event—such as the Saale glaciation—caused European H. heidelbergensis to increase rapidly in body size and robustness and to undergo a lengthening of the head (phase 1), which then led to other changes in skull anatomy (phase 2). However, Neanderthal anatomy may not have been driven entirely by adaptation to cold weather. Accretion holds that Neanderthals slowly evolved over time from the ancestral H. heidelbergensis, divided into four stages: early-pre-Neanderthals (MIS 12, Elster glaciation), pre-Neanderthals (MIS 11–9, Holstein interglacial), early Neanderthals (MIS 7–5, Saale glaciation–Eemian), and classic Neanderthals (MIS 4–3, Würm glaciation). Numerous dates for the Neanderthal/human split have been suggested. A date of around 250,000 years ago cites "H. helmei" as the last common ancestor (LCA), with the split associated with the Levallois technique of making stone tools. A date of about 400,000 years ago uses H. heidelbergensis as the LCA. Estimates of 600,000 years ago assume that "H. rhodesiensis" was the LCA, splitting into the modern human lineage and a Neanderthal/H. heidelbergensis lineage. A date of 800,000 years ago takes H. antecessor as the LCA, though variations of this model would push the date back to 1 million years ago. However, a 2020 analysis of H. antecessor enamel proteomes suggests that H. antecessor is a related species but not a direct ancestor.
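Split-date estimates like these, and the DNA-based divergence times that follow, rest on simple molecular-clock arithmetic: under a constant clock, the split time is the observed per-base-pair divergence divided by twice the mutation rate, since differences accumulate independently along both descendant branches. Halving the assumed rate therefore doubles the inferred date. A minimal sketch, using a hypothetical divergence value chosen only to illustrate the scaling (not a figure from the source):

```python
# Molecular-clock sketch: t = d / (2 * mu).
# The divergence value d below is hypothetical and purely illustrative.

def split_time_years(divergence_per_bp: float, rate_per_bp_per_year: float) -> float:
    """Years since two lineages diverged under a constant molecular clock.

    The factor of 2 accounts for mutations accumulating independently
    along both descendant branches since the split.
    """
    return divergence_per_bp / (2.0 * rate_per_bp_per_year)

d = 4.7e-4  # hypothetical observed differences per base pair

t_fast = split_time_years(d, 1e-9)    # 235,000 years at the faster rate
t_slow = split_time_years(d, 0.5e-9)  # 470,000 years at the slower rate

# With d fixed, halving the assumed rate exactly doubles the estimate:
assert abs(t_slow / t_fast - 2.0) < 1e-12
```

This rate sensitivity is why the same genomic data can yield divergence dates that differ by a factor of two, as with the 236–190,000 versus 473–381,000 year figures for the Neanderthal/Denisovan split.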
DNA studies have yielded various results for the Neanderthal/human divergence time, such as 538–315, 553–321, 565–503, 654–475, 690–550, 765–550, 741–317, and 800–520,000 years ago; and a dental analysis concluded that the split occurred before 800,000 years ago. Neanderthals and Denisovans are more closely related to each other than they are to modern humans, meaning the Neanderthal/Denisovan split occurred after their split with modern humans. Assuming a mutation rate of 1 × 10−9 or 0.5 × 10−9 per base pair (bp) per year, the Neanderthal/Denisovan split occurred around either 236–190,000 or 473–381,000 years ago, respectively. Using a rate of 1.1 × 10−8 per generation with a new generation every 29 years, the time is 744,000 years ago. Using 5 × 10−10 per nucleotide site per year, it is 616,000 years ago. Using the latter dates, the split had likely already occurred by the time hominins spread out across Europe, and unique Neanderthal features had begun evolving by 600–500,000 years ago. Before splitting, Neanderthal/Denisovans (or "Neandersovans") migrating out of Africa into Europe apparently interbred with an unidentified "superarchaic" human species who were already present there; these superarchaics were the descendants of a very early migration out of Africa around 1.9 mya. Demographics Range Pre- and early Neanderthals, living before the Last Interglacial (130–115,000 years ago), are poorly known and come mostly from Western European sites. From 130,000 years ago onwards, the quality of the fossil record increases dramatically with classic Neanderthals, who are recorded from Western, Central, Eastern and Mediterranean Europe, as well as Southwest, Central and Northern Asia up to the Altai Mountains in southern Siberia. Pre- and early Neanderthals, on the other hand, seem to have continuously occupied only France, Spain and Italy, although some appear to have moved out of this "core-area" to form temporary settlements eastward (although without leaving Europe).
Nonetheless, southwestern France has the highest density of sites for pre-, early and classic Neanderthals. The Neanderthals were the first human species to permanently occupy Europe, as the continent was only sporadically occupied by earlier humans. The southernmost find was recorded at Shuqba Cave, Levant; reports of Neanderthals from the North African Jebel Irhoud and Haua Fteah have been reidentified as H. sapiens. Their easternmost presence is recorded at Denisova Cave, Siberia (85°E); the southeast Chinese Maba Man, a skull, shares several physical attributes with Neanderthals, although these may be the result of convergent evolution rather than of Neanderthals extending their range to the Pacific Ocean. The northernmost bound is generally accepted to have been 55°N, with unambiguous sites known between 50 and 53°N, although this is difficult to assess because glacial advances destroy most human remains; palaeoanthropologist Trine Kellberg Nielsen has argued that the lack of evidence of Southern Scandinavian occupation is (at least during the Last Interglacial) due to such glacial destruction of remains and a lack of research in the area. Middle Palaeolithic artefacts have been found up to 60°N on the Russian plains, but these are more likely attributable to modern humans. A 2017 study claimed the presence of Homo at the 130,000-year-old Californian Cerutti Mastodon site in North America, but this is largely considered implausible. It is unknown how the rapidly fluctuating climate of the last glacial period (Dansgaard–Oeschger events) impacted Neanderthals, as warming periods would produce more favourable temperatures but encourage forest growth and deter megafauna, whereas frigid periods would produce the opposite. However, Neanderthals may have preferred a forested landscape. Stable environments with mild mean annual temperatures may have been the most suitable Neanderthal habitats.
Populations may have peaked in cold but not extreme intervals, such as marine isotope stages 8 and 6 (respectively, 300,000 and 191,000 years ago, during the Saale glaciation). It is possible their range expanded and contracted as the ice retreated and grew, respectively, to avoid permafrost areas, residing in certain refuge zones during glacial maxima. In 2021, Israeli anthropologist Israel Hershkovitz and colleagues suggested that the 140- to 120,000-year-old Israeli Nesher Ramla remains, which feature a mix of Neanderthal and more ancient H. erectus traits, represent one such source population, which recolonised Europe following a glacial period. Population Like modern humans, Neanderthals probably descended from a very small population with an effective population—the number of individuals who can bear or father children—of approximately 3,000 to 12,000. However, Neanderthals maintained this very low population, and weakly harmful genes proliferated because of the reduced effectiveness of natural selection. Various studies, using mtDNA analysis, yield varying effective populations, such as about 1,000 to 5,000; 5,000 to 9,000 remaining constant; or 3,000 to 25,000 steadily increasing until 52,000 years ago before declining until extinction. Archaeological evidence suggests that there was a tenfold increase in the modern human population in Western Europe during the period of the Neanderthal/modern human transition, and Neanderthals may have been at a demographic disadvantage due to a lower fertility rate, a higher infant mortality rate, or a combination of the two. Estimates giving a total population in the higher tens of thousands are contested. A consistently low population may be explained in the context of the "Boserupian Trap": a population's carrying capacity is limited by the amount of food it can obtain, which in turn is limited by its technology.
Innovation increases with population, but if the population is too low, innovation will not occur very rapidly and the population will remain low. This is consistent with the apparent 150,000-year stagnation in Neanderthal lithic technology. In a sample of 206 Neanderthals, based on the abundance of young and mature adults in comparison to other age demographics, about 80% of those above the age of 20 died before reaching 40. This high mortality rate was probably due to their high-stress environment. However, it has also been estimated that the age pyramids for Neanderthals and contemporary modern humans were the same. Infant mortality was estimated to have been very high for Neanderthals, about 43% in northern Eurasia. Anatomy Build Neanderthals had more robust and stockier builds than typical modern humans; wider, barrel-shaped rib cages; wider pelvises; and proportionally shorter forearms and forelegs. Based on 45 Neanderthal long bones from 14 men and 7 women, the average height was for males and for females. For comparison, the average height of 20 male and 10 female Upper Palaeolithic humans is, respectively, and , although this decreases by nearer the end of the period based on 21 males and 15 females; and the average in the year 1900 was and , respectively. The fossil record shows that adult Neanderthals varied from about in height, although some may have grown much taller (73.8 to 184.8 cm based on footprint length and from 65.8 to 189.3 cm based on footprint width). For Neanderthal weight, samples of 26 specimens found an average of for males and for females. Using , the body mass index for Neanderthal males was calculated to be 26.9–28.2, which in modern humans correlates to being overweight. This indicates a very robust build. The Neanderthal LEPR gene, concerned with storing fat and body heat production, is similar to that of the woolly mammoth, and so was likely an adaptation for cold climate.
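The body mass index figures above follow the standard formula, mass in kilograms divided by height in metres squared. A minimal sketch; the mass and height inputs here are hypothetical values chosen only to land in the cited range, not measurements from the source:

```python
# BMI sketch: mass (kg) / height (m)^2.
# Example inputs are hypothetical, not figures from the source.

def bmi(mass_kg: float, height_m: float) -> float:
    """Body mass index: mass in kilograms over height in metres squared."""
    return mass_kg / height_m ** 2

# A hypothetical 77.6 kg individual at 1.66 m tall:
print(round(bmi(77.6, 1.66), 1))  # 28.2 — within the 26.9–28.2 range cited
```

In modern humans a BMI of 25–30 is classed as overweight, which is why the text notes the figure correlates to being overweight despite reflecting muscle mass rather than fat.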
The neck vertebrae of Neanderthals are thicker from front to rear and transversely than those of (most) modern humans, providing stability, possibly to accommodate a different head shape and size. Although the Neanderthal thorax (where the ribcage is) was similar in size to that of modern humans, the longer and straighter ribs would have equated to a widened mid-lower thorax and stronger breathing in the lower thorax, which are indicative of a larger diaphragm and possibly greater lung capacity. The lung capacity of Kebara 2 was estimated to have been , compared to the average human capacity of for males and for females. The Neanderthal chest was also more pronounced (expanded front-to-back, or antero-posteriorly). The sacrum (where the pelvis connects to the spine) was more vertically inclined, and was placed lower in relation to the pelvis, causing the spine to be less curved (to exhibit less lordosis) and to fold in on itself somewhat (to be invaginated). In modern populations, this condition affects just a proportion of the population, and is known as a lumbarised sacrum. Such modifications to the spine would have enhanced side-to-side (mediolateral) flexion, better supporting the wider lower thorax. It is claimed by some that this feature would be normal for all Homo, even tropically adapted Homo ergaster or erectus, with the condition of a narrower thorax in most modern humans being a unique characteristic. Body proportions are usually cited as being "hyperarctic" adaptations to the cold, because they are similar to those of human populations which developed in cold climates—the Neanderthal build is most similar to that of Inuit and Siberian Yupiks among modern humans—and shorter limbs result in higher retention of body heat. Nonetheless, Neanderthals from more temperate climates—such as Iberia—still retained the "hyperarctic" physique.
In 2019, English anthropologist John Stewart and colleagues suggested that Neanderthals instead were adapted for sprinting, because of evidence of Neanderthals preferring warmer wooded areas over the colder mammoth steppe, and DNA analysis indicating a higher proportion of fast-twitch muscle fibres in Neanderthals than in modern humans. He explained their body proportions and greater muscle mass as adaptations to sprinting as opposed to the endurance-oriented modern human physique, as persistence hunting may only be effective in hot climates where the hunter can run prey to the point of heat exhaustion (hyperthermia). They had longer heel bones, reducing their ability for endurance running, and their shorter limbs would have reduced the moment arm at the limbs, allowing for greater net rotational force at the wrists and ankles and thus faster acceleration. In 1981, American palaeoanthropologist Erik Trinkaus made note of this alternative explanation, but considered it less likely. Face Neanderthals had less developed chins, sloping foreheads, and longer, broader, more projecting noses. The Neanderthal skull is typically more elongated, but also wider, and less globular than that of most modern humans, and features much more of an occipital bun, or "chignon", a protrusion on the back of the skull, although it is within the range of variation for modern humans who have it. It is caused by the cranial base and temporal bones being placed higher and more towards the front of the skull, and by a flatter skullcap. The Neanderthal face is characterised by subnasal as well as mid-facial prognathism: the zygomatic arches are positioned further rearward, and the maxillary and nasal bones further forward, relative to those of modern humans. Neanderthal eyeballs are larger than those of modern humans.
One study proposed that this was due to Neanderthals having enhanced visual abilities at the expense of neocortical and social development. However, this study was rejected by other researchers, who concluded that eyeball size does not offer any evidence for the cognitive abilities of Neanderthals or modern humans. The projecting Neanderthal nose and paranasal sinuses have generally been explained as having warmed air as it entered the lungs and retained moisture (the "nasal radiator" hypothesis); however, the Neanderthal nose was wide, unlike the generally narrowed nose of cold-adapted creatures, so it may instead have been a product of genetic drift. Also, the sinuses, though reconstructed as wide, are not grossly large, being comparable in size to those of modern humans. However, if sinus size is not an important factor for breathing cold air, then the actual function of the projecting nose would be unclear, so it may not be a good indicator of the evolutionary pressures that shaped it. Further, a computer reconstruction of the Neanderthal nose and predicted soft tissue patterns shows some similarities to those of modern Arctic peoples, potentially meaning the noses of both populations convergently evolved for breathing cold, dry air. Neanderthals featured a rather large jaw, which was once cited as a response to a large bite force evidenced by heavy wearing of Neanderthal front teeth (the "anterior dental loading" hypothesis), but similar wearing trends are seen in contemporary humans. It could also have evolved to fit larger teeth in the jaw, which would better resist wear and abrasion, and the increased wear on the front teeth compared to the back teeth probably stems from repetitive use. Neanderthal dental wear patterns are most similar to those of modern Inuit. The incisors are large and shovel-shaped, and, compared to modern humans, there was an unusually high frequency of taurodontism, a condition in which the molars are bulkier due to an enlarged pulp (tooth core).
Taurodontism was once thought to have been a distinguishing characteristic of Neanderthals which lent some mechanical advantage or stemmed from repetitive use, but it was more likely simply a product of genetic drift. The bite force of Neanderthals and modern humans is now thought to be about the same, about and in modern human males and females, respectively. Brain The Neanderthal braincase averages for males and for females, which is significantly larger than the averages for all groups of extant humans; for example, modern European males average and females . For 28 modern human specimens from 190,000 to 25,000 years ago, the average was about disregarding sex, and modern human brain size is suggested to have decreased since the Upper Palaeolithic. The largest Neanderthal brain, Amud 1, was calculated to be , one of the largest ever recorded in hominids. Both Neanderthal and human infants measure about . When viewed from the rear, the Neanderthal braincase has a lower, wider, rounder appearance than that of anatomically modern humans. This characteristic shape is referred to as "en bombe" (bomb-like), and is unique to Neanderthals, with all other hominid species (including most modern humans) generally having narrow and relatively upright cranial vaults when viewed from behind. The Neanderthal brain would have been characterised by relatively smaller parietal lobes and a larger cerebellum. Neanderthal brains also have larger occipital lobes (relating to the classic occurrence of an occipital bun in Neanderthal skull anatomy, as well as the greater width of their skulls), which implies internal differences in the proportionality of brain-internal regions relative to Homo sapiens, consistent with external measurements obtained from fossil skulls.
Their brains also have larger temporal lobe poles, wider orbitofrontal cortex, and larger olfactory bulbs, suggesting potential differences in language comprehension and associations with emotions (temporal functions), decision making (the orbitofrontal cortex) and sense of smell (olfactory bulbs). Their brains also show different rates of brain growth and development. Such differences, while slight, would have been visible to natural selection and may underlie and explain differences in the material record in things like social behaviours, technological innovation and artistic output. Hair and skin colour The lack of sunlight might have led to the proliferation of lighter skin in Neanderthals; however, light skin in modern Europeans was not particularly prolific until perhaps the Bronze Age. Genetically, BNC2 was present in Neanderthals, which is associated with light skin colour; however, a second variation of BNC2 was also present, which in modern populations is associated with darker skin colour in the UK Biobank. DNA analysis of three Neanderthal females from southeastern Europe indicates that they had brown eyes, dark skin colour and brown hair, with one having red hair. In modern humans, skin and hair colour is regulated by the melanocyte-stimulating hormone—which increases the proportion of eumelanin (black pigment) to phaeomelanin (red pigment)—which is encoded by the MC1R gene. There are five known variants in modern humans of the gene which cause loss-of-function and are associated with light skin and hair colour, and another unknown variant in Neanderthals (the R307G variant) which could be associated with pale skin and red hair. The R307G variant was identified in a Neanderthal from Monti Lessini, Italy, and possibly Cueva del Sidrón, Spain. However, as in modern humans, red was probably not a very common hair colour because the variant is not present in many other sequenced Neanderthals. 
Metabolism Maximum natural lifespan and the timing of adulthood, menopause and gestation were most likely very similar to those of modern humans. However, it has been hypothesised, based on the growth rates of teeth and tooth enamel, that Neanderthals matured faster than modern humans, although this is not backed up by age biomarkers. The main difference in maturation is that the atlas bone in the neck as well as the middle thoracic vertebrae fused about 2 years later in Neanderthals than in modern humans, but this was more likely caused by a difference in anatomy rather than in growth rate. Generally, models of Neanderthal caloric requirements report significantly higher intakes than those of modern humans because they typically assume Neanderthals had higher basal metabolic rates (BMRs) due to higher muscle mass, faster growth rate and greater body heat production against the cold, and higher daily physical activity levels (PALs) due to greater daily travelling distances while foraging. However, using a high BMR and PAL, American archaeologist Bryan Hockett estimated that a pregnant Neanderthal would have consumed 5,500 calories per day, which would have necessitated a heavy reliance on big game meat; such a diet would have caused numerous deficiencies or nutrient poisonings, so he concluded that these are poorly warranted assumptions to make. Neanderthals may have been more active during dimmer light conditions rather than broad daylight because they lived in regions with reduced daytime hours in the winter, hunted large game (such predators typically hunt at night to enhance ambush tactics), and had large eyes and visual processing neural centres. Genetically, colour blindness (which may enhance mesopic vision) is typically correlated with northern-latitude populations, and the Neanderthals from Vindija Cave, Croatia, had some substitutions in the opsin genes which could have influenced colour vision.
However, the functional implications of these substitutions are inconclusive. Neanderthal-derived alleles near ASB1 and EXOC6 are associated with being an evening person, narcolepsy and day-time napping. Pathology Neanderthals suffered a high rate of traumatic injury, with an estimated 79–94% of specimens showing evidence of healed major trauma, of which 37–52% were severely injured, and 13–19% injured before reaching adulthood. One extreme example is Shanidar 1, who shows signs of an amputation of the right arm, likely due to a nonunion after breaking a bone in adolescence, osteomyelitis (a bone infection) on the left clavicle, an abnormal gait, vision problems in the left eye, and possible hearing loss (perhaps swimmer's ear). In 1995, Trinkaus estimated that about 80% succumbed to their injuries and died before reaching 40, and thus theorised that Neanderthals employed a risky hunting strategy (the "rodeo rider" hypothesis). However, rates of cranial trauma are not significantly different between Neanderthals and Middle Palaeolithic modern humans (although Neanderthals seem to have had a higher mortality risk), there are few specimens of either Upper Palaeolithic modern humans or Neanderthals who died after the age of 40, and there are overall similar injury patterns between them. In 2012, Trinkaus concluded that Neanderthals instead injured themselves in the same ways as contemporary humans, such as by interpersonal violence. A 2016 study looking at 124 Neanderthal specimens argued that high trauma rates were instead caused by animal attacks, and found that about 36% of the sample were victims of bear attacks, 21% of big cat attacks, and 17% of wolf attacks (totalling 92 positive cases, 74%). There were no cases of hyena attacks, although hyenas nonetheless probably attacked Neanderthals, at least opportunistically.
Such intense predation probably stemmed from common confrontations due to competition over food and cave space, and from Neanderthals hunting these carnivores. Low population size caused low genetic diversity and probably inbreeding, which reduced the population's ability to filter out harmful mutations (inbreeding depression). However, it is unknown how this affected a single Neanderthal's genetic burden and, thus, whether this caused a higher rate of birth defects than in modern humans. It is known, however, that the 13 inhabitants of Sidrón Cave collectively exhibited 17 different birth defects, likely due to inbreeding or recessive disorders. Likely due to advanced age (60s or 70s), La Chapelle-aux-Saints 1 had signs of Baastrup's disease, affecting the spine, and osteoarthritis. Shanidar 1, who likely died at about 30 or 40, was diagnosed with the most ancient case of diffuse idiopathic skeletal hyperostosis (DISH), a degenerative disease which can restrict movement; if correct, this would indicate a moderately high incidence rate for older Neanderthals. Neanderthals were subject to several infectious diseases and parasites. Modern humans likely transmitted diseases to them; one possible candidate is the stomach bacterium Helicobacter pylori. The modern human papillomavirus variant 16A may descend from Neanderthal introgression. A Neanderthal at Cueva del Sidrón, Spain, shows evidence of a gastrointestinal Enterocytozoon bieneusi infection. The leg bones of the French La Ferrassie 1 feature lesions consistent with periostitis—inflammation of the tissue enveloping the bone—likely a result of hypertrophic osteoarthropathy, which is primarily caused by a chest infection or lung cancer. Neanderthals had a lower cavity rate than modern humans, despite some populations consuming typically cavity-causing foods in great quantity, which could indicate a lack of cavity-causing oral bacteria, namely Streptococcus mutans.
Two 250,000-year-old Neanderthaloid children from Payré, France, present the earliest known cases of lead exposure of any hominin. They were exposed on two distinct occasions, either by eating or drinking contaminated food or water, or by inhaling lead-laced smoke from a fire. There are two lead mines near the site.

Culture

Social structure

Group dynamics

Neanderthals likely lived in more sparsely distributed groups than contemporary modern humans, but group size is thought to have averaged 10 to 30 individuals, similar to modern hunter-gatherers. Reliable evidence of Neanderthal group composition comes from Cueva del Sidrón, Spain, and the footprints at Le Rozel, France: the former shows 7 adults, 3 adolescents, 2 juveniles and an infant; whereas the latter, based on footprint size, shows a group of 10 to 13 members in which juveniles and adolescents made up 90%. A Neanderthal child's teeth analysed in 2018 showed it was weaned after 2.5 years, similar to modern hunter-gatherers, and was born in the spring, which is consistent with modern humans and other mammals whose birth cycles coincide with environmental cycles. Based on various ailments resulting from high stress at a low age, such as stunted growth, British archaeologist Paul Pettitt hypothesised that children of both sexes were put to work directly after weaning; and Trinkaus said that, upon reaching adolescence, an individual may have been expected to join in hunting large and dangerous game. However, the bone trauma is comparable to that of modern Inuit, which could suggest a similar childhood between Neanderthals and contemporary modern humans. Further, such stunting may have also resulted from harsh winters and bouts of low food resources. Sites showing evidence of no more than three individuals may have represented nuclear families or temporary camping sites for special task groups (such as a hunting party).
Bands likely moved between certain caves depending on the season, indicated by remains of seasonal materials such as certain foods, and returned to the same locations generation after generation. Some sites may have been used for over 100 years. Cave bears may have competed heavily with Neanderthals for cave space, and cave bear populations declined from 50,000 years ago onwards (although their extinction occurred well after Neanderthals had died out). Neanderthals also had a preference for caves whose openings faced towards the south. Although Neanderthals are generally considered to have been cave dwellers, with 'home base' being a cave, open-air settlements near contemporaneously inhabited cave systems in the Levant could indicate mobility between cave and open-air bases in this area. Evidence for long-term open-air settlements is known from the 'Ein Qashish site in Israel, and Moldova I in Ukraine. Although Neanderthals appear to have had the ability to inhabit a range of environments—including plains and plateaux—open-air Neanderthal sites are generally interpreted as having been used as slaughtering and butchering grounds rather than living spaces. In 2022, remains of the first-known Neanderthal family (six adults and five children) were excavated from Chagyrskaya Cave in the Altai Mountains of southern Siberia in Russia. The family, which included a father, a daughter, and what appear to be cousins, most likely died together, presumably from starvation. According to a study, the Neanderthals and the early anatomically modern human Qafzeh 9 had lower digit ratios than most contemporary human populations, indicating increased androgenization and possibly a higher incidence of polygyny, although the study acknowledged that this conclusion is speculative owing to small sample sizes.
Inter-group relations

Canadian ethnoarchaeologist Brian Hayden calculated that a self-sustaining population which avoids inbreeding would consist of about 450–500 individuals, which would have required these bands to interact with 8–53 other bands, more likely toward the larger estimate given the low population density. Analysis of the mtDNA of the Neanderthals of Cueva del Sidrón, Spain, showed that the three adult men belonged to the same maternal lineage, while the three adult women belonged to different ones. This suggests a patrilocal residence (that a woman moved out of her group to live with her partner). However, the DNA of a Neanderthal from Denisova Cave, Russia, shows that she had an inbreeding coefficient of 1/8 (her parents were either half-siblings with a common mother, double first cousins, an uncle and niece or aunt and nephew, or a grandfather and granddaughter or grandmother and grandson), and the inhabitants of Cueva del Sidrón show several defects, which may have been caused by inbreeding or recessive disorders. Considering most Neanderthal artefacts were sourced from close to the main settlement, Hayden considered it unlikely these bands interacted very often, and mapping of the Neanderthal brain and their small group size and population density could indicate that they had a reduced ability for inter-group interaction and trade. However, a few Neanderthal artefacts in a settlement could have originated 20, 30, 100 and 300 km (12.5, 18.5, 60 and 185 mi) away. Based on this, Hayden also speculated that macro-bands formed which functioned much like those of the low-density hunter-gatherer societies of the Western Desert of Australia. Such macro-bands collectively encompass a large territory, with each band claiming its own range, maintaining strong alliances for mating networks or to cope with leaner times and enemies.
Similarly, British anthropologist Eiluned Pearce and Cypriot archaeologist Theodora Moutsiou speculated that Neanderthals were possibly capable of forming geographically expansive ethnolinguistic tribes encompassing upwards of 800 people, based on the long-distance transport of obsidian from its source compared to trends seen in obsidian transfer distance and tribe size in modern hunter-gatherers. However, according to their model, Neanderthals would not have been as efficient at maintaining long-distance networks as modern humans, probably due to a significantly lower population. Hayden noted an apparent cemetery of six or seven individuals at La Ferrassie, France, which, in modern humans, is typically used as evidence of a corporate group which maintained a distinct social identity and controlled some resource, trading, manufacturing and so on. La Ferrassie is also located in one of the richest animal-migration routes of Pleistocene Europe. Genetic analysis indicates there were at least three distinct geographical groups—Western Europe, the Mediterranean coast, and east of the Caucasus—with some migration among these regions. Post-Eemian Western European Mousterian lithics can also be broadly grouped into three distinct macro-regions: Acheulean-tradition Mousterian in the southwest, Micoquien in the northeast, and Mousterian with bifacial tools (MBT) in between the former two. MBT may actually represent the interactions and fusion of the two different cultures. Southern Neanderthals exhibit regional anatomical differences from northern counterparts: a less protrusive jaw, a shorter gap behind the molars, and a vertically higher jawbone. Altogether, this suggests that Neanderthal communities regularly interacted with neighbouring communities within a region, but less often beyond. Nonetheless, over long periods of time, there is evidence of large-scale cross-continental migration.
Early specimens from Mezmaiskaya Cave in the Caucasus and Denisova Cave in the Siberian Altai Mountains differ genetically from those found in Western Europe, whereas later specimens from these caves have genetic profiles more similar to those of Western European Neanderthals than to the earlier specimens from the same locations, suggesting long-range migration and population replacement over time. Similarly, artefacts and DNA from Chagyrskaya and Okladnikov Caves, also in the Altai Mountains, resemble those of eastern European Neanderthal sites a considerable distance away more than they do artefacts and DNA of the older Neanderthals from Denisova Cave, suggesting two distinct migration events into Siberia. Neanderthals seem to have suffered a major population decline during MIS 4 (71–57,000 years ago), and the distribution of the Micoquian tradition could indicate that Central Europe and the Caucasus were repopulated by communities from a refuge zone either in eastern France or Hungary (the fringes of the Micoquian tradition) who dispersed along the rivers Prut and Dniester. There is also evidence of inter-group conflict: a skeleton from La Roche à Pierrot, France, shows a healed fracture on top of the skull apparently caused by a deep blade wound, and another from Shanidar Cave, Iraq, has a rib lesion characteristic of projectile weapon injuries.

Social hierarchy

It is sometimes suggested that, since they were hunters of challenging big game and lived in small groups, there was no sexual division of labour as seen in modern hunter-gatherer societies. That is, men, women and children all had to be involved in hunting, rather than men hunting while women and children foraged. However, among modern hunter-gatherers, the higher the meat dependency, the higher the division of labour.
Further, tooth-wear patterns in Neanderthal men and women suggest they commonly used their teeth for carrying items, but men exhibit more wear on the upper teeth, and women on the lower, suggesting some cultural differences in tasks. It is controversially proposed that some Neanderthals wore decorative clothing or jewellery—such as a leopard skin or raptor feathers—to display elevated status in the group. Hayden postulated that the small number of Neanderthal graves found was because only high-ranking members would receive an elaborate burial, as is the case for some modern hunter-gatherers. Trinkaus suggested that elderly Neanderthals were given special burial rites for lasting so long given the high mortality rates. Alternatively, many more Neanderthals may have received burials, but the graves were infiltrated and destroyed by bears. Given that 20 graves of Neanderthals aged under 4 have been found—over a third of all known graves—deceased children may have received greater care during burial than other age demographics. Looking at Neanderthal skeletons recovered from several natural rock shelters, Trinkaus said that, although Neanderthals were recorded as bearing several trauma-related injuries, none of them had significant trauma to the legs that would debilitate movement. He suggested that self-worth in Neanderthal culture derived from contributing food to the group; a debilitating injury would remove this self-worth and result in near-immediate death, and individuals who could not keep up with the group while moving from cave to cave were left behind. However, there are examples of individuals with highly debilitating injuries being nursed for several years, and caring for the most vulnerable within the community dates even further back to H. heidelbergensis. Especially given the high trauma rates, it is possible that such an altruistic strategy ensured their survival as a species for so long.
Food

Hunting and gathering

Neanderthals were once thought of as scavengers, but are now considered to have been apex predators. In 1980, it was hypothesised that two piles of mammoth skulls at La Cotte de St Brelade, Jersey, at the base of a gulley were evidence of mammoth drive hunting (causing them to stampede off a ledge), but this is contested. Living in a forested environment, Neanderthals were likely ambush hunters, getting close to and attacking their target—a prime adult—in a short burst of speed, thrusting a spear at close quarters. Younger or wounded animals may have been hunted using traps, projectiles, or pursuit. Some sites show evidence that Neanderthals slaughtered whole herds of animals in large, indiscriminate hunts and then carefully selected which carcasses to process. Nonetheless, they were able to adapt to a variety of habitats. They appear to have eaten predominantly what was abundant within their immediate surroundings, with steppe-dwelling communities (generally outside of the Mediterranean) subsisting almost entirely on meat from large game, forest-dwelling communities consuming a wide array of plants and smaller animals, and waterside communities gathering aquatic resources, although even in more southerly, temperate areas such as the southeastern Iberian Peninsula, large game still featured prominently in Neanderthal diets. Contemporary humans, in contrast, seem to have used more complex food extraction strategies and generally had a more diverse diet. Nonetheless, Neanderthals would still have needed a varied enough diet to prevent nutrient deficiencies and protein poisoning, especially in the winter when they presumably ate mostly lean meat.
Any food with high contents of other essential nutrients not provided by lean meat would have been a vital component of their diet, such as fat-rich brains, carbohydrate-rich and abundant underground storage organs (including roots and tubers), or, like modern Inuit, the stomach contents of herbivorous prey items. For meat, Neanderthals appear to have fed predominantly on hoofed mammals. They primarily consumed red deer and reindeer, as these two were the most abundant game; however, they also ate other Pleistocene megafauna such as chamois, ibex, wild boar, steppe wisent, aurochs, Irish elk, woolly mammoth, straight-tusked elephant, woolly rhinoceros, Merck's rhinoceros, the narrow-nosed rhinoceros, wild horse, and so on. There is evidence of directed cave and brown bear hunting both in and out of hibernation, as well as butchering. Analysis of Neanderthal bone collagen from Vindija Cave, Croatia, shows nearly all of their protein needs derived from animal meat. Some caves show evidence of regular rabbit and tortoise consumption. At Gibraltar sites, there are remains of 143 different bird species, many ground-dwelling such as the common quail, corn crake, woodlark, and crested lark. Scavenging birds such as corvids and eagles were commonly exploited. Neanderthals also exploited marine resources on the Iberian, Italian and Peloponnesian Peninsulas, where they waded or dived for shellfish, as early as 150,000 years ago at Cueva Bajondillo, Spain, similar to the fishing record of modern humans. At Vanguard Cave, Gibraltar, the inhabitants consumed Mediterranean monk seal, short-beaked common dolphin, common bottlenose dolphin, Atlantic bluefin tuna, sea bream and purple sea urchin; and at Gruta da Figueira Brava, Portugal, there is evidence of large-scale harvest of shellfish, crabs and fish.
Evidence of freshwater fishing was found in Grotte di Castelcivita, Italy, for trout, chub and eel; Abri du Maras, France, for chub and European perch; Payré, France; and Kudaro Cave, Russia, for Black Sea salmon. Edible plant and mushroom remains are recorded from several caves. Neanderthals from Cueva del Sidrón, Spain, based on dental tartar, likely had a meatless diet of mushrooms, pine nuts and moss, indicating they were forest foragers. Remnants from Amud Cave, Israel, indicate a diet of figs, palm tree fruits and various cereals and edible grasses. Several bone traumas in the leg joints could possibly suggest habitual squatting, which, if so, was likely done while gathering food. Dental tartar from Grotte de Spy, Belgium, indicates the inhabitants had a meat-heavy diet including woolly rhinoceros and mouflon sheep, while also regularly consuming mushrooms. Neanderthal faecal matter from El Salt, Spain, dated to 50,000 years ago—the oldest human faecal matter remains recorded—shows a diet mainly of meat but with a significant component of plants. Evidence of cooked plant foods—mainly legumes and, to a far lesser extent, acorns—was discovered at the Kebara Cave site in Israel, with its inhabitants possibly gathering plants in spring and fall and hunting in all seasons except fall, although the cave was probably abandoned in late summer to early fall. At Shanidar Cave, Iraq, Neanderthals collected plants with various harvest seasons, indicating they scheduled returns to the area to harvest certain plants, and that they had complex food-gathering behaviours for both meat and plants.

Food preparation

Neanderthals could probably employ a wide range of cooking techniques, such as roasting, and they may have been able to heat up or boil soup, stew, or animal stock. The abundance of animal bone fragments at settlements may indicate the making of fat stocks from boiling bone marrow, possibly taken from animals that had already died of starvation.
These methods would have substantially increased fat consumption, which was a major nutritional requirement of communities with low carbohydrate and high protein intake. Neanderthal tooth size had a decreasing trend after 100,000 years ago, which could indicate an increased dependence on cooking or the advent of boiling, a technique that would have softened food. At Cueva del Sidrón, Spain, Neanderthals likely cooked and possibly smoked food, as well as used certain plants—such as yarrow and camomile—as flavouring, although these plants may have instead been used for their medicinal properties. At Gorham's Cave, Gibraltar, Neanderthals may have been roasting pinecones to access pine nuts. At Grotte du Lazaret, France, a total of twenty-three red deer, six ibexes, three aurochs, and one roe deer appear to have been hunted in a single autumn hunting season, when strong male and female deer herds would group together for the rut. The entire carcasses seem to have been transported to the cave and then butchered. Because this is such a large amount of food to consume before spoilage, it is possible these Neanderthals were curing and preserving it before winter set in. At 160,000 years old, it is the oldest potential evidence of food storage. The great quantities of meat and fat which could have been gathered in general from typical prey items (namely mammoths) could also indicate food storage capability. With shellfish, Neanderthals needed to eat, cook, or in some manner preserve them soon after collection, as shellfish spoil very quickly. At Cueva de los Aviones, Spain, the remains of edible, algae-eating shellfish associated with the alga Jania rubens could indicate that, like some modern hunter-gatherer societies, harvested shellfish were held in water-soaked algae to keep them alive and fresh until consumption.

Competition

Competition from large Ice Age predators was rather high.
Cave lions likely targeted horses, large deer and wild cattle, and leopards primarily reindeer and roe deer—prey which heavily overlapped with the Neanderthal diet. To defend a kill against such ferocious predators, Neanderthals may have engaged in a group display of yelling, arm waving, or stone throwing; or quickly gathered meat and abandoned the kill. However, at Grotte de Spy, Belgium, the remains of wolves, cave lions and cave bears—which were all major predators of the time—indicate Neanderthals hunted their competitors to some extent. Neanderthals and cave hyenas may have exemplified niche differentiation, and actively avoided competing with each other. Although they both mainly targeted the same groups of creatures—deer, horses and cattle—Neanderthals mainly hunted the former and cave hyenas the latter two. Further, animal remains from Neanderthal caves indicate they preferred to hunt prime individuals, whereas cave hyenas hunted weaker or younger prey, and cave hyena caves have a higher abundance of carnivore remains. Nonetheless, there is evidence that cave hyenas stole food and leftovers from Neanderthal campsites and scavenged on dead Neanderthal bodies. Similarly, evidence from the site of Payre in southern France shows that Neanderthals exhibited resource partitioning with wolves.

Cannibalism

There are several instances of Neanderthals practising cannibalism across their range. The first example came from the Krapina site in Croatia, in 1899, and other examples were found at Cueva del Sidrón and Zafarraya in Spain, and at the French sites of Grotte de Moula-Guercy, Les Pradelles, and La Quina. For the five cannibalised Neanderthals at the Grottes de Goyet, Belgium, there is evidence that the upper limbs were disarticulated, the lower limbs defleshed and also smashed (likely to extract bone marrow), the chest cavity disembowelled, and the jaw dismembered. There is also evidence that the butchers used some bones to retouch their tools.
The processing of Neanderthal meat at Grottes de Goyet is similar to how they processed horse and reindeer. About 35% of the Neanderthals at Marillac-le-Franc, France, show clear signs of butchery, and the presence of digested teeth indicates that the bodies were abandoned and eaten by scavengers, likely hyenas. These cannibalistic tendencies have been explained as either ritual defleshing, pre-burial defleshing (to prevent scavenging or foul smell), acts of war, or simply food preparation. Because of the small number of cases, and the higher number of cut marks seen on cannibalised individuals than on animals (indicating inexperience), cannibalism was probably not a very common practice, and it may have only been done in times of extreme food shortages, as in some cases in recorded human history.

The arts

Personal adornment

Neanderthals used ochre, a clay earth pigment. Ochre is well documented from 60,000 to 45,000 years ago in Neanderthal sites, with the earliest example dating to 250–200,000 years ago from Maastricht-Belvédère, the Netherlands (a similar timespan to the ochre record of H. sapiens). It has been hypothesised to have functioned as body paint, and analyses of pigments from Pech de l'Azé, France, indicate they were applied to soft materials (such as a hide or human skin). However, modern hunter-gatherers, in addition to body paint, also use ochre for medicine, for tanning hides, as a food preservative, and as an insect repellent, so its use as decorative paint for Neanderthals is speculative. Containers apparently used for mixing ochre pigments were found in Peștera Cioarei, Romania, which could indicate modification of ochre for solely aesthetic purposes.
Neanderthals collected uniquely shaped objects and are suggested to have modified them into pendants, such as a fossil Aspa marginata sea snail shell, possibly painted red, transported to the site of Grotta di Fumane, Italy, from some distance away about 47,500 years ago; three shells, dated to about 120–115,000 years ago, perforated through the umbo, belonging to a rough cockle, a Glycymeris insubrica, and a Spondylus gaederopus, from Cueva de los Aviones, Spain, the former two associated with red and yellow pigments, and the latter with a red-to-black mix of hematite and pyrite; and a king scallop shell with traces of an orange mix of goethite and hematite from Cueva Antón, Spain. The discoverers of the latter two claim that pigment was applied to the exterior to make it match the naturally vibrant inside colouration. Excavated from 1949 to 1963 from the French Grotte du Renne, Châtelperronian beads made from animal teeth, shells and ivory were found associated with Neanderthal bones, but the dating is uncertain and Châtelperronian artefacts may actually have been crafted by modern humans and simply redeposited with Neanderthal remains. Gibraltarian palaeoanthropologists Clive and Geraldine Finlayson suggested that Neanderthals used various bird parts as artistic media, specifically black feathers. In 2012, the Finlaysons and colleagues examined 1,699 sites across Eurasia, and argued that raptors and corvids, species not typically consumed by any human species, were overrepresented and show processing of only the wing bones instead of the fleshier torso, and thus are evidence of feather plucking of specifically the large flight feathers for use as personal adornment. They specifically noted the cinereous vulture, red-billed chough, kestrel, lesser kestrel, alpine chough, rook, jackdaw and the white-tailed eagle in Middle Palaeolithic sites. Other birds claimed to present evidence of modifications by Neanderthals are the golden eagle, rock pigeon, common raven and the bearded vulture.
The earliest claim of bird bone jewellery is a number of 130,000-year-old white-tailed eagle talons found in a cache near Krapina, Croatia, speculated, in 2015, to have been a necklace. A similar 39,000-year-old Spanish imperial eagle talon necklace was reported in 2019 at Cova Foradà in Spain, though from the contentious Châtelperronian layer. In 2017, 17 incision-decorated raven bones from the Zaskalnaya VI rock shelter, Ukraine, dated to 43–38,000 years ago, were reported. Because the notches are more-or-less equidistant from each other, they are the first modified bird bones that cannot be explained by simple butchery, and for which the argument of design intent is based on direct evidence. Discovered in 1975, the so-called Mask of la Roche-Cotard, a mostly flat piece of flint with a bone pushed through a hole on the midsection—dated to 32, 40, or 75,000 years ago—has been purported to resemble the upper half of a face, with the bone representing eyes. It is contested whether it represents a face, or if it even counts as art. In 1988, American archaeologist Alexander Marshack speculated that a Neanderthal at Grotte de L'Hortus, France, wore a leopard pelt as personal adornment to indicate elevated status in the group, based on a recovered leopard skull, phalanges and tail vertebrae.

Abstraction

As of 2014, 63 purported engravings have been reported from 27 different European and Middle Eastern Lower-to-Middle Palaeolithic sites, of which 20 are on flint cortexes from 11 sites, 7 are on slabs from 7 sites, and 36 are on pebbles from 13 sites. It is debated whether or not these were made with symbolic intent. In 2012, deep scratches on the floor of Gorham's Cave, Gibraltar, were discovered, dated to older than 39,000 years ago, which the discoverers have interpreted as Neanderthal abstract art. The scratches could have also been produced by a bear.
In 2021, an Irish elk phalanx with five engraved offset chevrons stacked above each other was discovered at the entrance to the Einhornhöhle cave in Germany, dating to about 51,000 years ago. A flint flake at the Mousterian site of Kiik-Koba in Crimea, Ukraine, is decorated with an engraving which would have required skilled workmanship. In 2018, some red-painted dots, disks, lines and hand stencils on the cave walls of the Spanish La Pasiega, Maltravieso, and Doña Trinidad were dated to be older than 66,000 years ago, at least 20,000 years prior to the arrival of modern humans in Western Europe. This would indicate Neanderthal authorship, and similar iconography recorded in other Western European sites—such as Les Merveilles, France, and Cueva del Castillo, Spain—could potentially also have Neanderthal origins. However, the dating of these Spanish caves, and thus attribution to Neanderthals, is contested. Neanderthals are known to have collected a variety of unusual objects—such as crystals or fossils—without any real functional purpose or any indication of damage caused by use. It is unclear if these objects were simply picked up for their aesthetic qualities, or if some symbolic significance was applied to them. These items are mainly quartz crystals, but also other minerals such as cerussite, iron pyrite, calcite and galena. A few findings feature modifications, such as a mammoth tooth with an incision and a fossil nummulite shell with a cross etched in from Tata, Hungary; a large slab with 18 cupstones hollowed out from a grave in La Ferrassie, France; and a geode from Peștera Cioarei, Romania, coated with red ochre. 
A number of fossil shells are also known from French Neanderthal sites, such as a rhynchonellid and a Terebratulina from Combe Grenal; a belemnite beak from Grottes des Canalettes; a polyp from Grotte de l'Hyène; a sea urchin from La Gonterie-Boulouneix; and a rhynchonella, feather star and belemnite beak from the contentious Châtelperronian layer of Grotte du Renne.

Music

Purported Neanderthal bone flute fragments made of bear long bones were reported from Potočka zijalka, Slovenia, in the 1920s, and Istállós-kői-barlang, Hungary, and Mokriška jama, Slovenia, in 1985; but these are now attributed to modern human activities. The 43,000-year-old Divje Babe flute from Slovenia, found in 1995, has been attributed by some researchers to Neanderthals, though its status as a flute is heavily disputed. Many researchers consider it most likely the product of a carnivorous animal chewing the bone, but its discoverer Ivan Turk and other researchers have maintained that it was manufactured by Neanderthals as a musical instrument.

Technology

Despite the apparent 150,000-year stagnation in Neanderthal lithic innovation, there is evidence that Neanderthal technology was more sophisticated than was previously thought. However, the high frequency of potentially debilitating injuries could have prevented very complex technologies from emerging, as a major injury would have impeded an expert's ability to effectively teach a novice.

Stone tools

Neanderthals made stone tools, and are associated with the Mousterian industry. The Mousterian is also associated with North African H. sapiens as early as 315,000 years ago and was found in Northern China about 47–37,000 years ago in caves such as Jinsitai or Tongtiandong. It evolved around 300,000 years ago with the Levallois technique, which developed directly from the preceding Acheulean industry (invented by H. erectus about 1.8 mya).
Levallois made it easier to control flake shape and size, and as a difficult-to-learn and unintuitive process, the Levallois technique may have been directly taught generation to generation rather than via purely observational learning. There are distinct regional variants of the Mousterian industry, such as: the Quina and La Ferrassie subtypes of the Charentian industry in southwestern France, Acheulean-tradition Mousterian subtypes A and B along the Atlantic and northwestern European coasts, the Micoquien industry of Central and Eastern Europe and the related Sibiryachikha variant in the Siberian Altai Mountains, the Denticulate Mousterian industry in Western Europe, the racloir industry around the Zagros Mountains, and the flake cleaver industry of Cantabria, Spain, and both sides of the Pyrenees. In the mid-20th century, French archaeologist François Bordes debated against American archaeologist Lewis Binford to explain this diversity (the "Bordes–Binford debate"), with Bordes arguing that these represent unique ethnic traditions and Binford that they were caused by varying environments (essentially, form vs. function). The latter sentiment would indicate a lower degree of inventiveness compared to modern humans, adapting the same tools to different environments rather than creating new technologies. A continuous sequence of occupation is well documented in Grotte du Renne, France, where the lithic tradition can be divided into the Levallois–Charentian, Discoid–Denticulate (43,300 ±929 – 40,900 ±719 years ago), Levallois Mousterian (40,200 ±1,500 – 38,400 ±1,300 years ago) and Châtelperronian (40,930 ±393 – 33,670 ±450 years ago). There is some debate whether Neanderthals had long-range weapons. A wound on the neck of an African wild ass from Umm el Tlel, Syria, was likely inflicted by a heavy Levallois-point javelin, and bone trauma consistent with habitual throwing has been reported in Neanderthals.
Some spear tips from Abri du Maras, France, may have been too fragile to have been used as thrusting spears, possibly suggesting their use as darts. Organic tools The Châtelperronian in central France and northern Spain is a distinct industry from the Mousterian, and is controversially hypothesised to represent Neanderthals borrowing (or acquiring through acculturation) tool-making techniques from immigrating modern humans, crafting bone tools and ornaments. In this framing, the makers would have been a transitional culture between the Neanderthal Mousterian and the modern human Aurignacian. The opposing viewpoint is that the Châtelperronian was manufactured by modern humans instead. Abrupt transitions similar to the Mousterian/Châtelperronian could also simply represent natural innovation, like the La Quina–Neronian transition 50,000 years ago, which featured technologies generally associated with modern humans such as bladelets and microliths. Other ambiguous transitional cultures include the Italian Uluzzian industry and the Balkan Szeletian industry. Before the arrival of modern humans, the only evidence of Neanderthal bone tools is animal rib lissoirs—which are rubbed against hide to make it more supple or waterproof—although this could also be evidence of modern humans immigrating earlier than expected. In 2013, two 51,400- to 41,100-year-old deer rib lissoirs were reported from Pech-de-l'Azé and the nearby Abri Peyrony in France. In 2020, five more lissoirs made of aurochs or bison ribs were reported from Abri Peyrony, one dating to about 51,400 years ago and the other four to 47,700–41,100 years ago. This indicates the technology was in use in the region for a long time. Since reindeer remains were the most abundant at these sites, the use of rarer bovine ribs may indicate a deliberate preference for them. Potential lissoirs have also been reported from Grosse Grotte, Germany (made of mammoth bone), and Grottes des Canalettes, France (red deer).
The Neanderthals at 10 coastal sites in Italy (including Grotta del Cavallo and Grotta dei Moscerini) and at Kalamakia Cave, Greece, are known to have crafted scrapers from smooth clam shells, possibly hafting them to wooden handles. They probably chose this clam species because it has the most durable shell. At Grotta dei Moscerini, about 24% of the shells were gathered alive from the seafloor, meaning these Neanderthals had to wade or dive into shallow waters to collect them. At Grotta di Santa Lucia, Italy, in the Campanian volcanic arc, Neanderthals collected porous volcanic pumice, which, by analogy with contemporary modern human usage, was probably used for polishing points and needles. The pumices are associated with shell tools. At Abri du Maras, France, twisted fibres and a 3-ply inner-bark-fibre cord fragment associated with Neanderthals show that they produced string and cordage, but it is unclear how widespread this technology was, because the materials used to make it (such as animal hair, hide, sinew, or plant fibres) are biodegradable and preserve very poorly. This technology could indicate at least a basic knowledge of weaving and knotting, which would have made possible the production of nets, containers, packaging, baskets, carrying devices, ties, straps, harnesses, clothes, shoes, beds, bedding, mats, flooring, roofing, walls and snares, and would have been important in hafting, fishing and seafaring. Dating to 52–41,000 years ago, the cord fragment is the oldest direct evidence of fibre technology, although 115,000-year-old perforated shell beads from Cueva Antón, possibly strung together to make a necklace, are the oldest indirect evidence. In 2020, British archaeologist Rebecca Wragg Sykes expressed cautious support for the genuineness of the find, but pointed out that the string would have been so weak that it would have had limited functions. One possibility is as a thread for attaching or stringing small objects.
The archaeological record shows that Neanderthals commonly used animal hide and birch bark, and may have used them to make cooking containers. However, this is based largely on circumstantial evidence, as neither fossilises well. It is possible that the Neanderthals at Kebara Cave in Israel used the shells of the spur-thighed tortoise as containers. At the Italian Poggetti Vecchi site, there is evidence that they used fire to process boxwood branches to make digging sticks, a common implement in hunter-gatherer societies. The Schöningen spears, found in Germany and dating to around 300,000 years ago, are a collection of wooden spears probably made by early Neanderthals. They were likely both thrown and used as handheld thrusting spears. The tools were made specifically of spruce (or possibly larch in some specimens) and pine despite these woods' scarcity in the local environment, suggesting that they had been deliberately selected for their material properties. The spears had been deliberately debarked, and their ends then sharpened by cutting and scraping. Other tools of split wood were also found at the site, some rounded and some pointed, which may have served domestic tasks: the pointed types as awls (used to make holes) and the rounded types as hide smoothers. The wooden artefacts show evidence of being repurposed and reshaped. Fire and construction Many Mousterian sites have evidence of fire, some maintained for extended periods of time, though it is unclear whether the Neanderthals were capable of starting fire or simply scavenged it from naturally occurring wildfires. Indirect evidence of fire-starting ability includes pyrite residue on a couple of dozen bifaces from the late Mousterian (c. 50,000 years ago) of northwestern France (which could indicate they were used as percussion fire starters), and the collection of manganese dioxide by late Neanderthals, which can lower the combustion temperature of wood.
They were also capable of zoning areas for specific activities, such as knapping, butchering, hearths and wood storage. Many Neanderthal sites lack evidence of such zoning, perhaps due to natural degradation of the area over tens of thousands of years, such as by bears moving in after abandonment of the settlement. In a number of caves, evidence of hearths has been detected. Neanderthals likely considered air circulation when placing hearths, as a lack of proper ventilation for even a single hearth can render a cave uninhabitable within minutes. The Abric Romaní rock shelter, Spain, preserves eight evenly spaced hearths lined up against the rock wall, likely used to stay warm while sleeping, with one person sleeping on either side of the fire. At Cueva de Bolomor, Spain, where hearths were lined up against the wall, the smoke flowed upwards along the ceiling and out of the cave. In Grotte du Lazaret, France, smoke was probably naturally ventilated during the winter, as the interior cave temperature was greater than the outside temperature; likewise, the cave was likely inhabited only in the winter. In 1990, two 176,000-year-old ring structures, several metres wide and made of broken stalagmite pieces, were discovered in a large chamber more than from the entrance within Grotte de Bruniquel, France. One ring was with stalagmite pieces averaging in length, and the other with pieces averaging . There were also four other piles of stalagmite pieces, for a total of or worth of stalagmite pieces. Evidence of the use of fire and burnt bones also suggests human activity. A team of Neanderthals would likely have been necessary to construct the structures, but the chamber's actual purpose is uncertain. Building complex structures so deep in a cave is unprecedented in the archaeological record, and indicates sophisticated lighting and construction technology, and great familiarity with subterranean environments.
The 44,000-year-old Molodova I open-air site, Ukraine, shows evidence of a ring-shaped dwelling made out of mammoth bones, meant for long-term habitation by several Neanderthals, which would have taken a long time to build. It appears to have contained hearths, cooking areas and a flint workshop, and there are traces of woodworking. Upper Palaeolithic modern humans on the Russian plains are thought to have also made housing structures out of mammoth bones. Birch tar Neanderthals produced the adhesive birch bark tar, using the bark of birch trees, for hafting. It was long believed that birch bark tar required a complex recipe, and that its manufacture thus demonstrated complex cognitive skills and cultural transmission. However, a 2019 study showed it can be made simply by burning birch bark beside smooth vertical surfaces, such as a flat, inclined rock. Thus, tar making does not in itself require cultural processes. However, at Königsaue (Germany), Neanderthals did not make tar with such an aboveground method but rather employed a technically more demanding underground production method. This is among the best indicators that some of their techniques were transmitted culturally. Clothes Neanderthals were likely able to survive in a similar range of temperatures to modern humans while sleeping: about while naked in the open and windspeed , or while naked in an enclosed space. Since ambient temperatures were markedly lower than this—averaging, during the Last Interglacial, in July and in January and dropping to as low as on the coldest days—Danish physicist Bent Sørensen hypothesised that Neanderthals required tailored clothing capable of preventing airflow to the skin. Especially during extended periods of travelling (such as a hunting trip), tailored footwear completely enwrapping the feet may have been necessary.
Nonetheless, in contrast to the bone sewing-needles and stitching awls assumed to have been used by contemporary modern humans, the only known Neanderthal tools that could have been used to fashion clothes are hide scrapers, which could have made items similar to blankets or ponchos; there is no direct evidence that Neanderthals could produce fitted clothes. Indirect evidence of Neanderthal tailoring includes the ability to manufacture string, which could indicate weaving ability, and a naturally pointed horse metatarsal from Cueva de los Aviones, Spain, which was speculated, based on the presence of orange pigments, to have been used as an awl for perforating dyed hides. Whatever the case, Neanderthals would have needed to cover most of their body, and contemporary humans would have covered 80–90%. Since human/Neanderthal admixture is known to have occurred in the Middle East, and no modern body louse species descends from a Neanderthal counterpart (body lice inhabit only clothed individuals), it is possible that Neanderthals (and/or humans) in hotter climates did not wear clothes, or that Neanderthal lice were highly specialised. Seafaring Several journal articles by archaeologists have claimed to show evidence of Neanderthal seafaring. Remains of Middle Palaeolithic stone tools on Greek islands are said to indicate early seafaring by Neanderthals in the Ionian Sea, possibly starting as far back as 200–150,000 years ago. The oldest stone artefacts from Crete date to 130–107,000 years ago, those from Cephalonia to 125,000 years ago, and those from Zakynthos to 110–35,000 years ago. The makers of these artefacts likely employed simple reed boats and made one-day crossings back and forth. Other Mediterranean islands with such remains include Sardinia, Melos, Alonnisos and Naxos (although Naxos may have been connected to the mainland), and it is possible they crossed the Strait of Gibraltar.
If this interpretation is correct, Neanderthals' ability to engineer boats and navigate through open waters would speak to their advanced cognitive and technical skills. Specialists on the Neanderthals such as Rebecca Wragg Sykes are more sceptical and do not mention seafaring. Cyprian Broodbank finds most of the evidence unconvincing, but thinks that the evidence for Cephalonia is best and the Neanderthals may have made brief visits to small islands. There were no Neanderthals in Britain during the Last Interglacial, when conditions were very suitable for them but Britain was an island, whereas they were present earlier and later when conditions were harsher and low sea levels meant that Britain was connected to the Continent. Medicine Given their dangerous hunting and extensive skeletal evidence of healing, Neanderthals appear to have lived lives of frequent traumatic injury and recovery. Well-healed fractures on many bones indicate the setting of splints. Individuals with severe head and rib traumas (which would have caused massive blood loss) indicate they had some manner of dressing major wounds, such as bandages made from animal skin. By and large, they appear to have avoided severe infections, indicating good long-term treatment of such wounds. Their knowledge of medicinal plants was comparable to that of contemporary humans. An individual at Cueva del Sidrón, Spain, seems to have been medicating a dental abscess using poplar—which contains salicylic acid, the active ingredient in aspirin—and there were also traces of the antibiotic-producing Penicillium chrysogenum. They may also have used yarrow and camomile, and their bitter taste—which should act as a deterrent as it could indicate poison—means it was likely a deliberate act. At the Kebara Cave in Israel, plant remains which have historically been used for their medicinal properties were found, including the common grape vine, the pistachios of the Persian turpentine tree, ervil seeds and oak acorns. 
Language It is not known whether the Neanderthals had the capacity for advanced language, but some researchers have argued that a complex language—possibly using syntax—was probably necessary to survive in their harsh environment, with Neanderthals needing to communicate about topics such as locations, hunting and gathering, and tool-making techniques. The FOXP2 gene in modern humans is associated with speech and language development. FOXP2 was present in Neanderthals, but not the gene's modern human variant. Neurologically, Neanderthals had an expanded Broca's area, which governs the formulation of sentences and speech comprehension, but of a group of 48 genes believed to affect the neural substrate of language, 11 had different methylation patterns in Neanderthals and modern humans. This could indicate a stronger capacity for language in modern humans than in Neanderthals. In 1971, cognitive scientist Philip Lieberman attempted to reconstruct the Neanderthal vocal tract and concluded that it was similar to that of a newborn and incapable of producing a large range of speech sounds, due to the large size of the mouth and the small size of the pharyngeal cavity (according to his reconstruction), which would have left no need for a descended larynx to fit the entire tongue inside the mouth. He claimed that they were anatomically unable to produce the sounds /a/, /i/, /u/, /ɔ/, /g/ and /k/ and thus lacked the capacity for articulate speech, though they were still able to communicate at a level higher than non-human primates. However, the lack of a descended larynx does not necessarily equate to a reduced vowel capacity. The 1983 discovery of a Neanderthal hyoid bone—used in speech production in humans—in Kebara 2, almost identical to that of humans, suggests Neanderthals were capable of speech. Also, the ancestral Sima de los Huesos hominins had humanlike hyoid and ear bones, which could suggest an early evolution of the modern human vocal apparatus.
However, the hyoid does not definitively provide insight into vocal tract anatomy. Subsequent studies have reconstructed the Neanderthal vocal apparatus as comparable to that of modern humans, with a similar vocal repertoire. In 2015, Lieberman hypothesised that Neanderthals were capable of syntactical language, although nonetheless incapable of mastering any human dialect. It is debated whether behavioural modernity is a recent and uniquely modern human innovation, or whether Neanderthals also possessed it. Religion Funerals Although Neanderthals did bury their dead, at least occasionally—which may explain the abundance of fossil remains—the behaviour is not indicative of a religious belief in life after death, because it could also have had non-symbolic motivations, such as great emotion or the prevention of scavenging. Estimates of the number of known Neanderthal burials range from thirty-six to sixty. The oldest confirmed burials do not seem to occur before approximately 70,000 years ago. The small number of recorded Neanderthal burials implies that the activity was not particularly common. Inhumation in Neanderthal culture largely took the form of simple, shallow graves and pits. Sites such as La Ferrassie in France or Shanidar in Iraq may imply the existence of mortuary centres or cemeteries in Neanderthal culture, given the number of individuals found buried at them. The debate on Neanderthal funerals has been active since the 1908 discovery of La Chapelle-aux-Saints 1 in a small, artificial hole in a cave in southwestern France, very controversially postulated to have been buried in a symbolic fashion. Another grave at Shanidar Cave, Iraq, was associated with the pollen of several flowers that may have been in bloom at the time of deposition—yarrow, centaury, ragwort, grape hyacinth, joint pine and hollyhock.
The medicinal properties of the plants led American archaeologist Ralph Solecki to claim that the man buried was a leader, healer or shaman, and that "The association of flowers with Neanderthals adds a whole new dimension to our knowledge of his humanness, indicating that he had 'soul' ". However, it is also possible the pollen was deposited by a small rodent after the man's death. The graves of children and infants, especially, are associated with grave goods such as artefacts and bones. The grave of a newborn from La Ferrassie, France, was found with three flint scrapers, and an infant from a cave in Syria was found with a triangular flint placed on its chest. A 10-month-old from Amud Cave, Israel, was associated with a red deer mandible, likely purposefully placed there given that other animal remains at the site are now reduced to fragments. Teshik-Tash 1 from Uzbekistan was associated with a circle of ibex horns, and a limestone slab argued to have supported the head. Nonetheless, whether these constitute evidence of symbolic meaning is contentious, as the significance and worth of the grave goods are unclear. Cults It was once argued that the bones of the cave bear, particularly the skull, in some European caves were arranged in a specific order, indicating an ancient bear cult that killed bears and then ceremoniously arranged the bones. This would be consistent with bear-related rituals of modern human Arctic hunter-gatherers. However, the alleged peculiarity of the arrangement could also be sufficiently explained by natural causes, and bias could be introduced because the existence of a bear cult would conform with the idea that totemism was the earliest religion, leading to undue extrapolation of the evidence. It was also once thought that Neanderthals ritually hunted, killed and cannibalised other Neanderthals and used the skull as the focus of some ceremony.
In 1962, Italian palaeontologist Alberto Blanc argued that a skull from Grotta Guattari, Italy, showed evidence of a swift blow to the head—indicative of ritual murder—and a precise and deliberate incision at the base to access the brain. He compared it to the victims of headhunters in Malaysia and Borneo, putting it forward as evidence of a skull cult. However, the damage is now thought to have been the result of cave hyaena scavenging. Although Neanderthals are known to have practised cannibalism, there is insufficient evidence to suggest ritual defleshing. In 2019, Gibraltarian palaeoanthropologists Stewart, Geraldine and Clive Finlayson and Spanish archaeologist Francisco Guzmán speculated that the golden eagle had iconic value to Neanderthals, as it does in some modern human societies, because they reported that golden eagle bones had a conspicuously high rate of modification compared to the bones of other birds. They then proposed a "Cult of the Sun Bird" in which the golden eagle was a symbol of power. Use wear, and even remnants of string, on raptor talons from Krapina, Croatia, suggest that they were worn as personal ornaments. Interbreeding Interbreeding with modern humans The first Neanderthal genome sequence was published in 2010, and strongly indicated interbreeding between Neanderthals and early modern humans. The genomes of all studied modern populations contain Neanderthal DNA. Various estimates exist for the proportion, such as 1–4% or 3.4–7.9% in modern Eurasians, or 1.8–2.4% in modern Europeans and 2.3–2.6% in modern East Asians. Pre-agricultural Europeans appear to have had percentages similar to, or slightly higher than, those of modern East Asians, and the numbers may have decreased in the former due to dilution with a group of people which had split off before Neanderthal introgression.
Typically, studies have reported finding no significant levels of Neanderthal DNA in Sub-Saharan Africans, but a 2020 study detected 0.3–0.5% in the genomes of five African sample populations, likely the result of Eurasians back-migrating and interbreeding with Africans, as well as human-to-Neanderthal gene flow from dispersals of Homo sapiens preceding the larger Out-of-Africa migration, and also showed more equal Neanderthal DNA percentages for European and Asian populations. Such low percentages of Neanderthal DNA in all present-day populations indicate infrequent past interbreeding, unless interbreeding was more common with a different population of modern humans which did not contribute to the present-day gene pool. Of the inherited Neanderthal genome, 25% in modern Europeans and 32% in modern East Asians may be related to viral immunity. In all, approximately 20% of the Neanderthal genome appears to have survived in the modern human gene pool. However, due to their small population and the resulting reduced effectiveness of natural selection, Neanderthals accumulated several weakly harmful mutations, which were introduced to and slowly selected out of the much larger modern human population; the initial hybridised population may have experienced up to a 94% reduction in fitness compared to contemporary humans. By this measure, Neanderthals may have substantially increased in fitness. A 2017 study focusing on archaic genes in Turkey found associations with coeliac disease, malaria severity and Costello syndrome. Nonetheless, some genes may have helped modern East Asians adapt to the environment; the putatively Neanderthal Val92Met variant of the MC1R gene, which may be weakly associated with red hair and UV radiation sensitivity, is primarily found in East Asian, rather than European, individuals.
Some genes related to the immune system appear to have been affected by introgression, which may have aided migration, such as OAS1, STAT2, TLR6, TLR1, TLR10, and several related to immune response. In addition, Neanderthal genes have been implicated in the structure and function of the brain, keratin filaments, sugar metabolism, muscle contraction, body fat distribution, enamel thickness and oocyte meiosis. Nonetheless, a large portion of the surviving introgressed DNA appears to be non-coding ("junk") DNA with few biological functions. There is considerably less Neanderthal ancestry on the X chromosome than on the autosomal chromosomes. This has led to suggestions that the admixture with modern humans was sex-biased, primarily the result of mating between modern human females and Neanderthal males. Other authors have suggested that this pattern may instead be due to negative selection against Neanderthal alleles; however, the two proposals are not mutually exclusive. A 2023 study confirmed that the low level of Neanderthal ancestry on the X chromosome is best explained by sex bias in the admixture events, though its authors also found evidence of negative selection on archaic genes. Neanderthal mtDNA (which is passed on from mother to child) is absent in modern humans. This is evidence that interbreeding occurred mainly between Neanderthal males and modern human females. According to Svante Pääbo, it is not clear that modern humans were socially dominant over Neanderthals, which might otherwise explain why the interbreeding occurred primarily between Neanderthal males and modern human females. Furthermore, even if Neanderthal women and modern human males did interbreed, Neanderthal mtDNA lineages may have gone extinct if the women who carried them only gave birth to sons.
The lack of Neanderthal-derived Y chromosomes (passed on from father to son) in modern humans has also inspired suggestions that the hybrids who contributed ancestry to modern populations were predominantly female, or that the Neanderthal Y chromosome was incompatible with H. sapiens and became extinct. According to linkage disequilibrium mapping, the last Neanderthal gene flow into the modern human genome occurred 86–37,000 years ago, and most likely 65–47,000 years ago. It is thought that the Neanderthal genes which contributed to the present-day human genome stemmed from interbreeding in the Near East rather than across the entirety of Europe. However, interbreeding still occurred without contributing to the modern genome. The approximately 40,000-year-old modern human Oase 2 was found, in 2015, to have had 6–9% (point estimate 7.3%) Neanderthal DNA, indicating a Neanderthal ancestor four to six generations earlier, but this hybrid population does not appear to have made a substantial contribution to the genomes of later Europeans. In 2016, the DNA of Neanderthals from Denisova Cave revealed evidence of interbreeding 100,000 years ago, and interbreeding with an earlier dispersal of H. sapiens may have occurred as early as 120,000 years ago in places such as the Levant. The earliest H. sapiens remains outside of Africa occur at Misliya Cave 194–177,000 years ago, and at Skhul and Qafzeh 120–90,000 years ago. The Qafzeh humans lived at approximately the same time as the Neanderthals from the nearby Tabun Cave. The Neanderthals of the German Hohlenstein-Stadel have deeply divergent mtDNA compared to more recent Neanderthals, possibly due to introgression of human mtDNA between 316,000 and 219,000 years ago, or simply because they were genetically isolated. Whatever the case, these first interbreeding events have left no trace in modern human genomes.
Genetic evidence suggests that, following their split from Denisovans, Neanderthals experienced gene flow (around 3% of their genome) from the lineage leading to modern humans prior to the expansion of modern humans outside of Africa during the Last Glacial Period, with this interbreeding suggested to have taken place around 200–300,000 years ago. Detractors of the interbreeding model argue that the genetic similarity is only a remnant of a common ancestor rather than of interbreeding, although this is unlikely as it fails to explain why sub-Saharan Africans do not have Neanderthal DNA. Interbreeding with Denisovans Although nDNA confirms that Neanderthals and Denisovans are more closely related to each other than they are to modern humans, Neanderthals and modern humans share a more recent maternally transmitted mtDNA common ancestor, possibly due to interbreeding between Denisovans and some unknown human species. In terms of mtDNA, the 400,000-year-old Neanderthal-like humans from Sima de los Huesos in northern Spain are more closely related to Denisovans than to Neanderthals. Several Neanderthal-like fossils in Eurasia from a similar time period are often grouped into H. heidelbergensis, some of which may be relict populations of earlier humans that could have interbred with Denisovans. This is also used to explain an approximately 124,000-year-old German Neanderthal specimen whose mtDNA diverged from that of other Neanderthals (except for Sima de los Huesos) about 270,000 years ago, while its genomic DNA indicated divergence less than 150,000 years ago. Sequencing of the genome of a Denisovan from Denisova Cave has shown that 17% of its genome derives from Neanderthals. This Neanderthal DNA more closely resembled that of a 120,000-year-old Neanderthal bone from the same cave than that of Neanderthals from Vindija Cave, Croatia, or Mezmaiskaya Cave in the Caucasus, suggesting that interbreeding was local.
For the 90,000-year-old Denisova 11, it was found that her father was a Denisovan related to more recent inhabitants of the region, and her mother a Neanderthal related to more recent European Neanderthals at Vindija Cave, Croatia. Given how few Denisovan bones are known, the discovery of a first-generation hybrid indicates that interbreeding was very common between these species, and that Neanderthal migration across Eurasia likely occurred sometime after 120,000 years ago. Extinction Transition The extinction of Neanderthals was part of the broader Late Pleistocene megafaunal extinction event. Whatever the cause of their extinction, Neanderthals were replaced by modern humans, as indicated by the near-full replacement of Middle Palaeolithic Mousterian stone technology with modern human Upper Palaeolithic Aurignacian stone technology across Europe (the Middle-to-Upper Palaeolithic Transition) from 41,000 to 39,000 years ago. By 44,200–40,600 BP, Neanderthals had vanished from northwestern Europe. However, it is postulated that Iberian Neanderthals persisted until about 35,000 years ago, as indicated by the date range of transitional lithic assemblages—Châtelperronian, Uluzzian, Protoaurignacian and Early Aurignacian. The latter two are attributed to modern humans, but the former two have unconfirmed authorship, being potentially products of Neanderthal/modern human cohabitation and cultural transmission. Further, the appearance of the Aurignacian south of the Ebro River has been dated to roughly 37,500 years ago, which has prompted the "Ebro Frontier" hypothesis, which holds that the river presented a geographic barrier preventing modern human immigration and thus prolonged Neanderthal persistence. However, the dating of the Iberian Transition is debated, with a contested timing of 43,000–40,800 years ago at Cueva Bajondillo, Spain. The Châtelperronian appears in northeastern Iberia about 42,500–41,600 years ago.
Some Neanderthals in southern Iberia were dated to much later than this—such as at Zafarraya (30,000 years ago) and Gorham's Cave (28,000 years ago)—but these dates may be inaccurate, as they were based on ambiguous artefacts rather than direct dating. A claim of Neanderthals surviving in a polar refuge in the Ural Mountains is loosely supported by Mousterian stone tools dating to 34,000 years ago from the northern Siberian Byzovaya site, at a time when modern humans may not yet have colonised the northern reaches of Europe; however, modern human remains are known from the nearby Mamontovaya Kurya site dating to 40,000 years ago. Indirect dating of Neanderthal remains from Mezmaiskaya Cave reported a date of about 30,000 years ago, but direct dating instead yielded 39,700 ± 1,100 years ago, more in line with trends exhibited in the rest of Europe. The earliest indication of Upper Palaeolithic modern human immigration into Europe is a series of modern human teeth with Neronian industry stone tools found at Mandrin Cave, Malataverne, France, dated in 2022 to between 56,800 and 51,700 years ago. The earliest bones in Europe date to roughly 45–43,000 years ago, in Bulgaria, Italy and Britain. This wave of modern humans replaced Neanderthals. However, Neanderthals and H. sapiens have a much longer contact history: DNA evidence indicates H. sapiens contact with Neanderthals and admixture as early as 120–100,000 years ago. A 2019 reanalysis of 210,000-year-old skull fragments from the Greek Apidima Cave, assumed to have belonged to a Neanderthal, concluded that they belonged to a modern human; together with a Neanderthal skull dating to 170,000 years ago from the same cave, this would indicate that H. sapiens there were replaced by Neanderthals until modern humans returned about 40,000 years ago. This identification was refuted by a 2020 study. Archaeological evidence suggests that Neanderthals displaced modern humans in the Near East around 100,000 years ago until about 60–50,000 years ago.
Cause Modern humans Historically, modern human technology was viewed as vastly superior to that of Neanderthals, with more efficient weaponry and subsistence strategies, and Neanderthals simply went extinct because they could not compete. The discovery of Neanderthal/modern human introgression has caused the resurgence of the multiregional hypothesis, wherein the present day genetic makeup of all humans is the result of complex genetic contact among several different populations of humans dispersed across the world. By this model, Neanderthals and other recent archaic humans were simply assimilated into the modern human genome – that is, they were effectively bred out into extinction. Modern humans coexisted with Neanderthals in Europe for around 3,000 to 5,000 years. Climate change Their ultimate extinction coincides with Heinrich event 4, a period of intense seasonality; later Heinrich events are also associated with massive cultural turnovers when European human populations collapsed. This climate change may have depopulated several regions of Neanderthals, like previous cold spikes, but these areas were instead repopulated by immigrating humans, leading to Neanderthal extinction. In southern Iberia, there is evidence that Neanderthal populations declined during H4 and the associated proliferation of Artemisia-dominated desert-steppes. It has also been proposed that climate change was the primary driver, as their low population left them vulnerable to any environmental change, with even a small drop in survival or fertility rates possibly quickly leading to their extinction. However, Neanderthals and their ancestors had survived through several glacial periods over their hundreds of thousands of years of European habitation. 
It is also proposed that around 40,000 years ago, when Neanderthal populations may have already been dwindling from other factors, the Campanian Ignimbrite Eruption in Italy could have led to their final demise, as it produced cooling for a year and acid rain for several more years. Disease Modern humans may have introduced African diseases to Neanderthals, contributing to their extinction. A lack of immunity, compounded by an already low population, was potentially devastating to the Neanderthal population, and low genetic diversity could have also rendered fewer Neanderthals naturally immune to these new diseases ("differential pathogen resistance" hypothesis). However, compared to modern humans, Neanderthals had a similar or higher genetic diversity for 12 major histocompatibility complex (MHC) genes associated with the adaptive immune system, casting doubt on this model. Low population and inbreeding depression may have caused maladaptive birth defects, which could have contributed to their decline (mutational meltdown). In late-20th-century New Guinea, due to cannibalistic funerary practices, the Fore people were decimated by transmissible spongiform encephalopathies, specifically kuru, a highly virulent disease spread by ingestion of prions found in brain tissue. However, individuals with the 129 variant of the PRNP gene were naturally immune to the prions. Studying this gene led to the discovery that the 129 variant was widespread among all modern humans, which could indicate widespread cannibalism at some point in human prehistory. Because Neanderthals are known to have practised cannibalism to an extent and to have co-existed with modern humans, British palaeoanthropologist Simon Underdown speculated that modern humans transmitted a kuru-like spongiform disease to Neanderthals, and, because the 129 variant appears to have been absent in Neanderthals, it quickly killed them off. 
In popular culture Neanderthals have been portrayed in popular culture including appearances in literature, visual media and comedy. The "caveman" archetype often mocks Neanderthals and depicts them as primitive, hunchbacked, knuckle-dragging, club-wielding, grunting, nonsocial characters driven solely by animal instinct. "Neanderthal" can also be used as an insult. In literature, they are sometimes depicted as brutish or monstrous, such as in H. G. Wells' The Grisly Folk and Elizabeth Marshall Thomas' The Animal Wife, but sometimes with a civilised but unfamiliar culture, as in William Golding's The Inheritors, Björn Kurtén's Dance of the Tiger, and Jean M. Auel's Clan of the Cave Bear and her Earth's Children series.
https://en.wikipedia.org/wiki/Freundlich%20equation
Freundlich equation
The Freundlich equation or Freundlich adsorption isotherm, an adsorption isotherm, is an empirical relationship between the quantity of a gas adsorbed onto a solid surface and the gas pressure. The same relationship is also applicable for the concentration of a solute adsorbed onto the surface of a solid and the concentration of the solute in the liquid phase. In 1909, Herbert Freundlich gave an expression representing the isothermal variation of adsorption of a quantity of gas adsorbed by unit mass of solid adsorbent with gas pressure. This equation is known as the Freundlich adsorption isotherm or Freundlich adsorption equation. As this relationship is entirely empirical, in the case where adsorption behavior can be properly fit by isotherms with a theoretical basis, it is usually appropriate to use such isotherms instead (see for example the Langmuir and BET adsorption theories). The Freundlich equation can also be derived (non-empirically) by attributing the change in the equilibrium constant of the binding process to the heterogeneity of the surface and the variation in the heat of adsorption. Freundlich adsorption isotherm The Freundlich adsorption isotherm is mathematically expressed as x/m = K·c^(1/n). In Freundlich's notation (used for his experiments dealing with the adsorption of organic acids on coal in aqueous solutions), x/m signifies the ratio between the adsorbed mass x of the adsorbate and the mass m of the adsorbent, which in Freundlich's studies was coal, and c denotes the equilibrium concentration of the adsorbate within the solvent. Freundlich determined numerical values of the parameters K and n for the three organic acids according to this equation. Freundlich's experimental data can also be used in a contemporary computer-based fit, in which the ΔK and Δn values give the error bars of the fitted parameters and the fitted K and n values themselves generate the corresponding isotherm curves. 
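As a concrete illustration of the isotherm x/m = K·c^(1/n), the following minimal sketch evaluates the equation directly. The values of K and n here are illustrative placeholders, not Freundlich's measured parameters.

```python
# Minimal sketch: evaluating the Freundlich isotherm x/m = K * c**(1/n).
# K and n below are illustrative placeholders, not fitted values.

def freundlich(c, K, n):
    """Adsorbed mass per unit mass of adsorbent at equilibrium concentration c."""
    return K * c ** (1.0 / n)

# Quadrupling the concentration multiplies the uptake by 4**(1/n), not by 4,
# reflecting the sub-linear (for n > 1) character of the isotherm:
K, n = 0.5, 2.0
low = freundlich(1.0, K, n)   # 0.5
high = freundlich(4.0, K, n)  # 1.0
```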
The equation can also be written in the notation sometimes found for experiments in the gas phase: x/m = K·p^(1/n), where x is the mass of adsorbate, m the mass of adsorbent, and p the equilibrium pressure of the gaseous adsorbate in the case of experiments made in the gas phase (gas/solid interaction with gaseous species/adsorbed species). K and n are constants for a given adsorbate and adsorbent at a given temperature (from there, the term isotherm, needed to avoid significant gas pressure fluctuations due to uncontrolled temperature variations in the case of adsorption experiments of a gas onto a solid phase). K is a distribution coefficient and n a correction factor. At high pressure 1/n → 0, hence the extent of adsorption becomes independent of pressure. The Freundlich equation is not unique to heterogeneous surfaces; consequently, if the data fit the equation, it is only likely, but not proved, that the surface is heterogeneous. The heterogeneity of the surface can be confirmed with calorimetry. Homogeneous surfaces (or heterogeneous surfaces that exhibit homogeneous adsorption (single site)) have a constant heat of adsorption. On the other hand, heterogeneous adsorption (multi-site) has a heat of adsorption that varies with the percentage of sites occupied. When the adsorbate pressure in the gas phase (or the concentration in solution) is low, high-energy sites will be occupied first. As the pressure in the gas phase (or the concentration in solution) increases, the low-energy sites will then be occupied, resulting in a weaker heat of adsorption. Limitation of Freundlich adsorption isotherm Experimentally it was determined that the extent of gas adsorption varies directly with pressure, and then it varies directly with pressure raised to the power 1/n until saturation pressure is reached. Beyond that point, the rate of adsorption saturates even after applying higher pressure. Thus, the Freundlich adsorption isotherm fails at higher pressure.
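The parameters K and n are usually extracted by linearizing the isotherm: taking logarithms of x/m = K·p^(1/n) gives log(x/m) = log K + (1/n)·log p, a straight line in log-log coordinates. The sketch below recovers K and n from synthetic data generated with known parameters (the data are illustrative, not Freundlich's measurements).

```python
import math

# Sketch: recovering Freundlich parameters K and n by linear regression on
#   log(x/m) = log K + (1/n) * log p.
# The pressure/uptake data are synthetic, generated from K = 2.0, n = 3.0,
# purely to illustrate the linearization.
K_true, n_true = 2.0, 3.0
p = [0.5, 1.0, 2.0, 4.0, 8.0]
xm = [K_true * pi ** (1.0 / n_true) for pi in p]

# Ordinary least squares on the log-transformed data.
lx = [math.log(v) for v in p]
ly = [math.log(v) for v in xm]
N = len(p)
mx, my = sum(lx) / N, sum(ly) / N
slope = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)
intercept = my - slope * mx

K_fit = math.exp(intercept)  # ≈ 2.0
n_fit = 1.0 / slope          # ≈ 3.0
```

Because the synthetic data lie exactly on the model curve, the regression returns the generating parameters to floating-point precision; with real data, the scatter of the residuals in log-log space indicates how well the empirical Freundlich form applies.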
https://en.wikipedia.org/wiki/Fluorapatite
Fluorapatite
Fluorapatite, often with the alternate spelling of fluoroapatite, is a phosphate mineral with the formula Ca5(PO4)3F (calcium fluorophosphate). Fluorapatite is a hard crystalline solid. Although samples can have various colors (green, brown, blue, yellow, violet, or colorless), the pure mineral is colorless, as expected for a material lacking transition metals. Along with hydroxylapatite, it can be a component of tooth enamel, especially in individuals who use fluoridated toothpaste, but for industrial use both minerals are mined in the form of phosphate rock, whose usual mineral composition is primarily fluorapatite but often with significant amounts of the other. Fluorapatite crystallizes in a hexagonal crystal system. It is often combined as a solid solution with hydroxylapatite (Ca5(PO4)3OH or Ca10(PO4)6(OH)2) in biological matrices. Chlorapatite (Ca5(PO4)3Cl) is another related structure. Industrially, the mineral is an important source of both phosphoric and hydrofluoric acids. Fluorapatite as a mineral is the most common phosphate mineral. It occurs widely as an accessory mineral in igneous rocks and in calcium-rich metamorphic rocks. It commonly occurs as a detrital or diagenetic mineral in sedimentary rocks and is an essential component of phosphorite ore deposits. It occurs as a residual mineral in lateritic soils. Fluorapatite is found in the teeth of sharks and other fishes in varying concentrations. It is also present in human teeth that have been exposed to fluoride ions, for example, through water fluoridation or by using fluoride-containing toothpaste. The presence of fluorapatite helps prevent tooth decay or dental caries. Fluoroapatite has a mild bacteriostatic property as well, which helps decrease the proliferation of Streptococcus mutans, the predominant bacterium related to dental caries. Synthesis Fluorapatite can be synthesized in a two-step process. First, calcium phosphate is generated by combining calcium and phosphate salts at neutral pH. 
This material then reacts further with fluoride sources (often sodium monofluorophosphate or calcium fluoride (CaF2)) to give the mineral. This reaction is integral in the global phosphorus cycle.
3 Ca2+ + 2 PO43− → Ca3(PO4)2
3 Ca3(PO4)2 + CaF2 → 2 Ca5(PO4)3F
Applications Fluorapatite as a naturally occurring impurity in apatite generates hydrogen fluoride as a byproduct during the production of phosphoric acid, as apatite is digested by sulfuric acid. The hydrogen fluoride byproduct is now one of the industrial sources of hydrofluoric acid, which in turn is used as a starting reagent for synthesis of a range of important industrial and pharmaceutical fluorine compounds. Synthetic fluorapatite doped with manganese-II and antimony-V formed the basis for the second generation of fluorescent tube phosphors referred to as halophosphors. When irradiated with 253.7 nm mercury resonance radiation they fluoresced with broad emission which appeared within the range of acceptable whites. The antimony-V acted as the primary activator and produced a broad blue emission. Addition of manganese-II caused a second broad peak to appear at the red end of the emission spectrum at the expense of the antimony peak, excitation energy being transferred from the antimony to the manganese by a non-radiative process and making the emitted light appear less blue and more pink. Replacement of some of the fluoride ions with chloride ions in the lattice caused a general shift of the emission bands to the longer wavelength red end of the spectrum. These alterations allowed phosphors for Warm White, White and Daylight tubes, (with correlated color temperatures of 2900, 4100 and 6500 K respectively), to be made. The amounts of the manganese and antimony activators vary between 0.05 and 0.5 mole percent. The reaction used to create halophosphor is shown below. The antimony and manganese must be incorporated in the correct trace amounts if the product is to be fluorescent. 
6 CaHPO4 + (3+x) CaCO3 + (1−x) CaF2 + (2x) NH4Cl → 2 Ca5(PO4)3(F1−xClx) + (3+x) CO2 + (3+x) H2O + (2x) NH3
Sometimes some of the calcium was substituted with strontium, giving narrower emission peaks. For special purpose or colored tubes the halophosphor was mixed with small quantities of other phosphors, particularly in De-Luxe tubes with higher color rendering index for use in food market or art studio lighting. Prior to the development of halophosphors in 1942, the first-generation phosphors used in fluorescent tubes were manganese-II-activated zinc orthosilicate (with the willemite lattice) and zinc beryllium orthosilicate. Due to the respiratory toxicity of beryllium compounds, the obsolescence of these early phosphor types was advantageous to health. Since about 1990, the third-generation tri-phosphors, three separate red, blue and green phosphors activated with rare-earth ions and mixed in proportions to produce acceptable whites, have largely replaced halophosphors. Fluorapatite can be used as a precursor for the production of phosphorus. It can be reduced by carbon in the presence of quartz:
4 Ca5(PO4)3F + 21 SiO2 + 30 C → 20 CaSiO3 + 30 CO + SiF4 + 6 P2
Upon cooling, the diphosphorus dimerizes to white phosphorus (P4):
2 P2 → P4
Fluorapatite is also used as a gemstone.
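The carbothermal reduction of fluorapatite with quartz can be verified by counting atoms on both sides. The sketch below checks one commonly cited balancing, 4 Ca5(PO4)3F + 21 SiO2 + 30 C → 20 CaSiO3 + 30 CO + SiF4 + 6 P2, with the formulas written out as explicit element tallies for simplicity.

```python
from collections import Counter

# Element-count check for one commonly cited balancing of the carbothermal
# reduction of fluorapatite with quartz:
#   4 Ca5(PO4)3F + 21 SiO2 + 30 C -> 20 CaSiO3 + 30 CO + SiF4 + 6 P2
def tally(coeff, elems):
    """Scale a per-formula-unit element count by a stoichiometric coefficient."""
    return Counter({el: coeff * k for el, k in elems.items()})

Ca5PO43F = {"Ca": 5, "P": 3, "O": 12, "F": 1}  # fluorapatite
SiO2     = {"Si": 1, "O": 2}                   # quartz
C        = {"C": 1}
CaSiO3   = {"Ca": 1, "Si": 1, "O": 3}
CO       = {"C": 1, "O": 1}
SiF4     = {"Si": 1, "F": 4}
P2       = {"P": 2}

lhs = tally(4, Ca5PO43F) + tally(21, SiO2) + tally(30, C)
rhs = tally(20, CaSiO3) + tally(30, CO) + tally(1, SiF4) + tally(6, P2)
balanced = (lhs == rhs)  # True: every element count matches
```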
https://en.wikipedia.org/wiki/Ligand%20cone%20angle
Ligand cone angle
In coordination chemistry, the ligand cone angle (θ) is a measure of the steric bulk of a ligand in a transition metal coordination complex. It is defined as the solid angle formed with the metal at the vertex of a cone and the outermost edge of the van der Waals spheres of the ligand atoms at the perimeter of the base of the cone. Tertiary phosphine ligands are commonly classified using this parameter, but the method can be applied to any ligand. The term cone angle was first introduced by Chadwick A. Tolman, a research chemist at DuPont. Tolman originally developed the method for phosphine ligands in nickel complexes, determining them from measurements of accurate physical models. Asymmetric cases The concept of cone angle is most easily visualized with symmetrical ligands, e.g. PR3. But the approach has been refined to include less symmetrical ligands of the type PRR′R″ as well as diphosphines. In such asymmetric cases, the half-angles of the three substituents, θi/2, are averaged and then doubled to find the total cone angle, θ = (2/3) Σ θi/2. In the case of diphosphines, the half-angle of the backbone is approximated as half the chelate bite angle, assuming a bite angle of 74°, 85°, and 90° for diphosphines with methylene, ethylene, and propylene backbones, respectively. The Manz cone angle is often easier to compute than the Tolman cone angle. Variations The Tolman cone angle method assumes empirical bond data and defines the perimeter as the maximum possible circumscription of an idealized free-spinning substituent. The metal-ligand bond length in the Tolman model was determined empirically from crystal structures of tetrahedral nickel complexes. In contrast, the solid-angle concept derives both bond length and the perimeter from empirical solid state crystal structures. There are advantages to each system. If the geometry of a ligand is known, either through crystallography or computations, an exact cone angle (θ) can be calculated. 
No assumptions about the geometry are made, unlike the Tolman method. Application The concept of cone angle is of practical importance in homogeneous catalysis because the size of the ligand affects the reactivity of the attached metal center. For example, the selectivity of hydroformylation catalysts is strongly influenced by the size of the coligands. Despite being monovalent, some phosphines are large enough to occupy more than half of the coordination sphere of a metal center. Recent research has found that other descriptors—such as percent buried volume—are more accurate than cone angle at capturing the relevant steric effects of the phosphine ligand(s) when bound to the metal center.
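The Tolman averaging rule for asymmetric ligands PRR′R″, θ = (2/3) Σ θi/2, can be sketched numerically. The half-angle inputs below are illustrative placeholders, not tabulated Tolman data.

```python
# Sketch of the Tolman averaging rule for an asymmetric ligand PRR'R'':
# the three substituent half-angles theta_i/2 are averaged and then doubled,
#   theta = 2 * (1/3) * sum(theta_i/2) = (2/3) * sum(theta_i/2).
# Half-angle inputs are illustrative, not measured values.
def tolman_cone_angle(half_angles_deg):
    """Total cone angle (degrees) from three substituent half-angles."""
    assert len(half_angles_deg) == 3, "one half-angle per P substituent"
    return (2.0 / 3.0) * sum(half_angles_deg)

# For a symmetric PR3 ligand the rule reduces to theta = 2 * (theta_i/2):
symmetric = tolman_cone_angle([72.5, 72.5, 72.5])  # ≈ 145.0
mixed = tolman_cone_angle([60.0, 70.0, 80.0])      # ≈ 140.0
```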
https://en.wikipedia.org/wiki/Lar%20gibbon
Lar gibbon
The lar gibbon (Hylobates lar), also known as the white-handed gibbon, is an endangered primate in the gibbon family, Hylobatidae. It is one of the better-known gibbons and is often kept in captivity. Taxonomy There are five subspecies of lar gibbon: Malaysian lar gibbon (H. l. lar) Carpenter's lar gibbon (H. l. carpenteri) Central lar gibbon (H. l. entelloides) Sumatran lar gibbon (H. l. vestitus) Yunnan lar gibbon (H. l. yunnanensis) (possibly extinct) Physical description The fur coloring of the lar gibbon varies from black and dark-brown to light-brown and sandy colors. The hands and feet are white, and a ring of white hair surrounds the black face. Both males and females occur in all color variants, and the sexes also hardly differ in size. Gibbons are true brachiators, propelling themselves through the forest by swinging under the branches using their arms. Reflecting this mode of locomotion, the white-handed gibbon has curved fingers, elongated hands, extremely long arms and relatively short legs, giving it an intermembral index of 129.7, one of the highest of the primates. As with all apes, the number of caudal vertebrae has been reduced drastically, resulting in the loss of a functional tail. Gibbons have tough, bony padding on their buttocks, known as the ischial callosities, or sitting pads. Distribution and habitat Lar gibbons have the greatest north-south range of any of the gibbon species. They are found in Indonesia, Laos, Malaysia, Myanmar and Thailand. Their range historically extended from southwest China to Thailand and Burma south to the whole Malay Peninsula in primary and secondary tropical rain forests. It is also present in the northwest portion of the island of Sumatra. In recent decades, especially, the continental range has been reduced and fragmented. Lar gibbons are likely extinct in China. However, if any populations persist, they would only be found in southwest Yunnan, their former range. 
Lar gibbons are usually found in lowland dipterocarp forest, hill dipterocarp forest, and upper dipterocarp forest, including primary lowland and submontane rainforest, mixed deciduous bamboo forest, and seasonal evergreen forest. They are not usually found higher than 1200 meters above sea level. Gibbon species are highly allopatric, usually separated by large rivers. As a result, their range extends through southern and eastern Myanmar, but only east of the Salween River. They are found through the Malay Peninsula. Lar gibbons also exist west of the Mekong River in northwestern Laos and northern Sumatra. The lar gibbon can be found living in sympatry with several other primates and apes, including orangutans (Pongo pygmaeus), siamangs (S. syndactylus), pileated gibbons (Hylobates pileatus), purple-faced langurs (Trachypithecus spp.), Thomas's langur (Presbytis thomasi), slow loris (Nycticebus coucang), and several macaques (Macaca spp.). In Thailand, lar gibbons probably number between 15,000 and 20,000, though there may be as few as 10 in China, if any. Diet and dentition The lar gibbon is considered frugivorous with fruit constituting 50% of its diet, but leaves (29%) are a substantial part, with insects (13%) and flowers (9%) forming the remainder. In the wild, lar gibbons will eat a large variety of foods, including figs and other small, sweet fruits, liana fruit, tree fruit and berries, as well as young leaves, buds and flowers, new shoots, vines, vine shoots, and insects, including mantids and wasps, and even birds' eggs. During the summer months, when figs and leaves are less available, insect consumption increases twenty-fold relative to the winter. Its dental formula is , the generalized formula for Old World monkeys and apes (including humans). The dental arcade is U-shaped, and the mandible is thin and light. The incisors are broad and flat, while the molars have low, rounded cusps with thick enamel. 
The most noticeable characteristic of the dentition of Hylobates lar is the presence of large, dagger-like canines in both the upper and lower jaw. These canines are not sexually dimorphic. Behavior Lar gibbons are diurnal and arboreal, inhabiting rain forests. Lar gibbons are usually active for an average of 8.7 hours per day, leaving their sleeping sites right around sunrise and entering sleeping trees an average of 3.4 hours before sunset. On average, lar gibbons spend their days feeding (32.6%), resting (26.2%), traveling (24.2%), in social activities (11.3%), vocalizing (4.0%) and in intergroup encounters (1.9%), although actual proportions of activities can change significantly over the course of the year. They rarely come to the ground, instead using their long arms to brachiate through the trees. With their hooked hands, they can move swiftly with great momentum, swinging from the branches. Although they rarely come to the ground naturally, while there, they walk bipedally with arms raised above their heads for balance. Their social organization is dominated by monogamous family pairs, with one breeding male and one female along with their offspring. When a juvenile reaches sexual maturity, it is expelled from the family unit. However, this traditional conception has come under scrutiny. Long-term studies conducted in Khao Yai National Park in Thailand suggest their mating system is somewhat flexible, incorporating extra-pair copulations, partner changes and polyandrous groupings. This multimale polyandry may be attributed to cooperative territory use and female defense. As range size increases, males are more successful in defending it in a pair or group. Additionally, these extra pair copulations may increase the chance of reproduction with a mate of superior genetic quality and decrease the chance of infanticide. Vocalisations Family groups inhabit a firm territory, which they protect by warding off other gibbons with their calls. 
Each morning, the family gathers on the edge of its territory and begins a "great call", a duet between the breeding pair. Each species has a typified call and each breeding pair has unique variations on that theme. The great call of Hylobates lar is characterized by its frequent use of short hoots with more complex hoots, along with a "quavering" opening and closing. These calls are one of the traits used in determining species differences among the gibbons. Recent studies indicate that gibbon songs have evolved to communicate conflict in terms of predation. In the presence of tigers, clouded leopards, crested serpent eagles and reticulated pythons, songs were more likely to contain sharp wow elements than normal duets. Reproduction Sexually, they are similar to other gibbons. Mating occurs in every month of the year, but most conceptions occur during the dry season in March, with a peak in births during the late rainy season, in October. On average, females reproduce for the first time at about 11 years of age in the wild, much later than in captivity. Gestation is six months long on average, and pregnancies are usually of a single young. Young are nursed for approximately two years, and full maturity comes at about eight years. The life expectancy of the lar gibbons in the wild is about 25 years. Conservation Lar gibbons are threatened in various ways: they are sometimes hunted for their meat, sometimes a parent is killed to capture young animals for pets, but perhaps the most pervasive is the loss of habitat. Lar gibbon habitats are already threatened by forest clearance for the construction of roads, agriculture, ecotourism, domesticated cattle and elephants, forest fires, subsistence logging, illegal logging, new village settlement, and palm oil plantations.
https://en.wikipedia.org/wiki/Local%20Interstellar%20Cloud
Local Interstellar Cloud
The Local Interstellar Cloud (LIC), also known as the Local Fluff, is an interstellar cloud roughly 30 light-years across, through which the Solar System is moving. This feature overlaps with a region around the Sun referred to as the solar neighborhood. It is unknown whether the Sun is embedded in the Local Interstellar Cloud, or is in the region where the Local Interstellar Cloud is interacting with the neighboring G-Cloud. Like the G-Cloud and others, the LIC is part of the Very Local Interstellar Medium which begins where the heliosphere and interplanetary medium end, the furthest that probes have traveled. Structure The Solar System is located within a structure called the Local Bubble, a low-density region of the galactic interstellar medium. Within this region is the Local Interstellar Cloud (LIC), an area of slightly higher hydrogen density. It is estimated that the Solar System entered the LIC within the past 10,000 years. It is uncertain whether the Sun is still inside of the LIC or has already entered a transition zone between the LIC and the G cloud. A recent analysis estimates the Sun will completely exit the LIC in no more than 1,900 years. The cloud has a temperature of about 7,000 K, about the same temperature as the surface of the Sun. However, its specific heat capacity is very low because it is not very dense, with about 0.3 atoms per cubic centimetre. This is less dense than the average for the interstellar medium in the Milky Way (0.5 atoms per cubic centimetre), though six times denser than the gas in the hot, low-density Local Bubble (0.05 atoms per cubic centimetre) which surrounds the local cloud. In comparison, Earth's atmosphere at the edge of space (i.e. 100 km above sea level) has around 1.2×10^13 molecules per cubic centimeter, dropping to around 50 million (5.0×10^7) at higher altitudes. The cloud is flowing outwards from the Scorpius–Centaurus association, a stellar association that is a star-forming region, roughly perpendicular to the Sun's own direction, if assumed to be two dimensional. 
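As a back-of-the-envelope check on the density contrasts described above, the following sketch compares the commonly quoted approximate number densities (the figures are illustrative literature values, not precise measurements):

```python
# Rough density comparison, using approximate quoted values for illustration:
lic_density = 0.3        # atoms per cm^3, Local Interstellar Cloud
bubble_density = 0.05    # atoms per cm^3, surrounding Local Bubble
sea_level_air = 2.5e19   # molecules per cm^3, air at sea level (approximate)

# The cloud is about six times denser than the Local Bubble gas ...
ratio_cloud_to_bubble = lic_density / bubble_density   # ≈ 6

# ... yet still a near-vacuum compared with air at sea level:
ratio_air_to_cloud = sea_level_air / lic_density       # ~1e20
```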
In 2019, researchers found interstellar iron-60 (60Fe) in Antarctica, which they relate to the Local Interstellar Cloud. Interaction with solar magnetic field In 2009, Voyager 2 data suggested that the magnetic strength of the local interstellar medium was much stronger than expected (370 to 550 picoteslas (pT), against previous estimates of 180 to 250 pT). The fact that the Local Interstellar Cloud is strongly magnetized could explain its continued existence despite the pressures exerted upon it by the winds that blew out the Local Bubble. The Local Interstellar Cloud's potential effects on Earth are greatly diminished by the solar wind and the Sun's magnetic field. This interaction with the heliosphere is under study by the Interstellar Boundary Explorer (IBEX), a NASA satellite mapping the boundary between the Solar System and interstellar space.
https://en.wikipedia.org/wiki/Giant%20armadillo
Giant armadillo
The giant armadillo (Priodontes maximus), colloquially tatu-canastra, tatou, ocarro or tatú carreta, is the largest living species of armadillo (although their extinct relatives, the glyptodonts, were much larger). It lives in South America, ranging as far south as northern Argentina. This species is considered vulnerable to extinction. The giant armadillo prefers termites and some ants as prey, and often consumes the entire population of a termite mound. It also has been known to prey upon worms, larvae and larger creatures, such as spiders and snakes, and plants. Some giant armadillos have been reported to have eaten bees by digging into beehives. Description The giant armadillo is the largest living species of armadillo, with 11 to 13 hinged bands protecting the body and a further three or four on the neck. Its body is dark brown in color, with a lighter, yellowish band running along the sides, and a pale, yellow-white head. These armadillos have around 80 to 100 teeth, which is more than any other terrestrial mammal. The teeth are all similar in appearance, being reduced premolars and molars, grow constantly throughout life, and lack enamel. They also possess extremely long front claws, including a sickle-shaped third claw up to in length, which are proportionately the largest of any living mammal. The tail is covered in small rounded scales and does not have the heavy bony scutes that cover the upper body and top of the head. The animal is almost entirely hairless, with just a few beige colored hairs protruding between the scutes. Giant armadillos typically weigh around when fully grown; however, a specimen has been weighed in the wild and captive specimens have been weighed up to . The typical length of the species is , with the tail adding another . Distribution and habitat Giant armadillos are found throughout much of northern South America east of the Andes, except for eastern Brazil and Paraguay. 
In the south, they reach the northernmost provinces of Argentina, including Salta, Formosa, Chaco, and Santiago del Estero. There are no recognised geographic subspecies. They primarily inhabit open habitats, with cerrado grasslands covering about 25% of their range, but they can also be found in lowland forests. Biology and behavior Giant armadillos are solitary and nocturnal, spending the day in burrows. They also burrow to escape predators, being unable to completely roll into a protective ball. Compared with those of other armadillos, their burrows are unusually large, with entrances averaging wide, and typically opening to the west. Giant armadillos use their large front claws to dig for prey and rip open termite mounds. The diet is mainly composed of termites, although ants, worms, spiders, other invertebrates, small vertebrates and carrion are also eaten. Little is currently known about this species' reproductive biology, and no juveniles have ever been discovered in the field. The average sleep time of a captive giant armadillo is said to be 18.1 hours. Some giant armadillos have been reported to have eaten bees by digging into beehives. In a long-term study on the species that started in 2003 in the Peruvian Amazon, dozens of other species of mammals, reptiles and birds were found using the giant armadillos' burrows on the same day, including the rare short-eared dog (Atelocynus microtis). Because of this, the species is considered a habitat engineer, and the local extinction of Priodontes may have cascading effects in the mammalian community by impoverishing fossorial habitat. Additionally, the giant armadillo was once key to controlling leaf-cutter ant populations which could destroy crops, but they can also damage crops themselves when digging through soil. Female giant armadillos have two teats and have a gestational period of about five months. Evidence points to only giving birth once every three years. 
Little is known with certainty about their life history, although it is thought that the young are weaned by about seven to eight months of age, and that the mother periodically seals up the entrance to burrows containing younger offspring, presumably to protect them from predators. Although they have never bred in captivity, a wild-born giant armadillo at San Antonio Zoo was estimated to have been around sixteen years old when it died. Threats Hunted throughout its range, a single giant armadillo supplies a great deal of meat, and is the primary source of protein for some indigenous peoples. In addition, live giant armadillos are frequently captured for trade on the black market, and invariably die during transportation or in captivity. Despite this species' wide range, it is locally rare. This is further exacerbated by habitat loss resulting from deforestation. Current estimates indicate the giant armadillo may have undergone a worrying population decline of 30 to 50 percent over the past three decades. Without intervention, this trend is likely to continue. Conservation The giant armadillo was classified as vulnerable on the World Conservation Union's Red List in 2002, and is listed under Appendix I (threatened with extinction) of the Convention on the International Trade in Endangered Species of Wild Flora and Fauna. The giant armadillo is protected by law in Colombia, Guyana, Brazil, Argentina, Paraguay, Suriname and Peru, and commercial international trade is banned by its listing on Appendix I of the Convention on International Trade in Endangered Species (CITES). However, hunting for food and sale in the black market continues to occur throughout its entire range. Some populations occur in protected reserves, including the Parque das Emas in Brazil, and the Central Suriname Nature Reserve, a massive 1.6-million-hectare site of pristine rainforest managed by Conservation International. 
Such protection helps to some degree to mitigate the threat of habitat loss, but targeted conservation action is required to prevent the further decline of this species. At least one zoo park, in Villavicencio, Colombia – Los Ocarros – is dedicated to this animal.
Biology and health sciences
Xenarthra
Animals
1317242
https://en.wikipedia.org/wiki/Atrium%20%28heart%29
Atrium (heart)
The atrium (plural: atria) is one of the two upper chambers in the heart that receives blood from the circulatory system. The blood in the atria is pumped into the heart ventricles through the atrioventricular mitral and tricuspid heart valves. There are two atria in the human heart – the left atrium receives blood from the pulmonary circulation, and the right atrium receives blood from the venae cavae of the systemic circulation. During the cardiac cycle, the atria receive blood while relaxed in diastole, then contract in systole to move blood to the ventricles. Each atrium is roughly cube-shaped except for an ear-shaped projection called an atrial appendage, previously known as an auricle. All animals with a closed circulatory system have at least one atrium. The atrium was formerly called the 'auricle'. That term is still used to describe this chamber in some other animals, such as the Mollusca. Auricles in this modern terminology are distinguished by having thicker muscular walls. Structure Humans have a four-chambered heart consisting of the right and left atria and the right and left ventricles. The atria are the two upper chambers which pump blood to the two lower ventricles. The right atrium and ventricle are often referred to together as the right heart, and the left atrium and ventricle as the left heart. As the atria do not have valves at their inlets, a venous pulsation is normal, and can be detected in the jugular vein as the jugular venous pressure. Internally, the right atrium contains the rough pectinate muscles and the crista terminalis of His, which acts as a boundary between these and the smooth-walled part of the right atrium, the sinus venarum, which is derived from the sinus venosus. The sinus venarum is the adult remnant of the sinus venosus and it surrounds the openings of the venae cavae and the coronary sinus. Attached to each atrium is an atrial appendage.
Right atrium The right atrium receives and holds deoxygenated blood from the superior vena cava, inferior vena cava, anterior cardiac veins, smallest cardiac veins and the coronary sinus, which it then sends down to the right ventricle through the tricuspid valve, which in turn sends it to the pulmonary artery for pulmonary circulation. Right atrial appendage The right atrial appendage (lat: auricula atrii dextra) is located at the front upper surface of the right atrium. Looking from the front, the right atrial appendage appears wedge-shaped or triangular. Its base surrounds the superior vena cava. The right atrial appendage is a pouch-like extension of the right atrium and is covered by a trabecular network of pectinate muscles. The interatrial septum separates the right atrium from the left atrium; this is marked by a depression in the right atrium – the fossa ovalis. The atria are depolarised by calcium. Left atrium The left atrium receives the oxygenated blood from the left and right pulmonary veins, which it pumps through the mitral valve (the left atrioventricular valve) to the left ventricle, for pumping out through the aorta for systemic circulation. Left atrial appendage High in the upper part of the left atrium is a muscular ear-shaped pouch – the left atrial appendage (LAA) (lat: auricula atrii sinistra), which has a tubular trabeculated structure. LAA anatomy as seen in a CT scan is characterized as being in one of four groups: chicken wing (48%), cactus (30%), windsock (19%), and cauliflower (3%). Cauliflower is the morphology most often associated with embolism. The LAA appears to "function as a decompression chamber during left ventricular systole and during other periods when left atrial pressure is high". It also modulates intravascular volume by secreting natriuretic peptides, namely atrial natriuretic peptide (ANP) and brain natriuretic peptide (BNP), into the coronary sinus, where they enter the blood circulation.
The left atrial appendage can be seen on a standard posteroanterior X-ray, where the lower level of the left hilum becomes concave. It can also be seen clearly using transesophageal echocardiography. The left atrial appendage can serve as an approach for mitral valve surgery. The body of the left atrial appendage is anterior to the left atrium and parallel to the left pulmonary veins. The left pulmonary artery passes posterosuperiorly and is separated from the atrial appendage by the transverse sinus. In atrial fibrillation, the left atrial appendage fibrillates rather than contracts, resulting in blood stasis that predisposes to the formation of blood clots. Because of the consequent stroke risk, surgeons may choose to close it during open-heart surgery, using a left atrial appendage occlusion procedure. Conduction system The sinoatrial node (SA node) is located in the posterior aspect of the right atrium, next to the superior vena cava. This is a group of pacemaker cells which spontaneously depolarize to create an action potential. The cardiac action potential then spreads across both atria, causing them to contract and forcing the blood they hold into their corresponding ventricles. The atrioventricular node (AV node) is another node in the cardiac conduction system. This is located between the atria and the ventricles. Blood supply The left atrium is supplied mainly by the left circumflex coronary artery and its small branches. The oblique vein of the left atrium is partly responsible for venous drainage; it derives from the embryonic left superior vena cava. Development During embryogenesis, at about two weeks, a primitive atrium begins to form as a single chamber, which over the following two weeks becomes divided by the septum primum into the left atrium and the right atrium.
The interatrial septum has an opening in the right atrium, the foramen ovale, which provides access to the left atrium; this connection between the two chambers is essential for fetal blood circulation. At birth, when the first breath is taken, fetal blood flow is reversed to travel through the lungs. The foramen ovale is no longer needed and it closes to leave a depression (the fossa ovalis) in the atrial wall. In some cases, the foramen ovale fails to close; this abnormality is present in approximately 25% of the general population, and is known as a patent foramen ovale, an atrial septal defect. It is mostly unproblematic, although it can be associated with paradoxical embolization and stroke. Within the fetal right atrium, blood from the inferior vena cava and the superior vena cava flows in separate streams to different locations in the heart; this has been reported to occur through the Coandă effect. Function In human physiology, the atria facilitate circulation primarily by allowing uninterrupted venous flow to the heart during ventricular systole. By being partially empty and distensible, the atria prevent the interruption of venous flow to the heart that would occur during ventricular systole if the veins ended at the inlet valves of the heart. In normal physiologic states, the output of the heart is pulsatile, while the venous inflow to the heart is continuous and non-pulsatile. But without functioning atria, venous flow becomes pulsatile, and the overall circulation rate decreases significantly. Atria have four essential characteristics that cause them to promote continuous venous flow. (1) There are no atrial inlet valves to interrupt blood flow during atrial systole. (2) The atrial systole contractions are incomplete and thus do not contract to the extent that would block flow from the veins through the atria into the ventricles.
During atrial systole, blood not only empties from the atria to the ventricles, but blood continues to flow uninterrupted from the veins right through the atria into the ventricles. (3) The atrial contractions must be gentle enough so that the force of contraction does not exert significant back pressure that would impede venous flow. (4) The "let go" of the atria must be timed so that they relax before the start of ventricular contraction, to be able to accept venous flow without interruption. By preventing the inertia of interrupted venous flow that would otherwise occur at each ventricular systole, atria allow approximately 75% more cardiac output than would otherwise occur. The fact that atrial contraction is 15% of the amount of the succeeding ventricular ejection has led to a misplaced emphasis on their role in pumping up the ventricles (the so-called "atrial kick"), whereas the key benefit of atria is in preventing circulatory inertia and allowing uninterrupted venous flow to the heart. Also important in maintaining blood flow is the presence of atrial volume receptors. These are low-pressure baroreceptors in the atria, which send signals to the hypothalamus when a drop in atrial pressure (which indicates a drop in blood volume) is detected. This triggers a release of vasopressin. Disorders Atrial septal defect In an adult, an atrial septal defect results in the flow of blood in the reverse direction – from the left atrium to the right – which reduces cardiac output, potentially causing cardiac failure, and in severe or untreated cases cardiac arrest and sudden death. Left atrial appendage thrombosis In patients with atrial fibrillation, mitral valve disease, and other conditions, blood clots have a tendency to form in the left atrial appendage. The clots may dislodge (forming emboli), which may lead to ischemic damage to the brain, kidneys, or other organs supplied by the systemic circulation.
In those with uncontrollable atrial fibrillation, left atrial appendage occlusion may be performed at the time of any open-heart surgery to prevent future clot formation within the appendage. Functional abnormalities Functional abnormalities of the atria include Wolff-Parkinson-White syndrome, atrial flutter, atrial tachycardia, sinus tachycardia, multifocal atrial tachycardia (of several types) and premature atrial contraction. Other animals Many other animals, including mammals, also have four-chambered hearts, which have a similar function. Some animals (amphibians and reptiles) have a three-chambered heart, in which the blood from each atrium is mixed in the single ventricle before being pumped to the aorta. In these animals, the left atrium still serves the purpose of collecting blood from the pulmonary veins. In most fish, the circulatory system is very simple: a two-chambered heart including one atrium and one ventricle. Among sharks, the heart consists of four parts arranged serially: blood flows into the most posterior part, the sinus venosus, and then to the atrium, which moves it to the third part, the ventricle, before it reaches the conus arteriosus, which itself is connected to the ventral aorta. This is considered a primitive arrangement, and many vertebrates have condensed the atrium with the sinus venosus and the ventricle with the conus arteriosus. With the advent of lungs came a partitioning of the atrium into two parts divided by a septum. Among frogs, the oxygenated and deoxygenated blood is mixed in the ventricle before being pumped out to the body's organs; in turtles, the ventricle is almost entirely divided by a septum, but retains an opening through which some mixing of blood occurs. In birds, mammals, and some other reptiles (alligators in particular) the partitioning of both chambers is complete.
Biology and health sciences
Circulatory system
Biology
24063078
https://en.wikipedia.org/wiki/Pisces%E2%80%93Cetus%20Supercluster%20Complex
Pisces–Cetus Supercluster Complex
The Pisces–Cetus Supercluster Complex is a galaxy filament. It includes the Laniakea Supercluster, which contains the Virgo Supercluster lobe, which in turn contains the Local Group, the galaxy group that includes the Milky Way. This filament is adjacent to the Perseus–Pegasus Filament. Astronomer R. Brent Tully of the University of Hawaii's Institute of Astronomy identified the Complex in 1987. Extent The Pisces–Cetus Supercluster Complex is estimated to be about 1.0 billion light-years (Gly) long and 150 million light-years (Mly) wide. It is one of the largest structures known in the observable universe, but is exceeded by the Sloan Great Wall (1.3 Gly), Clowes–Campusano LQG (2.0 Gly), U1.11 LQG (2.5 Gly), Huge-LQG (4.0 Gly), and Hercules–Corona Borealis Great Wall (10 Gly). The complex comprises sixty clusters and is estimated to have a total mass of 10 . According to the discoverer, the complex is composed of 5 parts: The Pisces–Cetus Supercluster The Perseus–Pegasus chain, including the Perseus–Pisces Supercluster The Pegasus–Pisces chain The Sculptor region, including the Sculptor Supercluster and Hercules Supercluster The Laniakea Supercluster, which contains our Virgo Supercluster (Local Supercluster) as well as the Hydra–Centaurus Supercluster. With its mass of 10 , our Virgo Supercluster accounts for only 0.1 percent of the total mass of the complex. The complex was named after the Pisces–Cetus Superclusters, which are its richest superclusters. Image
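The 0.1 percent mass figure and the ranking of larger structures can be checked with a short calculation. This is a sketch only: the two mass values below (roughly 10^18 solar masses for the complex and 10^15 for the Virgo Supercluster) are commonly quoted estimates assumed here for illustration, since the exponents do not appear in the text above; the lengths are the ones quoted in the article.

```python
# Consistency check for the quoted 0.1 percent mass fraction.
# Assumed masses in solar masses (illustrative estimates, not from the text):
complex_mass = 1e18   # Pisces–Cetus Supercluster Complex (assumed)
virgo_mass = 1e15     # Virgo (Local) Supercluster (assumed)

fraction = virgo_mass / complex_mass
print(f"Virgo share of the complex's mass: {fraction:.1%}")  # → 0.1%

# The length comparison can be checked the same way
# (lengths in billions of light-years, as quoted in the article):
structures = {
    "Pisces–Cetus Supercluster Complex": 1.0,
    "Sloan Great Wall": 1.3,
    "Clowes–Campusano LQG": 2.0,
    "U1.11 LQG": 2.5,
    "Huge-LQG": 4.0,
    "Hercules–Corona Borealis Great Wall": 10.0,
}
larger = [name for name, gly in structures.items()
          if gly > structures["Pisces–Cetus Supercluster Complex"]]
print(f"{len(larger)} known structures are longer")  # → 5 known structures are longer
```

Under these assumed masses, the ratio reproduces the article's 0.1 percent, and five of the listed structures exceed the complex in length.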
Physical sciences
Notable galaxy clusters
Astronomy
853682
https://en.wikipedia.org/wiki/Baltica
Baltica
Baltica is a paleocontinent that formed in the Paleoproterozoic and now constitutes northwestern Eurasia, or Europe north of the Trans-European Suture Zone and west of the Ural Mountains. The thick core of Baltica, the East European Craton, is more than three billion years old and formed part of the Rodinia supercontinent at 1 . Tectonic history Baltica formed at 2.0–1.7 Ga by the collision of three Archaean-Proterozoic continental blocks: Fennoscandia (including the exposed Baltic Shield), Sarmatia (Ukrainian Shield and Voronezh Massif), and Volgo-Uralia (covered by younger deposits). Sarmatia and Volgo-Uralia formed a proto-craton (sometimes called "Proto-Baltica") at c. 2.0 Ga which collided with Fennoscandia c. 1.8–1.7 Ga. The sutures between these three blocks were reactivated during the Mesoproterozoic and Neoproterozoic. 750–600 million years ago, Baltica and Laurentia rotated clockwise together and drifted away from the Equator towards the South Pole where they were affected by the Cryogenian Varanger glaciations. Initial rifting between the two continents is marked by the c. 650 Ma Egersund dike swarm in southern Norway and from 600 Ma they began to rotate up to 180° relative to each other, thus opening the Iapetus Ocean between the two landmasses. Laurentia quickly moved northward into low latitudes but Baltica remained an isolated continent in the temperate mid-latitudes of the Southern Hemisphere, closer to Gondwana, on which endemic trilobites evolved in the Early and Middle Ordovician. During the Ordovician, Baltica moved northward, approaching Laurentia, which again allowed trilobites and brachiopods to cross the Iapetus Ocean. In the Silurian, c. 425 Ma, the final collision between Scotland-Greenland and Norway resulted in the closure of the Iapetus and the Scandian Orogeny. Margins Baltica is a very old continent and its core is a very well-preserved and thick craton. 
Its current margins, however, are the sutures that are the result of mergers with other, much younger continental blocks. These often deformed sutures do not represent the original, Precambrian–early Palaeozoic extent of Baltica; for example, the curved margin north of the Urals running parallel to Novaya Zemlya was probably deformed during the eruption of the Siberian Traps in the Late Permian and Early Triassic. Baltica's western margin is the Caledonide orogen, which stretches northward from the Scandinavian Mountains across the Barents Sea to Svalbard. Its eastern margin is the Timanide orogen, which stretches north to the Novaya Zemlya archipelago. The extent of the Proterozoic continent is defined by the Iapetus Suture to the west; the Trollfjorden-Komagelva Fault Zone in the north; the Variscan-Hercynian suture to the south; the Tornquist Zone to the southwest; and the Ural Mountains to the east. Northern margin At c. 555 Ma, during the Timanian Orogeny, the northern margin became an active margin and Baltica expanded northward with the accretion of a series of continental blocks: the Timan-Pechora Basin, the northernmost Ural Mountains, and the Novaya Zemlya islands. This expansion coincided with the Marinoan or Varanger glaciations, also known as Snowball Earth. Terranes of the North American Cordillera, including Alaska-Chukotka, Alexander, Northern Sierra, and Eastern Klamath, share a rift history with Baltica and most likely were part of Baltica from the Caledonian orogeny until the formation of the Ural Mountains. These terranes can be linked to either northeastern Laurentia, Baltica, or Siberia because of a similar sequence of fossils; detrital zircon from 2–1 Ga-old sources and evidence of Grenvillian magmatism; and magmatism and island arcs from the Late Neoproterozoic and Ordovician-Silurian.
Southern margin From at least 1.8 Ga to at least 0.8 Ga the southwestern margin of Baltica was connected to Amazonia while the southeast margin was connected to the West African Craton. Baltica, Amazonia, and West Africa rotated 75° clockwise relative to Laurentia until Baltica and Amazonia collided with Laurentia in the 1.1–0.9 Ga Grenville-Sveconorwegian-Sunsás orogenies to form the supercontinent Rodinia. When the break-up of Rodinia was complete c. 0.6 Ga, Baltica became an isolated continent, beginning a 200-million-year period in which Baltica was truly a separate continent. Laurentia and Baltica had earlier formed a single continent until 1.265 Ga, which broke up some time before 0.99 Ga. After the subsequent closure of the Mirovoi Ocean, Laurentia, Baltica and Amazonia remained merged until the opening of the Iapetus Ocean in the Neoproterozoic. Western margin The Western Gneiss Region in western Norway is composed of 1650–950 Ma-old gneisses overlain by continental and oceanic allochthons that were transferred from Laurentia to Baltica during the Scandian orogeny. The allochthons were accreted to Baltica during the closure of the Iapetus Ocean c. 430–410 Ma; Baltica's basement and the allochthons were then subducted to UHP depth c. 425–400 Ma; and they were finally exhumed to their present location c. 400–385 Ma. The presence of micro-diamonds on two islands in western Norway, Otrøya and Flemsøya, indicates that this margin of Baltica was buried c. for at least 25 million years around 429 Ma, shortly after the Baltica-Laurentia collision. The Baltica-Laurentia-Avalonia triple junction in the North Sea is the southwest corner of Baltica. The Baltica-Laurentia suture stretching northeast from the triple junction was deformed in the Late Cambrian in the Scandinavian Caledonides as well as in the Scandian Orogeny during the Silurian.
Some Norwegian terranes have faunas distinct from those of either Baltica or Laurentia as a result of being island arcs that originated in the Iapetus Ocean and were later accreted to Baltica. The Baltica craton most likely underlies these terranes and the continent-ocean boundary passes several kilometres off Norway, but, since the North Atlantic opened c. 54 Ma where the Iapetus Ocean closed, it is unlikely the craton also reached into Laurentia. The margin stretches north to Novaya Zemlya where early Palaeozoic Baltica faunas have been found, but the sparsity of data makes it difficult to locate the margin in the Arctic. Ordovician faunas indicate that most of Svalbard, including Bjørnøya, was part of Laurentia, but Franz Josef Land and Kvitøya (an eastern island of the Svalbard archipelago) most likely became part of Baltica in the Timanide Orogeny. The Taymyr Peninsula, in contrast, never was part of Baltica: southern Taymyr was part of Siberia whilst northern Taymyr and the Severnaya Zemlya archipelago were part of the independent Kara Terrane in the early Palaeozoic. Eastern margin The eastern margin, the Uralide orogen, extends from the Arctic Novaya Zemlya archipelago to the Aral Sea. The orogen contains the record of at least two collisions between Baltica and intra-oceanic island arcs before the final collision between Baltica and Kazakhstania-Siberia during the formation of Pangaea. The Silurian-Devonian island arcs were accreted to Baltica along the Main Uralian Fault, east of which are metamorphosed fragments of volcanic arc mixed with small amounts of Precambrian and Paleozoic continental rocks. However, no rocks unambiguously originating from either Kazakhstania or Siberia have been found in the Urals. The basement of the eastern margin is composed of an Archaean craton, metamorphosed rocks at least 1.6 Ga old, which is surrounded by the fold belt of the Timanide orogeny and overlain by Mesoproterozoic sediments. 
The margin became a passive margin facing the Ural Ocean in the Cambrian–Ordovician. The eastern margin stretches south through the Ural Mountains from the northern end of the Novaya Zemlya archipelago. The margin follows the bent shape of Novaya Zemlya, which was caused in the Late Permian by the Siberian Traps. It is clear from Baltic endemic fossils in Novaya Zemlya that the islands have been part of Baltica since the Early Palaeozoic, whereas the Taymyr Peninsula farther east was part of the passive margin of Siberia in the Early Palaeozoic. Northern Taymyr, together with Severnaya Zemlya and parts of the crust of the Arctic Ocean, formed the Kara Terrane. The Ural Mountains formed in the mid and late Palaeozoic when Laurussia collided with Kazakhstania, a series of terranes. The eastern margin, however, originally extended farther east to an active margin bordered by island arcs, but those parts have been compressed, fractured, and distorted, especially in the eastern Urals. The early Palaeozoic eastern margin is better preserved south of the polar region (65 °N), where shallow-water sediments can be found in the western Urals whilst the eastern Urals are characterised by deep-water deposits. The oldest known mid-ocean hydrothermal vent in the south-central part of the Urals clearly delimits the eastern extent. The straightness of the mountain chain is the result of continuous strike-slip movements during the Late Carboniferous to Early Permian (300–290 Ma). Baltic endemic faunas from the Early Ordovician have been found in Kazakhstan near the southern end of the eastern margin, or the triple junction between Baltica, the Mangyshlak Terrane, and the accretionary Altaids. Here the Early Palaeozoic rocks are buried under the Caspian Depression.
Physical sciences
Paleogeography
Earth science
853756
https://en.wikipedia.org/wiki/Trailer%20%28vehicle%29
Trailer (vehicle)
A trailer is an unpowered vehicle towed by a powered vehicle. It is commonly used for the transport of goods and materials. There are two general categories of trailers: the full trailer and the semi-trailer. A full trailer is a type of trailer whose entire weight is supported by its own wheels, with no weight transferred to the towing vehicle. In contrast, a semi-trailer is designed so that a portion of its weight is carried by its own wheels, while the remaining weight is borne by the towing vehicle. Sometimes recreational vehicles, travel trailers, or mobile homes with limited living facilities where people can camp or stay have been referred to as trailers. In earlier days, many such vehicles were towable trailers. Trailers have been used for thousands of years, predating the invention of the automobile. Before the advent of the wheel, early humans employed the concept of trailering by using drag sleds to transport goods, while the two-wheeled war chariot was one of the earliest and simplest forms of semi-trailer. Alexander Winton has been credited with inventing the modern semi-trailer in Cleveland, Ohio. United States In the United States, the term is sometimes used interchangeably with travel trailer and mobile home, varieties of trailers and manufactured housing designed for human habitation. Their origins lay in utility trailers built in a similar fashion to horse-drawn wagons. A trailer park is an area where mobile homes are placed for habitation. In the United States trailers ranging in size from single-axle dollies to 6-axle, , semi-trailers are commonplace. The latter, when towed as part of a tractor-trailer or "18-wheeler", carries a large percentage of the freight that travels over land in North America. Types Some trailers are made for personal (or small business) use with practically any powered vehicle having an appropriate hitch, but some trailers are part of large trucks called semi-trailer trucks for transportation of cargo.
Enclosed toy trailers and motorcycle trailers can be towed by a commonly accessible pickup truck or van, which generally requires no special permit beyond a regular driver's license. Specialized trailers such as open-air motorcycle trailers and bicycle trailers are much smaller and accessible to small automobiles; like some simple utility trailers, they have a drawbar and ride on a single axle. Other trailers, such as utility trailers and travel trailers or campers, come in single- and multiple-axle varieties, to allow for varying sizes of tow vehicles. There also exist highly specialized trailers, such as genset trailers, pusher trailers and other types that are also used to power the towing vehicle. Others are custom-built to hold entire kitchens and other specialized equipment used by carnival vendors. There are also trailers for hauling boats. Trackless train Utility A utility trailer is a general-purpose trailer designed to be towed by a light vehicle and to carry light, compact loads of up to a few metric tonnes. It typically has short metal sides (either rigid or folding) to constrain the load, and may have cage sides, and a rear folding gate or ramps. Utility trailers do not have a roof. Utility trailers have one axle set comprising one, two or three axles. If it does not have sides then it is usually called a flatbed or flat-deck trailer. If it has rails rather than sides, with ramps at the rear, it is usually called an open car transporter, auto-transporter, or a plant trailer, as such trailers are designed to transport vehicles and mobile plant. If it has fully rigid sides and a roof with a rear door, creating a weatherproof compartment, it is usually called a furniture trailer, cargo trailer, box van trailer or box trailer. Fixed plant A fixed plant trailer is a special-purpose trailer built to carry units which are usually immobile, such as large generators and pumps. Bicycle A bicycle trailer is a motorless wheeled frame with a hitch system for transporting cargo by bicycle.
Construction Construction trailers are mobile structures (trailers) used to accommodate temporary offices, dining facilities and storage of building materials during construction projects. Toilets are usually provided separately. The trailers are equipped with radios for communication. Travel Popular campers use lightweight, aerodynamic trailers that can be towed by a small car, such as the BMW Air Camper. They are built to be lower than the tow vehicle, minimizing drag. Others range from two-axle campers that can be pulled by most mid-sized pickups to trailers that are as long as the host country's law allows for drivers without special permits. Larger campers tend to be fully integrated recreational vehicles, which often are used to tow single-axle dolly trailers to allow the users to bring small cars on their travels. Teardrop Semi A semi-trailer is a trailer without a front axle. A large proportion of its weight is supported either by a road tractor or by a detachable front axle assembly known as a dolly. A semi-trailer is normally equipped with legs, called "landing gear", which can be lowered to support it when it is uncoupled. In the United States, a single trailer cannot exceed a length of on interstate highways (unless a special permit is granted), although it is possible to link two smaller trailers together to a maximum length of . Semi-trailers vary considerably in design, ranging from open-topped grain haulers through Tautliners to normal-looking but refrigerated x enclosures ("reefers"). Many semi-trailers are part of semi-trailer trucks. Other types of semi-trailers include dry vans, flatbeds and chassis. Many commercial organizations choose to rent or lease semi-trailer equipment rather than own their own semi-trailers, to free up capital and to keep trailer debt from appearing on their balance sheet.
Full "Full trailer" is a term used in the United States and New Zealand for a freight trailer supported by front and rear axles and pulled by a drawbar. In Europe this is known as an A-frame drawbar trailer, and in Australia it is known as a dog trailer. Commercial freight trailers are produced to length and width specifications defined by the country of operation. In America this is wide and long. In New Zealand, the maximum width is while the maximum length is , giving a 22-pallet capacity. As per AIS 053, a full trailer is a towed vehicle having at least two axles, and equipped with a towing device which can move vertically in relation to the trailer and controls the direction of the front axle(s), but which transmits no significant static load to the towing vehicle. Common types of full trailers are flat deck, hardside/box, curtainside or bathtub tipper style, with axle configurations of up to two axles at the drawbar end and three at the rear of the trailer. This style of trailer is also popular for use with farm tractors. Close-coupled A close-coupled trailer is fitted with a rigid towbar which projects from its front and hooks onto a hook on the tractor. It does not pivot as a drawbar does. Motorcycle A motorcycle trailer may be a trailer designed to haul motorcycles behind an automobile or truck. Such trailers may be open or enclosed, ranging in size from trailers capable of carrying only one motorcycle to those capable of carrying several. They may be designed specifically to carry motorcycles, with ramps and tie-downs, or may be a utility trailer adapted permanently or occasionally to haul one or more motorcycles. Another type of motorcycle trailer is a wheeled frame with a hitch system designed for transporting cargo by motorcycle. Motorcycle trailers are often narrow and styled to match the appearance of the motorcycle they are intended to be towed behind. There are two-wheeled versions and single-wheeled versions.
Single-wheeled trailers, such as the Unigo or Pav 40/41, are designed to allow the bike to have all the normal flexibility of a motorcycle, usually using a universal joint to enable the trailer to lean and turn with the motorcycle. No motorcycle manufacturer recommends that its motorcycles be used to tow a trailer because it results in additional safety hazards for motorcyclists. Livestock There are a number of different styles of trailers used to haul livestock such as cattle, horses, sheep and pigs. The most common is the stock trailer, a trailer that is enclosed on the bottom, but has openings at approximately the eye level of the animals to allow ventilation. The horse trailer is a more elaborate form of stock trailer. Because horses are usually hauled for the purpose of competition or work, where they must be in peak physical condition, horse trailers are designed for the comfort and safety of the animals. They usually have adjustable vents and windows as well as suspension designed to provide a smooth ride and less stress on the animals. In addition, horse trailers have internal partitions that assist the animal in staying upright during travel and protect horses from injuring each other in transit. Larger horse trailers may incorporate additional storage areas for horse tack and may even include elaborate living quarters with sleeping areas, bathroom and cooking facilities, and other comforts. Both stock trailers and horse trailers range in size from small units capable of holding one to three animals, able to be pulled by a pickup truck, SUV or even a quad bike; to large semi-trailers that can haul a significant number of animals. Boat Roll trailer Baggage trailer Baggage trailers are used for the transportation of loose baggage, oversized bags, mail bags, loose cargo carton boxes, etc. between the aircraft and the terminal or sorting facility. 
Dollies for loose baggage are fitted with a brake system which blocks the wheels from moving when the connecting rod is not attached to a tug. Most dollies for loose baggage are completely enclosed except for the sides, which use plastic curtains to protect items from weather. In the US, these dollies are called baggage carts, but in Europe "baggage cart" means a passenger baggage trolley. Hydraulic modular trailer A hydraulic modular trailer (HMT) is a special platform trailer unit which features swing axles, hydraulic suspension, independently steerable axles, and two or more axle rows; two or more units can be joined longitudinally and laterally, and a power pack unit (PPU) is used to steer the axles and adjust the platform height. These trailer units are used to transport oversized loads which are difficult to disassemble and are overweight. These trailers are manufactured using high-tensile steel, which makes it possible to bear the weight of the load with the help of one or more ballast tractors, which push and pull these units via drawbar or gooseneck, together making a heavy hauler unit. Typical loads include oil rig modules, bridge sections, buildings, ship sections, and industrial machinery such as generators and turbines. Only a limited number of manufacturers produce these heavy-duty trailers, because oversized loads make up a very small share of the transportation industry. There are also self-powered versions of the hydraulic modular trailer, called self-propelled modular transporters (SPMTs), which are used where ballast tractors cannot be applied. Bus trailer A bus trailer is a trailer for transporting passengers, hauled by a tractor unit similar to that of a truck. These trailers have become obsolete, owing to the difficulty of communication between the driver and the conductor and to traffic jams. Hitching A trailer hitch, fifth-wheel coupling or other type of tow hitch is needed to draw a trailer with a car, truck or other traction engine. 
Ball and socket A trailer coupler is used to secure the trailer to the towing vehicle. The trailer coupler attaches to the trailer ball. This forms a ball and socket connection to allow for relative movement between the towing vehicle and trailer while towing over uneven road surfaces. The trailer ball is mounted to the rear bumper or to a draw bar, which may be removable. The draw bar is secured to the trailer hitch by inserting it into the hitch receiver and pinning it. The three most common types of couplers are straight couplers, A-frame couplers, and adjustable couplers. Bumper-pull hitches and draw bars can exert tremendous leverage on the tow vehicle, making it harder to recover from a swerving situation. Fifth wheel and gooseneck These are available for loads between . Both hitches are better than a receiver hitch, allowing a more efficient and central attachment of a large trailer to the tow vehicle. They can haul large loads without disrupting the stability of the vehicle. Traditional hitches are connected to the rear of the vehicle at the frame or bumper, while fifth wheel and gooseneck trailers are attached to the truck bed above the rear axle. This coupling location allows the truck to make sharper turns and haul heavier trailers. They can be mounted in the bed of a pickup truck or any type of flatbed. A fifth-wheel coupling is also referred to as a kingpin hitch and is a smaller version of the semi-trailer "fifth wheel". Though a fifth wheel and a gooseneck trailer look much the same, their methods of coupling are different. A fifth wheel uses a large horseshoe-shaped coupling device mounted or more above the bed of the tow vehicle. A gooseneck couples to a standard ball mounted on the bed of the tow vehicle. The operational difference between the two is the range of movement in the hitch. The gooseneck is very maneuverable and can tilt in all directions, while the fifth wheel is intended for level roads and limited side-to-side tilt. 
Gooseneck mounts are often used for agricultural and industrial trailers. Fifth-wheel mounts are often used for recreational trailers. Standard bumper-hitch trailers typically allow a 10% or 15% hitch load, while a fifth wheel and gooseneck can handle 20% or 25% weight transfer. Jacks The basic function of a trailer jack is to lift the trailer to a height that allows the trailer to be hitched or unhitched to and from the towing vehicle. Trailer jacks are also used for leveling the trailer during storage. The most common types of trailer jacks are A-frame jacks, swivel jacks, and drop-leg jacks. Some trailers, such as horse trailers, have a built-in jack at the tongue for this purpose. Electrical components Many older cars took the feeds for the trailer's lights directly from the towing vehicle's rear light circuits. As bulb-check systems were introduced in the 1990s, "by-pass relays" were introduced. These took a small signal from the rear lights to switch a relay which in turn powered the trailer's lights with its own power feed. Many towing electrical installations, including vehicle-specific kits, incorporate some form of bypass relay. In the US, trailer lights usually have a shared light for brake and turn indicators. If such a trailer is to be connected to a car with separate lamps for turn indicator and brake, a trailer light converter is needed, which allows the trailer's lights to be attached to the wiring of the vehicle. Nowadays some vehicles are fitted with CANbus networks, and some of these use the CANbus to connect the tow bar electrics to various safety systems and controls. For vehicles that use the CANbus to activate towing-related safety systems, a wiring kit that can interact appropriately must be used. Without such a towbar wiring kit, the vehicle cannot detect the presence of a trailer and therefore cannot activate safety features such as a trailer stability program, which can electronically control a snaking trailer or caravan. 
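The shared-lamp conversion just described can be sketched as simple signal logic. The function below is a hypothetical illustration (not any specific converter product), assuming boolean feeds for the separate brake and indicator lamps and a flasher input that supplies the turn-signal blink:

```python
def us_combined_lamps(brake: bool, left_turn: bool, right_turn: bool,
                      flasher_on: bool) -> tuple[bool, bool]:
    """Map separate brake/turn feeds to US-style combined lamps.

    Each side's combined lamp shows the brake signal, except when that
    side is indicating a turn, in which case the flasher output takes
    priority. Returns (left_lamp, right_lamp). Illustrative sketch only.
    """
    left = (left_turn and flasher_on) or (not left_turn and brake)
    right = (right_turn and flasher_on) or (not right_turn and brake)
    return left, right
```

With the brake applied and the left indicator active, the left combined lamp follows the flasher while the right stays steadily lit; this is the behavior a converter has to synthesize from the vehicle's separate circuits.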
By-pass systems are cheap, but may not be appropriate on cars with interactive safety features. Brakes Larger trailers are usually fitted with brakes. These can be either electrically operated, air operated, or overrun brakes. Stability Trailer stability can be defined as the tendency of a trailer to dissipate side-to-side motion. The initial motion may be caused by aerodynamic forces, such as from a cross wind or a passing vehicle. One common criterion for stability is the center of mass location with respect to the wheels, which can usually be detected by tongue weight. If the center of mass of the trailer is behind its wheels, therefore having a negative tongue weight, the trailer will likely be unstable. Another parameter which is less commonly a factor is the trailer moment of inertia. Even if the center of mass is forward of the wheels, a trailer with a long load, and thus large moment of inertia, may be unstable. Some vehicles are equipped with a Trailer Stability Program that may be able to compensate for improper loading.
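The center-of-mass criterion above amounts to a static moment balance about the trailer's axle group. The sketch below uses hypothetical numbers (not from the text) to show how the tongue weight, and its sign, follow from the load position:

```python
def tongue_weight(trailer_weight: float, com_ahead_of_axle: float,
                  tongue_to_axle: float) -> float:
    """Static tongue load from a moment balance about the axle group.

    com_ahead_of_axle: distance of the trailer's center of mass ahead
    of the axle (negative if behind). A negative result is the negative
    tongue weight described above, i.e. the likely-unstable case.
    """
    return trailer_weight * com_ahead_of_axle / tongue_to_axle

# Hypothetical 2000 kg trailer with the coupling 4 m ahead of the axle:
well_loaded = tongue_weight(2000, 0.5, 4.0)    # 250 kg, 12.5% on the hitch
tail_heavy = tongue_weight(2000, -0.2, 4.0)    # -100 kg: unstable loading
```

The 12.5% figure falls inside the 10–15% band quoted for bumper hitches; a fifth wheel or gooseneck placed over the rear axle can accept a 20–25% transfer without the same leverage on the tow vehicle.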
Technology
Motorized road transport
854063
https://en.wikipedia.org/wiki/Cantharellus%20cibarius
Cantharellus cibarius
Cantharellus cibarius (Latin: cantharellus, "chanterelle"; cibarius, "culinary") is the golden chanterelle, the type species of the chanterelle genus Cantharellus. It is also known as girolle (or girole). Despite its characteristic features, C. cibarius can be confused with species such as the poisonous Omphalotus illudens. The golden chanterelle is a commonly consumed and choice edible species. Taxonomy At one time, all yellow or golden chanterelles in North America had been classified as Cantharellus cibarius. Using DNA analysis, they have since been shown to be a group of related species known as the Cantharellus cibarius group or species complex, with C. cibarius sensu stricto restricted to Europe. In 1997, C. formosus (the Pacific golden chanterelle) and C. cibarius var. roseocanus were identified, followed by C. cascadensis in 2003 and C. californicus in 2008. In 2018, an Asian species belonging to the C. cibarius complex, C. anzutake, recorded in Japan and Korea, was described and sequenced. Description The mushroom is easy to detect and recognize in nature. The body is wide and tall. The color varies from yellow to dark yellow. Red spots will appear on the cap of the mushroom if it is damaged. Chanterelles have a faint aroma and flavor of apricots. Similar species The species can resemble the dangerously poisonous Omphalotus illudens (eastern jack-o'lantern) and Hygrophoropsis aurantiaca (the false chanterelle). Distribution and habitat The species grows in Europe from Scandinavia to the Mediterranean Basin, mainly in deciduous and coniferous forests, and typically from June to December. Uses A commonly eaten and favored mushroom, the chanterelle is typically harvested from late summer to late fall in its European distribution. Chanterelles are used in many culinary dishes, and can be preserved by either drying or freezing. The use of an oven for drying is not recommended because it can make the mushroom bitter.
Biology and health sciences
Edible fungi
Plants
854127
https://en.wikipedia.org/wiki/Morchella%20esculenta
Morchella esculenta
Morchella esculenta (commonly known as common morel, morel, yellow morel, true morel, morel mushroom, and sponge morel) is a species of fungus in the family Morchellaceae of the Ascomycota. It is one of the most readily recognized of all the edible mushrooms and highly sought after. Each fruit body begins as a tightly compressed, grayish sponge with lighter ridges, and expands to form a large yellowish sponge with large pits and ridges raised on a large white stem. The pitted yellow-brown caps measure broad by tall, and are fused to the stem at its lower margin, forming a continuous hollow. The pits are rounded and irregularly arranged. The hollow stem is typically long by thick, and white to yellow. The fungus fruits under hardwoods and conifers during a short period in the spring, depending on the weather, and is also associated with old orchards, woods and disturbed grounds. Description The cap is pale brownish cream, yellow to tan or pale brown to grayish brown. The edges of the ridges are usually lighter than the pits, and somewhat oval in outline, sometimes bluntly cone-shaped with a rounded top or more elongate. Caps are hollow, attached to the stem at the lower edge, and typically about broad by tall. The flesh is brittle. The stem is white to pallid or pale yellow, hollow, and straight or with a club-shaped or bulbous base. It is finely granular overall, somewhat ridged, generally about long by thick. In age it may have brownish stains near the base. It has a passing resemblance to the common stinkhorn (Phallus impudicus), for which it is sometimes mistaken. Yellow morels are often found near wooded areas. Centipedes sometimes make their home inside these morels; infested morels usually have a hole in the top. Microscopic characteristics The spores range from white to cream to slightly yellow in deposit, although a spore print may be difficult to obtain given the shape of the fruit body. The spores are formed in asci lining the pits—the ridges are sterile. 
They are ellipsoidal, smooth, thin-walled, translucent (hyaline), and measure 17.5–21.9 by 8.8–11.0 μm. The asci are eight-spored, 223–300 by 19–20 μm, cylindrical, and hyaline. The paraphyses are filamentous, cylindrical, 5.8–8.8 μm wide, and hyaline. The hyphae of the stem are interwoven, hyaline, and measure 5.8–9.4 μm wide. The surface hyphae are inflated, spherical to pear-shaped, 22–44 μm wide, covered by a network of interwoven hyphae 11–16.8 μm wide with recurved cylindrical hyphal ends. Development Fruit bodies have successfully been grown in the laboratory. R. Ower was the first to describe the developmental stages of ascomata grown in a controlled chamber. This was followed by in-depth cytological studies by Thomas Volk and Leonard (1989, 1990). To study the morel life cycle they followed the development of ascoma fruiting in association with tuberous begonias (Begonia tuberhybrida), from very small primordia to fully developed fruit bodies. Young fruit bodies begin development in the form of a dense knot of hyphae once suitable conditions of moisture and nutrient availability have been reached. Hyphal knots are underground and cup-shaped for some time, but later emerge from the soil and develop into a stalked fruiting body. Further growth makes the hymenium convex with the asci facing towards the outer side. Because of the unequal growth of the surface of the hymenium, it becomes folded to form many ridges and depressions, resulting in the sponge or honeycomb appearance. Similar species Morchella esculenta is probably the most familiar of the morels. In contrast to M. angusticeps and its relatives, the caps are light-colored throughout development, especially the ridges, which remain paler than the pits. M. crassipes is sometimes confused with M. esculenta. According to Smith (1975), the two are distinct, but young forms of M. crassipes are difficult to separate from M. esculenta. The two are similar in color, but M. 
crassipes is larger, often has thin ridges, and sometimes has a stem base that is enlarged and longitudinally grooved. Stinkhorns (esp. Phallus impudicus) have also been confused with morels, but specimens of the former have a volva at the base of the stem, and are covered with gleba—a slimy, foul-smelling spore mass.
Biology and health sciences
Edible fungi
Plants
854294
https://en.wikipedia.org/wiki/DNA%20repair
DNA repair
DNA repair is a collection of processes by which a cell identifies and corrects damage to the DNA molecules that encode its genome. In human cells, both normal metabolic activities and environmental factors such as radiation can cause DNA damage, resulting in tens of thousands of individual molecular lesions per cell per day. Many of these lesions cause structural damage to the DNA molecule and can alter or eliminate the cell's ability to transcribe the gene that the affected DNA encodes. Other lesions induce potentially harmful mutations in the cell's genome, which affect the survival of its daughter cells after it undergoes mitosis. As a consequence, the DNA repair process is constantly active as it responds to damage in the DNA structure. When normal repair processes fail, and when cellular apoptosis does not occur, irreparable DNA damage may occur. This can eventually lead to malignant tumors, or cancer, as per the two-hit hypothesis. The rate of DNA repair depends on various factors, including the cell type, the age of the cell, and the extracellular environment. A cell that has accumulated a large amount of DNA damage, or can no longer effectively repair its DNA, may enter one of three possible states:
an irreversible state of dormancy, known as senescence
cell suicide, also known as apoptosis or programmed cell death
unregulated cell division, which can lead to the formation of a tumor that is cancerous
The DNA repair ability of a cell is vital to the integrity of its genome and thus to the normal functionality of that organism. Many genes that were initially shown to influence life span have turned out to be involved in DNA damage repair and protection. The 2015 Nobel Prize in Chemistry was awarded to Tomas Lindahl, Paul Modrich, and Aziz Sancar for their work on the molecular mechanisms of DNA repair processes. 
DNA damage DNA damage, due to environmental factors and normal metabolic processes inside the cell, occurs at a rate of 10,000 to 1,000,000 molecular lesions per cell per day. While even the lower figure constitutes only 0.0003125% of the human genome's approximately 3.2 billion bases, unrepaired lesions in critical genes (such as tumor suppressor genes) can impede a cell's ability to carry out its function, appreciably increase the likelihood of tumor formation, and contribute to tumor heterogeneity. The vast majority of DNA damage affects the primary structure of the double helix; that is, the bases themselves are chemically modified. These modifications can in turn disrupt the molecules' regular helical structure by introducing non-native chemical bonds or bulky adducts that do not fit in the standard double helix. Unlike proteins and RNA, DNA usually lacks tertiary structure and therefore damage or disturbance does not occur at that level. DNA is, however, supercoiled and wound around "packaging" proteins called histones (in eukaryotes), and both superstructures are vulnerable to the effects of DNA damage. Sources DNA damage can be subdivided into two main types:
endogenous damage, such as attack by reactive oxygen species produced from normal metabolic byproducts (spontaneous mutation), especially the process of oxidative deamination; this also includes replication errors
exogenous damage caused by external agents, such as ultraviolet (UV) radiation (200–400 nm) from the sun or other artificial light sources; other radiation frequencies, including x-rays and gamma rays; hydrolysis or thermal disruption; certain plant toxins; human-made mutagenic chemicals, especially aromatic compounds that act as DNA intercalating agents; and viruses
The replication of damaged DNA before cell division can lead to the incorporation of wrong bases opposite damaged ones. 
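The percentage quoted above can be checked directly from the figures given (the lower bound of 10,000 lesions against roughly 3.2 billion bases):

```python
genome_bases = 3.2e9          # approximate size of the human genome
lesions_per_day = 10_000      # lower end of the quoted 10,000-1,000,000 range

# daily lesions as a percentage of the genome's bases
fraction_percent = lesions_per_day / genome_bases * 100
# fraction_percent is approximately 0.0003125, matching the figure in the text
```

At the upper end of the range (1,000,000 lesions), the same arithmetic gives about 0.03%, still a tiny fraction of the genome, which is why the concern centers on lesions that happen to fall in critical genes.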
Daughter cells that inherit these wrong bases carry mutations from which the original DNA sequence is unrecoverable (except in the rare case of a back mutation, for example, through gene conversion). Types There are several types of damage to DNA due to endogenous cellular processes:
oxidation of bases [e.g. 8-oxo-7,8-dihydroguanine (8-oxoG)] and generation of DNA strand interruptions from reactive oxygen species
alkylation of bases (usually methylation), such as formation of 7-methylguanosine, 1-methyladenine, and 6-O-methylguanine
hydrolysis of bases, such as deamination, depurination, and depyrimidination
"bulky adduct formation" (e.g., benzo[a]pyrene diol epoxide-dG adduct, aristolactam I-dA adduct)
mismatch of bases, due to errors in DNA replication, in which the wrong DNA base is stitched into place in a newly forming DNA strand, or a DNA base is skipped over or mistakenly inserted
monoadduct damage, caused by a change in a single nitrogenous base of DNA
diadduct damage
Damage caused by exogenous agents comes in many forms. Some examples are:
Absorption of UV light directly by DNA induces photochemical reactions, leading to the formation of pyrimidine dimers, and photoionization, provoking oxidative damage. UV-A light creates mostly free radicals. The damage caused by free radicals is called indirect DNA damage.
Ionizing radiation such as that created by radioactive decay or in cosmic rays causes breaks in DNA strands. Intermediate-level ionizing radiation may induce irreparable DNA damage (leading to replicational and transcriptional errors needed for neoplasia, or possibly triggering viral interactions), leading to premature aging and cancer.
Thermal disruption at elevated temperature increases the rate of depurination (loss of purine bases from the DNA backbone) and single-strand breaks. For example, hydrolytic depurination is seen in the thermophilic bacteria, which grow in hot springs at 40–80 °C. 
The rate of depurination (300 purine residues per genome per generation) is too high in these species to be repaired by normal repair machinery, hence a possibility of an adaptive response cannot be ruled out. Industrial chemicals such as vinyl chloride and hydrogen peroxide, and environmental chemicals such as polycyclic aromatic hydrocarbons found in smoke, soot and tar, create a huge diversity of DNA adducts – ethanoates, oxidized bases, alkylated phosphodiesters and crosslinking of DNA, just to name a few. UV damage, alkylation/methylation, X-ray damage and oxidative damage are examples of induced damage. Spontaneous damage can include the loss of a base, deamination, sugar ring puckering and tautomeric shift. Constitutive (spontaneous) DNA damage caused by endogenous oxidants can be detected as a low level of histone H2AX phosphorylation in untreated cells. Nuclear versus mitochondrial In eukaryotic cells, DNA is found in two cellular locations – inside the nucleus and inside the mitochondria. Nuclear DNA (nDNA) exists as chromatin during non-replicative stages of the cell cycle and is condensed into aggregate structures known as chromosomes during cell division. In either state the DNA is highly compacted and wound up around bead-like proteins called histones. Whenever a cell needs to express the genetic information encoded in its nDNA the required chromosomal region is unraveled, genes located therein are expressed, and then the region is condensed back to its resting conformation. Mitochondrial DNA (mtDNA) is located inside mitochondria organelles, exists in multiple copies, and is also tightly associated with a number of proteins to form a complex known as the nucleoid. Inside mitochondria, reactive oxygen species (ROS), or free radicals, byproducts of the constant production of adenosine triphosphate (ATP) via oxidative phosphorylation, create a highly oxidative environment that is known to damage mtDNA. 
A critical enzyme in counteracting the toxicity of these species is superoxide dismutase, which is present in both the mitochondria and cytoplasm of eukaryotic cells. Senescence and apoptosis Senescence, an irreversible process in which the cell no longer divides, is a protective response to the shortening of the chromosome ends, called telomeres. The telomeres are long regions of repetitive noncoding DNA that cap chromosomes and undergo partial degradation each time a cell undergoes division (see Hayflick limit). In contrast, quiescence is a reversible state of cellular dormancy that is unrelated to genome damage (see cell cycle). Senescence in cells may serve as a functional alternative to apoptosis in cases where the physical presence of a cell is required by the organism for spatial reasons; it acts as a "last resort" mechanism to prevent a cell with damaged DNA from replicating inappropriately in the absence of pro-growth cellular signaling. Unregulated cell division can lead to the formation of a tumor (see cancer), which is potentially lethal to an organism. Therefore, the induction of senescence and apoptosis is considered to be part of a strategy of protection against cancer. Mutation It is important to distinguish between DNA damage and mutation, the two major types of error in DNA. DNA damage and mutation are fundamentally different. Damage results in physical abnormalities in the DNA, such as single- and double-strand breaks, 8-hydroxydeoxyguanosine residues, and polycyclic aromatic hydrocarbon adducts. DNA damage can be recognized by enzymes, and thus can be correctly repaired if redundant information, such as the undamaged sequence in the complementary DNA strand or in a homologous chromosome, is available for copying. If a cell retains DNA damage, transcription of a gene can be prevented, and thus translation into a protein will also be blocked. Replication may also be blocked or the cell may die. 
In contrast to DNA damage, a mutation is a change in the base sequence of the DNA. A mutation cannot be recognized by enzymes once the base change is present in both DNA strands, and thus a mutation cannot be repaired. At the cellular level, mutations can cause alterations in protein function and regulation. Mutations are replicated when the cell replicates. In a population of cells, mutant cells will increase or decrease in frequency according to the effects of the mutation on the ability of the cell to survive and reproduce. Although distinctly different from each other, DNA damage and mutation are related because DNA damage often causes errors of DNA synthesis during replication or repair; these errors are a major source of mutation. Given these properties of DNA damage and mutation, it can be seen that DNA damage is a special problem in non-dividing or slowly-dividing cells, where unrepaired damage will tend to accumulate over time. On the other hand, in rapidly dividing cells, unrepaired DNA damage that does not kill the cell by blocking replication will tend to cause replication errors and thus mutation. The great majority of mutations that are not neutral in their effect are deleterious to a cell's survival. Thus, in a population of cells composing a tissue with replicating cells, mutant cells will tend to be lost. However, infrequent mutations that provide a survival advantage will tend to clonally expand at the expense of neighboring cells in the tissue. This advantage to the cell is disadvantageous to the whole organism because such mutant cells can give rise to cancer. Thus, DNA damage in frequently dividing cells, because it gives rise to mutations, is a prominent cause of cancer. In contrast, DNA damage in infrequently-dividing cells is likely a prominent cause of aging. 
Mechanisms Cells cannot function if DNA damage corrupts the integrity and accessibility of essential information in the genome (but cells remain superficially functional when non-essential genes are missing or damaged). Depending on the type of damage inflicted on the DNA's double helical structure, a variety of repair strategies have evolved to restore lost information. If possible, cells use the unmodified complementary strand of the DNA or the sister chromatid as a template to recover the original information. Without access to a template, cells use an error-prone recovery mechanism known as translesion synthesis as a last resort. Damage to DNA alters the spatial configuration of the helix, and such alterations can be detected by the cell. Once damage is localized, specific DNA repair molecules bind at or near the site of damage, inducing other molecules to bind and form a complex that enables the actual repair to take place. Direct reversal Cells are known to eliminate three types of damage to their DNA by chemically reversing it. These mechanisms do not require a template, since the types of damage they counteract can occur in only one of the four bases. Such direct reversal mechanisms are specific to the type of damage incurred and do not involve breakage of the phosphodiester backbone. The formation of pyrimidine dimers upon irradiation with UV light results in an abnormal covalent bond between adjacent pyrimidine bases. The photoreactivation process directly reverses this damage by the action of the enzyme photolyase, whose activation is obligately dependent on energy absorbed from blue/UV light (300–500 nm wavelength) to promote catalysis. Photolyase, an old enzyme present in bacteria, fungi, and most animals, no longer functions in humans, who instead use nucleotide excision repair to repair damage from UV irradiation. 
Another type of damage, methylation of guanine bases, is directly reversed by the enzyme methyl guanine methyl transferase (MGMT), the bacterial equivalent of which is called ogt. This is an expensive process because each MGMT molecule can be used only once; that is, the reaction is stoichiometric rather than catalytic. A generalized response to methylating agents in bacteria is known as the adaptive response and confers a level of resistance to alkylating agents upon sustained exposure by upregulation of alkylation repair enzymes. The third type of DNA damage reversed by cells is certain methylation of the bases cytosine and adenine. Single-strand damage When only one of the two strands of a double helix has a defect, the other strand can be used as a template to guide the correction of the damaged strand. In order to repair damage to one of the two paired molecules of DNA, there exist a number of excision repair mechanisms that remove the damaged nucleotide and replace it with an undamaged nucleotide complementary to that found in the undamaged DNA strand. Base excision repair (BER): damaged single bases or nucleotides are most commonly repaired by removing the base or the nucleotide involved and then inserting the correct base or nucleotide. In base excision repair, a glycosylase enzyme removes the damaged base from the DNA by cleaving the bond between the base and the deoxyribose. These enzymes remove a single base to create an apurinic or apyrimidinic site (AP site). Enzymes called AP endonucleases nick the damaged DNA backbone at the AP site. DNA polymerase then removes the damaged region using its 5' to 3' exonuclease activity and correctly synthesizes the new strand using the complementary strand as a template. The gap is then sealed by enzyme DNA ligase. Nucleotide excision repair (NER): bulky, helix-distorting damage, such as pyrimidine dimerization caused by UV light is usually repaired by a three-step process. 
First the damage is recognized, then 12–24 nucleotide-long strands of DNA are removed both upstream and downstream of the damage site by endonucleases, and the removed DNA region is then resynthesized. NER is a highly evolutionarily conserved repair mechanism and is used in nearly all eukaryotic and prokaryotic cells. In prokaryotes, NER is mediated by Uvr proteins. In eukaryotes, many more proteins are involved, although the general strategy is the same. Mismatch repair systems are present in essentially all cells to correct errors that are not corrected by proofreading. These systems consist of at least two proteins. One detects the mismatch, and the other recruits an endonuclease that cleaves the newly synthesized DNA strand close to the region of damage. In E. coli, the proteins involved are the Mut class proteins: MutS, MutL, and MutH. In most eukaryotes, the analog for MutS is MSH and the analog for MutL is MLH. MutH is only present in bacteria. This is followed by removal of the damaged region by an exonuclease, resynthesis by DNA polymerase, and nick sealing by DNA ligase. Double-strand breaks Double-strand breaks, in which both strands in the double helix are severed, are particularly hazardous to the cell because they can lead to genome rearrangements. In fact, when a double-strand break is accompanied by a cross-linkage joining the two strands at the same point, neither strand can be used as a template for the repair mechanisms, so that the cell will not be able to complete mitosis when it next divides, and will either die or, in rare cases, undergo a mutation. Three mechanisms exist to repair double-strand breaks (DSBs): non-homologous end joining (NHEJ), microhomology-mediated end joining (MMEJ), and homologous recombination (HR): In NHEJ, DNA Ligase IV, a specialized DNA ligase that forms a complex with the cofactor XRCC4, directly joins the two ends. 
To guide accurate repair, NHEJ relies on short homologous sequences called microhomologies present on the single-stranded tails of the DNA ends to be joined. If these overhangs are compatible, repair is usually accurate. NHEJ can also introduce mutations during repair. Loss of damaged nucleotides at the break site can lead to deletions, and joining of nonmatching termini forms insertions or translocations. NHEJ is especially important before the cell has replicated its DNA, since there is no template available for repair by homologous recombination. There are "backup" NHEJ pathways in higher eukaryotes. Besides its role as a genome caretaker, NHEJ is required for joining hairpin-capped double-strand breaks induced during V(D)J recombination, the process that generates diversity in B-cell and T-cell receptors in the vertebrate immune system. MMEJ starts with short-range end resection by MRE11 nuclease on either side of a double-strand break to reveal microhomology regions. In further steps, Poly (ADP-ribose) polymerase 1 (PARP1) is required and may be an early step in MMEJ. There is pairing of microhomology regions followed by recruitment of flap structure-specific endonuclease 1 (FEN1) to remove overhanging flaps. This is followed by recruitment of XRCC1–LIG3 to the site for ligating the DNA ends, leading to an intact DNA. MMEJ is always accompanied by a deletion, so that MMEJ is a mutagenic pathway for DNA repair. HR requires the presence of an identical or nearly identical sequence to be used as a template for repair of the break. The enzymatic machinery responsible for this repair process is nearly identical to the machinery responsible for chromosomal crossover during meiosis. This pathway allows a damaged chromosome to be repaired using a sister chromatid (available in G2 after DNA replication) or a homologous chromosome as a template. 
DSBs caused by the replication machinery attempting to synthesize across a single-strand break or unrepaired lesion cause collapse of the replication fork and are typically repaired by recombination. In an in vitro system, MMEJ occurred in mammalian cells at 10–20% of the level of HR when both HR and NHEJ mechanisms were also available.

The extremophile Deinococcus radiodurans has a remarkable ability to survive DNA damage from ionizing radiation and other sources. At least two copies of the genome, with random DNA breaks, can form DNA fragments through annealing. Partially overlapping fragments are then used for synthesis of homologous regions through a moving D-loop that can continue extension until complementary partner strands are found. In the final step, there is crossover by means of RecA-dependent homologous recombination.

Topoisomerases introduce both single- and double-strand breaks in the course of changing the DNA's state of supercoiling, which is especially common in regions near an open replication fork. Such breaks are not considered DNA damage because they are a natural intermediate in the topoisomerase biochemical mechanism and are immediately repaired by the enzymes that created them.

Another type of DNA double-strand break originates from heat-sensitive or heat-labile DNA sites. These sites are not initial DSBs, but they convert to DSBs after treatment at elevated temperature. Ionizing radiation can induce a highly complex form of DNA damage known as clustered damage, which consists of different types of DNA lesions at various locations in the DNA helix. Some of these closely located lesions can probably convert to DSBs on exposure to high temperatures, but the exact nature of these lesions and their interactions is not yet known.

Translesion synthesis

Translesion synthesis (TLS) is a DNA damage tolerance process that allows the DNA replication machinery to replicate past DNA lesions such as thymine dimers or AP sites.
It involves switching out regular DNA polymerases for specialized translesion polymerases (e.g. DNA polymerase IV or V, from the Y polymerase family), often with larger active sites that can facilitate the insertion of bases opposite damaged nucleotides. The polymerase switching is thought to be mediated by, among other factors, the post-translational modification of the replication processivity factor PCNA. Translesion synthesis polymerases often have low fidelity (a high propensity to insert wrong bases) on undamaged templates relative to regular polymerases. However, many are extremely efficient at inserting correct bases opposite specific types of damage. For example, Pol η mediates error-free bypass of lesions induced by UV irradiation, whereas Pol ι introduces mutations at these sites. Pol η is known to add the first adenine across the T^T photodimer using Watson–Crick base pairing and the second adenine in its syn conformation using Hoogsteen base pairing. From a cellular perspective, risking the introduction of point mutations during translesion synthesis may be preferable to resorting to more drastic mechanisms of DNA repair, which may cause gross chromosomal aberrations or cell death. In short, the process involves specialized polymerases either bypassing or repairing lesions at locations of stalled DNA replication. For example, human DNA polymerase eta can bypass complex DNA lesions like the guanine-thymine intra-strand crosslink G[8,5-Me]T, although it can cause targeted and semi-targeted mutations. Paromita Raychaudhury and Ashis Basu studied the toxicity and mutagenesis of the same lesion in Escherichia coli by replicating a G[8,5-Me]T-modified plasmid in E. coli with specific DNA polymerase knockouts. Viability was very low in a strain lacking pol II, pol IV, and pol V, the three SOS-inducible DNA polymerases, indicating that translesion synthesis is conducted primarily by these specialized DNA polymerases.
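The polymerase switch described above can be caricatured in a few lines of code. This is a toy model under stated assumptions: the replicative polymerase is treated as error-free, and the 30% TLS error rate is an arbitrary illustrative number, not a measured value.

```python
import random

# Toy sketch of translesion synthesis: a high-fidelity replicative
# polymerase copies the template until it reaches a lesion, where a
# lower-fidelity TLS polymerase is substituted for that one position.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def replicate(template, lesion_positions, tls_error_rate=0.3, seed=0):
    rng = random.Random(seed)
    daughter = []
    for i, base in enumerate(template):
        if i in lesion_positions:
            # TLS polymerase: bypasses the lesion, sometimes mis-inserting
            if rng.random() < tls_error_rate:
                daughter.append(rng.choice("ACGT"))
            else:
                daughter.append(COMPLEMENT[base])
        else:
            # Replicative polymerase: treated as error-free in this sketch
            daughter.append(COMPLEMENT[base])
    return "".join(daughter)

template = "ATGCGTACGT"
daughter = replicate(template, lesion_positions={3, 7})
# Replication completes past the lesions instead of stalling:
print(len(daughter) == len(template))  # True
```

The point of the sketch is the trade-off the text describes: the fork does not stall (the daughter strand is always full length), at the price of occasional point mutations opposite the lesions.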
A bypass platform is provided to these polymerases by proliferating cell nuclear antigen (PCNA). Under normal circumstances, PCNA bound to polymerases replicates the DNA. At a site of lesion, PCNA is ubiquitinated, or modified, by the RAD6/RAD18 proteins to provide a platform for the specialized polymerases to bypass the lesion and resume DNA replication. After translesion synthesis, extension is required. This extension can be carried out by a replicative polymerase if the TLS is error-free, as in the case of Pol η, but if TLS results in a mismatch, a specialized polymerase, Pol ζ, is needed to extend it. Pol ζ is unique in that it can extend terminal mismatches, whereas more processive polymerases cannot. So when a lesion is encountered, the replication fork will stall, PCNA will switch from a processive polymerase to a TLS polymerase such as Pol ι to fix the lesion, then PCNA may switch to Pol ζ to extend the mismatch, and finally PCNA will switch to the processive polymerase to continue replication.

Global response to DNA damage

Cells exposed to ionizing radiation, ultraviolet light or chemicals are prone to acquire multiple sites of bulky DNA lesions and double-strand breaks. Moreover, DNA-damaging agents can damage other biomolecules such as proteins, carbohydrates, lipids, and RNA. The accumulation of damage, specifically double-strand breaks or adducts stalling the replication forks, is among the known stimulation signals for a global response to DNA damage. The global response to damage is an act directed toward the cells' own preservation and triggers multiple pathways of macromolecular repair, lesion bypass, tolerance, or apoptosis. The common features of the global response are induction of multiple genes, cell cycle arrest, and inhibition of cell division.

Initial steps

The packaging of eukaryotic DNA into chromatin presents a barrier to all DNA-based processes that require recruitment of enzymes to their sites of action.
To allow DNA repair, the chromatin must be remodeled. In eukaryotes, ATP-dependent chromatin remodeling complexes and histone-modifying enzymes are the two predominant factors employed to accomplish this remodeling process. Chromatin relaxation occurs rapidly at the site of DNA damage. In one of the earliest steps, the stress-activated protein kinase c-Jun N-terminal kinase (JNK) phosphorylates SIRT6 on serine 10 in response to double-strand breaks or other DNA damage. This post-translational modification facilitates the mobilization of SIRT6 to DNA damage sites, and is required for efficient recruitment of poly(ADP-ribose) polymerase 1 (PARP1) to DNA break sites and for efficient repair of DSBs. The PARP1 protein starts to appear at DNA damage sites in less than a second, with half-maximum accumulation within 1.6 seconds after the damage occurs. PARP1 synthesizes polymeric adenosine diphosphate ribose (poly(ADP-ribose), or PAR) chains on itself. Next, the chromatin remodeler ALC1 quickly attaches to the product of PARP1 action, a poly-ADP-ribose chain, and completes its arrival at the DNA damage within 10 seconds of the occurrence of the damage. About half of the maximum chromatin relaxation, presumably due to the action of ALC1, occurs by 10 seconds. This then allows recruitment of the DNA repair enzyme MRE11, to initiate DNA repair, within 13 seconds.

γH2AX, the phosphorylated form of H2AX, is also involved in the early steps leading to chromatin decondensation after DNA double-strand breaks. The histone variant H2AX constitutes about 10% of the H2A histones in human chromatin. γH2AX (H2AX phosphorylated on serine 139) can be detected as soon as 20 seconds after irradiation of cells (with DNA double-strand break formation), and half-maximum accumulation of γH2AX occurs in one minute. The extent of chromatin with phosphorylated γH2AX is about two million base pairs at the site of a DNA double-strand break.
γH2AX does not, itself, cause chromatin decondensation, but within 30 seconds of irradiation, RNF8 protein can be detected in association with γH2AX. RNF8 mediates extensive chromatin decondensation through its subsequent interaction with CHD4, a component of the nucleosome remodeling and deacetylase complex NuRD.

DDB2 occurs in a heterodimeric complex with DDB1. This complex further associates with the ubiquitin ligase protein CUL4A and with PARP1. This larger complex rapidly associates with UV-induced damage within chromatin, with half-maximum association completed in 40 seconds. The PARP1 protein, attached to both DDB1 and DDB2, then PARylates (creates a poly-ADP-ribose chain on) DDB2, which attracts the DNA remodeling protein ALC1. Action of ALC1 relaxes the chromatin at the site of UV damage to DNA. This relaxation allows other proteins in the nucleotide excision repair pathway to enter the chromatin and repair UV-induced cyclobutane pyrimidine dimer damages.

After rapid chromatin remodeling, cell cycle checkpoints are activated to allow DNA repair to occur before the cell cycle progresses. First, two kinases, ATM and ATR, are activated within 5 or 6 minutes after DNA is damaged. This is followed by phosphorylation of the cell cycle checkpoint protein Chk1, initiating its function, about 10 minutes after DNA is damaged.

DNA damage checkpoints

After DNA damage, cell cycle checkpoints are activated. Checkpoint activation pauses the cell cycle and gives the cell time to repair the damage before continuing to divide. DNA damage checkpoints occur at the G1/S and G2/M boundaries. An intra-S checkpoint also exists. Checkpoint activation is controlled by two master kinases, ATM and ATR. ATM responds to DNA double-strand breaks and disruptions in chromatin structure, whereas ATR primarily responds to stalled replication forks. These kinases phosphorylate downstream targets in a signal transduction cascade, eventually leading to cell cycle arrest.
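The recruitment and signaling times quoted in this section can be collected into a single ordered timeline. The values below simply restate the approximate figures from the text (half-maximum accumulation or first detection after damage); they are not new measurements.

```python
# Early DNA damage response timeline, as quoted in this section.
# (time in seconds, event) pairs, listed from earliest to latest.
EVENTS_SECONDS = [
    (1.6, "PARP1 accumulation at break sites (half-max)"),
    (10,  "ALC1 arrival; chromatin relaxation (half-max)"),
    (13,  "MRE11 recruitment"),
    (20,  "gammaH2AX first detectable"),
    (30,  "RNF8 detectable in association with gammaH2AX"),
    (40,  "DDB2-DDB1 complex at UV damage (half-max)"),
    (60,  "gammaH2AX accumulation (half-max)"),
    (330, "ATM and ATR activation (~5-6 minutes)"),
    (600, "Chk1 phosphorylation (~10 minutes)"),
]

for t, event in EVENTS_SECONDS:
    print(f"{t:>6} s  {event}")
```

Laid out this way, the ordering the text describes is easy to see: chromatin remodelers arrive within seconds, damage markers within tens of seconds, and checkpoint kinases only after several minutes.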
A class of checkpoint mediator proteins including BRCA1, MDC1, and 53BP1 has also been identified. These proteins seem to be required for transmitting the checkpoint activation signal to downstream proteins. The DNA damage checkpoint is a signal transduction pathway that blocks cell cycle progression in G1, G2 and metaphase and slows down the rate of S phase progression when DNA is damaged. It leads to a pause in the cell cycle, allowing the cell time to repair the damage before continuing to divide.

Checkpoint proteins can be separated into four groups: phosphatidylinositol 3-kinase (PI3K)-like protein kinases, the proliferating cell nuclear antigen (PCNA)-like group, two serine/threonine (S/T) kinases, and their adaptors. Central to all DNA damage-induced checkpoint responses is a pair of large protein kinases belonging to the first group, the PI3K-like protein kinases: the ATM (ataxia telangiectasia mutated) and ATR (ataxia telangiectasia and Rad3-related) kinases, whose sequence and functions have been well conserved in evolution. All DNA damage responses require either ATM or ATR because they have the ability to bind to the chromosomes at the site of DNA damage, together with accessory proteins that are platforms on which DNA damage response components and DNA repair complexes can be assembled.

An important downstream target of ATM and ATR is p53, as it is required for inducing apoptosis following DNA damage. The cyclin-dependent kinase inhibitor p21 is induced by both p53-dependent and p53-independent mechanisms and can arrest the cell cycle at the G1/S and G2/M checkpoints by deactivating cyclin/cyclin-dependent kinase complexes.

The prokaryotic SOS response

The SOS response is the change in gene expression in Escherichia coli and other bacteria in response to extensive DNA damage. The prokaryotic SOS system is regulated by two key proteins: LexA and RecA. The LexA homodimer is a transcriptional repressor that binds to operator sequences commonly referred to as SOS boxes.
In Escherichia coli, LexA is known to regulate the transcription of approximately 48 genes, including the lexA and recA genes. The SOS response is known to be widespread in the Bacteria domain, but it is mostly absent in some bacterial phyla, like the Spirochetes. The most common cellular signals activating the SOS response are regions of single-stranded DNA (ssDNA), arising from stalled replication forks or double-strand breaks, which are processed by DNA helicase to separate the two DNA strands. In the initiation step, RecA protein binds to ssDNA in an ATP-hydrolysis-driven reaction, creating RecA–ssDNA filaments. RecA–ssDNA filaments activate the autoprotease activity of LexA, which ultimately leads to cleavage of the LexA dimer and subsequent LexA degradation. The loss of the LexA repressor induces transcription of the SOS genes and allows for further signal induction, inhibition of cell division, and an increase in the levels of proteins responsible for damage processing.

In Escherichia coli, SOS boxes are 20-nucleotide-long sequences near promoters with palindromic structure and a high degree of sequence conservation. In other classes and phyla, the sequence of SOS boxes varies considerably, with different length and composition, but it is always highly conserved and one of the strongest short signals in the genome. The high information content of SOS boxes permits differential binding of LexA to different promoters and allows for timing of the SOS response. The lesion repair genes are induced at the beginning of the SOS response. The error-prone translesion polymerases, for example UmuCD'2 (also called DNA polymerase V), are induced later on as a last resort. Once the DNA damage is repaired or bypassed using polymerases or through recombination, the amount of single-stranded DNA in cells decreases. The resulting loss of RecA filaments reduces cleavage of the LexA homodimer, which then binds to the SOS boxes near promoters and restores normal gene expression.
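The palindromic (inverted-repeat) structure of SOS boxes can be checked mechanically. The sketch below is illustrative only: the 20-nt example is a commonly cited E. coli SOS-box consensus, and the scoring heuristic is an assumption for demonstration, not how LexA binding is actually measured.

```python
# Sketch: scoring how palindromic a candidate SOS box is, by counting
# positions at which the sequence matches its own reverse complement.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def palindrome_score(seq):
    """Fraction of positions at which seq pairs with its reverse complement
    (1.0 for a perfect inverted repeat)."""
    rc = reverse_complement(seq)
    return sum(a == b for a, b in zip(seq, rc)) / len(seq)

consensus = "TACTGTATATATATACAGTA"  # 20 nt; a commonly cited consensus
print(palindrome_score(consensus))  # 1.0
```

A perfect inverted repeat scores 1.0; natural SOS boxes in other phyla, which the text notes vary in length and composition, would score lower under the same heuristic.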
Eukaryotic transcriptional responses to DNA damage

Eukaryotic cells exposed to DNA-damaging agents also activate important defensive pathways by inducing multiple proteins involved in DNA repair, cell cycle checkpoint control, protein trafficking and degradation. Such a genome-wide transcriptional response is very complex and tightly regulated, thus allowing a coordinated global response to damage. Exposure of the yeast Saccharomyces cerevisiae to DNA-damaging agents results in overlapping but distinct transcriptional profiles. Similarities to the environmental shock response indicate that a general global stress response pathway exists at the level of transcriptional activation. In contrast, different human cell types respond to damage differently, indicating the absence of a common global response. The probable explanation for this difference between yeast and human cells may be the heterogeneity of mammalian cells. In an animal, different types of cells are distributed among different organs that have evolved different sensitivities to DNA damage.

In general, the global response to DNA damage involves the expression of multiple genes responsible for postreplication repair, homologous recombination, nucleotide excision repair, DNA damage checkpoint, global transcriptional activation, genes controlling mRNA decay, and many others. A large amount of damage to a cell leaves it with an important decision: undergo apoptosis and die, or survive at the cost of living with a modified genome. An increase in tolerance to damage can lead to an increased rate of survival that will allow a greater accumulation of mutations. Yeast Rev1 and human polymerase η are members of the Y family of translesion DNA polymerases present during the global response to DNA damage and are responsible for enhanced mutagenesis during a global response to DNA damage in eukaryotes.
Aging

Pathological effects of poor DNA repair

Experimental animals with genetic deficiencies in DNA repair often show decreased life span and increased cancer incidence. For example, mice deficient in the dominant NHEJ pathway and in telomere maintenance mechanisms get lymphoma and infections more often, and, as a consequence, have shorter lifespans than wild-type mice. In a similar manner, mice deficient in a key repair and transcription protein that unwinds DNA helices have premature onset of aging-related diseases and consequent shortening of lifespan. However, not every DNA repair deficiency creates exactly the predicted effects; mice deficient in the NER pathway exhibited shortened life span without correspondingly higher rates of mutation.

The maximum life spans of mice, naked mole-rats and humans are respectively ~3, ~30 and ~120 years. Of these, the shortest-lived species, the mouse, expresses DNA repair genes, including core genes in several DNA repair pathways, at a lower level than do humans and naked mole-rats. Furthermore, several DNA repair pathways in humans and naked mole-rats are up-regulated compared to the mouse. These observations suggest that elevated DNA repair facilitates greater longevity.

If the rate of DNA damage exceeds the capacity of the cell to repair it, the accumulation of errors can overwhelm the cell and result in early senescence, apoptosis, or cancer. Inherited diseases associated with faulty DNA repair functioning result in premature aging, increased sensitivity to carcinogens and correspondingly increased cancer risk (see below). On the other hand, organisms with enhanced DNA repair systems, such as Deinococcus radiodurans, the most radiation-resistant known organism, exhibit remarkable resistance to the double-strand break-inducing effects of radioactivity, likely due to enhanced efficiency of DNA repair and especially NHEJ.
Longevity and caloric restriction

A number of individual genes have been identified as influencing variations in life span within a population of organisms. The effects of these genes are strongly dependent on the environment, in particular on the organism's diet. Caloric restriction reproducibly results in extended lifespan in a variety of organisms, likely via nutrient sensing pathways and decreased metabolic rate. The molecular mechanisms by which such restriction results in lengthened lifespan are as yet unclear; however, the behavior of many genes known to be involved in DNA repair is altered under conditions of caloric restriction. Several agents reported to have anti-aging properties have been shown to attenuate the constitutive level of mTOR signaling, evidence of a reduction in metabolic activity, and concurrently to reduce the constitutive level of DNA damage induced by endogenously generated reactive oxygen species.

For example, increasing the gene dosage of the gene SIR-2, which regulates DNA packaging in the nematode worm Caenorhabditis elegans, can significantly extend lifespan. The mammalian homolog of SIR-2 is known to induce downstream DNA repair factors involved in NHEJ, an activity that is especially promoted under conditions of caloric restriction. Caloric restriction has been closely linked to the rate of base excision repair in the nuclear DNA of rodents, although similar effects have not been observed in mitochondrial DNA. The C. elegans gene AGE-1, an upstream effector of DNA repair pathways, confers dramatically extended life span under free-feeding conditions but leads to a decrease in reproductive fitness under conditions of caloric restriction. This observation supports the pleiotropy theory of the biological origins of aging, which suggests that genes conferring a large survival advantage early in life will be selected for even if they carry a corresponding disadvantage late in life.
Medicine and DNA repair modulation

Hereditary DNA repair disorders

Defects in the NER mechanism are responsible for several genetic disorders, including:
Xeroderma pigmentosum: hypersensitivity to sunlight/UV, resulting in increased skin cancer incidence and premature aging
Cockayne syndrome: hypersensitivity to UV and chemical agents
Trichothiodystrophy: sensitive skin, brittle hair and nails
Mental retardation often accompanies the latter two disorders, suggesting increased vulnerability of developmental neurons.

Other DNA repair disorders include:
Werner's syndrome: premature aging and retarded growth
Bloom's syndrome: sunlight hypersensitivity, high incidence of malignancies (especially leukemias)
Ataxia telangiectasia: sensitivity to ionizing radiation and some chemical agents

All of the above diseases are often called "segmental progerias" ("accelerated aging diseases") because those affected appear elderly and experience aging-related diseases at an abnormally young age, while not manifesting all the symptoms of old age. Other diseases associated with reduced DNA repair function include Fanconi anemia, hereditary breast cancer and hereditary colon cancer.

Cancer

Because of inherent limitations in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer. There are at least 34 inherited human DNA repair gene mutations that increase cancer risk. Many of these mutations cause DNA repair to be less effective than normal. In particular, hereditary nonpolyposis colorectal cancer (HNPCC) is strongly associated with specific mutations in the DNA mismatch repair pathway. BRCA1 and BRCA2, two important genes whose mutations confer a hugely increased risk of breast cancer on carriers, are both associated with a large number of DNA repair pathways, especially NHEJ and homologous recombination.
Cancer therapy procedures such as chemotherapy and radiotherapy work by overwhelming the capacity of the cell to repair DNA damage, resulting in cell death. Cells that are most rapidly dividing – most typically cancer cells – are preferentially affected. The side-effect is that other non-cancerous but rapidly dividing cells, such as progenitor cells in the gut, skin, and hematopoietic system, are also affected. Modern cancer treatments attempt to localize the DNA damage to cells and tissues only associated with cancer, either by physical means (concentrating the therapeutic agent in the region of the tumor) or by biochemical means (exploiting a feature unique to cancer cells in the body). In the context of therapies targeting DNA damage response genes, the latter approach has been termed 'synthetic lethality'.

Perhaps the most well-known of these 'synthetic lethality' drugs is the poly(ADP-ribose) polymerase 1 (PARP1) inhibitor olaparib, which was approved by the Food and Drug Administration in 2015 for the treatment of BRCA-defective ovarian cancer in women. Tumor cells with partial loss of the DNA damage response (specifically, homologous recombination repair) are dependent on another mechanism – single-strand break repair – which is a mechanism consisting, in part, of the PARP1 gene product. Olaparib is combined with chemotherapeutics to inhibit single-strand break repair of the DNA damage caused by the co-administered chemotherapy. Tumor cells relying on this residual DNA repair mechanism are unable to repair the damage and hence are not able to survive and proliferate, whereas normal cells can repair the damage with the functioning homologous recombination mechanism. Many other drugs for use against other residual DNA repair mechanisms commonly found in cancer are currently under investigation.
However, synthetic lethality therapeutic approaches have been questioned due to emerging evidence of acquired resistance, achieved through rewiring of DNA damage response pathways and reversion of previously inhibited defects.

DNA repair defects in cancer

It has become apparent over the past several years that the DNA damage response acts as a barrier to the malignant transformation of preneoplastic cells. Previous studies have shown an elevated DNA damage response in cell-culture models with oncogene activation and in preneoplastic colon adenomas. DNA damage response mechanisms trigger cell-cycle arrest and attempt to repair DNA lesions or promote cell death/senescence if repair is not possible. Replication stress is observed in preneoplastic cells due to increased proliferation signals from oncogenic mutations. Replication stress is characterized by increased replication initiation/origin firing; increased transcription and collisions of transcription-replication complexes; nucleotide deficiency; and an increase in reactive oxygen species (ROS).

Replication stress, along with the selection for inactivating mutations in DNA damage response genes in the evolution of the tumor, leads to downregulation and/or loss of some DNA damage response mechanisms, and hence loss of DNA repair and/or senescence/programmed cell death. In experimental mouse models, loss of DNA damage response-mediated cell senescence was observed after using a short hairpin RNA (shRNA) to inhibit the double-strand break response kinase ataxia-telangiectasia mutated (ATM), leading to increased tumor size and invasiveness. Humans born with inherited defects in DNA repair mechanisms (for example, Li-Fraumeni syndrome) have a higher cancer risk. The prevalence of DNA damage response mutations differs across cancer types; for example, 30% of breast invasive carcinomas have mutations in genes involved in homologous recombination.
In cancer, downregulation is observed across all DNA damage response mechanisms (base excision repair (BER), nucleotide excision repair (NER), DNA mismatch repair (MMR), homologous recombination repair (HR), non-homologous end joining (NHEJ) and translesion DNA synthesis (TLS)). As well as mutations to DNA damage repair genes, mutations also arise in the genes responsible for arresting the cell cycle to allow sufficient time for DNA repair to occur, and some genes are involved in both DNA damage repair and cell cycle checkpoint control, for example ATM and checkpoint kinase 2 (CHEK2) – a tumor suppressor that is often absent or downregulated in non-small cell lung cancer.

Epigenetic DNA repair defects in cancer

Classically, cancer has been viewed as a set of diseases driven by progressive genetic abnormalities that include mutations in tumour-suppressor genes and oncogenes, and by chromosomal aberrations. However, it has become apparent that cancer is also driven by epigenetic alterations. Epigenetic alterations are functionally relevant modifications to the genome that do not involve a change in the nucleotide sequence. Examples of such modifications are changes in DNA methylation (hypermethylation and hypomethylation) and histone modification, changes in chromosomal architecture (caused by inappropriate expression of proteins such as HMGA2 or HMGA1), and changes caused by microRNAs. Each of these epigenetic alterations serves to regulate gene expression without altering the underlying DNA sequence. These changes usually remain through cell divisions, last for multiple cell generations, and can be considered to be epimutations (equivalent to mutations). While large numbers of epigenetic alterations are found in cancers, the epigenetic alterations in DNA repair genes, causing reduced expression of DNA repair proteins, appear to be particularly important.
Such alterations are thought to occur early in progression to cancer and to be a likely cause of the genetic instability characteristic of cancers. Reduced expression of DNA repair genes causes deficient DNA repair. When DNA repair is deficient, DNA damages remain in cells at a higher than usual level, and these excess damages cause increased frequencies of mutation or epimutation. Mutation rates increase substantially in cells defective in DNA mismatch repair or in homologous recombinational repair (HRR). Chromosomal rearrangements and aneuploidy also increase in HRR-defective cells. Higher levels of DNA damage not only cause increased mutation, but also cause increased epimutation. During repair of DNA double-strand breaks, or repair of other DNA damages, incompletely cleared sites of repair can cause epigenetic gene silencing.

Deficient expression of DNA repair proteins due to an inherited mutation can cause an increased risk of cancer. Individuals with an inherited impairment in any of 34 DNA repair genes (see article DNA repair-deficiency disorder) have an increased risk of cancer, with some defects causing up to a 100% lifetime chance of cancer (e.g. p53 mutations). However, such germline mutations (which cause highly penetrant cancer syndromes) are the cause of only about 1 percent of cancers.

Frequencies of epimutations in DNA repair genes

Deficiencies in DNA repair enzymes are occasionally caused by a newly arising somatic mutation in a DNA repair gene, but are much more frequently caused by epigenetic alterations that reduce or silence expression of DNA repair genes. For example, when 113 sequential colorectal cancers were examined, only four had a missense mutation in the DNA repair gene MGMT, while the majority had reduced MGMT expression due to methylation of the MGMT promoter region (an epigenetic alteration).
Five different studies found that between 40% and 90% of colorectal cancers have reduced MGMT expression due to methylation of the MGMT promoter region. Similarly, out of 119 cases of mismatch repair-deficient colorectal cancers that lacked DNA repair gene PMS2 expression, PMS2 was deficient in 6 due to mutations in the PMS2 gene, while in 103 cases PMS2 expression was deficient because its pairing partner MLH1 was repressed due to promoter methylation (PMS2 protein is unstable in the absence of MLH1). In the other 10 cases, loss of PMS2 expression was likely due to epigenetic overexpression of the microRNA, miR-155, which down-regulates MLH1. In a further example, epigenetic defects were found in various cancers (e.g. breast, ovarian, colorectal and head and neck). Two or three deficiencies in the expression of ERCC1, XPF or PMS2 occur simultaneously in the majority of 49 colon cancers evaluated by Facista et al. The chart in this section shows some frequent DNA damaging agents, examples of DNA lesions they cause, and the pathways that deal with these DNA damages. At least 169 enzymes are either directly employed in DNA repair or influence DNA repair processes. Of these, 83 are directly employed in repairing the 5 types of DNA damages illustrated in the chart. Some of the more well studied genes central to these repair processes are shown in the chart. The gene designations shown in red, gray or cyan indicate genes frequently epigenetically altered in various types of cancers. Wikipedia articles on each of the genes highlighted by red, gray or cyan describe the epigenetic alteration(s) and the cancer(s) in which these epimutations are found. Review articles, and broad experimental survey articles also document most of these epigenetic DNA repair deficiencies in cancers. Red-highlighted genes are frequently reduced or silenced by epigenetic mechanisms in various cancers. When these genes have low or absent expression, DNA damages can accumulate. 
Replication errors past these damages (see translesion synthesis) can lead to increased mutations and, ultimately, cancer. Epigenetic repression of DNA repair genes in accurate DNA repair pathways appears to be central to carcinogenesis.

The two gray-highlighted genes, RAD51 and BRCA2, are required for homologous recombinational repair. They are sometimes epigenetically over-expressed and sometimes under-expressed in certain cancers. As indicated in the Wikipedia articles on RAD51 and BRCA2, such cancers ordinarily have epigenetic deficiencies in other DNA repair genes. These repair deficiencies would likely cause increased unrepaired DNA damages. The over-expression of RAD51 and BRCA2 seen in these cancers may reflect selective pressures for compensatory RAD51 or BRCA2 over-expression and increased homologous recombinational repair to at least partially deal with such excess DNA damages. In those cases where RAD51 or BRCA2 are under-expressed, this would itself lead to increased unrepaired DNA damages. Replication errors past these damages (see translesion synthesis) could cause increased mutations and cancer, so that under-expression of RAD51 or BRCA2 would be carcinogenic in itself.

Cyan-highlighted genes are in the microhomology-mediated end joining (MMEJ) pathway and are up-regulated in cancer. MMEJ is an additional error-prone, inaccurate repair pathway for double-strand breaks. In MMEJ repair of a double-strand break, a homology of 5–25 complementary base pairs between both paired strands is sufficient to align the strands, but mismatched ends (flaps) are usually present. MMEJ removes the extra nucleotides (flaps) where strands are joined, and then ligates the strands to create an intact DNA double helix. MMEJ almost always involves at least a small deletion, so that it is a mutagenic pathway.
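The flap-trimming, deletion-producing logic of MMEJ described above can be sketched at the sequence level. This is a toy string model (an assumption for illustration, not the actual enzymology): a short repeat shared by both break ends is annealed, the non-matching flap is trimmed, and the join deletes one repeat copy plus the sequence between the two copies.

```python
def mmej_join(left, right, min_hom=5, max_hom=25):
    """Join two break ends on the longest microhomology between the
    3' end of `left` and the 5' region of `right`; (None, None) if no
    5-25 bp homology exists (the cell would need NHEJ or HR instead)."""
    for k in range(min(max_hom, len(left), len(right)), min_hom - 1, -1):
        hom = left[-k:]
        pos = right.find(hom)
        if pos != -1:
            # Anneal on `hom`; the flap right[:pos] and one repeat copy are lost
            return left + right[pos + k:], hom
    return None, None

# 'GCTAGC' occurs twice in this toy locus; the break falls after the first copy
intact = "AAGGCTAGCTTTTCCGGAAGCTAGCAAGG"
left, right = intact[:9], intact[9:]
joined, hom = mmej_join(left, right)
print(hom)                          # GCTAGC
print(len(intact) - len(joined))    # 16 bases deleted: MMEJ is mutagenic
```

The deletion is intrinsic to the mechanism: whenever the two repeat copies flank intervening sequence, annealing on the microhomology necessarily discards that sequence along with one copy of the repeat, which is why the text calls MMEJ a mutagenic pathway.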
FEN1, the flap endonuclease in MMEJ, is epigenetically increased by promoter hypomethylation and is over-expressed in the majority of cancers of the breast, prostate, stomach, pancreas and lung, as well as in neuroblastomas. PARP1 is also over-expressed when its promoter region ETS site is epigenetically hypomethylated, and this contributes to progression to endometrial cancer and BRCA-mutated serous ovarian cancer. Other genes in the MMEJ pathway are also over-expressed in a number of cancers (see MMEJ for summary), and are also shown in cyan. Genome-wide distribution of DNA repair in human somatic cells Differential activity of DNA repair pathways across various regions of the human genome causes mutations to be very unevenly distributed within tumor genomes. In particular, the gene-rich, early-replicating regions of the human genome exhibit lower mutation frequencies than the gene-poor, late-replicating heterochromatin. One mechanism underlying this involves the histone modification H3K36me3, which can recruit mismatch repair proteins, thereby lowering mutation rates in H3K36me3-marked regions. Another important mechanism concerns nucleotide excision repair, which can be recruited by the transcription machinery, lowering somatic mutation rates in active genes and other open chromatin regions. Epigenetic alterations due to DNA repair Damage to DNA is very common and is constantly being repaired. Epigenetic alterations can accompany DNA repair of oxidative damage or double-strand breaks. In human cells, oxidative DNA damage occurs about 10,000 times a day and DNA double-strand breaks occur about 10 to 50 times per cell cycle in somatic replicating cells (see DNA damage (naturally occurring)). The selective advantage of DNA repair is to allow the cell to survive in the face of DNA damage. The selective advantage of epigenetic alterations that occur with DNA repair is not clear. 
Repair of oxidative DNA damage can alter epigenetic markers In the steady state (with endogenous damages occurring and being repaired), there are about 2,400 oxidatively damaged guanines that form 8-oxo-2'-deoxyguanosine (8-OHdG) in the average mammalian cell DNA. 8-OHdG constitutes about 5% of the oxidative damages commonly present in DNA. The oxidized guanines do not occur randomly among all guanines in DNA. There is a sequence preference for the guanine at a methylated CpG site (a cytosine followed by guanine along its 5' → 3' direction and where the cytosine is methylated (5-mCpG)). A 5-mCpG site has the lowest ionization potential for guanine oxidation. Oxidized guanine has mispairing potential and is mutagenic. Oxoguanine glycosylase (OGG1) is the primary enzyme responsible for the excision of the oxidized guanine during DNA repair. OGG1 finds and binds to an 8-OHdG within a few seconds. However, OGG1 does not immediately excise 8-OHdG. In HeLa cells, half-maximum removal of 8-OHdG occurs in 30 minutes, and in irradiated mice, the 8-OHdGs induced in the mouse liver are removed with a half-life of 11 minutes. When OGG1 is present at an oxidized guanine within a methylated CpG site, it recruits TET1 to the 8-OHdG lesion (see Figure). This allows TET1 to demethylate an adjacent methylated cytosine. Demethylation of cytosine is an epigenetic alteration. As an example, when human mammary epithelial cells were treated with H2O2 for six hours, 8-OHdG increased about 3.5-fold in DNA and this caused about 80% demethylation of the 5-methylcytosines in the genome. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene into messenger RNA. In cells treated with H2O2, one particular gene was examined, BACE1. The methylation level of the BACE1 CpG island was reduced (an epigenetic alteration) and this allowed an approximately 6.5-fold increase in expression of BACE1 messenger RNA. 
While six-hour incubation with H2O2 causes considerable demethylation of 5-mCpG sites, shorter times of H2O2 incubation appear to promote other epigenetic alterations. Treatment of cells with H2O2 for 30 minutes causes the mismatch repair protein heterodimer MSH2-MSH6 to recruit DNA methyltransferase 1 (DNMT1) to sites of some kinds of oxidative DNA damage. This could cause increased methylation of cytosines (epigenetic alterations) at these locations. Jiang et al. treated HEK 293 cells with agents causing oxidative DNA damage (potassium bromate (KBrO3) or potassium chromate (K2CrO4)). Base excision repair (BER) of oxidative damage occurred with the DNA repair enzyme polymerase beta localizing to oxidized guanines. Polymerase beta is the main human polymerase in short-patch BER of oxidative DNA damage. Jiang et al. also found that polymerase beta recruited the DNA methyltransferase protein DNMT3b to BER repair sites. They then evaluated the methylation pattern at the single nucleotide level in a small region of DNA including the promoter region and the early transcription region of the BRCA1 gene. Oxidative DNA damage from bromate modulated the DNA methylation pattern (caused epigenetic alterations) at CpG sites within the region of DNA studied. In untreated cells, CpGs located at −189, −134, −29, −19, +16, and +19 of the BRCA1 gene had methylated cytosines (where numbering is from the messenger RNA transcription start site, and negative numbers indicate nucleotides in the upstream promoter region). Bromate treatment-induced oxidation resulted in the loss of cytosine methylation at −189, −134, +16 and +19, while also leading to the formation of new methylation at the CpGs located at −80, −55, −21 and +8 after DNA repair was allowed. Homologous recombinational repair alters epigenetic markers At least four articles report the recruitment of DNA methyltransferase 1 (DNMT1) to sites of DNA double-strand breaks. 
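The reported shift in the BRCA1 methylation pattern can be expressed as simple set differences over the CpG positions given above. This is a bookkeeping sketch only; it assumes the sites not listed as lost (−29 and −19) remained methylated after treatment.

```python
# CpG positions relative to the BRCA1 mRNA transcription start site
# (negative = upstream promoter region), as reported before and after
# bromate-induced oxidative damage followed by base excision repair.
methylated_before = {-189, -134, -29, -19, +16, +19}
methylated_after = {-29, -19, -80, -55, -21, +8}

lost = sorted(methylated_before - methylated_after)    # demethylated sites
gained = sorted(methylated_after - methylated_before)  # newly methylated sites

print("lost:", lost)      # [-189, -134, 16, 19]
print("gained:", gained)  # [-80, -55, -21, 8]
```

The set differences reproduce the pattern described in the text: methylation lost at −189, −134, +16 and +19, and new methylation gained at −80, −55, −21 and +8.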
During homologous recombinational repair (HR) of the double-strand break, the involvement of DNMT1 causes the two repaired strands of DNA to have different levels of methylated cytosines. One strand becomes frequently methylated at about 21 CpG sites downstream of the repaired double-strand break. The other DNA strand loses methylation at about six CpG sites that were previously methylated downstream of the double-strand break, as well as losing methylation at about five CpG sites that were previously methylated upstream of the double-strand break. When the chromosome is replicated, this gives rise to one daughter chromosome that is heavily methylated downstream of the previous break site and one that is unmethylated in the region both upstream and downstream of the previous break site. With respect to the gene that was broken by the double-strand break, half of the progeny cells express that gene at a high level and in the other half of the progeny cells expression of that gene is repressed. When clones of these cells were maintained for three years, the new methylation patterns were maintained over that time period. In mice with a CRISPR-mediated homology-directed recombination insertion in their genome, many CpG sites within the double-strand break-associated insertion gained methylation. Non-homologous end joining can cause some epigenetic marker alterations Non-homologous end joining (NHEJ) repair of a double-strand break can cause a small number of demethylations of pre-existing cytosine DNA methylations downstream of the repaired double-strand break. Further work by Allen et al. showed that NHEJ of a DNA double-strand break in a cell could give rise to some progeny cells having repressed expression of the gene harboring the initial double-strand break and some progeny having high expression of that gene due to epigenetic alterations associated with NHEJ repair. 
The frequency of epigenetic alterations causing repression of a gene after an NHEJ repair of a DNA double-strand break in that gene may be about 0.9%. Evolution The basic processes of DNA repair are highly conserved among both prokaryotes and eukaryotes and even among bacteriophages (viruses which infect bacteria); however, more complex organisms with more complex genomes have correspondingly more complex repair mechanisms. The ability of a large number of protein structural motifs to catalyze relevant chemical reactions has played a significant role in the elaboration of repair mechanisms during evolution. Hypotheses relating to the evolution of DNA repair have been reviewed in detail in the literature. The fossil record indicates that single-cell life began to proliferate on the planet at some point during the Precambrian period, although exactly when recognizably modern life first emerged is unclear. Nucleic acids became the sole and universal means of encoding genetic information, requiring DNA repair mechanisms that in their basic form have been inherited by all extant life forms from their common ancestor. The emergence of Earth's oxygen-rich atmosphere (known as the "oxygen catastrophe") due to photosynthetic organisms, as well as the presence of potentially damaging free radicals in the cell due to oxidative phosphorylation, necessitated the evolution of DNA repair mechanisms that act specifically to counter the types of damage induced by oxidative stress. The mechanism by which this came about, however, is unclear. Rate of evolutionary change On some occasions, DNA damage is not repaired or is repaired by an error-prone mechanism that results in a change from the original sequence. When this occurs, mutations may propagate into the genomes of the cell's progeny. Should such an event occur in a germ line cell that will eventually produce a gamete, the mutation has the potential to be passed on to the organism's offspring. 
The rate of evolution in a particular species (or, in a particular gene) is a function of the rate of mutation. As a consequence, the rate and accuracy of DNA repair mechanisms have an influence over the process of evolutionary change. DNA damage protection and repair does not influence the rate of adaptation by gene regulation and by recombination and selection of alleles. On the other hand, DNA damage repair and protection does influence the rate of accumulation of irreparable, advantageous, code-expanding, inheritable mutations, and slows down the evolutionary mechanism for expansion of the genome of organisms with new functionalities. The tension between evolvability and mutation repair and protection needs further investigation. Technology A genome-editing technology based on clustered regularly interspaced short palindromic repeats and the Cas9 nuclease (CRISPR-Cas9) was developed in 2012. The technology allows anyone with molecular biology training to alter the genes of any species with precision, by inducing DNA damage at a specific point and then exploiting the cell's DNA repair mechanisms to insert new genes. It is cheaper, more efficient, and more precise than other technologies. With the help of CRISPR-Cas9, scientists can edit parts of a genome by removing, adding, or altering sections of a DNA sequence.
Biology and health sciences
Molecular biology
Biology
854453
https://en.wikipedia.org/wiki/Grazing
Grazing
In agriculture, grazing is a method of animal husbandry whereby domestic livestock are allowed outdoors to free range (roam around) and consume wild vegetation, converting the cellulose within grass and other forages, otherwise indigestible by the human gut, into meat, milk, wool and other animal products, often on land that is unsuitable for arable farming. Farmers may employ many different strategies of grazing for optimum production: grazing may be continuous, seasonal, or rotational within a grazing period. Longer rotations are found in ley farming, alternating arable and fodder crops; in rest rotation, deferred rotation, and mob grazing, giving grasses a longer time to recover or leaving land fallow. Patch-burn grazing sets up a rotation of fresh grass after burning, with two years of rest. Conservation grazing uses grazing animals to improve the biodiversity of a site. Grazing has existed since the beginning of agriculture; sheep and goats were domesticated by nomads before the first permanent settlements were constructed around 7000 BC, enabling cattle and pigs to be kept. Livestock grazing contributes to many negative effects on the environment, including deforestation, extinction of native wildlife, pollution of streams and rivers, overgrazing, soil degradation, ecological disturbance, desertification, and reduced ecosystem stability. History Sheep, goats, cattle, and pigs were domesticated early in the history of agriculture. Sheep were domesticated first, soon followed by goats; both species were suitable for nomadic peoples. Cattle and pigs were domesticated somewhat later, around 7000 BC, once people started to live in fixed settlements. In America, livestock were grazed on public land from the Civil War onwards. The Taylor Grazing Act of 1934 was enacted after the Great Depression to regulate the use of public land for grazing purposes. 
Production According to a report by the Food and Agriculture Organization, about 60% of the world's grassland (just less than half of the world's usable surface) is covered by grazing systems. It states that "Grazing systems supply about 9 percent of the world's production of beef and about 30 percent of the world's production of sheep and goat meat. For an estimated 100 million people in arid areas, and probably a similar number in other zones, grazing livestock is the only possible source of livelihood." Management Grazing management has two overall goals: protecting the quality of the pasturage against deterioration by overgrazing (in other words, maintaining the sustainability of the pasturage), and protecting the health of the animals against acute threats such as grass tetany and nitrate poisoning; trace element overdose, such as molybdenum and selenium poisoning; grass sickness and laminitis in horses; and milk sickness in calves. A proper land use and grazing management technique balances maintenance of forage and livestock production with maintenance of biodiversity and ecosystem services. It does this by allowing sufficient recovery periods for regrowth. Producers can keep a low density on a pasture, so as not to overgraze. Controlled burning of the land can help in the regrowth of plants. Although grazing can be problematic for the ecosystem, well-managed grazing techniques can reverse damage and improve the land. On commons in England and Wales, rights of pasture (grassland grazing) and pannage (forest grazing) for each commoner are tightly defined by number and type of animal, and by the time of year when certain rights can be exercised. For example, the occupier of a particular cottage might be allowed to graze fifteen cattle, four horses, ponies or donkeys, and fifty geese, while the numbers allowed for their neighbours would probably be different. 
On some commons (such as the New Forest and adjoining commons), the rights are not limited by numbers, and instead a 'marking fee' is paid each year for each animal 'turned out'. However, if excessive use was made of the common, for example by overgrazing, a common would be 'stinted'; that is, a limit would be put on the number of animals each commoner was allowed to graze. These regulations were responsive to demographic and economic pressure. Thus, rather than let a common become degraded, access was restricted even further. Systems Ranchers and range science researchers have developed grazing systems to improve sustainable forage production for livestock. These can be contrasted with intensive animal farming on feedlots. Continuous With continuous grazing, livestock are allowed access to the same grazing area throughout the year. Seasonal Seasonal grazing incorporates "grazing animals on a particular area for only part of the year". This allows the land that is not being grazed to rest and allows new forage to grow. Rotational Rotational grazing "involves dividing the range into several pastures and then grazing each in sequence throughout the grazing period". Utilizing rotational grazing can improve livestock distribution while incorporating rest periods for new forage. Ley farming In ley farming, pastures are not permanently planted, but alternated between fodder crops and arable crops. Rest rotation Rest rotation grazing "divides the range into at least four pastures. One pasture remains rested throughout the year and grazing is rotated amongst the residual pastures." This grazing system can be especially beneficial when using sensitive grass that requires time for rest and regrowth. Deferred rotation Deferred rotation "involves at least two pastures with one not grazed until after seed-set". By using deferred rotation, grasses can achieve maximum growth during the period when no grazing occurs. 
Patch-burn Patch-burn grazing burns a third of a pasture each year, no matter the size of the pasture. This burned patch attracts grazers (cattle or bison) that graze the area heavily because of the fresh grasses that grow as a result. The other patches receive little to no grazing. During the next two years the next two patches are burned consecutively, then the cycle begins anew. In this way, patches receive two years of rest and recovery from the heavy grazing. This technique results in a diversity of habitats that different prairie plants and birds can utilize, mimicking the effects of the pre-historical relationship between bison and fire, whereby bison heavily graze one area and other areas have opportunity to rest, based on the concept of pyric herbivory. The Tallgrass Prairie Preserve in northeastern Oklahoma has been patch-burn grazed with bison herds for over ten years. These efforts have effectively restored the bison–fire relationship on a large landscape scale. In the grazed heathland of Devon, the periodic burning is known as swailing. Riparian area management Riparian area grazing is intended to improve wildlife and their habitats. It uses fencing to keep livestock off ranges near streams or water areas until after wildlife or waterfowl periods, or to limit the amount of grazing to a short period of time. Conservation grazing Conservation grazing is the use of grazing animals to help improve the biodiversity of a site. Due to their hardy nature, rare and native breeds are often used in conservation grazing. In some cases, to re-establish traditional hay meadows, cattle such as the English Longhorn and Highland are used to provide grazing. Cell grazing A form of rotational grazing using as many small paddocks as fencing allows, said to be more sustainable. Mob grazing Mob grazing is a system, said to be more sustainable, invented in 2002; it uses very large herds on land left fallow longer than usual. 
Environmental considerations Ecology Many ecological effects derive from grazing, which may be positive or negative. Negative effects of grazing may include overgrazing, increased soil erosion, compaction and degradation, deforestation, biodiversity loss, and adverse water quality impacts from run-off. Sometimes grazers can have beneficial environmental effects such as improving the soil with nutrient redistribution and aerating the soil by trampling, and by controlling fire and increasing biodiversity by removing biomass, controlling shrub growth and dispersing seeds. In some habitats, appropriate levels of grazing may be effective in restoring or maintaining native grass and herb diversity in rangeland that has been disturbed by overgrazing, lack of grazing (such as by the removal of wild grazing animals), or by other human disturbance. Conservation grazing is the use of grazers to manage such habitats, often to replicate the ecological effects of the wild relatives of domestic livestock, or those of other species now absent or extinct. Grazer urine and faeces "recycle nitrogen, phosphorus, potassium and other plant nutrients and return them to the soil". Grazing can reduce the accumulation of litter (organic matter) in some seasons and areas, but can also increase it, which may help to combat soil erosion. This acts as nutrition for insects and organisms found within the soil. These organisms "aid in carbon sequestration and water filtration". When grass is grazed, dead grass and litter are reduced which is advantageous for birds such as waterfowl. Grazing can increase biodiversity. Without grazing, many of the same grasses grow, for example brome and bluegrass, consequently producing a monoculture. The ecosystems of North American tallgrass prairies are controlled to a large extent by nitrogen availability, which is itself controlled by interactions between fires and grazing by large herbivores. 
Fires in spring enhance growth of certain grasses, and herbivores preferentially graze these grasses, producing a system of checks and balances, and allowing higher plant biodiversity. In Europe, heathland is a cultural landscape which requires grazing by cattle, sheep or other grazers to be maintained. Conservation An author of the Food and Agriculture Organization (FAO) report Livestock's Long Shadow discussed the environmental impacts of grazing in an interview. Much grazing land has resulted from a process of clearance or drainage of other habitats such as woodland or wetland. According to the opinion of the Center for Biological Diversity, extensive grazing of livestock in the arid lands of the southwestern United States has many negative impacts on the local biodiversity there. In arid climates such as the southwestern United States, livestock grazing has severely degraded riparian areas, the wetland environment adjacent to rivers or streams. The Environmental Protection Agency states that agriculture has a greater impact on stream and river contamination than any other nonpoint source. Improper grazing of riparian areas can contribute to nonpoint source pollution of riparian areas. Riparian zones in arid and semiarid environments have been called biodiversity hotspots. The water, higher biomass, favorable microclimate and periodic flood events together produce higher biological diversity than in the surrounding uplands. In 1990, "according to the Arizona state park department, over 90% of the original riparian zones of Arizona and New Mexico are gone". A 1988 report of the Government Accountability Office estimated that 90% of the 5,300 miles of riparian habitat managed by the Bureau of Land Management in Colorado was in an unsatisfactory condition, as was 80% of Idaho's riparian zones, concluding that "poorly managed livestock grazing is the major cause of degraded riparian habitat on federal rangelands". 
A 2013 FAO report estimated livestock were responsible for 14.5% of anthropogenic greenhouse gas emissions. Grazing is common in New Zealand; in 2004, methane and nitrous oxide from agriculture made up somewhat less than half of New Zealand's greenhouse gas emissions, of which most is attributable to livestock. A 2008 United States Environmental Protection Agency report on emissions found agriculture was responsible for 6% of total United States greenhouse gas emissions in 2006. This included rice production, enteric fermentation in domestic livestock, livestock manure management, and agricultural soil management, but omitted some things that might be attributable to agriculture. Studies comparing the methane emissions from grazing and feedlot cattle concluded that grass-fed cattle produce much more methane than grain-fed cattle. One study in the Journal of Animal Science found four times as much, and stated: "these measurements clearly document higher CH4 production for cattle receiving low-quality, high-fiber diets than for cattle fed high-grain diets". Agrivoltaics Agrivoltaics for grazing would allow for shade for the animals as well as the vegetation so the soil retains a higher moisture level.
Biology and health sciences
Ethology
null
854468
https://en.wikipedia.org/wiki/Simmental%20cattle
Simmental cattle
The Simmental or Swiss Fleckvieh is a Swiss breed of dual-purpose cattle. It is named after the Simmental – the valley of the Simme river – in the Bernese Oberland, in the canton of Bern in Switzerland. The breed is typically reddish in colour with white markings, and is raised for both milk and meat. History European origin Among the oldest and most widely distributed of all breeds of cattle in the world, and recorded since the Middle Ages, the Simmental breed has contributed to the creation of several other famous European breeds, including the Montbéliarde (France), the Pezzata Rossa d'Oropa (Italy), and the Fleckvieh (Germany and Austria). Africa Namibia (1893) and South Africa (1905) were the first countries outside Europe where the breed was successfully established. Here the breed is known as Simmentaler and is mainly used for beef cattle production under suckler cow systems. The Simmentaler breeders' society is, as far as registered animals are concerned, by far the largest among the 17 European and British breeds. The main reasons for its popularity are (i) it can be used with great success in crossbreeding to produce both cows with much milk and heavy weaners/oxen, (ii) its superb weight growth rate in feedlots, whether purebred or crossed, and (iii) a strict visual inspection is compulsory for registration in the Herdbook. Soviet Union In the former Soviet Union, the Simmental was the most important cattle breed. The Russian Simmental (Симментальская корова) accounted for one-quarter of all cattle in the USSR. 
Through extensive crossbreeding, six strains were developed:
Steppe Simmental (Russian cattle × Simmental bulls)
Ukrainian Simmental (Grey steppe cattle × Simmental bulls)
Volga Simmental (Central Russian Kalmyk and Kazakh cattle × Simmental bulls)
Ural Simmental (Siberian and Kazakh cattle × Simmental)
Siberian Simmental (Siberian and Buryat cattle × Simmental)
Far Eastern Simmental (Transbaikal and Yakutian cattle × Simmental)
In 1990, there were 12,849,800 Simmental in the USSR. In 2003, the Simmental count in Russia stood at 2,970,400. Different names The breed is known under the following names:
Fleckvieh Simmental: Argentina
Simmental: Australia, Brazil, Bulgaria, Canada, Colombia, Denmark, France (early 1990s name change from Pie Rouge), Ireland, Mexico, New Zealand, Poland, Sweden, Switzerland (SI-division), United Kingdom, USA, Zambia and Zimbabwe
Fleckvieh: Austria, Germany, Netherlands, Spain, Switzerland (SF-division) and Uruguay
Simmentaler: South Africa and Namibia
Local names based on the breed name, used in the official breed association names, which boil down to "spotted cattle": Bosnia-Herzegovina, Croatia, Czech Republic, Hungary, Romania, Serbia, Slovakia, Slovenia. Most of these countries use Simmental as a translation of their local name.
Pezzata Rossa: Italy
Montbéliarde: a French dairy breed; a member of the European Simmental Federation but not of the World Simmental-Fleckvieh Federation.
Characteristics Traditional The Simmental has historically been used for dairy and beef, and as draught animals. They are particularly renowned for the rapid growth of their young, if given sufficient feed. Simmentals provide more combined weaning gain (growth) and milk yield than any other breed. They also have a lower frequency of dental lesions compared to other breeds. Africa In contrast to countries which allow black and solid brown coloured Simmental in the herdbook, Namibia and South Africa only register Simmentaler with the typical colour, i.e. 
from dark red or brown to yellow spread over the body in any pattern, with at least some white on the forehead and the lower-leg area; solid black or solid red animals are absent from the herdbook because they are not registered. Types No other breed in the world has such a large within-breed-type variation as Simmental-Fleckvieh, which is classifiable into the following types:
Dairy type, like specialised dairy breeds (referring to Swiss Fleckvieh (code SF) with over 55% Red Holstein blood, and the French Montbéliarde breed);
Dual purpose, but with major emphasis on milk;
Truly dual-purpose (all cows are milked and bulls excel in weight gain);
Moderate beef type (suckler, extensive ranching with moderate to small frame size);
Extreme beef type (suckler, comparable to specialised beef breeds such as Charolais, large frame size).
The traditional colouration of the Simmental has been described variously as "red and white spotted" or "gold and white", although there is no specific standard colouration, and the dominant shade varies from a pale yellow-gold all the way to very dark red (the latter being particularly popular in the United States). The face is normally white, and this characteristic is usually passed to crossbred calves. The white face is genetically distinct from the white head of the Hereford.
Biology and health sciences
Cattle
Animals
854790
https://en.wikipedia.org/wiki/Teleoceras
Teleoceras
Teleoceras (Greek: "perfect" (teleos), "horn" (keratos)) is an extinct genus of rhinocerotid. It lived in North America during the Miocene and Pliocene epochs, from the Hemingfordian to the end of the Hemphillian, around 17.5 to 4.9 million years ago. Teleoceras went extinct in North America alongside Aphelops at the end of the Hemphillian, most likely due to rapid climate cooling, increased seasonality and the expansion of C4 grasses, as isotopic evidence suggests that its uptake of C4 plants was far less than that of contemporary horses. The Gray Fossil Site in northeast Tennessee, dated to 4.5-5 million years ago, hosts one of the latest-known populations of Teleoceras, Teleoceras aepysoma. Description Teleoceras had much shorter legs than modern rhinos, and a barrel chest, making its build more like that of a hippopotamus than a modern rhino. Based on this build, Henry Fairfield Osborn suggested in 1898 that it was semi-aquatic and hippo-like in habits. This idea persisted for about a century, but has recently been discounted by isotopic evidence. Some species of Teleoceras have a small nasal horn, but this appears to be absent in other species such as T. aepysoma. Teleoceras has high-crowned (hypsodont) molar teeth, which has historically led to suggestions that the species were grazers. Dental microwear and mesowear analysis alternatively suggest a browsing or mixed feeding (both browsing and grazing) diet. Carbon and oxygen isotope analysis of tooth enamel suggests hippo-like grazing habits, but not an aquatic lifestyle. However, δ18O measurements from Ashfall suggest that the species T. major was semi-aquatic. Sexual dimorphism Teleoceras was sexually dimorphic. Males were larger, with larger tusks (lower incisors), a more massive head and neck, and significantly larger forelimbs. As a result of bimaturism, females matured and stopped growing before males, a pattern often seen in extant polygynous mammals. 
Males may have fought for mating rights; healed wounds on skulls have been observed, and healed broken ribs are not uncommon (although not all have had their sexes determined). This is further supported by the breeding-age female-to-male ratio in the Ashfall Fossil Beds being 4.25:1. There is also a rarity of young adult males preserved at Ashfall, which may be accounted for if they formed bachelor herds away from females and dominant bulls. Discovery Ashfall Fossil Beds Teleoceras major is the most common fossil in the Ashfall Fossil Beds of Nebraska. Over 100 intact T. major skeletons are preserved in ash from the Bruneau-Jarbidge supervolcanic eruption. Of the 20+ taxa present, T. major was buried above the rest, being the last of the animals to succumb (small animals died faster), several weeks or months after the pyroclastic airfall event. Their skeletons show evidence of bone disease, i.e. hypertrophic pulmonary osteodystrophy (HPOD), as a result of lung failure from the fine volcanic ash. Most of the skeletons are adult females and young, the breeding-age female-to-male ratio being 4.25:1. There is also a rarity of young adult males. If the rhinos at Ashfall represent a herd, this may be accounted for if young adult males formed bachelor herds away from females and dominant bulls. The age demographic is very similar to that of modern hippo herds: amongst the skeletons, 54% are immature, 30% are young adults, and 16% are older adults. The greatest concentration of Ashfall fossils is housed in a building called the "Rhino Barn", due to the prevalence of T. major skeletons at the site, most of which were preserved in a nearly complete state. One extraordinary specimen includes the remains of a Teleoceras calf trying to suckle from its mother.
Biology and health sciences
Perissodactyla
Animals
855297
https://en.wikipedia.org/wiki/Pest%20control
Pest control
Pest control is the regulation or management of a species defined as a pest: any animal, plant or fungus that adversely affects human activities or the environment. The human response depends on the importance of the damage done and ranges from tolerance, through deterrence and management, to attempts to completely eradicate the pest. Pest control measures may be performed as part of an integrated pest management strategy. In agriculture, pests are kept at bay by mechanical, cultural, chemical and biological means. Ploughing and cultivation of the soil before sowing mitigate the pest burden, and crop rotation helps to reduce the build-up of a particular pest species. Concern for the environment means limiting the use of pesticides in favour of other methods. This can be achieved by monitoring the crop, only applying pesticides when necessary, and by growing varieties and crops which are resistant to pests. Where possible, biological means are used, encouraging the natural enemies of the pests and introducing suitable predators or parasites. In homes and urban environments, the pests are the rodents, birds, insects and other organisms that share the habitat with humans, and that feed on or spoil possessions. Control of these pests is attempted through exclusion or quarantine, repulsion, physical removal or chemical means. Alternatively, various methods of biological control can be used, including sterilisation programmes. History Pest control is at least as old as agriculture, as there has always been a need to keep crops free from pests. As long ago as 3000 BC in Egypt, cats were used to control pests of grain stores such as rodents. Ferrets were domesticated by 1500 BC in Europe for use as mousers. Mongooses were introduced into homes to control rodents and snakes, probably by the ancient Egyptians. 
The conventional approach was probably the first to be employed, since it is comparatively easy to destroy weeds by burning them or ploughing them under, and to kill larger competing herbivores. Techniques such as crop rotation, companion planting (also known as intercropping or mixed cropping), and the selective breeding of pest-resistant cultivars have a long history. Chemical pesticides were first used around 2500 BC, when the Sumerians used sulphur compounds as insecticides. Modern pest control was stimulated by the spread across the United States of the Colorado potato beetle. After much discussion, arsenical compounds were used to control the beetle and the predicted poisoning of the human population did not occur. This led the way to a widespread acceptance of insecticides across the continent. With the industrialisation and mechanization of agriculture in the 18th and 19th centuries, and the introduction of the insecticides pyrethrum and derris, chemical pest control became widespread. In the 20th century, the discovery of several synthetic insecticides, such as DDT, and herbicides boosted this development. The harmful side effect of pesticides on humans has now resulted in the development of newer approaches, such as the use of biological control to eliminate the ability of pests to reproduce or to modify their behavior to make them less troublesome. Biological control is first recorded around 300 AD in China, when colonies of weaver ants, Oecophylla smaragdina, were intentionally placed in citrus plantations to control beetles and caterpillars. Also around 4000 BC in China, ducks were used in paddy fields to consume pests, as illustrated in ancient cave art. In 1762, an Indian mynah was brought to Mauritius to control locusts, and about the same time, citrus trees in Burma were connected by bamboos to allow ants to pass between them and help control caterpillars. 
In the 1880s, ladybirds were used in citrus plantations in California to control scale insects, and other biological control experiments followed. The introduction of DDT, a cheap and effective compound, put an effective stop to biological control experiments. By the 1960s, problems of resistance to chemicals and damage to the environment began to emerge, and biological control had a renaissance. Chemical pest control is still the predominant type of pest control today, although a renewed interest in traditional and biological pest control developed towards the end of the 20th century and continues to this day. In agriculture Control methods Biological pest control Biological pest control is a method of controlling pests such as insects and mites by using other organisms. It relies on predation, parasitism, herbivory, parasitoidism or other natural mechanisms, but typically also involves an active human management role. Classical biological control involves the introduction of natural enemies of the pest that are bred in the laboratory and released into the environment. An alternative approach is to augment the natural enemies that occur in a particular area by releasing more, either in small, repeated batches, or in a single large-scale release. Ideally, the released organism will breed and survive, and provide long-term control. Biological control can be an important component of an integrated pest management programme. For example, mosquitoes are often controlled by putting Bacillus thuringiensis subsp. israelensis (Bti), a bacterium that infects and kills mosquito larvae, in local water sources. Cultural control Mechanical pest control is the use of hands-on techniques as well as simple equipment and devices that provide a protective barrier between plants and insects. 
This is referred to as tillage and is one of the oldest methods of weed control as well as being useful for pest control; wireworms, the larvae of the common click beetle, are very destructive pests of newly ploughed grassland, and repeated cultivation exposes them to the birds and other predators that feed on them. Crop rotation can help to control pests by depriving them of their host plants. It is a major tactic in the control of corn rootworm, and has reduced early season incidence of Colorado potato beetle by as much as 95%. Trap cropping A trap crop is a crop of a plant that attracts pests, diverting them from nearby crops. Pests aggregated on the trap crop can be more easily controlled using pesticides or other methods. However, trap cropping on its own has often failed to cost-effectively reduce pest densities on large commercial scales without the use of pesticides, possibly due to the pests' ability to disperse back into the main field. Pesticides Pesticides are substances applied to crops to control pests; they include herbicides to kill weeds, fungicides to kill fungi and insecticides to kill insects. They can be applied as sprays by hand, tractors or aircraft, or as seed dressings. To be effective, the correct substance must be applied at the correct time, and the method of application is important to ensure adequate coverage and retention on the crop. The killing of natural enemies of the target pest should be minimized. This is particularly important in countries where there are natural reservoirs of pests and their enemies in the countryside surrounding plantation crops, and these co-exist in a delicate balance. Often in less-developed countries, the crops are well adapted to the local situation and no pesticides are needed. Where progressive farmers are using fertilizers to grow improved crop varieties, these are often more susceptible to pest damage, but the indiscriminate application of pesticides may be detrimental in the longer term. 
The efficacy of chemical pesticides tends to diminish over time. This is because any organism that manages to survive the initial application will pass on its genes to its offspring and a resistant strain will be developed. In this way, some of the most serious pests have developed resistance and are no longer killed by pesticides that used to kill their ancestors. This necessitates higher concentrations of chemical, more frequent applications and a movement to more expensive formulations. Pesticides are intended to kill pests, but many have detrimental effects on non-target species; of particular concern is the damage done to honey-bees, solitary bees and other pollinating insects and in this regard, the time of day when the spray is applied can be important. The widely used neonicotinoids have been banned on flowering crops in some countries because of their effects on bees. Some pesticides may cause cancer and other health problems in humans, as well as being harmful to wildlife. There can be acute effects immediately after exposure or chronic effects after continuous low-level, or occasional exposure. Maximum residue limits for pesticides in foodstuffs and animal feed are set by many nations. Genetics Using crops with inheritable resistance to pests is referred to as host-plant resistance and reduces the need for pesticide use. These crops can harm or even kill pests, repel feeding, prevent colonization, or tolerate the presence of a pest without significantly impacting yield. Resistance can also occur through genetic engineering to have traits with resistance to insects, such as with Bt corn, or papaya resistance to ringspot virus. When farmers are purchasing seed, variety information often includes resistance to selected pests in addition to other traits. 
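The resistance treadmill described above can be illustrated with a simple one-locus selection model. This is a sketch, not material from the article; the survival rates are hypothetical.

```python
# Minimal sketch of selection for pesticide resistance (hypothetical numbers).
# Each spraying, resistant individuals survive at rate w_resistant and
# susceptible individuals at rate w_susceptible; the survivors found the
# next generation, so the resistant fraction p is re-weighted accordingly.

def resistant_fraction(p0, w_resistant, w_susceptible, generations):
    """Resistant fraction after repeated rounds of pesticide selection."""
    p = p0
    for _ in range(generations):
        mean_survival = p * w_resistant + (1 - p) * w_susceptible
        p = p * w_resistant / mean_survival
    return p

# One resistant individual in 10,000; spraying kills 10% of resistants
# but 90% of susceptibles each generation.
p = resistant_fraction(p0=0.0001, w_resistant=0.9, w_susceptible=0.1, generations=5)
print(f"Resistant fraction after 5 generations: {p:.1%}")
```

Even starting from one resistant individual in ten thousand, a ninefold per-generation survival advantage makes the resistant type the majority within a handful of sprayings, which is why higher doses and new formulations become necessary.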
Hunting Pest control can also be achieved by culling the pest animals (generally small- to medium-sized wild or feral mammals or birds that inhabit the ecological niches near farms, pastures or other human settlements), employing human hunters or trappers to physically track down, kill and remove them from the area. The culled animals, known as vermin, may be targeted because they are deemed harmful to agricultural crops, livestock or facilities; because they serve as hosts or vectors that transmit pathogens across species or to humans; or for population control as a means of protecting other vulnerable species and ecosystems. Pest control via hunting, like all forms of harvest, imposes an artificial selective pressure on the organisms targeted. While varmint hunting may select for desired behavioural and demographic changes (e.g. animals avoiding human-populated areas, crops and livestock), it can also result in unpredicted outcomes, such as the targeted animal adapting a faster reproductive cycle. Forestry Forest pests present a significant problem because it is not easy to access the canopy and monitor pest populations. In addition, forestry pests such as bark beetles, kept under control by natural enemies in their native range, may be transported large distances in cut timber to places where they have no natural predators, enabling them to cause extensive economic damage. Pheromone traps have been used to monitor pest populations in the canopy. These release volatile chemicals that attract males. Pheromone traps can detect the arrival of pests or alert foresters to outbreaks. For example, the spruce budworm, a destructive pest of spruce and balsam fir, has been monitored using pheromone traps in Canadian forests for several decades. In some regions, such as New Brunswick, areas of forest are sprayed with pesticide to control the budworm population and prevent the damage caused during outbreaks. 
In homes and cities Many unwelcome animals visit or make their home in residential buildings, industrial sites and urban areas. Some contaminate foodstuffs, damage structural timbers, chew through fabrics or infest stored dry goods. Some inflict great economic loss, others carry diseases or cause fire hazards, and some are just a nuisance. Control of these pests has been attempted by improving sanitation and garbage control, modifying the habitat, and using repellents, growth regulators, traps, baits and pesticides. General methods Physical pest control Physical pest control involves trapping or killing pests such as insects and rodents. Historically, local people or paid rat-catchers caught and killed rodents using dogs and traps. On a domestic scale, sticky flypapers are used to trap flies. In larger buildings, insects may be trapped using such means as pheromones, synthetic volatile chemicals or ultraviolet light to attract the insects; some have a sticky base or an electrically charged grid to kill them. Glueboards are sometimes used for monitoring cockroaches and to catch rodents. Rodents can be killed by suitably baited spring traps and can be caught in cage traps for relocation. Talcum powder or "tracking powder" can be used to establish routes used by rodents inside buildings, and acoustic devices can be used for detecting beetles in structural timbers. Historically, firearms have been one of the primary methods used for pest control. "Garden guns" are smooth-bore shotguns specifically made to fire .22 caliber snake shot or 9mm Flobert, and are commonly used by gardeners and farmers for snakes, rodents, birds, and other pests. Garden guns are short-range weapons that can do little harm past 15 to 20 yards, and they are relatively quiet when fired with snake shot, compared to standard ammunition. 
These guns are especially effective inside barns and sheds, as the snake shot will not shoot holes in the roof or walls, or, more importantly, injure livestock with a ricochet. They are also used for pest control at airports, warehouses, stockyards, etc. The most common shot cartridge is .22 Long Rifle loaded with #12 shot. At a distance of about , which is about the maximum effective range, the pattern is about in diameter from a standard rifle. Special smoothbore shotguns, such as the Marlin Model 25MG, can produce effective patterns out to 15 or 20 yards using .22 WMR shotshells, which hold 1/8 oz. of #12 shot contained in a plastic capsule. Poisoned bait Poisoned bait is a common method for controlling rats, mice, birds, slugs, snails, ants, cockroaches, and other pests. The basic granules, or other formulations, contain a food attractant for the target species and a suitable poison. For ants, a slow-acting toxin is needed so that the workers have time to carry the substance back to the colony, and for flies, a quick-acting substance to prevent further egg-laying and nuisance. Baits for slugs and snails often contain the molluscicide metaldehyde, which is dangerous to children and household pets. An article in Scientific American in 1885 described effective elimination of a cockroach infestation using fresh cucumber peels. Warfarin has traditionally been used to kill rodents, but many populations have developed resistance to this anticoagulant, and difenacoum may be substituted. These are cumulative poisons, requiring bait stations to be topped up regularly. Poisoned meat has been used for centuries to kill animals such as wolves and birds of prey. Poisoned carcasses, however, kill a wide range of carrion feeders, not only the targeted species. Raptors in Israel were nearly wiped out following a period of intense poisoning of rats and other crop pests. 
Fumigation Fumigation is the treatment of a structure to kill pests such as wood-boring beetles by sealing it or surrounding it with an airtight cover such as a tent, and fogging with liquid insecticide for an extended period, typically 24–72 hours. This is costly and inconvenient, as the structure cannot be used during the treatment, but it targets all life stages of pests. An alternative, space treatment, is fogging or misting to disperse a liquid insecticide in the atmosphere within a building without evacuation or airtight sealing, allowing most work within the building to continue, at the cost of reduced penetration. Contact insecticides are generally used to minimize long-lasting residual effects. Sterilization Populations of pest insects can sometimes be dramatically reduced by the release of sterile individuals. This involves the mass rearing of a pest, sterilising it with X-rays or by other means, and releasing it into a wild population. It is particularly useful where a female mates only once and where the insect does not disperse widely. This technique has been successfully used against the New World screw-worm fly, some species of tsetse fly, tropical fruit flies, the pink bollworm and the codling moth, among others. Chemosterilants have also been investigated: laboratory studies using U-5897 (3-chloro-1,2-propanediol) were conducted in the early 1970s for rat control, although these proved unsuccessful. In 2013, New York City tested sterilization traps, demonstrating a 43% reduction in rat populations. The product ContraPest was approved as a chemosterilant for rodents by the U.S. Environmental Protection Agency in August 2016. Insulation Boron, a known pesticide, can be impregnated into the paper fibers of cellulose insulation at certain levels to achieve a mechanical kill factor for self-grooming insects such as ants, cockroaches and termites. 
The addition of insulation into the attic and walls of a structure can provide control of common pests in addition to known insulation benefits such as a robust thermal envelope and acoustic noise-canceling properties. The EPA regulates this type of general-use pesticide within the United States, allowing it to be sold and installed only by licensed pest management professionals as part of an integrated pest management program. Simply adding boron or an EPA-registered pesticide to an insulation does not qualify it as a pesticide. The dosage and method must be carefully controlled and monitored. Methods for specific pests Rodent control Urban rodent control Rodent control is vital in cities. New York City and cities across the state dramatically reduced their rodent populations in the early 1970s. Rio de Janeiro claims a reduction of 80% over only two years shortly thereafter. To better target efforts, London began scientifically surveying populations in 1972, and this was so useful that all local authorities in England and Wales soon followed. Natural rodent control Several wildlife rehabilitation organizations encourage natural forms of rodent control through exclusion and predator support, preventing secondary poisoning altogether. The United States Environmental Protection Agency notes in its Proposed Risk Mitigation Decision for Nine Rodenticides that "without habitat modification to make areas less attractive to commensal rodents, even eradication will not prevent new populations from recolonizing the habitat." The United States Environmental Protection Agency has prescribed guidelines for natural rodent control and for safe trapping in residential areas with subsequent release to the wild. People sometimes attempt to limit rodent damage using repellents. Balsam fir oil from the tree Abies balsamea is an EPA-approved non-toxic rodent repellent. Acacia polyacantha subsp. campylacantha root emits chemical compounds that repel animals including rats. 
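Returning to the sterile-insect technique described under "Sterilization": its population-level effect can be sketched with Knipling's classic overflooding model. The population sizes and growth rate below are hypothetical.

```python
# Sketch of Knipling's sterile-insect-technique model (hypothetical numbers).
# With S sterile males released per generation among N wild males, only a
# fraction N / (N + S) of matings are fertile, so the wild population can
# shrink even though it would otherwise grow.

def sit_population(n0, growth_rate, sterile_released, generations):
    """Wild population size per generation under constant sterile releases."""
    history = [float(n0)]
    for _ in range(generations):
        n = history[-1]
        fertile_fraction = n / (n + sterile_released)
        history.append(n * growth_rate * fertile_fraction)
    return history

# A pest that would otherwise grow 5-fold per generation collapses under a
# 9:1 sterile-to-wild initial overflooding ratio.
h = sit_population(n0=1_000_000, growth_rate=5, sterile_released=9_000_000, generations=4)
print([round(n) for n in h])  # population crashes toward zero
```

Because the released number stays constant while the wild population shrinks, the sterile-to-wild ratio rises each generation and the decline accelerates, which is why the technique works best on isolated, single-mating populations.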
Pantry pests Insect pests including the Mediterranean flour moth, the Indian mealmoth, the cigarette beetle, the drugstore beetle, the confused flour beetle, the red flour beetle, the merchant grain beetle, the sawtoothed grain beetle, the wheat weevil, the maize weevil and the rice weevil infest stored dry foods such as flour, cereals and pasta. In the home, foodstuffs found to be infested are usually discarded, and storing such products in sealed containers should prevent the problem from recurring. The eggs of these insects are likely to go unnoticed; the larvae are the destructive life stage and the adults the most noticeable stage. Since pesticides are not safe to use near food, alternative treatments such as freezing for four days at or baking for half an hour at should kill any insects present. Clothes moths The larvae of clothes moths (mainly Tineola bisselliella and Tinea pellionella) feed on fabrics and carpets, particularly those that are stored or soiled. The adult females lay batches of eggs on natural fibres, including wool, silk, and fur, as well as cotton and linen in blends. The developing larvae spin protective webbing and chew into the fabric, creating holes and specks of excrement. Damage is often concentrated in concealed locations: under collars and near seams of clothing, in folds and crevices in upholstery, and round the edges of carpets as well as under furniture. Methods of control include using airtight containers for storage, periodic laundering of garments, trapping, freezing, heating and the use of chemicals; mothballs contain volatile insect repellents such as 1,4-dichlorobenzene which deter adults, but to kill the larvae, permethrin, pyrethroids or other insecticides may need to be used. Carpet beetles Carpet beetles are members of the family Dermestidae, and while the adult beetles feed on nectar and pollen, the larvae are destructive pests in homes, warehouses, and museums. 
They feed on animal products including wool, silk, leather, fur, the bristles of hair brushes, pet hair, feathers, and museum specimens. They tend to infest hidden locations and may feed on larger areas of fabrics than do clothes moths, leaving behind specks of excrement and brown, hollow, bristly-looking cast skins. Management of infestations is difficult and is based on exclusion and sanitation where possible, resorting to pesticides when necessary. The beetles can fly in from outdoors and the larvae can survive on lint fragments, dust, and inside the bags of vacuum cleaners. In warehouses and museums, sticky traps baited with suitable pheromones can be used to identify problems, and heating, freezing, spraying the surface with insecticide, and fumigation will kill the insects when suitably applied. Susceptible items can be protected from attack by keeping them in clean airtight containers. Bookworms Books are sometimes attacked by cockroaches, silverfish, book mites, booklice, and various beetles which feed on the covers, paper, bindings and glue. They leave behind physical damage in the form of tiny holes as well as staining from their faeces. Book pests include the larder beetle, and the larvae of the black carpet beetle and the drugstore beetle which attack leather-bound books, while the common clothes moth and the brown house moth attack cloth bindings. These attacks are largely a problem with historic books, because modern bookbinding materials are less susceptible to this type of damage. Evidence of attack may be found in the form of tiny piles of book-dust and specks of frass. Damage may be concentrated in the spine, the projecting edges of pages and the cover. Prevention of attack relies on keeping books in cool, clean, dry positions with low humidity, and occasional inspections should be made. Treatment can be by freezing for lengthy periods, but some insect eggs are very resistant and can survive for long periods at low temperatures. 
Beetles Various beetles in the superfamily Bostrichoidea attack the dry, seasoned wood used as structural timber in houses and to make furniture. In most cases, it is the larvae that do the damage; these are invisible from the outside of the timber but chew away at the wood in the interior of the item. Examples are the powderpost beetles, which attack the sapwood of hardwoods, and the furniture beetles, which attack softwoods, including plywood. The damage has already been done by the time the adult beetles bore their way out, leaving neat round holes behind them. The first a householder often knows about the damage is when a chair leg breaks off or a piece of structural timber caves in. Prevention is possible through chemical treatment of the timber prior to its use in construction or in furniture manufacturing. Termites Termites with colonies in close proximity to houses can extend their galleries underground and make mud tubes to enter homes. The insects keep out of sight and chew their way through structural and decorative timbers, leaving the surface layers intact, as well as through cardboard, plastic and insulation materials. Their presence may become apparent when winged insects appear and swarm in the home in spring. Regular inspection of structures by a trained professional may help detect termite activity before the damage becomes substantial. Inspection and monitoring of termites is important because termite alates (winged reproductives) may not always swarm inside a structure. Control and extermination is a professional job involving trying to exclude the insects from the building and trying to kill those already present. Soil-applied liquid termiticides provide a chemical barrier that prevents termites from entering buildings, and lethal baits can be used; these are eaten by foraging insects, and carried back to the nest and shared with other members of the colony, which goes into slow decline. 
Mosquitoes Mosquitoes are midge-like flies in the family Culicidae. Females of most species feed on blood and some act as vectors for malaria and other diseases. Historically they have been controlled by use of DDT and other chemical means, but since the adverse environmental effects of these insecticides have been realized, other means of control have been attempted. The insects rely on water in which to breed and the first line of control is to reduce possible breeding locations by draining marshes and reducing accumulations of standing water. Other approaches include biological control of larvae by the use of fish or other predators, genetic control, the introduction of pathogens, growth-regulating hormones, the release of pheromones and mosquito trapping. On airfields Birds are a significant hazard to aircraft, but it is difficult to keep them away from airfields. Several methods have been explored. Stunning birds by feeding them a bait containing stupefying substances has been tried, and it may be possible to reduce their numbers on airfields by reducing the number of earthworms and other invertebrates by soil treatment. Leaving the grass long on airfields rather than mowing it is also a deterrent to birds. Sonic nets are being trialled; these produce sounds that birds find distracting and seem effective at keeping birds away from affected areas.
Technology
Pest and disease control
null
855383
https://en.wikipedia.org/wiki/Dik-dik
Dik-dik
A dik-dik is the name for any of four species of small antelope in the genus Madoqua, which live in the bushlands of eastern and southern Africa. Dik-diks stand about at the shoulder, are long, weigh and can live for up to 10 years. Dik-diks are named for the alarm calls of the females. In addition to the females' alarm call, both the male and female make a shrill, whistling sound. These calls may alert other animals to predators. Name The name dik-dik comes from an onomatopoeia of the repetitive dik sound female dik-diks whistle through their long, tubular snouts when they feel threatened. Physical characteristics Female dik-diks are somewhat larger than males. The males have horns, which are small (about ), slanted backwards and longitudinally grooved. The hair on the crown forms an upright tuft that sometimes partially conceals the short, ribbed horns of the male. The upper body is gray-brown, while the lower parts of the body, including the legs, belly, crest, and flanks, are tan. A bare black spot below the inside corner of each eye contains a preorbital gland that produces a dark, sticky secretion. Dik-diks insert grass stems and twigs into the gland to scent-mark their territories. Perhaps to prevent overheating, dik-diks (especially Guenther's dik-diks) have elongated snouts with bellows-like muscles through which blood is pumped. Airflow and subsequent evaporation cools this blood before it is recirculated to the body. However, this panting is only implemented in extreme conditions; dik-diks can tolerate air temperatures of up to . Adaptations for desert environments Dik-diks have special physiological adaptations to help them survive in arid environments. For instance, dik-diks have a lower density of sweat glands compared to other animals such as cattle. Similarly, in more arid environments, dik-diks can concentrate their urine. These adaptations help dik-diks preserve body water. 
Because of their small body size, dik-diks are predicted to have among the highest metabolic rates and highest energy requirement per kilogram of all ruminants. However, dik-diks have a lower metabolic rate than would be predicted for their size as a physiological adaptation to heat and aridity. Habitat Dik-diks live in shrublands and savannas of eastern Africa. Dik-diks seek habitats with a plentiful supply of edible plants such as shrubs. Dik-diks may live in places as varied as dense forest or open plain, but they require good cover and not too much tall grass. They usually live in pairs in territories of about . The territories are often in low, shrubby bushes (sometimes along dry, rocky streambeds) with plenty of cover. Dik-diks, with their dusty colored coat, are able to blend in with their surroundings. Dik-diks have an established series of runways through and around the borders of their territories that are used when they feel threatened. Diet Dik-diks are herbivores. Their diet mainly consists of foliage, shoots, fruit and berries, but little or no grass. They receive sufficient amounts of water from their food, which makes drinking unnecessary. Like all even-toed ungulates, they digest their food with the aid of micro-organisms in their four-chambered stomachs. After initial digestion, the food is repeatedly eructated and rechewed, a process known also as rumination, or 'chewing the cud'. Dik-diks' tapering heads may help them eat the leaves between the spines on acacia trees, and feed while still keeping their head high to detect predators. Reproduction Dik-diks are monogamous, and conflicts between territorial neighbors are rare. When they occur, the males from each territory dash at each other, either stop short or make head-to-head contact, then back off for another round, with head crests erected. Males mark their territories with dung piles, and cover the females' dung with their own. 
One suggestion for monogamy in dik-diks is that it may be an evolutionary response to predation; surrounded by predators, it is dangerous to explore, looking for new partners. Pairs spend about 64% of their time together. Males, but not females, will attempt to initiate extra-pair mating if an opportunity arises. Females are sexually mature at six months and males at 12 months. The female gestates for 169 to 174 days and bears a single calf. This happens up to twice a year (at the start and finish of the rainy season). Unlike other ruminants which are born forefeet first, the dik-dik is born nose first, with its forelegs laid back alongside its body. Females weigh about at birth, while males weigh . The mother lactates for six weeks, feeding her calf for no longer than a few minutes at a time. The survival rate for young dik-diks is 50%. The young stay concealed for a time after birth, but grow quickly and reach full size by seven months. At that age, the young are forced to leave their parents' territory. The fathers run the sons off the territory and the mothers run off the daughters. Predators Dik-diks are hunted by leopards, caracals, lions, hyenas, wild dogs and humans. Other predators include monitor lizards, cheetahs, jackals, baboons, eagles, hawks and pythons. Dik-diks' adaptations to predation include excellent eyesight, the ability to reach speeds up to , and high birth rates. Species The four species of dik-dik are: Madoqua guntheri Thomas, 1894 – Günther's dik-dik M. kirkii (Günther, 1880) – Kirk's dik-dik M. piacentinii Drake-Brockman, 1911 – Silver dik-dik M. saltiana (de Blainville, 1816) – Salt's dik-dik
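The prediction above that small-bodied dik-diks should have among the highest per-kilogram energy requirements of ruminants follows from allometric (Kleiber) scaling, sketched here; the body masses are illustrative, not taken from the article.

```python
# Sketch of Kleiber allometric scaling (illustrative masses): whole-body
# metabolic rate scales roughly with mass**0.75, so the per-kilogram rate
# scales with mass**-0.25 and rises as body size falls.

def relative_per_kg_rate(mass_kg, reference_mass_kg):
    """Mass-specific metabolic rate relative to a reference-sized animal."""
    return (mass_kg / reference_mass_kg) ** -0.25

# A ~5 kg dik-dik versus a ~500 kg ruminant such as a large bovid:
ratio = relative_per_kg_rate(5, 500)
print(f"per-kg metabolic rate ≈ {ratio:.1f}x that of the larger animal")  # ≈ 3.2x
```

A hundredfold difference in body mass thus implies roughly a threefold higher energy requirement per kilogram, which is the expectation the article notes dik-diks undercut as an adaptation to heat and aridity.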
Biology and health sciences
Bovidae
Animals
855555
https://en.wikipedia.org/wiki/Hydrofluorocarbon
Hydrofluorocarbon
Hydrofluorocarbons (HFCs) are synthetic organic compounds that contain fluorine and hydrogen atoms, and are the most common type of organofluorine compounds. Most are gases at room temperature and pressure. They are frequently used in air conditioning and as refrigerants; R-134a (1,1,1,2-tetrafluoroethane) is one of the most commonly used HFC refrigerants. To aid the recovery of the stratospheric ozone layer, HFCs were adopted to replace the more potent chlorofluorocarbons (CFCs), which were phased out from use by the Montreal Protocol, and hydrochlorofluorocarbons (HCFCs), which are presently being phased out. HFCs replaced older chlorofluorocarbons such as R-12 and hydrochlorofluorocarbons such as R-21. HFCs are also used in insulating foams and aerosol propellants, as solvents, and for fire protection. They do not harm the ozone layer as much as the compounds they replace, but they still contribute to global warming, with some, such as trifluoromethane, having 11,700 times the warming potential of carbon dioxide. Their atmospheric concentrations and contribution to anthropogenic greenhouse gas emissions are rapidly increasing: consumption rose from near zero in 1990 to 1.2 billion tons of carbon dioxide equivalent in 2010, causing international concern about their radiative forcing. Chemistry Fluorocarbons with few C–F bonds behave similarly to the parent hydrocarbons, but their reactivity can be altered significantly. For example, both uracil and 5-fluorouracil are colourless, high-melting crystalline solids, but the latter is a potent anti-cancer drug. The use of the C–F bond in pharmaceuticals is predicated on this altered reactivity. Several drugs and agrochemicals contain only one fluorine center or one trifluoromethyl group. Environmental regulation Unlike other greenhouse gases in the Paris Agreement, hydrofluorocarbons are included in other international negotiations.
In September 2016, the New York Declaration on Forests urged a global reduction in the use of HFCs. On 15 October 2016, due to these chemicals' contribution to climate change, negotiators from 197 nations meeting at a summit of the United Nations Environment Programme in Kigali, Rwanda reached a legally-binding accord (the Kigali Amendment) to phase down hydrofluorocarbons (HFCs) in an amendment to the Montreal Protocol. As of February 2020, 16 U.S. states ban or are phasing down HFCs. COVID-19 relief legislation, which included a measure that would require chemical manufacturers to phase down the production and use of HFCs, was passed by the United States House of Representatives and United States Senate on December 21, 2020. The U.S. Environmental Protection Agency signed a final rule phasing down HFCs on 23 September 2021.
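The carbon dioxide equivalent figures quoted for HFCs come from a simple weighting: the mass of gas emitted multiplied by its global warming potential (GWP). A minimal sketch of that arithmetic, using the GWP of 11,700 for trifluoromethane cited above (the function name is illustrative, not from any standard library):

```python
def co2_equivalent_tonnes(mass_tonnes: float, gwp: float) -> float:
    """CO2-equivalent mass = mass of gas emitted multiplied by its GWP."""
    return mass_tonnes * gwp

# One tonne of trifluoromethane (GWP 11,700, as cited above) has the
# warming impact of 11,700 tonnes of CO2.
print(co2_equivalent_tonnes(1.0, 11_700))  # 11700.0
```

The same weighting, summed over all gases, is how an inventory like "1.2 billion tons of carbon dioxide equivalent" is assembled.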
Physical sciences
Halocarbons
Chemistry
855850
https://en.wikipedia.org/wiki/Extrusion
Extrusion
Extrusion is a process used to create objects of a fixed cross-sectional profile by pushing material through a die of the desired cross-section. Its two main advantages over other manufacturing processes are its ability to create very complex cross-sections and to work brittle materials, because the material encounters only compressive and shear stresses. It also produces an excellent surface finish and gives considerable freedom of form in the design process. Drawing is a similar process, using the tensile strength of the material to pull it through the die. It limits the amount of change that can be performed in one step, so it is limited to simpler shapes, and multiple stages are usually needed. Drawing is the main way to produce wire. Metal bars and tubes are also often drawn. Extrusion may be continuous (theoretically producing indefinitely long material) or semi-continuous (producing many pieces). It can be done with hot or cold material. Commonly extruded materials include metals, polymers, ceramics, concrete, modelling clay, and foodstuffs. Products of extrusion are generally called extrudates. Hollow cavities within extruded material, a feature also referred to as "hole flanging", cannot be produced using a simple flat extrusion die, because there would be no way to support the center barrier of the die. Instead, the die assumes the shape of a block with depth, beginning first with a shape profile that supports the center section. The die shape then internally changes along its length into the final shape, with the suspended center pieces supported from the back of the die. The material flows around the supports and fuses to create the desired closed shape. The extrusion of metals can also increase their strength. History In 1797, Joseph Bramah patented the first extrusion process for making pipe out of soft metals. It involved preheating the metal and then forcing it through a die via a hand-driven plunger.
In 1820, Thomas Burr implemented that process for lead pipe, with a hydraulic press (also invented by Joseph Bramah). At that time the process was called "squirting". In 1894, Alexander Dick expanded the extrusion process to copper and brass alloys. Types of extrusions The process begins by heating the stock material (for hot or warm extrusion). It is then loaded into the container in the press. A dummy block is placed behind it, and the ram then presses on the material to push it out of the die. Afterward, the extrusion is stretched to straighten it. If better properties are required, it may be heat treated or cold worked. The extrusion ratio is defined as the starting cross-sectional area divided by the cross-sectional area of the final extrusion. One of the main advantages of the extrusion process is that this ratio can be very large while still producing quality parts. Hot extrusion Hot extrusion is a hot working process, which means it is done above the material's recrystallization temperature to keep the material from work hardening and to make it easier to push the material through the die. Most hot extrusions are done on horizontal hydraulic presses that range from . Pressures range from ; therefore, lubrication is required, which can be oil or graphite for lower temperature extrusions, or glass powder for higher temperature extrusions. The biggest disadvantage of this process is the cost of the machinery and its upkeep. The extrusion process is generally economical when producing between several kilograms (pounds) and many tons, depending on the material being extruded. There is a crossover point where roll forming becomes more economical. For instance, some steels become more economical to roll if producing more than 20,000 kg (50,000 lb). Cold extrusion Cold extrusion is done at room temperature or near room temperature.
The advantages of this over hot extrusion are the lack of oxidation, higher strength due to cold working, closer tolerances, better surface finish, and fast extrusion speeds if the material is subject to hot shortness. Materials that are commonly cold extruded include lead, tin, aluminium, copper, zirconium, titanium, molybdenum, beryllium, vanadium, niobium, and steel. Examples of products produced by this process are collapsible tubes, fire extinguisher cases, shock absorber cylinders and gear blanks. Warm extrusion In March 1956, a US patent was filed for a "process for warm extrusion of metal". Patent US3156043 A outlines that a number of important advantages can be achieved with warm extrusion of both ferrous and non-ferrous metals and alloys if a billet to be extruded is changed in its physical properties in response to physical forces by being heated to a temperature below the critical melting point. Warm extrusion is done above room temperature but below the recrystallization temperature of the material; temperatures range from 800 to 1,800 °F (424 to 975 °C). It is usually used to achieve the proper balance of required forces, ductility and final extrusion properties. Friction extrusion Friction extrusion was invented at the Welding Institute in the UK and patented in 1991. It was originally intended primarily as a method for producing homogeneous microstructures and particle distributions in metal matrix composite materials. Friction extrusion differs from conventional extrusion in that the charge (billet or other precursor) rotates relative to the extrusion die. An extrusion force is applied to push the charge against the die. In practice either the die or the charge may rotate, or they may be counter-rotating. The relative rotary motion between the charge and the die has several significant effects on the process.
First, the relative motion in the plane of rotation leads to large shear stresses and, hence, plastic deformation in the layer of charge in contact with and near the die. This plastic deformation is dissipated by recovery and recrystallization processes, leading to substantial heating of the deforming charge. Because of the deformation heating, friction extrusion does not generally require preheating of the charge by auxiliary means, potentially resulting in a more energy-efficient process. Second, the substantial level of plastic deformation in the region of relative rotary motion can promote solid-state welding of powders or other finely divided precursors, such as flakes and chips, effectively consolidating the charge (friction consolidation) prior to extrusion. Micro-extrusion Microextrusion is a microforming extrusion process performed at the submillimetre range. As in extrusion, metal is pushed through a die orifice, but the resulting product's cross section can fit through a 1 mm square. Several microextrusion processes have been developed since microforming was envisioned in 1990. Forward (ram and billet move in the same direction) and backward (ram and billet move in opposite directions) microextrusion were first introduced, with forward rod-backward cup and double cup extrusion methods developing later. Regardless of method, one of the greatest challenges of creating a successful microextrusion machine is the manufacture of the die and ram. "The small size of the die and ram, along with the stringent accuracy requirement, needs suitable manufacturing processes." Additionally, as Fu and Chan pointed out in a 2013 state-of-the-art technology review, several issues must still be resolved before microextrusion and other microforming technologies can be implemented more widely, including deformation load and defects, forming system stability, mechanical properties, and other size-related effects on the crystallite (grain) structure and boundaries.
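The extrusion ratio defined in the overview above (starting cross-sectional area divided by the final cross-sectional area) is straightforward to compute; for round stock it reduces to the square of the diameter ratio. A small sketch with hypothetical dimensions:

```python
import math

def extrusion_ratio(billet_diameter: float, product_diameter: float) -> float:
    """Starting cross-sectional area divided by the final cross-sectional area."""
    area_start = math.pi * (billet_diameter / 2) ** 2
    area_final = math.pi * (product_diameter / 2) ** 2
    return area_start / area_final

# A hypothetical 200 mm round billet extruded to a 20 mm rod:
# the ratio is (200 / 20)**2 = 100.
print(extrusion_ratio(200.0, 20.0))
```

As the text notes, this ratio can be very large in practice while still producing quality parts.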
Equipment There are many different variations of extrusion equipment. They vary by four major characteristics: the movement of the extrusion in relation to the ram (if the die is held stationary and the ram moves towards it, the process is called "direct extrusion"; if the ram is held stationary and the die moves towards the ram, it is called "indirect extrusion"); the position of the press, either vertical or horizontal; the type of drive, either hydraulic or mechanical; and the type of load applied, either conventional (variable) or hydrostatic. The press may be driven by a single or twin screw auger, powered by an electric motor, or by a ram, driven by hydraulic pressure (often used for steel and titanium alloys), oil pressure (for aluminium), or, in other specialized processes, by rollers inside a perforated drum for the production of many simultaneous streams of material. Forming internal cavities There are several methods for forming internal cavities in extrusions. One way is to use a hollow billet together with a fixed or floating mandrel. A fixed mandrel, also known as a German type, is integrated into the dummy block and stem. A floating mandrel, also known as a French type, floats in slots in the dummy block and aligns itself in the die when extruding. If a solid billet is used as the feed material, it must first be pierced by the mandrel before extruding through the die. A special press is used in order to control the mandrel independently from the ram. The solid billet could also be used with a spider die, porthole die or bridge die. All of these types of dies incorporate the mandrel in the die and have "legs" that hold the mandrel in place. During extrusion the metal divides, flows around the legs, then merges, leaving weld lines in the final product. Direct extrusion Direct extrusion, also known as forward extrusion, is the most common extrusion process. It works by placing the billet in a heavy-walled container. The billet is pushed through the die by a ram or screw.
There is a reusable dummy block between the ram and the billet to keep them separated. The major disadvantage of this process is that the force required to extrude the billet is greater than that needed in the indirect extrusion process, because of the frictional forces introduced by the need for the billet to travel the entire length of the container. Because of this, the greatest force is required at the beginning of the process, and it slowly decreases as the billet is used up. At the end of the billet the force greatly increases because the billet is thin and the material must flow radially to exit the die. The end of the billet (called the butt end) is not used for this reason. Indirect extrusion In indirect extrusion, also known as backwards extrusion, the billet and container move together while the die is stationary. The die is held in place by a "stem", which has to be longer than the container length. The maximum length of the extrusion is ultimately dictated by the column strength of the stem. Because the billet moves with the container, the frictional forces are eliminated. This leads to the following advantages: a 25 to 30% reduction in friction, which allows for extruding larger billets, increased speed, and an increased ability to extrude smaller cross-sections; less tendency for extrusions to crack, because no heat is formed from friction; longer container liner life due to less wear; and more uniform use of the billet, so extrusion defects and coarse-grained peripheral zones are less likely. The disadvantages are that impurities and defects on the surface of the billet affect the surface of the extrusion (these defects ruin the piece if it needs to be anodized or if aesthetics are important; to get around this, the billets may be wire brushed, machined or chemically cleaned before being used), and that this process is not as versatile as direct extrusion because the cross-sectional area is limited by the maximum size of the stem.
Hydrostatic extrusion In the hydrostatic extrusion process the billet is completely surrounded by a pressurized liquid, except where the billet contacts the die. This process can be done hot, warm, or cold; however, the temperature is limited by the stability of the fluid used. The process must be carried out in a sealed cylinder to contain the hydrostatic medium. The fluid can be pressurized in two ways: in constant-rate extrusion, a ram or plunger is used to pressurize the fluid inside the container; in constant-pressure extrusion, a pump is used, possibly with a pressure intensifier, to pressurize the fluid, which is then pumped to the container. The advantages of this process include the following: the absence of friction between the container and the billet reduces force requirements, ultimately allowing for faster speeds, higher reduction ratios, and lower billet temperatures; the ductility of the material usually increases when high pressures are applied; the material flows evenly; large billets and large cross-sections can be extruded; and no billet residue is left on the container walls. The disadvantages are that the billets must be prepared by tapering one end to match the die entry angle (needed to form a seal at the beginning of the cycle); that usually the entire billet needs to be machined to remove any surface defects; that containing the fluid under high pressures can be difficult; and that a billet remnant or a plug of a tougher material must be left at the end of the extrusion to prevent a sudden release of the extrusion fluid. Drives Most modern direct or indirect extrusion presses are hydraulically driven, but some small mechanical presses are still used. Of the hydraulic presses there are two types: direct-drive oil presses and accumulator water drives. Direct-drive oil presses are the most common because they are reliable and robust. They can deliver over 35 MPa (5,000 psi). They supply a constant pressure throughout the whole billet.
The disadvantage is that they are slow, between 50 and 200 mm/s (2–8 ips). Accumulator water drives are more expensive and larger than direct-drive oil presses, and they lose about 10% of their pressure over the stroke, but they are much faster, up to 380 mm/s (15 ips). Because of this they are used when extruding steel. They are also used on materials that must be heated to very hot temperatures for safety reasons. Hydrostatic extrusion presses usually use castor oil at pressure up to 1,400 MPa (200 ksi). Castor oil is used because it has good lubricity and high pressure properties. Die design The design of an extrusion profile has a large impact on how readily it can be extruded. The maximum size for an extrusion is determined by finding the smallest circle that will fit around the cross-section; this is called the circumscribing circle. This diameter, in turn, controls the size of the die required, which ultimately determines if the part will fit in a given press. For example, a larger press can handle diameter circumscribing circles for aluminium and 55 cm (22 in) diameter circles for steel and titanium. The complexity of an extruded profile can be roughly quantified by calculating the shape factor, which is the amount of surface area generated per unit mass of extrusion. This affects the cost of tooling as well as the rate of production. Thicker sections generally need an increased section size. For the material to flow properly, legs should not be more than ten times longer than their thickness. If the cross-section is asymmetrical, adjacent sections should be as close to the same size as possible. Sharp corners should be avoided; for aluminium and magnesium the minimum radius should be 0.4 mm (1/64 in) and for steel corners should be and fillets should be . The following table lists the minimum cross-section and thickness for various materials.
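The two profile metrics described above (the circumscribing circle and the shape factor) can be illustrated for the simple case of a solid rectangular profile, where the circumscribing circle's diameter is just the rectangle's diagonal. The dimensions and density below are hypothetical, chosen only to show the calculation:

```python
import math

def circumscribing_circle_diameter(width: float, height: float) -> float:
    """Smallest circle that fits around a solid w x h rectangular profile:
    its diameter is the rectangle's diagonal."""
    return math.hypot(width, height)

def shape_factor(perimeter: float, area: float, density: float) -> float:
    """Surface area generated per unit mass of extrudate. For a prismatic
    extrusion of length L this is (perimeter * L) / (area * L * density);
    the length cancels, leaving perimeter / (area * density)."""
    return perimeter / (area * density)

# Hypothetical 60 mm x 80 mm aluminium bar (density roughly 2.7e-6 kg/mm^3):
w, h = 60.0, 80.0
print(circumscribing_circle_diameter(w, h))      # 100.0 (mm)
print(shape_factor(2 * (w + h), w * h, 2.7e-6))  # mm^2 of surface per kg
```

For irregular profiles the circumscribing circle must be found from the full outline geometry, but the role of the two numbers is the same: the first decides which press the part fits in, the second drives tooling cost and production rate.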
Materials Metal Metals that are commonly extruded include the following. Aluminium is the most commonly extruded material. Aluminium can be hot or cold extruded. If it is hot extruded, it is heated to 575 to 1100 °F (300 to 600 °C). Examples of products include profiles for tracks, frames, rails, mullions, and heat sinks. Brass is used to extrude corrosion-free rods, automobile parts, pipe fittings, and engineering parts. Copper (1100 to 1825 °F (600 to 1000 °C)) pipe, wire, rods, bars, tubes, and welding electrodes. Often more than 100 ksi (690 MPa) is required to extrude copper. Lead and tin (maximum 575 °F (300 °C)) pipes, wire, tubes, and cable sheathing. Molten lead may also be used in place of billets on vertical extrusion presses. Magnesium (575 to 1100 °F (300 to 600 °C)) aircraft parts and nuclear industry parts. Magnesium is about as extrudable as aluminium. Zinc (400 to 650 °F (200 to 350 °C)) rods, bars, tubes, hardware components, fittings, and handrails. Steel (1825 to 2375 °F (1000 to 1300 °C)) rods and tracks. Usually plain carbon steel is extruded, but alloy steel and stainless steel can also be extruded. Titanium (1100 to 1825 °F (600 to 1000 °C)) aircraft components including seat tracks, engine rings, and other structural parts. Magnesium and aluminium alloys usually have a RMS or better surface finish. Titanium and steel can achieve a RMS. In 1950, Ugine Séjournet, of France, invented a process which uses glass as a lubricant for extruding steel. The Ugine-Sejournet, or Sejournet, process is now used for other materials that have melting temperatures higher than steel or that require a narrow range of temperatures to extrude, such as the platinum-iridium alloy used to make kilogram mass standards. The process starts by heating the material to the extruding temperature and then rolling it in glass powder. The glass melts and forms a thin film, 20 to 30 mils (0.5 to 0.75 mm), which separates the billet from the chamber walls and acts as a lubricant.
A solid glass ring, 0.25 to 0.75 in (6 to 18 mm) thick, is placed in the chamber on the die to lubricate the extrusion as it is forced through the die. A second advantage of this glass ring is its ability to insulate the die from the heat of the billet. The extrusion will have a 1 mil thick layer of glass, which can be easily removed once it cools. Another breakthrough in lubrication is the use of phosphate coatings. With this process, in conjunction with glass lubrication, steel can be cold extruded. The phosphate coat absorbs the liquid glass to offer even better lubricating properties. Plastic Plastics extrusion commonly uses plastic chips or pellets, which are usually dried in a hopper to drive out moisture before going to the feed screw. The polymer resin is heated to a molten state by a combination of heating elements and shear heating from the extrusion screw. The screw, or screws in the case of twin screw extrusion, forces the resin through a die, forming the resin into the desired shape. The extrudate is cooled and solidified as it is pulled through the die or water tank. A "caterpillar haul-off" (called a "puller" in the US) is used to provide tension on the extrusion line, which is essential for the overall quality of the extrudate. Pelletizers can also create this tension while pulling extruded strands in to be cut. The caterpillar haul-off must provide a consistent pull; otherwise, variation in cut lengths or distorted product will result. In some cases (such as fibre-reinforced tubes) the extrudate is pulled through a very long die, in a process called "pultrusion". The configuration of the interior screws depends on the application; mixing elements or conveying elements are used in various formations. Extrusion is commonly used to add colorant to molten plastic, creating specific custom colors.
Extrusion is also a process used in fused filament deposition 3D printers, whereby the extruder is often composed of a geared motor pushing plastic filament through a nozzle. Rubber Rubber extrusion is a method used to make rubber items. In this process, synthetic or natural rubber that has not yet been vulcanized is put through a machine called an extruder, which has a die of the desired shape and a pressurized screw conveyor. The rubber is heated and softened in the extruder, making it pliable, and is then pushed through the die, which gives it its final shape. The extruder consists of two main parts: a screw that moves the rubber along while other materials are added, and a die through which the soft rubber is squeezed. After the rubber takes its shape from the die, it is vulcanized to harden it into a usable product. This method is effective for long rubber pieces with a consistent cross-section, and the dies used in this process are inexpensive. It is often used to make products such as rubber seals and hoses. Polymers are used in the production of plastic tubing, pipes, rods, rails, seals, and sheets or films. Ceramic Ceramic can also be formed into shapes via extrusion. Terracotta extrusion is used to produce pipes. Many modern bricks are also manufactured using a brick extrusion process. Applications Food With the advent of industrial manufacturing, extrusion found application in food processing of instant foods and snacks, along with its already known uses in plastics and metal fabrication. Extrusion was originally used for conveying and shaping fluid forms of processed raw materials. Today, extrusion cooking technologies and capabilities have developed into sophisticated processing functions including mixing, conveying, shearing, separation, heating, cooling, shaping, co-extrusion, venting of volatiles and moisture, encapsulation, flavor generation and sterilization.
Products such as certain pastas, many breakfast cereals, premade cookie dough, some french fries, certain baby foods, dry or semi-moist pet food and ready-to-eat snacks are mostly manufactured by extrusion. It is also used to produce modified starch, and to pelletize animal feed. Generally, high-temperature extrusion is used for the manufacture of ready-to-eat snacks, while cold extrusion is used for the manufacture of pasta and related products intended for later cooking and consumption. The processed products have low moisture and hence considerably higher shelf life, and provide variety and convenience to consumers. In the extrusion process, raw materials are first ground to the correct particle size. The dry mix is passed through a pre-conditioner, in which other ingredients may be added, and steam is injected to start the cooking process. The preconditioned mix is then passed through an extruder, where it is forced through a die and cut to the desired length. The cooking process takes place within the extruder where the product produces its own friction and heat due to the pressure generated (10–20 bar). The main independent parameters during extrusion cooking are feed rate, particle size of the raw material, barrel temperature, screw speed and moisture content. The extruding process can induce both protein denaturation and starch gelatinization, depending on inputs and parameters. Sometimes, a catalyst is used, for example, when producing texturised vegetable proteins (TVP). Drug carriers For use in pharmaceutical products, extrusion through nano-porous, polymeric filters is being used to produce suspensions of lipid vesicles liposomes or transfersomes with a particular size of a narrow size distribution. The anti-cancer drug Doxorubicin in liposome delivery system is formulated by extrusion, for example. Hot melt extrusion is also utilized in pharmaceutical solid oral dose processing to enable delivery of drugs with poor solubility and bioavailability. 
Hot melt extrusion has been shown to molecularly disperse poorly soluble drugs in a polymer carrier, increasing dissolution rates and bioavailability. The process involves the application of heat, pressure and agitation to mix materials together and ‘extrude’ them through a die. Twin-screw high shear extruders blend materials and simultaneously break up particles. The resulting particles can be blended with compression aids and compressed into tablets or filled into unit dose capsules. Biomass briquettes Fuel briquettes are produced by screw extrusion of agricultural waste (straw, sunflower husks, buckwheat, etc.) or finely shredded wood waste (sawdust) under high pressure at temperatures of 160 to 350 °C. The resulting fuel briquettes contain no added binders, only a natural one: the lignin contained in the cells of the plant waste. The heat generated during compression melts the surface of the briquettes, making them more solid, which is important for their transportation. Textiles Most synthetic textile materials are manufactured by extrusion. Fiber-forming substances are melted and passed through a spinneret to form various synthetic filaments.
Technology
Metallurgy
856544
https://en.wikipedia.org/wiki/Methyl%20salicylate
Methyl salicylate
Methyl salicylate (oil of wintergreen or wintergreen oil) is an organic compound with the formula C8H8O3. It is the methyl ester of salicylic acid. It is a colorless, viscous liquid with a sweet, fruity odor reminiscent of root beer (in which it is used as a flavoring), but often associatively called "minty", as it is an ingredient in mint candies. It is produced by many species of plants, particularly wintergreens. It is also produced synthetically for use as a fragrance and as a flavoring agent. Biosynthesis and occurrence Methyl salicylate was first isolated (from the plant Gaultheria procumbens) in 1843 by the French chemist Auguste André Thomas Cahours (1813–1891), who identified it as an ester of salicylic acid and methanol. The biosynthesis of methyl salicylate arises via the hydroxylation of benzoic acid by a cytochrome P450 followed by reaction with a methyltransferase enzyme. Methyl salicylate as a plant metabolite Many plants produce methyl salicylate in small quantities. Methyl salicylate levels are often upregulated in response to biotic stress, especially infection by pathogens, where it plays a role in the induction of resistance. Methyl salicylate is believed to function by being metabolized to the plant hormone salicylic acid. Since methyl salicylate is volatile, these signals can spread through the air to distal parts of the same plant or even to neighboring plants, whereupon they can function as a mechanism of plant-to-plant communication, "warning" neighbors of danger. Methyl salicylate is also released in some plants when they are damaged by herbivorous insects, where it may function as a cue aiding in the recruitment of predators, notably hoverflies, lacewings, and lady beetles. Some plants produce methyl salicylate in larger quantities, where it is likely involved in direct defense against predators or pathogens.
Examples of this latter class include: some species of the genus Gaultheria in the family Ericaceae, including Gaultheria procumbens, the wintergreen or eastern teaberry; some species of the genus Betula in the family Betulaceae, particularly those in the subgenus Betulenta such as B. lenta, the black birch; all species of the genus Spiraea in the family Rosaceae, also called the meadowsweets; species of the genus Polygala in the family Polygalaceae. Methyl salicylate can also be a component of floral scents, especially in plants dependent on nocturnal pollinators like moths, scarab beetles, and (nocturnal) bees. Commercial production Methyl salicylate can be produced by esterifying salicylic acid with methanol. Commercial methyl salicylate is now synthesized, but in the past, it was commonly distilled from the twigs of Betula lenta (sweet birch) and Gaultheria procumbens (eastern teaberry or wintergreen). Uses Methyl salicylate is used in high concentrations as a rubefacient and analgesic in deep heating liniments (such as Bengay) to treat joint and muscular pain. Randomised double blind trials report that evidence of its effectiveness is weak, but stronger for acute pain than chronic pain, and that effectiveness may be due entirely to counterirritation. However, in the body it metabolizes into salicylates, including salicylic acid, a known NSAID. Methyl salicylate is used in low concentrations (0.04% and under) as a flavoring agent in root beer, chewing gum, mints and medicine such as Pepto-Bismol. When mixed with sugar and dried, it is a potentially entertaining source of triboluminescence, for example by crushing Wint-O-Green Life Savers in a dark room. When crushed, sugar crystals emit light; methyl salicylate amplifies the spark because it fluoresces, absorbing ultraviolet light and re-emitting it in the visible spectrum. It is used as an antiseptic in Listerine mouthwash produced by the Johnson & Johnson company. 
It provides fragrance to various products and serves as an odor-masking agent for some organophosphate pesticides. Methyl salicylate is also used as a bait for attracting male orchid bees for study, which apparently gather the chemical to synthesize pheromones, and to clear plant or animal tissue samples of color, and as such is useful for microscopy and immunohistochemistry when excess pigments obscure structures or block light in the tissue being examined. This clearing generally only takes a few minutes, but the tissue must first be dehydrated in alcohol. It has also been discovered that methyl salicylate works as a kairomone that attracts some insects, such as the spotted lanternfly. Unlike some other kairomones, methyl salicylate attracts all life stages of the spotted lanternfly. Additional applications include: use as a simulant or surrogate in research on the chemical warfare agent sulfur mustard, due to its similar chemical and physical properties; restoring (at least temporarily) the elastomeric properties of old rubber rollers, especially in printers; use as a transfer agent in printmaking (to release toner from photocopied images and apply it to other surfaces); and use as a penetrating oil to loosen rusted parts. Safety and toxicity Methyl salicylate is potentially deadly, especially for young children who may accidentally ingest preparations containing methyl salicylate, such as an essential oil solution. A single teaspoon (5 mL) of methyl salicylate contains approximately 6 g of salicylate, which is equivalent to almost twenty 300 mg aspirin tablets (5 mL × 1.174 g/mL = 5.87 g). Toxic ingestions of salicylates typically occur with doses of approximately 150 mg/kg body weight. This can be achieved with 1 mL of oil of wintergreen, which equates to 140 mg/kg of salicylates for a 10 kg child (22 lb). The lowest published lethal dose is 101 mg/kg body weight in adult humans (or 7.07 grams for a 70 kg adult).
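The dose arithmetic quoted above can be checked directly; all figures below are taken from the text (density 1.174 g/mL, 300 mg aspirin tablets, a 150 mg/kg toxic threshold, and a 101 mg/kg lowest published lethal dose):

```python
DENSITY_G_PER_ML = 1.174      # methyl salicylate
ASPIRIN_TABLET_MG = 300
TOXIC_DOSE_MG_PER_KG = 150

# One teaspoon (5 mL) of methyl salicylate:
teaspoon_g = 5 * DENSITY_G_PER_ML                          # 5.87 g
tablet_equivalent = teaspoon_g * 1000 / ASPIRIN_TABLET_MG  # ~19.6 tablets

# Toxic threshold for a 10 kg child, in grams of salicylate:
child_threshold_g = TOXIC_DOSE_MG_PER_KG * 10 / 1000       # 1.5 g

# Lowest published lethal dose of 101 mg/kg, for a 70 kg adult:
adult_lethal_g = 101 * 70 / 1000                           # 7.07 g

print(round(teaspoon_g, 2), round(tablet_equivalent, 1),
      child_threshold_g, adult_lethal_g)
```

The roughly twenty-tablet equivalence follows directly: 5.87 g divided by 300 mg per tablet is about 19.6 tablets.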
It has proven fatal to small children in doses as small as 4 mL. A seventeen-year-old cross-country runner at Notre Dame Academy on Staten Island died in April 2007 after her body absorbed methyl salicylate through excessive use of topical muscle-pain relief products (using multiple patches against the manufacturer's instructions). Most instances of human toxicity due to methyl salicylate are a result of overapplication of topical analgesics, especially involving children. Salicylate, the major metabolite of methyl salicylate, may accumulate in blood, plasma or serum which may help professionals to confirm a diagnosis of poisoning in hospitalized patients or to assist in an autopsy. Compendial status British Pharmacopoeia Japanese Pharmacopoeia
Physical sciences
Esters and ethers
Chemistry
2684988
https://en.wikipedia.org/wiki/Fluid%20mechanics
Fluid mechanics
Fluid mechanics is the branch of physics concerned with the mechanics of fluids (liquids, gases, and plasmas) and the forces on them. It has applications in a wide range of disciplines, including mechanical, aerospace, civil, chemical, and biomedical engineering, as well as geophysics, oceanography, meteorology, astrophysics, and biology. It can be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of the effect of forces on fluid motion. It is a branch of continuum mechanics, a subject which models matter without using the information that it is made out of atoms; that is, it models matter from a macroscopic viewpoint rather than a microscopic one. Fluid mechanics, especially fluid dynamics, is an active field of research, typically mathematically complex. Many problems are partly or wholly unsolved and are best addressed by numerical methods, typically using computers. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of such flows. History The study of fluid mechanics goes back at least to the days of ancient Greece, when Archimedes investigated fluid statics and buoyancy and formulated his famous law, now known as Archimedes' principle, which was published in his work On Floating Bodies—generally considered to be the first major work on fluid mechanics. The Iranian scholar Abu Rayhan Biruni and later Al-Khazini applied experimental scientific methods to fluid mechanics. Rapid advancement in fluid mechanics began with Leonardo da Vinci (observations and experiments), Evangelista Torricelli (invented the barometer), Isaac Newton (investigated viscosity) and Blaise Pascal (researched hydrostatics, formulated Pascal's law), and was continued by Daniel Bernoulli with the introduction of mathematical fluid dynamics in Hydrodynamica (1738). 
Inviscid flow was further analyzed by various mathematicians (Jean le Rond d'Alembert, Joseph Louis Lagrange, Pierre-Simon Laplace, Siméon Denis Poisson) and viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille and Gotthilf Hagen. Further mathematical justification was provided by Claude-Louis Navier and George Gabriel Stokes in the Navier–Stokes equations, and boundary layers were investigated (Ludwig Prandtl, Theodore von Kármán), while various scientists such as Osborne Reynolds, Andrey Kolmogorov, and Geoffrey Ingram Taylor advanced the understanding of fluid viscosity and turbulence. Main branches Fluid statics Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at rest. It embraces the study of the conditions under which fluids are at rest in stable equilibrium; and is contrasted with fluid dynamics, the study of fluids in motion. Hydrostatics offers physical explanations for many phenomena of everyday life, such as why atmospheric pressure changes with altitude, why wood and oil float on water, and why the surface of water is always level whatever the shape of its container. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to some aspects of geophysics and astrophysics (for example, in understanding plate tectonics and anomalies in the Earth's gravitational field), to meteorology, to medicine (in the context of blood pressure), and many other fields. Fluid dynamics Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow—the science of liquids and gases in motion. Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. 
The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure, density, and temperature, as functions of space and time. It has several subdisciplines itself, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and movements on aircraft, determining the mass flow rate of petroleum through pipelines, predicting evolving weather patterns, understanding nebulae in interstellar space and modeling explosions. Some fluid-dynamical principles are used in traffic engineering and crowd dynamics. Relationship to continuum mechanics Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following table. In a mechanical view, a fluid is a substance that does not support shear stress; that is why a fluid at rest has the shape of its containing vessel. Assumptions The assumptions inherent to a fluid mechanical treatment of a physical system can be expressed in terms of mathematical equations. Fundamentally, every fluid mechanical system is assumed to obey: Conservation of mass Conservation of energy Conservation of momentum The continuum assumption For example, the assumption that mass is conserved means that for any fixed control volume (for example, a spherical volume)—enclosed by a control surface—the rate of change of the mass contained in that volume is equal to the rate at which mass is passing through the surface from outside to inside, minus the rate at which mass is passing from inside to outside. This can be expressed as an equation in integral form over the control volume. The continuum assumption is an idealization of continuum mechanics under which fluids can be treated as continuous, even though, on a microscopic scale, they are composed of molecules. 
Under the continuum assumption, macroscopic (observed/measurable) properties such as density, pressure, temperature, and bulk velocity are taken to be well-defined at "infinitesimal" volume elements—small in comparison to the characteristic length scale of the system, but large in comparison to the molecular length scale. Fluid properties can vary continuously from one volume element to another and are average values of the molecular properties. The continuum hypothesis can lead to inaccurate results in applications like supersonic flows, or molecular flows on the nanoscale. Those problems for which the continuum hypothesis fails can be solved using statistical mechanics. To determine whether or not the continuum hypothesis applies, the Knudsen number, defined as the ratio of the molecular mean free path to the characteristic length scale, is evaluated. Problems with Knudsen numbers below 0.1 can be evaluated using the continuum hypothesis, but a molecular approach (statistical mechanics) must be applied to find the fluid motion for larger Knudsen numbers. Navier–Stokes equations The Navier–Stokes equations (named after Claude-Louis Navier and George Gabriel Stokes) are differential equations that describe the force balance at a given point within a fluid. For an incompressible fluid with vector velocity field u, the Navier–Stokes equations are ∂u/∂t + (u · ∇)u = −(1/ρ)∇p + ν∇²u, together with the incompressibility condition ∇ · u = 0. These differential equations are the analogues for deformable materials to Newton's equations of motion for particles – the Navier–Stokes equations describe changes in momentum (force) in response to pressure and viscosity, parameterized by the kinematic viscosity ν. Occasionally, body forces, such as the gravitational force or Lorentz force are added to the equations. Solutions of the Navier–Stokes equations for a given physical problem must be sought with the help of calculus. In practical terms, only the simplest cases can be solved exactly in this way. 
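The Knudsen-number criterion described above is easy to sketch; the sea-level mean free path of air used here is an assumed illustrative value, not a figure from the text:

```python
def knudsen_number(mean_free_path_m, length_scale_m):
    """Kn = molecular mean free path / characteristic length scale."""
    return mean_free_path_m / length_scale_m

def continuum_ok(kn, threshold=0.1):
    """Rule of thumb from the text: continuum treatment for Kn below ~0.1."""
    return kn < threshold

MFP_AIR_M = 68e-9  # assumed mean free path of air at sea level (~68 nm)

print(continuum_ok(knudsen_number(MFP_AIR_M, 1.0)))     # True: metre-scale flow
print(continuum_ok(knudsen_number(MFP_AIR_M, 100e-9)))  # False: nanoscale flow
```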
These cases generally involve non-turbulent, steady flow in which the Reynolds number is small. For more complex cases, especially those involving turbulence, such as global weather systems, aerodynamics, hydrodynamics and many more, solutions of the Navier–Stokes equations can currently only be found with the help of computers. This branch of science is called computational fluid dynamics. Inviscid and viscous fluids An inviscid fluid has no viscosity (ν = 0). In practice, an inviscid flow is an idealization, one that facilitates mathematical treatment. In fact, purely inviscid flows are only known to be realized in the case of superfluidity. Otherwise, fluids are generally viscous, a property that is often most important within a boundary layer near a solid surface, where the flow must match onto the no-slip condition at the solid. In some cases, the mathematics of a fluid mechanical system can be treated by assuming that the fluid outside of boundary layers is inviscid, and then matching its solution onto that for a thin laminar boundary layer. For fluid flow over a porous boundary, the fluid velocity can be discontinuous between the free fluid and the fluid in the porous media (this is related to the Beavers and Joseph condition). Further, it is useful at low subsonic speeds to assume that gas is incompressible—that is, the density of the gas does not change even though the speed and static pressure change. Newtonian versus non-Newtonian fluids A Newtonian fluid (named after Isaac Newton) is defined to be a fluid whose shear stress is linearly proportional to the velocity gradient in the direction perpendicular to the plane of shear. This definition means regardless of the forces acting on a fluid, it continues to flow. For example, water is a Newtonian fluid, because it continues to display fluid properties no matter how much it is stirred or mixed. 
A slightly less rigorous definition is that the drag of a small object being moved slowly through the fluid is proportional to the force applied to the object. (Compare friction). Important fluids, like water as well as most gases, behave—to good approximation—as a Newtonian fluid under normal conditions on Earth. By contrast, stirring a non-Newtonian fluid can leave a "hole" behind. This will gradually fill up over time—this behavior is seen in materials such as pudding, oobleck, or sand (although sand isn't strictly a fluid). Alternatively, stirring a non-Newtonian fluid can cause the viscosity to decrease, so the fluid appears "thinner" (this is seen in non-drip paints). There are many types of non-Newtonian fluids, as they are defined by failing to obey a particular property—for example, most fluids with long molecular chains can react in a non-Newtonian manner. Equations for a Newtonian fluid The constant of proportionality between the viscous stress tensor and the velocity gradient is known as the viscosity. A simple equation to describe incompressible Newtonian fluid behavior is τ = μ(du/dy), where τ is the shear stress exerted by the fluid ("drag"), μ is the fluid viscosity—a constant of proportionality—and du/dy is the velocity gradient perpendicular to the direction of shear. For a Newtonian fluid, the viscosity, by definition, depends only on temperature, not on the forces acting upon it. If the fluid is incompressible the equation governing the viscous stress (in Cartesian coordinates) is τ_ij = μ(∂u_i/∂x_j + ∂u_j/∂x_i), where τ_ij is the shear stress on the i-th face of a fluid element in the j-th direction, u_i is the velocity in the i-th direction and x_j is the j-th direction coordinate. If the fluid is not incompressible the general form for the viscous stress in a Newtonian fluid is τ_ij = μ(∂u_i/∂x_j + ∂u_j/∂x_i) + λ δ_ij (∇ · u), where λ is the second viscosity coefficient (or bulk viscosity). If a fluid does not obey this relation, it is termed a non-Newtonian fluid, of which there are several types. 
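The defining Newtonian property—shear stress linearly proportional to the velocity gradient—can be checked numerically; the viscosity of water below is an assumed illustrative value:

```python
def newtonian_shear_stress(mu_pa_s, velocity_gradient_per_s):
    """tau = mu * du/dy: stress is linear in the velocity gradient."""
    return mu_pa_s * velocity_gradient_per_s

MU_WATER = 1.0e-3  # assumed dynamic viscosity of water, Pa*s

# Doubling the gradient doubles the stress -- the linearity that
# defines a Newtonian fluid:
print(round(newtonian_shear_stress(MU_WATER, 100), 3))  # 0.1 Pa
print(round(newtonian_shear_stress(MU_WATER, 200), 3))  # 0.2 Pa
```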
Non-Newtonian fluids can be either plastic, Bingham plastic, pseudoplastic, dilatant, thixotropic, rheopectic, viscoelastic. In some applications, another rough broad division among fluids is made: ideal and non-ideal fluids. An ideal fluid is non-viscous and offers no resistance whatsoever to a shearing force. An ideal fluid really does not exist, but in some calculations, the assumption is justifiable. One example of this is the flow far from solid surfaces. In many cases, the viscous effects are concentrated near the solid boundaries (such as in boundary layers) while in regions of the flow field far away from the boundaries the viscous effects can be neglected and the fluid there is treated as it were inviscid (ideal flow). When the viscosity is neglected, the term containing the viscous stress tensor in the Navier–Stokes equation vanishes. The equation reduced in this form is called the Euler equation.
Physical sciences
Fluid mechanics
null
2685394
https://en.wikipedia.org/wiki/Quercus%20acutissima
Quercus acutissima
Quercus acutissima, the sawtooth oak, is an Asian species of oak native to China, Tibet, Korea, Japan, Taiwan, Siberia, Mongolia, Bangladesh, Philippines, Indonesia, Malaysia, India, Pakistan, Sri Lanka, Brunei, Indochina (Vietnam, Thailand, Myanmar, Cambodia, Laos), Himalayas (Nepal, Bhutan, Northeast India). It is widely planted in many lands and has become naturalized in parts of North America. Quercus acutissima is closely related to the Turkey oak, classified with it in Quercus sect. Cerris, a section of the genus characterized by shoot buds surrounded by soft bristles, bristle-tipped leaf lobes, and acorns that mature in about 18 months. Description Quercus acutissima is a medium-sized deciduous tree growing to tall with a trunk up to in diameter. The bark is dark gray and deeply furrowed. The leaves are long and wide, with 14–20 small saw-tooth-like triangular lobes on each side, with teeth of very regular shape. The flowers are wind-pollinated catkins. The fruit is an acorn, maturing about 18 months after pollination, long and 2 cm broad, bi-coloured with an orange basal half grading to a green-brown tip; the acorn cap is deep, densely covered in soft long 'mossy' bristles. Ecology The acorns are very bitter, but are eaten by jays and pigeons; squirrels usually only eat them when other food sources have run out. The sap of the tree can leak out of the trunk. Beetles, stag beetles, butterflies, and Vespa mandarinia gather to reach this sap. Native to Asia, sawtooth oak has found its way into the Eastern part of the United States in states including Florida, Missouri, New York, Alabama, Pennsylvania, and many others. Quercus acutissima was introduced into the United States around the 1920s. 
In order to reduce the potential harms of the sawtooth oak, researchers and scientists advise removing tree saplings and removing the plant species altogether from reclamation species lists. Due to its preference for well-drained acid soils, Quercus acutissima is able to thrive and survive in various harsh locations. Like other invasive species, the sawtooth oak is able to outcompete native species, which can be detrimental to ecosystems. Due to its fast-growing nature, these saplings are being planted with little thought about the potential damage they may do to native species. Uses Sawtooth oak is widely planted in eastern North America and is naturalized in scattered locations; it is also occasionally planted in Europe but has not naturalized there. Most planting in North America was carried out for wildlife food provision, as the species tends to bear heavier crops of acorns than native American oak species; however, the bitterness of the acorns makes it less suitable for this purpose, and sawtooth oak is becoming a problematic invasive species in some areas and states, such as Louisiana. Sawtooth oaks also grow at a faster rate, which helps them compete against native trees. The wood has many of the characteristics of other oaks, but is very prone to crack and split and hence is relegated to such uses as fencing. Charcoal made using this wood is used especially for the braziers for heating water for the Japanese tea ceremony.
Biology and health sciences
Fagales
Plants
140282
https://en.wikipedia.org/wiki/Space%20settlement
Space settlement
A space settlement (also called a space habitat, spacestead, space city or space colony) is a settlement in outer space, sustaining habitation facilities more extensive than those of a general space station or spacecraft. Possibly including closed ecological systems, its particular purpose is permanent habitation. No space settlement has been constructed yet, but many design concepts, with varying degrees of realism, have been introduced in science fiction or proposed for actual realization. Space settlements include orbital settlements (also called orbital habitat, orbital stead, orbital city or orbital colony) around the Earth or any other celestial body, as well as cyclers and interstellar arks, as generation ships or world ships. Space settlements are a form of extraterrestrial settlement, a broader category that also includes habitats built on or within a body other than Earth, such as a settlement developed from a moonbase, a Mars habitat or an asteroid. Definition A space settlement is any large-scale habitation facility in outer space, or more particularly in an orbit. The International Astronautical Federation has differentiated space settlements from space habitats and space infrastructure. While not automatically constituting a colonial entity, a space settlement can be an element of a space colony. The term "space colony" has been viewed critically, prompting Carl Sagan to propose the term space city. History The idea of space settlements either in fact or fiction goes back to the second half of the 19th century. "The Brick Moon", a fictional story written in 1869 by Edward Everett Hale, is perhaps the first treatment of this idea in writing. In 1903, space pioneer Konstantin Tsiolkovsky speculated about rotating cylindrical space settlements in Beyond Planet Earth. In 1929 John Desmond Bernal speculated about giant space settlements. Dandridge M. 
Cole in the late 1950s and 1960s speculated about hollowing out asteroids and then rotating them for use as settlements in various magazine articles and books, notably Islands In Space: The Challenge Of The Planetoids. O'Neill – The High Frontier Around 1970, near the end of Project Apollo (1961–1972), Gerard K. O'Neill, an experimental physicist at Princeton University, was looking for a topic to tempt his physics students, most of them freshmen in engineering. He hit upon the idea of assigning them feasibility calculations for large space-settlements. To his surprise, the habitats seemed feasible even in very large sizes: cylinders up to 8 km in diameter and 32 km long, even if made from ordinary materials such as steel and glass. Also, the students solved problems such as radiation protection from cosmic rays (almost free in the larger sizes), getting naturalistic Sun angles, provision of power, realistic pest-free farming and orbital attitude control without reaction motors. O'Neill published an article about these colony concepts in Physics Today in 1974. He expanded the article in his 1976 book The High Frontier: Human Colonies in Space. NASA Ames/Stanford 1975 Summer Study The result motivated NASA to sponsor a couple of summer workshops led by O'Neill. Several concepts were studied, with sizes ranging from 1,000 to 10,000,000 people, including versions of the Stanford torus. Three concepts were presented to NASA: the Bernal Sphere, the Toroidal Colony and the Cylindrical Colony. O'Neill's concepts had an example of a payback scheme: construction of solar power satellites from lunar materials. O'Neill did not emphasize the building of solar power satellites as such, but rather offered proof that orbital manufacturing from lunar materials could generate profits. 
He and other participants presumed that once such manufacturing facilities had started production, many profitable uses for them would be found, and the colony would become self-supporting and begin to build other colonies as well. The concept studies generated a notable groundswell of public interest. One effect of this expansion was the founding of the L5 Society in the U.S., a group of enthusiasts that desired to build and live in such colonies. The group was named after the space-colony orbit which was then believed to be the most profitable, a kidney-shaped orbit around either of Earth's lunar Lagrange points 5 or 4. Space Studies Institute In 1977 O'Neill founded the Space Studies Institute, which initially funded and constructed some prototypes of the new hardware needed for a space colonization effort, as well as producing a number of feasibility studies. One of the early projects, for instance, involved a series of functional prototypes of a mass driver, the essential technology for moving ores efficiently from the Moon to space colony orbits. Motivation There are a range of arguments for space settlements, including: As a base for crewed space exploration Relieving Earth of industry and population pressure Recreational habitation, either as visitors or residents at space (see space hotel) Economic growth, developing access to resources in space and a space economy, without destroying ecosystems and displacing peoples on Earth Space colonization, claiming extraterrestrial space for settler colonial independence For survival of human civilization and the biosphere, in case of a disaster on the Earth (natural or man-made) Advantages A number of arguments are made for space settlements having a number of advantages: Access to solar energy Space has an abundance of light produced from the Sun. In Earth orbit, this amounts to 1400 watts of power per square meter. 
This energy can be used to produce electricity from solar cells or heat engine based power stations, process ores, provide light for plants to grow and to warm space settlements. Outside gravity well Earth-to-space settlement trade would be easier than Earth-to-planetary habitat trade, as habitats orbiting Earth will not have a gravity well to overcome to export to Earth, and a smaller gravity well to overcome to import from Earth. In-situ resource utilization Space settlements may be supplied with resources from extraterrestrial places like Mars, asteroids, or the Moon (in-situ resource utilization [ISRU]; see Asteroid mining). One could produce breathing oxygen, drinking water, and rocket fuel with the help of ISRU. It may become possible to manufacture solar panels from lunar materials. Asteroids and other small bodies Most asteroids have a mixture of materials, that could be mined, and because these bodies do not have substantial gravity wells, it would require low delta-V to draw materials from them and haul them to a construction site. There is estimated to be enough material in the main asteroid belt alone to build enough space settlements to equal the habitable surface area of 3,000 Earths. Population A 1974 estimate assumed that collection of all the material in the main asteroid belt would allow habitats to be constructed to give an immense total population capacity. Using the free-floating resources of the Solar System, this estimate extended into the trillions. Zero g recreation If a large area at the rotation axis is enclosed, various zero-g sports are possible, including swimming, hang gliding and the use of human-powered aircraft. Passenger compartment A space settlement can be the passenger compartment of a large spacecraft for colonizing asteroids, moons, and planets. It can also function as one for a generation ship for travel to other planets or distant stars (L. R. 
Shepherd described a generation starship in 1952 comparing it to a small planet with many people living in it.) Requirements The requirements for a space settlement are many. They would have to provide all the material needs for hundreds or thousands of humans, in an environment out in space that is very hostile to human life. Regulation The governance or regulation of space settlements is crucial for responsible habitation conditions. The physical as well as socio-political architecture of a space settlement, if poorly established, can lead to tyrannical and precarious conditions. Initial capital outlay Even the smallest of the settlement designs mentioned below are more massive than the total mass of all items that humans have ever launched into Earth orbit combined. Prerequisites to building settlements are either cheaper launch costs or a mining and manufacturing base on the Moon or other body having low delta-v from the desired habitat location. Location The optimal settlement orbits are still debated, and so orbital stationkeeping is probably a commercial issue. The lunar L4 and L5 orbits are now thought to be too far away from the Moon and Earth. A more modern proposal is to use a two-to-one resonance orbit that alternately has a close, low-energy (cheap) approach to the Moon, and then to the Earth. This provides quick, inexpensive access to both raw materials and the major market. Most settlement designs plan to use electromagnetic tether propulsion, or mass drivers, instead of rocket motors. The advantage of these is that they either use no reaction mass at all, or use cheap reaction mass. Protection from radiation If a space settlement is located at L4 or L5, then its orbit will take it outside of the protection of the Earth's magnetosphere for approximately two-thirds of the time (as happens with the Moon), putting residents at risk of proton exposure from the solar wind (see Health threat from cosmic rays). 
Protection can be attained through passive or active shielding. Passive shielding through the use of materials has been the method used to shield current spacecraft. Water walls or ice walls can provide protection from solar and cosmic radiation, as 7 cm of water depth blocks approximately half of incident radiation. Alternatively, rock could be used as shielding; 4 metric tons per square meter of surface area could reduce radiation dosage to several mSv or less annually, below the rate of some populated high natural background areas on Earth. Alternative concepts based on active shielding are as yet untested and more complex than such passive mass shielding, but usage of magnetic and/or electric fields, like through spacecraft encapsulating wires, to deflect particles could potentially greatly reduce mass requirements. Atmosphere Air pressure, with normal partial pressures of oxygen (21%), carbon dioxide and nitrogen (78%), is a basic requirement of any space settlement. Most space settlement design concepts envision large, thin-walled pressure vessels. The required oxygen could be obtained from lunar rock. Nitrogen is most easily available from the Earth, but is also recycled nearly perfectly. Also, nitrogen in the form of ammonia () may be obtainable from comets and the moons of outer planets. Nitrogen may also be available in unknown quantities on certain other bodies in the outer Solar System. The air of a habitat could be recycled in a number of ways. One concept is to use photosynthetic gardens, possibly via hydroponics, or forest gardening. However, these do not remove certain industrial pollutants, such as volatile oils, and excess simple molecular gases. The standard method used on nuclear submarines, a similar form of closed environment, is to use a catalytic burner, which effectively decomposes most organics. 
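The water-wall figure quoted above (7 cm of depth blocking roughly half of incident radiation) implies a simple halving-thickness model. A sketch under the assumption of pure exponential attenuation, not a full radiation-transport calculation:

```python
def transmitted_fraction(depth_cm, halving_depth_cm=7.0):
    """Fraction of radiation passing a water wall, treating 7 cm of
    water as one halving thickness (simple exponential model)."""
    return 0.5 ** (depth_cm / halving_depth_cm)

print(transmitted_fraction(7))            # 0.5  -> one halving thickness
print(round(transmitted_fraction(28), 4)) # 0.0625 -> four halvings block ~94%
```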
Further protection might be provided by a small cryogenic distillation system which would gradually remove impurities such as mercury vapor, and noble gases that cannot be catalytically burned. Food production Organic materials for food production would also need to be provided. At first, most of these would have to be imported from Earth. After that, feces recycling should reduce the need for imports. One proposed recycling method would start by burning the cryogenic distillate, plants, garbage and sewage with air in an electric arc, and distilling the result. The resulting carbon dioxide and water would be immediately usable in agriculture. The nitrates and salts in the ash could be dissolved in water and separated into pure minerals. Most of the nitrates, potassium and sodium salts would recycle as fertilizers. Other minerals containing iron, nickel, and silicon could be chemically purified in batches and reused industrially. The small fraction of remaining materials, well below 0.01% by weight, could be processed into pure elements with zero-gravity mass spectrometry, and added in appropriate amounts to the fertilizers and industrial stocks. It is likely that methods would be greatly refined as people began to actually live in space settlements. Artificial gravity Long-term on-orbit studies have proven that zero gravity weakens bones and muscles, and upsets calcium metabolism and immune systems. Most people have a continual stuffy nose or sinus problems, and a few people have dramatic, incurable motion sickness. Most habitat designs would rotate in order to use inertial forces to simulate gravity. NASA studies with chickens and plants have proven that this is an effective physiological substitute for gravity. Turning one's head rapidly in such an environment causes a "tilt" to be sensed as one's inner ears move at different rotational rates. 
Centrifuge studies show that people get motion-sick in habitats with a rotational radius of less than 100 metres, or with a rotation rate above 3 rotations per minute. However, the same studies and statistical inference indicate that almost all people should be able to live comfortably in habitats with a rotational radius larger than 500 meters and below 1 RPM. Experienced persons were not merely more resistant to motion sickness, but could also use the effect to determine "spinward" and "antispinward" directions in the centrifuges. Meteoroids and dust The habitat would need to withstand potential impacts from space debris, meteoroids, dust, etc. Most meteoroids that strike the Earth vaporize in the atmosphere. Without a thick protective atmosphere, meteoroid strikes would pose a much greater risk to a space settlement. Radar would sweep the space around each habitat, mapping the trajectories of debris and other man-made objects and allowing corrective actions to be taken to protect the habitat. Some designs (the O'Neill/NASA Ames "Stanford Torus" and the "Crystal Palace in a Hatbox" habitat designs) have a non-rotating cosmic ray shield of packed sand (~1.9 m thick) or even artificial aggregate rock (1.7 m ersatz concrete). Other proposals use the rock as structure and integral shielding (O'Neill, The High Frontier; Sheppard, "Concrete Space Colonies", Spaceflight, journal of the B.I.S.). In any of these cases, strong meteoroid protection is implied by the external radiation shell of ~4.5 tonnes of rock material per square meter. Note that Solar Power Satellites are proposed in the multi-GW ranges, and such energies and technologies would allow constant radar mapping of nearby 3D space out to arbitrarily large distances, limited only by the effort expended to do so. 
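The rotation comfort limits quoted above can be checked against the centripetal-acceleration formula a = ω²r, which governs the apparent gravity at the rim of a rotating habitat:

```python
import math

def apparent_gravity(radius_m, rpm):
    """Centripetal acceleration a = omega^2 * r at the rim of a
    habitat rotating at the given rate."""
    omega = rpm * 2 * math.pi / 60.0  # angular speed in rad/s
    return omega ** 2 * radius_m

# Comfort limit quoted above: 500 m radius at 1 RPM...
print(round(apparent_gravity(500, 1.0), 2))  # 5.48 m/s^2, about 0.56 g
# ...while the motion-sickness limits (100 m, 3 RPM) happen to give ~1 g:
print(round(apparent_gravity(100, 3.0), 2))  # 9.87 m/s^2
```

Note that full Earth gravity at a gentle 1 RPM would require a radius of roughly 900 m, which is one reason the proposed habitats are so large.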
Proposals exist to move even kilometer-sized NEOs to high Earth orbits, and reaction engines for such purposes could move a space settlement and any arbitrarily large shield, but not in any timely or rapid manner, the thrust being very low compared to the huge mass. Heat rejection The habitat is in a vacuum, and therefore resembles a giant thermos bottle. Habitats also need a radiator to eliminate heat from absorbed sunlight. Very small habitats might have a central vane that rotates with the habitat. In this design, convection would raise hot air "up" (toward the center), and cool air would fall down into the outer habitat. Some other designs would distribute coolants, such as chilled water, from a central radiator. Attitude control Most mirror geometries require something on the habitat to be aimed at the Sun, so attitude control is necessary. The original O'Neill design used the two cylinders as momentum wheels to roll the colony, and pushed the sunward pivots together or apart, using precession to change their angle. Concepts Base concepts The two common original concepts are the Bernal sphere and the O'Neill cylinder. Dumbbell-shape assembly concept A dumbbell-like spacecraft or habitat, connected by a cable to a counterweight or other habitat. This design has been proposed as a Mars ship, an initial construction shack for a space habitat, and an orbital hotel. It offers a comfortably long rotation radius and slow rotation rate for a relatively small station mass. Also, if some of the equipment can form the counterweight, the equipment dedicated to artificial gravity is just a cable, and thus has a much smaller mass fraction than in other concepts. For long-term habitation, however, radiation shielding must rotate with the habitat, and it is extremely heavy, thus requiring a much stronger and heavier cable. This speculative design was also considered by the NASA studies. Small habitats would be mass-produced to standards that allow the habitats to interconnect.
A single habitat can operate alone as a bola. However, further habitats can be attached, growing into a "dumbbell", then a "bow-tie", then a ring, then a cylinder of "beads", and finally a framed array of cylinders. Each stage of growth shares more radiation shielding and capital equipment, increasing redundancy and safety while reducing the cost per person. This concept was originally proposed by a professional architect because it can grow much like Earth-bound cities, with incremental individual investments, unlike designs that require large start-up investments. The main disadvantage is that the smaller versions use a large structure to support the radiation shielding, which rotates with them. In large sizes, the shielding becomes economical, because it grows roughly as the square of the colony radius, while the number of people, their habitats, and the radiators to cool them grow roughly as the cube of the colony radius. Further concepts Island One, a Bernal sphere settlement for about 10,000–20,000 people. Stanford torus: an alternative to Island One. Lewis One, a cylinder of radius 250 m with non-rotating radiation shielding. The shielding protects the micro-gravity industrial space, too. The rotating part is 450 m long and has several inner cylinders. Some of them are used for agriculture. Island Three or O'Neill cylinder, an even larger cylindrical design (3.2 or 4 km radius and 32 km long). McKendree cylinder, another concept that would use carbon nanotubes; a McKendree cylinder is a pair of cylinders in the same vein as the Island Three concept, but each 460 km in radius and 4600 km long (versus 3.2-4 km radius and 32 km long in the Island Three). Kalpana One, revised, a short cylinder with 250 m radius and 325 m length. The radiation shielding is 10 t/m2 and rotates. It has several inner cylinders for agriculture and recreation. It is sized for 3,000 residents. Bubbleworld or Inside/Outside concept, originated by Dandridge M.
Cole in 1964, calls for drilling a tunnel through the longest axis of a large metallic asteroid and filling it with a volatile substance, possibly water. A very large solar reflector would be constructed nearby, focusing solar heat onto the asteroid, first to weld and seal the tunnel ends, then more diffusely to slowly heat the entire outer surface. As the metal softens, the water inside expands and inflates the mass, while rotational forces help shape it into a cylindrical form. Once expanded and allowed to cool, it can be spun to produce centrifugal pseudogravity, and the interior filled with soil, air and water. By creating a slight bulge in the middle of the cylinder, a ring-shaped lake can be made to form. Reflectors would allow sunlight to enter and to be directed where needed. This method would require a significant human and industrial presence in space to be at all feasible. The concept was popularized by science fiction author Larry Niven in his Known Space stories, describing such worlds as the primary habitats of the Belters, a civilization who had colonized the asteroid belt. "Bubbleworld" is also the name of a different concept of space settlement thought of by Dani Eder in 1995 (it is alternatively known as an Ederworld). This is a relatively thin, spherical shell surrounding a mass of gas great enough to be held together by gravity. If hydrogen is used as the gas, the shell would have a radius of about 240,000 km. The outside of the shell would have a living space 2,400 km thick (filled with breathable air) with an additional outer shell (possibly made of 500 m of steel) above it to hold in the air. Asteroid terrarium, a similar idea to the bubble world, in the 2012 novel 2312 by hard science fiction writer Kim Stanley Robinson. Bishop Ring, a speculative design using carbon nanotubes: a torus 1000 km in radius, 500 km in width, and with atmosphere retention walls 200 km in height. 
The habitat would be large enough that it could be "roofless", open to outer space on the inner rim. Space station projects Space settlements are in principle space stations, so developments in space station construction share many elements with them. The following projects and proposals, while not truly space settlements, incorporate aspects of what they would have and may represent stepping stones towards the eventual building of space settlements. The Lunar Gateway is a planned lunar space station, the first outside of low Earth orbit and therefore the first spacecraft designed to operate in unshielded space. The ISS Centrifuge Demo was proposed in 2011 as a demonstration project for an artificial gravity compartment, preparatory to a similar module of a Nautilus-X Multi-Mission Space Exploration Vehicle (MMSEV). The ISS module would have an outside diameter of with a ring interior cross-section diameter and would provide 0.08 to partial gravity. This test and evaluation centrifuge would have the capability to become a sleep module for the ISS crew. The subsequent vehicle design would be a long-duration crewed space transport vehicle including the artificial gravity compartment, intended to promote crew health for a crew of up to six persons on missions of up to two years' duration. The partial-g torus-ring centrifuge would utilize both standard metal-frame and inflatable spacecraft structures and would provide 0.11 to if built with the diameter option. The Bigelow Commercial Space Station was announced in mid-2010. Bigelow has publicly shown space station design configurations with up to nine modules containing of habitable space. Bigelow began to publicly refer to the initial configuration as "Space Complex Alpha" in October 2010. In fiction Space settlements have been elements of different science-fiction stories, across different media, from books to movies such as Elysium (2013), with a wheel-shaped Stanford torus type, and Interstellar (2014), with a cylindrical O'Neill type.
Technology
Basics_6
null
140432
https://en.wikipedia.org/wiki/Hepatology
Hepatology
Hepatology is the branch of medicine that incorporates the study of the liver, gallbladder, biliary tree, and pancreas, as well as the management of their disorders. Although traditionally considered a sub-specialty of gastroenterology, rapid expansion has led in some countries to doctors specializing solely in this area, who are called hepatologists. Diseases and complications related to viral hepatitis and alcohol are the main reasons for seeking specialist advice. More than two billion people have been infected with hepatitis B virus at some point in their lives, and approximately 350 million have become persistent carriers. Up to 80% of liver cancers can be attributed to either hepatitis B or hepatitis C virus. In terms of mortality, the former is second only to smoking among known agents causing cancer. With more widespread implementation of vaccination and strict screening before blood transfusion, lower infection rates are expected in the future. In many countries, however, overall alcohol consumption is increasing, and consequently the number of people with cirrhosis and other related complications is commensurately increasing. Scope of specialty As in many medical specialties, patients are most likely to be referred by family physicians (i.e., GPs) or by physicians from different disciplines. The reasons might be: Drug overdose; paracetamol overdose is common. Gastrointestinal bleeding from portal hypertension related to liver damage. Abnormal blood tests suggesting liver disease. Enzyme defects leading to an enlarged liver in children, commonly called storage diseases of the liver. Jaundice, or hepatitis virus positivity in blood, perhaps discovered on screening blood tests. Ascites, or swelling of the abdomen from fluid accumulation, commonly due to liver disease but possibly from other diseases such as heart failure. All patients with advanced liver disease, e.g.
cirrhosis, should be under specialist care. To undergo ERCP for diagnosing diseases of the biliary tree or their management. Fever with other features suggestive of infection involving the mentioned organs; some exotic tropical diseases like hydatid cyst, kala-azar or schistosomiasis may be suspected, and microbiologists would be involved as well. Systemic diseases affecting the liver and biliary tree, e.g. haemochromatosis. Follow-up of liver transplant. Pancreatitis, commonly due to alcohol or gallstones. Cancer of the above organs; usually a multidisciplinary approach is undertaken, involving oncologists and other experts. History Evidence from autopsies on Egyptian mummies suggests that liver damage from the parasitic infection bilharziasis was widespread in ancient Egyptian society. It is possible that the Greeks were aware of the liver's ability to regenerate, as illustrated by the story of Prometheus. However, knowledge about liver disease in antiquity is questionable. Most of the important advances in the field have been made in the last 50 years. In 400 BC, Hippocrates mentioned liver abscess in his Aphorisms. The Roman-era anatomist Galen thought the liver was the principal organ of the body. He also identified its relationship with the gallbladder and spleen. Around 100 CE, Aretaeus of Cappadocia wrote on jaundice. In the medieval period, Avicenna noted the importance of urine in diagnosing liver conditions. In 1770, the French anatomist Antoine Portal noted bleeding due to oesophageal varices. In 1844, Gabriel Valentin showed that pancreatic juices break down food in digestion. In 1846, Justus von Liebig discovered tyrosine in pancreatic juice. In 1862, Austin Flint described the production of "stercorin". In 1875, Victor Charles Hanot described cirrhotic jaundice and other diseases of the liver. In 1958, Moore developed a standard technique for canine orthotopic liver transplantation. The first human liver transplant was performed in 1963 by Dr. Thomas E.
Starzl on a three-year-old male afflicted with biliary atresia, after perfecting the technique on canine livers. Baruch S. Blumberg discovered the hepatitis B virus in 1966 and developed the first vaccine against it in 1969. He was awarded the Nobel Prize in Physiology or Medicine in 1976. In 1989, investigators from the CDC (Daniel W. Bradley) and Chiron (Michael Houghton) identified the hepatitis C virus, which had previously been known as non-A, non-B hepatitis and could not be detected in the blood supply. Only in 1992 was a blood test created that could detect hepatitis C in donated blood. The word hepatology is from Ancient Greek ἧπαρ (hepar) or ἡπατο- (hepato-), meaning "liver", and -λογία (-logia), meaning "study". Disease classification 1. International Classification of Disease (ICD 2007) – WHO classification: Chapter XI: Diseases of the digestive system; K70-K77 Diseases of liver; K80-K87 Disorders of gallbladder, biliary tract and pancreas. 2. MeSH (Medical Subject Headings): G02.403.776.409.405, same as "Gastroenterology"; C06.552 Liver Diseases; C06.130 Biliary Tract Diseases; C06.689 Pancreatic Diseases. 3. National Library of Medicine Catalogue: WI 700-740 Liver and biliary tree diseases; WI 800-830 Pancreas. Also see Hepato-biliary diseases. Important procedures Endoscopic retrograde cholangiopancreatography (ERCP), transhepatic pancreato-cholangiography (TPC), transjugular intrahepatic portosystemic shunt (TIPSS), liver transplant and pancreas transplant. See also Journal of Clinical and Translational Hepatology
Biology and health sciences
Fields of medicine
Health
140459
https://en.wikipedia.org/wiki/Base%20%28chemistry%29
Base (chemistry)
In chemistry, there are three definitions in common use of the word "base": Arrhenius bases, Brønsted bases, and Lewis bases. All definitions agree that bases are substances that react with acids, as originally proposed by G.-F. Rouelle in the mid-18th century. In 1884, Svante Arrhenius proposed that a base is a substance which dissociates in aqueous solution to form hydroxide ions OH−. These ions can react with hydrogen ions (H+ according to Arrhenius) from the dissociation of acids to form water in an acid–base reaction. A base was therefore a metal hydroxide such as NaOH or Ca(OH)2. Such aqueous hydroxide solutions were also described by certain characteristic properties. They are slippery to the touch, can taste bitter and change the color of pH indicators (e.g., turn red litmus paper blue). In water, by altering the autoionization equilibrium, bases yield solutions in which the hydrogen ion activity is lower than it is in pure water, i.e., the water has a pH higher than 7.0 at standard conditions. A soluble base is called an alkali if it contains and releases OH− ions quantitatively. Metal oxides, hydroxides, and especially alkoxides are basic, and conjugate bases of weak acids are weak bases. Bases and acids are seen as chemical opposites because the effect of an acid is to increase the hydronium (H3O+) concentration in water, whereas bases reduce this concentration. A reaction between aqueous solutions of an acid and a base is called neutralization, producing a solution of water and a salt in which the salt separates into its component ions. If the aqueous solution is saturated with a given salt solute, any additional such salt precipitates out of the solution. In the more general Brønsted–Lowry acid–base theory (1923), a base is a substance that can accept hydrogen cations (H+)—otherwise known as protons. This does include aqueous hydroxides since OH− does react with H+ to form water, so that Arrhenius bases are a subset of Brønsted bases. 
However, there are also other Brønsted bases which accept protons, such as aqueous solutions of ammonia (NH3) or its organic derivatives (amines). These bases do not contain a hydroxide ion but nevertheless react with water, resulting in an increase in the concentration of hydroxide ion. Also, some non-aqueous solvents contain Brønsted bases which react with solvated protons. For example, in liquid ammonia, NH2− is the basic ion species which accepts protons from NH4+, the acidic species in this solvent. G. N. Lewis realized that water, ammonia, and other bases can form a bond with a proton due to the unshared pair of electrons that the bases possess. In the Lewis theory, a base is an electron pair donor which can share a pair of electrons with an electron acceptor, which is described as a Lewis acid. The Lewis theory is more general than the Brønsted model because the Lewis acid is not necessarily a proton, but can be another molecule (or ion) with a vacant low-lying orbital which can accept a pair of electrons. One notable example is boron trifluoride (BF3). Some other definitions of both bases and acids have been proposed in the past, but are not commonly used today. Properties General properties of bases include: Concentrated or strong bases are caustic on organic matter and react violently with acidic substances. Aqueous solutions or molten bases dissociate into ions and conduct electricity. Reactions with indicators: bases turn red litmus paper blue, turn phenolphthalein pink, keep bromothymol blue in its natural colour of blue, and turn methyl orange yellow. The pH of a basic solution at standard conditions is greater than seven. Bases are bitter.
Reactions between bases and water The following reaction represents the general reaction between a base (B) and water to produce a conjugate acid (BH+) and a conjugate base (OH−): B(aq) + H2O(l) ⇌ BH+(aq) + OH−(aq). The equilibrium constant, Kb, for this reaction can be found using the following general equation: Kb = [BH+][OH−] / [B]. In this equation, the base (B) and the extremely strong base (the conjugate base OH−) compete for the proton. As a result, bases that react with water have relatively small equilibrium constant values. The base is weaker when it has a lower equilibrium constant value. Neutralization of acids Bases react with acids to neutralize each other at a fast rate both in water and in alcohol. When dissolved in water, the strong base sodium hydroxide ionizes into hydroxide and sodium ions: NaOH → Na+ + OH−, and similarly, in water the acid hydrogen chloride forms hydronium and chloride ions: HCl + H2O → H3O+ + Cl−. When the two solutions are mixed, the H3O+ and OH− ions combine to form water molecules: H3O+ + OH− → 2 H2O. If equal quantities of NaOH and HCl are dissolved, the base and the acid neutralize exactly, leaving only NaCl, effectively table salt, in solution. Weak bases, such as baking soda or egg white, should be used to neutralize any acid spills. Neutralizing acid spills with strong bases, such as sodium hydroxide or potassium hydroxide, can cause a violent exothermic reaction, and the base itself can cause just as much damage as the original acid spill. Alkalinity of non-hydroxides Bases are generally compounds that can neutralize an amount of acid. Both sodium carbonate and ammonia are bases, although neither of these substances contains OH− groups. Both compounds accept H+ when dissolved in protic solvents such as water: Na2CO3 + H2O → 2 Na+ + HCO3− + OH−; NH3 + H2O → NH4+ + OH−. From this, a pH, or acidity, can be calculated for aqueous solutions of bases.
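The pH calculation mentioned above can be sketched numerically. This assumes the usual textbook approximations (activity ≈ concentration, pKw = 14 at 25 °C) and uses the commonly quoted Kb ≈ 1.8 × 10⁻⁵ for ammonia:

```python
import math

def weak_base_ph(c0, kb, pkw=14.0):
    """pH of a weak base with initial concentration c0 (mol/L) and constant kb.
    Solves x^2 / (c0 - x) = kb for x = [OH-] via the quadratic formula."""
    x = (-kb + math.sqrt(kb * kb + 4 * kb * c0)) / 2   # [OH-] at equilibrium
    poh = -math.log10(x)
    return pkw - poh

print(round(weak_base_ph(0.10, 1.8e-5), 2))  # ~11.12 for 0.10 M ammonia
```

As expected for a weak base, the result is well above 7 but far below the ~13 a strong base of the same concentration would give.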
A base is also defined as a molecule that has the ability to accept an electron pair bond by entering another atom's valence shell through its possession of one electron pair. There are a limited number of elements that have atoms with the ability to provide a molecule with basic properties. Carbon can act as a base, as can nitrogen and oxygen. Fluorine and sometimes the rare gases possess this ability as well. This occurs typically in compounds such as butyl lithium, alkoxides, and metal amides such as sodium amide. Bases of carbon, nitrogen and oxygen without resonance stabilization are usually very strong, or superbases, which cannot exist in a water solution due to the acidity of water. Resonance stabilization, however, enables weaker bases such as carboxylates; for example, sodium acetate is a weak base. Strong bases A strong base is a basic chemical compound that can remove a proton (H+) from (or deprotonate) a molecule of even a very weak acid (such as water) in an acid–base reaction. Common examples of strong bases include the hydroxides of alkali metals and alkaline earth metals, like NaOH and Ca(OH)2, respectively. Some bases, such as the alkaline earth hydroxides, have low solubility and can be used when solubility is not a limiting factor. One advantage of this low solubility is that "many antacids were suspensions of metal hydroxides such as aluminium hydroxide and magnesium hydroxide": compounds with low solubility and the ability to stop an increase in the concentration of the hydroxide ion, preventing harm to the tissues of the mouth, oesophagus, and stomach. As the reaction continues and the salts dissolve, the stomach acid reacts with the hydroxide produced by the suspensions. Strong bases hydrolyze in water almost completely, resulting in the leveling effect. In this process, the water molecule combines with a strong base, due to the water's amphoteric ability, and a hydroxide ion is released.
Very strong bases can even deprotonate very weakly acidic C–H groups in the absence of water. Here is a list of several strong bases: The cations of these strong bases appear in the first and second groups of the periodic table (alkali and alkaline earth metals). Tetraalkylated ammonium hydroxides are also strong bases, since they dissociate completely in water. Guanidine is a special case of a species that is exceptionally stable when protonated, analogously to the reason that makes perchloric acid and sulfuric acid very strong acids. Acids with a pKa of more than about 13 are considered very weak, and their conjugate bases are strong bases. Superbases Group 1 salts of carbanions, amide ions, and hydrides tend to be even stronger bases due to the extreme weakness of their conjugate acids, which are stable hydrocarbons, amines, and dihydrogen. Usually, these bases are created by adding pure alkali metals such as sodium into the conjugate acid. They are called superbases, and it is impossible to keep them in aqueous solution because they are stronger bases than the hydroxide ion (see the leveling effect). For example, the ethoxide ion (the conjugate base of ethanol) undergoes this reaction quantitatively in the presence of water: CH3CH2O− + H2O → CH3CH2OH + OH−. Examples of common superbases are: butyl lithium (n-C4H9Li), lithium diisopropylamide (LDA) [(CH3)2CH]2NLi, lithium diethylamide (LDEA), sodium amide (NaNH2), sodium hydride (NaH), and lithium bis(trimethylsilyl)amide. The strongest superbases have been synthesized only in the gas phase: the ortho-diethynylbenzene dianion (C6H4(C2)2)2− (the strongest superbase ever synthesized), the meta-diethynylbenzene dianion (second strongest), and the para-diethynylbenzene dianion (third strongest). The lithium monoxide anion (LiO−) was considered the strongest superbase before the diethynylbenzene dianions were created.
Weak bases A weak base is one which does not fully ionize in an aqueous solution, or in which protonation is incomplete. For example, ammonia transfers a proton to water according to the equation NH3(aq) + H2O(l) → NH4+(aq) + OH−(aq). The equilibrium constant for this reaction at 25 °C is 1.8 × 10−5, such that the extent of reaction or degree of ionization is quite small. Lewis bases A Lewis base or electron-pair donor is a molecule with one or more high-energy lone pairs of electrons which can be shared with a low-energy vacant orbital in an acceptor molecule to form an adduct. In addition to H+, possible electron-pair acceptors (Lewis acids) include neutral molecules such as BF3 and high oxidation state metal ions such as Ag2+, Fe3+ and Mn7+. Adducts involving metal ions are usually described as coordination complexes. According to the original formulation of Lewis, when a neutral base forms a bond with a neutral acid, a condition of electric stress occurs. The acid and the base share the electron pair that formerly belonged to the base. As a result, a high dipole moment is created, which can only be decreased to zero by rearranging the molecules. Solid bases Examples of solid bases include: Oxide mixtures: SiO2, Al2O3; MgO, SiO2; CaO, SiO2. Mounted bases: LiCO3 on silica; NR3, NH3, KNH2 on alumina; NaOH, KOH mounted on silica on alumina. Inorganic chemicals: BaO, KNaCO3, BeO, MgO, CaO, KCN. Anion exchange resins. Charcoal that has been treated at 900 degrees Celsius or activated with N2O, NH3, or ZnCl2-NH4Cl-CO2. The basic strength of a surface is determined by the solid surface's ability to form a conjugate base by adsorbing an electrically neutral acid. The "number of basic sites per unit surface area of the solid" is used to express how much basic strength is found on a solid base catalyst. Scientists have developed two methods to measure the number of basic sites: titration with benzoic acid using indicators, and gaseous acid adsorption.
A solid with enough basic strength will adsorb an electrically neutral acidic indicator and cause the acidic indicator's color to change to the color of its conjugate base. When performing the gaseous acid adsorption method, nitric oxide is used. The basic sites are then determined by calculating the amount of carbon dioxide that is adsorbed. Bases as catalysts Basic substances can be used as insoluble heterogeneous catalysts for chemical reactions. Some examples are metal oxides such as magnesium oxide, calcium oxide, and barium oxide, as well as potassium fluoride on alumina and some zeolites. Many transition metals make good catalysts, many of which form basic substances. Basic catalysts are used for hydrogenation, the migration of double bonds, in the Meerwein-Ponndorf-Verley reduction, the Michael reaction, and many others. Both CaO and BaO can be highly active catalysts if they are heated to high temperatures. Uses of bases Sodium hydroxide is used in the manufacture of soap, paper, and the synthetic fiber rayon. Calcium hydroxide (slaked lime) is used in the manufacture of bleaching powder. Calcium hydroxide is also used to scrub the sulfur dioxide found in the exhaust of power plants and factories. Magnesium hydroxide is used as an antacid to neutralize excess acid in the stomach and treat indigestion. Sodium carbonate is used as washing soda and for softening hard water. Sodium bicarbonate (or sodium hydrogen carbonate) is used as baking soda in cooking, for making baking powders, as an antacid to treat indigestion, and in soda-acid fire extinguishers. Ammonium hydroxide is used to remove grease stains from clothes. Monoprotic and polyprotic bases Bases with only one ionizable hydroxide (OH−) ion per formula unit are called monoprotic, since they can accept one proton (H+). Bases with more than one OH− per formula unit are polyprotic.
The number of ionizable hydroxide (OH−) ions present in one formula unit of a base is also called the acidity of the base. On the basis of acidity, bases can be classified into three types: monoacidic, diacidic and triacidic. Monoacidic bases When one molecule of a base produces one hydroxide ion on complete ionization, the base is said to be a monoacidic or monoprotic base. Examples of monoacidic bases are sodium hydroxide, potassium hydroxide, silver hydroxide, ammonium hydroxide, etc. Diacidic bases When one molecule of a base produces two hydroxide ions on complete ionization, the base is said to be diacidic or diprotic. Examples of diacidic bases are barium hydroxide, magnesium hydroxide, calcium hydroxide, zinc hydroxide, iron(II) hydroxide, tin(II) hydroxide, lead(II) hydroxide, copper(II) hydroxide, etc. Triacidic bases When one molecule of a base produces three hydroxide ions on complete ionization, the base is said to be triacidic or triprotic. Examples of triacidic bases are aluminium hydroxide, iron(III) (ferric) hydroxide, gold(III) hydroxide, etc. Etymology of the term The concept of base stems from an older alchemical notion of "the matrix":
Physical sciences
Inorganic compounds
null
140558
https://en.wikipedia.org/wiki/Fiber
Fiber
Fiber (also spelled fibre in British English) is a natural or artificial substance that is significantly longer than it is wide. Fibers are often used in the manufacture of other materials. The strongest engineering materials often incorporate fibers, for example carbon fiber and ultra-high-molecular-weight polyethylene. Synthetic fibers can often be produced very cheaply and in large amounts compared to natural fibers, but for clothing natural fibers have some benefits, such as comfort, over their synthetic counterparts. Natural fibers Natural fibers develop or occur in the fiber shape, and include those produced by plants, animals, and geological processes. They can be classified according to their origin: Vegetable fibers are generally based on arrangements of cellulose, often with lignin; examples include cotton, hemp, jute, flax, abaca, piña, ramie, sisal, bagasse, and banana. Plant fibers are employed in the manufacture of paper and textiles (cloth), and dietary fiber is an important component of human nutrition. Wood fiber, distinguished from vegetable fiber, is from tree sources. Forms include groundwood, lacebark, thermomechanical pulp (TMP), and bleached or unbleached kraft or sulfite pulps. Kraft and sulfite refer to the type of pulping process used to remove the lignin bonding the original wood structure, thus freeing the fibers for use in paper and engineered wood products such as fiberboard. Animal fibers consist largely of particular proteins. Instances are silkworm silk, spider silk, sinew, catgut, wool, sea silk; hair such as cashmere wool, mohair and angora; and fur such as sheepskin, rabbit, mink, fox, beaver, etc. Mineral fibers include the asbestos group. Asbestos is the only naturally occurring long mineral fiber. Six minerals have been classified as "asbestos": chrysotile of the serpentine class, and those belonging to the amphibole class: amosite, crocidolite, tremolite, anthophyllite and actinolite.
Short, fiber-like minerals include wollastonite and palygorskite. Biological fibers, also known as fibrous proteins or protein filaments, consist largely of biologically important proteins, in which mutations or other genetic defects can lead to severe diseases. Instances include the collagen family of proteins, tendons, muscle proteins like actin, cell proteins like microtubules and many others, such as spider silk, sinew, and hair. Artificial fibers Artificial or chemical fibers are fibers whose chemical composition, structure, and properties are significantly modified during the manufacturing process. In fashion, a fiber is a long and thin strand or thread of material that can be knit or woven into a fabric. Artificial fibers consist of regenerated fibers and synthetic fibers. Semi-synthetic fibers Semi-synthetic fibers are made from raw materials with a naturally long-chain polymer structure and are only modified and partially degraded by chemical processes, in contrast to completely synthetic fibers such as nylon (polyamide) or dacron (polyester), which the chemist synthesizes from low-molecular-weight compounds by polymerization (chain-building) reactions. The earliest semi-synthetic fiber is the cellulose regenerated fiber, rayon. Most semi-synthetic fibers are cellulose regenerated fibers. Cellulose regenerated fibers Cellulose fibers are a subset of artificial fibers, regenerated from natural cellulose. The cellulose comes from various sources: rayon from tree wood fiber, bamboo fiber from bamboo, seacell from seaweed, etc. In the production of these fibers, the cellulose is reduced to a fairly pure form as a viscous mass and formed into fibers by extrusion through spinnerets. Therefore, the manufacturing process leaves few characteristics distinctive of the natural source material in the finished products. Some examples of this fiber type are rayon, Lyocell (a brand of rayon), Modal, diacetate fiber, and triacetate fiber.
Historically, cellulose diacetate and -triacetate were classified under the term rayon, but are now considered distinct materials. Synthetic fibers Synthetic fibers come entirely from synthetic materials such as petrochemicals, unlike those artificial fibers derived from such natural substances as cellulose or protein. Fiber classification in reinforced plastics falls into two classes: (i) short fibers, also known as discontinuous fibers, with a general aspect ratio (defined as the ratio of fiber length to diameter) between 20 and 60, and (ii) long fibers, also known as continuous fibers, with a general aspect ratio between 200 and 500. Metallic fibers Metallic fibers can be drawn from ductile metals such as copper, gold or silver, and extruded or deposited from more brittle ones, such as nickel, aluminum or iron. Carbon fiber Carbon fibers are often based on polymers such as PAN that are oxidized and then carbonized via pyrolysis, but the end product is almost pure carbon. Silicon carbide fiber Silicon carbide fibers are based not on hydrocarbon polymers but on so-called poly-carbo-silanes, polymers in which about 50% of the carbon atoms are replaced by silicon atoms. The pyrolysis yields an amorphous silicon carbide, mostly containing other elements like oxygen, titanium, or aluminium, but with mechanical properties very similar to those of carbon fibers. Fiberglass Fiberglass, made from specific glass, and optical fiber, made from purified natural quartz, are also artificial fibers that come from natural raw materials, as are silica fiber, made from sodium silicate (water glass), and basalt fiber, made from melted basalt. Mineral fibers Mineral fibers can be particularly strong because they are formed with a low number of surface defects; asbestos is a common one. Polymer fibers Polymer fibers are a subset of artificial fibers, which are based on synthetic chemicals (often from petrochemical sources) rather than arising from natural materials by a purely physical process.
These fibers are made from: polyamide (nylon); PET or PBT polyester; phenol-formaldehyde (PF); polyvinyl chloride fiber (PVC; vinyon); polyolefins (PP and PE; olefin fiber); and acrylic polyesters, pure polyester. PAN fibers are used to make carbon fiber by roasting them in a low-oxygen environment. Traditional acrylic fiber is used more often as a synthetic replacement for wool. Carbon fibers and PF fibers are noted as two resin-based fibers that are not thermoplastic; most others can be melted. Aromatic polyamides (aramids), such as Twaron, Kevlar and Nomex, thermally degrade at high temperatures and do not melt; these fibers have strong bonding between polymer chains. Polyethylene (PE) fibers are sometimes made with extremely long chains (HMPE, e.g. Dyneema or Spectra). Elastomers can even be used, e.g. spandex, although urethane fibers are starting to replace spandex technology; polyurethane fiber and elastolefin are other examples. Coextruded fibers have two distinct polymers forming the fiber, usually as a core-sheath or side by side. Coated fibers exist, such as nickel-coated to provide static elimination, silver-coated to provide anti-bacterial properties and aluminum-coated to provide RF deflection for radar chaff. Radar chaff is actually a spool of continuous glass tow that has been aluminum coated. An aircraft-mounted high-speed cutter chops it up as it spews from a moving aircraft to confuse radar signals. Microfibers Invented in Japan in the early 1980s, microfibers are also known as microdenier fibers. Acrylic, nylon, polyester, lyocell and rayon can be produced as microfibers. In 1986, Hoechst A.G. of Germany produced microfiber in Europe. DuPont brought this fiber to the United States in 1990. Microfibers in textiles refer to sub-denier fiber (such as polyester drawn to 0.5 denier). Denier and dtex are two measurements of fiber yield based on weight and length. If the fiber density is known, the fiber diameter follows as well; otherwise it is simpler to measure diameters in micrometers.
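The relationship between denier, dtex, and fiber diameter described above can be made concrete. A minimal sketch, assuming a round cross-section; the function names are mine, and the 1.38 g/cm³ polyester density is an illustrative assumed value:

```python
import math

def denier_to_dtex(denier):
    """Denier is grams per 9,000 m of fiber; dtex is grams per 10,000 m."""
    return denier * 10000.0 / 9000.0

def diameter_um(denier, density_g_cm3):
    """Diameter in micrometers of a round fiber, from denier and density."""
    grams_per_cm = denier / 900000.0         # 9,000 m = 900,000 cm
    area_cm2 = grams_per_cm / density_g_cm3  # cross-sectional area
    d_cm = 2.0 * math.sqrt(area_cm2 / math.pi)
    return d_cm * 1e4                        # 1 cm = 10,000 micrometers

# A 0.5-denier polyester microfiber (density ~1.38 g/cm^3, assumed)
print(round(denier_to_dtex(0.5), 3))    # -> 0.556 (dtex)
print(round(diameter_um(0.5, 1.38), 1)) # -> 7.2 (micrometers)
```

This illustrates why sub-denier "microfibers" end up only a few micrometers across, and why measuring in micrometers directly is the simpler route when the density is unknown.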
Microfibers in technical fibers refer to ultra-fine fibers (glass or meltblown thermoplastics) often used in filtration. Newer fiber designs include extruding fiber that splits into multiple finer fibers. Most synthetic fibers are round in cross-section, but special designs can be hollow, oval, star-shaped or trilobal. The latter design provides more optically reflective properties. Synthetic textile fibers are often crimped to provide bulk in a woven, nonwoven or knitted structure. Fiber surfaces can also be dull or bright. Dull surfaces reflect more light, while bright surfaces tend to transmit light and make the fiber more transparent. Very short and/or irregular fibers have been called fibrils. Natural cellulose, such as cotton or bleached kraft, shows smaller fibrils jutting out and away from the main fiber structure. Typical properties of selected fibers Fibers can be divided into natural and artificial (synthetic) substances, and their properties can affect their performance in many applications. Synthetic fiber materials are increasingly replacing other conventional materials like glass and wood in a number of applications. This is because artificial fibers can be engineered chemically, physically, and mechanically to suit particular technical requirements. In choosing a fiber type, a manufacturer would balance its properties with the technical requirements of the applications. Various fibers are available to select for manufacturing. Here are typical properties of sample natural fibers as compared to the properties of artificial fibers.
The tables above show only typical properties of fibers; many more properties can be considered, including (from A to Z): arc resistance, biodegradability, coefficient of linear thermal expansion, continuous service temperature, density, ductile/brittle transition temperature, elongation at break, elongation at yield, fire resistance, flexibility, gamma radiation resistance, gloss, glass transition temperature, hardness, heat deflection temperature, shrinkage, stiffness, ultimate tensile strength, thermal insulation, toughness, transparency, UV light resistance, volume resistivity, water absorption, and Young's modulus
Technology
Fabrics and fibers
null
140592
https://en.wikipedia.org/wiki/Assignment%20problem
Assignment problem
The assignment problem is a fundamental combinatorial optimization problem. In its most general form, the problem is as follows: The problem instance has a number of agents and a number of tasks. Any agent can be assigned to perform any task, incurring some cost that may vary depending on the agent-task assignment. It is required to perform as many tasks as possible by assigning at most one agent to each task and at most one task to each agent, in such a way that the total cost of the assignment is minimized. Alternatively, describing the problem using graph theory: The assignment problem consists of finding, in a weighted bipartite graph, a matching of maximum size, in which the sum of weights of the edges is minimum. If the numbers of agents and tasks are equal, then the problem is called balanced assignment, and the graph-theoretic version is called minimum-cost perfect matching. Otherwise, it is called unbalanced assignment. If the total cost of the assignment for all tasks is equal to the sum of the costs for each agent (or the sum of the costs for each task, which is the same thing in this case), then the problem is called linear assignment. Commonly, when speaking of the assignment problem without any additional qualification, the linear balanced assignment problem is meant. Examples Suppose that a taxi firm has three taxis (the agents) available, and three customers (the tasks) wishing to be picked up as soon as possible. The firm prides itself on speedy pickups, so for each taxi the "cost" of picking up a particular customer will depend on the time taken for the taxi to reach the pickup point. This is a balanced assignment problem. Its solution is whichever combination of taxis and customers results in the least total cost. Now, suppose that there are four taxis available, but still only three customers. This is an unbalanced assignment problem.
One way to solve it is to invent a fourth dummy task, perhaps called "sitting still doing nothing", with a cost of 0 for the taxi assigned to it. This reduces the problem to a balanced assignment problem, which can then be solved in the usual way and still give the best solution to the problem. Similar adjustments can be done in order to allow more tasks than agents, tasks to which multiple agents must be assigned (for instance, a group of more customers than will fit in one taxi), or maximizing profit rather than minimizing cost. Formal definition The formal definition of the assignment problem (or linear assignment problem) is Given two sets, A and T, together with a weight function C : A × T → R. Find a bijection f : A → T such that the cost function: is minimized. Usually the weight function is viewed as a square real-valued matrix C, so that the cost function is written down as: The problem is "linear" because the cost function to be optimized as well as all the constraints contain only linear terms. Algorithms A naive solution for the assignment problem is to check all the assignments and calculate the cost of each one. This may be very inefficient since, with n agents and n tasks, there are n! (factorial of n) different assignments. Another naive solution is to greedily assign the pair with the smallest cost first, and remove the vertices; then, among the remaining vertices, assign the pair with the smallest cost; and so on. This algorithm may yield a non-optimal solution. For example, suppose there are two tasks and two agents with costs as follows: Alice: Task 1 = 1, Task 2 = 2. George: Task 1 = 5, Task 2 = 8. The greedy algorithm would assign Task 1 to Alice and Task 2 to George, for a total cost of 9; but the reverse assignment has a total cost of 7. Fortunately, there are many algorithms for finding the optimal assignment in time polynomial in n. 
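The greedy pitfall and the dummy-task trick described above can both be sketched with a brute-force solver. This is an exponential-time illustration only (the function names are mine, not standard library calls); the polynomial-time algorithms are discussed later:

```python
from itertools import permutations

def optimal_assignment(cost):
    """Exhaustively search all n! assignments of agents (rows) to tasks (columns)."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda perm: sum(cost[i][perm[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

def greedy_assignment(cost):
    """Repeatedly take the cheapest remaining (agent, task) pair; may be suboptimal."""
    n = len(cost)
    agents, tasks, assign, total = set(range(n)), set(range(n)), {}, 0
    while agents:
        i, j = min(((a, t) for a in agents for t in tasks),
                   key=lambda at: cost[at[0]][at[1]])
        assign[i] = j
        total += cost[i][j]
        agents.remove(i)
        tasks.remove(j)
    return assign, total

def pad_with_dummies(cost, n_tasks_total):
    """Balance an unbalanced problem by adding zero-cost dummy tasks to each agent's row."""
    return [row + [0] * (n_tasks_total - len(row)) for row in cost]

# Alice: Task 1 = 1, Task 2 = 2; George: Task 1 = 5, Task 2 = 8.
cost = [[1, 2], [5, 8]]
print(greedy_assignment(cost)[1])   # -> 9 (greedy grabs Alice -> Task 1 first)
print(optimal_assignment(cost)[1])  # -> 7 (Alice -> Task 2, George -> Task 1)
```

For the four-taxis/three-customers example, `pad_with_dummies` adds a fourth zero-cost column ("sitting still doing nothing"), after which the balanced solver applies unchanged.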
The assignment problem is a special case of the transportation problem, which is a special case of the minimum cost flow problem, which in turn is a special case of a linear program. While it is possible to solve any of these problems using the simplex algorithm, or in worst-case polynomial time using the ellipsoid method, each specialization has a smaller solution space and thus more efficient algorithms designed to take advantage of its special structure. Balanced assignment In the balanced assignment problem, both parts of the bipartite graph have the same number of vertices, denoted by n. One of the first polynomial-time algorithms for balanced assignment was the Hungarian algorithm. It is a global algorithm – it is based on improving a matching along augmenting paths (alternating paths between unmatched vertices). Its run-time complexity, when using Fibonacci heaps, is , where m is the number of edges. This is currently the fastest run-time of a strongly polynomial algorithm for this problem. If all weights are integers, then the run-time can be improved to , but the resulting algorithm is only weakly-polynomial. If the weights are integers, and all weights are at most C (where C>1 is some integer), then the problem can be solved in weakly-polynomial time using a method called weight scaling. In addition to the global methods, there are local methods which are based on finding local updates (rather than full augmenting paths). These methods have worse asymptotic runtime guarantees, but they often work better in practice. These algorithms are called auction algorithms, push-relabel algorithms, or preflow-push algorithms. Some of these algorithms were shown to be equivalent. Some of the local methods assume that the graph admits a perfect matching; if this is not the case, then some of these methods might run forever.
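The Hungarian algorithm mentioned above can be written quite compactly. A sketch, assuming a square cost matrix given as a list of lists; this follows the common O(n³) presentation with dual potentials and shortest augmenting paths, not pseudocode from any source cited here:

```python
def hungarian(cost):
    """Minimum-cost perfect matching on an n-by-n cost matrix.

    Returns assignment[i] = column assigned to row i.
    """
    n = len(cost)
    INF = float("inf")
    # u, v are dual potentials; p[j] is the row matched to column j (1-based).
    u, v = [0] * (n + 1), [0] * (n + 1)
    p, way = [0] * (n + 1), [0] * (n + 1)
    for i in range(1, n + 1):
        p[0], j0 = i, 0
        minv, used = [INF] * (n + 1), [False] * (n + 1)
        while True:                      # grow a shortest augmenting path from row i
            used[j0] = True
            i0, delta, j1 = p[j0], INF, 0
            for j in range(1, n + 1):
                if not used[j]:
                    cur = cost[i0 - 1][j - 1] - u[i0] - v[j]
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            for j in range(n + 1):       # update dual potentials along the path
                if used[j]:
                    u[p[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:               # reached an unmatched column
                break
        while j0:                        # augment: flip the matched edges back
            j1 = way[j0]
            p[j0] = p[j1]
            j0 = j1
    assignment = [0] * n
    for j in range(1, n + 1):
        if p[j]:
            assignment[p[j] - 1] = j - 1
    return assignment

cost = [[1, 2], [5, 8]]
match = hungarian(cost)
print(sum(cost[i][match[i]] for i in range(2)))  # -> 7
```

Each of the n phases runs a Dijkstra-like search over the reduced costs, which is where the cubic bound comes from.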
A simple technical way to solve this problem is to extend the input graph to a complete bipartite graph, by adding artificial edges with very large weights. These weights should exceed the weights of all existing matchings, to prevent appearance of artificial edges in the possible solution. As shown by Mulmuley, Vazirani and Vazirani, the problem of minimum weight perfect matching is converted to finding minors in the adjacency matrix of a graph. Using the isolation lemma, a minimum weight perfect matching in a graph can be found with probability at least . For a graph with n vertices, it requires time. Unbalanced assignment In the unbalanced assignment problem, the larger part of the bipartite graph has n vertices and the smaller part has r<n vertices. There is also a constant s which is at most the cardinality of a maximum matching in the graph. The goal is to find a minimum-cost matching of size exactly s. The most common case is the case in which the graph admits a one-sided-perfect matching (i.e., a matching of size r), and s=r. Unbalanced assignment can be reduced to a balanced assignment. The naive reduction is to add new vertices to the smaller part and connect them to the larger part using edges of cost 0. However, this requires new edges. A more efficient reduction is called the doubling technique. Here, a new graph G' is built from two copies of the original graph G: a forward copy Gf and a backward copy Gb. The backward copy is "flipped", so that, in each side of G, there are now n+r vertices. Between the copies, we need to add two kinds of linking edges: Large-to-large: from each vertex in the larger part of Gf, add a zero-cost edge to the corresponding vertex in Gb. Small-to-small: if the original graph does not have a one-sided-perfect matching, then from each vertex in the smaller part of Gf, add a very-high-cost edge to the corresponding vertex in Gb. All in all, at most new edges are required. 
The resulting graph always has a perfect matching of size . A minimum-cost perfect matching in this graph must consist of minimum-cost maximum-cardinality matchings in Gf and Gb. The main problem with this doubling technique is that there is no speed gain when . Instead of using reduction, the unbalanced assignment problem can be solved by directly generalizing existing algorithms for balanced assignment. The Hungarian algorithm can be generalized to solve the problem in strongly-polynomial time. In particular, if s=r then the runtime is . If the weights are integers, then Thorup's method can be used to get a runtime of . Solution by linear programming The assignment problem can be solved by presenting it as a linear program. For convenience we will present the maximization problem. Each edge , where i is in A and j is in T, has a weight . For each edge we have a variable . The variable is 1 if the edge is contained in the matching and 0 otherwise, so we set the domain constraints: The total weight of the matching is: . The goal is to find a maximum-weight perfect matching. To guarantee that the variables indeed represent a perfect matching, we add constraints saying that each vertex is adjacent to exactly one edge in the matching, i.e., . All in all we have the following LP: This is an integer linear program. However, we can solve it without the integrality constraints (i.e., drop the last constraint), using standard methods for solving continuous linear programs. While this formulation allows also fractional variable values, in this special case, the LP always has an optimal solution where the variables take integer values. This is because the constraint matrix of the fractional LP is totally unimodular – it satisfies the four conditions of Hoffman and Gale. Other methods and approximation algorithms Other approaches for the assignment problem exist and are reviewed by Duan and Pettie (see Table II). 
Their work proposes an approximation algorithm for the assignment problem (and the more general maximum weight matching problem), which runs in linear time for any fixed error bound. Many-to-many assignment In the basic assignment problem, each agent is assigned to at most one task and each task is assigned to at most one agent. In the many-to-many assignment problem, each agent i may take up to ci tasks (ci is called the agent's capacity), and each task j may be taken by up to dj agents simultaneously (dj is called the task's capacity). If the sums of capacities in both sides are equal (), then the problem is balanced, and the goal is to find a perfect matching (assign exactly ci tasks to each agent i and exactly dj agents to each task j) such that the total cost is as small as possible. The problem can be solved by reduction to the minimum cost network flow problem. Construct a flow network with the following layers: Layer 1: one source node s. Layer 2: a node for each agent. There is an arc from s to each agent i, with cost 0 and capacity ci. Layer 3: a node for each task. There is an arc from each agent i to each task j, with the corresponding cost, and capacity 1. Layer 4: one sink node t. There is an arc from each task to t, with cost 0 and capacity dj. An integral maximum flow of minimum cost can be found in polynomial time; see network flow problem. Every integral maximum flow in this network corresponds to a matching in which at most ci tasks are assigned to each agent i and at most dj agents are assigned to each task j (in the balanced case, exactly ci tasks are assigned to i and exactly dj agents are assigned to j). A min-cost maximum flow corresponds to a min-cost assignment. Generalization When phrased as a graph theory problem, the assignment problem can be extended from bipartite graphs to arbitrary graphs.
The corresponding problem, of finding a matching in a weighted graph where the sum of weights is maximized, is called the maximum weight matching problem. Another generalization of the assignment problem extends the number of sets to be matched from two to many, so that rather than matching agents to tasks, the problem becomes matching agents to tasks to time intervals to locations. This results in the multidimensional assignment problem (MAP).
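The many-to-many construction described earlier can also be illustrated without a flow solver, by expanding each agent and task into unit-capacity copies and solving the resulting balanced problem by brute force. This is an exponential-time sketch for illustration (the helper name is mine); the article's construction uses a polynomial-time min-cost flow instead:

```python
from itertools import permutations

def many_to_many(cost, agent_cap, task_cap):
    """Min-cost many-to-many assignment by expanding capacities into unit copies.

    cost[i][j] is the cost of agent i doing task j; agent i may take up to
    agent_cap[i] tasks, and task j may be taken by up to task_cap[j] agents.
    Assumes the balanced case: sum(agent_cap) == sum(task_cap).
    """
    # One copy per unit of capacity, on each side.
    agents = [i for i, c in enumerate(agent_cap) for _ in range(c)]
    tasks = [j for j, c in enumerate(task_cap) for _ in range(c)]
    # Brute-force min-cost perfect matching between the copies.
    best = min(permutations(range(len(tasks))),
               key=lambda perm: sum(cost[agents[k]][tasks[perm[k]]]
                                    for k in range(len(agents))))
    total = sum(cost[agents[k]][tasks[best[k]]] for k in range(len(agents)))
    pairs = sorted((agents[k], tasks[best[k]]) for k in range(len(agents)))
    return pairs, total

# Two agents (capacities 2 and 1) and three unit-capacity tasks.
cost = [[1, 2, 6], [4, 3, 2]]
print(many_to_many(cost, [2, 1], [1, 1, 1]))  # -> ([(0, 0), (0, 1), (1, 2)], 5)
```

The expansion mirrors the flow network: each agent copy plays the role of one unit of capacity on the source-side arcs, and each task copy one unit on the sink-side arcs.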
Mathematics
Optimization
null
140599
https://en.wikipedia.org/wiki/Basidium
Basidium
A basidium (plural: basidia) is a microscopic spore-producing structure found on the hymenophore of reproductive bodies of basidiomycete fungi. The presence of basidia is one of the main characteristic features of the group. These bodies are also called tertiary mycelia, which are highly coiled versions of secondary mycelia. A basidium usually bears four sexual spores called basidiospores; occasionally the number may be two or even eight. Each reproductive spore is produced at the tip of a narrow prong or horn called a sterigma, and is forcefully expelled at full growth. The word basidium literally means "little pedestal", referring to the way the basidium supports the spores. However, some biologists suggest that the structure looks more like a club. A partially grown basidium is known as a basidiole. Structure Most Basidiomycota have single-celled basidia (holobasidia), but some have multicellular ones (phragmobasidia). For instance, rust fungi in the order Pucciniales have phragmobasidia with four cells that are separated by walls along their cross section. Some jelly fungi in the order Tremellales also have phragmobasidia with four cells, separated by walls and arranged in the shape of a cross. Sometimes the basidium develops from a probasidium, which is not elongated like a typical hypha. The basidium may be stalked or attached directly to the hyphae. The basidium is normally club-shaped: narrow at the stem and wide near its outer end. It is widest at the hemispherical dome at its apex, and its base is about half the size of the widest diameter. Basidia with a short and narrow base are shaped like an inverted egg, and occur in genera such as Paullicorticium, Oliveonia, and Tulasnella. Basidia with a wide base are often shaped like a barrel. How basidiospores are expelled In most Basidiomycota, the basidiospores are forcibly expelled. The propulsive force is derived from a sudden change in the center of gravity of the discharged spore.
Important factors in forcible discharge include Buller's drop, a drop of fluid that builds up at the nearer tip (hilar appendage) of each basidiospore; the offset attachment of the spore to the extending narrow prong; and the presence of hygroscopic regions on the basidiospore surface. Basidiospore discharge can only succeed after sufficient water vapor has condensed on the spore. When a basidiospore matures, sugars present in the cell wall begin to serve as condensation loci for water vapour in the air. Two separate regions of condensation are critical. At the pointed tip of the spore (the hilum) closest to the supporting basidium, Buller's drop builds up as a large, almost spherical water droplet. At the same time, condensation occurs in a thin film on the stalk-facing part of the spore. When these two bodies of water combine, the release of surface tension and the sudden change in the center of gravity abruptly expel the basidiospore. Remarkably, the initial acceleration of the spore is estimated to be about 10,000 . Evolutionary loss of expulsion by force Some basidiomycetes do not have a means to forcibly expel their basidiospores, although they still form them. In each of these groups, spore dispersal occurs through other means. For example: Members of the order Phallales (stinkhorns) rely on insect vectors for dispersal. The dry spores of the Lycoperdales (puffballs) and Sclerodermataceae (earth balls and kin) are dispersed when the basidiocarps are disturbed. Species of the Nidulariales (bird's nest fungi) use a splash cup mechanism. In these cases the basidiospore typically lacks a hilar appendage, and expulsion by force does not occur. Each example is thought to represent an independent evolutionary loss of the forcible discharge mechanism that is ancestral to all basidiomycetes.
Biology and health sciences
Fungal morphology and anatomy
Biology
140618
https://en.wikipedia.org/wiki/Tortoise
Tortoise
Tortoises are reptiles of the family Testudinidae of the order Testudines (Latin for "tortoise"). Like other turtles, tortoises have a shell to protect them from predation and other threats. The shell in tortoises is generally hard, and like other members of the suborder Cryptodira, they retract their necks and heads directly backward into the shell to protect them. Tortoises can vary in size: some species, such as the Galápagos giant tortoise, grow to more than in length, whereas others, like the speckled Cape tortoise, have shells that measure only long. Several lineages of tortoises have independently evolved very large body sizes in excess of , including the Galápagos giant tortoise and the Aldabra giant tortoise. They are usually diurnal animals with tendencies to be crepuscular depending on the ambient temperatures. They are generally reclusive animals. Tortoises are the longest-living land animals in the world, although the longest-living species of tortoise is a matter of debate. Galápagos tortoises are noted to live over 150 years, but an Aldabra giant tortoise named Adwaita may have lived an estimated 255 years. In general, most tortoise species can live 80–150 years. Tortoises are placid and slow-moving, with an average walking speed of 0.2–0.5 km/h. Terminology Differences exist in usage of the common terms turtle, tortoise, and terrapin, depending on the variety of English being used; usage is inconsistent and contradictory. These terms are common names and do not reflect precise biological or taxonomic distinctions. The American Society of Ichthyologists and Herpetologists uses "turtle" to describe all species of the order Testudines, regardless of whether they are land-dwelling or sea-dwelling, and uses "tortoise" as a more specific term for slow-moving terrestrial species.
General American usage agrees; turtle is often a general term; tortoise is used only in reference to terrestrial turtles or, more narrowly, only those members of Testudinidae, the family of modern land tortoises; and terrapin may refer to turtles that are small and live in fresh and brackish water, in particular the diamondback terrapin (Malaclemys terrapin). In America, for example, the members of the genus Terrapene dwell on land, yet are referred to as box turtles rather than tortoises. British usage, by contrast, tends not to use "turtle" as a generic term for all members of the order, and also applies the term "tortoises" broadly to all land-dwelling members of the order Testudines, regardless of whether they are actually members of the family Testudinidae. In Britain, terrapin is used to refer to a larger group of semiaquatic turtles than the restricted meaning in America. Australian usage is different from both American and British usage. Land tortoises are not native to Australia, and traditionally freshwater turtles have been called "tortoises" in Australia. Some Australian experts disapprove of this usage—believing that the term tortoises is "better confined to purely terrestrial animals with very different habits and needs, none of which are found in this country"—and promote the use of the term "freshwater turtle" to describe Australia's primarily aquatic members of the order Testudines because it avoids misleading use of the word "tortoise" and also is a useful distinction from marine turtles. Biology Life cycle Most species of tortoises lay small clutches, seldom exceeding 20 eggs, and many species have clutch sizes of only 1–2 eggs. Incubation is characteristically long in most species; the average incubation period is between 100 and 160 days. Egg-laying typically occurs at night, after which the mother tortoise covers her clutch with sand, soil, and organic material.
The eggs are left unattended, and depending on the species, take from 60 to 120 days to incubate. The size of the egg depends on the size of the mother and can be estimated by examining the width of the cloacal opening between the carapace and plastron. The plastron of a female tortoise often has a noticeable V-shaped notch below the tail which facilitates passing the eggs. Upon completion of the incubation period, a fully formed hatchling uses an egg tooth to break out of its shell. It digs to the surface of the nest and begins a life of survival on its own. They are hatched with an embryonic egg sac which serves as a source of nutrition for the first three to seven days until they have the strength and mobility to find food. Juvenile tortoises often require a different balance of nutrients than adults, so may eat foods which a more mature tortoise would not. For example, the young of a strictly herbivorous species commonly will consume worms or insect larvae for additional protein. The number of concentric rings on the carapace, much like the cross-section of a tree, can sometimes give a clue to how old the animal is, but, since the growth depends highly on the accessibility of food and water, a tortoise that has access to plenty of forage (or is regularly fed by its owner) with no seasonal variation will have no noticeable rings. Moreover, some tortoises grow more than one ring per season, and in some others, due to wear, some rings are no longer visible. Tortoises generally have one of the longest lifespans of any animal, and some individuals are known to have lived longer than 150 years. Because of this, they symbolize longevity in some cultures, such as Chinese culture. The oldest tortoise ever recorded, and one of the oldest individual animals ever recorded, was Tu'i Malila, which was presented to the Tongan royal family by the British explorer James Cook shortly after its birth in 1777. 
Tu'i Malila remained in the care of the Tongan royal family until its death by natural causes on May 19, 1965, at the age of 188. The Alipore Zoo in India was the home to Adwaita, which zoo officials claimed was the oldest living animal until its death on March 23, 2006. Adwaita (also spelled Addwaita) was an Aldabra giant tortoise brought to India by Lord Wellesley, who handed it over to the Alipur Zoological Gardens in 1875 when the zoo was set up. West Bengal officials said records showed Adwaita was at least 150 years old, but other evidence pointed to 250. Adwaita was said to be the pet of Robert Clive. Harriet was a resident at the Australia Zoo in Queensland from 1987 to her death in 2006; she was believed to have been brought to England by Charles Darwin aboard the Beagle and then on to Australia by John Clements Wickham. Harriet died on June 23, 2006, just shy of her 176th birthday. Timothy, a female spur-thighed tortoise, lived to be about 165 years old. For 38 years, she was carried as a mascot aboard various ships in Britain's Royal Navy. Then in 1892, at age 53, she retired to the grounds of Powderham Castle in Devon. Up to the time of her death in 2004, she was believed to be the United Kingdom's oldest resident. Jonathan, a Seychelles giant tortoise living on the island of St Helena, may be as old as years. DNA analysis of the genomes of the long-lived tortoises, Lonesome George, the iconic last member of Chelonoidis abingdonii, and the Aldabra giant tortoise Aldabrachelys gigantea led to the detection of lineage-specific variants affecting DNA repair genes that might contribute to their long lifespan. Dimorphism Many species of tortoises are sexually dimorphic, though the differences between males and females vary from species to species. In some species, males have a longer, more protruding neck plate than their female counterparts, while in others, the claws are longer on the females. The male plastron is curved inwards to aid reproduction. 
The easiest way to determine the sex of a tortoise is to look at the tail. The females, as a general rule, have smaller tails, dropped down, whereas the males have much longer tails which are usually pulled up and to the side of the rear shell. Brain The brain of a tortoise is extremely small. Red-footed tortoises, from Central and South America, do not have an area in the brain called the hippocampus, which relates to emotion, learning, memory and spatial navigation. Studies have shown that red-footed tortoises may rely on an area of the brain called the medial cortex for emotional actions, an area that humans use for actions such as decision making. In the 17th century, Francesco Redi performed an experiment that involved removing the brain of a land tortoise, which then proceeded to live six months. Freshwater tortoises, when subjected to the same experiment, continued similarly, but did not live so long. Redi also cut the head off a tortoise entirely, and it lived for 23 days. Distribution Tortoises are found from southern North America to southern South America, around the Mediterranean basin, across Eurasia to Southeast Asia, in sub-Saharan Africa, Madagascar, and some Pacific islands. They are absent from Australasia. They live in diverse habitats, including deserts, arid grasslands, and scrub to wet evergreen forests, and from sea level to mountains. Most species, however, occupy semiarid habitats. Many large islands are or were characterized by species of giant tortoises. Part of the reason for this is that tortoises are good at oceanic dispersal. Despite being unable to swim, tortoises are able to survive long periods adrift at sea because they can survive months without food or fresh water. Tortoises have been known to survive oceanic dispersals of more than 740 km. 
Once on islands tortoises faced few predators or competitors and could grow to large sizes and become the dominant large herbivores on many islands due to their low metabolic rate and reduced need for fresh water compared to mammals. Today there are only two living species of giant tortoises, the Aldabra giant tortoise on Aldabra Atoll and the dozen subspecies of Galapagos giant tortoise found on the Galapagos Islands. However, until recently giant tortoises could be found on nearly every major island group, including the Bahamas, the Greater Antilles (including Cuba and Hispaniola), the Lesser Antilles, the Canary Islands, Malta, the Seychelles, the Mascarene Islands (including Mauritius and Reunion), and Madagascar. Most of these tortoises were wiped out by human arrival. Many of these giant tortoises are not closely related (belonging to different genera such as Megalochelys, Chelonoidis, Centrochelys, Aldabrachelys, Cylindraspis, and Hesperotestudo), but are thought to have independently evolved large body size through convergent evolution. Giant tortoises are notably absent from Australasia and many south Pacific islands, but the distantly related meiolaniid turtles are thought to have filled the same niche. Giant tortoises are also known from the Oligocene-Pliocene of mainland North America, South America, Europe, Asia, and Africa, but are all now extinct, which is also attributed to human activity. Diet Tortoises are generally considered to be strict herbivores, feeding on grasses, weeds, leafy greens, flowers, and some fruits. However, hunting and eating of birds has been observed on occasion. Pet tortoises typically require diets based on wild grasses, weeds, leafy greens and certain flowers. Certain species consume worms or insects and carrion in their normal habitats. Too much protein is detrimental in herbivorous species, and has been associated with shell deformities and other medical problems. 
Different tortoise species vary greatly in their nutritional requirements. Behavior Communication in tortoises is different from many other reptiles. Because they are restricted by their shell and short limbs, visual communication is not a strong form of communication in tortoises. Tortoises use olfactory cues to determine the sex of other tortoises so that they can find a potential mate. Tactile communication is important in tortoises during combat and courtship. In both combat and courtship, tortoises use ramming to communicate with other individuals. Taxonomy This species list largely follows Turtle Taxonomy Working Group (2021) and the Turtle Extinctions Working Group (2015). Family Testudinidae Batsch 1788 Alatochelon Alatochelon myrteum Aldabrachelys Loveridge and Williams 1957:166 Aldabrachelys gigantea Aldabra giant tortoise. A. g. gigantea Aldabra tortoise. A. g. arnoldi Arnold’s giant tortoise. A. g. daudinii Daudin’s giant tortoise. A. g. hololissa Domed Seychelles giant tortoise. †Aldabrachelys abrupta Late Holocene, extinct circa 1200 AD †Aldabrachelys grandidieri Late Holocene, extinct circa 884 AD Astrochelys Gray, 1873:4 Astrochelys radiata, radiated tortoise Astrochelys yniphora, angonoka tortoise, (Madagascan) plowshare tortoise Centrochelys Gray 1872:5 Centrochelys atlantica Centrochelys burchardi Tenerife giant tortoise Centrochelys marocana Centrochelys robusta Maltese giant tortoise Centrochelys sulcata, African spurred tortoise, sulcata tortoise Centrochelys vulcanica Gran Canaria giant tortoise Chelonoidis Fitzinger 1835:112 Chelonoidis alburyorum Abaco tortoise, Late Pleistocene, extinct c. 1400 CE Chelonoidis carbonarius, red-footed tortoise Chelonoidis chilensis, Chaco tortoise, Argentine tortoise or southern wood tortoise Chelonoidis cubensis Cuban giant tortoise Chelonoidis denticulatus Brazilian giant tortoise, yellow-footed tortoise C. 
dominicensis Dominican giant tortoise Chelonoidis lutzae Lutz's giant tortoise, Late Pleistocene Chelonoidis monensis Mona tortoise Chelonoidis niger Galapagos giant tortoise Chelonoidis sellovii Southern Cone giant tortoise, Pleistocene Chelonoidis sombrerensis Sombrero giant tortoise, Late Pleistocene Chersina Gray 1830:5 Chersina angulata, angulated tortoise, South African bowsprit tortoise Cheirogaster Bergounioux 1935:78 †Cheirogaster gymnesica Late Pliocene to Early Pleistocene †Cheirogaster schafferi Pliocene to Early Pleistocene Chersobius Fitzinger, 1835 Chersobius boulengeri, Karoo padloper, Karoo dwarf tortoise, Boulenger's Cape tortoise Chersobius signatus, speckled padloper tortoise Chersobius solus, Nama padloper, Berger's Cape tortoise †Cylindraspis Fitzinger 1835:112 (all species extinct) following Austin and Arnold, 2001: †Cylindraspis indica, synonym Cylindraspis borbonica, Reunion giant tortoise †Cylindraspis inepta, saddle-backed Mauritius giant tortoise or Mauritius giant domed tortoise †Cylindraspis peltastes, domed Rodrigues giant tortoise †Cylindraspis triserrata, domed Mauritius giant tortoise or Mauritius giant flat-shelled tortoise †Cylindraspis vosmaeri, saddle-backed Rodrigues giant tortoise Ergilemys Ckhikvadze, 1984 Ergilemys bruneti Ergilemys insolitus Ergilemys saikanensis Geochelone Fitzinger 1835:112 Geochelone elegans, Indian star tortoise Geochelone platynota, Burmese star tortoise Gopherus Rafinesque 1832:64 Gopherus agassizii, Mojave desert tortoise, Agassiz's desert tortoise Gopherus berlandieri, Texas tortoise, Berlandier's tortoise Gopherus flavomarginatus, Bolson tortoise Gopherus morafkai, Sonoran desert tortoise, Morafka's desert tortoise Gopherus polyphemus, gopher tortoise Gopherus evgoodei, Sinaloan desert tortoise, Goode's thornscrub tortoise Hadrianus Hadrianus corsoni (syn. H. 
octonarius) Hadrianus robustus Hadrianus schucherti Hadrianus utahensis Hesperotestudo Hesperotestudo alleni Hesperotestudo angusticeps Hesperotestudo brontops Hesperotestudo equicomes Hesperotestudo impensa Hesperotestudo incisa Hesperotestudo johnstoni Hesperotestudo kalganensis Hesperotestudo niobrarensis Hesperotestudo orthopygia Hesperotestudo osborniana Hesperotestudo percrassa Hesperotestudo riggsi Hesperotestudo tumidus Hesperotestudo turgida Hesperotestudo wilsoni Homopus Duméril and Bibron 1834:357 Homopus areolatus, common padloper, parrot-beaked tortoise, beaked Cape tortoise Homopus femoralis, greater padloper, greater dwarf tortoise Indotestudo Lindholm, 1929 Indotestudo elongata, elongated tortoise, yellow-headed tortoise Indotestudo forstenii, Forsten's tortoise, East Indian tortoise Indotestudo travancorica, Travancore tortoise Kinixys Kinixys belliana, Bell's hinge-back tortoise Kinixys erosa, forest hinge-back tortoise, serrated hinge-back tortoise Kinixys homeana, Home's hinge-back tortoise Kinixys lobatsiana, Lobatse hinge-back tortoise Kinixys natalensis, Natal hinge-back tortoise Kinixys spekii, Speke's hinge-back tortoise Malacochersus Lindholm 1929:285 Malacochersus tornieri, pancake tortoise Manouria Gray 1854:133 Manouria emys, Asian giant tortoise, brown tortoise (mountain tortoise) Manouria impressa, impressed tortoise Megalochelys Falconer, H. and Cautley, P.T. 1837. 
Megalochelys atlas, Atlas tortoise, Extinct – Pliocene to Pleistocene Megalochelys cautleyi, Cautley's giant tortoise Psammobates Fitzinger 1835:113 Psammobates geometricus, geometric tortoise Psammobates oculifer, serrated tent tortoise, Kalahari tent tortoise Psammobates tentorius, African tent tortoise Pyxis Bell 1827:395 Pyxis arachnoides, (Madagascan) spider tortoise Pyxis planicauda, flat-backed spider tortoise, (Madagascan) flat-tailed tortoise, flat-tailed spider tortoise Stigmochelys Gray, 1873 Stigmochelys pardalis, leopard tortoise Stylemys Stylemys botti Stylemys calaverensis Stylemys canetotiana Stylemys capax Stylemys conspecta Stylemys copei Stylemys emiliae Stylemys frizaciana Stylemys karakolensis Stylemys nebrascensis (syn. S. amphithorax) Stylemys neglectus Stylemys oregonensis Stylemys pygmea Stylemys uintensis Stylemys undabuna Titanochelon Titanochelon gymnesica (Bate, 1914) Balearic Islands, Pliocene Titanochelon bolivari (Hernandez-Pacheco, 1917) (type) Iberian Peninsula, Miocene Titanochelon bacharidisi (Vlachos et al., 2014) Greece, Bulgaria, Late Miocene Titanochelon perpiniana (Deperet 1885) France, Pliocene Titanochelon schafferi (Szalai, 1931) Samos, Greece, Miocene Titanochelon vitodurana (Biedermann 1862) Switzerland, Early Miocene Titanochelon kayadibiensis Karl, Staesche & Safi, 2021, Anatolia, Miocene Titanochelon eurysternum (Gervais, 1848–1852) France, Miocene Titanochelon ginsburgi (de Broin, 1977 ) France, Miocene Titanochelon leberonensis (Depéret, 1890) France, Miocene Testudo Testudo graeca, Greek tortoise, spur-thighed tortoise, Moorish tortoise Testudo hermanni, Hermann's tortoise Testudo horsfieldii, Russian tortoise Testudo kleinmanni, Egyptian tortoise, including Negev tortoise Testudo marginata, marginated tortoise Phylogeny A molecular phylogeny of tortoises, following Le et al. (2006: 525): A separate phylogeny via mtDNA analysis was found by Kehlmaier et al. 
(2021): In 2023 Kehlmaier again recovered a very similar phylogeny to the 2021 one, which further reaffirmed the evolutionary distinctiveness of the extinct Cylindraspis, but swapped the position of Gopherus and Manouria, making Gopherus the most basal genus. In human culture In religion In Hinduism, Kurma was the second Avatar of Vishnu. Like the Matsya Avatara, Kurma also belongs to the Satya Yuga. Vishnu took the form of a half-man, half-tortoise, the lower half being a tortoise. He is normally shown as having four arms. He sat on the bottom of the ocean after the Great Flood. A mountain was placed on his back by the other gods so they could churn the sea and find the ancient treasures of the Vedic peoples. In Judaism, tortoises are seen as unclean animals. Early Christians also viewed tortoises as unclean. Tortoise shells were used by the ancient Chinese as oracle bones to make predictions. In Ancient Greek mythology, Hermes crafts the first lyre from a tortoise. In space In September 1968, two Russian tortoises became the first animals to fly to and circle the Moon. Their Zond 5 mission brought them back to Earth safely.
Biology and health sciences
Reptiles
null
140627
https://en.wikipedia.org/wiki/Apatite
Apatite
Apatite is a group of phosphate minerals, usually hydroxyapatite, fluorapatite and chlorapatite, with high concentrations of OH−, F− and Cl− ions, respectively, in the crystal. The formula of the admixture of the three most common endmembers is written as Ca10(PO4)6(OH,F,Cl)2, and the crystal unit cell formulae of the individual minerals are written as Ca10(PO4)6(OH)2, Ca10(PO4)6F2 and Ca10(PO4)6Cl2. The mineral was named apatite by the German geologist Abraham Gottlob Werner in 1786, although the specific mineral he had described was reclassified as fluorapatite in 1860 by the German mineralogist Karl Friedrich August Rammelsberg. Apatite is often mistaken for other minerals. This tendency is reflected in the mineral's name, which is derived from the Greek word ἀπατάω (apatáō), which means to deceive. Geology Apatite is very common as an accessory mineral in igneous and metamorphic rocks, where it is the most common phosphate mineral. However, occurrences are usually as small grains which are often visible only in thin section. Coarsely crystalline apatite is usually restricted to pegmatites, gneiss derived from sediments rich in carbonate minerals, skarns, or marble. Apatite is also found in clastic sedimentary rock as grains eroded out of the source rock. Phosphorite is a phosphate-rich sedimentary rock containing as much as 80% apatite, which is present as cryptocrystalline masses referred to as collophane. Economic quantities of apatite are also sometimes found in nepheline syenite or in carbonatites. Apatite is the defining mineral for 5 on the Mohs scale. It can be distinguished in the field from beryl and tourmaline by its relative softness. It is often fluorescent under ultraviolet light. Apatite is one of a few minerals produced and used by biological micro-environmental systems. Hydroxyapatite, also known as hydroxylapatite, is the major component of tooth enamel and bone mineral. 
A relatively rare form of apatite in which most of the OH groups are absent and containing many carbonate and acid phosphate substitutions is a large component of bone material. Fluorapatite (or fluoroapatite) is more resistant to acid attack than is hydroxyapatite; in the mid-20th century, it was discovered that communities whose water supply naturally contained fluorine had lower rates of dental caries. Fluoridated water allows fluoride ions in the teeth to exchange for hydroxyl groups in apatite. Similarly, toothpaste typically contains a source of fluoride anions (e.g. sodium fluoride, sodium monofluorophosphate). Too much fluoride results in dental fluorosis and/or skeletal fluorosis. Fission tracks in apatite are commonly used to determine the thermal histories of orogenic belts and of sediments in sedimentary basins. (U-Th)/He dating of apatite is also well established from noble gas diffusion studies for use in determining thermal histories and other, less typical applications such as paleo-wildfire dating. Uses The primary use of apatite is as a source of phosphate in the manufacture of fertilizer and in other industrial uses. It is occasionally used as a gemstone. Ground apatite was used as a pigment for the Terracotta Army of 3rd-century BCE China, and in Qing Dynasty enamel for metalware. During digestion of apatite with sulfuric acid to make phosphoric acid, hydrogen fluoride is produced as a byproduct from any fluorapatite content. This byproduct is a minor industrial source of hydrofluoric acid. Apatite is also occasionally a source of uranium and vanadium, present as trace elements in the mineral. Fluoro-chloro apatite forms the basis of the now obsolete Halophosphor fluorescent tube phosphor system. Dopant elements of manganese and antimony, at less than one mole-percent, in place of the calcium and phosphorus, impart the fluorescence, and adjustment of the fluorine-to-chlorine ratio alters the shade of white produced. 
This system has been almost entirely replaced by the Tri-Phosphor system. Apatites are also a proposed host material for storage of nuclear waste, along with other phosphates. Gemology Apatite is infrequently used as a gemstone. Transparent stones of clean color have been faceted, and chatoyant specimens have been cabochon-cut. Chatoyant stones are known as cat's-eye apatite, transparent green stones are known as asparagus stone, and blue stones have been called moroxite. If crystals of rutile have grown in the crystal of apatite, in the right light the cut stone displays a cat's-eye effect. Major sources for gem apatite are Brazil, Myanmar, and Mexico. Other sources include Canada, Czech Republic, Germany, India, Madagascar, Mozambique, Norway, South Africa, Spain, Sri Lanka, and the United States. Use as an ore mineral Apatite is occasionally found to contain significant amounts of rare-earth elements and can be used as an ore for those metals. This is preferable to traditional rare-earth ores such as monazite, as apatite is not very radioactive and does not pose an environmental hazard in mine tailings. However, apatite often contains uranium and its equally radioactive decay-chain nuclides. The town of Apatity in the Arctic North of Russia was named for its mining operations for these ores. Apatite is an ore mineral at the Hoidas Lake rare-earth project. Thermodynamics The standard enthalpies of formation in the crystalline state of hydroxyapatite, chlorapatite and a preliminary value for bromapatite, have been determined by reaction-solution calorimetry. Speculations on the existence of a possible fifth member of the calcium apatites family, iodoapatite, have been drawn from energetic considerations. Structural and thermodynamic properties of crystal hexagonal calcium apatites, Ca10(PO4)6(X)2 (X= OH, F, Cl, Br), have been investigated using an all-atom Born-Huggins-Mayer potential by a molecular dynamics technique. 
The accuracy of the model at room temperature and atmospheric pressure was checked against crystal structural data, with maximum deviations of c. 4% for the haloapatites and 8% for hydroxyapatite. High-pressure simulation runs, in the range 0.5–75 kbar, were performed in order to estimate the isothermal compressibility coefficient of those compounds. The deformation of the compressed solids is always elastically anisotropic, with BrAp exhibiting a markedly different behavior from those displayed by HOAp and ClAp. High-pressure p-V data were fitted to the Parsafar-Mason equation of state with an accuracy better than 1%. The monoclinic solid phases Ca10(PO4)6(X)2 (X= OH, Cl) and the molten hydroxyapatite compound have also been studied by molecular dynamics. Lunar science Moon rocks collected by astronauts during the Apollo program contain traces of apatite. Following new insights about the presence of water in the moon, re-analysis of these samples in 2010 revealed water trapped in the mineral as hydroxyl, leading to estimates of water on the lunar surface at a rate of at least 64 parts per billion (100 times greater than previous estimates) and as high as 5 parts per million. If the minimum amount of mineral-locked water was hypothetically converted to liquid, it would cover the Moon's surface in roughly one meter of water. Bio-leaching The ectomycorrhizal fungi Suillus granulatus and Paxillus involutus can release elements from apatite. Release of phosphate from apatite is one of the most important activities of mycorrhizal fungi, which increase phosphorus uptake in plants. Apatite group and supergroup Apatite is the prototype of a class of chemically, stoichiometrically or structurally similar minerals, biological materials, and synthetic chemicals. Those most similar to apatite are also known as apatites, such as lead apatite (pyromorphite) and barium apatite (alforsite). 
More chemically dissimilar minerals of the apatite supergroup include belovites, britholites, ellestadites and hedyphanes. Apatites have been investigated for their potential use as pigments (copper-doped alkaline earth apatites), as phosphors and for absorbing and immobilising toxic heavy metals. In apatite minerals strontium, barium and lead can be substituted for calcium; arsenate and vanadate for phosphate; and the final balancing anion can be fluoride (fluorapatites), chloride (chlorapatites), hydroxide (hydroxyapatites) or oxide (oxyapatites). Synthetic apatites add hypomanganate, hypochromate, bromide (bromoapatites), iodide (iodoapatites), sulfide (sulfoapatites), and selenide (selenoapatites). Evidence for natural sulfide substitution has been found in lunar rock samples. Furthermore, compensating substitution of monovalent and trivalent cations for calcium, of dibasic and tetrabasic anions for phosphate, and of the balancing anion, can occur to a greater or lesser degree. For example, in biological apatites there is appreciable substitution of sodium for calcium and carbonate for phosphate, in belovite sodium and cerium or lanthanum substitute for a pair of divalent metal ions, in germanate-pyromorphite germanate replaces phosphate and chloride, and in ellestadites silicate and sulphate replace pairs of phosphate anions. Metals forming smaller divalent ions, such as magnesium and iron, cannot substitute extensively for the relatively large calcium ions but may be present in small quantities.
Physical sciences
Minerals
Earth science
140710
https://en.wikipedia.org/wiki/Electrical%20reactance
Electrical reactance
In electrical circuits, reactance is the opposition presented to alternating current by inductance and capacitance. Along with resistance, it is one of two elements of impedance; however, while both elements involve transfer of electrical energy, no dissipation of electrical energy as heat occurs in reactance; instead, the reactance stores energy until a quarter-cycle later when the energy is returned to the circuit. Greater reactance gives smaller current for the same applied voltage. Reactance is used to compute amplitude and phase changes of sinusoidal alternating current going through a circuit element. Like resistance, reactance is measured in ohms, with positive values indicating inductive reactance and negative indicating capacitive reactance. It is denoted by the symbol X. An ideal resistor has zero reactance, whereas ideal reactors have no shunt conductance and no series resistance. As frequency increases, inductive reactance increases and capacitive reactance decreases. Comparison to resistance Reactance is similar to resistance in that larger reactance leads to smaller currents for the same applied voltage. Further, a circuit made entirely of elements that have only reactance (and no resistance) can be treated the same way as a circuit made entirely of resistances. These same techniques can also be used to combine elements with reactance with elements with resistance but complex numbers are typically needed. This is treated below in the section on impedance. There are several important differences between reactance and resistance, though. First, reactance changes the phase so that the current through the element is shifted by a quarter of a cycle relative to the phase of the voltage applied across the element. Second, power is not dissipated in a purely reactive element but is stored instead. Third, reactances can be negative so that they can 'cancel' each other out. 
Finally, the main circuit elements that have reactance (capacitors and inductors) have a frequency-dependent reactance, unlike resistors, which have the same resistance for all frequencies, at least in the ideal case. The term reactance was first suggested by French engineer M. Hospitalier in L'Industrie Electrique on 10 May 1893. It was officially adopted by the American Institute of Electrical Engineers in May 1894. Capacitive reactance A capacitor consists of two conductors separated by an insulator, also known as a dielectric. Capacitive reactance is an opposition to the change of voltage across an element. Capacitive reactance is inversely proportional to the signal frequency f (or angular frequency ω) and the capacitance C. There are two choices in the literature for defining reactance for a capacitor. One is to use a uniform notion of reactance as the imaginary part of impedance, in which case the reactance of a capacitor is the negative number X_C = −1/(ωC). Another choice is to define capacitive reactance as a positive number, X_C = 1/(ωC). In this case however one needs to remember to add a negative sign for the impedance of a capacitor, i.e. Z_C = −jX_C. At ω = 0, the magnitude of the capacitor's reactance is infinite, behaving like an open circuit (preventing any current from flowing through the dielectric). As frequency increases, the magnitude of reactance decreases, allowing more current to flow. As ω approaches infinity, the capacitor's reactance approaches zero, behaving like a short circuit. The application of a DC voltage across a capacitor causes positive charge to accumulate on one side and negative charge to accumulate on the other side; the electric field due to the accumulated charge is the source of the opposition to the current. When the potential associated with the charge exactly balances the applied voltage, the current goes to zero. 
Driven by an AC supply (ideal AC current source), a capacitor will only accumulate a limited amount of charge before the potential difference changes polarity and the charge is returned to the source. The higher the frequency, the less charge will accumulate and the smaller the opposition to the current. Inductive reactance Inductive reactance is a property exhibited by an inductor, and exists because an electric current produces a magnetic field around it. In the context of an AC circuit (although this concept applies any time current is changing), this magnetic field is constantly changing as a result of current that oscillates back and forth. It is this change in magnetic field that induces another electric current to flow in the same wire (counter-EMF), in a direction such as to oppose the flow of the current originally responsible for producing the magnetic field (known as Lenz's Law). Hence, inductive reactance is an opposition to the change of current through an element. For an ideal inductor in an AC circuit, the inhibitive effect on change in current flow results in a delay, or a phase shift, of the alternating current with respect to alternating voltage. Specifically, an ideal inductor (with no resistance) will cause the current to lag the voltage by a quarter cycle, or 90°. In electric power systems, inductive reactance (and capacitive reactance, though inductive reactance is more common) can limit the power capacity of an AC transmission line, because power is not completely transferred when voltage and current are out-of-phase (detailed above). That is, current will flow for an out-of-phase system; however, real power at certain times will not be transferred, because there will be points during which instantaneous current is positive while instantaneous voltage is negative, or vice versa, implying negative power transfer. Hence, real work is not performed when power transfer is "negative". 
However, current still flows even when a system is out-of-phase, which causes transmission lines to heat up due to current flow. Consequently, transmission lines can only heat up so much (or else they would physically sag too much, due to the heat expanding the metal transmission lines), so transmission line operators have a "ceiling" on the amount of current that can flow through a given line, and excessive inductive reactance can limit the power capacity of a line. Power providers utilize capacitors to shift the phase and minimize the losses, based on usage patterns. Inductive reactance is proportional to the sinusoidal signal frequency f and the inductance L, which depends on the physical shape of the inductor: X_L = ωL = 2πfL. The average current flowing through an inductance in series with a sinusoidal AC voltage source of RMS amplitude V and frequency f is equal to: I = V/(2πfL). Because a square wave has multiple amplitudes at sinusoidal harmonics, the average current flowing through an inductance in series with a square wave AC voltage source of RMS amplitude V and frequency f is equal to: I = (π²/8)·V/(2πfL) = πV/(16fL), making it appear as if the inductive reactance to a square wave was about 19% smaller than the reactance to the AC sine wave. Any conductor of finite dimensions has inductance; the inductance is made larger by the multiple turns in an electromagnetic coil. Faraday's law of electromagnetic induction gives the counter-emf (voltage opposing current) due to a rate-of-change of magnetic flux density through a current loop. For an inductor consisting of a coil with N loops this gives: ℰ = −N dΦ_B/dt. The counter-emf is the source of the opposition to current flow. A constant direct current has a zero rate-of-change, and sees an inductor as a short-circuit (it is typically made from a material with a low resistivity). An alternating current has a time-averaged rate-of-change that is proportional to frequency; this causes the increase in inductive reactance with frequency. 
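The reactance formulas above (X_L = 2πfL and X_C = −1/(2πfC)) are easy to check numerically. A minimal sketch in Python; the component values and function names are my own, chosen purely for illustration:

```python
import math

def inductive_reactance(f, L):
    """X_L = 2*pi*f*L: grows linearly with frequency."""
    return 2 * math.pi * f * L

def capacitive_reactance(f, C):
    """X_C = -1/(2*pi*f*C): negative under the convention X = Im(Z)."""
    return -1.0 / (2 * math.pi * f * C)

# Illustrative values: a 10 mH inductor and a 100 nF capacitor.
for f in (50.0, 5000.0):
    print(f"{f:6.0f} Hz: X_L = {inductive_reactance(f, 10e-3):9.2f} ohm, "
          f"X_C = {capacitive_reactance(f, 100e-9):11.1f} ohm")

# The square-wave figure quoted in the text: each odd harmonic n sees
# reactance n*w*L, and the sum over odd n of 1/n^2 is pi^2/8, so the
# effective reactance is scaled by 8/pi^2 (about 19% smaller).
print(f"8/pi^2 = {8 / math.pi ** 2:.4f}")
```

Running it shows the opposite frequency trends the text describes: X_L grows 100-fold from 50 Hz to 5 kHz while the magnitude of X_C shrinks by the same factor.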
Impedance Both reactance and resistance are components of impedance: Z = R + jX, where: Z is the complex impedance, measured in ohms; R is the resistance, measured in ohms. It is the real part of the impedance: R = Re(Z); X is the reactance, measured in ohms. It is the imaginary part of the impedance: X = Im(Z); j is the square root of minus one, usually represented by i in non-electrical formulas. j is used so as not to confuse the imaginary unit with current, commonly represented by i. When both a capacitor and an inductor are placed in series in a circuit, their contributions to the total circuit impedance are opposite. Capacitive reactance and inductive reactance contribute to the total reactance as follows: X = X_L + X_C = ωL − 1/(ωC), where: X_L = ωL is the inductive reactance, measured in ohms; X_C = −1/(ωC) is the capacitive reactance, measured in ohms; ω is the angular frequency, 2π times the frequency in Hz. Hence: if X > 0, the total reactance is said to be inductive; if X = 0, then the impedance is purely resistive; if X < 0, the total reactance is said to be capacitive. Note however that if X_L and X_C are assumed both positive by definition, then the intermediary formula changes to a difference: X = X_L − X_C, but the ultimate value is the same. Phase relationship The phase of the voltage across a purely reactive device (i.e. with zero parasitic resistance) lags the current by π/2 radians for a capacitive reactance and leads the current by π/2 radians for an inductive reactance. Without knowledge of both the resistance and reactance the relationship between voltage and current cannot be determined. The origin of the different signs for capacitive and inductive reactance is the phase factor ±j in the impedance. For a reactive component the sinusoidal voltage across the component is in quadrature (a π/2 phase difference) with the sinusoidal current through the component. The component alternately absorbs energy from the circuit and then returns energy to the circuit, thus a pure reactance does not dissipate power.
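Combining the two reactances into a complex impedance, as described above, is mechanical. A small sketch for a series RLC circuit; the component values are invented for illustration:

```python
import cmath
import math

def series_rlc_impedance(f, R, L, C):
    """Z = R + j(X_L + X_C), with X_L = wL and X_C = -1/(wC)."""
    w = 2 * math.pi * f
    return complex(R, w * L - 1.0 / (w * C))

# At resonance X_L + X_C = 0 and the impedance is purely resistive.
f_res = 1.0 / (2 * math.pi * math.sqrt(10e-3 * 1e-6))  # ~1592 Hz for 10 mH, 1 uF
z_res = series_rlc_impedance(f_res, 50.0, 10e-3, 1e-6)
print(f"resonance near {f_res:.0f} Hz, Z = {z_res.real:.1f} + {z_res.imag:.2e}j ohm")

# Well below resonance the capacitor dominates: X < 0, so the total
# reactance is capacitive and the phase angle of Z is negative.
z_low = series_rlc_impedance(100.0, 50.0, 10e-3, 1e-6)
print(f"at 100 Hz: X = {z_low.imag:.0f} ohm, "
      f"phase = {math.degrees(cmath.phase(z_low)):.1f} deg")
```

The sign test at the end mirrors the classification in the text: X > 0 inductive, X = 0 purely resistive, X < 0 capacitive.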
Physical sciences
Electrical circuits
Physics
140711
https://en.wikipedia.org/wiki/Capacitance
Capacitance
Capacitance is the ability of an object to store electric charge. It is measured by the change in charge in response to a difference in electric potential, expressed as the ratio of those quantities. Commonly recognized are two closely related notions of capacitance: self capacitance and mutual capacitance. An object that can be electrically charged exhibits self capacitance, for which the electric potential is measured between the object and ground. Mutual capacitance is measured between two components, and is particularly important in the operation of the capacitor, an elementary linear electronic component designed to add capacitance to an electric circuit. The capacitance between two conductors depends only on the geometry (the opposing surface area of the conductors and the distance between them) and the permittivity of any dielectric material between them. For many dielectric materials, the permittivity, and thus the capacitance, is independent of the potential difference between the conductors and the total charge on them. The SI unit of capacitance is the farad (symbol: F), named after the English physicist Michael Faraday. A 1 farad capacitor, when charged with 1 coulomb of electrical charge, has a potential difference of 1 volt between its plates. The reciprocal of capacitance is called elastance. Self capacitance In discussing electrical circuits, the term capacitance is usually a shorthand for the mutual capacitance between two adjacent conductors, such as the two plates of a capacitor. However, every isolated conductor also exhibits capacitance, here called self capacitance. It is measured by the amount of electric charge that must be added to an isolated conductor to raise its electric potential by one unit of measurement, e.g., one volt. The reference point for this potential is a theoretical hollow conducting sphere, of infinite radius, with the conductor centered inside this sphere. 
Self capacitance of a conductor is defined by the ratio of charge and electric potential: C = q/V, with the potential given by V = (1/(4πε₀)) ∮ σ/r dS, where q is the charge held, V is the electric potential, σ is the surface charge density, dS is an infinitesimal element of area on the surface of the conductor, over which the surface charge density is integrated, r is the length from dS to a fixed point M on the conductor, and ε₀ is the vacuum permittivity. Using this method, the self capacitance of a conducting sphere of radius R in free space (i.e. far away from any other charge distributions) is: C = 4πε₀R. Example values of self capacitance are: for the top "plate" of a van de Graaff generator, typically a sphere 20 cm in radius: 22.24 pF, the planet Earth: about 710 μF. The inter-winding capacitance of a coil is sometimes called self capacitance, but this is a different phenomenon. It is actually mutual capacitance between the individual turns of the coil and is a form of stray or parasitic capacitance. This self capacitance is an important consideration at high frequencies: it changes the impedance of the coil and gives rise to parallel resonance. In many applications this is an undesirable effect and sets an upper frequency limit for the correct operation of the circuit. Mutual capacitance A common form is a parallel-plate capacitor, which consists of two conductive plates insulated from each other, usually sandwiching a dielectric material. In a parallel plate capacitor, capacitance is very nearly proportional to the surface area of the conductor plates and inversely proportional to the separation distance between the plates. If the charges on the plates are +q and −q, and V gives the voltage between the plates, then the capacitance is given by C = q/V, which gives the voltage/current relationship i(t) = C dv(t)/dt + v(t) dC/dt, where dv(t)/dt is the instantaneous rate of change of voltage, and dC/dt is the instantaneous rate of change of the capacitance. 
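The sphere formula given earlier in this section, C = 4πε₀R, reproduces both example values quoted in the text. A quick check in Python (the helper name is mine):

```python
import math

EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, in F/m

def sphere_self_capacitance(radius_m):
    """Self capacitance of an isolated conducting sphere: C = 4*pi*eps0*R."""
    return 4 * math.pi * EPSILON_0 * radius_m

# Van de Graaff top "plate", a sphere of 20 cm radius:
print(f"{sphere_self_capacitance(0.20) * 1e12:.2f} pF")   # ~22.2 pF

# The Earth, mean radius ~6371 km:
print(f"{sphere_self_capacitance(6.371e6) * 1e6:.0f} uF")  # ~709 uF
```

Both results land within rounding of the 22.24 pF and "about 710 μF" figures in the text.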
For most applications, the change in capacitance over time is negligible, so the formula reduces to: i(t) = C dv(t)/dt. The energy stored in a capacitor is found by integrating the work W: W = ∫₀^Q (q/C) dq = Q²/(2C) = ½CV². Capacitance matrix The discussion above is limited to the case of two conducting plates, although of arbitrary size and shape. The definition does not apply when there are more than two charged plates, or when the net charge on the two plates is non-zero. To handle this case, James Clerk Maxwell introduced his coefficients of potential. If three (nearly ideal) conductors are given charges Q₁, Q₂, Q₃, then the voltage at conductor 1 is given by V₁ = P₁₁Q₁ + P₁₂Q₂ + P₁₃Q₃, and similarly for the other voltages. Hermann von Helmholtz and Sir William Thomson showed that the coefficients of potential are symmetric, so that P₁₂ = P₂₁, etc. Thus the system can be described by a collection of coefficients known as the elastance matrix or reciprocal capacitance matrix, which is defined as: P_ij = ∂V_i/∂Q_j. From this, the mutual capacitance between two objects can be defined by solving for the total charge Q and using C = Q/V. Since no actual device holds perfectly equal and opposite charges on each of the two "plates", it is the mutual capacitance that is reported on capacitors. The collection of coefficients C_ij = ∂Q_i/∂V_j is known as the capacitance matrix, and is the inverse of the elastance matrix. Capacitors The capacitance of the majority of capacitors used in electronic circuits is generally several orders of magnitude smaller than the farad. The most common units of capacitance are the microfarad (μF), nanofarad (nF), picofarad (pF), and, in microcircuits, femtofarad (fF). Some applications also use supercapacitors that can be much larger, as much as hundreds of farads, and parasitic capacitive elements can be less than a femtofarad. Historical texts use other, obsolete submultiples of the farad, such as "mf" and "mfd" for microfarad (μF); "mmf", "mmfd", "pfd", "μμF" for picofarad (pF). 
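The relationship between the elastance matrix and the capacitance matrix described above can be made concrete with a toy two-conductor system; the elastance values below are invented purely for illustration:

```python
def invert_2x2(m):
    """Invert a 2x2 matrix [[a, b], [c, d]] by the cofactor formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Symmetric elastance matrix P, so that V = P Q (entries in 1/farad):
P = [[2.0e11, 0.5e11],
     [0.5e11, 3.0e11]]

# The capacitance matrix (Q = C V) is the inverse of the elastance matrix.
C = invert_2x2(P)
print(f"C11 = {C[0][0] * 1e12:.2f} pF, C12 = {C[0][1] * 1e12:.3f} pF")
print("symmetric:", C[0][1] == C[1][0])
```

Inverting a symmetric elastance matrix yields a symmetric capacitance matrix, mirroring the Helmholtz/Thomson symmetry result in the text.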
The capacitance can be calculated if the geometry of the conductors and the dielectric properties of the insulator between the conductors are known. Capacitance is proportional to the area of overlap and inversely proportional to the separation between conducting sheets. The closer the sheets are to each other, the greater the capacitance. An example is the capacitance of a capacitor constructed of two parallel plates both of area A separated by a distance d. If d is sufficiently small with respect to the smallest chord of A, there holds, to a high level of accuracy: C = ε₀εᵣA/d, where C is the capacitance, in farads; A is the area of overlap of the two plates, in square meters; ε₀ is the electric constant (ε₀ ≈ 8.854 × 10⁻¹² F/m); εᵣ is the relative permittivity (also dielectric constant) of the material in between the plates (εᵣ ≈ 1 for air); and d is the separation between the plates, in meters. The equation is a good approximation if d is small compared to the other dimensions of the plates so that the electric field in the capacitor area is uniform, and the so-called fringing field around the periphery provides only a small contribution to the capacitance. Combining the equation for capacitance with the above equation for the energy stored in a capacitor, for a flat-plate capacitor the energy stored is: W = ½CV², where W is the energy, in joules; C is the capacitance, in farads; and V is the voltage, in volts. Stray capacitance Any two adjacent conductors can function as a capacitor, though the capacitance is small unless the conductors are close together for long distances or over a large area. This (often unwanted) capacitance is called parasitic or stray capacitance. Stray capacitance can allow signals to leak between otherwise isolated circuits (an effect called crosstalk), and it can be a limiting factor for proper functioning of circuits at high frequency. 
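The parallel-plate formula and the stored-energy expression can be sketched together; the plate dimensions here are arbitrary examples, not values from the text:

```python
EPSILON_0 = 8.8541878128e-12  # electric constant, F/m

def parallel_plate_capacitance(area_m2, separation_m, eps_r=1.0):
    """C = eps0 * eps_r * A / d, valid when d is small vs. the plate dimensions."""
    return EPSILON_0 * eps_r * area_m2 / separation_m

def stored_energy(C, V):
    """W = C * V**2 / 2, from integrating the charging work."""
    return 0.5 * C * V * V

# 10 cm x 10 cm air-gap plates, 1 mm apart:
C = parallel_plate_capacitance(0.1 * 0.1, 1e-3)
print(f"C = {C * 1e12:.1f} pF")                          # ~88.5 pF
print(f"W at 9 V = {stored_energy(C, 9.0) * 1e9:.2f} nJ")
```

Halving the separation doubles the capacitance, matching the proportionality stated in the text.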
Stray capacitance between the input and output in amplifier circuits can be troublesome because it can form a path for feedback, which can cause instability and parasitic oscillation in the amplifier. It is often convenient for analytical purposes to replace this capacitance with a combination of one input-to-ground capacitance and one output-to-ground capacitance; the original configuration – including the input-to-output capacitance – is often referred to as a pi-configuration. Miller's theorem can be used to effect this replacement: it states that, if the gain ratio of two nodes is 1/K, then an impedance of Z connecting the two nodes can be replaced with a Z/(1 − K) impedance between the first node and ground and a KZ/(K − 1) impedance between the second node and ground. Since impedance varies inversely with capacitance, the internode capacitance, C, is replaced by a capacitance of KC from input to ground and a capacitance of (K − 1)C/K from output to ground. When the input-to-output gain is very large, the equivalent input-to-ground impedance is very small while the output-to-ground impedance is essentially equal to the original (input-to-output) impedance. Capacitance of conductors with simple shapes Calculating the capacitance of a system amounts to solving the Laplace equation ∇²φ = 0 with a constant potential φ on the 2-dimensional surface of the conductors embedded in 3-space. This is simplified by symmetries. There is no solution in terms of elementary functions in more complicated cases. For plane situations, analytic functions may be used to map different geometries to each other.
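The Miller replacement described above can be sketched in a few lines. This is a minimal sketch, assuming K = 1 − A_v for an inverting stage with voltage gain A_v; the function name and component values are hypothetical:

```python
def miller_split(c_internode, K):
    """Split an input-to-output capacitance C into two grounded capacitances
    using Miller's theorem (gain ratio of the two nodes taken as 1/K)."""
    c_in = K * c_internode                # KC from input to ground
    c_out = (K - 1.0) / K * c_internode   # (K-1)C/K from output to ground
    return c_in, c_out

# Illustrative: 2 pF feedback capacitance across an inverting stage with
# voltage gain A_v = -100, so K = 1 - A_v = 101 (assumed values).
c_in, c_out = miller_split(2e-12, 101.0)
# c_in is ~202 pF (the Miller-multiplied input capacitance);
# c_out stays close to the original 2 pF, as the text notes for large gain.
```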
Physical sciences
Electrical circuits
null
140806
https://en.wikipedia.org/wiki/Maximum%20likelihood%20estimation
Maximum likelihood estimation
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference. If the likelihood function is differentiable, the derivative test for finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, the ordinary least squares estimator for a linear regression model maximizes the likelihood when the random errors are assumed to have normal distributions with the same variance. From the perspective of Bayesian inference, MLE is generally equivalent to maximum a posteriori (MAP) estimation with a prior distribution that is uniform in the region of interest. In frequentist inference, MLE is a special case of an extremum estimator, with the objective function being the likelihood. Principles We model a set of observations as a random sample from an unknown joint probability distribution which is expressed in terms of a set of parameters. The goal of maximum likelihood estimation is to determine the parameters for which the observed data have the highest joint probability. We write the parameters governing the joint distribution as a vector so that this distribution falls within a parametric family where is called the parameter space, a finite-dimensional subset of Euclidean space. Evaluating the joint density at the observed data sample gives a real-valued function, which is called the likelihood function. 
For independent and identically distributed random variables, the likelihood will be the product of univariate density functions: L(θ) = ∏ᵢ f(xᵢ; θ). The goal of maximum likelihood estimation is to find the values of the model parameters that maximize the likelihood function over the parameter space, that is, θ̂ = argmax over θ ∈ Θ of L(θ). Intuitively, this selects the parameter values that make the observed data most probable. The specific value θ̂ that maximizes the likelihood function is called the maximum likelihood estimate. Further, if the function θ̂(x) so defined is measurable, then it is called the maximum likelihood estimator. It is generally a function defined over the sample space, i.e. taking a given sample as its argument. A sufficient but not necessary condition for its existence is for the likelihood function to be continuous over a parameter space Θ that is compact. For an open Θ the likelihood function may increase without ever reaching a supremum value. In practice, it is often convenient to work with the natural logarithm of the likelihood function, called the log-likelihood: ℓ(θ) = ln L(θ). Since the logarithm is a monotonic function, the maximum of ℓ(θ) occurs at the same value of θ as does the maximum of L(θ). If ℓ(θ) is differentiable in Θ, sufficient conditions for the occurrence of a maximum (or a minimum) are that the partial derivatives vanish, ∂ℓ/∂θ = 0, known as the likelihood equations. For some models, these equations can be explicitly solved for θ̂, but in general no closed-form solution to the maximization problem is known or available, and an MLE can only be found via numerical optimization. Another problem is that in finite samples, there may exist multiple roots for the likelihood equations. Whether the identified root θ̂ of the likelihood equations is indeed a (local) maximum depends on whether the matrix of second-order partial and cross-partial derivatives, the so-called Hessian matrix, is negative semi-definite at θ̂, as this indicates local concavity. Conveniently, most common probability distributions – in particular the exponential family – are logarithmically concave. 
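The maximization described above can be sketched numerically. The example below is not from the article: it fits the rate of an exponential distribution by scanning the log-likelihood over a grid of candidate values and compares the result with the closed-form MLE, which is one over the sample mean:

```python
import math
import random

random.seed(0)
# Hypothetical data: i.i.d. draws from an exponential distribution, true rate 2.0.
data = [random.expovariate(2.0) for _ in range(1000)]

def log_likelihood(rate):
    # For i.i.d. samples the joint density is a product, so the
    # log-likelihood is a sum: log f(x; rate) = log(rate) - rate * x.
    return sum(math.log(rate) - rate * x for x in data)

# Crude numerical optimization: maximize over a grid of candidate rates.
candidates = [0.01 * k for k in range(1, 501)]
mle = max(candidates, key=log_likelihood)

# The analytic maximum (setting the derivative of the log-likelihood to zero)
# is n / sum(x) = 1 / sample mean.
analytic = len(data) / sum(data)
```

With a concave log-likelihood the grid maximum lands within one grid step of the analytic solution.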
Restricted parameter space While the domain of the likelihood function—the parameter space—is generally a finite-dimensional subset of Euclidean space, additional restrictions sometimes need to be incorporated into the estimation process. The parameter space can be expressed as Θ = {θ : h(θ) = 0}, where h(θ) = (h₁(θ), …, h_r(θ)) is a vector-valued function mapping ℝᵏ into ℝʳ. Estimating the true parameter θ belonging to Θ then, as a practical matter, means to find the maximum of the likelihood function subject to the constraint h(θ) = 0. Theoretically, the most natural approach to this constrained optimization problem is the method of substitution, that is, completing the restrictions to a one-to-one reparameterization of the parameter space and rewriting the likelihood function in terms of the new parameters. Because of the equivariance of the maximum likelihood estimator, the properties of the MLE apply to the restricted estimates also. For instance, in a multivariate normal distribution the covariance matrix Σ must be positive-definite; this restriction can be imposed by replacing Σ = ΓᵀΓ, where Γ is a real upper triangular matrix and Γᵀ is its transpose. In practice, restrictions are usually imposed using the method of Lagrange which, given the constraints as defined above, leads to the restricted likelihood equations ∂ℓ/∂θ + λᵀ∂h/∂θ = 0 and h(θ) = 0, where λ is a column-vector of Lagrange multipliers and ∂h/∂θ is the Jacobian matrix of partial derivatives. Naturally, if the constraints are not binding at the maximum, the Lagrange multipliers should be zero. This in turn allows for a statistical test of the "validity" of the constraint, known as the Lagrange multiplier test. Nonparametric maximum likelihood estimation Nonparametric maximum likelihood estimation can be performed using the empirical likelihood. Properties A maximum likelihood estimator is an extremum estimator obtained by maximizing, as a function of θ, the objective function ℓ̂(θ; x). 
If the data are independent and identically distributed, then we have ℓ̂(θ; x) = (1/n) Σᵢ ln f(xᵢ; θ), this being the sample analogue of the expected log-likelihood ℓ(θ) = E[ln f(xᵢ; θ)], where this expectation is taken with respect to the true density. Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter-value. However, like other estimation methods, maximum likelihood estimation possesses a number of attractive limiting properties: As the sample size increases to infinity, sequences of maximum likelihood estimators have these properties: Consistency: the sequence of MLEs converges in probability to the value being estimated. Equivariance: If θ̂ is the maximum likelihood estimator for θ, and if g(θ) is a bijective transform of θ, then the maximum likelihood estimator for α = g(θ) is α̂ = g(θ̂). The equivariance property can be generalized to non-bijective transforms, although it applies in that case on the maximum of an induced likelihood function which is not the true likelihood in general. Efficiency, i.e. it achieves the Cramér–Rao lower bound when the sample size tends to infinity. This means that no consistent estimator has lower asymptotic mean squared error than the MLE (or other estimators attaining this bound), which also means that MLE has asymptotic normality. Second-order efficiency after correction for bias. Consistency Under the conditions outlined below, the maximum likelihood estimator is consistent. The consistency means that if the data were generated by f(·; θ₀) and we have a sufficiently large number of observations n, then it is possible to find the value of θ₀ with arbitrary precision. In mathematical terms this means that as n goes to infinity the estimator θ̂ converges in probability to its true value: θ̂ → θ₀ in probability. Under slightly stronger conditions, the estimator converges almost surely (or strongly) to θ₀. In practical applications, data is never generated by f(·; θ₀). 
Rather, f(·; θ₀) is a model, often in idealized form, of the process generated by the data. It is a common aphorism in statistics that all models are wrong. Thus, true consistency does not occur in practical applications. Nevertheless, consistency is often considered to be a desirable property for an estimator to have. To establish consistency, the following conditions are sufficient. The dominance condition can be employed in the case of i.i.d. observations. In the non-i.i.d. case, the uniform convergence in probability can be checked by showing that the sequence of likelihood functions is stochastically equicontinuous. If one wants to demonstrate that the ML estimator converges to θ₀ almost surely, then a stronger condition of uniform convergence almost surely has to be imposed. Additionally, if (as assumed above) the data were generated by f(·; θ₀), then under certain conditions, it can also be shown that the maximum likelihood estimator converges in distribution to a normal distribution. Specifically, √n(θ̂ − θ₀) converges in distribution to N(0, I⁻¹), where I is the Fisher information matrix. Functional invariance The maximum likelihood estimator selects the parameter value which gives the observed data the largest possible probability (or probability density, in the continuous case). If the parameter consists of a number of components, then we define their separate maximum likelihood estimators as the corresponding component of the MLE of the complete parameter. Consistent with this, if θ̂ is the MLE for θ, and if g(θ) is any transformation of θ, then the MLE for α = g(θ) is by definition α̂ = g(θ̂). It maximizes the so-called profile likelihood, the supremum of L(θ) over all θ with g(θ) = α. The MLE is also equivariant with respect to certain transformations of the data. If y = g(x) where g is one to one and does not depend on the parameters to be estimated, then the density functions satisfy f_Y(y) = f_X(x)/|g′(x)|, and hence the likelihood functions for X and Y differ only by a factor that does not depend on the model parameters. 
For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data. In fact, in the log-normal case, if X ~ N(μ, σ²), then Y = exp(X) follows a log-normal distribution; the density of Y follows from that of X by a change of variables. Efficiency As assumed above, if the data were generated by f(·; θ₀), then under certain conditions, it can also be shown that the maximum likelihood estimator converges in distribution to a normal distribution. It is √n-consistent and asymptotically efficient, meaning that it reaches the Cramér–Rao bound. Specifically, √n(θ̂ − θ₀) converges in distribution to N(0, I⁻¹), where I is the Fisher information matrix. In particular, it means that the bias of the maximum likelihood estimator is equal to zero up to the order 1/√n. Second-order efficiency after correction for bias However, when we consider the higher-order terms in the expansion of the distribution of this estimator, it turns out that θ̂ has bias of order 1/n. This bias is equal to (componentwise) an expression involving the components I^{jk} (with superscripts) of the inverse Fisher information matrix I⁻¹ and expectations of third-order derivatives of the log-likelihood. Using these formulae it is possible to estimate the second-order bias of the maximum likelihood estimator, and correct for that bias by subtracting it. This corrected estimator is unbiased up to the terms of order 1/n, and is called the bias-corrected maximum likelihood estimator. This bias-corrected estimator is second-order efficient (at least within the curved exponential family), meaning that it has minimal mean squared error among all second-order bias-corrected estimators, up to the terms of the order 1/n². It is possible to continue this process, that is to derive the third-order bias-correction term, and so on. However, the maximum likelihood estimator is not third-order efficient. Relation to Bayesian inference A maximum likelihood estimator coincides with the most probable Bayesian estimator given a uniform prior distribution on the parameters. 
Indeed, the maximum a posteriori estimate is the parameter θ that maximizes the probability of θ given the data, given by Bayes' theorem: P(θ | x₁, …, xₙ) = f(x₁, …, xₙ | θ) P(θ) / P(x₁, …, xₙ), where P(θ) is the prior distribution for the parameter θ and where P(x₁, …, xₙ) is the probability of the data averaged over all parameters. Since the denominator is independent of θ, the Bayesian estimator is obtained by maximizing f(x₁, …, xₙ | θ) P(θ) with respect to θ. If we further assume that the prior P(θ) is a uniform distribution, the Bayesian estimator is obtained by maximizing the likelihood function f(x₁, …, xₙ | θ). Thus the Bayesian estimator coincides with the maximum likelihood estimator for a uniform prior distribution P(θ). Application of maximum-likelihood estimation in Bayes decision theory In many practical applications in machine learning, maximum-likelihood estimation is used as the model for parameter estimation. Bayesian decision theory is about designing a classifier that minimizes total expected risk; in particular, when the costs (the loss function) associated with different decisions are equal, the classifier is minimizing the error over the whole distribution. Thus, the Bayes decision rule is stated as "decide w₁ if P(w₁|x) > P(w₂|x); otherwise decide w₂", where w₁ and w₂ are predictions of different classes. From a perspective of minimizing error, it can also be stated as minimizing ∫ P(error | x) P(x) dx, where P(error | x) = P(w₁ | x) if we decide w₂ and P(error | x) = P(w₂ | x) if we decide w₁. By applying Bayes' theorem P(wᵢ | x) = P(x | wᵢ) P(wᵢ) / P(x), and if we further assume the zero-or-one loss function, which is the same loss for all errors, the Bayes decision rule can be reformulated as: h(x) = argmax over w of P(x | w) P(w), where h(x) is the prediction and P(w) is the prior probability. Relation to minimizing Kullback–Leibler divergence and cross entropy Finding the θ that maximizes the likelihood is asymptotically equivalent to finding the θ that defines a probability distribution (Q) that has a minimal distance, in terms of Kullback–Leibler divergence, to the real probability distribution from which our data were generated (i.e., generated by P). 
In an ideal world, P and Q are the same (and the only thing unknown is the θ that defines P), but even if they are not and the model we use is misspecified, still the MLE will give us the "closest" distribution (within the restriction of a model Q that depends on θ) to the real distribution P. Examples Discrete uniform distribution Consider a case where n tickets numbered from 1 to n are placed in a box and one is selected at random (see uniform distribution); thus, the sample size is 1. If n is unknown, then the maximum likelihood estimator of n is the number m on the drawn ticket. (The likelihood is 0 for n < m, 1/n for n ≥ m, and this is greatest when n = m. Note that the maximum likelihood estimate of n occurs at the lower extreme of possible values {m, m + 1, ...}, rather than somewhere in the "middle" of the range of possible values, which would result in less bias.) The expected value of the number m on the drawn ticket, and therefore the expected value of the estimator, is (n + 1)/2. As a result, with a sample size of 1, the maximum likelihood estimator for n will systematically underestimate n by (n − 1)/2. Discrete distribution, finite parameter space Suppose one wishes to determine just how biased an unfair coin is. Call the probability of tossing a 'head' p. The goal then becomes to determine p. Suppose the coin is tossed 80 times: i.e. the sample might be something like x₁ = H, x₂ = T, ..., x₈₀ = T, and the count of the number of heads "H" is observed. The probability of tossing tails is 1 − p (so here p is θ above). Suppose the outcome is 49 heads and 31 tails, and suppose the coin was taken from a box containing three coins: one which gives heads with probability p = 1/3, one which gives heads with probability p = 1/2, and another which gives heads with probability p = 2/3. The coins have lost their labels, so which one it was is unknown. Using maximum likelihood estimation, the coin that has the largest likelihood can be found, given the data that were observed. 
By using the probability mass function of the binomial distribution with sample size equal to 80 and number of successes equal to 49, but for different values of p (the "probability of success"), the likelihood function takes one of three values: approximately 0.000 for p = 1/3, 0.012 for p = 1/2, and 0.054 for p = 2/3. The likelihood is maximized when p = 2/3, and so this is the maximum likelihood estimate for p. Discrete distribution, continuous parameter space Now suppose that there was only one coin but its p could have been any value 0 ≤ p ≤ 1. The likelihood function to be maximised is L(p) = C(80, 49) p⁴⁹ (1 − p)³¹, and the maximisation is over all possible values 0 ≤ p ≤ 1. One way to maximize this function is by differentiating with respect to p and setting to zero: up to a constant factor, the derivative is proportional to p⁴⁸ (1 − p)³⁰ [49(1 − p) − 31p]. This is a product of three terms. The first term is 0 when p = 0. The second is 0 when p = 1. The third is zero when p = 49/80. The solution that maximizes the likelihood is clearly p = 49/80 (since p = 0 and p = 1 result in a likelihood of 0). Thus the maximum likelihood estimator for p is p̂ = 49/80. This result is easily generalized by substituting a letter such as s in the place of 49 to represent the observed number of 'successes' of our Bernoulli trials, and a letter such as n in the place of 80 to represent the number of Bernoulli trials. Exactly the same calculation yields p̂ = s/n, which is the maximum likelihood estimator for any sequence of n Bernoulli trials resulting in s 'successes'. Continuous distribution, continuous parameter space For the normal distribution N(μ, σ²), which has probability density function f(x; μ, σ) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)), the corresponding probability density function for a sample of n independent identically distributed normal random variables (the likelihood) is the product of n such factors. This family of distributions has two parameters: θ = (μ, σ); so we maximize the likelihood, L(μ, σ), over both parameters simultaneously, or if possible, individually. Since the logarithm function itself is a continuous strictly increasing function over the range of the likelihood, the values which maximize the likelihood will also maximize its logarithm (the log-likelihood itself is not necessarily strictly increasing). 
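The coin example above can be reproduced directly; the likelihood values follow from the binomial probability mass function:

```python
from math import comb

def binom_pmf(n, k, p):
    # Binomial probability mass function.
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# 49 heads in 80 tosses; the three candidate coins from the example.
likelihoods = {p: binom_pmf(80, 49, p) for p in (1 / 3, 1 / 2, 2 / 3)}
best_discrete = max(likelihoods, key=likelihoods.get)   # p = 2/3 wins

# Continuous parameter space: the likelihood equation gives p_hat = 49/80.
p_hat = 49 / 80   # 0.6125
```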
The log-likelihood can be written as follows: ℓ(μ, σ) = −(n/2) ln(2πσ²) − (1/(2σ²)) Σᵢ (xᵢ − μ)². (Note: the log-likelihood is closely related to information entropy and Fisher information.) We now compute the derivatives of this log-likelihood as follows. Setting ∂ℓ/∂μ = 0 gives Σᵢ (xᵢ − μ) = 0, where x̄ = (1/n) Σᵢ xᵢ is the sample mean. This is solved by μ̂ = x̄. This is indeed the maximum of the function, since it is the only turning point in μ and the second derivative is strictly less than zero. Its expected value is equal to the parameter μ of the given distribution, which means that the maximum likelihood estimator μ̂ is unbiased. Similarly we differentiate the log-likelihood with respect to σ and equate to zero: ∂ℓ/∂σ = −n/σ + (1/σ³) Σᵢ (xᵢ − μ)² = 0, which is solved by σ̂² = (1/n) Σᵢ (xᵢ − μ)². Inserting the estimate μ̂ = x̄ we obtain σ̂² = (1/n) Σᵢ (xᵢ − x̄)². To calculate its expected value, it is convenient to rewrite the expression in terms of zero-mean random variables (statistical error) δᵢ ≡ μ − xᵢ. Expressing the estimate in these variables yields σ̂² = (1/n) Σᵢ δᵢ² − δ̄². Simplifying the expression above, utilizing the facts that E[δᵢ] = 0 and E[δᵢ²] = σ², allows us to obtain E[σ̂²] = ((n − 1)/n) σ². This means that the estimator σ̂² is biased for σ². It can also be shown that σ̂ is biased for σ, but that both σ̂ and σ̂² are consistent. Formally we say that the maximum likelihood estimator for θ = (μ, σ²) is θ̂ = (x̄, (1/n) Σᵢ (xᵢ − x̄)²). In this case the MLEs could be obtained individually. In general this may not be the case, and the MLEs would have to be obtained simultaneously. The normal log-likelihood at its maximum takes a particularly simple form: ℓ(μ̂, σ̂) = −(n/2)(ln(2πσ̂²) + 1). This maximum log-likelihood can be shown to be the same for more general least squares, even for non-linear least squares. This is often used in determining likelihood-based approximate confidence intervals and confidence regions, which are generally more accurate than those using the asymptotic normality discussed above. Non-independent variables It may be the case that variables are correlated, or more generally, not independent. Two random variables y₁ and y₂ are independent only if their joint probability density function is the product of the individual probability density functions, i.e. f(y₁, y₂) = f(y₁) f(y₂). 
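The closed-form normal MLEs derived above, the sample mean and the biased variance estimator, can be checked on simulated data; the sample below is synthetic, with assumed true parameters μ = 5 and σ = 2:

```python
import random

random.seed(1)
# Synthetic sample with assumed true parameters mu = 5, sigma = 2.
data = [random.gauss(5.0, 2.0) for _ in range(10_000)]
n = len(data)

mu_hat = sum(data) / n                                # MLE of the mean
var_hat = sum((x - mu_hat) ** 2 for x in data) / n    # MLE of the variance, biased by (n-1)/n
var_unbiased = var_hat * n / (n - 1)                  # the usual bias correction
```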
Suppose one constructs an order-n Gaussian vector out of random variables (x₁, …, xₙ), where each variable has means given by (μ₁, …, μₙ). Furthermore, let the covariance matrix be denoted by Σ. The joint probability density function of these n random variables then follows a multivariate normal distribution given by: f(x) = (2π)^(−n/2) det(Σ)^(−1/2) exp(−½ (x − μ)ᵀ Σ⁻¹ (x − μ)). In the bivariate case, the joint probability density function is the two-dimensional Gaussian density, parameterized by the two means, the two variances and the correlation coefficient. In this and other cases where a joint density function exists, the likelihood function is defined as above, in the section "principles," using this density. Example x₁, …, x_m are counts in cells / boxes 1 up to m; each box has a different probability (think of the boxes being bigger or smaller) and we fix the total number of balls that fall to be n: x₁ + x₂ + ⋯ + x_m = n. The probability of each box is pᵢ, with a constraint: p₁ + p₂ + ⋯ + p_m = 1. This is a case in which the xᵢ are not independent; the joint probability of a vector (x₁, …, x_m) is called the multinomial and has the form: n!/(x₁!⋯x_m!) p₁^x₁ ⋯ p_m^x_m. Each box taken separately against all the other boxes is a binomial, and this is an extension thereof. The log-likelihood of this is: ℓ = ln n! − Σᵢ ln xᵢ! + Σᵢ xᵢ ln pᵢ. The constraint has to be taken into account using a Lagrange multiplier λ, maximizing ℓ + λ(1 − Σᵢ pᵢ). Setting all the derivatives to 0, the most natural estimate p̂ᵢ = xᵢ/n is derived. Maximizing the log likelihood, with and without constraints, can be an unsolvable problem in closed form; then we have to use iterative procedures. Iterative procedures Except for special cases, the likelihood equations cannot be solved explicitly for an estimator θ̂. Instead, they need to be solved iteratively: starting from an initial guess of θ (say θ̂₁), one seeks to obtain a convergent sequence {θ̂ᵣ}. Many methods for this kind of optimization problem are available, but the most commonly used ones are algorithms based on an updating formula of the form θ̂ᵣ₊₁ = θ̂ᵣ + ηᵣ dᵣ(θ̂), where the vector dᵣ(θ̂) indicates the descent direction of the rth "step," and the scalar ηᵣ captures the "step length," also known as the learning rate. 
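Before turning to these gradient-based updates, the closed-form multinomial solution above, p̂ᵢ = xᵢ/n, can be checked directly with hypothetical counts:

```python
# Hypothetical counts of balls in m = 4 boxes (invented for illustration).
counts = [12, 30, 45, 13]
n = sum(counts)

# The Lagrange-multiplier solution of the constrained likelihood equations
# is the empirical proportion of each box.
p_hat = [x / n for x in counts]   # [0.12, 0.30, 0.45, 0.13]
```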
Gradient descent method Here the update is θ̂ᵣ₊₁ = θ̂ᵣ + ηᵣ ∇ℓ(θ̂ᵣ) (note: this is a maximization problem, so the sign before the gradient is flipped), with a step size ηᵣ that is small enough for convergence. The gradient descent method requires calculating the gradient at the rth iteration, but not the inverse of the second-order derivative, i.e., the Hessian matrix. Therefore, each step is computationally cheaper than in the Newton–Raphson method. Newton–Raphson method Here the update is θ̂ᵣ₊₁ = θ̂ᵣ − Hᵣ⁻¹(θ̂) sᵣ(θ̂), where sᵣ(θ̂) is the score (the gradient of the log-likelihood) and Hᵣ⁻¹(θ̂) is the inverse of the Hessian matrix of the log-likelihood function, both evaluated at the rth iteration. But because the calculation of the Hessian matrix is computationally costly, numerous alternatives have been proposed. The popular Berndt–Hall–Hall–Hausman algorithm approximates the Hessian with the outer product of the gradient. Quasi-Newton methods Other quasi-Newton methods use more elaborate secant updates to approximate the Hessian matrix. Davidon–Fletcher–Powell formula The DFP formula finds an update that is symmetric, positive-definite and closest to the current approximate value of the second-order derivative. Broyden–Fletcher–Goldfarb–Shanno algorithm BFGS also gives an update that is symmetric and positive-definite. The BFGS method is not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum; however, BFGS can have acceptable performance even for non-smooth optimization instances. Fisher's scoring Another popular method is to replace the Hessian with the Fisher information matrix, I(θ), giving us the Fisher scoring algorithm. This procedure is standard in the estimation of many methods, such as generalized linear models. Although popular, quasi-Newton methods may converge to a stationary point that is not necessarily a local or global maximum, but rather a local minimum or a saddle point. 
Therefore, it is important to assess the validity of the obtained solution to the likelihood equations, by verifying that the Hessian, evaluated at the solution, is both negative definite and well-conditioned. History Early users of maximum likelihood include Carl Friedrich Gauss, Pierre-Simon Laplace, Thorvald N. Thiele, and Francis Ysidro Edgeworth. It was Ronald Fisher however, between 1912 and 1922, who singlehandedly created the modern version of the method. Maximum-likelihood estimation finally transcended heuristic justification in a proof published by Samuel S. Wilks in 1938, now called Wilks' theorem. The theorem shows that the error in the logarithm of likelihood values for estimates from multiple independent observations is asymptotically χ²-distributed, which enables convenient determination of a confidence region around any estimate of the parameters. The only difficult part of Wilks' proof depends on the expected value of the Fisher information matrix, which is provided by a theorem proven by Fisher. Wilks continued to improve on the generality of the theorem throughout his life, with his most general proof published in 1962. Reviews of the development of maximum likelihood estimation have been provided by a number of authors.
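The Newton–Raphson update described above can be illustrated with a one-parameter model. The Poisson rate is an assumed example, chosen because the closed-form MLE, the sample mean, is available for comparison:

```python
# Newton-Raphson sketch for a Poisson rate (assumed example):
# log-likelihood l(lam) = sum(x_i) * log(lam) - n * lam + const.
data = [3, 1, 4, 1, 5, 9, 2, 6]
n, s = len(data), sum(data)

def score(lam):
    # First derivative of the log-likelihood.
    return s / lam - n

def hessian(lam):
    # Second derivative of the log-likelihood (negative: concave).
    return -s / lam ** 2

lam = 1.0                  # initial guess
for _ in range(25):
    # theta_{r+1} = theta_r - H^{-1}(theta_r) * s(theta_r)
    lam = lam - score(lam) / hessian(lam)

# Converges quadratically to the closed-form MLE, the sample mean s / n.
```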
Mathematics
Statistics
null
140857
https://en.wikipedia.org/wiki/Electron%E2%80%93positron%20annihilation
Electron–positron annihilation
Electron–positron annihilation occurs when an electron (e⁻) and a positron (e⁺, the electron's antiparticle) collide. At low energies, the result of the collision is the annihilation of the electron and positron, and the creation of energetic photons: e⁻ + e⁺ → γ + γ. At high energies, other particles, such as B mesons or the W and Z bosons, can be created. All processes must satisfy a number of conservation laws, including: Conservation of electric charge. The net charge before and after is zero. Conservation of linear momentum and total energy. This forbids the creation of a single real photon. (However, in quantum field theory this process is allowed as a virtual intermediate state; see examples of annihilation.) Conservation of angular momentum. Conservation of total (i.e. net) lepton number, which is the number of leptons (such as the electron) minus the number of antileptons (such as the positron); this can be described as a conservation of (net) matter law. As with any two charged objects, electrons and positrons may also interact with each other without annihilating, in general by elastic scattering. Low-energy case There is only a very limited set of possibilities for the final state. The most probable is the creation of two or more gamma photons. Conservation of energy and linear momentum forbid the creation of only one photon. (An exception to this rule can occur for tightly bound atomic electrons.) In the most common case, two gamma photons are created, each with energy equal to the rest energy of the electron or positron (511 keV). A convenient frame of reference is that in which the system has no net linear momentum before the annihilation; thus, after collision, the gamma photons are emitted in opposite directions. It is also common for three photons to be created, since in some angular momentum states, this is necessary to conserve charge parity. 
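A quick back-of-envelope check of the two-photon case, using the standard constants m_e c² ≈ 511 keV and hc ≈ 1239.84 keV·pm (this computation is illustrative, not from the article):

```python
M_E_C2_KEV = 511.0      # electron rest energy, keV
HC_KEV_PM = 1239.84     # h*c in keV * picometres (equivalently eV * nm)

# In the zero-momentum frame each of the two gammas carries m_e c^2.
photon_energy_kev = M_E_C2_KEV
wavelength_pm = HC_KEV_PM / photon_energy_kev   # ~2.43 pm, the electron Compton wavelength
```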
It is also possible to create any larger number of photons, but the probability becomes lower with each additional gamma photon because these more complex processes have lower probability amplitudes. Since neutrinos also have a smaller mass than electrons, it is also possible – but exceedingly unlikely – for the annihilation to produce one or more neutrino–antineutrino pairs. Such a process would be on the order of 10,000 times less likely than annihilation into photons. The same would be true for any other particles that are as light, so long as they share at least one fundamental interaction with electrons and no conservation laws forbid it. However, no other such particles are known. High-energy case If either the electron or positron, or both, have appreciable kinetic energies, other heavier particles can also be produced (such as D mesons or B mesons), since there is enough kinetic energy in the relative velocities to provide the rest energies of those particles. Alternatively, it is possible to produce photons and other light particles, but they will emerge with higher kinetic energies. At energies near and beyond the mass of the carriers of the weak force, the W and Z bosons, the strength of the weak force becomes comparable to that of the electromagnetic force. As a result, it becomes much easier to produce particles such as neutrinos that interact only weakly with other matter. The heaviest particle pairs yet produced by electron–positron annihilation in particle accelerators are W⁺–W⁻ pairs (mass 80.385 GeV/c² × 2). The heaviest singly produced particle is the Z boson (mass 91.188 GeV/c²). The driving motivation for constructing the International Linear Collider is to produce Higgs bosons (mass 125.09 GeV/c²) in this way. Practical uses The electron–positron annihilation process is the physical phenomenon relied on as the basis of positron emission tomography (PET) and positron annihilation spectroscopy (PAS). 
It is also used as a method of measuring the Fermi surface and band structure in metals by a technique called Angular Correlation of Electron Positron Annihilation Radiation. It is also used for nuclear transition. Positron annihilation spectroscopy is also used for the study of crystallographic defects in metals and semiconductors; it is considered the only direct probe for vacancy-type defects. Reverse reaction The reverse reaction, electron–positron creation, is a form of pair production governed by two-photon physics.
Physical sciences
Antimatter
Physics
140858
https://en.wikipedia.org/wiki/Pair%20production
Pair production
Pair production is the creation of a subatomic particle and its antiparticle from a neutral boson. Examples include creating an electron and a positron, a muon and an antimuon, or a proton and an antiproton. Pair production often refers specifically to a photon creating an electron–positron pair near a nucleus. As energy must be conserved, for pair production to occur, the incoming energy of the photon must be above a threshold of at least the total rest mass energy of the two particles created. (As the electron is the lightest, and hence lowest mass/energy, elementary particle, it requires the least energetic photons of all possible pair-production processes.) Conservation of energy and momentum are the principal constraints on the process. All other conserved quantum numbers (angular momentum, electric charge, lepton number) of the produced particles must sum to zero; thus the created particles have opposite values of each other. For instance, if one particle has electric charge of +1 the other must have electric charge of −1, or if one particle has strangeness of +1 then another one must have strangeness of −1. The probability of pair production in photon–matter interactions increases with photon energy and also increases approximately as the square of the atomic number of (hence, number of protons in) the nearby atom. Photon to electron and positron For photons with high photon energy (MeV scale and higher), pair production is the dominant mode of photon interaction with matter. These interactions were first observed in Patrick Blackett's counter-controlled cloud chamber, leading to the 1948 Nobel Prize in Physics. If the photon is near an atomic nucleus, the energy of the photon can be converted into an electron–positron pair: γ → e⁻ + e⁺ (in the field of a nucleus of charge Z). The photon's energy is converted to particle mass in accordance with Einstein's equation, E = mc², where E is energy, m is mass and c is the speed of light. 
The photon must have higher energy than the sum of the rest mass energies of an electron and positron (2 × 511 keV = 1.022 MeV, corresponding to a photon wavelength of about 1.21 pm) for the production to occur. (Thus, pair production does not occur in medical X-ray imaging because these X-rays only contain ~ 150 keV.) The photon must be near a nucleus in order to satisfy conservation of momentum, as an electron–positron pair produced in free space cannot satisfy conservation of both energy and momentum. Because of this, when pair production occurs, the atomic nucleus receives some recoil. The reverse of this process is electron–positron annihilation. Basic kinematics These properties can be derived through the kinematics of the interaction. Using four-vector notation, the conservation of energy–momentum before and after the interaction gives: pγ = pe− + pe+ + pR, where pR is the recoil of the nucleus. Note that the modulus of a four vector A = (A0, A) is ‖A‖² = (A0)² − A·A, which implies that (pγ)² = 0 for all cases, and (pe−)² = (pe+)² = me²c². We can square the conservation equation: (pγ)² = (pe− + pe+ + pR)². However, in most cases the recoil of the nucleus is small compared to the energy of the photon and can be neglected. Taking this approximation of pR ≈ 0 and expanding the remaining relation gives 2me²c² + 2(Ee−Ee+/c²)(1 − βe−βe+ cos θ) ≈ 0. Therefore, this approximation can only be satisfied if the electron and positron are emitted in very nearly the same direction, that is, θ ≈ 0. This derivation is a semi-classical approximation. An exact derivation of the kinematics can be done taking into account the full quantum mechanical scattering of photon and nucleus. Energy transfer The energy transfer to electron and positron in pair production interactions is given by Ek = hν − 2mec², where h is the Planck constant, ν is the frequency of the photon, and 2mec² is the combined rest mass energy of the electron–positron pair.
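The threshold and energy-transfer relations above can be checked numerically. The sketch below uses CODATA-rounded constants; the variable names are illustrative, and the 10 MeV test photon is an arbitrary example, not a value from the text.

```python
# Numerical check of the pair-production threshold discussed above.
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electron-volt
m_e_c2 = 510.998950e3   # electron rest energy, eV

# Threshold energy: both particles created at rest.
E_threshold = 2 * m_e_c2            # eV, ~1.022 MeV
print(E_threshold / 1e6)

# Corresponding maximum photon wavelength, lambda = h*c / E.
lam = h * c / (E_threshold * eV)    # metres, ~1.21 pm
print(lam * 1e12)

# Kinetic energy shared by the pair for a 10 MeV photon
# (ignoring nuclear recoil): E_k = h*nu - 2*m_e*c^2.
E_photon = 10e6                     # eV, arbitrary example
E_k_total = E_photon - E_threshold  # total kinetic energy of the pair
E_k_each = E_k_total / 2            # average per particle
print(E_k_total / 1e6, E_k_each / 1e6)
```

Running this reproduces the 1.022 MeV threshold quoted above and the corresponding picometre-scale wavelength.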
In general the electron and positron can be emitted with different kinetic energies, but the average transferred to each (ignoring the recoil of the nucleus) is (hν − 2mec²)/2. Cross section The exact analytic form for the cross section of pair production must be calculated through quantum electrodynamics in the form of Feynman diagrams and results in a complicated function. To simplify, the cross section can be written as σ = α re² Z² P(E, Z), where α is the fine-structure constant, re is the classical electron radius, Z is the atomic number of the material, and P(E, Z) is a complex-valued function that depends on the energy and atomic number. Cross sections are tabulated for different materials and energies. In 2008 the Titan laser, aimed at a 1-millimeter-thick gold target, was used to generate positron–electron pairs in large numbers. Astronomy Pair production is invoked in the heuristic explanation of hypothetical Hawking radiation. According to quantum mechanics, particle pairs are constantly appearing and disappearing as a quantum foam. In a region of strong gravitational tidal forces, the two particles in a pair may sometimes be wrenched apart before they have a chance to mutually annihilate. When this happens in the region around a black hole, one particle may escape while its antiparticle partner is captured by the black hole. Pair production is also the mechanism behind the hypothesized pair-instability supernova type of stellar explosion, where pair production suddenly lowers the pressure inside a supergiant star, leading to a partial implosion, and then explosive thermonuclear burning. Supernova SN 2006gy is hypothesized to have been a pair-production type supernova.
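The approximate Z² dependence of the cross section noted earlier can be illustrated with a toy calculation. This sketch deliberately treats the energy-dependent factor P(E, Z) as a constant, which is only a crude assumption; real values come from tabulated QED results, and the function name here is illustrative.

```python
# Rough illustration of the Z^2 scaling of the pair-production
# cross section, sigma ~ alpha * r_e^2 * Z^2 * P(E, Z).
alpha = 1 / 137.035999   # fine-structure constant
r_e = 2.8179403e-15      # classical electron radius, m

def relative_cross_section(Z, P=1.0):
    """Relative pair-production cross section (P held constant
    as a crude simplification; arbitrary units)."""
    return alpha * r_e**2 * Z**2 * P

# Lead (Z=82) versus carbon (Z=6): the ratio scales as (82/6)^2.
ratio = relative_cross_section(82) / relative_cross_section(6)
print(round(ratio, 1))
```

Under this simplification, pair production in lead is roughly 190 times more probable than in carbon at the same photon energy, which is why high-Z materials dominate as gamma shielding and converter targets.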
Physical sciences
Particle physics: General
Physics
140951
https://en.wikipedia.org/wiki/Fowl
Fowl
Fowl are birds belonging to one of two biological orders, namely the gamefowl or landfowl (Galliformes) and the waterfowl (Anseriformes). Anatomical and molecular similarities suggest these two groups are close evolutionary relatives; together, they form the fowl clade which is scientifically known as Galloanserae or Galloanseres (initially termed Galloanseri) (Latin gallus ("rooster") + ānser ("goose")). This clade is also supported by morphological and DNA sequence data as well as retrotransposon presence/absence data. Terminology As opposed to "fowl", "poultry" is a term for any kind of domesticated bird or bird captive-raised for meat, eggs, or feathers; ostriches, for example, are sometimes kept as poultry, but are neither gamefowl nor waterfowl. In colloquial speech, however, the term "fowl" is often used near-synonymously with "poultry", and many languages do not distinguish between "poultry" and "fowl". Nonetheless, the fact that the Galliformes and Anseriformes most likely form a monophyletic group makes a distinction between "fowl" and "poultry" warranted. The historic difference in English is due to the Germanic/Latin split word pairs characteristic of Middle English; the word 'fowl' is of Germanic origin (cf. Old English fugol, West Frisian fûgel, Dutch vogel, German Vogel, Swedish fågel, Danish/Norwegian fugl), whilst 'poultry' is of Latin origin via Norman French; the presence of an initial /p/ in poultry and an initial /f/ in fowl is due to Grimm's Law. Many birds that are eaten by humans are fowl, including poultry such as chickens or turkeys, game birds such as pheasants or partridges, other wildfowl like guineafowl or peafowl, and waterfowl such as ducks or geese. Characteristics While they are quite diverse ecologically and consequently, as an adaptation to their different lifestyles, also morphologically and ethologically, some features still unite water- and landfowl.
Many of these, however, are plesiomorphic for Neornithes as a whole, and are also shared with paleognaths. Galloanserae are very prolific; they regularly produce clutches of more than five or even more than 10 eggs, which is a lot for such sizeable birds. By comparison, birds of prey and pigeons rarely lay more than two eggs. While most living birds are monogamous, at least for a breeding season, many Galloanserae are notoriously polygynous or polyandrous. To ornithologists, this is particularly well known in dabbling ducks, where the males band together occasionally to forcefully mate with unwilling females. The general public is probably most familiar with the polygynous habits of domestic chickens, where usually one or two roosters are kept with a whole flock of females. Hybridization is extremely frequent in the Galloanserae, and genera, not usually known to produce viable hybrids in birds, can be brought to interbreed with comparative ease. Guineafowl have successfully produced hybrids with domestic fowl and Indian peafowl, to which they are not particularly closely related as Galliformes go. This is an important factor complicating mtDNA sequence-based research on their relationships. The mallards of North America, for example, are apparently mostly derived from some males which arrived from Siberia, settled down, and mated with American black duck ancestors.
Biology and health sciences
Basics
Animals
140953
https://en.wikipedia.org/wiki/Onager
Onager
The onager (Equus hemionus), also known as hemione or Asiatic wild ass, is a species of the family Equidae native to Asia. A member of the subgenus Asinus, the onager was described and given its binomial name by German zoologist Peter Simon Pallas in 1775. Six subspecies have been recognized, two of which are extinct. The onager weighs about and reaches about head-body length. They are reddish-brown or yellowish-brown in color and have a broad dorsal stripe on the middle of the back. The onager has never been domesticated. It is among the fastest mammals, capable of running at 64–70 km/h (40–43 mph). The onager formerly had a wider range, from southwest and central to northern Asia, including the Levant region, Arabian Peninsula, Afghanistan and Siberia; the prehistoric European wild ass subspecies ranged through Europe until the Bronze Age. During the early 20th century, the species lost most of its range in the Middle East and Eastern Asia. Today, onagers live in deserts and other arid regions of Iran, Kazakhstan, Uzbekistan, Turkmenistan, India, Mongolia and China. Other than deserts, the species lives in grasslands, plains, steppes, and savannahs. Like many other large grazing animals, the onager's range has contracted greatly under the pressures of poaching and habitat loss. The onager was classified as Near Threatened on the IUCN Red List in 2015. Of the five subspecies, one is extinct, two are endangered, and two are near threatened; its status in China is not well known. Etymology The specific name hemionus is from the Ancient Greek hēmionos, from hēmi- ('half') and onos ('donkey'); thus, 'half-donkey' or mule. The term onager comes from the Ancient Greek onagros, again from onos ('donkey') and agrios ('wild'). The species was commonly known as the Asian wild ass, in which case the term onager was reserved for the E. h. onager subspecies, more specifically known as the Persian onager. To this day, the species and that subspecies share the same name, onager.
Taxonomy and evolution The onager is a member of the subgenus Asinus, belonging to the genus Equus, and is classified under the family Equidae. The species was described and given its binomial name Equus hemionus by German zoologist Peter Simon Pallas in 1775. The Asiatic wild ass has existed among Old World equids for more than 4 million years. Within Equus, the onager's lineage was among the first to diverge, followed by that of the zebras. The kiang (E. kiang), a Tibetan relative, was previously considered to be a subspecies of the onager as E. hemionus kiang, but recent molecular studies indicate it to be a distinct species, having diverged from the ancestor of the Mongolian wild ass less than 500,000 years ago. Subspecies Five widely recognized subspecies of the onager include: A sixth possible subspecies, the Gobi khulan (E. h. luteus, also called the chigetai or dziggetai), has been proposed, but may be synonymous with E. h. hemionus. Debates over the taxonomic identity of the onager occurred until 1980. Currently, four living subspecies and one extinct subspecies of the Asiatic wild ass are recognized. The Persian onager was formerly known as Equus onager, as it was thought to be a distinct species. Characteristics The onager is generally reddish-brown in color during the summer, becoming yellowish-brown or grayish-brown in the winter. It has a black stripe bordered in white that extends down the middle of the back. The belly, the rump, and the muzzle are white, except for the Mongolian wild ass, which has a broad black dorsal stripe bordered with white. It is about in size and in head-body length. Male onagers are usually larger than females. Evolution The genus Equus, which includes all extant equines, is believed to have evolved from Dinohippus via the intermediate form Plesippus. One of the oldest species is Equus simplicidens, described as zebra-like with a donkey-shaped head.
The oldest fossil to date is about 3.5 million years old, from Idaho, USA. The genus appears to have spread quickly into the Old World, with the similarly aged Equus livenzovensis documented from western Europe and Russia. Molecular phylogenies indicate the most recent common ancestor of all modern equids (members of the genus Equus) lived around 5.6 (3.9–7.8) million years ago (Mya). Direct paleogenomic sequencing of a 700,000-year-old middle Pleistocene horse metapodial bone from Canada implies a more recent date of 4.07 Mya for the most recent common ancestor, within the range of 4.0 to 4.5 Mya. The oldest divergences are the Asian hemiones (subgenus E. (Asinus), including the kulan, onager, and kiang), followed by the African zebras (subgenera E. (Dolichohippus) and E. (Hippotigris)). All other modern forms, including the domesticated horse (and many fossil Pliocene and Pleistocene forms), belong to the subgenus E. (Equus), which diverged about 4.8 (3.2–6.5) Mya. Distribution and habitat The onager's favored habitats consist of desert plains, semideserts, oases, arid grasslands, savannahs, shrublands, steppes, mountainous steppes, and mountain ranges. The Turkmenian kulan and Mongolian wild ass are known to live in hot and colder deserts. The IUCN estimates about 28,000 mature individuals in total remain in the wild. During the late Pleistocene, around 40,000 years ago, the Asiatic wild ass ranged widely across Europe and from southwestern to northeastern Asia. It is also known from middle Pleistocene fossils from the Nefud Desert of Saudi Arabia. The onager has become regionally extinct in Israel, Saudi Arabia, Iraq, Jordan, Syria, and southern regions of Siberia. The Mongolian wild ass lives in deserts, mountains, and grasslands of Mongolia and the Inner Mongolia region of northern China. A few live in the northern Xinjiang region of northwestern China, most of them in Kalamaili Nature Reserve.
It is the most common subspecies, but its populations have drastically decreased to a few thousand due to years of poaching and habitat loss in East Asia. The Gobi Desert is the onager's main stronghold. It is regionally extinct in eastern Kazakhstan, southern Siberia, and the Manchurian region of China. The Indian wild ass was once found throughout the arid parts and desert steppes of northwest India and Pakistan, but today about 4,500 of them are found in a few very hot wildlife sanctuaries of Gujarat. The Persian onager is found in two subpopulations in southern and northern Iran. The larger population is found at Khar Turan National Park. However, it is extirpated from Afghanistan. The Turkmenian kulan used to be widespread in central to north Asia. However, it is now found in Turkmenistan and has been reintroduced in southern Kazakhstan and Uzbekistan. Biology and behavior Asiatic wild asses are mostly active at dawn and dusk, even during intense heat. Social structure Like most equids, onagers are social animals. Stallions are either solitary or live in groups of two or three. The males have been observed holding harems of females, but in other studies the dominant stallions defend territories that attract females. Differences in behaviour and social structure are likely the result of changes in climate, vegetation cover, predation, and hunting. The social behavior of the Asian wild ass can vary widely, depending on habitat, range, and threats from predators, including humans. In Mongolia and Central Asia (E. h. hemionus and E. h. kulan), an onager stallion can adopt harem-type social groups, with several mares and foals in large home areas in the southwest, or territory-based social groups in the south and southeast. Also, large annual movements occur, covering to , with movement in summer more limited than in the winter.
Onagers also occasionally form large group associations of 450 to 1,200 individuals, but this usually only occurs at places with food or water sources. As these larger groups dissolve again within a day, no overarching hierarchy beyond the ranking within the individual herds seems to exist. Young male onagers also frequently form "bachelor groups" during the winter. Such a lifestyle is also seen in the wild horse, the plains zebra (E. quagga) and the mountain zebra (E. zebra). Southern populations of onagers in the Middle East and South Asia tend to have a purely territorial life, with areas that partly overlap. Dominant stallions have home ranges of , but these can also be significantly larger. These territories include feeding and resting sites and permanent or periodic water sources. The water sources usually lie at the edge of a territory and not in its center. Mares with foals sometimes form small groups, in areas up to , which overlap with those of other groups and dominant stallions. Such features are also seen among Grévy's zebras (E. grevyi) and the African wild asses. Reproduction The Asian wild ass is sexually mature at two years old, and the first mating usually takes place at three to four years old. Breeding is seasonal, and the gestation period of onagers is 11 months; the birth lasts a little more than 10 minutes. Mating and births occur from April to September, with a peak from June to July. The mating season in India is in the rainy season. The foal can stand and starts to nurse within 15 to 20 minutes. Females with young tend to form groups of up to five females. During rearing, a foal and its dam remain close, but other animals, including the dam's own older offspring, are displaced by her. Occasionally, stallions in territorial wild populations expel the young in order to mate with the mare again. Wild Asian wild asses reach an age of 14 years, but in captivity they can live up to 26 years.
Diet The onager is a herbivore and eats grasses, herbs, leaves, fruits, and saline vegetation when available. In dry habitats, it browses on shrubs and trees, but also feeds on seed pods such as those of Prosopis, and breaks up woody vegetation with its hooves to get at the more succulent herbs growing at the base of woody plants. The succulent plants of the Zygophyllaceae form an important component of its diet in Mongolia during spring and summer. When natural water sources are unavailable, the onager digs holes in dry riverbeds to reach subsurface water. Predation The onager is preyed upon by predators such as Persian leopards and striped hyenas. A few cases of onager deaths due to predation by leopards have been recorded in Iran. Threats The greatest threat facing the onager is poaching for meat and hides, and in some areas for use in traditional medicine. The extreme isolation of many subpopulations also threatens the species, as genetic problems can result from inbreeding. Overgrazing by livestock reduces food availability, and herders also reduce the availability of water at springs. The cutting down of nutritious shrubs and bushes exacerbates the problem. Furthermore, a series of drought years could have devastating effects on this beleaguered species. Habitat loss and fragmentation are also major threats to the onager, a particular concern in Mongolia as a result of the increasingly dense network of roads, railway lines, and fences required to support mining activities. The Asiatic wild ass is also vulnerable to disease. A disease known as "South African horse sickness" caused a major decline in the Indian wild ass population in the 1960s. However, the subspecies is no longer under threat from the disease and its numbers are continuously increasing. Conservation Various breeding programs have been started for the onager subspecies in captivity and in the wild, increasing their numbers in an effort to save the endangered species.
The species is legally protected in many of the countries in which it occurs. The priority for future conservation measures is to ensure the protection of this species in particularly vulnerable parts of its range, to encourage the involvement of local people in the conservation of the onager, and to conduct further research into the behavior, ecology, and taxonomy of the species. Two onager subspecies, the Persian onager and the Turkmenian kulan, are being reintroduced to their former ranges, including regions of the Middle East where the Syrian wild ass used to occur. The two subspecies have been reintroduced to the wild in Israel since 1982 and have interbred there, whilst the Persian onager alone has been reintroduced to Jordan and the deserts of Saudi Arabia. Relationship with humans Onagers are notoriously difficult to tame. Equids were used in ancient Sumer to pull wagons, and then chariots, as depicted on the Standard of Ur. Clutton-Brock (1992) suggests that these were donkeys rather than onagers on the basis of a "shoulder stripe". However, close examination of the animals (equids, sheep and cattle) on both sides of the piece indicates that what appears to be a stripe may well be a harness, a trapping, or a joint in the inlay. Genetic testing of skeletons from that era shows that they were kungas, a cross between an onager and a donkey. In literature In the Hebrew Bible there is a reference to the onager in Job 39:5: In a novel by Honoré de Balzac, the onager is identified as the animal from which comes the ass's skin, or shagreen, of the title. A short poem by Ogden Nash also features the onager:
Biology and health sciences
Equidae
Animals
140990
https://en.wikipedia.org/wiki/Tanning%20%28leather%29
Tanning (leather)
Tanning, or hide tanning, is the process of treating skins and hides of animals to produce leather. A tannery is the place where the skins are processed. Historically, vegetable-based tanning used tannin, an acidic chemical compound derived from the bark of certain trees, in the production of leather. An alternative method, developed in the 1800s, is chrome tanning, where chromium salts are used instead of natural tannins. History Tanning hide into leather involves a process which permanently alters the protein structure of skin, making it more durable and less susceptible to decomposition, and possibly also coloring it. The place where hides are processed is known as a tannery. The English word for tanning is from medieval Latin tannare, a derivative of tannum (oak bark), from French tan (tanbark), from old Cornish tann (oak). These terms are related to a hypothetical Proto-Indo-European word meaning 'fir tree'. (The same word is the source of Old High German tanna, meaning 'fir', related to modern German Tannenbaum.) Ancient civilizations used leather for waterskins, bags, harnesses and tack, boats, armour, quivers, scabbards, boots, and sandals. Tanning was being carried out by the inhabitants of Mehrgarh in Pakistan between 7000 and 3300 BCE. Around 2500 BCE, the Sumerians began using leather, affixed by copper studs, on chariot wheels. The process of tanning was also used for boats and fishing vessels: ropes, nets, and sails were tanned using tree bark. Formerly, tanning was considered a noxious or "odoriferous trade" and was relegated to the outskirts of town, among the poor. Tanning by ancient methods is so foul-smelling that, where the old methods are still used, tanneries remain isolated from towns today. Skins typically arrived at the tannery dried stiff and dirty with soil and gore. First, the ancient tanners would soak the skins in water to clean and soften them. Then they would pound and scour the skin to remove any remaining flesh and fat.
Hair was removed by soaking the skin in urine, painting it with an alkaline lime mixture, or simply allowing the skin to putrefy for several months then dipping it in a salt solution. After the hair was loosened, the tanners scraped it off with a knife. Once the hair was removed, the tanners would "bate" (soften) the material by pounding dung into the skin, or soaking the skin in a solution of animal brains. Bating was a fermentative process that relied on enzymes produced by bacteria found in the dung. Among the kinds of dung commonly used were those of dogs or pigeons. Historically the actual tanning process used vegetable tanning. In some variations of the process, cedar oil, alum, or tannin was applied to the skin as a tanning agent. As the skin was stretched, it would lose moisture and absorb the agent. Following the adoption in medicine of soaking gut sutures in a chromium (III) solution after 1840, it was discovered that this method could also be used with leather and thus was adopted by tanners. Preparation The tanning process begins with obtaining an animal skin. When an animal skin is to be tanned, the animal is killed and skinned before the body heat leaves the tissues. This can be done by the tanner, or by obtaining a skin at a slaughterhouse, farm, or local fur trader. Before tanning, the skins are often dehaired, then have fat, meat and connective tissue removed. They are then washed and soaked in water with various compounds, and prepared to receive a tanning agent. They are then soaked, stretched, dried, and sometimes smoked. Curing Preparing hides begins by curing them with salt to prevent putrefaction of the collagen from bacterial growth during the time lag from procuring the hide to when it is processed. Curing removes water from the hides and skins using a difference in osmotic pressure. The moisture content of hides and skins is greatly reduced, and osmotic pressure increased, to the point that bacteria are unable to grow. 
In wet-salting, the hides are heavily salted, then pressed into packs for about 30 days. In brine-curing, the hides are agitated in a saltwater bath for about 16 hours. Curing can also be accomplished by preserving the hides and skins at very low temperatures. Beamhouse operations The steps in the production of leather between curing and tanning are collectively referred to as beamhouse operations. They include, in order, soaking, liming, removal of extraneous tissues (unhairing, scudding and fleshing), deliming, bating or puering, drenching, and pickling. Soaking In soaking, the hides are soaked in clean water to remove the salt left over from curing and increase the moisture so that the hide or skin can be further treated. To prevent damage of the skin by bacterial growth during the soaking period, biocides, typically dithiocarbamates, may be used. Fungicides such as TCMTB may also be added later in the process, to protect wet leathers from mold growth. After 1980, the use of pentachlorophenol and mercury-based biocides and their derivatives was forbidden. Liming After soaking, the hides are treated with milk of lime (a basic agent), typically supplemented by "sharpening agents" (disulfide-reducing agents) such as sodium sulfide, cyanides, amines, etc. This removes the hair and other keratinous matter; removes some of the interfibrillary soluble proteins such as mucins; causes the fibers to swell up and split up to the desired extent; removes the natural grease and fats to some extent; and brings the collagen in the hide to a proper condition for satisfactory tannage. The weakening of hair is dependent on the breakdown of the disulfide link of the amino acid cystine, which is the characteristic of the keratin class of proteins that gives strength to hair and wools (keratin typically makes up 90% of the dry weight of hair).
The hydrogen atoms supplied by the sharpening agent weaken the cystine molecular link, whereby the covalent disulfide bonds are ultimately ruptured, weakening the keratin. To some extent, sharpening also contributes to unhairing, as it tends to break down the hair proteins. The isoelectric point of the collagen (a tissue-strengthening protein unrelated to keratin) in the hide is also shifted to around pH 4.7 by liming. Any hairs remaining after liming are removed mechanically by scraping the skin with a dull knife, a process known as scudding. Deliming and bating The pH of the collagen is then reduced so that enzymes may act on it, in a process known as deliming. Depending on the end use of the leather, hides may be treated with enzymes to soften them, a process called bating. In modern tanning, these enzymes are purified agents, and the process no longer requires bacterial fermentation (as from dung-water soaking) to produce them. Pickling Pickling prepares rawhide for conversion into leather by modern chemical agents when mineral tanning is preferred. Once bating is complete, the hides and skins are treated by first soaking them in a bath containing common salt (sodium chloride), usually 1 quart of salt to 1 gallon of hot water. When the water cools, one fluid ounce of sulfuric acid is added. Small skins are left in this liquor for 2 days, while larger skins remain in it from 1 week to as much as 2 months. In vegetable tanning, the hides are soaked in a bath containing vegetable tannins, such as those found in gallnuts, the leaves of sumac, the leaves of certain acacia trees, and the outer green shells of walnuts, among other plants. Vegetable tanning takes longer than mineral tanning when converting rawhides into leather. Mineral-tanned leather is used principally for shoes, car seats, and upholstery in homes (sofas, etc.).
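The pickling-bath proportions quoted above (roughly 1 quart of salt and, after cooling, 1 fluid ounce of sulfuric acid per gallon of hot water) scale linearly with bath size. The helper below is a purely illustrative sketch of that arithmetic; the function name and structure are assumptions, not from the text.

```python
# Scaling the pickling-bath recipe quoted above:
# ~1 quart of salt and ~1 fl oz of sulfuric acid per gallon of water.
def pickling_bath(gallons_of_water):
    """Return (quarts of salt, fluid ounces of sulfuric acid)
    for a bath of the given size. Illustrative helper only."""
    salt_quarts = 1.0 * gallons_of_water
    acid_fl_oz = 1.0 * gallons_of_water  # added only after the water cools
    return salt_quarts, acid_fl_oz

salt, acid = pickling_bath(5)  # a 5-gallon bath
print(salt, acid)  # 5 quarts of salt, 5 fl oz of acid
```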
Vegetable-tanned leather is used in leather crafting and in making small leather items, such as wallets, handbags and clothes. Process Chrome tanning Chromium(III) sulfate has long been regarded as the most efficient and effective tanning agent. Chromium(III) compounds of the sort used in tanning are significantly less toxic than hexavalent chromium, although the latter can arise from inadequate waste treatment. Chromium(III) sulfate dissolves to give the hexaaquachromium(III) cation, [Cr(H2O)6]3+, which at higher pH undergoes processes called olation to give polychromium(III) compounds that are active in tanning, cross-linking the collagen subunits. The chemistry of [Cr(H2O)6]3+ is more complex in the tanning bath than in water due to the presence of a variety of ligands. Some ligands include the sulfate anion, the collagen's carboxyl groups, amine groups from the side chains of the amino acids, and masking agents. Masking agents are carboxylic acids, such as acetic acid, used to suppress formation of polychromium(III) chains. Masking agents allow the tanner to further increase the pH to increase collagen's reactivity without inhibiting the penetration of the chromium(III) complexes. Collagen is characterized by a high content of glycine, proline, and hydroxyproline, usually in the repeat -gly-pro-hypro-gly-. These residues give rise to collagen's helical structure. Collagen's high content of hydroxyproline allows cross-linking by hydrogen bonding within the helical structure. Ionized carboxyl groups (RCO2−) are formed by the action of hydroxide. This conversion occurs during the liming process, before introduction of the tanning agent (chromium salts). Later, during pickling, collagen carboxyl groups are temporarily protonated for ready transport of the chromium ions. During the basification step of tanning, the carboxyl groups are ionized and coordinate as ligands to the chromium(III) centers of the oxo-hydroxide clusters.
Tanning increases the spacing between protein chains in collagen from 10 to 17 Å. The difference is consistent with cross-linking by polychromium species, of the sort arising from olation and oxolation. Before the introduction of the basic chromium species in tanning, several steps are required to produce a tannable hide. The pH must be very acidic when the chromium is introduced to ensure that the chromium complexes are small enough to fit between the fibers and residues of the collagen. Once the desired level of penetration of chrome into the substance is achieved, the pH of the material is raised again to facilitate the process. This step is known as basification. In the raw state, chrome-tanned skins are greyish-blue, and so are referred to as wet blue. Chrome tanning is faster than vegetable tanning (taking less than a day for this part of the process) and produces a stretchable leather which is excellent for use in handbags and garments. After application of the chromium agent, the bath is treated with sodium bicarbonate in the basification process to increase the pH to 3.8–4.0, inducing cross-linking between the chromium and the collagen. The pH increase is normally accompanied by a gradual temperature increase up to 40 °C. Chromium's ability to form such stable bridged bonds explains why it is considered one of the most effective tanning compounds. Chromium-tanned leather can contain between 4 and 5% chromium; this uptake is reflected in the increased hydrothermal stability of the skin and its resistance to shrinkage in heated water. Vegetable tanning Vegetable tanning uses tannins (a class of polyphenol astringent chemicals), which occur naturally in the bark and leaves of many plants. Tannins bind to the collagen proteins in the hide and coat them, causing them to become less water-soluble and more resistant to bacterial attack. The process also causes the hide to become more flexible.
The primary barks processed in bark mills and used in modern times are chestnut, oak, redoul, tanoak, hemlock, quebracho, mangrove, wattle (acacia; see catechol), and myrobalans from Terminalia spp., such as Terminalia chebula. In Ethiopia, the combined vegetable oils of Niger seed (Guizotia abyssinica) and flaxseeds were used in treating the flesh side of the leather, as a means of tawing, rather than of tanning. In Yemen and Egypt, hides were tanned by soaking them in a bath containing the crushed leaves and bark of the Salam acacia (Acacia etbaica; A. nilotica kraussiana). Hides that have been stretched on frames are immersed for several weeks in vats of increasing concentrations of tannin. Vegetable-tanned hide is not very flexible. It is used for luggage, furniture, footwear, belts, and other clothing accessories. Alternative chemicals Wet white is a term used for leathers produced using alternative tanning methods that produce an off-white colored leather. Like wet blue, wet white is also a semifinished stage. Wet white can be produced using aldehydes, aluminum, zirconium, titanium, or iron salts, or a combination thereof. Concerns with the toxicity and environmental impact of any chromium (VI) that may form during the tanning process have led to increased research into more efficient wet white methods. Natural tanning The conditions present in bogs, including highly acidic water, low temperature, and a lack of oxygen, combine to preserve but severely tan the skin of bog bodies. Tawing Tawing is a method that uses alum and other aluminium salts, generally in conjunction with binders such as egg yolk, flour, or other salts. The hide is tawed by soaking in a warm potash alum and salts solution, between . The process increases the hide's pliability, stretchability, softness, and quality. Then, the hide is air dried (crusted) for several weeks, which allows it to stabilize. 
The use of alum alone for tanning rawhides is not recommended, as it shrinks the surface area of the skin, making it thicker and hard to the touch. If alum is applied to the fur, it makes the fur dull and harsh. Post-tanning finishing Depending on the finish desired, the leather may be waxed, rolled, lubricated, injected with oil, split, shaved, or dyed. Health and environmental impact The tanning process involves chemical and organic compounds that can have a detrimental effect on the environment. Agents such as chromium, vegetable tannins, and aldehydes are used in the tanning step of the process. Chemicals used in tanned leather production increase the levels of chemical oxygen demand and total dissolved solids in water when not disposed of responsibly. These processes also use large quantities of water and produce large amounts of pollutants. Boiling and sun drying can oxidize and convert the various chromium(III) compounds used in tanning into carcinogenic hexavalent chromium, or chromium(VI). This hexavalent chromium runoff and the associated scraps are then consumed by animals; in Bangladesh, this means chickens (the nation's most common source of protein). Up to 25% of the chickens in Bangladesh contained harmful levels of hexavalent chromium, adding to the nation's health burden. Chromium is not solely responsible for these diseases. Methylisothiazolinone, which is used for microbiological protection (fungal or bacterial growth), causes problems with the eyes and skin. Anthracene, which is used as a leather tanning agent, can cause problems in the kidneys and liver and is also considered a carcinogen. Formaldehyde and arsenic, which are used for leather finishing, cause health problems in the eyes, lungs, liver, kidneys, skin, and lymphatic system and are also considered carcinogens. The waste from leather tanneries is detrimental to the environment and to the people who live near them.
The use of outdated technology is a major factor in how hazardous wastewater ends up contaminating the environment. This is especially prominent in small and medium-sized tanneries in developing countries. The Leather Working Group (LWG) "provides an environmental audit protocol, designed to assess the facilities of leather manufacturers," for "traceability, energy conservation, [and] responsible management of waste products." Alternatives Untanned hides can be dried and made pliable by rubbing and stretching the fibers with a hide stretcher, and by fatting. However, the hide will revert to rawhide if not periodically replenished with fat or oil, especially if it gets wet. Many Native Americans of the arid western regions wore clothing made by this process. Smoke tanning is listed among conventional methods such as chrome tanning and vegetable tanning. Impregnation of the hide's cells with formaldehyde (from smoke) offers some microbial and water resistance. Associated processes Leftover leather would historically be turned into glue. Tanners would place scraps of hides in a vat of water and let them deteriorate for months. The mixture would then be placed over a fire to boil off the water to produce glue. A tannery may be associated with a grindery, originally a whetstone facility for sharpening knives and other sharp tools, but one that later might also carry shoemakers' tools and materials for sale. Several solid-waste and wastewater treatment methodologies are currently being researched, such as anaerobic digestion of solid wastes and wastewater sludge.
Technology
Materials
null
141008
https://en.wikipedia.org/wiki/Akashi%20Kaikyo%20Bridge
Akashi Kaikyo Bridge
The Akashi Kaikyo Bridge is a suspension bridge which links the city of Kobe on the Japanese island of Honshu and Iwaya on Awaji Island. It is part of the Kobe-Awaji-Naruto Expressway, and crosses the busy and turbulent Akashi Strait (Akashi Kaikyō in Japanese). It was completed in 1998, and at the time had the longest central span of any suspension bridge in the world. Currently, it is the second-longest, behind the 1915 Çanakkale Bridge in Turkey, which opened in March 2022. The Akashi Kaikyo Bridge is one of the key links of the Honshū–Shikoku Bridge Project, which created three routes across the Seto Inland Sea. History Background The Akashi Kaikyo Bridge forms part of the Kobe-Awaji-Naruto Expressway, the easternmost route of the bridge system linking the islands of Honshu and Shikoku. The bridge crosses the Akashi Strait (width 4 km) between Kobe on Honshu and Iwaya on Awaji Island; the other major part of the crossing is completed by the Ōnaruto Bridge, which links Awaji Island to Ōge Island across the Naruto Strait. Before the Akashi Kaikyo Bridge was built, ferries carried passengers across the Akashi Strait. A major passageway for shipping, the strait is also known for its gales, heavy rain, storms, and other natural disasters. The sinking of a ferry in stormy weather in December 1945, while it was carrying more than three times its capacity of 100 passengers, killed 304 people and first stirred public discussion of a bridge over the span. In 1955, two ferries sank in the Shiun Maru disaster during a storm, killing 168 people. The ensuing shock and public outrage convinced the Japanese government to develop plans for a bridge to cross the strait. Investigations Investigations for a bridge across the strait were first conducted by the Kobe municipal government in 1957, followed by an evaluation by the national Ministry of Construction in 1959.
In 1961, the Ministry of Construction and Japan National Railways jointly commissioned the Japan Society of Civil Engineers (JSCE) to conduct a technical study, and the JSCE established a committee to investigate five potential routes between Honshu and Shikoku. In 1967, the committee compiled the results of the technical study, concluding that a bridge across the Akashi Strait would face "extremely severe design and construction conditions, which have no similar examples in the world's long-span bridges" and recommending an additional study. In response to the report, the Honshu–Shikoku Bridge Authority (now the Honshu-Shikoku Bridge Expressway Company) was established in 1970, which conducted extensive investigations, including sea trials to establish the construction method of a submarine foundation. In 1973, a bridge with a central span of 1,780 meters on the route was approved, but construction was halted due to poor economic conditions. Construction The original plan called for a mixed railway-road bridge, but when construction on the bridge began in April 1988, it was restricted to road only, with six lanes. Actual construction did not begin until May 1988 and involved more than 100 contractors. The Great Hanshin Earthquake in January 1995 did not do substantial damage to the bridge due to anti-seismic building methods. Construction was finished on time in September 1996. The bridge was opened for traffic on April 5, 1998, in a ceremony officiated by the then-Crown Prince Naruhito and his spouse Crown Princess Masako of Japan along with Construction Minister Tsutomu Kawara. The bridge was the last Japanese megaproject of the 20th century. Structure Substructures The bridge has four substructures: two main piers (located beneath the water) and two anchorages (on land). These are denoted 1A, 2P, 3P, and 4A in sequence from the Kobe side.
1A consists of an underground circular retaining wall filled with roller-compacted concrete, 2P and 3P are circular underwater spread-foundation caisson structures, and 4A is a rectangular direct foundation. 2P is located at the edge of the sea plateau at a level depth of 40–50 m and a bearing depth of 60 m, and 3P is located at the point symmetrical to 2P with respect to the bridge's center, at a level depth of 36–39 m and a bearing depth of 57 m. The towers are located in an area of strong tidal currents where the water velocity exceeds 7 knots (about 3.6 m/s). The selected scour protection measure is a filtering layer 2 m thick installed over a range of 10 m around each caisson, covered with riprap 8 m thick. Superstructures The bridge has three spans: a long central span flanked by two shorter side spans. The Great Hanshin earthquake of January 17, 1995 (magnitude 7.3, with its epicenter 20 km west of Kobe) shifted the two towers (the only structures that had been erected at the time) apart, so that the central span had to be lengthened slightly. The central span was required to be greater than 1,500 m to accommodate maritime traffic; it was concluded before construction began that a span of between 1,950 and 2,050 meters would minimize construction costs. The bridge was designed with a dual-hinged stiffening girder system, allowing the structure to withstand high winds, earthquakes measuring up to magnitude 8.5, and harsh sea currents. The bridge also contains tuned mass dampers that are designed to operate at the resonance frequency of the bridge to damp forces. The two main supporting towers rise high above sea level, and the bridge expands and contracts with heat over the course of each day. Each anchorage required an enormous volume of concrete, and each of the two main steel cables contains 36,830 strands of wire.
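The daily thermal movement mentioned above follows the linear-expansion relation ΔL = αLΔT. A minimal Python sketch: the steel expansion coefficient (~1.2×10⁻⁵ per °C) is a standard handbook value, and the length and temperature swing below are illustrative assumptions, not figures from this article.

```python
# Linear thermal expansion: delta_L = alpha * L * delta_T.
# alpha for structural steel is a standard handbook value; the length
# and temperature swing below are illustrative assumptions.

ALPHA_STEEL = 1.2e-5  # 1/degC, typical for structural steel

def thermal_expansion(length_m: float, delta_t_c: float,
                      alpha: float = ALPHA_STEEL) -> float:
    """Change in length (m) of a member for a given temperature swing."""
    return alpha * length_m * delta_t_c

# A multi-kilometre steel deck over a 20 degC day-night swing moves
# on the order of a metre:
print(thermal_expansion(4000.0, 20.0))  # 0.96 m
```

This is why long suspension decks need expansion joints and a girder system that tolerates daily movement.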
The Akashi Kaikyo Bridge has a total of 1,737 illumination lights: 1,084 for the main cables, 116 for the main towers, 405 for the girders and 132 for the anchorages. Sets of three high-intensity discharge lamps in the colors red, green and blue are mounted on the main cables. The RGB color model and computer technology make for a variety of combinations. Twenty-eight patterns are used for occasions such as national or regional holidays, memorial days or festivities. Cost The total cost is estimated at ¥500 billion or US$3.6 billion (per 1998 exchange rates). It is expected to be repaid by charging drivers a toll to cross the bridge. The toll is 2,300 yen and the bridge is used by approximately 23,000 cars per day.
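The repayment claim above can be sanity-checked with simple arithmetic on the figures given (a ¥2,300 toll, roughly 23,000 cars per day, and a ¥500 billion cost). A Python sketch; the naive payback period ignores interest, maintenance, and traffic growth, so it is only an order-of-magnitude check.

```python
# Back-of-envelope toll revenue versus construction cost, using the
# figures quoted in the text. Ignores financing and operating costs.

TOLL_YEN = 2_300
CARS_PER_DAY = 23_000
COST_YEN = 500e9  # ~¥500 billion

daily_revenue = TOLL_YEN * CARS_PER_DAY          # ¥52.9 million per day
annual_revenue = daily_revenue * 365             # ~¥19.3 billion per year
naive_payback_years = COST_YEN / annual_revenue  # ~26 years, ignoring interest

print(f"daily:  ¥{daily_revenue:,}")
print(f"annual: ¥{annual_revenue:,}")
print(f"naive payback: {naive_payback_years:.1f} years")
```

Even under these generous assumptions, repayment takes decades, which is typical for toll-financed megaprojects.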
Technology
Transport infrastructure
null
141029
https://en.wikipedia.org/wiki/African%20trypanosomiasis
African trypanosomiasis
African trypanosomiasis is an insect-borne parasitic infection of humans and other animals. Human African trypanosomiasis (HAT), also known as African sleeping sickness or simply sleeping sickness, is caused by the species Trypanosoma brucei. Humans are infected by two types, Trypanosoma brucei gambiense (TbG) and Trypanosoma brucei rhodesiense (TbR). TbG causes over 92% of reported cases. Both are usually transmitted by the bite of an infected tsetse fly and are most common in rural areas. The first stage of the disease is characterized by fevers, headaches, itchiness, and joint pains, beginning one to three weeks after the bite. Weeks to months later, the second stage begins with confusion, poor coordination, numbness, and trouble sleeping. Diagnosis is by finding the parasite in a blood smear or in the fluid of a lymph node. A lumbar puncture is often needed to tell the difference between first- and second-stage disease. If the disease is not treated quickly, it can lead to death. Prevention of severe disease involves screening the at-risk population with blood tests for TbG. Treatment is easier when the disease is detected early and before neurological symptoms occur. Treatment of the first stage has been with the medications pentamidine or suramin. Treatment of the second stage has involved eflornithine or a combination of nifurtimox and eflornithine for TbG. Fexinidazole is a more recent treatment that can be taken by mouth, for either stage of TbG. While melarsoprol works for both types, it is typically only used for TbR, due to serious side effects. Without treatment, sleeping sickness typically results in death. The disease occurs regularly in some regions of sub-Saharan Africa with the population at risk being about 70 million in 36 countries. An estimated 11,000 people were infected as of 2015, with 2,800 new infections that year. In 2018 there were 977 new cases. In 2015 it caused around 3,500 deaths, down from 34,000 in 1990.
More than 80% of these cases are in the Democratic Republic of the Congo. Three major outbreaks have occurred in recent history: one from 1896 to 1906, primarily in Uganda and the Congo Basin, and two in 1920 and 1970, in several African countries. It is classified as a neglected tropical disease. Other animals, such as cows, may carry the disease and become infected, in which case it is known as nagana or animal trypanosomiasis. Signs and symptoms African trypanosomiasis symptoms occur in two stages: the hemolymphatic stage and the neurological stage (the latter being characterised by parasitic invasion of the central nervous system). Neurological symptoms occur in addition to the initial features, and the two stages may be difficult to distinguish based on clinical features alone. The disease has been reported to present with atypical symptoms in infected individuals who originate from non-endemic areas (e.g. travelers). The reasons for this are unclear and may be genetic. The low number of such cases may also have skewed findings. In such persons, the infection is said to present mainly as fever with gastrointestinal symptoms (e.g. diarrhoea and jaundice), with lymphadenopathy developing only rarely. Trypanosomal ulcer Systemic disease is sometimes presaged by a trypanosomal ulcer developing at the site of the infectious fly bite within 2 days of infection. The ulcer is most commonly observed in T. b. rhodesiense infection, and only rarely in T. b. gambiense (however, in T. b. gambiense infection, ulcers are more common in persons from non-endemic areas). Hemolymphatic phase The incubation period is 1–3 weeks for T. b. rhodesiense, and longer (but less precisely characterised) in T. b. gambiense infection.
The first stage, known as the hemolymphatic phase, is characterized by non-specific, generalised symptoms such as: fever (intermittent), headaches (severe), joint pains, itching, weakness, malaise, fatigue, weight loss, lymphadenopathy, and hepatosplenomegaly. Diagnosis may be delayed due to the vagueness of initial symptoms. The disease may also be mistaken for malaria (which may occur as a co-infection). Intermittent fever Fever is intermittent, with attacks lasting from a day to a week, separated by intervals of a few days to a month or longer. Episodes of fever become less frequent throughout the disease. Lymphadenopathy Invasion of the circulatory and lymphatic systems by the parasite is associated with severe swelling of lymph nodes, often to tremendous sizes. Posterior cervical lymph nodes are most commonly affected; however, axillary, inguinal, and epitrochlear lymph node involvement may also occur. Winterbottom's sign, the tell-tale swollen lymph nodes along the back of the neck, may appear. Winterbottom's sign is common in T. b. gambiense infection. Other features Those affected may additionally present with: skin rash, haemolytic anaemia, hepatomegaly and abnormal liver function, splenomegaly, endocrine disturbance, cardiac involvement (e.g. pericarditis and congestive heart failure), and ophthalmic involvement. Neurological phase The second phase of the disease, the neurological phase (also called the meningoencephalic stage), begins when the parasite invades the central nervous system by passing through the blood–brain barrier. Progression to the neurological phase occurs after an estimated 21–60 days in case of T. b. rhodesiense infection, and 300–500 days in case of T. b. gambiense infection. In actuality, the two phases overlap and are difficult to distinguish based on clinical features alone; determining the actual stage of the disease is achieved by examining the cerebrospinal fluid for the presence of the parasite.
Sleep disorders Sleep-wake disturbances are a leading feature of the neurological stage and give the disease its common name of "sleeping sickness". Infected individuals experience a disorganized and fragmented sleep-wake cycle. Those affected experience sleep inversion resulting in daytime sleep and somnolence, and nighttime periods of wakefulness and insomnia. Additionally, those affected also experience episodes of sudden sleepiness. Neurological/neurocognitive symptoms Neurological symptoms include: tremor, general muscle weakness, hemiparesis, paralysis of a limb, abnormal muscle tone, gait disturbance, ataxia, speech disturbances, paraesthesia, hyperaesthesia, anaesthesia, visual disturbance, abnormal reflexes, seizures, and coma. Parkinson-like movements might arise due to non-specific movement disorders and speech disorders. Psychiatric/behavioural symptoms Individuals may exhibit psychiatric symptoms which may sometimes dominate the clinical diagnosis and may include aggressiveness, apathy, irritability, psychotic reactions and hallucinations, anxiety, emotional lability, confusion, mania, attention deficit, and delirium. Advanced/late disease and outcomes Without treatment, the disease is invariably fatal, with progressive mental deterioration leading to coma, systemic organ failure, and death. An untreated infection with T. b. rhodesiense will cause death within months whereas an untreated infection with T. b. gambiense will cause death after several years. Damage caused in the neurological phase is irreversible. Cause Trypanosoma brucei gambiense accounts for the majority of African trypanosomiasis cases, with humans as the main reservoir needed for the transmission, while Trypanosoma brucei rhodesiense is mainly zoonotic, with accidental human infections. The epidemiology of African trypanosomiasis is dependent on the interactions between the parasite (trypanosome), the vector (tsetse fly), and the host. 
Trypanosoma brucei There are two subspecies of the parasite that are responsible for starting the disease in humans. Trypanosoma brucei gambiense causes the disease in west and central Africa, whereas Trypanosoma brucei rhodesiense has a limited geographical range and is responsible for causing the disease in east and southern Africa. In addition, a third subspecies of the parasite, known as Trypanosoma brucei brucei, affects animals but not humans. Humans are the main reservoir for T. b. gambiense, but this species can also be found in pigs and other animals. Wild game animals and cattle are the main reservoir of T. b. rhodesiense. These parasites primarily infect individuals in sub-Saharan Africa because that is where the vector (tsetse fly) is located. The two human forms of the disease also vary greatly in intensity. T. b. gambiense causes a chronic condition that can remain in a passive phase for months or years before symptoms emerge, and the infection can last about three years before death occurs. T. b. rhodesiense is the acute form of the disease: symptoms emerge within weeks, it is more virulent and faster-developing than T. b. gambiense, and death can occur within months. Furthermore, trypanosomes are surrounded by a coat that is composed of variant surface glycoproteins (VSG). These proteins act to protect the parasite from any lytic factors that are present in human plasma. The host's immune system recognizes the glycoproteins present on the coat of the parasite, leading to the production of different antibodies (IgM and IgG). These antibodies will then act to destroy the parasites that circulate in the blood. However, of the many parasites present in the plasma, a small number will experience changes in their surface coats, resulting in the formation of new VSGs.
Thus, the antibodies produced by the immune system will no longer recognize the parasite, allowing it to proliferate until new antibodies are created to combat the novel VSGs. Eventually, the immune system will no longer be able to fight off the parasite due to the constant changes in VSGs, and the infection takes hold. Vector The tsetse fly (genus Glossina) is a large, brown, biting fly that serves as both a host and vector for the trypanosome parasites. While taking blood from a mammalian host, an infected tsetse fly injects metacyclic trypomastigotes into skin tissue. From the bite, parasites first enter the lymphatic system and then pass into the bloodstream. Inside the mammalian host, they transform into bloodstream trypomastigotes and are carried to other sites throughout the body, reach other body fluids (e.g., lymph, spinal fluid), and continue to replicate by binary fission. The entire life cycle of African trypanosomes is represented by extracellular stages. A tsetse fly becomes infected with bloodstream trypomastigotes when taking a blood meal on an infected mammalian host. In the fly's midgut, the parasites transform into procyclic trypomastigotes, multiply by binary fission, leave the midgut, and transform into epimastigotes. The epimastigotes reach the fly's salivary glands and continue multiplication by binary fission. The entire cycle in the fly takes about three weeks. In addition to the bite of the tsetse fly, the disease can be transmitted by: Mother-to-child infection: the trypanosome can sometimes cross the placenta and infect the fetus. Laboratories: accidental infections, for example, through the handling of blood of an infected person and organ transplantation, although this is uncommon. Blood transfusion Sexual contact Horse-flies (Tabanidae) and stable flies (Muscidae) possibly play a role in the transmission of nagana (the animal form of sleeping sickness) and the human disease form.
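The escape dynamic described above, in which a few parasites switch to a new VSG coat before antibodies against the old one catch up, can be caricatured in a toy simulation. This is purely illustrative Python: the population size, growth rule, and switching probability are invented for the sketch, not measured biological parameters.

```python
import random

def simulate_antigenic_variation(generations=30, switch_prob=0.01, seed=1):
    """Toy model of VSG switching. Each round the host clears every
    parasite whose VSG type it has already seen, but a few parasites
    switch to a brand-new VSG first and escape. All rates are invented
    for illustration; this is not a calibrated model."""
    random.seed(seed)
    population = {0: 100}   # VSG type -> parasite count
    seen = set()            # VSG types the immune system has antibodies for
    next_type = 1
    history = []
    for _ in range(generations):
        new_pop = {}
        for vsg, count in population.items():
            # a small fraction switch coats before antibodies arrive
            switched = sum(1 for _ in range(count)
                           if random.random() < switch_prob)
            stayed = count - switched
            if switched:
                new_pop[next_type] = switched
                next_type += 1
            if vsg not in seen:
                # unrecognized types grow unchecked; recognized ones
                # are cleared (their remaining parasites are killed)
                new_pop[vsg] = new_pop.get(vsg, 0) + stayed * 2
        seen.update(population)  # antibodies now cover last round's coats
        population = new_pop
        history.append(sum(population.values()))
    return history

waves = simulate_antigenic_variation()
print(waves[:5])  # successive waves of parasitaemia
```

The output shows the characteristic waves: each antibody response crashes the dominant variant, and the escapees found the next wave.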
Pathophysiology Tryptophol is a chemical compound produced by the trypanosomal parasite in sleeping sickness which induces sleep in humans. Diagnosis The gold standard for diagnosis is the identification of trypanosomes in a sample by microscopic examination. Samples that can be used for diagnosis include ulcer fluid, lymph node aspirates, blood, bone marrow, and, during the neurological stage, cerebrospinal fluid. Detection of trypanosome-specific antibodies can be used for diagnosis, but the sensitivity and specificity of these methods are too variable to be used alone for clinical diagnosis. Further, seroconversion occurs after the onset of clinical symptoms during a T. b. rhodesiense infection, so is of limited diagnostic use. Trypanosomes can be detected from samples using two different preparations. A wet preparation can be used to look for the motile trypanosomes. Alternatively, a fixed (dried) smear can be stained using Giemsa's or Field's technique and examined under a microscope. Often, the parasite is in relatively low abundance in the sample, so techniques to concentrate the parasites can be used before microscopic examination. For blood samples, these include centrifugation followed by an examination of the buffy coat; mini anion-exchange/centrifugation; and the quantitative buffy coat (QBC) technique. For other samples, such as spinal fluid, concentration techniques include centrifugation followed by an examination of the sediment. Three serological tests are also available for the detection of the parasite: the micro-CATT (card agglutination test for trypanosomiasis), wb-CATT, and wb-LATEX. The first uses dried blood, while the other two use whole blood samples. A 2002 study found the wb-CATT to be the most efficient for diagnosis, while the wb-LATEX is a better exam for situations where greater sensitivity is required. Prevention Currently, there are few medically related prevention options for African trypanosomiasis (i.e. 
no vaccine exists for immunity). Although the risk of infection from a tsetse fly bite is minor (estimated at less than 0.1%), the use of insect repellants, wearing long-sleeved clothing, avoiding tsetse-dense areas, implementing bush clearance methods, and culling wild game are the best available options for residents of affected areas to avoid infection. Regular active and passive surveillance, involving detection and prompt treatment of new infections, together with tsetse fly control, form the backbone of the strategy used to control sleeping sickness. Systematic screening of at-risk communities is the best approach, because case-by-case screening is not practical in endemic regions. Systematic screening may be in the form of mobile clinics or fixed screening centres where teams travel daily to areas with high infection rates. Such screening efforts are important because early symptoms are not evident or serious enough to prompt people with gambiense disease to seek medical attention, particularly in very remote areas. Also, diagnosis of the disease is difficult and health workers may not associate such general symptoms with trypanosomiasis. Systematic screening allows early-stage disease to be detected and treated before the disease progresses and removes the potential human reservoir. A single case of sexual transmission of West African sleeping sickness has been reported. In July 2000, a resolution was passed to form the Pan African Tsetse and Trypanosomiasis Eradication Campaign (PATTEC). The campaign works to eradicate the tsetse vector and, subsequently, the protozoan disease, by use of insecticide-impregnated targets, fly traps, insecticide-treated cattle, ultra-low dose aerial/ground spraying (SAT) of tsetse resting sites and the sterile insect technique (SIT).
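Mass screening of the kind described above is highly sensitive to test accuracy and local prevalence, which is why positive screens are normally followed by microscopic confirmation. A minimal Bayes calculation in Python; the sensitivity, specificity, and prevalence figures are assumed illustrative values, not the actual published performance of the CATT.

```python
# Positive predictive value of a screening test via Bayes' rule.
# The numbers below are illustrative assumptions, not measured
# CATT performance figures.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(infected | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# In a community with 1% prevalence, even a good test (90% sensitive,
# 97% specific) yields mostly false positives among those who screen
# positive, so confirmation by microscopy is essential:
print(round(ppv(0.90, 0.97, 0.01), 3))  # 0.233
```

The same function shows why screening only at-risk communities (higher prevalence) makes each positive result far more informative.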
The use of SIT in Zanzibar proved effective in eliminating the entire population of tsetse flies but was expensive and is relatively impractical to use in many of the endemic countries afflicted with African trypanosomiasis. A pilot program in Senegal has reduced the tsetse fly population by as much as 99% by introducing male flies that have been sterilized by exposure to gamma rays. Treatment Treatment depends on whether the disease is discovered in the first or second stage. A requirement for treatment of the second stage is that the drug passes the blood-brain barrier. First stage The treatment for first-stage disease is fexinidazole by mouth or pentamidine by injection for T. b. gambiense. Suramin by injection is used for T. b. rhodesiense. Second stage Fexinidazole may be used for the second stage of TbG, if the disease is not severe. Otherwise, a regimen involving the combination of nifurtimox and eflornithine (nifurtimox-eflornithine combination treatment, NECT) or eflornithine alone appears to be more effective and results in fewer side effects. These treatments may replace melarsoprol when available. NECT has the benefit of requiring fewer injections of eflornithine. Intravenous melarsoprol was previously the standard treatment for second-stage (neurological phase) disease and is effective for both types. Melarsoprol is the only treatment for second stage T. b. rhodesiense; however, it causes death in 5% of people who take it. Resistance to melarsoprol can occur. Drug development A major challenge has been to find drugs that readily pass the blood-brain barrier. The latest drug to come into clinical use is fexinidazole, but promising results have also been obtained with the benzoxaborole drug acoziborole (SCYX-7158). This drug is currently under evaluation as a single-dose oral treatment, which would be a great advantage compared to currently used drugs.
Another research field that has been extensively studied in Trypanosoma brucei is the targeting of its nucleotide metabolism. The nucleotide metabolism studies have both led to the development of adenosine analogues that look promising in animal studies, and to the finding that downregulation of the P2 adenosine transporter is a common way to acquire partial drug resistance against the melaminophenyl arsenical and diamidine drug families (containing melarsoprol and pentamidine, respectively). Drug uptake and degradation are two major issues to consider to avoid drug resistance development. In the case of nucleoside analogues, they need to be taken up by the P1 nucleoside transporter (instead of P2), and they also need to be resistant to cleavage in the parasite. Prognosis If untreated, T. b. gambiense almost always results in death, with only a few individuals shown in a long-term 15-year follow-up to have survived after refusing treatment. T. b. rhodesiense, being a more acute and severe form of the disease, is consistently fatal if not treated. Disease progression greatly varies depending on disease form. For individuals who are infected by T. b. gambiense, which accounts for 92% of all of the reported cases, a person can be infected for months or even years without signs or symptoms until the advanced disease stage, where it is too late to be treated successfully. For individuals affected by T. b. rhodesiense, which accounts for 2% of all reported cases, symptoms appear within weeks or months of the infection. Disease progression is rapid and invades the central nervous system, causing death within a short amount of time. Epidemiology In 2010, it caused around 9,000 deaths, down from 34,000 in 1990. As of 2000, the disability-adjusted life-years (9 to 10 years) lost due to sleeping sickness were 2.0 million.
From 2010 to 2014, an estimated 55 million people were at risk for gambiense African trypanosomiasis and over 6 million people were at risk for rhodesiense African trypanosomiasis. In 2014, the World Health Organization reported 3,797 cases of human African trypanosomiasis, when the predicted number of cases was 5,000. The total number of reported cases in 2014 represents an 86% reduction from the total number of cases reported in 2000. The disease has been recorded as occurring in 37 countries, all in sub-Saharan Africa. The Democratic Republic of the Congo is the most affected country in the world, accounting for 75% of the Trypanosoma brucei gambiense cases. In 2009, the population at risk was estimated at about 69 million, with one-third of this number at a 'very high' to 'moderate' risk and the remaining two-thirds at a 'low' to 'very low' risk. Since then, the number of people affected by the disease has continued to decline, with fewer than 1,000 cases per year reported from 2018 onwards. Against this backdrop, sleeping sickness elimination is considered a real possibility, with the World Health Organization targeting the elimination of transmission of the gambiense form by 2030. History The condition has been present in Africa for thousands of years. Because of a lack of travel between Indigenous people, sleeping sickness in humans had been limited to isolated pockets. This changed after Arab slave traders entered central Africa from the east, following the Congo River, bringing parasites along. Gambian sleeping sickness travelled up the Congo River, and then further east. An Arab writer of the 14th century left the following description of the case of a sultan of the Mali Kingdom: "His end was to be overtaken by the sleeping sickness (illat an-nawm) which is a disease that frequently befalls the inhabitants of these countries, especially their chieftains. Sleep overtakes one of them in such a manner that it is hardly possible to awake him."
The British naval surgeon John Atkins described the disease on his return from West Africa in 1734. French naval surgeon Marie-Théophile Griffon du Bellay treated and described cases while stationed aboard the hospital ship Caravane in Gabon in the late 1860s. In 1901, a devastating epidemic erupted in Uganda, killing more than 250,000 people, including about two-thirds of the population in the affected lakeshore areas. According to The Cambridge History of Africa, "It has been estimated that up to half the people died of sleeping-sickness and smallpox in the lands on either bank of the lower river Congo." The causative agent and vector were identified in 1903 by David Bruce, and the subspecies of the protozoa were differentiated in 1910. Bruce had earlier shown that T. brucei was the cause of a similar disease in horses and cattle that was transmitted by the tsetse fly (Glossina morsitans). The first effective treatment, atoxyl, an arsenic-based drug developed by Paul Ehrlich and Kiyoshi Shiga, was introduced in 1910, but blindness was a serious side effect. Suramin was first synthesized by Oskar Dressel and Richard Kothe in 1916 for Bayer. It was introduced in 1920 to treat the first stage of the disease. By 1922, suramin was generally combined with tryparsamide (another pentavalent organoarsenic drug), the first drug to enter the nervous system and be useful in the treatment of the second stage of the gambiense form. Tryparsamide was announced in the Journal of Experimental Medicine in 1919 and tested in the Belgian Congo by Louise Pearce of the Rockefeller Institute in 1920. It was used during the grand epidemic in West and Central Africa on millions of people and was the mainstay of therapy until the 1960s. American medical missionary Arthur Lewis Piper was active in using tryparsamide to treat sleeping sickness in the Belgian Congo in 1925. Pentamidine, a highly effective drug for the first stage of the disease, has been used since 1937.
During the 1950s, it was widely used as a prophylactic agent in western Africa, leading to a sharp decline in infection rates. At the time, eradication of the disease was thought to be at hand. The organoarsenical melarsoprol (Arsobal), developed in the 1940s, is effective for people with second-stage sleeping sickness. However, 3–10% of those injected have reactive encephalopathy (convulsions, progressive coma, or psychotic reactions), and 10–70% of such cases result in death; it can cause brain damage in those who survive the encephalopathy. However, due to its effectiveness, melarsoprol is still used today. Resistance to melarsoprol is increasing, and combination therapy with nifurtimox is currently under research. Eflornithine (difluoromethylornithine or DFMO), the most modern treatment, was developed in the 1970s by Albert Sjoerdsma and underwent clinical trials in the 1980s. The drug was approved by the United States Food and Drug Administration in 1990. Aventis, the company responsible for its manufacture, halted production in 1999. In 2001, Aventis, in association with Médecins Sans Frontières and the World Health Organization, signed a long-term agreement to manufacture and donate the drug. Besides sleeping sickness, previous names for the disease have included negro lethargy, maladie du sommeil (French), Schlafkrankheit (German), African lethargy, and Congo trypanosomiasis. Research The genome of the parasite has been sequenced and several proteins have been identified as potential targets for drug treatment. Analysis of the genome also revealed why generating a vaccine for this disease has been so difficult: T. brucei has over 800 genes that make proteins the parasite "mixes and matches" to evade immune system detection. The use of a genetically modified form of a bacterium that occurs naturally in the gut of the vector is being studied as a method of controlling the disease. 
Recent findings indicate that the parasite is unable to survive in the bloodstream without its flagellum. This insight gives researchers a new angle with which to attack the parasite. Trypanosomiasis vaccines are undergoing research. Additionally, the Drugs for Neglected Diseases initiative has contributed to African sleeping sickness research by developing a compound called fexinidazole. This project was originally started in April 2007 and enrolled 749 people in the DRC and Central African Republic. The results showed efficacy and safety in both stages of the disease, in both adults and children ≥ 6 years old and weighing ≥ 20 kg. The European Medicines Agency approved it for first and second stage disease outside of Europe in November 2018. The treatment was approved in the DRC in December 2018. Funding For current funding statistics, human African trypanosomiasis is grouped with kinetoplastid infections. Kinetoplastids are a group of flagellate protozoa. Kinetoplastid infections include African sleeping sickness, Chagas' disease, and leishmaniasis. Altogether, these three diseases accounted for 4.4 million disability-adjusted life years (DALYs) and an additional 70,075 recorded deaths yearly. For kinetoplastid infections, the total global research and development funding was approximately US$136.3 million in 2012. African sleeping sickness, Chagas' disease, and leishmaniasis each received approximately a third of the funding: about US$36.8 million, US$38.7 million, and US$31.7 million, respectively. For sleeping sickness, funding was split into basic research, drug discovery, vaccines, and diagnostics. The greatest amount of funding was directed towards basic research of the disease; approximately US$21.6 million was directed towards that effort. As for therapeutic development, approximately US$10.9 million was invested. The top funders for kinetoplastid infection research and development are public sources. 
About 62% of the funding comes from high-income countries, while 9% comes from low- and middle-income countries. High-income countries' public funding is the largest contributor to the neglected disease research effort. However, in recent years, funding from high-income countries has been steadily decreasing; in 2007, high-income countries provided 67.5% of the total funding, whereas in 2012 their public funds provided only 60% of the total funding for kinetoplastid infections. This downward trend leaves a gap for other funders, such as philanthropic foundations and private pharmaceutical companies, to fill. Much of the progress that has been made in African sleeping sickness and neglected disease research as a whole is a result of these other non-public funders. One of the major sources of funding has been foundations, which have become increasingly committed to neglected disease drug discovery in the 21st century. In 2012, philanthropic sources provided 15.9% of the total funding. The Bill and Melinda Gates Foundation has been a leader in providing funding for neglected disease drug development, providing US$444.1 million towards neglected disease research in 2012. To date, it has donated over US$1.02 billion towards neglected disease discovery efforts. For kinetoplastid infections specifically, it donated an average of US$28.15 million annually between 2007 and 2011. The foundation has labeled human African trypanosomiasis a high-opportunity target, meaning it is a disease that presents the greatest opportunity for control, elimination, and eradication through the development of new drugs, vaccines, public health programs, and diagnostics. It is the second-highest funding source for neglected diseases, immediately behind the US National Institutes of Health. 
At a time when public funding is decreasing and government grants for scientific research are harder to obtain, the philanthropic world has stepped in to push the research forward. Another important component of increased interest and funding has come from industry. In 2012, industry contributed 13.1% of the total funding for the kinetoplastid research and development effort, and has additionally played an important role by contributing to public-private partnerships (PPPs) as well as product-development partnerships (PDPs). A public-private partnership is an arrangement between one or more public entities and one or more private entities that exists to achieve a specific health outcome or to produce a health product. The partnership can exist in numerous ways; the partners may share and exchange funds, property, equipment, human resources, and intellectual property. These public-private partnerships and product-development partnerships have been established to address challenges in the pharmaceutical industry, especially related to neglected disease research. Such partnerships can help increase the scale of the effort toward therapeutic development by drawing on knowledge, skills, and expertise from different sources, and are more effective than industry or public groups working independently. Other animals and reservoirs Trypanosoma of both the rhodesiense and gambiense types can affect other animals, such as cattle and wild animals. African trypanosomiasis has generally been considered an anthroponotic disease, and thus its control program has mainly focused on stopping transmission by treating human cases and eliminating the vector. However, animal reservoirs have been reported to possibly play an important role in the endemic nature of African trypanosomiasis, and in its resurgence in the historic foci of West and Central Africa.
Biology and health sciences
Protozoan infections
Health
504841
https://en.wikipedia.org/wiki/Osteoarthritis
Osteoarthritis
Osteoarthritis (OA) is a type of degenerative joint disease that results from breakdown of joint cartilage and underlying bone. It is believed to be the fourth leading cause of disability in the world, affecting 1 in 7 adults in the United States alone. The most common symptoms are joint pain and stiffness. Usually the symptoms progress slowly over years. Other symptoms may include joint swelling, decreased range of motion, and, when the back is affected, weakness or numbness of the arms and legs. The most commonly involved joints are the two near the ends of the fingers and the joint at the base of the thumb, the knee and hip joints, and the joints of the neck and lower back. The symptoms can interfere with work and normal daily activities. Unlike some other types of arthritis, only the joints, not internal organs, are affected. Causes include previous joint injury, abnormal joint or limb development, and inherited factors. Risk is greater in those who are overweight, have legs of different lengths, or have jobs that result in high levels of joint stress. Osteoarthritis is believed to be caused by mechanical stress on the joint and low-grade inflammatory processes. It develops as cartilage is lost and the underlying bone becomes affected. As pain may make it difficult to exercise, muscle loss may occur. Diagnosis is typically based on signs and symptoms, with medical imaging and other tests used to support or rule out other problems. In contrast to rheumatoid arthritis, in osteoarthritis the joints do not become hot or red. Treatment includes exercise, decreasing joint stress such as by rest or use of a cane, support groups, and pain medications. Weight loss may help in those who are overweight. Pain medications may include paracetamol (acetaminophen) as well as NSAIDs such as naproxen or ibuprofen. Long-term opioid use is not recommended due to lack of information on benefits as well as risks of addiction and other side effects. 
Joint replacement surgery may be an option if there is ongoing disability despite other treatments. An artificial joint typically lasts 10 to 15 years. Osteoarthritis is the most common form of arthritis, affecting about 237 million people, or 3.3% of the world's population, as of 2015. It becomes more common as people age. Among those over 60 years old, about 10% of males and 18% of females are affected. Osteoarthritis is the cause of about 2% of years lived with disability. Signs and symptoms The main symptom is pain, causing loss of ability and often stiffness. The pain is typically made worse by prolonged activity and relieved by rest. Stiffness is most common in the morning, and typically lasts less than thirty minutes after beginning daily activities, but may return after periods of inactivity. Osteoarthritis can cause a crackling noise (called "crepitus") when the affected joint is moved, especially in the shoulder and knee joints. A person may also complain of joint locking and joint instability. These symptoms can affect daily activities due to pain and stiffness. Some people report increased pain associated with cold temperature, high humidity, or a drop in barometric pressure, but studies have had mixed results. Osteoarthritis commonly affects the hands, feet, spine, and the large weight-bearing joints, such as the hips and knees, although in theory, any joint in the body can be affected. As osteoarthritis progresses, movement patterns (such as gait) are typically affected. Osteoarthritis is the most common cause of a joint effusion of the knee. In smaller joints, such as at the fingers, hard bony enlargements, called Heberden's nodes (on the distal interphalangeal joints) or Bouchard's nodes (on the proximal interphalangeal joints), may form, and though they are not necessarily painful, they do limit the movement of the fingers significantly. Osteoarthritis of the toes may be a factor causing formation of bunions, rendering them red or swollen. 
Causes Damage from mechanical stress with insufficient self-repair by joints is believed to be the primary cause of osteoarthritis. Sources of this stress may include misalignments of bones caused by congenital or pathogenic causes; mechanical injury; excess body weight; loss of strength in the muscles supporting a joint; and impairment of peripheral nerves, leading to sudden or uncoordinated movements. The risk of osteoarthritis increases with aging, history of joint injury, or family history of osteoarthritis. However, exercise, including running in the absence of injury, has not been found to increase the risk of knee osteoarthritis. Nor has cracking one's knuckles been found to play a role. Primary The development of osteoarthritis is correlated with a history of previous joint injury and with obesity, especially with respect to knees. Changes in sex hormone levels may play a role in the development of osteoarthritis, as it is more prevalent among post-menopausal women than among men of the same age. Conflicting evidence exists for the differences in hip and knee osteoarthritis in African Americans and Caucasians. Occupational Increased risk of developing knee and hip osteoarthritis was found among those who work with manual handling (e.g. lifting), have physically demanding work, walk at work, and have climbing tasks at work (e.g. climbing stairs or ladders). With hip osteoarthritis, in particular, increased risk of development over time was found among those who work in bent or twisted positions. For knee osteoarthritis, in particular, increased risk was found among those who work in a kneeling or squatting position, experience heavy lifting in combination with a kneeling or squatting posture, and work standing up. Women and men have similar occupational risks for the development of osteoarthritis. 
Secondary This type of osteoarthritis is caused by other factors, but the resulting pathology is the same as for primary osteoarthritis: Alkaptonuria Congenital disorders of joints Diabetes doubles the risk of having a joint replacement due to osteoarthritis, and people with diabetes have joint replacements at a younger age than those without diabetes. Ehlers-Danlos syndrome Hemochromatosis and Wilson's disease Inflammatory diseases (such as Perthes' disease and Lyme disease) and all chronic forms of arthritis (e.g., costochondritis, gout, and rheumatoid arthritis). In gout, uric acid crystals cause the cartilage to degenerate at a faster pace. Injury to joints or ligaments (such as the ACL) as a result of an accident or orthopedic operations. Ligamentous deterioration or instability may be a factor. Marfan syndrome Obesity Joint infection Pathophysiology While osteoarthritis is a degenerative joint disease that may cause gross cartilage loss and morphological damage to other joint tissues, more subtle biochemical changes occur in the earliest stages of osteoarthritis progression. The water content of healthy cartilage is finely balanced by compressive force driving water out and hydrostatic and osmotic pressure drawing water in. Collagen fibres exert the compressive force, whereas the Gibbs–Donnan effect and cartilage proteoglycans create osmotic pressure which tends to draw water in. However, during onset of osteoarthritis, the collagen matrix becomes more disorganized and there is a decrease in proteoglycan content within cartilage. The breakdown of collagen fibers results in a net increase in water content. This increase occurs because whilst there is an overall loss of proteoglycans (and thus a decreased osmotic pull), it is outweighed by a loss of collagen. Other structures within the joint can also be affected. The ligaments within the joint become thickened and fibrotic, and the menisci can become damaged and wear away. 
Menisci can be completely absent by the time a person undergoes a joint replacement. New bone outgrowths, called "spurs" or osteophytes, can form on the margins of the joints, possibly in an attempt to improve the congruence of the articular cartilage surfaces in the absence of the menisci. The subchondral bone volume increases and becomes less mineralized (hypomineralization). All of these changes can impair joint function. The pain in an osteoarthritic joint has been related to thickened synovium and to subchondral bone lesions. Diagnosis Diagnosis is made with reasonable certainty based on history and clinical examination. X-rays may confirm the diagnosis. The typical changes seen on X-ray include: joint space narrowing, subchondral sclerosis (increased bone formation around the joint), subchondral cyst formation, and osteophytes. Plain films may not correlate with the findings on physical examination or with the degree of pain. In 1990, the American College of Rheumatology, using data from a multi-center study, developed a set of criteria for the diagnosis of hand osteoarthritis based on hard tissue enlargement and swelling of certain joints. These criteria were found to be 92% sensitive and 98% specific for hand osteoarthritis versus other entities such as rheumatoid arthritis and spondyloarthropathies. Classification A number of classification systems are used for gradation of osteoarthritis: WOMAC scale, taking into account pain, stiffness, and functional limitation. Kellgren-Lawrence grading scale for osteoarthritis of the knee, which uses only projectional radiography features. Tönnis classification for osteoarthritis of the hip joint, also using only projectional radiography features. Both primary generalized nodal osteoarthritis and erosive osteoarthritis (EOA, also called inflammatory osteoarthritis) are subsets of primary osteoarthritis. 
EOA is a much less common and more aggressive inflammatory form of osteoarthritis, which often affects the distal interphalangeal joints of the hand and has characteristic articular erosive changes on X-ray. Management Lifestyle modification (such as weight loss and exercise) and pain medications are the mainstays of treatment. Acetaminophen (also known as paracetamol) is recommended first line, with NSAIDs used as add-on therapy only if pain relief is not sufficient. Medications that alter the course of the disease have not been found as of 2018. For overweight people, weight loss may help relieve pain due to hip arthritis. Recommendations include modification of risk factors through targeted interventions, including: 1) obesity and overweight, 2) physical activity, 3) dietary exposures, 4) comorbidities, 5) biomechanical factors, 6) occupational factors. Successful management of the condition is often made more difficult by differing priorities and poor communication between clinicians and people with osteoarthritis. Realistic treatment goals can be achieved by developing a shared understanding of the condition, actively listening to patient concerns, avoiding medical jargon, and tailoring treatment plans to the patient's needs. Exercise Weight loss and exercise provide long-term treatment and are advocated in people with osteoarthritis. Weight loss and exercise are the safest and most effective long-term treatments, in contrast to short-term treatments, which usually carry a risk of long-term harm. High impact exercise can increase the risk of joint injury, whereas low or moderate impact exercise, such as walking or swimming, is safer for people with osteoarthritis. A study has suggested that an increase in blood calcium levels has a positive impact on osteoarthritis. An adequate dietary calcium intake and regular weight-bearing exercise can increase calcium levels and are helpful in preventing osteoarthritis in the general population. 
There is also a weak protective effect of LDL (low-density lipoprotein) cholesterol. However, raising LDL is not recommended, since increased LDL is associated with a higher risk of cardiovascular comorbidities. Moderate exercise may be beneficial with respect to pain and function in those with osteoarthritis of the knee and hip. These exercises should occur at least three times per week, under supervision, and be focused on specific forms of exercise found to be most beneficial for this form of osteoarthritis. While some evidence supports certain physical therapies, evidence for a combined program is limited. Providing clear advice, making exercises enjoyable, and reassuring people about the importance of doing exercises may lead to greater benefit and more participation. Some evidence suggests that supervised exercise therapy may improve exercise adherence, and for knee osteoarthritis supervised exercise has shown the best results. Physical measures There is not enough evidence to determine the effectiveness of massage therapy. The evidence for manual therapy is inconclusive. A 2015 review indicated that aquatic therapy is safe, effective, and can be an adjunct therapy for knee osteoarthritis. Functional, gait, and balance training have been recommended to address impairments of position sense, balance, and strength in individuals with lower extremity arthritis, as these can contribute to a higher rate of falls in older individuals. For people with hand osteoarthritis, exercises may provide small benefits for improving hand function, reducing pain, and relieving finger joint stiffness. There is low-quality evidence that weak knee extensor muscles increase the chances of knee osteoarthritis, so strengthening the knee extensors could possibly help prevent it. Lateral wedge insoles and neutral insoles do not appear to be useful in osteoarthritis of the knee. Knee braces may help, but their usefulness has also been disputed. 
For pain management, heat can be used to relieve stiffness, and cold can relieve muscle spasms and pain. Among people with hip and knee osteoarthritis, exercise in water may reduce pain and disability and increase quality of life in the short term. Therapeutic exercise programs such as aerobics and walking also reduce pain and improve physical functioning for up to 6 months after the end of the program for people with knee osteoarthritis. A two-year study found that for every additional 1,000 steps per day, there was a 16% reduction in functional limitations in cases of knee osteoarthritis. Hydrotherapy may also be beneficial in the management of pain, disability, and quality of life reported by people with osteoarthritis. Thermotherapy A 2003 Cochrane review of 7 studies between 1969 and 1999 found ice massage to be of significant benefit in improving range of motion and function, though not necessarily relief of pain. Cold packs could decrease swelling, but hot packs had no effect on swelling. Heat therapy could increase circulation, thereby reducing pain and stiffness, but with risk of inflammation and edema. Medication By mouth The pain medication paracetamol (acetaminophen) is the first-line treatment for osteoarthritis. Pain relief does not differ according to dosage. However, a 2015 review found acetaminophen to have only a small short-term benefit, with some concerns about abnormal liver function test results. For mild to moderate symptoms, the effectiveness of acetaminophen is similar to that of non-steroidal anti-inflammatory drugs (NSAIDs) such as naproxen, though for more severe symptoms NSAIDs may be more effective. NSAIDs are associated with greater side effects, such as gastrointestinal bleeding. 
Another class of NSAIDs, COX-2 selective inhibitors (such as celecoxib), are equally effective when compared to nonselective NSAIDs, and have lower rates of adverse gastrointestinal effects, but higher rates of cardiovascular disease such as myocardial infarction. They are also more expensive than non-specific NSAIDs. Benefits and risks vary in individuals and need consideration when making treatment decisions, and further unbiased research comparing NSAIDs and COX-2 selective inhibitors is needed. NSAIDs applied topically are effective for a small number of people. The COX-2 selective inhibitor rofecoxib was removed from the market in 2004, as cardiovascular events were associated with long-term use. Education is helpful in self-management of arthritis, and can provide coping methods leading to about 20% more pain relief when compared to NSAIDs alone. Failure to achieve desired pain relief in osteoarthritis after two weeks should trigger reassessment of dosage and pain medication. Opioids by mouth, including both weak opioids such as tramadol and stronger opioids, are also often prescribed. Their appropriateness is uncertain, and opioids are often recommended only when first-line therapies have failed or are contraindicated. This is due to their small benefit and relatively large risk of side effects. The use of tramadol likely does not improve pain or physical function and likely increases the incidence of adverse side effects. Oral steroids are not recommended in the treatment of osteoarthritis. Use of the antibiotic doxycycline orally for treating osteoarthritis is not associated with clinical improvements in function or joint pain. Any small benefit related to the potential for doxycycline therapy to address the narrowing of the joint space is not clear, and any benefit is outweighed by the potential harm from side effects. 
A 2018 meta-analysis found that oral collagen supplementation for the treatment of osteoarthritis reduces stiffness but does not improve pain or functional limitation. Topical There are several NSAIDs available for topical use, including diclofenac. A Cochrane review from 2016 concluded that reasonably reliable evidence is available only for use of topical diclofenac and ketoprofen in people aged over 40 years with painful knee arthritis. Transdermal opioid pain medications are not typically recommended in the treatment of osteoarthritis. The use of topical capsaicin to treat osteoarthritis is controversial, as some reviews found benefit while others did not. Joint injections Use of analgesia, intra-articular cortisone injection, and consideration of hyaluronic acids and platelet-rich plasma are recommended for pain relief in people with knee osteoarthritis. Local drug delivery by intra-articular injection may be more effective and safer in terms of increased bioavailability, less systemic exposure, and reduced adverse events. Several intra-articular medications for symptomatic treatment are available on the market, as follows. Steroids Joint injection of glucocorticoids (such as hydrocortisone) leads to short-term pain relief that may last between a few weeks and a few months. A 2015 Cochrane review found that intra-articular corticosteroid injections of the knee did not benefit quality of life and had no effect on knee joint space; clinical effects one to six weeks after injection could not be determined clearly due to poor study quality. Another 2015 study reported negative effects of intra-articular corticosteroid injections at higher doses, and a 2017 trial showed reduction in cartilage thickness with intra-articular triamcinolone every 12 weeks for 2 years compared to placebo. A 2018 study found that intra-articular triamcinolone is associated with an increase in intraocular pressure. 
Hyaluronic acid Injections of hyaluronic acid have not produced improvement compared to placebo for knee arthritis, but did increase risk of further pain. In ankle osteoarthritis, evidence is unclear. Platelet-rich plasma The effectiveness of injections of platelet-rich plasma (PRP) is unclear; there are suggestions that such injections improve function but not pain, and are associated with increased risk. A 2014 Cochrane review of studies involving PRP found the evidence to be insufficient. Radiosynoviorthesis Injection of beta particle-emitting radioisotopes (called radiosynoviorthesis) is used for the local treatment of inflammatory joint conditions. Radiotherapy Low-dose radiotherapy has been shown to improve pain and mobility of affected joints, primarily in extremities. It is approximately 70-90% effective, with minimal side effects. Surgery Bone fusion Arthrodesis (fusion) of the bones may be an option in some types of osteoarthritis. An example is ankle osteoarthritis, in which ankle fusion is considered to be the gold standard treatment in end-stage cases. Joint replacement If the impact of symptoms of osteoarthritis on quality of life is significant and more conservative management is ineffective, joint replacement surgery or resurfacing may be recommended. Evidence supports joint replacement for both knees and hips as it is both clinically effective and cost-effective. People who underwent total knee replacement had improved SF-12 quality of life scores, were feeling better compared to those who did not have surgery, and may have short- and long-term benefits for quality of life in terms of pain and function. The beneficial effects of these surgeries may be time-limited due to various environmental factors, comorbidities, and pain in other regions of the body. 
For people who have shoulder osteoarthritis and do not respond to medications, surgical options include a shoulder hemiarthroplasty (replacing a part of the joint) and total shoulder arthroplasty (replacing the joint). Biological joint replacement involves replacing the diseased tissues with new ones. These can either be from the person (autograft) or from a donor (allograft). People undergoing a joint transplant (osteochondral allograft) do not need to take immunosuppressants, as bone and cartilage tissues have limited immune responses. Autologous articular cartilage transfer from a non-weight-bearing area to the damaged area, called the osteochondral autograft transfer system, is one possible procedure that is being studied. When the missing cartilage is a focal defect, autologous chondrocyte implantation is also an option. Shoulder replacement For those with osteoarthritis in the shoulder, a complete shoulder replacement is sometimes suggested to improve pain and function. Demand for this treatment is expected to increase by 750% by the year 2030. There are different options for shoulder replacement surgeries; however, there is a lack of evidence from high-quality randomized controlled trials to determine which type of shoulder replacement surgery is most effective in different situations, what risks are involved with different approaches, or how the procedure compares to other treatment options. There is some low-quality evidence indicating that, when comparing total shoulder arthroplasty with hemiarthroplasty, no large clinical benefit was detected in the short term. It is not clear whether the risk of harm differs between total shoulder arthroplasty and hemiarthroplasty. Other surgical options Osteotomy may be useful in people with knee osteoarthritis, but has not been well studied, and it is unclear whether it is more effective than non-surgical treatments or other types of surgery. 
Arthroscopic surgery is largely not recommended, as it does not improve outcomes in knee osteoarthritis, and may result in harm. It is unclear whether surgery is beneficial in people with mild to moderate knee osteoarthritis. Unverified treatments Glucosamine and chondroitin The effectiveness of glucosamine is controversial. Reviews have found it to be equal to or slightly better than placebo. A difference may exist between glucosamine sulfate and glucosamine hydrochloride, with glucosamine sulfate showing a benefit and glucosamine hydrochloride not. The evidence for glucosamine sulfate having an effect on osteoarthritis progression is somewhat unclear and if present likely modest. The Osteoarthritis Research Society International recommends that glucosamine be discontinued if no effect is observed after six months and the National Institute for Health and Care Excellence no longer recommends its use. Despite the difficulty in determining the efficacy of glucosamine, it remains a treatment option. The European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis (ESCEO) recommends glucosamine sulfate and chondroitin sulfate for knee osteoarthritis. Its use as a therapy for osteoarthritis is usually safe. A 2015 Cochrane review of clinical trials of chondroitin found that most were of low quality, but that there was some evidence of short-term improvement in pain and few side effects; it does not appear to improve or maintain the health of affected joints. Supplements Avocado–soybean unsaponifiables (ASU) is an extract made from avocado oil and soybean oil sold under many brand names worldwide as a dietary supplement and as a prescription drug in France. A 2014 Cochrane review found that while ASU might help relieve pain in the short term for some people with osteoarthritis, it does not appear to improve or maintain the health of affected joints. 
The review noted a high-quality two-year clinical trial comparing ASU to chondroitin (which itself has uncertain efficacy in osteoarthritis) that found no difference between the two agents. The review also found insufficient evidence on the safety of ASU. A few high-quality studies of Boswellia serrata show consistent, but small, improvements in pain and function. Curcumin, phytodolor, and S-adenosyl methionine (SAMe) may be effective in improving pain. A 2009 Cochrane review recommended against the routine use of SAMe, as there has not been sufficient high-quality clinical research to prove its effect. A 2021 review found that hydroxychloroquine (HCQ) had no benefit in reducing pain or improving physical function in hand or knee osteoarthritis, and that the off-label use of HCQ for people with osteoarthritis should be discouraged. There is no evidence for the use of colchicine for treating the pain of hand or knee arthritis. There is limited evidence to support the use of hyaluronan, methylsulfonylmethane, rose hip, capsaicin, or vitamin D. Acupuncture and other interventions While acupuncture leads to improvements in pain relief, this improvement is small and may be of questionable importance. Waiting list–controlled trials for peripheral joint osteoarthritis do show clinically relevant benefits, but these may be due to placebo effects. Acupuncture does not seem to produce long-term benefits. Electrostimulation techniques such as TENS have been used for twenty years to treat osteoarthritis in the knee; however, there is no conclusive evidence to show that it reduces pain or disability. A Cochrane review of low-level laser therapy found unclear evidence of benefit, whereas another review found short-term pain relief for osteoarthritic knees. Further research is needed to determine whether balneotherapy for osteoarthritis (mineral baths or spa treatments) improves a person's quality of life or ability to function.
The use of ice or cold packs may be beneficial; however, further research is needed. There is no evidence of benefit from placing hot packs on joints. There is low-quality evidence that therapeutic ultrasound may be beneficial for people with osteoarthritis of the knee; however, further research is needed to confirm and determine the degree and significance of this potential benefit. Therapeutic ultrasound is safe and may help reduce pain and improve physical function in knee osteoarthritis. While phonophoresis does not improve function, it may offer greater pain relief than standard non-drug ultrasound. Continuous and pulsed ultrasound modes (especially 1 MHz, 2.5 W/cm², 15 minutes per session, 3 sessions per week, over an 8-week protocol) may be effective in improving patients' physical function and pain. There is weak evidence suggesting that electromagnetic field treatment may result in moderate pain relief; however, further research is necessary, and it is not known if electromagnetic field treatment can improve quality of life or function. Viscosupplementation for osteoarthritis of the knee may have positive effects on pain and function at 5 to 13 weeks post-injection. Epidemiology Globally, approximately 250 million people had osteoarthritis of the knee (3.6% of the population). Hip osteoarthritis affects about 0.85% of the population. Osteoarthritis globally causes moderate to severe disability in 43.4 million people. Together, knee and hip osteoarthritis ranked 11th for disability globally among 291 disease conditions assessed. Middle East and North Africa (MENA) In the Middle East and North Africa, the prevalence of hip osteoarthritis increased threefold between 1990 and 2019, to a total of 1.28 million cases. Knee osteoarthritis increased 2.88-fold over the same period, from 6.16 million cases to 17.75 million.
Hand osteoarthritis in MENA also increased 2.7-fold, from 1.6 million cases to 4.3 million, from 1990 to 2019. United States Osteoarthritis affected 52.5 million people in the United States, approximately 50% of whom were 65 years or older. It is estimated that 80% of the population have radiographic evidence of osteoarthritis by age 65, although only 60% of those will have symptoms. The rate of osteoarthritis in the United States is forecast to reach 78 million (26%) adults by 2040. In the United States, there were approximately 964,000 hospitalizations for osteoarthritis in 2011, a rate of 31 stays per 10,000 population. With an aggregate cost of $14.8 billion ($15,400 per stay), it was the second-most expensive condition seen in US hospital stays in 2011. By payer, it was the second-most costly condition billed to Medicare and private insurance. Europe In Europe, the number of individuals affected by osteoarthritis increased from 27.9 million in 1990 to 50.8 million in 2019. Hand osteoarthritis was the second most prevalent type, affecting an estimated 12.5 million people. In 2019, knee osteoarthritis was the 18th most common cause of years lived with disability (YLDs) in Europe, accounting for 1.28% of all YLDs, up from 1.12% in 1990. India In India, the number of individuals affected by osteoarthritis increased from 23.46 million in 1990 to 62.35 million in 2019. Knee osteoarthritis was the most prevalent type of osteoarthritis, followed by hand osteoarthritis. In 2019, osteoarthritis was the 20th most common cause of years lived with disability (YLDs) in India, accounting for 1.48% of all YLDs, up from 1.25% (and 23rd most common cause) in 1990. History Etymology Osteoarthritis is derived from the prefix osteo- (from Ancient Greek ὀστέον, ostéon, "bone") combined with arthritis (from ἀρθρῖτις, arthrîtis), which is itself derived from arthr- (from ἄρθρον, árthron, "joint") and -itis (from -ῖτις, -îtis), the latter suffix having come to be associated with inflammation.
The -itis of osteoarthritis could be considered misleading, as inflammation is not a conspicuous feature. Some clinicians refer to this condition as osteoarthrosis to signify the lack of inflammatory response, the suffix -osis (from Ancient Greek -ωσις, -ōsis, "state, condition") simply referring to the pathosis itself. Other animals Osteoarthritis has been reported in many species of animals all over the world, including marine animals, and even in some fossils. Affected species include, but are not limited to: cats, many rodents, cattle, deer, rabbits, sheep, camels, elephants, buffalo, hyenas, lions, mules, pigs, tigers, kangaroos, dolphins, dugongs, and horses. Osteoarthritis has been reported in fossils of the large carnivorous dinosaur Allosaurus fragilis. Research Therapies Pharmaceutical agents that would alter the natural history of disease progression by arresting joint structural change and ameliorating symptoms are termed disease-modifying therapies. Therapies under investigation include the following: Strontium ranelate – may decrease degeneration in osteoarthritis and improve outcomes Gene therapy – Gene transfer strategies aim to target the disease process rather than the symptoms. Cell-mediated gene therapy is also being studied. One version was approved in South Korea for the treatment of moderate knee osteoarthritis, but was later revoked due to mislabeling and false reporting of an ingredient. The drug was administered intra-articularly. Cause As well as attempting to find disease-modifying agents for osteoarthritis, there is emerging evidence that a system-based approach is necessary to find the causes of osteoarthritis. A study conducted by scientists at the University of Twente found that osmolarity-induced intracellular molecular crowding might drive the disease pathology.
Diagnostic biomarkers Guidelines outlining requirements for inclusion of soluble biomarkers in osteoarthritis clinical trials were published in 2015, but there are no validated biomarkers used clinically to detect osteoarthritis, as of 2021. A 2015 systematic review of biomarkers for osteoarthritis looking for molecules that could be used for risk assessments found 37 different biochemical markers of bone and cartilage turnover in 25 publications. The strongest evidence was for urinary C-terminal telopeptide of type II collagen (uCTX-II) as a prognostic marker for knee osteoarthritis progression, and serum cartilage oligomeric matrix protein (COMP) levels as a prognostic marker for incidence of both knee and hip osteoarthritis. A review of biomarkers in hip osteoarthritis also found associations with uCTX-II. Procollagen type II C-terminal propeptide (PIICP) levels reflect type II collagen synthesis in the body, and PIICP levels in joint fluid can be used as a prognostic marker for early osteoarthritis.
https://en.wikipedia.org/wiki/Sign%20function
Sign function
In mathematics, the sign function or signum function (from signum, Latin for "sign") is a function that has the value +1, −1 or 0 according to whether the sign of a given real number is positive or negative, or the given number is itself zero. In mathematical notation the sign function is often represented as sgn x or sgn(x). Definition The signum function of a real number x is a piecewise function which is defined as follows: sgn x = −1 if x < 0; sgn x = 0 if x = 0; sgn x = +1 if x > 0. The law of trichotomy states that every real number must be positive, negative or zero. The signum function denotes which unique category a number falls into by mapping it to one of the values −1, 0 or +1, which can then be used in mathematical expressions or further calculations. For example: sgn(2) = +1, sgn(−8) = −1, sgn(0) = 0. Basic properties Any real number x can be expressed as the product of its absolute value and its sign: x = |x| sgn x. It follows that whenever x is not equal to 0 we have sgn x = x / |x| = |x| / x. Similarly, for any real number x, |x| = x sgn x. We can also be certain that sgn(xy) = (sgn x)(sgn y), and so sgn(x^n) = (sgn x)^n. Some algebraic identities The signum can also be written using the Iverson bracket notation: sgn x = −[x < 0] + [x > 0]. The signum can also be written using the floor and the absolute value functions, and, if 0^0 is accepted to be equal to 1, in a form valid for all real numbers. Properties in mathematical analysis Discontinuity at zero Although the sign function takes the value −1 when x is negative, the ringed point (0, −1) in the plot of sgn x indicates that this is not the case when x = 0. Instead, the value jumps abruptly to the solid point at (0, 0), where sgn 0 = 0. There is then a similar jump to sgn x = +1 when x is positive. Either jump demonstrates visually that the sign function is discontinuous at zero, even though it is continuous at any point where x is either positive or negative. These observations are confirmed by any of the various equivalent formal definitions of continuity in mathematical analysis.
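The piecewise definition and the basic identities above can be sketched quickly in Python (the helper name `sgn` is our own; it is not a standard-library function):

```python
def sgn(x: float) -> int:
    """Signum of x, following the piecewise definition."""
    if x < 0:
        return -1
    if x > 0:
        return 1
    return 0

# Every real number is the product of its absolute value and its sign,
# and for nonzero x the sign equals x/|x| = |x|/x.
for x in (-8.0, -0.5, 0.0, 2.0, 3.5):
    assert x == abs(x) * sgn(x)
    assert abs(x) == x * sgn(x)
    if x != 0:
        assert sgn(x) == x / abs(x) == abs(x) / x
```

The related standard-library function `math.copysign(1.0, x)` is not quite the same thing: it never returns 0, since it follows the sign bit of IEEE floating-point zero.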
A function f, such as the sign function, is continuous at a point x if the value f(x) can be approximated arbitrarily closely by the sequence of values f(x_1), f(x_2), f(x_3), ..., where the x_n make up any infinite sequence which becomes arbitrarily close to x as n becomes sufficiently large. In the notation of mathematical limits, continuity of f at x requires that f(x_n) → f(x) as n → ∞ for any sequence (x_n) for which x_n → x. The arrow symbol can be read to mean approaches, or tends to, and it applies to the sequence as a whole. This criterion fails for the sign function at x = 0. For example, we can choose x_n to be the sequence x_n = 1/n, which tends towards zero as n increases towards infinity. In this case, x_n → x = 0 as required, but sgn(0) = 0 and sgn(x_n) = +1 for each n, so that sgn(x_n) → 1 ≠ sgn(0). This counterexample confirms more formally the discontinuity of sgn x at zero that is visible in the plot. Despite the sign function having a very simple form, the step change at zero causes difficulties for traditional calculus techniques, which are quite stringent in their requirements. Continuity is a frequent constraint. One solution can be to approximate the sign function by a smooth continuous function; others might involve less stringent approaches that build on classical methods to accommodate larger classes of function. Smooth approximations and limits The signum function coincides with the limits sgn x = lim_{n→∞} tanh(nx) and sgn x = lim_{n→∞} (2/π) tan⁻¹(nx). Here, tanh is the hyperbolic tangent and the superscript of −1 is shorthand notation for the inverse function of the trigonometric function tangent. For ε > 0, a smooth approximation of the sign function is sgn x ≈ tanh(x/ε). Another approximation is sgn x ≈ x / √(x² + ε²), which gets sharper as ε → 0; note that this is the derivative of √(x² + ε²). This is inspired from the fact that the above is exactly equal to sgn x for all nonzero x if ε = 0, and has the advantage of simple generalization to higher-dimensional analogues of the sign function (for example, the partial derivatives of √(x₁² + ⋯ + xₙ²)).
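The smooth approximations above are easy to check numerically. A minimal sketch using only the standard library, with an ad-hoc `sgn` helper of our own:

```python
import math

def sgn(x: float) -> int:
    """Signum via comparison: (x > 0) - (x < 0)."""
    return (x > 0) - (x < 0)

x = 0.3
# tanh(n*x) tends to sgn(x) as n grows large...
assert abs(math.tanh(1e6 * x) - sgn(x)) < 1e-9
# ...as does (2/pi) * arctan(n*x)...
assert abs((2 / math.pi) * math.atan(1e9 * x) - sgn(x)) < 1e-6
# ...and x / sqrt(x^2 + eps^2) as eps shrinks towards zero.
eps = 1e-9
assert abs(x / math.sqrt(x * x + eps * eps) - sgn(x)) < 1e-9
```

Each approximation is smooth for any finite parameter value, which is exactly what makes it useful where the step discontinuity of sgn causes trouble.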
Differentiation The signum function is differentiable everywhere except when x = 0. Its derivative is zero when x is non-zero: (d/dx) sgn x = 0 for x ≠ 0. This follows from the differentiability of any constant function, for which the derivative is always zero on its domain of definition. The signum acts as a constant function when it is restricted to the negative open region x < 0, where it equals −1. It can similarly be regarded as a constant function within the positive open region x > 0, where the corresponding constant is +1. Although these are two different constant functions, their derivative is equal to zero in each case. It is not possible to define a classical derivative at x = 0, because there is a discontinuity there. Although it is not differentiable at x = 0 in the ordinary sense, under the generalized notion of differentiation in distribution theory, the derivative of the signum function is two times the Dirac delta function. This can be demonstrated using the identity sgn x = 2H(x) − 1, where H is the Heaviside step function using the standard H(0) = 1/2 formalism. Using this identity, it is easy to derive the distributional derivative: (d/dx) sgn x = 2 (d/dx) H(x) = 2δ(x). Integration The signum function has a definite integral between any pair of finite values a and b, even when the interval of integration includes zero. The resulting integral for a and b is then equal to the difference between their absolute values: ∫_a^b sgn(x) dx = |b| − |a|. In fact, the signum function is the derivative of the absolute value function, except where there is an abrupt change in gradient at zero: (d/dx) |x| = sgn x for x ≠ 0. We can understand this as before by considering the definition of the absolute value on the separate regions x < 0 and x > 0. For example, the absolute value function is identical to x in the region x > 0, whose derivative is the constant value +1, which equals the value of sgn x there. Because the absolute value is a convex function, there is at least one subderivative at every point, including at the origin. Everywhere except zero, the resulting subdifferential consists of a single value, equal to the value of the sign function.
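The definite-integral identity ∫_a^b sgn(x) dx = |b| − |a| can be verified with a simple midpoint-rule sketch (the helper names below are our own):

```python
def sgn(x: float) -> int:
    """Signum via comparison: (x > 0) - (x < 0)."""
    return (x > 0) - (x < 0)

def integral_of_sgn(a: float, b: float, steps: int = 100_000) -> float:
    """Midpoint-rule approximation of the definite integral of sgn over [a, b]."""
    h = (b - a) / steps
    return sum(sgn(a + (i + 0.5) * h) for i in range(steps)) * h

# The interval crosses zero, yet the identity |b| - |a| still holds.
a, b = -2.0, 3.0
assert abs(integral_of_sgn(a, b) - (abs(b) - abs(a))) < 1e-3
```

The midpoint rule handles the jump at zero gracefully because the single discontinuity contributes nothing to the limit of the Riemann sums.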
In contrast, there are many subderivatives at zero, with just one of them taking the value sgn(0) = 0. A subderivative value of 0 occurs here because the absolute value function is at a minimum. The full family of valid subderivatives at zero constitutes the subdifferential interval [−1, 1], which might be thought of informally as "filling in" the graph of the sign function with a vertical line through the origin, making it continuous as a two dimensional curve. In integration theory, the signum function is a weak derivative of the absolute value function. Weak derivatives are equivalent if they are equal almost everywhere, making them impervious to isolated anomalies at a single point. This includes the change in gradient of the absolute value function at zero, which prohibits there being a classical derivative. Fourier transform The Fourier transform of the signum function is p.v. ∫_{−∞}^{∞} sgn(x) e^{−ikx} dx = 2/(ik), where p.v. means taking the Cauchy principal value. Generalizations Complex signum The signum function can be generalized to complex numbers as sgn z = z / |z| for any complex number z except z = 0. The signum of a given complex number z is the point on the unit circle of the complex plane that is nearest to z. Then, for z ≠ 0, sgn z = e^{i arg z}, where arg is the complex argument function. For reasons of symmetry, and to keep this a proper generalization of the signum function on the reals, also in the complex domain one usually defines, for z = 0: sgn 0 = 0. Another generalization of the sign function for real and complex expressions is csgn, which is defined as: csgn z = 1 if Re(z) > 0, csgn z = −1 if Re(z) < 0, and csgn z = sgn(Im(z)) if Re(z) = 0, where Re(z) is the real part of z and Im(z) is the imaginary part of z. We then have (for z ≠ 0): csgn z = z / √(z²) = √(z²) / z. Polar decomposition of matrices Thanks to the polar decomposition theorem, a matrix A ∈ K^{n×n} (K = ℝ or K = ℂ) can be decomposed as a product A = UP, where U is a unitary matrix and P is a self-adjoint, or Hermitian, positive definite matrix, both in K^{n×n}. If A is invertible then such a decomposition is unique and U plays the role of A's signum. A dual construction is given by the decomposition A = P′U′, where U′ is unitary, but generally different than U.
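The complex generalization sgn z = z/|z| can be sketched with the standard library's cmath module (the function name `complex_sgn` is our own, to avoid clashing with the csgn extension discussed above):

```python
import cmath

def complex_sgn(z: complex) -> complex:
    """z/|z| for nonzero z (the nearest point on the unit circle), else 0."""
    return z / abs(z) if z != 0 else 0

z = 3 + 4j
s = complex_sgn(z)
assert abs(abs(s) - 1.0) < 1e-12                        # lies on the unit circle
assert abs(s - cmath.exp(1j * cmath.phase(z))) < 1e-12  # equals e^(i arg z)
assert complex_sgn(0) == 0                              # the conventional value at zero
```

For real arguments this reduces to the ordinary signum, which is what makes it a proper generalization.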
This leads to each invertible matrix having a unique left-signum U and right-signum U′. In the special case where K = ℝ and the (invertible) matrix A = [a −b; b a], which identifies with the (nonzero) complex number a + ib, then the signum matrices satisfy U = U′ and identify with the complex signum of z = a + ib, sgn z = z / |z|. In this sense, polar decomposition generalizes to matrices the signum-modulus decomposition of complex numbers. Signum as a generalized function At real values of x, it is possible to define a generalized function–version of the signum function, ε(x), such that ε(x)² = 1 everywhere, including at the point x = 0, unlike sgn, for which (sgn 0)² = 0. This generalized signum allows construction of the algebra of generalized functions, but the price of such generalization is the loss of commutativity. In particular, the generalized signum anticommutes with the Dirac delta function: ε(x)δ(x) + δ(x)ε(x) = 0; in addition, ε(x) cannot be evaluated at x = 0; and the special name, ε, is necessary to distinguish it from the function sgn. (ε(0) is not defined, but sgn 0 = 0.)