5381912
https://en.wikipedia.org/wiki/Eupleridae
Eupleridae
Eupleridae is a family of carnivorans endemic to Madagascar and comprising 10 known living species in seven genera, commonly known as euplerids, Malagasy mongooses or Malagasy carnivorans. The best known species is the fossa (Cryptoprocta ferox), in the subfamily Euplerinae. All species of Euplerinae were formerly classified as viverrids, while all species in the subfamily Galidiinae were classified as herpestids. Recent molecular studies indicate that the 10 living species of Madagascar carnivorans evolved from one ancestor that is thought to have rafted over from mainland Africa 18–24 million years ago. This makes Malagasy carnivorans a clade. They are closely allied with the true herpestid mongooses, their closest living relatives. The fossa and the Malagasy civet (Fossa fossana) are each evolutionarily quite distinct from each other and from the rest of the clade. All Eupleridae are considered threatened species due to habitat destruction, as well as predation and competition from non-native species. Taxonomy and phylogeny Historically, the relationships of the Madagascar carnivorans have been contentious, but molecular evidence suggests that they form a single clade, now recognized as the family Eupleridae. The hyena family, Hyaenidae, is a sister taxon of the euplerid and herpestid clade, and when grouped together with the viverrids and felids, as well as some smaller groups, forms the feliform (cat-like carnivores) clade. The evolutionary divergence between the herpestids and the euplerids dates back to the Oligocene. At that time, feliforms shared many similarities, particularly between the cats and the viverrids. Palaeoprionodon (within the clade Aeluroidea), found in Europe and Asia from the late Eocene or early Oligocene, looked similar to the modern fossa, while Proailurus, an extinct form of cat, exhibited many viverrid-like characteristics. Despite these similarities in the fossil record, the modern Malagasy carnivores are distinctly different, with the Euplerinae and Galidiinae subfamilies bearing similarities with civets and mongooses, respectively. Species in Euplerinae (including the fossa, falanouc, and Malagasy civet) have auditory regions similar to those of viverrids, while those in Galidiinae have auditory regions similar to those of herpestids. Based on this trait, Robert M. Hunt Jr. proposed in 1996 that Madagascar was colonized twice, once by viverrids and once by herpestids. However, the genetic studies by Yoder and colleagues in 2003 suggested that a single colonization event occurred by a primitive herpestid ancestor, which was quickly followed by adaptive radiation. The common ancestor arrived from Africa, probably by rafting, during the late Oligocene or early Miocene (24–18 Mya), though Philippe Gaubert and Veron estimated a divergence date of 19.4 Mya (16.5–22.7 Mya). Classification Phylogenetic tree The phylogenetic relationships of Malagasy carnivorans (Eupleridae) are shown in the following cladogram:
Biology and health sciences
Other carnivora
Animals
5382118
https://en.wikipedia.org/wiki/European%20seabass
European seabass
The European seabass (Dicentrarchus labrax), also known as the branzino, European bass, sea bass, common bass, white bass, capemouth, white salmon, sea perch, white mullet, sea dace or loup de mer, is a primarily ocean-going fish native to the waters off Europe's western and southern and Africa's northern coasts, though it can also be found in shallow coastal waters and river mouths during the summer months and late autumn. It is one of only six species in its family, Moronidae, collectively called the temperate basses. It is fished and raised commercially and is considered the most important fish currently cultured in the Mediterranean. In Ireland and the United Kingdom, the popular restaurant fish sold and consumed as sea bass is exclusively the European bass. In North America, it is widely known by one of its Italian names, branzino. European seabass is a slow-growing species that takes several years to reach adulthood. An adult European seabass usually weighs around . European seabass can reach measurements of up to in length and in weight, though the most common size is only about half of that at . Individuals are silvery grey and sometimes a dark-bluish color on the back. Juveniles form schools and feed on invertebrates, while adults are less social and prefer to consume other fish. They are generally found in the littoral zone near the banks of rivers, lagoons, and estuaries during the summer and migrate offshore during the winter. European sea bass feed on prawns, crabs and small fish. Though it is a sought-after gamefish, it is listed as Least Concern by the International Union for Conservation of Nature because it is widespread and there are no known major threats. Taxonomy and phylogeny The European seabass was first described in 1758 by Swedish zoologist Carl Linnaeus in his work Systema Naturae. He named it Perca labrax. In the century and a half following, it was classified under a variety of new synonyms, with Dicentrarchus labrax winning out as the accepted name in 1987. Its generic name, Dicentrarchus, derives from Greek, from the presence of two anal spines, "di" meaning two, "kentron" meaning sting, and "archos" meaning anus. The European bass is sold under dozens of common names in various languages. In the British Isles, it is known as the "European bass," "European seabass," "common bass," "capemouth," "king of the mullets," "sea bass," "sea dace," "sea perch," "white mullet," "white salmon," or simply "bass". There are two genetically distinct populations of wild European seabass. The first is found in the northeast Atlantic Ocean, and the second is in the western Mediterranean Sea. The two populations are separated by a relatively narrow distance in a region known as the Almeria-Oran oceanographic front, located east of the Spanish city of Almería. The exact reason for this separation is unknown, as the geographic divide should not account for the lack of gene flow between the two populations. The larval stage of the European seabass can last up to 3 months, during which it cannot swim well, and even a small amount of water flow should transport some individuals between the two regions. In addition, juveniles can survive temperature and salinity changes, and adults can migrate hundreds of miles. Distribution and habitat European seabass habitats include estuaries, lagoons, coastal waters, and rivers. It is found in a large part of the eastern Atlantic Ocean, from southern Norway to Senegal. 
It can also be found in the entire Mediterranean Sea and in the southern Black Sea but is absent from the Baltic Sea. It has entered the Red Sea through the Suez Canal as an anti-Lessepsian migrant. It is a seasonally migratory species, moving to its winter spawning grounds at least one month before moving towards its summer feeding areas. Diet and behaviour The European seabass hunts as much during the day as it does at night, feeding on small fish (both pelagic, such as sardines, sprats, and sand smelts, and demersal, such as sand eels), polychaetes, cephalopods (such as squid), and crustaceans. The big fish weighing more than are mostly night hunters. They spawn from February to June, mostly in inshore waters. As fry they are pelagic, but as they develop, they move into estuaries, where they stay for a year or two. Fisheries and aquaculture Capture fisheries Annual catches of wild European seabass are relatively modest, fluctuating between 8,500 and 11,900 tonnes from 2000 to 2009. Most reported catches originate from the Atlantic Ocean, with France typically reporting the highest catches. In the Mediterranean, Italy used to report the largest catches but has been surpassed by Egypt. The fish has come under increasing pressure from commercial fishing and became the focus in the United Kingdom of a conservation effort by recreational anglers. The Republic of Ireland has strict laws regarding bass. All commercial fishing for the species is banned, and several restrictions are in place for recreational anglers: a closed season from May 15 – June 15 inclusive every year; a minimum size of ; and a bag limit of two fish per day. A scientific advisory (June 2013) stressed that fishing mortality is increasing. The total biomass has been declining since 2005. Total biomass, assumed to be the best stock size indicator, was 32% lower in the last two years (2011–2012) than in the three previous years (2008–2010). Farming European seabass was one of Europe's first fish to be farmed commercially. Historically, they were cultured in coastal lagoons and tidal reservoirs before mass-production techniques were developed in the late 1960s. It is the most important commercial fish widely cultured in the Mediterranean. Greece, Turkey, Italy, Spain, Croatia, and Egypt are the most important farming countries. Annual production was more than 120,000 tonnes in 2010. The world's biggest producer of European seabass is Turkey. Dish Branzino, sometimes known as "spigola" in southern Italy, is popular in Italian cuisine as a main course. It is often prepared by roasting the entire fish and serving it with lemon. The meat is often desired because of its sweet taste and flaky white texture. In French it is sometimes known as "bar" or "loup de mer", and in Spanish it is often referred to as "robalo". In each case, the whole fish is typically cooked and plated.
Biology and health sciences
Acanthomorpha
null
5382243
https://en.wikipedia.org/wiki/Thalassodromeus
Thalassodromeus
Thalassodromeus is a genus of pterosaur that lived in what is now Brazil during the Early Cretaceous period, about a hundred million years ago. The original skull, discovered in 1983 in the Araripe Basin of northeastern Brazil, was collected in several pieces. In 2002, the skull was made the holotype specimen of Thalassodromeus sethi by palaeontologists Alexander Kellner and Diogenes de Almeida Campos. The generic name means "sea runner" (in reference to its supposed mode of feeding), and the specific name refers to the Egyptian god Seth due to its crest being supposedly reminiscent of Seth's crown. Other scholars have pointed out that the crest was instead similar to the crown of Amon. A jaw tip was assigned to T. sethi in 2005, became the basis of the new genus Banguela in 2015, and was assigned back to Thalassodromeus as the species T. oberlii in 2018, though other researchers still consider Banguela a valid genus. Another species (T. sebesensis) was described in 2015 based on a supposed crest fragment, but this was later shown to be part of a turtle shell. Thalassodromeus had one of the largest known skulls among pterosaurs, around long, with one of the proportionally largest cranial crests of any vertebrate. Though only the skull is known, the animal is estimated to have had a wingspan of . The crest was lightly built and ran from the tip of the upper jaw to beyond the back of the skull, ending in a unique V-shaped notch. The jaws were toothless, and had sharp upper and lower edges. Its skull had large nasoantorbital fenestrae (openings that combined the antorbital fenestra in front of the eye with the bony nostril), and part of its palate was concave. The lower jaw was blade-like, and may have turned slightly upwards. The closest relative of Thalassodromeus was Tupuxuara; both are grouped in a clade called Thalassodromidae, which, depending on the study, has been placed either within Tapejaromorpha, closely related to the family Tapejaridae, or within Neoazhdarchia, closely related to the family Dsungaripteridae. Several theories have been suggested to explain the function of Thalassodromeus's crest, including thermoregulation and display, but it likely had more than one function. The crests of thalassodromids appear to have developed late in growth (probably correlated with sexual maturity) and they may have been sexually dimorphic (differing according to sex). As the genus name implies, Thalassodromeus was originally proposed to have fed like a modern skimmer bird, by skimming over the water's surface and dipping its lower jaws to catch prey. This idea was later criticised for lack of evidence; Thalassodromeus has since been found to have had strong jaw musculature, and may have been able to kill and eat relatively large prey on the ground. The limb proportions of related species indicate that it may have adapted to fly in inland settings, and would have been efficient at moving on the ground. Thalassodromeus is known from the Romualdo Formation, where it coexisted with many other types of pterosaurs, dinosaurs and other animals. History of discovery The first known specimen of this pterosaur (a member of an extinct order of flying reptiles) was collected in 1983 near the town of Santana do Cariri in the Araripe Basin of northeastern Brazil. Found in outcrops of the Romualdo Formation, it was collected over a long period in several pieces.
The specimen (catalogued as DGM 1476-R at the Museu de Ciências da Terra) was preserved in a calcareous nodule, and consists of an almost-complete, three-dimensional skull (pterosaur bones are often flattened compression fossils), missing two segments of the bottom of the skull and mandible and the front of the lower jaw. The left jugal region and right mandibular ramus (half of the mandible) are pushed slightly inward. The skull was first reported in a 1984 Italian book, and preliminarily described and figured in 1990 by palaeontologists Alexander Kellner and Diogenes de Almeida Campos. Although the pieces of skull had been divided between museums in South and North America, they were assembled before 2002. In 2002, Kellner and Campos described and named the new genus and species Thalassodromeus sethi, skull DGM 1476-R being the holotype specimen. The generic name is derived from Ancient Greek words meaning "sea runner", in reference to the animal's supposed skim-feeding behaviour. The specific name refers to the Egyptian god Seth. The specimen was not fully prepared at the time of this preliminary description. The original describers chose the name sethi because the crest of this pterosaur was supposedly reminiscent of the crown worn by Seth, but the palaeontologists André Jacques Veldmeijer, Marco Signore, and Hanneke J. M. Meijer pointed out in 2005 that the crown (with its two tall plumes) was typically worn by the god Amon (or Amon-Ra) and his manifestations, not by Seth. In 2006, palaeontologists David M. Martill and Darren Naish suggested that Thalassodromeus was a junior synonym of the related genus Tupuxuara, which was named by Kellner and Campos in 1988 based on fossils from the same formation. In the view of Martill and Naish, the differences between these genera (including two species of Tupuxuara, T. longicristatus and T. leonardii) were due to ontogeny (changes during growth) and compression of the fossils; Thalassodromeus was simply an older, larger, and better-preserved individual. This idea was rejected by Kellner and Campos in 2007, who pointed out these species had differences in features other than their crests. They also noted that one specimen of Tupuxuara had a larger skull than Thalassodromeus (measured from the tip of the premaxilla to the back of the squamosal bone), despite Martill and Naish's contention that the latter was an older individual. Kellner and Campos' view has since been accepted by other researchers, including Martill and Naish. Assigned and formerly assigned species Veldmeijer and colleagues assigned the front part of a mandible collected from the same formation to T. sethi in 2005. They concluded that although the two specimens differed in several details, the differences were not significant enough to base a new species on the mandible, and that the new specimen filled in the gap of Kellner and Campos' T. sethi skull reconstruction. In 2015, palaeontologists Jaime A. Headden and Herbert B. N. Campos coined the new binomial Banguela oberlii, based on their reinterpretation of the jaw tip as belonging to a toothless member of the family Dsungaripteridae. The generic name is Portuguese for "toothless" and the specific name honours private collector Urs Oberli, who had donated the specimen to the Naturmuseum St. Gallen (where it is catalogued as NMSG SAO 25109). Headden and Campos interpreted the tip of T. sethi's lower jaw as downturned; this and other features distinguished it from Banguela.
In their 2018 re-description of the further-prepared T. sethi holotype skull, palaeontologists Rodrigo V. Pêgas, Fabiana R. Costa, and Kellner assigned B. oberlii back to Thalassodromeus while recognising it as a distinct species, and thereby created the new combination T. oberlii. Pêgas and colleagues also rejected the theory that the lower jaw of T. sethi was downturned, and reinterpreted the frontmost piece of the lower jaw to have connected directly with the subsequent piece (with no gap). In 2020, palaeontologist James McPhee and colleagues considered Banguela a valid genus but classified it as a member of the family Chaoyangopteridae, finding a dsungaripterid identity poorly supported. In 2015, palaeontologists Gerald Grellet-Tinner and Vlad A. Codrea named a new species, T. sebesensis, based on what they interpreted as part of a cranial crest in a concretion found near the Sebeș River in Romania. The authors said that this would extend the range in time and space for the genus Thalassodromeus considerably, creating a 42-million-year gap between the older South American species and the younger European species. Palaeontologist Gareth J. Dyke and a large team of colleagues immediately rejected the pterosaurian identification of the T. sebesensis fossil, instead arguing that it was a misidentified part of a plastron (lower shell) of the prehistoric turtle Kallokibotion bajazidi (named in 1923). The idea that the fragment belonged to a turtle had been considered and rejected by Grellet-Tinner and Codrea in their original description. Grellet-Tinner and Codrea denied the turtle identity suggested by Dyke and colleagues, noting that those researchers had not directly examined the fossil. Description The holotype (and only known skull) of Thalassodromeus sethi is one of the largest pterosaur skulls ever discovered. The entire skull is estimated to have been long; the bones were fused together, indicating adulthood. Based on related pterosaurs, its wingspan was , making Thalassodromeus the largest known member of its clade, Thalassodromidae. Of similar proportions, its skull was more heavily built than that of its relative Tupuxuara. Although the postcranial skeleton of Thalassodromeus is unknown, relatives had unusually short and blocky neck vertebrae, with well-developed front and hind-limbs that were almost equal in length (excluding the long wing-finger). The hindlimbs were eighty percent of the forelimb length, a unique ratio among pterodactyloids (short-tailed pterosaurs). As a pterosaur, Thalassodromeus was covered with hair-like pycnofibres and had extensive wing membranes (which were extended by the wing finger). The skull of T. sethi had a streamlined profile, especially from the tip of the snout to the front edge of the nasoantorbital fenestra (opening which combined the antorbital fenestra in front of the eye with the bony nostril). The most conspicuous feature of the skull was the large crest, which ran along the upper edge from the tip of the snout and beyond the occiput at the back of the skull, almost doubling the length and height of the skull. With the exception of the pterosaur Tupandactylus imperator (whose crest consisted mainly of soft tissue), T. sethi had the proportionally largest cranial crest of any known vertebrate (75 percent of the skull's side surface). The crest was mainly formed by the premaxillae (the frontmost snout bones), frontal bones, parietal bones, and part of the supraoccipital bone.
The premaxillae formed most of the crest, extending to its back, and contacted the frontoparietal part of the crest by a straight suture (a distinct feature of this species). The crest varied from in thickness; it thickened at the contact between the premaxillae and the frontoparietal part, and became gradually thinner toward the top and back (except for the lower part behind the occiput, where it had a thick base). Despite its size, the crest was lightly built and essentially hollow; some areas show signs of skeletal pneumatisation and a well-developed trabecular system uniting the bones. The crest's surface had a system of channels of varying size and thickness, probably the impressions of extensive blood vessels. A small opening was present above the orbit (eye socket), piercing the basal part of the crest; such a feature is unknown in other pterosaurs, and does not appear to be due to damage. The margins of the opening are smooth, and the inner border has fenestration connecting it to the inner structure of the crest. The back of the crest ended in a prominent V-shaped notch, a unique feature of this species. Although other parts of the crest have V-shaped breaks, the V shape at the end does not appear to have been due to breakage; the margins of the bone can be seen there, still encased by matrix. The crest probably had a keratinous (horny) covering and may have been extended by soft tissue in some areas, but the extent of this is unknown. The upper jaw of T. sethi was primarily composed of premaxillae and maxillae; the suture which formed the border between these bones is not visible. As in all members of its clade, the jaws were edentulous (toothless). The rostrum (snout) was long from the tip of the premaxilla to the joint where the quadrate bone of the skull connected with the articular bone of the lower jaw. The front of the premaxillae had sharp upper and lower edges, unique to this species. As in related genera, the nasoantorbital fenestra was comparatively large; it was long and high, which was 71 percent of the skull length (excluding the crest). The lacrimal bone, which separated the orbit from the nasoantorbital fenestra, was vertically elongated and higher than the upper surface of the orbit (in contrast to the condition seen in pterodactyloids with smaller nasoantorbital fenestrae). The orbit was slender and compressed from front to back compared to Tupuxuara and tapejarids, but similar to some of them in being more than half the height of the nasoantorbital fenestra. The orbit was positioned lower than the upper margin of the nasoantorbital fenestra, and therefore very low on the skull. Although the bones bordering the lower temporal fenestra (an opening behind the orbit) were incomplete, it appears to have been elongated and slit-like (as in Tupuxuara and Tapejara). The palatal area at the tip of T. sethi's snout was a sharp ridge, similar to the keel seen on the upper surface of the mandibular symphysis where the two halves of the lower jaw connected. Small slit-like foramina (openings) on the lower side edges of the ridge indicate that it had a horny covering in life, similar to Tupandactylus. The lower edge of the area was somewhat curved, which probably created a small gap when the jaws were closed. Further back, immediately in front of the nasoantorbital fenestra, the palatal ridge became a strong, blunt, convex keel.
This convexity fit into the symphyseal shelf at the front end of the lower jaw, and they would have tightly interlocked when the jaws were closed. The palatal ridge ended in a strongly concave area unique to this species. The postpalatine fenestrae (openings behind the palatine bone) were oval and very small, differing from those of related species. The ectopterygoid (bone on the side of the palate) had large, plate-like sides, and was well-developed compared to related species. The supraoccipital bone, which formed the hindmost base of the cranial crest, had muscle scars at its upper end (probably corresponding to the attachment of neck muscles). Although the lower jaw of T. sethi is incomplete, 47 percent of its estimated total length was occupied by the mandibular symphysis. The tip of the mandible is missing, but its front surface indicates that it might have been turned slightly upwards as in T. oberlii (the possible second species of Thalassodromeus, or possibly a different genus (Banguela) which is known only from a jaw tip). The symphyseal shelf, the upper surface of the symphysis, extended for and had a flat surface. Seen from above, the side edges of this area were tall and formed a sharp margin. Near the front end of the symphysis, the edges which formed the margins became broader towards the front of the shelf until they met and fused. The upper and lower surfaces of the jaw at the front of the shelf were keeled (the upper keel more robust and starting before the lower), which gave the symphysis a blade-like shape. The lower keel became deeper towards the front of the jaw, giving the impression that the jaw deflected downwards; it was actually straight, except for the (perhaps) upturned tip. The mandibular fossae (depressions) at the back of the upper jaw were deeper and broader than usual in pterodactyloids, creating large surfaces for the lower jaw to articulate with. The possible species T. oberlii differed from T. sethi and other relatives by the upper surface of its mandibular symphysis being slightly shorter than the lower surface, and was further distinguished from T. sethi by the upper edge of the symphysis being much sharper than the lower. The two species shared features such as the compression of the symphysis sideways and from top to bottom, the sharp keel at the upper front of the symphysis, and the small groove running along the upper surface of the shelf. Classification The classification of Thalassodromeus and its closest relatives is one of the most contentious issues regarding their group. Kellner and Campos originally assigned Thalassodromeus to the family Tapejaridae, based on its large crest and large nasoantorbital fenestra. Within this clade, they found that it differed from the short-faced genus Tapejara but shared a keel on the palate with Tupuxuara. Kellner elaborated on the relationships within Tapejaridae in 2004, and pointed out that Thalassodromeus and Tupuxuara also shared a crest consisting primarily of bone; the crest had a large component of soft tissue in other members of the group. Martill and Naish considered Tapejaridae a paraphyletic (unnatural) group in 2006, and found Tupuxuara (which included Thalassodromeus in their analysis) to be the sister taxon to the family Azhdarchidae. This clade (Tupuxuara and Azhdarchidae) had been named Neoazhdarchia by palaeontologist David Unwin in 2003, an arrangement Martill and Naish concurred with.
According to Martill, features uniting members of Neoazhdarchia included the presence of a notarium (fused vertebrae in the shoulder region), the loss of contact between the first and third metacarpals (bones in the hand), and very long snouts (more than 88% of the skull length). Kellner and Campos defended the validity of Tapejaridae in 2007, dividing it into two clades: Tapejarinae and Thalassodrominae, the latter containing Thalassodromeus (the type genus) and Tupuxuara. They distinguished thalassodromines by their high nasoantorbital fenestrae and the bony part of their crests beginning at the front of the skull and continuing further back than in other pterosaurs. The interrelationship of these clades within the larger clade Azhdarchoidea remained disputed, and the clade containing Thalassodromeus and Tupuxuara had received different names from different researchers (Thalassodrominae and Tupuxuaridae). Palaeontologist Mark Witton attempted to resolve the naming issue in 2009, noting that the name "Tupuxuaridae" (first used in the vernacular form "tupuxuarids" by palaeontologist Lü Junchang and colleagues in 2006) had never been validly established and Thalassodrominae should be the proper name (although it was bestowed a year later). Witton further converted the subfamily name Thalassodrominae into the family name Thalassodromidae, and considered the clade part of Neoazhdarchia. A 2011 analysis by palaeontologist Felipe Pinheiro and colleagues upheld the grouping of the clades Tapejarinae and Thalassodrominae in the family Tapejaridae, joined by the Chaoyangopterinae. This arrangement of Thalassodrominae and Tapejarinae would later be kept by Pêgas and colleagues in their 2018 analysis, but they acknowledged that the subject was still controversial. Conversely, a 2018 study by palaeontologist Nicholas Longrich and colleagues instead found the family Thalassodromidae to group with dsungaripterids, forming the clade Dsungaripteromorpha within Neoazhdarchia (defined as the most inclusive clade containing Dsungaripterus weii but not Quetzalcoatlus northropi). In 2021, palaeontologist Gabriela M. Cerqueira and colleagues found their new genus Kariridraco to be the sister taxon of Thalassodromeus and Tupuxuara. In 2023, having considered that the clade containing Thalassodromeus and Tupuxuara has received two different denominations throughout the years (Thalassodromidae and Thalassodrominae), palaeontologist Rodrigo Pêgas and colleagues argued that despite the disagreements over the position of said clade within Azhdarchoidea, the species contained within it have almost always been consistent. Therefore, they deemed the difference in naming pattern undesirable. They favored the denomination Thalassodromidae, in order to have consistency with other studies that used the same name. In their analysis, they corroborated the close relationship between thalassodromids and the family Tapejaridae, following the classification model established by Kellner. They included both families within the larger group Tapejaromorpha (defined as the most inclusive clade containing Tapejara wellnhoferi but not Azhdarcho lancicollis). Additionally, they also found T. oberlii to belong to Thalassodromeus. Cladogram based on Longrich and colleagues, 2018: Cladogram based on Pêgas and colleagues, 2023: Palaeobiology Crest function Possible functions for Thalassodromeus's cranial crest were proposed by Kellner and Campos in 2002.
They suggested that the network of blood vessels on its large surface was consistent with use for thermoregulation, which had also been suggested for the crests of some dinosaurs. Kellner and Campos thought that the crest was used for cooling (enabling the animal to dissipate excess metabolic heat through convection), while heat transfer was controlled by, and depended on, the network of blood vessels. The ability to control its body temperature would have aided Thalassodromeus during intense activity (such as hunting), and they suggested that, when in flight, heat would have been dispelled more effectively if the crest were aligned with the wind, while the head was intentionally moved to the sides. Kellner and Campos posited that the crest could have had additional functions, such as display; aided by colour, it could have been used in species recognition, and could also have been a sexually dimorphic feature (differing according to sex), as has been proposed for Pteranodon. In 2006, Martill and Naish found that the crests of Tupuxuara and its relatives developed by the premaxillary portion of the crests growing backwards over the skull-roof (as indicated by the well-defined suture between the premaxilla and the underlying bones). The hind margin of the premaxillary part of this specimen's crest had only reached above the hind margin of the nasoantorbital fenestra, indicating that it was not an adult at the time of death. This suggests that the development of the crest happened late in the growth of an individual, was probably related to sexual display, and the sexual maturity of a given specimen could be assessed by the size and disposition of the crest. The T. sethi holotype, with its hypertrophied (enlarged) premaxillary crest, would thereby represent an old adult individual (and the mature stage of Tupuxuara, according to their interpretation). Kellner and Campos found Martill and Naish's discussion of cranial crest development interesting, although they found their proposed model speculative. Palaeontologists David W. E. Hone, Naish, and Innes C. Cuthill reiterated Martill and Naish's growth hypothesis in 2012; since pterosaurs were probably precocial and able to fly shortly after hatching, the role of the crest was relevant only after maturity (when the structure was fully grown). They deemed the thermoregulation hypothesis an unlikely explanation for the blood-vessel channels on the crest, which they found consistent with nourishment for growing tissue (such as the keratin in bird beaks). Hone, Naish, and Cuthill suggested that the wing membranes and air-sac system would have been more effective at controlling heat than a crest, and wind and water could also have helped cool pterosaurs in high-temperature maritime settings. In 2013, Witton agreed that the substantially larger crests of adult thalassodromids indicated that they were more important for behavioural activities than for physiology. He found the idea that the crests were used for thermoregulation problematic, since they did not grow regularly with body size; they grew at a fast pace in near-adults, quicker than what would be predicted for the growth of a thermoregulatory structure. According to Witton, the large, highly vascular wing membranes of pterosaurs would provide the surface area needed for thermoregulation, meaning the crests were not needed for that function.
He concluded that the crest's blood-vessel patterns did not differ much from those seen on bones under the beaks of birds, which are used for transporting nutrients to the bone and soft tissues rather than for thermoregulation. Witton noted that although bird beaks lose heat quickly, that is not what they were developed for; the crests of pterosaurs might also have had an effect on thermoregulation, without this being their primary function. Pêgas and colleagues noted that sexual dimorphism in crest size and shape has been proposed for some pterosaurs; the crest shape seen in the T. sethi holotype may correlate with one sex and may have been the result of sexual selection. They suggested that both sexes could have had similar crests due to mutual sexual selection, but interpretation of exaggerated features was challenging due to the small sample size; more T. sethi specimens would have to be found to evaluate these theories. They did not think that thermoregulation correlated with crest growth relative to body size, since the bills of toucans (the largest of any modern birds) grow drastically out of proportion to body size and function as thermoregulatory structures, as well as facilitating feeding and social behaviour. Pêgas and colleagues found the vascular structure of toucan bills comparable to that in the crest of T. sethi, concluding that the crest also had multiple functions. Feeding and diet Kellner and Campos originally found the jaws of Thalassodromeus similar to those of modern skimmers (three bird species in the genus Rhynchops), with their sideways-compressed jaws, blade-like beak, and protruding lower jaw (resembling scissors in side view). They argued that Thalassodromeus would have fed in a similar way, as implied by the genus name; skimmers skim over the surface of water, dipping their lower jaw to catch fish and crustaceans. Kellner and Campos listed additional skull features of skimmers which are adaptations for skim feeding, including enlarged palatine bones, a feature also shared with Thalassodromeus. Unlike skimmers and other pterosaurs, the palatine bones of Thalassodromeus were concave, which the writers suggested could have helped it momentarily store food. Like skimmers, Thalassodromeus also appears to have had powerful neck muscles, large jaw muscles, and an upper jaw tip well-irrigated by blood (features which Kellner and Campos interpreted as adaptations for skimming). They concluded that the scissor-like bill and thin crest almost made other modes of capturing prey, such as swooping down toward water and plunging into it, impossible. Conceding the difficulty of reconstructing Thalassodromeus's fishing method, they envisioned it with a less-mobile neck than skimmers; with the crest impeding its head from submersion it would glide, flapping its wings only occasionally. They found that the pterosaur with jaws most similar to those of Thalassodromeus was the smaller Rhamphorhynchus, although they believed that it would have had limited skimming ability. In 2004, palaeontologist Sankar Chatterjee and engineer R. Jack Templin said that smaller pterosaurs may have been able to skim-feed. They doubted that this was possible for larger ones, due to their lesser manoeuvrability and flying capability while resisting water. Chatterjee and Templin noted that skimmers have blunter beaks than pterosaurs like Thalassodromeus, to direct water from the jaw while skimming.
In 2007, biophysicist Stuart Humphries and colleagues questioned whether any pterosaurs would have commonly fed by skimming and said that such conclusions had been based on anatomical comparisons rather than biomechanical data. The drag experienced by bird bills and pterosaur jaws was hydrodynamically and aerodynamically tested by creating model bills of the black skimmer, Thalassodromeus, and the (presumably) non-skimming Tupuxuara and towing them along a water-filled trough at varying speeds. The researchers found that skimming used more energy for skimmers than previously thought, and would have been impossible for a pterosaur weighing more than due to the metabolic power required. They found that even smaller pterosaurs, like Rhamphorhynchus, were not adapted for skimming. The aluminium rigging of the Thalassodromeus model was destroyed during the experiment, due to the high and unstable forces exerted on it while skimming at high speed, casting further doubt on this feeding method. The authors used the jaw tip of T. oberlii to model the performance of Thalassodromeus, since it was assigned to T. sethi at the time. Unwin and Martill suggested in 2007 that thalassodromids may have foraged similarly to storks, as had been suggested for azhdarchids. Witton said in 2013 that although skim-feeding had been suggested for many pterosaur groups, the idea was criticised in recent years; pterosaurs lacked virtually all adaptations for skim-feeding, making it unlikely that they fed this way. Thalassodromeus (unlike skimmers) did not have a particularly wide or robust skull or especially large jaw-muscle attachment sites, and its mandible was comparatively short and stubby. Witton agreed with Unwin and Martill that thalassodromids, with their equal limb proportions and elongated jaws, were suited to roaming terrestrially and feeding opportunistically; their shorter, more flexible necks indicated a different manner of feeding than azhdarchids, which had longer, stiffer necks. He suggested that thalassodromids may have had more generalised feeding habits, and azhdarchids may have been more restricted; Thalassodromeus may have been better at handling relatively large, struggling prey than its relative, Tupuxuara, which had a more lightly built skull. Witton stressed that more studies of functional morphology would have to be done to illuminate the subject and speculated that Thalassodromeus might have been a raptorial predator, using its jaws to subdue prey with strong bites; its concave palate could help it swallow large prey. Pêgas and Kellner presented a reconstruction of the mandibular muscles of T. sethi at a conference in 2015. They found that its well-developed jaw muscles differed from those of the possible dip-feeder Anhanguera and the terrestrially stalking azhdarchids, indicating that T. sethi had a strong bite force. In 2018, Pêgas and colleagues agreed that Thalassodromeus's blade-like, robust jaws indicated that it could have used them to strike and kill prey, but they thought that biomechanical work was needed to substantiate the idea. They found (unlike Witton) that Thalassodromeus had a reinforced jaw joint and robust jaw muscles, but more work was needed to determine its dietary habits. According to Pêgas and colleagues, the articulation between T. sethi's articular and quadrate bones (where the lower jaw connected with the skull) indicates a maximum gape of 50 degrees, similar to the 52-degree gape inferred for Quetzalcoatlus. Locomotion In a 2002 comment on the original description of T.
sethi, engineer John Michael Williams noted that although Kellner and Campos had mentioned that the large crest might have interfered aerodynamically during flight, they had not elaborated on this point and had compared the pterosaur with a bird one-fifth its size. He suggested that Thalassodromeus used its crest to balance its jaws, with the head changing attitude depending on the mode of locomotion. Williams speculated that the crest would have been inflatable with blood and would have presented varying air resistance, which he compared to a handheld fan; this would have helped the animal change the attitude of the head during flight (and during contact with water), keeping it from rotating without powerful neck muscles. The crest would have made long flights possible, rather than interfering; Williams compared it with the spermaceti in the head of the sperm whale, stating it is supposedly used to change buoyancy through temperature adjustment. Kellner and Campos rejected the idea of an inflatable crest, since its compressed bones would not allow this; they did not find the sperm-whale analogy convincing in relation to flying animals, noting that spermaceti is more likely to be used during aggression or for sonar. They agreed that the idea of the crest having an in-flight function was tempting and sideways movement of the head would have helped it change direction, but biomechanical and flight-mechanical studies of the crest would have to be conducted to determine the animal's aerodynamics. Witton also expressed hope for further analysis of thalassodromid locomotion. He noted that since their limb proportions were similar to those of the better-studied azhdarchids, the shape of their wings and style of flight might have been similar. Thalassodromids might also have been adapted for inland flight; their wings were short and broad (unlike the long, narrow wings of marine soarers), and were more manoeuvrable and less likely to snag on obstacles. Their lower shoulder muscles appear to have been enlarged, which would have helped with powerful (or frequent) wing downstrokes and takeoff ability. Although it may have had to compensate for its large crest during flight, its development late in growth indicates that it did not develop primarily for aerodynamics. Witton suggested that the proportional similarity between the limbs of thalassodromids and azhdarchids also indicates that their terrestrial abilities would have been comparable. Their limbs would have been capable of long strides, and their short, compact feet would have made these mechanics efficient. The enlarged shoulder muscles may have allowed them to accelerate quickly when running, and they may have been as adapted for movement on the ground as has been suggested for azhdarchids; Witton cautioned that more analysis of thalassodromids was needed to determine this. Palaeoecology Thalassodromeus is known from the Romualdo Formation, which dates to the Albian stage of the Early Cretaceous period (about 110 million years ago). The formation is part of the Santana Group and, at the time Thalassodromeus was described, was thought to be a member of what was then considered the Santana Formation. The Romualdo Formation is a Lagerstätte (a sedimentary deposit that preserves fossils in excellent condition) consisting of lagoonal limestone concretions embedded in shales, and overlies the Crato Formation. It is well known for preserving fossils three-dimensionally in calcareous concretions, including many pterosaur fossils.
As well as muscle fibres of pterosaurs and dinosaurs, fish preserving gills, digestive tracts, and hearts have been found there. The formation's tropical climate largely corresponded to today's Brazilian climate. Most of its flora were xerophytic (adapted to dry environments). The most widespread plants were Cycadales and the conifer Brachyphyllum. Other pterosaurs from the Romualdo Formation include Anhanguera, Araripedactylus, Araripesaurus, Brasileodactylus, Cearadactylus, Coloborhynchus, Santanadactylus, Tapejara, Tupuxuara, Barbosania, Maaradactylus, Tropeognathus, and Unwindia. Thalassodromines are known only from this formation, and though well-preserved postcranial remains from there have been assigned to the group, they cannot be assigned to genus due to their lack of skulls. Dinosaur fauna includes theropods like Irritator, Santanaraptor, Mirischia, and an indeterminate unenlagiine dromaeosaur. The crocodyliforms Araripesuchus and Caririsuchus, as well as the turtles Brasilemys, Cearachelys, Araripemys, Euraxemys, and Santanachelys, are known from the deposits. There were also clam shrimps, sea urchins, ostracods, and molluscs. Well-preserved fish fossils record the presence of hybodont sharks, guitarfish, gars, amiids, ophiopsids, oshuniids, pycnodontids, aspidorhynchids, cladocyclids, bonefishes, chanids, mawsoniids and some uncertain forms. Pêgas and colleagues noted that pterosaur taxa from the Romualdo Formation had several species: two of Thalassodromeus, two of Tupuxuara, and up to six species of Anhanguera. It is possible that not all species in each taxon coexisted in time (as has been proposed for the pteranodontids of the Niobrara Formation), but there is not enough stratigraphic data for the Romualdo Formation to test this.
Biology and health sciences
Pterosaurs
Animals
24740532
https://en.wikipedia.org/wiki/Stegodontidae
Stegodontidae
Stegodontidae is an extinct family of proboscideans from Africa and Asia (with a single occurrence in Europe) from the Early Miocene (at least 17.3 million years ago) to the Late Pleistocene. It contains two genera: the earlier Stegolophodon, known from the Miocene of Asia, and the later Stegodon, from the Late Miocene to Late Pleistocene of Africa and Asia (with a single occurrence in Greece), which is thought to have evolved from the former. The group is noted for the plate-like lophs on their teeth, which are similar to those of elephants and different from those of other extinct proboscideans like gomphotheres and mammutids; both groups have a proal jaw movement utilizing forward strokes of the lower jaw. These similarities with modern elephants were probably convergently evolved. Like elephantids, stegodontids are thought to have evolved from gomphothere ancestors. Taxonomy Stegodontidae was named by Osborn (1918). It was assigned to Mammutoidea by Carroll (1988); to Elephantoidea by Lambert and Shoshani (1998); and again to Elephantoidea by Shoshani et al. (2006). While Stegodon was historically considered an elephant, this is now largely rejected, with the similarities considered to be convergent.
Biology and health sciences
Proboscidea
Animals
43116674
https://en.wikipedia.org/wiki/Annelid
Annelid
The annelids, also known as the segmented worms, comprise a large phylum called Annelida. The phylum contains over 22,000 extant species, including ragworms, earthworms, and leeches. The species exist in and have adapted to various ecologies – some in marine environments as distinct as tidal zones and hydrothermal vents, others in fresh water, and yet others in moist terrestrial environments. Annelids are bilaterally symmetrical, triploblastic, coelomate, invertebrate organisms. They also have parapodia for locomotion. Most textbooks still use the traditional division into polychaetes (almost all marine), oligochaetes (which include earthworms) and leech-like species. Cladistic research since 1997 has radically changed this scheme, viewing leeches as a sub-group of oligochaetes and oligochaetes as a sub-group of polychaetes. In addition, the Pogonophora, Echiura and Sipuncula, previously regarded as separate phyla, are now regarded as sub-groups of polychaetes. Annelids are considered members of the Lophotrochozoa, a "super-phylum" of protostomes that also includes molluscs, brachiopods, and nemerteans. The basic annelid form consists of multiple segments. Each segment has the same sets of organs and, in most polychaetes, has a pair of parapodia that many species use for locomotion. Septa separate the segments of many species, but are poorly defined or absent in others, and Echiura and Sipuncula show no obvious signs of segmentation. In species with well-developed septa, the blood circulates entirely within blood vessels, and the vessels in segments near the front ends of these species are often built up with muscles that act as hearts. The septa of such species also enable them to change the shapes of individual segments, which facilitates movement by peristalsis ("ripples" that pass along the body) or by undulations that improve the effectiveness of the parapodia. In species with incomplete septa or none, the blood circulates through the main body cavity without any kind of pump, and there is a wide range of locomotory techniques – some burrowing species turn their pharynges inside out to drag themselves through the sediment. Earthworms are oligochaetes that support terrestrial food chains as prey and, in some regions, are important in aerating and enriching soil. The burrowing of marine polychaetes, which may constitute up to a third of all species in near-shore environments, encourages the development of ecosystems by enabling water and oxygen to penetrate the sea floor. In addition to improving soil fertility, annelids serve humans as food and as bait. Scientists observe annelids to monitor the quality of marine and fresh water. Although blood-letting is used less frequently by doctors than it once was, some leech species are regarded as endangered species because they have been over-harvested for this purpose in the last few centuries. Ragworms' jaws are now being studied by engineers as they offer an exceptional combination of lightness and strength. Since annelids are soft-bodied, their fossils are rare – mostly jaws and the mineralized tubes that some of the species secreted. Although some late Ediacaran fossils may represent annelids, the oldest known fossil that is identified with confidence comes from about in the early Cambrian period. Fossils of most modern mobile polychaete groups appeared by the end of the Carboniferous, about .
Palaeontologists disagree about whether some body fossils from the mid Ordovician, about , are the remains of oligochaetes, and the earliest indisputable fossils of the group appear in the Paleogene period, which began 66 million years ago. Classification and diversity There are over 22,000 living annelid species, ranging in size from microscopic to the Australian giant Gippsland earthworm and Amynthas mekongianus, which can both grow up to long, and the largest annelid, Microchaetus rappi, which can grow up to 6.7 m (22 ft). Although research since 1997 has radically changed scientists' views about the evolutionary family tree of the annelids, most textbooks use the traditional classification into the following sub-groups: Polychaetes (about 12,000 species). As their name suggests, they have multiple chetae ("hairs") per segment. Polychaetes have parapodia that function as limbs, and nuchal organs that are thought to be chemosensors. Most are marine animals, although a few species live in fresh water and even fewer on land. Clitellates (about 10,000 species). These have few or no chetae per segment, and no nuchal organs or parapodia. However, they have a unique reproductive organ, the ring-shaped clitellum ("pack saddle") around their bodies, which produces a cocoon that stores and nourishes fertilized eggs until they hatch or, in moniligastrids, yolky eggs that provide nutrition for the embryos. The clitellates are sub-divided into: Oligochaetes ("with few hairs"), which includes earthworms. Oligochaetes have a sticky pad in the roof of the mouth. Most are burrowers that feed on wholly or partly decomposed organic materials. Hirudinea, whose name means "leech-shaped" and whose best known members are leeches. Marine species are mostly blood-sucking parasites, mainly on fish, while most freshwater species are predators. They have suckers at both ends of their bodies, and use these to move rather like inchworms. The Archiannelida, minute annelids that live in the spaces between grains of marine sediment, were treated as a separate class because of their simple body structure, but are now regarded as polychaetes. Some other groups of animals have been classified in various ways, but are now widely regarded as annelids: Pogonophora / Siboglinidae were first discovered in 1914, and their lack of a recognizable gut made it difficult to classify them. They have been classified as a separate phylum, Pogonophora, or as two phyla, Pogonophora and Vestimentifera. More recently they have been re-classified as a family, Siboglinidae, within the polychaetes. The Echiura have a checkered taxonomic history: in the 19th century they were assigned to the phylum "Gephyrea", which is now empty as its members have been assigned to other phyla; the Echiura were next regarded as annelids until the 1940s, when they were classified as a phylum in their own right; but a molecular phylogenetics analysis in 1997 concluded that echiurans are annelids. Myzostomida live on crinoids and other echinoderms, mainly as parasites. In the past they have been regarded as close relatives of the trematode flatworms or of the tardigrades, but in 1998 it was suggested that they are a sub-group of polychaetes. However, another analysis in 2002 suggested that myzostomids are more closely related to flatworms or to rotifers and acanthocephalans. The Sipuncula were originally classified as annelids, despite the complete lack of segmentation, bristles and other annelid characters.
The phylum Sipuncula was later allied with the Mollusca, mostly on the basis of developmental and larval characters. Phylogenetic analyses based on 79 ribosomal proteins indicated a position of Sipuncula within Annelida. Subsequent analysis of mitochondrial DNA has confirmed their close relationship to the Myzostomida and Annelida (including echiurans and pogonophorans). It has also been shown that a rudimentary neural segmentation similar to that of annelids occurs in the early larval stage, even if these traits are absent in the adults. Mitogenomic and phylogenomic analyses also imply that Orthonectida, a group of extremely simplified parasites traditionally placed in Mesozoa, are actually reduced annelids. Research also suggests that nemerteans are annelids, with Oweniidae and Magelonidae as their closest relatives. Distinguishing features No single feature distinguishes annelids from other invertebrate phyla, but they have a distinctive combination of features. Their bodies are long, with segments that are divided externally by shallow ring-like constrictions called annuli and internally by septa ("partitions") at the same points, although in some species the septa are incomplete and in a few cases missing. Most of the segments contain the same sets of organs, although sharing a common gut, circulatory system and nervous system makes them inter-dependent. Their bodies are covered by a cuticle (outer covering) that does not contain cells but is secreted by cells in the skin underneath, is made of tough but flexible collagen and does not molt – on the other hand arthropods' cuticles are made of the more rigid α-chitin, and molt until the arthropods reach their full size. Most annelids have closed circulatory systems, where the blood makes its entire circuit via blood vessels. Description Segmentation In addition to Sipuncula and Echiura, lineages such as Lobatocerebrum, Diurodrilus and Polygordius have also lost their segmentation, but these are exceptions to the rule. Most of an annelid's body consists of segments that are practically identical, having the same sets of internal organs and external chaetae (Greek χαιτη, meaning "hair") and, in some species, appendages. The frontmost and rearmost sections are not regarded as true segments as they do not contain the standard sets of organs and do not develop in the same way as the true segments. The frontmost section, called the prostomium (Greek προ- meaning "in front of" and στομα meaning "mouth") contains the brain and sense organs, while the rearmost, called the pygidium (Greek πυγιδιον, meaning "little tail") or periproct contains the anus, generally on the underside. The first section behind the prostomium, called the peristomium (Greek περι- meaning "around" and στομα meaning "mouth"), is regarded by some zoologists as not a true segment, but in some polychaetes the peristomium has chetae and appendages like those of other segments. The segments develop one at a time from a growth zone just ahead of the pygidium, so that an annelid's youngest segment is just in front of the growth zone while the peristomium is the oldest. This pattern is called teloblastic growth. Some groups of annelids, including all leeches, have fixed maximum numbers of segments, while others add segments throughout their lives. The phylum's name is derived from the Latin word annelus, meaning "little ring".
Body wall, chaetae and parapodia Annelids' cuticles are made of collagen fibers, usually in layers that spiral in alternating directions so that the fibers cross each other. These are secreted by the one-cell deep epidermis (outermost skin layer). A few marine annelids that live in tubes lack cuticles, but their tubes have a similar structure, and mucus-secreting glands in the epidermis protect their skins. Under the epidermis is the dermis, which is made of connective tissue, in other words a combination of cells and non-cellular materials such as collagen. Below this are two layers of muscles, which develop from the lining of the coelom (body cavity): circular muscles make a segment longer and slimmer when they contract, while under them are longitudinal muscles, usually four distinct strips, whose contractions make the segment shorter and fatter. But several families have lost the circular muscles, and it has been suggested that the lack of circular muscles is a plesiomorphic character in Annelida. Some annelids also have oblique internal muscles that connect the underside of the body to each side. The setae ("hairs") of annelids project out from the epidermis to provide traction and other capabilities. The simplest are unjointed and form paired bundles near the top and bottom of each side of each segment. The parapodia ("limbs") of annelids that have them often bear more complex chetae at their tips – for example jointed, comb-like or hooked. Chetae are made of moderately flexible β-chitin and are formed by follicles, each of which has a chetoblast ("hair-forming") cell at the bottom and muscles that can extend or retract the cheta. The chetoblasts produce chetae by forming microvilli, fine hair-like extensions that increase the area available for secreting the cheta. When the cheta is complete, the microvilli withdraw into the chetoblast, leaving parallel tunnels that run almost the full length of the cheta. Hence annelids' chetae are structurally different from the setae ("bristles") of arthropods, which are made of the more rigid α-chitin, have a single internal cavity, and are mounted on flexible joints in shallow pits in the cuticle. Nearly all polychaetes have parapodia that function as limbs, while other major annelid groups lack them. Parapodia are unjointed paired extensions of the body wall, and their muscles are derived from the circular muscles of the body. They are often supported internally by one or more large, thick chetae. The parapodia of burrowing and tube-dwelling polychaetes are often just ridges whose tips bear hooked chetae. In active crawlers and swimmers the parapodia are often divided into large upper and lower paddles on a very short trunk, and the paddles are generally fringed with chetae and sometimes with cirri (fused bundles of cilia) and gills. Nervous system and senses The brain generally forms a ring round the pharynx (throat), consisting of a pair of ganglia (local control centers) above and in front of the pharynx, linked by nerve cords either side of the pharynx to another pair of ganglia just below and behind it. The brains of polychaetes are generally in the prostomium, while those of clitellates are in the peristomium or sometimes the first segment behind the prostomium. In some very mobile and active polychaetes the brain is enlarged and more complex, with visible hindbrain, midbrain and forebrain sections. 
The rest of the central nervous system, the ventral nerve cord, is generally "ladder-like", consisting of a pair of nerve cords that run through the bottom part of the body and have in each segment paired ganglia linked by a transverse connection. From each segmental ganglion a branching system of local nerves runs into the body wall and then encircles the body. However, in most polychaetes the two main nerve cords are fused, and in the tube-dwelling genus Owenia the single nerve chord has no ganglia and is located in the epidermis. As in arthropods, each muscle fiber (cell) is controlled by more than one neuron, and the speed and power of the fiber's contractions depends on the combined effects of all its neurons. Vertebrates have a different system, in which one neuron controls a group of muscle fibers. Most annelids' longitudinal nerve trunks include giant axons (the output signal lines of nerve cells). Their large diameter decreases their resistance, which allows them to transmit signals exceptionally fast. This enables these worms to withdraw rapidly from danger by shortening their bodies. Experiments have shown that cutting the giant axons prevents this escape response but does not affect normal movement. The sensors are primarily single cells that detect light, chemicals, pressure waves and contact, and are present on the head, appendages (if any) and other parts of the body. Nuchal ("on the neck") organs are paired, ciliated structures found only in polychaetes, and are thought to be chemosensors. Some polychaetes also have various combinations of ocelli ("little eyes") that detect the direction from which light is coming and camera eyes or compound eyes that can probably form images. The compound eyes probably evolved independently of arthropods' eyes. Some tube-worms use ocelli widely spread over their bodies to detect the shadows of fish, so that they can quickly withdraw into their tubes. Some burrowing and tube-dwelling polychaetes have statocysts (tilt and balance sensors) that indicate which way is down. A few polychaete genera have on the undersides of their heads palps that are used both in feeding and as "feelers", and some of these also have antennae that are structurally similar but probably are used mainly as "feelers". Coelom, locomotion and circulatory system Most annelids have a pair of coelomata (body cavities) in each segment, separated from other segments by septa and from each other by vertical mesenteries. Each septum forms a sandwich with connective tissue in the middle and mesothelium (membrane that serves as a lining) from the preceding and following segments on either side. Each mesentery is similar except that the mesothelium is the lining of each of the pair of coelomata, and the blood vessels and, in polychaetes, the main nerve cords are embedded in it. The mesothelium is made of modified epitheliomuscular cells; in other words, their bodies form part of the epithelium but their bases extend to form muscle fibers in the body wall. The mesothelium may also form radial and circular muscles on the septa, and circular muscles around the blood vessels and gut. Parts of the mesothelium, especially on the outside of the gut, may also form chloragogen cells that perform similar functions to the livers of vertebrates: producing and storing glycogen and fat; producing the oxygen-carrier hemoglobin; breaking down proteins; and turning nitrogenous waste products into ammonia and urea to be excreted. 
Many annelids move by peristalsis (waves of contraction and expansion that sweep along the body), or flex the body while using parapodia to crawl or swim. In these animals the septa enable the circular and longitudinal muscles to change the shape of individual segments, by making each segment a separate fluid-filled "balloon". However, the septa are often incomplete in annelids that are semi-sessile or that do not move by peristalsis or by movements of parapodia – for example some move by whipping movements of the body, some small marine species move by means of cilia (fine hair-like structures) and some burrowers turn their pharynges (throats) inside out to penetrate the sea-floor and drag themselves into it. The fluid in the coelomata contains coelomocyte cells that defend the animals against parasites and infections. In some species coelomocytes may also contain a respiratory pigment – red hemoglobin in some species, green chlorocruorin in others (dissolved in the plasma) – and provide oxygen transport within their segments. Respiratory pigment is also dissolved in the blood plasma. Species with well-developed septa generally also have blood vessels running all along their bodies above and below the gut, the upper one carrying blood forwards while the lower one carries it backwards. Networks of capillaries in the body wall and around the gut transfer blood between the main blood vessels and to parts of the segment that need oxygen and nutrients. Both of the major vessels, especially the upper one, can pump blood by contracting. In some annelids the forward end of the upper blood vessel is enlarged with muscles to form a heart, while in the forward ends of many earthworms some of the vessels that connect the upper and lower main vessels function as hearts. Species with poorly developed or no septa generally have no blood vessels and rely on the circulation within the coelom for delivering nutrients and oxygen. However, leeches and their closest relatives have a body structure that is very uniform within the group but significantly different from that of other annelids, including other members of the Clitellata. In leeches there are no septa, the connective tissue layer of the body wall is so thick that it occupies much of the body, and the two coelomata are widely separated and run the length of the body. They function as the main blood vessels, although they are side-by-side rather than upper and lower. However, they are lined with mesothelium, like the coelomata of other annelids and unlike their blood vessels. Leeches generally use suckers at their front and rear ends to move like inchworms. The anus is on the upper surface of the pygidium. Respiration In some annelids, including earthworms, all respiration is via the skin. However, many polychaetes and some clitellates (the group to which earthworms belong) have gills associated with most segments, often as extensions of the parapodia in polychaetes. The gills of tube-dwellers and burrowers usually cluster around whichever end has the stronger water flow. Feeding and excretion Feeding structures in the mouth region vary widely, and have little correlation with the animals' diets. Many polychaetes have a muscular pharynx that can be everted (turned inside out to extend it). In these animals the foremost few segments often lack septa so that, when the muscles in these segments contract, the sharp increase in fluid pressure from all these segments everts the pharynx very quickly. 
Two families, the Eunicidae and Phyllodocidae, have evolved jaws, which can be used for seizing prey, biting off pieces of vegetation, or grasping dead and decaying matter. On the other hand, some predatory polychaetes have neither jaws nor eversible pharynges. Selective deposit feeders generally live in tubes on the sea-floor and use palps to find food particles in the sediment and then wipe them into their mouths. Filter feeders use "crowns" of palps covered in cilia that wash food particles towards their mouths. Non-selective deposit feeders ingest soil or marine sediments via mouths that are generally unspecialized. Some clitellates have sticky pads in the roofs of their mouths, and some of these can evert the pads to capture prey. Leeches often have an eversible proboscis, or a muscular pharynx with two or three teeth. The gut is generally an almost straight tube supported by the mesenteries (vertical partitions within segments), and ends with the anus on the underside of the pygidium. However, in members of the tube-dwelling family Siboglinidae the gut is blocked by a swollen lining that houses symbiotic bacteria, which can make up 15% of the worms' total weight. The bacteria convert inorganic matter – such as hydrogen sulfide and carbon dioxide from hydrothermal vents, or methane from seeps – to organic matter that feeds themselves and their hosts, while the worms extend their palps into the gas flows to absorb the gases needed by the bacteria. Annelids with blood vessels use metanephridia to remove soluble waste products, while those without use protonephridia. Both of these systems use a two-stage filtration process, in which fluid and waste products are first extracted and these are filtered again to re-absorb any re-usable materials while dumping toxic and spent materials as urine. The difference is that protonephridia combine both filtration stages in the same organ, while metanephridia perform only the second filtration and rely on other mechanisms for the first – in annelids special filter cells in the walls of the blood vessels let fluids and other small molecules pass into the coelomic fluid, where it circulates to the metanephridia. In annelids the points at which fluid enters the protonephridia or metanephridia are on the forward side of a septum while the second-stage filter and the nephridiopore (exit opening in the body wall) are in the following segment. As a result, the hindmost segment (before the growth zone and pygidium) has no structure that extracts its wastes, as there is no following segment to filter and discharge them, while the first segment contains an extraction structure that passes wastes to the second, but does not contain the structures that re-filter and discharge urine. Reproduction and life cycle Asexual reproduction Polychaetes can reproduce asexually, by dividing into two or more pieces or by budding off a new individual while the parent remains a complete organism. Some oligochaetes, such as Aulophorus furcatus, seem to reproduce entirely asexually, while others reproduce asexually in summer and sexually in autumn. Asexual reproduction in oligochaetes is always by dividing into two or more pieces, rather than by budding. However, leeches have never been seen reproducing asexually. Most polychaetes and oligochaetes also use similar mechanisms to regenerate after suffering damage. Two polychaete genera, Chaetopterus and Dodecaceria, can regenerate from a single segment, and others can regenerate even if their heads are removed. 
Annelids are the most complex animals that can regenerate after such severe damage. On the other hand, leeches cannot regenerate. Sexual reproduction It is thought that annelids were originally animals with two separate sexes, which released ova and sperm into the water via their nephridia. The fertilized eggs develop into trochophore larvae, which live as plankton. Later they sink to the sea-floor and metamorphose into miniature adults: the part of the trochophore between the apical tuft and the prototroch becomes the prostomium (head); a small area round the trochophore's anus becomes the pygidium (tail-piece); a narrow band immediately in front of that becomes the growth zone that produces new segments; and the rest of the trochophore becomes the peristomium (the segment that contains the mouth). However, the lifecycles of most living polychaetes, which are almost all marine animals, are unknown, and only about 25% of the 300+ species whose lifecycles are known follow this pattern. About 14% use a similar external fertilization but produce yolk-rich eggs, which reduce the time the larva needs to spend among the plankton, or eggs from which miniature adults emerge rather than larvae. The rest care for the fertilized eggs until they hatch – some by producing jelly-covered masses of eggs which they tend, some by attaching the eggs to their bodies and a few species by keeping the eggs within their bodies until they hatch. These species use a variety of methods for sperm transfer; for example, in some the females collect sperm released into the water, while in others the males have a penis that inject sperm into the female. There is no guarantee that this is a representative sample of polychaetes' reproductive patterns, and it simply reflects scientists' current knowledge. Some polychaetes breed only once in their lives, while others breed almost continuously or through several breeding seasons. While most polychaetes remain of one sex all their lives, a significant percentage of species are full hermaphrodites or change sex during their lives. Most polychaetes whose reproduction has been studied lack permanent gonads, and it is uncertain how they produce ova and sperm. In a few species the rear of the body splits off and becomes a separate individual that lives just long enough to swim to a suitable environment, usually near the surface, and spawn. Most mature clitellates (the group that includes earthworms and leeches) are full hermaphrodites, although in a few leech species younger adults function as males and become female at maturity. All have well-developed gonads, and all copulate. Earthworms store their partners' sperm in spermathecae ("sperm stores") and then the clitellum produces a cocoon that collects ova from the ovaries and then sperm from the spermathecae. Fertilization and development of earthworm eggs takes place in the cocoon. Leeches' eggs are fertilized in the ovaries, and then transferred to the cocoon. In all clitellates the cocoon also either produces yolk when the eggs are fertilized or nutrients while they are developing. All clitellates hatch as miniature adults rather than larvae. Ecological significance Charles Darwin's book The Formation of Vegetable Mould Through the Action of Worms (1881) presented the first scientific analysis of earthworms' contributions to soil fertility. Some burrow while others live entirely on the surface, generally in moist leaf litter. 
The burrowers loosen the soil so that oxygen and water can penetrate it, and both surface and burrowing worms help to produce soil by mixing organic and mineral matter, by accelerating the decomposition of organic matter and thus making it more quickly available to other organisms, and by concentrating minerals and converting them to forms that plants can use more easily. Earthworms are also important prey for birds ranging in size from robins to storks, and for mammals ranging from shrews to badgers, and in some cases conserving earthworms may be essential for conserving endangered birds. Terrestrial annelids can be invasive in some situations. In the glaciated areas of North America, for example, almost all native earthworms are thought to have been killed by the glaciers and the worms currently found in those areas are all introduced from other areas, primarily from Europe, and, more recently, from Asia. Northern hardwood forests are especially negatively impacted by invasive worms through the loss of leaf duff, soil fertility, changes in soil chemistry and the loss of ecological diversity. Especially of concern is Amynthas agrestis and at least one state (Wisconsin) has listed it as a prohibited species. Earthworms migrate only a limited distance annually on their own, and the spread of invasive worms is increased rapidly by anglers and from worms or their cocoons in the dirt on vehicle tires or footwear. Marine annelids may account for over one-third of bottom-dwelling animal species around coral reefs and in tidal zones. Burrowing species increase the penetration of water and oxygen into the sea-floor sediment, which encourages the growth of populations of aerobic bacteria and small animals alongside their burrows. Although blood-sucking leeches do little direct harm to their victims, some transmit flagellates that can be very dangerous to their hosts. Some small tube-dwelling oligochaetes transmit myxosporean parasites that cause whirling disease in fish. Interaction with humans Earthworms make a significant contribution to soil fertility. The rear end of the Palolo worm, a marine polychaete that tunnels through coral, detaches in order to spawn at the surface, and the people of Samoa regard these spawning modules as a delicacy. Anglers sometimes find that worms are more effective bait than artificial flies, and worms can be kept for several days in a tin lined with damp moss. Ragworms are commercially important as bait and as food sources for aquaculture, and there have been proposals to farm them in order to reduce over-fishing of their natural populations. Some marine polychaetes' predation on molluscs causes serious losses to fishery and aquaculture operations. Scientists study aquatic annelids to monitor the oxygen content, salinity and pollution levels in fresh and marine water. Accounts of the use of leeches for the medically dubious practice of blood-letting have come from China around 30 AD, India around 200 AD, ancient Rome around 50 AD and later throughout Europe. In the 19th century medical demand for leeches was so high that some areas' stocks were exhausted and other regions imposed restrictions or bans on exports, and Hirudo medicinalis is treated as an endangered species by both IUCN and CITES. More recently leeches have been used to assist in microsurgery, and their saliva has provided anti-inflammatory compounds and several important anticoagulants, one of which also prevents tumors from spreading. 
Ragworms' jaws are strong but much lighter than the hard parts of many other organisms, which are biomineralized with calcium salts. These advantages have attracted the attention of engineers. Investigations showed that ragworm jaws are made of unusual proteins that bind strongly to zinc. Evolutionary history Fossil record Since annelids are soft-bodied, their fossils are rare. Polychaetes' fossil record consists mainly of the jaws that some species had and the mineralized tubes that some secreted. Some Ediacaran fossils such as Dickinsonia in some ways resemble polychaetes, but the similarities are too vague for these fossils to be classified with confidence. The small shelly fossil Cloudina, from , has been classified by some authors as an annelid, but by others as a cnidarian (i.e. in the phylum to which jellyfish and sea anemones belong). Until 2008 the earliest fossils widely accepted as annelids were the polychaetes Canadia and Burgessochaeta, both from Canada's Burgess Shale, formed about in the Middle Cambrian. Myoscolex, found in Australia and a little older than the Burgess Shale, was possibly an annelid. However, it lacks some typical annelid features and has features which are not usually found in annelids and some of which are associated with other phyla. Then Simon Conway Morris and John Peel reported Phragmochaeta from Sirius Passet, about , and concluded that it was the oldest annelid known to date. There has been vigorous debate about whether the Burgess Shale fossil Wiwaxia was a mollusc or an annelid. Polychaetes diversified in the early Ordovician, about . It is not until the early Ordovician that the first annelid jaws are found; thus the crown group cannot have appeared before this date and probably appeared somewhat later. By the end of the Carboniferous, about , fossils of most of the modern mobile polychaete groups had appeared. Many fossil tubes look like those made by modern sessile polychaetes, but the first tubes clearly produced by polychaetes date from the Jurassic, less than . In 2012 Kootenayscolex, a 508-million-year-old annelid found near the Burgess Shale beds in British Columbia, was described; it changed the hypotheses about how the annelid head developed. It appears to have bristles on its head segment akin to those along its body, as if the head simply developed as a specialized version of a previously generic segment. The earliest good evidence for oligochaetes occurs in the Tertiary period, which began , and it has been suggested that these animals evolved around the same time as flowering plants in the early Cretaceous, from . A trace fossil consisting of a convoluted burrow partly filled with small fecal pellets may be evidence that earthworms were present in the early Triassic period from . Body fossils going back to the mid Ordovician, from , have been tentatively classified as oligochaetes, but these identifications are uncertain and some have been disputed. Internal relationships Traditionally the annelids have been divided into two major groups, the polychaetes and clitellates. In turn the clitellates were divided into oligochaetes, which include earthworms, and hirudinomorphs, whose best-known members are leeches. For many years there was no clear arrangement of the approximately 80 polychaete families into higher-level groups. 
In 1997 Greg Rouse and Kristian Fauchald attempted a "first heuristic step in terms of bringing polychaete systematics to an acceptable level of rigour", based on anatomical structures, and divided polychaetes into: Scolecida, fewer than 1,000 burrowing species that look rather like earthworms. Palpata, the great majority of polychaetes, divided into: Canalipalpata, which are distinguished by having long grooved palps that they use for feeding, and most of which live in tubes. Aciculata, the most active polychaetes, which have parapodia reinforced by internal spines (aciculae). Also in 1997 Damhnait McHugh, using molecular phylogenetics to compare similarities and differences in one gene, presented a very different view, in which: the clitellates were an offshoot of one branch of the polychaete family tree; the pogonophorans and echiurans, which for a few decades had been regarded as separate phyla, were placed on other branches of the polychaete tree. Subsequent molecular phylogenetics analyses on a similar scale presented similar conclusions. In 2007 Torsten Struck and colleagues compared three genes in 81 taxa, of which nine were outgroups, in other words not considered closely related to annelids but included to give an indication of where the organisms under study are placed on the larger tree of life. For a cross-check the study used an analysis of 11 genes (including the original 3) in ten taxa. This analysis agreed that clitellates, pogonophorans and echiurans were on various branches of the polychaete family tree. It also concluded that the classification of polychaetes into Scolecida, Canalipalpata and Aciculata was useless, as the members of these alleged groups were scattered all over the family tree derived from comparing the 81 taxa. It also placed sipunculans, generally regarded at the time as a separate phylum, on another branch of the polychaete tree, and concluded that leeches were a sub-group of oligochaetes rather than their sister-group among the clitellates. Rouse accepted the analyses based on molecular phylogenetics, and their main conclusions are now the scientific consensus, although the details of the annelid family tree remain uncertain. In addition to re-writing the classification of annelids and three previously independent phyla, the molecular phylogenetics analyses undermine the emphasis that decades of previous writings placed on the importance of segmentation in the classification of invertebrates. Polychaetes, which these analyses found to be the parent group, have completely segmented bodies, while their echiuran and sipunculan offshoots are not segmented and pogonophorans are segmented only in the rear parts of their bodies. It now seems that segmentation can appear and disappear much more easily in the course of evolution than was previously thought. The 2007 study also noted that the ladder-like nervous system, which is associated with segmentation, is less universal than previously thought in both annelids and arthropods. The updated phylogenetic tree of the annelid phylum comprises a grade of basal polychaete groups: Palaeoannelida, Chaetopteriformia and the Amphinomida/Sipuncula/Lobatocerebrum clade. This grade is followed by Pleistoannelida, the clade containing nearly all of annelid diversity, divided into two highly diverse groups: Sedentaria and Errantia. Sedentaria contains the clitellates, pogonophorans, echiurans and some archiannelids, as well as several polychaete groups. 
Errantia contains the eunicid and phyllodocid polychaetes, and several archiannelids. Some small groups, such as the Myzostomida, are more difficult to place because of their long branches, but belong to one of these two large groups. External relationships Annelids are members of the protostomes, one of the two major superphyla of bilaterian animals – the other is the deuterostomes, which includes vertebrates. Within the protostomes, annelids used to be grouped with arthropods under the super-group Articulata ("jointed animals"), as segmentation is obvious in most members of both phyla. However, the genes that drive segmentation in arthropods do not appear to do the same in annelids. Arthropods and annelids both have close relatives that are unsegmented. It is at least as easy to assume that they evolved segmented bodies independently as it is to assume that the ancestral protostome or bilaterian was segmented and that segmentation disappeared in many descendant phyla. The current view is that annelids are grouped with molluscs, brachiopods and several other phyla that have lophophores (fan-like feeding structures) and/or trochophore larvae as members of Lophotrochozoa. Meanwhile, arthropods are now regarded as members of the Ecdysozoa ("animals that molt"), along with some phyla that are unsegmented. The "Lophotrochozoa" hypothesis is also supported by the fact that many phyla within this group, including annelids, molluscs, nemerteans and flatworms, follow a similar pattern in the fertilized egg's development. When their cells divide after the 4-cell stage, descendants of these four cells form a spiral pattern. In these phyla the "fates" of the embryo's cells, in other words the roles their descendants will play in the adult animal, are the same and can be predicted from a very early stage. Hence this development pattern is often described as "spiral determinate cleavage". Fossil discoveries have led to the hypothesis that Annelida and the lophophorates are more closely related to each other than to any other phyla. A phylogenetic analysis based on the body plans of lophotrochozoan fossils recovered the lophophorates as the sister group of annelids. Both groups share the presence of chaetae secreted by microvilli; paired, metameric coelomic compartments; and a similar metanephridial structure.
Biology and health sciences
Lophotrochozoa
null
21754264
https://en.wikipedia.org/wiki/Neonatal%20herpes
Neonatal herpes
Neonatal herpes simplex, or simply neonatal herpes, is a herpes infection in a newborn baby, caused by the herpes simplex virus (HSV). It occurs mostly as a result of vertical transmission of the HSV from an affected mother to her baby. Types include skin, eye, and mouth herpes (SEM), disseminated herpes (DIS), and central nervous system herpes (CNS). Depending on the type, symptoms vary from a fever to small blisters, irritability, low body temperature, lethargy, breathing difficulty, and a large abdomen due to ascites or large liver. There may be red streaming eyes or no symptoms. The cause is HSV 1 and 2. It can infect the unborn baby, but more often passes to the baby during childbirth. Onset is typically in the first six weeks after birth. The baby is at greater risk of being affected if the mother contracts HSV in later pregnancy. In such scenarios a prolonged rupture of membranes or childbirth trauma may increase the risk further. Globally, it is estimated to affect one in 10,000 births. Around 1 in every 3,500 babies in the United States contract the infection. Signs and symptoms Neonatal herpes manifests itself in three forms: skin, eye, and mouth herpes (SEM, sometimes referred to as "localized"); disseminated herpes (DIS); and central nervous system herpes (CNS). SEM herpes is characterized by external lesions but no internal organ involvement. Lesions are likely to appear on trauma sites such as the attachment site of fetal scalp electrodes, forceps, or vacuum extractors that are used during delivery; in the margin of the eyes; in the nasopharynx; and in areas associated with trauma or surgery (including circumcision). DIS herpes affects internal organs, particularly the liver. CNS herpes is an infection of the nervous system and the brain that can lead to encephalitis. Infants with CNS herpes present with seizures, tremors, lethargy, and irritability. They feed poorly, have unstable temperatures, and their fontanelle (soft spot of the skull) may bulge. CNS herpes is associated with higher morbidity, while DIS herpes has a higher mortality rate. These categories are not mutually exclusive and there is often overlap of two or more types. SEM herpes has the best prognosis of the three, however if left untreated it may progress to disseminated or CNS herpes with attendant increases in mortality and morbidity. Death from neonatal HSV disease in the U.S. is currently decreasing; the current death rate is about 25%, down from as high as 85% in untreated cases just a few decades ago. Other complications from neonatal herpes include prematurity, with approximately 50% of cases having a gestation of 38 weeks or less, and concurrent sepsis in approximately one-quarter of cases that further clouds speedy diagnosis. Cause The cause is HSV 1 and 2. It can infect the unborn baby, but more often passes to the baby during childbirth. Onset is typically in the first six weeks after birth. The baby is at greater risk of being affected if the mother contracts HSV in later pregnancy. In such scenarios a prolonged rupture of membranes may increase the risk further. Sites of injury such as forceps or scalp electrodes may provide a portal of entry for HSV. Risk factors Maternal risk factors for neonatal HSV-1 include: White non-Hispanic race, young maternal age (<25), primary infection in third trimester, first pregnancy, HSV (1&2) seronegativity, a discordant partner, gestation <38 weeks, and receptive oral sex in the third trimester. 
Neonatal HSV-2 maternal risk factors: Black race, young maternal age (<21), a discordant partner, primary or non-primary first episode infection in the third trimester, four or more lifetime sexual partners, lower level of education, history of previous STD, history of pregnancy wastage, first viable pregnancy, and gestation <38 weeks. Transmission The majority of cases (85%) occur during birth when the baby comes in contact with infected genital secretions in the birth canal, most commonly with mothers who have newly been exposed to the virus (mothers who had the virus before pregnancy have a lower risk of transmission). An estimated 5% are infected in utero, and approximately 10% of cases are acquired postnatally. Detection and prevention are difficult because transmission is asymptomatic in 60–98% of cases. Post-natal transmission can occur from a source other than the mother, such as an Orthodox Jewish mohel with herpetic gingivostomatitis who performs oral suction on a circumcision wound without using a prophylactic barrier to prevent contact between the baby's penis and the mohel's mouth. Diagnosis Diagnosis is by blood tests and culture. Swabs are generally taken from the mouth, nose, throat, eyes, and anus for HSV culture and PCR. Fluid from any blisters can be swabbed too. Liver enzymes may be the first sign to be noted when suspecting neonatal HSV. Other tests include a lumbar puncture and medical imaging of the brain (MRI, CT scan, or ultrasound). An assessment of the eyes may reveal eye disease. Differential diagnosis Other skin conditions that may appear similar include erythema toxicum neonatorum, transient neonatal pustular melanosis, infantile acne, miliaria, infantile acropustulosis, and sucking blisters. CNS disease may appear like bacterial or other viral meningitides. Conjunctivitis due to bacterial infection or other viruses can look like neonatal herpes eye disease. Bacterial sepsis, viral hepatitis, and other infections including cytomegalovirus, toxoplasmosis, syphilis, and rubella may mimic the disseminated type. Treatment Reductions in morbidity and mortality are due to the use of antiviral treatments such as vidarabine and acyclovir. However, morbidity and mortality still remain high due to diagnosis of DIS and CNS herpes coming too late for effective antiviral administration; early diagnosis is difficult in the 20–40% of infected neonates that have no visible lesions. A recent large-scale retrospective study found disseminated NHSV patients least likely to get timely treatment, contributing to the high morbidity/mortality in that group. Harrison's Principles of Internal Medicine recommends that pregnant women with active genital herpes lesions at the time of labor be delivered by caesarean section. Women whose herpes is not active can be managed with acyclovir. The current practice is to deliver women with primary or first episode non-primary infection via caesarean section, and those with recurrent infection vaginally (even in the presence of lesions) because of the low risk (1–3%) of vertical transmission associated with recurrent herpes. Epidemiology Neonatal HSV rates in the U.S. are estimated to be between 1 in 3,000 and 1 in 20,000 live births. Approximately 22% of pregnant women in the U.S. have had previous exposure to HSV-2, and an additional 2% acquire the virus during pregnancy, mirroring the HSV-2 infection rate in the general population. 
The risk of transmission to the newborn is 30–57% in cases where the mother acquired a primary infection in the third trimester of pregnancy. Transmission by a mother with existing antibodies for both HSV-1 and HSV-2 has a much lower (1–3%) rate. This is due in part to the transfer of a significant titer of protective maternal antibodies to the fetus from about the seventh month of pregnancy. However, shedding of HSV-1 from both primary genital infection and reactivations is associated with higher transmission from mother to infant. HSV-1 neonatal herpes is extremely rare in developing countries because development of HSV-1 specific antibodies usually occurs in childhood or adolescence, precluding a later genital HSV-1 infection. HSV-2 infections are much more common in these countries. In industrialized nations, the adolescent HSV-1 seroprevalence has been dropping steadily for the last 5 decades. The resulting increase in the number of young women becoming sexually active while HSV-1 seronegative has contributed to increased HSV-1 genital herpes rates, and as a result, increased HSV-1 neonatal herpes in developed nations. A study in the United States from 2003 to 2014 using large administrative databases showed increasing trends in incidence of neonatal HSV from 7.9 to 10 cases per 100,000 live births and mortality of 6.5%. Babies of decreased gestational age and those of African American race had higher incidences of neonatal HSV. A three-year study in Canada (2000–2003) showed similar results, with a neonatal HSV incidence of 5.9 per 100,000 live births and a case fatality rate of 15.5%. HSV-1 was the cause of 62.5% of cases of neonatal herpes of known type, and 98.3% of transmission was asymptomatic. Asymptomatic genital HSV-1 has been shown to be more infectious to the neonate, and is more likely to produce neonatal herpes than HSV-2. However, with prompt application of antiviral therapy, the prognosis of neonatal HSV-1 infection is better than that for HSV-2.
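The incidence figures in this article are quoted in two forms, "1 in N live births" and "cases per 100,000 live births". The minimal Python sketch below shows how to convert between the two, using the figures quoted above purely as illustrative inputs; the helper names are hypothetical and do not come from any cited study.

```python
# Conversion between "1 in N live births" and "cases per 100,000 live births".
# The figures used below are the ones quoted in this section; the helper
# names are illustrative only.

def one_in_n_to_per_100k(n: float) -> float:
    """Convert an incidence of '1 in n live births' to cases per 100,000."""
    return 100_000 / n

def per_100k_to_one_in_n(rate: float) -> float:
    """Convert cases per 100,000 live births back to '1 in n' form."""
    return 100_000 / rate

if __name__ == "__main__":
    # U.S. range quoted above: between 1 in 3,000 and 1 in 20,000 live births,
    # i.e. roughly 5 to 33 cases per 100,000.
    low, high = one_in_n_to_per_100k(20_000), one_in_n_to_per_100k(3_000)
    print(f"1 in 20,000 to 1 in 3,000 births = {low:.1f} to {high:.1f} per 100,000")
    # The 2003-2014 U.S. study quoted 7.9-10 cases per 100,000 live births,
    # which corresponds to roughly 1 in 12,700 to 1 in 10,000 births.
    print(f"10 per 100,000 = 1 in {per_100k_to_one_in_n(10):,.0f} births")
```

On these numbers the administrative-database estimate (7.9–10 per 100,000) sits within the wider 1-in-3,000 to 1-in-20,000 range quoted earlier in the article.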
Biology and health sciences
Viral diseases
Health
21754355
https://en.wikipedia.org/wiki/Herpes%20meningitis
Herpes meningitis
Herpes meningitis is inflammation of the meninges, the protective tissues surrounding the spinal cord and brain, due to infection from viruses of the Herpesviridae family – the most common amongst adults is HSV-2. Symptoms, which include severe headache, nausea, vomiting, neck stiffness, and photophobia, are typically self-limiting over 2 weeks. Herpes meningitis can cause Mollaret's meningitis, a form of recurrent meningitis. Lumbar puncture with cerebrospinal fluid results demonstrating an aseptic meningitis pattern is necessary for diagnosis, and polymerase chain reaction is used to detect viral presence. Although symptoms are self-limiting, treatment with antiviral medication may be recommended to prevent progression to herpes meningoencephalitis. Epidemiology Aseptic meningitis, meningitis caused by pathogens other than bacteria, is the most common form of meningitis, with an estimated 70 cases per 100,000 patients less than 1 year old, 5.2 cases per 100,000 patients 1 to 14 years of age, and 7.6 cases per 100,000 adults. Among the most common causes of meningitis, 8.3% of cases are due to herpes simplex virus. HSV-2 specifically is the most common cause of meningitis in adults. Herpesviral meningitis primarily affects people aged 35–40, the elderly, and women. Between 20% and 50% of cases have clinical recurrences. Clinical presentation Common symptoms include nausea, vomiting, neck stiffness, photophobia, and severe frontal headaches. Patients with meningitis secondary to the HSV-2 virus may also present with genital lesions, although most cases of HSV-2 meningitis occur without symptoms of genital herpes. Around one fifth of people infected with HSV-2 have symptoms of meningitis with their initial infection, more commonly men than women. Mollaret's Meningitis HSV-2 is the most common cause of Mollaret's meningitis, a type of recurrent viral meningitis. This condition was first described in 1944 by French neurologist Pierre Mollaret. Recurrences usually last a few days or a few weeks, and resolve without treatment. They may recur weekly or monthly for approximately 5 years following primary infection. Diagnosis Differential diagnoses are broad, including other causes of meningitis (bacterial, fungal, drug-induced), systemic infection, vasculitis, auto-immune disease, and cancer. As such, patient presentation of fever, headache, stiff neck, and altered mental status is not sufficient for diagnosis, and lumbar puncture must be performed to properly diagnose meningitis. Cerebrospinal fluid findings in herpes meningitis show lymphocytic pleocytosis, normal glucose, and normal-to-elevated protein. DNA analysis techniques such as polymerase chain reaction are the gold standard for detection of herpes virus in patient CSF due to their high specificity, and have been able to detect the HSV-2 virus in patients presenting without genital lesions as well as those experiencing recurrent meningitis. Treatment Although guidelines strongly recommend acyclovir for treatment of herpes encephalitis, there are currently no such guidelines for managing herpes meningitis. Herpes meningitis is typically self-limiting over 2 weeks without treatment. However, empirical use of antiviral medications such as acyclovir is considered in cases of suspected HSV meningitis to prevent progression to the more rapid and fatal HSV meningoencephalitis. HSV-2 is the most common herpes virus that causes meningitis. This virus is transmitted via sexual contact, and there are currently no vaccinations or cures for the disease. 
At present, no specific programs have been developed to prevent the spread of HSV-2, and prevention of disease relies primarily on behavioral modification, such as condom use, or on application of antiviral medications once infection has occurred.
Biology and health sciences
Viral diseases
Health
21754358
https://en.wikipedia.org/wiki/Herpes%20simplex%20encephalitis
Herpes simplex encephalitis
Herpes simplex encephalitis (HSE), or simply herpes encephalitis, is encephalitis due to herpes simplex virus. It is estimated to affect at least 1 in 500,000 individuals per year, and some studies suggest an incidence rate of 5.9 cases per 100,000 live births. About 90% of cases of herpes encephalitis are caused by herpes simplex virus-1 (HSV-1), the same virus that causes cold sores. According to a 2006 estimate, 57% of American adults were infected with HSV-1, which is spread through droplets, casual contact and sometimes sexual contact, though most infected people never have cold sores. The rest of cases are due to HSV-2, which is typically spread through sexual contact and is the cause of genital herpes. Two-thirds of HSE cases occur in individuals already seropositive for HSV-1, few of whom (only 10%) have history of recurrent orofacial herpes, while about one third of cases results from an initial infection by HSV-1, predominantly occurring in individuals under the age of 18. Approximately half of individuals who develop HSE are over 50 years of age. The most common cause for encephalitis in children and adults is HSV-1. However, encephalitis found in newborns and immunocompromised individuals is mainly caused by HSV-2. Signs and symptoms Most individuals with HSE show a decrease in their level of consciousness and an altered mental state presenting as confusion, and changes in personality. Increased numbers of white blood cells can be found in patient's cerebrospinal fluid, without the presence of pathogenic bacteria and fungi. Patients typically have a fever and may have seizures. The electrical activity of the brain changes as the disease progresses, first showing abnormalities in one temporal lobe of the brain, which spread to the other temporal lobe 7–10 days later. Imaging by CT or MRI shows characteristic changes in the temporal lobes (see Figure). After the first symptoms appear, patients might lose their sense of smell. This can also be accompanied by the inability to read, write, or speak coherently, and understand verbal speech. Definite diagnosis requires testing of the cerebrospinal fluid (CSF) by a lumbar puncture (spinal tap) for presence of the virus. The testing takes several days to perform, and patients with suspected Herpes encephalitis should be treated with acyclovir immediately while waiting for test results. Atypical stroke-like presentation of HSV encephalitis has been described as well and the clinicians should be aware that HSV encephalitis can mimic a stroke. Associated conditions Herpesviral encephalitis can serve as a trigger of anti-NMDA receptor encephalitis. About 30% of HSE patients develop this secondary immunologic reaction, which is associated with impaired neurocognitive recovery. Epidemiology The annual incidence of herpesviral encephalitis is from 2 to 4 cases per 1 million population. Pathophysiology HSE is thought to be caused by the transmission of virus from a peripheral site on the face following HSV-1 reactivation, along a nerve axon, to the brain. The virus lies dormant in the ganglion of the trigeminal cranial nerve, but the reason for reactivation, and its pathway to gain access to the brain, remains unclear, though changes in the immune system caused by stress clearly play a role in animal models of the disease. The olfactory nerve may also be involved in HSE, which may explain its predilection for the temporal lobes of the brain, as the olfactory nerve sends branches there. 
In horses, a single-nucleotide polymorphism is sufficient to allow the virus to cause neurological disease, but no similar mechanism has been found in humans. Diagnosis A brain CT scan (with or without contrast) is completed prior to lumbar puncture to exclude significantly increased intracranial pressure (ICP), obstructive hydrocephalus, or mass effect. On brain MRI, increased T2 signal intensity in the frontotemporal region points to viral (HSV) encephalitis. Treatment Herpesviral encephalitis can be treated with high-dose intravenous acyclovir, which should be infused at 10 mg/kg (in adults) over 1 hour to avoid kidney failure. Without treatment, HSE results in rapid death in approximately 70% of cases; survivors suffer severe neurological damage. When treated, HSE is still fatal in one-third of cases, and causes serious long-term neurological damage in over half of survivors. Twenty percent of treated patients recover with minor damage. Only a small population of untreated survivors (2.5%) regain completely normal brain function. Many amnesic cases in the scientific literature have etiologies involving HSE. Earlier treatment (within 48 hours of symptom onset) improves the chances of a good recovery. Rarely, treated individuals can have relapse of infection weeks to months later. There is evidence that aberrant inflammation triggered by herpes simplex can result in granulomatous inflammation in the brain, which responds to steroids. While the herpes virus can be spread, encephalitis itself is not infectious. Other viruses can cause similar symptoms of encephalitis, though usually milder (Herpesvirus 6, varicella zoster virus, Epstein-Barr, cytomegalovirus, coxsackievirus, etc.).
Biology and health sciences
Viral diseases
Health
21754540
https://en.wikipedia.org/wiki/Cold%20sore
Cold sore
A cold sore is a type of herpes infection caused by the herpes simplex virus that affects primarily the lip. Symptoms typically include a burning pain followed by small blisters or sores. The first attack may also be accompanied by fever, sore throat, and enlarged lymph nodes. The rash usually heals within ten days, but the virus remains dormant in the trigeminal ganglion. The virus may periodically reactivate to create another outbreak of sores in the mouth or lip. The cause is usually herpes simplex virus type 1 (HSV-1) and occasionally herpes simplex virus type 2 (HSV-2). The infection is typically spread between people by direct non-sexual contact. Attacks can be triggered by sunlight, fever, psychological stress, or a menstrual period. Direct contact with the genitals can result in genital herpes. Diagnosis is usually based on symptoms but can be confirmed with specific testing. Prevention includes avoiding kissing or using the personal items of a person who is infected. A zinc oxide, anesthetic, or antiviral cream appears to decrease the duration of symptoms by a small amount. Antiviral medications may also decrease the frequency of outbreaks. About 2.5 per 1000 people are affected with outbreaks in any given year. After one episode about 33% of people develop subsequent episodes. Onset often occurs in those less than 20 years old and 80% develop antibodies for the virus by this age. In those with recurrent outbreaks, these typically happen less than three times a year. The frequency of outbreaks generally decreases over time. Terminology The term labia means "lip" in Latin. Herpes labialis does not refer to the labia of the vulva, though the origin of the word is the same. The colloquial terms for this condition ("cold sore" and "fever blister") come from the fact that herpes labialis is often triggered by fever, for example, as may occur during an upper respiratory tract infection (i.e. a cold). When the viral infection affects both face and mouth, the broader term orofacial herpes is sometimes used, whereas herpetic stomatitis describes infection of the mouth specifically; stomatitis is derived from the Greek word stoma, which means "mouth". Signs and symptoms Herpes infections usually show no symptoms; when symptoms do appear they typically resolve within two weeks. The main symptom of oral infection is inflammation of the mucosa of the cheek and gums—known as acute herpetic gingivostomatitis—which occurs within 5–10 days of infection. Other symptoms may also develop, including headache, nausea, dizziness and painful ulcers—sometimes confused with canker sores—fever, and sore throat. Primary HSV infection in adolescents frequently manifests as severe pharyngitis with lesions developing on the cheek and gums. Some individuals develop difficulty in swallowing (dysphagia) and swollen lymph nodes (lymphadenopathy). Primary HSV infections in adults often results in pharyngitis similar to that observed in glandular fever (infectious mononucleosis), but gingivostomatitis is less likely. Recurrent oral infection is more common with HSV-1 infections than with HSV-2. Symptoms typically progress in a series of eight stages: Latent (weeks to months incident-free): The remission period; After initial infection, the viruses move to sensory nerve ganglia (trigeminal ganglion), where they reside as lifelong, latent viruses. Asymptomatic shedding of contagious virus particles can occur during this stage. Prodromal (day 0–1): Symptoms often precede a recurrence. 
Symptoms typically begin with tingling (itching) and reddening of the skin around the infected site. This stage can last from a few days to a few hours preceding the physical manifestation of an infection and is the best time to start treatment. Inflammation (day 1): Virus begins reproducing and infecting cells at the end of the nerve. The healthy cells react to the invasion with swelling and redness displayed as symptoms of infection. Pre-sore (day 2–3): This stage is defined by the appearance of tiny, hard, inflamed papules and vesicles that may itch and are painfully sensitive to touch. In time, these fluid-filled blisters form a cluster on the lip (labial) tissue, the area between the lip and skin (vermilion border), and can occur on the nose, chin, and cheeks. Open lesion (day 4): This is the most painful and contagious of the stages. All the tiny vesicles break open and merge to create one big, open, weeping ulcer. Fluids are slowly discharged from blood vessels and inflamed tissue. This watery discharge is teeming with active viral particles and is highly contagious. Depending on the severity, one may develop a fever and swollen lymph glands under the jaw. Crusting (day 5–8): A honey/golden crust starts to form from the syrupy exudate. This yellowish or brown crust or scab is not made of active virus but from blood serum containing useful proteins such as immunoglobulins. This appears as the healing process begins. The sore is still painful at this stage, but, more painful, however, is the constant cracking of the scab as one moves or stretches their lips, as in smiling or eating. Virus-filled fluid will still ooze out of the sore through any cracks. Healing (day 9–14): New skin begins to form underneath the scab as the virus retreats into latency. A series of scabs will form over the sore (called Meier Complex), each one smaller than the last. During this phase irritation, itching, and some pain are common. Post-scab (12–14 days): A reddish area may linger at the site of viral infection as the destroyed cells are regenerated. Virus shedding can still occur during this stage. The recurrent infection is thus often called herpes simplex labialis. Rare reinfections occur inside the mouth (intraoral HSV stomatitis) affecting the gums, alveolar ridge, hard palate, and the back of the tongue, possibly accompanied by herpes labialis. A lesion caused by herpes simplex can occur in the corner of the mouth and be mistaken for angular cheilitis of another cause. Sometimes termed "angular herpes simplex". A cold sore at the corner of the mouth behaves similarly to elsewhere on the lips. Rather than utilizing antifungal creams, angular herpes simplex is treated in the same way as a cold sore, with topical antiviral drugs. Causes Herpes labialis infection occurs when the herpes simplex virus comes into contact with oral mucosal tissue or abraded skin of the mouth. Infection by the type 1 strain of herpes simplex virus (HSV-1) is most common; however, cases of oral infection by the type 2 strain are increasing. Oral HSV-2 shedding is rare, and "usually noted in the context of first episode genital herpes." In general, both types can cause oral or genital herpes. Cold sores are the result of the virus reactivating in the body. Once HSV-1 has entered the body, it never leaves. The virus moves from the mouth to remain latent in the central nervous system. In approximately one-third of people, the virus can "wake up" or reactivate to cause disease. 
When reactivation occurs, the virus travels down the nerves to the skin where it may cause blisters (cold sores) around the lips or mouth area. In the case of herpes zoster, the nose can be affected. Cold sore outbreaks may be influenced by stress, menstruation, sunlight, sunburn, fever, dehydration, or local skin trauma. Surgical procedures such as dental or neural surgery, lip tattooing, or dermabrasion are also common triggers. HSV-1 can in rare cases be transmitted to newborn babies by family members or hospital staff who have cold sores; this can cause a severe disease called neonatal herpes simplex. People can transfer the virus from their cold sores to other areas of the body, such as the eye, skin, or fingers; this is called autoinoculation. Eye infection, in the form of conjunctivitis or keratitis, can happen when the eyes are rubbed after touching the lesion. Finger infection (herpetic whitlow) can occur when a child with cold sores or primary HSV-1 infection sucks their fingers. Blood tests for herpes may differentiate between type 1 and type 2. When a person is not experiencing any symptoms, a blood test alone does not reveal the site of infection. In younger adults, genital herpes infections were caused with almost equal frequency by type 1 and type 2 when samples were taken from genital lesions. Herpes in the mouth is more likely to be caused by type 1, but (see above) also can be type 2. The only way to know for certain if a positive blood test for herpes is due to infection of the mouth, genitals, or elsewhere, is to sample from lesions. This is not possible if the affected individual is asymptomatic. The body's immune system typically fights the virus. Prevention Primary infection The likelihood of infection can be reduced by avoiding touching an area with active infection, avoiding contact sports during active infection, frequent hand washing, and use of mouth-rinsing (anti-viral, anti-bacterial) products. During active infection (outbreaks with oral lesions) avoid oral-to-oral kissing and oral-genital sex without protection. HSV-1 can be transmitted to uninfected partners through oral sex, resulting in genital lesions. Healthcare workers working with patients who have active lesions are advised to use gloves, eye protection, and mouth protection during physical, mucosal, and bronchoscopic procedures and examinations. Recurrent infection In some cases, sun exposure can lead to HSV-1 reactivation; therefore, use of zinc-based sunscreen or topical and oral therapeutics such as acyclovir and valacyclovir may prove helpful. Other triggers for recurrent herpetic infection include fever, the common cold, fatigue, emotional stress, trauma, sideropenia, oral cancer therapy, immunosuppression, chemotherapy, oral and facial surgery, menstruation, epidural morphine, and gastrointestinal upset. Surgical procedures like nerve root decompression, facial dermabrasion, and ablative laser resurfacing can increase risks of reactivation by 50–70%. Treatment Although there is no cure or vaccine for the virus, the human body's immune system and specific antibodies typically fight the virus. Treatment options include no treatment, topical creams (indifferent, antiviral, and anaesthetic), and oral antiviral medications. Indifferent topical creams include zinc oxide and glycerin cream, which can have itching and burning sensations as side effects, and docosanol. 
Docosanol, a saturated fatty alcohol, was approved by the United States Food and Drug Administration for herpes labialis in adults with properly functioning immune systems. It is comparable in effectiveness to prescription topical antiviral agents. Due to docosanol's mechanism of action, there is little risk of drug resistance. Antiviral creams include acyclovir and penciclovir, which can speed healing by as much as 10%. Oral antivirals include acyclovir, valaciclovir, and famciclovir. Famciclovir or valacyclovir, taken in pill form, can be effective as a single-day, high-dose regimen that is more cost-effective and convenient than the traditional treatment of lower doses for 5–7 days. Anaesthetic creams include lidocaine and prilocaine, which have been shown to reduce the duration of subjective symptoms and eruptions. Treatment recommendations vary with the severity of the symptoms and the chronicity of the infection. Treatment with oral antivirals such as acyclovir in children within 72 hours of illness onset has been shown to shorten the duration of fever, odynophagia, and lesions, and to reduce viral shedding. For patients with mild to moderate symptoms, a local anaesthetic such as lidocaine for pain, without an antiviral, may be sufficient. However, those with occasional severe recurrences of lesions may use oral antivirals. Patients with severe cases, such as those with frequent recurrences of lesions, disfiguring lesions, or serious systemic complications, may need chronic suppressive therapy on top of the antiviral therapies. Mouth rinses combining ethanol and essential oils are recommended as a therapeutic method against herpes by the German Society of Hospital Hygiene. Further research into the virucidal effects of essential oils exists. Epidemiology Herpes labialis is common throughout the world. A large survey of young adults on six continents reported that 33% of males and 28% of females had herpes labialis on two or more occasions during the year before the study. The lifetime prevalence in the United States of America is estimated at 20–45% of the adult population. Lifetime prevalence in France was reported by one study as 32% in males and 42% in females. In Germany, the prevalence was reported at 32% in people aged between 35 and 44 years, and 20% in those aged 65–74. In Jordan, another study reported a lifetime prevalence of 26%. Research Research has gone into vaccines and drugs for both prevention and treatment of herpes infections.
Biology and health sciences
Viral diseases
Health
21756816
https://en.wikipedia.org/wiki/Earthlight%20%28astronomy%29
Earthlight (astronomy)
Earthlight is the diffuse reflection of sunlight from Earth's surface and clouds. Earthshine (an example of planetshine), also known as the Moon's ashen glow, is the dim illumination of the otherwise unilluminated portion of the Moon by this indirect sunlight. Earthlight on the Moon during the waxing crescent is called "the old Moon in the new Moon's arms", while that during the waning crescent is called "the new Moon in the old Moon's arms". Visibility Earthlight has a calculated maximum apparent magnitude of −17.7 as viewed from the Moon. When the Earth is at maximum phase, the total radiance at the lunar surface is approximately from Earthlight. This is only 0.01% of the radiance from direct sunlight. Earthshine has a calculated maximum apparent magnitude of −3.69 as viewed from Earth. This phenomenon is most visible from Earth at night (or astronomical twilight) a few days before or after the day of new moon, when the lunar phase is a thin crescent. On these nights, the entire lunar disk is both directly and indirectly sunlit, and is thus unevenly lit but bright enough to see. Earthshine is most clearly seen after dusk during the waxing crescent (in the western sky) and before dawn during the waning crescent (in the eastern sky). The term earthlight would also be suitable for an observer on the Moon seeing Earth during the lunar night, or for an astronaut inside a spacecraft looking out the window. Arthur C. Clarke uses it in this sense in his 1955 novel Earthlight. High-contrast photography is also able to reveal the night side of the Moon illuminated by Earthlight during a solar eclipse. Radio frequency transmissions are also reflected by the Moon; for example, see Earth–Moon–Earth communication. History The phenomenon was sketched and remarked upon in the 16th century by Leonardo da Vinci, who thought that the illumination came from reflections from the Earth's oceans (we now know that clouds account for much more reflected intensity than the oceans). It is referenced in "The Ballad of Sir Patrick Spens" (Child Ballad No. 58), in the phrase "‘A saw the new muin late yestreen/ Wi the auld muin in her airm." Astronaut Dr Sian Proctor was moved by seeing and experiencing earthlight from orbit as mission pilot of the Inspiration4 space mission and wrote the poem "Earthlight". In 2024, Proctor authored EarthLight: The Power of EarthLight and the Human Perspective on the concept and nature of earthlight.
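The apparent magnitudes quoted above can be related to brightness ratios with the standard (Pogson) magnitude relation. As a rough illustrative sketch, assuming the Sun's apparent magnitude as seen from the Moon is about −26.7 (roughly the same as from Earth; this value is not stated in the text above):

```latex
% Pogson relation: flux ratio corresponding to a magnitude difference
\frac{F_{\text{earthlight}}}{F_{\text{sunlight}}}
  = 10^{-0.4\,(m_{\text{earthlight}} - m_{\text{Sun}})}
  = 10^{-0.4\,(-17.7 - (-26.7))}
  = 10^{-3.6} \approx 2.5\times 10^{-4}
```

That is a few hundredths of a percent, the same order of magnitude as the roughly 0.01% radiance figure quoted above; the exact value depends on the Earth's phase and albedo.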
Physical sciences
Solar System
Astronomy
26145195
https://en.wikipedia.org/wiki/Plastic
Plastic
Plastics are a wide range of synthetic or semisynthetic materials that use polymers as a main ingredient. Their plasticity makes it possible for plastics to be molded, extruded, or pressed into solid objects of various shapes. This adaptability, combined with a wide range of other properties, such as being lightweight, durable, flexible, nontoxic, and inexpensive to produce, has led to their widespread use around the world. Most plastics are derived from natural gas and petroleum, and a small fraction from renewable materials. One such material is polylactic acid. Between 1950 and 2017, 9.2 billion metric tons of plastic are estimated to have been made; more than half of this has been produced since 2004. In 2023, preliminary figures indicate that over 400 million metric tons of plastic were produced worldwide. If global trends on plastic demand continue, it is estimated that annual global plastic production will reach over 1.3 billion tons by 2060. Major applications include packaging (40%) and building/construction (20%). The success and dominance of plastics since the early 20th century has had major benefits for mankind, ranging from medical devices to lightweight construction materials. The sewage systems in many countries rely on the resiliency and adaptability of polyvinyl chloride. It is also true that plastics are the basis of widespread environmental concerns, due to their slow decomposition rate in natural ecosystems. Most plastic produced has not been reused. Some is unsuitable for reuse. Much is captured in landfills or as plastic pollution. Particular concern focuses on microplastics. Marine plastic pollution, for example, creates garbage patches. Of all the plastic discarded so far, some 14% has been incinerated and less than 10% has been recycled. In developed economies, about a third of plastic is used in packaging and roughly the same in buildings in applications such as piping, plumbing or vinyl siding. Other uses include automobiles (up to 20% plastic), furniture, and toys. In the developing world, the applications of plastic may differ; 42% of India's consumption is used in packaging. Worldwide, about 50 kg of plastic is produced annually per person, with production doubling every ten years. The world's first fully synthetic plastic was Bakelite, invented in New York in 1907, by Leo Baekeland, who coined the term "plastics". Dozens of different types of plastics are produced today, such as polyethylene, which is widely used in product packaging, and polyvinyl chloride (PVC), used in construction and pipes because of its strength and durability. Many chemists have contributed to the materials science of plastics, including Nobel laureate Hermann Staudinger, who has been called "the father of polymer chemistry", and Herman Mark, known as "the father of polymer physics". Etymology The word plastic derives from the Greek πλαστικός (plastikos), meaning "capable of being shaped or molded;" in turn, it is from πλαστός (plastos) meaning "molded." As a noun, the word most commonly refers to the solid products of petrochemical-derived manufacturing. The noun plasticity refers here specifically to the deformability of the materials used in the manufacture of plastics. Plasticity allows molding, extrusion or compression into a variety of shapes: films, fibers, plates, tubes, bottles and boxes, among many others. Plasticity also has a technical definition in materials science outside the scope of this article; it refers to the non-reversible change in form of solid substances. 
Structure Most plastics contain organic polymers. The vast majority of these polymers are formed from chains of carbon atoms, with or without the attachment of oxygen, nitrogen or sulfur atoms. These chains comprise many repeating units formed from monomers. Each polymer chain consists of several thousand repeating units. The backbone is the part of the chain that is on the main path, linking together a large number of repeat units. To customize the properties of a plastic, different molecular groups called side chains hang from this backbone; they are usually attached to the monomers before the monomers themselves are linked together to form the polymer chain. The structure of these side chains influences the properties of the polymer. Classifications Plastics are usually classified by the chemical structure of the polymer's backbone and side chains. Important groups classified in this way include the acrylics, polyesters, silicones, polyurethanes, and halogenated plastics. Plastics can be classified by the chemical process used in their synthesis, such as condensation, polyaddition, and cross-linking. They can also be classified by their physical properties, including hardness, density, tensile strength, thermal resistance, and glass transition temperature. Plastics can additionally be classified by their resistance and reactions to various substances and processes, such as exposure to organic solvents, oxidation, and ionizing radiation. Other classifications of plastics are based on qualities relevant to manufacturing or product design for a particular purpose. Examples include thermoplastics, thermosets, conductive polymers, biodegradable plastics, engineering plastics and elastomers. Thermoplastics and thermosetting polymers One important classification of plastics is the degree to which the chemical processes used to make them are reversible or not. Thermoplastics do not undergo chemical change in their composition when heated and thus can be molded repeatedly. Examples include polyethylene (PE), polypropylene (PP), polystyrene (PS), and polyvinyl chloride (PVC). Thermosets, or thermosetting polymers, can melt and take shape only once: after they have solidified, they stay solid and retain their shape permanently. If reheated, thermosets decompose rather than melt. Examples of thermosets include epoxy resin, polyimide, and Bakelite. The vulcanization of rubber is an example of this process. Before heating in the presence of sulfur, natural rubber (polyisoprene) is a sticky, slightly runny material, and after vulcanization, the product is dry and rigid. Thermosets consist of closely cross-linked polymers. Elastomers consist of wide-meshed cross-linked polymers; the wide mesh allows the material to stretch under tensile load. Thermoplastics consist of non-crosslinked polymers, often with a semi-crystalline structure; they have a glass transition temperature and are fusible. Commodity, engineering and high-performance plastics Commodity plastics Around 70% of global production is concentrated in six major polymer types, the so-called commodity plastics. 
Unlike most other plastics, these can often be identified by their resin identification code (RIC): Polyethylene terephthalate (PET or PETE) High-density polyethylene (HDPE or PE-HD) Polyvinyl chloride (PVC or V) Low-density polyethylene (LDPE or PE-LD) Polypropylene (PP) Polystyrene (PS) Polyurethanes (PUR) and PP&A fibers are often also included as major commodity classes, although they usually lack RICs, as they are chemically quite diverse groups. These materials are inexpensive, versatile and easy to work with, making them the preferred choice for the mass production of everyday objects. Their biggest single application is in packaging, with some 146 million tonnes being used this way in 2015, equivalent to 36% of global production. Due to their dominance, many of the properties and problems commonly associated with plastics, such as pollution stemming from their poor biodegradability, are ultimately attributable to commodity plastics. A huge number of plastics exist beyond the commodity plastics, with many having exceptional properties. Engineering plastics Engineering plastics are more robust and are used to manufacture products such as vehicle parts, building and construction materials, and some machine parts. In some cases, they are polymer blends formed by mixing different plastics together (ABS, HIPS etc.). Engineering plastics can replace metals in vehicles, lowering their weight and improving fuel efficiency by 6–8%. Roughly 50% of the volume of modern cars is made of plastic, but this only accounts for 12–17% of the vehicle weight. Acrylonitrile butadiene styrene (ABS): electronic equipment cases (e.g., computer monitors, printers, keyboards) and drainage pipes High-impact polystyrene (HIPS): refrigerator liners, food packaging, and vending cups Polycarbonate (PC): compact discs, eyeglasses, riot shields, security windows, traffic lights, and lenses Polycarbonate + acrylonitrile butadiene styrene (PC + ABS): a blend of PC and ABS that creates a stronger plastic used in car interior and exterior parts, and in mobile phone bodies Polyethylene + acrylonitrile butadiene styrene (PE + ABS): a slippery blend of PE and ABS used in low-duty dry bearings Polymethyl methacrylate (PMMA) (acrylic): contact lenses (of the original "hard" variety), glazing (best known in this form by its various trade names around the world; e.g. Perspex, Plexiglas, and Oroglas), fluorescent-light diffusers, and rear light covers for vehicles. It also forms the basis of artistic and commercial acrylic paints, when suspended in water with the use of other agents. Silicones (polysiloxanes): heat-resistant resins used mainly as sealants but also used for high-temperature cooking utensils and as a base resin for industrial paints Urea-formaldehyde (UF): one of the aminoplasts used as a multi-colorable alternative to phenolics: used as a wood adhesive (for plywood, chipboard, hardboard) and electrical switch housings High-performance plastics High-performance plastics are usually expensive, with their use limited to specialized applications that make use of their superior properties. Aramids: best known for their use in the manufacture of body armor, this class of heat-resistant and strong synthetic fibers also has applications in aerospace and military and includes Kevlar, Nomex, and Twaron. Ultra-high-molecular-weight polyethylenes (UHMWPE) Polyetheretherketone (PEEK): strong, chemical- and heat-resistant thermoplastic; its biocompatibility allows for use in medical implant applications and aerospace moldings. 
It is one of the most expensive commercial polymers. Polyetherimide (PEI) (Ultem): a high-temperature, chemically stable polymer that does not crystallize Polyimide: a high-temperature plastic used in materials such as Kapton tape Polysulfone: high-temperature melt-processable resin used in membranes, filtration media, water heater dip tubes and other high-temperature applications Polytetrafluoroethylene (PTFE), or Teflon: heat-resistant, low-friction coatings used in non-stick surfaces for frying pans, plumber's tape, and water slides Polyamide-imide (PAI): high-performance engineering plastic extensively used in high-performance gears, switches, transmissions, and other automotive components and aerospace parts Amorphous and crystalline plastics Many plastics are completely amorphous (without a highly ordered molecular structure), including thermosets, polystyrene, and methyl methacrylate (PMMA). Crystalline plastics exhibit a pattern of more regularly spaced atoms, such as high-density polyethylene (HDPE), polybutylene terephthalate (PBT), and polyether ether ketone (PEEK). However, some plastics are partially amorphous and partially crystalline in molecular structure, giving them both a melting point and one or more glass transitions (the temperature above which the extent of localized molecular flexibility is substantially increased). These so-called semi-crystalline plastics include polyethylene, polypropylene, polyvinyl chloride, polyamides (nylons), polyesters and some polyurethanes. Conductive polymers Intrinsically conducting polymers (ICPs) are organic polymers that conduct electricity. While a conductivity of up to 80 kilosiemens per centimeter (kS/cm) in stretch-oriented polyacetylene has been achieved, it does not approach that of most metals. For example, copper has a conductivity of several hundred kS/cm. Biodegradable plastics and bioplastics Biodegradable plastics Biodegradable plastics are plastics that degrade (break down) upon exposure to biological factors, such as sunlight, ultra-violet radiation, moisture, bacteria, enzymes, or wind abrasion. Attacks by insects, such as waxworms and mealworms, can also be considered forms of biodegradation. Aerobic degradation requires the plastic to be exposed at the surface, whereas anaerobic degradation would be effective in landfill or composting systems. Some companies produce biodegradable additives to further promote biodegradation. Although starch powder can be added as a filler to facilitate degradation of some plastics, such treatment does not lead to complete breakdown. Some researchers have genetically engineered bacteria to synthesize completely biodegradable plastics, such as polyhydroxybutyrate (PHB); however, these were still relatively expensive as of 2021. Bioplastics While most plastics are produced from petrochemicals, bioplastics are made substantially from renewable plant materials like cellulose and starch. Due both to the finite limits of fossil fuel reserves and to rising levels of greenhouse gases caused primarily by the burning of those fuels, the development of bioplastics is a growing field. Global production capacity for bio-based plastics is estimated at 327,000 tonnes per year. In contrast, global production of polyethylene (PE) and polypropylene (PP), the world's leading petrochemical-derived polyolefins, was estimated at over 150 million tonnes in 2015. Plastic industry The plastic industry includes the global production, compounding, conversion and sale of plastic products. 
Although the Middle East and Russia produce most of the required petrochemical raw materials, the production of plastic is concentrated in the global East and West. The plastic industry comprises a huge number of companies and can be divided into several sectors: Production Between 1950 and 2017, 9.2 billion tonnes of plastic are estimated to have been made, with more than half of this having been produced since 2004. Since the birth of the plastic industry in the 1950s, global production has increased enormously, reaching 400 million tonnes a year in 2021; this is up from 381 million metric tonnes in 2015 (excluding additives). From the 1950s, rapid growth occurred in the use of plastics for packaging, in building and construction, and in other sectors. If global trends on plastic demand continue, it is estimated that by 2050 annual global plastic production will exceed 1.1-billion tonnes annually. Plastics are produced in chemical plants by the polymerization of their starting materials (monomers); which are almost always petrochemical in nature. Such facilities are normally large and are visually similar to oil refineries, with sprawling pipework running throughout. The large size of these plants allows them to exploit economies of scale. Despite this, plastic production is not particularly monopolized, with about 100 companies accounting for 90% of global production. This includes a mixture of private and state-owned enterprises. Roughly half of all production takes place in East Asia, with China being the largest single producer. Major international producers include: Dow Chemical LyondellBasell ExxonMobil SABIC BASF Sibur Shin-Etsu Chemical Indorama Ventures Sinopec Braskem Historically, Europe and North America have dominated global plastics production. However, since 2010 Asia has emerged as a significant producer, with China accounting for 31% of total plastic resin production in 2020. Regional differences in the volume of plastics production are driven by user demand, the price of fossil fuel feedstocks, and investments made in the petrochemical industry. For example, since 2010 over US$200 billion has been invested in the United States in new plastic and chemical plants, stimulated by the low cost of raw materials. In the European Union (EU), too, heavy investments have been made in the plastics industry, which employs over 1.6-million people with a turnover of more than 360 billion euros per year. In China in 2016 there were over 15,000 plastic manufacturing companies, generating more than US$366 billion in revenue. In 2017, the global plastics market was dominated by thermoplastics– polymers that can be melted and recast. Thermoplastics include polyethylene (PE), polyethylene terephthalate (PET), polypropylene (PP), polyvinyl chloride (PVC), polystyrene (PS) and synthetic fibers, which together represent 86% of all plastics. Compounding Plastic is not sold as a pure unadulterated substance, but is instead mixed with various chemicals and other materials, which are collectively known as additives. These are added during the compounding stage and include substances such as stabilizers, plasticizers and dyes, which are intended to improve the lifespan, workability or appearance of the final item. In some cases, this can involve mixing different types of plastic together to form a polymer blend, such as high impact polystyrene. Large companies may do their own compounding prior to production, but some producers have it done by a third party. 
Companies that specialize in this work are known as compounders. The compounding of thermosetting plastic is relatively straightforward, as it remains liquid until it is cured into its final form. For thermosoftening materials, which are used to make the majority of products, it is necessary to melt the plastic in order to mix in the additives. This involves heating it to anywhere between . Molten plastic is viscous and exhibits laminar flow, leading to poor mixing. Compounding is therefore done using extrusion equipment, which is able to supply the necessary heat and mixing to give a properly dispersed product. The concentrations of most additives are usually quite low; however, high levels can be added to create masterbatch products. The additives in these are concentrated but still properly dispersed in the host resin. Masterbatch granules can be mixed with cheaper bulk polymer and will release their additives during processing to give a homogeneous final product. This can be cheaper than working with a fully compounded material and is particularly common for the introduction of color. Converting Companies that produce finished goods are known as converters (sometimes processors). The vast majority of plastics produced worldwide are thermosoftening and must be heated until molten in order to be molded. Various sorts of extrusion equipment exist which can then form the plastic into almost any shape. Film blowing - Plastic films (carrier bags, sheeting) Blow molding - Small thin-walled hollow objects in large quantities (drinks bottles, toys) Rotational molding - Large thick-walled hollow objects (IBC tanks) Injection molding - Solid objects (phone cases, keyboards) Spinning - Produces fibers (nylon, spandex etc.) For thermosetting materials the process is slightly different, as the plastics are liquid to begin with and must be cured to give solid products, but much of the equipment is broadly similar. The most commonly produced plastic consumer products include packaging made from LDPE (e.g. bags, containers, food packaging film), containers made from HDPE (e.g. milk bottles, shampoo bottles, ice cream tubs), and PET (e.g. bottles for water and other drinks). Together these products account for around 36% of plastics use in the world. Most of them (e.g. disposable cups, plates, cutlery, takeaway containers, carrier bags) are used for only a short period, many for less than a day. The use of plastics in building and construction, textiles, transportation and electrical equipment also accounts for a substantial share of the plastics market. Plastic items used for such purposes generally have longer life spans. They may be in use for periods ranging from around five years (e.g. textiles and electrical equipment) to more than 20 years (e.g. construction materials, industrial machinery). Plastic consumption differs among countries and communities, with some form of plastic having made its way into most people's lives. North America (i.e. the North American Free Trade Agreement or NAFTA region) accounts for 21% of global plastic consumption, closely followed by China (20%) and Western Europe (18%). In North America and Europe, there is high per capita plastic consumption (94 kg and 85 kg/capita/year, respectively). In China, there is lower per capita consumption (58 kg/capita/year), but high consumption nationally because of its large population. 
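To illustrate the per-capita consumption figures just quoted, here is a minimal arithmetic sketch. The kg/capita/year values come from the text above; the regional population figures are rough assumptions added only for illustration, not data from the source:

```python
# Rough illustration: national consumption = per-capita consumption x population.
# Per-capita figures (kg/person/year) are quoted in the text above;
# the population numbers are approximate assumptions, not from the source.
per_capita_kg = {"North America": 94, "Western Europe": 85, "China": 58}
population = {"North America": 500e6, "Western Europe": 400e6, "China": 1.4e9}  # assumed

for region, kg in per_capita_kg.items():
    tonnes = kg * population[region] / 1000  # kg -> tonnes
    print(f"{region}: ~{tonnes / 1e6:.0f} million tonnes per year")
```

Even with a lower per-capita figure, China's large population yields the largest national total under these assumptions, which is the point the paragraph above makes.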
Applications The largest application for plastics is as packaging materials, but they are used in a wide range of other sectors, including: construction (pipes, gutters, doors and windows), textiles (stretchable fabrics, fleece), consumer goods (toys, tableware, toothbrushes), transportation (headlights, bumpers, body panels, wing mirrors), electronics (phones, computers, televisions) and as machine parts. In optics, plastics are used to manufacture aspheric lenses. Additives Additives are chemicals blended into plastics to improve their performance or appearance. Additives are therefore one of the reasons why plastic is used so widely. Plastics are composed of polymer chains. Many different chemicals are used as plastic additives. A randomly chosen plastic product generally contains around 20 additives. The identities and concentrations of additives are generally not listed on products. In the EU, over 400 additives are used in high volumes. In a global market analysis, 5,500 additives were found. At a minimum, all plastic contains some polymer stabilizers which permit them to be melt-processed (molded) without suffering polymer degradation. Additives in polyvinyl chloride (PVC), used widely for sanitary plumbing, can constitute up to 80% of the total volume. Unadulterated plastic (barefoot resin) is rarely sold. Leaching Additives may be weakly bound to the polymers or react in the polymer matrix. Although additives are blended into plastic, they remain chemically distinct from it and can gradually leach back out during normal use, when in landfills, or following improper disposal in the environment. Additives may also degrade to form other compounds that could be more benign or more toxic. Plastic fragmentation into microplastics and nanoplastics can allow chemical additives to move in the environment far from the point of use. Once released, some additives and derivatives may persist in the environment and bioaccumulate in organisms. They can have adverse effects on human health and biota. A recent review by the United States Environmental Protection Agency (US EPA) revealed that out of 3,377 chemicals potentially associated with plastic packaging and 906 likely associated with it, 68 were ranked by ECHA as "highest for human health hazards" and 68 as "highest for environmental hazards". Recycling As additives change the properties of plastics, they have to be considered during recycling. Presently, almost all recycling is performed by simply remelting and fabricating used plastic into new items. Additives present risks in recycled products because they are difficult to remove. When plastic products are recycled, it is highly likely that the additives will be integrated into the new products. Plastic waste, even if it is all of the same polymer type, will contain varying types and amounts of additives. Mixing these together can give a material with inconsistent properties, which can be unappealing to industry. For example, mixing different colored plastics with different plastic colorants together can produce a discolored or brown material, and for this reason plastic is usually sorted both by polymer type and color prior to recycling. Lack of transparency and reporting across the value chain often results in lack of knowledge concerning the chemical profile of the final products. For example, products containing brominated flame retardants have been incorporated into new plastic products. 
Flame retardants are a group of chemicals used in electronic and electrical equipment, textiles, furniture and construction materials which should not be present in food packaging or child care products. A recent study found brominated dioxins as unintentional contaminants in toys made from recycled plastic electronic waste that contained brominated flame retardants. Brominated dioxins have been found to exhibit toxicity similar to that of chlorinated dioxins. They can have negative developmental effects and negative effects on the nervous system and interfere with mechanisms of the endocrine system. Health effects Plastics have proliferated in part because they are relatively benign. They are not acutely toxic, in large part because they are insoluble and indigestible owing to their large molecular weight. Their degradation products also are rarely toxic. The same cannot be said about some additives, which tend to be lower molecular weight. Controversies associated with plastics often relate to their additives, some of which are potentially harmful. For example, some flame retardants, such as octabromodiphenyl ether and pentabromodiphenyl ether, are unsuitable for food packaging. Other harmful additives include cadmium, chromium, lead and mercury (the last regulated under the Minamata Convention on Mercury), which have previously been used in plastic production and are banned in many jurisdictions. However, they are still routinely found in some plastic packaging, including for food. Poor countries Additives can also be problematic if waste is burned, especially when burning is uncontrolled or takes place in low-technology incinerators, as is common in many developing countries. Incomplete combustion can cause emissions of hazardous substances such as acid gases and ash, which can contain persistent organic pollutants (POPs) such as dioxins. A number of additives identified as hazardous to humans and/or the environment are regulated internationally. The Stockholm Convention on Persistent Organic Pollutants is a global treaty to protect human health and the environment from chemicals that remain intact in the environment for long periods, become widely distributed geographically, accumulate in the fatty tissue of humans and wildlife, and have harmful impacts on human health or on the environment. The use of bisphenol A (BPA) in plastic baby bottles is banned in many parts of the world but is not restricted in some low-income countries. Animals In 2023, plasticosis, a new disease caused by the ingestion of plastic waste, was discovered in seabirds. Birds affected with this disease were found to have scarred and inflamed digestive tracts, which can impair their ability to digest food. "When birds ingest small pieces of plastic, they found, it inflames the digestive tract. Over time, the persistent inflammation causes tissues to become scarred and disfigured, affecting digestion, growth and survival." Types of additive Health effects Plastics per se have low toxicity due to their insolubility in water and because they have a large molecular weight. They are biochemically inert. Additives in plastic products can be more problematic. For example, plasticizers like adipates and phthalates are often added to brittle plastics like PVC to make them pliable. Traces of these compounds can leach out of the product. 
Owing to concerns over the effects of such leachates, the EU has restricted the use of DEHP (di-2-ethylhexyl phthalate) and other phthalates in some applications, and the US has limited the use of DEHP, DPB, BBP, DINP, DIDP, and DnOP in children's toys and child-care articles through the Consumer Product Safety Improvement Act. Some compounds leaching from polystyrene food containers have been proposed to interfere with hormone functions and are suspected human carcinogens (cancer-causing substances). Other chemicals of potential concern include alkylphenols. While a finished plastic may be non-toxic, the monomers used in the manufacture of its parent polymers may be toxic. In some cases, small amounts of those chemicals can remain trapped in the product unless suitable processing is employed. For example, the World Health Organization's International Agency for Research on Cancer (IARC) has recognized vinyl chloride, the precursor to PVC, as a human carcinogen. Bisphenol A (BPA) Some plastic products degrade to chemicals with estrogenic activity. The primary building block of polycarbonates, bisphenol A (BPA), is an estrogen-like endocrine disruptor that may leach into food. Research in Environmental Health Perspectives finds that BPA leached from the lining of tin cans, dental sealants and polycarbonate bottles can increase the body weight of lab animals' offspring. A more recent animal study suggests that even low-level exposure to BPA results in insulin resistance, which can lead to inflammation and heart disease. As of January 2010, the Los Angeles Times reported that the US Food and Drug Administration (FDA) is spending $30 million to investigate indications of BPA's link to cancer. Bis(2-ethylhexyl) adipate, present in plastic wrap based on PVC, is also of concern, as are the volatile organic compounds present in new car smell. The EU has a permanent ban on the use of phthalates in toys. In 2009, the US government banned certain types of phthalates commonly used in plastic. Environmental effects Because the chemical structure of most plastics renders them durable, they are resistant to many natural degradation processes. Much of this material may persist for centuries or longer, given the demonstrated persistence of structurally similar natural materials such as amber. Estimates differ as to the amount of plastic waste produced in the last century. By one estimate, one billion tons of plastic waste have been discarded since the 1950s. Others estimate a cumulative human production of 8.3-billion tons of plastic, of which 6.3-billion tons is waste, with only 9% getting recycled. It is estimated that this waste is made up of 81% polymer resin, 13% polymer fibers and 32% additives. In 2018 more than 343 million tons of plastic waste were generated, 90% of which was composed of post-consumer plastic waste (industrial, agricultural, commercial and municipal plastic waste). The rest was pre-consumer waste from resin production and manufacturing of plastic products (e.g. materials rejected due to unsuitable color, hardness, or processing characteristics). The Ocean Conservancy reported that China, Indonesia, Philippines, Thailand, and Vietnam dump more plastic into the sea than all other countries combined. The rivers Yangtze, Indus, Yellow, Hai, Nile, Ganges, Pearl, Amur, Niger, and Mekong "transport 88% to 95% of the global [plastics] load into the sea." The presence of plastics, particularly microplastics, within the food chain is increasing. 
In the 1960s microplastics were observed in the guts of seabirds, and since then have been found in increasing concentrations. The long-term effects of plastics in the food chain are poorly understood. In 2009 it was estimated that 10% of modern waste was plastic, although estimates vary according to region. Meanwhile, 50% to 80% of debris in marine areas is plastic. Plastic is often used in agriculture. There is more plastic in the soil than in the oceans. The presence of plastic in the environment harms ecosystems and human health. Research on the environmental impacts has typically focused on the disposal phase. However, the production of plastics is also responsible for substantial environmental, health and socioeconomic impacts. Prior to the Montreal Protocol, CFCs had been commonly used in the manufacture of the plastic polystyrene, the production of which had contributed to depletion of the ozone layer. Efforts to minimize environmental impact of plastics may include lowering of plastics production and use, waste- and recycling-policies, and the proactive development and deployment of alternatives to plastics such as for sustainable packaging. Microplastics Decomposition of plastics Plastics degrade by a variety of processes, the most significant of which is usually photo-oxidation. Their chemical structure determines their fate. Polymers' marine degradation takes much longer as a result of the saline environment and cooling effect of the sea, contributing to the persistence of plastic debris in certain environments. Recent studies have shown, however, that plastics in the ocean decompose faster than had been previously thought, due to exposure to the sun, rain, and other environmental conditions, resulting in the release of toxic chemicals such as bisphenol A. However, due to the increased volume of plastics in the ocean, decomposition has slowed down. The Marine Conservancy has predicted the decomposition rates of several plastic products: It is estimated that a foam plastic cup will take 50 years, a plastic beverage holder will take 400 years, a disposable diaper will take 450 years, and fishing line will take 600 years to degrade. Microbial species capable of degrading plastics are known to science, some of which are potentially useful for disposal of certain classes of plastic waste. In 1975, a team of Japanese scientists studying ponds containing waste water from a nylon factory discovered a strain of Flavobacterium that digests certain byproducts of nylon 6 manufacture, such as the linear dimer of 6-aminohexanoate. Nylon 4 (polybutyrolactam) can be degraded by the ND-10 and ND-11 strains of Pseudomonas sp. found in sludge, resulting in GABA (γ-aminobutyric acid) as a byproduct. Several species of soil fungi can consume polyurethane, including two species of the Ecuadorian fungus Pestalotiopsis. They can consume polyurethane both aerobically and anaerobically (such as at the bottom of landfills). Methanogenic microbial consortia degrade styrene, using it as a carbon source. Pseudomonas putida can convert styrene oil into various biodegradable polyhydroxyalkanoates. Microbial communities isolated from soil samples mixed with starch have been shown to be capable of degrading polypropylene. The fungus Aspergillus fumigatus effectively degrades plasticized PVC. Phanerochaete chrysosporium has been grown on PVC in a mineral salt agar. P. chrysosporium, Lentinus tigrinus, A. niger, and A. sydowii can also effectively degrade PVC. 
Phenol-formaldehyde, commonly known as Bakelite, is degraded by the white rot fungus P. chrysosporium. Acinetobacter has been found to partially degrade low-molecular-weight polyethylene oligomers. When used in combination, Pseudomonas fluorescens and Sphingomonas can degrade over 40% of the weight of plastic bags in less than three months. The thermophilic bacterium Brevibacillus borstelensis (strain 707) was isolated from a soil sample and found capable of using low-density polyethylene as a sole carbon source when incubated at 50 °C. Pre-exposure of the plastic to ultraviolet radiation broke chemical bonds and aided biodegradation; the longer the period of UV exposure, the greater the promotion of the degradation. Hazardous molds have been found aboard space stations that degrade rubber into a digestible form. Several species of yeasts, bacteria, algae and lichens have been found growing on synthetic polymer artifacts in museums and at archaeological sites. In the plastic-polluted waters of the Sargasso Sea, bacteria have been found that consume various types of plastic; however, it is unknown to what extent these bacteria effectively clean up poisons rather than simply release them into the marine microbial ecosystem. Plastic-eating microbes also have been found in landfills. Nocardia can degrade PET with an esterase enzyme. The fungus Geotrichum candidum, found in Belize, has been found to consume the polycarbonate plastic found in CDs. Futuro houses are made of fiberglass-reinforced polyesters, polyester-polyurethane, and PMMA. One such house was found to be harmfully degraded by Cyanobacteria and Archaea. Recycling Pyrolysis By heating to above 500 °C (932 °F) in the absence of oxygen (pyrolysis), plastics can be broken down into simpler hydrocarbons, which can be used as feedstocks for the fabrication of new plastics. These hydrocarbons can also be used as fuels. Greenhouse gas emissions According to the Organisation for Economic Co-operation and Development, plastic contributed greenhouse gases in the equivalent of 1.8 billion tons of carbon dioxide () to the atmosphere in 2019, 3.4% of global emissions. They say that by 2060, plastic could emit 4.3 billion tons of greenhouse gas a year. The effect of plastics on global warming is mixed. Plastics are generally made from fossil gas or petroleum; thus, the production of plastics creates further fugitive emissions of methane when the fossil gas or petroleum is produced. Additionally, much of the energy used in plastic production is not sustainable energy; for example, high-temperature process heat is often obtained by burning fossil gas. However, plastics can also limit methane emissions; for example, packaging to reduce food waste. A study from 2024 found that, compared to glass and aluminum, plastic may actually have less of a negative effect on the environment and therefore might be the best option for most food packaging and other common uses. The study, involving European researchers, found that "replacing plastics with alternatives is worse for greenhouse gas emissions in most cases" and that in "15 of the 16 applications a plastic product incurs fewer greenhouse gas emissions than their alternatives". Production of plastics Production of plastics from crude oil requires 7.9 to 13.7 kWh/lb (taking into account the average efficiency of US utility stations of 35%). Producing silicon and semiconductors for modern electronic equipment is even more energy consuming: 29.2 to 29.8 kWh/lb for silicon, and about 381 kWh/lb for semiconductors. 
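For readers more used to SI units, the per-pound energy figures just quoted can be converted to MJ/kg; the comparison with other materials continues below. A minimal sketch, where the kWh/lb values come from the text and only the standard unit conversion factors (1 kWh = 3.6 MJ, 1 lb = 0.45359237 kg) are added:

```python
# Convert the quoted specific-energy figures from kWh/lb to MJ/kg.
KWH_TO_MJ = 3.6        # 1 kWh = 3.6 MJ
LB_TO_KG = 0.45359237  # 1 lb = 0.45359237 kg

figures_kwh_per_lb = {
    "plastics (from crude oil)": (7.9, 13.7),
    "silicon": (29.2, 29.8),
    "semiconductors": (381, 381),
}

for material, (lo, hi) in figures_kwh_per_lb.items():
    lo_si = lo * KWH_TO_MJ / LB_TO_KG
    hi_si = hi * KWH_TO_MJ / LB_TO_KG
    print(f"{material}: {lo_si:.0f}-{hi_si:.0f} MJ/kg")
```

On this basis, plastics work out to roughly 60–110 MJ/kg, which helps put the figures for iron, glass, steel and paper quoted next into context.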
This is much higher than the energy needed to produce many other materials. For example, to produce iron (from iron ore) requires 2.5–3.2 kWh/lb of energy; glass (from sand, etc.) 2.3–4.4 kWh/lb; steel (from iron) 2.5–6.4 kWh/lb; and paper (from timber) 3.2–6.4 kWh/lb. Incineration of plastics Quickly burning plastics at very high temperatures breaks down many toxic components, such as dioxins and furans. This approach is widely used in municipal solid waste incineration. Municipal solid waste incinerators also normally treat the flue gas to decrease pollutants further, which is needed because uncontrolled incineration of plastic produces carcinogenic polychlorinated dibenzo-p-dioxins. Open-air burning of plastic occurs at lower temperatures and normally releases such toxic fumes. In the European Union, municipal waste incineration is regulated by the Industrial Emissions Directive, which stipulates a minimum temperature of 850 °C for at least two seconds. Facilitation of natural degradation The cockroach Blaptica dubia is claimed to help degrade commercial polystyrene. This biodegradation seems to be carried out by plastic-degrading bacteria inhabiting the gut of the cockroach. The biodegradation products have also been found in its feces. History The development of plastics has evolved from the use of naturally plastic materials (e.g., gums and shellac) to the use of the chemical modification of those materials (e.g., natural rubber, cellulose, collagen, and milk proteins), and finally to completely synthetic plastics (e.g., bakelite, epoxy, and PVC). Early plastics were bio-derived materials such as egg and blood proteins, which are organic polymers. In around 1600 BC, Mesoamericans used natural rubber for balls, bands, and figurines. Treated cattle horns were used as windows for lanterns in the Middle Ages. Materials that mimicked the properties of horns were developed by treating milk proteins with lye. In the nineteenth century, as chemistry developed during the Industrial Revolution, many materials were reported. The development of plastics accelerated with Charles Goodyear's 1839 discovery of vulcanization to harden natural rubber. Parkesine, invented by Alexander Parkes in 1855 and patented the following year, is considered the first man-made plastic. It was manufactured from cellulose (the major component of plant cell walls) treated with nitric acid and a solvent. The output of the process (commonly known as cellulose nitrate or pyroxylin) could be dissolved in alcohol and hardened into a transparent and elastic material that could be molded when heated. By incorporating pigments into the product, it could be made to resemble ivory. Parkesine was unveiled at the 1862 International Exhibition in London and garnered for Parkes the bronze medal. In 1893, French chemist Auguste Trillat discovered the means to insolubilize casein (milk proteins) by immersion in formaldehyde, producing material marketed as galalith. In 1897, mass-printing press owner Wilhelm Krische of Hanover, Germany, was commissioned to develop an alternative to blackboards. The resultant horn-like plastic made from casein was developed in cooperation with the Austrian chemist (Friedrich) Adolph Spitteler (1846–1940). Although unsuitable for the intended purpose, other uses would be discovered. The world's first fully synthetic plastic was Bakelite, invented in New York in 1907 by Leo Baekeland, who coined the term plastics. 
Many chemists have contributed to the materials science of plastics, including Nobel laureate Hermann Staudinger, who has been called "the father of polymer chemistry", and Herman Mark, known as "the father of polymer physics". After World War I, improvements in chemistry led to an explosion of new forms of plastics, with mass production beginning in the 1940s and 1950s. Among the earliest examples in the wave of new polymers were polystyrene (first produced by BASF in the 1930s) and polyvinyl chloride (first created in 1872 but commercially produced in the late 1920s). In 1923, Durite Plastics, Inc., was the first manufacturer of phenol-furfural resins. In 1933, polyethylene was discovered by Imperial Chemical Industries (ICI) researchers Reginald Gibson and Eric Fawcett. The discovery of polyethylene terephthalate (PETE) is credited to employees of the Calico Printers' Association in the UK in 1941; it was licensed to DuPont for the US and ICI otherwise, and as one of the few plastics appropriate as a replacement for glass in many circumstances, resulting in widespread use for bottles in Europe. In 1954 polypropylene was discovered by Giulio Natta and began to be manufactured in 1957. Also in 1954 expanded polystyrene (used for building insulation, packaging, and cups) was invented by Dow Chemical. Since the 1960s, plastic production has surged with the advent of polycarbonate and HDPE, widely used in various products. In the 1980s and 1990s, plastic recycling and the development of biodegradable plastics began to flourish to mitigate environmental impacts. From 2000 to the present, bioplastics from renewable sources and awareness of microplastics have spurred extensive research and policies to control plastic pollution. Policy Work is currently underway to develop a global treaty on plastic pollution. On March 2, 2022, UN Member States voted at the resumed fifth UN Environment Assembly (UNEA-5.2) to establish an Intergovernmental Negotiating Committee (INC) with the mandate of advancing a legally-binding international agreement on plastics. The resolution is entitled "End plastic pollution: Towards an international legally binding instrument." The mandate specifies that the INC must begin its work by the end of 2022 with the goal of "completing a draft global legally binding agreement by the end of 2024."
Technology
Materials
null
4013678
https://en.wikipedia.org/wiki/Symptomatic%20treatment
Symptomatic treatment
Symptomatic treatment, supportive care, supportive therapy, or palliative treatment is any medical therapy of a disease that only affects its symptoms, not the underlying cause. It is usually aimed at reducing the signs and symptoms for the comfort and well-being of the patient, but it also may be useful in reducing organic consequences and sequelae of these signs and symptoms of the disease. In many diseases, even in those whose etiologies are known (e.g., most viral diseases, such as influenza and Rift Valley fever), symptomatic treatment is the only treatment available so far. For more detail, see supportive therapy. For conditions like cancer, arthritis, neuropathy, tendinopathy, and injury, it can be useful to distinguish treatments that are supportive/palliative and cannot alter the natural history of the disease from disease-modifying treatments, which can. Examples Examples of symptomatic treatments: Analgesics, to reduce pain Anti-inflammatory agents, for inflammation caused by arthritis Antitussives, for cough Antihistaminics (also known as antihistamines), for allergy Antipyretics, for fever Enemas, for constipation Treatments that reduce unwanted side effects from drugs Uses When the etiology (the cause, set of causes, or manner of causation of a disease or condition) for the disease is known, then specific treatment may be instituted, but it is generally associated with symptomatic treatment as well. When the etiology is unknown, then symptomatic treatment may be the only realistic option. Symptomatic treatments are often used to manage side effects, such as drug withdrawal syndromes. Symptomatic treatment is not always recommended, and in fact, it may be dangerous, because it may mask the presence of an underlying etiology which will then be forgotten or treated with great delay. Examples: Low-grade fever for 15 days or more is sometimes the only symptom of bacteremia by staphylococcus bacteria. Suppressing it by symptomatic treatment will hide the disease from effective diagnosis and treatment with antibiotics. The consequence may be severe (rheumatic fever, nephritis, endocarditis, etc.) Chronic headache may be caused simply by a constitutional disposition or be the result of a brain tumor or a brain aneurysm. Finally, symptomatic treatment is not exempt from adverse effects, and may be a cause of iatrogenic consequences (i.e., ill effects caused by the treatment itself), such as allergic reactions, stomach bleeding, central nervous system effects (nausea, dizziness, etc.).
Biology and health sciences
Medical procedures
null
4014603
https://en.wikipedia.org/wiki/Flying%20and%20gliding%20animals
Flying and gliding animals
A number of animals are capable of aerial locomotion, either by powered flight or by gliding. This trait has appeared by evolution many times, without any single common ancestor. Flight has evolved at least four times in separate animals: insects, pterosaurs, birds, and bats. Gliding has evolved on many more occasions. Usually the development is to aid canopy animals in getting from tree to tree, although there are other possibilities. Gliding, in particular, has evolved among rainforest animals, especially in the rainforests in Asia (most especially Borneo) where the trees are tall and widely spaced. Several species of aquatic animals, and a few amphibians and reptiles have also evolved this gliding flight ability, typically as a means of evading predators. Types Animal aerial locomotion can be divided into two categories: powered and unpowered. In unpowered modes of locomotion, the animal uses aerodynamic forces exerted on the body due to wind or falling through the air. In powered flight, the animal uses muscular power to generate aerodynamic forces to climb or to maintain steady, level flight. Those who can find air that is rising faster than they are falling can gain altitude by soaring. Unpowered These modes of locomotion typically require an animal start from a raised location, converting that potential energy into kinetic energy and using aerodynamic forces to control trajectory and angle of descent. Energy is continually lost to drag without being replaced, thus these methods of locomotion have limited range and duration. Falling: decreasing altitude under the force of gravity, using no adaptations to increase drag or provide lift. Parachuting: falling at an angle greater than 45° from the horizontal with adaptations to increase drag forces. Very small animals may be carried up by the wind. Some gliding animals may use their gliding membranes for drag rather than lift, to safely descend. Gliding flight: falling at an angle less than 45° from the horizontal with lift from adapted aerofoil membranes. This allows slowly falling directed horizontal movement, with streamlining to decrease drag forces for aerofoil efficiency and often with some maneuverability in air. Gliding animals have a lower aspect ratio (wing length/breadth) than true flyers. Powered flight Powered flight has evolved at least four times: first in the insects, then in pterosaurs, next in birds, and last in bats. Studies on theropod dinosaurs do suggest multiple (at least 3) independent acquisitions of powered flight however, and a recent study proposes independent acquisitions amidst the different bat clades as well. Powered flight uses muscles to generate aerodynamic force, which allows the animal to produce lift and thrust. The animal may ascend without the aid of rising air. Externally powered Ballooning and soaring are not powered by muscle, but rather by external aerodynamic sources of energy: the wind and rising thermals, respectively. Both can continue as long as the source of external power is present. Soaring is typically only seen in species capable of powered flight, as it requires extremely large wings. Ballooning: being carried up into the air from the aerodynamic effect on long strands of silk in the wind. Certain silk-producing arthropods, mostly small or young spiders, secrete a special light-weight gossamer silk for ballooning, sometimes traveling great distances at high altitude. 
Soaring: gliding in rising or otherwise moving air that requires specific physiological and morphological adaptations that can sustain the animal aloft without flapping its wings. The rising air is due to thermals, ridge lift or other meteorological features. Under the right conditions, soaring creates a gain of altitude without expending energy. Large wingspans are needed for efficient soaring. Many species will use multiple of these modes at various times; a hawk will use powered flight to rise, then soar on thermals, then descend via free-fall to catch its prey. Evolution and ecology Gliding and parachuting While gliding occurs independently from powered flight, it has some ecological advantages of its own as it is the simplest form of flight. Gliding is a very energy-efficient way of travelling from tree to tree. Although moving through the canopy running along the branches may be less energetically demanding, the faster transition between trees allows for greater foraging rates in a particular patch. Glide ratios can be dependent on size and current behavior. Higher foraging rates are supported by low glide ratios as smaller foraging patches require less gliding time over shorter distances and greater amounts of food can be acquired in a shorter time period. Low ratios are not as energy efficient as the higher ratios, but an argument made is that many gliding animals eat low energy foods such as leaves and are restricted to gliding because of this, whereas flying animals eat more high energy foods such as fruits, nectar, and insects. Mammals tend to rely on lower glide ratios to increase the amount of time foraging for lower energy food. An equilibrium glide, achieving a constant airspeed and glide angle, is harder to obtain as animal size increases. Larger animals need to glide from much higher heights and longer distances to make it energetically beneficial. Gliding is also very suitable for predator avoidance, allowing for controlled targeted landings to safer areas. In contrast to flight, gliding has evolved independently many times (more than a dozen times among extant vertebrates); however these groups have not radiated nearly as much as have groups of flying animals. Worldwide, the distribution of gliding animals is uneven, as most inhabit rain forests in Southeast Asia. (Despite seemingly suitable rain forest habitats, few gliders are found in India or New Guinea and none in Madagascar.) Additionally, a variety of gliding vertebrates are found in Africa, a family of hylids (flying frogs) lives in South America and several species of gliding squirrels are found in the forests of northern Asia and North America. Various factors produce these disparities. In the forests of Southeast Asia, the dominant canopy trees (usually dipterocarps) are taller than the canopy trees of the other forests. Forest structure and distance between trees are influential in the development of gliding within varying species. A higher start provides a competitive advantage of further glides and farther travel. Gliding predators may more efficiently search for prey. The lower abundance of insect and small vertebrate prey for carnivorous animals (such as lizards) in Asian forests may be a factor. In Australia, many mammals (and all mammalian gliders) possess, to some extent, prehensile tails. Globally, smaller gliding species tend to have feather-like tails and larger species have fur covered round bushy tails, but smaller animals tend to rely on parachuting rather than developing gliding membranes. 
The gliding membranes, patagium, are classified in the 4 groups of propatagium, digipatagium, plagiopatagium and uropatagium. These membranes consist of two tightly bounded layers of skin connected by muscles and connective tissue between the fore and hind limbs. Powered flight evolution Powered flight has evolved unambiguously only four times—birds, bats, pterosaurs, and insects (though see above for possible independent acquisitions within bird and bat groups). In contrast to gliding, which has evolved more frequently but typically gives rise to only a handful of species, all three extant groups of powered flyers have a huge number of species, suggesting that flight is a very successful strategy once evolved. Bats, after rodents, have the most species of any mammalian order, about 20% of all mammalian species. Birds have the most species of any class of terrestrial vertebrates. Finally, insects (most of which fly at some point in their life cycle) have more species than all other animal groups combined. The evolution of flight is one of the most striking and demanding in animal evolution, and has attracted the attention of many prominent scientists and generated many theories. Additionally, because flying animals tend to be small and have a low mass (both of which increase the surface-area-to-mass ratio), they tend to fossilize infrequently and poorly compared to the larger, heavier-boned terrestrial species they share habitat with. Fossils of flying animals tend to be confined to exceptional fossil deposits formed under highly specific circumstances, resulting in a generally poor fossil record, and a particular lack of transitional forms. Furthermore, as fossils do not preserve behavior or muscle, it can be difficult to discriminate between a poor flyer and a good glider. Insects were the first to evolve flight, approximately 350 million years ago. The developmental origin of the insect wing remains in dispute, as does the purpose prior to true flight. One suggestion is that wings initially evolved from tracheal gill structures and were used to catch the wind for small insects that live on the surface of the water, while another is that they evolved from paranotal lobes or leg structures and gradually progressed from parachuting, to gliding, to flight for originally arboreal insects. Pterosaurs were the next to evolve flight, approximately 228 million years ago. These reptiles were close relatives of the dinosaurs, and reached enormous sizes, with some of the last forms being the largest flying animals ever to inhabit the Earth, having wingspans of over 9.1 m (30 ft). However, they spanned a large range of sizes, down to a 250 mm (10 in) wingspan in Nemicolopterus. Birds have an extensive fossil record, along with many forms documenting both their evolution from small theropod dinosaurs and the numerous bird-like forms of theropod which did not survive the mass extinction at the end of the Cretaceous. Indeed, Archaeopteryx is arguably the most famous transitional fossil in the world, both due to its mix of reptilian and avian anatomy and the luck of being discovered only two years after Darwin's publication of On the Origin of Species. However, the ecology of this transition is considerably more contentious, with various scientists supporting either a "trees down" origin (in which an arboreal ancestor evolved gliding, then flight) or a "ground up" origin (in which a fast-running terrestrial ancestor used wings for a speed boost and to help catch prey). 
It may also have been a non-linear process, as several non-avian dinosaurs seem to have independently acquired powered flight. Bats are the most recent to evolve (about 60 million years ago), most likely from a fluttering ancestor, though their poor fossil record has hindered more detailed study. Only a few animals are known to have specialised in soaring: the larger of the extinct pterosaurs, and some large birds. Powered flight is very energetically expensive for large animals, but for soaring their size is an advantage, as it allows them a low wing loading, that is a large wing area relative to their weight, which maximizes lift. Soaring is very energetically efficient. Biomechanics Gliding and parachuting During a free-fall with no aerodynamic forces, the object accelerates due to gravity, resulting in increasing velocity as the object descends. During parachuting, animals use the aerodynamic forces on their body to counteract the force of gravity. Any object moving through air experiences a drag force that is proportional to surface area and velocity squared; this force will partially counter the force of gravity, slowing the animal's descent to a safer speed. If this drag is oriented at an angle to the vertical, the animal's trajectory will gradually become more horizontal, and it will cover horizontal as well as vertical distance. Smaller adjustments can allow turning or other maneuvers. This can allow a parachuting animal to move from a high location on one tree to a lower location on another tree nearby. Specifically in gliding mammals, there are 3 types of gliding paths respectively: S glide, J glide, and "straight-shaped" glides where species either gain altitude post-launch then descend, rapidly decrease height before gliding, or maintain a constant angled descent. During gliding, lift plays an increased role. Like drag, lift is proportional to velocity squared. Gliding animals will typically leap or drop from high locations such as trees, just as in parachuting, and as gravitational acceleration increases their speed, the aerodynamic forces also increase. Because the animal can utilize lift and drag to generate greater aerodynamic force, it can glide at a shallower angle than parachuting animals, allowing it to cover greater horizontal distance in the same loss of altitude, and reach trees further away. Successful flights for gliding animals are achieved through 5 steps: preparation, launch, glide, braking, and landing. Gliding species are better able to control themselves mid-air, with the tail acting as a rudder, making it capable to pull off banking movements or U-turns during flight. During landing, arboreal mammals will extend their fore and hind limbs in front of itself to brace for landing and to trap air in order to maximize air resistance and lower impact speed. Powered flight Unlike most air vehicles, in which the objects that generate lift (wings) and thrust (engine or propeller) are separate and the wings remain fixed, flying animals use their wings to generate both lift and thrust by moving them relative to the body. This has made the flight of organisms considerably harder to understand than that of vehicles, as it involves varying speeds, angles, orientations, areas, and flow patterns over the wings. A bird or bat flying through the air at a constant speed moves its wings up and down (usually with some fore-aft movement as well). 
Because the animal is in motion, there is some airflow relative to its body which, combined with the velocity of its wings, generates a faster airflow moving over the wing. This will generate lift force vector pointing forwards and upwards, and a drag force vector pointing rearwards and upwards. The upwards components of these counteract gravity, keeping the body in the air, while the forward component provides thrust to counteract both the drag from the wing and from the body as a whole. Pterosaur flight likely worked in a similar manner, though no living pterosaurs remain for study. Insect flight is considerably different, due to their small size, rigid wings, and other anatomical differences. Turbulence and vortices play a much larger role in insect flight, making it even more complex and difficult to study than the flight of vertebrates. There are two basic aerodynamic models of insect flight. Most insects use a method that creates a spiralling leading edge vortex. Some very small insects use the fling-and-clap or Weis-Fogh mechanism in which the wings clap together above the insect's body and then fling apart. As they fling open, the air gets sucked in and creates a vortex over each wing. This bound vortex then moves across the wing and, in the clap, acts as the starting vortex for the other wing. Circulation and lift are increased, at the price of wear and tear on the wings. Limits and extremes Flying and soaring Largest. The largest known flying animal was formerly thought to be Pteranodon, a pterosaur with a wingspan of up to . However, the more recently discovered azhdarchid pterosaur Quetzalcoatlus is much larger, with estimates of the wingspan ranging from . Some other recently discovered azhdarchid pterosaur species, such as Hatzegopteryx, may have also wingspans of a similar size or even slightly larger. Although it is widely thought that Quetzalcoatlus reached the size limit of a flying animal, the same was once said of Pteranodon. The heaviest living flying animals are the kori bustard and the great bustard with males reaching . The wandering albatross has the greatest wingspan of any living flying animal at . Among living animals which fly over land, the Andean condor and the marabou stork have the largest wingspan at . Studies have shown that it is physically possible for flying animals to reach wingspans, but there is no firm evidence that any flying animal, not even the azhdarchid pterosaurs, got that large. Smallest. There is no minimum size for getting airborne. Indeed, there are many bacteria floating in the atmosphere that constitute part of the aeroplankton. However, to move about under one's own power and not be overly affected by the wind requires a certain amount of size. The smallest flying vertebrates are the bee hummingbird and the bumblebee bat, both of which may weigh less than . They are thought to represent the lower size limit for endotherm flight. The smallest flying invertebrate is a fairyfly wasp species, Kikiki huna, at (150 μm). Fastest. The fastest of all known flying animals is the peregrine falcon, which when diving travels at or faster. The fastest animal in flapping horizontal flight may be the Mexican free-tailed bat, said to attain about based on ground speed by an aircraft tracking device; that measurement does not separate any contribution from wind speed, so the observations could be caused by strong tailwinds. Slowest. Most flying animals need to travel forward to stay aloft. 
However, some creatures can stay in the same spot, known as hovering, either by rapidly flapping the wings, as do hummingbirds, hoverflies, dragonflies, and some others, or carefully using thermals, as do some birds of prey. The slowest flying non-hovering bird recorded is the American woodcock, at . Highest flying. There are records of a Rüppell's vulture Gyps rueppelli, a large vulture, being sucked into a jet engine above Côte d'Ivoire in West Africa. The animal that flies highest most regularly is the bar-headed goose Anser indicus, which migrates directly over the Himalayas between its nesting grounds in Tibet and its winter quarters in India. They are sometimes seen flying well above the peak of Mount Everest at . Gliding and parachuting Most efficient glider. This can be taken as the animal that moves most horizontal distance per metre fallen. Flying squirrels are known to glide up to , but have measured glide ratio of about 2. Flying fish have been observed to glide for hundreds of metres on the drafts on the edge of waves with only their initial leap from the water to provide height, but may be obtaining additional lift from wave motion. On the other hand, albatrosses have measured lift–drag ratios of 20, and thus fall just 1 meter for every 20 in still air. Most maneuverable glider. Many gliding animals have some ability to turn, but which is the most maneuverable is difficult to assess. Even paradise tree snakes, Chinese gliding frogs, and gliding ants have been observed as having considerable capacity to turn in the air. Flying animals Extant Insects Pterygota: The first of all animals to evolve flight, they are also the only invertebrates that have evolved flight. As they comprise almost all insects, the species are too numerous to list here. Insect flight is an active research field. Birds Birds (flying, soaring) – Most of the approximately 10,000 living species can fly (flightless birds are the exception). Bird flight is one of the most studied forms of aerial locomotion in animals. See List of soaring birds for birds that can soar as well as fly. Mammals Bats. There are approximately 1,240 bat species, representing about 20% of all classified mammal species. Most bats are nocturnal and many feed on insects while flying at night, using echolocation to home in on their prey. Extinct Pterosaurs Pterosaurs were the first flying vertebrates, and are generally agreed to have been sophisticated flyers. They had large wings formed by a patagium stretching from the torso to a dramatically lengthened fourth finger. There were hundreds of species, most of which are thought to have been intermittent flappers, and many soarers. The largest known flying animals are pterosaurs. Non-avian dinosaurs Theropods (gliding and flying). There were several species of theropod dinosaur thought to be capable of gliding or flying, that are not classified as birds (though they are closely related). Some species (Microraptor gui, Microraptor zhaoianus, and Changyuraptor) have been found that were fully feathered on all four limbs, giving them four 'wings' that they are believed to have used for gliding or flying. A recent study indicates that flight may have been acquired independently in various different lineages though it may have only evolved in theropods of the Avialae. Gliding animals Extant Insects Gliding bristletails. Directed aerial gliding descent is found in some tropical arboreal bristletails, an ancestrally wingless sister taxa to the winged insects. 
The bristletails' median caudal filament is important for the glide ratio and gliding control. Gliding ants. The flightless workers of these insects have secondarily gained some capacity to move through the air. Gliding has evolved independently in a number of arboreal ant species from the groups Cephalotini, Pseudomyrmecinae, and Formicinae (mostly Camponotus). Gliding is absent from arboreal dolichoderines and from non-cephalotine myrmicines, with the exception of Daceton armigerum. Living in the rainforest canopy like many other gliders, gliding ants use their gliding to return to the trunk of the tree they live on should they fall or be knocked off a branch. Gliding was first discovered for Cephalotes atreus in the Peruvian rainforest. Cephalotes atreus can make 180 degree turns and locate the trunk using visual cues, succeeding in landing 80% of the time. Unique among gliding animals, Cephalotini and Pseudomyrmecinae ants glide abdomen first, whereas the Formicinae glide in the more conventional head-first manner. Gliding immature insects. The wingless immature stages of some insect species that have wings as adults may also show a capacity to glide. These include some species of cockroach, mantis, katydid, stick insect and true bug. Spiders Ballooning spiders (parachuting). The young of some species of spiders travel through the air by using silk draglines to catch the wind, as may some smaller species of adult spider, such as the money spider family. This behavior is commonly known as "ballooning". Ballooning spiders make up part of the aeroplankton. Gliding spiders. Some species of arboreal spider of the genus Selenops can glide back to the trunk of a tree should they fall; these "skydiving" spiders were first documented in South America. Molluscs Flying squid. Several oceanic squids of the family Ommastrephidae, such as the Pacific flying squid, will leap out of the water to escape predators, an adaptation similar to that of flying fish. Smaller squids will fly in shoals, and have been observed to cover distances as long as . Small fins towards the back of the mantle do not produce much lift, but do help stabilize the motion of flight. They exit the water by expelling water out of their funnel; indeed, some squid have been observed to continue jetting water while airborne, providing thrust even after leaving the water. This may make flying squid the only animals with jet-propelled aerial locomotion. The neon flying squid has been observed to glide for distances over , at speeds of up to . Fish Flying fish. There are over 50 species of flying fish belonging to the family Exocoetidae. They are mostly marine fishes of small to medium size. The largest flying fish can reach lengths of but most species measure less than in length. They can be divided into two-winged varieties and four-winged varieties. Before the fish leaves the water it increases its speed to around 30 body lengths per second, and as it breaks the surface and is freed from the drag of the water it can be traveling at around . The glides are usually up to in length, but some have been observed soaring for hundreds of metres using the updraft on the leading edges of waves. The fish can also make a series of glides, each time dipping the tail into the water to produce forward thrust. The longest recorded series of glides, with the fish only periodically dipping its tail in the water, was for 45 seconds. It has been suggested that the genus Exocoetus is on an evolutionary borderline between flight and gliding. 
It flaps its large pectoral fins while gliding, but does not use a power stroke like flying animals. It has been found that some flying fish can glide as effectively as some flying birds. Halfbeaks. A group related to the Exocoetidae, one or two hemirhamphid species possess enlarged pectoral fins and show true gliding flight rather than simple leaps. Marshall (1965) reports that Euleptorhamphus viridis can cover in two separate hops. Trinidadian guppies have been observed exhibiting a gliding response to escape predators. Freshwater butterflyfish (possibly gliding). Pantodon buchholzi has the ability to jump and possibly glide a short distance. It can move through the air several times the length of its body. While it does this, the fish flaps its large pectoral fins, giving it its common name. However, it is debated whether the freshwater butterflyfish can truly glide; Saidel et al. (2004) argue that it cannot. Freshwater hatchetfish. In the wild, they have been observed jumping out of the water and gliding (although reports of them achieving powered flight have been made many times). Amphibians Gliding has evolved independently in two families of tree frogs, the Old World Rhacophoridae and the New World Hylidae. Within each lineage there is a range of gliding abilities, from non-gliding, to parachuting, to full gliding. Rhacophoridae flying frogs. A number of the Rhacophoridae, such as Wallace's flying frog (Rhacophorus nigropalmatus), have adaptations for gliding, the main feature being enlarged toe membranes. For example, the Malayan flying frog Rhacophorus prominanus glides using the membranes between the toes of its limbs, and small membranes located at the heel, the base of the leg, and the forearm. Some of the frogs are quite accomplished gliders; for example, the Chinese flying frog Rhacophorus dennysi can maneuver in the air, making two kinds of turn, either rolling into the turn (a banked turn) or yawing into the turn (a crabbed turn). Hylidae flying frogs. The other frog family that contains gliders. Reptiles Several lizards and snakes are capable of gliding: Draco lizards. There are 28 species of lizard of the genus Draco, found in Sri Lanka, India, and Southeast Asia. They live in trees, feeding on tree ants, but nest on the forest floor. They can glide for up to and over this distance they lose only in height. Unusually, their patagium (gliding membrane) is supported on elongated ribs rather than the more common situation among gliding vertebrates of having the patagium attached to the limbs. When extended, the ribs form a semicircle on either side of the lizard's body and can be folded to the body like a folding fan. Gliding lacertids. There are two species of gliding lacertid, of the genus Holaspis, found in Africa. They have fringed toes and tail edges and can flatten their bodies for gliding or parachuting. Ptychozoon flying geckos. There are six species of gliding gecko, of the genus Ptychozoon, from Southeast Asia. These lizards have small flaps of skin along their limbs, torso, tail, and head that catch the air and enable them to glide. Luperosaurus flying geckos. A possible sister taxon to Ptychozoon which has similar flaps and folds and also glides. Thecadactylus flying geckos. At least some species of Thecadactylus, such as T. rapicauda, are known to glide. Cosymbotus flying gecko. Similar adaptations to Ptychozoon are found in the two species of the gecko genus Cosymbotus. Chrysopelea snakes. 
Five species of snake from Southeast Asia, Melanesia, and India. The paradise tree snake of southern Thailand, Malaysia, Borneo, the Philippines, and Sulawesi is the most capable glider of those snakes studied. It glides by stretching out its body sideways and opening its ribs so the belly is concave, and by making lateral slithering movements. Remarkably, it can glide up to and make 90 degree turns. Mammals Bats are the only freely flying mammals. A few other mammals can glide or parachute; the best known are flying squirrels and flying lemurs. Flying squirrels (subfamily Petauristinae). There are more than 40 living species divided among 14 genera of flying squirrel. Flying squirrels are found in Asia (most species), North America (genus Glaucomys) and Europe (Siberian flying squirrel). They inhabit tropical, temperate, and subarctic environments, with Glaucomys preferring boreal and montane coniferous forests and often using red spruce (Picea rubens) trees as landing sites; they are known to rapidly climb trees, but take some time to locate a good landing spot. They tend to be nocturnal and are highly sensitive to light and noise. When a flying squirrel wishes to cross to a tree that is further away than the distance possible by jumping, it extends the cartilage spur on its elbow or wrist. This opens out the flap of furry skin (the patagium) that stretches from its wrist to its ankle. It glides spread-eagled and with its tail fluffed out like a parachute, and grips the tree with its claws when it lands. Flying squirrels have been reported to glide over . Anomalures or scaly-tailed flying squirrels (family Anomaluridae). These brightly coloured African rodents are not squirrels but have evolved to resemble flying squirrels by convergent evolution. There are seven species, divided into three genera. All but one species have gliding membranes between their front and hind legs. The genus Idiurus contains two particularly small species known as flying mice, but similarly they are not true mice. Colugos or "flying lemurs" (order Dermoptera). There are two species of colugo. Despite their common name, colugos are not lemurs; true lemurs are primates. Molecular evidence suggests that colugos are a sister group to primates; however, some mammalogists suggest they are a sister group to bats. Found in Southeast Asia, the colugo is probably the mammal most adapted for gliding, with a patagium that is as large as geometrically possible. They can glide as far as with minimal loss of height. They have the most developed propatagium of any gliding mammal, with a mean launch velocity of approximately 3.7 m/s; the Malayan colugo has been known to initiate glides without jumping. Sifaka, a type of lemur, and possibly some other primates (possible limited gliding or parachuting). A number of primates have been suggested to have adaptations that allow limited gliding or parachuting: sifakas, indris, galagos and saki monkeys. Most notably, the sifaka, a type of lemur, has thick hairs on its forearms that have been argued to provide drag, and a small membrane under its arms that has been suggested to provide lift by having aerofoil properties. Flying phalangers or wrist-winged gliders (subfamily Petaurinae). Possums found in Australia and New Guinea. The gliding membranes are hardly noticeable until they jump. On jumping, the animal extends all four legs and stretches the loose folds of skin. The subfamily contains seven species. 
Of the six species in the genus Petaurus, the sugar glider and the Biak glider are the most common species. The lone species in the genus Gymnobelideus, Leadbeater's possum, has only a vestigial gliding membrane. Greater glider (Petauroides volans). The only species of the genus Petauroides of the family Pseudocheiridae. This marsupial is found in Australia, and was originally classed with the flying phalangers, but is now recognised as separate. Its flying membrane only extends to the elbow, rather than to the wrist as in Petaurinae. It has elongated limbs compared to its non-gliding relatives. Feather-tailed possums (family Acrobatidae). This family of marsupials contains two genera, each with one species. The feathertail glider (Acrobates pygmaeus), found in Australia, is the size of a very small mouse and is the smallest mammalian glider. The feathertail possum (Distoechurus pennatus) is found in New Guinea, but does not glide. Both species have a stiff-haired feather-like tail. Extinct Reptiles Extinct reptiles similar to Draco. There are a number of unrelated extinct lizard-like reptiles with similar "wings" to the Draco lizards. These include the Late Permian Weigeltisauridae, the Triassic Kuehneosauridae and Mecistotrachelos, and the Cretaceous lizard Xianglong. The largest of these, Kuehneosaurus, has a wingspan of , and was estimated to be able to glide about . Sharovipterygidae. These strange reptiles from the Upper Triassic of Kyrgyzstan and Poland unusually had a membrane on their elongated hind limbs, extending their otherwise normal, flying-squirrel-like patagia significantly. The forelimbs are, in contrast, much smaller. Hypuronector. This bizarre drepanosaur displays limb proportions, particularly the elongated forelimbs, that are consistent with a flying or gliding animal with patagia. Non-avian dinosaurs Scansoriopterygidae is unique among dinosaurs for the development of membranous wings, unlike the feathered airfoils of other theropods. Much like modern anomalures it developed a bony rod to help support the wing, albeit on the wrist and not the elbow. Fish Thoracopteridae is a lineage of Triassic flying fish-like Perleidiformes, having converted their pectoral and pelvic fins into broad wings very similar to those of their modern counterparts. The Ladinian genus Potanichthys is the oldest member of this clade, suggesting that these fish began exploring aerial niches soon after the Permian-Triassic extinction event. Mammals Volaticotherium antiquum. A gliding eutriconodont, long considered the earliest gliding mammal until the discovery of contemporary gliding haramiyidans. It lived around 164 to 165 million years ago, during the Middle-Late Jurassic of what is now China, and used a fur-covered skin membrane to glide through the air. The closely related Argentoconodon is also thought to have been able to glide, based on postcranial similarities. The haramiyidans Vilevolodon, Xianshou, Maiopatagium and Arboroharamiya, known from the Middle-Late Jurassic of China, had extensive patagia, highly convergent with those of colugos. A gliding metatherian (possibly a marsupial) is known from the Paleocene of Itaboraí, Brazil. A gliding rodent belonging to the extinct family Eomyidae, Eomys quercyi, is known from the late Oligocene of Germany.
Biology and health sciences
Ethology
Biology
30860031
https://en.wikipedia.org/wiki/Digital%20multimedia%20broadcasting
Digital multimedia broadcasting
Digital multimedia broadcasting (DMB) is a digital radio transmission technology developed in South Korea as part of the national IT project for sending multimedia such as TV, radio and datacasting to mobile devices such as mobile phones, laptops and GPS navigation systems. This technology, sometimes known as mobile TV, should not be confused with Digital Audio Broadcasting (DAB), which was developed as a research project for the European Union. DMB was developed in South Korea as the next-generation digital technology to replace FM radio, but the technological foundations were laid by Prof. Dr. Gert Siegle and Dr. Hamed Amor at Bosch in Germany. The world's first official mobile TV service started in South Korea in May 2005, although trials were available much earlier. It can operate via satellite (S-DMB) or terrestrial (T-DMB) transmission. DMB also has some similarities with its former competing mobile TV standard, DVB-H. S-DMB T-DMB T-DMB is made for terrestrial transmissions on Band III (VHF) and L-band (UHF) frequencies. DMB is unavailable in the United States because those frequencies are allocated for television broadcasting (VHF channels 7 to 13) and military applications. The USA adopted ATSC-M/H for free broadcasts to mobile devices and, for a time, Qualcomm's proprietary MediaFLO system. In Japan, 1seg is the standard, using ISDB. T-DMB uses MPEG-4 Part 10 (H.264) for the video and MPEG-4 Part 3 BSAC or HE-AAC v2 for the audio. The audio and video are encapsulated in an MPEG transport stream (MPEG-TS). The stream is forward error corrected by Reed-Solomon encoding, with a parity word 16 bytes long. Convolutional interleaving is then applied to this stream, and the stream is broadcast in data stream mode on DAB. In order to diminish channel effects such as fading and shadowing, the DMB modem uses OFDM-DQPSK modulation. A single-chip T-DMB receiver also incorporates an MPEG transport stream demultiplexer. DMB can be received on several kinds of device, such as mobile phones, portable TVs, PDAs and telematics devices for automobiles. T-DMB is an ETSI standard (TS 102 427 and TS 102 428). On December 14, 2007, the ITU formally approved T-DMB as a global standard, along with three other standards: DVB-H, 1seg, and MediaFLO. Smart DMB Smart DMB started in January 2013 in South Korea. Smart DMB has a VOD service, and quality has been improved from 240p to 480p. Smart DMB is built into many Korean smartphones, starting with the Galaxy Grand in January 2013. HD DMB HD DMB started in August 2016 in South Korea. HD DMB has been improved from 240p to 720p. It uses the HEVC (H.265) codec. There are currently 6 HD DMB stations in Seoul. Smartphones with a Qualcomm Snapdragon 801 or higher received a firmware upgrade to support HD DMB. Countries using DMB Currently, DMB is being put into use in a number of countries, although it is mainly used in South Korea. Also see the list of countries using DAB/DMB. South Korea In 2005, South Korea became the world's first country to start S-DMB and T-DMB service, on May 1 and December 1, respectively. As of December 2006, T-DMB service in South Korea consisted of 7 TV channels, 12 radio channels, and 8 data channels. These are broadcast on six multiplexes in the VHF band on TV channels 8 and 12 (6 MHz raster). In October 2007, South Korea added the broadcasting channel MBCNET to the DMB lineup, but this channel was changed in 2010. In 2009 there were eight DMB video channels in Seoul, and six in other metropolitan cities. 
As of April 2013, S-DMB service in South Korea consists of 15 TV channels, 2 radio channels and 6 data channels. South Korea has had full T-DMB services, including JSS (JPEG Slide Show), DLS (Dynamic Label Segment), BWS, and TPEG, since 2006. S-DMB service in South Korea is provided on a subscription basis through TU Media and is accessible throughout the country. T-DMB service is provided free of charge, but access is limited to selected regions. Around one million receivers have been sold. In total, 14 million DMB receivers, including T-DMB and S-DMB, were sold in South Korea, and 40% of new cell phones have the capability to receive DMB. Receivers are integrated in car navigation systems, mobile phones, portable media players, laptop computers and digital cameras. In mid-August 2007, Iriver, a multimedia and micro-technology company, released their "NV", which utilizes South Korea's DMB service. Since the advent of smartphones, DMB has been made available on phones with receivers through smartphone applications, most of which come pre-installed in phones made and sold in South Korea. Other countries Some T-DMB trials are currently available or planned around Europe and other countries: In Norway, T-DMB services were available between May 2009 and January 2018. The MiniTV DMB service was launched by the Norwegian Mobile TV Corporation (NMTV) and was backed by the three largest broadcasters in Norway: the public broadcaster NRK, TV2 and Modern Times Group (MTG). The live channels were available in and around Greater Oslo. Germany's Mobiles Fernsehen Deutschland (MFD) launched the commercial T-DMB service "Watcha" in June 2006, in time for the World Cup 2006, marketed together with Samsung's P900 DMB Phone, the first DMB phone in Europe. It was stopped in April 2008, as MFD by then favoured DVB-H, the European standard. France in December 2007 chose T-DMB Audio in VHF Band III and L band as the national standard for terrestrial digital radio. It was later replaced by DAB+. China in 2006 chose DAB as an industrial standard. Since 2007, DAB and T-DMB services have broadcast in Beijing, Guangdong, Henan, Dalian, Yunnan, Liaoning, Hunan, Zhejiang, Anhui, and Shenzhen. In Mexico, most cell phone carriers offer DMB broadcasting as part of their basic plans. As of 2008 the vast majority of Mexico receives DMB signals. Ghana has been running a T-DMB service in Accra and Kumasi on a mobile network since May 2008. Netherlands: MFD, T-Systems and private investors are planning a DMB service under the name Mobiel TV Nederland. Callmax will also deploy a DMB service on the L-band frequency in the Netherlands. Indonesia is currently running a trial in Jakarta. Italy and Vatican City: RAI and Vatican Radio are currently running a trial in some areas. Canada has been running trials since 2006 in Ottawa, Toronto, Vancouver and Montreal, done by CBC/Radio-Canada. Malaysia has been running trials since 2008 in KL, done by TV3/MPB. Initially, the government was committed to deploying DVB-T for government-owned channels; however, as of December 2009, RTM1 and 2, as well as all the radio channels, are available over Band III via DMB-T in addition to DVB-T. Additionally, the TV3 DMB signal has moved to L band. The TV3 DMB signals are still limited to the Damansara and Kuala Lumpur area, while the government-owned DMB-T signals have wider coverage and apparently cover most of the Klang Valley area. The government transmissions are part of a two-year trial that also involves the DAB and DAB+ digital radio standards. 
In August 2010, Cambodia chose T-DMB as the national standard for terrestrial digital broadcasting. TVK is currently running a trial. DMB in automobiles T-DMB works flawlessly in vehicles traveling at up to 300 km/h. In tunnels or underground areas, both television and radio broadcasts are still available, though DMB may skip occasionally. In South Korea, some long-distance buses adopted T-DMB instead of satellite TV such as Sky TV. It works quite well even though the resolution is 240p, lower than satellite broadcasts, which are usually 480p or higher.
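As a back-of-the-envelope illustration of the outer Reed-Solomon coding described in the T-DMB section above (a sketch assuming the standard 188-byte MPEG transport stream packet; the bitrate figure is an arbitrary example, not a T-DMB parameter):

```python
# T-DMB outer code: each 188-byte MPEG-TS packet carries a 16-byte
# Reed-Solomon parity word, giving a 204-byte coded packet.
TS_PACKET_BYTES = 188   # payload: one MPEG transport stream packet
RS_PARITY_BYTES = 16    # Reed-Solomon parity appended per packet

coded_packet = TS_PACKET_BYTES + RS_PARITY_BYTES
code_rate = TS_PACKET_BYTES / coded_packet
correctable = RS_PARITY_BYTES // 2   # an RS code corrects up to parity/2 byte errors
print(f"coded packet: {coded_packet} bytes, code rate: {code_rate:.3f}, "
      f"correctable byte errors per packet: {correctable}")

# Arbitrary example: payload left after outer coding for a 600 kbit/s coded stream.
coded_kbps = 600
print(f"payload after outer coding: {coded_kbps * code_rate:.0f} kbit/s")
```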
Technology
Broadcasting
null
30860428
https://en.wikipedia.org/wiki/Coffeemaker
Coffeemaker
A coffeemaker, coffee maker or coffee machine is a cooking appliance used to brew coffee. While there are many different types of coffeemakers, the two most common brewing principles use gravity or pressure to move hot water through coffee grounds. In the most common devices, coffee grounds are placed into a paper or metal filter inside a funnel, which is set over a glass or ceramic coffee pot, a cooking pot in the kettle family. Cold water is poured into a separate chamber, which is then boiled and directed into the funnel and allowed to drip through the grounds under gravity. This is also called automatic drip-brew. Coffee makers that use pressure to force water through the coffee grounds are called espresso makers, and they produce espresso coffee. Types Vacuum brewers On 27 August 1930, Inez H. Peirce of Chicago, Illinois, filed her patent for the first vacuum coffee maker that truly automated the vacuum brewing process, while eliminating the need for a stovetop burner or liquid fuels. Cafetiere A cafetiere (coffee plunger, French press in US English) requires coffee of a coarser grind than does a drip brew coffee filter, as finer grounds will seep through the press filter and into the drink. Because the coffee grounds remain in direct contact with the brewing water and the grounds are filtered from the water via a mesh instead of a paper filter, coffee brewed with the cafetiere captures more of the coffee's flavour and essential oils, which would become trapped in a traditional drip brew machine's paper filters. As with drip-brewed coffee, cafetiere coffee can be brewed to any strength by adjusting the amount of ground coffee which is brewed. If the used grounds remain in the drink after brewing, French pressed coffee left to stand can become "bitter", though this is an effect that many users of cafetiere consider beneficial. For a cafetiere, the contents are considered spoiled, by some reports, after around 20 minutes. Single-serve coffeemaker The single-serve or single-cup coffeemaker had gained popularity by the 2000s.
Technology
Household appliances
null
44572867
https://en.wikipedia.org/wiki/Interconnector
Interconnector
An interconnector (also known as a DC tie in the USA) is a structure which enables high voltage DC electricity to flow between electrical grids. An electrical interconnector allows electricity to flow between separate AC networks, or to link synchronous grids. They can be formed of submarine power cables or underground power cables or overhead power lines. The longest interconnection as of July 2022 was the 2,210 km Hami - Zhengzhou delivering 8 GW of high voltage direct current power. The longest proposed connector is the 3,800 km, 3.6 GW Xlinks Morocco-UK Power Project. Economy Interconnectors allow the trading of electricity between territories. For example, the East–West Interconnector allows the trading of electricity between Great Britain and Ireland. A territory which generates more energy than it requires for its own activities can therefore sell surplus energy to a neighbouring territory. Interconnectors also provide increased resilience. Within the European Union there is a movement towards a single market for energy, which makes interconnectors viable. They are essential to achieve security of supply. As such, the Nordic and Baltic energy exchange Nord Pool Spot rely on multiple interconnectors. The fullest possible implementation of this is the proposed European super grid which would include numerous interconnectors between national networks. Interconnectors are used to increase the security of the energy supply and to manage peak demand. They enable cross-border access to the producers and consumers of electricity, thus increasing the competition in energy markets. They also help integrate more electricity generated from renewable sources, thus reducing the use of fossil fuel power plants and emissions. Interconnectors aid adaptation to changing demand patterns such as the uptake of electric vehicles. Infrastructure Interconnectors may run across a land border or connect two land areas separated by water. As of July 2022 there are at least 35 international connectors and many more intra-national connectors, see high-voltage direct-current (HVDC) projects.
Technology
Electricity transmission and distribution
null
34930586
https://en.wikipedia.org/wiki/Genome%20editing
Genome editing
Genome editing, or genome engineering, or gene editing, is a type of genetic engineering in which DNA is inserted, deleted, modified or replaced in the genome of a living organism. Unlike early genetic engineering techniques that randomly insert genetic material into a host genome, genome editing targets the insertions to site-specific locations. The basic mechanism of genetic manipulation with programmable nucleases involves recognition of the target genomic locus and binding by an effector DNA-binding domain (DBD), creation of double-strand breaks (DSBs) in the target DNA by restriction endonucleases (FokI and Cas), and repair of the DSBs through homology-directed recombination (HDR) or non-homologous end joining (NHEJ). History Genome editing was pioneered in the 1990s, before the advent of the common current nuclease-based gene editing platforms, but its use was limited by low efficiencies of editing. Genome editing with engineered nucleases, i.e. the three major classes of these enzymes (zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs) and engineered meganucleases), was selected by Nature Methods as the 2011 Method of the Year. The CRISPR-Cas system was selected by Science as the 2015 Breakthrough of the Year. Four families of engineered nucleases are used: meganucleases, zinc finger nucleases (ZFNs), transcription activator-like effector-based nucleases (TALEN), and the clustered regularly interspaced short palindromic repeats (CRISPR/Cas9) system. Nine genome editors were available. In 2018, the common methods for such editing used engineered nucleases, or "molecular scissors". These nucleases create site-specific double-strand breaks (DSBs) at desired locations in the genome. The induced double-strand breaks are repaired through nonhomologous end-joining (NHEJ) or homologous recombination (HR), resulting in targeted mutations ('edits'). In May 2019, lawyers in China reported, in light of the purported creation by Chinese scientist He Jiankui of the first gene-edited humans (see Lulu and Nana controversy), the drafting of regulations under which anyone manipulating the human genome by gene-editing techniques, like CRISPR, would be held responsible for any related adverse consequences. A cautionary perspective on the possible blind spots and risks of CRISPR and related biotechnologies has recently been discussed, focusing on the stochastic nature of cellular control processes. The University of Edinburgh's Roslin Institute engineered pigs resistant to a virus that causes porcine reproductive and respiratory syndrome, which costs US and European pig farmers $2.6 billion annually. In February 2020, a US trial showed that CRISPR gene editing could be carried out safely in 3 cancer patients. In 2020, Sicilian Rouge High GABA, a tomato that makes more of an amino acid said to promote relaxation, was approved for sale in Japan. In 2021, England (not the rest of the UK) planned to remove restrictions on gene-edited plants and animals, moving from European Union-compliant regulation to rules closer to those of the US and some other countries. An April 2021 European Commission report found "strong indications" that the current regulatory regime was not appropriate for gene editing. Later in 2021, researchers announced a CRISPR alternative labeled obligate mobile element-guided activity (OMEGA), with proteins including IscB, IsrB and TnpB acting as endonucleases; these are found in transposons and are guided by small ωRNAs. 
Background Genetic engineering as a method of introducing new genetic elements into organisms has been around since the 1970s. One drawback of this technology has been the random nature with which the DNA is inserted into the host's genome, which can impair or alter other genes within the organism. However, several methods have been developed that target the inserted genes to specific sites within an organism's genome. This has also enabled the editing of specific sequences within a genome, as well as reducing off-target effects. This could be used for research purposes, by targeting mutations to specific genes, and in gene therapy. By inserting a functional gene into an organism and targeting it to replace the defective one, it could be possible to cure certain genetic diseases. Gene targeting Homologous recombination Early methods to target genes to certain sites within the genome of an organism (called gene targeting) relied on homologous recombination (HR). By creating DNA constructs that contain a template matching the targeted genome sequence, it is possible that the HR processes within the cell will insert the construct at the desired location. Using this method on embryonic stem cells led to the development of transgenic mice with targeted genes knocked out. It has also been possible to knock in genes or alter gene expression patterns. In recognition of their discovery of how homologous recombination can be used to introduce genetic modifications in mice through embryonic stem cells, Mario Capecchi, Martin Evans and Oliver Smithies were awarded the 2007 Nobel Prize for Physiology or Medicine. Conditional targeting If a vital gene is knocked out it can prove lethal to the organism. In order to study the function of such genes, site-specific recombinases (SSRs) are used. The two most common types are the Cre-loxP and Flp-FRT systems. Cre recombinase is an enzyme that removes DNA by recombination between binding sequences known as loxP sites. The Flp-FRT system operates in a similar way, with the Flp recombinase recognising FRT sequences. By crossing an organism containing the recombinase sites flanking the gene of interest with an organism that expresses the SSR under the control of tissue-specific promoters, it is possible to knock out or switch on genes only in certain cells. These techniques were also used to remove marker genes from transgenic animals. Further modifications of these systems allowed researchers to induce recombination only under certain conditions, allowing genes to be knocked out or expressed at desired times or stages of development. Process Double strand break repair A common form of genome editing relies on the mechanics of DNA double-strand break (DSB) repair. There are two major pathways that repair DSBs: non-homologous end joining (NHEJ) and homology-directed repair (HDR). NHEJ uses a variety of enzymes to directly join the DNA ends, while the more accurate HDR uses a homologous sequence as a template for regeneration of missing DNA sequences at the break point. This can be exploited by creating a vector with the desired genetic elements within a sequence that is homologous to the flanking sequences of a DSB. This will result in the desired change being inserted at the site of the DSB. While HDR-based gene editing is similar to homologous recombination-based gene targeting, the rate of recombination is increased by at least three orders of magnitude. 
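The logic of HDR-based editing described above can be sketched in code: a donor construct is simply the desired edit flanked by homology arms copied from the sequence on either side of the intended break. Everything in the sketch below (sequences, arm length, function name) is invented for illustration; it is a conceptual aid, not a protocol.

```python
# Conceptual sketch: an HDR donor is the desired insert flanked by homology
# arms copied from the genomic sequence around the cut site.
def make_hdr_donor(genomic_seq: str, cut_index: int, insert: str, arm_len: int = 40) -> str:
    left_arm = genomic_seq[cut_index - arm_len:cut_index]
    right_arm = genomic_seq[cut_index:cut_index + arm_len]
    return left_arm + insert + right_arm

# Invented placeholder "genomic" sequence and edit, purely for illustration.
genome = "ACGT" * 40                      # 160 bp of placeholder DNA
donor = make_hdr_donor(genome, cut_index=80, insert="GATTACA", arm_len=30)
print(len(donor), donor[:40] + "...")
```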
Engineered nucleases The key to genome editing is creating a DSB at a specific point within the genome. Commonly used restriction enzymes are effective at cutting DNA, but generally recognize and cut at multiple sites. To overcome this challenge and create site-specific DSBs, four distinct classes of nucleases have been discovered and bioengineered to date. These are the zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), meganucleases and the clustered regularly interspaced short palindromic repeats (CRISPR/Cas9) system. Meganucleases Meganucleases, discovered in the late 1980s, are enzymes in the endonuclease family which are characterized by their capacity to recognize and cut large DNA sequences (from 14 to 40 base pairs). The most widespread and best known meganucleases are the proteins in the LAGLIDADG family, which owe their name to a conserved amino acid sequence. Meganucleases, found commonly in microbial species, have the unique property of having very long recognition sequences (>14 bp), making them naturally very specific. However, there is virtually no chance of finding the exact meganuclease required to act on a chosen specific DNA sequence. To overcome this challenge, mutagenesis and high-throughput screening methods have been used to create meganuclease variants that recognize unique sequences. Others have been able to fuse various meganucleases and create hybrid enzymes that recognize a new sequence. Yet others have attempted to alter the DNA-interacting amino acids of the meganuclease to design sequence-specific meganucleases, in an approach known as rationally designed meganucleases. Another approach involves using computer models to try to predict as accurately as possible the activity of the modified meganucleases and the specificity of the recognized nucleic acid sequence. A large bank containing several tens of thousands of protein units has been created. These units can be combined to obtain chimeric meganucleases that recognize the target site, thereby providing research and development tools that meet a wide range of needs (fundamental research, health, agriculture, industry, energy, etc.). These include the industrial-scale production of two meganucleases able to cleave the human XPC gene; mutations in this gene result in xeroderma pigmentosum, a severe monogenic disorder that predisposes patients to skin cancer and burns whenever their skin is exposed to UV rays. Meganucleases have the benefit of causing less toxicity in cells than methods such as zinc finger nucleases (ZFNs), likely because of more stringent DNA sequence recognition; however, the construction of sequence-specific enzymes for all possible sequences is costly and time-consuming, as one does not benefit from the combinatorial possibilities that methods such as ZFNs and TALEN-based fusions utilize. Zinc finger nucleases As opposed to meganucleases, the concept behind ZFN and TALEN technology is based on a non-specific DNA-cutting catalytic domain, which can then be linked to specific DNA sequence-recognizing peptides such as zinc fingers and transcription activator-like effectors (TALEs). The first step to this was to find an endonuclease whose DNA recognition site and cleaving site were separate from each other, a situation that is not the most common among restriction enzymes. Once this enzyme was found, its cleaving portion could be separated, which would be very non-specific as it would have no recognition ability. 
This portion could then be linked to sequence recognizing peptides that could lead to very high specificity. Zinc finger motifs occur in several transcription factors. The zinc ion, found in 8% of all human proteins, plays an important role in the organization of their three-dimensional structure. In transcription factors, it is most often located at the protein-DNA interaction sites, where it stabilizes the motif. The C-terminal part of each finger is responsible for the specific recognition of the DNA sequence. The recognized sequences are short, made up of around 3 base pairs, but by combining 6 to 8 zinc fingers whose recognition sites have been characterized, it is possible to obtain specific proteins for sequences of around 20 base pairs. It is therefore possible to control the expression of a specific gene. It has been demonstrated that this strategy can be used to promote a process of angiogenesis in animals. It is also possible to fuse a protein constructed in this way with the catalytic domain of an endonuclease in order to induce a targeted DNA break, and therefore to use these proteins as genome engineering tools. The method generally adopted for this involves associating two DNA binding proteins – each containing 3 to 6 specifically chosen zinc fingers – with the catalytic domain of the FokI endonuclease which need to dimerize to cleave the double-strand DNA. The two proteins recognize two DNA sequences that are a few nucleotides apart. Linking the two zinc finger proteins to their respective sequences brings the two FokI domains closer together. FokI requires dimerization to have nuclease activity and this means the specificity increases dramatically as each nuclease partner would recognize a unique DNA sequence. To enhance this effect, FokI nucleases have been engineered that can only function as heterodimers. Several approaches are used to design specific zinc finger nucleases for the chosen sequences. The most widespread involves combining zinc-finger units with known specificities (modular assembly). Various selection techniques, using bacteria, yeast or mammal cells have been developed to identify the combinations that offer the best specificity and the best cell tolerance. Although the direct genome-wide characterization of zinc finger nuclease activity has not been reported, an assay that measures the total number of double-strand DNA breaks in cells found that only one to two such breaks occur above background in cells treated with zinc finger nucleases with a 24 bp composite recognition site and obligate heterodimer FokI nuclease domains. The heterodimer functioning nucleases would avoid the possibility of unwanted homodimer activity and thus increase specificity of the DSB. Although the nuclease portions of both ZFNs and TALEN constructs have similar properties, the difference between these engineered nucleases is in their DNA recognition peptide. ZFNs rely on Cys2-His2 zinc fingers and TALEN constructs on TALEs. Both of these DNA recognizing peptide domains have the characteristic that they are naturally found in combinations in their proteins. Cys2-His2 Zinc fingers typically happen in repeats that are 3 bp apart and are found in diverse combinations in a variety of nucleic acid interacting proteins such as transcription factors. Each finger of the Zinc finger domain is completely independent and the binding capacity of one finger is impacted by its neighbor. 
TALEs, on the other hand, are found in repeats with a one-to-one recognition ratio between the amino acids and the recognized nucleotide pairs. Because both zinc fingers and TALEs occur in repeated patterns, different combinations can be tried to create a wide variety of sequence specificities. Zinc fingers are more established in these terms, and approaches such as modular assembly (where zinc fingers correlated with a triplet sequence are attached in a row to cover the required sequence), OPEN (low-stringency selection of peptide domains against triplet nucleotides followed by high-stringency selection of peptide combinations against the final target in bacterial systems), and bacterial one-hybrid screening of zinc finger libraries, among other methods, have been used to make site-specific nucleases. Zinc finger nucleases are research and development tools that have already been used to modify a range of genomes, in particular by the laboratories in the Zinc Finger Consortium. The US company Sangamo BioSciences uses zinc finger nucleases to carry out research into the genetic engineering of stem cells and the modification of immune cells for therapeutic purposes. Modified T lymphocytes are currently undergoing phase I clinical trials to treat a type of brain tumor (glioblastoma) and in the fight against AIDS. TALEN Transcription activator-like effector nucleases (TALENs) are specific DNA-binding proteins that feature an array of 33- or 34-amino-acid repeats. TALENs are artificial restriction enzymes designed by fusing the DNA-cutting domain of a nuclease to TALE domains, which can be tailored to specifically recognize a unique DNA sequence. These fusion proteins serve as readily targetable "DNA scissors" for gene editing applications, enabling targeted genome modifications such as sequence insertion, deletion, repair and replacement in living cells. The DNA-binding domains, which can be designed to bind any desired DNA sequence, come from TAL effectors, DNA-binding proteins secreted by plant-pathogenic Xanthomonas spp. TAL effectors consist of repeated domains, each of which contains a highly conserved sequence of 34 amino acids and recognizes a single DNA nucleotide within the target site. The nuclease can create double-strand breaks at the target site that can be repaired by error-prone non-homologous end-joining (NHEJ), resulting in gene disruptions through the introduction of small insertions or deletions. Each repeat is conserved, with the exception of the so-called repeat variable di-residues (RVDs) at amino acid positions 12 and 13. The RVDs determine the DNA sequence to which the TALE will bind. This simple one-to-one correspondence between the TALE repeats and the corresponding DNA sequence makes the process of assembling repeat arrays to recognize novel DNA sequences straightforward. These TALEs can be fused to the catalytic domain from a DNA nuclease, FokI, to generate a transcription activator-like effector nuclease (TALEN). The resultant TALEN constructs combine specificity and activity, effectively generating engineered sequence-specific nucleases that bind and cleave DNA sequences only at pre-selected sites. The TALEN target recognition system is based on an easy-to-predict code. TAL nucleases are specific to their target due in part to the length of their 30+ base pair binding site. TALEN-mediated cleavage can be targeted to within 6 base pairs of any single nucleotide in the entire genome. 
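Because the RVD code described above maps TALE repeats to nucleotides essentially one-to-one, choosing a repeat array for a new target is close to a lookup exercise. The sketch below uses the commonly cited RVD assignments (NI for A, HD for C, NG for T, NN for G); the target sequence is invented, and real designs involve further constraints (for example, natural TALE binding sites are typically preceded by a T).

```python
# Commonly cited repeat-variable di-residue (RVD) code for TALE repeats.
RVD_FOR_BASE = {"A": "NI", "C": "HD", "T": "NG", "G": "NN"}

def tale_repeat_array(target_site: str):
    """Return one RVD per nucleotide of the target site (5' to 3')."""
    return [RVD_FOR_BASE[base] for base in target_site.upper()]

# Invented 18-nt target, purely for illustration.
print(tale_repeat_array("TCCAGGATGCTTACGGAA"))
```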
TALEN constructs are used in a similar way to designed zinc finger nucleases, and have three advantages in targeted mutagenesis: (1) DNA binding specificity is higher, (2) off-target effects are lower, and (3) construction of DNA-binding domains is easier. CRISPR CRISPRs (Clustered Regularly Interspaced Short Palindromic Repeats) are genetic elements that bacteria use as a kind of acquired immunity to protect against viruses. They consist of short sequences that originate from viral genomes and have been incorporated into the bacterial genome. Cas (CRISPR-associated proteins) process these sequences and cut matching viral DNA sequences. By introducing plasmids containing Cas genes and specifically constructed CRISPRs into eukaryotic cells, the eukaryotic genome can be cut at any desired position. Editing by nucleobase modification (Base editing) One of the earliest methods for efficiently editing nucleic acids, employing nucleobase-modifying enzymes directed by nucleic acid guide sequences, was first described in the 1990s and has seen a resurgence more recently. This method has the advantage that it does not require breaking the genomic DNA strands, and thus avoids the random insertions and deletions associated with DNA strand breakage. It is only appropriate for precise editing requiring single nucleotide changes and has been found to be highly efficient for this type of editing. ARCUT ARCUT stands for artificial restriction DNA cutter; it is a technique developed by Komiyama. This method uses pseudo-complementary peptide nucleic acid (pcPNA) to identify the cleavage site within the chromosome. Once pcPNA specifies the site, excision is carried out by a chemical mixture of cerium (Ce) and EDTA, which performs the splicing function. Precision and efficiency of engineered nucleases The meganuclease method of gene editing is the least efficient of the methods mentioned above. Due to the nature of its DNA-binding element and the cleaving element, it is limited to recognizing one potential target every 1,000 nucleotides. ZFN was developed to overcome the limitations of meganucleases. The number of possible targets ZFN can recognize was increased to one in every 140 nucleotides. However, both methods are unpredictable because their DNA-binding elements affect each other. As a result, high degrees of expertise and lengthy and costly validation processes are required. TALE nucleases, being the most precise and specific method, yield higher efficiency than the previous two methods. They achieve such efficiency because the DNA-binding element consists of an array of TALE subunits, each of which is capable of recognizing a specific DNA nucleotide independently of the others, resulting in a higher number of target sites with high precision. New TALE nucleases take about one week and a few hundred dollars to create, with specific expertise in molecular biology and protein engineering. CRISPR nucleases have a slightly lower precision when compared to the TALE nucleases. This is caused by the need for a specific nucleotide motif adjacent to the target site, which constrains where the guide RNA can direct the nuclease to induce a double-strand break. It has been shown to be the quickest and cheapest method, costing less than two hundred dollars and a few days of time. CRISPR also requires the least amount of expertise in molecular biology, as the design lies in the guide RNA instead of the proteins.
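To illustrate why recognition-site length matters so much for precision, a rough back-of-the-envelope estimate can be made: in a random genome of length G, a specific n-base site is expected to occur about G/4^n times. The short C++ sketch below applies this to a roughly human-sized genome for a single zinc finger array (about 9 bp), a ZFN pair (about 18 bp) and the 24 bp composite site mentioned earlier; the numbers are illustrative assumptions only, since real genomes are not random and real nucleases tolerate some mismatches.

#include <cmath>
#include <cstdio>

int main() {
    const double genome_size = 3.2e9;        // approximate human genome length in bp (assumption)
    const int site_lengths[] = {9, 18, 24};  // single ZFN array, ZFN pair, composite site

    for (int n : site_lengths) {
        // Expected number of exact matches to one specific n-base site
        // in a random sequence of genome_size bases: G / 4^n.
        double expected = genome_size / std::pow(4.0, n);
        std::printf("%2d bp site: ~%g expected exact matches\n", n, expected);
    }
    return 0;
}

Under these assumptions, a 9 bp site is expected to occur thousands of times by chance, while an 18 or 24 bp composite site is expected to occur far less than once, which is one way of seeing why requiring FokI dimerization on paired half-sites increases specificity so dramatically.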
One major advantage that CRISPR has over the ZFN and TALEN methods is that it can be directed to target different DNA sequences using its ~80 nt CRISPR sgRNAs, while both ZFN and TALEN methods require construction and testing of the proteins created for targeting each DNA sequence. Because off-target activity of an active nuclease would have potentially dangerous consequences at the genetic and organismal levels, the precision of meganucleases, ZFNs, CRISPR, and TALEN-based fusions has been an active area of research. While variable figures have been reported, ZFNs tend to have more cytotoxicity than TALEN methods or RNA-guided nucleases, while TALEN and RNA-guided approaches tend to have the greatest efficiency and fewer off-target effects. Based on the maximum theoretical distance between DNA binding and nuclease activity, TALEN approaches result in the greatest precision. Multiplex Automated Genomic Engineering (MAGE) The methods available to scientists and researchers wanting to study genomic diversity and all possible associated phenotypes were very slow, expensive, and inefficient. Prior to this new revolution, researchers would have to do single-gene manipulations and tweak the genome one little section at a time, observe the phenotype, and start the process over with a different single-gene manipulation. Therefore, researchers at the Wyss Institute at Harvard University designed MAGE, a powerful technology that improves the process of in vivo genome editing. It allows for quick and efficient manipulations of a genome, all happening in a machine small enough to put on top of a small kitchen table. Synthetic single-stranded DNA (ssDNA) and a pool of oligonucleotides are introduced at targeted areas of the cell, thereby creating genetic modifications. These mutations combine with the variation that naturally occurs during cell division, creating billions of cellular mutations. The cyclical process involves transformation of ssDNA (by electroporation) followed by outgrowth, during which bacteriophage homologous recombination proteins mediate annealing of ssDNAs to their genomic targets. Experiments targeting selective phenotypic markers are screened and identified by plating the cells on differential media. Each cycle ultimately takes 2.5 hours to process, with additional time required to grow isogenic cultures and characterize mutations. By iteratively introducing libraries of mutagenic ssDNAs targeting multiple sites, MAGE can generate combinatorial genetic diversity in a cell population. Up to 50 genome edits, from single nucleotide base pairs to whole genomes or gene networks, can be made simultaneously, with results in a matter of days. MAGE experiments can be divided into three classes, characterized by varying degrees of scale and complexity: (i) many target sites, single genetic mutations; (ii) single target site, many genetic mutations; and (iii) many target sites, many genetic mutations. An example of class three was demonstrated in 2009, when Church and colleagues were able to program Escherichia coli to produce five times the normal amount of lycopene, an antioxidant normally found in tomatoes and linked to anti-cancer properties. They applied MAGE to optimize the 1-deoxy-D-xylulose 5-phosphate (DXP) metabolic pathway in Escherichia coli to overproduce the isoprenoid lycopene. It took them about 3 days and just over $1,000 in materials.
The ease, speed, and cost efficiency with which MAGE can alter genomes can transform how industries approach the manufacturing and production of important compounds in the bioengineering, bioenergy, biomedical engineering, synthetic biology, pharmaceutical, agricultural, and chemical industries. Applications As of 2012, efficient genome editing had been developed for a wide range of experimental systems ranging from plants to animals, often beyond clinical interest, and was becoming a standard experimental strategy in research labs. The recent generation of rat, zebrafish, maize and tobacco ZFN-mediated mutants and the improvements in TALEN-based approaches testify to the significance of the methods, and the list is expanding rapidly. Genome editing with engineered nucleases will likely contribute to many fields of life sciences, from studying gene functions in plants and animals to gene therapy in humans. For instance, the field of synthetic biology, which aims to engineer cells and organisms to perform novel functions, is likely to benefit from the ability of engineered nucleases to add or remove genomic elements and therefore create complex systems. In addition, gene functions can be studied using stem cells with engineered nucleases. Listed below are some specific tasks this method can carry out: Targeted gene mutation Gene therapy Creating chromosome rearrangement Study gene function with stem cells Transgenic animals Endogenous gene labeling Targeted transgene addition Targeted gene modification in animals The combination of recent discoveries in genetic engineering, particularly gene editing, and the latest improvements in bovine reproduction technologies (e.g. in vitro embryo culture) allows for genome editing directly in fertilized oocytes using synthetic, highly specific endonucleases. RNA-guided endonucleases such as clustered regularly interspaced short palindromic repeats-associated Cas9 (CRISPR/Cas9) are a new tool, further increasing the range of methods available. In particular, CRISPR/Cas9-engineered endonucleases allow the use of multiple guide RNAs for simultaneous knockouts (KO) in one step by cytoplasmic direct injection (CDI) into mammalian zygotes. Furthermore, gene editing can be applied to certain types of fish in aquaculture, such as Atlantic salmon. Gene editing in fish is currently experimental, but the possibilities include growth, disease resistance, sterility, controlled reproduction, and color. Selecting for these traits can allow for a more sustainable environment and better welfare for the fish. AquAdvantage salmon is a genetically modified Atlantic salmon developed by AquaBounty Technologies. The growth hormone-regulating gene in the Atlantic salmon is replaced with the growth hormone-regulating gene from the Pacific Chinook salmon, together with a promoter sequence from the ocean pout. Thanks to the parallel development of single-cell transcriptomics, genome editing and new stem cell models, we are now entering a scientifically exciting period where functional genetics is no longer restricted to animal models but can be performed directly in human samples. Single-cell gene expression analysis has resolved a transcriptional road-map of human development from which key candidate genes are being identified for functional studies. Using global transcriptomics data to guide experimentation, the CRISPR-based genome editing tool has made it feasible to disrupt or remove key genes in order to elucidate function in a human setting.
Targeted gene modification in plants Genome editing using meganucleases, ZFNs, and TALENs provides a new strategy for genetic manipulation in plants and is likely to assist in the engineering of desired plant traits by modifying endogenous genes. For instance, site-specific gene addition in major crop species can be used for 'trait stacking', whereby several desired traits are physically linked to ensure their co-segregation during the breeding processes. Progress in such cases has recently been reported in Arabidopsis thaliana and Zea mays. In Arabidopsis thaliana, using ZFN-assisted gene targeting, two herbicide-resistance genes (tobacco acetolactate synthase SuRA and SuRB) were introduced to SuR loci, with mutations in as many as 2% of transformed cells. In Zea mays, disruption of the target locus was achieved by ZFN-induced DSBs and the resulting NHEJ. ZFN was also used to drive a herbicide-tolerance gene expression cassette (PAT) into the targeted endogenous locus IPK1 in this case. Such genome modification observed in the regenerated plants has been shown to be heritable and was transmitted to the next generation. A potentially successful example of the application of genome editing techniques in crop improvement can be found in banana, where scientists used CRISPR/Cas9 editing to inactivate the endogenous banana streak virus in the B genome of banana (Musa spp.) to overcome a major challenge in banana breeding. In addition, TALEN-based genome engineering has been extensively tested and optimized for use in plants. TALEN fusions have also been used by a U.S. food ingredient company, Calyxt, to improve the quality of soybean oil products and to increase the storage potential of potatoes. Several optimizations need to be made in order to improve the editing of plant genomes using ZFN-mediated targeting. There is a need for reliable design and subsequent testing of the nucleases, the absence of toxicity of the nucleases, the appropriate choice of the plant tissue for targeting, the routes of induction of enzyme activity, the lack of off-target mutagenesis, and a reliable detection of mutated cases. A common delivery method for CRISPR/Cas9 in plants is Agrobacterium-based transformation. T-DNA is introduced directly into the plant genome by a T4SS mechanism. Cas9 and gRNA expression cassettes are assembled into Ti plasmids, which are transformed into Agrobacterium for plant application. To improve Cas9 delivery in live plants, viruses are being used for more effective transgene delivery. Research Gene therapy The ideal gene therapy practice is that which replaces the defective gene with a normal allele at its natural location. This is advantageous over a virally delivered gene, as there is no need to include the full coding and regulatory sequences when only a small proportion of the gene needs to be altered, as is often the case. The expression of the partially replaced genes is also more consistent with normal cell biology than that of full genes carried by viral vectors. The first clinical use of TALEN-based genome editing was in the treatment of CD19+ acute lymphoblastic leukemia in an 11-month-old child in 2015. Modified donor T cells were engineered to attack the leukemia cells, to be resistant to alemtuzumab, and to evade detection by the host immune system after introduction.
Extensive research has been done in cells and animals using CRISPR-Cas9 to attempt to correct genetic mutations which cause genetic diseases such as Down syndrome, spina bifida, anencephaly, and Turner and Klinefelter syndromes. In February 2019, medical scientists working with Sangamo Therapeutics, headquartered in Richmond, California, announced the first ever "in body" human gene editing therapy to permanently alter DNA, in a patient with Hunter syndrome. Clinical trials by Sangamo involving gene editing using zinc finger nucleases (ZFNs) are ongoing. Eradicating diseases Researchers have used CRISPR-Cas9 gene drives to modify genes associated with sterility in A. gambiae, the vector for malaria. This technique has further implications in eradicating other vector-borne diseases such as yellow fever, dengue, and Zika. The CRISPR-Cas9 system can be programmed to modulate the population of any bacterial species by targeting clinical genotypes or epidemiological isolates. It can selectively enable the beneficial bacterial species over the harmful ones by eliminating pathogens, which gives it an advantage over broad-spectrum antibiotics. Antiviral applications for therapies targeting human viruses such as HIV, herpes, and hepatitis B virus are under research. CRISPR can be used to target the virus or the host to disrupt genes encoding the virus cell-surface receptor proteins. In November 2018, He Jiankui announced that he had edited two human embryos to attempt to disable the gene for CCR5, which codes for a receptor that HIV uses to enter cells. He said that twin girls, Lulu and Nana, had been born a few weeks earlier. He said that the girls still carried functional copies of CCR5 along with disabled CCR5 (mosaicism) and were still vulnerable to HIV. The work was widely condemned as unethical, dangerous, and premature. In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, using the same cloning technique that was used with Zhong Zhong and Hua Hua (the first ever cloned monkeys) and Dolly the sheep, and the same gene-editing CRISPR-Cas9 technique allegedly used by He Jiankui in creating the first ever gene-modified human babies, Lulu and Nana. The monkey clones were made in order to study several medical diseases. Prospects and limitations In the future, an important goal of research into genome editing with engineered nucleases must be the improvement of the safety and specificity of nuclease action. For example, improving the ability to detect off-target events can improve our ability to learn about ways of preventing them. In addition, zinc fingers used in ZFNs are seldom completely specific, and some may cause a toxic reaction. However, the toxicity has been reported to be reduced by modifications made to the cleavage domain of the ZFN. In addition, research by Dana Carroll into modifying the genome with engineered nucleases has shown the need for better understanding of the basic recombination and repair machinery of DNA. In the future, a possible method to identify secondary targets would be to capture broken ends from cells expressing the ZFNs and to sequence the flanking DNA using high-throughput sequencing. Because of the ease of use and cost-efficiency of CRISPR, extensive research is currently being done on it. There are now more publications on CRISPR than on ZFN and TALEN, despite how recent the discovery of CRISPR is.
Both CRISPR and TALEN are favored as the methods of choice for large-scale production due to their precision and efficiency. Genome editing also occurs as a natural process without artificial genetic engineering. The agents competent to edit genetic material in this way are viruses and subviral RNA agents. Although GEEN has higher efficiency than many other methods in reverse genetics, it is still not highly efficient; in many cases less than half of the treated populations obtain the desired changes. For example, when one is planning to use the cell's NHEJ to create a mutation, the cell's HDR systems will also be at work correcting the DSB with lower mutational rates. Traditionally, mice have been the most common choice for researchers as hosts of disease models. CRISPR can help bridge the gap between this model and human clinical trials by creating transgenic disease models in larger animals such as pigs, dogs, and non-human primates. Using the CRISPR-Cas9 system, the programmed Cas9 protein and the sgRNA can be directly introduced into fertilized zygotes to achieve the desired gene modifications when creating transgenic models in rodents. This allows bypassing of the usual cell targeting stage in generating transgenic lines, and as a result, it reduces generation time by 90%. One potential application enabled by CRISPR's effectiveness is xenotransplantation. In previous research trials, CRISPR demonstrated the ability to target and eliminate endogenous retroviruses, which reduces the risk of transmitting diseases and reduces immune barriers. Eliminating these problems improves donor organ function, which brings this application closer to a reality. In plants, genome editing is seen as a viable solution to the conservation of biodiversity. Gene drives are a potential tool to alter the reproductive rate of invasive species, although there are significant associated risks. Human enhancement Many transhumanists see genome editing as a potential tool for human enhancement. Australian biologist and Professor of Genetics David Andrew Sinclair notes that "the new technologies with genome editing will allow it to be used on individuals (...) to have (...) healthier children" (designer babies). According to a September 2016 report by the Nuffield Council on Bioethics, in the future it may be possible to enhance people with genes from other organisms, or wholly synthetic genes, to, for example, improve night vision and sense of smell. George Church has compiled a list of potential genetic modifications for possibly advantageous traits such as less need for sleep, cognition-related changes that protect against Alzheimer's disease, disease resistances and enhanced learning abilities, along with some of the associated studies and potential negative effects. The American National Academy of Sciences and National Academy of Medicine issued a report in February 2017 giving qualified support to human genome editing. They recommended that clinical trials for genome editing might one day be permitted once answers have been found to safety and efficiency problems, "but only for serious conditions under stringent oversight." Risks In the 2016 Worldwide Threat Assessment of the US Intelligence Community statement, United States Director of National Intelligence James R.
Clapper named genome editing as a potential weapon of mass destruction, stating that genome editing conducted by countries with regulatory or ethical standards "different from Western countries" probably increases the risk of the creation of harmful biological agents or products. According to the statement, given the broad distribution, low cost, and accelerated pace of development of this technology, its deliberate or unintentional misuse might lead to far-reaching economic and national security implications. For instance, technologies such as CRISPR could be used to make "killer mosquitoes" that cause plagues that wipe out staple crops. According to a September 2016 report by the Nuffield Council on Bioethics, the simplicity and low cost of tools to edit the genetic code will allow amateurs, or "biohackers", to perform their own experiments, posing a potential risk from the release of genetically modified bugs. The review also found that the risks and benefits of modifying a person's genome, and having those changes pass on to future generations, are so complex that they demand urgent ethical scrutiny. Such modifications might have unintended consequences which could harm not only the child, but also their future children, as the altered gene would be in their sperm or eggs. In 2001, Australian researchers Ronald Jackson and Ian Ramshaw were criticized for publishing a paper in the Journal of Virology that explored the potential control of mice, a major pest in Australia, by infecting them with an altered mousepox virus that would cause infertility, as the sensitive information provided could lead to the manufacture of biological weapons by potential bioterrorists who might use the knowledge to create vaccine-resistant strains of other pox viruses, such as smallpox, that could affect humans. Furthermore, there are additional concerns about the ecological risks of releasing gene drives into wild populations. Nobel prize In 2007, the Nobel Prize in Physiology or Medicine was awarded to Mario Capecchi, Martin Evans and Oliver Smithies "for their discoveries of principles for introducing specific gene modifications in mice by the use of embryonic stem cells." In 2020, the Nobel Prize in Chemistry was awarded to Emmanuelle Charpentier and Jennifer Doudna for "the development of a method for genome editing".
Technology
Biotechnology
null
37434964
https://en.wikipedia.org/wiki/Orthopedic%20boot
Orthopedic boot
A variety of orthopedic boots are used for the treatment of injuries of the foot or ankle. Along with orthopedic casts, leg braces, splints and orthotics, they can immobilize and shift weight bearing to help treat injuries to the foot area. A controlled ankle motion walking boot, also referred to as a controlled ankle movement walking boot, below knee walking boot, CAM boot, CAM walker, or moon boot, is an orthopedic device prescribed for the treatment and stabilization of severe sprains, fractures, and tendon or ligament tears in the ankle or foot. In situations where ankle motion but not weight is to be limited, it may be used in place of a cast. Description A walking boot consists of: An inner lining, usually fabric, with hook and loop fasteners which encloses and cushions the patient's foot and ankle A rigid frame to restrict motion in the lower leg A hard plastic shell that provides rigidity and protection to the leg Adjustable closure system that allows for proper fitting to various leg sizes Variations CAM walkers may range in height from mid-calf to nearly knee-length, depending on the condition they are meant to treat. Some contain inflatable compartments that can be adjusted by the patient for maximum support and comfort. For further protection of the injured ankle and leg, CAM walkers may also utilize a more extensive plastic shell that also encloses the back and sides of the walker, with detachable plastic plates for the front. Comparison to casting While CAM walkers do not provide the same degree of immobility that an orthopedic cast offers, they have some advantages. Unlike casts, they are adjustable and reusable, and fully removable, permitting the patient to bathe the foot and ankle and remove the walker at night, if they so desire; and a CAM walker requires no special modifications for the patient to bear weight and walk. With some fractures, however, removal may result in worse outcomes and thus this may be a negative; also, with some fractures, the person should be non-weight bearing. Additionally, there is greater cost. For more severe fractures, a traditional cast may still be preferable.
Technology
Devices
null
23256800
https://en.wikipedia.org/wiki/Filasterea
Filasterea
Filasterea is a proposed basal Filozoan clade of single-celled amoeboid eukaryotes that includes Ministeria and Capsaspora. It is a sister clade to Choanozoa, the clade in which Choanoflagellatea and animals appeared. It was originally proposed by Shalchian-Tabrizi et al. in 2008, based on a phylogenomic analysis of 78 genes. Filasterea was found to be the sister group to the clade composed of Metazoa and Choanoflagellata within the Opisthokonta, a finding that has been further corroborated with additional, more taxon-rich, phylogenetic analyses. Etymology From Latin filum meaning "thread" and Greek aster meaning "star", the name indicates the main morphological features shared by all its members: small, rounded amoeboids with a mononucleated cellular body, covered in long and radiating cell protrusions known as filopodia. These filopodia may be involved in substrate adhesion and capture of prey. Applications There are currently cultures from two filasterean species: Capsaspora owczarzaki and Ministeria vibrans, the first isolated from within a fresh-water snail, the second a marine, free-living bacterivore. The complete genome sequences of C. owczarzaki, M. vibrans, Pigoraptor vietnamica and Pigoraptor chileana have been obtained. Comparative analyses have shown that Filasterea is key to unraveling the genetic repertoire of the unicellular ancestor of animals and to providing insights into the origin of Metazoa. Metabarcoding analyses of 18S ribosomal RNA in marine environments have failed to recover other filasterean representatives, suggesting this clade may not be especially abundant in natural ecosystems. Taxonomy Class Filasterea Shalchian-Tabrizi et al. 2008 Order Ministeriida Cavalier-Smith 1997 Family Ministeriidae Cavalier-Smith 2008 Genus Ministeria Patterson et al. 1993 Ministeria marisola Patterson et al. 1993 Ministeria vibrans Tong 1997 Family Txikisporidae Urrutia, Feist & Bass 2021 Genus Txikispora Urrutia, Feist & Bass 2021 Txikispora philomaios Urrutia, Feist & Bass 2021 Family Capsasporidae Cavalier-Smith 2008 Genus Capsaspora Hertel et al. 2002 Capsaspora owczarzaki Hertel et al. 2002 Genus Pigoraptor Tikhonenkov et al. 2017 Pigoraptor chileana Tikhonenkov et al. 2017 Pigoraptor vietnamica Tikhonenkov et al. 2017 In some research, Capsaspora is found to be more closely related to Choanozoa than Ministeria is.
Biology and health sciences
Eukaryotes
Plants
23257027
https://en.wikipedia.org/wiki/Holozoa
Holozoa
Holozoa () is a clade of organisms that includes animals and their closest single-celled relatives, but excludes fungi and all other organisms. Together they amount to more than 1.5 million species of purely heterotrophic organisms, including around 300 unicellular species. It consists of various subgroups, namely Metazoa (or animals) and the protists Choanoflagellata, Filasterea, Pluriformea and Ichthyosporea. Along with fungi and some other groups, Holozoa is part of the Opisthokonta, a supergroup of eukaryotes. Choanofila was previously used as the name for a group similar in composition to Holozoa, but its usage is discouraged now because it excludes animals and is therefore paraphyletic. The holozoan protists play a crucial role in understanding the evolutionary steps leading to the emergence of multicellular animals from single-celled ancestors. Recent genomic studies have shed light on the evolutionary relationships between the various holozoan lineages, revealing insights into the origins of multicellularity. Some fossils of possible metazoans have been reinterpreted as holozoan protists. Characteristics Composition Holozoa is a clade that includes animals and their closest relatives, as well as their common ancestor, but excludes fungi. It is defined on a branch-based approach as the clade encompassing all relatives of Homo sapiens (an animal), but not Neurospora crassa (a fungus). Holozoa, besides animals, primarily comprises unicellular protist lineages of varied morphologies such as choanoflagellates, filastereans, ichthyosporeans, and the distinct genera Corallochytrium, Syssomonas, and Tunicaraptor. Choanoflagellata, with around 250 species, are the closest living relatives of animals. They are free-living unicellular or colonial flagellates that feed on bacteria using a characteristic "collar" of microvilli. The collar of choanoflagellates closely resembles sponge collar cells, leading to theories since the 19th century about their relatedness to sponges. The mysterious Proterospongia is an example of a colonial choanoflagellate that was thought to be related to the origin of sponges. The affinities of the other single-celled holozoans only began to be recognized in the 1990s. Ichthyosporea, also known as Mesomycetozoea and comprising around 40 species, largely consist of parasites or commensals. They interact with a diverse range of animals, from humans and fish to marine invertebrates. Most reproduce through multinucleated colonies and disperse as flagellates or amoebae. Filasterea is a group of 6 amoeboid species belonging to the genera Ministeria, Pigoraptor, Capsaspora, and Txikispora, united by the structure of their thread-like pseudopods. Pluriformea is a provisional name for the clade composed by the two species Corallochytrium limacisporium and Syssomonas multiformis. These organisms have varied shapes, including cellular aggregations, amoebae, flagellates, and amoeboflagellates. Tunicaraptor unikontum is the newest discovered clade, whose position within Holozoa has yet to be resolved. It is a flagellate with a specialized "mouth" structure absent in other holozoans. Metazoa, known as animals, are multicellular organisms that sum more than 1.5 million living species. They are characterized by a blastula phase during their embryonic development and, except for the amorphous sponges, the formation of germ layers and differentiated tissues. Genetics The first sequenced unicellular holozoan genome was that of Monosiga brevicollis, a choanoflagellate. 
It measures around 41.6 mega–base-pairs (Mbp) and contains around 9200 coding genes, making it comparable in size to the genome of filamentous fungi. Animal genomes are usually larger (e.g. human genome, 2900 Mbp; fruit fly, 180 Mbp), with some exceptions. Evolution Phylogeny Holozoa, along with a clade that contains fungi and their protist relatives (Holomycota), are part of the larger supergroup of eukaryotes known as Opisthokonta. Holozoa diverged from their opisthokont ancestor around 1070 million years ago (Mya). The choanoflagellates, animals and filastereans group together as the clade Filozoa. Within Filozoa, the choanoflagellates and animals group together as the clade Choanozoa. Based on phylogenetic and phylogenomic analyses, the cladogram of Holozoa is shown below: Uncertainty remains around the relationship of the two most basal groups, Ichthyosporea and Pluriformea. They may be sister to each other, forming the putative clade Teretosporea. Alternatively, Ichthyosporea may be the earliest-branching of the two, while Pluriformea is sister to the Filozoa clade comprising filastereans, choanoflagellates and animals. This second outcome is more strongly supported after the discovery of Syssomonas. The position of Tunicaraptor, the newest holozoan member, is still unresolved. Three different phylogenetic positions of Tunicaraptor have been obtained from analyses: as the sister group to Filasterea, as sister to Filozoa, or as the most basal group of all Holozoa. Environmental DNA surveys of oceans have revealed new diverse lineages of Holozoa. Most of them nest within known groups, mainly Ichthyosporea and Choanoflagellata. However, one environmental clade does not nest within any known group and is a potential new holozoan lineage. It has been tentatively named MASHOL (for 'marine small Holozoa'). Unicellular ancestry of animals The quest to elucidate the evolutionary origins of animals from a unicellular ancestor requires an examination of the transition to multicellularity. In the absence of a fossil record documenting this evolution, insights into the unicellular ancestor of animals are obtained from the analysis of shared genes and genetic pathways between animals and their closest living unicellular relatives. The genetic content of these single-celled holozoans has revealed a significant discovery: many genetic characteristics previously thought as unique to animals can also be found in these unicellular relatives. This suggests that the origin of multicellular animals did not happen solely because of the appearance of new genes (i.e. innovation), but because of pre-existing genes that were adapted or utilized in new ways (i.e. co-option). For example: Adhesion proteins are necessary in allowing cells to stick to each other and to the extracellular matrix, forming layers and tissues in animals. Some unicellular holozoans, like choanoflagellates and filastereans, possess genes that encode proteins involved in cell-cell adhesion and cell-matrix adhesion (e.g. cadherin and integrin, respectively). Other genes, however, seem to be exclusively found in animals (e.g. β-catenin). ECM-related proteins, involved in the formation of the extracellular matrix, are present in other holozoans (e.g. laminins, collagens and fibronectins). Signal transduction proteins are another requirement for metazoan multicellularity. Some animal cytoplasmic tyrosine kinases (such as focal adhesion kinase) and the Hippo signaling pathway are present in unicellular holozoans. 
Other signaling pathways highly conserved in animals (e.g. Hedgehog, WNT, TGFβ, JAK-STAT and Notch) are absent in other holozoans, but similar signaling receptors evolved independently in choanoflagellates, filastereans and ichthyosporeans (e.g. receptor tyrosine kinases). A considerable portion of animal transcription factors (TF) is already present in unicellular holozoans, including some TF classes previously thought to be animal-specific (e.g. p53 and T-box). Additionally, many biological processes seen in animals are already present in their unicellular relatives, such as sexual reproduction and gametogenesis in the choanoflagellate Salpingoeca rosetta and several types of multicellular differentiation. Fossil record A billion-year-old freshwater microscopic fossil named Bicellum brasieri is possibly the earliest known holozoan. It shows two differentiated cell types or life cycle stages. It consists of a spherical ball of tightly packed cells (stereoblasts) enclosed in a single layer of elongated cells. There are also two populations of stereoblasts with mixed shapes, which have been interpreted as cellular migration to the periphery, a movement that could be explained by differential cell-cell adhesion. These occurrences are consistent with extant unicellular holozoans, which are known to form multicellular stages in complex life cycles. Proposed Ediacaran fossil "embryos" of early metazoans, discovered in the Doushantuo Formation, have been reinterpreted as non-animal protists within Holozoa. According to some authors, although they present possible embryonic cleavage, they lack metazoan synapomorphies such as tissue differentiation and nearby juveniles or adults. Instead, its development is comparable to the germination stage of non-animal holozoans. They possibly represent an evolutionary grade in which palintomic cleavage (i.e. rapid cell divisions without cytoplasmic growth in between, a characteristic of animal embryonic cleavage) was the method of dispersal and propagation. Taxonomy History Prior to 2002, a relationship between Choanoflagellata, Ichthyosporea and the animal-fungi divergence was considered on the basis of morphology and ultrastructure. Early phylogenetic analyses gave contradicting results, because the amount of available DNA sequences was insufficient to yield unambiguous results. The taxonomic uncertainty was such that, for example, some Ichthyosporea were traditionally treated as trichomycete fungi. Holozoa was first recognized as a clade in 2002 through a phylogenomic analysis by Franz Bernd Lang, Charles J. O'Kelly and other collaborators, as part of a paper published in the journal Current Biology. The study used complete mitochondrial genomes of a choanoflagellate (Monosiga brevicollis) and an ichthyosporean (Amoebidium parasiticum) to firmly resolve the position of Ichthyosporea as the sister group to Choanoflagellata+Metazoa. This clade was named Holozoa (), meaning 'whole animal', referencing the wider animal ancestry that it contains. Holozoa has since been supported as a robust clade by every posterior analysis, even after the discovery of more taxa nested within it (namely Filasterea since 2008, and the pluriformean species Corallochytrium and Syssomonas since 2014 and 2017 respectively). As of 2019, the clade is accepted by the International Society of Protistologists, which revises the classification of eukaryotes. Classification In classifications that use traditional taxonomic ranks (e.g. 
kingdom, phylum, class), all holozoan protists are classified as subphylum Choanofila (phylum Choanozoa, kingdom Protozoa) while the animals are classified as a separate kingdom Metazoa or Animalia. This classification excludes animals, even though they descend from the same common ancestor as choanofilan protists, making it a paraphyletic group rather than a true clade. Modern cladistic approaches to eukaryotic classification prioritise monophyletic groupings over traditional ranks, which are increasingly perceived as redundant and superfluous. Because Holozoa is a clade, its use is preferred over the paraphyletic taxon Choanofila. Holozoa Incertae sedis: Bicellum brasieri Tunicaraptor Ichthyosporea [Mesomycetozoea ] Dermocystida Ichthyophonida Pluriformea Corallochytrium Syssomonas Filozoa Filasterea Capsaspora Ministeria Pigoraptor Txikispora Choanozoa [Choanozoa (P)] Choanoflagellata [Choanoflagellatea ] Craspedida Acanthoecida Metazoa [Animalia ] Porifera Placozoa Ctenophora Cnidaria Bilateria
Biology and health sciences
Eukaryotes
Plants
5387354
https://en.wikipedia.org/wiki/Bay%20mud
Bay mud
Bay mud consists of thick deposits of soft, unconsolidated silty clay, which is saturated with water; these soil layers are situated at the bottom of certain estuaries, which are normally in temperate regions that have experienced cyclical glaciation. Example locations are Cape Cod Bay, the Chongming Dongtan Reserve in Shanghai, China, the Banc d'Arguin preserve in Mauritania, the Bristol Channel in the United Kingdom, Mandø Island in the Wadden Sea in Denmark, Florida Bay, San Francisco Bay, the Bay of Fundy, Casco Bay, Penobscot Bay, and Morro Bay. Bay mud manifests low shear strength, high compressibility and low permeability, making it hazardous to build upon in seismically active regions like the San Francisco Bay Area. Typical bulk density of bay mud is approximately 1.3 grams per cubic centimetre. Bay muds often have a high organic content, consisting of decayed organisms at lower depths, but may also contain living creatures when they occur at the upper soil layer and become exposed by low tides; then, they are called mudflats, an important ecological zone for shorebirds and many types of marine organisms. Little attention was given to the incidence of deeper bay muds until the 1960s and 1970s, when development encroachment on certain North American bays intensified, requiring geotechnical design of foundations. Bay mud has its own official geological abbreviation: the designation for Quaternary older bay mud is Qobm and the designation for Quaternary younger bay mud is Qybm. An alluvial layer is often found overlying the older bay mud. In relation to shipping channels, it is often necessary to dredge bay bottoms and barge the excavated material to an alternate location. In this case, chemical analyses are usually performed on the bay mud to determine whether there are elevated levels of heavy metals, PCBs or other toxic substances known to accumulate in a benthic environment. It is not uncommon to dredge the same channel repeatedly (over a span of ten to thirty years), since further settling sediments are prone to redeposit on an open estuarine valley floor. Depositional scenarios Bay muds originate from two generalized sources. First, alluvial deposits of clays, silts and sand arrive from streams tributary to a given bay. The extent of these unconsolidated interglacial deposits typically ranges throughout a given bay to the extent of the historical perimeter marshlands. Second, in periods of high glaciation, deposits of silts, sands and organic plus inorganic detritus (e.g. decomposition of estuarine diatoms) may form a separate, distinct layer. Thus bay muds are important time records of glacial activity and streamflow throughout the Quaternary period. Some depositional formation is quite recent, such as in the case of Florida Bay, where much of the bay mud has accumulated since 2000 BCE and consists primarily of decayed organic material. In the case of Florida Bay, these bay muds can accrete as much as 0.5 to 2.0 centimeters per annum, although the dynamic equilibrium of erosion, wave action redistribution and deposition complicates the net rate of layer growth. In the case of the Bristol Channel in the United Kingdom, bay mud formation has been occurring at least since the Eemian Stage (known as the Sangamonian Stage in North America), or about 130,000 years ago. In other cases, such as San Francisco Bay, deposition has been interrupted by sea-level changes, and strata of vastly different vintages are found.
In the San Francisco Bay Area, these are called Young bay mud and Older bay mud by geologists. Human activities can also affect deposition; close to half of the Young Bay Mud in San Francisco Bay was placed in the period 1855–1865, as a result of placer mining in the Sierra Nevada foothills. Geotechnical factors Construction on bay mud sites is difficult because of the soil's low strength and high compressibility. Very lightweight buildings can be constructed on bay mud sites if there is a thick enough layer of non-bay-mud soil above the bay mud, but buildings which impose significant loads must be supported on deep foundations bearing on stiffer layers below the bay mud or deriving support from friction within the bay mud. Even with deep foundations, difficulties arise because the surrounding ground will likely settle over time, potentially damaging utility connections to the building and causing the entryway to sink below street level. A number of notable buildings have been constructed over bay muds, typically employing special mitigation designs to withstand seismic risks and settlement issues. Complicating design issues, fill (beginning about 1850 CE) is sometimes found deposited on the surface level. For example, the Dakin Building in Brisbane, California, was designed in 1985 to sit on piles 150 feet deep, anchoring to the Franciscan formation, below the bay muds and through an upper fill layer. Furthermore, the structure's entrance ramp has been set on a giant hinge to allow the surrounding land to settle, while the building's absolute height remains constant. The Crowne Plaza high-rise hotel in Burlingame, California, was also designed to sit over bay muds, as was the Westin Hotel in Millbrae, California, and Trinity Church in Boston's Copley Square. Indeed, Boston's entire Back Bay district is named for the tidal bay that it now covers. Logan International Airport and the San Francisco International Airport are also constructed over bay mud. Mudflats When the mud layer is exposed at the tidal fringe, mudflats result, forming a unique ecotone that affords numerous shorebird species a safe feeding and resting habitat. Because the muds function much like quicksand, heavier mammalian predators not only cannot gain traction for pursuit, but would actually become trapped in the sinking muds. The muds are also an important substrate for primary marsh productivity, including eelgrass, cordgrass and pickleweed. Furthermore, they are home to a large variety of molluscs and estuarine arthropods. Richardson Bay, for example, exposes one third of its areal extent as mudflat at low tide, which hosts a productive eelgrass expanse and also a large shorebird community. Mammals such as the harbor seal may use mudflats to haul out of estuary waters; however, larger mammals such as humpback whales may become accidentally stranded at low tides. Normally humpback whales do not frequent estuaries containing mudflats, but at least one errant whale, publicized by the media as Humphrey the humpback whale, became stuck on a mudflat in San Francisco Bay at Sierra Point in Brisbane, California.
In the United Kingdom, large bay mud occurrences are found at Morecambe Bay, Bridgwater Bay and the Bristol Channel. Straddling Denmark, the Netherlands and Germany is the Wadden Sea, a major formation underlain by bay muds. In Asia, the Chongming Dongtan Nature Reserve in Shanghai, China, is an example of a large-scale bay mud formation. The Atlantic coast of Africa holds the Banc d'Arguin, a World Heritage nature preserve in the country of Mauritania. Banc d'Arguin is a vast area underlain by bay mud. Regulatory issues and actions When building on top of bay mud layers or when dredging estuary bottoms, a variety of regulatory frameworks may apply. Normally in the United States, an Environmental Impact Report as well as a geotechnical investigation are conducted prior to any major construction over bay mud. Combined, these reports have developed much of the extant database on bay mud characteristics, frequently yielding original field data from soil borings. These data have demonstrated that in many locations the shallower bay muds contain concentrations of mercury, lead, chromium, petroleum hydrocarbons, PCBs, pesticides and other chemicals which exceed toxic limits: a geological record of human activities of the last century. These data are particularly important to consider when dredging of bay muds is contemplated as part of a development project. Such dredging can have impacts on receiving lands, such as soil contamination, as well as water column impacts from sediment disturbance. In the case of dredging within the United States, a permit is almost always required from the United States Army Corps of Engineers, after submission of extensive data on the project limits, chemical properties of the bay muds to be disturbed, a dredge disposal plan and often a complete Environmental Impact Statement pursuant to the National Environmental Policy Act. Further review by the United States Coast Guard would normally be required. Within individual state jurisdictions, such as California, an Environmental Impact Report must be filed for dredging of any significance; furthermore, agency reviews by the California Coastal Commission and the Regional Water Quality Control Board would normally be mandated. All of these regulatory bodies serve an important role in deciding whether an area may be dredged or not. However, the most important of these authorities is the California Environmental Quality Act (CEQA). This guiding legislation is the reason for Environmental Impact Reports, costly mitigation measures and arduous review processes. One of CEQA's main goals is to promote interagency cooperation in the review process of a project. This is one of the main reasons why it governs the review of projects throughout California. For buildings proposed over bay mud layers, typically the municipality involved will, in addition to the usual engineering and design review issues common to all building projects (which are more complicated because of the site conditions), require an Environmental Impact Report. This process would include reviews by that city's building department, as well as applicable regional and state agencies such as those cited above for dredging projects, except that Coast Guard agencies would not typically be concerned. In California, proposed development over bay mud layers would also have to go through a planning commission and a city council in order to be allowed. This process would respect the EIR, CEQA, and all the other requirements discussed above.
In the case of San Francisco, the project would have to be approved by the San Francisco Board of Supervisors. The Millennium Tower, an example of a tall building built on bay mud, was completed in 2008 and subsequently experienced sinking. This has had a negative impact on the residents of the building. In response to this subsidence, San Francisco's city attorney filed a lawsuit against the developer, because the developer failed to inform residents of the accelerated rate at which the building was sinking. Sea level rise Sea level rise will have a major impact on the ecosystems surrounding and within bays all across the globe. Sea level rise in California is expected to submerge the bay mud that makes up much of San Francisco Bay. To deal with sea level rise, the California Coastal Commission has adopted sea level rise policy guidance for the state.
Physical sciences
Oceanic and coastal landforms
Earth science
5389424
https://en.wikipedia.org/wiki/Arduino
Arduino
Arduino () is an Italian open-source hardware and software company, project, and user community that designs and manufactures single-board microcontrollers and microcontroller kits for building digital devices. Its hardware products are licensed under a CC BY-SA license, while the software is licensed under the GNU Lesser General Public License (LGPL) or the GNU General Public License (GPL), permitting the manufacture of Arduino boards and software distribution by anyone. Arduino boards are available commercially from the official website or through authorized distributors. Arduino board designs use a variety of microprocessors and controllers. The boards are equipped with sets of digital and analog input/output (I/O) pins that may be interfaced to various expansion boards ('shields') or breadboards (for prototyping) and other circuits. The boards feature serial communications interfaces, including Universal Serial Bus (USB) on some models, which are also used for loading programs. The microcontrollers can be programmed using the C and C++ programming languages (Embedded C), using a standard API which is also known as the Arduino Programming Language, inspired by the Processing language and used with a modified version of the Processing IDE. In addition to using traditional compiler toolchains, the Arduino project provides an integrated development environment (IDE) and a command line tool developed in Go. The Arduino project began in 2005 as a tool for students at the Interaction Design Institute Ivrea, Italy, aiming to provide a low-cost and easy way for novices and professionals to create devices that interact with their environment using sensors and actuators. Common examples of such devices intended for beginner hobbyists include simple robots, thermostats, and motion detectors. The name Arduino comes from a café in Ivrea, Italy, where some of the project's founders used to meet. The bar was named after Arduin of Ivrea, who was the margrave of the March of Ivrea and King of Italy from 1002 to 1014. History Founding The Arduino project was started at the Interaction Design Institute Ivrea (IDII) in Ivrea, Italy. At that time, the students used a BASIC Stamp microcontroller at a cost of $50. In 2004, Hernando Barragán created the development platform Wiring as a Master's thesis project at IDII, under the supervision of Massimo Banzi and Casey Reas. Casey Reas is known for co-creating, with Ben Fry, the Processing development platform. The project goal was to create simple, low cost tools for creating digital projects by non-engineers. The Wiring platform consisted of a printed circuit board (PCB) with an ATmega128 microcontroller, an IDE based on Processing and library functions to easily program the microcontroller. In 2005, Massimo Banzi, with David Mellis, another IDII student, and David Cuartielles, extended Wiring by adding support for the cheaper ATmega8 microcontroller. The new project, forked from Wiring, was called Arduino. The initial Arduino core team consisted of Massimo Banzi, David Cuartielles, Tom Igoe, Gianluca Martino, and David Mellis. Following the completion of the platform, lighter and less expensive versions were distributed in the open-source community. It was estimated in mid-2011 that over 300,000 official Arduinos had been commercially produced, and in 2013 that 700,000 official boards were in users' hands. Trademark dispute In early 2008, the five co-founders of the Arduino project created a company, Arduino LLC, to hold the trademarks associated with Arduino. 
The manufacture and sale of the boards were to be done by external companies, and Arduino LLC would get a royalty from them. The founding bylaws of Arduino LLC specified that each of the five founders transfer ownership of the Arduino brand to the newly formed company. At the end of 2008, Gianluca Martino's company, Smart Projects, registered the Arduino trademark in Italy and kept this a secret from the other co-founders for about two years. This was revealed when the Arduino company tried to register the trademark in other areas of the world (they originally registered only in the US), and discovered that it was already registered in Italy. Negotiations with Martino and his firm to bring the trademark under the control of the original Arduino company failed. In 2014, Smart Projects began refusing to pay royalties. They then appointed a new CEO, Federico Musto, who renamed the company Arduino SRL and created the website arduino.org, copying the graphics and layout of the original arduino.cc. This resulted in a rift in the Arduino development team. In January 2015, Arduino LLC filed a lawsuit against Arduino SRL. In May 2015, Arduino LLC created the worldwide trademark Genuino, used as brand name outside the United States. At the World Maker Faire in New York on 1 October 2016, Arduino LLC co-founder and CEO Massimo Banzi and Arduino SRL CEO Federico Musto announced the merger of the two companies, forming Arduino AG. Around that same time, Massimo Banzi announced that in addition to the company a new Arduino Foundation would be launched as "a new beginning for Arduino", but this decision was withdrawn later. In April 2017, Wired reported that Musto had "fabricated his academic record... On his company's website, personal LinkedIn accounts, and even on Italian business documents, Musto was, until recently, listed as holding a Ph.D. from the Massachusetts Institute of Technology. In some cases, his biography also claimed an MBA from New York University." Wired reported that neither university had any record of Musto's attendance, and Musto later admitted in an interview with Wired that he had never earned those degrees. The controversy surrounding Musto continued when, in July 2017, he reportedly pulled many open source licenses, schematics, and code from the Arduino website, prompting scrutiny and outcry. By 2017 Arduino AG owned many Arduino trademarks. In July 2017 BCMI, founded by Massimo Banzi, David Cuartielles, David Mellis and Tom Igoe, acquired Arduino AG and all the Arduino trademarks. Fabio Violante is the new CEO replacing Federico Musto, who no longer works for Arduino AG. Post-dispute In October 2017, Arduino announced its partnership with Arm Holdings (ARM). The announcement said, in part, "ARM recognized independence as a core value of Arduino ... without any lock-in with the ARM architecture". Arduino intends to continue to work with all technology vendors and architectures. Under Violante's guidance, the company started growing again and releasing new designs. The Genuino trademark was dismissed and all products were branded again with the Arduino name. In August 2018, Arduino announced its new open source command line tool (arduino-cli), which can be used as a replacement of the IDE to program the boards from a shell. In February 2019, Arduino announced its IoT Cloud service as an extension of the Create online environment. As of February 2020, the Arduino community included about 30 million active users based on the IDE downloads. 
Hardware Arduino is open-source hardware. The hardware reference designs are distributed under a Creative Commons Attribution Share-Alike 2.5 license and are available on the Arduino website. Layout and production files for some versions of the hardware are also available. Although the hardware and software designs are freely available under copyleft licenses, the developers have requested the name Arduino to be exclusive to the official product and not be used for derived works without permission. The official policy document on the use of the Arduino name emphasizes that the project is open to incorporating work by others into the official product. Several Arduino-compatible products commercially released have avoided the project name by using various names ending in -duino. Most Arduino boards consist of an Atmel 8-bit AVR microcontroller (ATmega8, ATmega168, ATmega328, ATmega1280, or ATmega2560) with varying amounts of flash memory, pins, and features. The 32-bit Arduino Due, based on the Atmel SAM3X8E was introduced in 2012. The boards use single or double-row pins or female headers that facilitate connections for programming and incorporation into other circuits. These may connect with add-on modules termed shields. Multiple and possibly stacked shields may be individually addressable via an I²C serial bus. Most boards include a 5 V linear regulator and a 16 MHz crystal oscillator or ceramic resonator. Some designs, such as the LilyPad, run at 8 MHz and dispense with the onboard voltage regulator due to specific form factor restrictions. Arduino microcontrollers are pre-programmed with a bootloader that simplifies the uploading of programs to the on-chip flash memory. The default bootloader of the Arduino Uno is the Optiboot bootloader. Boards are loaded with program code via a serial connection to another computer. Some serial Arduino boards contain a level shifter circuit to convert between RS-232 logic levels and transistor–transistor logic (TTL serial) level signals. Current Arduino boards are programmed via Universal Serial Bus (USB), implemented using USB-to-serial adapter chips such as the FTDI FT232. Some boards, such as later-model Uno boards, substitute the FTDI chip with a separate AVR chip containing USB-to-serial firmware, which is reprogrammable via its own ICSP header. Other variants, such as the Arduino Mini and the unofficial Boarduino, use a detachable USB-to-serial adapter board or cable, Bluetooth or other methods. When used with traditional microcontroller tools, instead of the Arduino IDE, standard AVR in-system programming (ISP) programming is used. The Arduino board exposes most of the microcontroller's I/O pins for use by other circuits. The Diecimila, Duemilanove, and current Uno provide 14 digital I/O pins, six of which can produce pulse-width modulated signals, and six analog inputs, which can also be used as six digital I/O pins. These pins are on the top of the board, via female 0.1-inch (2.54 mm) headers. Several plug-in application shields are also commercially available. The Arduino Nano and Arduino-compatible Bare Bones Board and Boarduino boards may provide male header pins on the underside of the board that can plug into solderless breadboards. Many Arduino-compatible and Arduino-derived boards exist. Some are functionally equivalent to an Arduino and can be used interchangeably. Many enhance the basic Arduino by adding output drivers, often for use in school-level education, to simplify making buggies and small robots. 
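As a brief illustration of how the I/O pins described above are used from user code (a minimal illustrative sketch, not taken from the Arduino documentation; the choice of analog pin A0, PWM pin 9, and the assumed wiring of a potentiometer and an LED are arbitrary), the following reads an analog input and echoes it as a PWM duty cycle:

// Illustrative sketch (assumed wiring): a potentiometer on analog pin A0
// and an LED with a series resistor on digital pin 9, one of the PWM pins.
const int POT_PIN = A0;   // analog input 0
const int PWM_PIN = 9;    // PWM-capable digital pin on the Uno

void setup() {
  pinMode(PWM_PIN, OUTPUT);   // analog inputs need no pinMode() call
  Serial.begin(9600);         // serial-over-USB for debugging output
}

void loop() {
  int raw = analogRead(POT_PIN);        // 0..1023 from the 10-bit ADC
  int duty = map(raw, 0, 1023, 0, 255); // rescale to the 8-bit PWM range
  analogWrite(PWM_PIN, duty);           // set the PWM duty cycle
  Serial.println(raw);                  // report the reading over USB
  delay(100);
}

The 10-bit reading (0 to 1023) is rescaled to the 8-bit PWM range (0 to 255) with map() before being written out, which is the usual pattern for connecting an analog input pin to a PWM output pin.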
Other compatible boards are electrically equivalent, but change the form factor, sometimes retaining compatibility with shields, sometimes not. Some variants use different processors, of varying compatibility. Official boards The original Arduino hardware was manufactured by the Italian company Smart Projects. Some Arduino-branded boards have been designed by the American companies SparkFun Electronics and Adafruit Industries. To date, 17 versions of the Arduino hardware have been commercially produced. Shields Arduino and Arduino-compatible boards use printed circuit expansion boards called shields, which plug into the normally supplied Arduino pin headers. Shields can provide motor controls for 3D printing and other applications, GNSS (satellite navigation), Ethernet, liquid crystal display (LCD), or breadboarding (prototyping). Several shields can also be built as do-it-yourself (DIY) projects. Software A program for Arduino hardware may be written in any programming language with compilers that produce binary machine code for the target processor. Atmel provides a development environment for their 8-bit AVR and 32-bit ARM Cortex-M based microcontrollers: AVR Studio (older) and Atmel Studio (newer). Legacy IDE The Arduino integrated development environment (IDE) is a cross-platform application (for Microsoft Windows, macOS, and Linux) that is written in Java and based on the Processing IDE. It uses the Wiring API as its programming style and hardware abstraction layer (HAL). It includes a code editor with features such as text cutting and pasting, searching and replacing text, automatic indenting, brace matching, and syntax highlighting, and provides simple one-click mechanisms to compile and upload programs to an Arduino board. It also contains a message area, a text console, a toolbar with buttons for common functions and a hierarchy of operation menus. The source code for the IDE is released under the GNU General Public License, version 2. The Arduino IDE supports the languages C and C++ using special rules of code structuring. The Arduino IDE supplies a software library from the Wiring project, which provides many common input and output procedures. User-written code only requires two basic functions, for starting the sketch and the main program loop, that are compiled and linked with a program stub main() into an executable cyclic executive program with the GNU toolchain, also included with the IDE distribution. The Arduino IDE employs the program avrdude to convert the executable code into a text file in hexadecimal encoding that is loaded into the Arduino board by a loader program in the board's firmware. Traditionally, the Arduino IDE was used to program Arduino's official boards, based on Atmel AVR microcontrollers, but as the popularity of Arduino grew and open-source compilers became available, boards based on many other platforms, such as PIC, STM32, TI MSP430, and ESP32, can now be programmed with the Arduino IDE. IDE 2.0 An initial alpha preview of a new Arduino IDE was released on October 18, 2019, as the Arduino Pro IDE. The beta preview was released on March 1, 2021, renamed IDE 2.0. On September 14, 2022, Arduino IDE 2.0 was officially released as stable. The system still uses the Arduino CLI (command-line interface), but improvements include a more professional development environment and autocompletion support. The application frontend is based on the open-source Eclipse Theia IDE.
Its main new features are:
Modern, fully featured development environment
New Board Manager
New Library Manager
Project Explorer
Basic Auto-Completion and syntax check
Serial Monitor with Graph Plotter
Dark Mode and DPI awareness
64-bit release
Debugging capability
One important feature Arduino IDE 2.0 provides is debugging. It allows users to single-step, insert breakpoints or view memory. Debugging requires a target chip with a debug port and a debug probe. The official Arduino Zero board can be debugged out of the box. Other official Arduino SAMD21 boards require a separate SEGGER J-Link or Atmel-ICE. For a third-party board, debugging in Arduino IDE 2.0 is also possible as long as the board supports GDB and OpenOCD and has a debug probe. The community has contributed debugging support for ATmega328P-based Arduinos, CH32 RISC-V boards, and others. Sketch A sketch is a program written with the Arduino IDE. Sketches are saved on the development computer as text files with the file extension .ino. Arduino Software (IDE) pre-1.0 saved sketches with the extension .pde. A minimal Arduino C/C++ program consists of only two functions: setup(): This function is called once when a sketch starts after power-up or reset. It is used to initialize variables, input and output pin modes, and other libraries needed in the sketch. It is analogous to the function main() in a conventional C program. loop(): After the setup() function exits (ends), the loop() function is executed repeatedly in the main program. It controls the board until the board is powered off or is reset. It is analogous to an endless loop inside main(). Blink example Most Arduino boards contain a light-emitting diode (LED) and a current-limiting resistor connected between pin 13 and ground, which is a convenient feature for many tests and program functions. A typical program used by beginners, akin to Hello, World!, is "blink", which repeatedly blinks the on-board LED integrated into the Arduino board. This program uses the functions pinMode(), digitalWrite(), and delay(), which are provided by the internal libraries included in the IDE environment. This program is usually loaded into a new Arduino board by the manufacturer.

const int LED_PIN = 13; // Pin number attached to LED.

void setup() {
  pinMode(LED_PIN, OUTPUT); // Configure pin 13 to be a digital output.
}

void loop() {
  digitalWrite(LED_PIN, HIGH); // Turn on the LED.
  delay(1000); // Wait 1 second (1000 milliseconds).
  digitalWrite(LED_PIN, LOW); // Turn off the LED.
  delay(1000); // Wait 1 second.
}

Libraries The open-source nature of the Arduino project has facilitated the publication of many free software libraries that other developers use to augment their projects. Operating systems/threading There is a Xinu OS port for the ATmega328P (Arduino Uno and others with the same chip), which includes most of the basic features. The source code of this version is freely available. There is also a threading tool, named Protothreads. Protothreads are described as "extremely lightweight stackless threads designed for severely memory constrained systems, such as small embedded systems or wireless sensor network nodes". There is a port of FreeRTOS for the Arduino, available from the Arduino Library Manager. It is compatible with a number of boards, including the Uno.
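As a minimal sketch of how such a FreeRTOS port can be used (an illustrative example only; the header name Arduino_FreeRTOS.h, the stack sizes, and the assumption that the scheduler starts automatically after setup() returns all depend on the particular port and library version), two tasks can blink the on-board LED and write to the serial port independently:

#include <Arduino_FreeRTOS.h>  // Assumed header name for the FreeRTOS port.

void TaskBlink(void *pvParameters);   // Forward declarations of the two tasks.
void TaskReport(void *pvParameters);

void setup() {
  Serial.begin(9600);  // Serial-over-USB for the reporting task.
  // Create two tasks with small stacks; a higher number means higher priority.
  xTaskCreate(TaskBlink, "Blink", 128, NULL, 2, NULL);
  xTaskCreate(TaskReport, "Report", 128, NULL, 1, NULL);
  // In many Arduino ports the scheduler starts automatically after setup()
  // returns; other ports require an explicit call to vTaskStartScheduler().
}

void loop() {
  // Empty: all work is done in the FreeRTOS tasks above.
}

void TaskBlink(void *pvParameters) {
  (void) pvParameters;
  pinMode(LED_BUILTIN, OUTPUT);
  for (;;) {
    digitalWrite(LED_BUILTIN, HIGH);
    vTaskDelay(500 / portTICK_PERIOD_MS);  // Block for 500 ms without busy-waiting.
    digitalWrite(LED_BUILTIN, LOW);
    vTaskDelay(500 / portTICK_PERIOD_MS);
  }
}

void TaskReport(void *pvParameters) {
  (void) pvParameters;
  for (;;) {
    Serial.println("still running");  // Runs concurrently with TaskBlink.
    vTaskDelay(1000 / portTICK_PERIOD_MS);
  }
}

Unlike the delay()-based blink sketch above, each task blocks in vTaskDelay(), so the scheduler is free to run the other task in the meantime.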
Applications
Arduboy, a handheld game console based on Arduino
Arduinome, a MIDI controller device that mimics the Monome
Ardupilot, drone software and hardware
ArduSat, a cubesat based on Arduino
C-STEM Studio, a platform for hands-on integrated learning of computing, science, technology, engineering, and mathematics (C-STEM) with robotics
Data loggers for scientific research
OBDuino, a trip computer that uses the on-board diagnostics interface found in most modern cars
OpenEVSE, an open-source electric vehicle charger
XOD, a visual programming language for Arduino
Simulation
Tinkercad, an analog and digital simulator supporting Arduino simulation; it is also commonly used to create 3D models
Recognitions
The Arduino project received an honorary mention in the Digital Communities category at the 2006 Prix Ars Electronica. The Arduino Engineering Kit won the Bett Award for "Higher Education or Further Education Digital Services" in 2020.
Technology
Specific hardware
null
5390350
https://en.wikipedia.org/wiki/Feral%20horse
Feral horse
A feral horse is a free-roaming horse of domesticated stock. As such, a feral horse is not a wild animal in the sense of an animal without domesticated ancestors. However, some populations of feral horses are managed as wildlife, and these horses often are popularly called "wild" horses. Feral horses are descended from domestic horses that strayed, escaped, or were deliberately released into the wild and remained to survive and reproduce there. Away from humans, over time, these animals' patterns of behavior revert to behavior more closely resembling that of wild horses. Some horses that live in a feral state but may be occasionally handled or managed by humans, particularly if privately owned, are referred to as "semi-feral". Feral horses live in groups called a herd, band, harem, or mob. Feral horse herds, like those of wild horses, are usually made up of small harems led by a dominant mare, containing additional mares, their foals, and immature horses of both sexes. There is usually one herd stallion, though occasionally a few less-dominant males may remain with the group. Horse "herds" in the wild are best described as groups of several small bands who share the same territory. Bands are normally on the small side, as few as three to five animals, but sometimes over a dozen. The makeup of bands shift over time as young animals are driven out of the band they were born into and join other bands, or as young stallions challenge older males for dominance. However, in a closed ecosystem (such as the isolated refuges in which most feral horses live today), to maintain genetic diversity, the minimum size for a sustainable free-roaming horse or burro population is 150–200 animals. Feral horse populations Americas The best-known examples of modern day "wild" horses are those of the American West. When Europeans reintroduced the horse to the Americas, beginning with the arrival of the Spanish conquistadors in the 15th century, some horses escaped and formed feral herds known today as mustangs. Isolated populations of wild horses occur in a number of places in the United States, including Assateague Island off the coast of Virginia and Maryland, Cumberland Island, Georgia, Vieques Island off the coast of Puerto Rico, and Sable Island off the coast of Nova Scotia, Canada. Some of these horses are said to be the descendants of horses that managed to swim to land when they were shipwrecked. Others may have been deliberately brought to various islands by settlers and either left to reproduce freely or abandoned when assorted human settlements failed. Many prehistoric horse species, now extinct, evolved in North America, but the wild horses of today are the offspring of horses that were domesticated in southern europe. In the Western United States, certain bands of horses and burros are protected under the Wild and Free-Roaming Horses and Burros Act of 1971. There are about 300,000 horses today in multiple land jurisdictions across the country, including tribal lands. Asia The only truly wild horses in existence today are Przewalski's horse native to the steppes of central Asia. A modern wild horse population (janghali ghura) is found in the Dibru-Saikhowa National Park and Biosphere reserve of Assam, in north-east India, and is a herd of about 79 horses descended from animals that escaped army camps during World War II. Europe In Portugal, a population of free-ranging horses, known as garrano, lives in the northern mountain chains. 
In County Kerry, Ireland, wild bog ponies have been known since at least the 1300s. More than 700 feral wild horses live in the foothills of Cincar Mountain, between Livno and Kupres, Bosnia and Herzegovina, in an area of roughly . These animals, which descend from horses set free by their owners in the 1950s, enjoy a protected status since 2010. In Sardinia lives the Giara Horse, a wild species which inhabits the Giara di Gesturi, a basaltic plateau in the southern central part of the island. The population is composed by about 700 horses. Oceania Australia has the largest population in the world, with about 400,000 horses. The Australian name equivalent to the mustang is the brumby, descendants of horses brought to Australia by British settlers. Modern feral horses Modern types of feral horses that have a significant percentage of their number living in a feral state, though domesticated representatives may exist, include these types, landraces, and breeds: Africa Kundudo horse, in the Kundudo region, Ethiopia; threatened with extinction Namib desert horse in Namibia North America see also Free-roaming horse management in North America Alberta Mountain Horse or Alberta Wildie, in the foothills of the Eastern Slopes of the Rocky Mountains of Alberta, Canada Banker horse on the Outer Banks of North Carolina, United States Chincoteague Pony on Assateague Island off the coasts of Virginia and Maryland, United States Cumberland Island horse on Cumberland Island off the coast of southern Georgia, United States Elegesi Qiyus Wild Horse (Cayuse) in the Nemaiah Valley, British Columbia, Canada Mustang in the western United States, legally protected by the Wild and Free-Roaming Horses and Burros Act of 1971 Nokota horse in North Dakota, United States Sable Island horse on Sable Island, Nova Scotia, Canada South America Lavradeiros in northern Brazil Small wild horses are established in the páramos of the Sierra Nevada de Santa Marta in Colombia and are believed to have descended from introductions made by Spanish conquistadors. A small population of feral horses lives in the foothills of Cordillera Real next to the city of La Paz in Bolivia; these individuals wander the high-altitude grassland up to 4700 m above sea level. The origin of this highly endangered herd is not well-known. The huge population that inhabits southern Brazil and the region of Patagonia, the Bagual. Asia Misaki horse in Cape Toi, Japan Delft Island Horse on Neduntheevu or Delft Island, Sri Lanka. Feral Horses are believed to be the descendants of horses kept on the island from the time of Dutch occupation in Sri Lanka. Yılkı horse in the Kızılırmak Delta and other places in Turkey Europe Danube Delta Horse, in and around Letea Forest, Romania Garrano, a feral horse native to northern Portugal Giara horse in Sardinia Marismeño in the Doñana National Park in Huelva, Spain Konik, predominantly domesticated, but the biggest feral herd in the world lives in the Oostvaardersplassen reserve in the Netherlands. Welsh Pony, mostly domesticated, but a feral population of about 180 animals roams the Carneddau hills of North Wales. Other populations roam the eastern parts of the Brecon Beacons National Park Oceania Brumby in Australia Kaimanawa horse in New Zealand Marquesas Islands horse on Ua Huka, Marquesas Islands, French Polynesia Semi-feral horses In the United Kingdom, herds of free-roaming ponies live in apparently wild conditions in various areas, notably Dartmoor, Exmoor, Cumbria (Fell Pony), and the New Forest. 
Similar horse and pony populations exist elsewhere on the European continent. These animals, however, are not truly feral, as all of them are privately owned, and roam out on the moors and forests under common grazing rights belonging to their owners. A proportion of them are halter-broken, and a smaller proportion broken to ride, but simply turned out for a while for any of a number of reasons (e.g., a break in training to allow them to grow on, a break from working to allow them to breed under natural conditions, or retirement). In other cases, the animals may be government-owned and closely managed on controlled reserves. Camargue horse, in marshes of the Rhone delta, southern France. Dartmoor pony, England; predominantly domesticated, also lives in semi-feral herds. Exmoor pony, England; predominantly domesticated, also lives in semi-feral herds. Fell pony, predominantly domesticated, also lives in semi-feral herds in northern England, particularly Cumbria. Gotlandsruss, lives in a semi-feral herd in Lojsta Moor on the Swedish Island of Gotland. New Forest pony, predominantly domesticated, also lives in semi-feral herds in the area of Hampshire, England. Pottok, predominantly domesticated, also lives in semi-feral herds in the western Pyrenees. Dülmen pony, a German pony that lives in a wild herd in Westphalia with little help by humans. Shetland pony, Scotland; predominantly used for riding, driving, and pack purposes. Population impacts Feral populations are usually controversial, with livestock producers often at odds with horse aficionados and other animal welfare advocates. Different habitats are impacted in different ways by feral horses. Where feral horses had wild ancestors indigenous to a region, a controlled population may have minimal environmental impact, particularly when their primary territory is one where they do not compete with domesticated livestock to any significant degree. However, in areas where they are an introduced species, such as Australia, or if the population is allowed to exceed available range, there can be significant impacts on soil, vegetation (including overgrazing), and animals that are native species. If a feral population lives close to civilization, their behavior can lead them to damage human-built livestock fencing and related structures. In some cases, where feral horses compete with domestic livestock, particularly on public lands where multiple uses are permitted, such as in the Western United States, there is considerable controversy over which species is most responsible for degradation of rangeland, with commercial interests often advocating for the removal of the feral horse population to allow more grazing for cattle or sheep, and advocates for feral horses recommending reduction in the numbers of domestic livestock allowed to graze on public lands. Certain populations have considerable historic or sentimental value, such as the Chincoteague pony that lives on Assateague Island, a national seashore with a delicate coastal ecosystem, or the Misaki pony of Japan that lives on a small refuge within the municipal boundaries of Kushima. These populations manage to thrive with careful management that includes using the animals to promote tourism to support the local economy. Most sustained feral populations are managed by various forms of culling, which, depending on the nation and other local conditions, may include capturing excess animals for adoption or sale. 
In some nations, management may include the often-controversial practice of selling captured animals for slaughter or simply shooting them. Fertility control is also sometimes used, though it is expensive and has to be repeated on a regular basis.
Biology and health sciences
Equidae
Animals
5391037
https://en.wikipedia.org/wiki/Constant%20of%20motion
Constant of motion
In mechanics, a constant of motion is a physical quantity conserved throughout the motion, imposing in effect a constraint on the motion. However, it is a mathematical constraint, the natural consequence of the equations of motion, rather than a physical constraint (which would require extra constraint forces). Common examples include energy, linear momentum, angular momentum and the Laplace–Runge–Lenz vector (for inverse-square force laws). Applications Constants of motion are useful because they allow properties of the motion to be derived without solving the equations of motion. In fortunate cases, even the trajectory of the motion can be derived as the intersection of isosurfaces corresponding to the constants of motion. For example, Poinsot's construction shows that the torque-free rotation of a rigid body is the intersection of a sphere (conservation of total angular momentum) and an ellipsoid (conservation of energy), a trajectory that might be otherwise hard to derive and visualize. Therefore, the identification of constants of motion is an important objective in mechanics. Methods for identifying constants of motion There are several methods for identifying constants of motion. The simplest but least systematic approach is the intuitive ("psychic") derivation, in which a quantity is hypothesized to be constant (perhaps because of experimental data) and later shown mathematically to be conserved throughout the motion. The Hamilton–Jacobi equations provide a commonly used and straightforward method for identifying constants of motion, particularly when the Hamiltonian adopts recognizable functional forms in orthogonal coordinates. Another approach is to recognize that a conserved quantity corresponds to a symmetry of the Lagrangian. Noether's theorem provides a systematic way of deriving such quantities from the symmetry. For example, conservation of energy results from the invariance of the Lagrangian under shifts in the origin of time, conservation of linear momentum results from the invariance of the Lagrangian under shifts in the origin of space (translational symmetry) and conservation of angular momentum results from the invariance of the Lagrangian under rotations. The converse is also true; every symmetry of the Lagrangian corresponds to a constant of motion, often called a conserved charge or current. A quantity A is a constant of the motion if its total time derivative vanishes,

    \frac{dA}{dt} = \frac{\partial A}{\partial t} + \{A, H\} = 0,

which occurs when A's Poisson bracket with the Hamiltonian equals minus its partial derivative with respect to time,

    \{A, H\} = -\frac{\partial A}{\partial t}.

Another useful result is Poisson's theorem, which states that if two quantities A and B are constants of motion, so is their Poisson bracket \{A, B\}. A system with n degrees of freedom, and n constants of motion, such that the Poisson bracket of any pair of constants of motion vanishes, is known as a completely integrable system. Such a collection of constants of motion is said to be in involution with each other. For a closed system (Lagrangian not explicitly dependent on time), the energy of the system is a constant of motion (a conserved quantity).
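As a brief worked example of the Poisson-bracket criterion (a sketch assuming a particle of mass m moving in a central potential V(r)), the z-component of angular momentum is easily shown to be conserved:

    L_z = x p_y - y p_x, \qquad H = \frac{p_x^2 + p_y^2 + p_z^2}{2m} + V(r), \qquad r = \sqrt{x^2 + y^2 + z^2},

    \{L_z, H\} = \sum_i \left( \frac{\partial L_z}{\partial q_i}\frac{\partial H}{\partial p_i} - \frac{\partial L_z}{\partial p_i}\frac{\partial H}{\partial q_i} \right)
               = p_y \frac{p_x}{m} - p_x \frac{p_y}{m} - \left( -y\,\frac{x}{r}V'(r) + x\,\frac{y}{r}V'(r) \right) = 0.

Since L_z also has no explicit time dependence, \partial L_z/\partial t = 0, the criterion gives dL_z/dt = 0, so L_z is a constant of motion, consistent with Noether's theorem applied to the rotational symmetry of V(r).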
In quantum mechanics An observable quantity Q will be a constant of motion if it commutes with the Hamiltonian, H, and it does not itself depend explicitly on time. This is because

    \frac{d}{dt}\langle Q \rangle = \frac{1}{i\hbar}\langle [Q, H] \rangle + \left\langle \frac{\partial Q}{\partial t} \right\rangle,

where [Q, H] = QH - HQ is the commutator. Derivation Say there is some observable quantity Q which depends on position, momentum and time,

    Q = Q(x, p, t),

and also that there is a wave function \psi which obeys Schrödinger's equation,

    i\hbar \frac{\partial \psi}{\partial t} = H \psi.

Taking the time derivative of the expectation value of Q requires use of the product rule, and results in

    \frac{d}{dt}\langle Q \rangle = \frac{d}{dt}\langle \psi | Q | \psi \rangle
    = \left\langle \frac{\partial \psi}{\partial t} \Big| Q \Big| \psi \right\rangle + \left\langle \psi \Big| \frac{\partial Q}{\partial t} \Big| \psi \right\rangle + \left\langle \psi \Big| Q \Big| \frac{\partial \psi}{\partial t} \right\rangle
    = -\frac{1}{i\hbar}\langle \psi | HQ | \psi \rangle + \left\langle \frac{\partial Q}{\partial t} \right\rangle + \frac{1}{i\hbar}\langle \psi | QH | \psi \rangle.

So finally,

    \frac{d}{dt}\langle Q \rangle = \frac{1}{i\hbar}\langle [Q, H] \rangle + \left\langle \frac{\partial Q}{\partial t} \right\rangle.

Comment For an arbitrary state of a quantum mechanical system, if H and Q commute, i.e. if [Q, H] = 0, and Q is not explicitly dependent on time, then

    \frac{d}{dt}\langle Q \rangle = 0.

But if \psi is an eigenfunction of the Hamiltonian, then even if [Q, H] \neq 0 it is still the case that

    \frac{d}{dt}\langle Q \rangle = 0,

provided Q is independent of time. Derivation Since H|\psi\rangle = E|\psi\rangle and \langle\psi|H = E\langle\psi|, then

    \frac{1}{i\hbar}\langle [Q, H] \rangle = \frac{1}{i\hbar}\big( \langle \psi | QH | \psi \rangle - \langle \psi | HQ | \psi \rangle \big) = \frac{1}{i\hbar}\big( E\langle \psi | Q | \psi \rangle - E\langle \psi | Q | \psi \rangle \big) = 0.

This is the reason why eigenstates of the Hamiltonian are also called stationary states. Relevance for quantum chaos In general, an integrable system has constants of motion other than the energy. By contrast, energy is the only constant of motion in a non-integrable system; such systems are termed chaotic. In general, a classical mechanical system can be quantized only if it is integrable; as of , there is no known consistent method for quantizing chaotic dynamical systems. Integral of motion A constant of motion may be defined in a given force field as any function of phase-space coordinates (position and velocity, or position and momentum) and time that is constant throughout a trajectory. A subset of the constants of motion are the integrals of motion, or first integrals, defined as any functions of only the phase-space coordinates that are constant along an orbit. Every integral of motion is a constant of motion, but the converse is not true because a constant of motion may depend on time. Examples of integrals of motion are the angular momentum vector, \mathbf{L} = \mathbf{r} \times m\mathbf{v}, or a Hamiltonian without time dependence, such as H(\mathbf{q}, \mathbf{p}). An example of a function that is a constant of motion but not an integral of motion would be the function C(x, v, t) = x - vt for an object moving at a constant speed in one dimension. Dirac observables In order to extract physical information from gauge theories, one either constructs gauge invariant observables or fixes a gauge. In a canonical language, this usually means either constructing functions which Poisson-commute on the constraint surface with the gauge generating first class constraints or to fix the flow of the latter by singling out points within each gauge orbit. Such gauge invariant observables are thus the 'constants of motion' of the gauge generators and are referred to as Dirac observables.
Physical sciences
Classical mechanics
Physics
5393980
https://en.wikipedia.org/wiki/Phylliidae
Phylliidae
The family Phylliidae (often misspelled Phyllidae) contains the extant true leaf insects or walking leaves, which include some of the most remarkably camouflaged leaf mimics (mimesis) in the entire animal kingdom. They occur from South Asia through Southeast Asia to Australia. Earlier sources treat Phylliidae as a much larger taxon, containing genera in what are presently considered to be several different families. Characteristics Leaf insects are well camouflaged, taking on the appearance of leaves. They do this so accurately that predators often are not able to distinguish them from real leaves. In some species, the edge of the leaf insect's body has the appearance of bite marks. To further confuse predators, when the leaf insect walks, it rocks back and forth, mimicking a real leaf being blown by the wind. The scholar Antonio Pigafetta was probably the first Western person to document the creature, though it had long been known to people in the tropics. Sailing with Ferdinand Magellan's circumnavigational expedition, he studied and chronicled the fauna on the island of Cimbonbon as the fleet hauled ashore for repairs, and documented the Phyllium species in a passage of his account. Tribes, genera and species The subfamily Phylliinae has been divided into two tribes since 2003. This classification is not confirmed by more recent molecular genetic investigations. In addition to the fossil genus Eophyllium, the subfamily comprises thirteen recent genera, eight of which have been described since 2017. Within Phyllium, several subgenera were previously recognized: Pulchriphyllium Griffini, 1898, Comptaphyllium and Walaphyllium. Following a 2021 phylogeny, all three subgenera are now considered separate genera. Since 2021, molecular genetic studies, in addition to morphological ones, have increasingly been used to clarify the phylogeny of the Phylliidae. Their results show the general relationship between the genera, but when comparing female and male representatives, they do not yet provide a clear phylogenetic picture of the recent genera. Cladograms of the Phylliidae species have been determined on the basis of molecular genetic analysis and morphological investigations by Cumming and Le Tirant (2022). The Phasmida Species File (V. 5.0) lists the following genera in two tribes:
Phylliini
Chitoniscus (Pacific): Chitoniscus feejeeanus, Chitoniscus lobipes, Chitoniscus lobiventris – type species (as Phyllium lobiventre)
Comptaphyllium (Australasia): Comptaphyllium caudatum – type species (as Phyllium caudatum), Comptaphyllium regina, Comptaphyllium riedeli
Cryptophyllium (SE Asia). Selected species: Cryptophyllium athanysus, Cryptophyllium celebicum – type species (as Phyllium celebicum), Cryptophyllium westwoodii
Microphyllium (Northern Philippine Islands): Microphyllium haskelli, Microphyllium spinithorax – type species
Phyllium (Sundaland, Philippine Islands, Wallacea, Australasia). Selected species: Phyllium bilobatum, Phyllium hausleithneri, Phyllium jacobsoni, Phyllium letiranti, Phyllium siccifolium – type species (as Gryllus siccifolius)
Pseudomicrophyllium (Northern Philippine Islands): Pseudomicrophyllium geryon, Pseudomicrophyllium pusillulum – type species (as Pseudomicrophyllium faulkneri)
Pulchriphyllium (Seychelles, India, Western Indonesia, continental Asia). Selected species: Pulchriphyllium bioculatum, Pulchriphyllium giganteum, Pulchriphyllium pulchrifolium – type species (as Phyllium pulchrifolium)
Rakaphyllium (New Guinea and Ayu Islands): Rakaphyllium exsectum, Rakaphyllium schultzei – type species (as Pulchriphyllium schultzei)
Trolicaphyllium (Pacific): Trolicaphyllium brachysoma – type species (as Phyllium brachysoma), Trolicaphyllium erosus, Trolicaphyllium sarrameaense
Vaabonbonphyllium (New Guinea and Solomon Islands): Vaabonbonphyllium groesseri – type species (as Phyllium groesseri), Vaabonbonphyllium rafidahae
Walaphyllium (Australasia): Walaphyllium lelantos, Walaphyllium monteithi, Walaphyllium zomproi – type species (as Phyllium zomproi)
Nanophylliini
Acentetaphyllium (New Guinea): Acentetaphyllium brevipenne – type species (as Phyllium brevipennis), Acentetaphyllium larssoni, Acentetaphyllium miyashitai, Acentetaphyllium stellae
Nanophyllium (Southern Indonesia, New Guinea, NE Australia): Nanophyllium adisi, Nanophyllium asekiense, Nanophyllium australianum, Nanophyllium chitoniscoides, Nanophyllium daphne, Nanophyllium frondosum, Nanophyllium hasenpuschi, Nanophyllium keyicum, Nanophyllium pygmaeum – type species, Nanophyllium rentzi, Nanophyllium suzukii
Captivity Several species have gained in popularity as pets, including Cryptophyllium celebicum, Cryptophyllium westwoodii, Phyllium jacobsoni, Phyllium ericoriai, Phyllium siccifolium, Phyllium letiranti, Phyllium monteithi, Phyllium philippinicum, Phyllium rubrum, Phyllium tobeloense, Pulchriphyllium bioculatum and Pulchriphyllium giganteum. Extinct species A 47-million-year-old fossil of Eophyllium messelensis, a prehistoric ancestor of Phylliidae, displays many of the same characteristics of modern leaf insects, indicating that this family has changed little over time.
Biology and health sciences
Insects: General
Animals
26161590
https://en.wikipedia.org/wiki/Train%20station
Train station
A train station, railroad station, or railroad depot (mainly North American terminology) and railway station (mainly UK and other Anglophone countries) is a railway facility where trains stop to load or unload passengers, freight, or both. It generally consists of at least one platform, one track, and a station building providing such ancillary services as ticket sales, waiting rooms, and baggage/freight service. Stations on a single-track line often have a passing loop to accommodate trains travelling in the opposite direction. Locations at which passengers only occasionally board or leave a train, sometimes consisting of a short platform and a waiting area but sometimes indicated by no more than a sign, are variously referred to as "stops", "flag stops", "halts", or "provisional stopping places". The stations themselves may be at ground level, underground, or elevated. Connections may be available to intersecting rail lines or other transport modes such as buses, trams, or other rapid transit systems. Terminology Train station is the terminology typically used in the U.S. In Europe, the terms train station and railway station are both commonly used, with railroad being obsolete. In British Commonwealth usage, where railway station is the traditional term, the word station is commonly understood to mean a railway station unless otherwise specified. In the United States, the term depot is sometimes used as an alternative name for station, along with the compound forms train depot, railway depot, and railroad depot—it is used for both passenger and freight facilities. The term depot is not used in reference to vehicle maintenance facilities in the U.S., whereas it is used as such in Canada and the United Kingdom. History The world's first recorded railway station, for trains drawn by horses rather than engined locomotives, began passenger service in 1807. It was The Mount in Swansea, Wales, on the Oystermouth (later the Swansea and Mumbles) Railway. The world's oldest station for engined trains was at Heighington, on the Stockton and Darlington railway in north-east England built by George Stephenson in the early 19th century, operated by locomotive Locomotion No. 1. The station opened in 1827 and was in use until the 1970s. The building, Grade II*-listed, was in bad condition, but was restored in 1984 as an inn. The inn closed in 2017; in 2024 there were plans to renovate the derelict station in time for the 200th anniversary of the opening of the railway line. The two-storey Mount Clare station in Baltimore, Maryland, United States, which survives as a museum, first saw passenger service as the terminus of the horse-drawn Baltimore and Ohio Railroad on 22 May 1830. The oldest terminal station in the world was Crown Street railway station in Liverpool, England, built in 1830, on the locomotive-hauled Liverpool to Manchester line. The station was slightly older than the still extant Liverpool Road railway station terminal in Manchester. The station was the first to incorporate a train shed. Crown Street station was demolished in 1836, as the Liverpool terminal station moved to Lime Street railway station. Crown Street station was converted to a goods station terminal. The first stations had little in the way of buildings or amenities. The first stations in the modern sense were on the Liverpool and Manchester Railway, opened in 1830. Manchester's Liverpool Road Station, the second oldest terminal station in the world, is preserved as part of the Museum of Science and Industry in Manchester. 
It resembles a row of Georgian houses. Early stations were sometimes built with both passenger and freight facilities, though some railway lines were goods-only or passenger-only, and if a line was dual-purpose there would often be a freight depot apart from the passenger station. This type of dual-purpose station can sometimes still be found today, though in many cases goods facilities are restricted to major stations. Many stations date from the 19th century and reflect the grandiose architecture of the time, lending prestige to the city as well as to railway operations. Countries where railways arrived later may still have such architecture, as later stations often imitated 19th-century styles. Various forms of architecture have been used in the construction of stations, from those boasting grand, intricate, Baroque- or Gothic-style edifices, to plainer utilitarian or modernist styles. Stations in Europe tended to follow British designs and were in some countries, like Italy, financed by British railway companies. Train stations built more recently often have a similar feel to airports, with a simple, abstract style. Examples of modern stations include those on newer high-speed rail networks, such as the Shinkansen in Japan, THSR in Taiwan, TGV lines in France, and ICE lines in Germany. Facilities Stations normally have staffed ticket sales offices, automated ticket machines, or both, although on some lines tickets are sold on board the trains. Many stations include a shop or convenience store. Larger stations usually have fast-food or restaurant facilities. In some countries, stations may also have a bar or pub. Other station facilities may include: toilets, left-luggage, lost-and-found, departures and arrivals schedules, luggage carts, waiting rooms, taxi ranks, bus bays and even car parks. Larger or staffed stations tend to have a greater range of facilities including also a station security office. These are usually open for travellers when there is sufficient traffic over a long enough period of time to warrant the cost. In large cities this may mean facilities available around the clock. A basic station might only have platforms, though it may still be distinguished from a halt, a stopping or halting place that may not even have platforms. Many stations, either larger or smaller, offer interchange with local transportation; this can vary from a simple bus stop across the street to underground rapid-transit urban rail stations. In many African, South American, and Asian countries, stations are also used as a place for public markets and other informal businesses. This is especially true on tourist routes or stations near tourist destinations. As well as providing services for passengers and loading facilities for goods, stations can sometimes have locomotive and rolling stock depots, usually with facilities for storing and refuelling rolling stock and carrying out minor repairs. Configurations The basic configuration of a station and various other features set certain types apart. The first is the level of the tracks. Stations are often sited where a road crosses the railway: unless the crossing is a level crossing, the road and railway will be at different levels. The platforms will often be raised or lowered relative to the station entrance: the station buildings may be on either level, or both. The other arrangement, where the station entrance and platforms are on the same level, is also common, but is perhaps rarer in urban areas, except when the station is a terminus. 
Stations located at level crossings can be problematic if the train blocks the roadway while it stops, causing road traffic to wait for an extended period of time. Stations also exist where the station buildings are above the tracks. An example of this is Arbroath. Occasionally, a station serves two or more railway lines at differing levels. This may be due to the station's position at a point where two lines cross (example: Berlin Hauptbahnhof), or may be to provide separate station capacity for two types of service, such as intercity and suburban (examples: Paris-Gare de Lyon and Philadelphia's 30th Street Station), or for two different destinations. Stations may also be classified according to the layout of the platforms. Apart from single-track lines, the most basic arrangement is a pair of tracks for the two directions; there is then a basic choice of an island platform between, two separate platforms outside the tracks (side platforms), or a combination of the two. With more tracks, the possibilities expand. Some stations have unusual platform layouts due to space constraints of the station location, or the alignment of the tracks. Examples include staggered platforms, such as at Tutbury and Hatton railway station on the Crewe–Derby line, and curved platforms, such as Cheadle Hulme railway station on the Macclesfield to Manchester Line. Stations at junctions can also have unusual shapes – a Keilbahnhof (or "wedge-shaped" station) is sited where two lines split. Triangular stations also exist where two lines form a three-way junction and platforms are built on all three sides, for example and stations. Tracks In a station, there are different types of tracks to serve different purposes. A station may also have a passing loop with a loop line that comes off the straight main line and merge back to the main line on the other end by railroad switches to allow trains to pass. A track with a spot at the station to board and disembark trains is called station track or house track regardless of whether it is a main line or loop line. If such track is served by a platform, the track may be called platform track. A loop line without a platform, which is used to allow a train to clear the main line at the station only, is called passing track. A track at the station without a platform which is used for trains to pass the station without stopping is called through track. There may be other sidings at the station which are lower speed tracks for other purposes. A maintenance track or a maintenance siding, usually connected to a passing track, is used for parking maintenance equipment, trains not in service, autoracks or sleepers. A refuge track is a dead-end siding that is connected to a station track as a temporary storage of a disabled train. Terminus A "terminus" or "terminal" is a station at the end of a railway line. Trains arriving there have to end their journeys (terminate) or reverse out of the station. Depending on the layout of the station, this usually permits travellers to reach all the platforms without the need to cross any tracks – the public entrance to the station and the main reception facilities being at the far end of the platforms. Sometimes the track continues for a short distance beyond the station, and terminating trains continue forward after depositing their passengers, before either proceeding to sidings or reversing to the station to pick up departing passengers. Bondi Junction, Australia and Kristiansand Station, Norway are examples. 
A terminus is frequently, but not always, the final destination of trains arriving at the station. Especially in continental Europe, a city may have a terminus as its main railway station, and all main lines converge on it. In such cases all trains arriving at the terminus must leave in the reverse direction from that of their arrival. There are several ways in which this can be accomplished: arranging for the service to be provided by a multiple-unit or push–pull train, both of which are capable of operating in either direction; the driver simply walks to the other end of the train and takes control from the other cab; this is increasingly the normal method in Europe; and is very common in North America; by detaching the locomotive which brought the train into the station and then either using another track to "run it around" to the other end of the train, to which it then re-attaches; attaching a second locomotive to the outbound end of the train; or by the use of a "wye", a roughly triangular arrangement of track and switches (points) where a train can reverse direction and back into the terminal; historically, turntables were used to reverse steam engines. There may also be a bypass line, used by freight trains that do not need to stop at the terminus. Some termini have a newer set of through platforms underneath (or above, or alongside) the terminal platforms on the main level. They are used by a cross-city extension of the main line, often for commuter trains, while the terminal platforms may serve long-distance services. Examples of underground through lines include the Thameslink platforms at in London, the Argyle and North Clyde lines of Glasgow's suburban rail network, in Antwerp in Belgium, the RER at the Gare du Nord in Paris, the Milan suburban railway service's Passante railway, and many of the numerous S-Bahn lines at terminal stations in Germany, Austria and Switzerland, such as at Zürich Hauptbahnhof. Due to the disadvantages of terminus stations there have been multiple cases in which one or several terminus stations were replaced with a new through-station, including the cases of Berlin Hauptbahnhof, Vienna Hauptbahnhof and numerous examples throughout the first century of railroading. Stuttgart 21 is a controversial project involving the replacement of a terminus station by a through-station. An American example of a terminal with this feature is Union Station in Washington, DC, where there are bay platforms on the main concourse level to serve terminating trains and standard island platforms one level below to serve trains continuing southward. The lower tracks run in a tunnel beneath the concourse and emerge a few blocks away to cross the Potomac River into Virginia. Terminus stations in large cities are by far the biggest stations, with the largest being Grand Central Terminal in New York City. Other major cities, such as London, Boston, Paris, Istanbul, Tokyo, and Milan have more than one terminus, rather than routes straight through the city. Train journeys through such cities often require alternative transport (metro, bus, taxi or ferry) from one terminus to the other. For instance, in Istanbul transfers from the Sirkeci Terminal (the European terminus) and the Haydarpaşa Terminal (the Asian terminus) historically required crossing the Bosphorus via alternative means, before the Marmaray railway tunnel linking Europe and Asia was completed. Some cities, including New York, have both termini and through lines. 
Terminals that have competing rail lines using the station frequently set up a jointly owned terminal railroad to own and operate the station and its associated tracks and switching operations. Stop During a journey, the term station stop may be used in announcements, to differentiate halts during which passengers may alight and halts for another reasons, such as a locomotive change. While a junction or interlocking usually divides two or more lines or routes, and thus has remotely or locally operated signals, a station stop does not. A station stop usually does not have any tracks other than the main tracks, and may or may not have switches (points, crossovers). Intermediate station An intermediate station does not have any other connecting route, unlike branch-off stations, connecting stations, transfer stations and railway junctions. In a broader sense, an intermediate station is generally any station on the route between its two terminal stations. The majority of stations are, in practice, intermediate stations. They are mostly designed as through stations; there are only a few intermediate stations that take the form of a stub-end station, for example at some zigzags. If there is a station building, it is usually located to the side of the tracks. In the case of intermediate stations used for both passenger and freight traffic, there is a distinction between those where the station building and goods facilities are on the same side of the tracks and those in which the goods facilities are on the opposite side of the tracks from the station building. Intermediate stations also occur on some funicular and cable car routes. Halt A halt, in railway parlance in the Commonwealth of Nations, Ireland and Portugal, is a small station, usually unstaffed or with very few staff, and with few or no facilities. In some cases, trains stop only on request, when passengers on the platform indicate that they wish to board, or passengers on the train inform the crew that they wish to alight. These can sometimes appear with signals and sometimes without. United Kingdom The Great Western Railway in Great Britain began opening haltes on 12 October 1903; from 1905, the French spelling was Anglicised to "halt". These GWR halts had the most basic facilities, with platforms long enough for just one or two carriages; some had no raised platform at all, necessitating the provision of steps on the carriages. Halts were normally unstaffed, tickets being sold on the train. On 1 September 1904, a larger version, known on the GWR as a "platform" instead of a "halt", was introduced; these had longer platforms, and were usually staffed by a senior grade porter, who sold tickets and sometimes booked parcels or milk consignments. From 1903 to 1947 the GWR built 379 halts and inherited a further 40 from other companies at the Grouping of 1923. Peak building periods were before the First World War (145 built) and 1928–1939 (198 built). Ten more were opened by British Rail on ex-GWR lines. The GWR also built 34 "platforms". Many such stops remain on the national railway networks in the United Kingdom, such as in North Wales, in Shropshire, and in Warwickshire, where passengers are requested to inform a member of on-board train staff if they wish to alight, or, if catching a train from the station, to make themselves clearly visible to the driver and use a hand signal as the train approaches. Most have had "Halt" removed from their names. Two publicly advertised and publicly accessible National Rail stations retain it: and . 
A number of other halts are still open and operational on privately owned, heritage, and preserved railways throughout the British Isles. The word is often used informally to describe national rail network stations with limited service and low usage, such as the Oxfordshire Halts on the Cotswold Line. It has also sometimes been used for stations served by public services but accessible only by persons travelling to/from an associated factory (for example IBM near Greenock and British Steel Redcar– although neither of these is any longer served by trains), or military base (such as Lympstone Commando) or railway yard. The only two such "private" stopping places on the national system, where the "halt" designation is still officially used, seem to be Staff Halt (at Durnsford Road, Wimbledon) and Battersea Pier Sidings Staff Halt, both of which are solely for railway staff. Other countries In Portugal, railway stops are called halts (). In Ireland, a few small railway stations are designated as "halts" (, sing. ). In some Commonwealth countries the term "halt" is used. In Australia, with its sparse rural populations, such stopping places were common on lines that were still open for passenger traffic. In the state of Victoria, for example, a location on a railway line where a small diesel railcar or railmotor could stop on request, allowing passengers to board or alight, was called a "rail motor stopping place" (RMSP). Usually situated near a level crossing, it was often designated solely by a sign beside the railway. The passenger could hail the driver to stop, and could buy a ticket from the train guard or conductor. In South Australia, such facilities were called "provisional stopping places". They were often placed on routes on which "school trains" (services conveying children from rural localities to and from school) operated. In West Malaysia, halts are commonplace along the less developed KTM East Coast railway line to serve rural 'kampongs' (villages), that require train services to stay connected to important nodes, but do not have a need for staff. People boarding at halts who have not bought tickets online can buy it through staff on board. In rural and remote communities across Canada and the United States, passengers wanting to board the train at such places had to flag the train down to stop it, hence the name "flag stops" or "flag stations". Accessibility Accessibility for disabled people is mandated by law in some countries. Considerations include: Elevators or ramps to every platform are necessary for people in wheelchairs who cannot use stairs, and also allow those with prams, bicycles, and luggage to reach the platform more easily and safely Minimising the platform gap in both height and width. This also requires rolling stock with appropriate dimensions. At some stations, a railway worker can install a temporary ramp to allow people in wheelchairs to board. Relying on temporary ramps can lead to people in wheelchairs becoming stranded on a train or platform if a staff member fails to show up to deploy the ramp. Station facilities such as accessible toilets, payphones, and audible announcements Tactile paving to warn visually impaired people that they are approaching a platform edge. Platform screen doors also physically prevent people from falling from the platform edge. In the United Kingdom, rail operators will arrange alternative transport (typically a taxi) at no extra cost to the ticket holder if the station they intend to travel to or from is inaccessible. 
Goods stations Goods or freight stations deal exclusively or predominantly with the loading and unloading of goods and may well have marshalling yards (classification yards) for the sorting of wagons. The world's first goods terminal was the Park Lane Goods Station at the South End Liverpool Docks. Built in 1830, the terminal was reached by a tunnel. As goods are increasingly moved by road, many former goods stations, as well as the goods sheds at passenger stations, have closed. Many are used purely for the cross-loading of freight and may be known as transshipment stations, where they primarily handle containers. They are also known as container stations or terminals.
Records
Worldwide
The world's busiest passenger station, with a passenger throughput of 3.5 million passengers per day (1.27 billion per year), is Shinjuku Station in Tokyo.
The world's station with the most platforms is Grand Central Terminal in New York City, with 44 platforms.
The station with the world's longest platform is Hubli Junction railway station in Karnataka, India.
The world's highest station above ground level (not above sea level) is Hualongqiao station in Chongqing, with Line 9 trains stopping 48 meters above the surface.
Coney Island – Stillwell Avenue in New York City is the world's largest elevated terminal, with 8 tracks and 4 island platforms.
Shanghai South railway station, opened in June 2006, has the world's largest circular transparent roof.
Europe
Busiest: Gare du Nord, in Paris, is the busiest railway station in Europe by number of travellers, at around 214 million per year; it is the 24th busiest in the world and the busiest outside Japan. Clapham Junction, in London, is Europe's busiest station by daily rail traffic, with 100 to 180 trains per hour passing through. Zürich HB is the busiest terminus in Europe by volume of rail traffic.
Largest: Leipzig Hbf is the biggest railway station in Europe in terms of floor area. München Hbf and Rome Termini are the largest railway stations in Europe by number of platforms (32). Milan Centrale is the largest railway station in Europe by volume. Nuremberg Central Station is the largest through station by number of platforms (22).
Highest: Jungfraujoch railway station is the highest railway station on the European continent.
North America
New York Penn Station is the busiest station in the Western Hemisphere. Toronto's Union Station is the busiest station in Canada.
Technology
Trains
null
26162030
https://en.wikipedia.org/wiki/Public%20transport
Public transport
Public transport (also known as public transportation, public transit, mass transit, or simply transit) is a system of transport for passengers by group travel systems available for use by the general public unlike private transport, typically managed on a schedule, operated on established routes, and that may charge a posted fee for each trip. There is no rigid definition of which kinds of transport are included, and air travel is often not thought of when discussing public transport—dictionaries use wording like "buses, trains, etc." Examples of public transport include city buses, trolleybuses, trams (or light rail) and passenger trains, rapid transit (metro/subway/underground, etc.) and ferries. Public transport between cities is dominated by airlines, coaches, and intercity rail. High-speed rail networks are being developed in many parts of the world. Most public transport systems run along fixed routes with set embarkation/disembarkation points to a prearranged timetable, with the most frequent services running to a headway (e.g.: "every 15 minutes" as opposed to being scheduled for any specific time of the day). However, most public transport trips include other modes of travel, such as passengers walking or catching bus services to access train stations. Share taxis offer on-demand services in many parts of the world, which may compete with fixed public transport lines, or complement them, by bringing passengers to interchanges. Paratransit is sometimes used in areas of low demand and for people who need a door-to-door service. Urban public transit differs distinctly among Asia, North America, and Europe. In Asia, profit-driven, privately owned and publicly traded mass transit and real estate conglomerates predominantly operate public transit systems. In North America, municipal transit authorities most commonly run mass transit operations. In Europe, both state-owned and private companies predominantly operate mass transit systems. For geographical, historical and economic reasons, differences exist internationally regarding the use and extent of public transport. The International Association of Public Transport (UITP) is the international network for public transport authorities and operators, policy decision-makers, scientific institutes and the public transport supply and service industry. It has over 1,900 members from more than 100 countries from all over the globe. In recent years, some high-wealth cities have seen a decline in public transport usage. A number of sources attribute this trend to the rise in popularity of remote work, ride-sharing services, and car loans being relatively cheap across many countries. Major cities such as Toronto, Paris, Chicago, and London have seen this decline and have attempted to intervene by cutting fares and encouraging new modes of transportation, such as e-scooters and e-bikes. Because of the reduced emissions and other environmental impacts of using public transportation over private transportation, many experts have pointed to an increased investment in public transit as an important climate change mitigation tactic. History Conveyances designed for public hire are as old as the first ferry service. The earliest public transport was water transport. Ferries appear in Greek mythology writings. The mystical ferryman Charon had to be paid and would only then take passengers to Hades. 
Some historical forms of public transport include the stagecoaches traveling a fixed route between coaching inns, and the horse-drawn boat carrying paying passengers, which was a feature of European canals from the 17th century onwards. The canal itself as a form of infrastructure dates back to antiquity. In ancient Egypt canals were used for freight transportation to bypass the Aswan cataract. The Chinese also built canals for water transportation as far back as the Warring States period, which began in the 5th century BCE. Whether or not those canals were used for for-hire public transport remains unknown; the Grand Canal in China (begun in 486 BCE) served primarily the grain trade. The bus, the first organized public transit system within a city, appears to have originated in Paris in 1662, although the service in question, Carrosses à cinq sols (English: five-sol coaches), which had been developed by the mathematician and philosopher Blaise Pascal, lasted only fifteen years, until 1677. Buses are known to have operated in Nantes in 1826. The public bus transport system was introduced to London in July 1829. The first horse-drawn passenger railway service opened in 1806, running along the Swansea and Mumbles Railway. In 1825 George Stephenson built the Locomotion No 1 for the Stockton and Darlington Railway in northeast England, the first public steam railway in the world. The world's first steam-powered underground railway opened in London in 1863. The first successful electric streetcar was built for 11 miles of track for the Union Passenger Railway in Richmond, Virginia, in 1888. Electric streetcars could carry heavier passenger loads than their predecessors, which reduced fares and stimulated greater transit use. Two years after the Richmond success, over thirty-two thousand electric streetcars were operating in America. Electric streetcars also paved the way for the first subway system in America. Before electric streetcars, steam-powered subways were considered. However, most people believed that riders would avoid the smoke-filled subway tunnels from the steam engines. In 1897, Boston opened the first subway in the United States, an electric streetcar line in a 1.5-mile tunnel under Tremont Street's retail district. Other cities quickly followed, constructing thousands of miles of subway in the following decades. In March 2020, Luxembourg abolished fares for trains, trams and buses and became the first country in the world to make all public transport free. The Encyclopædia Britannica specifies that public transportation is within urban areas, but does not limit its discussion of the topic to urban areas. Types of public transport Comparing modes Seven criteria are used to estimate the usability of different types of public transport and their overall appeal. The criteria are speed, comfort, safety, cost, proximity, timeliness and directness. Speed is calculated from total journey time including transfers. Proximity means how far passengers must walk or otherwise travel before they can begin the public transport leg of their journey and how close it leaves them to their desired destination. Timeliness is how long they must wait for the vehicle. Directness records how far a journey using public transport deviates from a passenger's ideal route. In selecting between competing modes of transport, many individuals are strongly motivated by direct cost (travel fare/ticket price to them) and convenience, as well as being informed by habit.
The same individual may accept the lost time and statistically higher risk of accident in private transport, together with the initial, running and parking costs. Loss of control, spatial constriction, overcrowding, high speeds/accelerations, height and other phobias may discourage use of public transport. Actual travel time on public transport becomes a lesser consideration when predictable and when travel itself is reasonably comfortable (seats, toilets, services), and can thus be scheduled and used pleasurably, productively or for (overnight) rest. Chauffeured movement is enjoyed by many people when it is relaxing, safe, but not too monotonous. Waiting, interchanging, stops and holdups, for example due to traffic or for security, are discomforting. Jet lag is a human constraint discouraging frequent rapid long-distance east–west commuting, favoring modern telecommunications and VR technologies. Airline An airline provides scheduled service with aircraft between airports. Air travel has high speeds, but incurs large waiting times before and after travel, and is therefore often only feasible over longer distances or in areas where a lack of surface infrastructure makes other modes of transport impossible. Bush airlines work more like bus services: an aircraft waits for passengers and takes off when it is full. Bus and coach Bus services use buses on conventional roads to carry numerous passengers on shorter journeys. Buses operate with low capacity compared with trams or trains, and can operate on conventional roads, with relatively inexpensive bus stops to serve passengers. Therefore, buses are commonly used in smaller cities, towns, and rural areas, and for shuttle services supplementing other means of transit in large cities. Midibuses have an even lower capacity, while double-decker buses and articulated buses have a slightly larger capacity. Bus rapid transit (BRT) is a term used for buses operating on dedicated rights-of-way, much like light rail, resulting in higher capacity and operating speed compared to regular buses. Coach services use coaches (long-distance buses) for suburb-to-CBD or longer-distance transportation. The vehicles are normally equipped with more comfortable seating, a separate luggage compartment, video and possibly also a toilet. They have higher standards than city buses, but a limited stopping pattern. Electric buses Trolleybuses are electrically powered buses that receive power from overhead power lines by way of a set of trolley poles. Online Electric Vehicles are buses that run on a conventional battery, but are recharged frequently at certain points via underground wires. Certain types of buses, styled after old-style streetcars, are also called trackless trolleys, but are built on the same platforms as a typical diesel, CNG, or hybrid bus; these are more often used for tourist rides than commuting and tend to be privately owned. Train Passenger rail transport is the conveyance of passengers by means of wheeled vehicles specially designed to run on railways. Trains allow high capacity at most distance scales, but require track, signalling, infrastructure and stations to be built and maintained, resulting in high upfront costs. Intercity and High-speed rail Intercity rail comprises long-haul passenger services that connect multiple urban areas. They have few stops and aim at high average speeds, typically making only one or a few stops per city. These services may also be international.
High-speed rail is passenger rail operating significantly faster than conventional rail—typically defined as at least . The most prominent systems have been built in Europe and East Asia; compared with air travel, they offer long-distance rail journeys as quick as air services, have lower prices to compete more effectively, and use electricity instead of combustion. Urban rail transit Urban rail transit is an all-encompassing term for various types of local rail systems, such as trams, light rail, rapid transit, people movers, commuter rail, monorail, suspension railways and funiculars. Commuter rail Commuter rail is part of an urban area's public transport. It provides faster services to outer suburbs and neighboring satellite cities. Trains stop at train stations that are located to serve a smaller suburban or town center. The stations are often combined with shuttle bus or park and ride systems. Frequency may be up to several times per hour, and commuter rail systems may either be part of the national railway or operated by local transit agencies. Common forms of commuter rail employ either diesel-electric locomotives or electric multiple unit trains. Some commuter train lines share a railway with freight trains. Rapid transit A rapid transit railway system (also called a metro, underground, heavy rail, or subway) operates in an urban area with high capacity and frequency, and grade separation from other traffic. Heavy rail is a high-capacity form of rail transit, with 4 to 10 units forming a train, and can be the most expensive form of transit to build. Many modern heavy rail systems are driverless, which allows for higher frequencies and lower maintenance costs. Systems are able to transport large numbers of people quickly over short distances with little land use. Variations of rapid transit include people movers, small-scale light metro and the commuter rail hybrid S-Bahn. More than 160 cities have rapid transit systems, totalling more than of track and 7,000 stations. Twenty-five cities have systems under construction. People mover People movers are grade-separated rail systems that use vehicles that are smaller and shorter than conventional trains. These systems are generally used only in a small area such as a theme park or an airport. Tram Trams (also known as streetcars or trolleys) are railborne vehicles that originally ran in city streets, though over the decades they have increasingly run on dedicated tracks. They have higher capacity than buses, but must follow dedicated infrastructure with rails and wires either above or below the track, limiting their flexibility. In the United States, trams were commonly used prior to the 1930s, before being superseded by the bus. In modern public transport systems, they have been reintroduced in the form of the light rail. Light rail Light rail is a term coined in 1972 for systems that use mainly tram technology. Light rail mostly has dedicated rights-of-way, fewer sections shared with other traffic, and usually step-free access. Light rail lines are generally operated at higher speeds than tram lines. Light rail lines are, thus, essentially modernized interurbans. Unlike trams, light rail trains are often longer and have one to four cars per train. Monorail Somewhere between light and heavy rail in terms of carbon footprint, monorail systems usually use a single elevated track, with trains either mounted on top of the beam or suspended beneath it.
Monorail systems are used throughout the world (especially in Europe and east Asia, particularly Japan), but apart from public transit installations in Las Vegas and Seattle, most North American monorails are either short shuttle services or privately owned services (With 150,000 daily riders, the Disney monorail systems used at their parks may be the most famous in the world). Personal rapid transit Personal rapid transit is an automated cab service that runs on rails or a guideway. This is an uncommon mode of transportation (excluding elevators) due to the complexity of automation. A fully implemented system might provide most of the convenience of individual automobiles with the efficiency of public transit. The crucial innovation is that the automated vehicles carry just a few passengers, turn off the guideway to pick up passengers (permitting other PRT vehicles to continue at full speed), and drop them off to the location of their choice (rather than at a stop). Conventional transit simulations show that PRT might attract many auto users in problematic medium-density urban areas. A number of experimental systems are in progress. One might compare personal rapid transit to the more labor-intensive taxi or paratransit modes of transportation, or to the (by now automated) elevators common in many publicly accessible areas. Cable-propelled transit Cable-propelled transit (CPT) is a transit technology that moves people in motor-less, engine-less vehicles that are propelled by a steel cable. There are two sub-groups of CPT—gondola lifts and cable cars (railway). Gondola lifts are supported and propelled from above by cables, whereas cable cars are supported and propelled from below by cables. While historically associated with usage in ski resorts, gondola lifts are now finding increased consumption and utilization in many urban areas—built specifically for the purposes of mass transit. Many, if not all, of these systems are implemented and fully integrated within existing public transportation networks. Examples include Metrocable (Medellín), Metrocable (Caracas), Mi Teleférico in La Paz, Portland Aerial Tram, Roosevelt Island Tramway in New York City, and the London Cable Car. Ferry A ferry is a boat used to carry (or ferry) passengers, and sometimes their vehicles, across a body of water. A foot-passenger ferry with many stops is sometimes called a water bus. Ferries form a part of the public transport systems of many waterside cities and islands, allowing direct transit between points at a capital cost much lower than bridges or tunnels, though at a lower speed. Ship connections of much larger distances (such as over long distances in water bodies like the Mediterranean Sea) may also be called ferry services. Cycleway network A report published by the UK National Infrastructure Commission in 2018 states that "cycling is mass transit and must be treated as such." Cycling infrastructure is normally provided without charge to users because it is cheaper to operate than mechanised transit systems that use sophisticated equipment and do not use human power. Electric bikes and scooters Many cities around the world have introduced electric bikes and scooters to their public transport infrastructure. For example, in the Netherlands many individuals use e-bikes to replace their car commutes. In major American cities, start-up companies such as Uber and Lyft have implemented e-scooters as a way for people to take short trips around the city. 
Operation Infrastructure All public transport runs on infrastructure, either on roads, rail, airways or seaways. The infrastructure can be shared with other modes, freight and private transport, or it can be dedicated to public transport. The latter is especially valuable in cases where there are capacity problems for private transport. Investments in infrastructure are expensive and make up a substantial part of the total costs in systems that are new or expanding. Once built, the infrastructure will require operating and maintenance costs, adding to the total cost of public transport. Sometimes governments subsidize infrastructure by providing it free of charge, just as is common with roads for automobiles. Interchanges Interchanges are locations where passengers can switch from one public transport route to another. This may be between vehicles of the same mode (like a bus interchange), or between modes, for example between bus and train. It can be between local and intercity transport (such as at a central station or airport). Timetables Timetables (or 'schedules' in North American English) are provided by the transport operator to allow users to plan their journeys. They are often supplemented by maps and fare schemes to help travelers coordinate their travel. Online public transport route planners help make planning easier. Mobile apps are available for many transit systems; they provide timetables and other service information and, in some cases, allow ticket purchase or journey planning that takes account of departure times and fare zones. Services are often arranged to operate at regular intervals throughout the day or part of the day (known as clock-face scheduling). Often, more frequent services or even extra routes are operated during the morning and evening rush hours. Coordination between services at interchange points is important to reduce the total travel time for passengers. This can be done by coordinating shuttle services with main routes, or by creating a fixed time (for instance twice per hour) when all bus and rail routes meet at a station and exchange passengers. There is often a potential conflict between this objective and optimising the utilisation of vehicles and drivers. Financing The main sources of financing are ticket revenue, government subsidies and advertising. The share of operating costs that is covered by ticket revenue is known as the farebox recovery ratio. A limited amount of income may come from land development and rental income from stores and vendors, parking fees, and leasing tunnels and rights-of-way to carry fiber optic communication lines. Fare and ticketing Most—but not all—public transport requires the purchase of a ticket to generate revenue for the operators. Tickets may be bought either in advance, or at the time of the journey, or the carrier may allow both methods. Passengers may be issued with a paper ticket, a metal or plastic token, or a magnetic or electronic card (smart card, contactless smart card). Sometimes a ticket has to be validated, e.g. a paper ticket has to be stamped, or an electronic ticket has to be checked in. Tickets may be valid for a single (or return) trip, or valid within a certain area for a period of time (see transit pass). The fare may depend on the travel class and on either the distance travelled or zone pricing. The tickets may have to be shown or checked automatically at the station platform or when boarding, or during the ride by a conductor.
Operators may choose to check every rider, allowing sale of the ticket at the time of the ride. Alternatively, a proof-of-payment system allows riders to enter the vehicles without showing the ticket, but riders may or may not be checked by a ticket inspector; if a rider fails to show proof of payment, the operator may fine the rider an amount based on the fare. Multi-use tickets allow travel more than once. In addition to return tickets, this includes period cards allowing travel within a certain area (for instance monthly cards), or tickets allowing a specified number of trips or travel days that can be chosen within a longer period of time (carnet tickets). Passes aimed at tourists, allowing free or discounted entry at many tourist attractions, typically include zero-fare public transport within the city. Period tickets may be for a particular route (in both directions), or for a whole network. A free travel pass allowing free and unlimited travel within a system is sometimes granted to particular social sectors, for example students, the elderly, children, employees (job ticket) and the physically or mentally disabled. Zero-fare public transport services are funded in full by means other than collecting a fare from passengers, normally through heavy subsidy or commercial sponsorship by businesses. Several mid-size European cities and many smaller towns around the world have converted their entire bus networks to zero-fare. Three capital cities in Europe have free public transport: Tallinn, Luxembourg and, as of 2025, Belgrade. Local zero-fare shuttles or inner-city loops are far more common than city-wide systems. There are also zero-fare airport circulators and university transportation systems. Revenue, profit and subsidies Governments frequently opt to subsidize public transport for social, environmental or economic reasons. Common motivations include the desire to provide transport to people who are unable to use an automobile and to reduce congestion, land use and automobile emissions. Subsidies may take the form of direct payments for financially unprofitable services, but support may also include indirect subsidies. For example, the government may allow free or reduced-cost use of state-owned infrastructure such as railways and roads, to stimulate public transport's economic competitiveness over private transport, which normally also has free infrastructure (subsidized through such things as gas taxes). Other subsidies include tax advantages (for instance aviation fuel is typically not taxed), bailouts of companies that are likely to collapse (often applied to airlines) and reduction of competition through licensing schemes (often applied to taxis and airlines). Private transport is normally subsidized indirectly through free roads and infrastructure, as well as incentives to build car factories and, on occasion, directly via bailouts of automakers. Subsidies may also be funded by new or increased tolls on drivers, such as the San Francisco Bay Area raising tolls on numerous bridges and proposing more hikes to fund the Bay Area Rapid Transit system. Land development schemes may be initiated, where operators are given the rights to use lands near stations, depots, or tracks for property development. For instance, in Hong Kong, MTR Corporation Limited and KCR Corporation generate additional profits from land development to partially cover the cost of the construction of the urban rail system.
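As a minimal numeric illustration of the farebox recovery ratio defined under Financing above (the revenue and cost figures here are hypothetical, not taken from any operator mentioned in the text):

$$\text{farebox recovery ratio} = \frac{\text{fare revenue}}{\text{operating costs}} = \frac{\$40\ \text{million}}{\$100\ \text{million}} = 0.40$$

An operator with such a ratio covers 40% of its operating costs from fares; the remaining 60% has to come from the subsidies, advertising, land development and other sources described above.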
Some supporters of mass transit believe that use of taxpayer capital to fund mass transit will ultimately save taxpayer money in other ways, and therefore, state-funded mass transit is a benefit to the taxpayer. Some research has supported this position, but the measurement of benefits and costs is a complex and controversial issue. A lack of mass transit results in more traffic, pollution, and road construction to accommodate more vehicles, all costly to taxpayers; providing mass transit will therefore alleviate these costs. A study found that support for public transport spending is much higher among conservatives who have high levels of trust in government officials than among those who do not. Safety and security Relative to other forms of transportation, public transit is safe (with a low crash risk) and secure (with low rates of crime). The injury and death rate for public transit is roughly one-tenth that of automobile travel. A 2014 study noted that "residents of transit-oriented communities have about one-fifth the per capita crash casualty rate as in automobile-oriented communities" and that "Transit also tends to have lower overall crime rates than automobile travel, and transit improvements can help reduce overall crime risk by improving surveillance and economic opportunities for at-risk populations." Although transit is relatively safe and secure, public perceptions that transit systems are dangerous endure. A 2014 study stated that "Various factors contribute to the under-appreciation of transit safety benefits, including the nature of transit travel, dramatic news coverage of transit crashes and crimes, transit agency messages that unintentionally emphasize risks without providing information on its overall safety, and biased traffic safety analysis." Some systems attract vagrants who use the stations or trains as sleeping shelters, though most operators have practices that discourage this. Impact Accessibility Public transport is a means of independent transport (other than walking or bicycling) for individuals such as children too young to drive, the elderly without access to cars, those who do not hold a driver's license, and the infirm such as wheelchair users. Kneeling buses and low-floor boarding on buses and light rail have also enabled greater access for people with limited mobility. In recent decades low-floor access has been incorporated into modern designs for vehicles. In economically deprived areas, public transport increases individual accessibility to transport where private means are unaffordable. Environmental Although there is continuing debate as to the true efficiency of different modes of transportation, mass transit is generally regarded as significantly more energy efficient than other forms of travel. A 2002 study by the Brookings Institution and the American Enterprise Institute found that public transportation in the U.S. uses approximately half the fuel required by cars, SUVs and light trucks. In addition, the study noted that "private vehicles emit about 95 percent more carbon monoxide, 92 percent more volatile organic compounds and about twice as much carbon dioxide and nitrogen oxide than public vehicles for every passenger mile traveled". Studies have shown that there is a strong inverse correlation between urban population density and energy consumption per capita, and that public transport could facilitate increased urban population densities, and thus reduce travel distances and fossil fuel consumption.
Supporters of the green movement usually advocate public transportation, because it offers decreased airborne pollution compared to automobiles. A study conducted in Milan, Italy, in 2004 during and after a transportation strike serves to illustrate the impact that mass transportation has on the environment. Air samples were taken between 2 and 9 January, and then tested for methane, carbon monoxide, non-methane hydrocarbons (NMHCs), and other gases identified as harmful to the environment. A computer simulation of the results showed the pattern clearly, "with 2 January showing the lowest concentrations as a result of decreased activity in the city during the holiday season. 9 January showed the highest NMHC concentrations because of increased vehicular activity in the city due to a public transportation strike." Based on the benefits of public transport, the green movement has affected public policy. For example, the state of New Jersey released Getting to Work: Reconnecting Jobs with Transit. This initiative attempts to relocate new jobs into areas with higher public transportation accessibility. The initiative cites the use of public transportation as being a means of reducing traffic congestion, providing an economic boost to the areas of job relocation, and most importantly, contributing to a green environment by reducing carbon dioxide (CO2) emissions. Using public transportation can result in a reduction of an individual's carbon footprint. A single person's round trip by car, if replaced with public transportation, can result in a net CO2 emissions reduction of per year. Using public transportation saves CO2 emissions in more ways than the travel itself, as public transportation can help to alleviate traffic congestion as well as promote more efficient land use. When all three of these are considered, it is estimated that 37 million metric tons of CO2 will be saved annually. Another study claims that using public transit instead of private in the U.S. in 2005 would have reduced CO2 emissions by 3.9 million metric tons and that the resulting traffic congestion reduction accounts for an additional 3.0 million metric tons of CO2 saved. This is a total savings of about 6.9 million metric tons per year given the 2005 values. In order to compare the energy impact of public transportation with that of private transportation, the amount of energy per passenger mile must be calculated. Comparing the energy expenditure per person normalizes the data for easy comparison. Here, the units are kWh per 100 p-km (read as person-kilometres or passenger-kilometres). In terms of energy consumption, public transportation is better than individual transport in a personal vehicle. In England, bus and rail are popular methods of public transportation, especially in London. Rail provides rapid movement into and out of the city of London, while buses help to provide transport within the city itself. As of 2006–2007, the total energy cost of London's trains was 15 kWh per 100 p-km, about 5 times better than a personal car. For buses in London, it was 32 kWh per 100 p-km, about 2.5 times better than a personal car. This includes lighting, depots, inefficiencies due to capacity (i.e., the train or bus may not be operating at full capacity at all times), and other inefficiencies. Efficiencies of transport in Japan in 1999 were 68 kWh per 100 p-km for a personal car, 19 kWh per 100 p-km for a bus, 6 kWh per 100 p-km for rail, 51 kWh per 100 p-km for air, and 57 kWh per 100 p-km for sea.
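These figures lend themselves to a simple per-passenger-kilometre comparison. The short Python sketch below is purely illustrative: it takes the Japanese 1999 values quoted above and computes how many times more energy a personal car uses per passenger-kilometre than each of the other modes.

```python
# Energy use per 100 passenger-kilometres (kWh), Japan, 1999, as quoted above.
energy_kwh_per_100pkm = {
    "personal car": 68,
    "bus": 19,
    "rail": 6,
    "air": 51,
    "sea": 57,
}

car = energy_kwh_per_100pkm["personal car"]
for mode, kwh in energy_kwh_per_100pkm.items():
    # A ratio above 1 means the mode uses less energy per passenger-km than a car.
    ratio = car / kwh
    print(f"{mode:12s}: {kwh:3d} kWh per 100 p-km  (car uses {ratio:.1f}x as much)")
```

On these numbers, rail comes out roughly eleven times more energy-efficient than a personal car and buses roughly three and a half times, consistent with the London comparison above.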
These numbers from either country can be used in energy comparison calculations or life-cycle assessment calculations. Public transportation also provides an arena to test environmentally friendly fuel alternatives, such as hydrogen-powered vehicles. Swapping in materials to create lighter public transportation vehicles with the same or better performance will increase the environmental friendliness of public transportation while maintaining or improving current standards. Informing the public about the positive environmental effects of using public transportation, in addition to pointing out the potential economic benefit, is an important first step towards making a difference. A 2023 study titled "Subways and CO₂ Emissions: A Global Analysis with Satellite Data" found that subway systems reduce CO₂ emissions by approximately 50% in the cities they serve, contributing to an 11% global reduction. The study also explores potential expansion in 1,214 urban areas lacking subways, suggesting a potential emissions cut of up to 77%. Economically, subways are viable in 794 cities under optimistic financial conditions (a social cost of carbon, SCC, of US$150/ton and subway infrastructure cost, SIC, of US$140 million/km), but this figure drops to 294 cities with more pessimistic assumptions. Despite high costs—about US$200 million per kilometer for construction—subways offer substantial co-benefits, such as reduced traffic congestion and improved public health, making them a strategic investment for urban sustainability and climate mitigation. Land use Dense areas with mixed land uses promote daily public transport use, while urban sprawl is associated with sporadic public transport use. A recent European multi-city survey found that dense urban environments, reliable and affordable public transport services, and limiting motorized vehicles in high-density areas of cities will help achieve the much-needed promotion of public transport use. Urban space is a precious commodity and public transport utilises it more efficiently than a car-dominated society, allowing cities to be built more compactly than if they were dependent on automobile transport. If public transport planning is at the core of urban planning, it will also force cities to be built more compactly to create efficient feeds into the stations and stops of transport. This will at the same time allow the creation of centers around the hubs, serving passengers' daily commercial needs and public services. This approach significantly reduces urban sprawl. Public land planning for public transportation can be difficult, but it is state and regional organizations that are responsible for planning and improving public transportation roads and routes. With public land prices booming, there must be a plan for using the land most efficiently for public transportation in order to create better transportation systems. Inefficient land use and poor planning lead to a decrease in accessibility to jobs, education, and health care. Societal One consequence for wider society and civic life is that public transport breaks down social and cultural barriers between people in public life. An important social role played by public transport is to ensure that all members of society are able to travel without walking or cycling, not just those with a driving license and access to an automobile; those who benefit include groups such as the young, the old, the poor, those with medical conditions, and people banned from driving.
Automobile dependency is a name given by policy makers to places where those without access to a private vehicle do not have access to independent mobility. This dependency contributes to the transport divide. A 2018 study published in the Journal of Environmental Economics and Management concluded that expanded access to public transit has no meaningful impact on automobile volume in the long term. In addition, public transportation opens to its users the possibility of meeting other people, as no attention has to be diverted from interacting with fellow travelers by the task of driving. Public transport thus becomes a place of social encounters across all boundaries of social, ethnic and other types of affiliation. Social issues Impact of COVID-19 pandemic The COVID-19 pandemic had a substantial effect on public transport systems, infrastructures and revenues in various cities across the world. In the United States, the pandemic negatively impacted public transport usage through social distancing, remote work, and unemployment, causing a 79% drop in public transport ridership at the beginning of 2020. This trend continued throughout the year, with ridership down 65% compared to previous years. Similarly, in London at the beginning of 2020, ridership on the London Underground and buses declined by 95% and 85%, respectively. A 55% drop in public transport ridership compared to 2019 was reported in Cairo, Egypt, after a period of mandatory shutdown. To reduce the spread of COVID-19 through cash handling, cashless payment systems were enforced in Nairobi, Kenya, by the National Transport and Safety Authority (NTSA). Public transport was halted for three months in 2020 in Kampala, Uganda, with people resorting to walking or cycling. After the quarantine, as public transport infrastructure was renovated, services such as minibus taxis were assigned specific routes. The situation was difficult in cities where people are heavily dependent on the public transport system. In Kigali, Rwanda, social distancing requirements led to fifty-percent occupancy restrictions, but as the pandemic situation improved, the occupancy limit was increased to meet demand. Addis Ababa, Ethiopia, also had inadequate bus services relative to demand and longer wait times due to social distancing restrictions, and planned to deploy more buses. Both Addis Ababa and Kampala aim to improve walking and cycling infrastructure in the future as a means of commuting complementary to buses.
Technology
Basics_11
null
26162462
https://en.wikipedia.org/wiki/Silver%20nitride
Silver nitride
Silver nitride is an explosive chemical compound with the formula Ag3N. It is a black, metallic-looking solid which is formed when silver oxide or silver nitrate is dissolved in concentrated solutions of ammonia, causing formation of the diammine silver complex which subsequently breaks down to Ag3N. The standard free energy of the compound is about +315 kJ/mol, making it an endothermic compound which decomposes explosively to metallic silver and nitrogen gas. History Silver nitride was formerly referred to as fulminating silver, but this can cause confusion with silver fulminate or silver azide, other compounds which have also been referred to by this name. The fulminate and azide compounds do not form from ammoniacal solutions of Ag2O. Fulminating silver was first prepared in 1788 by the French chemist Claude Louis Berthollet. Some 70 years earlier, in 1716, Johann Kunckel von Löwenstern had already described the preparation. Properties Silver nitride is poorly soluble in water, but decomposes in mineral acids; decomposition is explosive in concentrated acids. It also slowly decomposes in air at room temperature and explodes upon heating to 165 °C. Hazards Silver nitride is often produced inadvertently during experiments involving silver compounds and ammonia, leading to surprise detonations. Whether silver nitride is formed depends on the concentration of ammonia in the solution. Silver oxide in 1.52 M ammonia solution readily converts to the nitride, while silver oxide in 0.76 M solution does not form nitride. Silver oxide can also react with dry ammonia to form Ag3N. Silver nitride is more dangerous when dry; dry silver nitride is a contact explosive which may detonate from the slightest touch, even a falling water droplet. It is also explosive when wet, although less so, and explosions do not propagate well in wet deposits of the compound. Because of its long-term instability, undetonated deposits of Ag3N will lose their sensitivity over time. Silver nitride may appear as black crystals, grains, crusts, or mirrorlike deposits on container walls. Suspected deposits may be dissolved by adding dilute ammonia or concentrated ammonium carbonate solution, removing the explosion hazard. Other uses of the term The name "silver nitride" is sometimes also used to describe a reflective coating consisting of alternating thin layers of silver metal and silicon nitride. This material is not explosive, and is not a true silver nitride. It is used to coat mirrors and shotguns.
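A minimal sketch of the explosive decomposition described above, written as a balanced equation (the stoichiometry follows directly from the formula Ag3N; the source text names the products but does not spell out the equation):

$$2\,\mathrm{Ag_3N} \longrightarrow 6\,\mathrm{Ag} + \mathrm{N_2}$$

Given the positive standard free energy of about +315 kJ/mol quoted above, roughly that amount of free energy is released for each mole of Ag3N that decomposes, which is consistent with the violence of the reaction.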
Physical sciences
Nitride salts
Chemistry
21768005
https://en.wikipedia.org/wiki/Whiptail%20stingray
Whiptail stingray
The whiptail stingrays are a family, the Dasyatidae, of rays in the order Myliobatiformes. They are found worldwide in tropical to temperate marine waters, and a number of species have also penetrated into fresh water in Africa, Asia, and Australia. Members of this family have flattened pectoral fin discs that range from oval to diamond-like in shape. Their common name comes from their whip-like tails, which are much longer than the disc and lack dorsal and caudal fins. All whiptail stingrays, except the porcupine ray (Urogymnus asperrimus), have one or more venomous stings near the base of the tail, which is used in defense. In order to sting their victims, they jerk their tails as the stinger falls off and stays in the wound that they have created. The stinger of a whiptail stingray is pointy, sharp with jagged edges. They range in size from or more across in the case of the smalleye stingray and giant freshwater stingray. Genera The taxonomy of Dasyatidae was revised by Peter Last, Gavin Naylor, and Mabel Manjaji-Matsumoto in 2016, based on morphological and molecular phylogenetic data. The placement of Megatrygon within the family is provisional pending further research, as evidence suggests it may be more closely related to the families Potamotrygonidae and Urotrygonidae than to other dasyatids. Subfamily Dasyatinae D. S. Jordan & Gilbert, 1879 Bathytoshia Whitley, 1933 Dasyatis Rafinesque, 1810 Hemitrygon Müller & Henle, 1838 Hypanus Rafinesque, 1818 Megatrygon Last, Naylor, and Manjaji-Matsumoto, 2016 Pteroplatytrygon Fowler, 1910 Telatrygon Last, Naylor, and Manjaji-Matsumoto, 2016 Taeniurops Garman, 1913 Subfamily Hypolophinae Stromer, 1910 Makararaja T. R. Roberts, 2007 Pastinachus Rüppell, 1829 Subfamily Neotrygoninae Castelnau, 1873 Neotrygon Castelnau, 1873 Taeniura J. P. Müller and Henle, 1837 Subfamily Urogymninae Gray, 1851 Brevitrygon Last, Naylor, and Manjaji-Matsumoto, 2016 Fluvitrygon Last, Naylor, and Manjaji-Matsumoto, 2016 Fontitrygon Last, Naylor, and Manjaji-Matsumoto, 2016 Himantura J. P. Müller and Henle, 1837 Maculabatis Last, Naylor, and Manjaji-Matsumoto, 2016 Pateobatis Last, Naylor, and Manjaji-Matsumoto, 2016 †Protohimantura Marramà, Klug, de Vos & Kriwet, 2018 Urogymnus J. P. Müller and Henle, 1837 Phylogeny
Biology and health sciences
Batoidea
Animals
21771815
https://en.wikipedia.org/wiki/Magellanic%20spiral
Magellanic spiral
A Magellanic spiral galaxy is a spiral galaxy with only one spiral arm. Magellanic spiral galaxies are classified as the type Sm (with sub-categories SAm, SBm, SABm); the prototype galaxy and namesake for Magellanic spirals is the Large Magellanic Cloud, an SBm galaxy. They are usually smaller dwarf galaxies and can be considered to be intermediate between dwarf spiral galaxies and irregular galaxies. Magellanic spirals SAm galaxies are a type of unbarred spiral galaxy, while SBm are a type of barred spiral galaxy. SABm are a type of intermediate spiral galaxy. Type Sm and Im galaxies have also been categorized as irregular galaxies with some structure (type Irr-1). Sm galaxies are typically disrupted and asymmetric. dSm galaxies are dwarf spiral galaxies or dwarf irregular galaxies, depending on categorization scheme. The Magellanic spiral classification was introduced by Gerard de Vaucouleurs, along with Magellanic irregular (Im), when he revamped the Hubble classification of galaxies. Grades of Magellanic spiral galaxies List of Magellanic spirals Barred (SBm) Large Magellanic Cloud (LMC; prototype galaxy) Small Magellanic Cloud (SMC) NGC 1311 NGC 4618 NGC 4236 NGC 55 NGC 4214 NGC 3109 IC 4710 Intermediate (SABm) NGC 4625 NGC 5713 Unbarred (SAm) NGC 5204 NGC 2552
Physical sciences
Galaxy classification
Astronomy
5399178
https://en.wikipedia.org/wiki/Egyptian%20blue
Egyptian blue
Egyptian blue, also known as calcium copper silicate (CaCuSi4O10 or CaOCuO(SiO2)4 (calcium copper tetrasilicate)) or cuprorivaite, is a pigment that was used in ancient Egypt for thousands of years. It is considered to be the first synthetic pigment. It was known to the Romans by the name . After the Roman era, Egyptian blue fell out of use and, thereafter, the manner of its creation was forgotten. In modern times, scientists have been able to analyze its chemistry and reconstruct how to make it. The ancient Egyptian word signifies blue, blue-green, and green. The first recorded use of "Egyptian blue" as a color name in English was in 1809. Definition Egyptian blue is a synthetic blue pigment produced from a mixture of silica, lime, copper, and an alkali. Its color is due to a calcium-copper tetrasilicate CaCuSi4O10 of the same composition as the naturally occurring mineral cuprorivaite. It was first synthesized in Egypt during the Fourth Dynasty and used extensively until the end of the Roman period in Europe, after which its use declined significantly. The term for it in the Egyptian language is ḫsbḏ-ỉrjt (khesbedj irtiu), which referred to artificial lapis lazuli (ḫsbḏ). It was used in antiquity as a blue pigment to color a variety of different media such as stone, wood, plaster, papyrus, and canvas, and in the production of numerous objects, including cylinder seals, beads, scarabs, inlays, pots, and statuettes. Sometimes, it is referred to in Egyptological literature as blue frit. Some have argued that this is an erroneous term that should be reserved for use to describe the initial phase of glass or glaze production, while others argue that Egyptian blue is a frit in both the fine and coarse form since it is a product of solid state reaction. Its characteristic blue color, resulting from one of its main components—copper—ranges from a light to a dark hue, depending on differential processing and composition. Apart from Egypt, it has also been found in the Near East, the Eastern Mediterranean, and the limits of the Roman Empire. It is unclear whether the pigment's existence elsewhere was a result of parallel invention or evidence of the technology's spread from Egypt to those areas. History and background The ancient Egyptians held the color blue in very high regard and were eager to present it on many media and in a variety of forms. They also desired to imitate the semiprecious stones turquoise and lapis lazuli, which were valued for their rarity and stark blue color. Use of naturally occurring minerals such as azurite to acquire this blue was impractical, as these minerals were rare and difficult to work. Therefore, to have access to the large quantities of blue color to meet demand, the Egyptians needed to manufacture the pigment themselves. The earliest evidence for the use of Egyptian blue, identified by Egyptologist Lorelei H. Corcoran of The University of Memphis, is on an alabaster bowl dated to the late pre-dynastic period or Naqada III (circa 3250 BC), excavated at Hierakonpolis, and now in the Museum of Fine Arts, Boston. In the Middle Kingdom (2050–1652 BC) it continued to be used as a pigment in the decoration of tombs, wall paintings, furnishings, and statues, and by the New Kingdom (1570–1070 BC) began to be more widely used in the production of numerous objects. Its use continued throughout the Late period and Greco-Roman period, only dying out in the fourth century AD, when the secret to its manufacture was lost. 
No written information exists in ancient Egyptian texts about the manufacture of Egyptian blue in antiquity, and it was first mentioned only in Roman literature by Vitruvius during the first century BC. He refers to it as caeruleum and describes in his work De architectura how it was produced by grinding sand, copper, and natron, and heating the mixture, shaped into small balls, in a furnace. Lime is necessary for the production as well, but probably lime-rich sand was used. Theophrastus gives it the Greek term κύανος (kyanos, blue), which originally probably referred to lapis lazuli. Finally, only at the beginning of the nineteenth century was interest renewed in learning more about its manufacture when it was investigated by Humphry Davy in 1815, and others such as W. T. Russell and F. Fouqué. Composition and manufacture Several experiments have been carried out by scientists and archaeologists interested in analyzing the composition of Egyptian blue and the techniques used to manufacture it. It is now generally regarded as a multiphase material that was produced by heating together quartz sand, a copper compound, calcium carbonate, and a small amount of an alkali (ash from salt-tolerant, halophyte plants or natron) at temperatures ranging between (depending on the amount of alkali used) for several hours. The result is cuprorivaite or Egyptian blue, carbon dioxide, and water vapor: In its final state, Egyptian blue consists of rectangular blue crystals together with unreacted quartz and some glass. From the analysis of a number of samples from Egypt and elsewhere, the weight percentage of the materials used to obtain Egyptian blue in antiquity was determined usually to range within these amounts: 60–70% silica (SiO2) 7–15% calcium oxide (CaO) 10–20% copper(II) oxide (CuO) To obtain theoretical cuprorivaite, where only blue crystals occur, with no excess of unreacted quartz or formation of glass, these percentages would need to be used: 64% silica 15% calcium oxide 21% copper oxide However, none of the analyzed samples from antiquity was made of this definitive composition, as all had excesses of silica, together with an excess of either CuO or CaO. This may have been intentional; an increase in the alkali content results in the pigment containing more unreacted quartz embedded in a glass matrix, which in turn results in a harder texture. Lowering the alkali content (less than 1%), though, does not allow glass to form and the resultant Egyptian blue is softer, with a hardness of 1–2 Mohs. In addition to the way the different compositions influenced texture, the way Egyptian blue was processed also had an effect on its texture, in terms of coarseness and fineness. Following a number of experiments, Tite et al. concluded that for fine-textured Egyptian blue, two stages were necessary to obtain uniformly interspersed crystals. First, the ingredients are heated, and the result is a coarse-textured product. This is then ground to a fine powder and water is added. The paste is then reshaped and fired again at temperatures ranging between 850 and 950 °C for one hour. These two stages possibly were needed to produce a paste that was fine enough for the production of small objects. Coarse-textured Egyptian blue, though, would not have gone through the second stage. 
Since it usually is found in the form of slabs (in the dynastic periods) and balls (in the Greco-Roman period), these either could have been awaiting to be processed through a second stage, where they would be ground and finely textured, or they would have been ground for use as a blue pigment. The shade of blue reached was also related to the coarseness and fineness of Egyptian blue as it was determined by the degree of aggregation of the Egyptian blue crystals. Coarse Egyptian blue was relatively thick in form, due to the large clusters of crystals which adhere to the unreacted quartz. This clustering results in a dark blue color that is the appearance of coarse Egyptian blue. Alternatively, fine-textured Egyptian blue consists of smaller clusters that are uniformly interspersed between the unreacted quartz grains and tends to be light blue in color. Diluted light blue, though, is used to describe the color of fine-textured Egyptian blue that has a large amount of glass formed in its composition, which masks the blue color, and gives it a diluted appearance. It depends on the level of alkali added to the mixture, so with more alkali, more glass formed, and the more diluted the appearance. This type of Egyptian blue is especially evident during the eighteenth dynasty and later, and probably is associated with the surge in glass technology at this time. If certain conditions were not met, the Egyptian blue would not be satisfactorily produced. For example, if the temperatures were above 1050 °C, it would become unstable. If too much lime was added, wollastonite (CaSiO3) forms and gives the pigment a green color. Too much of the copper ingredients results in excesses of copper oxides cuprite and tenorite. Sources The main component of Egyptian blue was the silica, and quartz sand found adjacent to the sites where Egyptian blue was being manufactured may have been its source, although no concrete evidence supports this hypothesis. The only evidence cited is by Jakcsh et al., who found crystals of titanomagnetite, a mineral found in desert sand, in samples collected from the tomb of Sabni (sixth dynasty). Its presence in Egyptian blue indicates that quartz sand, rather than flint or chert, was used as the silica source. This contrasts with the source of silica used for glass-making at Qantir (New Kingdom Ramesside site), which is quartz pebbles and not sand. It is believed that calcium oxide was not added intentionally on its own during the manufacture of Egyptian blue, but introduced as an impurity in the quartz sand and alkali. As to whether the craftsmen involved in the manufacture realized the importance of adding lime to the Egyptian blue mixture is not clear from this. The source of copper could have been either a copper ore (such as malachite), filings from copper ingots, or bronze scrap and other alloys. Before the New Kingdom, evidence is scarce as to which copper source was being used, but it is believed to have been copper ores. During the New Kingdom, evidence has been found for the use of copper alloys, such as bronze, due to the presence of varying amounts of tin, arsenic, or lead found in the Egyptian blue material. The presence of tin oxide could have come from copper ores that contained tin oxide and not from the use of bronze. However, no copper ores have been found with these amounts of tin oxide. Why a switch from the use of copper ores in earlier periods, to the use of bronze scrap during the Late Bronze Age is unclear as yet. 
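A hedged sketch of the overall firing reaction described in this section, assuming malachite (Cu2CO3(OH)2) as the copper compound and calcium carbonate as the lime source (both are mentioned in the text as plausible ingredients) and omitting the small amount of alkali flux for simplicity; other copper sources, such as bronze filings, would change the by-products:

$$\mathrm{Cu_2CO_3(OH)_2 + 8\,SiO_2 + 2\,CaCO_3 \longrightarrow 2\,CaCuSi_4O_{10} + 3\,CO_2\uparrow + H_2O\uparrow}$$

The theoretical composition quoted above (about 64% silica, 15% calcium oxide, 21% copper oxide by weight) follows from the molar masses of the component oxides in CaCuSi4O10: roughly 4 × 60 g of SiO2, 56 g of CaO and 80 g of CuO out of about 376 g per formula unit.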
The total alkali content in analyzed samples of Egyptian blue is greater than 1%, suggesting the alkali was introduced deliberately into the mixture and not as an impurity from other components. Sources of alkali either could have been natron from areas such as Wadi Natroun and El-Kab, or plant ash. By measuring the amounts of potash and magnesia in the samples of Egyptian blue, it is generally possible to identify which source of alkali had been used, since the plant ash contains higher amounts of potash and magnesia than the natron. However, due to the low concentration of alkali in Egyptian blue, which is a mere 4% or less, compared to glass, for example, which is at 10–20%, identifying the source is not always easy. The alkali source likely was natron, although the reasons for this assumption are unclear. However, analysis by Jaksch et al. of various samples of Egyptian blue identified variable amounts of phosphorus (up to 2 wt %), suggesting the alkali source used was in actuality plant ash and not natron. Since the glass industry during the Late Bronze Age used plant ash as its source of alkali, a link in terms of the alkali used for Egyptian blue before and after the introduction of the glass industry might have been possible. Archaeological evidence Amarna In the excavations at Amarna, Lisht, and Malkata at the beginning of the twentieth century, Petrie uncovered two types of vessels that he suggested were used in antiquity to make Egyptian blue: bowl-shaped pans and cylindrical vessels or saggers. In recent excavations at Amarna by Barry Kemp (1989), very small numbers of these "fritting" pans were uncovered, although various remaining pieces of Egyptian blue 'cake' were found, which allowed the identification of five different categories of Egyptian blue forms and the vessels associated with them: large round flat cakes, large flat rectangular cakes, bowl-shaped cakes, small sack-shaped pieces, and spherical shapes. No tin was found in the samples analyzed, which the authors suggest is an indication that use of scrap copper was possible instead of bronze. Qantir In the 1930s, Mahmud Hamza excavated a number of objects related to the production of Egyptian blue at Qantir, such as Egyptian blue cakes and fragments in various stages of production, providing evidence that Egyptian blue was actually produced at the site. Recent excavations at the same site uncovered a large copper-based industry, with several associated crafts, namely bronze-casting, red-glass making, faience production, and Egyptian blue. Ceramic crucibles with adhering remains of Egyptian blue were found in the excavations, suggesting again it had been manufactured on site. These Egyptian blue 'cakes' possibly were later exported to other areas around the country to be worked, as a scarcity of finished Egyptian blue products existed on site. For example, Egyptian blue cakes were found at Zawiyet Umm el-Rakham, a Ramesside fort near the Libyan coast, indicating in fact that the cakes were traded, and worked at and reshaped away from their primary production site. Connections with other vitreous material and with metals Egyptian blue is closely related to the other vitreous materials produced by the ancient Egyptians, namely glass and Egyptian faience, and it is possible that the Egyptians did not employ separate terms to distinguish the three products from one another. 
Although faience is easier to distinguish from Egyptian blue, owing to the distinct core of faience objects and their separate glaze layers, it sometimes is difficult to differentiate glass from Egyptian blue because of the very fine texture that Egyptian blue occasionally could have. This is especially true during the New Kingdom, as Egyptian blue became more refined and glassy and continued as such into the Greco-Roman period. Since Egyptian blue, like faience, is a much older technology than glass, which only appears during the reign of Thutmose III (1479–1425 BC), changes in the manufacture of Egyptian blue undoubtedly were associated with the introduction of the glass industry. Analysis of the source of copper used in the manufacture of Egyptian blue indicates a relationship with the contemporaneous metal industry. Whereas in the earlier periods copper ores most probably were used, during the reign of Thutmose III copper ore was replaced by bronze filings. This has been established by the detection of specific amounts of tin oxide in Egyptian blue, which only could have resulted from the use of tin-bronze scrap as the source of copper, coinciding with the time when bronze became widely available in ancient Egypt. Occurrences outside Egypt Egyptian blue was found in Western Asia during the middle of the third millennium BC in the form of small artifacts and inlays, but not as a pigment. It was found in the Mediterranean area at the end of the Middle Bronze Age, and traces of tin were found in its composition, suggesting the use of bronze scrap instead of copper ore as the source of copper. During the Roman period, use of Egyptian blue was extensive, as a pot containing the unused pigment, found in 1814 in Pompeii, illustrates. It was also found as unused pigment in the tombs of a number of painters. The Etruscans also used it in their wall paintings. The related Chinese blue has been suggested as having Egyptian roots. Later, Raphael used Egyptian blue in his Triumph of Galatea. Roman production of Egyptian blue Around the turn of the era, Roman sources report that a certain Vestorius transferred the production technology from Alexandria to Pozzuoli near Naples (Campania, Southern Italy). In fact, archaeological evidence confirms production sites in the northern Phlegraean Fields and seems to indicate a monopoly in the manufacture and trade of pigment spheres. Due to its almost exclusive use, Egyptian blue is the blue pigment par excellence of Roman antiquity; traces of its use in art vanish in the course of the Middle Ages. In 2021, Early Medieval Egyptian blue (fifth/sixth century AD) was identified on a monochrome blue mural fragment from the church of St. Peter above Gratsch (South Tyrol, Northern Italy). Using a new analytical approach based on Raman microspectroscopy, 28 different minerals with contents ranging from the percent range down to 100 ppm were identified. Drawing on knowledge from neighbouring disciplines made it possible to read out information about the type and provenance of the raw materials, the synthesis and application of the pigment, and the ageing of the paint layer preserved in the previously inaccessible trace components, and thus to reconstruct the individual "biography" of the Egyptian blue from St. Peter. 
This paradigm shift in the research history of Egyptian blue provided scientific evidence for production in the northern Phlegraean Fields (in agreement with trace minerals found in the beach sands at the Gulf of Gaeta), the use of a sulphidic copper ore (instead of the often-mentioned metallic copper or bronze), and plant ash as the flux in the raw material mixture. Furthermore, indications were found that the synthesis was dominated by solid-state reactions, while the melting of the raw materials into glass most likely played a negligible role. A follow-up study on Roman Imperial pigment balls excavated in Aventicum and Augusta Raurica (Switzerland; first to third century AD) confirmed the results in 2022. The consistent composition of around 40 identified minerals establishes a connection to the northern Phlegraean Fields; a sulphidic copper ore and plant ash have also left their marks. Thus, the Roman production monopoly probably existed for centuries. In addition, the analyses revealed unwanted by-products of the synthesis, locally limited to microparticles on the spheres' surfaces, which can be traced back to suboptimal firing times or mixing ratios: a cuprorivaite with crystal defects in its layer structure and a copper-bearing green glass phase, characterised by Raman spectroscopy for the first time. Modern applications Egyptian blue's extremely powerful and long-lived infrared luminescence under visible light has enabled its presence to be detected on objects which appear unpainted to the human eye. This property has also been used to identify traces of the pigment on paintings produced as late as the sixteenth century, long after its use was presumed to have died out. The luminescence in the near-infrared, where neither fat nor hemoglobin show high absorption coefficients, in conjunction with the capacity of Egyptian blue to delaminate by splitting into nanosheets after immersion in water, also indicates it may have several high-technology applications, such as in biomedicine (e.g. bioimaging), telecommunications, laser technology, and security inks. Researchers at the Lawrence Berkeley National Laboratory discovered that Egyptian blue pigment absorbs visible light and emits light in the near-infrared range. This suggests that Egyptian blue pigment could be used in construction materials designed to cool rooftops and walls in sunny climates, and for tinting glass to improve photovoltaic cell performance.
Physical sciences
Colors
Physics
38889765
https://en.wikipedia.org/wiki/Coherence%20%28units%20of%20measurement%29
Coherence (units of measurement)
A coherent system of units is a system of units of measurement used to express physical quantities that are defined in such a way that the equations relating the numerical values expressed in the units of the system have exactly the same form, including numerical factors, as the corresponding equations directly relating the quantities. Equivalently, it is a system in which every quantity has a unique unit, or one that does not use conversion factors. A coherent derived unit is a derived unit that, for a given system of quantities and for a chosen set of base units, is a product of powers of base units, with the proportionality factor being one. If a system of quantities has equations that relate quantities and the associated system of units has corresponding base units, with only one unit for each base quantity, then it is coherent if and only if every derived unit of the system is coherent. The concept of coherence was developed in the mid-nineteenth century by, amongst others, Kelvin and James Clerk Maxwell, and promoted by the British Science Association. The concept was initially applied to the centimetre–gram–second (CGS) system in 1873 and the foot–pound–second (FPS) system of units in 1875. The International System of Units (SI) was designed in 1960 to incorporate the principle of coherence. Examples In the SI, the derived unit m/s is a coherent derived unit for speed or velocity, but km/h is not a coherent derived unit. Speed or velocity is defined as the change in distance divided by the change in time. The derived unit m/s uses only the base units of the SI system. The derived unit km/h requires numerical factors to relate to the SI base units: 1000 m/km and 3600 s/h. In the CGS system, m/s is not a coherent derived unit; the numerical factor of 100 cm/m is needed to express m/s in the CGS system. History Before the metric system The earliest units of measure devised by humanity bore no relationship to each other. As both humanity's understanding of philosophical concepts and the organisation of society developed, so units of measurement were standardized—first particular units of measure had the same value across a community, then different units of the same quantity (for example feet and inches) were given a fixed relationship. Apart from Ancient China, where the units of capacity and of mass were linked to red millet seed, there is little evidence of the linking of different quantities until the Enlightenment. Relating quantities of the same kind The history of the measurement of length dates back to the early civilizations of the Middle East (10000 BC – 8000 BC). Archaeologists have been able to reconstruct the units of measure in use in Mesopotamia, India, the Jewish culture and many others. Archaeological and other evidence shows that in many civilizations, the ratios between different units for the same quantity of measure were adjusted so that they were integer numbers. In many early cultures such as Ancient Egypt, multiples with prime factors aside from 2, 3 and 5 were sometimes used—the Egyptian royal cubit being 28 fingers or 7 hands. In 2150 BC, the Akkadian emperor Naram-Sin rationalized the Babylonian system of measure, adjusting the ratios of many units of measure to multiples of which the only prime factors were 2, 3 and 5; for example, there were 6 she (barleycorns) in a shu-si (finger) and 30 shu-si in a kush (cubit). Relating quantities of different kinds Non-commensurable quantities have different physical dimensions, which means that adding or subtracting them is not meaningful. 
For instance, adding the mass of an object to its volume has no physical meaning. However, new quantities (and, as such, units) can be derived via multiplication and exponentiation of other units. As an example, the SI unit for force is the newton, which is defined as kg⋅m⋅s−2. Since a coherent derived unit is one which is defined by means of multiplication and exponentiation of other units but not multiplied by any scaling factor other than 1, the pascal is a coherent unit of pressure (defined as kg⋅m−1⋅s−2), but the bar (defined as 100,000 Pa) is not. Note that coherence of a given unit depends on the definition of the base units. Should the standard unit of length change such that it is shorter by a factor of 100,000, then the bar would be a coherent derived unit. However, a coherent unit remains coherent (and a non-coherent unit remains non-coherent) if the base units are redefined in terms of other units with the numerical factor always being unity. Metric system The concept of coherence was only introduced into the metric system in the third quarter of the nineteenth century; in its original form the metric system was non-coherent – in particular the litre was 0.001 m3 and the are (from which we get the hectare) was 100 m2. A precursor to the concept of coherence was, however, present in that the units of mass and length were related to each other through the physical properties of water, the gram having been designed as the mass of one cubic centimetre of water at its freezing point. The CGS system had two units of energy, the erg that was related to mechanics and the calorie that was related to thermal energy, so only one of them (the erg, equivalent to the g⋅cm2/s2) could bear a coherent relationship to the base units. By contrast, coherence was a design aim of the SI, resulting in only one unit of energy being defined – the joule. Dimension-related coherence Each variant of the metric system has a degree of coherence—the various derived units being directly related to the base units without the need of intermediate conversion factors. An additional criterion is that, for example, in a coherent system the units of force, energy and power be chosen so that the equations force = mass × acceleration, energy = force × distance, and power = energy ÷ time hold without the introduction of constant factors. Once a set of coherent units has been defined, other relationships in physics that use those units will automatically be true—Einstein's mass–energy equation, E = mc2, does not require extraneous constants when expressed in coherent units. Isaac Asimov wrote, "In the cgs system, a unit force is described as one that will produce an acceleration of 1 cm/sec2 on a mass of 1 gm. A unit force is therefore 1 cm/sec2 multiplied by 1 gm." These are independent statements. The first is a definition; the second is not. The first implies that the constant of proportionality in the force law has a magnitude of one; the second implies that it is dimensionless. Asimov uses them both together to prove that it is the pure number one. Asimov's conclusion is not the only possible one. In a system that uses the units foot (ft) for length, second (s) for time, pound (lb) for mass, and pound-force (lbf) for force, the law relating force (F), mass (m), and acceleration (a) is F = 0.031081 ma. Since the proportionality constant here is dimensionless and the units in any equation must balance without any numerical factor other than one, it follows that 1 lbf = 1 lb⋅ft/s2. This conclusion appears paradoxical from the point of view of competing systems, according to which F = ma and 1 lbf = 32.174 lb⋅ft/s2. 
Although the pound-force is a coherent derived unit in this system according to the official definition, the system itself is not considered to be coherent because of the presence of the proportionality constant in the force law. A variant of this system applies the unit s2/ft to the proportionality constant. This has the effect of identifying the pound-force with the pound. The pound is then both a base unit of mass and a coherent derived unit of force. One may apply any unit one pleases to the proportionality constant. If one applies the unit s2/lb to it, then the foot becomes a unit of force. In a four-unit system (English engineering units), the pound and the pound-force are distinct base units, and the proportionality constant has the unit lbf⋅s2/(lb⋅ft). All these systems are coherent. One that is not coherent is a three-unit system (also called English engineering units) that uses the pound and the pound-force, one of which is a base unit and the other a non-coherent derived unit. In place of an explicit proportionality constant, this system uses conversion factors derived from the relation 1 lbf = 32.174 lb⋅ft/s2. In numerical calculations, it is indistinguishable from the four-unit system, since what is a proportionality constant in the latter is a conversion factor in the former. The relation among the numerical values of the quantities in the force law is {F} = 0.031081 {m} {a}, where the braces denote the numerical values of the enclosed quantities. Unlike in this system, in a coherent system the relations among the numerical values of quantities are the same as the relations among the quantities themselves. The following example concerns definitions of quantities and units. The (average) velocity (v) of an object is defined as the quantitative physical property of the object that is directly proportional to the distance (d) traveled by the object and inversely proportional to the time (t) of travel, i.e., v = k d/t, where k is a constant that depends on the units used. Suppose that the metre (m) and the second (s) are base units; then the kilometre (km) and the hour (h) are non-coherent derived units. The metre per second (mps) is defined as the velocity of an object that travels one metre in one second, and the kilometre per hour (kmph) is defined as the velocity of an object that travels one kilometre in one hour. Substituting from the definitions of the units into the defining equation of velocity, we obtain 1 mps = k m/s and 1 kmph = k km/h = (1/3.6) k m/s = (1/3.6) mps. Now choose k = 1; then the metre per second is a coherent derived unit, and the kilometre per hour is a non-coherent derived unit. Suppose that we choose to use the kilometre per hour as the unit of velocity in the system. Then the system becomes non-coherent, and the numerical value equation for velocity becomes {v} = 3.6 {d}/{t}. Coherence may be restored, without changing the units, by choosing k = 3.6; then the kilometre per hour is a coherent derived unit, with 1 kmph = 1 m/s, and the metre per second is a non-coherent derived unit, with 1 mps = 3.6 m/s. A definition of a physical quantity is a statement that determines the ratio of any two instances of the quantity. The specification of the value of any constant factor is not a part of the definition since it does not affect the ratio. The definition of velocity above satisfies this requirement since it implies that v1/v2 = (d1/d2)/(t1/t2); thus if the ratios of distances and times are determined, then so is the ratio of velocities. 
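The point made in the last paragraph can be illustrated with a short numerical sketch (Python, purely illustrative and not part of the article): whatever value of the constant k is chosen in v = k d/t, the ratio of two velocities is fixed by the ratios of the distances and times, which is why the value of k is not part of the definition of the quantity.

```python
# Ratios of velocities do not depend on the constant k in v = k * d / t,
# so the constant factor is not part of the definition of the quantity.
def velocity(d, t, k=1.0):
    return k * d / t

d1, t1 = 100.0, 20.0   # object 1 travels 100 m in 20 s
d2, t2 = 90.0, 30.0    # object 2 travels 90 m in 30 s

for k in (1.0, 3.6):   # k = 1 gives m/s; k = 3.6 gives km/h when d is in m and t in s
    v1 = velocity(d1, t1, k)
    v2 = velocity(d2, t2, k)
    print(k, v1 / v2, (d1 / d2) / (t1 / t2))  # the two ratios agree for every k
```

For k = 1 the numerical values are in metres per second and for k = 3.6 in kilometres per hour, but the ratio v1/v2 = (d1/d2)/(t1/t2) is 5/3 in both cases.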
A definition of a unit of a physical quantity is a statement that determines the ratio of any instance of the quantity to the unit. This ratio is the numerical value of the quantity, or the number of units contained in the quantity. The definition of the metre per second above satisfies this requirement since it, together with the definition of velocity, implies that v/mps = (d/m)/(t/s); thus if the ratios of distance and time to their units are determined, then so is the ratio of velocity to its unit. The definition, by itself, is inadequate since it only determines the ratio in one specific case; it may be thought of as exhibiting a specimen of the unit. A new coherent unit cannot be defined merely by expressing it algebraically in terms of already defined units. Thus the statement, "the metre per second equals one metre divided by one second", is not, by itself, a definition. It does not imply that a unit of velocity is being defined, and if that fact is added, it does not determine the magnitude of the unit, since that depends on the system of units. In order for it to become a proper definition, both the quantity and the defining equation, including the value of any constant factor, must be specified. After a unit has been defined in this manner, however, it has a magnitude that is independent of any system of units. List of coherent units This list catalogues coherent relationships in various systems of units. SI The following is a list of quantities, each with its corresponding coherent SI unit:
frequency (hertz) = reciprocal of time (inverse second)
force (newton) = mass (kilogram) × acceleration (m/s2)
pressure (pascal) = force (newton) ÷ area (m2)
energy (joule) = force (newton) × distance (metre)
power (watt) = energy (joule) ÷ time (second)
potential difference (volt) = power (watt) ÷ electric current (ampere)
electric charge (coulomb) = electric current (ampere) × time (second)
equivalent radiation dose (sievert) = energy (joule) ÷ mass (kilogram)
absorbed radiation dose (gray) = energy (joule) ÷ mass (kilogram)
radioactive activity (becquerel) = reciprocal of time (s−1)
capacitance (farad) = electric charge (coulomb) ÷ potential difference (volt)
electrical resistance (ohm) = potential difference (volt) ÷ electric current (ampere)
electrical conductance (siemens) = electric current (ampere) ÷ potential difference (volt)
magnetic flux (weber) = potential difference (volt) × time (second)
magnetic flux density (tesla) = magnetic flux (weber) ÷ area (square metre)
CGS The following is a list of coherent units in the centimetre–gram–second (CGS) system:
acceleration (gal) = distance (centimetre) ÷ time squared (s2)
force (dyne) = mass (gram) × acceleration (cm/s2)
energy (erg) = force (dyne) × distance (centimetre)
pressure (barye) = force (dyne) ÷ area (cm2)
dynamic viscosity (poise) = mass (gram) ÷ (distance (centimetre) × time (second))
kinematic viscosity (stokes) = area (cm2) ÷ time (second)
FPS The following is a list of coherent units in the foot–pound–second (FPS) system:
force (pdl) = mass (lb) × acceleration (ft/s2)
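The coherent relationships listed above lend themselves to a mechanical check: if each unit is represented as a numerical prefactor together with a vector of exponents over the base units, a derived unit is coherent exactly when it is a product of powers of base units with a prefactor of one. The following sketch (Python, illustrative only; the encoding is an assumption of this example, not a standard library) encodes a few of the SI entries this way.

```python
# Represent a unit as a prefactor plus exponents of the SI base units (kg, m, s, A).
# A derived unit is coherent when its prefactor is exactly 1.
BASE = ('kg', 'm', 's', 'A')

def unit(factor, **exps):
    return (factor, tuple(exps.get(b, 0) for b in BASE))

newton = unit(1, kg=1, m=1, s=-2)         # force = mass x acceleration
pascal = unit(1, kg=1, m=-1, s=-2)        # pressure = force / area
joule  = unit(1, kg=1, m=2, s=-2)         # energy = force x distance
watt   = unit(1, kg=1, m=2, s=-3)         # power = energy / time
volt   = unit(1, kg=1, m=2, s=-3, A=-1)   # potential difference = power / current
bar    = unit(100_000, kg=1, m=-1, s=-2)  # non-coherent: prefactor is 100,000, not 1

def is_coherent(u):
    factor, _ = u
    return factor == 1

for name, u in [('newton', newton), ('pascal', pascal), ('joule', joule),
                ('watt', watt), ('volt', volt), ('bar', bar)]:
    print(name, is_coherent(u))
```

Only the bar fails the check, in line with the discussion earlier in the article of the bar as a non-coherent unit.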
Physical sciences
Measurement systems
Basics and measurement
24758132
https://en.wikipedia.org/wiki/Constant%20%28mathematics%29
Constant (mathematics)
In mathematics, the word constant conveys multiple meanings. As an adjective, it refers to non-variance (i.e. unchanging with respect to some other value); as a noun, it has two different meanings: A fixed and well-defined number or other non-changing mathematical object, or the symbol denoting it. The terms mathematical constant or physical constant are sometimes used to distinguish this meaning. A function whose value remains unchanged (i.e., a constant function). Such a constant is commonly represented by a variable which does not depend on the main variable(s) in question. For example, a general quadratic function is commonly written as ax² + bx + c, where a, b and c are constants (coefficients or parameters), and x a variable—a placeholder for the argument of the function being studied. A more explicit way to denote this function is x ↦ ax² + bx + c, which makes the function-argument status of x (and by extension the constancy of a, b and c) clear. In this example a, b and c are coefficients of the polynomial. Since c occurs in a term that does not involve x, it is called the constant term of the polynomial and can be thought of as the coefficient of x⁰. More generally, any polynomial term or expression of degree zero (no variable) is a constant. Constant function A constant may be used to define a constant function that ignores its arguments and always gives the same value. A constant function of a single variable, such as f(x) = 5, has a graph of a horizontal line parallel to the x-axis. Such a function always takes the same value (in this case 5), because the variable does not appear in the expression defining the function. Context-dependence The context-dependent nature of the concept of "constant" can be seen in this example from elementary calculus: d/dx 2^x = lim(h→0) (2^(x+h) − 2^x)/h = 2^x · lim(h→0) (2^h − 1)/h, since 2^x does not depend on h. "Constant" means not depending on some variable; not changing as that variable changes. In the first case above, it means not depending on h; in the second, it means not depending on x. A constant in a narrower context could be regarded as a variable in a broader context. Notable mathematical constants Some values occur frequently in mathematics and are conventionally denoted by a specific symbol. These standard symbols and their values are called mathematical constants. Examples include: 0 (zero). 1 (one), the natural number after zero. π (pi), the constant representing the ratio of a circle's circumference to its diameter, approximately equal to 3.141592653589793238462643. e (the base of the natural logarithm), approximately equal to 2.718281828459045235360287. i, the imaginary unit such that i² = −1. √2 (square root of 2), the length of the diagonal of a square with unit sides, approximately equal to 1.414213562373095048801688. φ (golden ratio), approximately equal to 1.618033988749894848204586, or algebraically, (1 + √5)/2. Constants in calculus In calculus, constants are treated in several different ways depending on the operation. For example, the derivative (rate of change) of a constant function is zero. This is because constants, by definition, do not change. Their derivative is hence zero. Conversely, when integrating a constant function, the constant is multiplied by the variable of integration. During the evaluation of a limit, a constant remains the same as it was before and after evaluation. Integration of a function of one variable often involves a constant of integration. This arises due to the fact that the integral is the inverse (opposite) of the derivative, meaning that the aim of integration is to recover the original function before differentiation. 
The derivative of a constant function is zero, as noted above, and the differential operator is a linear operator, so functions that only differ by a constant term have the same derivative. To acknowledge this, a constant of integration is added to an indefinite integral; this ensures that all possible solutions are included. The constant of integration is generally written as 'c', and represents a constant with a fixed but undefined value. Examples If f is the constant function such that f(x) = 5 for every x, then f′(x) = 0 for every x.
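The behaviour of constants under differentiation and integration described above can be verified with a short sketch using the Python library SymPy (the use of SymPy here is an assumption of this example, not something referenced by the article).

```python
import sympy as sp

x, c = sp.symbols('x c')

# The derivative of a constant is zero.
print(sp.diff(5, x))       # 0
print(sp.diff(c, x))       # 0, since c does not depend on x

# Integrating a constant multiplies it by the variable of integration.
print(sp.integrate(c, x))  # c*x

# Functions differing only by a constant term have the same derivative,
# which is why an arbitrary constant of integration is needed.
print(sp.diff(x**2 + 7, x) == sp.diff(x**2 - 3, x))  # True: both equal 2*x
```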
Mathematics
Basics
null
34945384
https://en.wikipedia.org/wiki/Ophioglossidae
Ophioglossidae
Ophioglossidae is one of the four subclasses of Polypodiopsida (ferns). This subclass consists of the ferns commonly known as whisk ferns, grape ferns, adder's-tongues and moonworts. It is equivalent to the class Psilotopsida in previous treatments, including Smith et al. (2006). The subclass contains two orders, Psilotales and Ophioglossales, whose relationship was only confirmed by molecular phylogenetic studies. Taxonomy Smith et al. (2006) carried out the first higher-level pteridophyte classification published in the molecular phylogenetic era, treating the ferns (monilophytes) as comprising four classes. They placed the whisk ferns and related taxa in the class Psilotopsida, with two orders. Mark W. Chase and James L. Reveal (2009) classified them as two separate subclasses, Psilotidae and Ophioglossidae, corresponding to those orders within a much broader grouping, the class Equisetopsida sensu lato. Christenhusz et al. (2011) included both the Ophioglossales and Psilotales orders in the Ophioglossidae subclass. This was continued by both Christenhusz and Chase (2014) and by the Pteridophyte Phylogeny Group (2016). Under the latter, the subclass is one of four subclasses of Polypodiopsida (ferns) and contains two orders, two families, 12 genera, and an estimated 129 species. The relationship between the two orders, Psilotales and Ophioglossales, had long been unclear and was only confirmed by molecular systematic studies. Psilotales have rhizomes instead of real roots, and the roots of Ophioglossales lack both branching and root hairs. The gametophytes of both orders are heterotrophic and often subterranean, obtaining nutrients from mycorrhiza instead of light. Photosynthesis happens exclusively in the sporophyte. The following cladogram shows a likely phylogenetic relationship between subclass Ophioglossidae and the other Polypodiopsida subclasses. The first three small subclasses are sometimes informally referred to as eusporangiate ferns, in contrast to the largest subclass, Polypodiidae or leptosporangiate ferns. The two orders, Ophioglossales and Psilotales, are sister groups to each other.
Biology and health sciences
Ferns
Plants
26173989
https://en.wikipedia.org/wiki/Firewall%20%28computing%29
Firewall (computing)
In computing, a firewall is a network security system that monitors and controls incoming and outgoing network traffic based on configurable security rules. A firewall typically establishes a barrier between a trusted network and an untrusted network, such as the Internet, or between several VLANs. History The term firewall originally referred to a wall intended to confine a fire within a line of adjacent buildings. Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment. The term was applied in the 1980s to network technology that emerged when the Internet was fairly new in terms of its global use and connectivity. The predecessors to firewalls for network security were routers used in the 1980s. Because they already segregated networks, routers could apply filtering to packets crossing them. Before it was used in real-life computing, the term appeared in John Badham's 1983 computer-hacking movie WarGames, spoken by the bearded and bespectacled programmer named Paul Richter, which possibly inspired its later use. One of the earliest commercially successful firewall and network address translation (NAT) products was the PIX (Private Internet eXchange) Firewall, invented in 1994 by Network Translation Inc., a startup founded and run by John Mayes. The PIX Firewall technology was coded by Brantley Coile as a consultant software developer. Recognizing the emerging IPv4 address depletion problem, they designed the PIX to enable organizations to securely connect private networks to the public internet using a limited number of registered IP addresses. The innovative PIX solution quickly gained industry acclaim, earning the prestigious "Hot Product of the Year" award from Data Communications Magazine in January 1995. Cisco Systems, seeking to expand into the rapidly growing network security market, subsequently acquired Network Translation Inc. in November 1995 to obtain the rights to the PIX technology. The PIX became one of Cisco's flagship firewall product lines before eventually being succeeded by the Adaptive Security Appliance (ASA) platform introduced in 2005. Types of firewall Firewalls are categorized as network-based or host-based systems. Network-based firewalls are positioned between two or more networks, typically between the local area network (LAN) and wide area network (WAN), their basic function being to control the flow of data between connected networks. They are either a software appliance running on general-purpose hardware, a hardware appliance running on special-purpose hardware, or a virtual appliance running on a virtual host controlled by a hypervisor. Firewall appliances may also offer non-firewall functionality, such as DHCP or VPN services. Host-based firewalls are deployed directly on the host itself to control network traffic or other computing resources. This can be a daemon or service as a part of the operating system or an agent application for protection. Packet filter The first reported type of network firewall is called a packet filter, which inspects packets transferred between computers. The firewall maintains an access-control list which dictates what packets will be looked at and what action should be applied, if any, with the default action set to silent discard. Three basic actions regarding the packet consist of a silent discard, discard with Internet Control Message Protocol or TCP reset response to the sender, and forward to the next hop. 
Packets may be filtered by source and destination IP addresses, protocol, or source and destination ports. The bulk of Internet communication in the late 20th and early 21st centuries used either Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) in conjunction with well-known ports, enabling firewalls of that era to distinguish between specific types of traffic such as web browsing, remote printing, email transmission, and file transfers. The first paper published on firewall technology was in 1987, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin continued their research in packet filtering and developed a working model for their own company based on their original first-generation architecture. In 1992, Steven McCanne and Van Jacobson released a paper on BSD Packet Filter (BPF) while at Lawrence Berkeley Laboratory. Connection tracking From 1989 to 1990, three colleagues from AT&T Bell Laboratories, Dave Presotto, Janardan Sharma, and Kshitij Nigam, developed the second generation of firewalls, calling them circuit-level gateways. Second-generation firewalls perform the work of their first-generation predecessors but also maintain knowledge of specific conversations between endpoints by remembering which port number the two IP addresses are using at layer 4 (transport layer) of the OSI model for their conversation, allowing examination of the overall exchange between the nodes. Application layer Marcus Ranum, Wei Xu, and Peter Churchyard released an application firewall known as Firewall Toolkit (FWTK) in October 1993. This became the basis for the Gauntlet firewall at Trusted Information Systems. The key benefit of application layer filtering is that it can understand certain applications and protocols such as File Transfer Protocol (FTP), Domain Name System (DNS), or Hypertext Transfer Protocol (HTTP). This allows it to identify unwanted applications or services using a non-standard port, or detect if an allowed protocol is being abused. It can also provide unified security management including enforced encrypted DNS and virtual private networking. As of 2012, the next-generation firewall provides a wider range of inspection at the application layer, extending deep packet inspection functionality to include, but not limited to: Web filtering Intrusion prevention systems User identity management Web application firewall Content inspection and heuristic analysis TLS Inspection Endpoint specific Endpoint-based application firewalls function by determining whether a process should accept any given connection. Application firewalls filter connections by examining the process ID of data packets against a rule set for the local process involved in the data transmission. Application firewalls accomplish their function by hooking into socket calls to filter the connections between the application layer and the lower layers. Application firewalls that hook into socket calls are also referred to as socket filters. Firewall Policies At the core of a firewall's operation are the policies that govern its decision-making process. These policies, collectively known as firewall rules, are the specific guidelines that determine the traffic allowed or blocked across a network's boundaries. Firewall rules are based on the evaluation of network packets against predetermined security criteria. 
A network packet, which carries data across networks, must match certain attributes defined in a rule to be allowed through the firewall. These attributes commonly include: Direction: Inbound or outbound traffic Source: Where the traffic originates (IP address, range, network, or zone) Destination: Where the traffic is headed (IP address, range, network, or zone) Port: Network ports specific to various services (e.g., port 80 for HTTP) Protocol: The type of network protocol (e.g., TCP, UDP, ICMP) Applications: L7 inspection or grouping of services Action: Whether to allow, deny, drop, or require further inspection for the traffic Zones Zones are logical segments within a network that group together devices with similar security requirements. By partitioning a network into zones, such as "Technical", "WAN", "LAN", "Public," "Private," "DMZ", and "Wireless," administrators can enforce policies that control the flow of traffic between them. Each zone has its own level of trust and is governed by specific firewall rules that regulate the ingress and egress of data. A typical default is to allow all traffic from LAN to WAN, and to drop all traffic from WAN to LAN. Services In networking terms, services are specific functions typically identified by a network port and protocol. Common examples include HTTP/HTTPS (web traffic) on ports 80 and 443, FTP (file transfer) on port 21, and SMTP (email) on port 25. Services are the engines behind the applications users depend on. From a security perspective, controlling access to services is crucial because services are common targets for exploitation. Firewalls employ rules that stipulate which services should be accessible, to whom, and in what context. For example, a firewall might be configured to block incoming FTP requests to prevent unauthorized file uploads but allow outgoing HTTPS requests for web browsing. Applications Applications refer to the software systems that users interact with while on the network. They can range from web browsers and email clients to complex database systems and cloud-based services. In network security, applications are important because different types of traffic can pose varying security risks. Thus, firewall rules can be crafted to identify and control traffic based on the application generating or receiving it. By using application awareness, firewalls can allow, deny, or limit traffic for specific applications according to organisational policies and compliance requirements, thereby mitigating potential threats from vulnerable or undesired applications. An application can be either a grouping of services or an L7 inspection. User ID Implementing firewall rules based on IP addresses alone is often insufficient due to the dynamic nature of user location and device usage. A user ID can instead be mapped to an IP address. This is where the concept of "User ID" makes a significant impact. User ID allows firewall rules to be crafted based on individual user identities, rather than just fixed source or destination IP addresses. This enhances security by enabling more granular control over who can access certain network resources, regardless of where they are connecting from or what device they are using. The User ID technology is typically integrated into firewall systems through the use of directory services such as Active Directory, LDAP, RADIUS or TACACS+. These services link the user's login information to their network activities. 
By doing this, the firewall can apply rules and policies that correspond to user groups, roles, or individual user accounts instead of purely relying on the network topology. Example of Using User ID in Firewall Rules Consider a school that wants to restrict students' access to a social media server. They can create a rule in the firewall that utilises User ID information to enforce this policy. Directory Service Configuration — First, the firewall must be configured to communicate with the directory service that stores user group memberships, in this case an Active Directory server. User Identification — The firewall maps network traffic to specific user IDs by interpreting authentication logs. When a user logs on, the firewall associates that login with the user's IP address. Define User Groups — Within the firewall's management interface, define user groups based on the directory service. For example, create groups such as "Students". Create Firewall Rule: Source: User ID (e.g., Students) Destination: list of IP addresses Service/Application: Allowed services (e.g., HTTP, HTTPS) Action: Deny Implement Default Allow Rule: Source: LAN zone Destination: WAN zone Service/Application: Any Action: Allow With this setup, only users who authenticate and are identified as members of "Students" are denied access to the social media servers. All other traffic originating from LAN interfaces will be allowed. Most common firewall log types Traffic Logs: Description: Traffic logs record comprehensive details about data traversing the network. This includes source and destination IP addresses, port numbers, protocols used, and the action taken by the firewall (e.g., allow, drop, or reject). Significance: Essential for network administrators to analyze and understand the patterns of communication between devices, aiding in troubleshooting and optimizing network performance. Threat Prevention Logs: Description: Logs specifically designed to capture information related to security threats. This encompasses alerts from intrusion prevention systems (IPS), antivirus events, anti-bot detections, and other threat-related data. Significance: Vital for identifying and responding to potential security breaches, helping security teams stay proactive in safeguarding the network. Audit Logs: Description: Logs that record administrative actions and changes made to the firewall configuration. These logs are critical for tracking changes made by administrators for security and compliance purposes. Significance: Supports auditing and compliance efforts by providing a detailed history of administrative activities, aiding in investigations and ensuring adherence to security policies. Event Logs: Description: General event logs that capture a wide range of events occurring on the firewall, helping administrators monitor and troubleshoot issues. Significance: Provides a holistic view of firewall activities, facilitating the identification and resolution of any anomalies or performance issues within the network infrastructure. Session Logs: Description: Logs that provide information about established network sessions, including session start and end times, data transfer rates, and associated user or device information. Significance: Useful for monitoring network sessions in real-time, identifying abnormal activities, and optimizing network performance. 
DDoS Mitigation Logs: Description: Logs that record events related to Distributed Denial of Service (DDoS) attacks, including mitigation actions taken by the firewall to protect the network. Significance: Critical for identifying and mitigating DDoS attacks promptly, safeguarding network resources and ensuring uninterrupted service availability. Geo-location Logs: Description: Logs that capture information about the geographic locations of network connections. This can be useful for monitoring and controlling access based on geographical regions. Significance: Aids in enhancing security by detecting and preventing suspicious activities originating from specific geographic locations, contributing to a more robust defense against potential threats. URL Filtering Logs: Description: Records data related to web traffic and URL filtering. This includes details about blocked and allowed URLs, as well as categories of websites accessed by users. Significance: Enables organizations to manage internet access, enforce acceptable use policies, and enhance overall network security by monitoring and controlling web activity. User Activity Logs: Description: Logs that capture user-specific information, such as authentication events, user login/logout details, and user-specific traffic patterns. Significance: Aids in tracking user behavior, ensuring accountability, and providing insights into potential security incidents involving specific users. VPN Logs: Description: Information related to Virtual Private Network (VPN) connections, including events like connection and disconnection, tunnel information, and VPN-specific errors. Significance: Crucial for monitoring the integrity and performance of VPN connections, ensuring secure communication between remote users and the corporate network. System Logs: Description: Logs that provide information about the overall health, status, and configuration changes of the firewall system. This may include logs related to high availability (HA), software updates, and other system-level events. Significance: Essential for maintaining the firewall infrastructure, diagnosing issues, and ensuring the system operates optimally. Compliance Logs: Description: Logs specifically focused on recording events relevant to regulatory compliance requirements. This may include activities ensuring compliance with industry standards or legal mandates. Significance: Essential for organizations subject to specific regulations, helping to demonstrate adherence to compliance standards and facilitating audit processes. Configuration Setting up a firewall is a complex and error-prone task. A network may face security issues due to configuration errors. Firewall policy configuration is based on specific network type (e.g., public or private), and can be set up using firewall rules that either block or allow access to prevent potential attacks from hackers or malware.
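To make the rule-matching logic described in the sections above concrete, the following sketch shows one possible way a packet filter could evaluate packets against an ordered rule list using the attributes discussed earlier (direction, source, destination, protocol, port, action). It is written in Python purely for illustration; the field names, the first-match-wins ordering, and the default-deny fallback are assumptions of this example, not a description of any particular firewall product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    direction: str                  # "inbound" or "outbound"
    src: Optional[str] = None       # source zone or address; None matches anything
    dst: Optional[str] = None       # destination zone or address; None matches anything
    protocol: Optional[str] = None  # "tcp", "udp", "icmp"; None matches anything
    port: Optional[int] = None      # destination port; None matches anything
    action: str = "deny"            # "allow" or "deny"

def matches(rule, packet):
    # A rule field left as None acts as a wildcard.
    return all(
        getattr(rule, field) in (None, packet[field])
        for field in ("direction", "src", "dst", "protocol", "port")
    )

def evaluate(rules, packet, default_action="deny"):
    # First matching rule wins; otherwise the default action applies.
    for rule in rules:
        if matches(rule, packet):
            return rule.action
    return default_action

rules = [
    Rule(direction="inbound", protocol="tcp", port=443, action="allow"),  # allow HTTPS in
    Rule(direction="outbound", src="LAN", dst="WAN", action="allow"),     # LAN -> WAN allowed
    Rule(direction="inbound", src="WAN", dst="LAN", action="deny"),       # WAN -> LAN dropped
]

packet = {"direction": "inbound", "src": "WAN", "dst": "LAN",
          "protocol": "tcp", "port": 22}
print(evaluate(rules, packet))  # deny
```

A real firewall would, of course, also consider connection state, user identity, and application-layer information, as described in the sections above.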
Technology
Computer security
null
30861480
https://en.wikipedia.org/wiki/Somatic%20symptom%20disorder
Somatic symptom disorder
Somatic symptom disorder, also known as somatoform disorder or somatization disorder, is defined by one or more chronic physical symptoms that coincide with excessive and maladaptive thoughts, emotions, and behaviors connected to those symptoms. The symptoms are not deliberately produced or feigned, and they may or may not coexist with a known medical ailment. Manifestations of somatic symptom disorder are variable; symptoms can be widespread or specific, and often fluctuate. Somatic symptom disorder corresponds to the way an individual views and reacts to symptoms rather than the symptoms themselves. Somatic symptom disorder may develop in those who suffer from an existing chronic illness or medical condition. Several studies have found a high rate of comorbidity with major depressive disorder, generalized anxiety disorder, and phobias. Somatic symptom disorder is frequently associated with functional pain syndromes like fibromyalgia and irritable bowel syndrome (IBS). Somatic symptom disorder typically leads to poor functioning, interpersonal issues, unemployment or problems at work, and financial strain as a result of excessive healthcare visits. The cause of somatic symptom disorder is unknown. Symptoms may result from a heightened awareness of specific physical sensations paired with a tendency to interpret these experiences as signs of a medical ailment. The diagnosis is controversial, as people with a physical illness can be misdiagnosed with it. This is especially true for girls and women, who are more often dismissed when they present with physical symptoms. Signs and symptoms Somatic symptom disorder can be detected by an ambiguous and often inconsistent history of symptoms that are rarely relieved by medical treatments. Additional signs of somatic symptom disorder include interpreting normal sensations as medical ailments, avoiding physical activity, being disproportionately sensitive to medication side effects, and seeking medical care from several physicians for the same concerns. Manifestations of somatic symptom disorder are highly variable. Recurrent ailments usually begin before the age of 30; most patients have many somatic symptoms, while others only experience one. The severity may fluctuate, but symptoms rarely go away completely for long periods of time. Symptoms might be specific, such as regional pain and localized sensations, or general, such as fatigue, muscle aches, and malaise. Those suffering from somatic symptom disorder experience recurring and obsessive feelings and thoughts concerning their well-being. Common examples include severe anxiety regarding potential ailments, misinterpreting normal sensations as indications of severe illness, believing that symptoms are dangerous and serious despite lacking medical basis, claiming that medical evaluations and treatment have been inadequate, fearing that engaging in physical activity will harm the body, and spending a disproportionate amount of time thinking about symptoms. Somatic symptom disorder pertains to how an individual interprets and responds to symptoms as opposed to the symptoms themselves. Somatic symptom disorder can occur even in those who have an underlying chronic illness or medical condition. When a somatic symptom disorder coexists with another medical ailment, people overreact to the ailment's adverse effects. They may be unresponsive toward treatment or unusually sensitive to drug side effects. 
Those with somatic symptom disorder who also have another physical ailment may experience significant impairment that is not expected from the condition. Comorbidities Most research that looked at additional mental disorders or self-reported psychopathological symptoms among those with somatic symptom disorder identified significant rates of comorbidity with depression and anxiety, but other psychiatric comorbidities were not usually examined. Major depressive disorder, generalized anxiety disorder, and phobias were the most common concurrent conditions. In studies evaluating different physical ailments, 41.5% of people with semantic dementia, 11.2% of subjects with Alzheimer's disease, 25% of female patients suffering from non-HIV lipodystrophy, and 18.5% of patients with congestive heart failure fulfilled somatic symptom disorder criteria. The 25.6% of fibromyalgia patients who met the somatic symptom disorder criteria exhibited higher depression rates than those who did not. In one study, 28.8% of those with somatic symptom disorder had asthma, 23.1% had a heart condition, and 13.5% had gout, rheumatoid arthritis, or osteoarthritis. Complications Alcohol and drug abuse are frequently observed, and are sometimes used to alleviate symptoms, increasing the risk of dependence on controlled substances. Other complications include poor functioning, problems with relationships, unemployment or difficulties at work, and financial stress due to excessive hospital visits. Causes Somatic symptoms can stem from a heightened awareness of sensations in the body, alongside a tendency to interpret those sensations as ailments. Studies suggest that risk factors for somatic symptoms include childhood neglect, sexual abuse, a chaotic lifestyle, and a history of substance and alcohol abuse. Psychosocial stressors, such as unemployment and reduced job performance, may also be risk factors. There could also be a genetic element. A study of monozygotic and dizygotic twins found that genetic components contributed 7% to 21% of somatic symptoms, with the remainder related to environmental factors. In another study, various single nucleotide polymorphisms were linked to somatic symptoms. Psychological Evidence suggests that along with broader factors such as early childhood trauma or insecure attachment, negative psychological factors including catastrophizing, negative affectivity, rumination, avoidance, health anxiety, or a poor physical self-concept have a significant impact on the shift from unproblematic somatic symptoms to a severely debilitating somatic symptom disorder. Those who experience more negative psychological characteristics may regard medically unexplained symptoms as more threatening and, therefore, exhibit stronger cognitive, emotional, and behavioral awareness of such symptoms. In addition, evidence suggests that negative psychological factors have a significant impact on the impairments and behaviors of people suffering from somatic symptom disorder, as well as the long-term stability of such symptoms. Psychosocial Psychosocial stresses and cultural norms influence how patients present to their physicians. Americans and Koreans took part in a study measuring somatization within a cultural context. It was discovered that Korean participants used more body-related phrases when discussing stressful events and experienced more sympathy when asked to read texts that used somatic expressions to discuss emotions. 
Those raised in environments where expressing emotions during developmental stages is discouraged face the highest risk of somatization. In primary care settings, studies indicated that somaticizing patients had much greater rates of unemployment and decreased occupational functioning than non-somaticizing patients. Traumatic life events may cause the development of somatic symptom disorder. Most people with somatic symptom disorder come from dysfunctional homes. A meta-analysis revealed a connection between sexual abuse and functional gastrointestinal syndromes, chronic pain, non-epileptic seizures, and chronic pelvic pain. Physiological The hypothalamic–pituitary–adrenal (HPA) axis has a crucial role in the stress response. While the HPA axis may become more active with depression, there is evidence of hypocortisolism in somatization. In somatic symptom disorder, there is a negative correlation between elevated pain scores and levels of 5-hydroxyindoleacetic acid (5-HIAA) and tryptophan. It has been suggested that proinflammatory processes may have a role in somatic symptom disorder, such as an increase in non-specific somatic symptoms and sensitivity to painful stimuli. Proinflammatory activation and anterior cingulate cortex activity have been shown to be linked in those who experienced stressful life events for an extended period of time. It is further claimed that increased activity of the anterior cingulate cortex, which acts as a bridge between attention and emotion, leads to increased sensitivity to unwanted stimuli and bodily sensations. Pain is a multifaceted experience, not just a sensation. While nociception refers to afferent neural activity that transmits sensory information in response to stimuli that may cause tissue damage, pain is a conscious experience requiring cortical activity and can occur in the absence of nociception. Those with somatic symptom disorder are thought to exaggerate their symptoms through selective perception and to perceive them as consistent with an ailment. This has been identified as a cognitive style known as "somatosensory amplification". The term "central sensitization" has been coined to describe the neurobiological notion that those predisposed to somatization have an overly sensitive neural network. After central sensitization, harmless and mild stimuli activate nociceptive-specific dorsal horn cells. As a result, pain is felt in response to stimuli that would not typically cause pain. Neuroimaging evidence Some literature reviews of cognitive–affective neuroscience on somatic symptom disorder suggested that catastrophizing in patients with somatic symptom disorders is associated with a greater vulnerability to pain. The relevant brain regions include the dorsolateral prefrontal, insular, rostral anterior cingulate, premotor, and parietal cortices. Genetic Genetic investigations have suggested that modifications connected to the monoaminergic system in particular may be relevant, while a shared genetic source remains unknown. Researchers take into account the various processes involved in the development of somatic symptom disorder as well as the interactions between various biological and psychosocial factors. Given the high occurrence of trauma, particularly throughout childhood, it has been suggested that epigenetic changes could be explanatory. Another study found that the glucocorticoid receptor gene (NR3C1) is hypomethylated in those with somatic symptom disorder and in those with depression. 
Diagnosis Because those with somatic symptom disorder typically have comprehensive previous workups, minimal laboratory testing is encouraged. Excessive testing increases the possibility of false positive results, which may result in further interventions, associated risks, and greater expenses. While some practitioners order tests to reassure patients, research shows that diagnostic testing fails to alleviate somatic symptoms. Specific tests, such as thyroid function assessments, urine drug screens, restricted blood studies, and minimal radiological imaging, may be conducted to rule out somatization due to medical issues. Somatic Symptom Scale – 8 The Somatic Symptom Scale – 8 (SSS-8) is a short self-report questionnaire that is used to evaluate somatic symptoms. It examines the perceived severity of common somatic symptoms. The SSS-8 is a condensed version of the well-known Patient Health Questionnaire-15 (PHQ-15). On a five-point scale, respondents rate how much stomach or digestive issues, back discomfort, pain in the legs, arms, or joints, headaches, chest pain or shortness of breath, dizziness, feeling tired or having low energy, and trouble sleeping impacted them in the preceding seven days. Ratings are added together to provide a sum score that ranges from 0 to 32 points. DSM-5 The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) modified the entry titled "somatoform disorders" to "somatic symptom and related disorders", and modified other diagnostic labels and criteria. The DSM-5 criteria for somatic symptom disorder include "one or more somatic symptoms which are distressing or result in substantial impairment of daily life." Additional criteria, often known as B criteria, include "excessive thoughts, feelings, or behaviors regarding somatic symptoms or corresponding health concerns manifested by disproportionate and persistent thoughts about the severity of one's symptoms." It continues: "Although any one somatic symptom might not be consistently present, one's state of being symptomatic is continuous (typically lasting more than 6 months)." The DSM-5 includes five distinct descriptions for somatic symptom disorder. These include somatic symptom disorder with predominant pain, formerly referred to as pain disorder, as well as classifications for mild, moderate, and severe symptoms. International Classification of Diseases The ICD-11 includes bodily distress disorder, which bears similarities to somatic symptom disorder. While both conditions involve somatic symptoms, bodily distress disorder appears to be more strongly associated with the experience of physical symptoms, whereas somatic symptom disorder is more closely linked to psychological distress. Patients meeting the criteria for both diagnoses tend to exhibit greater symptom severity across various psychosocial domains. Bodily distress disorder is characterized by the presence of distressing bodily symptoms and excessive attention devoted to those symptoms. The ICD-11 further specifies that if another health condition is causing or contributing to the symptoms, the level of attention must be clearly excessive in relation to the nature and course of the condition. Differential diagnosis Somatic symptom disorder's widespread, non-specific symptoms may obscure and mimic the manifestations of other medical disorders, making diagnosis and therapy challenging. 
For example, conditions such as adjustment disorder, body dysmorphic disorder, obsessive–compulsive disorder (OCD), and hypochondriasis can also exhibit excessive and exaggerated emotional and behavioral responses. Other functional diseases with unknown etiology, such as fibromyalgia and irritable bowel syndrome (IBS), tend not to present with excessive thoughts, feelings, or maladaptive behavior. Somatic symptom disorder overlaps with hypochondriasis and functional neurologic symptom disorder (FNsD), previously known as conversion disorder. Hypochondriasis is characterized by an obsession with having or developing a dangerous, undetected medical ailment, despite the absence of bodily symptoms. FNsD may present with one or more symptoms of various sorts: motor symptoms, which may involve weakness or paralysis; aberrant movements, including tremor or dystonic movements; abnormal gait patterns; and abnormal limb posture. The presenting symptom in FNsD is loss of function, but in somatic symptom disorder, the emphasis is on the discomfort that specific symptoms produce. FNsD often lacks the overwhelming thoughts, feelings, and behaviors that characterize somatic symptom disorder. Treatment Rather than focusing on treating the symptoms, the key objective is to support the patient in coping with symptoms, both physical and psychological/behavioral (such as health anxiety and harmful behaviors). Early psychiatric treatment is advised. Evidence suggests that SSRIs and SNRIs can lower pain perception. Because people with somatic symptom disorder may have a low threshold for adverse reactions, medication should be started at the lowest possible dose and gradually increased to produce a therapeutic effect. Cognitive behavioral therapy (CBT) has been linked to significant improvements in patient-reported function and somatic symptoms, a reduction in healthcare expenses, and a reduction in symptoms of depression. CBT aims to help patients realize their ailments are not catastrophic and to enable them to gradually return to activities they previously engaged in, without fear of "worsening their symptoms". Consultation and collaboration with a primary care physician also demonstrated some effectiveness. Furthermore, brief psychodynamic interpersonal psychotherapy (PIT) for patients with somatic symptom disorder has been proven to improve the physical quality of life in patients with many difficult-to-treat and medically unexplained symptoms over time. CBT can help in some of the following ways: Learn to reduce stress Learn to cope with physical symptoms Learn to deal with depression and other psychological issues Improve quality of life Reduce preoccupation with symptoms Electroconvulsive therapy (ECT) has been used in treating somatic symptom disorder among the elderly; however, the results remain debatable, with some concerns about the side effects of ECT. Overall, psychologists recommend addressing a common difficulty that patients with somatic symptom disorder have in reading their own emotions. This may be a central feature of treatment, as is developing a close collaboration between the GP, the patient, and the mental health practitioner. Prognosis Somatic symptom disorder is typically persistent, with symptoms that wax and wane. Chronic limitations in general function, substantial psychological impairment, and a reduction in quality of life are all common. 
Some investigations suggest people can recover; the natural history of the illnesses implies that around 50% to 75% of patients with medically unexplained symptoms improve, whereas 10% to 30% deteriorate. Fewer physical symptoms and better baseline functioning are favorable prognostic indicators. A strong, positive relationship between the physician and the patient is crucial, and it should be accompanied by frequent, supportive visits to avoid the temptation to medicate or test when these interventions are not clearly necessary. Epidemiology Somatic symptom disorder affects 5% to 7% of the general population, with a higher female representation, and can arise during childhood, adolescence, or adulthood. Evidence suggests that the emergence of prodromal symptoms often begins in childhood and that symptoms fitting the criteria for somatic symptom disorder are common during adolescence. A community study of adolescents found that 5% had persistent distressing physical symptoms paired with psychological concerns. In the primary care patient population, the rate rises to around 17%. Patients with functional illnesses such as fibromyalgia, irritable bowel syndrome, and chronic fatigue syndrome have a greater prevalence of somatic symptom disorder. The reported frequency of somatic symptom disorder, as defined by the DSM-5 criteria, ranges from 25 to 60% among these patients. There are cultural differences in the prevalence of somatic symptom disorder. For example, somatic symptom disorder and somatic symptoms were found to be significantly more common in Puerto Rico. In addition, the diagnosis is also more prevalent among African Americans and those with less than a high school education or lower socioeconomic status. There is usually co-morbidity with other psychological disorders, particularly mood disorders or anxiety disorders. Research has also shown comorbidity between somatic symptom disorder and personality disorders, especially antisocial, borderline, narcissistic, histrionic, avoidant, and dependent personality disorder. About 10–20% of female first-degree relatives also have somatic symptom disorder, and male relatives have increased rates of alcoholism and sociopathy. History Somatization is an idea that physicians have been attempting to comprehend since antiquity. The Egyptians and Sumerians were reported to have utilized the notions of melancholia and hysteria as early as 2600 BC. For many years, somatization was used in conjunction with the terms hysteria, melancholia, and hypochondriasis. Wilhelm Stekel, an Austrian psychoanalyst, was the first to introduce the term somatization, while Paul Briquet was the first to characterize what is now known as somatic symptom disorder. Briquet described patients who had been unwell for most of their lives and complained of a variety of symptoms from various organ systems. Despite many appointments, hospitalizations, and tests, their symptoms persisted. Somatic symptom disorder was later dubbed "Briquet's syndrome" in his honor. Controversy Somatic symptom disorder has long been a contentious diagnosis because it was based solely on negative criteria, namely the absence of a medical explanation for the presenting physical problems. As a result, any person suffering from a poorly understood illness may meet the criteria for this psychological diagnosis, regardless of whether they exhibit psychiatric symptoms in the traditional sense.
Misdiagnosis Allen Frances, chair of the DSM-IV task force, claimed that the DSM-5's somatic symptom disorder brings with it a risk of mislabeling a sizable proportion of the population as mentally ill.
Biology and health sciences
Mental disorders
Health
30862624
https://en.wikipedia.org/wiki/Bifidobacterium
Bifidobacterium
Bifidobacterium is a genus of gram-positive, nonmotile, often branched anaerobic bacteria. They are ubiquitous inhabitants of the gastrointestinal tract though strains have been isolated from the vagina and mouth (B. dentium) of mammals, including humans. Bifidobacteria are one of the major genera of bacteria that make up the gastrointestinal tract microbiota in mammals. Some bifidobacteria are used as probiotics. Before the 1960s, Bifidobacterium species were collectively referred to as Lactobacillus bifidus. History In 1899, Henri Tissier, a French pediatrician at the Pasteur Institute in Paris, isolated a bacterium characterised by a Y-shaped morphology ("bifid") in the intestinal microbiota of breast-fed infants and named it "bifidus". In 1907, Élie Metchnikoff, deputy director at the Pasteur Institute, propounded the theory that lactic acid bacteria are beneficial to human health. Metchnikoff observed that the longevity of Bulgarians was the result of their consumption of fermented milk products. Metchnikoff also suggested that "oral administration of cultures of fermentative bacteria would implant the beneficial bacteria in the intestinal tract". Metabolism The genus Bifidobacterium possesses a unique fructose-6-phosphate phosphoketolase pathway employed to ferment carbohydrates. Much metabolic research on bifidobacteria has focused on oligosaccharide metabolism, as these carbohydrates are available in their otherwise nutrient-limited habitats. Infant-associated bifidobacterial phylotypes appear to have evolved the ability to ferment milk oligosaccharides, whereas adult-associated species use plant oligosaccharides, consistent with what they encounter in their respective environments. As breast-fed infants often harbor bifidobacteria-dominated gut consortia, numerous applications attempt to mimic the bifidogenic properties of milk oligosaccharides. These are broadly classified as plant-derived fructooligosaccharides or dairy-derived galactooligosaccharides, which are differentially metabolized and distinct from milk oligosaccharide catabolism. Response to oxygen The sensitivity of members of the genus Bifidobacterium to O2 generally limits probiotic activity to anaerobic habitats. Recent research has reported that some Bifidobacterium strains exhibit various types of oxic growth. Low concentrations of O2 and CO2 can have a stimulatory effect on the growth of these Bifidobacterium strains. Based on the growth profiles under different O2 concentrations, the Bifidobacterium species were classified into four classes: O2-hypersensitive, O2-sensitive, O2-tolerant, and microaerophilic. The primary factor responsible for aerobic growth inhibition is proposed to be the production of hydrogen peroxide (H2O2) in the growth medium. A H2O2-forming NADH oxidase was purified from O2-sensitive Bifidobacterium bifidum and was identified as a b-type dihydroorotate dehydrogenase. The kinetic parameters suggested that the enzyme could be involved in H2O2 production in highly aerated environments. Genomes Members of the genus Bifidobacterium have genome sizes ranging from 1.73 (Bifidobacterium indicum) to 3.25 Mb (Bifidobacterium biavatii), corresponding to 1,352 and 2,557 predicted protein-encoding open reading frames, respectively. Functional classification of Bifidobacterium genes, including the pan-genome of this genus, revealed that 13.7% of the identified bifidobacterial genes encode enzymes involved in carbohydrate metabolism. 
Clinical uses Adding Bifidobacterium as a probiotic to conventional treatment of ulcerative colitis has been shown to be associated with improved rates of remission and improved maintenance of remission. Some Bifidobacterium strains are considered important probiotics and used in the food industry. Different species and/or strains of bifidobacteria may exert a range of beneficial health effects, including the regulation of intestinal microbial homeostasis, the inhibition of pathogens and harmful bacteria that colonize and/or infect the gut mucosa, the modulation of local and systemic immune responses, the repression of procarcinogenic enzymatic activities within the microbiota, the production of vitamins, and the bioconversion of a number of dietary compounds into bioactive molecules. Bifidobacteria improve the gut mucosal barrier and lower levels of lipopolysaccharide in the intestine. Bifidobacteria may also improve abdominal pain in patients with irritable bowel syndrome (IBS), though studies to date have been inconclusive. Naturally occurring Bifidobacterium spp. may discourage the growth of Gram-negative pathogens in infants. A mother's milk contains high concentrations of lactose and lower quantities of phosphate (pH buffer). Therefore, when mother's milk is fermented by lactic acid bacteria (including bifidobacteria) in the infant's gastrointestinal tract, the pH may be reduced, making it more difficult for Gram-negative bacteria to grow. Bifidobacteria and the infant gut The human infant gut is relatively sterile until birth, after which it takes up bacteria from its surrounding environment and its mother. The microbiota that makes up the infant gut differs from the adult gut. An infant reaches the adult stage of their microbiome at around three years of age, when their microbiome diversity increases, stabilizes, and the infant switches over to solid foods. Breast-fed infants are colonized earlier by Bifidobacterium when compared to babies that are primarily formula-fed. Bifidobacterium is the most common bacterial genus in the infant gut microbiome. Bifidobacterium genotypes in infants show more variability over time, making them less stable than adult Bifidobacterium populations. Infants and children under three years old show low diversity in microbiome bacteria, but more diversity between individuals when compared to adults. A reduction of Bifidobacterium and an increase in the diversity of the infant gut microbiome occur with reduced breast-milk intake and increased solid food intake. All mammalian milks contain oligosaccharides, reflecting natural selection. Human milk oligosaccharides are not digested by host enzymes and remain intact through the digestive tract before being broken down in the colon by microbiota. The genomes of Bifidobacterium species such as B. longum, B. bifidum, and B. breve contain genes that can hydrolyze some of the human milk oligosaccharides, and these species are found in higher numbers in breast-fed infants. Glycans produced by humans are converted into food and energy for B. bifidum, an example of coevolution. Species The genus Bifidobacterium comprises the following species: B. actinocoloniiforme Killer et al. 2011 B. adolescentis Reuter 1963 (Approved Lists 1980) B. aemilianum Alberoni et al. 2019 B. aerophilum Michelini et al. 2017 B. aesculapii Modesto et al. 2014 B. amazonense Lugli et al. 2021 B. angulatum Scardovi and Crociani 1974 (Approved Lists 1980) B. animalis (Mitsuoka 1969) Scardovi and Trovatelli 1974 (Approved Lists 1980) B. anseris Lugli et al. 2018 B.
apousia Chen et al. 2022 B. apri Pechar et al. 2017 B. aquikefiri Laureys et al. 2016 B. asteroides Scardovi and Trovatelli 1969 (Approved Lists 1980) B. avesanii Michelini et al. 2019 B. biavatii Endo et al. 2012 B. bifidum (Tissier 1900) Orla-Jensen 1924 (Approved Lists 1980) B. bohemicum Killer et al. 2011 B. bombi Killer et al. 2009 B. boum Scardovi et al. 1979 (Approved Lists 1980) B. breve Reuter 1963 (Approved Lists 1980) B. callimiconis Duranti et al. 2019 B. callitrichidarum Modesto et al. 2018 B. callitrichos Endo et al. 2012 B. canis Neuzil-Bunesova et al. 2020 B. castoris Duranti et al. 2019 B. catenulatum Scardovi and Crociani 1974 (Approved Lists 1980) B. catulorum Modesto et al. 2018 B. cebidarum Duranti et al. 2020 B. choerinum Scardovi et al. 1979 (Approved Lists 1980) B.choladohabitans Chen et al. 2022 B. choloepi Modesto et al. 2020 B. colobi Lugli et al. 2021 B. commune Praet et al. 2015 B. criceti Lugli et al. 2018 "B. crudilactis" Delcenserie et al. 2007 B.cuniculi Scardovi et al. 1979 (Approved Lists 1980) B. dentium Scardovi and Crociani 1974 (Approved Lists 1980) B. dolichotidis Duranti et al. 2019 "B. eriksonii" Cato et al. 1970 B. erythrocebi Neuzil-Bunesova et al. 2021 B. eulemuris Michelini et al. 2016 B. faecale Choi et al. 2014 B. felsineum Modesto et al. 2020 B. gallicum Lauer 1990 B. gallinarum Watabe et al. 1983 B. globosum (ex Scardovi et al. 1969) Biavati et al. 1982 B. goeldii Duranti et al. 2019 B. hapali Michelini et al. 2016 B. Lugli et al. 2018 B. indicum Scardovi and Trovatelli 1969 (Approved Lists 1980) B. italicum Lugli et al. 2018 B. jacchi Modesto et al. 2019 B. lemurum Modesto et al. 2015 B. leontopitheci Duranti et al. 2020 B. longum Reuter 1963 (Approved Lists 1980) B. magnum Scardovi and Zani 1974 (Approved Lists 1980) B.margollesii Lugli et al. 2018 B. merycicum Biavati and Mattarelli 1991 B. miconis Lugli et al. 2021 B. miconisargentati Lugli et al. 2021 B. minimum Biavati et al. 1982 B. mongoliense Watanabe et al. 2009 B. moraviense Neuzil-Bunesova et al. 2021 B. moukalabense Tsuchida et al. 2014 B. myosotis Michelini et al. 2016 B. oedipodis Neuzil-Bunesova et al. 2021 B. olomucense Neuzil-Bunesova et al. 2021 B. panos Neuzil-Bunesova et al. 2021 B. parmae Lugli et al. 2018 "B. platyrrhinorum" Modesto et al. 2020 B. pluvialisilvae Lugli et al. 2021 B. polysaccharolyticum Chen et al. 2022 B. pongonis Lugli et al. 2021 B. porcinum (Zhu et al. 2003) Nouioui et al. 2018 B. primatium Modesto et al. 2020 B. pseudocatenulatum Scardovi et al. 1979 (Approved Lists 1980) B. pseudolongum Mitsuoka 1969 (Approved Lists 1980) B. psychraerophilum Simpson et al. 2004 B. pullorum Trovatelli et al. 1974 (Approved Lists 1980) B. ramosum Michelini et al. 2017 B. reuteri Endo et al. 2012 B. rousetti Modesto et al. 2021 "B. ruminale" Scardovi et al. 1969 B. ruminantium Biavati and Mattarelli 1991 B. saguini Endo et al. 2012 B. saguinibicoloris Lugli et al. 2021 "B. saimiriisciurei" Modesto et al. 2020 B. samirii Duranti et al. 2019 B. santillanense Lugli et al. 2021 B. scaligerum Modesto et al. 2020 B. scardovii Hoyles et al. 2002 B. simiarum Modesto et al. 2020 B. simiiventris Lugli et al. 2021 B. stellenboschense Endo et al. 2012 B. subtile Biavati et al. 1982 B. thermacidophilum Dong et al. 2000 B. thermophilum corrig. Mitsuoka 1969 (Approved Lists 1980) B. tibiigranuli Eckel et al. 2020 B. tissieri corrig. Michelini et al. 2016 B. tsurumiense Okamoto et al. 2008 "B. urinalis" Hojo et al. 2007 B. vansinderenii Duranti et al. 2017 B. 
vespertilionis Modesto et al. 2021 B. xylocopae Alberoni et al. 2019
Biology and health sciences
Gram-positive bacteria
Plants
30862748
https://en.wikipedia.org/wiki/Pre-exponential%20factor
Pre-exponential factor
In chemical kinetics, the pre-exponential factor or A factor is the pre-exponential constant in the Arrhenius equation, k = A exp(-Ea/RT), an empirical relationship between temperature and rate coefficient. It is usually designated by A when determined from experiment, while Z is usually reserved for collision frequency. The pre-exponential factor can be thought of as a measure of the frequency of properly oriented collisions. It is typically determined experimentally by measuring the rate constant at a particular temperature and fitting the data to the Arrhenius equation. The pre-exponential factor is generally not exactly constant, but rather depends on the specific reaction being studied and the temperature at which the reaction is occurring. The units of the pre-exponential factor A are identical to those of the rate constant and will vary depending on the order of the reaction. For a first-order reaction, it has units of s−1. For that reason, it is often called the frequency factor. According to collision theory, the frequency factor, A, depends on how often molecules collide when all concentrations are 1 mol/L and on whether the molecules are properly oriented when they collide. Values of A for some reactions can be found at Collision theory. According to transition state theory, A can be expressed in terms of the entropy of activation of the reaction.
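The following minimal Python sketch shows how the pre-exponential factor enters the Arrhenius relation numerically; the values chosen for A and the activation energy Ea are purely illustrative assumptions and do not come from the text above.

import math

R = 8.314  # molar gas constant, J/(mol*K)

def arrhenius_rate_constant(A, Ea, T):
    # k = A * exp(-Ea / (R*T)); A in s^-1 for a first-order reaction,
    # Ea in J/mol, T in kelvin
    return A * math.exp(-Ea / (R * T))

# Purely illustrative (hypothetical) values: A = 1.0e13 s^-1, Ea = 100 kJ/mol
for T in (298.0, 350.0, 400.0):
    k = arrhenius_rate_constant(1.0e13, 100.0e3, T)
    print(f"T = {T:.0f} K -> k = {k:.3e} s^-1")

Because A multiplies the exponential term directly, doubling A doubles k at every temperature, while the exponential factor is what produces the strong temperature dependence of the rate constant.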
Physical sciences
Kinetics
Chemistry
30862857
https://en.wikipedia.org/wiki/Technology%20and%20society
Technology and society
Technology, society and life or technology and culture refers to the inter-dependency, co-dependence, co-influence, and co-production of technology and society upon one another. Evidence for this synergy has been found since humanity first started using simple tools. The inter-relationship has continued as modern technologies such as the printing press and computers have helped shape society. The first scientific approach to this relationship occurred with the development of tektology, the "science of organization", in early twentieth century Imperial Russia. In modern academia, the interdisciplinary study of the mutual impacts of science, technology, and society, is called science and technology studies. The simplest form of technology is the development and use of basic tools. The prehistoric discovery of how to control fire and the later Neolithic Revolution increased the available sources of food, and the invention of the wheel helped humans to travel in and control their environment. Developments in historic times have lessened physical barriers to communication and allowed humans to interact freely on a global scale, such as the printing press, telephone, and Internet. Technology has developed advanced economies, such as the modern global economy, and has led to the rise of a leisure class. Many technological processes produce by-products known as pollution, and deplete natural resources to the detriment of Earth's environment. Innovations influence the values of society and raise new questions in the ethics of technology. Examples include the rise of the notion of efficiency in terms of human productivity, and the challenges of bioethics. Philosophical debates have arisen over the use of technology, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar reactionary movements criticize the pervasiveness of technology, arguing that it harms the environment and alienates people. However, proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition. Pre-historical The importance of stone tools, circa 2.5 million years ago, is considered fundamental in the human development in the hunting hypothesis. Primatologist, Richard Wrangham, theorizes that the control of fire by early humans and the associated development of cooking was the spark that radically changed human evolution. Texts such as Guns, Germs, and Steel suggest that early advances in plant agriculture and husbandry fundamentally shifted the way that collective groups of individuals, and eventually societies, developed. Modern examples and effects Technology has taken a large role in society and day-to-day life. When societies know more about the development in a technology, they become able to take advantage of it. When an innovation achieves a certain point after it has been presented and promoted, this technology becomes part of the society. The use of technology in education provides students with technology literacy, information literacy, capacity for life-long learning, and other skills necessary for the 21st century workplace. Digital technology has entered each process and activity made by the social system. In fact, it constructed another worldwide communication system in addition to its origin. 
A 1982 article in The New York Times described a technology assessment study by the Institute for the Future, "peering into the future of an electronic world." The study focused on the emerging videotex industry, formed by the marriage of two older technologies, communications and computing. It estimated that 40 percent of American households would have two-way videotex service by the end of the century. By comparison, it took television 16 years to penetrate 90 percent of households from the time commercial service was begun. The creation of computers provided a far better way to transmit and store data. Digital technology became commonly used for downloading music and watching movies at home, either on DVD or purchased online. Digital music recordings are not quite the same as traditional recording media, because digital copies are reproducible, portable, and essentially free to make. Around the globe, educational technology has been implemented in primary schools, universities, and colleges. Statistics indicate that in the early 1990s the use of the Internet in schools was, on average, 2–3%; by the end of the 1990s it had risen rapidly to around 60%, and by 2008 nearly 100% of schools used the Internet for educational purposes. According to ISTE researchers, technological improvements can lead to numerous achievements in classrooms: e-learning systems, student collaboration on project-based learning, and technological skills for the future all contribute to student motivation. Although these previous examples only show a few of the positive aspects of technology in society, there are negative side effects as well. Within this virtual realm, social media platforms such as Instagram, Facebook, and Snapchat have altered the way Generation Y understands the world and thus how its members view themselves. In recent years, there has been more research on the development of social media depression in users of sites like these. "Facebook depression" occurs when users are so affected by their friends' posts and lives that jealousy depletes their sense of self-worth. They compare themselves to the posts made by their peers and feel unworthy or dull because their lives seem not nearly as exciting as the lives of others. Technology also has a serious effect on young people's health. The overuse of technology is said to be associated with sleep deprivation, which is linked to obesity and poor academic performance among adolescents. Economics and technological development In ancient history, economics began when spontaneous exchange of goods and services was replaced over time by deliberate trade structures. Makers of arrowheads, for example, might have realized they could do better by concentrating on making arrowheads and bartering for other needs. Regardless of the goods and services bartered, some amount of technology was involved, if no more than in the making of shell and bead jewelry. Even the shaman's potions and sacred objects can be said to have involved some technology. So, from the very beginnings, technology can be said to have spurred the development of more elaborate economies. Technology is seen as a primary source of economic development, and technological advancement and economic growth are related to each other: the level of technology helps determine economic growth, and it is technological progress that keeps an economy moving.
In the modern world, superior technologies, resources, geography, and history give rise to robust economies; and in a well-functioning, robust economy, economic excess naturally flows into greater use of technology. Moreover, because technology is such an inseparable part of human society, especially in its economic aspects, funding sources for (new) technological endeavors are virtually illimitable. However, while in the beginning technological investment involved little more than the time, efforts, and skills of one or a few men, today such investment may involve the collective labor and skills of many millions. Most recently, because of the COVID-19 pandemic, the proportion of firms employing advanced digital technology in their operations expanded dramatically. It was found that firms that adopted technology were better prepared to deal with the pandemic's disruptions. Adaptation strategies such as remote working, 3D printing, and the use of big data analytics and AI to plan activities helped ensure positive job growth. Funding Consequently, the sources of funding for large technological efforts have dramatically narrowed, since few have ready access to the collective labor of a whole society, or even a large part. It is conventional to divide up funding sources into governmental (involving whole, or nearly whole, social enterprises) and private (involving more limited, but generally more sharply focused) business or individual enterprises. Government funding for new technology The government is a major contributor to the development of new technology in many ways. In the United States alone, many government agencies specifically invest billions of dollars in new technology. In 1980, the UK government invested just over six million pounds in a four-year program, later extended to six years, called the Microelectronics Education Programme (MEP), which was intended to give every school in Britain at least one computer, software, training materials, and extensive teacher training. Similar programs have been instituted by governments around the world. Technology has frequently been driven by the military, with many modern applications developed for the military before they were adapted for civilian use. However, this has always been a two-way flow, with industry often developing and adopting a technology only later adopted by the military. Entire government agencies are specifically dedicated to research, such as America's National Science Foundation, the United Kingdom's scientific research institutes, and America's Small Business Innovation Research program. Many other government agencies dedicate a major portion of their budget to research and development. Private funding Research and development is one of the smallest areas of investments made by corporations toward new and innovative technology. Many foundations and other nonprofit organizations contribute to the development of technology. In the OECD, about two-thirds of research and development in scientific and technical fields is carried out by industry, and about 20 percent and 10 percent, respectively, by universities and government. But in poorer countries such as Portugal and Mexico the industry contribution is significantly less. The U.S. government spends more than other countries on military research and development, although the proportion has fallen from about 30 percent in the 1980s to less than 10 percent.
The 2009 founding of Kickstarter allows individuals to receive funding via crowdfunding for many technology-related products, including new physical creations as well as documentaries, films, and web series that focus on technology management. This circumvents the corporate or government oversight most inventors and artists struggle against, but leaves the accountability of the project completely with the individual receiving the funds. Other economic considerations Appropriate technology, sometimes called "intermediate" technology, more of an economics concern, refers to compromises between the central and expensive technologies of developed nations and those that developing nations find most effective to deploy given an excess of labour and scarcity of cash. Persuasion technology: In economics, definitions or assumptions of progress or growth are often related to one or more assumptions about technology's economic influence. Challenging prevailing assumptions about technology and its usefulness has led to alternative ideas like uneconomic growth or measuring well-being. These, and economics itself, can often be described as technologies, specifically, as persuasion technology. Technocapitalism Technological diffusion Technology acceptance model Technology life cycle Technology transfer Relation to science The relationship between science and technology can be complex. Science may drive technological development, by generating demand for new instruments to address a scientific question, or by illustrating technical possibilities previously unconsidered. An environment of encouraged science will also produce scientists, engineers, and technical schools, which encourages innovation and entrepreneurship capable of taking advantage of the existing science. In fact, it is recognized that "innovators, like scientists, do require access to technical information and ideas" and "must know enough to recognize useful knowledge when they see it." Science spillover also contributes to greater technological diffusion. Having a strong policy contributing to basic science allows a country access to a strong knowledge base that will allow it to be "ready to exploit unforeseen developments in technology" when needed in times of crisis. For most of human history, technological improvements were arrived at by chance, trial and error, or spontaneous inspiration. Stokes referred to these innovators as improvers of technology "...who knew no science and would not have been helped by it if they had." This idea is supported by Diamond, who further indicated that these individuals are "more likely to achieve a breakthrough if [they do] not hold the currently dominant theory in too high regard." Research and development directed towards immediate technical application is a relatively recent occurrence, arising with the Industrial Revolution and becoming commonplace in the 20th century. In addition, there are examples of economies that did not emphasize scientific research yet have been shown to be technological leaders despite this. For example, the United States relied on the scientific output of Europe in the early 20th century, even though it was regarded as a leader in innovation. Another example is the technological advancement of Japan in the latter part of the same century, which emphasized more applied science (directly applicable to technology).
Though the link between science and technology still needs further clarification, what is known is that the societal building blocks which encourage this link are critical. A nation without emphasis on science is likely to eventually stagnate technologically and risk losing competitive advantage. The most critical areas of focus for policymakers are discouraging excessive protections on job security, which lead to less mobility of the workforce; encouraging the reliable availability of sufficient low-cost capital for investment in R&D through favorable economic and tax policies; and supporting higher education in the sciences to produce scientists and engineers. Sociological factors and effects Values The implementation of technology influences the values of a society by changing expectations and realities. The implementation of technology is also influenced by values. There are (at least) three major, interrelated values that inform, and are informed by, technological innovations: Mechanistic world view: Viewing the universe as a collection of parts (like a machine) that can be individually analyzed and understood. This is a form of reductionism that is rare nowadays. However, the "neo-mechanistic world view" holds that everything in the universe can, in principle, be understood by the human intellect. Also, while all things are greater than the sum of their parts (e.g., even if we consider nothing more than the information involved in their combination), in principle, even this excess must eventually be understood by human intelligence. That is, no divine or vital principle or essence is involved. Efficiency: A value, originally applied only to machines, but now applied to all aspects of society, so that each element is expected to attain a higher and higher percentage of its maximal possible performance, output, or ability. Social progress: The belief that there is such a thing as social progress, and that, in the main, it is beneficent. Before the Industrial Revolution, and the subsequent explosion of technology, almost all societies believed in a cyclical theory of social movement and, indeed, of all history and the universe. This was, obviously, based on the cyclicity of the seasons, and an agricultural economy's and society's strong ties to that cyclicity. Since much of the world remains close to its agricultural roots, it is still much more amenable to cyclicity than to progress in history. This may be seen, for example, in Prabhat Rainjan Sarkar's modern social cycles theory. For a more westernized version of social cyclicity, see Generations: The History of America's Future, 1584 to 2069 by Neil Howe and William Strauss (Harper Perennial, reprint edition, September 30, 1992), and subsequent books by these authors. Institutions and groups Technology often enables organizational and bureaucratic group structures that otherwise and heretofore were simply not possible. Examples of this might include: The rise of very large organizations: e.g., governments, the military, health and social welfare institutions, supranational corporations. The commercialization of leisure: sports events, products, etc. (McGinn) The almost instantaneous dispersal of information (especially news) and entertainment around the world. International Technology enables greater knowledge of international issues, values, and cultures.
Due mostly to mass transportation and mass media, the world seems to be a much smaller place, as reflected in the following: Globalization of ideas Embeddedness of values Population growth and control Environment Technology can provide understanding of and appreciation for the world around us, enable sustainability, and improve environmental conditions, but it can also degrade the environment and facilitate unsustainability. Some polities may conclude that certain technologies' environmental detriments and other risks outweigh their benefits, especially if or once substitutive technologies have been or can be invented, leading to directed technological phase-outs such as the fossil fuel phase-out and the nuclear fission power phase-out. Most modern technological processes produce unwanted byproducts in addition to the desired products, which are known as waste and pollution. While material waste is often re-used in industrial processes, many processes lead to a release into the environment with negative environmental side effects, such as pollution and lack of sustainability. Development and technologies' implications Some technologies are designed specifically with the environment in mind, but most are designed first for financial or economic effects such as the free market's profit motive. The effects of a specific technology are often not only dependent on how it is used (its usage context) but also predetermined by the technology's design or characteristics, as in the theory of "the medium is the message", which relates specifically to media technologies. In many cases, such predetermined or built-in implications may vary depending on contextual contemporary conditions such as human biology, international relations, and socioeconomics. However, many technologies may be harmful to the environment only when used in specific contexts or for specific purposes that do not necessarily result from the nature of the technology. Values Historically, from the perspective of economic agent-centered responsibility, an increased valuation of healthy environments and more efficient productive processes, which as of 2021 remains largely theoretical and informal, may be the result of an increase in the wealth of society. Once people are able to provide for their basic needs, they can not only afford more environmentally destructive products and services, but may also put effort, motivated for example by individual morality, into valuing less tangible goods such as clean air and water, provided that information about products, alternatives, consequences, and services is adequate. From the perspective of systems science and cybernetics, economic actors and sectors within an economy make decisions based upon a range of system-internal factors and structures, or sometimes by leveraging existing structures; different outcomes would be the result of other architectures, or systems-level configurations of the existing designs, which are considered possible in the sense that they could be modeled, tested, assessed in advance, developed, and studied. Negative effects on the environment The effects of technology on the environment are both obvious and subtle. The more obvious effects include the depletion of nonrenewable natural resources (such as petroleum, coal, and ores) and the added pollution of air, water, and land. The more subtle effects may include long-term effects (e.g. global warming, deforestation, natural habitat destruction, coastal wetland loss).
Pollution and energy requirements Each wave of technology creates a set of waste previously unknown to humans: toxic waste, radioactive waste, electronic waste, plastic waste, space waste. Electronic waste creates direct environmental impacts through producing and maintaining the infrastructure necessary for using technology, and indirect impacts by breaking down barriers to global interaction through the use of information and communications technology. Certain usages of information technology and infrastructure maintenance consume energy that contributes to global warming. This includes software designs such as international cryptocurrencies and most hardware powered by nonrenewable sources. One of the main problems is the lack of societal decision-making processes, such as the contemporary economy and politics, that lead to sufficient and expedient large-scale implementation of existing as well as potential efficient ways to remove, recycle, and prevent these pollutants. Digital technologies, however, are important in achieving the green transition and, specifically, the SDGs and the European Green Deal's environmental targets. Emerging digital technologies, if correctly applied, have the potential to play a critical role in addressing environmental issues. A few examples are smart city mobility, precision agriculture, sustainable supply chains, environmental monitoring, and catastrophe prediction. Construction and shaping Choice Society also controls technology through the choices it makes. These choices not only include consumer demands; they also include: the channels of distribution, that is, how products go from raw materials to consumption to disposal; the cultural beliefs regarding style, freedom of choice, consumerism, materialism, etc.; and the economic values we place on the environment, individual wealth, government control, capitalism, etc. According to Williams and Edge, the construction and shaping of technology includes the concept of choice (and not necessarily conscious choice). Choice is inherent both in the design of individual artifacts and systems, and in the making of those artifacts and systems. The idea here is that a single technology may not emerge from the unfolding of a predetermined logic or a single determinant; technology could be a garden of forking paths, with different paths potentially leading to different technological outcomes. This is a position that has been developed in detail by Judy Wajcman. Therefore, choices could have differing implications for society and for particular social groups. Autonomous technology In one line of thought, technology develops autonomously; in other words, technology seems to feed on itself, moving forward with a force irresistible by humans. To these individuals, technology is "inherently dynamic and self-augmenting." Jacques Ellul is one proponent of the irresistibleness of technology to humans. He espouses the idea that humanity cannot resist the temptation of expanding our knowledge and our technological abilities. However, he does not believe that this seeming autonomy of technology is inherent; rather, the perceived autonomy arises because humans do not adequately consider the responsibility that is inherent in technological processes. Langdon Winner critiques the idea that technological evolution is essentially beyond the control of individuals or society in his book Autonomous Technology.
He argues instead that the apparent autonomy of technology is a result of "technological somnambulism," the tendency of people to uncritically and unreflectively embrace and utilize new technologies without regard for their broader social and political effects. In 1980, Mike Cooley published a critique of the automation and computerisation of engineering work under the title "Architect or Bee? The human/technology relationship". The title alludes to a comparison made by Karl Marx, on the issue of the creative achievements of human imaginative power. According to Cooley, "Scientific and technological developments have invariably proved to be double-edged. They produced the beauty of Venice and the hideousness of Chernobyl; the caring therapies of Röntgen's X-rays and the destruction of Hiroshima." Government Individuals rely on governmental assistance to control the side effects and negative consequences of technology. Supposed independence of government. An assumption commonly made about the government is that its governance role is neutral or independent. However, some argue that governing is a political process, so government will be influenced by political winds. In addition, because government provides much of the funding for technological research and development, it has a vested interest in certain outcomes. Others point out that the world's biggest ecological disasters, such as the Aral Sea, Chernobyl, and Lake Karachay, have been caused by government projects, which are not accountable to consumers. Liability. One means for controlling technology is to place responsibility for the harm with the agent causing the harm. Government can allow more or less legal liability to fall to the organizations or individuals responsible for damages. Legislation. A source of controversy is the role of industry versus that of government in maintaining a clean environment. While it is generally agreed that industry needs to be held responsible when pollution harms other people, there is disagreement over whether this should be prevented by legislation or civil courts, and whether ecological systems as such should be protected from harm by governments. Recently, the social shaping of technology has had new influence in the fields of e-science and e-social science in the United Kingdom, which has made centers focusing on the social shaping of science and technology a central part of their funding programs.
Technology
General
null
30862962
https://en.wikipedia.org/wiki/Macuahuitl
Macuahuitl
A macuahuitl is a weapon, a wooden sword with several embedded obsidian blades. The name is derived from the Nahuatl language and means "hand-wood". Its sides are embedded with prismatic blades traditionally made from obsidian, which is capable of producing an edge sharper than high-quality steel razor blades. The macuahuitl was a standard close combat weapon. Use of the macuahuitl as a weapon is attested from the first millennium CE, although depictions can be found in art dating to at least pre-classic times. By the time of the Spanish conquest the macuahuitl was widely distributed in Mesoamerica. The weapon was used by different civilisations including the Aztec (Mexicas), Olmec, Maya, Mixtec, Toltec, and Tarascans. One example of this weapon survived the Conquest of the Aztec Empire; it was part of the Royal Armoury of Madrid until it was destroyed by a fire in 1884. Images of the original designs survive in diverse catalogues. The oldest replica is the macuahuitl created by the medievalist Achille Jubinal in the 19th century. Description The maquahuitl (the spelling varies across sources) was a common weapon used by the Aztec military forces and other cultures of central Mexico. It was noted during the 16th-century Spanish conquest of the region. Other military equipment recorded includes the round shield, the bow, and the spear-thrower. Its sides are embedded with prismatic blades traditionally made from obsidian (volcanic glass); obsidian is capable of producing an edge sharper than high-quality steel razor blades. It was capable of inflicting serious lacerations from the rows of obsidian blades embedded in its sides. These could be knapped into blades or spikes, or into a circular design that looked like scales. The macuahuitl is not specifically a sword or a club, although it approximates a European broadsword. Historian John Pohl defines the weapon as a "kind of a saw sword". According to one conquistador's account, the macuahuitl was 0.91 to 1.22 m long and 75 mm wide, with a groove along either edge, into which sharp-edged pieces of flint or obsidian were inserted and firmly fixed with an adhesive. Based on his research, historian John Pohl indicates that the length was just over a meter, although other models were larger and intended for use with both hands. According to the research of historian Marco Cervera Obregón, the sharp pieces of obsidian, each about 3 cm long, were attached to the flat paddle with a natural adhesive, bitumen. The rows of obsidian blades were sometimes discontinuous, leaving gaps along the side, while at other times the rows were set close together and formed a single edge. It was noted by the Spanish that the macuahuitl was so cleverly constructed that the blades could be neither pulled out nor broken. The macuahuitl was made with either a one-handed or two-handed grip, as well as in rectangular, ovoid, or pointed forms. Two-handed macuahuitl have been described as being "as tall as a man". Typology According to National School of Anthropology and History (ENAH) archaeologist Marco Cervera Obregón, there were two versions of this weapon: the macuahuitl, with six to eight blades on each side; and the mācuāhuitzōctli, a smaller club with only four obsidian blades. Specimens According to Ross Hassig, the last authentic macuahuitl was destroyed in 1884 in a fire in the Real Armería in Madrid, where it was housed beside the last tepoztopilli.
According to Marco Cervera Obregón, there is supposed to be at least one macuahuitl in a Museo Nacional de Antropología warehouse, but it is possibly lost. No actual maquahuitl specimens remain, and the present knowledge of them comes from contemporaneous accounts and illustrations that date to the 16th century and earlier. For the exhibition "Tenochtitlan y Tlatelolco. A 500 años de su caída" at the Museo del Templo Mayor in Mexico City, an alleged authentic macuahuitl was displayed along with an atlatl. Origins and distribution The maquahuitl predates the Aztecs. Tools made from obsidian fragments were used by some of the earliest Mesoamericans. Obsidian used in ceramic vessels has been found at Aztec sites. Obsidian cutting knives, sickles, scrapers, drills, razors, and arrow points have also been found. Several obsidian mines were close to the Aztec civilizations in the Valley of Mexico as well as in the mountains north of the valley. Among these were the Sierra de las Navajas (Razor Mountains), named after their obsidian deposits. Use of the macuahuitl as a weapon is attested from the first millennium CE. A Mayan carving at Chichen Itza shows a warrior holding a macuahuitl, depicted as a club having separate blades sticking out from each side. In a mural, a warrior holds a club with many blades on one side and one sharp point on the other, also a possible variant of the macuahuitl. Some attestations of a type of macuahuitl also date to Olmec times. By the time of the Spanish conquest, the macuahuitl was widely distributed in Mesoamerica, with records of its use by the Aztecs, Mixtecs, Tarascans, Toltecs and others. It was also commonly used by the Indian auxiliaries of Spain, though they favored Spanish swords. As Mesoamericans in Spanish service needed special permission to carry European arms, metal swords brought Indian auxiliaries more prestige than maquahuitls in the eyes of Europeans as well as natives. Effectiveness The macuahuitl was sharp enough to decapitate a man. According to an account by Bernal Díaz del Castillo, one of Hernán Cortés's conquistadors, it could even decapitate a horse. A companion of Cortés known as The Anonymous Conqueror tells a similar story of its effectiveness, and Francisco de Aguilar gives a comparable account. Given the importance of human sacrifice in Nahua cultures, their warfare styles, particularly those of the Aztec and Maya, placed a premium on the capture of enemy warriors for live sacrifice. Advancement into the elite cuāuhocēlōtl warrior societies of the Aztec, for example, required taking 20 live captives from the battlefield. The macuahuitl thus shows several features designed to make it a useful tool for capturing prisoners: fitting spaced instead of contiguous blades, as seen in many codex illustrations, would intentionally limit the wound depth from a single blow, and the heavy wooden construction allows weakened opponents to be easily clubbed unconscious with the flat side of the weapon. The art of disabling opponents using an un-bladed macuahuitl as a sparring club was taught from a young age in the Aztec Tēlpochcalli schools. The macuahuitl had many drawbacks in combat versus European steel swords. Despite being sharper, prismatic obsidian is also considerably more brittle than steel; obsidian blades of the type used on the macuahuitl tended to shatter on impact with other obsidian blades, steel swords or plate armour. Obsidian blades also have difficulty penetrating European mail.
The thin, replaceable blades used on the macuahuitl were easily dulled or chipped by repeated impacts on bone or wood, making artful use of the weapon critical. It takes more time to lift and swing a club than it does to thrust with a sword. More space is needed as well, so warriors advanced in loose formations and fought in single combat. Experimental archaeology Replicas of the macuahuitl have been produced and tested against sides of beef for documentary shows on the History and Discovery channels, to demonstrate the effectiveness of this weapon. On the History show Warriors, special forces operator and martial artist Terry Schappert injured himself while fencing with a macuahuitl; he cut the back of his left leg as the result of a back-swing motion. For SpikeTV's reality program Deadliest Warrior a replica was created and tested against a model of a horse's head created using a horse's skeleton and ballistics gel. Actor and martial artist Éder Saúl López was able to decapitate the model, but it took three swings. Blows from the replica macuahuitl were most effective when it was swung and then dragged backwards upon impact, creating a sawing motion. This led Max Geiger, the computer programmer of the series, to refer to the weapon as "the obsidian chainsaw". This may have been due to the unrefined obsidian cutting edges of the weapon used in the show, compared with more finely made prismatic obsidian blades, as in the Madrid specimen.
Technology
Swords
null
30863191
https://en.wikipedia.org/wiki/Basic%20research
Basic research
Basic research, also called pure research, fundamental research, basic science, or pure science, is a type of scientific research with the aim of improving scientific theories for better understanding and prediction of natural or other phenomena. In contrast, applied research uses scientific theories to develop technology or techniques, which can be used to intervene and alter natural or other phenomena. Though often driven simply by curiosity, basic research often fuels the technological innovations of applied science. The two aims are often practiced simultaneously in coordinated research and development. In addition to innovations, basic research serves to provide insights and public support of nature, possibly improving conservation efforts. Technological innovations may influence engineering concepts, such as the beak of a kingfisher influencing the design of a high-speed bullet train. Overview Basic research advances fundamental knowledge about the world. It focuses on creating and refuting or supporting theories that explain observed phenomena. Pure research is the source of most new scientific ideas and ways of thinking about the world. It can be exploratory, descriptive, or explanatory; however, explanatory research is the most common. Basic research generates new ideas, principles, and theories, which may not be immediately utilized but nonetheless form the basis of progress and development in different fields. Today's computers, for example, could not exist without research in pure mathematics conducted over a century ago, for which there was no known practical application at the time. Basic research rarely helps practitioners directly with their everyday concerns; nevertheless, it stimulates new ways of thinking that have the potential to revolutionize and dramatically improve how practitioners deal with a problem in the future. History By country In the United States, basic research is funded mainly by the federal government and done mainly at universities and institutes. As government funding has diminished in the 2010s, however, private funding is increasingly important. Basic versus applied science Applied science focuses on the development of technology and techniques. In contrast, basic science develops scientific knowledge and predictions, principally in natural sciences but also in other empirical sciences, which are used as the scientific foundation for applied science. Basic science develops and establishes information to predict phenomena and perhaps to understand nature, whereas applied science uses portions of basic science to develop interventions via technology or technique to alter events or outcomes. Applied and basic sciences can interface closely in research and development. The interface between basic research and applied research has been studied by the National Science Foundation. A worker in basic scientific research is motivated by a driving curiosity about the unknown. When his explorations yield new knowledge, he experiences the satisfaction of those who first attain the summit of a mountain or the upper reaches of a river flowing through unmapped territory. Discovery of truth and understanding of nature are his objectives. His professional standing among his fellows depends upon the originality and soundness of his work. 
Creativeness in science is of a cloth with that of the poet or painter. The NSF conducted a study in which it traced the relationship between basic scientific research efforts and the development of major innovations, such as oral contraceptives and videotape recorders. This study found that basic research played a key role in the development of all of the innovations. The amount of basic science research that assisted in the production of a given innovation peaked between 20 and 30 years before the innovation itself. While most innovation takes the form of applied science and most innovation occurs in the private sector, basic research is a necessary precursor to almost all applied science and associated instances of innovation. Roughly 76% of basic research is conducted by universities. A distinction can be made between basic science and disciplines such as medicine and technology. They can be grouped as STM (science, technology, and medicine; not to be confused with STEM [science, technology, engineering, and mathematics]) or STS (science, technology, and society). These groups are interrelated and influence each other, although they may differ in the specifics such as methods and standards. The Nobel Prize mixes basic with applied sciences for its award in Physiology or Medicine. In contrast, the Royal Society of London awards distinguish natural science from applied science.
Physical sciences
Science basics
Basics and measurement
30863347
https://en.wikipedia.org/wiki/Bamboo%20shoot
Bamboo shoot
Bamboo shoots or bamboo sprouts are the edible shoots (new bamboo culms that come out of the ground) of many bamboo species including Bambusa vulgaris and Phyllostachys edulis. They are used as vegetables in numerous Asian dishes and broths. They are sold in various processed shapes and are available in fresh, dried, and canned versions. Raw bamboo shoots contain cyanogenic glycosides, natural toxins also contained in cassava. The toxins must be destroyed by thorough cooking, and for this reason, fresh bamboo shoots are boiled before being used in other ways. The toxins are also destroyed in the canning process. Harvested species Most young bamboo shoots are edible after being boiled to remove toxins, but only around a hundred or so species are harvested regularly for edible shoots. These are usually from species that are also cultivated for other uses. These include: Acidosasa – native to South China and Vietnam Acidosasa edulis – endemic to the provinces of Fujian, Zhejiang and Jiangxi, China. Acidosasa chinensis – endemic to Guangdong, China Bambusa – the most commonly harvested bamboo in tropical and subtropical Asia, occurring from the Philippines to India, and from Sumatra to southern China. Bambusa balcooa – native to the Indian subcontinent to Mainland Southeast Asia Bambusa bambos – native to South Asia Bambusa beecheyana – native to South China to Mainland Southeast Asia and Taiwan Bambusa blumeana – native to Island Southeast Asia Bambusa gibboides – native to Guangdong, China Bambusa merrilliana – endemic to the Philippines Bambusa odashimae – endemic to Taiwan Bambusa oldhamii – native to Taiwan and South China Bambusa polymorpha – native to Mainland Southeast Asia, Bangladesh, and northeastern India. Bambusa philippinensis – endemic to the Philippines Bambusa tulda – native to the Himalayas region, Yunnan, and northern Mainland Southeast Asia Bambusa tuldoides – native to Guangdong, Guangxi, and northern Mainland Southeast Asia Bambusa vulgaris – native to Mainland Southeast Asia and Yunnan, China Chimonobambusa – native to the Himalayas, Mainland Southeast Asia, China, and Japan Dendrocalamus – native to tropical South Asia, Southeast Asia, and South China Dendrocalamus asper – native to Southeast Asia Dendrocalamus latiflorus – native to South China and Taiwan Dendrocalamus membranaceus – native to tropical Southeast Asia Dendrocalamus strictus – native to tropical Southeast Asia and India Gigantochloa – native to tropical Asia Gigantochloa atter – native to Island Southeast Asia Gigantochloa levis – native to Island Southeast Asia Phyllostachys – native from the Himalayas to East Asia Phyllostachys edulis – native to South China and Taiwan Phyllostachys bambusoides – native to China, Taiwan, and Japan Phyllostachys rivalis – endemic to China Phyllostachys vivax – endemic to China Sasa – native to Korea, Japan, and eastern Russia (Sakhalin) Sasa kurilensis – native to Korea, Japan, and eastern Russia (Sakhalin) Freshly collected bamboo shoots are a good source of thiamine, niacin, vitamin A, vitamin B6, and vitamin E. 17 different amino acids have been reported, 8 of them essential for humans. The amount of amino acids in canned and fermented shoots is lower than when freshly prepared. Uses Culinary Raw bamboo is toxic to humans, containing cyanide compounds, hence it is always boiled when used for human consumption. 
The diet of giant pandas and red pandas is largely made up of raw bamboo; the animals' body tissue is not well able to detoxify cyanide, but their gut microbiomes are significantly enriched in putative genes coding for enzymes related to cyanide degradation, suggesting that they have cyanide-digesting gut microbes. East Asia In certain parts of Japan, China, and Taiwan, shoots from the giant timber bamboo Bambusa oldhamii are harvested in spring or early summer. Young shoots from this species are highly sought-after due to their crisp texture and sweet taste. Older shoots, however, have an acrid flavor and should be sliced thin and boiled in a large volume of water several times. The sliced bamboo is edible after boiling. B. oldhamii is more widely known as a noninvasive landscaping bamboo. Pickled bamboo, used as a condiment, may also be made from the pith of the young shoots. In Japan, menma is a common topping for ramen noodle soup. In China, luosifen river snail noodles, a popular dish from Guangxi, get their famously pungent smell from pickled bamboo shoots. South Asia In Nepal, they are used in dishes that have been well known in the country for centuries. A popular dish is tama (fermented bamboo shoot), made with potato and beans. An old popular Nepali song mentions tama: "my mother loves a vegetable dish containing potato, beans, and tama". Some varieties of bamboo shoots commonly grown in the Sikkim Himalayas of India are Dendrocalamus hamiltonii, Dendrocalamus sikkimensis and Bambusa tulda, locally known as choya bans, bhalu bans and karati bans. These are edible when young. These bamboo shoots are collected, defoliated and boiled in water with turmeric powder for 10–15 minutes to remove the bitter taste of the bamboo, after which the tama is ready for consumption. Tama is commonly sold in local markets during the months of June to September, when young bamboo shoots sprout. In Assam, the bamboo shoot is called bah gaj in Assamese and hen-up in Karbi. It is an integral part of traditional Assamese cuisine. Fermented bamboo shoot, called khorisa, is a widely used ingredient in Assamese recipes for meats such as pork, chicken, duck and squab or pigeon. Fermentation increases the nutritional value of bamboo shoots by making some nutrients more bioavailable and degrading toxins. In Karnataka, Andhra Pradesh, and northern Tamil Nadu, bamboo shoots are used in a special dish during the monsoons (due to seasonal availability). It is common in the Tulunadu and Malnad regions. It goes by the name kanile or kalale in Tulu, Veduru Kommulu in Telugu, and Moongil Kuruthu in Tamil. The shoots are usually sliced and soaked in water for two to three days, and the water is drained and replenished each day to leach out the toxins. It is also used as a pickle. It is consumed as a delicacy by all communities in the region. In the Diyun region of Arunachal Pradesh, the Chakma people call them bashchuri. The fermented version is called medukkeye and is often served fried with pork. The bamboo shoots can also be fermented and stored with vinegar. In Jharkhand, India, the bamboo shoots are used as a vegetable. Young shoots and stored shoots are known as karil and shandhna respectively. In the western part of Odisha, India, they are known as karadi and are used in traditional curries such as Ambila, pithou bhaja and pickle. During the monsoon, they can be found abundantly in the bamboo forest of Karlapat wildlife sanctuary and are mostly prepared in homes using mustard paste.
They can be stored for months in an airtight container. They are also dried in the sun to increase their shelf life; these dried shoots are called Hendua. The dried shoots are used in curries of roasted fish, called Poda Macha. In Nagaland, India, bamboo shoots are both cooked and eaten as a fresh food item or fermented for a variety of culinary uses. Fermented bamboo shoot is commonly known as bas tenga. Cooking pork with a generous portion of fermented bamboo shoot is very popular in Naga cuisine. In Manipur, India, they are known as u-soi. They are also fermented and preserved, after which they are known as soibum. They are used in a wide variety of dishes, among which are iromba, ooti and kangshu. Fermented bamboo shoot that has been preserved for many months is known as soijin. Soijin can be stored for up to 10 years in Andro village. Generally, soijin or usoi is stored in a big basket made of bamboo; in Andro village, however, it is stored in an earthen pot. In Meghalaya, bamboo shoots are either used fresh or fermented and made into pickles, soups with pork or dried fish, or curried and seasoned with sesame seeds or made into a sauce with fermented fish. They are sometimes cooked along with yam leaves and dried fish. In the Chittagong Hill Tracts, Bangladesh, bamboo shoots are a traditional food of the indigenous Jumma people. The preparation of their dishes consists of several steps. First, bamboo shoots are collected from the bamboo forest, then defoliated and boiled in water. Afterwards, the bamboo shoot is prepared with shrimp paste, chili, garlic paste, and salt. Southeast Asia In the Philippines, bamboo shoots are primarily harvested from bolo bamboo (Gigantochloa levis), giant bamboo (Dendrocalamus asper), common bamboo (Bambusa vulgaris), spiny bamboo (Bambusa blumeana), and two endemic species, bayog (Bambusa merrilliana) and laak (Bambusa philippinensis). Other economically important species also harvested for bamboo shoots include kayali (Gigantochloa atter), male bamboo (Dendrocalamus strictus), and climbing bamboos (Dinochloa spp.). Another endemic species, the lumampao or bagakay bamboo (Schizostachyum lumampao), which is used for making sawali (woven bamboo strips), is also occasionally harvested for bamboo shoots. In Filipino cuisine, the shoots are commonly called labóng (other names include rabong, dabong, or tambo). The two most popular dishes for these are ginataáng labóng (shoots in coconut milk and chilies) and dinengdeng na labóng (shoots in fish bagoóng and stew of string beans, saluyot, and tinapa). They are also sautéed alone or with other ingredients as in paklay, or cooked as fried or fresh lumpia. Bamboo shoots are also preserved as atchara, traditional sweet pickles that are often made from papaya. In Thai cuisine, bamboo shoots are called no mai. They can be used in stir-fries, soups such as tom kha kai, curries such as kaeng tai pla, as well as in salads such as sup no-mai. Some dishes call for fresh bamboo shoots, others for pickled bamboo shoots (no mai dong). In Vietnamese cuisine, shredded bamboo shoots are used alone or with other vegetables in many stir-fried vegetable dishes. They may also be used as the sole vegetable ingredient in pork chop soup. Duck and bamboo shoot noodles (Bún măng vịt) is also a famous noodle dish in Vietnam. In Myanmar, bamboo shoots are called hmyit. They can be used in a soup called myahait hcaut tar la bot or talabaw (bamboo soup). The preparation of this dish generally follows three steps.
First, the bamboo shoots are collected from a bamboo forest. Bamboo can be found throughout Myanmar, but the bamboo shoots from the two northernmost regions (Kachin State and Sagaing Region) are soft and good in taste. The bamboo shoots are then boiled in water, after which they can be cooked with curry powder, rice powder, and other ingredients such as snakehead fish and basil leaves. A small amount of rice and some shreds of meat or seafood may also be added. The soup was traditionally used by the Karen people as a supplement to rice, which was not readily or cheaply available to them. Talabaw is one of the best-known soups in Myanmar, and is widely considered to be the essential dish of Karen cuisine. Another bamboo shoot dish in Burmese cuisine is a sour bamboo shoot curry called hmyit chin hin (မျှစ်ချဉ်ဟင်း), a specialty of Naypyidaw in central Burma. In Indonesia, they are sliced thinly to be boiled with coconut milk and spices to make gulai rebung. Other recipes using bamboo shoots are sayur lodeh (mixed vegetables in coconut milk) and lun pia (sometimes written lumpia: fried wrapped bamboo shoots with vegetables). The shoots of some species contain cyanide that must be leached or boiled out before they can be eaten safely. Slicing the bamboo shoots thinly assists in this leaching. Ethiopia Bamboo shoots are also eaten in Ethiopia. The locations where bamboo grows are not contiguous. Bamboo shoots of O. abyssinica are eaten in lowland locations of the Pawe, Assosa and Bambasi districts of Benishangul Gumuz Regional State and in Sinan Wereda of Amhara Regional State. Bamboo shoots of A. alpina are also eaten at higher elevations, including from two bamboo forests about 80 km apart, both areas at higher elevations than the surrounding land. One is on a mountain south of Mizan Teferi; the other is at the higher elevations near Maasha, between the cities of Tepi and Gore.
Biology and health sciences
Other vegetables
Plants
30863385
https://en.wikipedia.org/wiki/Boa%20constrictor
Boa constrictor
The boa constrictor (scientific name also Boa constrictor), also known as the common boa, is a species of large, non-venomous, heavy-bodied snake that is frequently kept and bred in captivity. The boa constrictor is a member of the family Boidae. The species is native to tropical South America. A staple of private collections and public displays, its color pattern is highly variable yet distinctive. Four subspecies are recognized. Common names Though all boids are indeed constrictors, only Boa constrictor (and its subspecies) is commonly referred to, in English, as a boa constrictor—an example of a species being referred to colloquially using its scientific binomial name. The species and subspecies of B. constrictor are part of a variable, diverse group of New World boids referred to as "red-tailed" boas, comprising the species Boa constrictor and Boa imperator. Within the exotic pet trade, it is known as a "BCC"—an abbreviation of its scientific name—to distinguish it from other boa species, such as Boa imperator (known as "BCI" or "boa constrictor imperator"). Other regional names include the chij-chan (Mayan), jiboia (Portuguese), and macajuel (Trinidadian). Subspecies Several subspecies of Boa constrictor have been described in the past, but many of these are poorly differentiated, and further research may redefine many of them. Some appear to be based more on location rather than on biological differences. Boa imperator, Boa nebulosa, Boa orophias and Boa sigma have all been elevated to full species status. Several other subspecies have been described at different times, but currently, these are no longer considered to be valid subspecies by many herpetologists and taxonomists. They include: B. c. amarali Stull, 1932 B. c. melanogaster Langhammer, 1983: a nomen dubium Description Size and weight The boa constrictor is a large snake, although it is only modestly sized in comparison to other large snakes, such as the reticulated python, Burmese python, or the occasionally sympatric green anaconda, and can reach lengths from depending on the locality and the availability of suitable prey. Clear sexual dimorphism is seen in the species, with females generally being larger in both length and girth than males. The usual size of mature female boas is between whereas males are between . Females commonly exceed , particularly in captivity, where lengths up to or even can be seen. The largest documented non-stretched dry skin is deposited at Zoologische Staatssammlung München (ZSM 4961/2012) and measures 14.6 ft (4.45 m) without head. A report of a boa constrictor growing up to was later found to be a misidentified green anaconda. The boa constrictor is a heavy-bodied snake, and large specimens can weigh up to . Females, the larger sex, more commonly weigh . Some specimens of this species can reach or possibly exceed , although this is not usual. The size and weight of a boa constrictor depends on subspecies, locale, and the availability of suitable prey. B. c. constrictor reaches, and occasionally tops, the averages given above, as it is one of the relatively large subspecies of Boa constrictor. Other examples of sexual dimorphism in the species include males generally having longer tails to contain the hemipenes and also longer pelvic spurs, which are used to grip and stimulate the female during copulation. Pelvic spurs are the only external sign of the rudimentary hind legs and pelvis and are seen in all boas and pythons. 
Coloring The coloring of boa constrictors can vary greatly depending on the locality. However, they are generally a brown, gray, or cream base color, patterned with brown or reddish-brown "saddles" that become more pronounced towards the tail. This coloring gives B. constrictor subspecies the common name of "red-tailed boas." The coloring works as a very effective camouflage in the jungles and forests of its natural range. Some individuals exhibit pigmentary disorders, such as albinism. Although these individuals are rare in the wild, they are common in captivity, where they are often selectively bred to make a variety of different color "morphs". Boa constrictors have an arrow-shaped head with very distinctive stripes on it: One runs dorsally from the snout to the back of the head; the others run from the snout to the eyes and then from the eyes to the jaw. Boa constrictors can sense heat via cells in their lips, though they lack the labial pits surrounding these receptors seen in many members of the family Boidae. Boa constrictors also have two lungs, a smaller (non-functional) left and an enlarged (functional) right lung to better fit their elongated shape, unlike many colubrid snakes, which have completely lost the left lung. Distribution and habitat Depending on the subspecies, Boa constrictor can be found through South America north of 35°S (Colombia, Ecuador, Peru, Venezuela, Trinidad and Tobago, Guyana, Suriname, French Guiana, Brazil, Bolivia, Uruguay, and Argentina), and many other islands along the coasts of South America. Introduced populations exist in Cozumel, extreme southern Florida, and St. Croix in the U.S. Virgin Islands. The type locality given is "Indiis"—a mistake, according to Peters and Orejas-Miranda (1970). B. constrictor flourishes in a wide variety of environmental conditions, from tropical rainforests to arid semidesert country. However, it prefers to live in rainforest due to the humidity and temperature, natural cover from predators, and vast amount of potential prey. It is commonly found in or along rivers and streams, as it is a very capable swimmer. Boa constrictors also occupy the burrows of medium-sized mammals, where they can hide from potential predators. Behavior Boa constrictors generally live on their own and do not interact with any other snakes unless they want to mate. They are nocturnal, but they may bask during the day when night-time temperatures are too low. As semi-arboreal snakes, young boa constrictors may climb into trees and shrubs to forage; however, they become mostly terrestrial as they become older and heavier. Boa constrictors strike when they perceive a threat. Their bite can be painful, especially from large snakes, but is rarely dangerous to humans. Specimens from Central America are more irascible, hissing loudly and striking repeatedly when disturbed, while those from South America tame down more readily. Like all snakes, boa constrictors in a shed cycle are more unpredictable, because the substance that lubricates between the old skin and the new makes their eyes appear milky, blue, or opaque so that the snake cannot see very well, causing it to be more defensive than it might otherwise be. Hunting and diet Their prey includes a wide variety of small to medium-sized mammals and birds. 
The bulk of their diet consists of rodents (such as squirrels, mice, rats and agoutis), but larger lizards (such as ameivas, iguanas and tegus) and mammals as big as monkeys, marsupials, armadillos, wild pigs and ocelots are also reported to have been consumed. Domestic animals such as dogs, cats and rabbits are frequently consumed. Young boa constrictors eat small mice, birds, bats, lizards, and amphibians. The size of the prey item increases as they get older and larger. Once a boa constrictor has caught its prey, it will wrap its coils around the animal and constrict it until it dies. The boa's powerful muscles allow it to exert a great deal of pressure, and the prey is typically killed within a few minutes. Boa constrictors are ambush predators, so they often lie in wait for appropriate prey to come along, then attack a moment before the prey can escape. However, they have also been known to actively hunt, particularly in regions with a low concentration of suitable prey, and this behavior generally occurs at night. The boa first strikes at the prey, grabbing it with its teeth; it then proceeds to constrict the prey until death before consuming it whole. Unconsciousness and death likely result from shutting off vital blood flow to the heart and brain, rather than suffocation as was previously believed; constriction can interfere with blood flow and overwhelm the prey's usual blood pressure and circulation, leading to unconsciousness and death very quickly. Their teeth also help force the animal down the throat, while muscles then move it toward the stomach. It takes the snake about 4–6 days to fully digest the food, depending on the size of the prey and the local temperature. After this, the snake may not eat for a week to several months, due to its slow metabolism. Reproduction and development Boa constrictors are viviparous, giving birth to live young. They generally breed in the dry season—between April and August—and are polygynous; thus, males may mate with multiple females. Half of all females breed in a given year, and a larger percentage of males actively attempt to locate a mate. Due to their polygynous nature, many of these males will be unsuccessful. As such, female boas in inadequate physical condition are unlikely to attempt to mate, or to produce viable young if they do mate. Reproduction in boas is almost exclusively sexual. In 2010, a boa constrictor was shown to have reproduced asexually via parthenogenesis. The Colombian rainbow boa (Epicrates maurus) was found to reproduce by facultative parthenogenesis, resulting in the production of WW female progeny. The WW females were likely produced by terminal automixis, a type of parthenogenesis in which two terminal haploid products of meiosis fuse to form a zygote, which then develops into daughter progeny. This is only the third genetically confirmed case of consecutive virgin births of viable offspring from a single female within any vertebrate lineage. In 2017, boa constrictors, along with Boa imperator and Burmese pythons, were found to contain a new set of sex-determining chromosomes. Males were discovered to have a pair of XY sex-determining chromosomes, while females have an XX pair. This was the first time male heterogamety had been identified in snakes; since then it has also been found in ball pythons (Python regius). During the breeding season, the female boa constrictor emits pheromones from her cloaca to attract males, which may then wrestle to select one to breed with her.
During breeding, the male curls his tail around the female's and the hemipenes (or male reproductive organs) are inserted. Copulation can last from a few minutes to several hours and may occur several times over a period of a few weeks. After this period, ovulation may not occur immediately, but the female can hold the sperm inside her for up to one year. When the female ovulates, a midbody swell can be noticed that appears similar to the snake having eaten a large meal. The female then sheds two to three weeks after ovulation, in what is known as a post-ovulation shed which lasts another 2–3 weeks, which is longer than a normal shed. The gestation period, which is counted from the postovulation shed, is around 100–120 days. The female then gives birth to young that average in length. The litter size varies between females but can be between 10 and 65 young, with an average of 25, although some of the young may be stillborn or unfertilized eggs known as "slugs". The young are independent at birth and grow rapidly for the first few years, shedding regularly (once every one to two months). At 3–4 years, boa constrictors become sexually mature and reach the adult size of , although they continue to grow at a slow rate for the rest of their lives. At this point, they shed less frequently, about every 2–4 months. Captivity Though still exported from South America in significant numbers, they are widely bred in captivity. Captive life expectancy is 20 to 30 years, with rare accounts of over 40 years. The greatest reliable age recorded for a boa constrictor in captivity is 40 years, 3 months, and 14 days. This boa constrictor was named Popeye and died in the Philadelphia Zoo, Pennsylvania, on April 15, 1977. Up to 41.5% of captive boas test positive for eosinophilic inclusion bodies. Economic significance Boa constrictors are very popular within the exotic pet trade and have been both captured in the wild and bred in captivity. Today, most captive boa constrictors are captive-bred, but between 1977 and 1983, 113,000 live boa constrictors were imported into the United States. These huge numbers of wild-caught snakes have put considerable pressure on some wild populations. Boa constrictors have also been hunted for their meat and skins, and are a common sight at markets within their geographic range. After the reticulated python, boa constrictors are the snake most commonly killed for snakeskin products, such as shoes, bags, and other items of clothing. In some areas, they have an important role in regulating the opossum populations, preventing the potential transmission of leishmaniasis to humans. In other areas, they are often let loose within the communities to control the rodent populations. Conservation All boa constrictors fall under CITES and are listed under CITES Appendix II, except B. c. occidentalis, which is listed in CITES Appendix I. In some regions, boa constrictor numbers have been severely hit by predation from humans and other animals and over-collection for the exotic pet and snakeskin trades. Most populations, though, are not under threat of immediate extinction; thus, they are within Appendix II rather than Appendix I. Boa constrictors may be an invasive species in Florida and Aruba.
Biology and health sciences
Snakes
Animals
30863534
https://en.wikipedia.org/wiki/Nypa%20fruticans
Nypa fruticans
Nypa fruticans, commonly known as the nipa palm (or simply nipa) or mangrove palm, is a species of palm native to the coastlines and estuarine habitats of the Indian and Pacific Oceans. It is the only palm considered adapted to the mangrove biome. The genus Nypa and the subfamily Nypoideae are monotypic taxa because this species is their only member. Description Unlike most palms, the nipa palm's trunk grows beneath the ground; only the leaves and flower stalk grow upwards above the surface. The leaves extend up to in height. The flowers are a globular inflorescence of female flowers at the tip with catkin-like red or yellow male flowers on the lower branches. The flower produces woody nuts arranged in a globular cluster up to across on a single stalk. The infructescence can weigh as much as sixty-six pounds (thirty kg). The fruit is globular, made of many seed segments; each seed has a fibrous husk covering the endosperm that allows it to float. The stalk droops as the fruits mature. When they reach that stage, the ripe seeds separate from the ball and float away on the tide, occasionally germinating while still water-borne. Fossil record While only one species of Nypa now exists, N. fruticans, with a natural distribution extending from Northern Australia through the Indonesian Archipelago and the Philippine Islands up to China, the genus Nypa once had a nearly global distribution in the Eocene (56–33.9 million years ago). Fossil mangrove palm pollen from India has been dated to 70 million years ago. Fossil fruits and seeds of Nypa have been described from the Maastrichtian and Danian sediments of the Dakhla Formation of Bir Abu Minqar, South Western Desert, Egypt. Fossilized nuts of Nypa dating to the Eocene occur in the sandbeds of Branksome, Dorset, and in London Clay on the Isle of Sheppey, Kent, England. A fossil species, N. australis, has been described from Early Eocene sediments at Macquarie Harbour on the western coast of Tasmania. Fossils of Nypa have also been recovered from throughout the New World, in North and South America, dating from at least the Maastrichtian period of the Cretaceous through the Eocene, making their last appearance in the fossil record of North and South America in the late Eocene. Assuming the habitat of extinct Nypa is similar to that of the extant species N. fruticans, the presence of Nypa fossils may indicate monsoonal or at least seasonal rainfall regimes, and likely tropical climates. The worldwide distribution of Nypa in the Eocene, especially in deposits from polar latitudes, is supporting evidence that the Eocene was a time of global warmth, prior to the formation of modern polar icecaps at the end of the Eocene.
It is considered native to China (Hainan), the Ryukyu Islands, Bangladesh, the Indian subcontinent, Sri Lanka, the Andaman and Nicobar Islands, Vietnam, Laos, the Malay Peninsula, north of Singapore, all of Borneo, Java, Maluku, the Philippines, Sulawesi, Sumatra, the Bismarck Archipelago, New Guinea, the Solomon Islands, the Caroline Islands, and Australia (Queensland and the Northern Territory). It is reportedly naturalized in Nigeria, the Society Islands of French Polynesia, the Mariana Islands, Panama, and Trinidad. Japan's Iriomote Island and its neighboring Uchibanari Island mark the northernmost limit of the distribution. Ecology Long-tailed macaques (Macaca fascicularis) are known to eat the fruits of the nipa palm. Proboscis monkeys in the Padas Damit Forest Reserve have been observed eating the inflorescences. Bornean orangutans eat nipa palm hearts and shoots. The fungal species Tirisporella beccariana has been found on the mangrove palm, as has Phomatospora nypae on palms in Malaysia. Uses The long, feathery leaves of the nipa palm are used by local populations as roof material for thatched houses or dwellings. The leaves are also used in many types of basketry and thatching. Because they are buoyant, large stems are used to train swimmers in Burma. On the islands of Roti and Savu, nipa palm sap is fed to pigs during the dry season. This is said to impart a sweet flavour to the meat. The young leaves are dried, bleached and cut to wrap tobacco for smoking; this practice is also found in Sumatra. In Cambodia, this palm is called cha:k; its leaves are used to cover roofs. Roof thatching with the leaves occurs in many places in Papua New Guinea. In some coastal areas, the rachis is used for walls in houses, and the leaflets are used for ornaments. The epidermises of the leaves are used as cigarette papers. Food and beverages The young flower stalk and hard seeds are edible and provide hydration. In the Philippines and Malaysia, the inflorescence can be "tapped" to yield a sweet, edible sap that is collected to produce a local alcoholic beverage called tuba, bahal, or tuak. A fruit cluster is ready to be tapped when the unripe fruits are at their peak sweetness. The cluster is cut from the stalk about six inches down, and mud is rubbed on the stalk to induce sap flow. Sap begins flowing immediately if the fruit maturity was correctly gauged. A bamboo tube or a bottle is fitted over the cut stalk and the sap is collected twice daily, cutting a half-centimeter slice off the end of the stalk after each collection to prevent it from gumming over. Sap flow will continue for 30 days per stalk, and the nipa flowers continuously throughout the year, providing a continuous supply of sap. Tuba can be stored in tapayan (earthenware balloon vases) for several weeks to make a kind of vinegar known as sukang paombong in the Philippines and cuka nipah in Malaysia. Tuba can also be distilled to make arrack, locally known as lambanog in Filipino and arak or arak nipah in Indonesian. Young shoots are also edible; the flower petals can be infused to make an aromatic tisane. Attap chee (chee meaning "seed" in several Chinese dialects) is a name for the immature fruits—sweet, translucent, gelatinous balls that are a byproduct of the sap-harvesting process and are used as a dessert ingredient in Thailand, Malaysia, the Philippines, and Singapore. In Indonesia, especially in Java and Bali, the sap can be used to make a variant of jaggery called gula nipah. In Sarawak, it is called gula apong.
In Thailand, the leaves are used in desserts. In Cambodia, its leaves are used for wrapping cakes (such as num katâm), and the flowers are sometimes used to make sugar, vinegar, and alcohol. Biofuel The nipa palm produces a very high yield of sugar-rich sap. Fermented into ethanol or butanol, the sap may allow the production of 6,480–20,000 liters of fuel per hectare per year. By contrast, sugarcane yields roughly 5,200 liters of ethanol per hectare per year, and an equivalent area planted in corn (maize) would produce only roughly 4,000 liters per hectare per year, before accounting for the energy costs of cultivation and alcohol extraction. Unlike corn and sugarcane, nipa palm sap requires little, if any, fossil fuel energy to produce from an established grove, does not require arable land, and can make use of brackish water instead of freshwater resources. Also unlike most energy crops, the nipa palm does not detract from food production to make fuel. In fact, since nipa fruit is an inevitable byproduct of sap production, it produces both food and fuel simultaneously.
Biology and health sciences
Arecales (inc. Palms)
Plants
28030850
https://en.wikipedia.org/wiki/Communication%20protocol
Communication protocol
A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any variation of a physical quantity. The protocol defines the rules, syntax, semantics, and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both. Communicating systems use well-defined formats for exchanging various messages. Each message has an exact meaning intended to elicit a response from a range of possible responses predetermined for that particular situation. The specified behavior is typically independent of how it is to be implemented. Communication protocols have to be agreed upon by the parties involved. To reach an agreement, a protocol may be developed into a technical standard. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communication what programming languages are to computations. An alternate formulation states that protocols are to communication what algorithms are to computation. Multiple protocols often describe different aspects of a single communication. A group of protocols designed to work together is known as a protocol suite; when implemented in software they are a protocol stack. Internet communication protocols are published by the Internet Engineering Task Force (IETF). The IEEE (Institute of Electrical and Electronics Engineers) handles wired and wireless networking, and the International Organization for Standardization (ISO) handles other types. The ITU-T handles telecommunications protocols and formats for the public switched telephone network (PSTN). As the PSTN and Internet converge, the standards are also being driven towards convergence. Communicating systems History The first use of the term protocol in a modern data-communication context occurs in April 1967 in a memorandum entitled A Protocol for Use in the NPL Data Communications Network. Under the direction of Donald Davies, who pioneered packet switching at the National Physical Laboratory in the United Kingdom, it was written by Roger Scantlebury and Keith Bartlett for the NPL network. On the ARPANET, the starting point for host-to-host communication in 1969 was the 1822 protocol, written by Bob Kahn, which defined the transmission of messages to an IMP. The Network Control Program (NCP) for the ARPANET, developed by Steve Crocker and other graduate students including Jon Postel and Vint Cerf, was first implemented in 1970. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the protocol layering concept. The CYCLADES network, designed by Louis Pouzin in the early 1970s, was the first to implement the end-to-end principle and make the hosts responsible for the reliable delivery of data on a packet-switched network, rather than this being a service of the network itself. His team was the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit service while using a best-effort service, an early contribution to what would become the Transmission Control Protocol (TCP). Bob Metcalfe and others at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking. Research in the early 1970s by Bob Kahn and Vint Cerf led to the formulation of the Transmission Control Program (TCP).
Its specification was written by Cerf with Yogen Dalal and Carl Sunshine in December 1974, still a monolithic design at this time. The International Network Working Group agreed on a connectionless datagram standard which was presented to the CCITT in 1975 but was not adopted by the CCITT nor by the ARPANET. Separate international research, particularly the work of Rémi Després, contributed to the development of the X.25 standard, based on virtual circuits, which was adopted by the CCITT in 1976. Computer manufacturers developed proprietary protocols such as IBM's Systems Network Architecture (SNA), Digital Equipment Corporation's DECnet and Xerox Network Systems. TCP software was redesigned as a modular protocol stack, referred to as TCP/IP. This was installed on SATNET in 1982 and on the ARPANET in January 1983. The development of a complete Internet protocol suite by 1989, as outlined in and , laid the foundation for the growth of TCP/IP as a comprehensive protocol suite as the core component of the emerging Internet. International work on a reference model for communication standards led to the OSI model, published in 1984. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. Concept The information exchanged between devices through a network or other media is governed by rules and conventions that can be set out in communication protocol specifications. The nature of communication, the actual data exchanged and any state-dependent behaviors, is defined by these specifications. In digital computing systems, the rules can be expressed by algorithms and data structures. Protocols are to communication what algorithms or programming languages are to computations. Operating systems usually contain a set of cooperating processes that manipulate shared data to communicate with each other. This communication is governed by well-understood protocols, which can be embedded in the process code itself. In contrast, because there is no shared memory, communicating systems have to communicate with each other using a shared transmission medium. Transmission is not necessarily reliable, and individual systems may use different hardware or operating systems. To implement a networking protocol, the protocol software modules are interfaced with a framework implemented on the machine's operating system. This framework implements the networking functionality of the operating system. When protocol algorithms are expressed in a portable programming language the protocol software may be made operating system independent. The best-known frameworks are the TCP/IP model and the OSI model. At the time the Internet was developed, abstraction layering had proven to be a successful design approach for both compiler and operating system design and, given the similarities between programming languages and communication protocols, the originally monolithic networking programs were decomposed into cooperating protocols. This gave rise to the concept of layered protocols which nowadays forms the basis of protocol design. Systems typically do not use a single protocol to handle a transmission. Instead they use a set of cooperating protocols, sometimes called a protocol suite. Some of the best-known protocol suites are TCP/IP, IPX/SPX, X.25, AX.25 and AppleTalk. 
The protocols can be arranged into groups based on functionality; for instance, there is a group of transport protocols. The functionalities are mapped onto the layers, each layer solving a distinct class of problems relating to, for instance: application-, transport-, internet- and network interface-functions. To transmit a message, a protocol has to be selected from each layer. The selection of the next protocol is accomplished by extending the message with a protocol selector for each layer. Types There are two types of communication protocols, based on their representation of the content being carried: text-based and binary. Text-based A text-based protocol or plain text protocol represents its content in human-readable format, often in plain text encoded in a machine-readable encoding such as ASCII or UTF-8, or in structured text-based formats such as Intel hex format, XML or JSON. The immediate human readability stands in contrast to native binary protocols, which have inherent benefits for use in a computer environment (such as ease of mechanical parsing and improved bandwidth utilization). Network applications have various methods of encapsulating data. One method very common with Internet protocols is a text-oriented representation that transmits requests and responses as lines of ASCII text, terminated by a newline character (and usually a carriage return character). Examples of protocols that use plain, human-readable text for their commands are FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), early versions of HTTP (Hypertext Transfer Protocol), and the finger protocol. Text-based protocols are typically optimized for human parsing and interpretation and are therefore suitable whenever human inspection of protocol contents is required, such as during debugging and during early protocol development design phases. Binary A binary protocol utilizes all values of a byte, as opposed to a text-based protocol, which only uses values corresponding to human-readable characters in ASCII encoding. Binary protocols are intended to be read by a machine rather than a human being. Binary protocols have the advantage of terseness, which translates into speed of transmission and interpretation. Binary protocols have been used in the normative documents describing modern standards like EbXML, HTTP/2, HTTP/3 and EDOC. An interface in UML may also be considered a binary protocol. Basic requirements Getting the data across a network is only part of the problem for a protocol. The data received has to be evaluated in the context of the progress of the conversation, so a protocol must include rules describing the context. These kinds of rules are said to express the syntax of the communication. Other rules determine whether the data is meaningful for the context in which the exchange takes place. These kinds of rules are said to express the semantics of the communication. Messages are sent and received on communicating systems to establish communication. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed: Data formats for data exchange Digital message bitstrings are exchanged. The bitstrings are divided into fields, and each field carries information relevant to the protocol. Conceptually the bitstring is divided into two parts called the header and the payload. The actual message is carried in the payload. The header area contains the fields with relevance to the operation of the protocol.
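The distinction between text-based and binary representations, and the division of a message into header and payload, can be made concrete with a short sketch. The following Python fragment is illustrative only and is not taken from any particular protocol specification: it encodes the same hypothetical request once as a newline-terminated ASCII line and once as a packed binary header, using the standard struct module. The field layout (a one-byte version, a one-byte command code and a two-byte payload length) is an assumption made for the example.

    import struct

    # Text-based form: a human-readable command line terminated by CR+LF,
    # in the spirit of line-oriented protocols such as SMTP or early HTTP.
    text_message = "GET /status VERSION/1.0\r\n".encode("ascii")

    # Binary form: the same request packed into a fixed header plus payload.
    # Assumed layout: 1-byte version, 1-byte command code, 2-byte big-endian
    # payload length, then the payload bytes.
    CMD_GET = 0x01
    payload = b"/status"
    binary_message = struct.pack("!BBH", 1, CMD_GET, len(payload)) + payload

    # A receiver parses each form differently: the text form is split on
    # whitespace after stripping the line terminator, while the binary form
    # is unpacked field by field according to the agreed layout.
    method, path, version = text_message.decode("ascii").rstrip("\r\n").split()
    ver_field, command, length = struct.unpack("!BBH", binary_message[:4])
    assert binary_message[4:4 + length] == payload

The text form can be read and typed by a person, while the binary form is more compact and simpler for a machine to parse unambiguously, which reflects the trade-off described above.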
Bitstrings longer than the maximum transmission unit (MTU) are divided into pieces of appropriate size. Address formats for data exchange Addresses are used to identify both the sender and the intended receiver(s). The addresses are carried in the header area of the bitstrings, allowing the receivers to determine whether the bitstrings are of interest and should be processed or should be ignored. A connection between a sender and a receiver can be identified using an address pair (sender address, receiver address). Usually, some address values have special meanings. An all-1s address could be taken to mean the addressing of all stations on the network, so sending to this address would result in a broadcast on the local network. The rules describing the meanings of the address value are collectively called an addressing scheme. Address mapping Sometimes protocols need to map addresses of one scheme onto addresses of another scheme, for instance, to translate a logical IP address specified by the application to an Ethernet MAC address. This is referred to as address mapping. Routing When systems are not directly connected, intermediary systems along the route to the intended receiver(s) need to forward messages on behalf of the sender. On the Internet, the networks are connected using routers. The interconnection of networks through routers is called internetworking. Detection of transmission errors Error detection is necessary on networks where data corruption is possible. In a common approach, a CRC of the data area is added to the end of packets, making it possible for the receiver to detect differences caused by corruption. The receiver rejects packets with CRC differences and arranges somehow for retransmission. Acknowledgements Acknowledgement of correct reception of packets is required for connection-oriented communication. Acknowledgments are sent from receivers back to their respective senders. Loss of information - timeouts and retries Packets may be lost on the network or be delayed in transit. To cope with this, under some protocols, a sender may expect an acknowledgment of correct reception from the receiver within a certain amount of time. Thus, on timeouts, the sender may need to retransmit the information. In case of a permanently broken link, the retransmission has no effect, so the number of retransmissions is limited. Exceeding the retry limit is considered an error. Direction of information flow Direction needs to be addressed if transmissions can only occur in one direction at a time, as on half-duplex links, or from one sender at a time, as on a shared medium. This is known as media access control. Arrangements have to be made to accommodate the case of collision or contention, where two parties simultaneously transmit or wish to transmit. Sequence control If long bitstrings are divided into pieces and then sent on the network individually, the pieces may get lost or delayed or, on some types of networks, take different routes to their destination. As a result, pieces may arrive out of sequence. Retransmissions can result in duplicate pieces. By marking the pieces with sequence information at the sender, the receiver can determine what was lost or duplicated, ask for necessary retransmissions and reassemble the original message. Flow control Flow control is needed when the sender transmits faster than the receiver or intermediate network equipment can process the transmissions. Flow control can be implemented by messaging from receiver to sender.
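Several of the requirements above (error detection with a CRC, sequence control, and reassembly) can be sketched together in a few lines. The following Python fragment is a simplified illustration rather than a real protocol implementation: the frame layout of a 4-byte sequence number and a 4-byte CRC-32 checksum ahead of the payload is an assumption made for the example.

    import struct
    import zlib

    def make_frame(seq, payload):
        # Assumed frame layout: 4-byte sequence number, 4-byte CRC-32 of the
        # payload, then the payload itself.
        return struct.pack("!II", seq, zlib.crc32(payload)) + payload

    def parse_frame(frame):
        # Returns (seq, payload), or None if the checksum does not match,
        # in which case a real protocol would arrange for retransmission.
        seq, crc = struct.unpack("!II", frame[:8])
        payload = frame[8:]
        return (seq, payload) if zlib.crc32(payload) == crc else None

    # A message split into pieces, delivered out of order, and reassembled
    # by sequence number at the receiver.
    pieces = [b"Hello, ", b"communication ", b"protocols!"]
    frames = [make_frame(i, p) for i, p in enumerate(pieces)]
    received = [frames[2], frames[0], frames[1]]

    parsed = [p for p in (parse_frame(f) for f in received) if p is not None]
    message = b"".join(payload for _, payload in sorted(parsed))
    assert message == b"Hello, communication protocols!"

A real transport protocol adds much more (acknowledgements, timers, retransmission and flow-control state), but the frame-and-verify pattern above is the common core.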
Queueing Communicating processes or state machines employ queues (or "buffers"), usually FIFO queues, to deal with the messages in the order sent, and may sometimes have multiple queues with different prioritization. Protocol design Systems engineering principles have been applied to create a set of common network protocol design principles. The design of complex protocols often involves decomposition into simpler, cooperating protocols. Such a set of cooperating protocols is sometimes called a protocol family or a protocol suite, within a conceptual framework. Communicating systems operate concurrently. An important aspect of concurrent programming is the synchronization of software for receiving and transmitting messages of communication in proper sequencing. Concurrent programming has traditionally been a topic in operating systems theory texts. Formal verification seems indispensable because concurrent programs are notorious for the hidden and sophisticated bugs they contain. A mathematical approach to the study of concurrency and communication is referred to as communicating sequential processes (CSP). Concurrency can also be modeled using finite-state machines, such as Mealy and Moore machines. Mealy and Moore machines are in use as design tools in digital electronics systems encountered in the form of hardware used in telecommunication or electronic devices in general. The literature presents numerous analogies between computer communication and programming. In analogy, a transfer mechanism of a protocol is comparable to a central processing unit (CPU). The framework introduces rules that allow the programmer to design cooperating protocols independently of one another. Layering In modern protocol design, protocols are layered to form a protocol stack. Layering is a design principle that divides the protocol design task into smaller steps, each of which accomplishes a specific part, interacting with the other parts of the protocol only in a small number of well-defined ways. Layering allows the parts of a protocol to be designed and tested without a combinatorial explosion of cases, keeping each design relatively simple. The communication protocols in use on the Internet are designed to function in diverse and complex settings. Internet protocols are designed for simplicity and modularity and fit into a coarse hierarchy of functional layers defined in the Internet Protocol Suite. The first two cooperating protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP) resulted from the decomposition of the original Transmission Control Program, a monolithic communication protocol, into this layered communication suite. The OSI model was developed internationally based on experience with networks that predated the internet as a reference model for general communication with much stricter rules of protocol interaction and rigorous layering. Typically, application software is built upon a robust data transport layer. Underlying this transport layer is a datagram delivery and routing mechanism that is typically connectionless in the Internet. Packet relaying across networks happens over another layer that involves only network link technologies, which are often specific to certain physical layer technologies, such as Ethernet. Layering provides opportunities to exchange technologies when needed, for example, protocols are often stacked in a tunneling arrangement to accommodate the connection of dissimilar networks. 
For example, IP may be tunneled across an Asynchronous Transfer Mode (ATM) network. Protocol layering Protocol layering forms the basis of protocol design. It allows the decomposition of single, complex protocols into simpler, cooperating protocols. The protocol layers each solve a distinct class of communication problems. Together, the layers make up a layering scheme or model. Computations deal with algorithms and data; communication involves protocols and messages; so the analog of a data flow diagram is some kind of message flow diagram. To visualize protocol layering and protocol suites, a diagram of the message flows in and between two systems, A and B, is shown in figure 3. The systems, A and B, both make use of the same protocol suite. The vertical flows (and protocols) are in-system and the horizontal message flows (and protocols) are between systems. The message flows are governed by rules, and data formats specified by protocols. The blue lines mark the boundaries of the (horizontal) protocol layers. Software layering The software supporting protocols has a layered organization and its relationship with protocol layering is shown in figure 5. To send a message on system A, the top-layer software module interacts with the module directly below it and hands over the message to be encapsulated. The lower module fills in the header data in accordance with the protocol it implements and interacts with the bottom module, which sends the message over the communications channel to the bottom module of system B. On the receiving system B, the reverse happens, so ultimately the message gets delivered in its original form to the top module of system B. Program translation is divided into subproblems. As a result, the translation software is layered as well, allowing the software layers to be designed independently. The same approach can be seen in the TCP/IP layering. The modules below the application layer are generally considered part of the operating system. Passing data between these modules is much less expensive than passing data between an application program and the transport layer. The boundary between the application layer and the transport layer is called the operating system boundary. Strict layering Strictly adhering to a layered model, a practice known as strict layering, is not always the best approach to networking. Strict layering can have a negative impact on the performance of an implementation. Although the use of protocol layering is today ubiquitous across the field of computer networking, it has been historically criticized by many researchers because abstracting the protocol stack in this way may cause a higher layer to duplicate the functionality of a lower layer, a prime example being error recovery on both a per-link basis and an end-to-end basis. Design patterns Commonly recurring problems in the design and implementation of communication protocols can be addressed by software design patterns. Formal specification Popular formal methods of describing communication syntax are Abstract Syntax Notation One (an ISO standard) and augmented Backus–Naur form (an IETF standard). Finite-state machine models and communicating finite-state machines are used to formally describe the possible interactions of the protocol. Protocol development For communication to occur, protocols have to be selected. The rules can be expressed by algorithms and data structures. Hardware and operating system independence is enhanced by expressing the algorithms in a portable programming language.
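As an informal illustration of both points, the sketch below expresses the encapsulation described under software layering in a portable language. The two layers, their names and their header formats are invented for the example and do not correspond to any real protocol stack.

    # A minimal sketch of layered encapsulation: each layer prepends its own
    # header on the way down and removes it on the way up. The "transport" and
    # "network" layers here are toy stand-ins, not implementations of TCP or IP.

    def transport_send(data: bytes, port: int) -> bytes:
        return b"T" + port.to_bytes(2, "big") + data        # toy transport header

    def network_send(segment: bytes, address: int) -> bytes:
        return b"N" + address.to_bytes(4, "big") + segment  # toy network header

    def network_receive(packet: bytes) -> tuple[int, bytes]:
        assert packet[:1] == b"N"
        return int.from_bytes(packet[1:5], "big"), packet[5:]

    def transport_receive(segment: bytes) -> tuple[int, bytes]:
        assert segment[:1] == b"T"
        return int.from_bytes(segment[1:3], "big"), segment[3:]

    # Sending side: the application message passes down the stack.
    packet = network_send(transport_send(b"hello", port=80), address=0x0A000001)

    # Receiving side: headers are removed in the opposite order.
    address, segment = network_receive(packet)
    port, delivered = transport_receive(segment)
    assert (address, port, delivered) == (0x0A000001, 80, b"hello")

The point of the exercise is that each layer inspects only its own header and treats everything above it as opaque payload, which is what allows the layers to be designed and replaced independently.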
Source independence of the specification provides wider interoperability. Protocol standards are commonly created by obtaining the approval or support of a standards organization, which initiates the standardization process. The members of the standards organization agree to adhere to the work result on a voluntary basis. Often the members are in control of large market shares relevant to the protocol, and in many cases, standards are enforced by law or the government because they are thought to serve an important public interest, so getting approval can be very important for the protocol. The need for protocol standards The need for protocol standards can be shown by looking at what happened to the Binary Synchronous Communications (BSC) protocol invented by IBM. BSC is an early link-level protocol used to connect two separate nodes. It was originally not intended to be used in a multinode network, but doing so revealed several deficiencies of the protocol. In the absence of standardization, manufacturers and organizations felt free to enhance the protocol, creating incompatible versions on their networks. In some cases, this was deliberately done to discourage users from using equipment from other manufacturers. There are more than 50 variants of the original bi-sync protocol. One can assume that a standard would have prevented at least some of this from happening. In some cases, protocols gain market dominance without going through a standardization process. Such protocols are referred to as de facto standards. De facto standards are common in emerging markets, niche markets, or markets that are monopolized (or oligopolized). They can hold a market in a very negative grip, especially when used to scare away competition. From a historical perspective, standardization should be seen as a measure to counteract the ill-effects of de facto standards. Positive exceptions exist; a de facto standard operating system like Linux does not have this negative grip on its market, because the sources are published and maintained in an open way, thus inviting competition. Standards organizations Some of the standards organizations of relevance for communication protocols are the International Organization for Standardization (ISO), the International Telecommunication Union (ITU), the Institute of Electrical and Electronics Engineers (IEEE), and the Internet Engineering Task Force (IETF). The IETF maintains the protocols in use on the Internet. The IEEE controls many software and hardware protocols in the electronics industry for commercial and consumer devices. The ITU is an umbrella organization of telecommunication engineers designing the public switched telephone network (PSTN), as well as many radio communication systems. For marine electronics, the NMEA standards are used. The World Wide Web Consortium (W3C) produces protocols and standards for Web technologies. International standards organizations are supposed to be more impartial than local organizations with a national or commercial self-interest to consider. Standards organizations also do research and development for standards of the future. In practice, the standards organizations mentioned cooperate closely with each other. Multiple standards bodies may be involved in the development of a protocol.
If they are uncoordinated, then the result may be multiple, incompatible definitions of a protocol, or multiple, incompatible interpretations of messages; important invariants in one definition (e.g., that time-to-live values are monotone decreasing to prevent stable routing loops) may not be respected in another. The standardization process In the ISO, the standardization process starts off with the commissioning of a sub-committee workgroup. The workgroup issues working drafts and discussion documents to interested parties (including other standards bodies) in order to provoke discussion and comments. This will generate a lot of questions, much discussion and usually some disagreement. These comments are taken into account and a draft proposal is produced by the working group. After feedback, modification, and compromise the proposal reaches the status of a draft international standard, and ultimately an international standard. International standards are reissued periodically to handle the deficiencies and reflect changing views on the subject. OSI standardization A lesson learned from ARPANET, the predecessor of the Internet, was that protocols need a framework to operate. It is therefore important to develop a general-purpose, future-proof framework suitable for structured protocols (such as layered protocols) and their standardization. This would prevent protocol standards with overlapping functionality and would allow clear definition of the responsibilities of a protocol at the different levels (layers). This gave rise to the Open Systems Interconnection model (OSI model), which is used as a framework for the design of standard protocols and services conforming to the various layer specifications. In the OSI model, communicating systems are assumed to be connected by an underlying physical medium providing a basic transmission mechanism. The layers above it are numbered. Each layer provides service to the layer above it using the services of the layer immediately below it. The top layer provides services to the application process. The layers communicate with each other by means of an interface, called a service access point. Corresponding layers at each system are called peer entities. To communicate, two peer entities at a given layer use a protocol specific to that layer which is implemented by using services of the layer below. For each layer, there are two types of standards: protocol standards defining how peer entities at a given layer communicate, and service standards defining how a given layer communicates with the layer above it. In the OSI model, the layers and their functionality are (from highest to lowest layer): The Application layer may provide the following services to the application processes: identification of the intended communication partners, establishment of the necessary authority to communicate, determination of availability and authentication of the partners, agreement on privacy mechanisms for the communication, agreement on responsibility for error recovery and procedures for ensuring data integrity, synchronization between cooperating application processes, identification of any constraints on syntax (e.g. character sets and data structures), determination of cost and acceptable quality of service, selection of the dialogue discipline, including required logon and logoff procedures. 
The presentation layer may provide the following services to the application layer: a request for the establishment of a session, data transfer, negotiation of the syntax to be used between the application layers, any necessary syntax transformations, formatting and special purpose transformations (e.g., data compression and data encryption). The session layer may provide the following services to the presentation layer: establishment and release of session connections, normal and expedited data exchange, a quarantine service which allows the sending presentation entity to instruct the receiving session entity not to release data to its presentation entity without permission, interaction management so presentation entities can control whose turn it is to perform certain control functions, resynchronization of a session connection, reporting of unrecoverable exceptions to the presentation entity. The transport layer provides reliable and transparent data transfer in a cost-effective way as required by the selected quality of service. It may support the multiplexing of several transport connections onto one network connection or split one transport connection into several network connections. The network layer does the setup, maintenance and release of network paths between transport peer entities. When relays are needed, routing and relay functions are provided by this layer. The quality of service is negotiated between network and transport entities at the time the connection is set up. This layer is also responsible for network congestion control. The data link layer does the setup, maintenance and release of data link connections. Errors occurring in the physical layer are detected and may be corrected. Errors are reported to the network layer. The exchange of data link units (including flow control) is defined by this layer. The physical layer describes details like the electrical characteristics of the physical connection, the transmission techniques used, and the setup, maintenance and clearing of physical connections. In contrast to the TCP/IP layering scheme, which assumes a connectionless network, RM/OSI assumed a connection-oriented network. Connection-oriented networks are more suitable for wide area networks and connectionless networks are more suitable for local area networks. Connection-oriented communication requires some form of session and (virtual) circuits, hence the session layer, which is absent from the TCP/IP model. The constituent members of ISO were mostly concerned with wide area networks, so the development of RM/OSI concentrated on connection-oriented networks; connectionless networks were first mentioned in an addendum to RM/OSI and later incorporated into an update to RM/OSI. At the time, the IETF had to cope with this and the fact that the Internet needed protocols that simply were not there. As a result, the IETF developed its own standardization process based on "rough consensus and running code", described by RFC 2026. Nowadays, the IETF has become a standards organization for the protocols in use on the Internet. RM/OSI has extended its model to include connectionless services and, because of this, both TCP and IP could be developed into international standards. Wire image The wire image of a protocol is the information that a non-participant observer is able to glean from observing the protocol messages, including both information explicitly given meaning by the protocol and inferences made by the observer. 
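As a rough illustration of the idea, the following Python sketch shows the kind of per-packet record a passive on-path observer could assemble, even when payloads are encrypted; the Packet structure and its field names are invented for the example and do not correspond to any particular protocol.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Packet:
        timestamp: float          # arrival time as seen by the observer
        src: str                  # source address, visible on the wire
        dst: str                  # destination address, visible on the wire
        length: int               # total size in bytes, visible even if the payload is encrypted
        cleartext_header: dict    # any protocol metadata sent unencrypted
        payload: Optional[bytes]  # encrypted payload; opaque to the observer

    def observe(packets):
        """Build the view available to a non-participant observer.

        Explicit cleartext fields plus side-channel information such as
        sizes and inter-packet timing together make up the wire image.
        """
        image, prev = [], None
        for p in packets:
            image.append({
                "src": p.src,
                "dst": p.dst,
                "size": p.length,
                "gap": None if prev is None else p.timestamp - prev,  # timing side-channel
                "visible_metadata": dict(p.cleartext_header),
            })
            prev = p.timestamp
        return image

Nothing in this sketch requires decrypting the payload, which is the point: addresses, sizes, timing and any unencrypted metadata are already enough for an observer to infer a good deal about the communication.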
Unencrypted protocol metadata is one source making up the wire image, and side-channels including packet timing also contribute. Different observers with different vantages may see different wire images. The wire image is relevant to end-user privacy and the extensibility of the protocol. If some portion of the wire image is not cryptographically authenticated, it is subject to modification by intermediate parties (i.e., middleboxes), which can influence protocol operation. Even if authenticated, if a portion is not encrypted, it will form part of the wire image, and intermediate parties may intervene depending on its content (e.g., dropping packets with particular flags). Signals deliberately intended for intermediary consumption may be left authenticated but unencrypted. The wire image can be deliberately engineered, encrypting parts that intermediaries should not be able to observe and providing signals for what they should be able to. If provided signals are decoupled from the protocol's operation, they may become untrustworthy. Benign network management and research are affected by metadata encryption; protocol designers must balance observability for operability and research against ossification resistance and end-user privacy. The IETF announced in 2014 that it had determined that large-scale surveillance of protocol operations is an attack due to the ability to infer information from the wire image about users and their behaviour, and that the IETF would "work to mitigate pervasive monitoring" in its protocol designs; this had not been done systematically previously. The Internet Architecture Board recommended in 2023 that disclosure of information by a protocol to the network should be intentional, performed with the agreement of both recipient and sender, authenticated to the degree possible and necessary, only acted upon to the degree of its trustworthiness, and minimised and provided to a minimum number of entities. Engineering the wire image and controlling what signals are provided to network elements was a "developing field" in 2023, according to the IAB. Ossification Protocol ossification is the loss of flexibility, extensibility and evolvability of network protocols. This is largely due to middleboxes that are sensitive to the wire image of the protocol, and which can interrupt or interfere with messages that are valid but which the middlebox does not correctly recognize. This is a violation of the end-to-end principle. Secondary causes include inflexibility in endpoint implementations of protocols. Ossification is a major issue in Internet protocol design and deployment, as it can prevent new protocols or extensions from being deployed on the Internet, or place strictures on the design of new protocols; new protocols may have to be encapsulated in an already-deployed protocol or mimic the wire image of another protocol. Because of ossification, the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are the only practical choices for transport protocols on the Internet, and TCP itself has significantly ossified, making extension or modification of the protocol difficult. Recommended methods of preventing ossification include encrypting protocol metadata, and ensuring that extension points are exercised and wire image variability is exhibited as fully as possible; remedying existing ossification requires coordination across protocol participants. QUIC is the first IETF transport protocol to have been designed with deliberate anti-ossification properties. 
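One widely used way of keeping extension points exercised is "greasing", in which senders routinely advertise reserved, meaningless values so that receivers and middleboxes never come to assume a fixed, closed set; TLS and QUIC take this approach. The Python sketch below only illustrates the pattern; the identifiers and the function are hypothetical and not taken from any particular protocol.

    import random

    # Hypothetical identifiers reserved for greasing; peers must ignore
    # extension IDs they do not recognize.
    GREASE_IDS = (0x0a0a, 0x1a1a, 0x2a2a, 0x3a3a)

    def advertised_extensions(real_extensions):
        """Return the extension IDs to advertise in a handshake.

        A random reserved ID is mixed in at a random position, so the
        "unknown extension" code path on the other side stays exercised
        and cannot quietly ossify into rejecting anything unfamiliar.
        """
        exts = list(real_extensions)
        exts.insert(random.randrange(len(exts) + 1), random.choice(GREASE_IDS))
        return exts

    # A sender supporting extensions 1, 5 and 16 might advertise
    # [1, 0x2a2a, 5, 16] on one connection and [1, 5, 16, 0x1a1a] on another.
    print(advertised_extensions([1, 5, 16]))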
Taxonomies Classification schemes for protocols usually focus on the domain of use and function. As an example of domain of use, connection-oriented protocols and connectionless protocols are used on connection-oriented networks and connectionless networks respectively. An example of function is a tunneling protocol, which is used to encapsulate packets in a high-level protocol so that the packets can be passed across a transport system using the high-level protocol. A layering scheme combines both function and domain of use. The dominant layering schemes are the ones developed by the IETF and by ISO. Despite the fact that the underlying assumptions of the layering schemes are different enough to warrant distinguishing the two, it is a common practice to compare the two by relating common protocols to the layers of the two schemes. The layering scheme from the IETF is called Internet layering or TCP/IP layering. The layering scheme from ISO is called the OSI model or ISO layering. In networking equipment configuration, a term-of-art distinction is often drawn: The term protocol strictly refers to the transport layer, and the term service refers to protocols utilizing a protocol for transport. In the common case of TCP and UDP, services are distinguished by port numbers. Conformance to these port numbers is voluntary, so in content inspection systems the term service strictly refers to port numbers, and the term application is often used to refer to protocols identified through inspection signatures.
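The port-number convention can be seen directly with Python's standard library; the mappings below reflect the well-known defaults recorded in the local services database, the exact output may vary by system, and nothing obliges an application to actually run on these ports.

    import socket

    # Well-known ports conventionally identify services carried over a
    # transport protocol such as TCP; conformance is voluntary.
    for port in (80, 443, 53):
        try:
            service = socket.getservbyport(port, "tcp")
        except OSError:
            service = "unknown"
        print(port, service)   # typically: 80 http, 443 https, 53 domain

    # The reverse lookup gives the conventional port for a named service.
    print(socket.getservbyname("http", "tcp"))   # 80 on most systems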
Technology
Internet
null
21780446
https://en.wikipedia.org/wiki/Species
Species
A species (pl.: species) is a population of organisms in which any two individuals of the appropriate sexes or mating types can produce fertile offspring, typically by sexual reproduction. It is the basic unit of classification and a taxonomic rank of an organism, as well as a unit of biodiversity. Other ways of defining species include their karyotype, DNA sequence, morphology, behaviour, or ecological niche. In addition, palaeontologists use the concept of the chronospecies since fossil reproduction cannot be examined. The most recent rigorous estimate for the total number of species of eukaryotes is between 8 and 8.7 million. About 14% of these had been described by 2011. All species (except viruses) are given a two-part name, called a "binomial". The first part of a binomial is the genus to which the species belongs. The second part is called the specific name or the specific epithet (in botanical nomenclature, also sometimes in zoological nomenclature). For example, Boa constrictor is one of the species of the genus Boa, with constrictor being the species' epithet. While the definitions given above may seem adequate at first glance, when looked at more closely they represent problematic species concepts. For example, the boundaries between closely related species become unclear with hybridisation, in a species complex of hundreds of similar microspecies, and in a ring species. Also, among organisms that reproduce only asexually, the concept of a reproductive species breaks down, and each clone is potentially a microspecies. Although none of these are entirely satisfactory definitions, and while the concept of species may not be a perfect model of life, it is still a useful tool to scientists and conservationists for studying life on Earth, regardless of the theoretical difficulties. If species were fixed and distinct from one another, there would be no problem, but evolutionary processes cause species to change. This obliges taxonomists to decide, for example, when enough change has occurred to declare that a lineage should be divided into multiple chronospecies, or when populations have diverged to have enough distinct character states to be described as cladistic species. Species and higher taxa were seen from the time of Aristotle until the 18th century as categories that could be arranged in a hierarchy, the great chain of being. In the 19th century, biologists grasped that species could evolve given sufficient time. Charles Darwin's 1859 book On the Origin of Species explained how species could arise by natural selection. That understanding was greatly extended in the 20th century through genetics and population ecology. Genetic variability arises from mutations and recombination, while organisms themselves are mobile, leading to geographical isolation and genetic drift with varying selection pressures. Genes can sometimes be exchanged between species by horizontal gene transfer; new species can arise rapidly through hybridisation and polyploidy; and species may become extinct for a variety of reasons. Viruses are a special case, driven by a balance of mutation and selection, and can be treated as quasispecies. Definition Biologists and taxonomists have made many attempts to define species, beginning from morphology and moving towards genetics. Early taxonomists such as Linnaeus had no option but to describe what they saw: this was later formalised as the typological or morphological species concept. 
Ernst Mayr emphasised reproductive isolation, but this, like other species concepts, is hard or even impossible to test. Later biologists have tried to refine Mayr's definition with the recognition and cohesion concepts, among others. Many of the concepts are quite similar or overlap, so they are not easy to count: the biologist R. L. Mayden recorded about 24 concepts, and the philosopher of science John Wilkins counted 26. Wilkins further grouped the species concepts into seven basic kinds of concepts: (1) agamospecies for asexual organisms (2) biospecies for reproductively isolated sexual organisms (3) ecospecies based on ecological niches (4) evolutionary species based on lineage (5) genetic species based on gene pool (6) morphospecies based on form or phenotype and (7) taxonomic species, a species as determined by a taxonomist. Typological or morphological species A typological species is a group of organisms in which individuals conform to certain fixed properties (a type, which may be defined by a chosen 'nominal species'), so that even pre-literate people often recognise the same taxon as do modern taxonomists. The clusters of variations or phenotypes within specimens (such as longer or shorter tails) would differentiate the species. This method was used as a "classical" method of determining species, such as with Linnaeus, early in evolutionary theory. However, different phenotypes are not necessarily different species (e.g. a four-winged Drosophila born to a two-winged mother is not a different species). Species named in this manner are called morphospecies. In the 1970s, Robert R. Sokal, Theodore J. Crovello and Peter Sneath proposed a variation on the morphological species concept, a phenetic species, defined as a set of organisms with a similar phenotype to each other, but a different phenotype from other sets of organisms. It differs from the morphological species concept in including a numerical measure of distance or similarity to cluster entities based on multivariate comparisons of a reasonably large number of phenotypic traits. Recognition and cohesion species A mate-recognition species is a group of sexually reproducing organisms that recognise one another as potential mates. Expanding on this to allow for post-mating isolation, a cohesion species is the most inclusive population of individuals having the potential for phenotypic cohesion through intrinsic cohesion mechanisms; no matter whether populations can hybridise successfully, they are still distinct cohesion species if the amount of hybridisation is insufficient to completely mix their respective gene pools. A further development of the recognition concept is provided by the biosemiotic concept of species. Genetic similarity and barcode species In microbiology, genes can move freely even between distantly related bacteria, possibly extending to the whole bacterial domain. As a rule of thumb, microbiologists have assumed that members of Bacteria or Archaea with 16S ribosomal RNA gene sequences more similar than 97% to each other need to be checked by DNA–DNA hybridisation to decide if they belong to the same species. This concept was narrowed in 2006 to a similarity of 98.7%. The average nucleotide identity (ANI) method quantifies genetic distance between entire genomes, using regions of about 10,000 base pairs. With enough data from genomes of one genus, algorithms can be used to categorize species, as for Pseudomonas avellanae in 2013, and for all sequenced bacteria and archaea since 2020. 
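As a toy sketch of the underlying idea only (the real ANI pipeline aligns genome fragments of roughly 10,000 base pairs and averages identity over the aligned regions, which the few lines below do not attempt), two already-aligned fragments of equal length can be compared position by position in Python:

    def fragment_identity(a: str, b: str) -> float:
        """Percent identity between two aligned, equal-length DNA fragments."""
        assert len(a) == len(b)
        matches = sum(1 for x, y in zip(a, b) if x == y)
        return 100.0 * matches / len(a)

    def average_nucleotide_identity(pairs) -> float:
        """Average identity over a collection of aligned fragment pairs.

        In practice the fragments come from whole-genome alignments, and
        values above roughly 95% are commonly taken to indicate that two
        genomes belong to the same species.
        """
        values = [fragment_identity(a, b) for a, b in pairs]
        return sum(values) / len(values)

    # Toy example with two short "fragments" per genome pair: 87.5% and 100%.
    print(average_nucleotide_identity([("ACGTACGT", "ACGTACCT"),
                                       ("GGGTTTAA", "GGGTTTAA")]))   # 93.75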
Observed ANI values among sequences appear to have an "ANI gap" at 85–95%, suggesting that a genetic boundary suitable for defining a species concept is present. DNA barcoding has been proposed as a way to distinguish species suitable even for non-specialists to use. One of the barcodes is a region of mitochondrial DNA within the gene for cytochrome c oxidase. A database, Barcode of Life Data System, contains DNA barcode sequences from over 190,000 species. However, scientists such as Rob DeSalle have expressed concern that classical taxonomy and DNA barcoding, which they consider a misnomer, need to be reconciled, as they delimit species differently. Genetic introgression mediated by endosymbionts and other vectors can further make barcodes ineffective in the identification of species. Phylogenetic or cladistic species A phylogenetic or cladistic species is "the smallest aggregation of populations (sexual) or lineages (asexual) diagnosable by a unique combination of character states in comparable individuals (semaphoronts)". The empirical basis – observed character states – provides the evidence to support hypotheses about evolutionarily divergent lineages that have maintained their hereditary integrity through time and space. Molecular markers may be used to determine diagnostic genetic differences in the nuclear or mitochondrial DNA of various species. For example, in a study done on fungi, studying the nucleotide characters using cladistic species produced the most accurate results in recognising the numerous fungi species of all the concepts studied. Versions of the phylogenetic species concept that emphasise monophyly or diagnosability may lead to splitting of existing species, for example in Bovidae, by recognising old subspecies as species, despite the fact that there are no reproductive barriers, and populations may intergrade morphologically. Others have called this approach taxonomic inflation, diluting the species concept and making taxonomy unstable. Yet others defend this approach, considering "taxonomic inflation" pejorative and labelling the opposing view as "taxonomic conservatism"; claiming it is politically expedient to split species and recognise smaller populations at the species level, because this means they can more easily be included as endangered in the IUCN red list and can attract conservation legislation and funding. Unlike the biological species concept, a cladistic species does not rely on reproductive isolation – its criteria are independent of processes that are integral in other concepts. Therefore, it applies to asexual lineages. However, it does not always provide clear cut and intuitively satisfying boundaries between taxa, and may require multiple sources of evidence, such as more than one polymorphic locus, to give plausible results. Evolutionary species An evolutionary species, suggested by George Gaylord Simpson in 1951, is "an entity composed of organisms which maintains its identity from other such entities through time and over space, and which has its own independent evolutionary fate and historical tendencies". This differs from the biological species concept in embodying persistence over time. 
Wiley and Mayden stated that they see the evolutionary species concept as "identical" to Willi Hennig's species-as-lineages concept, and asserted that the biological species concept, "the several versions" of the phylogenetic species concept, and the idea that species are of the same kind as higher taxa are not suitable for biodiversity studies (with the intention of estimating the number of species accurately). They further suggested that the concept works for both asexual and sexually-reproducing species. A version of the concept is Kevin de Queiroz's "General Lineage Concept of Species". Ecological species An ecological species is a set of organisms adapted to a particular set of resources, called a niche, in the environment. According to this concept, populations form the discrete phenetic clusters that we recognise as species because the ecological and evolutionary processes controlling how resources are divided up tend to produce those clusters. Genetic species A genetic species as defined by Robert Baker and Robert Bradley is a set of genetically isolated interbreeding populations. This is similar to Mayr's Biological Species Concept, but stresses genetic rather than reproductive isolation. In the 21st century, a genetic species could be established by comparing DNA sequences. Earlier, other methods were available, such as comparing karyotypes (sets of chromosomes) and allozymes (enzyme variants). Evolutionarily significant unit An evolutionarily significant unit (ESU) or "wildlife species" is a population of organisms considered distinct for purposes of conservation. Chronospecies In palaeontology, with only comparative anatomy (morphology) and histology from fossils as evidence, the concept of a chronospecies can be applied. During anagenesis (evolution, not necessarily involving branching), some palaeontologists seek to identify a sequence of species, each one derived from the phyletically extinct one before through continuous, slow and more or less uniform change. In such a time sequence, some palaeontologists assess how much change is required for a morphologically distinct form to be considered a different species from its ancestors. Viral quasispecies Viruses have enormous populations, are doubtfully living since they consist of little more than a string of DNA or RNA in a protein coat, and mutate rapidly. All of these factors make conventional species concepts largely inapplicable. A viral quasispecies is a group of genotypes related by similar mutations, competing within a highly mutagenic environment, and hence governed by a mutation–selection balance. It is predicted that a viral quasispecies at a low but evolutionarily neutral and highly connected (that is, flat) region in the fitness landscape will outcompete a quasispecies located at a higher but narrower fitness peak in which the surrounding mutants are unfit, "the quasispecies effect" or the "survival of the flattest". There is no suggestion that a viral quasispecies resembles a traditional biological species. The International Committee on Taxonomy of Viruses has since 1962 developed a universal taxonomic scheme for viruses; this has stabilised viral taxonomy. Mayr's biological species concept Most modern textbooks make use of Ernst Mayr's 1942 definition, known as the biological species concept, as a basis for further discussion on the definition of species. It is also called a reproductive or isolation concept. 
This defines a species as "groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups". It has been argued that this definition is a natural consequence of the effect of sexual reproduction on the dynamics of natural selection. Mayr's use of the adjective "potentially" has been a point of debate; some interpretations exclude unusual or artificial matings that occur only in captivity, or that involve animals capable of mating but that do not normally do so in the wild. The species problem It is difficult to define a species in a way that applies to all organisms. The debate about species concepts is called the species problem. The problem was recognised even in 1859, when Darwin acknowledged in On the Origin of Species that no single definition had yet satisfied all naturalists, even though every naturalist knew vaguely what was meant when speaking of a species; he went on to describe the term as one given arbitrarily, for convenience, to sets of individuals closely resembling one another. When Mayr's concept breaks down Many authors have argued that a simple textbook definition, following Mayr's concept, works well for most multi-celled organisms, but breaks down in several situations: When organisms reproduce asexually, as in single-celled organisms such as bacteria and other prokaryotes, and parthenogenetic or apomictic multi-celled organisms. DNA barcoding and phylogenetics are commonly used in these cases. The term quasispecies is sometimes used for rapidly mutating entities like viruses. When scientists do not know whether two morphologically similar groups of organisms are capable of interbreeding; this is the case with all extinct life-forms in palaeontology, as breeding experiments are not possible. When hybridisation permits substantial gene flow between species. In ring species, when members of adjacent populations in a widely continuous distribution range interbreed successfully but members of more distant populations do not. Species identification is made difficult by discordance between molecular and morphological investigations; these can be categorised as two types: (i) one morphology, multiple lineages (e.g. morphological convergence, cryptic species) and (ii) one lineage, multiple morphologies (e.g. phenotypic plasticity, multiple life-cycle stages). In addition, horizontal gene transfer (HGT) makes it difficult to define a species. All species definitions assume that an organism acquires its genes from one or two parents very like the "daughter" organism, but that is not what happens in HGT. There is strong evidence of HGT between very dissimilar groups of prokaryotes, and at least occasionally between dissimilar groups of eukaryotes, including some crustaceans and echinoderms. The evolutionary biologist James Mallet concludes that there is no easy way to tell whether related geographic or temporal forms belong to the same or to different species, since species gaps can be verified only locally and at a single point in time. The botanist Brent Mishler argued that the species concept is not valid, notably because gene flux decreases gradually rather than in discrete steps, which hampers objective delimitation of species. Indeed, complex and unstable patterns of gene flux have been observed in cichlid teleosts of the East African Great Lakes. Wilkins argued that "if we were being true to evolution and the consequent phylogenetic approach to taxa, we should replace it with a 'smallest clade' idea" (a phylogenetic species concept). Mishler and Wilkins and others concur with this approach, even though this would raise difficulties in biological nomenclature. Wilkins cited the ichthyologist Charles Tate Regan's early 20th century remark that "a species is whatever a suitably qualified biologist chooses to call a species". Wilkins noted that the philosopher Philip Kitcher called this the "cynical species concept", but argued that, far from being cynical, it usefully leads to an empirical taxonomy for any given group, based on taxonomists' experience. 
Other biologists have gone further and argued that we should abandon species entirely, and refer to the "Least Inclusive Taxonomic Units" (LITUs), a view that would be coherent with current evolutionary theory. Aggregates of microspecies The species concept is further weakened by the existence of microspecies, groups of organisms, including many plants, with very little genetic variability, usually forming species aggregates. For example, the dandelion Taraxacum officinale and the blackberry Rubus fruticosus are aggregates with many microspecies—perhaps 400 in the case of the blackberry and over 200 in the dandelion, complicated by hybridisation, apomixis and polyploidy, making gene flow between populations difficult to determine, and their taxonomy debatable. Species complexes occur in insects such as Heliconius butterflies, vertebrates such as Hypsiboas treefrogs, and fungi such as the fly agaric. Hybridisation Natural hybridisation presents a challenge to the concept of a reproductively isolated species, as fertile hybrids permit gene flow between two populations. For example, the carrion crow Corvus corone and the hooded crow Corvus cornix appear and are classified as separate species, yet they can hybridise where their geographical ranges overlap. Ring species A ring species is a connected series of neighbouring populations, each of which can sexually interbreed with adjacent related populations, but for which there exist at least two "end" populations in the series, which are too distantly related to interbreed, though there is a potential gene flow between each "linked" population. Such non-breeding, though genetically connected, "end" populations may co-exist in the same region thus closing the ring. Ring species thus present a difficulty for any species concept that relies on reproductive isolation. However, ring species are at best rare. Proposed examples include the herring gull–lesser black-backed gull complex around the North pole, the Ensatina eschscholtzii group of 19 populations of salamanders in America, and the greenish warbler in Asia, but many so-called ring species have turned out to be the result of misclassification leading to questions on whether there really are any ring species. Taxonomy and naming Common and scientific names The commonly used names for kinds of organisms are often ambiguous: "cat" could mean the domestic cat, Felis catus, or the cat family, Felidae. Another problem with common names is that they often vary from place to place, so that puma, cougar, catamount, panther, painter and mountain lion all mean Puma concolor in various parts of America, while "panther" may also mean the jaguar (Panthera onca) of Latin America or the leopard (Panthera pardus) of Africa and Asia. In contrast, the scientific names of species are chosen to be unique and universal (except for some inter-code homonyms); they are in two parts used together: the genus as in Puma, and the specific epithet as in concolor. Species description A species is given a taxonomic name when a type specimen is described formally, in a publication that assigns it a unique scientific name. The description typically provides means for identifying the new species, which may not be based solely on morphology (see cryptic species), differentiating it from other previously described and related or confusable species and provides a validly published name (in botany) or an available name (in zoology) when the paper is accepted for publication. 
The type material is usually held in a permanent repository, often the research collection of a major museum or university, that allows independent verification and the means to compare specimens. Describers of new species are asked to choose names that, in the words of the International Code of Zoological Nomenclature, are "appropriate, compact, euphonious, memorable, and do not cause offence". Abbreviations Books and articles sometimes intentionally do not identify species fully, using the abbreviation "sp." in the singular or "spp." (standing for species pluralis, Latin for "multiple species") in the plural in place of the specific name or epithet (e.g. "Canis sp."). This commonly occurs when authors are confident that some individuals belong to a particular genus but are not sure to which exact species they belong, as is common in paleontology. Authors may also use "spp." as a short way of saying that something applies to many species within a genus, but not to all. If scientists mean that something applies to all species within a genus, they use the genus name without the specific name or epithet. The names of genera and species are usually printed in italics. However, abbreviations such as "sp." should not be italicised. When a species' identity is not clear, a specialist may use "cf." before the epithet to indicate that confirmation is required. The abbreviations "nr." (near) or "aff." (affine) may be used when the identity is unclear but when the species appears to be similar to the species mentioned after. Identification codes With the rise of online databases, codes have been devised to provide identifiers for species that are already defined, including: National Center for Biotechnology Information (NCBI) employs a numeric 'taxid' or Taxonomy identifier, a "stable unique identifier", e.g., the taxid of Homo sapiens is 9606. Kyoto Encyclopedia of Genes and Genomes (KEGG) employs a three- or four-letter code for a limited number of organisms; in this code, for example, H. sapiens is simply hsa. UniProt employs an "organism mnemonic" of not more than five alphanumeric characters, e.g., HUMAN for H. sapiens. Integrated Taxonomic Information System (ITIS) provides a unique number for each species. The LSID for Homo sapiens is urn:lsid:catalogueoflife.org:taxon:4da6736d-d35f-11e6-9d3f-bc764e092680:col20170225. Lumping and splitting The naming of a particular species, including which genus (and higher taxa) it is placed in, is a hypothesis about the evolutionary relationships and distinguishability of that group of organisms. As further information comes to hand, the hypothesis may be corroborated or refuted. Sometimes, especially in the past when communication was more difficult, taxonomists working in isolation have given two distinct names to individual organisms later identified as the same species. When two species names are discovered to apply to the same species, the older species name is given priority and usually retained, and the newer name considered as a junior synonym, a process called synonymy. Dividing a taxon into multiple, often new, taxa is called splitting. Taxonomists are often referred to as "lumpers" or "splitters" by their colleagues, depending on their personal approach to recognising differences or commonalities between organisms. 
The circumscription of taxa, considered a taxonomic decision at the discretion of cognizant specialists, is not governed by the Codes of Zoological or Botanical Nomenclature, in contrast to the PhyloCode, and contrary to what is done in several other fields, in which the definitions of technical terms, like geochronological units and geopolitical entities, are explicitly delimited. Broad and narrow senses The nomenclatural codes that guide the naming of species, including the ICZN for animals and the ICN for plants, do not make rules for defining the boundaries of the species. Research can change the boundaries, also known as circumscription, based on new evidence. Species may then need to be distinguished by the boundary definitions used, and in such cases the names may be qualified with sensu stricto ("in the narrow sense") to denote usage in the exact meaning given by an author such as the person who named the species, while the antonym sensu lato ("in the broad sense") denotes a wider usage, for instance including other subspecies. Other abbreviations such as "auct." ("author"), and qualifiers such as "non" ("not") may be used to further clarify the sense in which the specified authors delineated or described the species. Change Species are subject to change, whether by evolving into new species, exchanging genes with other species, merging with other species or by becoming extinct. Speciation The evolutionary process by which biological populations of sexually-reproducing organisms evolve to become distinct or reproductively isolated as species is called speciation. Charles Darwin was the first to describe the role of natural selection in speciation in his 1859 book The Origin of Species. Speciation depends on a measure of reproductive isolation, a reduced gene flow. This occurs most easily in allopatric speciation, where populations are separated geographically and can diverge gradually as mutations accumulate. Reproductive isolation is threatened by hybridisation, but this can be selected against once a pair of populations have incompatible alleles of the same gene, as described in the Bateson–Dobzhansky–Muller model. A different mechanism, phyletic speciation, involves one lineage gradually changing over time into a new and distinct form (a chronospecies), without increasing the number of resultant species. Exchange of genes between species Horizontal gene transfer between organisms of different species, either through hybridisation, antigenic shift, or reassortment, is sometimes an important source of genetic variation. Viruses can transfer genes between species. Bacteria can exchange plasmids with bacteria of other species, including some apparently distantly related ones in different phylogenetic domains, making analysis of their relationships difficult, and weakening the concept of a bacterial species. Louis-Marie Bobay and Howard Ochman suggest, based on analysis of the genomes of many types of bacteria, that they can often be grouped "into communities that regularly swap genes", in much the same way that plants and animals can be grouped into reproductively isolated breeding populations. Bacteria may thus form species, analogous to Mayr's biological species concept, consisting of asexually reproducing populations that exchange genes by homologous recombination. Extinction A species is extinct when the last individual of that species dies, but it may be functionally extinct well before that moment. 
It is estimated that over 99 percent of all species that ever lived on Earth, some five billion species, are now extinct. Some of these were lost in mass extinctions such as those at the ends of the Ordovician, Devonian, Permian, Triassic and Cretaceous periods. Mass extinctions had a variety of causes including volcanic activity, climate change, and changes in oceanic and atmospheric chemistry, and they in turn had major effects on Earth's ecology, atmosphere, land surface and waters. Another form of extinction is through the assimilation of one species by another through hybridization. The resulting single species has been termed a "compilospecies". Practical implications Biologists and conservationists need to categorise and identify organisms in the course of their work. Difficulty assigning organisms reliably to a species constitutes a threat to the validity of research results, for example making measurements of how abundant a species is in an ecosystem moot. Surveys using a phylogenetic species concept reported 48% more species and accordingly smaller populations and ranges than those using nonphylogenetic concepts; this was termed "taxonomic inflation", which could cause a false appearance of change to the number of endangered species and consequent political and practical difficulties. Some observers claim that there is an inherent conflict between the desire to understand the processes of speciation and the need to identify and to categorise. Conservation laws in many countries make special provisions to prevent species from going extinct. Hybridization zones between two species, one that is protected and one that is not, have sometimes led to conflicts between lawmakers, land owners and conservationists. One of the classic cases in North America is that of the protected northern spotted owl, which hybridises with the unprotected California spotted owl and the barred owl; this has led to legal debates. It has been argued that, since species are not comparable, simply counting them is not a valid measure of biodiversity; alternative measures of phylogenetic biodiversity have been proposed. History Classical forms In his biology, Aristotle used the term γένος (génos) to mean a kind, such as a bird or fish, and εἶδος (eidos) to mean a specific form within a kind, such as (within the birds) the crane, eagle, crow, or sparrow. These terms were translated into Latin as "genus" and "species", though they do not correspond to the Linnean terms thus named; today the birds are a class, the cranes are a family, and the crows a genus. A kind was distinguished by its attributes; for instance, a bird has feathers, a beak, wings, a hard-shelled egg, and warm blood. A form was distinguished by being shared by all its members, the young inheriting any variations they might have from their parents. Aristotle believed all kinds and forms to be distinct and unchanging. More importantly, in Aristotle's works, the terms γένος (génos) and εἶδος (eidos) are relative; a taxon that is considered an eidos in a given context can be considered a génos in another, and be further subdivided into eide (plural of eidos). His approach remained influential until the Renaissance, and still, to a lesser extent, today. Fixed species When observers in the Early Modern period began to develop systems of organization for living things, they placed each kind of animal or plant into a context. 
Many of these early delineation schemes would now be considered whimsical: schemes included consanguinity based on colour (all plants with yellow flowers) or behaviour (snakes, scorpions and certain biting ants). John Ray, an English naturalist, was the first to attempt a biological definition of species in 1686, as follows: In the 18th century, the Swedish scientist Carl Linnaeus classified organisms according to shared physical characteristics, and not simply based upon differences. Like many contemporary systematists, he established the idea of a taxonomic hierarchy of classification based upon observable characteristics and intended to reflect natural relationships. At the time, however, it was still widely believed that there was no organic connection between species (except, possibly, between those of a given genus), no matter how similar they appeared. This view was influenced by European scholarly and religious education, which held that the taxa had been created by God, forming an Aristotelian hierarchy, the scala naturae or great chain of being. However, whether or not it was supposed to be fixed, the scala (a ladder) inherently implied the possibility of climbing. Mutability In viewing evidence of hybridisation, Linnaeus recognised that species were not fixed and could change; he did not consider that new species could emerge and maintained a view of divinely fixed species that may alter through processes of hybridisation or acclimatisation. By the 19th century, naturalists understood that species could change form over time, and that the history of the planet provided enough time for major changes. Jean-Baptiste Lamarck, in his 1809 Zoological Philosophy, described the transmutation of species, proposing that a species could change over time, in a radical departure from Aristotelian thinking. In 1859, Charles Darwin and Alfred Russel Wallace provided a compelling account of evolution and the formation of new species. Darwin argued that it was populations that evolved, not individuals, by natural selection from naturally occurring variation among individuals. This required a new definition of species. Darwin concluded that species are what they appear to be: ideas, provisionally useful for naming groups of interacting individuals, writing:
Biology and health sciences
Biology
null
21784348
https://en.wikipedia.org/wiki/Trombiculidae
Trombiculidae
Trombiculidae, commonly referred to in North America as chiggers and in Britain as harvest mites, but also known as berry bugs, bush-mites, red bugs or scrub-itch mites, are a family of mites. Chiggers are often confused with jiggers – a type of flea. Several species of Trombiculidae in their larval stage bite their animal host and, by embedding their mouthparts into the skin, cause "intense irritation", or "a wheal, usually with severe itching and dermatitis". Humans are possible hosts. Trombiculidae live in forests and grasslands and are also found in the vegetation of low, damp areas such as woodlands, berry bushes, orchards, along lakes and streams, and even in drier places where vegetation is low, such as lawns, golf courses, and parks. They are most numerous in early summer when grass, weeds, and other vegetation are heaviest. In their larval stage, they attach to various animals, including humans, and feed on skin, often causing itching. These relatives of ticks are nearly microscopic, measuring 400 μm (1/60 of an inch), and have a chrome-orange hue. There is a marked constriction in the front part of the body in the nymph and adult stages. The best-known species of chigger in North America is the hard-biting Trombicula alfreddugesi of the Southeastern United States, humid Midwest and Mexico. In the UK, the most prevalent harvest mite is Neotrombicula autumnalis, which is distributed from Western Europe to Eastern Asia. Trombiculid mites go through a lifecycle of egg, larva, nymph, and adult. The larval mites feed on the skin cells of animals. The six-legged parasitic larvae feed on a large variety of creatures, including humans, rabbits, toads, box turtles, quail, and even some insects. After crawling onto their hosts, they inject digestive enzymes into the skin that break down skin cells. They do not actually "bite", but instead form a hole in the skin called a stylostome and chew up tiny parts of the inner skin, thus causing irritation and swelling. The itching is accompanied by red, pimple-like bumps (papules) or hives and skin rash or lesions on a sun-exposed area. For humans, itching usually occurs after the larvae detach from the skin. After feeding on their hosts, the larvae drop to the ground and become nymphs, then mature into adults, which have eight legs and are harmless to humans. In the postlarval stages, they are not parasitic and feed on plant material. The females lay three to eight eggs in a clutch, usually on a leaf or among the roots of a plant, and die by autumn. History The family name Trombiculidae derives from a Greek word meaning "to tremble" and a Latin word for "gnat" or "midge"; the group was first described as an independent family by Henry Ellsworth Ewing in 1944. When first described, the family included two subfamilies, Hemitrombiculinae and Trombiculinae. Womersley added another, Leeuwenhoekiinae, which at the time contained only Leeuwenhoekia. Later, he erected the family Leeuwenhoekiidae for the genus and subfamily, having six genera; they have a pair of submedian setae present on the dorsal plate.
Biology and health sciences
Arachnids
Animals
21787470
https://en.wikipedia.org/wiki/Alpha%20particle
Alpha particle
Alpha particles, also called alpha rays or alpha radiation, consist of two protons and two neutrons bound together into a particle identical to a helium-4 nucleus. They are generally produced in the process of alpha decay but may also be produced in other ways. Alpha particles are named after the first letter in the Greek alphabet, α. The symbol for the alpha particle is α or α2+. Because they are identical to helium nuclei, they are also sometimes written as He2+ (or, in isotope notation, 4He2+), indicating a helium ion with a +2 charge (missing its two electrons). Once the ion gains electrons from its environment, the alpha particle becomes a normal (electrically neutral) helium atom. Alpha particles have a net spin of zero. When produced in standard alpha radioactive decay, alpha particles generally have a kinetic energy of about 5 MeV and a velocity in the vicinity of 5% of the speed of light. They are a highly ionizing form of particle radiation, with low penetration depth (stopped by a few centimetres of air, or by the skin). However, so-called long-range alpha particles from ternary fission are three times as energetic and penetrate three times as far. The helium nuclei that form 10–12% of cosmic rays are also usually of much higher energy than those produced by nuclear decay processes, and thus may be highly penetrating and able to traverse the human body and also many metres of dense solid shielding, depending on their energy. To a lesser extent, this is also true of very high-energy helium nuclei produced by particle accelerators. Name The term "alpha particle" was coined by Ernest Rutherford in reporting his studies of the properties of uranium radiation. The radiation appeared to have two different characters, the first of which he called "α radiation" and the more penetrating of which he called "β radiation". After five years of additional experimental work, Rutherford and Hans Geiger determined that "the alpha particle, after it has lost its positive charge, is a Helium atom". Alpha radiation consists of particles equivalent to doubly-ionized helium nuclei (He2+), which can gain electrons as they pass through matter. This mechanism is the origin of terrestrial helium gas. Sources Alpha decay The best-known source of alpha particles is alpha decay of heavier (mass number of at least 104) atoms. When an atom emits an alpha particle in alpha decay, the atom's mass number decreases by four due to the loss of the four nucleons in the alpha particle. The atomic number of the atom goes down by two, as a result of the loss of two protons – the atom becomes a new element. Examples of this sort of nuclear transmutation by alpha decay are the decay of uranium to thorium, and that of radium to radon. Alpha particles are commonly emitted by all of the larger radioactive nuclei such as uranium, thorium, actinium, and radium, as well as the transuranic elements. Unlike other types of decay, alpha decay as a process must have a minimum-size atomic nucleus that can support it. The smallest nuclei that have to date been found to be capable of alpha emission are beryllium-8 and tellurium-104, not counting beta-delayed alpha emission of some lighter elements. Alpha decay sometimes leaves the daughter nucleus in an excited state; the emission of a gamma ray then removes the excess energy. Mechanism of production in alpha decay In contrast to beta decay, the fundamental interactions responsible for alpha decay are a balance between the electromagnetic force and nuclear force. 
Alpha decay results from the Coulomb repulsion between the alpha particle and the rest of the nucleus, which both have a positive electric charge, but which is kept in check by the nuclear force. In classical physics, alpha particles do not have enough energy to escape the potential well created by the strong force inside the nucleus (escaping the well means climbing against the strong force up one side, after which the electromagnetic repulsion pushes the particle away down the other side). However, the quantum tunnelling effect allows alphas to escape even though they do not have enough energy to overcome the nuclear force. This is allowed by the wave nature of matter, which allows the alpha particle to spend some of its time in a region so far from the nucleus that the potential from the repulsive electromagnetic force has fully compensated for the attraction of the nuclear force. From this point, alpha particles can escape. Ternary fission Especially energetic alpha particles deriving from a nuclear process are produced in the relatively rare (one in a few hundred) nuclear fission process of ternary fission. In this process, three charged particles are produced from the event instead of the normal two, with the smallest of the charged particles most probably (90% probability) being an alpha particle. Such alpha particles are termed "long-range alphas" since, at their typical energy of 16 MeV, they are at far higher energy than is ever produced by alpha decay. Ternary fission happens in both neutron-induced fission (the nuclear reaction that happens in a nuclear reactor), and also when fissionable and fissile actinide nuclides (i.e., heavy atoms capable of fission) undergo spontaneous fission as a form of radioactive decay. In both induced and spontaneous fission, the higher energies available in heavy nuclei result in long-range alphas of higher energy than those from alpha decay. Accelerators Energetic helium nuclei (helium ions) may be produced by cyclotrons, synchrotrons, and other particle accelerators. By convention, they are not normally referred to as "alpha particles". Solar core reactions Helium nuclei may participate in nuclear reactions in stars, and occasionally and historically these have been referred to as alpha reactions (see triple-alpha process and alpha process). Cosmic rays In addition, extremely high-energy helium nuclei, sometimes referred to as alpha particles, make up about 10 to 12% of cosmic rays. The mechanisms of cosmic ray production continue to be debated. Energy and absorption The energy of the alpha particle emitted in alpha decay is mildly dependent on the half-life for the emission process, with many orders of magnitude differences in half-life being associated with energy changes of less than 50%, as shown by the Geiger–Nuttall law. The energy of alpha particles emitted varies, with higher-energy alpha particles being emitted from larger nuclei, but most alpha particles have energies of between 3 and 7 MeV (mega-electron-volts), corresponding to extremely long and extremely short half-lives of alpha-emitting nuclides, respectively. The energies and ratios are often distinct and can be used to identify specific nuclides, as in alpha spectrometry. With a typical kinetic energy of 5 MeV, the speed of emitted alpha particles is about 15,000 km/s, which is 5% of the speed of light. This energy is a substantial amount of energy for a single particle, but their high mass means alpha particles have a lower speed than any other common type of radiation, e.g. 
β particles, neutrons. Because of their charge and large mass, alpha particles are easily absorbed by materials, and they can travel only a few centimetres in air. They can be absorbed by tissue paper or by the outer layers of human skin. They typically penetrate skin to a depth of about 40 micrometres, equivalent to a few cells deep. Biological effects Due to the short range of absorption and inability to penetrate the outer layers of skin, alpha particles are not, in general, dangerous to life unless the source is ingested or inhaled. Because of this high mass and strong absorption, if alpha-emitting radionuclides do enter the body (upon being inhaled, ingested, or injected, as with the use of Thorotrast for high-quality X-ray images prior to the 1950s), alpha radiation is the most destructive form of ionizing radiation. It is the most strongly ionizing, and with large enough doses can cause any or all of the symptoms of radiation poisoning. It is estimated that chromosome damage from alpha particles is anywhere from 10 to 1000 times greater than that caused by an equivalent amount of gamma or beta radiation, with the average being set at 20 times. A study of European nuclear workers exposed internally to alpha radiation from plutonium and uranium found that when relative biological effectiveness is considered to be 20, the carcinogenic potential (in terms of lung cancer) of alpha radiation appears to be consistent with that reported for doses of external gamma radiation, i.e. a given dose of inhaled alpha particles presents the same risk as a 20-times higher dose of gamma radiation. The powerful alpha emitter polonium-210 (a milligram of 210Po emits as many alpha particles per second as 4.215 grams of 226Ra) is suspected of playing a role in lung cancer and bladder cancer related to tobacco smoking. 210Po was used to kill Russian dissident and ex-FSB officer Alexander V. Litvinenko in 2006. History of discovery and use In 1896, Henri Becquerel discovered that uranium emits an invisible radiation that can leave marks on photographic plates, and that this mystery radiation was not phosphorescence. Marie Curie showed that this phenomenon, which she called "radioactivity", was not unique to uranium, and that it was a property of individual atoms. Ernest Rutherford studied uranium radiation and discovered that it could ionize gas particles. In 1899, Rutherford discovered that uranium radiation is a mixture of two types of radiation. He performed an experiment which involved two electrodes separated by 4 cm of air. He placed some uranium on the bottom electrode, and the radiation from the uranium ionized the air between the electrodes, creating a current. Rutherford then placed an aluminium foil (5 micrometres thick) over the uranium and noticed that the current dropped slightly, indicating that the foil was absorbing some of the uranium's radiation. Rutherford placed a few more foils over the uranium and found that, for the first four foils, the current steadily decreased at a geometric rate. However, after the fourth layer of foil over the uranium, the current did not drop any further and remained more or less level for up to twelve layers of foil. This result indicated that uranium radiation has two components. Rutherford dubbed the component that was fully absorbed by just a few layers of foil "alpha radiation"; what was left was a second component that could penetrate the foils more easily, which he dubbed "beta radiation". 
In 1900, Marie Curie noticed that the absorption coefficient of alpha rays seemed to increase the thicker the barrier she placed in their path. This suggested that alpha radiation is not a form of light but made of particles that lose kinetic energy as they pass through barriers. In 1902, Rutherford found that he could deflect alpha rays with a magnetic field and an electric field, showing that alpha radiation is composed of positively charged particles. In 1906, Rutherford made some more precise measurements of the charge-to-mass ratio of alpha particles. Firstly, he found that the ratio was more or less the same whether the source was radium or actinium, showing that alpha particles are the same regardless of the source. Secondly, he found the charge-to-mass ratio of alpha particles to be half that of the hydrogen ion. Rutherford proposed three explanations: 1) an alpha particle is a hydrogen molecule (H2) with a charge of 1 e; 2) an alpha particle is an atom of helium with a charge of 2 e; 3) an alpha particle is half a helium atom with a charge of 1 e. At that time in history, scientists knew that hydrogen ions have an atomic weight of 1 and a charge of 1 e, and that helium has an atomic weight of 4. Nobody knew exactly how many electrons were in an atom. Protons and neutrons had not yet been discovered. Rutherford decided the second explanation was the most plausible because it was the simplest, and because sizeable deposits of helium were commonly found underground next to deposits of radioactive elements. His explanation was that as alpha particles are emitted by underground radioactive elements, they become trapped in the rock strata and acquire electrons, becoming helium atoms. Therefore, an alpha particle is essentially a helium atom stripped of two electrons. In 1909, Ernest Rutherford and Thomas Royds finally proved that alpha particles were indeed helium ions. To do this they collected and purified the gas emitted by radium, a known alpha particle emitter, in a glass tube. An electric spark discharge inside the tube produced light. Subsequent study of the spectra of this light showed that the gas was helium, and thus that the alpha particles were indeed helium ions. In 1911, Rutherford used alpha particle scattering data to argue that the positive charge of an atom is concentrated in a tiny nucleus. In 1913, Antonius van den Broek suggested that anomalies in the periodic table would be reduced if the nuclear charge of an atom, and thus the number of electrons in the atom, were equal to its atomic number. Therefore, a helium atom has two electrons, and an alpha particle is essentially a helium nucleus. In 1920, Rutherford deduced the existence of the proton as the source of positive charge in the atom. In 1932, James Chadwick discovered the neutron. Thereafter it was known that an alpha particle is an agglomeration of two protons and two neutrons. Anti-alpha particle While antimatter equivalents for helium-3 have been known since 1970, it took until 2010 for members of the international STAR collaboration, using the Relativistic Heavy Ion Collider at the U.S. Department of Energy's Brookhaven National Laboratory, to detect the antimatter partner of the helium-4 nucleus. Like the Rutherford scattering experiments, the antimatter experiment used gold, but this time the gold ions were moving at nearly the speed of light and colliding head-on to produce the antiparticle, also dubbed the "anti-alpha" particle. Applications Devices Some smoke detectors contain a small amount of the alpha emitter americium-241. 
Applications Devices Some smoke detectors contain a small amount of the alpha emitter americium-241. The alpha particles ionize air within a small gap, and a small current is passed through that ionized air. Smoke particles from fire that enter the air gap reduce the current flow, sounding the alarm. The isotope is extremely dangerous if inhaled or ingested, but the danger is minimal if the source is kept sealed. Many municipalities have established programs to collect and dispose of old smoke detectors, to keep them out of the general waste stream. However, the US EPA says they "may be thrown away with household garbage". Alpha decay can provide a safe power source for radioisotope thermoelectric generators used for space probes. Alpha decay is much more easily shielded against than other forms of radioactive decay. Plutonium-238, a source of alpha particles, requires only 2.5 mm of lead shielding to protect against unwanted radiation. Static eliminators typically use polonium-210, an alpha emitter, to ionize air, allowing "static cling" to dissipate more rapidly. Cancer treatment Alpha-emitting radionuclides are presently used in three different ways to eradicate cancerous tumors: as an infusible radioactive treatment targeted to specific tissues (radium-223), as a source of radiation inserted directly into solid tumors (radium-224), and as an attachment to a tumor-targeting molecule, such as an antibody to a tumor-associated antigen. Radium-223 is an alpha emitter that is naturally attracted to bone because it is a calcium mimetic. Radium-223 (as radium-223 dichloride) can be infused into a cancer patient's veins, after which it migrates to parts of the bone where there is rapid turnover of cells due to the presence of metastasized tumors. Once within the bone, Ra-223 emits alpha radiation that can destroy tumor cells within a 100-micron distance. This approach has been in use since 2013 to treat prostate cancer that has metastasized to the bone. Radionuclides infused into the circulation are able to reach sites that are accessible to blood vessels. This means, however, that the interior of a large tumor that is not vascularized (i.e. not well penetrated by blood vessels) may not be effectively eradicated by the radioactivity. Radium-224 is a radioactive isotope used as a source of alpha radiation in a cancer treatment device called DaRT (diffusing alpha emitters radiation therapy). Each radium-224 atom undergoes a decay process producing six daughter atoms, and four alpha particles are emitted during this process. The range of an alpha particle (up to 100 microns) is insufficient to cover the width of many tumors. However, radium-224's daughter atoms can diffuse up to 2–3 mm in the tissue, thus creating a "kill region" with enough radiation to potentially destroy an entire tumor, if the seeds are placed appropriately. Radium-224's half-life, at 3.6 days, is short enough to produce a rapid clinical effect while avoiding the risk of radiation damage due to overexposure. At the same time, the half-life is long enough to allow handling and shipping of the seeds to a cancer treatment center at any location across the globe. Targeted alpha therapy for solid tumors involves attaching an alpha-particle-emitting radionuclide to a tumor-targeting molecule, such as an antibody, which can be delivered by intravenous administration to a cancer patient.
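To put the 3.6-day half-life quoted above into perspective, here is a minimal sketch of simple exponential decay for radium-224 (the time points are hypothetical illustrations, not figures from the text):

```python
# Fraction of radium-224 remaining after a given number of days, assuming
# simple exponential decay N(t) = N0 * 2**(-t / half_life).
RA224_HALF_LIFE_DAYS = 3.6

def fraction_remaining(days: float, half_life: float = RA224_HALF_LIFE_DAYS) -> float:
    return 2 ** (-days / half_life)

# A couple of days for handling and shipping still leaves most of the activity,
# while after two weeks very little remains.
for days in (2, 7, 14):
    print(f"after {days:>2} days: {fraction_remaining(days):.1%} remaining")
```

This is the trade-off described above: the dose is delivered quickly once the seeds are in place, yet enough activity survives the days needed to ship them to a treatment centre.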
Alpha radiation and DRAM errors In computer technology, dynamic random access memory (DRAM) "soft errors" were linked to alpha particles in 1978 in Intel's DRAM chips. The discovery led to strict control of radioactive elements in the packaging of semiconductor materials, and the problem is largely considered to be solved.
Physical sciences
Nuclear physics
null
21789178
https://en.wikipedia.org/wiki/River%20mouth
River mouth
A river mouth is where a river flows into a larger body of water, such as another river, a lake/reservoir, a bay/gulf, a sea, or an ocean. At the river mouth, sediments are often deposited due to the slowing of the current, reducing the carrying capacity of the water. The water from a river can enter the receiving body in a variety of ways. The motion of a river is influenced by the relative density of the river water compared to the receiving water, the rotation of the Earth, and any ambient motion in the receiving water, such as tides or seiches. If the river water has a higher density than the surface of the receiving water, the river water will plunge below the surface. The river water will then either form an underflow or an interflow within the lake. However, if the river water is lighter than the receiving water, as is typically the case when fresh river water flows into the sea, the river water will float along the surface of the receiving water as an overflow. Alongside these advective transports, inflowing water will also diffuse. Landforms At the mouth of a river, the change in flow conditions can cause the river to drop any sediment it is carrying. This sediment deposition can generate a variety of landforms, such as deltas, sand bars, spits, and tie channels. Landforms at the river mouth drastically alter the local geomorphology and ecosystem. Along coasts, sand bars and similar landforms act as barriers, sheltering sensitive ecosystems that are enriched by nutrients deposited from the river. However, the damming of rivers can starve the river of sand and nutrients, creating a deficit at the river's mouth. Cultural influence As river mouths are the site of large-scale sediment deposition and allow for easy travel and ports, many towns and cities are founded there. Many places in the United Kingdom take their names from their positions at the mouths of rivers, such as Plymouth (i.e. mouth of the Plym River), Sidmouth (i.e. mouth of the Sid River), and Great Yarmouth (i.e. mouth of the Yare River); in Celtic languages, the corresponding element is Aber or Inver. Due to rising sea levels as a result of climate change, coastal cities are at heightened risk of flooding. Sediment starvation in the river compounds this concern.
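A minimal sketch of the density comparison described above, classifying an inflow as a surface overflow or a plunging flow from the densities alone (the density values are hypothetical, and real plumes also depend on mixing, stratification and momentum):

```python
def classify_inflow(river_density: float, surface_density: float) -> str:
    """Very simplified classification of a river entering a receiving basin.

    Densities are in kg/m^3. A denser inflow plunges and continues as an
    underflow along the bed or as an interflow at the depth where densities
    match; a lighter inflow spreads along the surface as an overflow.
    """
    if river_density > surface_density:
        return "plunges below the surface (underflow or interflow)"
    return "spreads along the surface (overflow)"

# Hypothetical examples: a cold, sediment-laden river entering a warm lake,
# and fresh river water entering the sea.
print(classify_inflow(river_density=1001.0, surface_density=999.5))
print(classify_inflow(river_density=1000.0, surface_density=1025.0))
```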
Physical sciences
Hydrology
Earth science
23275402
https://en.wikipedia.org/wiki/Amateur%20radio
Amateur radio
Amateur radio, also known as ham radio, is the use of the radio frequency spectrum for purposes of non-commercial exchange of messages, wireless experimentation, self-training, private recreation, radiosport, contesting, and emergency communications. The term "amateur" is used to specify "a duly authorized person interested in radioelectric practice with a purely personal aim and without pecuniary interest" (either direct monetary or other similar reward); and to differentiate it from commercial broadcasting, public safety (such as police and fire), or professional two-way radio services (such as maritime, aviation, taxis, etc.). The amateur radio service (amateur service and amateur-satellite service) is established by the International Telecommunication Union (ITU) through the Radio Regulations. National governments regulate technical and operational characteristics of transmissions and issue individual station licenses with a unique identifying call sign, which must be used in all transmissions. Amateur operators must hold an amateur radio license which is obtained by passing a government test demonstrating adequate technical radio knowledge and legal knowledge of the host government's radio regulations. Radio amateurs are limited to a specific set of frequency bands, the amateur radio bands, allocated throughout the radio spectrum, but within these bands are allowed to transmit on any frequency using a variety of voice, text, image, and data communications modes. This enables communication across a city, region, country, continent, the world, or even into space. In many countries, amateur radio operators may also send, receive, or relay radio communications between computers or transceivers connected to secure virtual private networks on the Internet. Amateur radio is officially represented and coordinated by the International Amateur Radio Union (IARU), which is organized in three regions and has as its members the national amateur radio societies which exist in most countries. According to an estimate made in 2011 by the American Radio Relay League (the American national amateur radio society), two million people throughout the world are regularly involved with amateur radio. About 830,000 amateur radio stations are located in IARU Region 2 (the Americas) followed by IARU Region 3 (South and East Asia and the Pacific Ocean) with about 750,000 stations. A significantly smaller number, about 400,000, are located in IARU Region 1 (Europe, Middle East, CIS, Africa). History The origins of amateur radio can be traced to the late 19th century, but amateur radio as practised today began in the early 20th century. The First Annual Official Wireless Blue Book of the Wireless Association of America, produced in 1909, contains a list of amateur radio stations. This radio callbook lists wireless telegraph stations in Canada and the United States, including 89 amateur radio stations. As with radio in general, amateur radio was associated with various amateur experimenters and hobbyists. Amateur radio enthusiasts have significantly contributed to science, engineering, industry, and social services. Research by amateur operators has founded new industries, built economies, empowered nations, and saved lives in times of emergency. Ham radio can also be used in the classroom to teach English, map skills, geography, math, science, and computer skills. 
Ham radio The term "ham" was originally a pejorative used in professional wired telegraphy during the 19th century to mock operators with poor Morse code-sending skills ("ham-fisted"). This term continued to be used after the invention of radio and the proliferation of amateur experimentation with wireless telegraphy; among land- and sea-based professional radio operators, "ham" amateurs were considered a nuisance. The use of "ham" meaning "amateurish or unskilled" survives today, though sparingly, in other disciplines (e.g. "ham actor"). The amateur radio community subsequently began to reclaim the word as a label of pride, and by the mid-20th century it had lost its pejorative meaning. Although not an acronym or initialism, it is often written as "HAM" in capital letters. Activity and practice The many facets of amateur radio attract practitioners with a wide range of interests. Many amateurs begin with a fascination with radio communication and then combine other personal interests to make pursuit of the hobby rewarding. Some of the focal areas amateurs pursue include radio contesting, radio propagation study, public service communication, technical experimentation, and computer networking. Radio enthusiasts employ a variety of transmission methods for interaction. The primary modes for vocal communications are frequency modulation (FM) and single sideband (SSB). FM is recognized for its superior audio quality, whereas SSB is more efficient for long-range communication under limited bandwidth conditions. Radiotelegraphy using Morse code, also known as "CW" from "continuous wave", is the wireless extension of landline (wired) telegraphy developed by Samuel Morse and dates to the earliest days of radio. Although computer-based (digital) modes and methods have largely replaced CW for commercial and military applications, many amateur radio operators still enjoy using the CW mode, particularly on the shortwave bands and for experimental work, such as Earth–Moon–Earth communication, because of its inherent signal-to-noise ratio advantages. Morse code, using internationally agreed message encodings such as the Q code, enables communication between amateurs who speak different languages. It is also popular with homebrewers and in particular with "QRP" or very-low-power enthusiasts, as CW-only transmitters are simpler to construct, and the human ear-brain signal processing system can pull weak CW signals out of the noise where voice signals would be totally inaudible. A similar "legacy" mode popular with home constructors is amplitude modulation (AM), pursued by many vintage amateur radio enthusiasts and aficionados of vacuum tube technology. Demonstrating a proficiency in Morse code was for many years a requirement to obtain an amateur license to transmit on frequencies below 30 MHz. Following changes in international regulations in 2003, countries are no longer required to demand Morse proficiency. The United States Federal Communications Commission, for example, phased out this requirement for all license classes on 23 February 2007. Modern personal computers have encouraged the use of digital modes such as radioteletype (RTTY), which previously required cumbersome mechanical equipment. Hams led the development of packet radio in the 1970s, which has employed protocols such as AX.25 and TCP/IP. Specialized digital modes such as PSK31 allow real-time, low-power communications on the shortwave bands but have been losing favor to newer digital modes such as FT8.
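Returning to the CW mode described above: since CW is simply on-off keying of a carrier, encoding a message is a table lookup from characters to dot/dash groups. A minimal sketch (only a handful of letters are included, purely for illustration):

```python
# Tiny subset of the international Morse code table; a real implementation
# would cover all letters, digits, punctuation and prosigns.
MORSE = {
    "C": "-.-.", "Q": "--.-", "S": "...", "O": "---",
    "E": ".", "T": "-", "D": "-..",
}

def to_morse(text: str) -> str:
    """Encode text as space-separated dot/dash groups, one group per character,
    silently skipping characters not in the (deliberately small) table."""
    return " ".join(MORSE[ch] for ch in text.upper() if ch in MORSE)

print(to_morse("CQ"))   # the general "calling any station" sequence: -.-. --.-
print(to_morse("SOS"))  # ... --- ...
```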
Radio over IP, or RoIP, is similar to Voice over IP (VoIP), but augments two-way radio communications rather than telephone calls. EchoLink, using VoIP technology, has enabled amateurs to communicate through local Internet-connected repeaters and radio nodes, while IRLP has allowed the linking of repeaters to provide greater coverage area. Automatic link establishment (ALE) has enabled continuous amateur radio networks to operate on the high frequency bands with global coverage. Other modes, such as FSK441 using software such as WSJT, are used for weak-signal work, including meteor scatter and moonbounce communications. Fast scan amateur television has gained popularity as hobbyists adapt inexpensive consumer video electronics like camcorders and video cards in PCs. Because of the wide bandwidth and stable signals required, amateur television is typically found in the 70 cm (420–450 MHz) wavelength range, though there is also limited use on 33 cm (902–928 MHz), 23 cm (1240–1300 MHz) and shorter wavelengths. These requirements also effectively limit the signal range to between 20 and 60 miles (30–100 km). Linked repeater systems, however, can allow transmissions of VHF and higher frequencies across hundreds of miles. Repeaters are usually located on heights of land or on tall structures, and allow operators to communicate over hundreds of miles using hand-held or mobile transceivers. Repeaters can also be linked together by using other amateur radio bands, landline, or the Internet. Amateur radio satellites can be accessed, some of them using only a hand-held transceiver (HT), at times even with the factory "rubber duck" antenna. Hams also use the moon, the aurora borealis, and the ionized trails of meteors as reflectors of radio waves. Hams can also contact the International Space Station (ISS) because many astronauts are licensed as amateur radio operators. Amateur radio operators use their amateur radio station to make contacts with individual hams as well as participate in round-table discussion groups or "rag chew sessions" on the air. Some join in regularly scheduled on-air meetings with other amateur radio operators, called "nets" (as in "networks"), which are moderated by a station referred to as "Net Control". Nets can allow operators to learn procedures for emergencies, be an informal round table, or cover specific interests shared by a group. Amateur radio operators, using battery- or generator-powered equipment, often provide essential communications services when regular channels are unavailable due to natural disaster or other disruptive events. Many amateur radio operators participate in radio contests, during which an individual or team of operators typically seek to contact and exchange information with as many other amateur radio stations as possible in a given period of time. In addition to contests, a number of amateur radio operating award schemes exist, sometimes suffixed with "on the Air", such as Summits on the Air, Islands on the Air, Worked All States and Jamboree on the Air. Amateur radio operators may also act as citizen scientists for propagation research and atmospheric science. Licensing Radio transmission permits are closely controlled by nations' governments because radio waves propagate beyond national boundaries, and therefore radio is of international concern.
Both the requirements for and privileges granted to a licensee vary from country to country, but generally follow the international regulations and standards established by the International Telecommunication Union and World Radio Conferences. All countries that license citizens to use amateur radio require operators to display knowledge and understanding of key concepts, usually by passing an exam. The licenses grant hams the privilege of operating in larger segments of the radio frequency spectrum, with a wider variety of communication techniques, and with higher power levels relative to unlicensed personal radio services (such as CB radio, FRS, and PMR446), which require type-approved equipment restricted in mode, range, and power. Amateur licensing is a routine civil administrative matter in many countries. Amateurs in those countries must pass an examination to demonstrate technical knowledge, operating competence, and awareness of legal and regulatory requirements, in order to avoid interfering with other amateurs and other radio services. A series of exams is often available, each progressively more challenging and granting more privileges: greater frequency availability, higher power output, permitted experimentation, and, in some countries, distinctive call signs. Some countries, such as the United Kingdom and Australia, have begun requiring a practical assessment in addition to the written exams in order to obtain a beginner's license, which they call a Foundation License. In most countries, an operator will be assigned a call sign with their license. In some countries, a separate "station license" is required for any station used by an amateur radio operator. Amateur radio licenses may also be granted to organizations or clubs. In some countries, hams were allowed to operate only club stations. An amateur radio license is valid only in the country where it is issued or in another country that has a reciprocal licensing agreement with the issuing country. In some countries, an amateur radio license is necessary in order to purchase or possess amateur radio equipment. Amateur radio licensing in the United States exemplifies the way in which some countries award different levels of amateur radio licenses based on technical knowledge: three sequential levels of licensing exams (Technician Class, General Class, and Amateur Extra Class) are currently offered, which allow operators who pass them access to larger portions of the Amateur Radio spectrum and more desirable (shorter) call signs. An exam, authorized by the Federal Communications Commission (FCC), is required for all levels of the Amateur Radio license. These exams are administered by Volunteer Examiners, accredited by the FCC-recognized Volunteer Examiner Coordinator (VEC) system. The Technician Class and General Class exams consist of 35 multiple-choice questions, drawn randomly from a pool of at least 350. To pass, 26 of the 35 questions must be answered correctly. The Extra Class exam has 50 multiple-choice questions (drawn randomly from a pool of at least 500), 37 of which must be answered correctly. The tests cover regulations, customs, and technical knowledge, such as FCC provisions, operating practices, advanced electronics theory, radio equipment design, and safety. Morse Code is no longer tested in the U.S. Once the exam is passed, the FCC issues an Amateur Radio license, which is valid for ten years. Studying for the exam is made easier because the entire question pools for all license classes are posted in advance.
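As a small illustration of the US pass marks quoted above (the question counts and required scores are the figures given in the text; the helper itself is purely illustrative):

```python
# (total questions, correct answers required) for each US license exam.
EXAMS = {
    "Technician": (35, 26),
    "General": (35, 26),
    "Amateur Extra": (50, 37),
}

for name, (questions, required) in EXAMS.items():
    # All three thresholds work out to roughly 74% correct.
    print(f"{name}: {required}/{questions} = {required / questions:.0%} to pass")
```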
The question pools are updated every four years by the National Conference of VECs. Licensing requirements Prospective amateur radio operators are examined on their understanding of the key concepts of electronics, radio equipment, antennas, radio propagation, RF safety, and the radio regulations of the government granting the license. These examinations are sets of questions typically posed in either a short-answer or multiple-choice format. Examinations can be administered by bureaucrats, non-paid certified examiners, or previously licensed amateur radio operators. The ease with which an individual can acquire an amateur radio license varies from country to country. In some countries, examinations may be offered only once or twice a year in the national capital and can be inordinately bureaucratic (for example in India) or challenging because some amateurs must undergo difficult security approval (as in Iran). Currently, only Yemen and North Korea do not issue amateur radio licenses to their citizens. Some developing countries, especially those in Africa, Asia, and Latin America, require the payment of annual license fees that can be prohibitively expensive for most of their citizens. A few small countries may not have a national licensing process and may instead require prospective amateur radio operators to take the licensing examinations of a foreign country. In countries with the largest numbers of amateur radio licensees, such as Japan, the United States, Thailand, Canada, and most of the countries in Europe, there are frequent license examination opportunities in major cities. Granting a separate license to a club or organization generally requires that an individual with a current and valid amateur radio license who is in good standing with the telecommunications authority assumes responsibility for any operations conducted under the club license or club call sign. A few countries may issue special licenses to novices or beginners that do not assign the individual a call sign but instead require the newly licensed individual to operate from stations licensed to a club or organization for a period of time before a higher class of license can be acquired. Reciprocal licensing A reciprocal licensing agreement between two countries allows bearers of an amateur radio license in one country, under certain conditions, to legally operate an amateur radio station in the other country without having to obtain an amateur radio license from the country being visited; alternatively, the bearer of a valid license in one country can receive a separate license and a call sign in another country with which there is a mutually agreed reciprocal licensing arrangement. Reciprocal licensing requirements vary from country to country. Some countries have bilateral or multilateral reciprocal operating agreements allowing hams to operate within their borders with a single set of requirements. Some countries lack reciprocal licensing systems. Others use international bodies such as the Organization of American States to facilitate licensing reciprocity. When traveling abroad, visiting amateur operators must follow the rules of the country in which they wish to operate. Some countries have reciprocal international operating agreements allowing hams from other countries to operate within their borders with just their home country license. Other host countries require that the visiting ham apply for a formal permit, or even a new host country-issued license, in advance.
The reciprocal recognition of licenses frequently depends not only on the involved licensing authorities, but also on the nationality of the bearer. As an example, in the US, foreign licenses are recognized only if the bearer does not have US citizenship and holds no US license (which may differ in terms of operating privileges and restrictions). Conversely, a US citizen may operate under reciprocal agreements in Canada, but a non-US citizen holding a US license may not. Newcomers Many people start their involvement in amateur radio on social media or by finding a local club. Clubs often provide information about licensing, local operating practices, and technical advice. Newcomers also often study independently by purchasing books or other materials, sometimes with the help of a mentor, teacher, or friend. In North America, established amateurs who help newcomers are often referred to as "Elmers", a term coined by Rodney Newkirk, W9BRD, within the ham community. In addition, many countries have national amateur radio societies which encourage newcomers and work with government communications regulation authorities for the benefit of all radio amateurs. The oldest of these societies is the Wireless Institute of Australia, formed in 1910; other notable societies are the Radio Society of Great Britain, the American Radio Relay League, Radio Amateurs of Canada, the Bangladesh NGOs Network for Radio and Communication, the New Zealand Association of Radio Transmitters and the South African Radio League. Call signs An amateur radio operator uses a call sign on the air to legally identify the operator or station. In some countries, the call sign assigned to the station must always be used, whereas in other countries, the call sign of either the operator or the station may be used. In certain jurisdictions, an operator may also select a "vanity" call sign, although it must conform to the issuing government's allocation and structure used for Amateur Radio call signs. Some jurisdictions require a fee to obtain such a vanity call sign; in others, such as the UK, a fee is not required and the vanity call sign may be selected when the license is applied for. The FCC in the U.S. discontinued its fee for vanity call sign applications in September 2015, but reinstated it at $35 in 2022. Call sign structure as prescribed by the ITU consists of three parts, which break down as follows, using the call sign ZS1NAT as an example:
ZS – shows the country from which the call sign originates and may also indicate the license class (this call sign is licensed in South Africa).
1 – gives the subdivision of the country or territory indicated in the first part (this one refers to the Western Cape).
NAT – the final part is unique to the holder of the license, identifying that station specifically.
Many countries do not follow the ITU convention for the numeral. In the United Kingdom, the original call signs G0xxx, G2xxx, G3xxx and G4xxx were issued to Full (A) License holders, along with the last M0xxx full call signs issued by the City & Guilds examination authority in December 2003. Additional Full Licenses were originally granted to (B) License holders with G1xxx, G6xxx, G7xxx and G8xxx call signs, and from 1991 onward with M1xxx call signs. Under the newer three-level system, Intermediate License holders are assigned 2E0xxx and 2E1xxx, and basic Foundation License holders are granted call signs M3xxx, M6xxx or M7xxx.
In the UK, rather than the numeral, the second letter after the initial 'G' or 'M' identifies the station's location; for example, the call sign G7OOE becomes GM7OOE and M0RDM becomes MM0RDM when that license holder is operating a station in Scotland. The prefixes "GM" and "MM" indicate Scotland, "GW" and "MW" Wales, "GI" and "MI" Northern Ireland, "GD" and "MD" the Isle of Man, "GJ" and "MJ" Jersey, and "GU" and "MU" Guernsey. Intermediate licence call signs are slightly different: they begin 2#0 or 2#1, where the # is replaced with the country letter as above, for example "2M0" and "2M1" for Scotland, "2W0" and "2W1" for Wales, and so on. The exception is England, for which the letter "E" is used, but only in intermediate-level call signs: "2E0" and "2E1" are used, whereas call signs beginning with G or M for foundation and full licenses never use the "E". In the United States, for non-vanity licenses, the numeral indicates the geographical district the holder resided in when the license was first issued. Prior to 1978, US hams were required to obtain a new call sign if they moved out of their geographic district. In Canada, call signs start with VA, VE, VY, VO, and CY. Call signs starting with 'V' are followed by a number that indicates the political region; the prefix CY indicates geographic islands. The prefix VA1 or VE1 is Nova Scotia, VA2 or VE2 is Quebec, VA3 or VE3 is Ontario, VA4 or VE4 is Manitoba, VA5 or VE5 is Saskatchewan, VA6 or VE6 is Alberta, VA7 or VE7 is British Columbia, VE8 is the Northwest Territories, VE9 is New Brunswick, VY0 is Nunavut, VY1 is Yukon, VY2 is Prince Edward Island, VO1 is Newfoundland, and VO2 is Labrador. CY is for amateurs operating from Sable Island (CY0) or St. Paul Island (CY9). Special permission is required to access either of these: from Parks Canada for Sable Island and from the Coast Guard for St. Paul Island. The last two or three letters of the call signs are typically the operator's choice (upon completing the licensing test, the ham lists three preferred options). Two-letter call sign suffixes require a ham to have already been licensed for five years. Call signs in Canada can be requested with a fee. Also, for smaller geopolitical entities, the numeral may be part of the country identification. For example, VP2xxx is in the British West Indies, which is subdivided into VP2Exx Anguilla, VP2Mxx Montserrat, and VP2Vxx British Virgin Islands. VP5xxx is in the Turks and Caicos Islands, VP6xxx is on Pitcairn Island, VP8xxx is in the Falklands, and VP9xxx is in Bermuda. Online callbooks or call sign databases can be browsed or searched to find out who holds a specific call sign. An example of an online callbook is QRZ.com. Non-exhaustive lists of famous people who hold or have held amateur radio call signs have also been compiled and published. Many jurisdictions (though not the UK and Europe) may issue specialty vehicle registration plates to licensed amateur radio operators. The fees for application and renewal are usually less than the standard rate for specialty plates.
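A minimal sketch of the three-part ITU breakdown described above, applied to the ZS1NAT example (the regular expression is a simplification for illustration and does not handle every real-world prefix format):

```python
import re

# Prefix, separating numeral, suffix. Real prefixes can be more complicated,
# and, as noted above, some countries do not follow the ITU convention for the
# numeral, so this only illustrates the prefix / numeral / suffix idea.
CALL_SIGN = re.compile(r"^([A-Z0-9]{1,2}?)(\d)([A-Z]{1,4})$")

def split_call_sign(call):
    match = CALL_SIGN.match(call.upper())
    if not match:
        raise ValueError(f"unrecognised call sign format: {call}")
    return match.groups()

print(split_call_sign("ZS1NAT"))  # ('ZS', '1', 'NAT'): South Africa, Western Cape, holder
print(split_call_sign("G7OOE"))   # ('G', '7', 'OOE')
```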
Privileges In most administrations, unlike other RF spectrum users, radio amateurs may build or modify transmitting equipment for their own use within the amateur spectrum without the need to obtain government certification of the equipment. Licensed amateurs can also use any frequency in their bands (rather than being allocated fixed frequencies or channels) and can operate medium-to-high-powered equipment on a wide range of frequencies so long as they meet certain technical parameters including occupied bandwidth, power, and prevention of spurious emission. Radio amateurs have access to frequency allocations throughout the RF spectrum, usually allowing choice of an effective frequency for communications across a local, regional, or worldwide path. The shortwave bands, or HF, are suitable for worldwide communication, and the VHF and UHF bands normally provide local or regional communication, while the microwave bands have enough space, or bandwidth, for amateur television transmissions and high-speed computer networks. In most countries, an amateur radio license grants permission to the license holder to own, modify, and operate equipment that is not certified by a governmental regulatory agency. This encourages amateur radio operators to experiment with home-constructed or modified equipment. The use of such equipment must still satisfy national and international standards on spurious emissions. Amateur radio operators are encouraged, both by regulation and by a tradition of respectful use of the spectrum, to use as little power as necessary to accomplish the communication, in order to minimise interference and electromagnetic compatibility (EMC) problems for other devices. Although allowable power levels are moderate by commercial standards, they are sufficient to enable global communication. Lower license classes usually have lower power limits; for example, the lowest license class in the UK (Foundation licence) has a limit of 10 W. Power limits vary from country to country and between license classes within a country. For example, the peak envelope power limits for the highest available license classes in a few selected countries are: 2.25 kW in Canada; 1.5 kW in the United States; 1.0 kW in Belgium, Luxembourg, Switzerland, South Africa and New Zealand; 750 W in Germany; 500 W in Italy; 400 W in Australia, India, and the United Kingdom; and 150 W in Oman. Output power limits may also depend on the mode of transmission. In Australia, for example, 400 W may be used for SSB transmissions, but FM and other modes are limited to 120 W. The point at which power output is measured may also affect transmissions. The United Kingdom measures at the point the antenna is connected to the signal feed cable, which means the radio system may transmit more than 400 W to overcome signal loss in the cable; conversely, Germany measures power at the output of the final amplification stage, which results in a loss in radiated power with longer cable feeds. Certain countries permit amateur radio licence holders to hold a Notice of Variation that allows higher power to be used than normally allowed for certain specific purposes. For example, in the UK some amateur radio licence holders are allowed to transmit at 2.0 kW (33 dBW) for experiments using the moon as a passive radio reflector, known as Earth–Moon–Earth (EME) communication.
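To make the measurement-point difference described above concrete, here is a small sketch relating amplifier output to power at the antenna connector for a given feedline loss (the 1.5 dB loss is a hypothetical figure, not a value from the text):

```python
FEEDLINE_LOSS_DB = 1.5  # hypothetical coaxial cable loss

def power_after_loss(power_watts: float, loss_db: float = FEEDLINE_LOSS_DB) -> float:
    """Power remaining after a feedline with the given loss in decibels."""
    return power_watts * 10 ** (-loss_db / 10)

def power_needed_before_loss(target_watts: float, loss_db: float = FEEDLINE_LOSS_DB) -> float:
    """Transmitter output needed so that target_watts arrives at the antenna."""
    return target_watts * 10 ** (loss_db / 10)

# UK-style limit measured where the antenna meets the feed cable: the rig may
# run hotter to make up for cable loss.
print(round(power_needed_before_loss(400)))  # about 565 W at the amplifier

# German-style limit measured at the amplifier output: the antenna sees less.
print(round(power_after_loss(400)))          # about 283 W after the cable
```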
Band plans and frequency allocations The International Telecommunication Union (ITU) governs the allocation of communications frequencies worldwide, with participation by each nation's communications regulation authority. National communications regulators have some liberty to restrict access to these bandplan frequencies or to award additional allocations as long as radio services in other countries do not suffer interference. In some countries, specific emission types are restricted to certain parts of the radio spectrum, and in most other countries, International Amateur Radio Union (IARU) member societies adopt voluntary plans to ensure the most effective use of spectrum. In a few cases, a national telecommunication agency may also allow hams to use frequencies outside of the internationally allocated amateur radio bands. In Trinidad and Tobago, hams are allowed to use a repeater located on 148.800 MHz. This repeater is used and maintained by the National Emergency Management Agency (NEMA), but may be used by radio amateurs in times of emergency, or during normal times to test their capability and conduct emergency drills. This repeater can also be used by non-ham NEMA staff and REACT members. In Australia and New Zealand, ham operators are authorized to use one of the UHF TV channels. In the U.S., amateur radio operators providing essential communication needs in connection with the immediate safety of human life and immediate protection of property, when normal communication systems are not available, may use any frequency, including those of other radio services such as police and fire; in cases of disaster in Alaska, they may use the statewide emergency frequency of 5.1675 MHz, with restrictions upon emissions. Similarly, amateurs in the United States may apply to be registered with the Military Auxiliary Radio System (MARS). Once approved and trained, these amateurs also operate on US government military frequencies to provide contingency communications and morale message traffic support to the military services. Modes of communication Amateurs use a variety of voice, text, image, and data communication modes over the radio. Generally, new modes can be tested in the amateur radio service, although national regulations may require disclosure of a new mode to permit radio licensing authorities to monitor the transmissions. Encryption, for example, is not generally permitted in the Amateur Radio service except for the special purpose of satellite vehicle control uplinks. The following is a partial list of the modes of communication used, where the mode includes both modulation types and operating protocols.
Voice:
Amplitude modulation (AM)
Double Sideband Suppressed Carrier (DSB-SC)
Independent Sideband (ISB)
Single Sideband (SSB)
Amplitude Modulation Equivalent (AME)
Frequency modulation (FM)
Phase modulation (PM)
Image:
Amateur Television (ATV), also known as Fast Scan television
Slow-Scan Television (SSTV)
Radiofax
Text and data: In former times, most amateur digital modes were transmitted by inserting audio into the microphone input of a radio and using an analog scheme, such as amplitude modulation (AM), frequency modulation (FM), or single-sideband modulation (SSB). Beginning in 2017, several digital modes, particularly FT8, became increasingly popular within the amateur radio community.
Text modes:
Continuous Wave (CW), usually used for Morse code
Automatic Link Establishment (ALE)
AMateur Teleprinting Over Radio (AMTOR)
PACTOR
Radioteletype (RTTY)
Hellschreiber, also referred to as either Feld-Hell or Hell
Digimodes:
D-STAR
Digital mobile radio
Fusion (Yaesu's own mode)
GTOR
Discrete multi-tone modulation modes such as Multi Tone 63 (MT63)
Multiple Frequency-Shift Keying (MFSK) modes such as FSK441, JT6M, JT65, JT9, FT8 and FT4
JS8Call
WSPR
Olivia MFSK
Packet radio (AX.25)
Automatic Packet Reporting System (APRS)
Phase-Shift Keying:
31-baud binary phase shift keying: PSK31
31-baud quadrature phase shift keying: QPSK31
63-baud binary phase shift keying: PSK63
63-baud quadrature phase shift keying: QPSK63
CLOVER
Modes by activity: The following "modes" use no one specific modulation scheme but rather are classified by the activity of the communication.
AllStarLink (AllStar / ASL)
Earth–Moon–Earth (EME)
EchoLink
Internet Radio Linking Project (IRLP)
Low Transmitter Power (QRP)
Satellite (OSCAR – Orbiting Satellite Carrying Amateur Radio)
Technology
Media and communication
null
23275627
https://en.wikipedia.org/wiki/African%20wild%20dog
African wild dog
The African wild dog (Lycaon pictus), also called painted dog and Cape hunting dog, is a wild canine native to sub-Saharan Africa. It is the largest wild canine in Africa, and the only extant member of the genus Lycaon, which is distinguished from Canis by dentition highly specialised for a hypercarnivorous diet and by a lack of dewclaws. It is estimated that there are around 6,600 adults (including 1,400 mature individuals) living in 39 subpopulations, all threatened by habitat fragmentation, human persecution and outbreaks of disease. As the largest subpopulation probably consists of fewer than 250 individuals, the African wild dog has been listed as endangered on the IUCN Red List since 1990. The African wild dog is a specialized hunter of terrestrial ungulates, mostly hunting at dawn and dusk, but also displays diurnal activity. It captures its prey by using stamina and cooperative hunting to exhaust it. Its natural competitors are lions and spotted hyenas: the former will kill the dogs where possible whilst the latter are frequent kleptoparasites. Like other canids, the African wild dog regurgitates food for its young but also extends this action to adults as a central part of the pack's social unit. The young have the privilege of feeding first on carcasses. The African wild dog has been revered in several hunter-gatherer societies, particularly those of the San people and Prehistoric Egypt. Etymology and naming The English language has several names for the African wild dog, including African hunting dog, Cape hunting dog, painted hunting dog, painted dog, painted wolf, and painted lycaon. Though the name African wild dog is widely used, 'wild dog' is thought by conservation groups to have negative connotations that could be detrimental to its image; one organisation promotes the name 'painted wolf', whilst the name 'painted dog' has been found to be the most likely to counteract negative perceptions. Taxonomic and evolutionary history Taxonomy The earliest written reference to the species appears to be from Oppian, who wrote of the thoa, a hybrid between the wolf and leopard, which resembles the former in shape and the latter in colour. Solinus's Collectanea rerum memorabilium from the third century AD describes a multicoloured wolf-like animal with a mane native to Ethiopia. The African wild dog was scientifically described in 1820 by Coenraad Jacob Temminck after examining a specimen from the coast of Mozambique, which he named Hyaena picta. It was later recognised as a canid by Joshua Brookes in 1827 and renamed Lycaon tricolor. The root word of Lycaon is the Greek λυκαίος (lykaios), meaning ‘wolf-like’. The specific epithet pictus (Latin for ‘painted’), which derived from the original picta, was later returned to it, in conformity with the International Rules on Taxonomic Nomenclature. Paleontologist George G. Simpson placed the African wild dog, the dhole and the bush dog together in the subfamily Simocyoninae on the basis of all three species having similarly trenchant carnassials. This grouping was disputed by Juliet Clutton-Brock, who argued that, dentition aside, too many differences exist among the three species to warrant classifying them in a single subfamily. Evolution The African wild dog possesses the most specialized adaptations among the canids for coat colour and diet and for pursuing its prey through its cursorial (running) ability. It has a graceful skeleton, and the loss of the first digit on its forefeet increases its stride and speed.
This adaptation allows it to pursue prey across open terrain for long distances. The teeth are generally carnassial-shaped and its premolars are the largest relative to body size of any living carnivoran with the exception of the spotted hyena. On the lower carnassials (first lower molars), the talonid has evolved to become a cutting blade for flesh-slicing, with a reduction or loss of the post-carnassial molars. This adaptation also occurs in the two other hypercarnivorous canids – the dhole and the bush dog. The African wild dog exhibits one of the most varied coat colours among mammals. Individuals differ in patterns and colours, indicating a diversity of the underlying genes. The purpose of these coat patterns may be an adaptation for communication, concealment or temperature regulation. In 2019 a study indicated that the Lycaon lineage diverged from Cuon and Canis 1.7 million years ago through this suite of adaptations, and that these occurred at the same time as large ungulates (its prey) diversified. The findings also suggest that the African wild dog is largely isolated from gene transfer with other canid species. The oldest African wild dog fossil dates back to 200,000 years ago and was found in HaYonim Cave, Israel. The evolution of the African wild dog is poorly understood owing to the scarcity of fossil finds. Some authors consider the extinct Canis subgenus Xenocyon, which lived throughout Eurasia and Africa from the Early Pleistocene to the early Middle Pleistocene, as ancestral to both the genus Lycaon and the genus Cuon. Others propose that Xenocyon should be reclassified as Lycaon. The species Canis (Xenocyon) falconeri shared the African wild dog's absent first metacarpal (dewclaw), though its dentition was still relatively unspecialised. This connection was rejected by one author because C. (X.) falconeri's lack of the first metacarpal is a poor indication of phylogenetic closeness to the African wild dog, and the dentition was too different to imply ancestry. Another ancestral candidate is the Plio-Pleistocene Lycaon sekowei of South Africa, on the basis of distinct accessory cusps on its premolars and anterior accessory cuspids on its lower premolars. These adaptations are found only in Lycaon among living canids, which shows the same adaptations to a hypercarnivorous diet. L. sekowei had not yet lost the first metacarpal absent in L. pictus and was more robust than the modern species, having 10% larger teeth. Admixture with the dhole The African wild dog has 78 chromosomes, the same number as those of species in the genus Canis. In 2018 whole genome sequencing was used to compare the dhole (Cuon alpinus) with the African wild dog. There was strong evidence of ancient genetic admixture between the two species. Today their ranges are remote from each other; however, during the Pleistocene the dhole could be found as far west as Europe. The study proposes that the dhole's distribution may have once included the Middle East, from where it may have admixed with the African wild dog in North Africa. However, there is no evidence of the dhole having existed in the Middle East or North Africa. Subspecies Five subspecies are recognised by MSW3. Although the species is genetically diverse, these subspecific designations are not universally accepted. East African and Southern African wild dog populations were once thought to be genetically distinct, based on a small number of samples.
More recent studies with a larger number of samples showed that extensive intermixing has occurred between East African and Southern African populations in the past. Some unique nuclear and mitochondrial alleles are found in Southern African and northeastern African populations, with a transition zone encompassing Botswana, Zimbabwe and southeastern Tanzania between the two. The West African wild dog population may possess a unique haplotype, thus possibly constituting a truly distinct subspecies. The original Serengeti and Maasai Mara population of painted dogs is known to have possessed a unique genotype, but these genotypes may be extinct. Description The African wild dog is the bulkiest and most solidly built of African canids. The species stands at the shoulders, measures in head-and-body length and has a tail length of . Adults have a weight range of . On average, dogs from East Africa weigh around . By body mass, they are only outsized amongst other extant canids by the gray wolf species complex. Females are usually 3–7% smaller than males. Compared to members of the genus Canis, the African wild dog is comparatively lean and tall, with outsized ears and lacking dewclaws. The middle two toepads are usually fused. Its dentition differs from that of Canis by the degeneration of the last lower molar, the narrowness of the canines and proportionately large premolars, which are the largest relative to body size of any carnivore other than hyenas. The heel of the lower carnassial M1 is crested with a single, blade-like cusp, which enhances the shearing capacity of the teeth and thus the speed at which prey can be consumed. This feature, termed "trenchant heel", is shared with two other canids: the Asian dhole and the South American bush dog. The skull is relatively shorter and broader than those of other canids. The fur of the African wild dog differs significantly from that of other canids, consisting entirely of stiff bristle-hairs with no underfur. Adults gradually lose their fur as they age, with older individuals being almost naked. Colour variation is extreme, and may serve in visual identification, as African wild dogs can recognise each other at distances of . Some geographic variation is seen in coat colour, with northeastern African specimens tending to be predominantly black with small white and yellow patches, while southern African ones are more brightly coloured, sporting a mix of brown, black and white coats. Much of the species' coat patterning occurs on the trunk and legs. Little variation in facial markings occurs, with the muzzle being black, gradually shading into brown on the cheeks and forehead. A black line extends up the forehead, turning blackish-brown on the back of the ears. A few specimens sport a brown teardrop-shaped mark below the eyes. The back of the head and neck are either brown or yellow. A white patch occasionally occurs behind the fore legs, with some specimens having completely white fore legs, chests and throats. The tail is usually white at the tip, black in the middle and brown at the base. Some specimens lack the white tip entirely, or may have black fur below the white tip. These coat patterns can be asymmetrical, with the left side of the body often having different markings from the right. Distribution and habitat The African wild dog occurs foremost in Southern and East Africa. It is rare in North Africa and mostly absent in West Africa, with the only potentially viable population occurring in Senegal's Niokolo-Koba National Park.
It is occasionally sighted in other parts of Senegal, Guinea and Mali. Its distribution is patchy in East Africa. It inhabits mostly savannas and arid zones, generally avoiding forested areas. This preference is likely linked to its hunting habits, which require open areas that do not obstruct vision or impede pursuit. It travels through scrubland, woodland and montane areas in pursuit of prey. A forest-dwelling population has been identified in the Harenna Forest, a wet montane forest up to an elevation of in the Bale Mountains of Ethiopia. At least one record exists of a pack being sighted on the summit of Mount Kilimanjaro. In Zimbabwe, it has been recorded at an elevation of . In Ethiopia, several packs were sighted at elevations of , and a dead individual was found in June 1995 at on the Sanetti Plateau. A stable population comprising more than 370 individuals is present in Kruger National Park. Behaviour and ecology Social and reproductive behaviour The African wild dog has strong social bonds, stronger than those of sympatric lions and spotted hyenas; thus, solitary living and hunting are extremely rare in the species. It lives in permanent packs consisting of two to 27 adults and yearling pups. The typical pack size in the Kruger National Park and the Maasai Mara is four or five adults, while packs in Moremi and Selous Game Reserves contain eight or nine. However, larger packs have been observed, and temporary aggregations of hundreds of individuals may have gathered in response to the seasonal migration of vast springbok herds in Southern Africa. Males and females have separate dominance hierarchies, with the latter usually being led by the oldest female. Males may be led by the oldest male, but these can be supplanted by younger specimens; thus, some packs may contain elderly male former pack leaders. The dominant pair typically monopolises breeding. The species differs from most other social carnivorans in that males remain in the natal pack, while females disperse (a pattern also found in primates such as gorillas, chimpanzees, and red colobuses). Furthermore, males in any given pack tend to outnumber females 3:1. Dispersing females join other packs and evict some of the resident females related to the other pack members, thus preventing inbreeding and allowing the evicted individuals to find new packs of their own and breed. Males rarely disperse, and when they do, they are invariably rejected by other packs already containing males. Although arguably the most social canid, the species lacks the elaborate facial expressions and body language found in the gray wolf, likely because of the African wild dog's less hierarchical social structure. Furthermore, while elaborate facial expressions are important for wolves in re-establishing bonds after long periods of separation from their family groups, they are not as necessary to African wild dogs, which remain together for much longer periods. The species does have an extensive vocal repertoire consisting of twittering, whining, yelping, squealing, whispering, barking, growling, gurgling, rumbling, moaning and hooing. African wild dog populations in East Africa appear to have no fixed breeding season, whereas those in Southern Africa usually breed during the April–July period. During estrus, the female is closely accompanied by a single male, which keeps other members of the same sex at bay. The estrus period can last as long as 20 days.
The copulatory tie characteristic of mating in most canids has been reported to be absent or very brief (less than one minute) in the African wild dog, possibly an adaptation to the prevalence of larger predators in its environment. The gestation period lasts 69–73 days, with the interval between each pregnancy typically being 12–14 months. The African wild dog produces more pups than any other canid, with litters containing around six to 16 pups, with an average of 10, thus indicating that a single female can produce enough young to form a new pack every year. Because the amount of food necessary to feed more than two litters would be impossible to acquire by the average pack, breeding is strictly limited to the dominant female, which may kill the pups of subordinates. After giving birth, the mother stays close to the pups in the den, while the rest of the pack hunts. She typically drives away pack members approaching the pups until the latter are old enough to eat solid food at three to four weeks of age. The pups leave the den around the age of three weeks and are suckled outside. The pups are weaned at the age of five weeks, when they are fed regurgitated meat by the other pack members. By seven weeks, the pups begin to take on an adult appearance, with noticeable lengthening in the legs, muzzle, and ears. Once the pups reach the age of eight to 10 weeks, the pack abandons the den and the young follow the adults during hunts. The youngest pack members are permitted to eat first on kills, a privilege which ends once they become yearlings. African wild dogs have an average lifespan of about 10 to 11 years in the wild. When separated from the pack, an African wild dog becomes depressed and can die as a result of broken heart syndrome. Male/female ratio Packs of African wild dogs have a high ratio of males to females. This is a consequence of the males mostly staying with the pack whilst female offspring disperse, and is supported by a changing sex ratio in consecutive litters. Litters born to maiden females contain a higher proportion of males, second litters are half and half, and subsequent litters are biased towards females, with this trend increasing as females get older. As a result, the earlier litters provide stable hunters whilst the higher ratio of dispersals amongst the females stops a pack from getting too big. Sneeze communication and ‘voting’ Populations in the Okavango Delta have been observed ‘rallying’ before setting out to hunt. Not every rally results in a departure, but departure becomes more likely when more individual dogs ‘sneeze’. These sneezes are characterized by a short, sharp exhale through the nostrils. When members of dominant mating pairs sneeze first, the group is much more likely to depart. If a dominant dog initiates, around three sneezes guarantee departure. When less dominant dogs sneeze first, if enough others also sneeze (about 10), then the group will go hunting. Researchers assert that wild dogs in Botswana "use a specific vocalization (the sneeze) along with a variable quorum response mechanism in the decision-making process [to go hunting at a particular moment]". Inbreeding avoidance Because the African wild dog largely exists in fragmented, small populations, its existence is endangered. Inbreeding avoidance by mate selection is a characteristic of the species and has important potential consequences for population persistence. Inbreeding is rare within natal packs.
Inbreeding may have been selected against evolutionarily because it leads to the expression of recessive deleterious alleles. Computer simulations indicate that all populations continuing to avoid incestuous mating will become extinct within 100 years due to the unavailability of unrelated mates. Thus, the impact of reduced numbers of suitable unrelated mates will likely have a severe demographic impact on the future viability of small wild dog populations. Hunting and diet The African wild dog is a specialised pack hunter of common medium-sized antelopes. It is a primarily diurnal predator and hunts by approaching prey silently, then chasing it in a pursuit clocking at up to for 10–60 minutes. The average chase covers some , during which the prey animal, if large, is repeatedly bitten on the legs, belly, and rump until it stops running, while smaller prey is simply pulled down and torn apart. African wild dogs adjust their hunting strategy to the particular prey species. They will rush at wildebeest to panic the herd and isolate a vulnerable individual, but pursue territorial antelope species (which defend themselves by running in wide circles) by cutting across the arc to foil their escape. Medium-sized prey is often killed in 2–5 minutes, whereas larger prey such as wildebeest may take half an hour to pull down. Male wild dogs usually perform the task of grabbing dangerous prey, such as warthogs, by the nose. A species-wide study showed that by preference, where available, five prey species were the most regularly selected, namely the greater kudu, Thomson's gazelle, impala, Cape bushbuck and blue wildebeest. More specifically, in East Africa, its most common prey is the Thomson's gazelle, while in Central and Southern Africa, it targets impala, reedbuck, kob, lechwe and springbok, and smaller prey such as common duiker, dik-dik, hares, spring hares, insects and cane rats. Staple prey sizes are usually between , though some local studies put upper prey sizes as variously . In the case of larger species such as kudu and wildebeest, calves are largely but not exclusively targeted. However, certain packs in the Serengeti specialized in hunting adult plains zebras weighing up to quite frequently. Another study claimed that some prey taken by wild dogs could weigh up to . This includes African buffalo juveniles during the dry season when herds are small and calves less protected. Footage from Lower Zambezi National Park taken in 2021 showed a large pack of African wild dogs hunting an adult, healthy buffalo, though this is apparently extremely rare. One pack was recorded to occasionally prey on bat-eared foxes, rolling on the carcasses before eating them. African wild dogs rarely scavenge, but have on occasion been observed to appropriate carcasses from spotted hyenas, leopards, cheetahs, lions, and animals caught in snares. Hunting success varies with prey type, vegetation cover and pack size, but African wild dogs tend to be very successful: often more than 60% of their chases end in a kill, sometimes up to 90%. An analysis of 1,119 chases by a pack of six Okavango wild dogs showed that most were short distance uncoordinated chases, and the individual kill rate was only 15.5 percent. Because kills are shared, each dog enjoyed an efficient benefit–cost ratio. Small prey such as rodents, hares and birds are hunted singly, with dangerous prey such as cane rats and Old World porcupines being killed with a quick and well-placed bite to avoid injury. 
Small prey is eaten entirely, while large animals are stripped of their meat and organs, leaving the skin, head, and skeleton intact. The African wild dog is a fast eater, with a pack being able to consume a Thomson's gazelle in 15 minutes. In the wild, one pack of 17–43 individuals in East Africa was recorded killing three animals per day on average. Unlike most social predators, African wild dogs will regurgitate food for other adults as well as young family members. Pups old enough to eat solid food are given first priority at kills, eating even before the dominant pair; subordinate adult dogs help feed and protect the pups. Enemies and competitors Lions dominate African wild dogs and are a major source of mortality for both adults and pups. Population densities are usually low in areas where lions are more abundant. One pack reintroduced into Etosha National Park was wiped out by lions. A population crash in lions in the Ngorongoro Conservation Area during the 1960s resulted in an increase in African wild dog sightings, only for their numbers to decline once the lions recovered. As with other large predators killed by lion prides, the dogs are usually killed and left uneaten by the lions, indicating the competitive rather than predatory nature of the lions' dominance. However, a few cases have been reported of old and wounded lions falling prey to African wild dogs. On occasion, packs of wild dogs have been observed defending pack members attacked by single lions, sometimes successfully. One pack in the Okavango in March 2016 was photographed by safari guides waging "an incredible fight" against a lioness that attacked a subadult dog at an impala kill, forcing the lioness to retreat, although the subadult dog died. A pack of four wild dogs was observed furiously defending an old adult male dog from a male lion that attacked it at a kill; the dog survived and rejoined the pack. African wild dogs commonly lose their kills to larger predators. Spotted hyenas are important kleptoparasites and follow packs of African wild dogs to appropriate their kills. They typically inspect areas where wild dogs have rested and eat any food remains they find. When approaching wild dogs at a kill, solitary hyenas move in cautiously and attempt to make off with a piece of meat unnoticed, though they may be mobbed in the attempt. When operating in groups, spotted hyenas are more successful in pirating African wild dog kills, though the latter's greater tendency to assist each other puts them at an advantage against spotted hyenas, which rarely work cooperatively. Cases of African wild dogs scavenging from spotted hyenas are rare. Although African wild dog packs can easily repel solitary hyenas, on the whole, the relationship between the two species is a one-sided benefit for the hyenas, with African wild dog densities being negatively correlated with high hyena populations. Beyond piracy, cases of interspecific killing of African wild dogs by spotted hyenas are documented. African wild dogs are apex predators, only fatally losing contests to larger social carnivores. Wild dog pups may occasionally be vulnerable to large eagles, such as the martial eagle, when they venture out of their dens briefly unprotected.
Threats The African wild dog is primarily threatened by habitat fragmentation, which results from human–wildlife conflict, transmission of infectious diseases and high mortality rates; it has been exterminated in large parts of North and West Africa, and its population has greatly reduced in Central Africa, Uganda and much of Kenya. Surveys in the Central African Republic's Chinko area revealed that the African wild dog population decreased from 160 individuals in 2012 to 26 individuals in 2017. At the same time, transhumant pastoralists from the border area with Sudan moved in the area with their livestock. Conservation The non-governmental organization African Wild Dog Conservancy began working in 2003 to conserve the African wild dog in northeastern and coastal Kenya, a convergence zone of two biodiversity hotspots. This area largely consists of community lands inhabited by pastoralists. With the help of local people, a pilot study was launched confirming the presence of a population of wild dogs largely unknown to conservationists. Over the next 16 years, local ecological knowledge revealed this area to be a significant refuge for African wild dogs and an important wildlife corridor connecting Kenya's Tsavo National Parks with the Horn of Africa in an increasingly human-dominated landscape. This project has been identified as a wild dog conservation priority by the IUCN/SSC Canid Specialist Group. In culture Ancient Egypt Depictions of African wild dogs are prominent on cosmetic palettes and other objects from Egypt's predynastic period, likely symbolising order over chaos and the transition between the wild and the domestic dog. Predynastic hunters may have identified with the African wild dog, as the Hunters Palette shows them wearing the animals' tails on their belts. By the dynastic period, African wild dog illustrations became much less represented, and the animal's symbolic role was largely taken over by the wolf. Ethiopia According to Enno Littmann, the people of Ethiopia's Tigray Region believed that injuring a wild dog with a spear would result in the animal dipping its tail in its wounds and flicking the blood at its assailant, causing instant death. For this reason, Tigrean shepherds used to repel wild dog attacks with pebbles rather than with edged weapons. San people The African wild dog also plays a prominent role in the mythology of Southern Africa's San people. In one story, the wild dog is indirectly linked to the origin of death, as the hare is cursed by the moon to be forever hunted by African wild dogs after the hare rebuffs the moon's promise to allow all living things to be reborn after death. Another story has the god Cagn taking revenge on the other gods by sending a group of men transformed into African wild dogs to attack them, though who won the battle is never revealed. The San of Botswana see the African wild dog as the ultimate hunter and traditionally believe that shamans and medicine men can transform themselves into wild dogs. Some San hunters will smear African wild dog bodily fluids on their feet before a hunt, believing that doing so will give them the animal's boldness and agility. Nevertheless, the species does not figure prominently in San rock art, with the only notable example being a frieze in Mount Erongo showing a pack hunting two antelopes. Ndebele The Ndebele have a story explaining why the African wild dog hunts in packs: in the beginning, when the first wild dog's wife was sick, the other animals were concerned. 
An impala went to Hare, who was a medicine man. Hare gave Impala a calabash of medicine, warning him not to turn back on the way to Wild Dog's den. Impala was startled by the scent of a leopard and turned back, spilling the medicine. A zebra then went to Hare, who gave him the same medicine along with the same advice. On the way, Zebra turned back when he saw a black mamba, thus breaking the gourd. A moment later a terrible howling was heard: Wild Dog's wife had died. Wild Dog went outside and saw Zebra standing over the broken gourd of medicine, so Wild Dog and his family chased Zebra and tore him to shreds. To this day, African wild dogs hunt zebras and impalas as revenge for their failure to deliver the medicine that could have saved Wild Dog's wife. In media Documentary A Wild Dog's Tale (2013) follows a single painted dog (named Solo by researchers) who befriends hyenas and jackals in the Okavango, hunting alongside them; Solo also feeds and cares for jackal pups. The Pale Pack, Savage Kingdom, season 1 (2016), told the story of Botswana African wild dog pack leaders Teemana and Molao; it was written and directed by Brad Bestelink, narrated by Charles Dance, and premiered on National Geographic. Dynasties (2018 TV series), episode 4, produced by Nick Lyon: Tait is the elderly matriarch of a pack of painted wolves in Zimbabwe's Mana Pools National Park. Her pack is driven out of their territory by Tait's daughter, Blacktip, the matriarch of a rival pack in need of more space for their large family of 32. Their combined territory also shrank over Tait's lifetime due to the expansion of human, hyena and lion territories. Tait leads her family into the territory of a lion pride in the midst of a drought, with Blacktip's pack in an eight-month-long pursuit. When Tait died, the pack was observed performing a rare "singing", the purpose of which is unclear.
Biology and health sciences
Canines
Animals
23278617
https://en.wikipedia.org/wiki/Ciliate
Ciliate
The ciliates are a group of alveolates characterized by the presence of hair-like organelles called cilia, which are identical in structure to eukaryotic flagella, but are in general shorter and present in much larger numbers, with a different undulating pattern than flagella. Cilia occur in all members of the group (although the peculiar Suctoria only have them for part of their life cycle) and are variously used in swimming, crawling, attachment, feeding, and sensation. Ciliates are an important group of protists, common almost anywhere there is water—in lakes, ponds, oceans, rivers, and soils, including anoxic and oxygen-depleted habitats. About 4,500 unique free-living species have been described, and the potential number of extant species is estimated at 27,000–40,000. Included in this number are many ectosymbiotic and endosymbiotic species, as well as some obligate and opportunistic parasites. Ciliate species range in size from as little as 10 μm in some colpodeans to as much as 4 mm in length in some geleiids, and include some of the most morphologically complex protozoans. In most systems of taxonomy, "Ciliophora" is ranked as a phylum under any of several kingdoms, including Chromista, Protista or Protozoa. In some older systems of classification, such as the influential taxonomic works of Alfred Kahl, ciliated protozoa are placed within the class "Ciliata" (a term which can also refer to a genus of fish). In the taxonomic scheme endorsed by the International Society of Protistologists, which eliminates formal rank designations such as "phylum" and "class", "Ciliophora" is an unranked taxon within Alveolata. Cell structure Nuclei Unlike most other eukaryotes, ciliates have two different sorts of nuclei: a tiny, diploid micronucleus (the "generative nucleus", which carries the germline of the cell), and a large, ampliploid macronucleus (the "vegetative nucleus", which takes care of general cell regulation, expressing the phenotype of the organism). The latter is generated from the micronucleus by amplification of the genome and heavy editing. The micronucleus passes its genetic material to offspring, but does not express its genes. The macronucleus provides the small nuclear RNA for vegetative growth. Division of the macronucleus occurs in most ciliate species, apart from those in class Karyorelictea, whose macronuclei are replaced every time the cell divides. Macronuclear division is accomplished by amitosis, and the segregation of the chromosomes occurs by a process whose mechanism is unknown. After a certain number of generations (200–350, in Paramecium aurelia, and as many as 1,500 in Tetrahymena) the cell shows signs of aging, and the macronuclei must be regenerated from the micronuclei. Usually, this occurs following conjugation, after which a new macronucleus is generated from the post-conjugal micronucleus. Cytoplasm Food vacuoles are formed through phagocytosis and typically follow a particular path through the cell as their contents are digested and broken down by lysosomes so the substances the vacuole contains are then small enough to diffuse through the membrane of the food vacuole into the cell. Anything left in the food vacuole by the time it reaches the cytoproct (anal pore) is discharged by exocytosis. Most ciliates also have one or more prominent contractile vacuoles, which collect water and expel it from the cell to maintain osmotic pressure, or in some function to maintain ionic balance. 
In some genera, such as Paramecium, these have a distinctive star shape, with each point being a collecting tube. Specialized structures in ciliates Mostly, body cilia are arranged in mono- and dikinetids, which respectively include one and two kinetosomes (basal bodies), each of which may support a cilium. These are arranged into rows called kineties, which run from the anterior to posterior of the cell. The body and oral kinetids make up the infraciliature, an organization unique to the ciliates and important in their classification, and include various fibrils and microtubules involved in coordinating the cilia. In some forms there are also body polykinetids, for instance, among the spirotrichs where they generally form bristles called cirri. The infraciliature is one of the main components of the cell cortex. Others are the alveoli, small vesicles under the cell membrane that are packed against it to form a pellicle maintaining the cell's shape, which varies from flexible and contractile to rigid. Numerous mitochondria and extrusomes are also generally present. The presence of alveoli, the structure of the cilia, the form of mitosis and various other details indicate a close relationship between the ciliates, Apicomplexa, and dinoflagellates. These superficially dissimilar groups make up the alveolates. Feeding Most ciliates are heterotrophs, feeding on smaller organisms, such as bacteria and algae, and detritus swept into the oral groove (mouth) by modified oral cilia. This usually includes a series of membranelles to the left of the mouth and a paroral membrane to its right, both of which arise from polykinetids, groups of many cilia together with associated structures. The food is moved by the cilia through the mouth pore into the gullet, which forms food vacuoles. Many species are also mixotrophic, combining phagotrophy and phototrophy through kleptoplasty or symbiosis with photosynthetic microbes. The ciliate Halteria has been observed to feed on chloroviruses. Feeding techniques vary considerably, however. Some ciliates are mouthless and feed by absorption (osmotrophy), while others are predatory and feed on other protozoa and in particular on other ciliates. Some ciliates parasitize animals, although only one species, Balantidium coli, is known to cause disease in humans. Reproduction and sexual phenomena Reproduction Ciliates reproduce asexually, by various kinds of fission. During fission, the micronucleus undergoes mitosis and the macronucleus elongates and undergoes amitosis (except among the Karyorelictean ciliates, whose macronuclei do not divide). The cell then divides in two, and each new cell obtains a copy of the micronucleus and the macronucleus. Typically, the cell is divided transversally, with the anterior half of the ciliate (the proter) forming one new organism, and the posterior half (the opisthe) forming another. However, other types of fission occur in some ciliate groups. These include budding (the emergence of small ciliated offspring, or "swarmers", from the body of a mature parent); strobilation (multiple divisions along the cell body, producing a chain of new organisms); and palintomy (multiple fissions, usually within a cyst). Fission may occur spontaneously, as part of the vegetative cell cycle. Alternatively, it may proceed as a result of self-fertilization (autogamy), or it may follow conjugation, a sexual phenomenon in which ciliates of compatible mating types exchange genetic material. 
While conjugation is sometimes described as a form of reproduction, it is not directly connected with reproductive processes, and does not directly result in an increase in the number of individual ciliates or their progeny. Conjugation Overview Ciliate conjugation is a sexual phenomenon that results in genetic recombination and nuclear reorganization within the cell. During conjugation, two ciliates of a compatible mating type form a bridge between their cytoplasms. The micronuclei undergo meiosis, the macronuclei disappear, and haploid micronuclei are exchanged over the bridge. In some ciliates (peritrichs, chonotrichs and some suctorians), conjugating cells become permanently fused, and one conjugant is absorbed by the other. In most ciliate groups, however, the cells separate after conjugation, and both form new macronuclei from their micronuclei. Conjugation and autogamy are always followed by fission. In many ciliates, such as Paramecium, conjugating partners (gamonts) are similar or indistinguishable in size and shape. This is referred to as "isogamontic" conjugation. In some groups, partners are different in size and shape. This is referred to as "anisogamontic" conjugation. In sessile peritrichs, for instance, one sexual partner (the microconjugant) is small and mobile, while the other (macroconjugant) is large and sessile. Stages of conjugation In Paramecium caudatum, the stages of conjugation are as follows: Compatible mating strains meet and partly fuse. The micronuclei undergo meiosis, producing four haploid micronuclei per cell. Three of these micronuclei disintegrate; the fourth undergoes mitosis. The two cells exchange a micronucleus. The cells then separate. The micronuclei in each cell fuse, forming a diploid micronucleus. Mitosis occurs three times, giving rise to eight micronuclei. Four of the new micronuclei transform into macronuclei, and the old macronucleus disintegrates. Binary fission occurs twice, yielding four identical daughter cells. DNA rearrangements (gene scrambling) Ciliates contain two types of nuclei: the somatic "macronucleus" and the germline "micronucleus". Only the DNA in the micronucleus is passed on during sexual reproduction (conjugation). On the other hand, only the DNA in the macronucleus is actively expressed and results in the phenotype of the organism. Macronuclear DNA is derived from micronuclear DNA by remarkably extensive DNA rearrangement and amplification. The macronucleus begins as a copy of the micronucleus. The micronuclear chromosomes are fragmented into many smaller pieces and amplified to give many copies. The resulting macronuclear chromosomes often contain only a single gene. In Tetrahymena, the micronucleus has 10 chromosomes (five per haploid genome), while the macronucleus has over 20,000 chromosomes. In addition, the micronuclear genes are interrupted by numerous "internal eliminated sequences" (IESs). During development of the macronucleus, IESs are deleted and the remaining gene segments, the macronuclear-destined sequences (MDSs), are spliced together to give the operational gene. Tetrahymena has about 6,000 IESs, and about 15% of micronuclear DNA is eliminated during this process. The process is guided by small RNAs and epigenetic chromatin marks.
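The IES deletion and MDS splicing described above can be pictured with a small toy model. The segment labels and nucleotide strings below are invented for illustration; real excision is guided by small RNAs and chromatin marks rather than by pre-labelled segments.

```python
# Toy model of macronuclear gene assembly: a micronuclear locus is represented
# as an ordered list of (kind, sequence) segments; IESs are discarded and the
# remaining MDSs are spliced together. Sequences here are invented placeholders.
MICRONUCLEAR_LOCUS = [
    ("MDS", "ATGGCT"),
    ("IES", "TTTTAA"),   # internal eliminated sequence
    ("MDS", "GGACGT"),
    ("IES", "ATATAT"),
    ("MDS", "TTCTGA"),
]

def assemble_macronuclear_gene(locus):
    """Delete IESs and splice the macronuclear-destined sequences (MDSs)."""
    return "".join(seq for kind, seq in locus if kind == "MDS")

if __name__ == "__main__":
    gene = assemble_macronuclear_gene(MICRONUCLEAR_LOCUS)
    print(gene)  # ATGGCTGGACGTTTCTGA
```

In spirotrichs, as the article goes on to describe, the same toy list would additionally have its MDSs stored out of order or inverted, so reassembly would also require reordering and reverse-complementing segments.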
In spirotrich ciliates (such as Oxytricha), the process is even more complex due to "gene scrambling": the MDSs in the micronucleus are often in a different order and orientation from that in the macronuclear gene, and so in addition to deletion, DNA inversion and translocation are required for "unscrambling". This process is guided by long RNAs derived from the parental macronucleus. More than 95% of micronuclear DNA is eliminated during spirotrich macronuclear development. Aging In clonal populations of Paramecium, aging occurs over successive generations, leading to a gradual loss of vitality, unless the cell line is revitalized by conjugation or autogamy. In Paramecium tetraurelia, the clonally aging line loses vitality and expires after about 200 fissions if the cell line is not rejuvenated by conjugation or self-fertilization. The basis for clonal aging was clarified by the transplantation experiments of Aufderheide in 1986, who demonstrated that the macronucleus, rather than the cytoplasm, is responsible for clonal aging. Additional experiments by Smith-Sonneborn, Holmes and Holmes, and Gilley and Blackburn demonstrated that, during clonal aging, DNA damage increases dramatically. Thus, DNA damage appears to be the cause of aging in P. tetraurelia. Fossil record Until recently, the oldest ciliate fossils known were tintinnids from the Ordovician period. In 2007, Li et al. published a description of fossil ciliates from the Doushantuo Formation, about 580 million years ago, in the Ediacaran period. These included two types of tintinnids and a possible ancestral suctorian. A fossil Vorticella has been discovered inside a leech cocoon from the Triassic period, about 200 million years ago. Phylogeny According to a 2016 phylogenetic analysis, Mesodiniea is consistently found as the sister group to all other ciliates. Additionally, two major sub-groups are distinguished within the subphylum Intramacronucleata: SAL (Spirotrichea+Armophorea+Litostomatea) and CONthreeP or Ventrata (Colpodea+Oligohymenophorea+Nassophorea+Phyllopharyngea+Plagiopylea+Prostomatea). The class Protocruziea is found as the sister group to Ventrata/CONthreeP. The class Cariacotrichea was excluded from the analysis, but it was originally established as part of Intramacronucleata. The odontostomatids were identified in 2018 as their own class, Odontostomatea, related to Armophorea. Classification Several different classification schemes have been proposed for the ciliates. The following scheme is based on a molecular phylogenetic analysis of up to four genes from 152 species representing 110 families: Class Mesodiniea (e.g. Mesodinium) Subphylum Postciliodesmatophora Class Heterotrichea (e.g. Stentor) Class Karyorelictea Subphylum Intramacronucleata Class Armophorea Class Odontostomatea (e.g. Discomorphella, Saprodinium) Class Cariacotrichea (only one species, Cariacothrix caudata) Class Muranotrichea Class Parablepharismea Class Colpodea (e.g. Colpoda) Class Litostomatea Subclass Haptoria (e.g. Didinium) Subclass Rhynchostomatia Subclass Trichostomatia (e.g. Balantidium) Class Nassophorea Class Phyllopharyngea Subclass Chonotrichia Subclass Cyrtophoria Subclass Rhynchodia Subclass Suctoria (e.g. Podophyra) Subclass Synhymenia Class Oligohymenophorea Subclass Apostomatia Subclass Astomatia Subclass Hymenostomatia (e.g. Tetrahymena) Subclass Peniculia (e.g. Paramecium) Subclass Peritrichia (e.g. Vorticella) Subclass Scuticociliatia Class Plagiopylea Class Prostomatea (e.g.
Coleps) Class Protocruziea Class Spirotrichea Subclass Choreotrichia Subclass Euplotia Subclass Hypotrichia Subclass Licnophoria Subclass Oligotrichia Subclass Phacodiniidea Subclass Protohypotrichia Other Some old classifications included Opalinidae in the ciliates. The fundamental difference between multiciliate flagellates (e.g., hemimastigids, Stephanopogon, Multicilia, opalines) and ciliates is the presence of macronuclei in ciliates alone. Pathogenicity The only member of the ciliate phylum known to be pathogenic to humans is Balantidium coli, which causes the disease balantidiasis. It is not pathogenic to the domestic pig, the primary reservoir of this pathogen.
Biology and health sciences
SAR supergroup
Plants
33402026
https://en.wikipedia.org/wiki/Video%20coding%20format
Video coding format
A video coding format (or sometimes video compression format) is a content representation format of digital video content, such as in a data file or bitstream. It typically uses a standardized video compression algorithm, most commonly based on discrete cosine transform (DCT) coding and motion compensation. A software or hardware component that compresses or decompresses a specific video coding format is a video codec. Some video coding formats are documented by a detailed technical specification document known as a video coding specification. Some such specifications are written and approved by standardization organizations as technical standards, and are thus known as video coding standards. There are de facto standards and formal standards. Video content encoded using a particular video coding format is normally bundled with an audio stream (encoded using an audio coding format) inside a multimedia container format such as AVI, MP4, FLV, RealMedia, or Matroska. As such, the user normally does not have an H.264 file, but instead has a video file, which is an MP4 container of H.264-encoded video, normally alongside AAC-encoded audio. Multimedia container formats can contain one of several different video coding formats; for example, the MP4 container format can contain video coding formats such as MPEG-2 Part 2 or H.264. Another example is the initial specification for the file type WebM, which specifies the container format (Matroska), but also exactly which video (VP8) and audio (Vorbis) compression format is inside the Matroska container, even though Matroska is capable of containing VP9 video, and Opus audio support was later added to the WebM specification. Distinction between format and codec A format is the layout plan for data produced or consumed by a codec. Although video coding formats such as H.264 are sometimes referred to as codecs, there is a clear conceptual difference between a specification and its implementations. Video coding formats are described in specifications, and software, firmware, or hardware to encode/decode data in a given video coding format from/to uncompressed video are implementations of those specifications. As an analogy, the video coding format H.264 (specification) is to the codec OpenH264 (specific implementation) what the C Programming Language (specification) is to the compiler GCC (specific implementation). Note that for each specification (e.g., H.264), there can be many codecs implementing that specification (e.g., x264, OpenH264, H.264/MPEG-4 AVC products and implementations). This distinction is not consistently reflected terminologically in the literature. The H.264 specification calls H.261, H.262, H.263, and H.264 video coding standards and does not contain the word codec. The Alliance for Open Media clearly distinguishes between the AV1 video coding format and the accompanying codec they are developing, but calls the video coding format itself a video codec specification. The VP9 specification calls the video coding format VP9 itself a codec. As an example of conflation, Chromium's and Mozilla's pages listing their video format support both call video coding formats, such as H.264, codecs. As another example, in Cisco's announcement of a free-as-in-beer video codec, the press release refers to the H.264 video coding format as a codec ("choice of a common video codec"), but calls Cisco's implementation of an H.264 encoder/decoder a codec shortly thereafter ("open-source our H.264 codec").
A video coding format does not dictate all algorithms used by a codec implementing the format. For example, a large part of how video compression typically works is by finding similarities between video frames (block-matching) and then achieving compression by copying previously coded similar subimages (such as macroblocks) and adding small differences when necessary. Finding optimal combinations of such predictors and differences is an NP-hard problem, meaning that it is practically impossible to find an optimal solution. Though the video coding format must support such compression across frames in the bitstream format, by not needlessly mandating specific algorithms for finding such block matches and other encoding steps, the codecs implementing the video coding specification have some freedom to optimize and innovate in their choice of algorithms. For example, section 0.5 of the H.264 specification says that encoding algorithms are not part of the specification. Free choice of algorithm also allows different space–time complexity trade-offs for the same video coding format, so a live feed can use a fast but space-inefficient algorithm, while a one-time DVD encoding for later mass production can trade long encoding time for space-efficient encoding. History The concept of analog video compression dates back to 1929, when R.D. Kell in Britain proposed the concept of transmitting only the portions of the scene that changed from frame to frame. The concept of digital video compression dates back to 1952, when Bell Labs researchers B.M. Oliver and C.W. Harrison proposed the use of differential pulse-code modulation (DPCM) in video coding. In 1959, the concept of inter-frame motion compensation was proposed by NHK researchers Y. Taki, M. Hatori and S. Tanaka, who proposed predictive inter-frame video coding in the temporal dimension. In 1967, University of London researchers A.H. Robinson and C. Cherry proposed run-length encoding (RLE), a lossless compression scheme, to reduce the transmission bandwidth of analog television signals. The earliest digital video coding algorithms were either for uncompressed video or used lossless compression, both of which were inefficient and impractical for digital video coding. Digital video was introduced in the 1970s, initially using uncompressed pulse-code modulation (PCM), requiring high bitrates of around 45–200 Mbit/s for standard-definition (SD) video, which was up to 2,000 times greater than the telecommunication bandwidth (up to 100 kbit/s) available until the 1990s. Similarly, uncompressed high-definition (HD) 1080p video requires bitrates exceeding 1 Gbit/s, significantly greater than the bandwidth available in the 2000s. Motion-compensated DCT Practical video compression emerged with the development of motion-compensated DCT (MC DCT) coding, also called block motion compensation (BMC) or DCT motion compensation. This is a hybrid coding algorithm, which combines two key data compression techniques: discrete cosine transform (DCT) coding in the spatial dimension, and predictive motion compensation in the temporal dimension. DCT coding is a lossy block compression transform coding technique that was first proposed by Nasir Ahmed, who initially intended it for image compression, while he was working at Kansas State University in 1972. It was then developed into a practical image compression algorithm by Ahmed with T. Natarajan and K. R. Rao at the University of Texas in 1973, and was published in 1974.
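As a concrete illustration of the transform at the centre of these hybrid coders, the sketch below builds an orthonormal 8×8 DCT-II basis and round-trips a sample block through it. This is a plain floating-point DCT for illustration only; standards such as H.264 and HEVC specify integer approximations, and the block contents here are arbitrary.

```python
import numpy as np

N = 8  # block size used by classic DCT-based coders such as JPEG and MPEG-2

def dct_basis(n: int = N) -> np.ndarray:
    """Orthonormal DCT-II basis matrix T, so that coeffs = T @ block @ T.T."""
    T = np.zeros((n, n))
    for u in range(n):
        scale = np.sqrt(1.0 / n) if u == 0 else np.sqrt(2.0 / n)
        for x in range(n):
            T[u, x] = scale * np.cos((2 * x + 1) * u * np.pi / (2 * n))
    return T

T = dct_basis()
block = np.arange(64, dtype=float).reshape(N, N)   # arbitrary sample block

coeffs = T @ block @ T.T          # forward 2-D DCT
reconstructed = T.T @ coeffs @ T  # inverse 2-D DCT

# The transform alone is reversible; compression comes from quantizing (and
# then entropy-coding) the coefficients, which concentrates the block's
# energy in a few low-frequency values.
assert np.allclose(block, reconstructed)
print(np.round(coeffs[:3, :3], 1))
```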
The other key development was motion-compensated hybrid coding. In 1974, Ali Habibi at the University of Southern California introduced hybrid coding, which combines predictive coding with transform coding. He examined several transform coding techniques, including the DCT, Hadamard transform, Fourier transform, slant transform, and Karhunen-Loeve transform. However, his algorithm was initially limited to intra-frame coding in the spatial dimension. In 1975, John A. Roese and Guner S. Robinson extended Habibi's hybrid coding algorithm to the temporal dimension, using transform coding in the spatial dimension and predictive coding in the temporal dimension, developing inter-frame motion-compensated hybrid coding. For the spatial transform coding, they experimented with different transforms, including the DCT and the fast Fourier transform (FFT), developing inter-frame hybrid coders for them, and found that the DCT is the most efficient due to its reduced complexity, capable of compressing image data down to 0.25-bit per pixel for a videotelephone scene with image quality comparable to a typical intra-frame coder requiring 2-bit per pixel. The DCT was applied to video encoding by Wen-Hsiung Chen, who developed a fast DCT algorithm with C.H. Smith and S.C. Fralick in 1977, and founded Compression Labs to commercialize DCT technology. In 1979, Anil K. Jain and Jaswant R. Jain further developed motion-compensated DCT video compression. This led to Chen developing a practical video compression algorithm, called motion-compensated DCT or adaptive scene coding, in 1981. Motion-compensated DCT later became the standard coding technique for video compression from the late 1980s onwards. Video coding standards The first digital video coding standard was H.120, developed by the CCITT (now ITU-T) in 1984. H.120 was not usable in practice, as its performance was too poor. H.120 used motion-compensated DPCM coding, a lossless compression algorithm that was inefficient for video coding. During the late 1980s, a number of companies began experimenting with discrete cosine transform (DCT) coding, a much more efficient form of compression for video coding. The CCITT received 14 proposals for DCT-based video compression formats, in contrast to a single proposal based on vector quantization (VQ) compression. The H.261 standard was developed based on motion-compensated DCT compression. H.261 was the first practical video coding standard, and uses patents licensed from a number of companies, including Hitachi, PictureTel, NTT, BT, and Toshiba, among others. Since H.261, motion-compensated DCT compression has been adopted by all the major video coding standards (including the H.26x and MPEG formats) that followed. MPEG-1, developed by the Moving Picture Experts Group (MPEG), followed in 1991, and it was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262, which was developed with patents licensed from a number of companies, primarily Sony, Thomson and Mitsubishi Electric. MPEG-2 became the standard video format for DVD and SD digital television. Its motion-compensated DCT algorithm was able to achieve a compression ratio of up to 100:1, enabling the development of digital media technologies such as video on demand (VOD) and high-definition television (HDTV). In 1999, it was followed by MPEG-4/H.263, which was a major leap forward for video compression technology. It uses patents licensed from a number of companies, primarily Mitsubishi, Hitachi and Panasonic. 
The most widely used video coding format is H.264/MPEG-4 AVC. It was developed in 2003, and uses patents licensed from a number of organizations, primarily Panasonic, Godo Kaisha IP Bridge and LG Electronics. In contrast to the standard DCT used by its predecessors, AVC uses the integer DCT. H.264 is one of the video encoding standards for Blu-ray Discs; all Blu-ray Disc players must be able to decode H.264. It is also widely used by streaming internet sources, such as videos from YouTube, Netflix, Vimeo, and the iTunes Store, web software such as the Adobe Flash Player and Microsoft Silverlight, and also various HDTV broadcasts over terrestrial (ATSC standards, ISDB-T, DVB-T or DVB-T2), cable (DVB-C), and satellite (DVB-S2). A main problem for many video coding formats has been patents, making them expensive to use or potentially risking a patent lawsuit due to submarine patents. The motivation behind many recently designed video coding formats such as Theora, VP8, and VP9 has been to create a (libre) video coding standard covered only by royalty-free patents. Patent status has also been a major point of contention for the choice of which video formats the mainstream web browsers will support inside the HTML video tag. The current-generation video coding format is HEVC (H.265), introduced in 2013. AVC uses the integer DCT with 4x4 and 8x8 block sizes, while HEVC uses integer DCT and DST transforms with varied block sizes between 4x4 and 32x32. HEVC is heavily patented, mostly by Samsung Electronics, GE, NTT, and JVCKenwood. It is challenged by the AV1 format, which is intended to be royalty-free. AVC is by far the most commonly used format for the recording, compression, and distribution of video content, used by 91% of video developers, followed by HEVC, which is used by 43% of developers. List of video coding standards Lossless, lossy, and uncompressed Consumer video is generally compressed using lossy video codecs, since that results in significantly smaller files than lossless compression. Some video coding formats are designed explicitly for either lossy or lossless compression, and some video coding formats such as Dirac and H.264 support both. Uncompressed video, such as Clean HDMI, is a form of lossless video used in some circumstances, such as when sending video to a display over an HDMI connection. Some high-end cameras can also capture video directly in this format. Intra-frame Interframe compression complicates editing of an encoded video sequence. One subclass of relatively simple video coding formats is the intra-frame video formats, such as DV, in which each frame of the video stream is compressed independently without referring to other frames in the stream, and no attempt is made to take advantage of correlations between successive pictures over time for better compression. One example is Motion JPEG, which is simply a sequence of individually JPEG-compressed images. This approach is quick and simple, at the expense of the encoded video being much larger than a video coding format supporting inter-frame coding. Because interframe compression copies data from one frame to another, if the original frame is simply cut out (or lost in transmission), the following frames cannot be reconstructed properly. Making cuts in intraframe-compressed video while video editing is almost as easy as editing uncompressed video: one finds the beginning and ending of each frame, and simply copies bit-for-bit each frame that one wants to keep, and discards the frames one does not want.
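The inter-frame copying just described relies on the block-matching search outlined earlier in this article: for each block of the current frame, the encoder finds a closely matching region of a previously coded frame, then stores a motion vector plus a small residual. The sketch below is a toy full-search matcher; the block size, search range, and sum-of-absolute-differences cost are illustrative choices rather than requirements of any particular standard.

```python
import numpy as np

def best_match(ref: np.ndarray, block: np.ndarray, top: int, left: int,
               search: int = 8) -> tuple[int, int, np.ndarray]:
    """Exhaustive block-matching: return the motion vector (dy, dx) minimising
    the sum of absolute differences (SAD), plus the residual to be coded."""
    h, w = block.shape
    best = (0, 0)
    best_sad = np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            candidate = ref[y:y + h, x:x + w]
            sad = np.abs(candidate.astype(int) - block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    dy, dx = best
    residual = block.astype(int) - ref[top + dy:top + dy + h,
                                       left + dx:left + dx + w].astype(int)
    return dy, dx, residual

# Toy frames: the "current" frame is the reference shifted two pixels left,
# so the matcher should recover a motion vector of (0, 2) with a zero residual.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
current = np.roll(reference, shift=-2, axis=1)

dy, dx, residual = best_match(reference, current[16:32, 16:32], top=16, left=16)
print(dy, dx, int(np.abs(residual).sum()))  # expect: 0 2 0
```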
Another difference between intraframe and interframe compression is that, with intraframe systems, each frame uses a similar amount of data. In most interframe systems, certain frames (such as I-frames in MPEG-2) are not allowed to copy data from other frames, so they require much more data than other frames nearby. It is possible to build a computer-based video editor that spots problems caused when I-frames are edited out while other frames need them. This has allowed newer formats like HDV to be used for editing. However, this process demands a lot more computing power than editing intraframe-compressed video with the same picture quality. This type of compression is also not very effective to use for audio formats. Profiles and levels A video coding format can define optional restrictions to encoded video, called profiles and levels. It is possible to have a decoder which only supports decoding a subset of profiles and levels of a given video format, for example to make the decoder program/hardware smaller, simpler, or faster. A profile restricts which encoding techniques are allowed. For example, the H.264 format includes the profiles baseline, main and high (and others). While P-slices (which can be predicted based on preceding slices) are supported in all profiles, B-slices (which can be predicted based on both preceding and following slices) are supported in the main and high profiles but not in baseline. A level is a restriction on parameters such as maximum resolution and data rates.
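In practice, a player compares the profile and level signalled in a bitstream against what its decoder implements and rejects the stream (or falls back to software decoding) when they exceed its capabilities. The sketch below shows that check in miniature; the profile_idc values 66/77/100 and the convention level_idc = 10 × level follow common H.264 usage, while the decoder capability table is an invented example.

```python
# Toy capability check: a decoder advertises the profiles it implements and the
# highest level it can handle for each; a stream signals (profile_idc, level_idc).
# profile_idc 66/77/100 and level_idc = 10 x level follow common H.264 usage;
# the capability table itself is an invented example.
H264_PROFILES = {66: "Baseline", 77: "Main", 100: "High"}

DECODER_CAPS = {          # hypothetical hardware decoder
    66: 41,               # Baseline up to level 4.1
    77: 41,               # Main up to level 4.1
    100: 40,              # High up to level 4.0
}

def can_decode(profile_idc: int, level_idc: int) -> bool:
    max_level = DECODER_CAPS.get(profile_idc)
    return max_level is not None and level_idc <= max_level

for stream in [(66, 30), (100, 40), (100, 51)]:
    name = H264_PROFILES.get(stream[0], "unknown")
    print(f"{name} @ level {stream[1] / 10}: supported={can_decode(*stream)}")
```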
Technology
File formats
null
38905529
https://en.wikipedia.org/wiki/IBM%20POWER%20architecture
IBM POWER architecture
IBM POWER is a reduced instruction set computer (RISC) instruction set architecture (ISA) developed by IBM. The name is an acronym for Performance Optimization With Enhanced RISC. The ISA is used as base for high end microprocessors from IBM during the 1990s and were used in many of IBM's servers, minicomputers, workstations, and supercomputers. These processors are called POWER1 (RIOS-1, RIOS.9, RSC, RAD6000) and POWER2 (POWER2, POWER2+ and P2SC). The ISA evolved into the PowerPC instruction set architecture and was deprecated in 1998 when IBM introduced the POWER3 processor that was mainly a 32/64-bit PowerPC processor but included the IBM POWER architecture for backwards compatibility. The original IBM POWER architecture was then abandoned. PowerPC evolved into the third Power ISA in 2006. IBM continues to develop PowerPC microprocessor cores for use in their application-specific integrated circuit (ASIC) offerings. Many high volume applications embed PowerPC cores. History The 801 research project In 1974, IBM started a project with a design objective of creating a large telephone-switching network with a potential capacity to deal with at least 300 calls per second. It was projected that 20,000 machine instructions would be required to handle each call while maintaining a real-time response, so a processor with a performance of 12 MIPS was deemed necessary. This requirement was extremely ambitious for the time, but it was realised that much of the complexity of contemporary CPUs could be dispensed with, since this machine would need only to perform I/O, branches, add register-register, move data between registers and memory, and would have no need for special instructions to perform heavy arithmetic. This simple design philosophy, whereby each step of a complex operation is specified explicitly by one machine instruction, and all instructions are required to complete in the same constant time, would later come to be known as RISC. By 1975 the telephone switch project was canceled without a prototype. From the estimates from simulations produced in the project's first year, however, it looked as if the processor being designed for this project could be a very promising general-purpose processor, so work continued at Thomas J. Watson Research Center building #801, on the 801 project. 1982 Cheetah project For two years at the Watson Research Center, the superscalar limits of the 801 design were explored, such as the feasibility of implementing the design using multiple functional units to improve performance, similar to what had been done in the IBM System/360 Model 91 and the CDC 6600 (although the Model 91 had been based on a CISC design), to determine if a RISC machine could maintain multiple instructions per cycle, or what design changes need to be made to the 801 design to allow for multiple-execution-units. To increase performance, Cheetah had separate branch, fixed-point, and floating-point execution units. Many changes were made to the 801 design to allow for multiple-execution-units. Cheetah was originally planned to be manufactured using bipolar emitter-coupled logic (ECL) technology, but by 1984 complementary metal–oxide–semiconductor (CMOS) technology afforded an increase in the level of circuit integration while improving transistor-logic performance. The America project In 1985, research on a second-generation RISC architecture started at the IBM Thomas J. 
Watson Research Center, producing the "AMERICA architecture"; in 1986, IBM Austin started developing the RS/6000 series, based on that architecture. POWER In February 1990, the first computers from IBM to incorporate the POWER instruction set were called the "RISC System/6000" or RS/6000. These RS/6000 computers were divided into two classes, workstations and servers, and hence introduced as the POWERstation and POWERserver. The RS/6000 CPU had 2 configurations, called the "RIOS-1" and "RIOS.9" (or more commonly the "POWER1" CPU). A RIOS-1 configuration had a total of 10 discrete chips - an instruction cache chip, fixed-point chip, floating-point chip, 4 data cache chips, storage control chip, input/output chips, and a clock chip. The lower cost RIOS.9 configuration had 8 discrete chips - an instruction cache chip, fixed-point chip, floating-point chip, 2 data cache chips, storage control chip, input/output chip, and a clock chip. A single-chip implementation of RIOS, RSC (for "RISC Single Chip"), was developed for lower-end RS/6000's; the first machines using RSC were released in 1992. POWER2 IBM started the POWER2 processor effort as a successor to the POWER1 two years before the creation of the 1991 Apple/IBM/Motorola alliance in Austin, Texas. Despite being impacted by diversion of resources to jump start the Apple/IBM/Motorola effort, the POWER2 took five years from start to system shipment. By adding a second fixed-point unit, a second floating point unit, and other performance enhancements to the design, the POWER2 had leadership performance when it was announced in November 1993. New instructions were also added to the instruction set: Quad-word storage instructions. The quad-word load instruction moves two adjacent double-precision values into two adjacent floating-point registers. Hardware square root instruction. Floating-point to integer conversion instructions. To support the RS/6000 and RS/6000 SP2 product lines in 1996, IBM had its own design team implement a single-chip version of POWER2, the P2SC ("POWER2 Super Chip"), outside the Apple/IBM/Motorola alliance in IBM's most advanced and dense CMOS-6S process. P2SC combined all of the separate POWER2 instruction cache, fixed point, floating point, storage control, and data cache chips onto one huge die. At the time of its introduction, P2SC was the largest and highest transistor count processor in the industry. Despite the challenge of its size, complexity, and advanced CMOS process, the first tape-out version of the processor was able to be shipped, and it had leadership floating point performance at the time it was announced. P2SC was the processor used in the 1997 IBM Deep Blue chess playing supercomputer which beat chess grandmaster Garry Kasparov. With its twin sophisticated MAF floating point units and huge wide and low latency memory interfaces, P2SC was primarily targeted at engineering and scientific applications. P2SC was eventually succeeded by the POWER3, which included 64-bit, SMP capability, and a full transition to PowerPC in addition to P2SC's sophisticated twin MAF floating point units. The architecture The POWER design is descended directly from the 801's CPU, widely considered to be the first true RISC processor design. The 801 was used in a number of applications inside IBM hardware. At about the same time the PC/RT was being released, IBM started the America Project, to design the most powerful CPU on the market. 
They were interested primarily in fixing two problems in the 801 design: it required all instructions to complete in one clock cycle, which precluded floating-point instructions, and, although the decoder was pipelined as a side effect of these single-cycle operations, it did not exploit superscalar execution. Floating point became a focus for the America Project, and IBM was able to use new algorithms developed in the early 1980s that could support 64-bit double-precision multiplies and divides in a single cycle. The FPU portion of the design was separate from the instruction decoder and integer parts, allowing the decoder to send instructions to both the FPU and ALU (integer) execution units at the same time. IBM complemented this with a complex instruction decoder which could be fetching one instruction, decoding another, and sending one to the ALU and FPU at the same time, resulting in one of the first superscalar CPU designs in use. The system used 32 32-bit integer registers and another 32 64-bit floating-point registers, each set in its own unit. The branch unit also included a number of "private" registers for its own use, including the program counter. Another interesting feature of the architecture is a virtual address system which maps all addresses into a 52-bit space: each program works within its own 32-bit address space, but those 32-bit blocks are mapped into different regions of the shared 52-bit space, allowing programs to share memory through it. Appendix E of Book I: PowerPC User Instruction Set Architecture of the PowerPC Architecture Book, Version 2.02 describes the differences between the POWER and POWER2 instruction set architectures and the version of the PowerPC instruction set architecture implemented by the POWER5.
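A minimal sketch of that style of segmented translation follows, assuming the commonly described 32-bit POWER/PowerPC arrangement in which the top four bits of a 32-bit effective address select one of sixteen segment registers holding a 24-bit segment identifier, which is concatenated with the remaining 28 bits to form the 52-bit virtual address. The register contents and the example address below are invented for illustration.

```python
# Toy model of POWER-style segmented address translation: the top 4 bits of a
# 32-bit effective address pick one of 16 segment registers, whose 24-bit
# segment ID is concatenated with the low 28 bits to give a 52-bit virtual
# address. The segment register contents here are arbitrary example values.
SEGMENT_REGISTERS = [0x000000 + i for i in range(16)]   # 24-bit segment IDs
SEGMENT_REGISTERS[3] = 0xABCDEF                          # e.g. a shared segment

def effective_to_virtual(ea: int) -> int:
    assert 0 <= ea < 2**32
    segment_index = ea >> 28            # top 4 bits of the effective address
    offset = ea & 0x0FFFFFFF            # low 28 bits
    segment_id = SEGMENT_REGISTERS[segment_index] & 0xFFFFFF
    return (segment_id << 28) | offset  # 52-bit virtual address

if __name__ == "__main__":
    ea = 0x30001234                     # falls in segment 3
    print(hex(effective_to_virtual(ea)))  # 0xabcdef0001234
```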
Technology
Computer architecture concepts
null
30864598
https://en.wikipedia.org/wiki/Hematophagy
Hematophagy
Hematophagy (sometimes spelled haematophagy or hematophagia) is the practice by certain animals of feeding on blood (from the Greek words αἷμα "blood" and φαγεῖν "to eat"). Since blood is a fluid tissue rich in nutritious proteins and lipids that can be taken without great effort, hematophagy is a preferred form of feeding for many small animals, such as worms and arthropods. Some intestinal nematodes, such as Ancylostomatids, feed on blood extracted from the capillaries of the gut, and about 75 percent of all species of leeches (e.g., Hirudo medicinalis) are hematophagous. The spider Evarcha culicivora feeds indirectly on vertebrate blood by specializing on blood-filled female mosquitoes as their preferred prey. Some fish, such as lampreys and candirus; mammals, especially vampire bats; and birds, including the vampire finch, Hood mockingbird, Tristan thrush, and oxpeckers, also practise hematophagy. Mechanism and evolution Hematophagous animals have mouth parts and chemical agents for penetrating vascular structures in the skin of hosts, mostly of mammals, birds, and fish. This type of feeding is known as phlebotomy (from the Greek words, phleps "vein" and tomos "cutting"). Once phlebotomy is performed (in most insects by a specialized fine hollow "needle", the proboscis, which perforates skin and capillaries; in bats by sharp incisor teeth that act as a razor to cut the skin), blood is acquired either by sucking action directly from the veins or capillaries, from a pool of escaped blood, or by lapping (again, in bats). To overcome natural hemostasis (blood coagulation), vasoconstriction, inflammation, and pain sensation in the host, hematophagous animals have evolved chemical solutions, in their saliva for instance, that they pre-inject—and anesthesia and capillary dilation have evolved in some hematophagous species. Scientists have developed anticoagulant medicines from studying substances in the saliva of several hematophagous species, such as leeches (hirudin). Hematophagy is classified as either obligatory or facultative. Obligatory hematophagous animals cannot survive on any other food. Examples include Rhodnius prolixus, a South American assassin bug, and Cimex lectularius, the human bed bug. Facultative hematophages, meanwhile, acquire at least some portion of their nutrition from non-blood sources in at least one of the sexually mature forms. Examples of this include many mosquito species, such as Aedes aegypti, whose both males and females feed on pollen and fruit juice for survival, but the females require a blood meal to produce their eggs. Fly species such as Leptoconops torrens can also be facultative hematophages. In anautogenous species, the female can survive without blood but must consume blood in order to produce eggs (obligatory hematophages are by definition also anautogenous). As a feeding practice, hematophagy has evolved independently in a number of arthropod, annelid, nematode and mammalian taxa. For example, Diptera (insects with two wings, such as flies) have eleven families with hematophagous habits (more than half of the 19 hematophagous arthropod taxa). About 14,000 species of arthropods are hematophagous, even including some genera that were not previously thought to be, such as moths of the genus Calyptra. Hematophagy in insects, including mosquitoes, is thought to have arisen from phytophagous or entomophagous origins. 
Several complementary biological adaptations for locating the hosts (usually in the dark, as most hematophagous species are nocturnal and silent to avoid detection) have also evolved, such as special physical or chemical detectors for sweat components, CO2, heat, light, movement, etc. In addition to these biological adaptations that have evolved to help blood-feeding arthropods locate hosts, there is evidence that RNA from host species may also be taken up and have regulatory consequences in blood-feeding insects. A study on the yellow fever mosquito Aedes aegypti has shown that the human blood microRNA hsa-miR-21 is taken up during blood feeding and transported into the fat body tissues. Once in the fat body, it targets and regulates mosquito genes such as vitellogenin, which encodes a yolk protein used for egg production. Medical importance The phlebotomic action opens a channel for contamination of the host species with bacteria, viruses and blood-borne parasites contained in the hematophagous organism. Thus, many animal and human infectious diseases are transmitted by hematophagous species, such as the bubonic plague, Chagas disease, dengue fever, eastern equine encephalitis, filariasis, leishmaniasis, Lyme disease, malaria, rabies, sleeping sickness, St. Louis encephalitis, tularemia, typhus, Rocky Mountain spotted fever, West Nile fever, Zika fever, and many others. Insects and arachnids of medical importance for being hematophagous, at least in some species, include the sandfly, blackfly, tsetse fly, bedbug, assassin bug, mosquito, tick, louse, mite, midge, and flea. Hematophagous organisms have been used by physicians for beneficial purposes (hirudotherapy). Some doctors now use leeches to prevent the clotting of blood on some wounds following surgery or trauma. The anticoagulants in the laboratory-raised leeches' saliva keep fresh blood flowing to the site of an injury, preventing infection and increasing the chances of full recovery. In a recent study, a genetically engineered drug called desmoteplase, based on the saliva of Desmodus rotundus (a vampire bat), was shown to improve recovery in stroke patients. Human hematophagy Many human societies also drink blood or use it to manufacture foodstuffs and delicacies. Cow blood mixed with milk, for example, is a mainstay food of the African Maasai. Many places around the world eat blood sausage. Some societies, such as the Moche, had ritual hematophagy, as did the Scythians, a nomadic people of Eastern Europe, who drank the blood of the first enemy they killed in battle. Psychiatric cases of patients performing hematophagy also exist. Sucking or licking one's own blood from a wound to clean it is also a common human behavior, and in small enough quantities is not considered taboo. Finally, human vampirism has been a persistent object of literary and cultural attention.
Biology and health sciences
Ethology
Biology
30864636
https://en.wikipedia.org/wiki/Late%20Heavy%20Bombardment
Late Heavy Bombardment
The Late Heavy Bombardment (LHB), or lunar cataclysm, is a hypothesized astronomical event thought to have occurred approximately 4.1 to 3.8 billion years (Ga) ago, at a time corresponding to the Neohadean and Eoarchean eras on Earth. According to the hypothesis, during this interval, a disproportionately large number of asteroids and comets collided into the terrestrial planets and their natural satellites in the inner Solar System, including Mercury, Venus, Earth (and the Moon) and Mars. These came from both post-accretion and planetary instability-driven populations of impactors. Although it gained widespread credence, definitive evidence remains elusive. Evidence for the LHB derives from moon rock samples of Lunar craters brought back by the Apollo program astronauts. Isotopic dating showed that the rocks were last molten during impact events in a rather narrow interval of time, suggesting that a large proportion of craters were formed during this period. Several hypotheses attempt to explain this apparent spike in the flux of impactors in the inner Solar System, but no consensus yet exists. The Nice model, popular among planetary scientists, postulates that the giant planets underwent orbital migration, scattering objects from the asteroid belt, Kuiper belt, or both, into eccentric orbits and into the path of the terrestrial planets. Other researchers doubt the heavy bombardment, arguing for example that the apparent clustering of lunar impact-melt ages is a statistical artifact produced by sampling rocks scattered from a single large impact. A range of evidence suggests that there may instead have been a more extended period of lunar bombardment, lasting from approximately 4.2 billion years ago to 3.5 billion years ago. Evidence for a cataclysm The main piece of evidence for a lunar cataclysm comes from the radiometric ages of impact melt rocks that were collected during the Apollo missions. The majority of these impact melts are thought to have formed during the collision of asteroids or comets tens of kilometres across, forming impact craters hundreds of kilometres in diameter. The Apollo 15, 16, and 17 landing sites were chosen as a result of their proximity to the Imbrium, Nectaris, and Serenitatis basins, respectively. The apparent clustering of ages of these impact melts, between about 3.8 and 4.1 Ga, led investigators to postulate that those ages record an intense bombardment of the Moon. They named it the "lunar cataclysm" and proposed that it represented a dramatic increase in the rate of bombardment of the Moon around 3.9 Ga. If these impact melts were derived from these three basins, then not only did these three prominent impact basins form within a short interval of time, but so did many others based on stratigraphic grounds. At the time, the hypothesis was considered controversial. As more data has become available, particularly from lunar meteorites, this hypothesis, while still controversial, has become more popular. The lunar meteorites are thought to randomly sample the lunar surface, and at least some of these should have originated from regions far from the Apollo landing sites. Many of the feldspathic lunar meteorites probably originated from the lunar far side, and impact melts within these have recently been dated. Consistent with the cataclysm hypothesis, none of their ages was found to be older than about 3.9 Ga. Nevertheless, the ages do not "cluster" at this date, but span between 2.5 and 3.9 Ga. 
Dating of howardite, eucrite and diogenite (HED) meteorites and H chondrite meteorites originating from the asteroid belt reveals numerous ages from 3.4–4.1 Ga and an earlier peak at 4.5 Ga. The 3.4–4.1 Ga ages have been interpreted as representing an increase in impact velocities, as computer simulations using hydrocode reveal that the volume of impact melt increases 100–1,000 times as the impact velocity increases from the current asteroid belt average of 5 km/s to 10 km/s. Impact velocities above 10 km/s require very high inclinations or the large eccentricities of asteroids on planet-crossing orbits. Such objects are rare in the current asteroid belt, but the population would be significantly increased by the sweeping of resonances due to giant planet migration. Studies of the highland crater size distributions suggest that the same family of projectiles struck Mercury and the Moon during the Late Heavy Bombardment. If the decay of the late heavy bombardment on Mercury followed the same history as on the Moon, the youngest large basin discovered, Caloris, is comparable in age to the youngest large lunar basins, Orientale and Imbrium, and all of the plains units are older than 3 billion years. Criticisms of the cataclysm hypothesis While the cataclysm hypothesis has become more popular over the last fifty years, particularly among dynamicists who have identified possible causes for such a phenomenon, it is still controversial and based on debatable assumptions. Two criticisms are that (1) the "cluster" of impact ages could be an artifact of sampling a single basin's ejecta, and (2) the lack of impact melt rocks older than about 4.1 Ga could reflect all such samples having been pulverized, or their ages having been reset. The first criticism concerns the origin of the impact melt rocks that were sampled at the Apollo landing sites. While these impact melts have commonly been attributed to the closest basin, it has been argued that a large portion of them might instead derive from the Imbrium basin. The Imbrium impact basin is the youngest and largest of the multi-ring basins found on the central nearside of the Moon, and quantitative modeling shows that significant amounts of ejecta from this event should be present at all of the Apollo landing sites. According to this alternative hypothesis, the cluster of impact melt ages near 3.9 Ga simply reflects material being collected from a single impact event, not several. A further criticism argues that the age spike at 3.9 Ga identified in 40Ar/39Ar dating could also be produced by episodic early crust formation followed by partial 40Ar losses as the impact rate declined. A second criticism concerns the significance of the lack of impact melt rocks older than about 4.1 Ga. One hypothesis for this observation that does not involve a cataclysm is that old melt rocks did exist, but that their radiometric ages have all been reset by the continuous effects of impact cratering over the past 4 billion years. Furthermore, it is possible that these putative samples could all have been pulverized to such small sizes that it is impossible to obtain age determinations using standard radiometric methods. Scientists continue to study the bombardment history of the Moon in an attempt to clarify the history of the inner Solar System. Geological consequences on Earth If a cataclysmic cratering event truly occurred on the Moon, Earth would have been affected as well. 
Extrapolating lunar cratering rates to Earth at this time suggests that the following numbers of craters would have formed: 22,000 or more impact craters with diameters greater than 20 km, about 40 impact basins with diameters of roughly 1,000 km, and several impact basins with diameters of roughly 5,000 km. Before the formulation of the LHB hypothesis, geologists generally assumed that Earth remained molten until about 3.8 Ga. This date could be found in many of the oldest-known rocks from around the world, and appeared to represent a strong "cutoff point" beyond which older rocks could not be found. These dates remained fairly constant even across various dating methods, including the system considered the most accurate and least affected by environment, uranium–lead dating of zircons. As no older rocks could be found, it was generally assumed that Earth had remained molten until this date, which defined the boundary between the earlier Hadean and later Archean eons. However, in 1999, the oldest known rock on Earth was dated to be 4.031 ± 0.003 billion years old; it is part of the Acasta Gneiss of the Slave Craton in northwestern Canada. Older rocks could also be found in the form of asteroid fragments that fall to Earth as meteorites. Like the rocks on Earth, asteroids also show a strong cutoff point, at about 4.6 Ga, which is assumed to be the time when the first solids formed in the protoplanetary disk around the then-young Sun. The Hadean, then, was the period of time between the formation of these early rocks in space and the eventual solidification of Earth's crust, some 700 million years later. This time would include the accretion of the planets from the disk and the slow cooling of Earth into a solid body as the gravitational potential energy of accretion was released. Later calculations showed that the rate of collapse and cooling depends on the size of the rocky body. Scaling this rate to an object of Earth mass suggested very rapid cooling, requiring only 100 million years. The difference between measurement and theory presented a conundrum at the time. The LHB offers a potential explanation for this anomaly. Under this model, the rocks dating to 3.8 Ga solidified only after much of the crust was destroyed by the LHB. Collectively, the Acasta Gneiss in the North American cratonic shield and the gneisses within the Jack Hills portion of the Narryer Gneiss Terrane in Western Australia are the oldest continental fragments on Earth, yet they appear to post-date the LHB. The oldest mineral yet dated on Earth, a 4.404 Ga zircon from Jack Hills, predates this event, but it is likely a fragment of crust left over from before the LHB, contained within a much younger (~3.8 Ga old) rock. The Jack Hills zircon led to an evolution in understanding of the Hadean eon. Older references generally depict Hadean Earth as having a molten surface with prominent volcanoes. The name "Hadean" itself refers to the "hellish" conditions assumed on Earth for the time, from the Greek Hades. Zircon dating suggested, albeit controversially, that the Hadean surface was solid, temperate, and covered by acidic oceans. This picture derives from the presence of particular isotopic ratios that suggest the action of water-based chemistry at some time before the formation of the oldest rocks (see Cool early Earth). 
Of particular interest, Manfred Schidlowski argued in 1979 that the carbon isotopic ratios of some sedimentary rocks found in Greenland were a relic of organic matter: the ratio of carbon-12 to carbon-13 was unusually high, normally a sign of "processing" by life. There was much debate over the precise dating of the rocks, with Schidlowski suggesting they were about 3.8 Ga old, and others suggesting a more "modest" 3.6 Ga. In either case it was a very short time for abiogenesis to have taken place, and if Schidlowski was correct, arguably too short a time. The Late Heavy Bombardment and the "re-melting" of the crust that it suggests provide a timeline under which this would be possible: life either formed immediately after the Late Heavy Bombardment, or more likely survived it, having arisen earlier during the Hadean. A 2002 study suggests that the rocks Schidlowski found are indeed from the older end of the possible age range, at about 3.85 Ga, suggesting the latter possibility is the most likely answer. Studies from 2005, 2006 and 2009 have found no evidence for the isotopically light carbon ratios that were the basis for the original claims of early Hadean life. However, a similar study of Jack Hills rocks from 2008 shows traces of the same sort of potential organic indicators. Thorsten Geisler of the Institute for Mineralogy at the University of Münster studied traces of carbon trapped in small pieces of diamond and graphite within zircons dating to 4.25 Ga. Three-dimensional computer models developed in May 2009 by a team at the University of Colorado at Boulder postulate that much of Earth's crust, and the microbes living in it, could have survived the bombardment. Their models suggest that although the surface of Earth would have been sterilized, hydrothermal vents below Earth's surface could have incubated life by providing a sanctuary for thermophile microbes. In April 2014, scientists reported finding evidence of the largest terrestrial meteor impact event to date near the Barberton Greenstone Belt. They estimated the impact occurred about 3.26 billion years ago and that the impactor was tens of kilometres across. The crater from this event, if it still exists, has not yet been found. Possible causes Giant-planet migration In the Nice model, the Late Heavy Bombardment is the result of a dynamical instability in the outer Solar System. The original Nice model simulations by Gomes et al. began with the Solar System's giant planets in a tight orbital configuration surrounded by a rich trans-Neptunian belt. Objects from this belt stray into planet-crossing orbits, causing the orbits of the planets to migrate over several hundred million years. Jupiter and Saturn's orbits drift apart slowly until they cross a 2:1 orbital resonance, causing the eccentricities of their orbits to increase. The orbits of the planets become unstable, and Uranus and Neptune are scattered onto wider orbits that disrupt the outer belt, causing a bombardment of comets as they enter planet-crossing orbits. Interactions between the objects and the planets also drive a faster migration of Jupiter and Saturn's orbits. This migration causes resonances to sweep through the asteroid belt, increasing the eccentricities of many asteroids until they enter the inner Solar System and impact the terrestrial planets. The Nice model has undergone some modification since its initial publication. The giant planets now begin in a multi-resonant configuration due to an early gas-driven migration through the protoplanetary disk. 
Interactions with the trans-Neptunian belt allow their escape from the resonances after several hundred million years. The encounters between planets that follow include one between an ice giant and Saturn that propels the ice giant onto a Jupiter-crossing orbit followed by an encounter with Jupiter that drives the ice giant outward. This jumping-Jupiter scenario quickly increases the separation of Jupiter and Saturn, limiting the effects of resonance sweeping on the asteroids and the terrestrial planets. While this is required to preserve the low eccentricities of the terrestrial planets and avoid leaving the asteroid belt with too many high-eccentricity asteroids, it also reduces the fraction of asteroids removed from the main asteroid belt, leaving a now-nearly-depleted inner band of asteroids as the primary source of the impactors of the LHB. The ice giant is often ejected following its encounter with Jupiter, leading some to propose that the Solar System began with five giant planets. Recent works, however, have found that impacts from this inner asteroid belt would be insufficient to explain the formation of ancient impact spherule beds and the lunar basins, and that the asteroid belt was probably not the source of the Late Heavy Bombardment. Late formation of Uranus and Neptune According to one planetesimal simulation of the establishment of the planetary system, the outermost planets Uranus and Neptune formed very slowly, over a period of several billion years. Harold Levison and his team have also suggested that the relatively low density of material in the outer Solar System during planet formation would have greatly slowed their accretion. The late formation of these planets has therefore been suggested as a different reason for the LHB. However, recent calculations of gas-flows combined with planetesimal runaway growth in the outer Solar System imply that Jovian planets formed extremely rapidly, on the order of 10 My, which does not support this explanation for the LHB. Planet V hypothesis The Planet V hypothesis posits that a fifth terrestrial planet caused the Late Heavy Bombardment when its meta-stable orbit entered the inner asteroid belt. The hypothetical fifth terrestrial planet, Planet V, had a mass less than half of Mars and originally orbited between Mars and the asteroid belt. Planet V's orbit became unstable due to perturbations from the other inner planets causing it to intersect the inner asteroid belt. After close encounters with Planet V, many asteroids entered Earth-crossing orbits, causing the Late Heavy Bombardment. Planet V was ultimately lost, likely plunging into the Sun. In numerical simulations, an uneven distribution of asteroids, with the asteroids heavily concentrated toward the inner asteroid belt, has been shown to be necessary to produce the LHB via this mechanism. An alternate version of this hypothesis in which the lunar impactors are debris resulting from Planet V impacting Mars, forming the Borealis Basin, has been proposed to explain a low number of giant lunar basins relative to craters and a lack of evidence of cometary impactors. Disruption of Mars-crossing asteroid A hypothesis proposed by Matija Ćuk posits that the last few basin-forming impacts were the result of the collisional disruption of a large Mars-crossing asteroid. This Vesta-sized asteroid was a remnant of a population which initially was much larger than the current main asteroid belt. 
Most of the pre-Imbrium impacts would have been due to these Mars-crossing objects, with the early bombardment extending until 4.1 billion years ago. A period without many basin-forming impacts then followed, during which the lunar magnetic field decayed. Then, roughly 3.9 billion years ago, a catastrophic impact disrupted the Vesta-sized asteroid, significantly increasing the population of Mars-crossing objects. Many of these objects then evolved onto Earth-crossing orbits, producing a spike in the lunar impact rate during which the last few lunar impact basins were formed. Ćuk points to the weak or absent residual magnetism of the last few basins and a change in the size–frequency distribution of craters which formed during this late bombardment as evidence supporting this hypothesis. The timing and the cause of the change in the size–frequency distribution of craters are controversial. Other potential sources A number of other possible sources of the Late Heavy Bombardment have been investigated. Among these are additional Earth satellites orbiting independently or as lunar trojans, planetesimals left over from the formation of the terrestrial planets, Earth or Venus co-orbitals, and the breakup of a large main belt asteroid. Additional Earth satellites on independent orbits were shown to be quickly captured into resonances during the Moon's early tidally driven orbital expansion and were lost or destroyed within a few million years. Lunar trojans were found to be destabilized within 100 million years by a solar resonance when the Moon reached 27 Earth radii. Planetesimals left over from the formation of the terrestrial planets were shown to be depleted too rapidly due to collisions and ejections to form the last lunar basins. The long-term stability of primordial Earth or Venus co-orbitals (trojans or objects with horseshoe orbits), in conjunction with the lack of current observations, indicates that they were unlikely to have been common enough to contribute to the LHB. Producing the LHB from the collisional disruption of a main belt asteroid was found to require, at minimum, a 1,000–1,500 km parent body with the most favorable initial conditions. Debris produced by collisions among inner planets, now lost, has also been proposed as a source of the LHB. Exosystem with possible Late Heavy Bombardment Evidence has been found for Late Heavy Bombardment-like conditions around the star Eta Corvi.
Physical sciences
Events
Earth science
30865437
https://en.wikipedia.org/wiki/Ranch
Ranch
A ranch (from Mexican Spanish rancho) is an area of land, including various structures, given primarily to ranching, the practice of raising grazing livestock such as cattle and sheep. It is a subtype of farm. These terms are most often applied to livestock-raising operations in Mexico, the Western United States and Western Canada, though there are ranches in other areas. People who own or operate a ranch are called ranchers, cattlemen, or stockgrowers. Ranching is also a method used to raise less common livestock such as horses, elk, American bison, ostrich, emu, and alpaca. Ranches generally consist of large areas, but may be of nearly any size. In the western United States, many ranches are a combination of privately owned land supplemented by grazing leases on land under the control of the federal Bureau of Land Management or the United States Forest Service. If the ranch includes arable or irrigated land, the ranch may also engage in a limited amount of farming, raising crops for feeding the animals, such as hay and feed grains. Ranches that cater exclusively to tourists are called guest ranches or, colloquially, "dude ranches". Most working ranches do not cater to guests, though they may allow private hunters or outfitters onto their property to hunt native wildlife. However, in recent years, a few struggling smaller operations have added some dude ranch features such as horseback rides, cattle drives, and guided hunting to bring in additional income. Ranching is part of the iconography of the "Wild West" as seen in Western movies and rodeos. Etymology The term ranch comes from the Spanish term rancho, itself from the term rancharse, which means "to get ready, to settle in a place, to pitch camp", itself from the military French term se ranger (to arrange oneself, to tidy up), from the Frankish hring, which means ring or circle. It was originally applied, in the common speech of the 16th century, to the provisional houses of the indigenous peoples of the Americas. The term evolved differently throughout the Spanish-speaking world. In Mexico, it evolved to mean a cattle farm, station or estate, a pasturing land or agricultural settlement where cattle are raised. It originally referred to a hamlet or village where cattle were raised and the land was sown, and later to a small independent cattle farm, or to a cattle station, an area of land for cattle raising that is dependent on a hacienda, a large cattle estate. In Spain it retained its military origin, being defined as the group of people, typically soldiers, who eat together in a circle; a mess hall. "Rancho" in Spain also denotes the "food prepared for several people who eat in a circle and from the same pot", and it was likewise used for a family gathering held to discuss a particular piece of business. "Ranchero", in turn, is defined as the "steward of a mess", the person in charge of preparing the food for the "rancho" or mess hall. In South America, specifically in Argentina, Uruguay, Chile, Brazil, Bolivia and Paraguay, the term is applied to a modest, humble rural home or dwelling, a cottage; while in Venezuela it is an improvised, illegal dwelling, generally poorly built or not meeting basic habitability requirements; a shanty or slum house. Ranch occupations The person who owns and manages the operation of a ranch is usually called a rancher, but the terms cattleman, stockgrower, or stockman are also sometimes used. If this individual in charge of overall management is an employee of the actual owner, the term foreman or ranch foreman is used. 
A rancher who primarily raises young stock is sometimes called a cow-calf operator or a cow-calf man. This person is usually the owner, though in some cases, particularly where there is absentee ownership, it is the ranch manager or ranch foreman. The people who are employees of the rancher and involved in handling livestock are called a number of terms, including cowhand, ranch hand, and cowboy. People exclusively involved with handling horses are sometimes called wranglers. Origins of ranching The most remote origins of a form of ranching date back to the 1100s in Spain, where livestock raising had developed mainly around sheep and, by the 1200s, the establishment of a system of transhumance regulated by the powerful Mesta, an association of sheep barons and shepherds created by King Alfonso X of Castile. Although other livestock species, like cattle, existed, their importance, value and herd sizes were smaller than those of sheep, even before Roman times. The importance of sheep was such that, even in the accounting books for payment of livestock taxes, the "serviciadores" (servicemen) reduced cows to sheep for accounting purposes, typically six sheep for every cow or horse; because there were so few cows and horses, they did not detract from the value of the figures. Sheep were also used as currency, and currency itself, like the silver solidus, had the equivalency of one sheep. Sheep and sheepherding had such a cultural and economic impact on Spain that the country was known as Un País de Pastores (a country of shepherds). During the Reconquista, the lands being reclaimed by the Christians had been depopulated. Because these lands were still prone to raids and attacks by the Muslims even after being reconquered, agriculture was out of the question. Instead, livestock raising, predominantly of sheep, became the solution to repopulate the land, as the animals could easily be moved to a safer place in case of an attack. This led to the emergence of the "pastor a caballo", or horse-mounted shepherd. Called caballeros villanos (knights-villein) or pastores guerreros (warrior shepherds), these were the highest class of peasants and were allowed to have a horse. On horseback, they could defend the frontier from attacks, easily herd the animals to a safer place, and carry out their own raids on Muslim lands. As for the land itself, it was organized and highly regulated into various types due to a scarcity of pasturelands. There were pastos comunales (communal lands), private lands, baldíos (vacant wooded areas) and dehesas, or fenced lands that could be privately owned. All these pasturelands could be used for a fee or tribute, unless used by the surrounding villagers in the case of communal lands, or could be rented to livestock owners. Most Spanish ganaderos (livestock owners) did not own their own land, having to rent or lease it by paying tribute or rent. Most pasturelands had restrictions on the number of livestock that could enter, or even the species, as in the dehesas boyales, or dehesas used only for oxen. A montazgo, a tribute or tax for the transit of livestock through villages, towns, lands or mountains, was also paid. History in North America The origins of what we know today as ranching in North America date back to the 16th century, when the Spaniards introduced cattle and horses to Mexico. 
Livestock raising had diverged in Mexico from what it had been in Spain due to an overabundance of land and, because of the rapid multiplication of livestock, an overabundance of cattle and horses. In a letter to the King of Spain in 1544, Cristóbal de Benavente, prosecutor of Mexico (fiscal de audiencia), wrote that livestock of all species were multiplying rapidly, almost doubling every 15 months. Although all livestock species took root and multiplied rapidly, there was a preponderance of cattle. According to the Spanish army captain Bernardo de Vargas Machuca, of all the kingdoms, Mexico was where cattle were most abundant. Cattle had multiplied so much that, in the Sotavento region of Veracruz alone, they had quintupled to around 2.5 million head in 1630 from half a million in 1570. The Franciscan friar Antonio de Ciudad Real, who accompanied friar Alonso de Ponce to New Spain in 1584, argued that cattle were so abundant in the Province of Mexico because they were easier to produce and raise there, at less cost and with less work: pasturelands were abundant, the climate was temperate, and there were no wolves or other predators to prey upon them as in Spain. Cattle multiplied so much that they seemed native to the land, and many men were able to brand more than 30,000 calves a year. Because the territory was much greater and land was plentiful, the Spanish laws applied back in Spain regarding pasturelands and land ownership were never applied. Initially, a generalized common grazing regime was established, in which all vacant land was free and open to all, as was the stubble after the harvest. This regime allowed cattle to multiply in a semi-wild state, with minimal intervention from man, diverging, once again, from Spanish tradition. However, over time, the authorities were forced to recognize a somewhat stable occupation of the land by the first cattle barons. The first sites or sitios intended for cattle and other livestock were called Estancias (stays, stations), and were given in the form of grants upon verification of the occupation or "purchase" made from the Indians. These grants did not confer ownership, but rather the usufruct of the land, and were revocable if the beneficiary was absent. Thus, the cattle barons reserved the exclusive use of the land without actually owning it. There were two types of estancias: estancias de ganado mayor (cattle and horse estancias) and estancias de ganado menor (sheep and goat estancias). Both types had to be square in shape, oriented from east to west. Cattle estancias had to be one league, or 5,000 varas, in length on each side, about 1,750 hectares or approximately 4,400 acres. Sheep and goat estancias had to measure 3,333 varas in length, about 780 hectares or approximately 2,000 acres. The caballería, the piece of land allotted to a caballero and his mount, while not defined as an estancia, had to be 43 hectares, or approximately 110 acres. The estancias de labor were the ones that combined livestock raising with agriculture. However, the estancias far exceeded the established limits, since fencing the land was prohibited (unlike in Spain), allowing the cattle to graze freely in the intermediate spaces and thus allowing the cattle barons to annex the land. Most cattle barons possessed a set of estancias situated side by side, encompassing a vast area. Estancia was also the name of the houses or "cottages" where the vaqueros (cowherds) would gather. 
By 1554, there were 60 estancias de ganado mayor in the Valley of Toluca in central Mexico, with more than 150,000 head of cattle and horses. It is estimated that in central Mexico alone there were around 1.3 million head of cattle by 1620. Between 1550 and 1619, 103 cattle estancias (more than 444,789 acres), 118 sheep estancias (approximately 200,000 acres), 42 mare estancias (186,000 acres) and 130 caballerías (approximately 14,000 acres) were granted in the Huasteca region of San Luis Potosí. The earliest cattle estancias were located in the highlands of Central Mexico and in the lowlands along the Gulf of Mexico. Because the lands in central Mexico were beginning to be insufficient, the cattle barons were forced to relocate. According to the Franciscan friar Juan de Torquemada, cattle barons began to move operations north of Central Mexico, to the valleys and lands stretching for more than 200 leagues (500 miles), from Querétaro in the Bajío region, passing through Zacatecas, to the valley of Guadiana (Durango). By 1582 there were more than 100,000 head of cattle, 200,000 sheep and 10,000 horses grazing in the San Juan del Río valley in Querétaro. In 1576 there were around 30,000 head of cattle in Nueva Vizcaya, and by the end of the 18th century, in the Bolsón de Mapimí alone, there were 325,000 head of cattle, 230,000 horses, 49,000 mules, 7,000 donkeys, 2 million sheep and 250,000 goats. In the early 17th century, prior to the Tepehuán uprising of 1616, there were more than 200,000 head of cattle and horses in the vicinity of the city of Durango, capital of Nueva Vizcaya, from where thousands of steers were driven to Mexico City. By the late 18th century and early 19th century, there were more than 5 million head of cattle in the province of Xalisco in the Intendancy of Guadalajara, producing between 300,000 and 350,000 calves a year. From both regions, Guadalajara and Durango, more than 50,000 head of cattle were driven out each year in the 18th century to the cattle markets of Mexico City, Puebla and surrounding areas of New Spain. The province of Sonora had 121,000 head of cattle in 1783, and the province of California had 68,000 head of cattle and 2,187 horses in 1802, up from 25,000 cattle in 1792. Prior to the establishment of the rancho as a cattle farm, the term seems to have been used to refer to provisional houses, like those of the indigenous people, or a camping site. Similarly, the term "estancia" appears to have been used originally to denote a point where herdsmen and their herds finally came to rest, or, as the Spanish-Mexican horseman and historian Don Juan Suárez de Peralta described it in 1580, "the houses where the vaqueros gather or assemble, where they have corrals to enclose some cattle to brand and mark." The rancho under the Mexican definition, as we know it today, would emerge sometime in the 17th century, being defined as: "A small hacienda, with a small amount of land for cultivation, a small workforce, and a proportionate amount of tools and equipment; different from the estancia or big hacienda which has more land, a bigger workforce, more oxen, and more tools and equipment." This definition from 1687 shows that the terms estancia and hacienda were synonymous; the term estancia apparently began to fall into disuse in the country, being replaced by the term hacienda, sometime in the early 18th century. 
The French historian François Chevalier states that the terms estancia and caballería were gradually divested of their original meanings and ultimately restricted to units of measurement, in favor of the term hacienda, which had become popular. Ultimately, according to Chevalier, a hacienda was just a combination of cattle estancias and caballerías into one huge rural estate. Towards the 19th century, ranchos were either small independent cattle farms or were dependent on a hacienda. Both haciendas and ranchos were divided according to type. In the case of haciendas, there were two types, the "hacienda de beneficio" and the "hacienda de campo". The "haciendas de beneficio" were mining operations, typically silver. The "haciendas de campo" were the landed estates, and were divided into two types: the hacienda de labor (agricultural estate) and the hacienda de ganado (livestock estate); the latter was divided into two types, the hacienda de ganado mayor (cattle estate) and the hacienda de ganado menor (sheep and goat estate). Ranchos were either "de ganado mayor" (cattle), "de caballada o mulada" (horses or mules), or "de ganado menor" (sheep and goats). The inhabitants of haciendas and ranchos of the highlands and interior of the country were called rancheros, and were tenants or worked for the landowner; rancheros who took care of the livestock were vaqueros, while those who lived and worked as vaqueros in the haciendas of Veracruz, in the lowlands, were called Jarochos. The largest hacienda/ranch in the world during colonial times was the Sanchez Navarro estate, with more than 16 million acres. The hacienda "San Ignacio del Buey", owned by the friar Don Juan Caballero in the Huasteca region of San Luis Potosi, had, at its height, 600,000 hectares or 1.5 million acres. The hacienda "San Juan Evangelista del Mezquite", owned by Felipe de Barragán in the same region, was 450,000 hectares or 1.2 million acres at its height in the 18th century. One of the largest cattle barons in 16th-century Mexico was Don Diego de Ibarra, governor of Nueva Vizcaya, who in the year 1586 had branded more than 33,000 calves at his Trujillo hacienda in Zacatecas, where, at the time of his death in 1600, he owned more than 130,000 head of cattle and more than 4,000 horses. His successor as governor, Don Rodrigo del Rio de la Loza, had branded more than 42,000 calves at his Poanas hacienda that same year. The wealthiest, most important and renowned hacienda was the "Jaral de Berrio", owned by the Count and Marquis Juan Nepomuceno de Moncada y Berrio in Guanajuato, considered the wealthiest man in Mexico in the 1830s and possibly the largest landowner in the world at the time. His vast landholdings stretched for more than 200 miles from Guanajuato to Zacatecas. His vast wealth consisted, among other things, of more than 3 million head of livestock, including cattle, horses, sheep and goats. El Jaral was such a large and influential hacienda that songs, poems and proverbs were written about it. The horses and fighting bulls of El Jaral de Berrio were considered the finest and most renowned in all of New Spain-Mexico, which led to the famous Mexican ranchero proverb: Pa' los Toros del Jaral los Caballos de allá mesmo (For the Bulls of Jaral the Horses from there too). 
The largest hacienda/ranch in the world prior to the Mexican Revolution of 1910 was the Terrazas family estate, headed by Don Luis Terrazas in the state of Chihuahua, at more than 8 million acres (some sources say 15 million acres), stretching for more than 160 miles north to south and 200 miles east to west. At its height in the early 1900s, Terrazas owned more than 1 million head of cattle, 700,000 sheep, and 200,000 horses. The estate employed more than 2,000 workers, of whom 1,000 were vaqueros. It was the only ranch in the world at the time that had its own slaughtering and packing plant. United States As settlers from the United States moved west, they brought cattle breeds developed on the east coast and in Europe along with them, and adapted their management to the drier lands of the west by borrowing key elements of the Spanish vaquero culture. There were, however, already cattle on the eastern seaboard. Deep Hollow Ranch, east of New York City in Montauk, New York, claims to be the first ranch in the United States, having continuously operated since 1658. The ranch makes the somewhat debatable claim of having the oldest cattle operation in what today is the United States, though cattle had been run in the area since European settlers purchased land from the Indian people of the area in 1643. Although there were substantial numbers of cattle on Long Island, as well as the need to herd them to and from common grazing lands on a seasonal basis, the cattle handlers actually lived in houses built on the pasture grounds, and cattle were ear-marked for identification rather than being branded. The only actual "cattle drives" held on Long Island consisted of one drive in 1776, when the island's cattle were moved in a failed attempt to prevent them from being captured during the Revolutionary War, and three or four drives in the late 1930s, when area cattle were herded down Montauk Highway to pasture ground near Deep Hollow Ranch. The open range The prairie and desert lands of what today is Mexico and the western United States were well suited to "open range" grazing. For example, American bison had been a mainstay of the diet for the Native Americans in the Great Plains for centuries. Likewise, cattle and other livestock were simply turned loose in the spring after their young were born and allowed to roam with little supervision and no fences, then rounded up in the fall, with the mature animals driven to market and the breeding stock brought close to the ranch headquarters for greater protection in the winter. The use of livestock branding allowed the cattle owned by different ranchers to be identified and sorted. Beginning with the settlement of Texas in the 1840s, and expansion both north and west from that time, through the Civil War and into the 1880s, ranching dominated western economic activity. Along with ranchers came the need for agricultural crops to feed both humans and livestock, and hence many farmers also came west along with ranchers. Many operations were "diversified", with both ranching and farming activities taking place. With the Homestead Act of 1862, more settlers came west to set up farms. This created some conflict, as increasing numbers of farmers needed to fence off fields to prevent cattle and sheep from eating their crops. Barbed wire, invented in 1874, gradually made inroads in fencing off privately owned land, especially for homesteads. There was some reduction of land on the Great Plains open to grazing. 
End of the open range The end of the open range was not brought about by a reduction in land due to crop farming, but by overgrazing. Cattle stocked on the open range created a tragedy of the commons, as each rancher sought increased economic benefit by grazing too many animals on public lands that "nobody" owned. Because cattle are not native to the region, the grazing patterns of their ever-increasing numbers slowly reduced the quality of the rangeland, in spite of the simultaneous massive slaughter of American bison that occurred. The winter of 1886–87 was one of the most severe on record, and livestock that were already stressed by reduced grazing died by the thousands. Many large cattle operations went bankrupt, and others suffered severe financial losses. Thus, after this time, ranchers also began to fence off their land and negotiate individual grazing leases with the American government so that they could keep better control of the pasture land available to their own animals. Ranching in Hawaii Ranching in Hawaii developed independently of that in the continental United States. In colonial times, Capt. George Vancouver gave several head of cattle to the Hawaiian king, Pai`ea Kamehameha, monarch of the Hawaiian Kingdom, and by the early 19th century they had multiplied considerably, to the point that they were wreaking havoc throughout the countryside. About 1812, John Parker, a sailor who had jumped ship and settled in the islands, received permission from Kamehameha to capture the wild cattle and develop a beef industry. The Hawaiian style of ranching originally included capturing wild cattle by driving them into pits dug in the forest floor. Once tamed somewhat by hunger and thirst, they were hauled out up a steep ramp, tied by their horns to the horns of a tame, older steer (or ox), and taken to fenced-in areas. The industry grew slowly under the reign of Kamehameha's son Liholiho (Kamehameha II). When Liholiho's brother, Kauikeaouli (Kamehameha III), visited California, then still a part of Mexico, he was impressed with the skill of the Mexican vaqueros. In 1832, he invited several to Hawaii to teach the Hawaiian people how to work cattle. The Hawaiian cowboy came to be called the paniolo, a Hawaiianized pronunciation of español. Even today, the traditional Hawaiian saddle and many other tools of the ranching trade have a distinctly Mexican look, and many Hawaiian ranching families still carry the surnames of vaqueros who made Hawaii their home. Ranching in South America In Argentina and Uruguay, ranches are known as estancias, and in Brazil they are called fazendas. In much of South America, including Ecuador and Colombia, the term hacienda or finca may be used. Ranchero or rancho are also generic terms used throughout tropical Latin America. In the colonial period, the lands stretching from the pampas regions of South America all the way to the Minas Gerais state in Brazil, including the semi-arid pampas of Argentina and the south of Brazil, were often well suited to ranching, and a tradition developed that largely paralleled that of Mexico and the United States. The gaucho culture of Argentina, Brazil and Uruguay is among the cattle-ranching traditions born during the period. However, in the 20th century, cattle raising expanded into less-suitable areas of the Pantanal. 
Particularly in Brazil, the 20th century marked the rapid growth of deforestation, as rain forest lands were cleared by slash-and-burn methods that allowed grass to grow for livestock, but also led to the depletion of the land within only a few years. Many of the indigenous peoples of the rain forest opposed this form of cattle ranching and protested against the forest being burnt down to set up grazing operations and farms. This conflict is still a concern in the region today. Ranches outside the Americas In Spain, where the origins of ranching can be traced, there are ganaderías operating on dehesa-type land, where fighting bulls are raised. However, ranch-type properties are not seen to any significant degree in the rest of western Europe, where there is far less land area and sufficient rainfall allows the raising of cattle on much smaller farms. In Australia, a rangeland property is a station (originally in the sense of a place where stock were temporarily stationed). In almost all cases, these are either cattle stations or sheep stations. The largest cattle stations in the world are located in Australia's dry outback rangelands. Owners of these stations are usually known as graziers or pastoralists, especially if they reside on the property. Employees are generally known as stockmen/stockwomen, jackaroos/jillaroos, and ringers (rather than cowboys). Some Australian cattle stations are larger than 10,000 km2, the greatest being Anna Creek Station, which measures 23,677 km2 in area (approximately eight times the size of the largest US ranch). Anna Creek is owned by S Kidman & Co. The equivalent terms in New Zealand are run and station. In South Africa, similar extensive holdings are usually known as a farm (occasionally also ranch) in South African English and plaas in Afrikaans.
Technology
Buildings and infrastructure
null
30865446
https://en.wikipedia.org/wiki/Sapporo%20Municipal%20Subway
Sapporo Municipal Subway
The Sapporo Municipal Subway is a mostly underground rubber-tyred rapid transit system in Sapporo, Hokkaido, Japan. Operated by the Sapporo City Transportation Bureau, it is the only subway system on the island of Hokkaido. Lines The system consists of three lines: the green Namboku Line (North–south line), the orange Tōzai Line (East–west line), and the blue Tōhō Line (North East Line). The first, the Namboku Line, was opened in 1971 prior to the 1972 Winter Olympics. The Sapporo City Subway system operates out of two main hubs: Sapporo Station and Odori Station. Most areas of the city are within a reasonable walking distance or short bus ride from one of the subway stations. The three lines all connect at Odori Station. The Namboku and Tōhō lines connect with the JR Hokkaido main lines at Sapporo Station. At Odori and Susukino stations, the subway connects to the streetcar (tram) above. The system has a total length of about 48 km, with 46 stations. Except for the section of the Namboku Line south of Hiragishi Station, tracks and stations are all underground. The aforementioned above-ground section is entirely covered, including stations, depot access tracks, and the depot south of Jieitai-Mae Station. Technology All lines of the subway use rubber-tired trains that travel on two flat roll ways, guided by a single central rail. This system is unique among subways in Japan and the rest of the world; while other rubber-tired metro networks, including smaller automated guideway transit lines such as the Port Liner, use guide bars, the Sapporo system does not because the central rail makes them superfluous (similar to some rubber-tyred trams, such as the Translohr and Bombardier Guided Light Transit). This rubber-tired system, combined with the heavy snowfall that Sapporo gets during winter, means that the system must be fully enclosed (including the southern elevated segment of the Namboku Line); as a result, rolling stock cannot be fitted with air conditioning, as it would otherwise trap hot air in the tunnels. There are differences between the technology used on the older Namboku Line and the newer Tōzai and Tōhō Lines. The Namboku Line uses a T-shaped guide rail, double tires, and third rail power collection, while the Tōzai and Tōhō Lines use an I-shaped guide rail, single tires, and overhead line power collection. The surface of the roll ways is constructed of resin (on the entirety of the Namboku Line and the central section of the Tōzai Line) and steel (on the outer sections of the Tōzai Line and the entirety of the Tōhō Line). Rolling stock The current rolling stock comprises: on the Namboku Line, the 5000 series (6-car formation with 4 doors per side, since 1997); on the Tōzai Line, the 8000 series (7-car formation with 3 doors per side, since 1998); and on the Tōhō Line, the 9000 series (4-car formation with 3 doors per side, since May 2015). Former rolling stock includes: on the Namboku Line, the 1000/2000 series (2/4/6/8-car formation with 2 doors per side, from 1971 until 1999) and the 3000 series (8-car formation with 2 doors per side, from 1978 until 2012); on the Tōzai Line, the 6000 series (7-car formation with 3 doors per side, from 1976 until 2008); and on the Tōhō Line, the 7000 series (4-car formation with 3 doors per side, from 1988 until 2016). Fares Ticket prices range from 210 yen to 380 yen, depending on the distance travelled. All stations accept SAPICA, a rechargeable IC card which can be used as a fare card for the subway. Kitaca, a contactless smart card issued by JR Hokkaido, is also usable on the Sapporo Municipal Subway, as are IC cards that are part of the Nationwide Mutual Usage Service, e.g. 
Suica and PASMO. However, this compatibility is unidirectional; SAPICA cannot be used on other rail networks. Day passes and discount passes can be purchased at ticket vending machines in stations. Prior to its discontinuation on March 31, 2015, prepaid "With You" magnetic cards could be used for the subway, streetcar and regular city routes offered by JR Hokkaido Bus, Hokkaido Chuo Bus, and Jotetsu Bus. Magnetic card functionality was superseded by SAPICA. One-day passes offer unlimited rides on the subway, streetcar, and regular city routes offered by the Chuo, Jotetsu, and JR Hokkaido Buses (excluding some suburban areas) on the day of purchase. A subway one-day card, for use only on the subway, is also available for 830 yen. Donichika tickets (ドニチカキップ, donichika kippu, a portmanteau of 土日 donichi meaning "Saturday and Sunday" and 地下 chika meaning "underground") serve as an unlimited one-day pass for the subway; they are available only on weekends and national holidays, and are sold at a lower price of 520 yen. Because their functionality is identical, subway one-day cards are unavailable on days when Donichika tickets are sold. Both can be purchased with cash only. Commuter passes, which can be loaded onto SAPICA, offer unlimited rides between specific stations during their period of validity. There are two types of commuter pass: one for those commuting to workplaces and one for students. Both are available for one-month or three-month periods, and can be newly purchased from commuter pass sales offices located at major stations. Standard SAPICA cards may be upgraded to a commuter pass through ticket vending machines. Commuter SAPICA cards automatically downgrade to standard SAPICA cards once the time period expires. Shopping areas There are two main shopping areas located underground, connected to the exits of three central stations on the Namboku Line: Sapporo Station, Susukino Station, and Odori Station. Pole Town is an extensive shopping area that lies between Susukino and Odori stations. Aurora Town is a shopping arcade connected to Sapporo Station, linking some of the main shopping malls in Sapporo, such as Daimaru, JR Tower, and Stellar Place. In addition to the underground shopping corridors, an underground walkway also connects Odori Station to Bus Center Mae station and its neighboring bus center. There are few stores in this walkway. Network Map
Technology
Japan
null
23289836
https://en.wikipedia.org/wiki/Claymore
Claymore
A claymore (from Scottish Gaelic claidheamh mòr, "great sword") is either the Scottish variant of the late medieval two-handed sword or the Scottish variant of the basket-hilted sword. The former is characterised as having a cross hilt of forward-sloping quillons with quatrefoil terminations and was in use from the 15th to 17th centuries. The word claymore was first used in reference to basket-hilted swords during the 18th century in Scotland and parts of England. This description may not have been used during the 17th century, when basket-hilted swords were the primary military swords across Europe, but these basket-hilted, broad-bladed swords remained in service with officers of Scottish regiments into the 21st century. After the Acts of Union in 1707 (when Scottish and English regiments were integrated), the swords were seen as a mark of distinction by Scottish officers over the more slender sabres used by their English contemporaries: a symbol of physical strength and prowess, and a link to the historic Highland way of life. Terminology The term claymore is an anglicisation of the Gaelic claidheamh mòr, "big/great sword", attested in 1772 (as Cly-more) with the gloss "great two-handed sword". The sense "basket-hilted sword" is contemporaneous, attested in 1773 as "the broad-sword now used ... called the Claymore, (i.e., the great sword)", although the OED observes that this usage is "inexact, but very common". The 1911 Encyclopædia Britannica likewise judged that the term is "wrongly" applied to the basket-hilted sword. Countering this view, Paul Wagner and Christopher Thompson argue that the term "claymore" was applied first to the basket-hilted broadsword, and then to all Scottish swords. They provide quotations earlier than those given above in support of its use to refer to a basket-hilted broadsword and targe: "a strong handsome target, with a sharp pointed steel, of above half an ell in length, screw'd into the navel of it, on his left arm, a sturdy claymore by his side" (1715 pamphlet). They also note its use as a battle-cry as early as 1678. Some authors suggest that claybeg should be used instead, from a purported Gaelic claidheamh beag, "small sword". This does not parallel Scottish Gaelic usage. According to the Gaelic Dictionary by R. A. Armstrong (1825), claidheamh mòr "big/great sword" translates to "broadsword", and claidheamh dà làimh to "two-handed sword", while claidheamh beag "small sword" is given as a translation of "Bilbo". Two-handed (Highland) claymore The two-handed claymore was a large sword used in the late medieval and early modern periods. It was used in the constant clan warfare and border fights with the English from the 15th century to 1700. Although claymores existed as far back as the Wars of Scottish Independence, they were smaller and few had the typical quatrefoil design (as can be seen on the Great Seal of John Balliol, King of Scots). The last known battle in which it is considered to have been used in significant numbers was the Battle of Killiecrankie in 1689. It was somewhat longer than other two-handed swords of the era. The English used similar swords, known as greatswords, during the Renaissance. The two-handed claymore seems to be an offshoot of early Scottish medieval longswords (similar to the espee de guerre or grete war sword), which had developed a distinctive style of cross-hilt with forward-angled arms that ended in spatulate swellings. The lobed pommels on earlier swords were inspired by the Viking style. 
The spatulate swellings were later frequently made in a quatrefoil design. The average claymore ran about 140 cm (55 in) in overall length, with a 33 cm (13 in) grip, a 107 cm (42 in) blade, and a weight of approximately 2.5 kg (5.5 lb). For instance, in 1772 Thomas Pennant described a sword seen on his visit to Raasay as: "an unwieldy weapon, two inches broad (5 cm), doubly edged; the length of the blade three feet seven inches (109 cm); of the handle, fourteen inches (36 cm); of a plain transverse guard, one foot (30 cm); the weight six pounds and a half (2.9 kg)." Fairly uniform in style, the sword was set with a wheel pommel often capped by a crescent-shaped nut and a guard with straight, forward-sloping arms ending in quatrefoils, and langets running down the centre of the blade from the guard. Another common style of two-handed claymore (though less well known today) was the "clamshell-hilted" claymore. It had a crossguard that consisted of two downward-curving arms and two large, round, concave plates that protected the foregrip. It was so named because the round guards resembled an open clam. Popular culture references The song "Tweedle Dee, Tweedle Dum" by the Scottish band Middle of the Road mentions Scottish warriors going to battle with "claymores in their hands". Drew McIntyre's finishing move in WWE is known as the Claymore Kick. McIntyre has also entered matches with a claymore sword named "Angela", after his late mother. The video game Team Fortress 2 features an unlockable, haunted claymore known as the "Eyelander" and a Zweihänder misleadingly named the "Claidheamh Mòr". In the Star Trek: The Original Series episode "Day of the Dove", the character Chief Engineer Scott finds and keeps a claymore when the ship's weapons are replaced by antique weaponry. In the video game For Honor, the character Highlander wields a claymore. The claymore is a recurring weapon in the Dark Souls video game series. In the 2023 remake of Super Mario RPG, one of the weapon-themed bosses is named Claymorton. In the video game Genshin Impact, the claymore is one of the five weapon classes which can be used by the game's characters. The American rock band Ween sings about a claymore in their song "The Blarney Stone" from their 1997 album The Mollusk. In the 1995 film Braveheart, William Wallace carries a claymore. At the end of the film, the claymore is thrown onto the field of Bannockburn and sticks point down in the ground; the final image of the film shows it still standing in the empty grassy field.
Technology
Swords
null
23290197
https://en.wikipedia.org/wiki/CSS
CSS
Cascading Style Sheets (CSS) is a style sheet language used for specifying the presentation and styling of a document written in a markup language such as HTML or XML (including XML dialects such as SVG, MathML or XHTML). CSS is a cornerstone technology of the World Wide Web, alongside HTML and JavaScript. CSS is designed to enable the separation of content and presentation, including layout, colors, and fonts. This separation can improve content accessibility, since the content can be written without concern for its presentation; provide more flexibility and control in the specification of presentation characteristics; enable multiple web pages to share formatting by specifying the relevant CSS in a separate .css file, which reduces complexity and repetition in the structural content; and enable the .css file to be cached to improve the page load speed between the pages that share the file and its formatting. Separation of formatting and content also makes it feasible to present the same markup page in different styles for different rendering methods, such as on-screen, in print, by voice (via speech-based browser or screen reader), and on Braille-based tactile devices. CSS also has rules for alternate formatting if the content is accessed on a mobile device. The name cascading comes from the specified priority scheme used to determine which declaration applies if more than one declaration of a property matches a particular element. This cascading priority scheme is predictable. The CSS specifications are maintained by the World Wide Web Consortium (W3C). Internet media type (MIME type) text/css is registered for use with CSS by RFC 2318 (March 1998). The W3C operates a free CSS validation service for CSS documents. In addition to HTML, other markup languages support the use of CSS, including XHTML, plain XML, SVG, and XUL. CSS is also used in the GTK widget toolkit. Syntax CSS has a simple syntax and uses a number of English keywords to specify the names of various style properties. Style sheet A style sheet consists of a list of rules. Each rule or rule-set consists of one or more selectors and a declaration block. Selector In CSS, selectors declare which part of the markup a style applies to by matching tags and attributes in the markup itself. Selector types Selectors may apply to the following: all elements of a specific type, e.g. the second-level headers h2; elements specified by attribute, in particular id (an identifier unique within the document, denoted in the selector language by a hash prefix, #) and class (an identifier that can annotate multiple elements in a document, denoted by a dot prefix, .); and elements depending on how they are placed relative to others in the document tree. (The phrase "CSS class", although sometimes used, is a misnomer, as element classes—specified with the HTML class attribute—are a markup feature that is distinct from browsers' CSS subsystem and the related W3C/WHATWG standards work on document styles; see RDF and microformats for the origins of the "class" system of the Web content model.) Classes and IDs are case-sensitive, start with letters, and can include alphanumeric characters, hyphens, and underscores. A class may apply to any number of instances of any element. An ID may only be applied to a single element.
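A minimal sketch of these selector types is shown below; the id and class names used (#introduction and .note) are illustrative assumptions rather than anything defined by CSS itself.
h2 { color: navy; }                 /* type selector: matches every h2 element */
#introduction { margin-top: 2em; }  /* id selector: matches the single element with id="introduction" */
.note { font-style: italic; }       /* class selector: matches any element carrying class="note" */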
One example of a widely used pseudo-class is :hover, which identifies content only when the user "points to" the visible element, usually by holding the mouse cursor over it. It is appended to a selector as in a:hover. A pseudo-class classifies document elements, such as :link or :visited, whereas a pseudo-element makes a selection that may consist of partial elements, such as ::first-line or ::first-letter. Note the distinction between the double-colon notation used for pseudo-elements and the single-colon notation used for pseudo-classes. Combinators Multiple simple selectors may be joined using combinators to specify elements by location, element type, id, class, or any combination thereof. The order of the selectors is important. For example, div .myClass {color: red;} applies to all elements of class myClass that are inside div elements, whereas .myClass div {color: red;} applies to all div elements that are inside elements of class myClass. This is not to be confused with concatenated identifiers such as div.myClass {color: red;} which applies to div elements of class myClass. Summary of selector syntax The following table provides a summary of selector syntax indicating usage and the version of CSS that introduced it. Declaration block A declaration block consists of a pair of braces ({}) enclosing a semicolon-separated list of declarations. Declaration Each declaration itself consists of a property, a colon (:), and a value. Optional white-space may appear around the declaration block, declarations, colons, and semicolons for readability. Properties Properties are specified in the CSS standard. Each property has a set of possible values. Some properties can affect any type of element, and others apply only to particular groups of elements. Values Values may be keywords, such as "center" or "inherit", or numerical values, such as 200px (200 pixels), 50vw (50 percent of the viewport width) or 80% (80 percent of the parent element's width). Color values can be specified with keywords (e.g. "red"), hexadecimal values (e.g. #FF0000, also abbreviated as #F00), RGB values on a 0 to 255 scale (e.g. rgb(255, 0, 0)), RGBA values that specify both color and alpha transparency (e.g. rgba(255, 0, 0, 0.8)), or HSL or HSLA values (e.g. hsl(0, 100%, 50%), hsla(0, 100%, 50%, 0.8)). Non-zero numeric values representing linear measures must include a length unit, which is either an alphabetic code or abbreviation, as in 200px or 50vw; or a percentage sign, as in 80%. Some units – cm (centimetre); in (inch); mm (millimetre); pc (pica); and pt (point) – are absolute, which means that the rendered dimension does not depend upon the structure of the page; others – em (em); ex (ex) and px (pixel) – are relative, which means that factors such as the font size of a parent element can affect the rendered measurement. These eight units were a feature of CSS 1 and were retained in all subsequent revisions. The proposed CSS Values and Units Module Level 3 will, if adopted as a W3C Recommendation, provide seven further length units: ch; Q; rem; vh; vmax; vmin; and vw. Use Before CSS, nearly all presentational attributes of HTML documents were contained within the HTML markup. All font colors, background styles, element alignments, borders, and sizes had to be explicitly described, often repeatedly, within the HTML. CSS lets authors move much of that information to another file, the style sheet, resulting in considerably simpler HTML. Additionally, as more and more kinds of devices are able to access web pages, a wider range of screen sizes and layouts must be supported. Customizing a website for each device size is costly and increasingly difficult.
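One CSS mechanism aimed at this problem is the media query, which applies a group of rules only when the output device matches stated conditions. A minimal sketch, in which the nav selector and the 600px breakpoint are purely illustrative:
@media screen and (max-width: 600px) {
  /* hide the navigation element on narrow screens */
  nav { display: none; }
}
The enclosed rule takes effect only on screens at most 600 pixels wide; wider displays ignore it, so the same markup can be presented differently on different devices without duplicating the content.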
The modular nature of CSS means that styles can be reused in different parts of a site or even across sites, promoting consistency and efficiency. For example, headings (h1 elements), sub-headings (h2), sub-sub-headings (h3), etc., are defined structurally using HTML. In print and on the screen, choice of font, size, color and emphasis for these elements is presentational. Before CSS, document authors who wanted to assign such typographic characteristics to, say, all h2 headings had to repeat HTML presentational markup for each occurrence of that heading type. This made documents more complex, larger, and more error-prone and difficult to maintain. CSS allows the separation of presentation from structure. CSS can define color, font, text alignment, size, borders, spacing, layout and many other typographic characteristics, and can do so independently for on-screen and printed views. CSS also defines non-visual styles, such as reading speed and emphasis for aural text readers. The W3C has now deprecated the use of all presentational HTML markup. For example, under pre-CSS HTML, a heading element defined with red text would be written as: <h1><font color="red">Chapter 1.</font></h1> Using CSS, the same element can be coded using style properties instead of HTML presentational attributes: <h1 style="color: red;">Chapter 1.</h1> The advantages of this may not be immediately clear but the power of CSS becomes more apparent when the style properties are placed in an internal style element or, even better, an external CSS file. For example, suppose the document contains the style element: <style> h1 { color: red; } </style> All h1 elements in the document will then automatically become red without requiring any explicit code. If the author later wanted to make h1 elements blue instead, this could be done by changing the style element to: <style> h1 { color: blue; } </style> rather than by laboriously going through the document and changing the color for each individual h1 element. The styles can also be placed in an external CSS file, as described below, and loaded using syntax similar to: <link href="path/to/file.css" rel="stylesheet" type="text/css"> This further decouples the styling from the HTML document and makes it possible to restyle multiple documents by simply editing a shared external CSS file. Sources CSS, or Cascading Style Sheets, offers a flexible way to style web content, with styles originating from browser defaults, user preferences, or web designers. These styles can be applied inline, within an HTML document, or through external .css files for broader consistency. Not only does this simplify web development by promoting reusability and maintainability, it also improves site performance because styles can be offloaded into dedicated .css files that browsers can cache. Additionally, even if the styles cannot be loaded or are disabled, this separation maintains the accessibility and readability of the content, ensuring that the site is usable for all users, including those with disabilities. Its multi-faceted approach, including considerations for selector specificity, rule order, and media types, ensures that websites are visually coherent and adaptive across different devices and user needs, striking a balance between design intent and user accessibility. Multiple style sheets Multiple style sheets can be imported. 
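For example, a style sheet can pull in several others with @import rules; the file names here are illustrative:
@import url("base.css");                    /* rules shared by every page */
@import url("print-overrides.css") print;   /* applied only when printing */
The optional media type at the end of the second rule restricts that imported sheet to a particular kind of output device, which connects directly to the device-dependent styling described next.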
Different styles can be applied depending on the output device being used; for example, the screen version can be quite different from the printed version, so authors can tailor the presentation appropriately for each medium. Cascading The style sheet with the highest priority controls the content display. Declarations not set in the highest priority source are passed on to a source of lower priority, such as the user agent style. The process is called cascading. One of the goals of CSS is to allow users greater control over presentation. Someone who finds red italic headings difficult to read may apply a different style sheet. Depending on the browser and the website, a user may choose from various style sheets provided by the designers, or may remove all added styles, and view the site using the browser's default styling, or may override just the red italic heading style without altering other attributes. Browser extensions like Stylish and Stylus have been created to facilitate the management of such user style sheets. In the case of large projects, cascading can be used to determine which style has a higher priority when developers do integrate third-party styles that have conflicting priorities, and to further resolve those conflicts. Additionally, cascading can help create themed designs, which help designers fine-tune aspects of a design without compromising the overall layout. CSS priority scheme Specificity Specificity refers to the relative weights of various rules. It determines which styles apply to an element when more than one rule could apply. Based on the specification, a simple selector (e.g. H1) has a specificity of 1, class selectors have a specificity of 1,0, and ID selectors have a specificity of 1,0,0. Because the specificity values do not carry over as in the decimal system, commas are used to separate the "digits" (a CSS rule having 11 elements and 11 classes would have a specificity of 11,11, not 121). Thus the selectors of the following rule result in the indicated specificity: Examples Consider this HTML fragment: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <style> #xyz { color: blue; } </style> </head> <body> <p id="xyz" style="color: green;">To demonstrate specificity</p> </body> </html> In the above example, the declaration in the style attribute overrides the one in the <style> element because it has a higher specificity, and thus, the paragraph appears green: To demonstrate specificity Inheritance Inheritance is a key feature in CSS; it relies on the ancestor-descendant relationship to operate. Inheritance is the mechanism by which properties are applied not only to a specified element but also to its descendants. Inheritance relies on the document tree, which is the hierarchy of XHTML elements in a page based on nesting. Descendant elements may inherit CSS property values from any ancestor element enclosing them. In general, descendant elements inherit text-related properties, but their box-related properties are not inherited. Properties that can be inherited are color, font, letter spacing, line-height, list-style, text-align, text-indent, text-transform, visibility, white-space, and word-spacing. Properties that cannot be inherited are background, border, display, float and clear, height, and width, margin, min- and max-height and -width, outline, overflow, padding, position, text-decoration, vertical-align, and z-index. Inheritance can be used to avoid declaring certain properties over and over again in a style sheet, allowing for shorter CSS. 
Inheritance in CSS is not the same as inheritance in class-based programming languages, where it is possible to define class B as "like class A, but with modifications". With CSS, it is possible to style an element with "class A, but with modifications". However, it is not possible to define a CSS class B like that, which could then be used to style multiple elements without having to repeat the modifications. Example Given the following style sheet: p { color: pink; } Suppose there is a p element with an emphasizing element (em) inside: <p> This is to <em>illustrate</em> inheritance </p> If no color is assigned to the em element, the emphasized word "illustrate" inherits the color of the parent element, p. The style sheet rule for p has the color pink; hence, the em element is likewise pink: This is to illustrate inheritance Whitespace The whitespace between properties and selectors is ignored. This code snippet: body{overflow:hidden;background:#000000;background-image:url(images/bg.gif);background-repeat:no-repeat;background-position:left top;} is functionally equivalent to this one: body { overflow: hidden; background-color: #000000; background-image: url(images/bg.gif); background-repeat: no-repeat; background-position: left top; } Indentation One common way to format CSS for readability is to indent each property and give it its own line. In addition to formatting CSS for readability, shorthand properties can be used to write out the code faster, which also gets processed more quickly when being rendered: body { overflow: hidden; background: #000 url(images/bg.gif) no-repeat left top; } Sometimes, multiple property values are indented onto their own line: @font-face { font-family: 'Comic Sans'; font-size: 20px; src: url('first.example.com'), url('second.example.com'), url('third.example.com'), url('fourth.example.com'); } Positioning CSS 2.1 defines three positioning schemes: Normal flow Inline items are laid out in the same way as the letters in words in the text, one after the other across the available space until there is no more room, then starting a new line below. Block items stack vertically, like paragraphs and like the items in a bulleted list. Normal flow also includes the relative positioning of block or inline items and run-in boxes. Floats A floated item is taken out of the normal flow and shifted to the left or right as far as possible in the space available. Other content then flows alongside the floated item. Absolute positioning An absolutely positioned item has no place in, and no effect on, the normal flow of other items. It occupies its assigned position in its container independently of other items. Position property There are five possible values of the position property. If an item is positioned in any way other than static, then the further properties top, bottom, left, and right are used to specify offsets and positions. The element having position static is not affected by the top, bottom, left or right properties. Static The default value places the item in the normal flow. Relative The item is placed in the normal flow, and then shifted or offset from that position. Subsequent flow items are laid out as if the item had not been moved. Absolute Specifies absolute positioning. The element is positioned in relation to its nearest non-static ancestor. Fixed The item is absolutely positioned in a fixed position on the screen even as the rest of the document is scrolled. Sticky The item is treated as relatively positioned until it crosses a specified threshold within its scrolling ancestor, after which it is treated as fixed. Float and clear The float property may have one of three values.
Absolutely positioned or fixed items cannot be floated. Other elements normally flow around floated items, unless they are prevented from doing so by their clear property. left The item floats to the left of the line that it would have appeared in; other items may flow around its right side. right The item floats to the right of the line that it would have appeared in; other items may flow around its left side. clear Forces the element to appear underneath ('clear') floated elements to the left (clear: left), right (clear: right) or both sides (clear: both). History CSS was first proposed by Håkon Wium Lie on 10 October 1994. At the time, Lie was working with Tim Berners-Lee at CERN. Several other style sheet languages for the web were proposed around the same time, and discussions on public mailing lists and inside the World Wide Web Consortium resulted in the first W3C CSS Recommendation (CSS1) being released in 1996. In particular, a proposal by Bert Bos was influential; he became co-author of CSS1, and is regarded as co-creator of CSS. Style sheets have existed in one form or another since the beginnings of Standard Generalized Markup Language (SGML) in the 1980s, and CSS was developed to provide style sheets for the web. One requirement for a web style sheet language was for style sheets to come from different sources on the web. Therefore, existing style sheet languages like DSSSL and FOSI were not suitable. CSS, on the other hand, let a document's style be influenced by multiple style sheets by way of "cascading" styles. As HTML grew, it came to encompass a wider variety of stylistic capabilities to meet the demands of web developers. This evolution gave the designer more control over site appearance, at the cost of more complex HTML. Variations in web browser implementations, such as ViolaWWW and WorldWideWeb, made consistent site appearance difficult, and users had less control over how web content was displayed. The browser/editor developed by Tim Berners-Lee had style sheets that were hard-coded into the program. The style sheets could therefore not be linked to documents on the web. Robert Cailliau, also of CERN, wanted to separate the structure from the presentation so that different style sheets could describe different presentation for printing, screen-based presentations, and editors. Improving web presentation capabilities was a topic of interest to many in the web community and nine different style sheet languages were proposed on the www-style mailing list. Of these nine proposals, two were especially influential on what became CSS: Cascading HTML Style Sheets and Stream-based Style Sheet Proposal (SSP). Two browsers served as testbeds for the initial proposals; Lie worked with Yves Lafon to implement CSS in Dave Raggett's Arena browser. Bert Bos implemented his own SSP proposal in the Argo browser. Thereafter, Lie and Bos worked together to develop the CSS standard (the 'H' was removed from the name because these style sheets could also be applied to other markup languages besides HTML). Lie's proposal was presented at the "Mosaic and the Web" conference (later called WWW2) in Chicago, Illinois in 1994, and again with Bert Bos in 1995. Around this time the W3C was already being established and took an interest in the development of CSS. It organized a workshop toward that end chaired by Steven Pemberton. This resulted in W3C adding work on CSS to the deliverables of the HTML editorial review board (ERB).
Lie and Bos were the primary technical staff on this aspect of the project, with additional members, including Thomas Reardon of Microsoft, participating as well. In August 1996, Netscape Communications Corporation presented an alternative style sheet language called JavaScript Style Sheets (JSSS). The specification was never finished and is deprecated. By the end of 1996, CSS was ready to become official, and the CSS level 1 Recommendation was published in December. Development of HTML, CSS, and the DOM had all been taking place in one group, the HTML Editorial Review Board (ERB). Early in 1997, the ERB was split into three working groups: HTML Working Group, chaired by Dan Connolly of W3C; DOM Working Group, chaired by Lauren Wood of SoftQuad; and CSS Working Group, chaired by Chris Lilley of W3C. The CSS Working Group began tackling issues that had not been addressed with CSS level 1, resulting in the creation of CSS level 2 on November 4, 1997. It was published as a W3C Recommendation on May 12, 1998. CSS level 3, which was started in 1998, is still under development. In 2005, the CSS Working Group decided to enforce the requirements for standards more strictly. This meant that already published standards like CSS 2.1, CSS 3 Selectors, and CSS 3 Text were pulled back from Candidate Recommendation to Working Draft level. Difficulty with adoption The CSS 1 specification was completed in 1996. Microsoft's Internet Explorer 3 was released that year, featuring some limited support for CSS. IE 4 and Netscape 4.x added more support, but it was typically incomplete and had many bugs that prevented CSS from being usefully adopted. It was more than three years before any web browser achieved near-full implementation of the specification. Internet Explorer 5.0 for the Macintosh, shipped in March 2000, was the first browser to have full (better than 99 percent) CSS 1 support, surpassing Opera, which had been the leader since its introduction of CSS support fifteen months earlier. Other browsers followed soon afterward, and many of them additionally implemented parts of CSS 2. However, even when later "version 5" web browsers began to offer a fairly full implementation of CSS, they were still incorrect in certain areas. They were fraught with inconsistencies, bugs, and other quirks. Microsoft Internet Explorer 5.x for Windows, as opposed to the very different IE for Macintosh, had a flawed implementation of the CSS box model, as compared with the CSS standards. Such inconsistencies and variation in feature support made it difficult for designers to achieve a consistent appearance across browsers and platforms without the use of workarounds termed CSS hacks and filters. The IE Windows box model bugs were so serious that, when Internet Explorer 6 was released, Microsoft introduced a backward-compatible mode of CSS interpretation ("quirks mode") alongside an alternative, corrected "standards mode". Other non-Microsoft browsers also provided mode-switch capabilities. It therefore became necessary for authors of HTML files to ensure they contained a special distinctive "standards-compliant CSS intended" marker to show that the authors intended CSS to be interpreted correctly, in compliance with standards, as opposed to being intended for the now long-obsolete IE5/Windows browser. Without this marker, web browsers with the "quirks mode"-switching capability will size objects in web pages as IE 5 on Windows would, rather than following CSS standards.
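In practice this marker is a document type declaration at the very top of the HTML file; for example, a page beginning with <!DOCTYPE html> is rendered in standards mode by browsers that have the mode-switching capability, while pages lacking such a declaration fall back to quirks mode.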
Problems with the patchy adoption of CSS and errata in the original specification led the W3C to revise the CSS 2 standards into CSS 2.1, which moved nearer to a working snapshot of current CSS support in HTML browsers. Some CSS 2 properties that no browser successfully implemented were dropped, and in a few cases, defined behaviors were changed to bring the standard into line with the predominant existing implementations. CSS 2.1 became a Candidate Recommendation on February 25, 2004, but CSS 2.1 was pulled back to Working Draft status on June 13, 2005, and only returned to Candidate Recommendation status on July 19, 2007. In addition to these problems, the .css extension was used by a software product that converted PowerPoint files into Compact Slide Show files, so some web servers served all .css as MIME type application/x-pointplus rather than text/css. Vendor prefixes Individual browser vendors occasionally introduced new parameters ahead of standardization and universalization. To prevent interfering with future implementations, vendors prepended unique names to the parameters, such as -moz- for Mozilla Firefox, -webkit- named after the browsing engine of Apple Safari, -o- for Opera Browser and -ms- for Microsoft Internet Explorer and early versions of Microsoft Edge that use EdgeHTML. Occasionally, the parameters with vendor prefixes such as -moz-radial-gradient and -webkit-linear-gradient have slightly different syntax as compared to their non-vendor-prefix counterparts. Prefixed properties are rendered obsolete by the time of standardization. Programs are available to automatically add prefixes for older browsers and to point out standardized versions of prefixed parameters. Since prefixes are limited to a small subset of browsers, removing the prefix allows other browsers to see the functionality. An exception is certain obsolete -webkit- prefixed properties, which are so common and persistent on the web that other families of browsers have decided to support them for compatibility. CSS has various levels and profiles. Each level of CSS builds upon the last, typically adding new features and typically denoted as CSS 1, CSS 2, CSS 3, and CSS 4. Profiles are typically a subset of one or more levels of CSS built for a particular device or user interface. Currently, there are profiles for mobile devices, printers, and television sets. Profiles should not be confused with media types, which were added in CSS 2. CSS 1 The first CSS specification to become an official W3C Recommendation is CSS level 1, published on 17 December 1996. Håkon Wium Lie and Bert Bos are credited as the original developers. Among its capabilities are support for font properties such as typeface and emphasis; color of text, backgrounds, and other elements; text attributes such as spacing between words, letters, and lines of text; alignment of text, images, tables, and other elements; margin, border, padding, and positioning for most elements; and unique identification and generic classification of groups of attributes. The W3C no longer maintains the CSS 1 Recommendation. CSS 2 The CSS level 2 specification was developed by the W3C and published as a recommendation in May 1998. A superset of CSS 1, CSS 2 includes a number of new capabilities like absolute, relative, and fixed positioning of elements and z-index, the concept of media types, support for aural style sheets (which were later replaced by the CSS 3 speech modules) and bidirectional text, and new font properties such as shadows.
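A short sketch of the kinds of declarations CSS 2 added; the selector name and values are illustrative:
#banner { position: fixed; top: 0; left: 0; z-index: 10; }   /* pin the element to the viewport and layer it above other content */
@media print { #banner { display: none; } }                   /* use a CSS 2 media type to omit it from printed output */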
The W3C no longer maintains the CSS 2 recommendation. CSS 2.1 CSS level 2 revision 1, often referred to as "CSS 2.1", fixes errors in CSS 2, removes poorly supported or not fully interoperable features and adds already implemented browser extensions to the specification. To comply with the W3C Process for standardizing technical specifications, CSS 2.1 went back and forth between Working Draft status and Candidate Recommendation status for many years. CSS 2.1 first became a Candidate Recommendation on 25 February 2004, but it was reverted to a Working Draft on 13 June 2005 for further review. It returned to Candidate Recommendation on 19 July 2007 and then updated twice in 2009. However, because changes and clarifications were made, it again went back to Last Call Working Draft on 7 December 2010. CSS 2.1 went to Proposed Recommendation on 12 April 2011. After being reviewed by the W3C Advisory Committee, it was finally published as a W3C Recommendation on 7 June 2011. CSS 2.1 was planned as the first and final revision of level 2—but low-priority work on CSS 2.2 began in 2015. CSS 3 Unlike CSS 2, which is a large single specification defining various features, CSS 3 is divided into several separate documents called "modules". Each module adds new capabilities or extends features defined in CSS 2, preserving backward compatibility. Work on CSS level 3 started around the time of publication of the original CSS 2 recommendation. The earliest CSS 3 drafts were published in June 1999. Due to the modularization, different modules have different stability and statuses. Some modules have Candidate Recommendation (CR) status and are considered moderately stable. At CR stage, implementations are advised to drop vendor prefixes. CSS 4 There is no single, integrated CSS4 specification, because the specification has been split into many separate modules which level independently. Modules that build on things from CSS Level 2 started at Level 3. Some of them have already reached Level 4 or are already approaching Level 5. Other modules that define entirely new functionality, such as Flexbox, have been designated as Level 1 and some of them are approaching Level 2. The CSS Working Group sometimes publishes "Snapshots", a collection of whole modules and parts of other drafts that are considered stable enough to be implemented by browser developers. So far, five such "best current practices" documents have been published as
Technology
Stylesheet languages
null
23290471
https://en.wikipedia.org/wiki/Group%2011%20element
Group 11 element
Group 11, by modern IUPAC numbering, is a group of chemical elements in the periodic table, consisting of copper (Cu), silver (Ag), gold (Au), and roentgenium (Rg), although no chemical experiments have yet been carried out to confirm that roentgenium behaves like the heavier homologue to gold. Group 11 is also known as the coinage metals, due to their usage in minting coins—while the rise in metal prices means that silver and gold are no longer used for circulating currency, remaining in use for bullion, copper remains a common metal in coins to date, either in the form of copper-clad coinage or as part of the cupronickel alloy. They were most likely the first three elements discovered. Copper, silver, and gold all occur naturally in elemental form. History All three stable elements of the group have been known since prehistoric times, as all of them occur in metallic form in nature and no extraction metallurgy is necessary to produce them. Copper was known and used from around 4000 BC, and many items, weapons, and materials were made of copper. The first evidence of silver mining dates back to 3000 BC, in Turkey and Greece, according to the RSC. Ancient peoples also learned how to refine silver. The earliest recorded metal employed by humans appears to be gold, which can be found free or "native". Small amounts of natural gold have been found in Spanish caves used during the late Paleolithic period, c. 40,000 BC. Gold artifacts made their first appearance at the very beginning of the pre-dynastic period in Egypt, at the end of the fifth millennium BC and the start of the fourth, and smelting was developed during the course of the 4th millennium BC; gold artifacts appear in the archeology of Lower Mesopotamia during the early 4th millennium BC. Roentgenium was first made in 1994 by bombarding bismuth-209 with nickel-64 nuclei to produce roentgenium-272. Characteristics Like other groups, the members of this family show patterns in electron configuration, especially in the outermost shells, resulting in trends in chemical behavior, although roentgenium is probably an exception: All group 11 elements are relatively inert, corrosion-resistant metals. Copper and gold are colored, but silver is not. Roentgenium is expected to be silvery, though it has not been produced in large enough amounts to confirm this. These elements have low electrical resistivity, so they are used for wiring. Copper is the cheapest and most widely used. Bond wires for integrated circuits are usually gold. Silver and silver-plated copper wiring are found in some special applications. Occurrence Copper occurs in its native form in Chile, China, Mexico, Russia and the USA. Various natural ores of copper are: copper pyrites (CuFeS2), cuprite or ruby copper (Cu2O), copper glance (Cu2S), malachite (Cu(OH)2·CuCO3), and azurite (Cu(OH)2·2CuCO3). Copper pyrite is the principal ore, and yields nearly 76% of the world production of copper. Production Silver is found in native form, as an alloy with gold (electrum), and in ores containing sulfur, arsenic, antimony or chlorine. Ores include argentite (Ag2S), chlorargyrite (AgCl), which includes horn silver, and pyrargyrite (Ag3SbS3). Silver is extracted using the Parkes process. Applications These metals, especially silver, have unusual properties that make them essential for industrial applications outside of their monetary or decorative value.
They are all excellent conductors of electricity. The most conductive (by volume) of all metals are silver, copper and gold in that order. Silver is also the most thermally conductive element, and the most light reflecting element. Silver also has the unusual property that the tarnish that forms on silver is still highly electrically conductive. Copper is used extensively in electrical wiring and circuitry. Gold contacts are sometimes found in precision equipment for their ability to remain corrosion-free. Silver is used widely in mission-critical applications as electrical contacts, and is also used in photography (because silver nitrate reverts to metal on exposure to light), agriculture, medicine, audiophile and scientific applications. Gold, silver, and copper are quite soft metals and so are easily damaged in daily use as coins. Precious metal may also be easily abraded and worn away through use. In their numismatic functions these metals must be alloyed with other metals to afford coins greater durability. The alloying with other metals makes the resulting coins harder, less likely to become deformed and more resistant to wear. Gold coins: Gold coins are typically produced as either 90% gold (e.g. with pre-1933 US coins), or 22 carat (91.66%) gold (e.g. current collectible coins and Krugerrands), with copper and silver making up the remaining weight in each case. Bullion gold coins are being produced with up to 99.999% gold (in the Canadian Gold Maple Leaf series). Silver coins: Silver coins are typically produced as either 90% silver – in the case of pre-1965 US minted coins (which were circulated in many countries), or sterling silver (92.5%) coins for pre-1920 British Commonwealth and other silver coinage, with copper making up the remaining weight in each case. Old European coins were commonly produced with 83.5% silver. Modern silver bullion coins are often produced with purity varying from 99.9% to 99.999%. Copper coins: Copper coins are often of quite high purity, around 97%, and are usually alloyed with small amounts of zinc and tin. Inflation has caused the face value of coins to fall below the hard currency value of the historically used metals. This had led to most modern coins being made of base metals – copper nickel (around 80:20, silver in color) is popular as are nickel-brass (copper (75), nickel (5) and zinc (20), gold in color), manganese-brass (copper, zinc, manganese, and nickel), bronze, or simple plated steel. Biological role and toxicity Copper, although toxic in excessive amounts, is essential for life. It can be found in hemocyanin, cytochrome c oxidase and in superoxide dismutase. Copper is shown to have antimicrobial properties which make it useful for hospital doorknobs to keep diseases from being spread. Eating food in copper containers is known to increase the risk of copper toxicity. Wilson's disease is a genetic condition in which a protein important for excretion of excess copper is mutated such that copper builds up in body tissues, causing symptoms including vomiting, weakness, tremors, anxiety, and muscle stiffness. Elemental gold and silver have no known toxic effects or biological use, although gold salts can be toxic to liver and kidney tissue. Like copper, silver also has antimicrobial properties. 
The prolonged use of preparations containing gold or silver can also lead to the accumulation of these metals in body tissue; the results are the irreversible but apparently harmless pigmentation conditions known as chrysiasis and argyria respectively. Because it is short-lived and radioactive, roentgenium has no biological use, and it would likely be extremely harmful due to its radioactivity.
Physical sciences
Group 11
Chemistry
33409078
https://en.wikipedia.org/wiki/Automatic%20generation%20control
Automatic generation control
In an electric power system, automatic generation control (AGC) is a system for adjusting the power output of multiple generators at different power plants, in response to changes in the load. Since a power grid requires that generation and load closely balance moment by moment, frequent adjustments to the output of generators are necessary. The balance can be judged by measuring the system frequency; if it is increasing, more power is being generated than used, which causes all the machines in the system to accelerate. If the system frequency is decreasing, more load is on the system than the instantaneous generation can provide, which causes all generators to slow down. History Before the use of automatic generation control, one generating unit in a system would be designated as the regulating unit and would be manually adjusted to control the balance between generation and load to maintain system frequency at the desired value. The remaining units would be controlled with speed droop to share the load in proportion to their ratings. With automatic systems, many units in a system can participate in regulation, reducing wear on a single unit's controls and improving overall system efficiency, stability, and economy. Where the grid has tie interconnections to adjacent control areas, automatic generation control helps maintain the power interchanges over the tie lines at the scheduled levels. With computer-based control systems and multiple inputs, an automatic generation control system can take into account such matters as the most economical units to adjust, the coordination of thermal, hydroelectric, and other generation types, and even constraints related to the stability of the system and capacity of interconnections to other power grids. Types Turbine-governor control Turbine generators in a power system have stored kinetic energy due to their large rotating masses. All the kinetic energy stored in a power system in such rotating masses is a part of the grid inertia. When system load increases, grid inertia is initially used to supply the load. This, however, leads to a decrease in the stored kinetic energy of the turbine generators. Since the mechanical power of these turbines correlates with the delivered electrical power, the turbine generators experience a decrease in angular velocity, which in synchronous generators is directly proportional to a decrease in frequency. The purpose of the turbine-governor control (TGC) is to maintain the desired system frequency by adjusting the mechanical power output of the turbine. These controllers have become automated, and at steady state the frequency-power relation for turbine-governor control is ΔPm = ΔPref − (1/R)·Δf, where ΔPm is the change in turbine mechanical power output, ΔPref is the change in a reference power setting, R is the regulation constant, which quantifies the sensitivity of the generator to a change in frequency, and Δf is the change in frequency. For steam turbines, steam turbine governing adjusts the mechanical output of the turbine by increasing or decreasing the amount of steam entering the turbine via a throttle valve. Load-frequency control Load-frequency control (LFC) is employed to allow an area to first meet its own load demands, then to assist in returning the steady-state frequency of the system, Δf, to zero. Load-frequency control operates with a response time of a few seconds to keep system frequency stable.
Economic dispatch The goal of economic dispatch is to minimize total operating costs in an area by determining how the real power output of each generating unit will meet a given load. Generating units have different costs to produce a unit of electrical energy, and incur different costs for the losses in transmitting energy to the load. An economic dispatch algorithm will run every few minutes to select the combination of generating unit power setpoints that minimizes overall cost, subject to the constraints of transmission limitation or security of the system against failures. Further constraints may be imposed by the water supply of hydroelectric generation, or by the availability of sun and wind power.
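As an illustrative example with hypothetical numbers: if unit A can supply up to 100 MW at $20/MWh and unit B up to 100 MW at $30/MWh, a 150 MW load is served most cheaply by running unit A at its 100 MW limit and unit B at 50 MW, provided that no transmission or security constraint forces a different split.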
Technology
Concepts
null
21808348
https://en.wikipedia.org/wiki/Computer%20hardware
Computer hardware
Computer hardware includes the physical parts of a computer, such as the central processing unit (CPU), random access memory (RAM), motherboard, computer data storage, graphics card, sound card, and computer case. It includes external devices such as a monitor, mouse, keyboard, and speakers. By contrast, software is a set of written instructions that can be stored and run by hardware. Hardware derived its name from the fact it is hard or rigid with respect to changes, whereas software is soft because it is easy to change. Hardware is typically directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware. History Early computing devices more complicated than the ancient abacus date to the seventeenth century. French mathematician Blaise Pascal designed a gear-based device that could add and subtract, selling around 50 models. The stepped reckoner, invented by Gottfried Leibniz by 1676, could also divide and multiply. Due to the limitations of contemporary fabrication and design flaws, Leibniz' reckoner was not very functional, but similar devices (Leibniz wheel) remained in use into the 1970s. In the 19th century, Englishman Charles Babbage invented the difference engine, a mechanical device to calculate polynomials for astronomical purposes. Babbage also designed a general-purpose computer that was never built. Much of the design was incorporated into the earliest computers: punch cards for input and output, memory, an arithmetic unit analogous to central processing units, and even a primitive programming language similar to assembly language. In 1936, Alan Turing developed the universal Turing machine to model any type of computer, proving that no computer would be able to solve the decision problem. The universal Turing machine was a type of stored-program computer capable of mimicking the operations of any Turing machine (computer model) based on the software instructions passed to it. The storage of computer programs is key to the operation of modern computers and is the connection between computer hardware and software. Even prior to this, in the mid-19th century, mathematician George Boole invented Boolean algebra—a system of logic where each proposition is either true or false. Boolean algebra is now the basis of the circuits that model the transistors and other components of integrated circuits that make up modern computer hardware. In 1945, Turing finished the design for a computer (the Automatic Computing Engine) that was never built. Around this time, technological advancement in relays and vacuum tubes enabled the construction of the first computers. Building on Babbage's design, relay computers were built by George Stibitz at Bell Laboratories and by Harvard University's Howard Aiken, who engineered the Mark I. Also in 1945, mathematician John von Neumann—working on the ENIAC project at the University of Pennsylvania—devised the underlying von Neumann architecture that has served as the template for most modern computers. Von Neumann's design featured a centralized memory that stored both data and programs, a central processing unit (CPU) with priority of access to the memory, and input and output (I/O) units.
Von Neumann used a single bus to transfer data, meaning that his solution to the storage problem by locating programs and data adjacent to each other created the Von Neumann bottleneck when the system tries to fetch both at the same time—often throttling the system's performance. Computer architecture Computer architecture requires prioritizing between different goals, such as cost, speed, availability, and energy efficiency. The designer must have a good grasp of the hardware requirements and many different aspects of computing, from compilers to integrated circuit design. Cost has also become a significant constraint for manufacturers seeking to sell their products for less money than competitors offering a very similar hardware component. Profit margins have also been reduced. Even when the performance is not increasing, the cost of components has been dropping over time due to improved manufacturing techniques that have fewer components rejected at quality assurance stage. Instruction set architecture The most common instruction set architecture (ISA)—the interface between a computer's hardware and software—is based on the one devised by von Neumann in 1945. Despite the separation of the computing unit and the I/O system in many diagrams, typically the hardware is shared, with a bit in the computing unit indicating whether it is in computation or I/O mode. Common types of ISAs include CISC (complex instruction set computer), RISC (reduced instruction set computer), vector operations, and hybrid modes. CISC involves using a larger expression set to minimize the number of instructions the machines need to use. Based on a recognition that only a few instructions are commonly used, RISC shrinks the instruction set for added simplicity, which also enables the inclusion of more registers. After the invention of RISC in the 1980s, RISC based architectures that used pipelining and caching to increase performance displaced CISC architectures, particularly in applications with restrictions on power usage or space (such as mobile phones). From 1986 to 2003, the annual rate of improvement in hardware performance exceeded 50 percent, enabling the development of new computing devices such as tablets and mobiles. Alongside the density of transistors, DRAM memory as well as flash and magnetic disk storage also became exponentially more compact and cheaper. The rate of improvement slackened off in the twenty-first century. In the twenty-first century, increases in performance have been driven by increasing exploitation of parallelism. Applications are often parallelizable in two ways: either the same function is running across multiple areas of data (data parallelism) or different tasks can be performed simultaneously with limited interaction (task parallelism). These forms of parallelism are accommodated by various hardware strategies, including instruction-level parallelism (such as instruction pipelining), vector architectures and graphical processing units (GPUs) that are able to implement data parallelism, thread-level parallelism and request-level parallelism (both implementing task-level parallelism). Microarchitecture Microarchitecture, also known as computer organization, refers to high-level hardware questions such as the design of the CPU, memory, and memory interconnect. Memory hierarchy ensures that the memory quicker to access (and more expensive) is located closer to the CPU, while slower, cheaper memory for large-volume storage is located further away. 
Memory is typically segregated to separate programs from data and limit an attacker's ability to alter programs. Most computers use virtual memory to simplify addressing for programs, using the operating system to map virtual memory to different areas of the finite physical memory. Cooling Computer processors generate heat, and excessive heat impacts their performance and can harm the components. Many computer chips will automatically throttle their performance to avoid overheating. Computers also typically have mechanisms for dissipating excessive heat, such as air or liquid coolers for the CPU and GPU and heatsinks for other components, such as the RAM. Computer cases are also often ventilated to help dissipate heat from the computer. Data centers typically use more sophisticated cooling solutions to keep the operating temperature of the entire center safe. Air-cooled systems are more common in smaller or older data centers, while liquid-cooled immersion (where each computer is surrounded by cooling fluid) and direct-to-chip (where the cooling fluid is directed to each computer chip) can be more expensive but are also more efficient. Most computers are designed to be more powerful than their cooling system, but their sustained operations cannot exceed the capacity of the cooling system. While performance can be temporarily increased when the computer is not hot (overclocking), in order to protect the hardware from excessive heat, the system will automatically reduce performance or shut down the processor if necessary. Processors also will shut off or enter a low power mode when inactive to reduce heat. Power delivery as well as heat dissipation are the most challenging aspects of hardware design, and have been the limiting factor to the development of smaller and faster chips since the early twenty-first century. Increases in performance require a commensurate increase in energy use and cooling demand. Types of computer hardware systems Personal computer The personal computer is one of the most common types of computer due to its versatility and relatively low price. Desktop personal computers have a monitor, a keyboard, a mouse, and a computer case. The computer case holds the motherboard, fixed or removable disk drives for data storage, the power supply, and may contain other peripheral devices such as modems or network interfaces. Some models of desktop computers integrated the monitor and keyboard into the same case as the processor and power supply. Separating the elements allows the user to arrange the components in a pleasing, comfortable array, at the cost of managing power and data cables between them. Laptops are designed for portability but operate similarly to desktop PCs. They may use lower-power or reduced size components, with lower performance than a similarly priced desktop computer. Laptops contain the keyboard, display, and processor in one case. The monitor in the folding upper cover of the case can be closed for transportation, to protect the screen and keyboard. Instead of a mouse, laptops may have a touchpad or pointing stick. Tablets are portable computers that use a touch screen as the primary input device. Tablets generally weigh less and are smaller than laptops. Some tablets include fold-out keyboards or offer connections to separate external keyboards. Some models of laptop computers have a detachable keyboard, which allows the system to be configured as a touch-screen tablet. They are sometimes called 2-in-1 detachable laptops or tablet-laptop hybrids. 
Mobile phones are designed to have an extended battery life and light weight, while having less functionality than larger computers. They have diverse hardware architecture, often including antennas, microphones, cameras, GPS devices, and speakers. Power and data connections vary between phones. Large-scale computers A mainframe computer is a much larger computer that typically fills a room and may cost many hundreds or thousands of times as much as a personal computer. They are designed to perform large numbers of calculations for governments and large enterprises. In the 1960s and 1970s, more and more departments started to use cheaper and dedicated systems for specific purposes like process control and laboratory automation. A minicomputer, or colloquially mini, is a class of smaller computers that was developed in the mid-1960s and sold for much less than mainframe and mid-size computers from IBM and its direct competitors. Supercomputers can cost hundreds of millions of dollars. They are intended to maximize performance with floating-point arithmetic and running batch programs that take a very long time (such as weeks) to complete. As a result of the need for communication between parallel programs, the speed of the internal network must be prioritized. Warehouse scale computers are larger versions of cluster computers that came into fashion with software as a service provided via the internet. Their design is intended to minimize cost per operation and power usage, as they can cost over $100 million for a warehouse and the computers which go inside (the computers must be replaced every few years). Although availability is crucial for SaaS products, the software is designed to compensate for availability failures—unlike supercomputers. Virtual hardware Virtual hardware is software that mimics the function of hardware; it is commonly used in infrastructure as a Service (IaaS) and platform as a Service (PaaS). Embedded system Embedded systems have the most variation in their processing power and cost: from an 8-bit processor that could cost less than USD$0.10, to higher-end processors capable of billions of operations per second and costing over USD$100. Cost is a particular concern with these systems, with designers often choosing the cheapest option that satisfies the performance requirements. Components Case A computer case encloses most of the components of a desktop computer system. It provides mechanical support and protection for internal elements such as the motherboard, disk drives, and power supply, and controls and directs the flow of cooling air over internal components. The case is also part of the system to control electromagnetic interference radiated by the computer and protects internal parts from electrostatic discharge. Large tower cases provide space for multiple disk drives or other peripherals and usually stand on the floor, while desktop cases provide less expansion room. All-in-one style designs include a video display built into the same case. Portable and laptop computers require cases that provide impact protection for the unit. Hobbyists may decorate the cases with colored lights, paint, or other features, in an activity called case modding. Power supply Most personal computer power supply units meet the ATX standard and convert from alternating current (AC) at between 120 and 277 volts provided from a power outlet to direct current (DC) at a much lower voltage: typically 12, 5, or 3.3 volts. Motherboard The motherboard is the main component of a computer. 
It is a board with integrated circuitry that connects the other parts of the computer including the CPU, the RAM, the disk drives (CD, DVD, hard disk, or any others) as well as any peripherals connected via the ports or the expansion slots. The integrated circuit (IC) chips in a computer typically contain billions of tiny metal–oxide–semiconductor field-effect transistors (MOSFETs). Components directly attached to or to part of the motherboard include: At least one CPU (central processing unit), which performs most of the calculations that enable a computer to function. It can be informally referred to as the brain of the computer. It takes program instructions from random-access memory (RAM), interprets and processes them and then sends back results so that the relevant components can carry out the instructions. The CPU is a microprocessor, which is fabricated on a metal–oxide–semiconductor (MOS) integrated circuit (IC) chip. It is usually cooled by a heatsink and fan, or water-cooling system. Many newer CPUs include an on-die graphics processing unit (GPU). The clock speed of the CPU governs how fast it executes instructions and is measured in GHz; typical values lie between 1 GHz and 5 GHz. There is also an increasing trend to add more cores to a processor—with each acting as if it were an independent processor—for increased parallelism. The internal bus connects the CPU to the main memory with several lines for simultaneous communication—typically 50 to 100—which are separated into those for addressing or memory, data, and command or control. Although parallel buses used to be more common, serial buses with a serializer to send more information over the same wire have become more common in the twenty-first century. Computers with multiple processors will need an interconnection bus, usually managed by a northbridge, while the southbridge manages communication with slower peripheral and I/O devices. Random-access memory (RAM), which stores the code and data that are being actively accessed by the CPU in a hierarchy based on when it is expected to be next used. Registers are closest to the CPU but have very limited capacity. CPUs also typically have multiple areas of cache memory that have much more capacity than registers, but much less than main memory; they are slower to access than registers, but much faster than main memory. Caching works by prefetching data before the CPU needs it, reducing latency. If the data the CPU needs is not in the cache, it can be accessed from main memory. Cache memory is typically SRAM, while the main memory is typically DRAM. RAM is volatile, meaning its contents will disappear if the computer powers down. Permanent storage or non-volatile memory is typically higher capacity and cheaper than memory, but takes much longer to access. Historically, such storage was typically provided in the form of a hard drive, but solid-state drives (SSD) are becoming cheaper and are much faster, thus leading to their increasing adoption. USB drives and network or cloud storage are also options. Read-only memory (ROM), which stores the BIOS that runs when the computer is powered on or otherwise begins execution, a process known as Bootstrapping, or booting or booting up. The ROM is typically a nonvolatile BIOS memory chip, which can only be written once with special technology. The BIOS (Basic Input Output System) includes boot firmware and power management firmware. Newer motherboards use Unified Extensible Firmware Interface (UEFI) instead of BIOS. 
The CMOS (complementary MOS) battery, which powers the CMOS memory for date and time in the BIOS chip. This battery is generally a watch battery. Power MOSFETs make up the voltage regulator module (VRM), which controls how much voltage other hardware components receive. Expansion cards An expansion card in computing is a printed circuit board that can be inserted into an expansion slot of a computer motherboard or backplane to add functionality to a computer system via the expansion bus. Expansion cards can be used to obtain or expand on features not offered by the motherboard. Using expansion cards for a video processor used to be common, but modern computers are more likely to instead have a GPU integrated into the motherboard. Input/output Most computers also have an external data bus to connect peripheral devices to the motherboard. Most commonly, Universal Serial Bus (USB) is used. Unlike the internal bus, the external bus is connected using a bus controller that allows the peripheral system to operate at a different speed from the CPU. Input and output devices are used to receive data from the external world or write data respectively. Common examples include keyboards and mice (input) and displays and printers (output). Network interface controllers are used to access the Internet. USB ports also allow power to connected devices—a standard USB supplies power at 5 volts and up to 500 milliamps (2.5 watts), while powered USB ports with additional pins may allow the delivery of more power—up to 6 amps at 24v. Sales Global revenue from computer hardware in 2023 reached $705.17 billion. Recycling Because computer parts contain hazardous materials, there is a growing movement to recycle old and outdated parts. Computer hardware contain dangerous chemicals such as lead, mercury, nickel, and cadmium. According to the EPA these e-wastes have a harmful effect on the environment unless they are disposed of properly. Making hardware requires energy, and recycling parts will reduce air pollution, water pollution, as well as greenhouse gas emissions. Disposing unauthorized computer equipment is in fact illegal. Legislation makes it mandatory to recycle computers through the government approved facilities. Recycling a computer can be made easier by taking out certain reusable parts. For example, the RAM, DVD drive, the graphics card, hard drive or SSD, and other similar removable parts can be reused. Many materials used in computer hardware can be recovered by recycling for use in future production. Reuse of tin, silicon, iron, aluminum, and a variety of plastics that are present in bulk in computers or other electronics can reduce the costs of constructing new systems. Components frequently contain copper, gold, tantalum, silver, platinum, palladium, and lead as well as other valuable materials suitable for reclamation. Toxic computer components The central processing unit contains many toxic materials. It contains lead and chromium in the metal plates. Resistors, semiconductors, infrared detectors, stabilizers, cables, and wires contain cadmium. The circuit boards in a computer contain mercury, and chromium. When these types of materials, and chemicals are disposed improperly will become hazardous for the environment. Environmental effects When e-waste byproducts leach into groundwater, are burned, or get mishandled during recycling, it causes harm. Health problems associated with such toxins include impaired mental development, cancer, and damage to the lungs, liver, and kidneys. 
Computer components contain many toxic substances, like dioxins, polychlorinated biphenyls (PCBs), cadmium, chromium, radioactive isotopes, and mercury. Circuit boards contain considerable quantities of lead–tin solders that are likely to leach into groundwater or to create air pollution if incinerated. Recycling of computer hardware is considered environmentally friendly because it prevents hazardous waste, including heavy metals and carcinogens, from entering the atmosphere, landfills, or waterways. While electronics make up only a small fraction of total waste generated, they are far more dangerous. There is stringent legislation designed to enforce and encourage the sustainable disposal of appliances, the most notable being the Waste Electrical and Electronic Equipment Directive of the European Union and the United States National Computer Recycling Act.
Efforts for minimizing computer hardware waste
E-cycling, the recycling of computer hardware, refers to the donation, reuse, shredding and general collection of used electronics. Generically, the term refers to the process of collecting, brokering, disassembling, repairing and recycling the components or metals contained in used or discarded electronic equipment, otherwise known as electronic waste (e-waste). E-cyclable items include, but are not limited to: televisions, computers, microwave ovens, vacuum cleaners, telephones and cellular phones, stereos, VCRs, and DVD players – just about anything that has a cord, lights up, or takes some kind of battery. Some companies, such as Dell and Apple, will recycle computers of their own make or any other make. Otherwise, a computer can be donated to Computer Aid International, an organization that recycles and refurbishes old computers for hospitals, schools, universities, etc.
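As a rough, illustrative sketch of the cache behaviour described in the memory-hierarchy paragraph earlier in this article, the following toy Python model shows a direct-mapped cache falling back to main memory on a miss. All names, sizes, and latency figures here are invented for illustration only and do not describe any real processor.

# Toy model of a direct-mapped cache sitting between a CPU and main memory.
# All sizes and "latencies" are made-up illustrative values, not real hardware figures.

MAIN_MEMORY = {addr: addr * 2 for addr in range(1024)}  # pretend RAM: address -> value

CACHE_LINES = 8        # number of cache lines (tiny, for illustration)
CACHE_HIT_COST = 1     # pretend cost of a cache hit, in arbitrary time units
MEMORY_COST = 100      # pretend cost of going to main memory

cache = [None] * CACHE_LINES   # each line holds (tag, value); None means the line is empty

def read(addr, stats):
    """Read a value, trying the cache first and falling back to main memory."""
    index = addr % CACHE_LINES   # which cache line this address maps to
    tag = addr // CACHE_LINES    # identifies which address currently occupies that line
    line = cache[index]
    if line is not None and line[0] == tag:
        stats["hits"] += 1
        stats["time"] += CACHE_HIT_COST
        return line[1]           # cache hit: fast path
    # Cache miss: fetch from main memory and fill the line, evicting any old entry.
    stats["misses"] += 1
    stats["time"] += MEMORY_COST
    value = MAIN_MEMORY[addr]
    cache[index] = (tag, value)
    return value

stats = {"hits": 0, "misses": 0, "time": 0}
# Repeatedly touching a small working set shows why caching reduces average latency.
for _ in range(10):
    for addr in range(4):
        read(addr, stats)
print(stats)   # {'hits': 36, 'misses': 4, 'time': 436}

Real caches add prefetching, multi-way associativity, and several levels, but the hit-or-miss fallback shown here is the core mechanism behind the latency reduction described above.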
Technology
Computer hardware
null
44904271
https://en.wikipedia.org/wiki/Microsoft%20Edge
Microsoft Edge
Microsoft Edge (or simply Edge), based on the Chromium open-source project, also known as The New Microsoft Edge or New Edge, is a proprietary cross-platform web browser created by Microsoft, superseding Edge Legacy. In Windows 11, Edge is the only browser available from Microsoft. It was first made available only for Android and iOS in 2017. In late 2018, Microsoft announced it would completely rebuild Edge as a Chromium-based browser with the Blink and V8 engines, which allowed the browser to be ported from Windows 10 to macOS. The new Edge was publicly released in January 2020, and on Xbox as well as Linux in 2021. Edge was also available on Windows 7 and 8/8.1 until early 2023. In February 2023, according to StatCounter, Microsoft Edge became the third most popular browser in the world, behind Apple Safari and Chrome, while , Edge is the second most popular PC/desktop web browser, with Safari sliding to third place (in the U.S., it is tied with Safari for second place). , Edge was used by 11% of PCs worldwide.
Features
The new Microsoft Edge is the default web browser, replacing Edge Legacy. In Windows 11, Edge is the only browser available from Microsoft. To address compatibility issues, it includes an "Internet Explorer mode", which provides the legacy MSHTML browser engine and supports the legacy ActiveX and BHO technologies. The new Edge also has a feature called vertical tabs, which allows users to move tabs to the left side of the screen. As of December 2022, there were more than 9,000 extensions (called add-ons) available for Edge. On February 7, 2023, Microsoft announced a major overhaul to Edge, revamping the user interface with Fluent Design and adding a Bing Chat (later known as Microsoft Copilot) button, which replaces the Discover button. Microsoft also added support for split screen, allowing two tabs to be viewed at the same time. A new feature, "Workspaces", was introduced, which lets the user organize tabs into separate spaces for different tasks. Workspaces are also collaborative: users can invite friends or colleagues and work together in a completely separate shared workspace.
Edge for Business
Starting with Edge version 116, Microsoft released Microsoft Edge for Business, a business mode for Edge that lets end users completely separate work and personal browsing into dedicated browser windows, in addition to offering other features aimed at admins.
Release channels
On April 8, 2019, Microsoft announced the introduction of four release channels: Canary, Dev, Beta, and Stable, and launched the Canary and Dev channels that same day with the first preview builds of the new Edge for those channels. Microsoft collectively calls the Canary, Dev, and Beta channels the "Microsoft Edge insider channels". As a result, Edge updates were decoupled from new versions of Windows. Major versions of Edge Stable are now scheduled for release every 4 weeks, closely following Chromium version releases.
Surf (video game)
In May 2020, an update to Microsoft Edge added Surf, a browser game where players control a surfer attempting to evade obstacles and collect power-ups. Similar to Google Chrome's Dinosaur Game, Surf is accessible from the browser's offline error page and can also be accessed by entering edge://surf into the address bar. The game features three game modes (classic, time trial, and slalom), has character customization, and supports keyboard, mouse, touch, and gamepad controls.
Its gameplay has been compared to the 1991 Microsoft video game SkiFree. In 2021, Surf was updated with limited-time seasonal theming resembling SkiFree: instead of surfing, the player skis down a mountain while being chased by a yeti.
Development
In November 2017, Microsoft released ports of Edge for Android and iOS. The apps feature integration and synchronization with the desktop version on Windows PCs. Due to platform restrictions and other factors, these ports did not use the same layout engine as the desktop version current at the time (Edge Legacy) and instead used the OS-native Blink and WebKit-based engines. On December 6, 2018, Microsoft announced its intent to base Edge on the Chromium source code, under the codename "Anaheim", using the same browser engine as Google Chrome but with enhancements developed by Microsoft. It was also announced that there would be versions of Edge available for older Windows versions, including Windows 7 and Windows 8.x, as well as macOS, and that all versions would be updated on a more frequent basis. According to Microsoft executive Joe Belfiore, the decision for the change came after CEO Satya Nadella told the team in 2017 that the product needed to be better and pushed for replacing its in-house rendering engine with an open-source one. On April 8, 2019, the first builds of the new Edge for Windows were released to the public. On May 20, 2019, the first preview builds of Edge for macOS were released to the public, marking the first time in 13 years that a Microsoft browser was available on the Mac platform; the last such browser was Internet Explorer for Mac, which was withdrawn in January 2006. In an IAmA post on Reddit on June 18, 2019, an Edge developer stated that it was theoretically possible for a Linux version to be developed in the future, but that no work had actually started on that possibility. On June 19, 2019, Microsoft made Edge available on older Windows versions for testing. On August 20, 2019, Microsoft made its first beta build of Edge available for Windows and macOS. At Microsoft Ignite, Microsoft released an updated version of the Edge logo. The new Edge was released on January 15, 2020, and was gradually rolled out to all Windows 10 users; it was also rolled out to Windows users via Windows Update. Windows Vista and earlier were not supported when Edge was made available for older Windows versions. On September 22, 2020, Microsoft announced that a beta version of Edge for Linux would be available in preview form in October 2020. This came after the company announced in November 2019 that a Linux version would be developed and confirmed in May 2020 that the Linux version was in development. The first preview build for Linux was released on October 20, 2020. Full support for the new Edge on older Windows versions was scheduled to end on January 15, 2022, but was later extended to January 15, 2023. On April 29, 2022, Microsoft announced integrated VPN support for Microsoft Edge, bringing it in line with Chrome and Firefox, which offer similar privacy features. A free version of the integrated Edge VPN is available, but it is limited to 1 GB of data transfer. On November 14, 2024, Microsoft announced that it would drop support for CPUs that lack the SSE3 instruction set with the release of Edge version 128.
Privacy
Edge sends the images that users view online to Microsoft servers by default, although Microsoft has stated that it encrypts images before transfer.
Reception
Microsoft's switch to Blink as Edge's engine has met a mixed reception. The move increases the consistency of web platform compatibility between major browsers. At the same time, it has attracted criticism, as it reduces diversity in the overall web browser market and, with Microsoft ceding its independently developed browser engine, increases Google's influence over the browser market. According to Douglas J. Leith, a computer science professor from Trinity College, Dublin, Microsoft Edge is among the least private browsers. He explained, "from a privacy perspective Microsoft Edge and Yandex are much more worrisome than the other browsers studied. Both send identifiers that are linked to the device hardware and so persist across fresh browser installs and can also be used to link different apps running on the same device. Edge sends the hardware UUID of the device to Microsoft, a strong and enduring identifier that cannot be easily changed or deleted." In response, a spokesperson for Microsoft Edge explained that it uses user diagnostic data to improve the product.
In June 2020, users criticized newly released Windows updates that installed Edge and imported some user data from Chrome and Firefox prior to obtaining user permission. Microsoft responded by stating that if a user rejects giving Edge data import permission, then Edge will delete the imported data. However, if the browser crashes before the user has a chance to reject the import, the already imported data will not be cleared. The Verge called these "spyware tactics" and called Edge's "first run experience" a "dark pattern".
Microsoft uses proprietary URL handlers in Windows 10 and 11 to redirect URLs accessed via system search functions to Edge, deliberately ignoring the user's choice of default browser. In November 2021, a patch was released to frustrate a workaround employed by the third-party tool "EdgeDeflector", with a Microsoft spokesperson stating that search in the Windows shell is an "end-to-end customer experience" that is not designed to be modified. The developer of EdgeDeflector, Daniel Aleksandersen, called this "clearly a user-hostile move that sees Windows compromise its own product usability in order to make it more difficult to use competing products."
In November 2021, Microsoft announced that it would display integrated advertising for the buy now, pay later service Zip Pay in Edge during online purchases eligible for financing via the service, and allow users to link their Microsoft account to expedite registration for the service. Microsoft claims that it "does not collect a fee for connecting users to loan providers." This decision was met with criticism from users and the press, who argued that the feature was added bloat.
Controversy
In December 2021, Microsoft began testing the display of in-browser prompts on the Google Chrome website to discourage downloading the browser. Similar prompts intended to discourage Google Chrome downloads also appear when searching for "Chrome" or "browser" on Microsoft's Bing search engine. In February 2023, users reported seeing large banner advertisements for Microsoft Edge on the Chrome download page, a move that was criticized for deceptively altering part of Google's official website. In October 2023, Microsoft began testing the display of a sidebar containing a survey related to Chrome when the browser is downloaded.
Technology
Browsers
null
4034788
https://en.wikipedia.org/wiki/Aggradation
Aggradation
Aggradation (or alluviation) is the term used in geology for the increase in land elevation, typically in a river system, due to the deposition of sediment. Aggradation occurs in areas in which the supply of sediment is greater than the amount of material that the system is able to transport. The mass balance between sediment being transported and sediment in the bed is described by the Exner equation (a standard form is given at the end of this article).
Typical aggradational environments include lowland alluvial rivers, river deltas, and alluvial fans. Aggradational environments are often undergoing slow subsidence, which balances the increase in land surface elevation due to aggradation. After millions of years, an aggradational environment will become a sedimentary basin, which contains the deposited sediment, including paleochannels and ancient floodplains.
Aggradation can be caused by changes in climate, land use, and geologic activity, such as volcanic eruptions, earthquakes, and faulting. For example, volcanic eruptions may lead to rivers carrying more sediment than the flow can transport: this leads to the burial of the old channel and its floodplain. In another example, the quantity of sediment entering a river channel may increase when the climate becomes drier. The increase in sediment is caused by a decrease in soil binding that results from plant growth being suppressed. The drier conditions cause river flow to decrease at the same time as sediment is being supplied in greater quantities, resulting in the river becoming choked with sediment.
In 2009, a report by researchers from the University of Colorado at Boulder in the journal Nature Geoscience said that reduced aggradation was contributing to an increased risk of flooding in many river deltas.
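For reference, the Exner equation mentioned above is commonly written in the following one-dimensional form, where \eta is bed elevation, t is time, \lambda_p is the porosity of the bed sediment, q_s is the volumetric sediment transport rate per unit channel width, and x is the downstream coordinate (the notation follows common sedimentology usage rather than any single source):

\frac{\partial \eta}{\partial t} = -\frac{1}{1 - \lambda_p}\,\frac{\partial q_s}{\partial x}

When the sediment flux decreases in the downstream direction (\partial q_s / \partial x < 0), the right-hand side is positive and the bed aggrades; when the flux increases downstream, the bed degrades instead.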
Physical sciences
Sedimentology
Earth science
4037035
https://en.wikipedia.org/wiki/Terrestrial%20locomotion
Terrestrial locomotion
Terrestrial locomotion has evolved as animals adapted from aquatic to terrestrial environments. Locomotion on land raises different problems than locomotion in water, with reduced friction being replaced by the increased effects of gravity.
As viewed from evolutionary taxonomy, there are three basic forms of animal locomotion in the terrestrial environment:
legged – moving by using appendages
limbless locomotion – moving without legs, primarily using the body itself as a propulsive structure
rolling – rotating the body over the substrate
Some terrains and terrestrial surfaces permit or demand alternative locomotive styles. A sliding component to locomotion becomes possible on slippery surfaces (such as ice and snow), where locomotion is aided by potential energy, or on loose surfaces (such as sand or scree), where friction is low but purchase (traction) is difficult. Humans, especially, have adapted to sliding over terrestrial snowpack and terrestrial ice by means of ice skates, snow skis, and toboggans. Aquatic animals adapted to polar climates, such as ice seals and penguins, also take advantage of the slipperiness of ice and snow as part of their locomotion repertoire. Beavers are known to take advantage of a mud slick known as a "beaver slide" over a short distance when passing from land into a lake or pond. Human locomotion in mud is improved through the use of cleats. Some snakes use an unusual method of movement known as sidewinding on sand or loose soil. Animals caught in terrestrial mudflows are subject to involuntary locomotion; this may be beneficial to the distribution of species with limited locomotive range under their own power. There is less opportunity for passive locomotion on land than by sea or air, though parasitism (hitchhiking) is available toward this end, as in all other habitats.
Many species of monkeys and apes use a form of arboreal locomotion known as brachiation, with the forelimbs as the prime mover. Some elements of the gymnastic sport of uneven bars resemble brachiation, but most adult humans do not have the upper body strength required to sustain brachiation. Many other species of arboreal animals with tails will incorporate their tails into the locomotion repertoire, if only as a minor component of their suspensory behaviors.
Locomotion on irregular, steep surfaces requires agility and dynamic balance, known as sure-footedness. Mountain goats are famed for navigating vertiginous mountainsides where the least misstep could lead to a fatal fall.
Many species of animals must sometimes locomote while safely conveying their young. Most often this task is performed by adult females. Some species are specially adapted to conveying their young without occupying their limbs, such as marsupials with their special pouch. In other species, the young are carried on the mother's back, and the offspring have instinctual clinging behaviours. Many species incorporate specialized transportation behaviours as a component of their locomotion repertoire, such as the dung beetle when rolling a ball of dung, which combines both rolling and limb-based elements.
The remainder of this article focuses on the anatomical and physiological distinctions involving terrestrial locomotion from the taxonomic perspective.
Legged locomotion
Movement on appendages is the most common form of terrestrial locomotion; it is the basic form of locomotion of two major groups with many terrestrial members, the vertebrates and the arthropods.
Important aspects of legged locomotion are posture (the way the body is supported by the legs), the number of legs, and the functional structure of the leg and foot. There are also many gaits, ways of moving the legs to locomote, such as walking, running, or jumping. Posture Appendages can be used for movement in a lot of ways: the posture, the way the body is supported by the legs, is an important aspect. There are three main ways in which vertebrates support themselves with their legs – sprawling, semi-erect, and fully erect. Some animals may use different postures in different circumstances, depending on the posture's mechanical advantages. There is no detectable difference in energetic cost between stances. The "sprawling" posture is the most primitive, and is the original limb posture from which the others evolved. The upper limbs are typically held horizontally, while the lower limbs are vertical, though upper limb angle may be substantially increased in large animals. The body may drag along the ground, as in salamanders, or may be substantially elevated, as in monitor lizards. This posture is typically associated with trotting gaits, and the body flexes from side-to-side during movement to increase step length. All limbed reptiles and salamanders use this posture, as does the platypus and several species of frogs that walk. Unusual examples can be found among amphibious fish, such as the mudskipper, which drag themselves across land on their sturdy fins. Among the invertebrates, most arthropods – which includes the most diverse group of animals, the insects – have a stance best described as sprawling. There is also anecdotal evidence that some octopus species (such as the genus Pinnoctopus) can also drag themselves across land a short distance by hauling their body along by their tentacles (for example to pursue prey between rockpools) – there may be video evidence of this. The semi-erect posture is more accurately interpreted as an extremely elevated sprawling posture. This mode of locomotion is typically found in large lizards such as monitor lizards and tegus. Mammals and birds typically have a fully erect posture, though each evolved it independently. In these groups the legs are placed beneath the body. This is often linked with the evolution of endothermy, as it avoids Carrier's constraint and thus allows prolonged periods of activity. The fully erect stance is not necessarily the "most-evolved" stance; evidence suggests that crocodilians evolved a semi-erect stance in their forelimbs from ancestors with fully erect stance as a result of adapting to a mostly aquatic lifestyle, though their hindlimbs are still held fully erect. For example, the mesozoic prehistoric crocodilian Erpetosuchus is believed to have had a fully erect stance and been terrestrial. Number of legs The number of locomotory appendages varies much between animals, and sometimes the same animal may use different numbers of its legs in different circumstances. The best contender for unipedal movement is the springtail, which while normally hexapedal, hurls itself away from danger using its furcula, a tail-like forked rod that can be rapidly unfurled from the underside of its body. A number of species move and stand on two legs, that is, they are bipedal. The group that is exclusively bipedal is the birds, which have either an alternating or a hopping gait. There are also a number of bipedal mammals. Most of these move by hopping – including the macropods such as kangaroos and various jumping rodents. 
Only a few mammals such as humans and the ground pangolin commonly show an alternating bipedal gait. In humans, alternating bipedalism is characterized by a bobbing motion, which is due to the utilization of gravity when falling forward. This form of bipedalism has demonstrated significant energy savings. Cockroaches and some lizards may also run on their two hind legs.
With the exception of the birds, terrestrial vertebrate groups with legs are mostly quadrupedal – the mammals, reptiles, and the amphibians usually move on four legs. There are many quadrupedal gaits.
The most diverse group of animals on earth, the insects, are included in a larger taxon known as hexapods, most of which are hexapedal, walking and standing on six legs. Exceptions among the insects include praying mantises and water scorpions, which are quadrupeds with their front two legs modified for grasping; some butterflies, such as the Lycaenidae (blues and hairstreaks), which use only four legs; and some kinds of insect larvae that may have no legs (e.g., maggots) or additional prolegs (e.g., caterpillars).
Spiders and many of their relatives move on eight legs – they are octopedal. However, some creatures move on many more legs. Terrestrial crustaceans may have a fair number – woodlice have fourteen legs. Also, as previously mentioned, some insect larvae such as caterpillars and sawfly larvae have up to five (caterpillars) or nine (sawflies) additional fleshy prolegs in addition to the six legs normal for insects. Some species of invertebrate have even more legs: the unusual velvet worm has stubby legs along the underside of its body, with around several dozen pairs of legs. Centipedes have one pair of legs per body segment, with typically around 50 legs, but some species have over 200. The terrestrial animals with the most legs are the millipedes. They have two pairs of legs per body segment, with common species having between 80 and 400 legs overall – and the rare species Illacme plenipes having up to 750 legs. Animals with many legs typically move them in metachronal rhythm, which gives the appearance of waves of motion travelling forward or backward along their rows of legs. Millipedes, caterpillars, and some small centipedes move with the leg waves travelling forward as they walk, while larger centipedes move with the leg waves travelling backward.
Leg and foot structure
The legs of tetrapods, the main group of terrestrial vertebrates (terrestrial vertebrates also include the amphibious fishes), have internal bones, with externally attached muscles for movement, and the basic form has three key joints: the shoulder joint, the knee joint, and the ankle joint, at which the foot is attached. Within this form there is much variation in structure and shape. An alternative form of vertebrate 'leg' to the tetrapod leg is the fins found on amphibious fish. Also, a few tetrapods, such as the macropods, have adapted their tails as additional locomotory appendages.
The fundamental form of the vertebrate foot has five digits; however, some animals have fused digits, giving them fewer, and some early fishapods had more – Acanthostega had eight toes. Among tetrapods, only the ichthyosaurs evolved more than five digits, during their transition back from land to water, as their limb terminations became flippers. Feet have evolved many forms depending on the animal's needs. One key variation is where on the foot the animal's weight is placed. Some vertebrates – amphibians, reptiles, and some mammals such as humans, bears, and rodents – are plantigrade.
This means the weight of the body is placed on the heel of the foot, giving it strength and stability. Most mammals, such as cats and dogs, are digitigrade, walking on their toes, giving them what many people mistake as a “backward knee”, which is really their ankle. The extension of the joint helps store momentum and acts as a spring, allowing digitigrade creatures more speed. Digitigrade mammals are also often adept at quiet movement. Birds are also digitigrade. Hooved mammals are known as ungulates, walking on the fused tips of their fingers and toes. This can vary from odd-toed ungulates, such as horses, rhinos, and a few wild African ungulates, to even-toed ungulates, such as pigs, cows, deer, and goats. Mammals whose limbs have adapted to grab objects have what are called prehensile limbs. This term can be attributed to front limbs as well as tails for animals such as monkeys and some rodents. All animals that have prehensile front limbs are plantigrade, even if their ankle joint looks extended (squirrels are a good example). Among terrestrial invertebrates there are a number of leg forms. The arthropod legs are jointed and supported by hard external armor, with the muscles attached to the internal surface of this exoskeleton. The other group of legged terrestrial invertebrates, the velvet worms, have soft stumpy legs supported by a hydrostatic skeleton. The prolegs that some caterpillars have in addition to their six more-standard arthropod legs have a similar form to those of velvet worms, and suggest a distant shared ancestry. Gaits Animals show a vast range of gaits, the order that they place and lift their appendages in locomotion. Gaits can be grouped into categories according to their patterns of support sequence. For quadrupeds, there are three main categories: walking gaits, running gaits, and leaping gaits. In one system (relating to horses), there are 60 discrete patterns: 37 walking gaits, 14 running gaits, and 9 leaping gaits. Walking is the most common gait, where some feet are on the ground at any given time, and found in almost all legged animals. In an informal sense, running is considered to occur when at some points in the stride all feet are off the ground in a moment of suspension. Technically, however, moments of suspension occur in both running gaits (such as trot) and leaping gaits (such as canter and gallop). Gaits involving one or more moments of suspension can be found in many animals, and compared to walking they are faster but more energetically costly forms of locomotion. Animals will use different gaits for different speeds, terrain, and situations. For example, horses show four natural gaits, the slowest horse gait is the walk, then there are three faster gaits which, from slowest to fastest, are the trot, the canter, and the gallop. Animals may also have unusual gaits that are used occasionally, such as for moving sideways or backwards. For example, the main human gaits are bipedal walking and running, but they employ many other gaits occasionally, including a four-legged crawl in tight spaces. In walking, and for many animals running, the motion of legs on either side of the body alternates, i.e. is out of phase. Other animals, such as a horse when galloping, or an inchworm, alternate between their front and back legs. In saltation (hopping) all legs move together, instead of alternating. As a main means of locomotion, this is usually found in bipeds, or semi-bipeds. 
Among the mammals saltation is commonly used among kangaroos and their relatives, jerboas, springhares, kangaroo rats, hopping mice, gerbils, and sportive lemurs. Certain tendons in the hind legs of kangaroos are very elastic, allowing kangaroos to effectively bounce along conserving energy from hop to hop, making saltation a very energy efficient way to move around in their nutrient poor environment. Saltation is also used by many small birds, frogs, fleas, crickets, grasshoppers, and water fleas (a small planktonic crustacean). Most animals move in the direction of their head. However, there are some exceptions. Crabs move sideways, and naked mole rats, which live in tight tunnels and can move backward or forward with equal facility. Crayfish can move backward much faster than they can move forward. Gait analysis is the study of gait in humans and other animals. This may involve videoing subjects with markers on particular anatomical landmarks and measuring the forces of their footfall using floor transducers (strain gauges). Skin electrodes may also be used to measure muscle activity. Limbless locomotion There are a number of terrestrial and amphibious limbless vertebrates and invertebrates. These animals, due to lack of appendages, use their bodies to generate propulsive force. These movements are sometimes referred to as "slithering" or "crawling", although neither are formally used in the scientific literature and the latter term is also used for some animals moving on all four limbs. All limbless animals come from cold-blooded groups; there are no endothermic limbless animals, i.e. there are no limbless birds or mammals. Lower body surface Where the foot is important to the legged mammal, for limbless animals the underside of the body is important. Some animals such as snakes or legless lizards move on their smooth dry underside. Other animals have various features that aid movement. Molluscs such as slugs and snails move on a layer of mucus that is secreted from their underside, reducing friction and protecting from injury when moving over sharp objects. Earthworms have small bristles (setae) that hook into the substrate and help them move. Some animals, such as leeches, have suction cups on either end of the body allowing two anchor movement. Type of movement Some limbless animals, such as leeches, have suction cups on either end of their body, which allow them to move by anchoring the rear end and then moving forward the front end, which is then anchored and then the back end is pulled in, and so on. This is known as two-anchor movement. A legged animal, the inchworm, also moves like this, clasping with appendages at either end of its body. Limbless animals can also move using pedal locomotory waves, rippling the underside of the body. This is the main method used by molluscs such as slugs and snails, and also large flatworms, some other worms, and even earless seals. The waves may move in the opposite direction to motion, known as retrograde waves, or in the same direction as motion, known as direct waves. Earthworms move by retrograde waves alternatively swelling and contracting down the length of their body, the swollen sections being held in place using setae. Aquatic molluscs such as limpets, which are sometimes out of the water, tend to move using retrograde waves. However, terrestrial molluscs such as slugs and snails tend to use direct waves. Lugworms and seals also use direct waves. 
Most snakes move using lateral undulation, where a lateral wave travels down the snake's body in the opposite direction to the snake's motion and pushes the snake off irregularities in the ground; this mode of locomotion requires these irregularities to function. Another form of locomotion, rectilinear locomotion, is used at times by some snakes, especially large ones such as pythons and boas. Here, large scales on the underside of the body known as scutes are used to push backwards and downwards. This is effective on a flat surface and is used for slow, silent movement, such as when stalking prey. Snakes use concertina locomotion for moving slowly in tunnels; here the snake alternately braces parts of its body against its surroundings. Finally, the caenophidian snakes use the fast and unusual method of movement known as sidewinding on sand or loose soil. The snake cycles through throwing the front part of its body in the direction of motion and bringing the back part of its body into line crosswise.
Rolling
Although animals have never evolved wheels for locomotion, a small number of animals will move at times by rolling their whole body. Rolling animals can be divided into those that roll under the force of gravity or wind and those that roll using their own power.
Gravity or wind assisted
The web-toed salamander lives on steep hills in the Sierra Nevada mountains. When disturbed or startled it coils itself up into a ball, often causing it to roll downhill. The pebble toad (Oreophrynella nigra) lives atop tepuis in the Guiana highlands of South America. When threatened, often by tarantulas, it rolls into a ball and, typically being on an incline, rolls away under gravity like a loose pebble. Namib wheeling spiders (Carparachne spp.), found in the Namib desert, will actively roll down sand dunes. This action can be used to escape predators such as the tarantula wasps (Pompilidae), which lay their eggs in a paralyzed spider for their larvae to feed on when they hatch. The spiders flip their body sideways and then cartwheel over their bent legs. The rotation is fast, the golden wheel spider (Carparachne aureoflava) making up to 20 revolutions per second, moving the spider at . Coastal tiger beetle larvae, when threatened, can flick themselves into the air and curl their bodies to form a wheel, which the wind blows, often uphill, as far as and as fast as . They also may have some ability to steer themselves in this state. Pangolins, a type of mammal covered in thick scales, roll into a tight ball when threatened. Pangolins have been reported to roll away from danger by both gravity-assisted and self-powered methods. A pangolin in hill country in Sumatra, to flee from a researcher, ran to the edge of a slope and curled into a ball to roll down the slope, crashing through the vegetation and covering an estimated or more in 10 seconds.
Self-powered
Caterpillars of the mother-of-pearl moth, Pleuroptya ruralis, when attacked, will touch their heads to their tails and roll backwards, up to 5 revolutions at about , which is about 40 times their normal speed. Nannosquilla decemspinosa, a species of long-bodied, short-legged mantis shrimp, lives in shallow sandy areas along the Pacific coast of Central and South America. When stranded by a low tide, the stomatopod lies on its back and performs backwards somersaults over and over. The animal moves up to at a time by rolling 20–40 times, with speeds of around 72 revolutions per minute. That is 1.5 body lengths per second ().
Researchers estimate that the stomatopod acts as a true wheel around 40% of the time during this series of rolls. The remaining 60% of the time it has to "jumpstart" a roll by using its body to thrust itself upwards and forwards. Pangolins have also been reported to roll away from danger by self-powered methods. Witnessed by a lion researcher in the Serengeti in Africa, a group of lions surrounded a pangolin, but could not get purchase on it when it rolled into a ball, and so the lions sat around it waiting and dozing. Surrounded by lions, it would unroll itself slightly and give itself a push to roll some distance, until by doing this multiple times it could get far enough away from the lions to be safe. Moving like this would allow a pangolin to cover distance while still remaining in a protective armoured ball. Moroccan flic-flac spiders, if provoked or threatened, can escape by doubling their normal walking speed using forward or backward flips similar to acrobatic flic-flac movements. Limits and extremes The fastest terrestrial animal is the cheetah, which can attain maximal sprint speeds of approximately 104 km/h (64 mph). The fastest running lizard is the black iguana, which has been recorded moving at speed of up to 34.9 km/h (21.7 mph).
Biology and health sciences
Ethology
Biology
43173137
https://en.wikipedia.org/wiki/Alcohol%20%28drug%29
Alcohol (drug)
Alcohol (), sometimes referred to by the chemical name ethanol, is the second most consumed psychoactive drug globally behind caffeine. Alcohol is a central nervous system (CNS) depressant, decreasing electrical activity of neurons in the brain. The World Health Organization (WHO) classifies alcohol as a toxic, psychoactive, dependence-producing, and carcinogenic substance. Alcohol is found in fermented beverages such as beer, wine, and distilled spirit – in particular, rectified spirit, and serves various purposes; Certain religions integrate alcohol into their spiritual practices. For example, the Catholic Church requires alcoholic sacramental wine in the Eucharist, and permits moderate consumption of alcohol in daily life as a means of experiencing joy. Alcohol is also used as a recreational drug, for example by college students, for self-medication, and in warfare. It is also frequently involved in alcohol-related crimes such as drunk driving, public intoxication, and underage drinking. Short-term effects from moderate consumption include relaxation, decreased social inhibition, and euphoria, while binge drinking may result in cognitive impairment, blackout, and hangover. Excessive alcohol intake causes alcohol poisoning, characterized by unconsciousness or, in severe cases, death. Long-term effects are considered to be a major global public health issue and includes alcoholism, abuse, alcohol withdrawal, fetal alcohol spectrum disorder (FASD), liver disease, hepatitis, cardiovascular disease (e.g., cardiomyopathy), polyneuropathy, alcoholic hallucinosis, long-term impact on the brain (e.g., brain damage, dementia, and Marchiafava–Bignami disease), and cancers. For roughly two decades, the International Agency for Research on Cancer (IARC) of the WHO has classified alcohol as a Group 1 Carcinogen. Globally, alcohol use was the seventh leading risk factor for both deaths and DALY in 2016. According to WHO's Global status report on alcohol and health 2018, more than 200 health issues are associated with harmful alcohol consumption, ranging from liver diseases, road injuries and violence, to cancers, cardiovascular diseases, suicides, tuberculosis, and HIV/AIDS. Moreover, a 2024 WHO report indicates that these harmful consequences of alcohol use result in approximately 2.6 million deaths annually, accounting for 4.7% of all global deaths. In 2023, the WHO declared that 'there is no safe amount of alcohol consumption' and that 'the risk to the drinker's health starts from the first drop of any alcoholic beverage.' National agencies are aligning with the WHO's recommendations and increasingly advocating for abstinence from alcohol consumption. They highlight that even minimal alcohol intake is associated with elevated health risks, emphasizing that reducing alcohol intake is beneficial for everyone, regardless of their current drinking levels. Uses Dutch courage Dutch courage, also known as pot-valiance or liquid courage, refers to courage gained from intoxication with alcohol. Alcohol use among college students is often used as "liquid courage" in the hookup culture, for them to make a sexual advance in the first place. However, a recent trend called "dry dating" is gaining popularity to replace "liquid courage", which involves going on dates without consuming alcohol. Consuming alcohol prior to visiting female sex workers is a common practice among some men. Sex workers often resort to using drugs and alcohol to cope with stress. 
Alcohol, when consumed in high doses, is considered to be an anaphrodisiac.
Criminal
Albeit not a valid intoxication defense, the weakening of inhibitions through drunkenness is occasionally used as a tool to commit planned offenses, such as property crimes including theft and robbery, and violent crimes including assault, murder, or rape – which sometimes, but not always, occurs in alcohol-facilitated sexual assaults where the victim is also drugged.
Warfare
Alcohol has a long association with military use, and has been called "liquid courage" for its role in preparing troops for battle, anaesthetizing injured soldiers, and celebrating military victories. It has also served as a coping mechanism for combat stress reactions and a means of decompression from combat to everyday life. However, this reliance on alcohol can have negative consequences for physical and mental health. Military and veteran populations face significant challenges in addressing the co-occurrence of PTSD and alcohol use disorder. Military personnel who show symptoms of PTSD, major depressive disorder, alcohol use disorder, and generalized anxiety disorder show higher levels of suicidal ideation. Alcohol consumption in the US military is higher than in any other profession, according to CDC data from 2013–2017. The Department of Defense Survey of Health Related Behaviors among Active Duty Military Personnel reported that 47% of active duty members engage in binge drinking, with another 20% engaging in heavy drinking in the past 30 days. Reports from the Russian invasion of Ukraine in 2022 and since have suggested that Russian soldiers are drinking significant amounts of alcohol (as well as consuming harder drugs), which increases their losses. Some reports suggest that, on occasion, alcohol and drugs have been provided to lower-quality troops by their commanders in order to facilitate their use as expendable cannon fodder.
Food energy
The USDA uses a figure of per gram of alcohol ( per ml) for calculating food energy. For distilled spirits, a standard serving in the United States is , which, at 40% ethanol (80 proof), would be 14 grams and 98 calories. However, alcoholic drinks are considered empty-calorie foods because, other than food energy, they contribute no essential nutrients. Alcohol increases the insulin response to glucose, promoting fat storage and hindering the oxidation of carbohydrates and fat. The excess acetyl-CoA generated as the liver processes alcohol can lead to fatty liver disease and eventually alcoholic liver disease. This progression can lead to further complications: alcohol-related liver disease may cause exocrine pancreatic insufficiency, the inability to properly digest food due to a lack or reduction of digestive enzymes made by the pancreas. The use of alcohol as a staple food source is considered impractical because it increases the blood alcohol content (BAC). However, alcohol is a significant source of food energy for individuals with alcoholism and those who engage in binge drinking. For example, individuals with drunkorexia engage in a combination of self-imposed malnutrition and binge drinking to avoid weight gain from alcohol, to save money for purchasing alcohol, and to facilitate alcohol intoxication. Also, in alcoholics who get most of their daily calories from alcohol, a deficiency of thiamine can produce Korsakoff's syndrome, which is associated with serious brain damage.
Medical
Spiritus fortis is a medical term for ethanol solutions with 95% ABV.
When taken by mouth or injected into a vein, ethanol is used to treat methanol or ethylene glycol toxicity when fomepizole is not available. Ethanol, when used to treat or prevent methanol and/or ethylene glycol toxicity, competes with the other alcohols for the alcohol dehydrogenase enzyme, lessening their metabolism into toxic aldehyde and carboxylic acid derivatives and reducing the more serious toxic effects of the glycols, which crystallize in the kidneys.
Recreational
Drinking culture is the set of traditions and social behaviors that surround the consumption of alcoholic beverages as a recreational drug and social lubricant. Although alcoholic beverages and social attitudes toward drinking vary around the world, nearly every civilization has independently discovered the processes of brewing beer, fermenting wine, and distilling spirits. Common drinking styles include moderate drinking, social drinking, and binge drinking.
Drinking styles
In today's society, there is growing awareness of the risks of alcohol, reflected in a variety of approaches to alcohol use, each emphasizing responsible choices. "Sober curious" describes a mindset or approach in which someone consciously chooses to reduce or eliminate alcohol consumption – not drinking and driving, staying aware of one's surroundings, not pressuring others to drink, and being able to quit at any time – without necessarily being committed to complete sobriety. A 2014 report based on the National Survey on Drug Use and Health found that only 10% of either "heavy drinkers" or "binge drinkers", defined according to the criteria described below, also met the criteria for alcohol dependence, while only 1.3% of non-binge drinkers met the criteria. An inference drawn from this study is that evidence-based policy strategies and clinical preventive services may effectively reduce binge drinking without requiring addiction treatment in most cases.
Binge drinking
Binge drinking, or heavy episodic drinking, is drinking alcoholic beverages with an intention of becoming intoxicated by heavy consumption of alcohol over a short period of time, but definitions vary considerably. Binge drinking is a style of drinking that is popular in several countries worldwide, and overlaps somewhat with social drinking since it is often done in groups. Drinking games involve consuming alcohol as part of the gameplay; they can be risky because they can encourage people to drink more than they intended to. Recent studies link binge drinking habits to a decline in quality of life and a lifespan shortened by 3–6 years. Alcohol-based sugar-sweetened beverages are closely linked to episodic drinking in adolescents; sugar-infused alcoholic beverages include alcopops and liqueurs. Pregame heavy episodic drinking (4+/5+ drinks for women/men) is linked to a higher likelihood of engaging in high-intensity drinking (8+/10+ drinks), according to a 2022 study. The study also found that students who pregame at this level report more negative consequences compared to days with moderate pregame drinking and days without any pregame drinking. Hazing has a long-standing presence in college fraternities, often involving alcohol as a form of punishment. This can lead to dangerous levels of intoxication and severe ethanol poisoning, sometimes resulting in fatalities. High serum ethanol levels are common among affected students.
Definition
Binge drinking refers to the consumption of multiple alcoholic drinks at one sitting or within a few hours of one another. The National Institute on Alcohol Abuse and Alcoholism (NIAAA) defines binge drinking as a pattern of alcohol consumption that brings a person's blood alcohol concentration (BAC) to 0.08 percent or above. This typically occurs when men consume five or more US standard drinks, or women consume four or more drinks, within about two hours. The Substance Abuse and Mental Health Services Administration (SAMHSA) defines binge drinking slightly differently, focusing on the number of drinks consumed on a single occasion. According to SAMHSA, binge drinking is consuming five or more drinks for men, or four or more drinks for women, on the same occasion on at least one day in the past month.
Heavy drinking
Alcohol in association football has long been a complex issue, with significant cultural and behavioral implications. Football is widely watched in various settings such as television broadcasts, sports bars, and arenas, contributing to the drinking culture surrounding the sport. A 2007 study at the University of Texas at Austin monitored the drinking habits of 541 students over two football seasons. It revealed that high-profile game days ranked among the heaviest drinking occasions, similar to New Year's Eve. Male students increased their consumption for all games, while socially active female students drank heavily during away games. Lighter drinkers also showed a higher likelihood of risky behaviors during away games as their intoxication increased. This research highlights specific drinking patterns linked to collegiate sports events.
Heavy drinking increases significantly during December, particularly around Christmas and New Year's, leading to a rise in alcohol sales, consumption, and related harmful events and deaths. Because of increased alcohol consumption at festivities and poorer road conditions during the winter months, alcohol-related road traffic accidents increase over the Christmas and holiday season.
According to a 2022 study, recreational heavy drinking and intoxication have become increasingly prevalent among Nigerian youth in Benin City. Traditionally, alcohol use was more accepted for men, while youth drinking was often taboo. Today, many young people engage in heavy drinking for pleasure and excitement. Peer networks encourage this behavior through rituals that promote intoxication and provide care for inebriated friends. The findings suggest a need to reconsider cultural prohibitions on youth drinking and advocate for public health interventions promoting low-risk drinking practices.
Definition
Heavy drinking should not be confused with heavy episodic drinking, commonly known as binge drinking, which takes place over a brief period of a few hours. However, multiple binge drinking sessions within a short timeframe can be classified as heavy drinking. Heavy alcohol use refers to consumption patterns that take place within a single day, week, or month, depending on the amount consumed:
The Centers for Disease Control and Prevention defines heavy drinking as consuming more than 8 drinks per week for women and more than 15 drinks per week for men.
NIAAA defines heavy alcohol use as the consumption of five or more standard drinks in a single day or 15 or more drinks within a week for men, while for women, it is defined as consuming four or more drinks in a day or eight or more drinks per week.
SAMHSA considers heavy alcohol use to be engaging in binge drinking behaviors on five or more days within a month. Light, moderate, responsible, and social drinking In many cultures, good news is often celebrated by a group sharing alcoholic drinks. For example, sparkling wine may be used to toast the bride at a wedding, and alcoholic drinks may be served to celebrate a baby's birth. Buying someone an alcoholic drink is often considered a gesture of goodwill, an expression of gratitude, or to mark the resolution of a dispute. Definitions Light drinking, moderate drinking, responsible drinking, and social drinking are often used interchangeably, but with slightly different connotations: Light drinking - "Alcohol has been found to increase risk for cancer, and for some types of cancer, the risk increases even at low levels of alcohol consumption (less than 1 drink in a day). Caution, therefore, is recommended.", according to the Dietary Guidelines for Americans (DGA). "The Committee recommended that adults limit alcohol intake to no more than 1 drink per day for both women and men for better health" (DGA). Light alcohol consumption showed no connection to most cancers, but a slight rise in the likelihood of melanoma, breast cancer in females, and prostate cancer in males was observed. Moderate drinking - strictly focuses on the amount of alcohol consumed, following alcohol consumption recommendations. This is called "drinking in moderation". The CDC defines "Moderate drinking is having one drink or less in a day for women, or two drinks or less in a day for men." According to the WHO nearly half of all alcohol-attributable cancers in the WHO European Region are linked to alcohol consumption, even from "light" or "moderate" drinking – "less than 1.5 litres of wine or less than 3.5 litres of beer or less than 450 millilitres of spirits per week". However, moderate drinking is associated with a further slight increase in cancer risk. Also, moderate drinking may disrupt normal brain functioning. Responsible drinking - as defined by alcohol industry standards, often emphasizes personal choice and risk management, unlike terms like "social drinking" or "moderate drinking". Critics argue that the alcohol industry's definition does not always align with official recommendations for safe drinking limits. Social drinking - refers to casual drinking of alcoholic beverages in a social setting (for example bars, nightclubs, or parties) without an intent to become intoxicated. A social drinker is also defined as a person who only drinks alcohol during social events, such as parties, and does not drink while alone (e.g., at home). While social drinking often involves moderation, it does not strictly emphasize safety or specific quantities, unlike moderate drinking. Social settings can involve peer pressure to drink more than intended, which can be a risk factor for excessive alcohol consumption. Regularly socializing over drinks can lead to a higher tolerance for alcohol and potentially dependence, especially in groups where drinking is a central activity. Social drinking does not preclude the development of alcohol dependence. High-functioning alcoholism describes individuals who appear to function normally in daily life despite struggling with alcohol dependence. Self-medication The therapeutic index for ethanol is 10%. Alcohol can have analgesic (pain-relieving) effects, which is why some people with chronic pain turn to alcohol to self-medicate and try to alleviate their physical discomfort. 
People with social anxiety disorder commonly self-medicate with alcohol to overcome their strong inhibitions. However, self-medicating excessively with alcohol for prolonged periods of time often makes the symptoms of anxiety or depression worse. This is believed to occur as a result of the changes in brain chemistry from long-term use. A 2023 systematic review highlights the non-addictive use of alcohol for managing developmental issues, personality traits, and psychiatric symptoms, emphasizing the need for informed, harm-controlled approaches to alcohol consumption within a personalized health policy framework. A 2023 study suggests that people who drink for both recreational enjoyment and therapeutic reasons, such as relieving pain or anxiety/depression/stress, have a higher demand for alcohol compared to those who drink solely for recreation or self-medication. This finding raises concerns, as this group may be more likely to develop alcohol use disorder and experience negative consequences related to their drinking. A significant proportion of patients attending mental health services for conditions including anxiety disorders such as panic disorder or social phobia have developed these conditions as a result of recreational alcohol or sedative use. Self-medication, or the mental disorders themselves, may keep people from reducing their drinking despite negative consequences. This can create a cycle of dependence that is difficult to break without addressing the underlying mental health issue.
Unscientific
The American Heart Association warns that "We've all seen the headlines about studies associating light or moderate drinking with health benefits and reduced mortality. Some researchers have suggested there are health benefits from wine, especially red wine, and that a glass a day can be good for the heart. But there's more to the story. No research has proved a cause-and-effect link between drinking alcohol and better heart health." In folk medicine, a nightcap is consumed for the purpose of inducing sleep. However, alcohol is not recommended by many doctors as a sleep aid because it interferes with sleep quality. "Hair of the dog", short for "hair of the dog that bit you", is a colloquial expression in the English language predominantly used to refer to alcohol that is consumed as a hangover remedy (with the aim of lessening the effects of a hangover). Many other languages have their own phrase to describe the same concept. The idea may have some basis in science in the difference between ethanol and methanol metabolism. Instead of alcohol, rehydration before going to bed or during a hangover may relieve dehydration-associated symptoms such as thirst, dizziness, dry mouth, and headache. Drinking alcohol may cause subclinical immunosuppression.
Spiritual
Christian views on alcohol encompass a range of perspectives regarding the consumption of alcoholic beverages, with significant emphasis on moderation rather than total abstinence. The moderationist position is held by Roman Catholics and Eastern Orthodox, and within Protestantism, it is accepted by Anglicans, Lutherans, and many Reformed churches. Moderationism is also accepted by Jehovah's Witnesses.
Spiritual use of moderate alcohol consumption is also found in some religions and schools with esoteric influences, including the Hindu tantra sect Aghori, in the Sufi Bektashi Order and Alevi Jem ceremonies, in the Rarámuri religion, in the Japanese religion Shinto, by the new religious movement Thelema, in Vajrayana Buddhism, and in Vodou faith of Haiti. Contraindication Pregnancy In the US, alcohol is subject to the FDA drug labeling Pregnancy Category X (Contraindicated in pregnancy). Minnesota, North Dakota, Oklahoma, South Dakota, and Wisconsin have laws that allow the state to involuntarily commit pregnant women to treatment if they abuse alcohol during pregnancy. Risks Fetal alcohol spectrum disorder Ethanol is classified as a teratogen—a substance known to cause birth defects; according to the U.S. Centers for Disease Control and Prevention (CDC), alcohol consumption by women who are not using birth control increases the risk of fetal alcohol spectrum disorders (FASDs). This group of conditions encompasses fetal alcohol syndrome, partial fetal alcohol syndrome, alcohol-related neurodevelopmental disorder, static encephalopathy, and alcohol-related birth defects. The CDC currently recommends complete abstinence from alcoholic beverages for women of child-bearing age who are pregnant, trying to become pregnant, or are sexually active and not using birth control. In South Africa, some populations have rates as high as 9%. Miscarriage Miscarriage, also known in medical terms as a spontaneous abortion, is the death and expulsion of an embryo or fetus before it can survive independently. Alcohol consumption is a risk factor for miscarriage. Sudden infant death syndrome Drinking of alcohol by parents is linked to sudden infant death syndrome (SIDS). One study found a positive correlation between the two during New Years celebrations and weekends. Another found that alcohol use disorder was linked to a more than doubling of risk. Adverse effects Alcohol has a variety of short-term and long-term adverse effects. Alcohol has both short-term, and long-term effects on the memory, and sleep. It also has reinforcement-related adverse effects, including alcoholism, dependence, and withdrawal; The most severe withdrawal symptoms, associated with physical dependence, can include seizures and delirium tremens, which in rare cases can be fatal. Alcohol use is directly related to considerable morbidity and mortality, for instance due to intoxication and alcohol-related health problems. The World Health Organization advises that there is no safe level of alcohol consumption. A study in 2015 found that alcohol and tobacco use combined resulted in a significant health burden, costing over a quarter of a billion disability-adjusted life years. Illicit drug use caused tens of millions more disability-adjusted life years. Drunkorexia is a colloquialism for anorexia or bulimia combined with an alcohol use disorder. Alcohol is a common cause of substance-induced psychosis or episodes, which may occur through acute intoxication, chronic alcoholism, withdrawal, exacerbation of existing disorders, or acute idiosyncratic reactions. Research has shown that excessive alcohol use causes an 8-fold increased risk of psychotic disorders in men and a 3-fold increased risk of psychotic disorders in women. While the vast majority of cases are acute and resolve fairly quickly upon treatment and/or abstinence, they can occasionally become chronic and persistent. 
Alcoholic psychosis is sometimes misdiagnosed as another mental illness such as schizophrenia. An inability to process or exhibit emotions in a proper manner has been shown to exist in people who consume excessive amounts of alcohol and those who were exposed to alcohol as fetuses (fetal alcohol exposure, FAexp). Also, a significant portion (40–60%) of alcoholics experience emotional blindness. Impairments in theory of mind, as well as other social-cognitive deficits, are commonly found in people who have alcohol use disorders, due to the neurotoxic effects of alcohol on the brain, particularly the prefrontal cortex. Short-term effects The amount of ethanol in the body is typically quantified by blood alcohol content (BAC): the weight of ethanol per unit volume of blood. Small doses of ethanol, in general, are stimulant-like and produce euphoria and relaxation; people experiencing these symptoms tend to become talkative and less inhibited, and may exhibit poor judgement. At higher dosages (BAC > 1 gram/liter), ethanol acts as a central nervous system (CNS) depressant, producing, at progressively higher dosages, impaired sensory and motor function, slowed cognition, stupefaction, unconsciousness, and possible death. Ethanol is commonly consumed as a recreational substance, especially while socializing, due to its psychoactive effects. Central nervous system impairment Alcohol causes generalized CNS depression; it is a positive allosteric modulator of the GABAA receptor and is associated with decreased anxiety, decreased social inhibition, sedation, and impairment of cognitive, memory, motor, and sensory function. It slows and impairs cognition and reaction time, impairs judgement, interferes with motor function (resulting in motor incoordination and numbness), impairs memory formation, and causes sensory impairment. Binge drinking can cause generalized impairment of neurocognitive function, dizziness, analgesia, amnesia, ataxia (loss of balance), confusion, sedation, slurred speech, general anaesthesia, decreased libido, nausea, vomiting, blackout, spins, stupor, unconsciousness, and hangover. At very high concentrations, alcohol can cause anterograde amnesia, markedly decreased heart rate, pulmonary aspiration, positional alcohol nystagmus, respiratory depression, shock, and coma; in alcohol overdose, death can result from profound suppression of CNS function and consequent dysautonomia. Gastrointestinal effects Alcohol can cause nausea and vomiting in sufficiently high amounts (varying by person). Alcohol stimulates gastric juice production, even when food is not present, and as a result, its consumption stimulates acidic secretions normally intended to digest protein molecules. Consequently, the excess acidity may harm the inner lining of the stomach. The stomach lining is normally protected by a mucosal layer that prevents the stomach from, essentially, digesting itself. Ingestion of alcohol can initiate systemic pro-inflammatory changes through two intestinal routes: (1) altering intestinal microbiota composition (dysbiosis), which increases lipopolysaccharide (LPS) release, and (2) degrading intestinal mucosal barrier integrity – thus allowing LPS to enter the circulatory system. The major portion of the blood supply to the liver is provided by the portal vein. Therefore, while the liver is continuously fed nutrients from the intestine, it is also exposed to any bacteria and/or bacterial derivatives that breach the intestinal mucosal barrier. 
Consequently, LPS levels increase in the portal vein, liver and systemic circulation after alcohol intake. Immune cells in the liver respond to LPS with the production of reactive oxygen species, leukotrienes, chemokines and cytokines. These factors promote tissue inflammation and contribute to organ pathology. Hangover A hangover is the experience of various unpleasant physiological and psychological effects usually following the consumption of alcohol, such as wine, beer, and liquor. Hangovers can last for several hours or for more than 24 hours. Typical symptoms of a hangover may include headache, drowsiness, concentration problems, dry mouth, dizziness, fatigue, gastrointestinal distress (e.g., nausea, vomiting, diarrhea), absence of hunger, light sensitivity, depression, sweating, hyper-excitability, irritability, and anxiety (often referred to as "hangxiety"). Though many possible remedies and folk cures have been suggested, there is no compelling evidence to suggest that any are effective for preventing or treating hangovers. Avoiding alcohol or drinking in moderation are the most effective ways to avoid a hangover. The socioeconomic consequences of hangovers include workplace absenteeism, impaired job performance, reduced productivity and poor academic achievement. A hangover may also impair performance during potentially dangerous daily activities such as driving a car or operating heavy machinery. Holiday heart syndrome Holiday heart syndrome, also known as alcohol-induced atrial arrhythmias, is a syndrome defined by an irregular heartbeat and palpitations associated with high levels of ethanol consumption. Holiday heart syndrome was discovered in 1978 when Philip Ettinger discovered the connection between arrhythmia and alcohol consumption. It received its common name as it is associated with the binge drinking common during the holidays. It is unclear how common this syndrome is. 5-10% of cases of atrial fibrillation may be related to this condition, but it could be as high 63%. Positional alcohol nystagmus Positional alcohol nystagmus (PAN) is nystagmus (visible jerkiness in eye movement) produced when the head is placed in a sideways position. PAN occurs when the specific gravity of the membrane space of the semicircular canals in the ear differs from the specific gravity of the fluid in the canals because of the presence of alcohol. Allergic-like reactions Ethanol-containing beverages can cause alcohol flush reactions, exacerbations of rhinitis and, more seriously and commonly, bronchoconstriction in patients with a history of asthma, and in some cases, urticarial skin eruptions, and systemic dermatitis. Such reactions can occur within 1–60 minutes of ethanol ingestion, and may be caused by: genetic abnormalities in the metabolism of ethanol, which can cause the ethanol metabolite, acetaldehyde, to accumulate in tissues and trigger the release of histamine, or true allergy reactions to allergens occurring naturally in, or contaminating, alcoholic beverages (particularly wine and beer), and other unknown causes. Alcohol flush reaction has also been associated with an increased risk of esophageal cancer in those who do drink. Long-term effects According to The Lancet, 'four industries (tobacco, unhealthy food, fossil fuel, and alcohol) are responsible for at least a third of global deaths per year'. In 2024, the World Health Organization published a report including these figures. 
Due to the long term effects of alcohol abuse, binge drinking is considered to be a major public health issue. The impact of alcohol on aging is multifaceted. The relationship between alcohol consumption and body weight is the subject of inconclusive studies. Alcoholic lung disease is disease of the lungs caused by excessive alcohol. However, the term 'alcoholic lung disease' is not a generally accepted medical diagnosis. Alcohol's overall effect on health is uncertain. While some studies suggest moderate consumption might have some benefit, others find any amount increases health risks. This uncertainty is due to conflicting research methods and potential biases, including counting former drinkers as abstainers and the possibility of alcohol industry influence. Because of these issues, experts advise against using alcohol for health reasons. For example, reviews from 2016 found that the "risk of all-cause mortality, and of cancers specifically, rises with increasing levels of consumption, and the level of consumption that minimises health loss is zero". Additionally, in 2023, the World Health Organization (WHO) stated that there is currently no conclusive evidence from studies that the potential benefits of moderate alcohol consumption for cardiovascular disease and type 2 diabetes outweigh the increased cancer risk associated with these drinking levels for individual consumers. Despite being a widespread issue, social stigma around problematic alcohol use or alcoholism discourages over 80% from seeking help. Alcoholism Alcoholism or its medical diagnosis alcohol use disorder refers to alcohol addiction, alcohol dependence, dipsomania, and/or alcohol abuse. It is a major problem and many health problems as well as death can result from excessive alcohol use. Alcohol dependence is linked to a lifespan that is reduced by about 12 years relative to the average person. In 2004, it was estimated that 4% of deaths worldwide were attributable to alcohol use. Deaths from alcohol are split about evenly between acute causes (e.g., overdose, accidents) and chronic conditions. The leading chronic alcohol-related condition associated with death is alcoholic liver disease. Alcohol dependence is also associated with cognitive impairment and organic brain damage. Some researchers have found that even one alcoholic drink a day increases an individual's risk of health problems by 0.4%. Stigma surrounding alcohol use disorder is particularly strong and different from the stigma attached to other mental illnesses not caused by substances. People with this condition are seen less as truly ill, face greater blame and social rejection, and experience higher structural discrimination risks. Two or more consecutive alcohol-free days a week have been recommended to improve health and break dependence. Dry drunk is an expression coined by the founder of Alcoholics Anonymous that describes an alcoholic who no longer drinks but otherwise maintains the same behavior patterns of an alcoholic. A high-functioning alcoholic (HFA) is a person who maintains jobs and relationships while exhibiting alcoholism. Many Native Americans in the United States have been harmed by, or become addicted to, drinking alcohol. Brain damage While many people associate alcohol's effects with intoxication, the long-term impact of alcohol on the brain can be severe. Binge drinking, or heavy episodic drinking, can lead to alcohol-related brain damage that occurs after a relatively short period of time. 
This brain damage increases the risk of alcohol-related dementia, and abnormalities in mood and cognitive abilities. Alcohol can cause Wernicke encephalopathy and Korsakoff syndrome which frequently occur simultaneously, known as Wernicke–Korsakoff syndrome. Lesions, or brain abnormalities, are typically located in the diencephalon which result in anterograde and retrograde amnesia, or memory loss. Dementia Alcohol-related dementia (ARD) is a form of dementia caused by long-term, excessive consumption of alcohol, resulting in neurological damage and impaired cognitive function. Marchiafava–Bignami disease Marchiafava–Bignami disease is a progressive neurological disease of alcohol use disorder, characterized by corpus callosum demyelination and necrosis and subsequent atrophy. The disease was first described in 1903 by the Italian pathologists Amico Bignami and Ettore Marchiafava in an Italian Chianti drinker. Symptoms can include, but are not limited to lack of consciousness, aggression, seizures, depression, hemiparesis, ataxia, apraxia, coma, etc. There will also be lesions in the corpus callosum. Liver damage Consuming more than 30 grams of pure alcohol per day over an extended period can significantly increase the risk of developing alcoholic liver disease. During the metabolism of alcohol via the respective dehydrogenases, nicotinamide adenine dinucleotide (NAD) is converted into reduced NAD. Normally, NAD is used to metabolize fats in the liver, and as such alcohol competes with these fats for the use of NAD. Prolonged exposure to alcohol means that fats accumulate in the liver, leading to the term 'fatty liver'. Continued consumption (such as in alcohol use disorder) then leads to cell death in the hepatocytes as the fat stores reduce the function of the cell to the point of death. These cells are then replaced with scar tissue, leading to the condition called cirrhosis. Cancer Alcoholic beverages have been classified as carcinogenic by leading health organizations for more than two decades, including the WHO's IARC (Group 1 carcinogens) and the U.S. NTP, raising concerns about the potential cancer risk associated with alcohol consumption. In 2023 the WHO highlighted a statistic: nearly half of all alcohol-attributable cancers in the WHO European Region are linked to alcohol consumption, even from "light" or "moderate" drinking – "less than 1.5 litres of wine or less than 3.5 litres of beer or less than 450 millilitres of spirits per week". This new information suggests that these consumption levels should now be considered high-risk. Many countries exceed these levels by a significant margin. Echoing the WHO's view, a growing number of national public health agencies are prioritizing complete abstinence (teetotalism) and stricter drinking guidelines in their alcohol consumption recommendations. Alcohol is also a major cause for head and neck cancer, especially laryngeal cancer. This risk is even higher when alcohol is used together with tobacco. Qualitative analysis reveals that the alcohol industry likely misinforms the public about the alcohol-cancer link, similar to the tobacco industry. The alcohol industry influences alcohol policy and health messages, including those for schoolchildren. Cardiovascular disease Excessive daily alcohol consumption and binge drinking can cause a higher risk of stroke, coronary artery disease, heart failure, fatal hypertensive disease, and fatal aortic aneurysm. A 2010 study reviewed research on alcohol and heart disease. 
They found that moderate drinking did not seem to worsen things for people who already had heart problems. But importantly, the researchers did not say that people who do not drink should start in order to improve their heart health. Thus, the safety and potential positive effect of light drinking on the cardiovascular system has not yet been proven. Still alcohol is a major health risk, and even if moderate drinking lowers the risk of some cardiovascular diseases it might increase the risk of others. Therefore starting to drink alcohol in the hope of any benefit is not recommended. The World Heart Federation (2022) recommends against any alcohol intake for optimal heart health. It has also been pointed out that the studies suggesting a positive link between red wine consumption and heart health had flawed methodology in the form of comparing two sets of people which were not actually appropriately paired. Cardiomyopathy Alcoholic cardiomyopathy (ACM) is a disease in which the long-term consumption of alcohol leads to heart failure. ACM is a type of dilated cardiomyopathy. The heart is unable to pump blood efficiently, leading to heart failure. It can affect other parts of the body if the heart failure is severe. It is most common in males between the ages of 35 and 50. Hearing loss Alcohol, classified as an ototoxin (ear toxin), can contribute to hearing loss sometimes referred to as "cocktail deafness" after exposure to loud noises in drinking environments. Children with fetal alcohol spectrum disorder (FASD) are at an increased risk of having hearing difficulties. Withdrawal syndrome Discontinuation of alcohol after extended heavy use and associated tolerance development (resulting in dependence) can result in withdrawal. Alcohol withdrawal can cause confusion, paranoia, anxiety, insomnia, agitation, tremors, fever, nausea, vomiting, autonomic dysfunction, seizures, and hallucinations. In severe cases, death can result. Delirium tremens is a condition that requires people with a long history of heavy drinking to undertake an alcohol detoxification regimen. Alcohol is one of the more dangerous drugs to withdraw from. Drugs which help to re-stabilize the glutamate system such as N-acetylcysteine have been proposed for the treatment of addiction to cocaine, nicotine, and alcohol. Cohort studies have demonstrated that the combination of anticonvulsants and benzodiazepines is more effective than other treatments in reducing alcohol withdrawal scores and shortening the duration of intensive care unit stays. Nitrous oxide has been shown to be an effective and safe treatment for alcohol withdrawal. The gas therapy reduces the use of highly addictive sedative medications (like benzodiazepines and barbiturates). Cortisol Research has looked into the effects of alcohol on the amount of cortisol that is produced in the human body. Continuous consumption of alcohol over an extended period of time has been shown to raise cortisol levels in the body. Cortisol is released during periods of high stress, and can result in the temporary shut down of other physical processes, causing physical damage to the body. Gout There is a strong association between gout the consumption of alcohol, and sugar-sweetened beverages, with wine presenting somewhat less of a risk than beer or spirits. Ketoacidosis Alcoholic ketoacidosis (AKA) is a specific group of symptoms and metabolic state related to alcohol use. 
Symptoms often include abdominal pain, vomiting, agitation, a fast respiratory rate, and a specific "fruity" smell. Consciousness is generally normal. Complications may include sudden death. Mental disorders Alcohol misuse often coincides with mental health conditions. Many individuals struggling with psychiatric disorders also experience problematic drinking behaviors. For example, alcohol may play a role in depression, with up to 10% of male depression cases in some European countries linked to alcohol use. Psychiatric genetics research continues to explore the complex interplay between alcohol use, genetic factors, and mental health outcomes; A 2024 study found that excessive drinking and alcohol-related DNA methylation may directly contribute to the causes of mental disorders, possibly through the altered expression of affected genes. Austrian syndrome Austrian syndrome, also known as Osler's triad, is a medical condition that was named after Robert Austrian in 1957. The presentation of the condition consists of pneumonia, endocarditis, and meningitis, all caused by Streptococcus pneumoniae. It is associated with alcoholism due to hyposplenism (reduced splenic functioning) and can be seen in males between the ages of 40 and 60 years old. Robert Austrian was not the first one to describe the condition, but Richard Heschl (around 1860s) or William Osler were not able to link the signs to the bacteria because microbiology was not yet developed. The leading cause of Osler's triad (Austrian syndrome) is Streptococcus pneumoniae, which is usually associated with heavy alcohol use. Polyneuropathy Alcoholic polyneuropathy is a neurological disorder in which peripheral nerves throughout the body malfunction simultaneously. It is defined by axonal degeneration in neurons of both the sensory and motor systems and initially occurs at the distal ends of the longest axons in the body. This nerve damage causes an individual to experience pain and motor weakness, first in the feet and hands and then progressing centrally. Alcoholic polyneuropathy is caused primarily by chronic alcoholism; however, vitamin deficiencies are also known to contribute to its development. Specific population Women Breast cancer Drinking alcohol increases the risk for breast cancer. For women in Europe, breast cancer represents the most significant alcohol-related cancer burden. Breastfeeding difficulties Moderate alcohol consumption by breastfeeding mothers can significantly affect infants and cause breastfeeding difficulties. Even one or two drinks, including beer, may reduce milk intake by 20 to 23%, leading to increased agitation and poor sleep patterns. Regular heavy drinking (more than two drinks daily) can shorten breastfeeding duration and cause issues in infants, such as excessive sedation, fluid retention, and hormonal imbalances. Additionally, higher alcohol consumption may negatively impact children's academic achievement. Neonatal withdrawal Babies exposed to alcohol, benzodiazepines, barbiturates, and some antidepressants (SSRIs) during pregnancy may experience neonatal withdrawal. The onset of clinical presentation typically appears within 48 to 72 hours of birth but may take up to 8 days. Other effects Alcohol may negatively affect sleep. 
Alcohol consumption disrupts circadian rhythms, with acute intake causing dose-dependent alterations in melatonin and cortisol levels, as well as core body temperature, which normalize the following morning, while chronic alcohol use leads to more severe and persistent disruptions that are associated with alcohol use disorders (AUD) and withdrawal symptoms. Alcohol consumption may also increase the risk of sleep disorders, including insomnia, restless legs syndrome, and sleep apnea. Erosive gastritis is commonly caused by stress, alcohol, some drugs, such as aspirin and other nonsteroidal anti-inflammatory drugs (NSAIDs), and Crohn's disease. Excessive alcohol intake has been shown to cause immunodeficiency, compromising the body's ability to fight infections and diseases, as evidenced by research on people who regularly consume large amounts of alcohol. Alcohol is associated with instances of sudden death. Sudden arrhythmic death syndrome in alcohol misuse is a significant cause of death among heavy drinkers, characterized by older age and severe liver damage, highlighting the need for family screening for heritable channelopathies. Also, sudden unexpected death in epilepsy is associated with a twofold higher risk in individuals with a history of substance abuse or alcohol dependence. Alcohol consumption is associated with lower sperm concentration, percentage of normal morphology, and semen volume, but not sperm motility. Frequent drinking of alcoholic beverages is a major contributing factor in cases of hypertriglyceridemia. Alcoholism is the single most common cause of chronic pancreatitis. Excess alcohol use is frequently associated with porphyria cutanea tarda (PCT). Alcohol consumption is a risk factor for Dupuytren's contracture. The majority of those with aspirin-exacerbated respiratory disease experience respiratory reactions to alcohol. Interactions Disorders COVID-19 A 2023 study suggests a link between alcohol consumption and worse COVID-19 outcomes. Researchers analyzed data from over 1.6 million people and found that any level of alcohol consumption increased the risk of severe illness, intensive care unit admission, and needing ventilation compared to non-drinkers. Even a history of drinking was associated with a higher risk of severe COVID-19. These findings suggest that avoiding alcohol altogether might be beneficial during the pandemic. Diabetes See the insulin section. Hepatitis Alcohol consumption can be especially dangerous for those with pre-existing liver damage from hepatitis B or C. Even relatively low amounts of alcohol can be life-threatening in these cases, so strict abstinence is highly recommended. Histamine intolerance Alcohol may release histamine in individuals with histamine intolerance. Mental disorders Mental disorders can be a significant risk factor for alcohol abuse. Alcohol abuse, alcohol dependence, and alcoholism are comorbid with anxiety disorders. With dual diagnosis, the initial symptoms of mental illness tend to appear before those of substance abuse. Individuals with common mental health conditions, such as depression, anxiety, or phobias, are twice as likely to also report having an alcohol use disorder, compared to those without these mental health challenges. Alcohol is a major risk factor for self-harm. Individuals with anxiety disorders who self-medicate with drugs or alcohol may also have an increased likelihood of suicidal ideation. 
Peptic ulcer disease In patients who have peptic ulcer disease (PUD), the mucosal layer is broken down by ethanol. PUD is commonly associated with the bacterium Helicobacter pylori, which secretes a toxin that weakens the mucosal wall, allowing acid and protein enzymes to penetrate the weakened barrier. Because alcohol stimulates the stomach to secrete acid, a person with PUD should avoid drinking alcohol on an empty stomach. Drinking alcohol causes more acid release, which further damages the already-weakened stomach wall. Complications of this disease can include a burning pain in the abdomen, bloating, and, in severe cases, dark black stools, which indicate internal bleeding. A person who drinks alcohol regularly is strongly advised to reduce their intake to prevent PUD aggravation. Dosage forms Alcohol induced dose dumping (AIDD) Alcohol-induced dose dumping (AIDD) is by definition an unintended rapid release of large amounts of a given drug when it is administered as a modified-release dosage form while co-ingesting ethanol. This is considered a pharmaceutical disadvantage due to the high risk of causing drug-induced toxicity by increasing the absorption and serum concentration above the therapeutic window of the drug. The best way to prevent this interaction is by avoiding the co-ingestion of both substances or using specific controlled-release formulations that are resistant to AIDD. Drugs Alcohol can intensify the sedation caused by antipsychotics and certain antidepressants. Combining alcohol with cannabis (not to be confused with tincture of cannabis, which contains minute quantities of alcohol) is known as cross-fading and may easily cause the spins in people who are drunk and smoke potent cannabis. Ethanol increases plasma tetrahydrocannabinol levels, which suggests that ethanol may increase the absorption of tetrahydrocannabinol. TOMSO is a lesser-known psychedelic drug and a substituted amphetamine. TOMSO was first synthesized by Alexander Shulgin. According to Shulgin's book PiHKAL, TOMSO is inactive on its own and requires consumption of alcohol to become active. Hypnotics/sedatives Alcohol can intensify the sedation caused by hypnotics/sedatives such as barbiturates, benzodiazepines, sedative antihistamines, opioids, and nonbenzodiazepines/Z-drugs (such as zolpidem and zopiclone). Dextromethorphan Combining alcohol with dextromethorphan significantly increases the risk of overdose and other severe health complications, according to the NIAAA. Disulfiram-like drugs Disulfiram Disulfiram inhibits the enzyme acetaldehyde dehydrogenase, which in turn results in buildup of acetaldehyde, a toxic metabolite of ethanol with unpleasant effects. The medication is commonly used to treat alcohol use disorder and results in immediate hangover-like symptoms upon consumption of alcohol; this effect is widely known as the disulfiram effect. Metronidazole Metronidazole is an antibacterial agent that kills bacteria by damaging cellular DNA and hence cellular function. Metronidazole is usually given to people who have diarrhea caused by Clostridioides difficile bacteria. Patients who are taking metronidazole are sometimes advised to avoid alcohol, even after 1 hour following the last dose. Although older data suggested a possible disulfiram-like effect of metronidazole, newer data has challenged this and suggests it does not actually have this effect. 
Insulin Alcohol consumption can cause hypoglycemia in diabetics on certain medications, such as insulin or sulfonylurea, by blocking gluconeogenesis. NSAIDs The concomitant use of NSAIDs with alcohol and/or tobacco products significantly increases the already elevated risk of peptic ulcers during NSAID therapy. The risk of stomach bleeding is still increased when aspirin is taken with alcohol or warfarin. Stimulants Controlled animal and human studies showed that caffeine (energy drinks) in combination with alcohol increased the craving for more alcohol more strongly than alcohol alone. These findings correspond to epidemiological data that people who consume energy drinks generally showed an increased tendency to take alcohol and other substances. Ethanol interacts with cocaine in vivo to produce cocaethylene, another psychoactive substance which may be substantially more cardiotoxic than either cocaine or alcohol by themselves. Ethylphenidate formation appears to be more common when large quantities of methylphenidate and alcohol are consumed at the same time, such as in non-medical use or overdose scenarios. However, only a small percent of the consumed methylphenidate is converted to ethylphenidate. While nicotinis mimic the name of classic cocktails like the appletini (their name deriving from "martini"), combining nicotine with alcohol is a bad idea. Tobacco and nicotine actually heighten cravings for alcohol, making this a risky mix. Methanol and ethylene glycol The rate-limiting steps for the elimination of ethanol are in common with certain other substances. As a result, the blood alcohol concentration can be used to modify the rate of metabolism of toxic alcohols, such as methanol and ethylene glycol. Methanol itself is not highly toxic, but its metabolites formaldehyde and formic acid are; therefore, to reduce the rate of production and concentration of these harmful metabolites, ethanol can be ingested. Ethylene glycol poisoning can be treated in the same way. Warfarin Excessive use of alcohol is also known to affect the metabolism of warfarin and can elevate the INR, and thus increase the risk of bleeding. The U.S. Food and Drug Administration (FDA) product insert on warfarin states that alcohol should be avoided. The Cleveland Clinic suggests that when taking warfarin one should not drink more than "one beer, 6 oz of wine, or one shot of alcohol per day". Special population Isoniazid Levels of liver enzymes in the bloodstream should be frequently checked in daily alcohol drinkers, pregnant women, IV drug users, people over 35, and those who have chronic liver disease, severe kidney dysfunction, peripheral neuropathy, or HIV infection since they are more likely to develop hepatitis from INH. Pharmacology Alcohol works in the brain primarily by increasing the effects of γ-Aminobutyric acid (GABA), the major inhibitory neurotransmitter in the brain; by facilitating GABA's actions, alcohol suppresses the activity of the CNS. The pharmacology of ethanol involves both pharmacodynamics (how it affects the body) and pharmacokinetics (how the body processes it). In the body, ethanol primarily affects the central nervous system, acting as a depressant and causing sedation, relaxation, and decreased anxiety. The exact mechanism remains elusive, but ethanol has been shown to affect ligand-gated ion channels, particularly the GABAA receptor. After oral ingestion, ethanol is absorbed via the stomach and intestines into the bloodstream. 
Ethanol is highly water-soluble and diffuses passively throughout the entire body, including the brain. Soon after ingestion, it begins to be metabolized, 90% or more by the liver. One standard drink is sufficient to almost completely saturate the liver's capacity to metabolize alcohol. The main metabolite is acetaldehyde, a toxic carcinogen. Acetaldehyde is then further metabolized into ionic acetate by the enzyme aldehyde dehydrogenase (ALDH). Acetate is not carcinogenic and has low toxicity, but has been implicated in causing hangovers. Acetate is further broken down into carbon dioxide and water and eventually eliminated from the body through urine and breath. About 5 to 10% of ethanol is excreted unchanged in the breath, urine, and sweat. Alcohol also directly affects a number of other neurotransmitter systems including those of glutamate, glycine, acetylcholine, and serotonin. The pleasurable effects of alcohol ingestion are the result of increased levels of dopamine and endogenous opioids in the reward pathways of the brain. The average human digestive system produces approximately 3 g of ethanol per day through fermentation of its contents. Safety Symptoms of ethanol overdose may include nausea, vomiting, CNS depression, coma, acute respiratory failure, or death. Levels of even less than 0.1% can cause intoxication, with unconsciousness often occurring at 0.3–0.4%. Death from ethanol consumption is possible when blood alcohol levels reach 0.4%. A blood level of 0.5% or more is commonly fatal. The oral median lethal dose (LD50) of ethanol in rats is 5,628 mg/kg. Directly translated to human beings, this would mean that if a person who weighs about 70 kg (150 lb) drank a 0.5-liter (17 US fl oz) glass of pure ethanol, they would theoretically have a 50% risk of dying. The highest blood alcohol level ever recorded, in which the subject survived, is 1.41%. A retrospective case-control study conducted from 1990 to 2001 found that alcohol consumption was responsible for over half of all deaths among Russian adults aged 15–54, significantly impacting mortality rates related to causes such as accidents, violence, and various diseases. In the US, the DEA has claimed illegal drugs are more deadly than alcohol, citing CDC data from 2000 showing similar death counts despite alcohol's wider use. However, this comparison is disputed; a JAMA article reported alcohol-related deaths in 2000 as 85,000, significantly higher than the DEA's figure of 18,539. Toxicity The WHO classifies alcohol as a toxic substance. More specifically, ethanol is categorized as a cytotoxin, hepatotoxin, neurotoxin, and ototoxin, with acute toxic effects on cells, the liver, the nervous system, and the ears, respectively. However, ethanol's acute effects on these organs are usually reversible. This means that even with a single episode of heavy drinking, the body can typically repair itself from the initial damage. Methanol-laced alcohol, on the other hand, can cause blindness even in small quantities. Ethanol is nutritious but highly intoxicating for most animals, which typically tolerate only up to 4% in their diet. However, a 2024 study found that oriental hornets fed sugary solutions containing 1% to 80% ethanol for a week showed no adverse effects on behavior or lifespan. A risk assessment using the margin of exposure (MOE) approach evaluated drugs like alcohol and tobacco. Alcohol had a benchmark dose of 531 mg/kg, while heroin's was 2 mg/kg. Alcohol, nicotine, cocaine, and heroin were classified as "high risk" (MOE < 10), and most others as "risk" (MOE < 100). 
Only alcohol was "high risk" on a population level, with cannabis showing an MOE over 10,000. This confirms alcohol and tobacco as high risk and cannabis as low risk. Chemistry Ethanol is also known chemically as alcohol, ethyl alcohol, or drinking alcohol. It is a simple alcohol with a molecular formula of C2H6O and a molecular weight of 46.0684 g/mol. The molecular formula of ethanol may also be written as CH3−CH2−OH or as C2H5−OH. The latter can also be thought of as an ethyl group linked to a hydroxyl (alcohol) group and can be abbreviated as EtOH. Ethanol is a volatile, flammable, colorless liquid with a slight characteristic odor. Aside from its use as a psychoactive and recreational substance, ethanol is also commonly used as an antiseptic and disinfectant, a chemical and medicinal solvent, and a fuel. Analogues Ethanol has a variety of analogues, many of which have similar actions and effects. In chemistry, "alcohol" can encompass other mind-altering alcohols besides the kind we drink. Some examples include synthetic drugs like ethchlorvynol and methylpentynol, once used in medicine. Also, ethanol is colloquially referred to as "alcohol" because it is the most prevalent alcohol in alcoholic beverages. But technically all alcoholic beverages contain several types of psychoactive alcohols, that are categorized as primary, secondary, or tertiary. Primary, and secondary alcohols, are oxidized to aldehydes, and ketones, respectively, while tertiary alcohols are generally resistant to oxidation. Ethanol is a primary alcohol that has unpleasant actions in the body, many of which are mediated by its toxic metabolite acetaldehyde. Less prevalent alcohols found in alcoholic beverages, are secondary, and tertiary alcohols. For example, the tertiary alcohol 2M2B which is up to 50 times more potent than ethanol and found in trace quantities in alcoholic beverages, has been synthesized and used as a designer drug. Alcoholic beverages are sometimes laced with toxic alcohols, such as methanol (the simplest alcohol) and isopropyl alcohol. A mild, brief exposure to isopropyl alcohol (which is only moderately more toxic than ethanol) is unlikely to cause any serious harm. But many methanol poisoning incidents have occurred through history, since methanol is lethal even in small quantities, as little as 10–15 milliliters (2–3 teaspoons). Ethanol is used to treat methanol and ethylene glycol toxicity. The Lucas test differentiates between primary, secondary, and tertiary alcohols. Production Ethanol is produced naturally as a byproduct of the metabolic processes of yeast and hence is present in any yeast habitat, including even endogenously in humans, but it does not cause raised blood alcohol content as seen in the rare medical condition auto-brewery syndrome (ABS). It is manufactured through hydration of ethylene or by brewing via fermentation of sugars with yeast (most commonly Saccharomyces cerevisiae). The sugars are commonly obtained from sources like steeped cereal grains (e.g., barley), grape juice, and sugarcane products (e.g., molasses, sugarcane juice). Ethanol–water mixture which can be further purified via distillation. Home-made alcoholic beverages Homebrewing Homebrewing is the brewing of beer or other alcoholic beverages on a small scale for personal, non-commercial purposes. Supplies, such as kits and fermentation tanks, can be purchased locally at specialty stores or online. 
Beer was brewed domestically for thousands of years before its commercial production, although its legality has varied according to local regulation. Homebrewing is closely related to the hobby of home distillation, the production of alcoholic spirits for personal consumption; however home distillation is generally more tightly regulated. Moonshine Although methanol is not produced in toxic amounts by fermentation of sugars from grain starches, it is a major occurrence in fruit spirits. However, in modern times, reducing methanol with the absorption of a molecular sieve is a practical method for production. History Alcoholic beverages have been produced since the Neolithic period, as early as 7000 BC in China. Since antiquity, prior to the development of modern agents, alcohol was used as a general anaesthetic. In the history of wound care, beer, and wine, are recognized as substances used for healing wounds. Late Middle Ages Alcohol has been used as an antiseptic as early as 1363 with evidence to support its use becoming available in the late 1800s. Early modern period The popular story dates the etymology of the term Dutch courage to English soldiers fighting in the Anglo-Dutch Wars (1652–1674) and perhaps as early as the Thirty Years' War (1618–1648). One version states that jenever (or Dutch gin) was used by English soldiers for its calming effects before battle, and for its purported warming properties on the body in cold weather. Another version has it that English soldiers noted the bravery-inducing effects of jenever on Dutch soldiers. The Gin Craze was a period in the first half of the 18th century when the consumption of gin increased rapidly in Great Britain, especially in London. By 1743, England was drinking 2.2 gallons (10 litres) of gin per person per year. The Sale of Spirits Act 1750 (commonly known as the Gin Act 1751) was an Act of the Parliament of Great Britain (24 Geo. 2. c. 40) which was enacted to reduce the consumption of gin and other distilled spirits, a popular pastime that was regarded as one of the primary causes of crime in London. Modern period The rum ration (also called the tot) was a daily amount of rum given to sailors on Royal Navy ships. It started 1866 and was abolished in 1970 after concerns that the intake of strong alcohol would lead to unsteady hands when working machinery. The Andrew Johnson alcoholism debate is the dispute, originally conducted among the general public, and now typically a question for historians, about whether or not Andrew Johnson, the 17th president of the United States (1865–1869), drank to excess. The prohibition in the United States era was the period from 1920 to 1933 when the United States prohibited the production, importation, transportation, and sale of alcoholic beverages. The nationwide ban on alcoholic beverages, was repealed by the passage of the Twenty-first Amendment to the United States Constitution on December 5, 1933. The Bratt System was a system that was used in Sweden (1919–1955) and similarly in Finland (1944–1970) to control alcohol consumption, by rationing of liquor. Every citizen allowed to consume alcohol was given a booklet called a motbok (viinakortti in Finland), in which a stamp was added each time a purchase was made at Systembolaget (in Sweden) and Alko (in Finland). A similar system also existed in Estonia between July 1, 1920 to December 31, 1925. The stamps were based on the amount of alcohol bought. 
When a certain amount of alcohol had been bought, the owner of the booklet had to wait until the next month to buy more. The Medicinal Liquor Prescriptions Act of 1933 was a law passed by Congress in response to the abuse of medicinal liquor prescriptions during Prohibition. Gilbert Paul Jordan (aka The Boozing Barber) was a Canadian serial killer who is believed to have committed the so-called "alcohol murders" between 1965– in Vancouver, British Columbia. Society and culture The consumption of alcohol has a long human history deeply embedded in social practices and rituals, often celebrated as a cornerstone of community gatherings and personal milestones. Drinking culture is the set of traditions and social behaviours that surround the consumption of alcoholic beverages as a recreational drug and social lubricant. Alcohol consumption recommendations vary from no intake to daily or weekly guidelines provided by the health agencies of governments. The WHO published a statement in The Lancet Public Health in April 2023 that "there is no safe amount that does not affect health." United Nations Sustainable Development Goal 3 is part of "The Alcohol Policy Playbook," which is a resource for reaching the goals of the WHO European Framework for Action on Alcohol (2022–2025) and the WHO Global Alcohol Action Plan (2022–2030). In October 2024, the WHO Regional Office for Europe launched the "Redefine alcohol" campaign to address alcohol-related health risks, as alcohol causes nearly 1 in 11 deaths in the region. The campaign aims to raise awareness about alcohol's link to over 200 diseases, including several cancers, and to encourage healthier choices by sharing research and personal stories. It also calls for stricter regulation of alcohol to reduce its societal harm. This initiative is part of the WHO/EU Evidence into Action Alcohol Project, which seeks to reduce alcohol-related harm across Europe. Alcohol education is the practice of disseminating information about the effects of alcohol on health, as well as society and the family unit. Alcohol as a gateway drug Alcohol and nicotine prime the brain for a heightened response to other drugs and are, like marijuana, also typically used before a person progresses to other, more harmful substances. A study of drug use among 14,577 U.S. 12th graders showed that alcohol consumption was associated with an increased probability of later use of tobacco, cannabis, and other illegal drugs.
Biology and health sciences
Drugs and pharmacology
null
37487265
https://en.wikipedia.org/wiki/Variational%20method%20%28quantum%20mechanics%29
Variational method (quantum mechanics)
In quantum mechanics, the variational method is one way of finding approximations to the lowest energy eigenstate or ground state, and some excited states. This allows calculating approximate wavefunctions such as molecular orbitals. The basis for this method is the variational principle. The method consists of choosing a "trial wavefunction" depending on one or more parameters, and finding the values of these parameters for which the expectation value of the energy is the lowest possible. The wavefunction obtained by fixing the parameters to such values is then an approximation to the ground state wavefunction, and the expectation value of the energy in that state is an upper bound to the ground state energy. The Hartree–Fock method, density matrix renormalization group, and Ritz method apply the variational method. Description Suppose we are given a Hilbert space and a Hermitian operator over it called the Hamiltonian $H$. Ignoring complications about continuous spectra, we consider the discrete spectrum of $H$ and a basis of eigenvectors $\{|\psi_\lambda\rangle\}$ (see spectral theorem for Hermitian operators for the mathematical background): $\langle\psi_\lambda|\psi_{\lambda'}\rangle = \delta_{\lambda\lambda'}$, where $\delta_{\lambda\lambda'}$ is the Kronecker delta, and the $|\psi_\lambda\rangle$ satisfy the eigenvalue equation $H|\psi_\lambda\rangle = \lambda|\psi_\lambda\rangle$. Once again ignoring complications involved with a continuous spectrum of $H$, suppose the spectrum of $H$ is bounded from below and that its greatest lower bound is $E_0$. The expectation value of $H$ in a state $|\psi\rangle$ is then $\langle H\rangle = \langle\psi|H|\psi\rangle$. If we were to vary over all possible states with norm 1 trying to minimize the expectation value of $H$, the lowest value would be $E_0$ and the corresponding state would be the ground state, as well as an eigenstate of $H$. Varying over the entire Hilbert space is usually too complicated for physical calculations, and a subspace of the entire Hilbert space is chosen, parametrized by some (real) differentiable parameters $\alpha_i$ ($i = 1, 2, \ldots, N$). The choice of the subspace is called the ansatz. Some choices of ansatzes lead to better approximations than others, therefore the choice of ansatz is important. Let's assume there is some overlap between the ansatz and the ground state (otherwise, it's a bad ansatz). We wish to normalize the ansatz, so we have the constraint $\langle\psi(\alpha_i)|\psi(\alpha_i)\rangle = 1$, and we wish to minimize $\varepsilon(\alpha_i) = \langle\psi(\alpha_i)|H|\psi(\alpha_i)\rangle$. This, in general, is not an easy task, since we are looking for a global minimum and finding the zeroes of the partial derivatives of $\varepsilon$ over all $\alpha_i$ is not sufficient. If $\psi(\alpha_i)$ is expressed as a linear combination of other functions ($\alpha_i$ being the coefficients), as in the Ritz method, there is only one minimum and the problem is straightforward. There are other, non-linear methods, however, such as the Hartree–Fock method, that are also not characterized by a multitude of minima and are therefore comfortable in calculations. There is an additional complication in the calculations described. As $\varepsilon$ tends toward $E_0$ in minimization calculations, there is no guarantee that the corresponding trial wavefunctions will tend to the actual wavefunction. This has been demonstrated by calculations using a modified harmonic oscillator as a model system, in which an exactly solvable system is approached using the variational method. A wavefunction different from the exact one is obtained by use of the method described above. Although usually limited to calculations of the ground state energy, this method can be applied in certain cases to calculations of excited states as well. If the ground state wavefunction is known, either by the method of variation or by direct calculation, a subset of the Hilbert space can be chosen which is orthogonal to the ground state wavefunction. 
The resulting minimum is usually not as accurate as for the ground state, as any difference between the true ground state and the approximate ground state used to define the orthogonal subspace results in a lower excited energy. This defect is worsened with each higher excited state. In another formulation: $E_\text{ground} \le \langle\phi|H|\phi\rangle$. This holds for any normalized trial $\phi$ since, by definition, the ground state wavefunction has the lowest energy, and any trial wavefunction will have energy greater than or equal to it. Proof: $\phi$ can be expanded as a linear combination of the actual eigenfunctions of the Hamiltonian (which we assume to be normalized and orthogonal): $\phi = \sum_n c_n \psi_n$. Then, to find the expectation value of the Hamiltonian: $\langle\phi|H|\phi\rangle = \sum_n |c_n|^2 E_n$. Now, the ground state energy is the lowest energy possible, i.e., $E_n \ge E_\text{ground}$. Therefore, if the guessed wave function is normalized, so that $\sum_n |c_n|^2 = 1$: $\langle\phi|H|\phi\rangle \ge E_\text{ground}$. In general For a hamiltonian H that describes the studied system and any normalizable function Ψ with arguments appropriate for the unknown wave function of the system, we define the functional $\varepsilon[\Psi] = \frac{\langle\Psi|\hat{H}|\Psi\rangle}{\langle\Psi|\Psi\rangle}$. The variational principle states that $\varepsilon \ge E_0$, where $E_0$ is the energy of the lowest energy eigenstate (ground state) of the hamiltonian, and that $\varepsilon = E_0$ if and only if $\Psi$ is exactly equal to the wave function of the ground state of the studied system. The variational principle formulated above is the basis of the variational method used in quantum mechanics and quantum chemistry to find approximations to the ground state. Another facet in variational principles in quantum mechanics is that since $\Psi$ and $\Psi^{*}$ can be varied separately (a fact arising due to the complex nature of the wave function), the quantities can be varied in principle just one at a time. Helium atom ground state The helium atom consists of two electrons with mass m and electric charge $-e$, around an essentially fixed nucleus of mass $M \gg m$ and charge $+2e$. The Hamiltonian for it, neglecting the fine structure, is: $H = -\frac{\hbar^2}{2m}\left(\nabla_1^2 + \nabla_2^2\right) - \frac{e^2}{4\pi\varepsilon_0}\left(\frac{2}{r_1} + \frac{2}{r_2} - \frac{1}{r_{12}}\right)$, where ħ is the reduced Planck constant, $\varepsilon_0$ is the vacuum permittivity, $r_i$ (for $i = 1, 2$) is the distance of the $i$-th electron from the nucleus, and $r_{12}$ is the distance between the two electrons. If the term $V_{ee} = \frac{e^2}{4\pi\varepsilon_0 r_{12}}$, representing the repulsion between the two electrons, were excluded, the Hamiltonian would become the sum of two hydrogen-like atom Hamiltonians with nuclear charge $+2e$. The ground state energy would then be $8E_1 \approx -109\ \text{eV}$, where $E_1 = -13.6\ \text{eV}$ is the Rydberg constant, and its ground state wavefunction would be the product of two wavefunctions for the ground state of hydrogen-like atoms: $\psi(\mathbf{r}_1, \mathbf{r}_2) = \frac{Z^3}{\pi a_0^3}\,e^{-Z(r_1 + r_2)/a_0}$, where $a_0$ is the Bohr radius and $Z = 2$, helium's nuclear charge. The expectation value of the total Hamiltonian H (including the term $V_{ee}$) in the state described by $\psi$ will be an upper bound for its ground state energy. $\langle V_{ee}\rangle$ is approximately $34\ \text{eV}$, so $\langle H\rangle$ is approximately $8E_1 + 34\ \text{eV} \approx -75\ \text{eV}$. A tighter upper bound can be found by using a better trial wavefunction with 'tunable' parameters. Each electron can be thought to see the nuclear charge partially "shielded" by the other electron, so we can use a trial wavefunction of the same form but with an "effective" nuclear charge $Z < 2$: $\psi(\mathbf{r}_1, \mathbf{r}_2) = \frac{Z^3}{\pi a_0^3}\,e^{-Z(r_1 + r_2)/a_0}$. The expectation value of $H$ in this state is: $\langle H\rangle = \left(-2Z^2 + \tfrac{27}{4}Z\right)E_1$. This is minimal for $Z = \tfrac{27}{16} \approx 1.69$, implying shielding reduces the effective charge to ~1.69. Substituting this value of $Z$ into the expression for $\langle H\rangle$ yields $\tfrac{729}{128}E_1 \approx -77.5\ \text{eV}$, within 2% of the experimental value, −78.975 eV. Even closer estimations of this energy have been found using more complicated trial wave functions with more parameters. This is done in physical chemistry via variational Monte Carlo.
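The helium estimate above can be reproduced numerically. The following minimal Python sketch (illustrative only; the function name and use of scipy are arbitrary choices, and the expression used is the same variational energy as above, rewritten in Hartree atomic units where $E_1 = -\tfrac{1}{2}$ hartree) minimizes the single-parameter energy over the effective charge Z, recovering Z = 27/16 ≈ 1.69 and roughly −77.5 eV:

from scipy.optimize import minimize_scalar

HARTREE_TO_EV = 27.211386  # 1 hartree in electronvolts

def helium_variational_energy(Z, Z_nucleus=2.0):
    # Expectation value (in hartrees) for the product of two hydrogen-like 1s
    # orbitals with effective charge Z: kinetic term Z^2, nuclear attraction
    # -2*Z_nucleus*Z, electron-electron repulsion (5/8)*Z.
    return Z**2 - 2.0 * Z_nucleus * Z + (5.0 / 8.0) * Z

result = minimize_scalar(helium_variational_energy, bounds=(1.0, 2.0), method="bounded")
Z_opt = result.x                       # ~1.6875 = 27/16
E_opt_eV = result.fun * HARTREE_TO_EV  # ~-77.5 eV, vs. experimental -78.975 eV
print(f"optimal effective charge Z = {Z_opt:.4f}")
print(f"variational ground-state energy = {E_opt_eV:.2f} eV")

Because the trial wavefunction has only one parameter, the minimization is one-dimensional; richer ansatzes with more parameters, as noted above, give tighter upper bounds at the cost of a harder optimization.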
Physical sciences
Quantum mechanics
Physics
23306251
https://en.wikipedia.org/wiki/Gregorian%20calendar
Gregorian calendar
The Gregorian calendar is the calendar used in most parts of the world. It went into effect in October 1582 following the papal bull issued by Pope Gregory XIII, which introduced it as a modification of, and replacement for, the Julian calendar. The principal change was to space leap years differently so as to make the average calendar year 365.2425 days long, more closely approximating the 365.2422-day "tropical" or "solar" year that is determined by the Earth's revolution around the Sun. The rule for leap years is: every year that is exactly divisible by four is a leap year, except for years that are exactly divisible by 100, but these centurial years are leap years if they are exactly divisible by 400. There were two reasons to establish the Gregorian calendar. First, the Julian calendar assumed incorrectly that the average solar year is exactly 365.25 days long, an overestimate of a little under one day per century, and thus has a leap year every four years without exception. The Gregorian reform shortened the average (calendar) year by 0.0075 days to stop the drift of the calendar with respect to the equinoxes. Second, in the years since the First Council of Nicaea in AD 325, the excess leap days introduced by the Julian algorithm had caused the calendar to drift such that the March equinox was occurring well before its nominal 21 March date. This date was important to the Christian churches, because it is fundamental to the calculation of the date of Easter. To reinstate the association, the reform advanced the date by 10 days: Thursday 4 October 1582 was followed by Friday 15 October 1582. In addition, the reform also altered the lunar cycle used by the Church to calculate the date for Easter, because astronomical new moons were occurring four days before the calculated dates. Whilst the reform introduced minor changes, the calendar continued to be fundamentally based on the same geocentric theory as its predecessor. The reform was adopted initially by the Catholic countries of Europe and their overseas possessions. Over the next three centuries, the Protestant and Eastern Orthodox countries also gradually moved to what they called the "Improved calendar", with Greece being the last European country to adopt the calendar (for civil use only) in 1923. However, many Orthodox churches continue to use the Julian calendar for religious rites and the dating of major feasts. To unambiguously specify a date during the transition period (in contemporary documents or in history texts), both notations were given, tagged as "Old Style" or "New Style" as appropriate. During the 20th century, most non-Western countries also adopted the calendar, at least for civil purposes. Description The Gregorian calendar, like the Julian calendar, is a solar calendar with 12 months of 28–31 days each. The year in both calendars consists of 365 days, with a leap day being added to February in the leap years. The months and length of months in the Gregorian calendar are the same as for the Julian calendar. The only difference is that the Gregorian calendar omits a leap day in three centurial years every 400 years and leaves the leap day unchanged. A leap year normally occurs every four years: the leap day, historically, was inserted by doubling 24 February: there were indeed two days dated 24 February. However, for many years it has been customary to put the extra day at the end of the month of February, adding a 29 February for the leap day. Before the 1969 revision of its General Roman Calendar, the Catholic Church delayed February feasts after the 23rd by one day in leap years; masses celebrated according to the previous calendar still reflect this delay. 
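The leap-year rule described above translates directly into code. The following minimal Python sketch (illustrative only; the function name is an arbitrary choice) implements the rule and confirms the cycle figures discussed below: 97 leap years, and therefore 146,097 days, in every 400 consecutive years.

def is_gregorian_leap_year(year: int) -> bool:
    # A year is a leap year if divisible by 4, except centurial years,
    # which are leap years only if also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert is_gregorian_leap_year(2000)       # centurial year divisible by 400
assert not is_gregorian_leap_year(1900)   # centurial year not divisible by 400
assert is_gregorian_leap_year(2024)
assert not is_gregorian_leap_year(2023)

# One full 400-year cycle contains 97 leap years and 146,097 days,
# giving a mean calendar year of 365.2425 days.
leap_count = sum(is_gregorian_leap_year(y) for y in range(2000, 2400))
total_days = 400 * 365 + leap_count
print(leap_count, total_days, total_days / 400)   # 97 146097 365.2425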
Gregorian years are identified by consecutive year numbers. A calendar date is fully specified by the year (numbered according to a calendar era, in this case Anno Domini or Common Era), the month (identified by name or number), and the day of the month (numbered sequentially starting from 1). Although the calendar year currently runs from 1 January to 31 December, at previous times year numbers were based on a different starting point within the calendar (see the "beginning of the year" section below). Calendar cycles repeat completely every 400 years, which equals 146,097 days. Of these 400 years, 303 are regular years of 365 days and 97 are leap years of 366 days. A mean calendar year is 146,097/400 days = 365.2425 days, or 365 days, 5 hours, 49 minutes and 12 seconds. Gregorian reform The Gregorian calendar was a reform of the Julian calendar. It was instituted by papal bull Inter gravissimas dated 24 February 1582 by Pope Gregory XIII, after whom the calendar is named. The motivation for the adjustment was to bring the date for the celebration of Easter to the time of year in which it was celebrated when it was introduced by the early Church. The error in the Julian calendar (its assumption that there are exactly 365.25 days in a year) had led to the date of the equinox according to the calendar drifting from the observed reality, and thus an error had been introduced into the calculation of the date of Easter. Although a recommendation of the First Council of Nicaea in 325 specified that all Christians should celebrate Easter on the same day, it took almost five centuries before virtually all Christians achieved that objective by adopting the rules of the Church of Alexandria (see Easter for the issues which arose). Background Because the date of Easter is a function (the computus) of the date of the spring equinox in the northern hemisphere, the Catholic Church considered unacceptable the increasing divergence between the canonical date of the equinox and observed reality. Easter is celebrated on the Sunday after the ecclesiastical full moon on or after 21 March, which was adopted as an approximation to the March equinox. European scholars had been well aware of the calendar drift since the early medieval period. Bede, writing in the 8th century, showed that the accumulated error in his time was more than three days. Roger Bacon, writing in the 13th century, estimated the error at seven or eight days. Dante, writing in the early 14th century, was aware of the need for calendar reform. An attempt to go forward with such a reform was undertaken by Pope Sixtus IV, who in 1475 invited Regiomontanus to the Vatican for this purpose. However, the project was interrupted by the death of Regiomontanus shortly after his arrival in Rome. The increase of astronomical knowledge and the precision of observations towards the end of the 15th century made the question more pressing. Numerous publications over the following decades called for a calendar reform, among them two papers sent to the Vatican by the University of Salamanca in 1515 and 1578, but the project was not taken up again until the 1540s, and implemented only under Pope Gregory XIII (r. 1572–1585). Preparation In 1545, the Council of Trent authorised Pope Paul III to reform the calendar, requiring that the date of the vernal equinox be restored to that which it held at the time of the First Council of Nicaea in 325 and that an alteration to the calendar be designed to prevent future drift. This would allow for more consistent and accurate scheduling of the feast of Easter.
In 1577, a was sent to expert mathematicians outside the reform commission for comments. Some of these experts, including Giambattista Benedetti and Giuseppe Moleto, believed Easter should be computed from the true motions of the Sun and Moon, rather than using a tabular method, but these recommendations were not adopted. The reform adopted was a modification of a proposal made by the Calabrian doctor Aloysius Lilius (or Lilio). Lilius's proposal included reducing the number of leap years in four centuries from 100 to 97, by making three out of four centurial years common instead of leap years. He also produced an original and practical scheme for adjusting the epacts of the Moon when calculating the annual date of Easter, solving a long-standing obstacle to calendar reform. Ancient tables provided the Sun's mean longitude. The German mathematician Christopher Clavius, the architect of the Gregorian calendar, noted that the tables agreed neither on the time when the Sun passed through the vernal equinox nor on the length of the mean tropical year. Tycho Brahe also noticed discrepancies. The Gregorian leap year rule (97 leap years in 400 years) was put forward by Petrus Pitatus of Verona in 1560. He noted that it is consistent with the tropical year of the Alfonsine tables and with the mean tropical year of Copernicus (De revolutionibus) and Erasmus Reinhold (Prutenic tables). The three mean tropical years in Babylonian sexagesimals as the excess over 365 days (the way they would have been extracted from the tables of mean longitude) were 0;14,33,9,57 (Alfonsine), 0;14,33,11,12 (Copernicus) and 0;14,33,9,24 (Reinhold). In decimal notation, these are equal to 0.24254606, 0.24255185, and 0.24254352, respectively. All values are the same to two sexagesimal places (0;14,33, equal to decimal 0.2425) and this is also the mean length of the Gregorian year. Thus Pitatus's solution would have commended itself to the astronomers. Lilius's proposals had two components. First, he proposed a correction to the length of the year. The mean tropical year is 365.24219 days long. A commonly used value in Lilius's time, from the Alfonsine tables, is 365.2425463 days. As the average length of a Julian year is 365.25 days, the Julian year is almost 11 minutes longer than the mean tropical year. The discrepancy results in a drift of about three days every 400 years. Lilius's proposal resulted in an average year of 365.2425 days (see Accuracy). At the time of Gregory's reform there had already been a drift of 10 days since the Council of Nicaea, resulting in the vernal equinox falling on 10 or 11 March instead of the ecclesiastically fixed date of 21 March, and if unreformed it would have drifted further. Lilius proposed that the 10-day drift should be corrected by deleting the Julian leap day on each of its ten occurrences over a period of forty years, thereby providing for a gradual return of the equinox to 21 March. Lilius's work was expanded upon by Christopher Clavius in a closely argued, 800-page volume. He would later defend his and Lilius's work against detractors. Clavius's opinion was that the correction should take place in one move, and it was this advice that prevailed with Gregory. The second component consisted of an approximation that would provide an accurate yet simple, rule-based calendar. Lilius's formula was a 10-day correction to revert the drift since the Council of Nicaea, and the imposition of a leap day in only 97 years in 400 rather than in 1 year in 4. 
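The Babylonian sexagesimal year-length excesses quoted above can be converted to decimals with a few lines of Python (a sketch; the helper name is my own):

def sexagesimal_fraction(*digits):
    # Convert a sexagesimal fraction 0;d1,d2,... to a decimal value.
    return sum(d / 60 ** (i + 1) for i, d in enumerate(digits))

for name, digits in [("Alfonsine", (14, 33, 9, 57)),
                     ("Copernicus", (14, 33, 11, 12)),
                     ("Reinhold", (14, 33, 9, 24))]:
    print(name, round(sexagesimal_fraction(*digits), 8))
# Alfonsine 0.24254606, Copernicus 0.24255185, Reinhold 0.24254352
# All three agree to two sexagesimal places (0;14,33 = 0.2425), the Gregorian mean.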
The proposed rule was that "years divisible by 100 would be leap years only if they were divisible by 400 as well". The 19-year cycle used for the lunar calendar required revision because the astronomical new moon was, at the time of the reform, four days before the calculated new moon. It was to be corrected by one day every 300 or 400 years (8 times in 2500 years) along with corrections for the years that are no longer leap years (i.e. 1700, 1800, 1900, 2100, etc.) In fact, a new method for computing the date of Easter was introduced. The method proposed by Lilius was revised somewhat in the final reform. When the new calendar was put in use, the error accumulated in the 13 centuries since the Council of Nicaea was corrected by a deletion of 10 days. The Julian calendar day Thursday, 4 October 1582 was followed by the first day of the Gregorian calendar, Friday, 15 October 1582 (the cycle of weekdays was not affected). First printed Gregorian calendar A month after having decreed the reform, the pope (with a brief of 3 April 1582) granted to one Antoni Lilio the exclusive right to publish the calendar for a period of ten years. The was printed by Vincenzo Accolti, one of the first calendars printed in Rome after the reform, notes at the bottom that it was signed with papal authorization and by Lilio (Con licentia delli Superiori... et permissu Ant(onii) Lilij). The papal brief was revoked on 20 September 1582, because Antonio Lilio proved unable to keep up with the demand for copies. Adoption Although Gregory's reform was enacted in the most solemn of forms available to the Church, the bull had no authority beyond the Catholic Church (of which he was the supreme religious authority) and the Papal States (which he personally ruled). The changes that he was proposing were changes to the civil calendar, which required adoption by the civil authorities in each country to have legal effect. The bull became the law of the Catholic Church in 1582, but it was not recognised by Protestant Churches, Eastern Orthodox Churches, Oriental Orthodox Churches, and a few others. Consequently, the days on which Easter and related holidays were celebrated by different Christian Churches again diverged. On 29 September 1582, Philip II of Spain decreed the change from the Julian to the Gregorian calendar. This affected much of Roman Catholic Europe, as Philip was at the time ruler over Spain and Portugal as well as much of Italy. In these territories, as well as in the Polish–Lithuanian Commonwealth and in the Papal States, the new calendar was implemented on the date specified by the bull, with Julian Thursday, 4 October 1582, being followed by Gregorian Friday, 15 October. The Spanish and Portuguese colonies followed somewhat later because of delay in communication. The other major Catholic power of Western Europe, France, adopted the change a few months later: 9 December was followed by 20 December. Many Protestant countries initially objected to adopting a Catholic innovation; some Protestants feared the new calendar was part of a plot to return them to the Catholic fold. For example, the British could not bring themselves to adopt the Catholic system explicitly: the Annexe to their Calendar (New Style) Act 1750 established a computation for the date of Easter that achieved the same result as Gregory's rules, without actually referring to him. Britain and the British Empire (including the eastern part of what is now the United States) adopted the Gregorian calendar in 1752. Sweden followed in 1753. 
Prior to 1917, Turkey used the lunar Islamic calendar with the Hijri era for general purposes and the Julian calendar for fiscal purposes. The start of the fiscal year was eventually fixed at 1 March and the year number was roughly equivalent to the Hijri year (see Rumi calendar). As the solar year is longer than the lunar year this originally entailed the use of "escape years" every so often when the number of the fiscal year would jump. From 1 March 1917 the fiscal year became Gregorian, rather than Julian. On 1 January 1926, the use of the Gregorian calendar was extended to include use for general purposes and the number of the year became the same as in most other countries. Adoption by country Difference between Gregorian and Julian calendar dates This section always places the intercalary day on 29 February even though it was always obtained by doubling 24 February (the bis sextum (twice sixth) or bissextile day) until the late Middle Ages. The Gregorian calendar is proleptic before 1582 (calculated backwards on the same basis, for years before 1582), and the difference between Gregorian and Julian calendar dates increases by three days every four centuries (all date ranges are inclusive). The following equation gives the number of days that the Gregorian calendar is ahead of the Julian calendar, called the "secular difference" between the two calendars. A negative difference means the Julian calendar is ahead of the Gregorian calendar. D = ⌊Y/100⌋ − ⌊Y/400⌋ − 2, where D is the secular difference and Y is the year using astronomical year numbering, that is, a numbering in which 1 BC is year 0 and earlier BC years are negative. ⌊x⌋ means that if the result of the division is not an integer it is rounded down to the nearest integer. The general rule, in years which are leap years in the Julian calendar but not the Gregorian, is: Up to 28 February in the calendar being converted from, add one day less or subtract one day more than the calculated value. Give February the appropriate number of days for the calendar being converted into. When subtracting days to calculate the Gregorian equivalent of 29 February (Julian), 29 February is discounted. Thus if the calculated value is −4 the Gregorian equivalent of this date is 24 February. Beginning of the year The year used in dates during the Roman Republic and the Roman Empire was the consular year, which began on the day when consuls first entered office, probably 1 May before 222 BC, 15 March from 222 BC and 1 January from 153 BC. The Julian calendar, which began in 45 BC, continued to use 1 January as the first day of the new year. Even though the year used for dates changed, the civil year always displayed its months in the order January to December from the Roman Republican period until the present. During the Middle Ages, under the influence of the Catholic Church, many Western European countries moved the start of the year to one of several important Christian festivals: 25 December (Christmas), 25 March (Annunciation), or Easter. Meanwhile, the Byzantine Empire began its year on 1 September and Russia did so on 1 March until 1492, when the new year was moved to 1 September. In common usage, 1 January was regarded as New Year's Day and celebrated as such, but from the 12th century until 1751 the legal year in England began on 25 March (Lady Day). So, for example, the Parliamentary record lists the execution of Charles I on 30 January as occurring in 1648 (as the year did not end until 24 March), although later histories adjust the start of the year to 1 January and record the execution as occurring in 1649.
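A minimal sketch of the secular-difference formula given above (the function name is my own choice):

def secular_difference(year: int) -> int:
    # Days the Gregorian calendar is ahead of the Julian calendar,
    # for a year in astronomical numbering (1 BC = 0). Python's // floors.
    return year // 100 - year // 400 - 2

print(secular_difference(1582), secular_difference(1752), secular_difference(2000))
# 10, 11 and 13: the 10 days dropped in 1582, the 11 days dropped by Britain
# in 1752, and the 13-day offset that applies from 1900 to 2099.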
Most Western European countries changed the start of the year to 1 January before they adopted the Gregorian calendar. For example, Scotland changed the start of the Scottish New Year to 1 January in 1600 (this means that 1599 was a short year). England, Ireland and the British colonies changed the start of the year to 1 January in 1752 (so 1751 was a short year with only 282 days). Later in 1752 in September the Gregorian calendar was introduced throughout Britain and the British colonies (see the section Adoption). These two reforms were implemented by the Calendar (New Style) Act 1750. In some countries, an official decree or law specified that the start of the year should be 1 January. For such countries, a specific date when a "1 January year" became the norm, can be identified. In other countries, the customs varied, and the start of the year moved back and forth as fashion and influence from other countries dictated various customs. Neither the papal bull nor its attached canons explicitly fix such a date, though the latter states that the "Golden number" of 1752 ends in December and a new year (and new Golden number) begins in January 1753. Dual dating During the period between 1582, when the first countries adopted the Gregorian calendar, and 1923, when the last European country adopted it, it was often necessary to indicate the date of some event in both the Julian calendar and in the Gregorian calendar, for example, "10/21 February 1750/51", where the dual year accounts for some countries already beginning their numbered year on 1 January while others were still using some other date. Even before 1582, the year sometimes had to be double-dated because of the different beginnings of the year in various countries. Woolley, writing in his biography of John Dee (1527–1608/9), notes that immediately after 1582 English letter writers "customarily" used "two dates" on their letters, one OS and one NS. Old Style and New Style dates "Old Style" (O.S.) and "New Style" (N.S.) indicate dating systems before and after a calendar change, respectively. Usually, this is the change from the Julian calendar to the Gregorian calendar as enacted in various European countries between 1582 and the early 20th century. In England, Wales, Ireland, and Britain's American colonies, there were two calendar changes, both in 1752. The first adjusted the start of a new year from Lady Day (25 March) to 1 January (which Scotland had done from 1600), while the second discarded the Julian calendar in favour of the Gregorian calendar, removing 11 days from the September 1752 calendar to do so. To accommodate the two calendar changes, writers used dual dating to identify a given day by giving its date according to both styles of dating. For countries such as Russia where no start of year adjustment took place, O.S. and N.S. simply indicate the Julian and Gregorian dating systems. Many Eastern Orthodox countries continue to use the older Julian calendar for religious purposes. Proleptic Gregorian calendar Extending the Gregorian calendar backwards to dates preceding its official introduction produces a proleptic calendar, which should be used with some caution. For ordinary purposes, the dates of events occurring prior to 15 October 1582 are generally shown as they appeared in the Julian calendar, with the year starting on 1 January, and no conversion to their Gregorian equivalents. For example, the Battle of Agincourt is universally considered to have been fought on 25 October 1415 which is Saint Crispin's Day. 
Usually, the mapping of new dates onto old dates with a start of year adjustment works well with little confusion for events that happened before the introduction of the Gregorian calendar. But for the period between the first introduction of the Gregorian calendar on 15 October 1582 and its introduction in Britain on 14 September 1752, there can be considerable confusion between events in continental western Europe and in British domains in English language histories. Events in continental western Europe are usually reported in English language histories as happening under the Gregorian calendar. For example, the Battle of Blenheim is always given as 13 August 1704. Confusion occurs when an event affects both. For example, William III of England set sail from the Netherlands on 11 November 1688 (Gregorian calendar) and arrived at Brixham in England on 5 November 1688 (Julian calendar). Shakespeare and Cervantes seemingly died on exactly the same date (23 April 1616), but Cervantes predeceased Shakespeare by ten days in real time (as Spain used the Gregorian calendar, but Britain used the Julian calendar). This coincidence encouraged UNESCO to make 23 April the World Book and Copyright Day. Astronomers avoid this ambiguity by the use of the Julian day number. For dates before the year 1, unlike the proleptic Gregorian calendar used in the international standard ISO 8601, the traditional proleptic Gregorian calendar (like the older Julian calendar) does not have a year 0 and instead uses the ordinal numbers 1, 2, ... both for years AD and BC. Thus the traditional time line is 2 BC, 1 BC, AD 1, and AD 2. ISO 8601 uses astronomical year numbering which includes a year 0 and negative numbers before it. Thus the ISO 8601 time line is −0001, 0000, 0001, and 0002.
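The mapping between traditional BC years and ISO 8601 astronomical numbering is a simple offset; a one-function sketch (the name is my own):

def astronomical_year(bc_year: int) -> int:
    # 1 BC -> 0, 2 BC -> -1, and so on: astronomical numbering inserts a year zero.
    return 1 - bc_year

print(astronomical_year(1), astronomical_year(2))  # 0 -1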
Months The Gregorian calendar continued to employ the Julian months, which have Latinate names and irregular numbers of days: January (31 days), from Latin Ianuarius, "Month of Janus", the Roman god of gates, doorways, beginnings and endings February (28 days in common and 29 in leap years), from Latin Februarius, "Month of the Februa", the Roman festival of purgation and purification, cognate with fever, the Etruscan death god Februus ("Purifier"), and the Proto-Indo-European word for sulfur March (31 days), from Latin Martius, "Month of Mars", the Roman war god April (30 days), from Latin Aprilis, of uncertain meaning but usually derived from some form of the verb aperire ("to open") or the name of the goddess Aphrodite May (31 days), from Latin Maius, "Month of Maia", a Roman vegetation goddess whose name is cognate with the Latin word for "great" and English major June (30 days), from Latin Iunius, "Month of Juno", the Roman goddess of marriage, childbirth, and rule July (31 days), from Latin Iulius, "Month of Julius Caesar", the month of Caesar's birth, instituted in 44 BC as part of his calendrical reforms August (31 days), from Latin Augustus, "Month of Augustus", instituted by Augustus in 8 BC in agreement with July and from the occurrence during the month of several important events during his rise to power September (30 days), from Latin September, "seventh month" of the ten-month Roman year of Romulus October (31 days), from Latin October, "eighth month" of the ten-month Roman year of Romulus November (30 days), from Latin November, "ninth month" of the ten-month Roman year of Romulus December (31 days), from Latin December, "tenth month" of the ten-month Roman year of Romulus Europeans sometimes attempt to remember the number of days in each month by memorizing some form of the traditional verse "Thirty Days Hath September". It appears in Latin, Italian, French and Portuguese, and belongs to a broad oral tradition, but the earliest currently attested form of the poem is the English marginalia inserted into a calendar of saints. Variations appeared in Mother Goose and continue to be taught at schools. The unhelpfulness of such involved mnemonics has been parodied as "Thirty days hath September/ But all the rest I can't remember" but it has also been called "probably the only sixteenth-century poem most ordinary citizens know by heart". A common nonverbal alternative is the knuckle mnemonic, considering the knuckles of one's hands as months with 31 days and the lower spaces between them as the months with fewer days. Using two hands, one may start from either pinkie knuckle as January and count across, omitting the space between the index knuckles (July and August). The same procedure can be done using the knuckles of a single hand, returning from the last (July) to the first (August) and continuing through. A similar mnemonic is to move up a piano keyboard in semitones from an F key, taking the white keys as the longer months and the black keys as the shorter ones. Weeks In conjunction with the system of months, there is a system of weeks. A physical or electronic calendar provides conversion from a given date to the weekday and shows multiple dates for a given weekday and month. Calculating the day of the week is not very simple, because of the irregularities in the Gregorian system. When the Gregorian calendar was adopted by each country, the weekly cycle continued uninterrupted.
For example, in the case of the few countries that adopted the reformed calendar on the date proposed by Gregory XIII for the calendar's adoption, Friday, 15 October 1582, the preceding date was Thursday, 4 October 1582 (Julian calendar). Opinions vary about the numbering of the days of the week. ISO 8601, in common use worldwide, starts with Monday = 1; printed monthly calendar grids often list Mondays in the first (left) column of dates and Sundays in the last. In North America, the week typically begins on Sunday and ends on Saturday. Accuracy The Gregorian calendar improves the approximation made by the Julian calendar by skipping three Julian leap days in every 400 years, giving an average year of 365.2425 mean solar days. This approximation has an error of about one day per 3,030 years with respect to the current value of the mean tropical year. However, because of the precession of the equinoxes, which is not constant, and the movement of the perihelion (which affects the Earth's orbital speed), the error with respect to the astronomical vernal equinox is variable; using the average interval between vernal equinoxes near 2000 of 365.24237 days implies an error closer to 1 day every 7,700 years. By any criterion, the Gregorian calendar is substantially more accurate than the 1 day in 128 years error of the Julian calendar (average year 365.25 days). In the 19th century, Sir John Herschel proposed a modification to the Gregorian calendar with 969 leap days every 4,000 years, instead of the 970 leap days that the Gregorian calendar would insert over the same period. This would reduce the average year to 365.24225 days. Herschel's proposal would make the year 4000, and multiples thereof, common instead of leap. While this modification has often been proposed since, it has never been officially adopted. On time scales of thousands of years, the Gregorian calendar falls behind the astronomical seasons. This is because the Earth's speed of rotation is gradually slowing down, which makes each day slightly longer over time (see tidal acceleration and leap second) while the year maintains a more uniform duration. Calendar seasonal error The drift between the Gregorian calendar and the astronomical seasons can be seen by plotting the date and time of the June solstice for each Gregorian calendar year. The error shifts by about a quarter of a day per year. Centurial years are ordinary years, unless they are divisible by 400, in which case they are leap years. This causes a correction in the years 1700, 1800, 1900, 2100, 2200, and 2300. For instance, these corrections cause 23 December 1903 to be the latest December solstice, and 20 December 2096 to be the earliest solstice, about 2.35 days of variation compared with the astronomical event. Proposed reforms The following are proposed reforms of the Gregorian calendar: the Holocene calendar; the International Fixed Calendar (also called the International Perpetual calendar); the World Calendar; the World Season Calendar; and leap week calendars such as the Pax Calendar, the Symmetry454 calendar and the Hanke–Henry Permanent Calendar.
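The accuracy comparison above can be reproduced with a short sketch; the exact drift figures depend on which tropical-year value is assumed (the article quotes about one day per 3,030 years for the Gregorian rule against the current mean tropical year):

TROPICAL_YEAR = 365.24219  # an approximate mean tropical year, in days (assumed here)

for name, average_year in [("Julian", 365.25),
                           ("Gregorian", 365 + 97 / 400),
                           ("Herschel", 365 + 969 / 4000)]:
    drift_per_year = average_year - TROPICAL_YEAR
    print(f"{name}: one day of drift every {abs(1 / drift_per_year):,.0f} years")
# Roughly: Julian ~128 years, Gregorian ~3,200 years, Herschel ~16,700 years.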
Technology
Timekeeping
null
26198824
https://en.wikipedia.org/wiki/Climate%20change%20feedbacks
Climate change feedbacks
Climate change feedbacks are natural processes that impact how much global temperatures will increase for a given amount of greenhouse gas emissions. Positive feedbacks amplify global warming while negative feedbacks diminish it. Feedbacks influence both the amount of greenhouse gases in the atmosphere and the amount of temperature change that happens in response. While emissions are the forcing that causes climate change, feedbacks combine to control climate sensitivity to that forcing. While the overall sum of feedbacks is negative, it is becoming less negative as greenhouse gas emissions continue. This means that warming is slower than it would be in the absence of feedbacks, but that warming will accelerate if emissions continue at current levels. Net feedbacks will stay negative largely because of increased thermal radiation as the planet warms, which is an effect that is several times larger than any other singular feedback. Accordingly, anthropogenic climate change alone cannot cause a runaway greenhouse effect. Feedbacks can be divided into physical feedbacks and feedbacks that are at least partially biological. Physical feedbacks include decreased surface reflectivity (from diminished snow and ice cover) and increased water vapor in the atmosphere. Water vapor is not only a powerful greenhouse gas, it also influences feedbacks in the distribution of clouds and temperatures in the atmosphere. Biological feedbacks are mostly associated with changes to the rate at which plant matter accumulates as part of the carbon cycle. The carbon cycle absorbs more than half of CO2 emissions every year into plants and into the ocean. Over the long term the percentage will be reduced as carbon sinks become saturated and higher temperatures lead to effects like drought and wildfires. Feedback strengths and relationships are estimated through global climate models, with their estimates calibrated against observational data whenever possible. Some feedbacks rapidly impact climate sensitivity, while the feedback response from ice sheets is drawn out over several centuries. Feedbacks can also result in localized differences, such as polar amplification resulting from feedbacks that include reduced snow and ice cover. While basic relationships are well understood, feedback uncertainty exists in certain areas, particularly regarding cloud feedbacks. Carbon cycle uncertainty is driven by the large rates at which CO2 is both absorbed into plants and released when biomass burns or decays. For instance, permafrost thaw produces both CO2 and methane emissions in ways that are difficult to model. Climate change scenarios use models to estimate how Earth will respond to greenhouse gas emissions over time, including how feedbacks will change as the planet warms. Definition and terminology The Planck response is the additional thermal radiation objects emit as they get warmer. Whether the Planck response is a climate change feedback depends on the context. In climate science the Planck response can be treated as an intrinsic part of warming that is separate from radiative feedbacks and carbon cycle feedbacks. However, the Planck response is included when calculating climate sensitivity. A feedback that amplifies an initial change is called a positive feedback while a feedback that reduces an initial change is called a negative feedback. Climate change feedbacks are in the context of global warming, so positive feedbacks enhance warming and negative feedbacks diminish it.
Naming a feedback positive or negative does not imply that the feedback is good or bad. The initial change that triggers a feedback may be externally forced, or may arise through the climate system's internal variability. External forcing refers to "a forcing agent outside the climate system causing a change in the climate system" that may push the climate system in the direction of warming or cooling. External forcings may be human-caused (for example, greenhouse gas emissions or land use change) or natural (for example, volcanic eruptions). Physical feedbacks Planck response (negative) The Planck response is "the most fundamental feedback in the climate system". As the temperature of a black body increases, the emission of infrared radiation increases with the fourth power of its absolute temperature according to the Stefan–Boltzmann law. This increases the amount of outgoing radiation back into space as the Earth warms. It is a strong stabilizing response and has sometimes been called the "no-feedback response" because it is an intensive property of a thermodynamic system when considered to be purely a function of temperature. Although Earth has an effective emissivity less than unity, the ideal black body radiation emerges as a separable quantity when investigating perturbations to the planet's outgoing radiation. The Planck "feedback" or Planck response is the comparable radiative response obtained from analysis of practical observations or global climate models (GCMs). Its expected strength has been most simply estimated from the derivative of the Stefan–Boltzmann equation as −4σT³ = −3.8 W/m²/K (watts per square meter per degree of warming). Accounting from GCM applications has sometimes yielded a reduced strength, caused by extensive properties of the stratosphere and similar residual artifacts subsequently identified as being absent from such models. Most extensive "grey body" properties of Earth that influence the outgoing radiation are usually postulated to be encompassed by the other GCM feedback components, and to be distributed in accordance with a particular forcing-feedback formulation of the climate system. Ideally the Planck response strength obtained from GCMs, indirect measurements, and black body estimates will further converge as analysis methods continue to mature. Water vapor feedback (positive) According to the Clausius–Clapeyron relation, saturation vapor pressure is higher in a warmer atmosphere, and so the absolute amount of water vapor will increase as the atmosphere warms. It is sometimes also called the specific humidity feedback, because relative humidity (RH) stays practically constant over the oceans, but it decreases over land. This occurs because land experiences faster warming than the ocean, and a decline in RH has been observed after the year 2000. Since water vapor is a greenhouse gas, the increase in water vapor content makes the atmosphere warm further, which allows the atmosphere to hold still more water vapor. Thus, a positive feedback loop is formed, which continues until the negative feedbacks bring the system to equilibrium. Increases in atmospheric water vapor have been detected from satellites, and calculations based on these observations place this feedback strength at 1.85 ± 0.32 W/m²/K. This is very similar to model estimates, which are at 1.77 ± 0.20 W/m²/K. Either value effectively doubles the warming that would otherwise occur from CO2 increases alone.
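As a rough check on the −3.8 W/m²/K figure above, the Stefan–Boltzmann derivative can be evaluated at Earth's effective emission temperature; the 255 K value used below is an assumption for illustration:

SIGMA = 5.670374419e-8   # Stefan–Boltzmann constant, W m^-2 K^-4
T_EFFECTIVE = 255.0      # approximate effective emission temperature of Earth, K (assumed)

outgoing = SIGMA * T_EFFECTIVE ** 4              # ~240 W/m^2 of thermal emission
planck_response = -4 * SIGMA * T_EFFECTIVE ** 3  # ~ -3.8 W/m^2 per degree of warming
print(round(outgoing, 1), round(planck_response, 2))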
Like the other physical feedbacks, the water vapor feedback is already accounted for in the warming projections under climate change scenarios. Lapse rate (negative) The lapse rate is the rate at which an atmospheric variable, normally temperature in Earth's atmosphere, falls with altitude. It is therefore a quantification of temperature, related to radiation, as a function of altitude, and is not a separate phenomenon in this context. The lapse rate feedback is generally a negative feedback. However, it is in fact a positive feedback in polar regions, where it strongly contributes to polar amplified warming, one of the biggest consequences of climate change. This is because in regions with strong inversions, such as the polar regions, the lapse rate feedback can be positive because the surface warms faster than higher altitudes, resulting in inefficient longwave cooling. The atmosphere's temperature decreases with height in the troposphere. Since emission of infrared radiation varies with temperature, longwave radiation escaping to space from the relatively cold upper atmosphere is less than that emitted toward the ground from the lower atmosphere. Thus, the strength of the greenhouse effect depends on the atmosphere's rate of temperature decrease with height. Both theory and climate models indicate that global warming will reduce the rate of temperature decrease with height, producing a negative lapse rate feedback that weakens the greenhouse effect. Surface albedo feedback (positive) Albedo is the measure of how strongly the planetary surface can reflect solar radiation, which prevents its absorption and thus has a cooling effect. Brighter and more reflective surfaces have a high albedo and darker surfaces have a low albedo, so they heat up more. The most reflective surfaces are ice and snow, so surface albedo changes are overwhelmingly associated with what is known as the ice-albedo feedback. A minority of the effect is also associated with changes in physical oceanography, soil moisture and vegetation cover. The presence of ice cover and sea ice makes the North Pole and the South Pole colder than they would have been without it. During glacial periods, additional ice increases the reflectivity and thus lowers the absorption of solar radiation, cooling the planet. But when warming occurs and the ice melts, darker land or open water takes its place and this causes more warming, which in turn causes more melting. In both cases, a self-reinforcing cycle continues until an equilibrium is found. Consequently, recent Arctic sea ice decline is a key reason behind the Arctic warming nearly four times faster than the global average since 1979 (the start of continuous satellite readings), in a phenomenon known as Arctic amplification. Conversely, the high stability of ice cover in Antarctica, where the East Antarctic ice sheet rises nearly 4 km above sea level, means that it has experienced very little net warming over the past seven decades. As of 2021, the total surface albedo feedback strength is estimated at 0.35 [0.10 to 0.60] W/m²/K. On its own, Arctic sea ice decline between 1979 and 2011 was responsible for 0.21 W/m² of radiative forcing. This is equivalent to a quarter of the impact from CO2 emissions over the same period. The combined change in all sea ice cover between 1992 and 2018 is equivalent to 10% of all the anthropogenic greenhouse gas emissions.
Ice-albedo feedback strength is not constant and depends on the rate of ice loss; models project that under high warming, its strength peaks around 2100 and declines afterwards, as most easily melted ice would already be lost by then. When CMIP5 models simulate a total loss of Arctic sea ice cover from June to September (a plausible outcome under higher levels of warming), global temperatures increase by 0.16–0.21 °C, while regional temperatures would increase by considerably more. These calculations include second-order effects such as the impact from ice loss on regional lapse rate, water vapor and cloud feedbacks, and do not cause "additional" warming on top of the existing model projections. Cloud feedback (positive) Seen from below, clouds emit infrared radiation back to the surface, which has a warming effect; seen from above, clouds reflect sunlight and emit infrared radiation to space, leading to a cooling effect. Low clouds are bright and very reflective, so they lead to strong cooling, while high clouds are too thin and transparent to effectively reflect sunlight, so they cause overall warming. As a whole, clouds have a substantial cooling effect. However, climate change is expected to alter the distribution of cloud types in a way which collectively reduces their cooling and thus accelerates overall warming. While changes to clouds act as a negative feedback in some latitudes, they represent a clear positive feedback on a global scale. As of 2021, cloud feedback strength is estimated at 0.42 [–0.10 to 0.94] W/m²/K. This is the largest confidence interval of any climate feedback, and it occurs because some cloud types (most of which are present over the oceans) have been very difficult to observe, so climate models don't have as much data to go on when they attempt to simulate their behaviour. Additionally, clouds have been strongly affected by aerosol particles, mainly from the unfiltered burning of sulfur-rich fossil fuels such as coal and bunker fuel. Any estimate of cloud feedback needs to disentangle the effects of so-called global dimming caused by these particles as well. Thus, estimates of cloud feedback differ sharply between climate models. Models with the strongest cloud feedback have the highest climate sensitivity, which means that they simulate much stronger warming in response to a doubling of CO2 (or equivalent greenhouse gas) concentrations than the rest. Around 2020, a small fraction of models was found to simulate so much warming that they contradicted paleoclimate evidence from fossils, and their output was effectively excluded from the climate sensitivity estimate of the IPCC Sixth Assessment Report. Biogeophysical and biogeochemical feedbacks CO2 feedbacks (mostly negative) There are positive and negative climate feedbacks from Earth's carbon cycle. Negative feedbacks are large, and play a great role in the studies of climate inertia or of dynamic (time-dependent) climate change. Because they are considered relatively insensitive to temperature changes, they are sometimes considered separately or disregarded in studies which aim to quantify climate sensitivity. Global warming projections have included carbon cycle feedbacks since the IPCC Fourth Assessment Report (AR4) in 2007. While the scientific understanding of these feedbacks was limited at the time, it has improved since then.
These positive feedbacks include an increase in wildfire frequency and severity, substantial carbon losses from tropical rainforests due to fires and drying, and tree losses elsewhere. The Amazon rainforest is a well-known example due to its enormous size and importance, and because the damage it experiences from climate change is exacerbated by the ongoing deforestation. The combination of the two threats can potentially transform much or all of the rainforest to a savannah-like state, although this would most likely require relatively high levels of warming. Altogether, carbon sinks in the land and ocean absorb around half of the current CO2 emissions. Their future absorption is dynamic. If emissions decrease in the future, the fraction absorbed will increase, up to three-quarters of the remaining emissions, yet the absolute amount absorbed will decrease from the present. Conversely, if emissions increase, the absolute amount absorbed will increase from now, yet the fraction could decline to one-third by the end of the 21st century. If emissions remain very high after the 21st century, carbon sinks would eventually be completely overwhelmed, with the ocean sink diminished further and land ecosystems becoming an outright net source. Hypothetically, very strong carbon dioxide removal could also result in land and ocean carbon sinks becoming net sources for several decades. Role of oceans Following Le Chatelier's principle, the chemical equilibrium of the Earth's carbon cycle will shift in response to anthropogenic CO2 emissions. The primary driver of this is the ocean, which absorbs anthropogenic CO2 via the so-called solubility pump. At present this accounts for only about one third of the current emissions, but ultimately most (~75%) of the CO2 emitted by human activities will dissolve in the ocean over a period of centuries: "A better approximation of the lifetime of fossil fuel CO2 for public discussion might be 300 years, plus 25% that lasts forever". However, the rate at which the ocean will take it up in the future is less certain, and will be affected by stratification induced by warming and, potentially, changes in the ocean's thermohaline circulation. It is believed that the single largest factor in determining the total strength of the global carbon sink is the state of the Southern Ocean, particularly of the Southern Ocean overturning circulation. Chemical weathering Chemical weathering over the geological long term acts to remove CO2 from the atmosphere. With current global warming, weathering is increasing, demonstrating significant feedbacks between climate and the Earth's surface. Biosequestration also captures and stores CO2 by biological processes. The formation of shells by organisms in the ocean, over a very long time, removes CO2 from the oceans. The complete conversion of CO2 to limestone takes thousands to hundreds of thousands of years. Primary production through photosynthesis Net primary productivity of plants and phytoplankton grows as increased CO2 fuels their photosynthesis in what is known as the CO2 fertilization effect. Additionally, plants require less water as atmospheric CO2 concentrations increase, because they lose less moisture to evapotranspiration through open stomata (the pores in leaves through which CO2 is absorbed). However, increased droughts in certain regions can still limit plant growth, and warming beyond optimum conditions has a consistently negative impact.
Thus, estimates for the 21st century show that plants would become a lot more abundant at high latitudes near the poles but grow much less near the tropics; there is only medium confidence that tropical ecosystems would gain more carbon relative to now. However, there is high confidence that the total land carbon sink will remain positive. Non-CO2 climate-relevant gases (unclear) Release of gases of biological origin would be affected by global warming; these include climate-relevant gases such as methane and nitrous oxide, while others, such as dimethyl sulfide released from oceans, have indirect effects. Emissions of methane from land (particularly from wetlands) and of nitrous oxide from land and oceans are a known positive feedback: for example, long-term warming changes the balance of the methane-related microbial community within freshwater ecosystems, so that they produce more methane while proportionately less is oxidised to carbon dioxide. There would also be biogeophysical changes which affect the albedo. For instance, larch in some sub-arctic forests are being replaced by spruce trees. This has a limited contribution to warming, because larch trees shed their needles in winter and so they end up more extensively covered in snow than the spruce trees, which retain their dark needles all year. On the other hand, changes in emissions of compounds such as sea salt, dimethyl sulphide, dust, ozone and a range of biogenic volatile organic compounds are expected to be negative overall. As of 2021, all of these non-CO2 feedbacks are believed to practically cancel each other out, but there is only low confidence, and the combined feedbacks could be up to 0.25 W/m²/K in either direction. Permafrost (positive) Permafrost is not included in the estimates above, as it is difficult to model, and estimates of its role are strongly time-dependent, as its carbon pools are depleted at different rates under different warming levels. Instead, it is treated as a separate process that will contribute to near-term warming, with its own best estimates. Long-term feedbacks Ice sheets The Earth's two remaining ice sheets, the Greenland ice sheet and the Antarctic ice sheet, cover the world's largest island and an entire continent, and both of them are well over a kilometre thick on average. Due to this immense size, their response to warming is measured in thousands of years and is believed to occur in two stages. The first stage would be the effect from ice melt on thermohaline circulation. Because meltwater is completely fresh, it makes it harder for the surface layer of water to sink beneath the lower layers, and this disrupts the exchange of oxygen, nutrients and heat between the layers. This would act as a negative feedback, sometimes estimated as a cooling effect when averaged over 1,000 years, though research on these timescales has been limited. An even longer-term effect is the ice-albedo feedback from ice sheets reaching their ultimate state in response to whatever the long-term temperature change would be. Unless the warming is reversed entirely, this feedback would be positive.
The total loss of the Greenland ice sheet is estimated to add a further increment to global warming (with a range of 0.04–0.06 °C), the loss of the West Antarctic ice sheet a comparable amount (0.04–0.06 °C), and the loss of the East Antarctic ice sheet an additional increment. Total loss of the Greenland ice sheet would also increase regional temperatures in the Arctic by considerably more, while the regional temperature in Antarctica is likely to go up after the loss of the West Antarctic ice sheet, and further still after the loss of the East Antarctic ice sheet. These estimates assume that global warming stays at a fixed average level. Because of the logarithmic growth of the greenhouse effect, the impact from ice loss would be larger at the slightly lower warming level of the 2020s, but it would become lower if the warming proceeds towards higher levels. While Greenland and the West Antarctic ice sheet are likely committed to melting entirely if long-term warming settles at a comparatively low level, the East Antarctic ice sheet would not be at risk of complete disappearance until far higher levels of global warming. Methane hydrates Methane hydrates or methane clathrates are frozen compounds where a large amount of methane is trapped within a crystal structure of water, forming a solid similar to ice. On Earth, they generally lie beneath sediments on the ocean floors, far below the sea surface. Around 2008, there was a serious concern that a large amount of hydrates from relatively shallow deposits in the Arctic, particularly around the East Siberian Arctic Shelf, could quickly break down and release large amounts of methane, potentially leading to substantial additional warming within 80 years. Current research shows that hydrates react very slowly to warming, and that it is very difficult for methane to reach the atmosphere after dissociation on the seafloor. Thus, no "detectable" impact on global temperatures is expected to occur in this century due to methane hydrates. Some research suggests hydrate dissociation can still cause additional warming over several millennia. Mathematical formulation of global energy imbalance Earth is a thermodynamic system for which long-term temperature changes follow the global energy imbalance EEI (Earth's energy imbalance): EEI = ASR − OLR, where ASR is the absorbed solar radiation and OLR is the outgoing longwave radiation at the top of the atmosphere. When EEI is positive the system is warming, when it is negative the system is cooling, and when it is approximately zero there is neither warming nor cooling. The ASR and OLR terms in this expression encompass many temperature-dependent properties and complex interactions that govern system behavior. In order to diagnose that behavior around a relatively stable equilibrium state, one may consider a perturbation to EEI, as indicated by the symbol Δ. Such a perturbation is typically induced by a radiative forcing (ΔF), which can be natural or man-made. Responses within the system that either return it towards the stable state or move it further away from the stable state are called feedbacks, written λΔT: ΔEEI = ΔF + λΔT. A feedback is a thermodynamic process while a forcing is a thermodynamic operation according to classical principles. Collectively the feedbacks may be approximated by the linearized parameter λ and the perturbed temperature ΔT because all components of λ (assumed to first order to act independently and additively) are also functions of temperature, albeit to varying extents, by definition for a thermodynamic system: λ = Σ_i λ_i.
Some feedback components having significant influence on EEI are: λ_wv = water vapor, λ_c = clouds, λ_a = surface albedo, λ_C = carbon cycle, λ_P = Planck response, and λ_L = lapse rate. All quantities are understood to be global averages, while T is usually translated to temperature at the surface because of its direct relevance to humans and much other life. The negative Planck response, being an especially strong function of temperature, is sometimes factored out to give an expression in terms of the relative feedback gains g_i from the other components: g_i = −λ_i / λ_P. For example, g_wv = −λ_wv / λ_P for the water vapor feedback. Within the context of modern numerical climate modelling and analysis, the linearized formulation has limited use. One such use is to diagnose the relative strengths of different feedback mechanisms. An estimate of climate sensitivity to a forcing is then obtained for the case where the net feedback remains negative and the system reaches a new equilibrium state (ΔEEI = 0) after some time has passed: ΔT = −ΔF / λ. Implications for climate policy Uncertainty over climate change feedbacks has implications for climate policy. For instance, uncertainty over carbon cycle feedbacks may affect targets for reducing greenhouse gas emissions (climate change mitigation). Emissions targets are often based on a target stabilization level of atmospheric greenhouse gas concentrations, or on a target for limiting global warming to a particular magnitude. Both of these targets (concentrations or temperatures) require an understanding of future changes in the carbon cycle. If models incorrectly project future changes in the carbon cycle, then concentration or temperature targets could be missed. For example, if models underestimate the amount of carbon released into the atmosphere due to positive feedbacks (e.g., due to thawing permafrost), then they may also underestimate the extent of emissions reductions necessary to meet a concentration or temperature target.
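As an illustration of how the linearized expressions above combine, here is a rough numerical sketch. The Planck, water vapor, surface albedo and cloud values are the central estimates quoted earlier in this article; the lapse-rate value and the doubled-CO2 forcing of about 3.7 W/m² are assumptions made only for this example, and carbon-cycle feedbacks are omitted:

# Illustrative feedback parameters, in W/m^2 per kelvin of surface warming.
feedbacks = {
    "planck": -3.8,        # Planck response (the article's simple estimate)
    "water_vapor": 1.85,   # satellite-based estimate quoted above
    "lapse_rate": -0.5,    # assumed placeholder value, for illustration only
    "surface_albedo": 0.35,
    "cloud": 0.42,
}

lam = sum(feedbacks.values())   # net feedback parameter; negative, so the system is stable
delta_f = 3.7                   # assumed radiative forcing for doubled CO2, W/m^2
delta_t = -delta_f / lam        # equilibrium warming from ΔT = -ΔF / λ
print(round(lam, 2), "W/m^2/K net;", round(delta_t, 1), "K of equilibrium warming")

With these illustrative numbers the net feedback is about −1.7 W/m²/K and the implied equilibrium warming is roughly 2 K; the point is only to show how the sign and size of λ control the response, not to reproduce any assessed climate sensitivity.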
Physical sciences
Climate change
Earth science
41672549
https://en.wikipedia.org/wiki/Forth%20Bridge
Forth Bridge
The Forth Bridge is a cantilever railway bridge across the Firth of Forth in the east of Scotland, west of central Edinburgh. Completed in 1890, it is considered a symbol of Scotland (having been voted Scotland's greatest man-made wonder in 2016), and is a UNESCO World Heritage Site. It was designed by English engineers Sir John Fowler and Sir Benjamin Baker. It is sometimes referred to as the Forth Rail Bridge (to distinguish it from the adjacent Forth Road Bridge), although this is not its official name. Construction of the bridge began in 1882 and it was opened on 4 March 1890 by the Duke of Rothesay, the future Edward VII. The bridge carries the Edinburgh–Aberdeen line across the Forth between the villages of South Queensferry and North Queensferry. When it opened it had the longest single cantilever bridge span in the world, until 1919 when the Quebec Bridge in Canada was completed; it continues to be the world's second-longest single cantilever span. The bridge and its associated railway infrastructure are owned by Network Rail. Background Earlier proposals Before the construction of the bridge, ferries were used to cross the Firth. In 1806, a pair of tunnels, one for each direction, was proposed, and in 1818 James Anderson produced a design for a three-span suspension bridge close to the site of the present one. It called for a remarkably small quantity of iron, of which Wilhelm Westhofen said "and this quantity [of iron] distributed over the length would have given it a very light and slender appearance, so light indeed that on a dull day it would hardly have been visible, and after a heavy gale probably no longer to be seen on a clear day either". For the railway age, Thomas Bouch designed for the Edinburgh and Northern Railway a roll-on/roll-off ferry between Granton and Burntisland that opened in 1850, which proved so successful that another was ordered for the Tay. In late 1863, a joint project between the North British Railway and Edinburgh and Glasgow Railway, which would merge in 1865, appointed Stephenson and Toner to design a bridge for the Forth, but the commission was given to Bouch around six months later. It had proven difficult to engineer a suspension bridge that was able to carry railway traffic, and Thomas Bouch, engineer to the North British Railway (NBR) and Edinburgh and Glasgow Railway, was in 1863–1864 working on a single-track girder bridge crossing the Forth near Charlestown, where the river is wide but mostly relatively shallow. The promoters, however, were concerned about the ability to set foundations in the silty river bottom, as borings had gone deep into the mud without finding rock, but Bouch conducted experiments to demonstrate that it was possible for the silt to support considerable weight. Experiments in late 1864 with weighted caissons achieved a substantial pressure on the silt, encouraging Bouch to continue with the design. In August 1865, Richard Hodgson, chairman of the NBR, proposed that the company invest in trying a different kind of foundation, as the weighted caissons had not been successful. Bouch proposed using a large pine platform underneath the piers (the original design called for a platform of green beech), weighed down with pig iron which would sink the wooden platform to the level of the silt. The platform was launched on 14 June 1866 after some difficulty in getting it to move down the greased planks it rested on, and then moored in the harbour for six weeks pending completion.
The bridge project was aborted just before the platform was sunk as the NBR expected to lose "through traffic" following the amalgamation of the Caledonian Railway and the Scottish North Eastern Railway. In September 1866, a committee of shareholders investigating rumours of financial difficulties found that accounts had been falsified, and the chairman and the entire board had resigned by November. By mid-1867 the NBR was nearly bankrupt, and all work on the Forth and Tay bridges was stopped. The North British Railway took over the ferry at Queensferry in 1867, and completed a rail link from Ratho in 1868, establishing a contiguous link with Fife. Interest in bridging the Forth increased again, and in 1871 Bouch proposed a stiffened steel suspension bridge on roughly the same line as taken by the present rail bridge. This design was examined and pronounced acceptable by W. H. Barlow and William Pole, both "eminent" civil engineers, and Parliament passed in August 1873 an act authorising its construction. Work started in September 1878, in the form of a brick pier at the western end of the mid-Forth island of Inchgarvie. After the Tay Bridge collapsed in 1879, confidence in Bouch dried up and the work stopped. The public inquiry into the disaster, chaired by Henry Cadogan Rothery, found the Tay Bridge to be "badly designed, badly constructed and badly maintained", with Bouch being "mainly to blame" for the defects in construction and maintenance and "entirely responsible" for the defects in design. In particular, Bouch had failed to properly account for the effect that high winds would have on the bridge, and in response to this finding the Board of Trade imposed a requirement that all bridges be designed to accept a specified lateral wind loading. Bouch's 1871 design for the Forth Bridge fell significantly short of this figure, as, on the advice of the Astronomer Royal, he had assumed a much lower wind loading. This had been accepted by Barlow and Pole in their 1873 assessment of the design, though they qualified in their report that "[while] we raise no object to Mr. Bouch's system, we do not commit ourselves to an opinion that it is the best possible". Bouch's design was formally abandoned on 13 January 1881, and Sir John Fowler, W. H. Barlow, and T. E. Harrison, consulting engineers to the project, were invited to propose new designs. Bouch's Inchgarvie pier was left in place, protruding slightly from the water at high tide. It lies directly under the present bridge and was equipped with a small navigational light. Design Dimensions The bridge spans the Forth between the villages of South Queensferry and North Queensferry, carrying a double track elevated well above the water level at high tide. It consists of two main spans, two side spans, and 15 approach spans. Each main span consists of two cantilever arms supporting a central span truss. The bridge superstructure, including its 6.5 million rivets, weighs tens of thousands of tons, and a large quantity of granite was also used. Each tower of the three great four-tower cantilever structures rests on a separate granite pier. These were constructed using large-diameter caissons; those for the north cantilever and two on the small uninhabited island of Inchgarvie acted as cofferdams, while the remaining two on Inchgarvie and those for the south cantilever, where the river bed was below high-water level, used compressed air to keep water out of the working chamber at the base.
Engineering principles The bridge is built on the principle of the cantilever bridge, where a cantilever beam supports a light central girder, a principle that has been used for thousands of years in the construction of bridges. To illustrate the use of tension and compression in the bridge, a demonstration in 1887 had the Japanese engineer Kaichi Watanabe supported between Fowler and Baker, who sat in chairs. Fowler and Baker represent the cantilevers, with their arms in tension and the sticks they held under compression, and piles of bricks at the outer ends represent the cantilever end piers, which are weighted with cast iron. Materials The bridge was the first major structure in Britain to be constructed of steel; its French contemporary, the Eiffel Tower, was built of wrought iron. Large amounts of steel became available after the invention of the Bessemer process, patented in 1856. In 1859, the Board of Trade imposed a limit of for the maximum design stress in railway bridges; this was revised as technology progressed. The original design required for the cantilevers only, of which was to come from Siemens' steel works in Landore, Wales and the remainder from the Steel Company of Scotland's works near Glasgow. When modifications to the design necessitated a further , about half of this was supplied by the Steel Company of Scotland Ltd. and half by Dalzell's Iron and Steel Works in Motherwell. About of rivets came from the Clyde Rivet Company of Glasgow. Around three or four thousand tons of steel was scrapped, some of which had been used for temporary purposes, resulting in the discrepancy between the quantity delivered and the quantity erected. Approaches After Dalmeny railway station, the track curves slightly to the east before coming to the southern approach viaduct. After the railway crosses the bridge, it passes through North Queensferry railway station, before curving to the west, and then back to the east over the Jamestown Viaduct. The approaches were built under separate contract and were to the design of the engineer James Carswell. The supports of the approach viaducts are tapered to prevent the impression of the columns widening as they approach the top, and an evaluation of the aesthetics of the Bridge in 2007, by A. D. Magee of the University of Bath, identified that order was present throughout, including in the approach viaducts. Magee points out that the masonry was carefully planned, and has neat block work even in areas not immediately visible from the ground. Construction The Bill for the construction of the bridge was passed on 19 May 1882 after an eight-day enquiry, the only objections being from rival railway companies. On 21 December, the contract was let to Sir Thomas Tancred, T. H. Falkiner and Joseph Philips, civil engineers and contractors, and Sir William Arrol & Co. Arrol was a self-made man, who had been apprenticed to a blacksmith at the age of thirteen before going on to have a highly successful business. Tancred was a professional engineer who had worked with Arrol before, but he would leave the partnership during the course of construction. The steel was produced by the Siemens-Martin open-hearth process, developed by Frederick and William Siemens (England) and Pierre and Emile Martin (France). Following advances in furnace design by the Siemens brothers and improvements by the Martin brothers, the process of manufacture enabled high quality steel to be produced quickly. Preparations The new works took possession of offices and stores erected by Arrol in connection with Bouch's bridge; these were expanded considerably over time.
Reginald Middleton carried out an accurate survey to establish the exact position of the bridge and allow the permanent construction work to commence. The old coastguard station at the Fife end had to be removed to make way for the north-east pier. The rocky shore was levelled to a height of above high water to make way for plant and materials, and huts and other facilities for workmen were set up further inland. The preparations at South Queensferry were much more substantial, and required the steep hillside to be terraced. Wooden huts and shops for the workmen were put up, as well as more substantial brick houses for the foremen and tenements for leading hands and gangers. Drill roads and workshops were built, as well as a drawing loft to allow full size drawings and templates to be laid out. A cable was also laid across the Forth to allow telephone communication between the centres at South Queensferry, Inchgarvie, and North Queensferry, and girders from the collapsed Tay Bridge were laid across the railway to the west in order to allow access to the ground there. Near the shore a sawmill and cement store were erected; a substantial jetty around long was started early in 1883 and extended as necessary; and sidings were built to bring railway vehicles among the shops, with cranes set up to allow the loading and movement of material delivered by rail. In April 1883, construction of a landing stage at Inchgarvie commenced. Extant buildings, including fortifications built in the 15th century, were roofed over to increase the available space, and the rock at the west of the island was cut down to a level above high water, and a seawall was built to protect against large waves. In 1884 a compulsory purchase order was obtained for the island, as it was found that the previously available area enclosed by the four piers of the bridge was insufficient for the storage of materials. Iron staging, reinforced with wood in heavily used areas, was put up over the island, eventually covering around and using over of iron. Movement of materials The bridge uses of steel and of masonry. Many materials, including granite from Aberdeen, Arbroath rubble, sand, timber, and sometimes coke and coal, could be taken straight to the centre where they were required. Steel was delivered by train and prepared at the yard at South Queensferry, painted with boiled linseed oil, and was then taken to where it was needed by barge. The cement used was Portland cement manufactured on the Medway. It had to be stored before it could be used, and up to of cement could be kept in a barge, formerly called the Hougoumont, that was moored off South Queensferry. A paddle steamer was initially hired for the movement of workers, but it was later replaced with one capable of carrying 450 men, and the barges were also used for carrying people. Special trains were run from Edinburgh and Dunfermline, and a steamer ran to Leith in the summer. Circular piers The three cantilever towers are each seated on four circular piers. Since the foundations were required to be constructed at or below sea level, they were excavated with the assistance of caissons and cofferdams. Caissons were used at locations that were either always under water, even at low tide, or where the foundations were to be built on mud and clay. Cofferdams were used where rock was nearer to the surface, and it was possible to work at low tide. Six caissons were excavated by the pneumatic process by the French contractor L. Coiseau.
This process used a positive air pressure inside a sealed caisson to allow dry working conditions at depths of up to . These caissons were constructed and assembled in Glasgow by the Arrol Brothers, namesakes of but unconnected to W. Arrol, before being dismantled and transported to South Queensferry. The caissons were then built up to a large extent before being floated to their final resting-places. The first caisson, for the south-west pier at South Queensferry, was launched on 26 May 1884, and the last caisson was launched on 29 May 1885 for the south-west pier at Inchgarvie. When the caissons had been launched and moored, they were extended upwards with a temporary portion in order to keep water out and allow the granite pier to be built when in place. Above the foundations, each of which is different to suit the different sites, is a tapered circular granite pier with a diameter of at the bottom and a height of . Inchgarvie The rock on which the two northern piers at Inchgarvie are located is submerged at high water, and of the other two piers, the site of the eastern one was about half submerged and that of the western one three-quarters submerged. This meant work initially had to be done at low tide. The southern piers on Inchgarvie are sited on solid rock with a slope of around 1 in 5, so the rock was prepared with concrete and sandbags to make a landing-spot for the caissons. Excavation was carried out by drilling and blasting, but no blasting was done within of the caissons, and the remaining rock was quarried to within . North Queensferry Once the positions of the piers had been established, the first task at the Fife end was to level the site of the northernmost piers, a bedrock of whinstone rising to a level of above high water, to a height of above high water. The south piers at North Queensferry are sited on rock sloping into the sea, and the site was prepared by diamond drilling holes for explosive charges and blasting the rock. South Queensferry The four South Queensferry caissons were all sunk by the pneumatic method, and are identical in design except for differences in height. A T-shaped jetty was built at the site of the South Queensferry piers, to allow one caisson to be attached to each corner, and when launched the caissons were attached to the jetty and permitted to rise and fall with the tide. Excavation beneath the caissons was generally only carried out at high tide when the caisson was supported by buoyancy, and then when the tide fell the air pressure was reduced in order to allow the caisson to sink down, and digging would begin anew. The north-west caisson was towed into place in December 1884, but an exceptionally low tide on New Year's Day 1885 caused the caisson to sink into the mud of the river bed and adopt a slight tilt. When the tide rose, it flooded over the lower edge, filling the caisson with water; when the tide fell, the water did not drain from the caisson, and its top-heaviness caused it to tilt further. Plates were bolted on by divers to raise the edge of the caisson above water level, and the caisson was reinforced with wooden struts as water was pumped out, but pumping took place too quickly and the water pressure tore a hole between long. It was decided to construct a "barrel" of large timbers inside the caisson to reinforce it, and it was ten months before the caisson could be pumped out and dug free. The caisson was refloated on 19 October 1885, and then moved into position and sunk with suitable modifications.
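The controlled sinking described above, lowering the internal air pressure as the tide fell so that the caisson settled, comes down to a buoyancy balance: the caisson floats while the weight of the water it displaces exceeds its own weight. Below is a minimal sketch of that balance; the diameter and weight used are illustrative assumptions rather than the actual Forth caisson figures.

```python
# A minimal sketch of the float/settle balance used when lowering a caisson:
# it floats while the weight of the water it displaces exceeds its own
# weight, and settles once enough water is admitted (by lowering the
# internal air pressure) that displacement no longer supports it.
# The diameter and weight are illustrative assumptions.
import math

RHO_WATER = 1025.0  # kg/m^3
G = 9.81            # m/s^2

def buoyant_force_n(diameter_m: float, draught_m: float) -> float:
    """Buoyancy of a vertical cylindrical caisson immersed to a given draught."""
    displaced_volume_m3 = math.pi * (diameter_m / 2.0) ** 2 * draught_m
    return RHO_WATER * G * displaced_volume_m3

weight_n = 3_000_000.0 * G  # an assumed caisson mass of 3,000 tonnes
for draught in (6.0, 8.0, 10.0):
    fb = buoyant_force_n(21.0, draught)  # assumed 21 m diameter
    state = "floats" if fb > weight_n else "settles"
    print(f"draught {draught:4.1f} m: buoyancy {fb / 1e6:5.1f} MN "
          f"vs weight {weight_n / 1e6:.1f} MN -> {state}")
```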
Approach viaducts The approach viaducts to the north and south had to be carried at above the level of high water, and it was decided to build them at a lower level and then raise them in tandem with the construction of the masonry piers. The two viaducts have fifteen spans between them, each one long and weighing slightly over . Two spans are attached together to make a continuous girder, with an expansion joint between each pair of spans. Due to the slope of the hill under the viaducts, the girders were assembled at different heights, and only joined when they had reached the same level. Lifting was done using large hydraulic rams, and took place in increments of around every four days. Building the cantilevers The tubular members were constructed in the No. 2 workshop further up the hill at South Queensferry. To bend plates into the required shape, they were first heated in a gas furnace, and then pressed into the correct curve. The curved plates were then assembled on a mandrel, and holes drilled for rivets, before they were marked individually and moved to the correct location to be added to the structure. Lattice members and other parts were also assembled at South Queensferry, using cranes and highly efficient hydraulic rivetters. Opening The bridge was completed in December 1889, and load testing of the completed bridge was carried out on 21 January 1890. Two trains, each consisting of three heavy locomotives and 50 wagons loaded with coal, totalling 1,880 tons in weight, were driven slowly from South Queensferry to the middle of the north cantilever, stopping frequently to measure the deflection of the bridge. This represented more than twice the design load of the bridge: the deflection under load was as expected. A few days previously there had been a violent storm, producing the highest wind pressure recorded to date at Inchgarvie, and the deflection of the cantilevers had been less than 25 mm (1 in). The first complete crossing took place on 24 February, when a train consisting of two carriages carrying the chairmen of the railway companies involved made several crossings. The bridge was opened on 4 March 1890 by the Duke of Rothesay, later King Edward VII, who drove home the last rivet, which was gold plated and suitably inscribed. The key for the official opening was made by Edinburgh silversmith John Finlayson Bain, commemorated in a plaque on the bridge. When it opened it had the longest single cantilever bridge span in the world, until 1919 when the Quebec Bridge in Canada was completed. It continues to be the world's second-longest single cantilever span, with a span of . To make the fullest use of the bridge, several new railway connections were built, bringing main line routes to the bridge. The construction of some of these lines was only completed on 2 June 1890, delaying the implementation of a full express train service over the bridge until that date. Even then, there was considerable congestion at Edinburgh Waverley station with remarshalling of the portions of the new, more intensive train service. Accidents and deaths At its peak, approximately 4,600 workers were employed in the bridge's construction. Wilhelm Westhofen recorded in 1890 that 57 people died. In 2005 the Forth Bridge Memorial Committee was set up to erect a monument to those lost, and a team of local historians set out to name all those who died. As of 2009, 73 deaths have been connected with the construction of the bridge and its immediate aftermath. 
It is thought that the figure of 57 deaths excluded those who died working on the approaches to the bridge, as those parts were completed by a subcontractor, as well as those who died after the Sick and Accident Club ceased to operate. Of the 73 recorded deaths, 38 were as a result of falling, 9 of being crushed, 9 drowned, 8 struck by a falling object, 3 died in a fire in a bothy, 1 of caisson disease, and the cause of the remaining 5 deaths is unknown. The Sick and Accident Club was founded in 1883, and membership was compulsory for all contractors' employees. It would provide medical treatment to men and sometimes their families, and pay them if they were unable to work. The club also paid for funerals within certain limits, and would provide grants to the widows of men killed or the wives of those permanently disabled. Eight men were saved from drowning by rowing boats positioned in the river under the working areas. In 2019, it was reported that historians of the Queensferry Historian Group had discovered that at least 21 more men died building the Forth Bridge than was previously thought, in an alleged "cover up" of the true human cost of the structure, taking the new death toll to 78. This arguably makes the Forth Bridge a more deadly structure than the failed Tay Bridge: the Tay Bridge disaster, which led to the earlier Forth Bridge proposal being halted and subsequently redesigned, accounted for 59 known deaths, and a further 14 men died during the Tay Bridge's construction, a combined total of 73. Later history Race to the North Before the opening of the Forth Bridge, the railway journey from London to Aberdeen had taken about 13 hours on a west coast route using the London and North Western Railway and Caledonian Railway. With competition opened along the east coast route from the Great Northern, North Eastern and North British railways and starting from King's Cross, unofficial racing took place between the two consortia, reducing the journey time to about 8 hours on the overnight runs. This reached a climax in 1895 with sensational daily press reports about the "Race to the North". When race fever subsided, the journey times became around 10 hours. World wars In the First World War, British sailors would time their departures or returns to the base at Rosyth by asking when they would pass under the bridge. The first German air attack on Britain in the Second World War took place over the Forth Bridge, six weeks into the war, on 16 October 1939. Although known as the "Forth Bridge Raid", the bridge was not the target and not damaged. In all, 12 German Junkers Ju 88 bombers led by two reconnaissance Heinkel He 111s from Westerland on the island of Sylt, away, reached the Scottish coast in four waves of three. The target of the attack was shipping from the Rosyth naval base in the Forth, about to the west of the bridge. The Germans were hoping to find , the largest capital ship in the Royal Navy. Luftwaffe rules of engagement restricted action to targets on water and not in the dockyard. Although was in Rosyth, the attack was concentrated on the cruisers and , the carrier and the destroyer Jervis. The destroyer Mohawk and the cruisers Southampton and Edinburgh were damaged. Sixteen Royal Navy crew died and 44 were wounded, although this information was not made public at the time. Spitfires from 603 "City of Edinburgh" Squadron RAF intercepted the raiders and during the attack shot down the first German aircraft downed over Britain in the war.
One bomber came down in the water off Port Seton on the East Lothian coast and another off Crail on the coast of Fife. After the War it was learned that a third bomber had come down in the Netherlands as a result of damage inflicted during the raid. Later in the month, a reconnaissance Heinkel 111 crashed near Humbie in East Lothian and photographs of this crashed plane were, and still are, used erroneously to illustrate the raid of 16 October, thus sowing confusion as to whether a third aircraft had been brought down. Members of the bomber crew at Port Seton were rescued and made prisoners of war. Two bodies were recovered from the Crail wreckage and, after a full military funeral with firing party, were interred in Portobello cemetery, Edinburgh. The body of the gunner was never found. A wartime propaganda film, Squadron 992, made by the GPO Film Unit after the raid, recreated it and conveyed the false impression that the main target was the bridge. Ownership Before the opening of the bridge, the North British Railway (NBR) had lines on both sides of the Firth of Forth between which trains could not pass except by running far to the west and using the lines of a rival company. The only alternative route between Edinburgh and Fife involved the ferry at Queensferry, which was purchased by the NBR in 1867. Accordingly, the NBR sponsored the Forth Bridge project, which would give it a direct link independent of the Caledonian Railway. A conference at York in 1881 set up the Forth Bridge Railway Committee, to which the NBR contributed 35% of the cost. The remaining money came from three English railways, which ran trains from London over NBR tracks. The Midland Railway, which connected with the NBR and owned the route to London St Pancras, contributed 30%, and 17.5% came equally from each of the North Eastern Railway and the Great Northern Railway, which between them owned the route to London King's Cross. This body undertook to construct and maintain the bridge. In 1882 the NBR was given powers to purchase the bridge, which it never exercised. At the time of the 1923 Grouping, the bridge was still jointly owned by the same four railways, and so it became jointly owned by these companies' successors, the London Midland and Scottish Railway (30%) and the London and North Eastern Railway (70%). The Forth Bridge Railway Company was named in the Transport Act 1947 as one of the bodies to be nationalised and so became part of British Railways on 1 January 1948. Under the Act, Forth Bridge shareholders would receive £109 of British Transport stock for each £100 of Forth Bridge Debenture stock; and £104 17s 6d of British Transport stock for each £100 of Forth Bridge Ordinary stock. As of April 2017, the bridge and its associated railway infrastructure were owned by Network Rail Infrastructure Limited. Operation Traffic The bridge has a speed limit of for high-speed trains and diesel multiple units, for ordinary passenger trains and for freight trains. The route availability code is RA8, but freight trains above a certain size must not pass each other on the bridge. Up to 190–200 trains per day crossed the bridge in 2006. Maintenance "Painting the Forth Bridge" is a colloquial expression for a never-ending task, coined from the erroneous belief that, at one time in the bridge's history, repainting began again immediately upon completion of the previous repaint.
Such a practice never existed, as weathered areas were given more attention, but there was a permanent maintenance crew. Between 2001 and 2011, the bridge was covered in a new coating designed to last for 25 years, bringing an end to having painters as a regular part of the maintenance crew. Colin Hardie, of Balfour Beatty Construction, was reported as saying. Restoration Floodlighting was installed in 1990, and the track was renewed between 1992 and 1995. The bridge was costing British Rail £1 million a year to maintain, and it announced that the schedule of painting would be interrupted to save money; the following year, upon privatisation, Railtrack took over. A £40 million package of works commenced in 1998, and in 2002 responsibility for the bridge was passed to Network Rail. Work started in 2002 to repaint the bridge fully for the first time in its history, in a £130 million contract awarded to Balfour Beatty. Up to of scaffolding was on the bridge at any time, and computer modelling was used to analyse the additional wind load on the structure. The bridge was encapsulated in a climate controlled membrane to give the proper conditions for the application of the paint. All previous layers of paint were removed using copper slag fired at up to , exposing the steel and allowing repairs to be made. The paint, developed specifically for the bridge by Leigh Paints, consisted of a system of three coats derived from that used in the North Sea oil industry; a total of was applied to of the structure, and it is not expected to need repainting for at least 20 years. The top coat can be reapplied indefinitely, minimising future maintenance work. In a report produced by J. E. Jacobs, Grant Thornton and Faber Maunsell in 2007, which reviewed the alternative options for a second road crossing, it was stated that "Network Rail has estimated the [remaining] life of the bridge to be in excess of 100 years. However, this is dependant upon NR's inspection and refurbishment works programme for the bridge being carried out year on year". In culture In the media The Forth Bridge has been featured in television programmes and films, including Carry On Regardless, Alfred Hitchcock's 1935 film The 39 Steps, and its 1959 remake. A.G. Barr used the bridge in posters advertising its soft drink Irn-Bru, with the slogan: "Made in Scotland, from girders". In 2005, the BBC lit the Bridge in red for Comic Relief. Also in 2005, the Channel 4 documentary Jump Britain showed Sébastien Foucan, a French freerunner, crawling along one of the highest points of the bridge without a harness. The first episode of the UK television series Britain's Greatest Bridges featured the Forth Bridge and was aired on Spike UK on 12 January 2017. In general culture The Forth Bridge has also featured in other cultural forms. In the build-up to the Millennium celebrations, a countdown clock sponsored by the Royal Bank of Scotland was attached to the top of the Bridge in 1998. Iain Banks wrote the novel The Bridge, which is mainly set on a fictionalised version of the bridge linking "The City" (Edinburgh) and "The Kingdom" (Fife). In Alan Turing's most famous paper about artificial intelligence, one of the challenges put to the subject of an imagined Turing test is "Please write me a sonnet on the subject of the Forth Bridge." The test subject in Turing's paper answers, "Count me out on this one. I never could write poetry."
The bridge is included in the video game Grand Theft Auto: San Andreas by Edinburgh-based developer Rockstar North. Renamed the Kincaid Bridge, it serves as the main railway bridge of the fictional city of San Fierro, and appears alongside a virtual Forth Road Bridge. In his 1917 book On Growth and Form, the mathematical biologist D'Arcy Thompson compares the structural form of the Forth Bridge with the cantilevered skeleton of an ox, the piers corresponding to legs, the cantilevers to the vertebral column. As heritage UNESCO inscribed the bridge as a World Heritage Site on 5 July 2015, recognising it as "an extraordinary and impressive milestone in bridge design and construction during the period when railways came to dominate long-distance land travel". It is the sixth World Heritage Site to be inscribed in Scotland. In 2016, a VisitScotland survey voted the Forth Bridge "Scotland's greatest man-made wonder", beating off competition from Stirling Castle, the Caledonian Canal, the Scott Monument, Bell Rock Lighthouse, and Melrose Abbey. The Forth Bridge appears on a 2004 one-pound coin issued by the Royal Mint. The Bridge has also featured on banknotes: the Bank of Scotland's 2007 series depicts different bridges in Scotland as examples of Scottish engineering, with the Forth Bridge appearing on the £20 note. In 2014 Clydesdale Bank announced the introduction of Britain's second polymer banknote, a £5 note featuring Sir William Arrol and the Forth Bridge (the first polymer banknote was issued by Northern Bank in 2000). It was introduced in 2015 to commemorate the 125th anniversary of the opening of the bridge, and its nomination to become a UNESCO World Heritage Site. "Operation Forth Bridge" was the title of the plans for the funeral of Prince Philip, Duke of Edinburgh in 2021. Visitor attraction Network Rail plans to add a visitor centre to the bridge, which would include either a viewing platform on top of the North Queensferry side or a bridge-climbing experience on the South Queensferry side. In December 2014 it was announced Arup had been awarded the design contract for the project. In September 2019, Network Rail submitted plans to build a visitor centre at the South Queensferry side that would serve as a base for the bridge-climbing experience, dubbed "The Forth Bridge Experience". The plans were approved in early 2020 but were put on hold due to the COVID-19 pandemic. Revised plans were submitted in February 2022.
Otitis externa
Otitis externa, also called swimmer's ear, is inflammation of the ear canal. It often presents with ear pain, swelling of the ear canal, and occasionally decreased hearing. Typically there is pain with movement of the outer ear. A high fever is typically not present except in severe cases. Otitis externa may be acute (lasting less than six weeks) or chronic (lasting more than three months). Acute cases are typically due to bacterial infection, and chronic cases are often due to allergies and autoimmune disorders. The most common cause of otitis externa is bacterial. Risk factors for acute cases include swimming, minor trauma from cleaning, using hearing aids and ear plugs, and other skin problems, such as psoriasis and dermatitis. People with diabetes are at risk of a severe form of malignant otitis externa. Diagnosis is based on the signs and symptoms. Culturing the ear canal may be useful in chronic or severe cases. Acetic acid ear drops may be used as a preventive measure. Treatment of acute cases is typically with antibiotic drops, such as ofloxacin or acetic acid. Steroid drops may be used in addition to antibiotics. Pain medications such as ibuprofen may be used for the pain. Antibiotics by mouth are not recommended unless the person has poor immune function or there is infection of the skin around the ear. Typically, improvement occurs within a day of the start of treatment. Treatment of chronic cases depends on the cause. Otitis externa affects 1–3% of people a year; more than 95% of cases are acute. About 10% of people are affected at some point in their lives. It occurs most commonly among children between the ages of seven and twelve and among the elderly. It occurs with near equal frequency in males and females. Those who live in warm and wet climates are more often affected. Signs and symptoms Tenderness of the pinna is the predominant complaint and the only symptom directly related to the severity of acute external otitis. Unlike other forms of ear infections, there is tenderness in the outer ear; i.e., the pain of acute external otitis is worsened when the outer ear is touched or pulled gently. Pushing the tragus, the tab-like portion of the auricle that projects out just in front of the ear canal opening, also typically causes pain in this condition, so much so as to be diagnostic of external otitis on physical examination. People may also experience ear discharge and itchiness. When enough swelling and discharge in the ear canal is present to block the opening, external otitis may cause temporary conductive hearing loss. Because the symptoms of external otitis lead many people to attempt to clean out the ear canal (or scratch it) with slim implements, self-cleaning attempts generally lead to additional trauma of the injured skin, so rapid worsening of the condition often occurs. Causes The two factors that are required for external otitis to develop are (1) the presence of germs that can infect the skin and (2) impairments in the integrity of the skin of the ear canal that allow an infection to occur. If the skin is healthy and uninjured, only exposure to a high concentration of pathogens, such as submersion in a pond contaminated by sewage, is likely to set off an episode.
However, if there are chronic skin conditions that affect the ear canal skin, such as atopic dermatitis, seborrheic dermatitis, psoriasis or abnormalities of keratin production, or if there has been a break in the skin from trauma, even the normal bacteria found in the ear canal may cause infection and full-blown symptoms of external otitis. Fungal ear canal infections, also known as otomycosis, range from inconsequential to extremely severe. Fungi can be saprophytic, causing no symptoms, with the fungus simply co-existing in the ear canal in a commensal relationship with the host; in that case, the only physical finding is the presence of the fungus. If the fungus begins active reproduction, the ear canal can fill with dense fungal debris, causing pressure and ever-increasing pain that is unrelenting until the fungus is removed from the canal and anti-fungal medication is used. Most antibacterial ear drops also contain a steroid to hasten resolution of canal edema and pain. Unfortunately, such drops make a fungal infection worse. Their prolonged use promotes the growth of fungus in the ear canal. Antibacterial ear drops should be used for a maximum of one week, but five days is usually enough. Otomycosis responds more than 95% of the time to a three-day course of the same over-the-counter anti-fungal solutions used for athlete's foot. Swimming Swimming in polluted water is a common way to contract swimmer's ear, but it is also possible to contract swimmer's ear from water trapped in the ear canal after a shower, especially in a humid climate. Prolonged swimming can saturate the skin of the canal, compromising its barrier function and making it more susceptible to further damage if the ear is instrumented with cotton swabs after swimming. The main symptoms of swimmer's ear are a feeling of fullness in the ear; itchiness; redness and swelling in or around the ear canal; muffled hearing; pain in the external ear and ear canal; and, especially, a smelly discharge from the ear. Constriction of the ear canal from bone growth (surfer's ear) can trap debris, leading to infection. Saturation divers have reported otitis externa during occupational exposure. Objects in ear Even without exposure to water, the use of objects such as cotton swabs or other small objects to clear the ear canal is enough to cause breaks in the skin, and allow the condition to develop. Once the skin of the ear canal is inflamed, external otitis can be drastically enhanced by either scratching the ear canal with an object or by allowing water to remain in the ear canal for any prolonged length of time. Infections The majority of cases are due to Pseudomonas aeruginosa and Staphylococcus aureus, followed by a great number of other gram-positive and gram-negative species. Candida albicans and Aspergillus species are the most common fungal pathogens responsible for the condition. Diagnosis When the ear is inspected, the canal appears red and swollen in well-developed cases. The ear canal may also appear eczema-like, with scaly shedding of skin. Touching or moving the outer ear increases the pain, and this maneuver on physical exam is important in establishing the clinical diagnosis. It may be difficult to see the eardrum with an otoscope at the initial examination because of narrowing of the ear canal from inflammation and the presence of drainage and debris. Sometimes the diagnosis of external otitis is presumptive and return visits are required to fully examine the ear.
The culture of the drainage may identify the bacteria or fungus causing infection, but is not part of the routine diagnostic evaluation. In severe cases of external otitis, there may be swelling of the lymph node(s) directly beneath the ear. The diagnosis may be missed in most early cases because the examination of the ear, with the exception of pain with manipulation, is nearly normal. In some early cases, the most striking visual finding is the lack of earwax. As a moderate or severe case of external otitis resolves, weeks may be required before the ear canal again shows a normal amount of earwax. Classification In contrast to chronic otitis externa, acute otitis externa (AOE) is predominantly a bacterial infection, occurs suddenly, rapidly worsens, and becomes painful. The ear canal has an abundant nerve supply, so the pain is often severe enough to interfere with sleep. Wax in the ear can combine with the swelling of the canal skin and the associated pus to block the canal and dampen hearing, creating a temporary conductive hearing loss. In more severe or untreated cases, the infection can spread to the soft tissues of the face that surround the adjacent parotid gland and the jaw joint, making chewing painful. In its mildest forms, otitis externa is so common that some ear, nose and throat physicians have suggested that most people will have at least a brief episode at some point in life. The skin of the bony ear canal is unique, in that it is not movable but is closely attached to the bone, and it is almost paper-thin. For these reasons, it is easily abraded or torn by even minimal physical force. Inflammation of the ear canal skin typically begins with a physical insult, most often from injury caused by attempts at self-cleaning or scratching with cotton swabs, pen caps, fingernails, hair pins, keys, or other small implements. Another causative factor for acute infection is prolonged water exposure in the forms of swimming or exposure to extreme humidity, which can compromise the protective barrier function of the canal skin, allowing bacteria to flourish, hence the name "swimmer's ear". Prevention The strategies for preventing acute external otitis are similar to those for treatment. Avoid inserting anything into the ear canal: use of cotton buds or swabs is the most common event leading to acute otitis externa. Most normal ear canals have a self-cleaning and self-drying mechanism, the latter by simple evaporation. After prolonged swimming, a person prone to external otitis can dry the ears using a small battery-powered ear dryer, available at many retailers, especially shops catering to watersports enthusiasts. Alternatively, drops containing dilute acetic acid (vinegar diluted 3:1) or Burow's solution may be used. It is especially important not to instrument ears when the skin is saturated with water, as it is very susceptible to injury, which can lead to external otitis. Avoid swimming in polluted water. Avoid washing hair or swimming if very mild symptoms of acute external otitis begin. Although the use of earplugs when swimming and shampooing hair may help prevent external otitis, there are important details in the use of plugs. Hard and poorly fitting earplugs can scratch the ear canal skin and set off an episode. When earplugs are used during an acute episode, either disposable plugs are recommended, or used plugs must be cleaned and dried properly to avoid contaminating the healing ear canal with infected discharge.
According to one source, the use of in-ear headphones during otherwise "dry" exercise in the summer has been associated with the development of swimmer's ear, since the plugs can create a warm and moist environment inside the ears. The source claims that on-ear or over-ear headphones can be a better alternative for preventing swimmer's ear. Treatment Medications Effective solutions for the ear canal include acidifying and drying agents, used either singly or in combination. When the ear canal skin is inflamed from acute otitis externa, the use of dilute acetic acid may be painful. Burow's solution is a very effective remedy against both bacterial and fungal external otitis. This is a buffered mixture of aluminium sulfate and acetic acid, and is available without prescription in the United States. Ear drops are the mainstay of treatment for external otitis. Some contain antibiotics, either antibacterial or antifungal, and others are simply designed to mildly acidify the ear canal environment to discourage bacterial growth. Some prescription drops also contain anti-inflammatory steroids, which help to resolve swelling and itching. Although there is evidence that steroids are effective at reducing the length of treatment time required, fungal otitis externa (also called otomycosis) may be caused or aggravated by overly prolonged use of steroid-containing drops. Antibiotics by mouth should not be used to treat uncomplicated acute otitis externa. Antibiotics by mouth are not a sufficient response to bacteria which cause this condition and have significant side effects including increased risk of opportunistic infection. In contrast, topical products can treat this condition. Oral anti-pseudomonal antibiotics can be used in cases of severe soft tissue swelling extending into the face and neck and may hasten recovery. Although acute external otitis generally resolves in a few days with topical washes and antibiotics, complete return of hearing and cerumen gland function may take a few more days. Once healed completely, the ear canal is again self-cleaning. Until it recovers fully, it may be more prone to repeat infection from further physical or chemical insult. Effective medications include ear drops containing antibiotics to fight infection, and corticosteroids (hydrocortisone + neomycin + polymyxin B) to reduce itching and inflammation. In painful cases, a topical solution of antibiotics such as an aminoglycoside, polymyxin or a fluoroquinolone is usually prescribed. Antifungal solutions are used in the case of fungal infections. External otitis is almost always predominantly bacterial or predominantly fungal, so that only one type of medication is necessary and indicated. Cleaning Removal of debris (wax, shed skin, and pus) from the ear canal promotes direct contact of the prescribed medication with the infected skin and shortens recovery time. When canal swelling has progressed to the point where the ear canal is blocked, ear drops may not penetrate far enough into the ear canal to be effective. The physician may need to carefully insert a wick of cotton or other commercially available, pre-fashioned, absorbent material called an ear wick, and then saturate that with the medication. The wick is kept saturated with medication until the canal opens enough that the drops will penetrate the canal without it. Removal of the wick does not require a health professional.
Antibiotic ear drops should be dosed in a quantity that allows coating of most of the ear canal and used for no more than 4 to 7 days. The ear should be left open. It is imperative to confirm visually that the tympanic membrane (eardrum) is intact. Use of certain medications with a ruptured tympanic membrane can cause tinnitus, vertigo, dizziness and hearing loss in some cases. Prognosis Otitis externa responds well to treatment, but complications may occur if it is not treated. Individuals with underlying diabetes, disorders of the immune system, or history of radiation therapy to the base of the skull are more likely to develop complications, including malignant otitis externa. In these individuals, rapid examination by an otolaryngologist (ear, nose, and throat physician) is very important. Complications include: chronic otitis externa; spread of infection to other areas of the body; necrotizing external otitis; and otitis externa haemorrhagica. Necrotizing external otitis Necrotizing external otitis (malignant otitis externa) is an uncommon form of external otitis that occurs mainly in elderly diabetics, being somewhat more likely and more severe when the diabetes is poorly controlled. Even less commonly, it can develop due to a severely compromised immune system. Beginning as infection of the external ear canal, there is an extension of the infection into the bony ear canal and the soft tissues deep to the bony canal. Unrecognized and untreated, it may result in death. The hallmark of malignant otitis externa (MOE) is unrelenting pain that interferes with sleep and persists even after swelling of the external ear canal may have resolved with topical antibiotic treatment. It can also cause skull base osteomyelitis (SBO), manifested by multiple cranial nerve palsies, described below under the "Treatment" heading. Natural history MOE follows a much more chronic and indolent course than ordinary acute otitis externa. There may be granulation involving the floor of the external ear canal, most often at the bony-cartilaginous junction. Paradoxically, the physical findings of MOE, at least in its early stages, are often much less dramatic than those of ordinary acute otitis externa. In later stages, there can be soft tissue swelling around the ear, even in the absence of significant canal swelling. While fever and leukocytosis might be expected in response to bacterial infection invading the skull region, MOE does not cause fever or elevation of white blood count. Treatment of MOE Unlike ordinary otitis externa, MOE requires oral or intravenous antibiotics for cure. Pseudomonas is the most common offending pathogen. Diabetes control is also an essential part of treatment. When MOE goes unrecognized and untreated, the infection continues to smolder and over weeks or months can spread deeper into the head and involve the bones of the skull base, constituting skull base osteomyelitis (SBO). Multiple cranial nerve palsies can result, including the facial nerve (causing facial palsy), the recurrent laryngeal nerve (causing vocal cord paralysis), and the cochlear nerve (causing deafness). The infecting organism is almost always Pseudomonas aeruginosa, but it can instead be fungal (Aspergillus or Mucor). MOE and SBO are not amenable to surgery, but exploratory surgery may facilitate the culture of unusual organism(s) that are not responding to empirically used anti-pseudomonal antibiotics (ciprofloxacin being the drug of choice). The usual surgical finding is diffuse cellulitis without localized abscess formation.
SBO can extend into the petrous apex of the temporal bone or more inferiorly into the opposite side of the skull base. The use of hyperbaric oxygen therapy as an adjunct to antibiotic therapy remains controversial. Complications As the skull base is progressively involved, the adjacent exiting cranial nerves and their branches, especially the facial nerve and the vagus nerve, may be affected, resulting in facial paralysis and hoarseness, respectively. If both of the recurrent laryngeal nerves are paralyzed, shortness of breath may develop and necessitate tracheotomy. Profound deafness can occur, usually later in the disease course due to relative resistance of the inner ear structures. Gallium scans are sometimes used to document the extent of the infection but are not essential to disease management. Skull base osteomyelitis is a chronic disease that can require months of IV antibiotic treatment, tends to recur, and has a significant mortality rate. Epidemiology The incidence of otitis externa is high. In the Netherlands, it has been estimated at 12–14 per 1000 population per year, and it has been shown to affect more than 1% of a sample of the population in the United Kingdom over a 12-month period. History During the Tektite Project in 1969, there was a great deal of otitis externa. The Diving Medical Officer devised a prophylaxis that came to be known as "Tektite Solution": equal parts of 15% tannic acid, 15% acetic acid and 50% isopropyl alcohol or ethanol. During Tektite, ethanol was used because it was available in the lab for pickling specimens. Other animals
North magnetic pole
The north magnetic pole, also known as the magnetic north pole, is a point on the surface of Earth's Northern Hemisphere at which the planet's magnetic field points vertically downward (in other words, if a magnetic compass needle is allowed to rotate in three dimensions, it will point straight down). There is only one location where this occurs, near (but distinct from) the geographic north pole. The geomagnetic north pole is the northern antipodal pole of an ideal dipole model of the Earth's magnetic field, which is the most closely fitting model of Earth's actual magnetic field. The north magnetic pole moves over time according to magnetic changes and flux lobe elongation in the Earth's outer core. In 2001, it was determined by the Geological Survey of Canada to lie west of Ellesmere Island in northern Canada at . It was situated at in 2005. In 2009, while still situated within the Canadian Arctic at , it was moving toward Russia at between per year. In 2013, the distance between the north magnetic pole and the geographic north pole was approximately . As of 2021, the pole is projected to have moved beyond the Canadian Arctic to . Its southern hemisphere counterpart is the south magnetic pole. Since Earth's magnetic field is not exactly symmetric, the north and south magnetic poles are not antipodal, meaning that a straight line drawn from one to the other does not pass through the geometric center of Earth. Earth's north and south magnetic poles are also known as magnetic dip poles, with reference to the vertical "dip" of the magnetic field lines at those points. Polarity All magnets have two poles, where lines of magnetic flux enter one pole and emerge from the other pole. By analogy with Earth's magnetic field, these are called the magnet's "north" and "south" poles. Before magnetism was well understood, the north-seeking pole of a magnet was defined to have the north designation, according to its use in early compasses. However, opposite poles attract, which means that as a physical magnet, the magnetic north pole of the Earth is actually a magnetic south pole. In other words, if we establish that true geographic north is north, then what we call the Earth's north magnetic pole is actually its south magnetic pole since it attracts the north magnetic pole of other magnets, such as compass needles. The direction of magnetic field lines is defined such that the lines emerge from the magnet's north pole and enter into the magnet's south pole. History Early European navigators, cartographers and scientists believed that compass needles were attracted to a hypothetical "magnetic island" somewhere in the far north (see Rupes Nigra), or to Polaris, the pole star. The idea that Earth itself acts as essentially a giant magnet was first proposed in 1600, by the English physician and natural philosopher William Gilbert. He was also the first to define the north magnetic pole as the point where Earth's magnetic field points vertically downwards. This is the current definition, though it would be a few hundred years before the nature of Earth's magnetic field was understood with modern accuracy and precision. Expeditions and measurements First observations The first group to reach the north magnetic pole was led by James Clark Ross, who found it at Cape Adelaide on the Boothia Peninsula on 1 June 1831, while serving on the second arctic expedition of his uncle, Sir John Ross. Roald Amundsen found the north magnetic pole in a slightly different location in 1903.
The third observation was by Canadian government scientists Paul Serson and Jack Clark, of the Dominion Astrophysical Observatory, who found the pole at Allen Lake on Prince of Wales Island in 1947. Project Polaris At the start of the Cold War, the United States Department of War recognized a need for a comprehensive survey of the North American Arctic and asked the United States Army to undertake the task. An assignment was made in 1946 for the Army Air Forces' recently formed Strategic Air Command to explore the entire Arctic Ocean area. The exploration was conducted by the 46th (later re-designated the 72nd) Photo Reconnaissance Squadron and reported on as a classified Top Secret mission named Project Nanook. This project in turn was divided into many separate, but identically classified, projects, one of which was Project Polaris, which was a radar, photographic (trimetrogon, or three-angle, cameras) and visual study of the entire Canadian Archipelago. A Canadian officer observer was assigned to accompany each flight. Frank O. Klein, the director of the project, noticed that the fluxgate compass did not behave as erratically as expected—it oscillated no more than 1 to 2 degrees over much of the region—and began to study northern terrestrial magnetism. With the cooperation of many of his squadron teammates in obtaining many hundreds of statistical readings, startling results were revealed: The center of the north magnetic dip pole was on Prince of Wales Island some NNW of the positions determined by Amundsen and Ross, and the dip pole was not a point but occupied an elliptical region with foci about apart on Boothia Peninsula and Bathurst Island. Klein called the two foci local poles, for their importance to navigation in emergencies when using a "homing" procedure. About three months after Klein's findings were officially reported, a Canadian ground expedition was sent into the Archipelago to locate the position of the magnetic pole. R. Glenn Madill, Chief of Terrestrial Magnetism, Department of Mines and Resources, Canada, wrote to Lt. Klein on 21 July 1948: (The positions were less than apart.) Modern (post-1996) The Canadian government has made several measurements since, which show that the north magnetic pole is moving continually northwestward. In 2001, an expedition located the pole at . In 2007, the latest survey found the pole at . During the 20th century it moved , and since 1970 its rate of motion has accelerated from per year (2001–2007 average; see also polar drift). Members of the 2007 expedition to locate the magnetic north pole wrote that such expeditions have become logistically difficult, as the pole moves farther away from inhabited locations. They expect that in the future, the magnetic pole position will be obtained from satellite data instead of ground surveys. This general movement is in addition to a daily or diurnal variation in which the north magnetic pole describes a rough ellipse, with a maximum deviation of from its mean position. This effect is due to disturbances of the geomagnetic field by charged particles from the Sun. As of early 2019, the magnetic north pole is moving from Canada towards Siberia at a rate of approximately per year. NOAA gives the 2024 location of the magnetic north pole as 86 degrees North, 142 degrees East. By 2025, it will have drifted to 138 degrees East (same latitude). Exploration The first team of novices to reach the magnetic north pole did so in 1996, led by David Hempleman-Adams. 
It included Sue Stockdale, the first British woman to reach the pole, and the first Swedish woman to do so. The team also successfully tracked the location of the Magnetic North Pole on behalf of the University of Ottawa, and certified its location by magnetometer and theodolite at . The Polar Race was a biennial competition that ran from 2003 until 2011. It took place between the community of Resolute, on the shores of Resolute Bay, Nunavut, in northern Canada and the 1996 location of the north magnetic pole at , also in northern Canada. On 25 July 2007, the Top Gear: Polar Special was broadcast on BBC Two in the United Kingdom, in which Jeremy Clarkson, James May, and their support and camera team claimed to be the first people in history to reach the 1996 location of the north magnetic pole in northern Canada by car. Note that they did not reach the actual north magnetic pole, which at the time (2007) had moved several hundred kilometers further north from the 1996 position. Magnetic north and magnetic declination Historically, the magnetic compass was an important tool for navigation. While it has been widely replaced by the Global Positioning System, many airplanes and ships still carry compasses, as do casual boaters and hikers. The direction in which a compass needle points is known as magnetic north. In general, this is not exactly the direction of the north magnetic pole (or of any other consistent location). Instead, the compass aligns itself to the local geomagnetic field, which varies in a complex manner over Earth's surface, as well as over time. The local angular difference between magnetic north and true north is called the magnetic declination. Most map coordinate systems are based on true north, and magnetic declination is often shown on map legends so that the direction of true north can be determined from north as indicated by a compass. In North America the line of zero declination (the agonic line) runs from the north magnetic pole down through Lake Superior and southward into the Gulf of Mexico. Along this line, true north is the same as magnetic north. West of the agonic line a compass will give a reading that is east of true north and by convention the magnetic declination is positive. Conversely, east of the agonic line a compass will point west of true north and the declination is negative. North geomagnetic pole As a first-order approximation, Earth's magnetic field can be modeled as a simple dipole (like a bar magnet), tilted about 10° with respect to Earth's rotation axis (which defines the geographic north and geographic south poles) and centered at Earth's center. The north and south geomagnetic poles are the antipodal points where the axis of this theoretical dipole intersects Earth's surface. If Earth's magnetic field were a perfect dipole then the field lines would be vertical at the geomagnetic poles, and they would coincide with the magnetic poles. However, the approximation is imperfect, and so the magnetic and geomagnetic poles lie some distance apart. Like the north magnetic pole, the north geomagnetic pole attracts the north pole of a bar magnet and so is in a physical sense actually a magnetic south pole. It is the center of the region of the magnetosphere in which the Aurora Borealis can be seen. As of 2015 it was located at approximately , over Ellesmere Island, Canada, but it is now drifting away from North America and toward Siberia.
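The quantities in the last two sections can be computed directly from the local field components: declination is the angle of the horizontal field east of true north, and inclination (dip) is its angle below the horizontal, reaching +90° at the north magnetic dip pole, where the horizontal component vanishes. Below is a minimal sketch using the conventional X (north), Y (east), Z (down) frame; the example components are illustrative, not survey values.

```python
# A minimal sketch of declination and inclination computed from local field
# components in the conventional geodetic frame: X north, Y east, Z down
# (all in nT). The example values are illustrative, not survey data.
import math

def declination_inclination(x_north: float, y_east: float, z_down: float):
    """Return (declination, inclination) in degrees."""
    h = math.hypot(x_north, y_east)                  # horizontal intensity
    dec = math.degrees(math.atan2(y_east, x_north))  # +ve: compass points east of true north
    inc = math.degrees(math.atan2(z_down, h))        # reaches +90 deg at the north dip pole
    return dec, inc

def true_bearing(compass_bearing_deg: float, declination_deg: float) -> float:
    """Correct a compass bearing to a true bearing: true = magnetic + declination."""
    return (compass_bearing_deg + declination_deg) % 360.0

d, i = declination_inclination(17000.0, -3000.0, 52000.0)
print(f"declination {d:.1f} deg, inclination {i:.1f} deg")
print(f"compass 90.0 deg -> true {true_bearing(90.0, d):.1f} deg")
```

With this east-positive sign convention, the correction matches the agonic-line behaviour described above: west of the line the declination is positive and is added to a compass bearing, while at the dip pole itself the horizontal components vanish and the declination becomes undefined.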
Geomagnetic reversal Over the life of Earth, the orientation of Earth's magnetic field has reversed many times, with magnetic north becoming magnetic south and vice versa – an event known as a geomagnetic reversal. Evidence of geomagnetic reversals can be seen at mid-ocean ridges where tectonic plates move apart and the seabed is filled in with magma. As the magma seeps out of the mantle, cools, and solidifies into igneous rock, it is imprinted with a record of the direction of the magnetic field at the time that the magma cooled.
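The seabed therefore acts as a slow tape recorder: rock now a distance d from the ridge axis cooled roughly d/v years ago, where v is the half-spreading rate, so its locked-in polarity is that of the field at that time. The following is a minimal sketch using an assumed spreading rate and only the major reversal boundaries of the last few million years (real chronologies include shorter subchrons).

```python
# A minimal sketch of how seafloor stripes record reversals: rock now a
# distance d from the ridge axis cooled roughly d / v years ago, where v is
# the half-spreading rate, so its locked-in polarity is the field's polarity
# at that time. The rate is an assumption, and the chronology keeps only the
# major boundaries (real records include shorter subchrons).

HALF_SPREADING_RATE_KM_PER_MYR = 25.0  # assumed: 25 mm/yr = 25 km/Myr

# Approximate major reversal boundaries (ages in Ma), youngest first:
# Brunhes-Matuyama, Matuyama-Gauss, Gauss-Gilbert.
REVERSAL_AGES_MA = [0.78, 2.58, 3.58]

def polarity_at_distance(distance_km: float) -> str:
    """'normal' (present-day orientation) or 'reversed' for a stripe."""
    age_ma = distance_km / HALF_SPREADING_RATE_KM_PER_MYR
    flips = sum(1 for boundary in REVERSAL_AGES_MA if age_ma > boundary)
    return "normal" if flips % 2 == 0 else "reversed"

for d_km in (10, 30, 70, 100):
    print(f"{d_km:>3} km from the ridge axis: {polarity_at_distance(d_km)}")
```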
Physical sciences
Geophysics
Earth science
30871821
https://en.wikipedia.org/wiki/South%20magnetic%20pole
South magnetic pole
The south magnetic pole, also known as the magnetic south pole, is the point in Earth's Southern Hemisphere where the geomagnetic field lines are directed perpendicular to the nominal surface. The Geomagnetic South Pole, a related point, is the south pole of an ideal dipole model of the Earth's magnetic field that most closely fits the Earth's actual magnetic field. For historical reasons, the "end" of a freely hanging magnet that points (roughly) north is itself called the "north pole" of the magnet, and the other end, pointing south, is called the magnet's "south pole". Because opposite poles attract, Earth's south magnetic pole is physically actually a magnetic north pole. The south magnetic pole is constantly shifting due to changes in Earth's magnetic field. As of 2005 it was calculated to lie at , placing it off the coast of Antarctica, between Adélie Land and Wilkes Land. In 2015 it lay at (est). That point lies outside the Antarctic Circle. Due to polar drift, the pole is moving northwest by about per year. Its current distance from the actual Geographic South Pole is approximately . The nearest permanent science station is Dumont d'Urville Station. While the north magnetic pole began wandering very quickly in the mid-1990s, the movement of the south magnetic pole did not show a matching change of speed. Expeditions Early unsuccessful attempts to reach the magnetic south pole included those of French explorer Jules Dumont d'Urville (1837–1840), American Charles Wilkes (expedition of 1838–1842) and Briton James Clark Ross (expedition of 1839–1843). The first calculation of the magnetic inclination to locate the magnetic South Pole was made on 23 January 1838 by the hydrographer of the Dumont d'Urville expedition to Antarctica and Oceania (1837–1840), which discovered Adélie Land. On 16 January 1909 three men (Douglas Mawson, Edgeworth David, and Alistair Mackay) from Sir Ernest Shackleton's Nimrod Expedition claimed to have found the south magnetic pole, which was at that time located on land. They planted a flagpole at the spot and claimed it for the British Empire. However, there is now some doubt as to whether their location was correct. The approximate position of the pole on 16 January 1909 was . Fits to global data sets The south magnetic pole has also been estimated by fits to global sets of data such as the World Magnetic Model (WMM) and the International Geomagnetic Reference Field (IGRF). For earlier years back to about 1600, the model GUFM1 is used, based on a compilation of data from ship logs. South geomagnetic pole Earth's geomagnetic field can be approximated by a tilted dipole (like a bar magnet) placed at the center of Earth. The south geomagnetic pole is the point where the axis of this best-fitting tilted dipole intersects Earth's surface in the southern hemisphere. As of 2005 it was calculated to be located at , near Vostok Station. Because the field is not an exact dipole, the south geomagnetic pole does not coincide with the south magnetic pole. Furthermore, the south geomagnetic pole is wandering for the same reason its northern geomagnetic counterpart wanders.
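Because the pole is defined by where the field lines are vertical, it can help to see how the magnetic inclination (dip) behaves in the ideal dipole picture described above. The sketch below evaluates the textbook dipole relation tan I = 2 tan λ, where λ is geomagnetic latitude; in a perfect dipole the dip reaches −90° only at the geomagnetic pole itself, so the separation between the real south magnetic (dip) pole and the south geomagnetic pole reflects the non-dipole part of the field. This is a generic relation, not a fit to any particular field model, and the sample latitudes are arbitrary.

```python
import math

def dipole_dip_deg(geomagnetic_lat_deg):
    """Inclination (dip) of an ideal centered dipole field, in degrees.

    Textbook relation tan(I) = 2 * tan(lambda), with lambda the geomagnetic
    latitude; atan2 keeps the sign correct and handles the poles.
    """
    lam = math.radians(geomagnetic_lat_deg)
    return math.degrees(math.atan2(2.0 * math.sin(lam), math.cos(lam)))

# Arbitrary sample latitudes (southern hemisphere), for illustration only.
for lat in (0.0, -30.0, -60.0, -80.0, -90.0):
    print(f"geomagnetic latitude {lat:6.1f} deg -> dip {dipole_dip_deg(lat):6.1f} deg")
```

In the real field, higher-degree terms shift the dip pole well away from the geomagnetic pole, which is why the two locations given above differ.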
Physical sciences
Geophysics
Earth science
30873277
https://en.wikipedia.org/wiki/Atmosphere%20of%20Jupiter
Atmosphere of Jupiter
The atmosphere of Jupiter is the largest planetary atmosphere in the Solar System. It is mostly made of molecular hydrogen and helium in roughly solar proportions; other chemical compounds are present only in small amounts and include methane, ammonia, hydrogen sulfide, and water. Although water is thought to reside deep in the atmosphere, its directly-measured concentration is very low. The nitrogen, sulfur, and noble gas abundances in Jupiter's atmosphere exceed solar values by a factor of about three. The atmosphere of Jupiter lacks a clear lower boundary and gradually transitions into the liquid interior of the planet. From lowest to highest, the atmospheric layers are the troposphere, stratosphere, thermosphere and exosphere. Each layer has characteristic temperature gradients. The lowest layer, the troposphere, has a complicated system of clouds and hazes composed of layers of ammonia, ammonium hydrosulfide, and water. The upper ammonia clouds visible at Jupiter's surface are organized in a dozen zonal bands parallel to the equator and are bounded by powerful zonal atmospheric flows (winds) known as jets, exhibiting a phenomenon known as atmospheric super-rotation. The bands alternate in color: the dark bands are called belts, while light ones are called zones. Zones, which are colder than belts, correspond to upwellings, while belts mark descending gas. The zones' lighter color is believed to result from ammonia ice; what gives the belts their darker colors is uncertain. The origins of the banded structure and jets are not well understood, though a "shallow model" and a "deep model" exist. The Jovian atmosphere shows a wide range of active phenomena, including band instabilities, vortices (cyclones and anticyclones), storms and lightning. The vortices reveal themselves as large red, white or brown spots (ovals). The largest two spots are the Great Red Spot (GRS) and Oval BA, which is also red. These two and most of the other large spots are anticyclonic. Smaller anticyclones tend to be white. Vortices are thought to be relatively shallow structures with depths not exceeding several hundred kilometers. Located in the southern hemisphere, the GRS is the largest known vortex in the Solar System. It could engulf two or three Earths and has existed for at least three hundred years. Oval BA, south of GRS, is a red spot a third the size of GRS that formed in 2000 from the merging of three white ovals. Jupiter has powerful storms, often accompanied by lightning strikes. The storms are a result of moist convection in the atmosphere connected to the evaporation and condensation of water. They are sites of strong upward motion of the air, which leads to the formation of bright and dense clouds. The storms form mainly in belt regions. The lightning strikes on Jupiter are hundreds of times more powerful than those seen on Earth, and are assumed to be associated with the water clouds. Recent Juno observations suggest Jovian lightning strikes occur above the altitude of water clouds (3-7 bars). A charge separation between falling liquid ammonia-water droplets and water ice particles may generate higher-altitude lightning. Upper-atmospheric lightning has also been observed 260 km above the 1 bar level. Vertical structure The atmosphere of Jupiter is classified into four layers, by increasing altitude: the troposphere, stratosphere, thermosphere and exosphere. Unlike the Earth's atmosphere, Jupiter's lacks a mesosphere. 
Jupiter does not have a solid surface, and the lowest atmospheric layer, the troposphere, smoothly transitions into the planet's fluid interior. This is because the temperatures and pressures there are well above the critical points of hydrogen and helium, meaning that there is no sharp boundary between gas and liquid phases. Hydrogen is considered a supercritical fluid when the temperature is above 33 K and the pressure is above 13 bar. Since the lower boundary of the atmosphere is ill-defined, the pressure level of 10 bars, at an altitude of about 90 km below 1 bar with a temperature of around 340 K, is commonly treated as the base of the troposphere. In scientific literature, the 1 bar pressure level is usually chosen as a zero point for altitudes—a "surface" of Jupiter. As is generally the case, the top atmospheric layer, the exosphere, does not have a specific upper boundary. The density gradually decreases until it smoothly transitions into the interplanetary medium approximately 5,000 km above the "surface". The vertical temperature gradients in the Jovian atmosphere are similar to those of the atmosphere of Earth. The temperature of the troposphere decreases with height until it reaches a minimum at the tropopause, which is the boundary between the troposphere and stratosphere. On Jupiter, the tropopause is approximately 50 km above the visible clouds (or 1 bar level). The pressure and temperature at the tropopause are about 0.1 bar and 110 K. (This gives a drop of 340 − 110 = 230 K over 90 + 50 = 140 km. The dry adiabatic lapse rate on Earth is around 9.8 K per km. The adiabatic lapse rate is proportional to the mean molecular weight and to the gravitational acceleration. The latter is about 2.5 times larger than on Earth, but the mean molecular weight is about 15 times smaller.) In the stratosphere, the temperatures rise to about 200 K at the transition into the thermosphere, at an altitude and pressure of around 320 km and 1 μbar. In the thermosphere, temperatures continue to rise, eventually reaching 1000 K at about 1000 km, where pressure is about 1 nbar. Jupiter's troposphere contains a complicated cloud structure. The upper clouds, located in the pressure range 0.6–0.9 bar, are made of ammonia ice. Below these ammonia ice clouds, denser clouds made of ammonium hydrosulfide (NH4SH) or ammonium sulfide ((NH4)2S), between 1–2 bar, and of water, at 3–7 bar, are thought to exist. There are no methane clouds, as the temperature is too high for methane to condense. The water clouds form the densest layer of clouds and have the strongest influence on the dynamics of the atmosphere. This is a result of the higher condensation heat of water and higher water abundance as compared to the ammonia and hydrogen sulfide (oxygen is a more abundant chemical element than either nitrogen or sulfur). Various tropospheric (at 200–500 mbar) and stratospheric (at 10–100 mbar) haze layers reside above the main cloud layers. The stratospheric haze layers are made from condensed heavy polycyclic aromatic hydrocarbons or hydrazine, which are generated in the upper stratosphere (1–100 μbar) from methane under the influence of solar ultraviolet radiation (UV). The methane abundance relative to molecular hydrogen in the stratosphere is about 10⁻⁴, while the abundance ratio of other light hydrocarbons, like ethane and acetylene, to molecular hydrogen is about 10⁻⁶. Jupiter's thermosphere is located at pressures lower than 1 μbar and demonstrates such phenomena as airglow, polar aurorae and X-ray emissions.
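The parenthetical estimate earlier in this section follows from the usual dry adiabatic lapse rate, Γ = g/c_p, with c_p the specific heat per unit mass of the hydrogen–helium mixture. The sketch below reproduces the order of magnitude; the gravity, composition, and heat-capacity values are round-figure assumptions chosen for illustration rather than measured quantities.

```python
# Rough estimate of Jupiter's dry adiabatic lapse rate, Gamma = g / c_p.
# All numbers are round-figure assumptions used only for illustration.
R = 8.314            # J/(mol K), gas constant

g_jupiter = 24.8     # m/s^2, roughly 2.5 times Earth's 9.81 m/s^2
x_h2, x_he = 0.86, 0.14                      # assumed mole fractions
mu = x_h2 * 2.016e-3 + x_he * 4.003e-3       # mean molar mass, kg/mol
cp_molar = x_h2 * 3.5 * R + x_he * 2.5 * R   # diatomic H2, monatomic He
cp_mass = cp_molar / mu                      # J/(kg K)

lapse = g_jupiter / cp_mass                  # K per meter
print(f"mean molecular weight ~ {mu * 1e3:.2f} g/mol")
print(f"dry adiabatic lapse rate ~ {lapse * 1e3:.1f} K/km")
print(f"mean gradient implied above ~ {230 / 140:.1f} K/km")
```

The result, roughly 2 K per km, is of the same order as the ~1.6 K per km implied by the quoted temperature drop; the real profile is not exactly dry adiabatic, so closer agreement is not expected.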
Within the thermosphere lie layers of increased electron and ion density that form the ionosphere. The high temperatures prevalent in the thermosphere (800–1000 K) have not been explained yet; existing models predict a temperature no higher than about 400 K. They may be caused by absorption of high-energy solar radiation (UV or X-ray), by heating from the charged particles precipitating from the Jovian magnetosphere, or by dissipation of upward-propagating gravity waves. The thermosphere and exosphere at the poles and at low latitudes emit X-rays, which were first observed by the Einstein Observatory in 1983. The energetic particles coming from Jupiter's magnetosphere create bright auroral ovals, which encircle the poles. Unlike their terrestrial analogs, which appear only during magnetic storms, aurorae are permanent features of Jupiter's atmosphere. The thermosphere was the first place outside the Earth where the trihydrogen cation (H3+) was discovered. This ion emits strongly in the mid-infrared part of the spectrum, at wavelengths between 3 and 5 μm; this is the main cooling mechanism of the thermosphere. Chemical composition The composition of Jupiter's atmosphere is similar to that of the planet as a whole. Jupiter's atmosphere is the most comprehensively understood of those of all the giant planets because it was observed directly by the Galileo atmospheric probe when it entered the Jovian atmosphere on December 7, 1995. Other sources of information about Jupiter's atmospheric composition include the Infrared Space Observatory (ISO), the Galileo and Cassini orbiters, and Earth-based observations. The two main constituents of the Jovian atmosphere are molecular hydrogen (H2) and helium. Both the helium abundance relative to molecular hydrogen (by number of molecules) and the helium mass fraction are slightly lower than the Solar System's primordial values. The reason for this low abundance is not entirely understood, but some of the helium may have condensed into the core of Jupiter. This condensation is likely to be in the form of helium rain: as hydrogen turns into the metallic state at depths of more than 10,000 km, helium separates from it, forming droplets which, being denser than the metallic hydrogen, descend towards the core. This can also explain the severe depletion of neon, an element that easily dissolves in helium droplets and would be transported in them towards the core as well. The atmosphere contains various simple compounds such as water, methane (CH4), hydrogen sulfide (H2S), ammonia (NH3) and phosphine (PH3). Their abundances in the deep (below 10 bar) troposphere imply that the atmosphere of Jupiter is enriched in the elements carbon, nitrogen, sulfur and possibly oxygen by a factor of 2–4 relative to the Sun. The noble gases argon, krypton and xenon are also enriched relative to solar levels, while neon is scarcer. Other chemical compounds such as arsine (AsH3) and germane (GeH4) are present only in trace amounts. The upper atmosphere of Jupiter contains small amounts of simple hydrocarbons such as ethane, acetylene, and diacetylene, which form from methane under the influence of the solar ultraviolet radiation and charged particles coming from Jupiter's magnetosphere. The carbon dioxide, carbon monoxide and water present in the upper atmosphere are thought to originate from impacting comets, such as Shoemaker-Levy 9.
The water cannot come from the troposphere because the cold tropopause acts like a cold trap, effectively preventing water from rising to the stratosphere (see Vertical structure above). Earth- and spacecraft-based measurements have led to improved knowledge of the isotopic ratios in Jupiter's atmosphere. As of July 2003, the accepted value for the deuterium abundance is , which probably represents the primordial value in the protosolar nebula that gave birth to the Solar System. The ratio of nitrogen isotopes in the Jovian atmosphere, 15N to 14N, is about 2.3 × 10⁻³, a third lower than that in the Earth's atmosphere (3.5 × 10⁻³). The latter discovery is especially significant since the previous theories of Solar System formation considered the terrestrial value for the ratio of nitrogen isotopes to be primordial. Zones, belts and jets The visible surface of Jupiter is divided into several bands parallel to the equator. There are two types of bands: lightly colored zones and relatively dark belts. The wider Equatorial Zone (EZ) extends between latitudes of approximately 7°S to 7°N. Above and below the EZ, the North and South Equatorial belts (NEB and SEB) extend to 18°N and 18°S, respectively. Farther from the equator lie the North and South Tropical zones (NtrZ and STrZ). The alternating pattern of belts and zones continues until the polar regions at approximately 50 degrees latitude, where their visible appearance becomes somewhat muted. The difference in the appearance between zones and belts is caused by differences in the opacity of the clouds. Ammonia concentration is higher in zones, which leads to the appearance of denser clouds of ammonia ice at higher altitudes, which in turn leads to their lighter color. On the other hand, in belts clouds are thinner and are located at lower altitudes. The upper troposphere is colder in zones and warmer in belts. The exact nature of chemicals that make Jovian zones and bands so colorful is not known, but they may include complicated compounds of sulfur, phosphorus and carbon. The Jovian bands are bounded by zonal atmospheric flows (winds), called jets. The eastward (prograde) jets are found at the transition from zones to belts (going away from the equator), whereas westward (retrograde) jets mark the transition from belts to zones. Such a flow pattern means that the eastward (zonal) wind speed decreases poleward across belts and increases poleward across zones. Therefore, wind shear in belts is cyclonic, while in zones it is anticyclonic. The EZ is an exception to this rule, showing a strong eastward (prograde) jet with a local minimum of the wind speed exactly at the equator. The jet speeds are high on Jupiter, reaching more than 100 m/s. These speeds correspond to ammonia clouds located in the pressure range 0.7–1 bar. The prograde jets are generally more powerful than the retrograde jets. The jets extend thousands of kilometers into the interior, as inferred from gravity measurements made by the Juno spacecraft. The jets extend into the planet parallel to Jupiter's axis of rotation rather than radially (toward the center of the planet), consistent with the Taylor–Proudman theorem.
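The step made above from the jet pattern to the sense of the shear can be illustrated with the relative vorticity of a purely zonal flow, ζ = −∂u/∂y. The toy profile below represents a northern-hemisphere belt as described above (an eastward jet on its equatorward edge, a westward jet on its poleward edge); the latitudes and wind speeds are invented for illustration and are not Jupiter measurements.

```python
import math

# Toy northern-hemisphere belt: eastward (prograde) jet on its equatorward
# edge and westward (retrograde) jet on its poleward edge, as described
# above. The latitudes and wind speeds are invented for illustration only.
lat_eq_edge, lat_pole_edge = 15.0, 20.0   # degrees north
u_eq_edge, u_pole_edge = 100.0, -30.0     # zonal wind, m/s (eastward > 0)

R_JUPITER = 7.14e7                        # approximate mean radius, m
dy = math.radians(lat_pole_edge - lat_eq_edge) * R_JUPITER  # northward distance
du_dy = (u_pole_edge - u_eq_edge) / dy    # meridional shear, 1/s

zeta = -du_dy                             # relative vorticity of a zonal flow
sense = "cyclonic" if zeta > 0 else "anticyclonic"  # northern-hemisphere sign
print(f"du/dy = {du_dy:.2e} 1/s -> {sense} shear")
```

Swapping the two jets reproduces the zone configuration and gives anticyclonic shear, matching the statement above; in the southern hemisphere the sign convention for "cyclonic" flips along with the Coriolis parameter.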
The Galileo Probe measured the vertical profile of a jet along its descent trajectory into Jupiter's atmosphere, finding that the winds decay over two to three scale heights above the clouds, while below the cloud level the winds increase slightly and then remain constant down to at least 22 bar—the maximum operational depth reached by the probe. The origin of Jupiter's colored banded structure is not completely clear, though it may resemble the cloud structure of Earth's Hadley cells. The simplest interpretation is that zones are sites of atmospheric upwelling, whereas belts are manifestations of downwelling. When air enriched in ammonia rises in zones, it expands and cools, forming high and dense white clouds. In belts, however, the air descends, warming adiabatically as in a convergence zone on Earth, and white ammonia clouds evaporate, revealing lower, darker clouds. The location and width of bands, speed and location of jets on Jupiter are remarkably stable, having changed only slightly between 1980 and 2000. One example of change is a decrease in the speed of the strongest eastward jet, located at the boundary between the North Tropical Zone and the North Temperate Belt at 23°N. However, bands vary in coloration and intensity over time (see Specific bands below). These variations were first observed in the early seventeenth century. Meridional circulation cells Meridional circulation cells are a large-scale atmospheric motion in which gas rises at a certain latitude, travels in the north–south (meridional) direction, descends, and returns to its origin in a closed circulation cell. On Earth, the meridional circulation is composed of three cells in each hemisphere: the Hadley, Ferrel and polar cells. On Jupiter, the visible cloud bands suggested upward motion in the zones and downward motion in the belts, but this is indicative only of the upper few bars. However, the higher frequency of lightning flashes in the belts, indicative of upward atmospheric motion, suggested that the motion is reversed in the deeper atmosphere. Juno's microwave measurements probe the atmosphere down to ~240 bar. These measurements confirmed that these motions are part of large mid-latitude circulation cells, with upward motion in the belts and downward motion in the zones, extending from ~1 bar down to at least ~240 bar. So far, eight cells have been identified in each of Jupiter's hemispheres, between latitudes of about 20° and 60° N/S. The mid-latitude cells are driven by the breaking of atmospheric waves, similar to the Ferrel cells on Earth. While on Earth the return flow in the cells' lower branch is balanced by friction in the Ekman layer, the corresponding balance on Jupiter is not yet known; one possibility is that the friction is provided by magnetic drag. Specific bands The belts and zones that divide Jupiter's atmosphere each have their own names and unique characteristics. They begin below the North and South Polar Regions, which extend from the poles to roughly 40–48° N/S. These bluish-gray regions are usually featureless. The North North Temperate Region rarely shows more detail than the polar regions, due to limb darkening, foreshortening, and the general diffuseness of features. However, the North-North Temperate Belt (NNTB) is the northernmost distinct belt, though it occasionally disappears. Disturbances tend to be minor and short-lived. The North-North Temperate Zone (NNTZ) is perhaps more prominent, but also generally quiet. Other minor belts and zones in the region are occasionally observed.
The North Temperate Region is part of a latitudinal region easily observable from Earth, and thus has a superb record of observation. It also features the strongest prograde jet stream on the planet—a westerly current that forms the southern boundary of the North Temperate Belt (NTB). The NTB fades roughly once a decade (this was the case during the Voyager encounters), making the North Temperate Zone (NTZ) apparently merge into the North Tropical Zone (NTropZ). Other times, the NTZ is divided by a narrow belt into northern and southern components. The North Tropical Region is composed of the NTropZ and the North Equatorial Belt (NEB). The NTropZ is generally stable in coloration, changing in tint only in tandem with activity on the NTB's southern jet stream. Like the NTZ, it too is sometimes divided by a narrow band, the NTropB. On rare occasions, the southern NTropZ plays host to "Little Red Spots". As the name suggests, these are northern equivalents of the Great Red Spot. Unlike the GRS, they tend to occur in pairs and are always short-lived, lasting a year on average; one was present during the Pioneer 10 encounter. The NEB is one of the most active belts on the planet. It is characterized by anticyclonic white ovals and cyclonic "barges" (also known as "brown ovals"), with the former usually forming farther north than the latter; as in the NTropZ, most of these features are relatively short-lived. Like the South Equatorial Belt (SEB), the NEB has sometimes dramatically faded and "revived". The timescale of these changes is about 25 years. The Equatorial Region (EZ) is one of the most stable regions of the planet, in latitude and in activity. The northern edge of the EZ hosts spectacular plumes that trail southwest from the NEB, which are bounded by dark, warm (in infrared) features known as festoons (hot spots). Though the southern boundary of the EZ is usually quiescent, observations from the late 19th into the early 20th century show that this pattern was then reversed relative to today. The EZ varies considerably in coloration, from pale to an ochre, or even coppery hue; it is occasionally divided by an Equatorial Band (EB). Features in the EZ move roughly 390 km/h relative to the other latitudes. The South Tropical Region includes the South Equatorial Belt (SEB) and the South Tropical Zone. It is by far the most active region on the planet, as it is home to its strongest retrograde jet stream. The SEB is usually the broadest, darkest belt on Jupiter; it is sometimes split by a zone (the SEBZ), and can fade entirely every 3 to 15 years before reappearing in what is known as an SEB Revival cycle. A period of weeks or months following the belt's disappearance, a white spot forms and erupts dark brownish material which is stretched into a new belt by Jupiter's winds. The belt most recently disappeared in May 2010. Another characteristic of the SEB is a long train of cyclonic disturbances following the Great Red Spot. Like the NTropZ, the STropZ is one of the most prominent zones on the planet; not only does it contain the GRS, but it is occasionally rent by a South Tropical Disturbance (STropD), a division of the zone that can be very long-lived; the most famous one lasted from 1901 to 1939. The South Temperate Region, or South Temperate Belt (STB), is yet another dark, prominent belt, more so than the NTB; until March 2000, its most famous features were the long-lived white ovals BC, DE, and FA, which have since merged to form Oval BA ("Red Jr."). 
The ovals were part of the South Temperate Zone, but they extended into the STB, partially blocking it. The STB has occasionally faded, apparently due to complex interactions between the white ovals and the GRS. The appearance of the South Temperate Zone (STZ)—the zone in which the white ovals originated—is highly variable. There are other features on Jupiter that are either temporary or difficult to observe from Earth. The South South Temperate Region is even harder to discern than the NNTR; its detail is subtle and can be studied well only with large telescopes or spacecraft. Many zones and belts are more transient in nature and are not always visible. These include the Equatorial band (EB), North Equatorial belt zone (NEBZ, a white zone within the belt) and South Equatorial belt zone (SEBZ). Belts are also occasionally split by a sudden disturbance. When a disturbance divides a normally singular belt or zone, an N or an S is added to indicate whether the component is the northern or southern one; e.g., NEB(N) and NEB(S). Dynamics Circulation in Jupiter's atmosphere is markedly different from that in the atmosphere of Earth. The interior of Jupiter is fluid and lacks any solid surface. Therefore, convection may occur throughout the planet's outer molecular envelope. As of 2008, a comprehensive theory of the dynamics of the Jovian atmosphere has not been developed. Any such theory needs to explain the following facts: the existence of narrow stable bands and jets that are symmetric relative to Jupiter's equator, the strong prograde jet observed at the equator, the difference between zones and belts, and the origin and persistence of large vortices such as the Great Red Spot. The theories regarding the dynamics of the Jovian atmosphere can be broadly divided into two classes: shallow and deep. The former hold that the observed circulation is largely confined to a thin outer (weather) layer of the planet, which overlies the stable interior. The latter hypothesis postulates that the observed atmospheric flows are only a surface manifestation of deeply rooted circulation in the outer molecular envelope of Jupiter. As both theories have their own successes and failures, many planetary scientists think that the true theory will include elements of both models. Shallow models The first attempts to explain Jovian atmospheric dynamics date back to the 1960s. They were partly based on terrestrial meteorology, which had become well developed by that time. Those shallow models assumed that the jets on Jupiter are driven by small-scale turbulence, which is in turn maintained by moist convection in the outer layer of the atmosphere (above the water clouds). Moist convection is a phenomenon related to the condensation and evaporation of water and is one of the major drivers of terrestrial weather. The production of the jets in this model is related to a well-known property of two-dimensional turbulence—the so-called inverse cascade, in which small turbulent structures (vortices) merge to form larger ones. The finite size of the planet means that the cascade cannot produce structures larger than some characteristic scale, which for Jupiter is called the Rhines scale. Its existence is connected to the production of Rossby waves. This process works as follows: when the largest turbulent structures reach a certain size, the energy begins to flow into Rossby waves instead of larger structures, and the inverse cascade stops.
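A rough sense of the scale at which the inverse cascade is arrested is given by the Rhines scale, often written as L ≈ (U/β)^1/2 up to an order-unity prefactor, where U is a characteristic jet speed and β is the northward gradient of the Coriolis parameter. The sketch below evaluates this for Jupiter; the wind speed, latitude, rotation rate and radius are round-figure assumptions used only for illustration, and published definitions of the scale differ by order-unity factors.

```python
import math

# Order-of-magnitude Rhines scale, L ~ sqrt(U / beta), for Jupiter.
# Wind speed, latitude, rotation rate and radius are round-figure
# assumptions; published definitions differ by order-unity prefactors.
omega = 1.76e-4            # rotation rate, rad/s (about a 9.9 h period)
R_JUPITER = 7.14e7         # mean radius, m
lat = math.radians(30.0)   # representative mid-latitude

beta = 2.0 * omega * math.cos(lat) / R_JUPITER  # df/dy, 1/(m s)
U = 50.0                                        # characteristic jet speed, m/s

L_rhines = math.sqrt(U / beta)
print(f"beta ~ {beta:.2e} 1/(m s)")
print(f"Rhines scale ~ {L_rhines / 1e3:.0f} km")
```

The result, a few thousand kilometers, is comparable to the meridional width and spacing of the observed jets, which is the comparison being drawn in this discussion.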
Because the dispersion relation of Rossby waves on a rapidly rotating spherical planet is anisotropic, the Rhines scale in the direction parallel to the equator is larger than in the direction orthogonal to it. The ultimate result of the process described above is the production of large-scale elongated structures parallel to the equator. Their meridional extent appears to match the actual width of the jets. Therefore, in shallow models vortices actually feed the jets and should disappear by merging into them. While these weather-layer models can successfully explain the existence of a dozen narrow jets, they have serious problems. A glaring failure of the model is the prograde (super-rotating) equatorial jet: with some rare exceptions, shallow models produce a strong retrograde (subrotating) jet, contrary to observations. In addition, the jets tend to be unstable and can disappear over time. Shallow models cannot explain how the observed atmospheric flows on Jupiter violate stability criteria. More elaborate multilayer versions of weather-layer models produce more stable circulation, but many problems persist. Meanwhile, the Galileo Probe found that the winds on Jupiter extend well below the water clouds at 5–7 bar and do not show any evidence of decay down to the 22 bar pressure level, which implies that circulation in the Jovian atmosphere may in fact be deep. Deep models The deep model was first proposed by Busse in 1976. His model was based on another well-known feature of fluid mechanics, the Taylor–Proudman theorem. It holds that in any fast-rotating barotropic ideal liquid, the flows are organized in a series of cylinders parallel to the rotational axis. The conditions of the theorem are probably met in the fluid Jovian interior. Therefore, the planet's molecular hydrogen mantle may be divided into cylinders, each cylinder having a circulation independent of the others. Those latitudes where the cylinders' outer and inner boundaries intersect with the visible surface of the planet correspond to the jets; the cylinders themselves are observed as zones and belts. The deep model easily explains the strong prograde jet observed at the equator of Jupiter; the jets it produces are stable and do not obey the 2D stability criterion. However, it has major difficulties: it produces a very small number of broad jets, and realistic simulations of 3D flows were not possible as of 2008, meaning that the simplified models used to justify deep circulation may fail to capture important aspects of the fluid dynamics within Jupiter. One model published in 2004 successfully reproduced the Jovian band-jet structure. It assumed that the molecular hydrogen mantle is thinner than in all other models, occupying only the outer 10% of Jupiter's radius. In standard models of the Jovian interior, the mantle comprises the outer 20–30%. The driving of deep circulation is another problem. The deep flows can be caused either by shallow forcing (moist convection, for instance) or by deep planet-wide convection that transports heat out of the Jovian interior. Which of these mechanisms is more important is not yet clear. Moist convection and Y-shaped structures in Jupiter's Equatorial Zone Numerical simulations suggest that deep convection on Jupiter is primarily triggered by water condensation occurring at pressure levels ranging from approximately 5 bar to 500 mbar.
At the upper altitudes of these convective plumes, where the pressure is a few hundred millibars, condensates such as NH3, H2S, and water are likely to form. In contrast, at pressures exceeding 3 bar, water becomes the dominant condensate. Global climate model (GCM) simulations using Jupiter-DYNAMICO indicate weaker convective activity in the equatorial regions compared to mid- to high latitudes, consistent with lightning observations. It is also possible that during strong storms on Jupiter, ammonia vapor dissolves into lofted water ice at pressures between 1.1 and 1.5 bar, forming a low-temperature liquid mixture of ammonia and water. This process facilitates the formation of ammonia-rich mushballs that transport ammonia to deeper layers of the atmosphere. A possible mechanism for the formation of Y-shaped structures on Jupiter's equator is that they are driven by equatorial modons coupled with convectively baroclinic Kelvin waves (CCBCKWs). This mechanism suggests that Y-shaped structures result from large-scale localized heating in a diabatic environment, which, upon reaching a critical threshold of negative pressure or positive buoyancy anomaly, generates a hybrid structure. This hybrid structure consists of a quasi-equatorial modon, a coherent dipolar structure, coupled with a CCBCKW that propagates eastward in a self-sustaining and self-propelled manner. Initially, the hybrid moves steadily eastward; however, the larger phase speed of the CCBCKW eventually leads to its detachment from the quasi-equatorial modon. The lifetime of this coupled structure varies from interseasonal to seasonal timescales. Moist convection is a necessary condition for triggering the eastward-propagating structure. Internal heat As has been known since 1966, Jupiter radiates much more heat than it receives from the Sun. It is estimated that the ratio of the thermal power emitted by the planet to the thermal power absorbed from the Sun is . The internal heat flux from Jupiter is , whereas the total emitted power is . The latter value is approximately equal to one billionth of the total power radiated by the Sun. This excess heat is mainly the primordial heat from the early phases of Jupiter's formation, but may result in part from the precipitation of helium into the core. The internal heat may be important for the dynamics of the Jovian atmosphere. Although Jupiter has a small obliquity of about 3° and its poles receive much less solar radiation than its equator, the tropospheric temperatures do not change appreciably from the equator to the poles. One explanation is that Jupiter's convective interior acts like a thermostat, releasing more heat near the poles than in the equatorial region. This leads to a uniform temperature in the troposphere. While heat is transported from the equator to the poles mainly via the atmosphere on Earth, on Jupiter deep convection equilibrates heat. The convection in the Jovian interior is thought to be driven mainly by the internal heat. Discrete features Vortices The atmosphere of Jupiter is home to hundreds of vortices—circular rotating structures that, as in the Earth's atmosphere, can be divided into two classes: cyclones and anticyclones. Cyclones rotate in a direction similar to the rotation of the planet (counterclockwise in the northern hemisphere and clockwise in the southern); anticyclones rotate in the reverse direction.
However, unlike in the terrestrial atmosphere, anticyclones predominate over cyclones on Jupiter—more than 90% of vortices larger than 2000 km in diameter are anticyclones. The lifetime of Jovian vortices varies from several days to hundreds of years, depending on their size. For instance, the average lifetime of an anticyclone between 1000 and 6000 km in diameter is 1–3 years. Vortices have never been observed in the equatorial region of Jupiter (within 10° of latitude), where they are unstable. As on any rapidly rotating planet, Jupiter's anticyclones are high pressure centers, while cyclones are low pressure. The anticyclones in Jupiter's atmosphere are always confined within zones, where the wind speed increases in direction from the equator to the poles. They are usually bright and appear as white ovals. They can move in longitude, but stay at approximately the same latitude as they are unable to escape from the confining zone. The wind speeds at their periphery are about 100 m/s. Different anticyclones located in one zone tend to merge when they approach each other. However Jupiter has two anticyclones that are somewhat different from all others. They are the Great Red Spot (GRS) and the Oval BA; the latter formed only in 2000. In contrast to white ovals, these structures are red, arguably due to dredging up of red material from the planet's depths. On Jupiter the anticyclones usually form through merges of smaller structures including convective storms (see below), although large ovals can result from the instability of jets. The latter was observed in 1938–1940, when a few white ovals appeared as a result of instability of the southern temperate zone; they later merged to form Oval BA. In contrast to anticyclones, the Jovian cyclones tend to be small, dark and irregular structures. Some of the darker and more regular features are known as brown ovals (or badges). However the existence of a few long–lived large cyclones has been suggested. In addition to compact cyclones, Jupiter has several large irregular filamentary patches, which demonstrate cyclonic rotation. One of them is located to the west of the GRS (in its wake region) in the southern equatorial belt. These patches are called cyclonic regions (CR). The cyclones are always located in the belts and tend to merge when they encounter each other, much like anticyclones. The deep structure of vortices is not completely clear. They are thought to be relatively thin, as any thickness greater than about 500 km will lead to instability. The large anticyclones are known to extend only a few tens of kilometers above the visible clouds. As of 2008, the early hypothesis that the vortices are deep convective plumes (or convective columns) is not shared by the majority of planetary scientists. Great Red Spot The Great Red Spot (GRS) is a persistent anticyclonic storm, 22° south of Jupiter's equator; observations from Earth establish a minimum storm lifetime of 350 years. A storm was described as a "permanent spot" by Gian Domenico Cassini after observing the feature in July 1665 with his instrument-maker Eustachio Divini. According to a report by Giovanni Battista Riccioli in 1635, Leander Bandtius, whom Riccioli identified as the Abbot of Dunisburgh who possessed an "extraordinary telescope", observed a large spot that he described as "oval, equaling one seventh of Jupiter's diameter at its longest." According to Riccioli, "these features are seldom able to be seen, and then only by a telescope of exceptional quality and magnification". 
The Great Spot has been continually observed since the 1870s, however. The GRS rotates counter-clockwise, with a period of about six Earth days or 14 Jovian days. Its dimensions are 24,000–40,000 km east-to-west and 12,000–14,000 km north-to-south. The spot is large enough to contain two or three planets the size of Earth. At the start of 2004, the Great Red Spot had approximately half the longitudinal extent it had a century ago, when it was 40,000 km in diameter. At the present rate of reduction, it could potentially become circular by 2040, although this is unlikely because of the distortion effect of the neighboring jet streams. It is not known how long the spot will last, or whether the change is a result of normal fluctuations. According to a study by scientists at the University of California, Berkeley, between 1996 and 2006 the spot lost 15 percent of its diameter along its major axis. Xylar Asay-Davis, who was on the team that conducted the study, noted that the spot is not disappearing because "velocity is a more robust measurement because the clouds associated with the Red Spot are also strongly influenced by numerous other phenomena in the surrounding atmosphere." Infrared data have long indicated that the Great Red Spot is colder (and thus, higher in altitude) than most of the other clouds on the planet; the cloudtops of the GRS are about 8 km above the surrounding clouds. Furthermore, careful tracking of atmospheric features revealed the spot's counterclockwise circulation as far back as 1966 – observations dramatically confirmed by the first time-lapse movies from the Voyager flybys. The spot is spatially confined by a modest eastward jet stream (prograde) to its south and a very strong westward (retrograde) one to its north. Though winds around the edge of the spot peak at about 120 m/s (432 km/h), currents inside it seem stagnant, with little inflow or outflow. The rotation period of the spot has decreased with time, perhaps as a direct result of its steady reduction in size. In 2010, astronomers imaged the GRS in the far infrared (from 8.5 to 24 μm) with a spatial resolution higher than ever before and found that its central, reddest region is warmer than its surroundings by between 3–4 K. The warm airmass is located in the upper troposphere in the pressure range of 200–500 mbar. This warm central spot slowly counter-rotates and may be caused by a weak subsidence of air in the center of GRS. The Great Red Spot's latitude has been stable for the duration of good observational records, typically varying by about a degree. Its longitude, however, is subject to constant variation. Because Jupiter's visible features do not rotate uniformly at all latitudes, astronomers have defined three different systems for defining the longitude. System II is used for latitudes of more than 10°, and was originally based on the average rotation rate of the Great Red Spot of 9h 55m 42s. Despite this, the spot has "lapped" the planet in System II at least 10 times since the early 19th century. Its drift rate has changed dramatically over the years and has been linked to the brightness of the South Equatorial Belt, and the presence or absence of a South Tropical Disturbance. It is not known exactly what causes the Great Red Spot's reddish color. Theories supported by laboratory experiments suppose that the color may be caused by complex organic molecules, red phosphorus, or yet another sulfur compound. The GRS varies greatly in hue, from almost brick-red to pale salmon, or even white. 
The higher temperature of the reddest central region is the first evidence that the Spot's color is affected by environmental factors. The spot occasionally disappears from the visible spectrum, becoming evident only through the Red Spot Hollow, which is its niche in the South Equatorial Belt (SEB). The visibility of GRS is apparently coupled to the appearance of the SEB; when the belt is bright white, the spot tends to be dark, and when it is dark, the spot is usually light. The periods when the spot is dark or light occur at irregular intervals; in the 50 years from 1947 to 1997, the spot was darkest in the periods 1961–1966, 1968–1975, 1989–1990, and 1992–1993. In November 2014, an analysis of data from NASA's Cassini mission revealed that the red color is likely a product of simple chemicals being broken apart by solar ultraviolet irradiation in the planet's upper atmosphere. The Great Red Spot should not be confused with the Great Dark Spot, a feature observed near Jupiter's north pole in 2000 by the Cassini–Huygens spacecraft. A feature in the atmosphere of Neptune was also called the Great Dark Spot. The latter feature, imaged by Voyager 2 in 1989, may have been an atmospheric hole rather than a storm. It was no longer present in 1994, although a similar spot had appeared farther to the north. Oval BA Oval BA is a red storm in Jupiter's southern hemisphere similar in form to, though smaller than, the Great Red Spot (it is often affectionately referred to as "Red Spot Jr.", "Red Jr." or "The Little Red Spot"). A feature in the South Temperate Belt, Oval BA was first seen in 2000 after the collision of three small white storms, and has intensified since then. The formation of the three white oval storms that later merged into Oval BA can be traced to 1939, when the South Temperate Zone was torn by dark features that effectively split the zone into three long sections. Jovian observer Elmer J. Reese labeled the dark sections AB, CD, and EF. The rifts expanded, shrinking the remaining segments of the STZ into the white ovals FA, BC, and DE. Ovals BC and DE merged in 1998, forming Oval BE. Then, in March 2000, BE and FA joined, forming Oval BA. (see White ovals, below) Oval BA slowly began to turn red in August 2005. On February 24, 2006, Filipino amateur astronomer Christopher Go discovered the color change, noting that it had reached the same shade as the GRS. As a result, NASA writer Dr. Tony Phillips suggested it be called "Red Spot Jr." or "Red Jr." In April 2006, a team of astronomers, believing that Oval BA might converge with the GRS that year, observed the storms through the Hubble Space Telescope. The storms pass each other about every two years, but the passings of 2002 and 2004 did not produce anything exciting. Dr. Amy Simon-Miller, of the Goddard Space Flight Center, predicted the storms would have their closest passing on July 4, 2006. On July 20, the two storms were photographed passing each other by the Gemini Observatory without converging. Why Oval BA turned red is not well understood. According to a 2008 study by Dr. Santiago Pérez-Hoyos of the University of the Basque Country, the most likely mechanism is "an upward and inward diffusion of either a colored compound or a coating vapor that may interact later with high energy solar photons at the upper levels of Oval BA."
Some believe that small storms (and their corresponding white spots) on Jupiter turn red when the winds become powerful enough to draw certain gases from deeper within the atmosphere which change color when those gases are exposed to sunlight. Oval BA is getting stronger according to observations made with the Hubble Space Telescope in 2007. The wind speeds have reached 618 km/h; about the same as in the Great Red Spot and far stronger than any of the progenitor storms. As of July 2008, its size is about the diameter of Earth—approximately half the size of the Great Red Spot. Oval BA should not be confused with another major storm on Jupiter, the South Tropical Little Red Spot (LRS) (nicknamed "the Baby Red Spot" by NASA), which was destroyed by the GRS. The new storm, previously a white spot in Hubble images, turned red in May 2008. The observations were led by Imke de Pater of the University of California, at Berkeley, US. The Baby Red Spot encountered the GRS in late June to early July 2008, and in the course of a collision, the smaller red spot was shredded into pieces. The remnants of the Baby Red Spot first orbited, then were later consumed by the GRS. The last of the remnants with a reddish color to have been identified by astronomers had disappeared by mid-July, and the remaining pieces again collided with the GRS, then finally merged with the bigger storm. The remaining pieces of the Baby Red Spot had completely disappeared by August 2008. During this encounter Oval BA was present nearby, but played no apparent role in the destruction of the Baby Red Spot. Storms and lightning The storms on Jupiter are similar to thunderstorms on Earth. They reveal themselves via bright clumpy clouds about 1000 km in size, which appear from time to time in the belts' cyclonic regions, especially within the strong westward (retrograde) jets. In contrast to vortices, storms are short-lived phenomena; the strongest of them may exist for several months, while the average lifetime is only 3–4 days. They are believed to be due mainly to moist convection within Jupiter's troposphere. Storms are actually tall convective columns (plumes), which bring the wet air from the depths to the upper part of the troposphere, where it condenses in clouds. A typical vertical extent of Jovian storms is about 100 km; as they extend from a pressure level of about 5–7 bar, where the base of a hypothetical water cloud layer is located, to as high as 0.2–0.5 bar. Storms on Jupiter are always associated with lightning. The imaging of the night–side hemisphere of Jupiter by Galileo and Cassini spacecraft revealed regular light flashes in Jovian belts and near the locations of the westward jets, particularly at 51°N, 56°S and 14°S latitudes. On Jupiter lightning strikes are on average a few times more powerful than those on Earth. However, they are less frequent; the light power emitted from a given area is similar to that on Earth. A few flashes have been detected in polar regions, making Jupiter the second known planet after Earth to exhibit polar lightning. A Microwave Radiometer (Juno) detected many more in 2018. Every 15–17 years Jupiter is marked by especially powerful storms. They appear at 23°N latitude, where the strongest eastward jet, that can reach 150 m/s, is located. The last time such an event was observed was in March–June 2007. Two storms appeared in the northern temperate belt 55° apart in longitude. They significantly disturbed the belt. 
The dark material that was shed by the storms mixed with clouds and changed the belt's color. The storms moved with a speed as high as 170 m/s, slightly faster than the jet itself, hinting at the existence of strong winds deep in the atmosphere. Circumpolar cyclones Other notable features of Jupiter are its cyclones near the northern and southern poles of the planet. These are called circumpolar cyclones (CPCs) and they have been observed by the Juno Spacecraft using JunoCam and JIRAM. The cyclones have now been observed for about 5 years, as Juno completed 39 orbits around Jupiter. The northern pole has eight cyclones moving around a central cyclone (NPC) while the southern pole only has five cyclones around a central cyclone (SPC), with a gap between the first and second cyclones. The cyclones look like the hurricanes on Earth with trailing spiral arms and a denser center, although there are differences between the centers depending on the individual cyclone. Northern CPCs generally maintain their shape and position compared to the southern CPCs and this could be due to the faster wind speeds that are experienced in the south, where the maximum wind velocities are around 80 m/s to 90 m/s. Although there is more movement among the southern CPCs they tend to retain the pentagonal structure relative to the pole. It has also been observed that the angular wind velocity increases as the center is approached and radius becomes smaller, except for one cyclone in the north, which may have rotation in the opposite direction. The difference in the number of cyclones in the north compared to the south is probably due to the size of the cyclones. The southern CPCs tend to be bigger with radii ranging from 5,600 km to 7,000 km while northern CPCs range from 4,000 km to 4,600 km. The mechanism for the stability of these two symmetric structures of cyclones is an outcome of Beta-drift, a known effect causing cyclones to move poleward and anti-cyclones to move equatorward due to the conservation of momentum along streamlines in a vortex, under the change of the Coriolis parameter. Thus, cyclones forming in the polar regions may congregate at the pole and form a polar cyclone such as those observed on Saturn's poles. The polar cyclone (the central cyclone in the polygons) also emit a vorticity field which can repel other cyclones (see Fujiwhara effect) similar to the beta-effect. The latitude where the circumpolar cyclones are positioned (~84°) fits, in calculations, the hypothesis that the poleward beta-drift force balances the equatorward rejection of the polar cyclone on the circumpolar cyclones, assuming they have an anticyclonic ring around them, consistent with model simulations and observations. The northern cyclones tend to maintain an octagonal structure with the NPC as a center point. Northern cyclones have less data than southern cyclones because of limited illumination in the north-polar winter, making it difficult for JunoCam to obtain accurate measurements of northern CPC positions at each perijove (53 days), but JIRAM is able to collect enough data to understand the northern CPCs. The limited illumination makes it difficult to see the northern central cyclone, but by making four orbits, the NPC can be partially seen and the octagonal structure of the cyclones can be identified. 
Limited illumination also makes it difficult to view the motion of the cyclones, but early observations show that the NPC is offset from the pole by about 0.5˚ and the CPCs generally maintained their position around the center. Despite data being harder to obtain, it has been observed that the northern CPCs have a drift rate of about 1˚ to 2.5˚ per perijove to the west. The seventh cyclone in the north (n7) drifts a little more than the others and this is due to an anticyclonic white oval (AWO) that pulls it farther from the NPC, which causes the octagonal shape to be slightly distorted. The instantaneous locations of the south polar cyclones have been tracked for 5 years by the JIRAM instrument and by JunoCam. The locations over time were revealed to form an oscillatory motion of each of the 6 cyclones, with periods of approximately one (Earth) year and radii of about 400 km. These oscillations around the CPCs' mean positions were explained to be a result of imbalances between the beta-drift, pulling the CPCs toward the pole and the rejection forces that develop due to the interactions between the cyclones, similar to a 6-body spring system. In addition to this periodic motion, the south polar cyclones were observed to drift westward by 7.5±0.7˚ per year. The reason for this drift is still unknown. The circumpolar cyclones have different morphologies, especially in the north, where cyclones have a "filled" or "chaotic" structure. The inner part of the "chaotic" cyclones have small-scale cloud streaks and flecks. The "filled" cyclones have a sharply-bound, lobate area that is bright white near the edge with a dark inner portion. There are four "filled" cyclones and four "chaotic" cyclones in the north. The southern cyclones all have an extensive fine-scale spiral structure on their outside but they all differ in size and shape. There is very little observation of the cyclones due to low sun angles and a haze that is typically over the atmosphere but what little has been observed shows the cyclones to be a reddish color. Disturbances The normal pattern of bands and zones is sometimes disrupted for periods of time. One particular class of disruption are long-lived darkenings of the South Tropical Zone, normally referred to as "South Tropical Disturbances" (STD). The longest lived STD in recorded history was followed from 1901 until 1939, having been first seen by Percy B. Molesworth on February 28, 1901. It took the form of darkening over part of the normally bright South Tropical zone. Several similar disturbances in the South Tropical Zone have been recorded since then. Hot spots Some of the most mysterious features in the atmosphere of Jupiter are hot spots. In them, the air is relatively free of clouds and heat can escape from the depths without much absorption. The spots look like bright spots in the infrared images obtained at the wavelength of about 5 μm. They are preferentially located in the belts, although there is a train of prominent hot spots at the northern edge of the Equatorial Zone. The Galileo Probe descended into one of those equatorial spots. Each equatorial spot is associated with a bright cloudy plume located to the west of it and reaching up to 10,000 km in size. Hot spots generally have round shapes, although they do not resemble vortices. The origin of hot spots is not clear. They can be either downdrafts, where the descending air is adiabatically heated and dried or, alternatively, they can be a manifestation of planetary scale waves. 
The latter hypothesis explains the periodic pattern of the equatorial spots. The possibility of life In 1953, the Miller–Urey experiment demonstrated that the combination of lightning and the compounds present in the primitive Earth's atmosphere can form organic matter (including amino acids) that could serve as building blocks of life. The simulated atmosphere consisted of water, methane, ammonia and molecular hydrogen; all of these substances are found in Jupiter's atmosphere today. Jupiter's atmosphere has strong vertical air flows that would carry these compounds into lower regions, and the higher temperatures in Jupiter's interior would decompose them, hindering the formation of Earth-like life. Carl Sagan and Edwin E. Salpeter nonetheless speculated about the possibility of life in Jupiter's atmosphere. Observational history Early modern astronomers, using small telescopes, recorded the changing appearance of Jupiter's atmosphere. Their descriptive terms—belts and zones, brown spots and red spots, plumes, barges, festoons, and streamers—are still used. Other terms, such as vorticity, vertical motion, and cloud heights, came into use later, in the 20th century. The first observations of the Jovian atmosphere at higher resolution than possible with Earth-based telescopes were taken by the Pioneer 10 and 11 spacecraft. The first truly detailed images of Jupiter's atmosphere were provided by the Voyagers. The two spacecraft were able to image details at a resolution as low as 5 km in size in various spectra, and were also able to create "approach movies" of the atmosphere in motion. The Galileo orbiter, which suffered an antenna problem, saw less of Jupiter's atmosphere but at a better average resolution and a wider spectral bandwidth. Today, astronomers have access to a continuous record of Jupiter's atmospheric activity thanks to telescopes such as the Hubble Space Telescope. These show that the atmosphere is occasionally wracked by massive disturbances, but that, overall, it is remarkably stable. The vertical motion of Jupiter's atmosphere was largely determined by the identification of trace gases by ground-based telescopes. Spectroscopic studies after the collision of Comet Shoemaker–Levy 9 gave a glimpse of Jupiter's composition beneath the cloud tops. The presence of diatomic sulfur (S2) and carbon disulfide (CS2) was recorded—the first detection of either in Jupiter, and only the second detection of S2 in any astronomical object—together with other molecules such as ammonia (NH3) and hydrogen sulfide (H2S), while oxygen-bearing molecules such as sulfur dioxide were not detected, to the surprise of astronomers. The Galileo atmospheric probe, as it plunged into Jupiter, measured the wind, temperature, composition, clouds, and radiation levels down to 22 bar. However, below 1 bar elsewhere on Jupiter there is uncertainty in the quantities. Great Red Spot studies The first sighting of the GRS is often credited to Robert Hooke, who described a spot on the planet in May 1664; however, it is likely that Hooke's spot was in the wrong belt altogether (the North Equatorial Belt, versus the current location in the South Equatorial Belt). Much more convincing is Giovanni Cassini's description of a "permanent spot" in the following year. With fluctuations in visibility, Cassini's spot was observed from 1665 to 1713. A minor mystery concerns a Jovian spot depicted around 1700 on a canvas by Donato Creti, which is exhibited in the Vatican.
It is a part of a series of panels in which different (magnified) heavenly bodies serve as backdrops for various Italian scenes, the creation of all of them overseen by the astronomer Eustachio Manfredi for accuracy. Creti's painting is the first known to depict the GRS as red. No Jovian feature was officially described as red before the late 19th century. The present GRS was first seen only after 1830 and well-studied only after a prominent apparition in 1879. A 118-year gap separates the observations made after 1830 from its 17th-century discovery; whether the original spot dissipated and re-formed, whether it faded, or even whether the observational record was simply poor is unknown. The older spots had a short observational history and slower motion than that of the modern spot, which make it unlikely that they were the same feature. On February 25, 1979, when the Voyager 1 spacecraft was 9.2 million kilometers from Jupiter, it transmitted the first detailed image of the Great Red Spot back to Earth. Cloud details as small as 160 km across were visible. The colorful, wavy cloud pattern seen to the west (left) of the GRS is the spot's wake region, where extraordinarily complex and variable cloud motions are observed. White ovals The white ovals that were to become Oval BA formed in 1939. They covered almost 90 degrees of longitude shortly after their formation, but contracted rapidly during their first decade; their length stabilized at 10 degrees or less after 1965. Although they originated as segments of the STZ, they evolved to become completely embedded in the South Temperate Belt, suggesting that they moved north, "digging" a niche into the STB. Indeed, much like the GRS, their circulations were confined by two opposing jet streams on their northern and southern boundaries, with an eastward jet to their north and a retrograde westward one to the south. The longitudinal movement of the ovals seemed to be influenced by two factors: Jupiter's position in its orbit (they became faster at aphelion), and their proximity to the GRS (they accelerated when within 50 degrees of the Spot). The overall trend of the white ovals' drift rate was deceleration, with a decrease by half between 1940 and 1990. During the Voyager fly-bys, the ovals extended roughly 9000 km from east to west, 5000 km from north to south, and rotated every five days (compared to six for the GRS at the time).
Physical sciences
Solar System
Astronomy
30873721
https://en.wikipedia.org/wiki/Brontotheriidae
Brontotheriidae
Brontotheriidae is a family of extinct mammals belonging to the order Perissodactyla, the order that includes horses, rhinoceroses, and tapirs. Superficially, they looked rather like rhinos, with some developing bony nose horns, and were some of the earliest mammals to have evolved large body sizes of several tonnes. They lived around 56–34 million years ago, until the very close of the Eocene. Brontotheres had a Holarctic distribution, with the exception of Western Europe: they occupied North America, Asia, and Eastern Europe. They were the first fossilized mammals to be discovered west of the Mississippi, and were first discovered in South Dakota. Characteristics and evolution This group has also been referred to as "Titanotheres." "Titan" refers to the mythological Greek gods who were symbols of strength and large size, and "theros" is Greek for "wild animal." "Bronto" is Greek for "thunder," which may be how this group got the nickname "thunder beasts." Brontotheres retain four toes on their front feet and three toes on their hind feet. Their teeth are adapted to shearing (cutting) relatively nonabrasive vegetation. Their molars have a characteristic W-shaped ectoloph (outer shearing blade). The wear patterns observed on brontothere teeth suggest a folivorous diet. Early brontotheres had brachydont teeth with thick enamel, while later forms evolved more hypsodont teeth with thinner enamel. Brontotheres also shared an elongated postorbital cranium, meaning that their skulls are lengthened between their eyes and ears. They also had anteroposteriorly abbreviated (shortened) faces. The evolutionary history of this group is well known due to an excellent fossil record in North America. The earliest stem-brontotheres had an estimated body mass of only . The earliest brontotheres, such as Eotitanops, were rather small, no more than a meter in height, and hornless. Brontotheres evolved massive bodies, with some species standing over 2.5 meters (about 8 feet) tall, with body masses of over a tonne, perhaps exceeding , in large individuals of Megacerops, although some small species such as Nanotitanops did persist through the Eocene. Some genera, such as Dolichorhinus, evolved highly elongated skulls. Some later brontotheres developed horn-like bony projections of the skull. The North American brontothere Megacerops, for example, evolved large, sexually dimorphic, paired horns above its nose. The sexually dimorphic horns, along with highly developed neck musculature, suggest that brontotheres were highly gregarious (social) and that males may have performed some sort of head-clashing behavior in competition for mates. Females had smaller appendages, which may have been used to ward off predators and protect young. In Asia, another brontothere genus, Embolotherium, evolved a similarly gigantic body size; however, instead of the slingshot-like horns of Megacerops, it evolved a single elongated bony process composed of both nasal and frontal bones. Embolotherium may have used its large nasal cavity to make vocalizations in order to communicate with others of its species. Unlike the horns of rhinoceroses, which are made of keratin, the horns of brontotheres are composed of bone (the frontal bone and nasal bone) and were placed side to side rather than front to back. Similarly to giraffes, their horns were covered in skin and did not have grooves for nutrient blood vessels. There is some evidence of secondary bone growth, likely due to impact from head clashing. 
Brontotheres had likely adapted to the warmer and more humid climates of the Eocene, and probably became extinct because they could not adapt to the drier conditions and more open landscapes of the Oligocene. Discovery Brontotheres were one of the first fossilized mammals to be discovered west of the Mississippi, with the first fossil being found in 1846 in the Badlands, South Dakota. Joseph Leidy was the first researcher to scientifically describe brontothere fossils, followed by Cope and Marsh, who studied skulls and entire skeletons. Marsh coined the term "Brontotheridae," identified them as odd-toed ungulates, and identified distinguishing characteristics of the group. Brontotheriidae fossils have been found in eastern Europe, eastern Russia, Kazakhstan, Pakistan, southeast Asia, Korea, Japan, the southeastern U.S., and Canada. Classification Brontotheres are an early diverging clade within Perissodactyla. Although historically suggested to be closely related to horses, phylogenetic analyses have recovered them to lie outside the clade containing chalicotheres, rhinoceroses, tapirs and horses, or more closely related to chalicotheres, rhinoceroses and tapirs than to horses. Two classification systems for Brontotheriidae are presented below. The first contains 43 genera and 8 subfamilies, and although it is based on a 1997 publication by McKenna and Bell, it summarizes research that was conducted before 1920 and is badly outdated. The second classification is based on 2004 and 2005 research by Mihlbachler et al., which indicates that many of the previous subfamily names are invalid. Several more recently discovered brontotheres are included in the newer classification. Although Lambdotherium and Xenicohippus were previously included in Brontotheriidae, they are no longer considered members of this family. Lambdotherium, though excluded, may be the closest known relative to brontotheres. Xenicohippus is now thought to be an early member of the horse family, Equidae.
Biology and health sciences
Perissodactyla
Animals
4043742
https://en.wikipedia.org/wiki/Physics%20beyond%20the%20Standard%20Model
Physics beyond the Standard Model
Physics beyond the Standard Model (BSM) refers to the theoretical developments needed to explain the deficiencies of the Standard Model, such as the inability to explain the fundamental parameters of the standard model, the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with general relativity, and one or both theories break down under certain conditions, such as spacetime singularities like the Big Bang and black hole event horizons. Theories that lie beyond the Standard Model include various extensions of the standard model through supersymmetry, such as the Minimal Supersymmetric Standard Model (MSSM) and Next-to-Minimal Supersymmetric Standard Model (NMSSM), and entirely novel explanations, such as string theory, M-theory, and extra dimensions. As these theories tend to reproduce the entirety of current phenomena, the question of which theory is the right one, or at least the "best step" towards a Theory of Everything, can only be settled via experiments, and is one of the most active areas of research in both theoretical and experimental physics. Problems with the Standard Model Despite being the most successful theory of particle physics to date, the Standard Model is not perfect. A large share of the published output of theoretical physicists consists of proposals for various forms of "Beyond the Standard Model" new physics that would modify the Standard Model in ways subtle enough to be consistent with existing data, yet address its imperfections materially enough to predict non-Standard Model outcomes of new experiments that can be proposed. Phenomena not explained The Standard Model is inherently an incomplete theory. There are fundamental physical phenomena in nature that the Standard Model does not adequately explain: Gravity. The standard model does not explain gravity. The approach of simply adding a graviton to the Standard Model does not recreate what is observed experimentally without other modifications, as yet undiscovered, to the Standard Model. Moreover, the Standard Model is widely considered to be incompatible with the most successful theory of gravity to date, general relativity. Dark matter. Assuming that general relativity and Lambda CDM are true, cosmological observations tell us the standard model explains about 5% of the mass-energy present in the universe. About 26% should be dark matter (the remaining 69% being dark energy), which would behave just like other matter but which only interacts weakly (if at all) with the Standard Model fields. Yet, the Standard Model does not supply any fundamental particles that are good dark matter candidates. Dark energy. As mentioned, the remaining 69% of the universe's energy should consist of the so-called dark energy, a constant energy density for the vacuum. Attempts to explain dark energy in terms of vacuum energy of the standard model lead to a mismatch of 120 orders of magnitude. Neutrino oscillations. According to the Standard Model, neutrinos do not oscillate. However, experiments and astronomical observations have shown that neutrino oscillation does occur. These are typically explained by postulating that neutrinos have mass. Neutrinos do not have mass in the Standard Model, and mass terms for the neutrinos can be added to the Standard Model by hand, but these lead to new theoretical problems. 
For example, the mass terms need to be extraordinarily small and it is not clear if the neutrino masses would arise in the same way that the masses of other fundamental particles do in the Standard Model. There are also other extensions of the Standard Model for neutrino oscillations which do not assume massive neutrinos, such as Lorentz-violating neutrino oscillations. Matter–antimatter asymmetry. The universe is made out of mostly matter. However, the standard model predicts that matter and antimatter should have been created in (almost) equal amounts if the initial conditions of the universe did not involve disproportionate matter relative to antimatter. Yet, there is no mechanism in the Standard Model to sufficiently explain this asymmetry. Experimental results not explained No experimental result is accepted as definitively contradicting the Standard Model at the 5σ level, widely considered to be the threshold of a discovery in particle physics. Because every experiment contains some degree of statistical and systematic uncertainty, and the theoretical predictions themselves are also almost never calculated exactly and are subject to uncertainties in measurements of the fundamental constants of the Standard Model (some of which are tiny and others of which are substantial), it is to be expected that some of the hundreds of experimental tests of the Standard Model will deviate from it to some extent, even if there were no new physics to be discovered. At any given moment there are several experimental results standing that significantly differ from a Standard Model-based prediction. In the past, many of these discrepancies have been found to be statistical flukes or experimental errors that vanish as more data has been collected, or when the same experiments were conducted more carefully. On the other hand, any physics beyond the Standard Model would necessarily first appear in experiments as a statistically significant difference between an experiment and the theoretical prediction. In each case, physicists seek to determine whether a result is merely a statistical fluke or experimental error on the one hand, or a sign of new physics on the other. More statistically significant results cannot be mere statistical flukes but can still result from experimental error or inaccurate estimates of experimental precision. Frequently, experiments are tailored to be more sensitive to experimental results that would distinguish the Standard Model from theoretical alternatives. Some of the most notable examples include the following: B meson decay etc. – results from a BaBar experiment may suggest a surplus over Standard Model predictions of a type of particle decay . In this, an electron and positron collide, resulting in a B meson and an antimatter B meson, which then decays into a D meson and a tau lepton as well as a tau antineutrino. While the level of certainty of the excess (3.4σ in statistical jargon) is not enough to declare a break from the Standard Model, the results are a potential sign of something amiss and are likely to affect existing theories, including those attempting to deduce the properties of Higgs bosons. In 2015, LHCb reported observing a 2.1σ excess in the same ratio of branching fractions. The Belle experiment also reported an excess. In 2017, a meta-analysis of all available data reported a cumulative 5σ deviation from the Standard Model. Neutron lifetime puzzle – Free neutrons are not stable but decay after some time. 
Currently there are two methods used to measure this lifetime ("bottle" versus "beam") that give different values not within each other's error margin. The lifetime from the bottle method is currently at , with a difference of 10 seconds below the beam method value of . Theoretical predictions not observed Observation at particle colliders of all of the fundamental particles predicted by the Standard Model has been confirmed. The Higgs boson is predicted by the Standard Model's explanation of the Higgs mechanism, which describes how the weak SU(2) gauge symmetry is broken and how fundamental particles obtain mass; it was the last particle predicted by the Standard Model to be observed. On July 4, 2012, CERN scientists using the Large Hadron Collider announced the discovery of a particle consistent with the Higgs boson, with a mass of about . A Higgs boson was confirmed to exist on March 14, 2013, although efforts to confirm that it has all of the properties predicted by the Standard Model are ongoing. A few hadrons (i.e. composite particles made of quarks) whose existence is predicted by the Standard Model, and which can be produced only at very high energies and at very low frequencies, have not yet been definitively observed, and "glueballs" (i.e. composite particles made of gluons) have also not yet been definitively observed. Some very low-frequency particle decays predicted by the Standard Model have also not yet been definitively observed because insufficient data is available to make a statistically significant observation. Unexplained relations Koide formula – an unexplained empirical equation remarked upon by Yoshio Koide in 1981, and later by others. It relates the masses of the three charged leptons: Q = (me + mμ + mτ) / (√me + √mμ + √mτ)^2. The Standard Model does not predict lepton masses (they are free parameters of the theory). However, the value of the Koide formula being equal to 2/3 within experimental errors of the measured lepton masses suggests the existence of a theory which is able to predict lepton masses. The CKM matrix, if interpreted as a rotation matrix in a 3-dimensional vector space, "rotates" a vector composed of square roots of down-type quark masses into a vector of square roots of up-type quark masses, up to vector lengths, a result due to Kohzo Nishida. The sum of squares of the Yukawa couplings of all Standard Model fermions is approximately 0.984, which is very close to 1. To put it another way, the sum of squares of fermion masses is very close to half of the squared Higgs vacuum expectation value. This sum is dominated by the top quark. The sum of squares of boson masses (that is, W, Z, and Higgs bosons) is also very close to half of the squared Higgs vacuum expectation value; the ratio is approximately 1.004. Consequently, the sum of squared masses of all Standard Model particles is very close to the squared Higgs vacuum expectation value; the ratio is approximately 0.994. It is unclear if these empirical relationships represent any underlying physics; according to Koide, the rule he discovered "may be an accidental coincidence". Theoretical problems Some features of the standard model are added in an ad hoc way. These are not problems per se (i.e. the theory works fine with the ad hoc insertions), but they imply a lack of understanding. These contrived features have motivated theorists to look for more fundamental theories with fewer parameters. 
Some of the contrivances are: Hierarchy problem – the standard model introduces particle masses through a process known as spontaneous symmetry breaking caused by the Higgs field. Within the standard model, the mass of the Higgs particle gets some very large quantum corrections due to the presence of virtual particles (mostly virtual top quarks). These corrections are much larger than the actual mass of the Higgs. This means that the bare mass parameter of the Higgs in the standard model must be fine-tuned in such a way that it almost completely cancels the quantum corrections. This level of fine-tuning is deemed unnatural by many theorists. Number of parameters – the standard model depends on 19 numerical parameters. Their values are known from experiment, but the origin of the values is unknown. Some theorists have tried to find relations between different parameters, for example, between the masses of particles in different generations or calculating particle masses, such as in asymptotic safety scenarios. Quantum triviality – suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar Higgs particles. This is sometimes called the Landau pole problem. Strong CP problem – it can be argued theoretically that the standard model should contain a term in the strong interaction that breaks CP symmetry, causing slightly different interaction rates for matter vs. antimatter. Experimentally, however, no such violation has been found, implying that the coefficient of this term – if any – would be suspiciously close to zero. Additional experimental results Research from experimental data on the cosmological constant, LIGO noise, and pulsar timing suggests it is very unlikely that there are any new particles with masses much higher than those which can be found in the standard model or the Large Hadron Collider. However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics in the TeV range. Grand unified theories The standard model has three gauge symmetries: the colour SU(3), the weak isospin SU(2), and the weak hypercharge U(1) symmetry, corresponding to the three fundamental forces. Due to renormalization, the coupling constants of each of these symmetries vary with the energy at which they are measured. Around 10^16 GeV these couplings become approximately equal. This has led to speculation that above this energy the three gauge symmetries of the standard model are unified in one single gauge symmetry with a simple gauge group, and just one coupling constant. Below this energy the symmetry is spontaneously broken to the standard model symmetries. Popular choices for the unifying group are the special unitary group in five dimensions SU(5) and the special orthogonal group in ten dimensions SO(10). Theories that unify the standard model symmetries in this way are called Grand Unified Theories (or GUTs), and the energy scale at which the unified symmetry is broken is called the GUT scale. Generically, grand unified theories predict the creation of magnetic monopoles in the early universe, and instability of the proton. Neither of these has been observed, and this absence of observation puts limits on the possible GUTs. Supersymmetry Supersymmetry extends the Standard Model by adding another class of symmetries to the Lagrangian. These symmetries exchange fermionic particles with bosonic ones. 
Such a symmetry predicts the existence of supersymmetric particles, abbreviated as sparticles, which include the sleptons, squarks, neutralinos and charginos. Each particle in the Standard Model would have a superpartner whose spin differs by 1/2 from the ordinary particle. Due to the breaking of supersymmetry, the sparticles are much heavier than their ordinary counterparts; they are so heavy that existing particle colliders may not be powerful enough to produce them. Neutrinos In the standard model, neutrinos cannot spontaneously change flavor. Measurements, however, indicated that neutrinos do spontaneously change flavor, in what is called neutrino oscillations. Neutrino oscillations are usually explained using massive neutrinos. In the standard model, neutrinos have exactly zero mass, as the standard model only contains left-handed neutrinos. With no suitable right-handed partner, it is impossible to add a renormalizable mass term to the standard model. These measurements only give the mass differences between the different flavours. The best constraint on the absolute mass of the neutrinos comes from precision measurements of tritium decay, providing an upper limit of 2 eV, which makes them at least five orders of magnitude lighter than the other particles in the standard model. This necessitates an extension of the standard model, which not only needs to explain how neutrinos get their mass, but also why the mass is so small. One approach to add masses to the neutrinos, the so-called seesaw mechanism, is to add right-handed neutrinos and have these couple to left-handed neutrinos with a Dirac mass term. The right-handed neutrinos have to be sterile, meaning that they do not participate in any of the standard model interactions. Because they have no charges, the right-handed neutrinos can act as their own anti-particles, and have a Majorana mass term. Like the other Dirac masses in the standard model, the neutrino Dirac mass is expected to be generated through the Higgs mechanism, and is therefore unpredictable. The standard model fermion masses differ by many orders of magnitude; the Dirac neutrino mass has at least the same uncertainty. On the other hand, the Majorana mass for the right-handed neutrinos does not arise from the Higgs mechanism, and is therefore expected to be tied to some energy scale of new physics beyond the standard model, for example the Planck scale. Therefore, any process involving right-handed neutrinos will be suppressed at low energies. The correction due to these suppressed processes effectively gives the left-handed neutrinos a mass that is inversely proportional to the right-handed Majorana mass, a mechanism known as the seesaw. The presence of heavy right-handed neutrinos thereby explains both the small mass of the left-handed neutrinos and the absence of the right-handed neutrinos in observations. However, due to the uncertainty in the Dirac neutrino masses, the right-handed neutrino masses can lie anywhere. For example, they could be as light as keV and be dark matter, they can have a mass in the LHC energy range and lead to observable lepton number violation, or they can be near the GUT scale, linking the right-handed neutrinos to the possibility of a grand unified theory. The mass terms mix neutrinos of different generations. This mixing is parameterized by the PMNS matrix, which is the neutrino analogue of the CKM quark mixing matrix. Unlike the quark mixing, which is almost minimal, the mixing of the neutrinos appears to be almost maximal. 
This has led to various speculations of symmetries between the various generations that could explain the mixing patterns. The mixing matrix could also contain several complex phases that break CP invariance, although there has been no experimental probe of these. These phases could potentially create a surplus of leptons over anti-leptons in the early universe, a process known as leptogenesis. This asymmetry could then at a later stage be converted into an excess of baryons over anti-baryons, and explain the matter–antimatter asymmetry in the universe. The light neutrinos are disfavored as an explanation for the observation of dark matter, based on considerations of large-scale structure formation in the early universe. Simulations of structure formation show that they are too hot – that is, their kinetic energy is large compared to their mass – while formation of structures similar to the galaxies in our universe requires cold dark matter. The simulations show that neutrinos can at best explain a few percent of the missing mass in dark matter. However, the heavy, sterile, right-handed neutrinos are a possible candidate for a dark matter WIMP. There are, however, other explanations for neutrino oscillations which do not necessarily require neutrinos to have masses, such as Lorentz-violating neutrino oscillations. Preon models Several preon models have been proposed to address the unsolved problem of why there are three generations of quarks and leptons. Preon models generally postulate some additional new particles which are, in turn, postulated to combine to form the quarks and leptons of the standard model. One of the earliest preon models was the Rishon model. To date, no preon model is widely accepted or fully verified. Theories of everything Theoretical physics continues to strive toward a theory of everything, a theory that fully explains and links together all known physical phenomena, and predicts the outcome of any experiment that could be carried out in principle. In practical terms, the immediate goal in this regard is to develop a theory which would unify the Standard Model with General Relativity in a theory of quantum gravity. Additional features, such as overcoming conceptual flaws in either theory or accurate prediction of particle masses, would be desired. The challenges in putting together such a theory are not just conceptual – they include the experimental aspects of the very high energies needed to probe exotic realms. Several notable attempts in this direction are supersymmetry, loop quantum gravity, and string theory. Supersymmetry Loop quantum gravity Theories of quantum gravity such as loop quantum gravity and others are thought by some to be promising candidates for the mathematical unification of quantum field theory and general relativity, requiring less drastic changes to existing theories. However, recent work places stringent limits on the putative effects of quantum gravity on the speed of light, and disfavours some current models of quantum gravity. String theory Extensions, revisions, replacements, and reorganizations of the Standard Model exist in an attempt to correct for these and other issues. String theory is one such reinvention, and many theoretical physicists think that such theories are the next theoretical step toward a true Theory of Everything. 
Among the numerous variants of string theory, M-theory, whose mathematical existence was first proposed at a String Conference in 1995 by Edward Witten, is believed by many to be a proper "ToE" candidate, notably by physicists Brian Greene and Stephen Hawking. Though a full mathematical description is not yet known, solutions to the theory exist for specific cases. Recent works have also proposed alternate string models, some of which lack the various harder-to-test features of M-theory (e.g. the existence of Calabi–Yau manifolds, many extra dimensions, etc.), including works by well-published physicists such as Lisa Randall.
Physical sciences
Particle physics: General
null
4044867
https://en.wikipedia.org/wiki/Recursion%20%28computer%20science%29
Recursion (computer science)
In computer science, recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem. Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science. Most computer programming languages support recursion by allowing a function to call itself from within its own code. Some functional programming languages (for instance, Clojure) do not define any looping constructs but rely solely on recursion to repeatedly call code. It is proved in computability theory that these recursive-only languages are Turing complete; this means that they are as powerful (they can be used to solve the same problems) as imperative languages based on control structures such as while and for. Repeatedly calling a function from within itself may cause the call stack to have a size equal to the sum of the input sizes of all involved calls. It follows that, for problems that can be solved easily by iteration, recursion is generally less efficient, and, for certain problems, algorithmic or compiler-optimization techniques such as tail call optimization may improve computational performance over a naive recursive implementation. Recursive functions and algorithms A common algorithm design tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of previously solved sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization. Base case A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the "terminating case". The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances—for example, some system and server processes—are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop. For some functions (such as one that computes the series for ) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated by corecursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say "compute the nth term (nth partial sum)". Recursive data types Many computer programs must process or generate an arbitrarily large quantity of data. 
Recursion is a technique for representing data whose exact size is unknown to the programmer: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions. Inductively defined data An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (here, using Haskell syntax): data ListOfStrings = EmptyList | Cons String ListOfStrings The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings. Another example of inductive definition is the natural numbers (or positive integers): A natural number is either 1 or n+1, where n is a natural number. Similarly, recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:
<expr> ::= <number>
         | (<expr> * <expr>)
         | (<expr> + <expr>)
This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third lines, the grammar permits arbitrarily complicated arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression. Coinductively defined data and corecursion A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size. A coinductive definition of infinite streams of strings, given informally, might look like this: A stream of strings is an object s such that: head(s) is a string, and tail(s) is a stream of strings. This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure—namely, via the accessor functions head and tail—and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from. Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program's output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program. Types of recursion Single recursion and multiple recursion Recursion that contains only a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal, such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search. 
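As an illustration of the two kinds of recursion, a minimal C sketch (not taken from the article's own examples; the naive Fibonacci function is assumed purely for demonstration) might look like the following:
/* Single recursion: exactly one self-reference per call. */
unsigned int factorial(unsigned int n) {
    if (n == 0)
        return 1;                                /* base case */
    return n * factorial(n - 1);                 /* one recursive call */
}

/* Multiple recursion: two self-references per call. */
unsigned int fibonacci(unsigned int n) {
    if (n < 2)
        return n;                                /* base cases: fib(0) = 0, fib(1) = 1 */
    return fibonacci(n - 1) + fibonacci(n - 2);  /* two recursive calls */
}
Computed this way, fibonacci makes a number of calls that grows exponentially with n, which illustrates the efficiency contrast discussed next.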
Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, not being able to be replaced by iteration without an explicit stack. Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while computing the Fibonacci sequence naively entails multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values, while tracking two successive values at each step – see corecursion: examples. A more sophisticated example involves using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion. Indirect recursion Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again. Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, from the point of view of f alone, f is indirectly recursing, while from the point of view of g alone, g is indirectly recursing, while from the point of view of both, f and g are mutually recursing on each other. Similarly, a set of three or more functions that call each other can be called a set of mutually recursive functions. Anonymous recursion Recursion is usually done by explicitly calling a function by name. However, recursion can also be done via implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion. Structural versus generative recursion Some authors classify recursion as either "structural" or "generative". The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data: Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion. Generative recursion is the alternative: This distinction is important in proving termination of a function. All structurally recursive functions on finite (inductively defined) data structures can easily be shown to terminate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a base case is reached. 
Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, so proof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. These generatively recursive functions can often be interpreted as corecursive functions – each step generates the new data, such as successive approximation in Newton's method – and terminating this corecursion requires that the data eventually satisfy some condition, which is not necessarily guaranteed. In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step. By contrast, generative recursion is when there is not such an obvious loop variant, and termination depends on a function, such as "error of approximation" that does not necessarily decrease to zero, and thus termination is not guaranteed without further analysis. Implementation issues In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step), a number of modifications may be made, for purposes of clarity or efficiency. These include: Wrapper function (at top) Short-circuiting the base case, aka "Arm's-length recursion" (at bottom) Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm's-length recursion is a special case of this. Wrapper function A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion. Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as "level of recursion" or partial computations for memoization, and handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead a separate function, if possible private (as they are not called directly), and information is shared with the wrapper function by using pass-by-reference. Short-circuiting the base case Short-circuiting the base case, also known as arm's-length recursion, consists of checking the base case before making a recursive call – i.e., checking if the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is particularly done for efficiency reasons, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short circuit, and may miss 0; this can be mitigated by a wrapper function. The box shows C code to shortcut factorial cases 0 and 1. 
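The box referred to above is not reproduced in this text; a minimal C sketch of such a short-circuited (arm's-length) factorial, written with a wrapper to cover the 0! = 1 case, might look like this (the helper name factorial_do is assumed purely for illustration):
/* Assumes n >= 1; the base case 1! = 1 is checked before recursing. */
unsigned int factorial_do(unsigned int n) {
    if (n == 1)
        return 1;                    /* shortcut: no further call for 1! */
    return n * factorial_do(n - 1);
}

/* Wrapper handles the true base case 0! = 1, which the shortcut would miss. */
unsigned int factorial(unsigned int n) {
    if (n == 0)
        return 1;
    return factorial_do(n);
}
Here the wrapper covers case 0 and the helper short-circuits case 1, matching the description above.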
Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for algorithms; this is illustrated below for a depth-first search. Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings. Conceptually, short-circuiting can be considered to either have the same base case and recursive step, checking the base case only before the recursion, or it can be considered to have a different base case (one step removed from the standard base case) and a more complex recursive step, namely "check valid then recurse", as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia. Depth-first search A basic example of short-circuiting is given in depth-first search (DFS) of a binary tree; see the binary trees section below for the standard recursive discussion. The standard recursive algorithm for a DFS is: base case: If current node is Null, return false recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children In short-circuiting, this is instead: check value of current node, return true if match, otherwise, on children, if not Null, then recurse. In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null). In the case of a perfect binary tree of height h, there are 2^(h+1) − 1 nodes and 2^(h+1) Null pointers as children (2 for each of the 2^h leaves), so short-circuiting cuts the number of function calls in half in the worst case. In C, the standard recursive algorithm may be implemented as:
bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;                              // base case
    else if (tree_node->data == i)
        return true;
    else
        return tree_contains(tree_node->left, i) ||
               tree_contains(tree_node->right, i);
}
The short-circuited algorithm may be implemented as:
// Wrapper function to handle empty tree
bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;                              // empty tree
    else
        return tree_contains_do(tree_node, i);     // call auxiliary function
}

// Assumes tree_node != NULL
bool tree_contains_do(struct node *tree_node, int i) {
    if (tree_node->data == i)
        return true;                               // found
    else                                           // recurse
        return (tree_node->left  && tree_contains_do(tree_node->left, i)) ||
               (tree_node->right && tree_contains_do(tree_node->right, i));
}
Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is made only if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a Boolean, so the overall expression evaluates to a Boolean. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. 
In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency. Hybrid algorithm Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason, efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort. Recursion versus iteration Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit call stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple recursion. By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead. Implementing an algorithm using iteration may not be easily achievable. Compare the templates to compute x_n defined by x_n = f(n, x_(n−1)) from x_base: For an imperative language the overhead is to define the function, and for a functional language the overhead is to define the accumulator variable x. For example, a factorial function may be implemented iteratively in C by assigning to a loop index variable and accumulator variable, rather than by passing arguments and returning values by recursion:
unsigned int factorial(unsigned int n) {
    unsigned int product = 1;  // empty product is 1
    while (n) {
        product *= n;
        --n;
    }
    return product;
}
Expressive power Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program's runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program, as sketched below. Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and for loops are routinely rewritten in recursive form in functional languages. However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, may cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack, although tail call elimination may be a feature that is not covered by a language's specification, and different implementations of the same language may differ in tail call elimination capabilities. 
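As a concrete illustration of the transformation mentioned above (a sketch written for this text, not taken from the article), the recursive factorial can be rewritten so that the call stack is replaced by an array managed explicitly by the program:
/* Iterative factorial that simulates the recursive call stack with
   an explicitly managed array of pending arguments. */
unsigned int factorial_explicit_stack(unsigned int n) {
    unsigned int stack[64];          /* explicit "call stack" (assumed deep enough) */
    int top = -1;
    while (n > 0) {                  /* "descent": push each argument until the base case */
        stack[++top] = n;
        --n;
    }
    unsigned int result = 1;         /* base case 0! = 1 */
    while (top >= 0)                 /* "unwinding": perform the deferred multiplications */
        result *= stack[top--];
    return result;
}
Here the array plays the role that the runtime's call stack plays in the recursive version; each pushed value corresponds to one suspended call.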
Performance issues In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable. As a concrete example, the difference in performance between recursive and iterative implementations of the "factorial" example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration. Stack space In some programming languages, the maximum size of the call stack is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language. Note the caveat below regarding the special case of tail recursion. Vulnerability Because recursive algorithms can be subject to stack overflows, they may be vulnerable to pathological or malicious input. Some malware specifically targets a program's call stack and takes advantage of the stack's inherently recursive nature. Even in the absence of malware, a stack overflow caused by unbounded recursion can be fatal to the program, and exception handling logic may not prevent the corresponding process from being terminated. Multiply recursive problems Multiply recursive problems are inherently recursive, because of the prior state they need to track. One example is tree traversal as in depth-first search; though both recursive and iterative methods are used, they contrast with list traversal and linear search in a list, which are singly recursive and thus naturally iterative methods. Other examples include divide-and-conquer algorithms such as Quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution. Refactoring recursion Recursive algorithms can be replaced with non-recursive counterparts. One method for replacing recursive algorithms is to simulate them using heap memory in place of stack memory. An alternative is to develop a replacement algorithm entirely based on non-recursive methods, which can be challenging. For example, recursive algorithms for matching wildcards, such as Rich Salz' wildmat algorithm, were once typical. Non-recursive algorithms for the same purpose, such as the Krauss matching wildcards algorithm, have been developed to avoid the drawbacks of recursion and have improved only gradually based on techniques such as collecting tests and profiling performance. Tail-recursive functions Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations. 
For example, the gcd function (shown again below) is tail-recursive. In contrast, the factorial function (also below) is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the "for" and "while" loops. The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller's return position need not be saved on the call stack; when the recursive call returns, it will branch directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time. Order of execution Consider these two functions: Function 1
void recursiveFunction(int num) {
    printf("%d\n", num);
    if (num < 4)
        recursiveFunction(num + 1);
}
Function 2
void recursiveFunction(int num) {
    if (num < 4)
        recursiveFunction(num + 1);
    printf("%d\n", num);
}
Function 2 differs from function 1 only in that the two lines of its body are swapped. In the case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion has been reached. Also note that the order of the print statements is reversed, which is due to the way the functions and statements are stored on the call stack. Recursive procedures Factorial A classic example of a recursive procedure is the function used to calculate the factorial of a natural number: The function can also be written as a recurrence relation: This evaluation of the recurrence relation demonstrates the computation that would be performed in evaluating the pseudocode above: This factorial function can also be described without using recursion by making use of the typical looping constructs found in imperative programming languages: The imperative code above is equivalent to this mathematical definition using an accumulator variable : The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively. Greatest common divisor The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively. Function definition: Recurrence relation for greatest common divisor, where expresses the remainder of : if The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and the computation shown above demonstrates the steps of evaluation that would be performed by a language that eliminates tail calls. Below is a version of the same algorithm using explicit iteration, suitable for a language that does not eliminate tail calls. By maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack. The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps. 
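The function definitions and the iterative version referred to in this section are not reproduced in the text above; a minimal C sketch of both, under the standard Euclidean-algorithm definition, might read as follows:
/* Tail-recursive Euclidean algorithm: the recursive call is the last action. */
unsigned int gcd(unsigned int x, unsigned int y) {
    if (y == 0)
        return x;             /* base case */
    return gcd(y, x % y);     /* tail call: nothing is deferred */
}

/* Equivalent explicit iteration, suitable for a language that does not
   eliminate tail calls; the state lives entirely in x and y. */
unsigned int gcd_iterative(unsigned int x, unsigned int y) {
    while (y != 0) {
        unsigned int remainder = x % y;  /* the temporary variable */
        x = y;
        y = remainder;
    }
    return x;
}
For example, gcd(48, 18) evaluates as gcd(48, 18) → gcd(18, 12) → gcd(12, 6) → gcd(6, 0) → 6, and the iterative version passes through the same sequence of (x, y) pairs.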
Towers of Hanoi

The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion. There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller one. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack? The answer satisfies the recurrence relation:

hanoi(1) = 1
hanoi(n) = 2 · hanoi(n − 1) + 1 for n > 1

Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula: hanoi(n) = 2^n − 1.

Binary search

The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched for, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for. Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array's size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.

Example implementation of binary search in C:

/*
  Call binary_search with proper initial conditions.

  INPUT:
    data is an array of integers SORTED in ASCENDING order,
    toFind is the integer to search for,
    count is the total number of elements in the array

  OUTPUT:
    result of binary_search
*/
int search(int *data, int toFind, int count) {
    // Start = 0 (beginning index)
    // End = count - 1 (top index)
    return binary_search(data, toFind, 0, count - 1);
}

/*
  Binary Search Algorithm.

  INPUT:
    data is an array of integers SORTED in ASCENDING order,
    toFind is the integer to search for,
    start is the minimum array index,
    end is the maximum array index

  OUTPUT:
    position of the integer toFind within array data,
    -1 if not found
*/
int binary_search(int *data, int toFind, int start, int end) {
    // Get the midpoint.
    int mid = start + (end - start) / 2;  // Integer division

    if (start > end)                      // Stop condition (base case)
        return -1;
    else if (data[mid] == toFind)         // Found, return index
        return mid;
    else if (data[mid] > toFind)          // Data is greater than toFind, search lower half
        return binary_search(data, toFind, start, mid - 1);
    else                                  // Data is less than toFind, search upper half
        return binary_search(data, toFind, mid + 1, end);
}

Recursive data structures (structural recursion)

An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time. "Recursive algorithms are particularly appropriate when the underlying problem or the data to be treated are defined in recursive terms." The examples in this section illustrate what is known as "structural recursion". This term refers to the fact that the recursive procedures are acting on data that is defined recursively. As long as a programmer derives the template from a data definition, functions employ structural recursion.
That is, the recursions in a function's body consume some immediate piece of a given compound value.

Linked lists

Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The "next" element of struct node is a pointer to another struct node, effectively creating a list type.

struct node {
    int data;           // some integer data
    struct node *next;  // pointer to another struct node
};

Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.

void list_print(struct node *list) {
    if (list != NULL)                  // base case
    {
        printf("%d ", list->data);     // print integer data followed by a space
        list_print(list->next);        // recursive call on the next node
    }
}

Binary trees

Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree).

struct node {
    int data;            // some integer data
    struct node *left;   // pointer to the left subtree
    struct node *right;  // pointer to the right subtree
};

Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:

// Test if tree_node contains i; return 1 if so, 0 if not.
int tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return 0;  // base case
    else if (tree_node->data == i)
        return 1;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

At most two recursive calls will be made for any given call to tree_contains as defined above.

// Inorder traversal:
void tree_print(struct node *tree_node) {
    if (tree_node != NULL) {             // base case
        tree_print(tree_node->left);     // go left
        printf("%d ", tree_node->data);  // print the integer followed by a space
        tree_print(tree_node->right);    // go right
    }
}

The above example illustrates an in-order traversal of the binary tree. A binary search tree is a special case of the binary tree where the data elements of each node are in order.

Filesystem traversal

Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to traversing a tree; therefore, the concepts behind tree traversal are applicable to traversing a filesystem. More specifically, the code below would be an example of a preorder traversal of a filesystem.
import java.io.File;

public class FileSystem {

    public static void main(String[] args) {
        traverse();
    }

    /**
     * Obtains the filesystem roots
     * Proceeds with the recursive filesystem traversal
     */
    private static void traverse() {
        File[] fs = File.listRoots();
        for (int i = 0; i < fs.length; i++) {
            System.out.println(fs[i]);
            if (fs[i].isDirectory() && fs[i].canRead()) {
                rtraverse(fs[i]);
            }
        }
    }

    /**
     * Recursively traverse a given directory
     *
     * @param fd indicates the starting point of traversal
     */
    private static void rtraverse(File fd) {
        File[] fss = fd.listFiles();
        for (int i = 0; i < fss.length; i++) {
            System.out.println(fss[i]);
            if (fss[i].isDirectory() && fss[i].canRead()) {
                rtraverse(fss[i]);
            }
        }
    }

}

This code combines recursion and iteration: the files and directories are iterated over, and each directory is opened recursively. The "rtraverse" method is an example of direct recursion, whilst the "traverse" method is a wrapper function. The "base case" scenario is that there will always be a fixed number of files and/or directories in a given filesystem.

Time-efficiency of recursive algorithms

The time efficiency of recursive algorithms can be expressed in a recurrence relation of Big O notation. They can (usually) then be simplified into a single Big-O term.

Shortcut rule (master theorem)

If the time complexity of the function is in the form

T(n) = a · T(n / b) + f(n)

then the Big O of the time complexity is as follows:

- If f(n) = O(n^c) for some constant c < log_b(a), then T(n) = Θ(n^(log_b(a))).
- If f(n) = Θ(n^(log_b(a))), then T(n) = Θ(n^(log_b(a)) · log n).
- If f(n) = Ω(n^c) for some constant c > log_b(a), and if a · f(n / b) ≤ k · f(n) for some constant k < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

Here a represents the number of recursive calls at each level of recursion, b represents by what factor smaller the input is for the next level of recursion (i.e. the number of pieces you divide the problem into), and f(n) represents the work that the function does independently of any recursion (e.g. partitioning, recombining) at each level of recursion.

Recursion in Logic Programming

In the procedural interpretation of logic programs, clauses (or rules) of the form A :- B are treated as procedures, which reduce goals of the form A to subgoals of the form B. For example, the Prolog clauses:

path(X, Y) :- arc(X, Y).
path(X, Y) :- arc(X, Z), path(Z, Y).

define a procedure, which can be used to search for a path from X to Y, either by finding a direct arc from X to Y, or by finding an arc from X to Z, and then searching recursively for a path from Z to Y. Prolog executes the procedure by reasoning top-down (or backwards) and searching the space of possible paths depth-first, one branch at a time. If it tries the second clause, and finitely fails to find a path from Z to Y, it backtracks and tries to find an arc from X to another node, and then searches for a path from that other node to Y. However, in the logical reading of logic programs, clauses are understood declaratively as universally quantified conditionals. For example, the recursive clause of the path-finding procedure is understood as representing the knowledge that, for every X, Y and Z, if there is an arc from X to Z and a path from Z to Y then there is a path from X to Y. In symbolic form: for all X, Y and Z, arc(X, Z) ∧ path(Z, Y) → path(X, Y). The logical reading frees the reader from needing to know how the clause is used to solve problems. The clause can be used top-down, as in Prolog, to reduce problems to subproblems. Or it can be used bottom-up (or forwards), as in Datalog, to derive conclusions from conditions. This separation of concerns is a form of abstraction, which separates declarative knowledge from problem solving methods (see Algorithm#Algorithm = Logic + Control).
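To make the procedural reading concrete, here is a hedged illustration in Prolog (the arc/2 facts below are hypothetical and not from the article) showing how the two path clauses above can be queried:

% Hypothetical facts describing a small graph.
arc(a, b).
arc(b, c).
arc(c, d).

% Example query:
%   ?- path(a, d).
% Prolog first tries arc(a, d) and fails, then reduces the goal to
% arc(a, Z), path(Z, d); it binds Z = b and recurses, eventually
% succeeding via the chain a -> b -> c -> d.

Read declaratively, the same clauses and facts simply assert which paths exist; read procedurally, the query drives the depth-first search described above.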
Infinite recursion

A common mistake among programmers is not providing a way to exit a recursive function, often by omitting or incorrectly checking the base case, letting it run (at least theoretically) infinitely by endlessly calling itself recursively. This is called infinite recursion, and the program will never terminate. In practice, this typically exhausts the available stack space, so in most programming environments a program with infinite recursion will not really run forever: eventually something will break and the program will report an error. Below is an example of Java code that results in infinite recursion:

public class InfiniteRecursion {
    static void recursive() {
        // Recursive function with no way out
        recursive();
    }

    public static void main(String[] args) {
        recursive(); // Executes the recursive function upon runtime
    }
}

Running this code will result in a stack overflow error.
Technology
Software development: General
null
37506594
https://en.wikipedia.org/wiki/Snapchat
Snapchat
Snapchat is an American multimedia instant messaging app and service developed by Snap Inc., originally Snapchat Inc. One of the principal features of the multimedia Snapchat is that pictures and messages are usually available for only a short time before they become inaccessible to their recipients. The app has evolved from originally focusing on person-to-person photo sharing to presently featuring users' "Stories" of 24 hours of chronological content, along with "Discover", letting brands show ad-supported short-form content. It also allows users to store photos in a password-protected area called "My Eyes Only". It has also reportedly incorporated limited use of end-to-end encryption, with plans to broaden its use in the future. Snapchat was created by Evan Spiegel, Bobby Murphy, and Reggie Brown, former students at Stanford University. It is known for representing a mobile-first direction for social media, and places significant emphasis on users interacting with virtual stickers and augmented reality objects. In 2023, Snapchat had over 300 million monthly active users. On average more than four billion Snaps were sent each day in 2020. Snapchat is popular among the younger generations, with most users being between 18 and 24. Snapchat is subject to privacy concerns with social networking services. History Prototype According to documents and deposition statements, Reggie Brown brought the idea for a disappearing-pictures application to Evan Spiegel because Spiegel had prior business experience. Brown and Spiegel then pulled in Bobby Murphy, who had experience coding. The three worked closely together for several months and launched Snapchat as "Picaboo" on the iOS operating system on July 8, 2011. Reggie Brown was ousted from the company months after it was launched. The app was relaunched as Snapchat in September 2011, and the team focused on usability and technical aspects, rather than branding efforts. One exception was the decision to keep a mascot designed by Brown, "Ghostface Chillah", named after Ghostface Killah of the hip-hop group Wu-Tang Clan. On May 8, 2012, Reggie Brown sent an email to Evan Spiegel during their senior year at Stanford, in which he offered to re-negotiate his equitable share regarding ownership of the company. Lawyers for Snapchat claimed that Reggie Brown had made no contributions of value to the company, and was therefore entitled to nothing. In September 2014, Brown settled with Spiegel and Murphy for $157.5 million and was credited as one of the original authors of Snapchat. In their first blog post, dated May 9, 2012, CEO Evan Spiegel described the company's mission: "Snapchat isn't about capturing the traditional Kodak moment. It's about communicating with the full range of human emotion—not just what appears to be pretty or perfect." He presented Snapchat as the solution to stresses caused by the longevity of personal information on social media, evidenced by "emergency detagging of Facebook photos before job interviews and photoshopping blemishes out of candid shots before they hit the internet. Growth As of May 2012, 25 Snapchat images were being sent per second and, as of November 2012, users had shared over one billion photos on the Snapchat iOS app, with 20 million photos being shared per day. That same month, Spiegel cited problems with user base scalability as the reason that Snapchat was experiencing some difficulties delivering its images, known as "snaps", in real time. Snapchat was released as an Android app on October 29, 2012. 
In June 2013, Snapchat version 5.0, dubbed "Banquo", was released for iOS. The updated version introduced several speed and design enhancements, including swipe navigation, double-tap to reply, an improved friend finder, and in-app profiles. The name is a reference to a character from Shakespeare's Macbeth. Also in June 2013, Snapchat introduced Snapkidz for users under 13 years of age. Snapkidz was part of the original Snapchat application and was activated when the user provided a date of birth to verify his/her age. Snapkidz allowed children to take snaps and draw on them, but they could not send snaps to other users and could save snaps only locally on the device being used. According to Snapchat's published statistics, as of May 2015, the app's users were viewing 2 billion videos per day, reaching 6 billion by November. By 2016, Snapchat had hit 10 billion daily video views. In May 2016, Snapchat raised $1.81 billion in equity offering, suggesting strong investor interest in the company. By May 31, 2016, the app had almost 10 million daily active users in the United Kingdom. In February 2017, Snapchat had 160 million daily active users, growing to 166 million in May. Investel Capital Corp., a Canadian company, sued Snapchat for infringement on its geofiltering patent in 2016. They were seeking "monetary compensation and an order that would prohibit California-based Snapchat from infringing on its patent in the future." In September 2016, Snapchat Inc. was renamed Snap Inc. to coincide with the introduction of the company's first hardware product, Spectacles—smartglasses with a built-in camera that can record 10 seconds of video at a time. On February 20, 2017, Spectacles became available for purchase online. Snapchat announced a redesign in November 2017, which proved controversial with many of its followers. CNBC's Ingrid Angulo listed some of the reasons why many disliked the update, citing that sending a snap and re-watching stories was more complicated, stories and incoming snaps were now listed on the same page, and that the Discover page now included featured and sponsored content. A tweet sent by Kylie Jenner in February 2018, which criticized the redesign of the Snapchat app, reportedly caused Snap Inc. to lose more than $1.3 billion in market value. Over 1.2 million people signed a Change.org petition asking the company to remove the new app update. In December 2019, App Annie announced Snapchat to be the 5th most downloaded mobile app of the decade. The data includes figures for iOS downloads starting from 2010 and Android downloads starting from 2012. Snapchat acquired AI Factory, a computer vision startup, in January 2020 to give a boost to its video capabilities. In November 2020, Snapchat announced it would pay a total of $1 million a day to users who post viral videos. The company has not stated the criteria for a video to be considered viral or how many people the payout would be split between. The promotion, called Snapchat Spotlight, was initially intended to run until the end of the year. , the program continues to operate but its payout structure changed in 2021 as the company announced a shift from the $1 million per day model to a "millions per month" one. In June 2022, Snapchat announced plans to launch Snapchat Plus, a paid subscription model. The subscription gives users early access to features, the ability to change the app icon and see which users rewatch their stories. 
In July 2022, the company reported that they had 347 million daily active users, an increase of 18% from the previous year. In August 2022, Snapchat announced that Snapchat Plus had more than 1 million subscribers and added four new features to the subscription including priority replies, post-view emoji, new Bitmoji content, and new app icons. Features Core functionality Snapchat is primarily used for creating multimedia messages referred to as "snaps"; snaps can consist of a photo or a short video, and can be edited to include filters and effects, text captions, and drawings. Snaps can be directed privately to selected contacts, or to a semi-public "Story" or a public "Story" called "Our Story". The ability to send video snaps was added as a feature option in December 2012. By holding down on the photo button while inside the app, a video of up to ten seconds in length can be captured. Spiegel explained that this process allowed the video data to be compressed into the size of a photo. A later update allowed the ability to record up to 60 seconds, but are still segmented into 10 second intervals. After a single viewing, the video disappears by default. On May 1, 2014, the ability to communicate via video chat was added. Direct messaging features were also included in the update, allowing users to send ephemeral text messages to friends and family while saving any needed information by clicking on it. According to CIO, Snapchat uses real-time marketing concepts and temporality to make the app appealing to users. According to Marketing Pro, Snapchat attracts interest and potential customers by combining the AIDA (marketing) model with modern digital technology. Private message photo snaps can be viewed for a user-specified length of time (1 to 10 seconds as determined by the sender) before they become inaccessible. Users were previously required to hold down on the screen in order to view a snap; this behavior was removed in July 2015 The requirement to hold on the screen was intended to frustrate the ability to take screenshots of snaps; the Snapchat app does not prevent screenshots from being taken but can notify the sender if it detects that it has been saved. However, these notifications can be bypassed through either unauthorized modifications to the app or by obtaining the image through external means. One snap per day can be replayed for free. In September 2015, Snapchat introduced the option to purchase additional replays through in-app purchases. The ability to purchase extra replays was removed in April 2016. Friends can be added via usernames and phone contacts, using customizable "Snapcodes", or through the "Add Nearby" function, which scans for users near their location who are also in the Add Nearby menu. Spiegel explained that Snapchat is intended to counteract the trend of users being compelled to manage an idealized online identity of themselves, which he says has "taken all of the fun out of communicating." In November 2014, Snapchat introduced "Snapcash", a feature that lets users send and receive money to each other through private messaging. The payments system is powered by Square. In July 2016, Snapchat introduced a new, optional feature known as "Memories". Memories allow snaps and story posts to be saved into a private storage area, where they can be viewed alongside other photos stored on the device, as well as edited and published as snaps, story posts, or messages anytime. 
When shared with a user's current story, the memory would have a timestamp to indicate its age. Content in the Memories storage area can be searched by date or using a local object recognition system. Snaps accessible within Memories can additionally be placed into a "My Eyes Only" area that is locked with a Personal identification number (PIN). Snapchat has stated that the Memories feature was inspired by the practice of manually scrolling through photos on a phone to show them to others. In April 2017, the white border around old memories was removed. While originally intended to let viewers know the material was old, TechCrunch wrote that the indicator "ended up annoying users who didn't want their snaps altered, sometimes to the point where they would decide not to share the old content at all." In May 2017, an update made it possible to send snaps with unlimited viewing time, dropping the previous ten-second maximum duration, with the content disappearing after being deliberately closed by the recipient. New creative tools, namely the ability to draw with an emoji, videos that play in a loop, and an eraser that lets users remove objects in a photo with the app filling in the space with the background, were also released. In July 2017, Snapchat started allowing users to add links to snaps, enabling them to direct viewers to specific websites; the feature was only available for brands previously. Additionally, the update added more creative tools: A "Backdrop" feature lets users cut out a specific object from their photo and apply colorful patterns to it in order to bring greater emphasis to that object, and "Voice Filters" enable users to remix the sounds of their voices in the snap. Voice Filters was previously available as part of the feature enabling augmented reality lenses, with the new update adding a dedicated speaker icon to remix the audio in any snap. In June 2020, Snap announced "minis", embeddable apps that live inside the parent Snap app. In August 2022, Snap launched the "Family Center" feature which allows parents to monitor the activity of their children, ages 13–18, within the app. In February 2023, Snapchat launched "My AI", a custom chatbot offering Snapchat+ users access to a mobile version of the AI chatbot ChatGPT. It followed up by announcing that its customizable My AI chatbot would be accessible to all users within the app in April 2023, a month after OpenAI allowed access to third parties, and would be available for group chats. Filters, lenses, and stickers Snaps can be personalized with various forms of visual effects and stickers. Geofilters are graphical overlays available if the user is within a certain geographical location, such as a city, event, or destination. Users can design and create their own geofilters for personal events at a fee of $10–15 USD per hour. They can also subscribe to an annual plan which ranges from $1,000 to $10,000 depending on the location, for a permanent filter. A similar feature known as Geostickers was launched in 10 major cities in 2016. Bitmoji are stickers featuring personalized cartoon avatars, which can be used in snaps and messaging. Bitmoji characters can also be used as World Lenses. The "Lens" feature, introduced in September 2015, allows users to add real-time effects into their snaps by using face detection technology. This is activated by long-pressing on a face within the viewfinder. 
In April 2017, Snapchat extended this feature into "World Lenses", which use augmented reality technology to integrate 3D rendered elements (such as objects and animated characters) into scenes; these elements are placed and anchored in 3D space. On October 26, 2018, at TwitchCon, Snap launched the Snap Camera desktop application for macOS and Windows PCs, which enables use of Snapchat lenses in videotelephony and live streaming services such as Skype, Twitch, YouTube, and Zoom. However, this was discontinued in January 2023. Snapchat also launched integration with Twitch, including an in-stream widget for Snapcodes, the ability to offer lenses to stream viewers and as an incentive to channel subscribers. Several video game-themed lenses were also launched at this time, including ones themed around League of Legends, Overwatch, and PlayerUnknown's Battlegrounds. In August 2020, Snapchat collaborated with four TikTok influencers to launch Augmented Reality (AR) lenses to create a more interactive experience with users. The lenses now incorporate geo-locational mapping techniques to incorporate digital overlays onto real world surfaces. These lenses track 18 joints across the body to identify body movements, and generate effects around the body of the user. Advertising is now also utilizing AR lenses that make users a part of the advert. Coca-Cola, Pepsi and Taco Bell are just a select few of the brands now utilizing the tech on Snapchat. Consumers no longer scroll past these adverts, but become a part of them with AR lenses. In March 2022, Snapchat launched the ability to share YouTube videos as stickers. The stickers function as clickable links that redirect users to a browser or the YouTube app. Friend emojis Friend emojis can be customized, however the default emojis are listed below. The snapscore, which states the amount of snaps one has sent and received is recorded and is visible to one's friends. If users tap their own score it shows the ratio of sent and received snaps, the amount of snaps they have sent is on the right and the amount of snaps they have received is on the left, these numbers combined are their Snapchat score. There are multiple synonyms for Snapchat score such as Snapchat points, Snapscore, Snap points and Snap Number. YouTube has a similar rewards system called "Perks". Stories and Discover In October 2013, Snapchat introduced the "My Story" feature, which allows users to compile snaps into chronological storylines, accessible to all of their friends. By June 2014, photo and video snaps presented to friends in the Stories functionality had surpassed person-to-person private snaps as the most frequently used function of the service, with over one billion viewed per day—double the daily views tallied in April 2014. In June 2014, the story feature was expanded to incorporate "Our Stories", which was then changed to "Live Stories" about a year later. The feature allows users on-location at specific events (such as music festivals or sporting events) to contribute snaps to a curated story advertised to all users, showcasing a single event from multiple perspectives and viewpoints. These curated snaps provided by the app's contributors and selected for the "Live" section could also be more localized, but Snapchat eventually scaled back the more personal imaging streams in order to emphasize public events. 
An "Official Stories" designation was added in November 2015 to denote the public stories of notable figures and celebrities, similar to Twitter's "Verified account" program. In January 2015, Snapchat introduced "Discover" an area containing channels of ad-supported short-form content from major publishers, including BuzzFeed, CNN, ESPN, Mashable, People, Vice and Snapchat itself among others. To address data usage concerns related to these functions, a "Travel Mode" option was added in August 2015. When activated, the feature prevents the automatic downloading of snaps until they are explicitly requested by the user. In October 2016, the app was updated to replace its auto-advance functionality, which automatically moved users from one story to the next, with a "Story Playlist" feature, letting users select thumbnails of users in the list to play only selected stories. In January 2017, Snapchat revamped its design, adding search functionality and a new global live "Our Story" feature, to which any user can contribute. In May 2017, Snapchat introduced "Custom Stories", letting users collaboratively make stories combining their captures. In June 2017, "Snap Map" was introduced, which allows users to optionally share their location with friends. A map display, accessible from the viewfinder, can be used to locate stories based on location data, supporting the use of Bitmoji as place markers. Entering a "Ghost Mode" hides the user from the map. The function is based on the app Zenly, which was acquired by Snap Inc. prior to its launch. The map data is supplied from OpenStreetMap and Mapbox, while satellite imagery comes from DigitalGlobe. In February 2020, Snapchat released a Discover cartoon series called Bitmoji TV, which will star users' avatars. Original video content The Wall Street Journal reported in May 2017 that Snap Inc., the company developing Snapchat, had signed deals with NBCUniversal, A&E Networks, BBC, ABC, Metro-Goldwyn-Mayer and other content producers to develop original shows for viewing through Snapchat's "Stories" format. According to the report, Snap hoped to have several new shows available on a daily basis, with each show lasting between three and five minutes, and the company has sent out detailed reports to its partners on how to produce content for Snapchat. Over 2017 and 2018, Snap and partners launched several shows. In, 2018 Snapchat and Vertical Networks (Snapchat Publisher Story) created a show called My Ex-BFF Court," which is a spoof of daytime-TV fare like the typical court shows we watch for example "Divorce Court" in which two ex-friends try to fix their problems. Who ever is guilty gets a funny sentence. Each episode is hosted by Judge Matteo Lane who is also known as Matthew Lane. In 2018, Snapchat / Vertical Networks made a deal with Fox to make a television version of the dating and reality show Phone Swap. In 2018, Snapchat got a new show called How Low Will You Go that was created by Above Average Productions and NBC. In contrast to other messaging apps, Spiegel described Snapchat's messaging functions as being "conversational", rather than "transactional", as they sought to replicate the conversations he engaged in with friends. Spiegel stated that he did not experience conversational interactions while using the products of competitors like iMessage. Rather than a traditional online notification, a blue pulsing "here" button is displayed within the sender's chat window if the recipient is currently viewing their own chat window. 
When this button is held down, a video chat function is immediately launched. By default, messages disappear after they are read, and a notification is sent to the recipient only when they start to type. Users can also use messages to reply to snaps that are part of a story. The video chat feature uses technology from AddLive—a real-time communications provider that Snapchat acquired prior to the feature's launch. In regard to the "Here" indicator, Spiegel explained that "the accepted notion of an online indicator that every chat service has is really a negative indicator. It means 'my friend is available and doesn't want to talk to you,' versus this idea in Snapchat where 'my friend is here and is giving you their full attention.'" Spiegel further claimed that the Here video function prevents the awkwardness that can arise from apps that use typing indicators because, with text communication, conversations lose their fluidity as each user tries to avoid typing at the same time. On March 29, 2016, Snapchat launched a major revision of the messaging functionality known as "Chat 2.0", adding stickers, easier access to audio and video conferencing, the ability to leave audio or video "notes", and the ability to share recent camera photos. The implementation of these features is meant to allow users to easily shift between text, audio, and video chat as needed while retaining an equal level of functionality. In June 2018, Snapchat added the feature of deleting a sent message (including audio, video, and text) before it is read. A feature introduced in August 2018 allows users to send musical GIFs called TuneMojis. In August 2022, Snap Inc. announced it would discontinue all original scripted content with no plans to continue work in this direction. In 2023, Snapchat had over 300 million monthly active users. In 2024, the countries with the most Snapchat users were India with 202.5 million users, followed by the United States with 106.5 million, Pakistan with 31.9 million, France with 27.5 million and the United Kingdom with 23.1 million. Encryption In January 2018, Snapchat introduced the use of end-to-end encryption in the application, but only for snaps (pictures and video), according to a Snapchat security engineer presenting at the January 2019 Real World Crypto Conference. As of the January 2019 conference, Snapchat had plans to introduce end-to-end encryption for text messages and group chats in the future. Business and multimedia Demographics Snapchat is popular among the younger generations, with most users being between 18 and 24 in 2023. On the app store, the age classification is 12+. In 2014, researchers from the University of Washington and Seattle Pacific University designed a user survey to help understand how and why the application was being used. The researchers originally hypothesized that due to the ephemeral nature of Snapchat messages, its use would be predominantly for privacy-sensitive content, including the much-discussed potential use for sexual content and sexting. However, it appears that Snapchat is used for a variety of creative purposes that are not necessarily privacy-related at all. In the study, only 1.6% of respondents reported using Snapchat primarily for sexting, although 14.2% admitted to having sent sexual content via Snapchat at some point. These findings suggest that users do not seem to utilize Snapchat for sensitive content.
Rather, the primary use for Snapchat was found to be for comedic content such as "stupid faces" with 59.8% of respondents reporting this use most commonly. The researchers also determined how Snapchat users do not use the application and what types of content they are not willing to send. They found that the majority of users are not willing to send content classified as sexting (74.8% of respondents), photos of documents (85.0% of respondents), messages containing legally questionable content (86.6% of respondents), or content considered mean or insulting (93.7% of respondents). The study results also suggested that Snapchat's success is not due to its security properties, but because the users found the application to be fun. The researchers found that users seem to be well-aware (79.4% of respondents) that recovering snaps is possible and a majority of users (52.8% of respondents) report that this does not affect their behavior and use of Snapchat. Many users (52.8% of respondents) were found to use an arbitrary timeout length on snaps regardless of the content type or recipient. The remaining respondents were found to adjust their snaps' timeout depending on the content or the recipient. Reasons for adjusting the time length of snaps included the level of trust and relationship with the recipient, the time needed to comprehend the snap, and avoiding screenshots. Communication In the 2010s, Snapchat was seen as a messenger focused more on in-the-moment way sharing and less on the accumulation of permanent material. Building on this distinction by launching as a mobile-first company, Snapchat, in the midst of the app revolution and the growing presence of cellular communication, did not have to make the transition to mobile in the way other competing social media networks had to do. Evan Spiegel himself described Snapchat as primarily a camera company. Spiegel also dismissed past comparisons to other social media networks such as Facebook and Twitter when he was asked if the 2016 presidential race was going to be remembered as the Snapchat election, although major candidates did occasionally use the app to reach voters. Nevertheless, the mobile app offered distinct publication, media, and news content within its Discover channel. Snapchat attempted to distinguish brand content and user-based messaging and sharing. Monetization Snapchat's developing features embody a deliberate strategy of monetization. Snapchat announced its then-upcoming advertising efforts on October 17, 2014, when it acknowledged its need for a revenue stream. The company stated that it wanted to evaluate "if we can deliver an experience that's fun and informative, the way ads used to be, before they got creepy and targeted." Snapchat's first paid advertisement, in the form of a 20-second movie trailer for the horror film Ouija, was shown to users on October 19, 2014. In January 2015, Snapchat began making a shift from focusing on growth to monetization. The company launched its "Discover" feature, which allowed for paid advertising by presenting short-form content from publishers. Its initial launch partners included CNN, Comedy Central, ESPN and Food Network, among others. In June 2015, Snapchat announced that it would allow advertisers to purchase sponsored geofilters for snaps; an early customer of the offering was McDonald's, who paid for a branded geofilter covering its restaurant locations in the United States. 
Snapchat made a push to earn ad revenue from its "Live Stories" feature in 2015, after initially launching the feature in 2014. Ad placements can be sold within a live story, or a story can be pitched by a sponsor. Live stories are estimated to reach an average of 20 million viewers in a 24-hour span. Campaigns In September 2015, the service entered into a partnership with the National Football League to present live stories from selected games (including a Sunday game, and marquee games such as Monday Night Football and Thursday Night Football), with both parties contributing content and handling ad sales. The 2015 Internet Trends Report by Mary Meeker highlighted the significant growth of vertical video viewing. Vertical video ads like Snapchat's are watched in their entirety nine times more than landscape video ads. In 2016, Gatorade came out with an animated filter as part of the Super Bowl ads in 2016. The dunk lens of Gatorade received 165 million views on Snapchat. In April 2016, NBC Olympics announced that it had reached a deal with Snapchat to allow stories from the 2016 Summer Olympics to be featured on Snapchat in the United States. The content would include a behind-the-scenes Discover channel curated by BuzzFeed (a company which NBCUniversal has funded), and stories featuring a combination of footage from NBC, athletes, and attendees. NBC sold advertising and entered into revenue sharing agreements. This marked the first time NBC allowed Olympics footage to be featured on third-party property. In May 2016, as part of a campaign to promote X-Men: Apocalypse, 20th Century Fox paid for the entire array of lenses to be replaced by those based on characters from the X-Men series and films for a single day. In July 2016, it was reported that Snapchat had submitted a patent application for the process of using an object recognition system to deliver sponsored filters based on objects seen in a camera view. Later that year, in September 2016, Snapchat released its first hardware product, called the Spectacles. Evan Spiegel, CEO of Snap Inc., called it "a toy" but saw it as an upside to freeing his app from smartphone cameras. In April 2017, Digiday reported that Snapchat would launch a self-service manager for advertising on the platform. The feature launched the following month, alongside news of a Snapchat Mobile Dashboard for tracking ad campaigns, which rolled out in June to select countries. Also in 2017, Snapchat introduced a "Snap to Store" advertising tool that lets companies using geostickers to track whether users buy their product or visit their store in a 7-day period after seeing the relevant geosticker. On November 13, 2018, Snapchat announced the launch of the Snap Store, where they sell Bitmoji merchandise personalized by avatars from users and their friends. Items for sale include shirts, mugs, shower curtains, and phone cases. Development platform In June 2018, Snapchat announced a new third-party development platform known as Snap Kit: a suite of components that allows partners to provide third-party integrations with aspects of the service. "Login Kit" is a social login platform that utilizes Snapchat accounts. It was promoted as being more privacy-conscious than competing equivalents, as services are only able to receive the user's display name (and, optionally, a Bitmoji avatar) and are subject to a 90-day inactivity timeout, preventing them from being able to collect any further personal information or social graphs through their authorization. 
"Creative Kit" allows apps to generate their own stickers to overlay into Snapchat posts. "Story Kit" can be used to embed and aggregate publicly posted stories (with for example, Bandsintown using Story Kit to aggregate stories posted by musicians), while "Bitmoji Kit" allows Bitmoji stickers to be integrated into third-party apps. Snap Originals In response to industry competition from streaming platforms such as Netflix, Snapchat announced in late 2018 that it would diversify its content by launching Snap Originals (episodic content including both scripted shows and documentaries). In June 2020, Snapchat announced the creation of its first-ever "shoppable" original show called The Drop, which focused on "exclusive streetwear collage" from celebrities and designers. Each episode explored the relationship between the designer and celebrity collaborator. Viewers would learn about the item for sale and how it came together, as well as what time that day the item would go up for sale. Later that day, at the aforementioned time, the episode would be updated with more content that included a "swipe up to buy" action. All projects related to original programming were ended in August 2022. Premium accounts and sexual content In 2014, Snapchat introduced a new feature called Snapcash which spurred its popularity among adult content creators. Snapchat allows private premium accounts in which users can monetize their content. This feature is mostly used by models to monetize their adult content. Snapchat is increasingly becoming an integral part of the online porn industry. Controversies December 2013 hack Snapchat was hacked on December 31, 2013. Gibson Security, an Australian security firm, had disclosed an API security vulnerability to the company on August 27, 2013, and then made public the source code for the exploit on December 25. On December 27, Snapchat announced that it had implemented mitigating features. Nonetheless, an anonymous group hacked them, saying that the mitigating features presented only "minor obstacles". The hackers revealed parts of approximately 4.6 million Snapchat usernames and phone numbers on a website named SnapchatDB.info and sent a statement to the popular technology blog TechCrunch saying that their objective had been to "raise public awareness... and... put public pressure on Snapchat" to fix the vulnerability. Snapchat apologized a week after the hack. Federal Trade Commission In 2014, Snapchat settled a complaint made by the US Federal Trade Commission (FTC). The government agency alleged that the company had exaggerated to the public the degree to which mobile app images and photos could actually be made to disappear. Under the terms of the agreement, Snapchat was not fined, but the app service agreed to have its claims and policies monitored by an independent party for a period of 20 years. The FTC concluded that Snapchat was prohibited from "misrepresenting the extent to which it maintains the privacy, security, or confidentiality of users' information." Following the agreement, Snapchat updated its privacy page to state that the company "can't guarantee that messages will be deleted within a specific timeframe." Even after Snapchat deletes message data from their servers, that same data may remain in backup for a certain period of time. 
In a public blog post, the service warned that "If you've ever tried to recover lost data after accidentally deleting a drive or maybe watched an episode of CSI, you might know that with the right forensic tools, it's sometimes possible to retrieve data after it has been deleted." In September 2024, the FTC released a report summarizing 9 company responses (including from Snapchat) to orders made by the agency pursuant to Section 6(b) of the Federal Trade Commission Act of 1914 to provide information about user and non-user data collection (including of children and teenagers) and data use by the companies that found that the companies' user and non-user data practices put individuals vulnerable to identity theft, stalking, unlawful discrimination, emotional distress and mental health issues, social stigma, and reputational harm. Windows app In November 2014, Snapchat announced a crackdown on third-party apps of its service and their users. Users of the Windows Phone platform were affected, as Snapchat did not have an official client for it, but numerous third-party apps existed, most popularly one called 6snap. In December, Microsoft was forced to remove 6snap and all other third-party apps of Snapchat from the Windows Phone Store; Snapchat however did not develop an official app for the platform, leaving its users on the platform behind. A petition from users requesting an official Snapchat app reached 43,000 signatures in 2015, but the company still refused to respond and to build an app for Windows Phone. Snapchat was criticized once again later in 2015 when it did not develop an app for Microsoft's Universal Windows Platform (UWP). Lens incidents In September 2015, an 18-year-old was using a Snapchat feature called "Lens" to record the speed she was driving her Mercedes-Benz C230 when she crashed into a Mitsubishi Outlander in Hampton, Georgia. The crash injured both drivers. The driver of the Outlander spent five weeks in intensive care while he was treated for severe traumatic brain injury. In April 2016, the Outlander driver sued both Snapchat and the user of Snapchat, alleging that Snapchat knew its application was being used in unlawful speed contests, yet did nothing to prevent such use so is negligent. In October 2016, a similar collision occurred while a 22-year-old was driving at in Tampa, Florida, killing five people. "Poor Country" remark According to former Snapchat employee Anthony Pompliano in a lawsuit filed against Snap Inc., Spiegel made a statement in 2015 that Snapchat is "only for rich people" and that he does not "want to expand into poor countries like India and Spain". The incident sparked a Twitter trend called "#UninstallSnapchat", in which Indian users uninstalled the app, and caused backlash against the company, including a large number of low "one-star" ratings for the app in the Google Play Store and Apple's App Store. Snapchat's shares fell by 1.5%. In response to the allegation, Snapchat called Pompliano's claim "ridiculous", and elaborated that "Obviously Snapchat is for everyone. It's available worldwide to download for free." Pompliano lawsuit In January 2017, Pompliano filed a state lawsuit accusing Snapchat of doctoring growth metrics with the intention of deceiving investors. Pompliano said that Spiegel was dismissive of his concerns and that Pompliano was fired shortly thereafter. 
The judge dropped Pompliano's claims that Snapchat violated the Dodd-Frank and Consumer Protection Acts in retaliation against him, citing an arbitration clause in his contract. However, Snap Inc. faced blowback over a lack of disclosure regarding the contents of the lawsuit, resulting in plunging stock prices, several class-action lawsuits, and Federal investigations. "Snap Map" privacy concerns The June 2017 release of "Snap Map", a feature that broadcasts the user's location on a map, was met with concerns over privacy and safety. The feature, through an opt-in, delivers a message asking if the user would like to show their position on the map, but reportedly does not explain the ramifications of doing so, including that the app updates the user's position on the map each time the app is opened and not just when actively capturing snaps, potentially assisting stalkers. The map can be zoomed in to feature detailed geographical information, such as street addresses. The Daily Telegraph reported that police forces had issued child safety warnings, while other media publications wrote that safety concerns were also raised for teenagers and adults unaware of the feature's actual behavior. In a statement to The Verge, a Snapchat spokesperson said that "The safety of our community is very important to us and we want to make sure that all Snapchatters, parents, and educators have accurate information about how the Snap Map works". Users have the ability to operate in "Ghost Mode", or select the friends that they wish to share their location with. Although there has been an increase in advertising on Snapchat, Snapchat has stated that they do not plan on running ads on Snap Map stories. Rihanna controversy In March 2018, an advertisement containing a poll about Rihanna was posted stating, "Would you rather punch Chris Brown or slap Rihanna?" Rihanna tweeted that Snapchat was "insensitive to domestic violence victims" and urged fans to delete Snapchat. Body image concerns The increased use of body and facial reshaping applications such as Snapchat and Facetune has been identified as a potential cause of body dysmorphic disorder. In August 2018, researchers from the Boston Medical Center wrote in a JAMA Facial Plastic Surgery essay that a phenomenon they called 'Snapchat dysmorphia' had been identified, where people request surgery to look like the edited version of themselves as they appear through Snapchat Filters. Snapchat employees abused data access to spy on users In May 2019, it was revealed that multiple Snapchat employees used an internal tool called SnapLion, originally designed to gather data in compliance with law enforcement requests, to spy on users. Mozilla calls for public disclosures related to use of A.I. Citing "vague, broad language" in Snapchat's privacy policy, Mozilla issued a September 2019 petition calling for public disclosures related to the app's use of facial emotion recognition technology. When reached for comment by Scientific American, representatives for Snapchat declined to share a public response. Revenge porn During the 2020 lockdown to inhibit the spread of COVID-19 in France, the app emerged as a hub for the dissemination of revenge porn of underage girls. Some users have also reported that perpetrators of revenge porn have utilized explicit images to seek sexual favors or powers over individuals. 
In 2020, a woman in North Carolina sued Snapchat (as well as dating app Tinder and the five men named in the attack), claiming features of the app enabled her alleged rapist and his friends to hide evidence of the rape. In particular, the suit alleges that "because of the ways Snapchat is and has been designed, constructed, marketed, and maintained, [the woman's assailants] were able to send these nonconsensual, pornographic photographs and videos of [her] with little to no threat of law enforcement verifying that they did so." The woman told the court that parent company Snap Inc. "specifically and purposely designed, constructed, and maintained Snapchat to serve as a secretive and nefarious communications platform that encourages, solicits, and facilitates the creation and dissemination of illicit and non-consensual sexually explicit content...and allowed Snapchat to operate as a safe-haven from law enforcement." Sale of fake pills In December 2022, the National Crime Prevention Council wrote to U.S. Attorney General Merrick Garland urging the Justice Department to examine Snap's business practices related to the sale of fake pills containing lethal amounts of the synthetic opioid fentanyl. Less than a month later, it was widely reported that the Federal Bureau of Investigation had launched a probe into the company and the sale of fake pills. Illinois biometric data lawsuit Snapchat was the subject of a class action lawsuit from the state of Illinois, alleging that the company violated the Biometric Information Privacy Act by collecting and storing biometric data on Illinois residents who used the app's filters and lenses without providing a written explanation of why the data was recorded and what its term of storage would be. The company opted to settle the lawsuit with a $35 million payout. Grooming In November 2024, British children's charity the NSPCC reported that, according to statistics provided to it by the police, the most popular app amongst online groomers was Snapchat. More than 7,000 Sexual Communication with a Child offences were recorded across the UK in the year to March 2024, the highest number since the offence was created. Snapchat made up nearly half of the 1,824 cases where the specific platform used for the grooming was recorded by the police. Snapchat Speed Filter Crashes In September 2015, Christal McGee was driving her Mercedes-Benz C230 in Georgia when she collided with a Mitsubishi Outlander at 107 mph. The high-speed crash severely injured the driver of the Mitsubishi, Wentworth Maynard, who required five weeks of intensive care and was left with a permanent brain injury. In April 2016, Maynard sued both McGee and Snapchat, claiming that McGee was using the Snapchat "speed filter" at the time of the crash. The lawsuit further alleged that Snapchat negligently allowed the feature despite knowing it encouraged dangerous speeding. In March 2022, the Georgia Supreme Court ruled that Snapchat must face claims that it defectively designed the "speed filter" application. In May 2017, a group of teens in Wisconsin used Snapchat's "speed filter" to capture their car's speed as it reached 123 mph on a rural road. Moments later, the vehicle crashed into a tree, killing all three occupants. In May 2019, the families of two passengers, Hunter Morby and Landen Brown, filed a lawsuit against Snapchat, alleging that the company knew the filter encouraged reckless speeding among young users but failed to restrict its use. The case, Lemmon v.
Snap, led to a landmark legal precedent. In May 2021, the 9th U.S. Circuit Court of Appeals ruled that Section 230 of the Communications Decency Act–which typically shields tech companies from liability for content created by users–did not bar the families' claims. The court distinguished between protecting platforms from liability for user-generated content and protecting them from liability for negligent product design, finding that the speed filter was a feature Snapchat itself had created. This decision allowed the case to proceed, marking a significant precedent for holding tech companies accountable for the design of their products. In 2021, Snap Inc. settled the lawsuit for an undisclosed amount. In June 2021, a month after the 9th Circuit ruling, Snapchat removed the “speed filter”, citing its limited use for its removal. The decision came after mounting pressure from safety advocates, legal experts, and families affected by crashes allegedly linked to the feature. Critics had long argued that the filter incentivized reckless behavior, particularly among young and impressionable drivers, and called for stronger accountability from social media companies to prioritize user safety.
Technology
Social network and blogging
null
37508073
https://en.wikipedia.org/wiki/Fish%20fin
Fish fin
Fins are moving appendages protruding from the body of fish that interact with water to generate thrust and help the fish swim. Apart from the tail or caudal fin, fish fins have no direct connection with the backbone and are supported only by muscles. Fish fins are distinctive anatomical features with varying structures among different clades: in ray-finned fish (Actinopterygii), fins are mainly composed of bony spines or rays covered by a thin stretch of scaleless skin; in lobe-finned fish (Sarcopterygii) such as coelacanths and lungfish, fins are short rays based around a muscular central bud supported by jointed bones; in cartilaginous fish (Chondrichthyes) and jawless fish (Agnatha), fins are fleshy "flippers" supported by a cartilaginous skeleton. Fins at different locations of the fish body serve different purposes, and are divided into two groups: the midsagittal unpaired fins and the more laterally located paired fins. Unpaired fins are predominantly associated with generating linear acceleration via oscillating propulsion, as well as providing directional stability; while paired fins are used for generating paddling acceleration, deceleration, and differential thrust or lift for turning, surfacing or diving and rolling. Fins can also be used for forms of locomotion other than swimming; for example, flying fish use pectoral fins for gliding flight above the water surface, and frogfish and many amphibious fishes use pectoral and/or pelvic fins for crawling. Fins can also be used for other purposes: remoras and gobies have evolved sucker-like dorsal fins for attaching to surfaces and "hitchhiking"; male sharks and mosquitofish use a modified fin to deliver sperm; thresher sharks use their caudal fin to whip and stun prey; reef stonefish have spines in their dorsal fins that inject venom as an anti-predator defense; anglerfish use the first spine of their dorsal fin like a fishing rod to lure prey; and triggerfish avoid predators by squeezing into coral crevices and using spines in their fins to anchor themselves in place. Types of fins Fins can either be paired or unpaired. The pectoral and pelvic fins are paired, whereas the dorsal, anal and caudal fins are unpaired and situated along the midline of the body. For every type of fin, there are a number of fish species in which this particular fin has been lost during evolution (e.g. pelvic fins in Bobasatrania, caudal fin in ocean sunfish). In some clades, additional unpaired fins were acquired during evolution (e.g. additional dorsal fins, adipose fin). In some Acanthodii ("spiny sharks"), one or more pairs of "intermediate" or "prepelvic" spines are present between the pectoral and pelvic fins, but these are not associated with fins. Bony fishes Bony fishes (Actinopterygii and Sarcopterygii) form a taxonomic group called Osteichthyes (or Euteleostomi, which also includes land vertebrates); they have skeletons made mostly of bone, and can be contrasted with cartilaginous fishes (see below), which have skeletons made mainly of cartilage (except for their teeth, fin spines, and denticles). Bony fishes are divided into ray-finned and lobe-finned fish. Most living fish are ray-finned, an extremely diverse and abundant group consisting of over 30,000 species. It is the largest class of vertebrates in existence today, making up more than 50% of vertebrate species. In the distant past, lobe-finned fish were abundant; however, there are currently only eight species. 
Bony fish have fin spines called lepidotrichia or "rays" (due to how the spines spread open). They typically have swim bladders, which allow the fish to alter the relative density of its body and thus the buoyancy, so it can sink or float without having to use the fins to swim up and down. However, swim bladders are absent in many fish, most notably in lungfishes, who have evolved their swim bladders into primitive lungs, which may have a shared evolutionary origin with those of their terrestrial relatives, the tetrapods. Bony fishes also have a pair of opercula that function to draw water across the gills, which help them breathe without needing to swim forward to force the water into the mouth across the gills. Lobe-fins Lobe-finned fishes form a class of bony fishes called Sarcopterygii. They have fleshy, lobed, paired fins, which are joined to the body by a series of bones. The fins of lobe-finned fish differ from those of all other fish in that each is borne on a fleshy, lobe-like, scaly stalk extending from the body. Pectoral and pelvic fins have articulations resembling those of tetrapod limbs. These fins evolved into legs of the first tetrapod land vertebrates (amphibians) in the Devonian Period. Sarcopterygians also possess two dorsal fins with separate bases, as opposed to the single dorsal fin of most ray-finned fish (except some teleosts). The caudal fin is either heterocercal (only fossil taxa) or diphycercal. The coelacanth is one type of living lobe-finned fish. Both extant members of this group, the West Indian Ocean coelacanth (Latimeria chalumnae) and the Indonesian coelacanth (Latimeria menadoensis), are found in the genus Latimeria. Coelacanths are thought to have evolved roughly into their current form about 408 million years ago, during the early Devonian. Locomotion of the coelacanths is unique to their kind. To move around, coelacanths most commonly take advantage of up or downwellings of the current and drift. They use their paired fins to stabilise their movement through the water. While on the ocean floor their paired fins are not used for any kind of movement. Coelacanths can create thrust for quick starts by using their caudal fins. Due to the high number of fins they possess, coelacanths have high manoeuvrability and can orient their bodies in almost any direction in the water. They have been seen doing headstands and swimming belly up. It is thought that their rostral organ helps give the coelacanth electroperception, which aids in their movement around obstacles. Lungfish are also living lobe-finned fish. They occur in Africa (Protopterus), Australia (Neoceratodus), and South America (Lepidosiren). Lungfish evolved during the Devonian Period. Genetic studies and palaeontological data confirm that lungfish are the closest living relatives of land vertebrates. Fin arrangement and body shape is relatively conservative in lobe-finned fishes. However, there are a few examples from the fossil record that show aberrant morphologies, such as Allenypterus, Rebellatrix, Foreyia or the tetrapodomorphs. Diversity of fins in lobe-finned fishes Ray-fins Ray-finned fishes form a class of bony fishes called Actinopterygii. Their fins contain spines or rays. A fin may contain only spiny rays, only soft rays, or a combination of both. If both are present, the spiny rays are always anterior. Spines are generally stiff and sharp. Rays are generally soft, flexible, segmented, and may be branched. 
This segmentation of rays is the main difference that separates them from spines; spines may be flexible in certain species, but they will never be segmented. Spines have a variety of uses. In catfish, they are used as a form of defense; many catfish have the ability to lock their spines outwards. Triggerfish also use spines to lock themselves in crevices to prevent them being pulled out. Lepidotrichia are usually composed of bone, but those of early osteichthyans - such as Cheirolepis - also had dentine and enamel. They are segmented and appear as a series of disks stacked one on top of another. They may have been derived from dermal scales. The genetic basis for the formation of the fin rays is thought to be genes coded for the production of certain proteins. It has been suggested that the evolution of the tetrapod limb from lobe-finned fishes is related to the loss of these proteins. Diversity of fins in ray-finned fishes Cartilaginous fishes Cartilaginous fishes form a class of fishes called Chondrichthyes. They have skeletons made of cartilage rather than bone. The class includes sharks, rays and chimaeras. Shark fin skeletons are elongated and supported with soft and unsegmented rays named ceratotrichia, filaments of elastic protein resembling the horny keratin in hair and feathers. Originally the pectoral and pelvic girdles, which do not contain any dermal elements, did not connect. In later forms, each pair of fins became ventrally connected in the middle when scapulocoracoid and puboischiadic bars evolved. In rays, the pectoral fins have connected to the head and are very flexible. One of the primary characteristics present in most sharks is the heterocercal tail, which aids in locomotion. Most sharks have eight fins. Sharks can only drift away from objects directly in front of them because their fins do not allow them to move in the tail-first direction. Unlike modern cartilaginous fish, members of stem chondrichthyan lineages (e.g. the climatiids and the diplacanthids) possessed pectoral dermal plates as well as dermal spines associated with the paired fins. The oldest species demonstrating these features is the acanthodian Fanjingshania renovata from the lower Silurian (Aeronian) of China. Fanjingshania possess compound pectoral plates composed of dermal scales fused to a bony plate and fin spines formed entirely of bone. Fin spines associated with the dorsal fins are rare among extant cartilaginous fishes, but are present, for instance, in Heterodontus or Squalus. Dorsal fin spines are typically developed in many fossil groups, such as in Hybodontiformes, Ctenacanthiformes or Xenacanthida. In Stethacanthus, the first dorsal fin spine was modified, forming a spine-brush complex. As with most fish, the tails of sharks provide thrust, making speed and acceleration dependent on tail shape. Caudal fin shapes vary considerably between shark species, due to their evolution in separate environments. Sharks possess a heterocercal caudal fin in which the dorsal portion is usually noticeably larger than the ventral portion. This is because the shark's vertebral column extends into that dorsal portion, providing a greater surface area for muscle attachment. This allows more efficient locomotion among these negatively buoyant cartilaginous fish. By contrast, most bony fish possess a homocercal caudal fin. Tiger sharks have a large upper lobe, which allows for slow cruising and sudden bursts of speed. 
The tiger shark must be able to twist and turn in the water easily when hunting to support its varied diet, whereas the porbeagle shark, which hunts schooling fish such as mackerel and herring, has a large lower lobe to help it keep pace with its fast-swimming prey. Other tail adaptations help sharks catch prey more directly, such as the thresher shark's usage of its powerful, elongated upper lobe to stun fish and squid. On the other hand, rays rely on their enlarged pectoral fins for propulsion. Similarly enlarged pectoral fins can be found in the extinct Petalodontiformes (e.g. Belantsea, Janassa, Menaspis), which belong to Holocephali (ratfish and their fossil relatives), or in Aquilolamna (Selachimorpha) and Squatinactis (Squatinactiformes). Some cartilaginous fishes have an eel-like locomotion (e.g. Chlamydoselachus, Thrinacoselache, Phoebodus) Diversity of fins in cartilaginous fishes Shark finning According to the Humane Society International, approximately 100 million sharks are killed each year for their fins, in an act known as shark finning. After the fins are cut off, the mutilated sharks are thrown back in the water and left to die. In some countries of Asia, shark fins are a culinary delicacy, such as shark fin soup. Currently, international concerns over the sustainability and welfare of sharks have impacted consumption and availability of shark fin soup worldwide. Shark finning is prohibited in many countries. Fin functions Generating thrust Foil shaped fins generate thrust when moved, the lift of the fin sets water or air in motion and pushes the fin in the opposite direction. Aquatic animals get significant thrust by moving fins back and forth in water. Often the tail fin is used, but some aquatic animals generate thrust from pectoral fins. Cavitation occurs when negative pressure causes bubbles (cavities) to form in a liquid, which then promptly and violently collapse. It can cause significant damage and wear. Cavitation damage can occur to the tail fins of powerful swimming marine animals, such as dolphins and tuna. Cavitation is more likely to occur near the surface of the ocean, where the ambient water pressure is relatively low. Even if they have the power to swim faster, dolphins may have to restrict their speed because collapsing cavitation bubbles on their tail are too painful. Cavitation also slows tuna, but for a different reason. Unlike dolphins, these fish do not feel the bubbles, because they have bony fins without nerve endings. Nevertheless, they cannot swim faster because the cavitation bubbles create a vapor film around their fins that limits their speed. Lesions have been found on tuna that are consistent with cavitation damage. Scombrid fishes (tuna, mackerel and bonito) are particularly high-performance swimmers. Along the margin at the rear of their bodies is a line of small rayless, non-retractable fins, known as finlets. There has been much speculation about the function of these finlets. Research done in 2000 and 2001 by Nauen and Lauder indicated that "the finlets have a hydrodynamic effect on local flow during steady swimming" and that "the most posterior finlet is oriented to redirect flow into the developing tail vortex, which may increase thrust produced by the tail of swimming mackerel". Fish use multiple fins, so it is possible that a given fin can have a hydrodynamic interaction with another fin. 
In particular, the fins immediately upstream of the caudal (tail) fin may be proximate fins that can directly affect the flow dynamics at the caudal fin. In 2011, researchers using volumetric imaging techniques were able to generate "the first instantaneous three-dimensional views of wake structures as they are produced by freely swimming fishes". They found that "continuous tail beats resulted in the formation of a linked chain of vortex rings" and that "the dorsal and anal fin wakes are rapidly entrained by the caudal fin wake, approximately within the timeframe of a subsequent tail beat". Controlling motion Once motion has been established, the motion itself can be controlled with the use of other fins. The bodies of reef fishes are often shaped differently from open water fishes. Open water fishes are usually built for speed, streamlined like torpedoes to minimise friction as they move through the water. Reef fish operate in the relatively confined spaces and complex underwater landscapes of coral reefs. For this manoeuvrability is more important than straight line speed, so coral reef fish have developed bodies which optimise their ability to dart and change direction. They outwit predators by dodging into fissures in the reef or playing hide and seek around coral heads. The pectoral and pelvic fins of many reef fish, such as butterflyfish, damselfish and angelfish, have evolved so they can act as brakes and allow complex manoeuvres. Many reef fish, such as butterflyfish, damselfish and angelfish, have evolved bodies which are deep and laterally compressed like a pancake, and will fit into fissures in rocks. Their pelvic and pectoral fins have evolved differently, so they act together with the flattened body to optimise manoeuvrability. Some fishes, such as puffer fish, filefish and trunkfish, rely on pectoral fins for swimming and hardly use tail fins at all. Reproduction Male cartilaginous fishes (sharks and rays), as well as the males of some live-bearing ray finned fishes, have fins that have been modified to function as intromittent organs, reproductive appendages which allow internal fertilization. In ray finned fish, they are called gonopodia or andropodia, and in cartilaginous fish, they are called claspers. Gonopodia are found on the males of some species in the Anablepidae and Poeciliidae families. They are anal fins that have been modified to function as movable intromittent organs and are used to impregnate females with milt during mating. The third, fourth and fifth rays of the male's anal fin are formed into a tube-like structure in which the sperm of the fish is ejected. When ready for mating, the gonopodium becomes erect and points forward towards the female. The male shortly inserts the organ into the sex opening of the female, with hook-like adaptations that allow the fish to grip onto the female to ensure impregnation. If a female remains stationary and her partner contacts her vent with his gonopodium, she is fertilised. The sperm is preserved in the female's oviduct. This allows females to fertilise themselves at any time without further assistance from males. In some species, the gonopodium may be half the total body length. Occasionally, the fin is too long to be used, as in the "lyretail" breeds of Xiphophorus helleri. Hormone treated females may develop gonopodia. These are useless for breeding. 
Similar organs with similar characteristics are found in other fishes, for example the andropodium in the Hemirhamphodon or in the Goodeidae or the gonopodium in the Middle Triassic Saurichthys, the oldest known example of viviparity in a ray-finned fish. Claspers are found on the males of cartilaginous fishes. They are the posterior part of the pelvic fins that have also been modified to function as intromittent organs, and are used to channel semen into the female's cloaca during copulation. The act of mating in sharks usually includes raising one of the claspers to allow water into a siphon through a specific orifice. The clasper is then inserted into the cloaca, where it opens like an umbrella to anchor its position. The siphon then begins to contract expelling water and sperm. Other functions Other uses of fins include walking and perching on the sea floor, gliding over water, cooling of body temperature, stunning of prey, display (scaring of predators, courtship), defence (venomous fin spines, locking between corals), luring of prey, and attachment structures. The Indo-Pacific sailfish has a prominent dorsal fin. Like scombroids and other billfish, they streamline themselves by retracting their dorsal fins into a groove in their body when they swim. The huge dorsal fin, or sail, of the sailfish is kept retracted most of the time. Sailfish raise them if they want to herd a school of small fish, and also after periods of high activity, presumably to cool down. The oriental flying gurnard has large pectoral fins which it normally holds against its body, and expands when threatened to scare predators. Despite its name, it is a demersal fish, not a flying fish, and uses its pelvic fins to walk along the bottom of the ocean. Fins can have an adaptive significance as sexual ornaments. During courtship, the female cichlid, Pelvicachromis taeniatus, displays a large and visually arresting purple pelvic fin. "The researchers found that males clearly preferred females with a larger pelvic fin and that pelvic fins grew in a more disproportionate way than other fins on female fish." Evolution Evolution of paired fins There are two prevailing hypotheses that have been historically debated as models for the evolution of paired fins in fish: the gill arch theory and the lateral fin-fold theory. The former, commonly referred to as the "Gegenbaur hypothesis," was posited in 1870 and proposes that the "paired fins are derived from gill structures". This fell out of popularity in favour of the lateral fin-fold theory, first suggested in 1877, which proposes that paired fins budded from longitudinal, lateral folds along the epidermis just behind the gills. There is weak support for both hypotheses in the fossil record and in embryology. However, recent insights from developmental patterning have prompted reconsideration of both theories in order to better elucidate the origins of paired fins. Classical theories Carl Gegenbaur's concept of the "Archipterygium" was introduced in 1876. It was described as a gill ray, or "joined cartilaginous stem," that extended from the gill arch. Additional rays arose from along the arch and from the central gill ray. Gegenbaur suggested a model of transformative homology – that all vertebrate paired fins and limbs were transformations of the archipterygium. Based on this theory, paired appendages such as pectoral and pelvic fins would have differentiated from the branchial arches and migrated posteriorly. 
However, there has been limited support for this hypothesis in the fossil record both morphologically and phylogenetically. In addition, there was little to no evidence of an anterior-posterior migration of pelvic fins. Such shortcomings of the gill-arch theory led to its early demise in favour of the lateral fin-fold theory proposed by St. George Jackson Mivart, Francis Balfour, and James Kingsley Thacher. The lateral fin-fold theory hypothesised that paired fins developed from lateral folds along the body wall of the fish. Just as segmentation and budding of the median fin fold gave rise to the median fins, a similar mechanism of fin bud segmentation and elongation from a lateral fin fold was proposed to have given rise to the paired pectoral and pelvic fins. However, there was little evidence of a lateral fold-to-fin transition in the fossil record. In addition, it was later demonstrated phylogenetically that pectoral and pelvic fins arise from distinct evolutionary and mechanistic origins. Evolutionary developmental biology Recent studies in the ontogeny and evolution of paired appendages have compared finless vertebrates, such as lampreys, with Chondrichthyes, the most basal living vertebrates with paired fins. In 2006, researchers found that the same genetic programming involved in the segmentation and development of median fins is also present in the development of paired appendages in catsharks. Although these findings do not directly support the lateral fin-fold hypothesis, the original concept of a shared median-paired fin evolutionary developmental mechanism remains relevant. A similar renovation of an old theory may be found in the developmental programming of chondrichthyan gill arches and paired appendages. In 2009, researchers at the University of Chicago demonstrated that there are shared molecular patterning mechanisms in the early development of the chondrichthyan gill arch and paired fins. Findings such as these have prompted reconsideration of the once-debunked gill-arch theory. From fins to limbs Fish are the ancestors of all mammals, reptiles, birds and amphibians. In particular, terrestrial tetrapods (four-legged animals) evolved from fish and made their first forays onto land about 390 million years ago. They used paired pectoral and pelvic fins for locomotion. The pectoral fins developed into forelegs (arms in the case of humans) and the pelvic fins developed into hind legs. Much of the genetic machinery that builds a walking limb in a tetrapod is already present in the swimming fin of a fish. In 2011, researchers at Monash University in Australia used primitive but still living lungfish "to trace the evolution of pelvic fin muscles to find out how the load-bearing hind limbs of the tetrapods evolved." Further research at the University of Chicago found bottom-walking lungfishes had already evolved characteristics of the walking gaits of terrestrial tetrapods. In a classic example of convergent evolution, the pectoral limbs of pterosaurs, birds and bats further evolved along independent paths into flying wings. Even with flying wings, there are many similarities with walking legs, and core aspects of the genetic blueprint of the pectoral fin have been retained. The first mammals appeared during the Triassic period (between 251.9 and 201.4 million years ago). Several groups of these mammals started returning to the sea, including the cetaceans (whales, dolphins and porpoises). 
Recent DNA analysis suggests that cetaceans evolved from within the even-toed ungulates, and that they share a common ancestor with the hippopotamus. About 23 million years ago, another group of bearlike land mammals started returning to the sea. These were the seals. What had become walking limbs in cetaceans and seals evolved independently into new forms of swimming fins. The forelimbs became flippers, while the hindlimbs were either lost (cetaceans) or also modified into flippers (pinnipeds). In cetaceans, the tail gained two fins at the end, called a fluke. Fish tails are usually vertical and move from side to side. Cetacean flukes are horizontal and move up and down, because cetacean spines bend the same way as in other mammals. Ichthyosaurs are ancient reptiles that resembled dolphins. They first appeared about 245 million years ago and disappeared about 90 million years ago. "This sea-going reptile with terrestrial ancestors converged so strongly on fishes that it actually evolved a dorsal fin and tail fin for improved aquatic locomotion. These structures are all the more remarkable because they evolved from nothing — the ancestral terrestrial reptile had no hump on its back or blade on its tail to serve as a precursor." The biologist Stephen Jay Gould said the ichthyosaur was his favorite example of convergent evolution. Fins or flippers of varying forms and at varying locations (limbs, body, tail) have also evolved in a number of other tetrapod groups, including diving birds such as penguins (modified from wings), sea turtles (forelimbs modified into flippers), mosasaurs (limbs modified into flippers), and sea snakes (vertically expanded, flattened tail fin). Robotic fins The use of fins for the propulsion of aquatic animals can be remarkably effective. It has been calculated that some fish can achieve a propulsive efficiency greater than 90%. Fish can accelerate and manoeuvre much more effectively than boats or submarines, and produce less water disturbance and noise. This has led to biomimetic studies of underwater robots which attempt to emulate the locomotion of aquatic animals. An example is the Robot Tuna built by the Institute of Field Robotics, to analyze and mathematically model thunniform motion. In 2005, the Sea Life London Aquarium displayed three robotic fish created by the computer science department at the University of Essex. The fish were designed to be autonomous, swimming around and avoiding obstacles like real fish. Their creator claimed that he was trying to combine "the speed of tuna, acceleration of a pike, and the navigating skills of an eel." The AquaPenguin, developed by Festo of Germany, copies the streamlined shape and propulsion by front flippers of penguins. Festo also developed AquaRay, AquaJelly and AiraCuda, respectively emulating the locomotion of manta rays, jellyfish and barracuda. In 2004, Hugh Herr at MIT prototyped a biomechatronic robotic fish with a living actuator by surgically transplanting muscles from frog legs to the robot and then making the robot swim by pulsing the muscle fibers with electricity. Robotic fish offer some research advantages, such as the ability to examine an individual part of a fish design in isolation from the rest of the fish. However, this risks oversimplifying the biology so that key aspects of the animal design are overlooked. Robotic fish also allow researchers to vary a single parameter, such as flexibility or a specific motion control. Researchers can directly measure forces, which is not easy to do in live fish. 
"Robotic devices also facilitate three-dimensional kinematic studies and correlated hydrodynamic analyses, as the location of the locomotor surface can be known accurately. And, individual components of a natural motion (such as outstroke vs. instroke of a flapping appendage) can be programmed separately, which is certainly difficult to achieve when working with a live animal."
Biology and health sciences
External anatomy and regions of the body
Biology
26219128
https://en.wikipedia.org/wiki/Instrumental%20chemistry
Instrumental chemistry
Instrumental analysis is a field of analytical chemistry that investigates analytes using scientific instruments. Spectroscopy Spectroscopy measures the interaction of molecules with electromagnetic radiation. Spectroscopy encompasses many different applications, such as atomic absorption spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, X-ray fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, nuclear magnetic resonance spectroscopy, photoemission spectroscopy, Mössbauer spectroscopy, and circular dichroism spectroscopy. Nuclear spectroscopy Methods of nuclear spectroscopy use properties of a nucleus to probe a material's properties, especially the material's local structure. Common methods include nuclear magnetic resonance spectroscopy (NMR), Mössbauer spectroscopy (MBS), and perturbed angular correlation (PAC). Mass spectrometry Mass spectrometry measures the mass-to-charge ratio of molecules using electric and magnetic fields. There are several ionization methods: electron ionization, chemical ionization, electrospray, fast atom bombardment, matrix-assisted laser desorption/ionization, and others. Mass spectrometry is also categorized by the type of mass analyzer used: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on. Crystallography Crystallography is a technique that characterizes the chemical structure of materials at the atomic level by analyzing the diffraction patterns of electromagnetic radiation or particles that have been deflected by atoms in the material. X-rays are most commonly used. From the raw data, the relative placement of atoms in space may be determined. Electrochemical analysis Electroanalytical methods measure the electric potential in volts and/or the electric current in amps in an electrochemical cell containing the analyte. These methods can be categorized according to which aspects of the cell are controlled and which are measured. The three main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential). Thermal analysis Calorimetry and thermogravimetric analysis measure the interaction between a material and heat. Separation Separation processes are used to decrease the complexity of material mixtures. Chromatography and electrophoresis are representative of this field. Hybrid techniques Combinations of the above techniques produce "hybrid" or "hyphenated" techniques. Several examples are in popular use today and new hybrid techniques are under development. Hyphenated separation techniques refer to a combination of two or more techniques to separate chemicals from solutions and detect them. Most often, the other technique is some form of chromatography. Hyphenated techniques are widely used in chemistry and biochemistry. A slash is sometimes used instead of a hyphen, especially if the name of one of the methods contains a hyphen itself. 
Examples of hyphenated techniques: Gas chromatography-mass spectrometry (GC-MS) Liquid chromatography–mass spectrometry (LC-MS) Liquid chromatography-infrared spectroscopy (LC-IR) High-performance liquid chromatography/electrospray ionization-mass spectrometry (HPLC/ESI-MS) Chromatography-diode-array detection (LC-DAD) Capillary electrophoresis-mass spectrometry (CE-MS) Capillary electrophoresis-ultraviolet-visible spectroscopy (CE-UV) Ion-mobility spectrometry–mass spectrometry Prolate trochoidal mass spectrometer Microscopy The visualization of single molecules, single biological cells, biological tissues and nanomaterials is a very important and attractive approach in analytical science. Also, hybridization with other traditional analytical tools is revolutionizing analytical science. Microscopy can be categorized into three different fields: optical microscopy, electron microscopy, and scanning probe microscopy. Recently, this field has been progressing rapidly because of developments in the computer and camera industries. Lab-on-a-chip Devices that integrate multiple laboratory functions on a single chip of only a few square millimeters or centimeters in size and that are capable of handling extremely small fluid volumes down to less than picoliters.
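As a simple illustration of the mass-to-charge arithmetic that underlies the mass spectrometry methods described above, the sketch below computes the m/z values at which a molecule of a given neutral mass would appear for several protonation states (as in positive-mode electrospray ionization). It is only an illustrative calculation, not taken from the text; the analyte mass and charge states are hypothetical example values.

```python
# Illustrative sketch (not from the article): m/z values for a protonated molecule.
# For an [M + zH]^z+ ion, m/z = (M + z * m_proton) / z.

PROTON_MASS = 1.007276  # proton mass in daltons (unified atomic mass units)

def mz_for_charge_states(neutral_mass: float, max_charge: int = 3) -> dict[int, float]:
    """Return the m/z value observed for each charge state from 1 to max_charge."""
    return {
        z: (neutral_mass + z * PROTON_MASS) / z
        for z in range(1, max_charge + 1)
    }

if __name__ == "__main__":
    # Hypothetical analyte with a neutral monoisotopic mass of 1000.0 Da.
    for z, mz in mz_for_charge_states(1000.0).items():
        print(f"charge +{z}: m/z = {mz:.3f}")
```

The same neutral mass therefore appears at different m/z values depending on charge state, which is why mass spectra of multiply charged ions are interpreted per charge state rather than as a single peak.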
Physical sciences
Basics_2
Chemistry
41678169
https://en.wikipedia.org/wiki/Legionnaires%27%20disease
Legionnaires' disease
Legionnaires' disease is a form of atypical pneumonia caused by any species of Legionella bacteria, quite often Legionella pneumophila. Signs and symptoms include cough, shortness of breath, high fever, muscle pains, and headaches. Nausea, vomiting, and diarrhea may also occur. This often begins 2–10 days after exposure. A legionellosis is any disease caused by Legionella, including Legionnaires' disease (a pneumonia) and Pontiac fever (a related upper respiratory tract infection), but Legionnaires' disease is the most common, so mentions of legionellosis often refer to Legionnaires' disease. The bacterium is found naturally in fresh water. It can contaminate hot water tanks, hot tubs, and cooling towers of large air conditioners. It is usually spread by breathing in mist that contains the bacteria. It can also occur when contaminated water is aspirated. It typically does not spread directly between people, and most people who are exposed do not become infected. Risk factors for infection include older age, a history of smoking, chronic lung disease, and poor immune function. Those with severe pneumonia and those with pneumonia and a recent travel history should be tested for the disease. Diagnosis is by a urinary antigen test and sputum culture. No vaccine is available. Prevention depends on good maintenance of water systems. Treatment of Legionnaires' disease is commonly conducted with antibiotics. Recommended agents include fluoroquinolones, azithromycin, or doxycycline. Hospitalization is often required. The fatality rate is around 10% for healthy persons and 25% for those with underlying conditions. The number of cases that occur globally is not known. Legionnaires' disease is the cause of an estimated 2–9% of pneumonia cases that are acquired outside of a hospital. An estimated 8,000 to 18,000 cases a year in the United States require hospitalization. Outbreaks of disease account for a minority of cases. While it can occur any time of the year, it is more common in the summer and autumn. The disease is named after the outbreak where it was first identified, at a 1976 American Legion convention in Philadelphia. Signs and symptoms The length of time between exposure to the bacteria and the appearance of symptoms (incubation period) is generally 2–10 days, but can more rarely extend to as long as 20 days. For the general population, among those exposed, between 0.1 and 5.0% develop the disease, while among those in hospital, between 0.4 and 14% develop the disease. Those with Legionnaires' disease usually have fever, chills, and a cough, which may be dry or may produce sputum. Almost all experience fever, while around half have cough with sputum, and one-third cough up blood or bloody sputum. Some also have muscle aches, headache, tiredness, loss of appetite, loss of coordination (ataxia), chest pain, or diarrhea and vomiting. Up to half of those with Legionnaires' disease have gastrointestinal symptoms, and almost half have neurological symptoms, including confusion and impaired cognition. "Relative bradycardia" may also be present, which is low to normal heart rate despite the presence of a fever. Laboratory tests may show that kidney functions, liver functions, and electrolyte levels are abnormal, which may include low sodium in the blood. Chest X-rays often show pneumonia with consolidation in the bottom portion of both lungs. 
Distinguishing Legionnaires' disease from other types of pneumonia by symptoms or radiologic findings alone is difficult; other tests are required for definitive diagnosis. People with Pontiac fever, a much milder illness caused by the same bacterium, experience fever and muscle aches without pneumonia. They generally recover in 2–5 days without treatment. For Pontiac fever, the time between exposure and symptoms is generally a few hours to two days. Cause Over 90% of cases of Legionnaires' disease are caused by Legionella pneumophila. Other types include L. longbeachae, L. feeleii, L. micdadei, and L. anisa. Transmission Legionnaires' disease is usually spread by breathing in aerosolized water or soil contaminated with the Legionella bacteria. Experts have stated that Legionnaires' disease is not transmitted from person to person. In 2014, one case of possible spread from a sick person to a caregiver occurred. Rarely, it has been transmitted by direct contact between contaminated water and surgical wounds. The bacteria grow best at warm temperatures and thrive at water temperatures between 25 and 45 °C (77 and 113 °F), with an optimum temperature of 35 °C (95 °F). Temperatures above 60 °C (140 °F) kill the bacteria. Sources where temperatures allow the bacteria to thrive include hot water tanks, cooling towers, and evaporative condensers of large air conditioning systems, such as those commonly found in hotels and large office buildings. Before 1988, energy conservation programs from the late 1970s and early 1980s still mandated maximum hot water generation, storage, and distribution temperatures that, unknowingly, fell within the Legionella bacteria's ideal breeding range. To minimize risks of bacterial growth, the American Society of Heating, Refrigerating and Air-Conditioning Engineers' 1988 ASHRAE Standard 188 and subsequent ASHRAE Guideline 12-2000 raised the recommended hot water generation and storage temperatures and set higher minimum distribution temperatures. Though the first known outbreak was in Philadelphia, cases of legionellosis have occurred throughout the world. Reservoirs L. pneumophila thrives in aquatic systems, where it is established within amoebae in a symbiotic relationship. Legionella bacteria survive in water as intracellular parasites of water-dwelling protozoa, such as amoebae. Amoebae are often part of biofilms, and once Legionella and infected amoebae are protected within a biofilm, they are particularly difficult to destroy. In the built environment, central air conditioning systems in office buildings, hotels, and hospitals are sources of contaminated water. Other places the bacteria can dwell include cooling towers used in industrial cooling systems, evaporative coolers, nebulizers, humidifiers, whirlpool spas, hot water systems, showers, windshield washers, fountains, room-air humidifiers, ice-making machines, and misting systems typically found in grocery-store produce sections. The bacteria may also be transmitted from contaminated aerosols generated in hot tubs if the disinfection and maintenance programs are not followed rigorously. Freshwater ponds, creeks, and ornamental fountains are potential sources of Legionella. The disease is particularly associated with hotels, fountains, cruise ships, and hospitals with complex potable water systems and cooling systems. Respiratory-care devices such as humidifiers and nebulizers used with contaminated tap water may contain Legionella species, so using sterile water is very important. Other sources include exposure to potting mix and compost. Mechanism Legionella spp. 
enter the lungs either by aspiration of contaminated water or inhalation of aerosolized contaminated water or soil. In the lung, the bacteria are consumed by macrophages, a type of white blood cell, inside of which the Legionella bacteria multiply, causing the death of the macrophage. Once the macrophage dies, the bacteria are released from the dead cell to infect other macrophages. Virulent strains of Legionella kill macrophages by blocking the fusion of phagosomes with lysosomes inside the host cell; normally, bacteria are contained inside the phagosome, which merges with a lysosome, allowing enzymes and other chemicals to break down the invading bacteria. Diagnosis People of any age may develop Legionnaires' disease, but the illness most often affects middle-aged and older people, particularly those who smoke cigarettes or have chronic lung disease. Immunocompromised people are also at higher risk. Pontiac fever most commonly occurs in those who are otherwise healthy. The most useful diagnostic tests detect the bacteria in coughed-up mucus, find Legionella antigens in urine samples, or allow comparison of Legionella antibody levels in two blood samples taken 3–6 weeks apart. A urine antigen test is simple, quick, and very reliable, but only detects L. pneumophila serogroup 1, which accounts for 70% of disease caused by L. pneumophila, meaning that use of the urine antigen test alone may miss as many as 30% of cases. This test was developed by Richard Kohler in 1982. When dealing with L. pneumophila serogroup 1, the urine antigen test is useful for early detection of Legionnaires' disease and initiation of treatment, and has been helpful in early detection of outbreaks. However, it does not identify the specific subtypes, so it cannot be used to match the person with the environmental source of infection. The Legionella bacteria can be cultured from sputum or other respiratory samples. Legionella spp. stain poorly with Gram stain, stain positive with silver, and are cultured on charcoal yeast extract with iron and cysteine (CYE agar). A significant under-reporting problem occurs with legionellosis. Even in countries with effective health services and readily available diagnostic testing, about 90% of cases of Legionnaires' disease are missed. This is partly due to the disease being a relatively rare form of pneumonia, which many clinicians may not have encountered before and thus may misdiagnose. A further issue is that people with legionellosis can present with a wide range of symptoms, some of which (such as diarrhea) may distract clinicians from making a correct diagnosis. Prevention Although the risk of Legionnaires' disease being spread by large-scale water systems cannot be eliminated, it can be greatly reduced by writing and enforcing a highly detailed, systematic water safety plan appropriate for the specific facility involved (office building, hospital, hotel, spa, cruise ship, etc.). Some of the elements that such a plan may include are: Keep water temperature either below or above the range in which the Legionella bacterium thrives. Prevent stagnation, for example, by removing from a network of pipes any sections that have no outlet (dead ends). Where stagnation is unavoidable, as when a wing of a hotel is closed for the off-season, remedial measures are recommended, e.g., maintaining elevated temperatures throughout the hot-water distribution system and periodic disinfection or permanent chlorination of cold-water systems. 
Prevention of biofilms is crucial because, once established, they become more difficult to remove from piping systems. The likelihood of formation is increased by pipe scale and corrosion, warm water temperatures, stagnation, and the quantity of nutrients that enter the system. Periodically disinfect the system, by high heat or a chemical biocide, and use chlorination where appropriate. Monochloramine is likely more effective than free chlorine (sodium hypochlorite), as it is more stable and its residuals are more likely to persist to the point of delivery. Monochloramine is also more likely to penetrate legionella biofilms. Treatment of water with copper-silver ionization or ultraviolet light may also be effective. System design (or renovation) can reduce the production of aerosols and reduce human exposure to them, by directing them well away from building air intakes. An effective water safety plan also covers such matters as training, record-keeping, communication among staff, contingency plans, and management responsibilities. The format and content of the plan may be prescribed by public health laws or regulations. To inform the water safety plan, the undertaking of a site-specific legionella risk assessment is often recommended in the first instance. The legionella risk assessment identifies the hazards and the level of risk they pose, and provides recommendations for control measures to put in place within the overarching water safety plan. Treatment Effective antibiotics include most macrolides, tetracyclines, ketolides, and quinolones. Legionella spp. multiply within the cell, so any effective treatment must have excellent intracellular penetration. Current treatments of choice are the respiratory tract quinolones (levofloxacin, moxifloxacin, gemifloxacin) or newer macrolides (azithromycin, clarithromycin, roxithromycin). The antibiotics used most frequently have been levofloxacin, doxycycline, and azithromycin. Macrolides (azithromycin) are used in all age groups, while tetracyclines (doxycycline) are prescribed for children above the age of 12 and quinolones (levofloxacin) above the age of 18. Rifampicin can be used in combination with a quinolone or macrolide. Whether rifampicin is an effective antibiotic to take for treatment is uncertain. The Infectious Diseases Society of America does not recommend adding rifampicin to treatment regimens. Tetracyclines and erythromycin led to improved outcomes compared to other antibiotics in the original American Legion outbreak. These antibiotics are effective because they have excellent intracellular penetration in Legionella-infected cells. The recommended treatment is 5–10 days of levofloxacin or 3–5 days of azithromycin, but in people who are immunocompromised, have severe disease, or other pre-existing health conditions, longer antibiotic use may be necessary. During outbreaks, prophylactic antibiotics have been used to prevent Legionnaires' disease in high-risk individuals who have possibly been exposed. The mortality at the original American Legion convention in 1976 was high (29 deaths in 182 infected individuals) because the antibiotics used (including penicillins, cephalosporins, and aminoglycosides) had poor intracellular penetration. Mortality has plunged to less than 5% if therapy is started quickly. Delay in giving the appropriate antibiotic leads to higher mortality. 
Prognosis The fatality rate of Legionnaires' disease has ranged from 5–30% during various outbreaks and approaches 50% for nosocomial infections, especially when treatment with antibiotics is delayed. Hospital-acquired Legionella pneumonia has a fatality rate of 28%, and the principal source of infection in such cases is the drinking-water distribution system. Epidemiology Legionnaires' disease acquired its name in July 1976, when an outbreak of pneumonia occurred among people attending a convention of the American Legion at the Bellevue-Stratford Hotel in Philadelphia. Of the 182 reported cases, mostly men, 29 died. On 18 January 1977, the causative agent was identified as a previously unknown strain of bacteria, subsequently named Legionella, and the species that caused the outbreak was named Legionella pneumophila. Following this discovery, unexplained outbreaks of severe respiratory disease from the 1950s were retrospectively attributed to Legionella. Legionnaires' disease also became a prominent historical example of an emerging infectious disease. Outbreaks of Legionnaires' disease receive significant media attention, but this disease usually occurs in single, isolated cases not associated with any recognized outbreak. When outbreaks do occur, they are usually in the summer and early autumn, though cases may occur at any time of year. Most infections occur in those who are middle-aged or older. National surveillance systems and research studies were established early, and in recent years, improved ascertainment and changes in clinical methods of diagnosis have contributed to an upsurge in reported cases in many countries. Environmental studies continue to identify novel sources of infection, leading to regular revisions of guidelines and regulations. About 8,000 to 18,000 cases of Legionnaires' disease occur each year in the United States, according to the Bureau of Communicable Disease Control. Between 1995 and 2005, over 32,000 cases of Legionnaires' disease and more than 600 outbreaks were reported to the European Working Group for Legionella Infections. The data on Legionella are limited in developing countries, and Legionella-related illnesses likely are underdiagnosed worldwide. Improvements in diagnosis and surveillance in developing countries would be expected to reveal far higher levels of morbidity and mortality than are currently recognised. Similarly, improved diagnosis of human illness related to Legionella species and serogroups other than Legionella pneumophila would improve knowledge about their incidence and spread. A 2011 study successfully used modeling to predict the likely number of cases during Legionnaires' outbreaks based on symptom onset dates from past outbreaks. In this way, the eventual likely size of an outbreak can be predicted, enabling efficient and effective use of public-health resources in managing an outbreak. During the COVID-19 pandemic, some researchers and organisations raised concerns about the impact of the COVID-19 lockdowns on Legionnaire's disease outbreaks. Additionally, at least two people in England died from a co-infection of Legionella and SARS-CoV-2. Outbreaks An outbreak is defined as two or more cases where the onset of illness is closely linked in time (weeks rather than months) and space, where a suspicion or evidence exists of a common source of infection, with or without microbiological support (i.e. common spatial location of cases from travel history). 
In April 1985, 175 people in Stafford, England, were admitted to the District or Kingsmead Stafford Hospitals with chest infection or pneumonia. A total of 28 people died. Medical diagnosis showed that Legionnaires' disease was responsible and the immediate epidemiological investigation traced the source of the infection to the air-conditioning cooling tower on the roof of Stafford District Hospital. In March 1999, a large outbreak in the Netherlands occurred during the Westfriese Flora flower exhibition in Bovenkarspel; 318 people became ill and at least 32 people died. This was the second-deadliest outbreak since the 1976 outbreak and possibly the deadliest, as several people were buried before Legionnaires' disease had been diagnosed. The world's largest outbreak of Legionnaires' disease happened in July 2001, with people appearing at the hospital on 7 July, in Murcia, Spain. More than 800 suspected cases were recorded by the time the last case was treated on 22 July; 636–696 of these cases were estimated and 449 confirmed (so, at least 16,000 people were exposed to the bacterium) and six died, a case-fatality rate around 1%. In September 2005, 127 residents of a nursing home in Canada became ill with L. pneumophila. Within a week, 21 of the residents had died. Culture results at first were negative, which is not unusual, as L. pneumophila is a "fastidious" bacterium, meaning it requires specific nutrients, living conditions, or both to grow. The source of the outbreak was traced to the air-conditioning cooling towers on the nursing home's roof. In an outbreak in lower Quebec City, Canada, 180 people were affected with 13 resulting deaths due to contaminated water in a cooling tower. In November 2014, 302 people were hospitalized following an outbreak of legionellosis in Portugal, and seven related deaths were reported. All cases emerged in three civil parishes from the municipality of Vila Franca de Xira in the northern outskirts of Lisbon, and were treated in hospitals of the greater Lisbon area. The source is suspected to be located in the cooling towers of the fertilizer plant Fertibéria. Twelve people were diagnosed with the disease in an outbreak in the Bronx, New York, in December 2014; the source was traced to contaminated cooling towers at a housing development. In July and August 2015, another, unrelated outbreak in the Bronx killed 12 people and made about 120 people sick; the cases arose from a cooling tower on top of a hotel. At the end of September, another person died of the disease and 13 were sickened in yet another unrelated outbreak in the Bronx. The cooling towers from which the people were infected in the latter outbreak had been cleaned during the summer outbreak, raising concerns about how well the bacteria could be controlled. On 28 August 2015, an outbreak of Legionnaire's disease was detected at San Quentin State Prison in Northern California; 81 people were sickened and the cause was sludge that had built up in cooling towers. Between June 2015, and January 2016, 87 cases of Legionnaires' disease were reported by the Michigan Department of Health and Human Services for the city of Flint, Michigan, and surrounding areas. The outbreak may have been linked to the Flint water crisis, in which the city's water source was changed to a cheaper and inadequately treated source. Ten of those cases were fatal. 
In November 2017, an outbreak was detected at Hospital de São Francisco Xavier, Lisbon, Portugal, with up to 53 people being diagnosed with the disease and five of them dying from it. In Quincy, Illinois, at the Illinois Veterans Home, a 2015 outbreak of the disease killed 12 people and sickened more than 50 others. It was believed to have been caused by a contaminated water supply. Three more cases were identified by November 2017. In the autumn of 2017, 22 cases were reported in a Legionnaires' disease outbreak at Disneyland in Anaheim, California. It was believed to have been caused by a cooling tower that releases mist for the comfort of visitors. The contaminated droplets likely spread to people in and beyond the park. In July 2019, 11 former guests of the Sheraton Atlanta hotel were diagnosed with the disease, with 55 additional probable cases. In September 2019, 141 visitors to the Western North Carolina Mountain State Fair were diagnosed with Legionnaires' disease, with four reported deaths, after a hot tub exhibit was suspected of having developed and spread the bacteria. At least one additional exposure apparently occurred during the Asheville Quilt Show that took place a few weeks after the fair in the same building where the hot tub exhibit was held. The building had been sanitized after the outbreak. In December 2019, the government of Western Australia's Department of Health was notified of four cases of Legionnaires' disease. Those exposed had recently visited the area near Bali's Ramayana Resort and Spa in central Kuta. In February 2024, the Minnesota Department of Health issued a news release stating that fourteen cases had been identified in Grand Rapids, Minnesota, since April 2023, which it attributed to the municipal water supply. In January 2024, NSW Health issued an alert for Legionnaires' disease for the Sydney CBD. As of 3 January 2024, seven known cases requiring hospitalization had been reported.
Biology and health sciences
Bacterial infections
Health
24813575
https://en.wikipedia.org/wiki/Rho%20Ophiuchi%20cloud%20complex
Rho Ophiuchi cloud complex
The Rho Ophiuchi cloud complex is a complex of interstellar clouds containing various nebulae, most notably a dark nebula centered 1° south of ρ Ophiuchi, a star in the constellation Ophiuchus to which, among others, the complex extends. At an estimated distance of about 140 parsecs, or 460 light years, it is one of the closest star-forming regions to the Solar System. Cloud complex This cloud covers a large angular area on the celestial sphere. It consists of two major regions of dense gas and dust. The first contains a star-forming cloud (L1688) and two filaments (L1709 and L1755), while the second has a star-forming region (L1689) and a filament (L1712–L1729). These filaments extend up to 10–17.5 parsecs in length and can be as narrow as 0.24 parsecs in width. The large extensions of the complex are also called Dark River clouds (or Rho Ophiuchi Streamers) and are identified as Barnard 44 and 45. Some of the structures within the complex appear to be the result of a shock front passing through the clouds from the direction of the neighboring Sco OB2 association. Temperatures of the clouds range from 13 to 22 K, and there is a total of about 3,000 times the mass of the Sun in the material. Over half of the mass of the complex is concentrated around the L1688 cloud, and this is the most active star-forming region. There are embedded infrared sources within the complex. A total of 425 infrared sources have been detected near the L1688 cloud. These are presumed to be young stellar objects, including 16 classified as protostars, 123 T Tauri stars with dense circumstellar disks, and 77 weaker T Tauri stars with thinner disks. The last two categories of stars have estimated ages ranging from 100,000 to a million years. The first brown dwarf to be identified in a star-forming region was Rho Oph J162349.8-242601, located in the Rho Ophiuchi cloud. One of the older objects at the edge of the primary star-forming region was found to be a circumstellar disk seen nearly edge-on. It spans a diameter of 300 AU and contains at least twice the mass of Jupiter. The million-year-old star at the center of the disk has a temperature of 3,000 K and is emitting 0.4 times the luminosity of the Sun. The 2023 NASA/ESA/CSA James Webb Space Telescope image, released on the telescope's first anniversary, shows young stars, roughly the size of the Sun, at the center of circumstellar discs. These represent planetary systems of the future being formed in a "stellar nursery". Since the field of view of the photo is very small, at 6.4 arc-minutes, it displays just a tiny region of what appears in most other photographs of the Rho Ophiuchi cloud complex.
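The distances above mix parsecs and light-years; the conversion is simple arithmetic, with one parsec equal to about 3.2616 light-years. The short sketch below is only an illustrative calculation (not part of the article) converting the quoted 460 light-year distance and the 10–17.5 parsec filament lengths between the two units.

```python
# Illustrative unit conversions for the distances quoted above (not from the article).
LY_PER_PARSEC = 3.2616  # one parsec is approximately 3.2616 light-years

def ly_to_pc(light_years: float) -> float:
    """Convert light-years to parsecs."""
    return light_years / LY_PER_PARSEC

def pc_to_ly(parsecs: float) -> float:
    """Convert parsecs to light-years."""
    return parsecs * LY_PER_PARSEC

if __name__ == "__main__":
    print(f"460 ly  ~ {ly_to_pc(460):.0f} pc")   # distance to the complex
    print(f"10 pc   ~ {pc_to_ly(10):.0f} ly")    # shorter filament length
    print(f"17.5 pc ~ {pc_to_ly(17.5):.0f} ly")  # longer filament length
```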
Physical sciences
Notable nebulae
Astronomy
37509820
https://en.wikipedia.org/wiki/Processor%20%28computing%29
Processor (computing)
In computing and computer science, a processor or processing unit is an electrical component (digital circuit) that performs operations on an external data source, usually memory or some other data stream. It typically takes the form of a microprocessor, which can be implemented on a single or a few tightly integrated metal–oxide–semiconductor integrated circuit chips. In the past, processors were constructed using multiple individual vacuum tubes, multiple individual transistors, or multiple integrated circuits. The term is frequently used to refer to the central processing unit (CPU), the main processor in a system. However, it can also refer to other coprocessors, such as a graphics processing unit (GPU). Traditional processors are typically based on silicon; however, researchers have developed experimental processors based on alternative materials such as carbon nanotubes, graphene, diamond, and alloys made of elements from groups three and five of the periodic table. Transistors made of a single sheet of silicon atoms one atom tall and other 2D materials have been researched for use in processors. Quantum processors have been created; they use quantum superposition to represent bits (called qubits) instead of only an on or off state. Moore's law Moore's law, named after Gordon Moore, is the observation and projection via historical trend that the number of transistors in integrated circuits, and therefore processors by extension, doubles every two years. The progress of processors has followed Moore's law closely. Types Central processing units (CPUs) are the primary processors in most computers. They are designed to handle a wide variety of general computing tasks rather than only a few domain-specific tasks. If based on the von Neumann architecture, they contain at least a control unit (CU), an arithmetic logic unit (ALU), and processor registers. In practice, CPUs in personal computers are usually also connected, through the motherboard, to a main memory bank, hard drive or other permanent storage, and peripherals, such as a keyboard and mouse. Graphics processing units (GPUs) are present in many computers and designed to efficiently perform computer graphics operations, including linear algebra. They are highly parallel, and CPUs usually perform better on tasks requiring serial processing. Although GPUs were originally intended for use in graphics, over time their application domains have expanded, and they have become an important piece of hardware for machine learning. There are several forms of processors specialized for machine learning. These fall under the category of AI accelerators (also known as neural processing units, or NPUs) and include vision processing units (VPUs) and Google's Tensor Processing Unit (TPU). Sound chips and sound cards are used for generating and processing audio. Digital signal processors (DSPs) are designed for processing digital signals. Image signal processors are DSPs specialized for processing images in particular. Deep learning processors, such as neural processing units are designed for efficient deep learning computation. Physics processing units (PPUs) are built to efficiently make physics-related calculations, particularly in video games. Field-programmable gate arrays (FPGAs) are specialized circuits that can be reconfigured for different purposes, rather than being locked into a particular application domain during manufacturing. The Synergistic Processing Element or Unit (SPE or SPU) is a component in the Cell microprocessor. 
Processors based on different circuit technology have been developed. One example is quantum processors, which use quantum physics to enable algorithms that are impossible on classical computers (those using traditional circuitry). Another example is photonic processors, which use light to make computations instead of semiconducting electronics. Processing is done by photodetectors sensing light produced by lasers inside the processor.
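As a rough illustration of the transistor-doubling trend described in the Moore's law passage above, the following Python sketch projects transistor counts forward from a baseline year. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) is an illustrative assumption used for demonstration, not a figure taken from this article.

```python
# Illustrative projection of the "doubling every two years" trend described above.
# The baseline year and count are assumptions for demonstration purposes only.

def projected_transistors(year: int, base_year: int = 1971, base_count: int = 2_300) -> float:
    """Return a projected transistor count assuming a doubling every two years."""
    return base_count * 2 ** ((year - base_year) / 2)

if __name__ == "__main__":
    for y in (1971, 1981, 1991, 2001, 2011, 2021):
        print(f"{y}: ~{projected_transistors(y):,.0f} transistors")
```

Doubling every two years is equivalent to multiplying the baseline by 2 raised to half the number of elapsed years, which the function expresses directly.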
Technology
Computer hardware
null
30874055
https://en.wikipedia.org/wiki/Delhi%20Metro
Delhi Metro
The Delhi Metro is a rapid transit system that serves Delhi and the adjoining satellite cities of Ghaziabad, Faridabad, Gurugram, Noida, Bahadurgarh, and Ballabhgarh in the National Capital Region of India. The system consists of 10 colour-coded lines serving 257 stations, with a total length of . It is India's largest and busiest metro rail system and the second-oldest, after the Kolkata Metro. The metro has a mix of underground, at-grade, and elevated stations using broad-gauge and standard-gauge tracks. The metro makes over 4,300 trips daily. Construction began in 1998, and the first elevated section (Shahdara to Tis Hazari) on the Red Line opened on 25 December 2002. The first underground section (Vishwa Vidyalaya – Kashmere Gate) on the Yellow Line opened on 20 December 2004. The network was developed in phases. Phase I was completed by 2006, followed by Phase II in 2011. Phase III was mostly complete in 2021, except for a small extension of the Airport Line which opened in 2023. Construction of Phase IV began on 30 December 2019. The Delhi Metro Rail Corporation (DMRC), a joint venture between the Government of India and the Government of Delhi, built and operates the Delhi Metro. The DMRC was certified by the United Nations in 2011 as the first metro rail and rail-based system in the world to receive carbon credits for reducing greenhouse-gas emissions, reducing annual carbon emission levels in the city by 630,000 tonnes. The Delhi Metro has interchanges with the Rapid Metro Gurgaon (with a shared ticketing system) and the Noida Metro. On 22 October 2019, DMRC took over operations of the financially-troubled Rapid Metro Gurgaon. The Delhi Metro's annual ridership was 203.23 crore (2.03 billion) in 2023. The system will have interchanges with the Delhi–Meerut RRTS, India's fastest urban regional transit system. History Background The concept of mass rapid transit for New Delhi first emerged from a 1969 traffic and travel characteristics study in the city. Over the next several years, committees in a number of government departments were commissioned to examine issues related to technology, route alignment, and governmental jurisdiction. In 1984, the Urban Arts Commission proposed the development of a multi-modal transport system which would build three underground mass rapid transit corridors and augment the city's suburban railway and road transport networks. The city expanded significantly while technical studies and project financing were underway, doubling its population and increasing the number of vehicles five-fold between 1981 and 1998. Traffic congestion and pollution soared as an increasing number of commuters used private vehicles, and the existing bus system was unable to bear the load. A 1992 attempt to privatise the bus transport system compounded the problem, with inexperienced operators plying poorly-maintained, noisy and polluting buses on lengthy routes; this resulted in long waiting times, unreliable service, overcrowding, unqualified drivers, speeding and reckless driving, which led to road accidents. The Government of India under Prime Minister H.D. Deve Gowda and the Government of Delhi set up the Delhi Metro Rail Corporation (DMRC) on 3 May 1995, with Elattuvalapil Sreedharan as its managing director. Mangu Singh replaced Sreedharan as DMRC managing director on 31 December 2011. Initial construction When the project was originally approved by the Union Cabinet in September 1996, it had three corridors.
In 1997, official development assistance loans from Japan were granted to finance and conduct the first phase of the system. Construction of the Delhi Metro began on 1 October 1998. To avoid problems experienced by the Kolkata Metro, which witnessed substantial delays and ran 12 times over budget due to "political meddling, technical problems and bureaucratic delays", the DMRC was created as a special-purpose vehicle vested with autonomy and power to execute the large project which involved many technical complexities in a difficult urban environment within a limited time frame. Putting the central and state governments on an equal footing gave an unprecedented level of autonomy and freedom to the company, which had full powers to hire people, decide on tenders, and control funds. The DMRC hired the Hong Kong MTRC as a technical consultant on rapid-transit operation and construction techniques. Construction proceeded smoothly except for a major disagreement in 2000, when the Ministry of Railways forced the system to use broad gauge despite the DMRC's preference for standard gauge. This decision led to an additional capital expenditure of . The Delhi Metro's first line, the Red Line, was inaugurated by Prime Minister Atal Bihari Vajpayee on 24 December 2002. The metro became India's second underground rapid transit system, after the Kolkata Metro, when the Vishwa Vidyalaya–Kashmere Gate section of the Yellow Line opened on 20 December 2004. The underground line was inaugurated by Prime Minister Manmohan Singh. The project's first phase was completed in 2006, on budget and almost three years ahead of schedule, an achievement described by Business Week as "nothing short of a miracle". Phase I A 64.75 kilometer (40.23 miles) network of 59 stations was constructed in Delhi, encompassing the initial sections of the Red, Yellow, and Blue Lines. The stations were opened to the public between 25 December 2002 and 11 November 2006. Phase II A total of network of 86 stations and 10 routes and extensions was built. Seven routes were extensions of the Phase I network, three were new colour-coded lines, and three routes connect to other cities (the Yellow Line to Gurgaon and the Blue Line to Noida and Ghaziabad) of the national capital region in the states of Haryana and Uttar Pradesh. At the end of Phases I and II, the network's total length was and 145 stations became operational between 4 June 2008 and 27 August 2011. Phase III Phase I (Red, Yellow and Blue Lines) and Phase II (Green, Violet, and Airport Express Lines) focused on adding radial lines to expand the network. To further reduce congestion and improve connectivity, Phase III included eight extensions to existing lines, two ring lines (the Pink and Magenta Lines) and the Grey Line. It has 28 underground stations, three new lines and seven route extensions, totaling , at a cost of . The three new Phase III lines are the Pink Line on Inner Ring Road (Line 7), the Magenta Line on Outer Ring Road (Line 8) and the Grey Line connecting Dwarka and Najafgarh (Line 9). Work on Phase III began in 2011, with 2016 the planned deadline. Over 20 tunnel-boring machines were used simultaneously to expedite construction, which was completed in March 2019 (except for a small stretch due to non-availability of land). Short extensions were later added to Phase III, which was expected to be completed by the end of 2020, but construction was delayed due to the COVID-19 pandemic. 
It was completed on 18 September 2021 with the opening of the Grey Line extension from Najafgarh to Dhansa Bus Stand. An extension of the Airport Line to Yashobhoomi Dwarka Sector - 25 metro station was later added, and it was completed on 17 September 2023. Driverless operations on the Magenta line began on 28 December 2020, making it the Delhi Metro's (and India's) first driverless metro line. On 25 November 2021, the Pink Line also began driverless operations. The total driverless DMRC network is nearly , putting the Delhi Metro in fourth position globally among such networks, behind Kuala Lumpur. The expected daily ridership of the network after the completion of Phase III was estimated at 53.47 lakh passengers. Actual DMRC ridership was 27.79 lakh in 2019–20, 51.97 percent of the projected ridership. Actual ridership of the Phase III corridors was 4.38 lakh, compared with a projected ridership of 20.89 lakh in 2019–20 (a deficit of 79.02 percent). The communication-based train control (CBTC) on Phase III trains enables them to run at a 90-second headway, although the actual headway between trains is higher because of the relatively low demand on the new corridors. Keeping the short headway and other constraints in mind, DMRC reversed its decision to build nine-car-long stations for new lines and opted for shorter stations which can accommodate six-car trains. Phase IV Phase IV, with a length of and six lines, was finalized by the Government of Delhi in December 2018. Approval from the government of India was received for three priority corridors in March 2019. Construction of the corridors began on 30 December 2019, with an expected completion date of 2026. The metro's total length will exceed at the end of Phase IV, not including other independently operated systems in the National Capital Region such as the Aqua Line of the Noida-Greater Noida Metro and the Rapid Metro Gurgaon, which connect to the Delhi Metro. Construction Incidents On 19 October 2008, a launching gantry and part of the overhead Blue Line extension under construction in Laxmi Nagar collapsed and fell on a passing bus. Workers were using a crane to lift a 400-tonne concrete span of the bridge when the gantry and a span of the bridge collapsed on the bus. The driver and a construction worker were killed. On 12 July 2009, a section of a bridge collapsed while it was being erected at Zamrudpur, near East of Kailash, on the Central Secretariat – Badarpur corridor. Six people died and 15 were injured. A crane removing the debris collapsed the following day, bringing down two other nearby cranes and injuring six. On 22 July 2009, a worker at the Ashok Park Metro station was killed when a steel beam fell on him. Over a hundred people, including 93 workers, have died since work on the metro began in 1998. On 23 April 2018, five people were injured when an iron girder fell off the elevated section of a Metro structure under construction at the Mohan Nagar intersection in Ghaziabad. A car, an auto rickshaw, and a motorbike were also damaged in the incident. Expansion Gurugram Upcoming/Under construction Metro Line The Haryana Mass Rapid Transit Corporation (HMRTC) has plans to establish a metro network spanning 188 kilometres in Gurugram. Gurugram Metro Rail Limited (GMRL) will be responsible for constructing, maintaining, and operating this metro line, similar to the Delhi Metro Rail Corporation. All of these lines are currently planned for development in the first phase, with further expansion planned in a subsequent phase.
Proposed Phase V Former DMRC managing director E. Sreedharan said that by the time Phase IV is completed, the city will need Phase V to cope with increased population and transport needs. Planning for this phase has not begun, but the following corridors and projects have been suggested for the near future: Yamuna Bank – Loni border: , dropped from Phase IV expansion The Central Vista Loop Line is part of the Central Vista Redevelopment Project. The Delhi Air Train, or automated people mover, is part of the Indira Gandhi International Airport expansion and will connect T1, T2, T3 and Aerocity. A detailed project report (DPR) has been prepared for the extension of the Yellow Line (Delhi Metro) to Khera Kalan in North Delhi from Samaypur Badli metro station, with a proposed station at Siraspur along the route. Haryana and UP connectivity Haryana projects Gurugram Metro loop (from HUDA City Centre to Cyber City) - approved: The total length of the corridor will be about , consisting of 27 elevated stations with six interchange stations. This link would start at HUDA City Centre and move towards Sector 45, Cyber Park, district shopping centre, Sector 47, Subhash Chowk, Sector 48, Sector 72 A, Hero Honda Chowk, Udyog Vihar Phase 6, Sector 10, Sector 37, Basai village, Sector 9, Sector 7, Sector 4, Sector 5, Ashok Vihar, Sector 3, Bajghera Road, Palam Vihar Extension, Palam Vihar, Sector 23 A, Sector 22, Udyog Vihar Phase 4 and Udyog Vihar Phase 5, finally merging into the existing Rapid Metro Gurgaon network at Moulsari Avenue station near Cyber City. Gurugram (from Rezang La Chowk in Palam Vihar to IGI Airport (IICC - Dwarka Sector 25 metro station)) - proposed: would connect the Gurugram loop to IGI Airport by linking Palam Vihar to the Delhi Airport Metro Express (Orange Line) at the existing IICC - Dwarka Sector 25 metro station (India International Convention and Expo Centre), which also connects to the Blue Line at Dwarka Sector 21 metro station. It will likely be a nearly 6 km extension of the Orange Line from IICC Dwarka to Bamnoli Chowk (southeast end of IICC), Nykaa village, Bijwasan railway station (BWSN) and Gurugram Sector-51, connecting to the Gurugram metro network near Palam Vihar Halt railway station (PLVR). HUDA City Centre to Manesar City - approved: An extension of the Yellow Line, included in the Gurgaon Master Plan 2031 and approved by the Haryana government, will go up to Panchgaon Chowk in Manesar, where it will interchange with the Delhi–Alwar Regional Rapid Transit System, the Haryana Orbital Rail Corridor (Panchgaon), the Western Peripheral Expressway's Multimodal Transit Centre and the Jhajjar-Palwal rail line. Gurgaon – Faridabad metro - DPR ready: In May 2020, the detailed project report (DPR) and survey for the long Gurgaon-Faridabad metro link from Vatika Chowk in Gurugram to Bata Chowk in Faridabad were completed; the link will have 8 stations, and the stretch along the Gurgaon-Faridabad Road through an eco-sensitive wildlife corridor will be elevated. Bahadurgarh (Brigadier Hoshiyar Singh metro station) – Rohtak City: A Green Line extension, partially approved for the Bahadurgarh to Asaudah railway station section to connect with the Haryana Orbital Rail Corridor at Asaudah station, is covered in the Haryana government's FY 2023-24 budget. Dhansa Bus Stand – Jhajjar City: A Grey Line extension, proposed but not approved Uttar Pradesh (UP) projects Shiv Vihar – Loni: Proposed but not approved Noida – Noida International Airport: a surface line along the Yamuna Expressway serving the proposed Noida International Airport.
The line, envisioned to be completed by 2025, will connect with the Noida Metro. Integration with RapidX The RapidX is a semi-high-speed regional rapid transit system (RRTS) which aims to connect Delhi with its neighbouring cities via eight lines of semi-high-speed trains operating at a maximum speed of . Phase I of the project consists of three corridors: Delhi–Meerut, Delhi–Alwar, and Delhi–Panipat corridor. The Delhi–Meerut corridor, also known as the Delhi–Meerut RRTS, is currently under development by the National Capital Region Transport Corporation (NCRTC). The Delhi–Meerut RRTS is long and costs . It will comprise 14 stations (with nine additional stations for the Meerut Metro) and two depots. Three of the 14 stations (Sarai Kale Khan, New Ashok Nagar, and Anand Vihar) will be in Delhi, and are planned for seamless integration with the Delhi Metro. Lines Red Line (Line 1) The Red Line, the first metro line opened, connects Rithala in the west to Shaheed Sthal (New Bus Adda) in the east for a distance of . Partly elevated and partly at grade, it crosses the Yamuna River between the Kashmere Gate and Shastri Park stations. The opening of the first stretch on 24 December 2002, between Shahdara and Tis Hazari, crashed the ticketing system due to demand. Subsequent sections were opened from Tis Hazari – Trinagar (later renamed Inderlok) on 4 October 2003, Inderlok – Rithala on 31 March 2004, and Shahdara – Dilshad Garden on 4 June 2008. The Red Line has interchanges at Kashmere Gate with the Yellow and Violet Lines, at Inderlok with the Green Line, and at Netaji Subhash Place and Welcome with the Pink Line. An interchange with the Blue Line at Mohan Nagar is planned. Six-coach trains were commissioned on the line on 24 November 2013. An extension from Dilshad Garden to Shaheed Sthal (New Bus Adda) opened on 8 March 2019. The metro introduced a set of two eight-coach trains on the Red Line, converted from the existing fleet of 39 six-coach trains, in November 2022. Yellow Line (Line 2) The Yellow Line, the metro's second line, was its first underground line. Running north to south, it connects Samaypur Badli with Millennium City Centre Gurugram in Gurugram. The northern and southern parts of the line are elevated, and the central section (which passes through some of the most congested parts of Delhi) is underground. The underground section between Vishwa Vidyalaya and Kashmere Gate opened on 20 December 2004; the Kashmere Gate – Central Secretariat section opened on 3 July 2005, and Vishwa Vidyalaya – Jahangirpuri on 4 February 2009. The line has India's second-deepest metro station at Chawri Bazar, below ground level. An additional stretch from Qutab Minar to Millennium City Centre Gurugram, initially operating separately from the mainline, opened on 21 June 2010; the Chhatarpur station on this stretch opened on 26 August of that year. Due to delays in acquiring land to construct the station, it was built with prefabricated structures in nine months and is the only Delhi Metro station made completely of steel. The connecting link between Central Secretariat and Qutub Minar opened on 3 September 2010. On 10 November 2015, the line was further extended between Jahangirpuri and Samaypur Badli in Outer Delhi. 
Interchanges are available with the Red Line and Kashmere Gate ISBT at Kashmere Gate, with the Blue Line at Rajiv Chowk, with the Violet Line at Kashmere Gate and Central Secretariat, with the Airport Express at New Delhi, with the Pink Line at Azadpur and Dilli Haat - INA, with the Magenta Line at Hauz Khas, with Rapid Metro Gurgaon at Sikanderpur, and with Indian Railways at Chandni Chowk and New Delhi. The Yellow Line is the metro's first line to replace four-coach trains with six- and eight-coach configurations. The Metro Museum at Patel Chowk metro station, South Asia's only rapid-transit museum, has a collection of display panels, historical photographs and exhibits tracing the genesis of the Delhi Metro. The museum was opened on 1 January 2009. Blue Line (Lines 3 and 4) The Blue Line, the third line of the metro to open, was the first to connect areas outside Delhi. Mainly elevated and partly underground, it connects Dwarka Sub City in the west with the satellite city of Noida in the east for a distance of . The line's first section, between Dwarka and Barakhamba Road, opened on 31 December 2005, and subsequent sections opened between Dwarka – Dwarka Sector 9 on 1 April 2006, Barakhamba Road – Indraprastha on 11 November 2006, Indraprastha – Yamuna Bank on 10 May 2009, Yamuna Bank – Noida City Centre on 12 November 2009, and Dwarka Sector 9 – Dwarka Sector 21 on 30 October 2010. The line crosses the Yamuna River between the Indraprastha and Yamuna Bank stations, and has India's second extradosed bridge across the Northern Railways mainlines near Pragati Maidan. A branch of the Blue Line, inaugurated on 8 January 2010, runs for from the Yamuna Bank station to Anand Vihar in East Delhi. It was extended to Vaishali on 14 July 2011. On 9 March 2019, an extension from Noida City Centre to Noida Electronic City was opened by Prime Minister Narendra Modi. Interchanges are available with the Aqua Line (Noida Metro) at Noida Sector 52, adjacent to the Aqua Line's Noida Sector 51 station, with the Yellow Line at Rajiv Chowk, with the Green Line at Kirti Nagar, with the Violet Line at Mandi House, with the Airport Express at Dwarka Sector 21, with the Pink Line at Rajouri Garden, Mayur Vihar Phase-I, Karkarduma and Anand Vihar, with the Magenta Line at Janakpuri West and Botanical Garden, and with Indian Railways and the Interstate Bus Station (ISBT) at Anand Vihar station (which connects with Anand Vihar Railway Terminal and Anand Vihar ISBT). An interchange with the Red Line at Mohan Nagar is planned. Green Line (Line 5) Opened in 2010, the Green Line (Line 5) is the metro's fifth line and its first standard-gauge line; the others were broad gauge. It runs between Inderlok (a Red Line station) and Brigadier Hoshiyar Singh, with a branch line connecting its Ashok Park Main station with Kirti Nagar on the Blue Line. The elevated line, built as part of Phase II, runs primarily along the busy NH 10 route in West Delhi. It has 24 stations, including an interchange, and covers . The line has India's first standard-gauge maintenance depot, at Mundka. It opened in two stages, with the Inderlok–Mundka section opening on 3 April 2010 and the Kirti Nagar–Ashok Park Main branch line opening on 27 August 2011. On 6 August 2012, to improve commuting in the National Capital Region, the government of India approved an extension from Mundka to Bahadurgarh in Haryana.
The stretch has seven stations (Mundka Industrial Area, Ghevra, Tikri Kalan, Tikri Border, Pandit Shree Ram Sharma, Bahadurgarh City and Brigadier Hoshiyar Singh) between Mundka and Bahadurgarh, and opened on 24 June 2018. Interchanges are available with the Red Line at Inderlok, the Blue Line at Kirti Nagar and the Pink Line at Punjabi Bagh West. Violet Line (Line 6) The Violet Line is the sixth metro line opened and the second standard-gauge corridor, after the Green Line. The line connects Raja Nahar Singh in Ballabgarh via Faridabad to Kashmere Gate in New Delhi, with overhead and the rest underground. The first section between Central Secretariat and Sarita Vihar opened on 3 October 2010, hours before the inaugural ceremony of the 2010 Commonwealth Games, and connects Jawaharlal Nehru Stadium (the venue for the games' opening and closing ceremonies). Completed in 41 months, it includes a bridge over the Indian Railways mainlines and a cable-stayed bridge across a road flyover; it connects several hospitals, tourist attractions, and an industrial estate. Service is provided at five-minute intervals. An interchange with the Yellow Line is available at Central Secretariat through an integrated concourse. On 14 January 2011, the remaining portion from Sarita Vihar to Badarpur was opened; this added three new stations to the network. The section between Mandi House and Central Secretariat was opened on 26 June 2014, and a section between ITO and Mandi House was opened on 8 June 2015. A extension south to Escorts Mujesar in Faridabad was inaugurated by Prime Minister Narendra Modi on 6 September 2015. All nine stations on the Badarpur–Escorts Mujesar (Faridabad) section of the metro's Phase III received the highest rating (platinum) for adherence to green-building norms from the Indian Green Building Council (IGBC). The awards were given to DMRC Managing Director Mangu Singh by IGBC chair P. C. Jain on 10 September 2015. The line's Faridabad corridor is the longest corridor outside Delhi: 11 stations and . On 28 May 2017, the ITO–Kashmere Gate corridor was opened by Union Minister of Urban Development Venkaiah Naidu and Chief Minister of Delhi Arvind Kejriwal. The underground section is popularly known as the Heritage Line. Interchanges are available with the Red Line at Kashmere Gate, with the Yellow Line at Kashmere Gate and Central Secretariat, with the Blue Line at Mandi House, with the Pink Line at Lajpat Nagar and with the Magenta Line at Kalkaji Mandir. Airport Express Line / Orange (Line 7) The Airport Express line runs from New Delhi to Yashobhoomi Dwarka Sector - 25, linking the New Delhi railway station and Indira Gandhi International Airport. The line was operated by Delhi Airport Metro Express Pvt. Limited (DAMEL), a subsidiary of Reliance Infrastructure (the line's concessionaire until 30 June 2013). It is now operated by DMRC. The line was built at a cost of , of which Reliance Infrastructure invested and will pay fees in a revenue-share model. It has six stations (Dhaula Kuan and Delhi Aerocity became operational on 15 August 2011), and some have check-in facilities, parking, and eateries. Rolling stock consists of six-coach trains, operating at ten-minute intervals, with a maximum speed of . Originally scheduled to open before the 2010 Commonwealth Games, the line failed to obtain the mandatory safety clearance and was opened on 23 February 2011 after a delay of about five months. 
Sixteen months after beginning operations, it was shut down for viaduct repairs on 7 July 2012. The line reopened on 22 January 2013. On 27 June 2013, Reliance Infrastructure told DMRC that it was unable to operate the line beyond 30 June of that year. DMRC took over the line on 1 July 2013 with a 100-person operations and maintenance team. In January 2015, DMRC reported that the line's ridership had increased about 30 percent after a fare reduction of up to 40 percent the previous July. DMRC announced a further fare reduction on 14 September 2015, with a maximum fare of ₹60 and minimum of ₹10 instead of ₹100 and ₹20. DMRC said that this was done to reduce crowding on the Blue Line, diverting some Dwarka-bound passengers to the Airport Express Line (which is underutilised and faster than the Blue Line). The line's speed was increased from to on 24 June 2023, enabling a 16-minute ride from New Delhi to IGI Airport. Interchanges are available with the Yellow Line at New Delhi, with the Blue Line at Dwarka Sector 21, with the Durgabai Deshmukh South Campus metro station of the Pink Line at Dhaula Kuan, and with Indian Railways at New Delhi. An extension to Dwarka Sector 25 was inaugurated on 17 September 2023 with the opening of the adjacent India International Convention Centre. Pink Line (Line 7) The Pink Line is the second new line of the Delhi Metro's third phase. It was opened on 14 March 2018, with an extension opening on 6 August. The Trilokpuri Sanjay Lake-to-Shiv Vihar section was opened on 31 October, and the Lajpat Nagar-to-Mayur Vihar Pocket I section opened on 31 December of that year. The final section, between Mayur Vihar Pocket I and Trilokpuri Sanjay Lake, was opened on 6 August 2021 after delays due to land-acquisition and rehabilitation issues. The Pink Line has 38 stations from Majlis Park to Shiv Vihar, both in North Delhi. With a length of , it is the Delhi Metro's longest line. The mostly-elevated line covers Delhi in a U-shaped pattern. It is also known as the Ring Road Line, since it runs along the busy Ring Road. The line has interchanges with most of the metro's other lines, including with the Red Line at Netaji Subhash Place and Welcome, with the Yellow Line at Azadpur and Dilli Haat – INA, with the Blue Line at Rajouri Garden, Mayur Vihar Phase-I, Anand Vihar and Karkarduma, with the Green Line at Punjabi Bagh West, with the Airport Express (at its Dhaula Kuan station) at Durgabai Deshmukh South Campus, with the Violet Line at Lajpat Nagar, with Indian Railways at Hazrat Nizamuddin and Anand Vihar Terminal, and with the ISBTs at Anand Vihar and Sarai Kale Khan. The Pink Line reaches the Delhi Metro's highest point at Dhaula Kuan, passing over the Dhaula Kuan grade-separator flyovers and the Airport Express Line. Magenta Line (Line 8) The Magenta Line is the Delhi Metro's first new line of its third phase. The Botanical Garden-to-Kalkaji Mandir section opened on 25 December 2017, and the remainder of the line opened on 28 May 2018. It has 26 stations, from Krishna Park Extension to Botanical Garden. The line directly connects to Terminal 1D of Indira Gandhi International Airport. The Hauz Khas station on this line and the Yellow Line is the deepest metro station, at a depth of . The Magenta Line has interchanges with the Yellow Line at Hauz Khas, with the Blue Line at Janakpuri West and Botanical Garden, and with the Violet Line at Kalkaji Mandir. India's first driverless train service began on the Magenta Line in December 2020.
Grey Line (Line 9) The Grey Line (also known as Line 9), the metro's shortest, runs from Dwarka to Dhansa Bus Stand in western Delhi. The line has four stations (Dhansa Bus Stand, Najafgarh, Nangli and Dwarka), and has an interchange with the Blue Line at Dwarka. The Najafgarh-to-Dwarka section was opened on 4 October 2019. The extension to Dhansa Bus Stand was scheduled to open in December 2020, but construction was delayed by the COVID-19 pandemic; it opened on 18 September 2021. Network The Delhi Metro has been undergoing construction in phases. Phase I consisted of 59 stations and of route length, of which is underground and at grade or elevated. The inauguration of the Dwarka–Barakhamba Road corridor of the Blue Line completed Phase I in October 2006. Phase II consists of of route length and 86 stations, and is completed; the first section opened in June 2008, and the last section opened in August 2011. Phase III consists of 109 stations, three new lines and seven route extensions, totaling , at a cost of . Most of it was completed on 5 April 2019, except for a small section of the Pink Line between the Mayur Vihar Pocket 1 and Trilokpuri Sanjay Lake stations (opened on 6 August 2021), the Grey Line extension from Najafgarh to Dhansa Bus Stand (opened on 18 September 2021), and the Airport Express extension from Dwarka Sector 21 to Yashobhoomi Dwarka Sector 25 (completed on 17 September 2023). Phase IV, with six lines totaling , was finalized in July 2015. Of this, across three lines (priority corridors) with 45 stations was approved by the government of India for construction on 7 March 2019. The Golden Line was lengthened in October 2020, making the project long. A one-station extension of the Magenta Line towards R.K. Ashram Marg opened on 5 January 2025, currently reaching as far as Krishna Park Extension; the rest of the network, along with planned routes, is expected to be completed by 2029 at the earliest. Operations Trains operate between 05:00 and 00:00 at headways ranging from one to two minutes during peak hours to five to ten minutes off-peak. They typically travel up to , and stop for about 20 seconds at each station. Automated station announcements are in Hindi and English. Many stations have ATMs, food outlets, cafés, convenience stores and mobile recharge facilities. Eating, drinking, smoking, and chewing gum are prohibited. The metro has a sophisticated fire alarm system for advance warning in emergencies, and fire retardant material is used in trains and stations. Navigation information is available on Google Maps. Since October 2010, the first coach of every train is reserved for women; the last coach is also reserved when the train changes tracks at the terminal stations on the Red, Green and Violet Lines. The mobile Delhi Metro Rail app has been introduced for iPhone and Android users with information such as the location of the nearest metro station, fares, parking availability, nearby tourist attractions, security and emergency helpline numbers. Security Security has been provided by the CISF Unit DMRC since 2007. Closed-circuit cameras monitor trains and stations, and their feeds are monitored by the CISF and Delhi Metro authorities. Over 7,000 CISF personnel have been deployed for security in addition to metal detectors, X-ray baggage-inspection systems, and detection dogs. Eighteen Delhi Metro Rail Police stations have been established, and about 5,200 CCTV cameras have been installed. Each underground station has 45 to 50 cameras, and each elevated station has 16 to 20 cameras.
The cameras are monitored by the CISF and the Delhi Metro Rail Corporation. Intercoms are provided in each train car for emergency communication between passengers and the train operator. Periodic security drills are carried out at stations and on trains. The DMRC is considering raising station walls and railings for passenger safety. Ticketing The metro's fares were last revised on 10 October 2017, based on the recommendation of the 4th Fare Fixation Committee in May 2016. Metro commuters have five choices for ticket purchases: RFID token: RFID tokens are valid only for a single journey on the day of purchase. Their value depends on the distance travelled, with fares for a single journey ranging from to . Fares are calculated based on the distance between the origin and destination stations. As of 2024 they are no longer in use. Smart card: Smart cards are available for longer terms, and are the most convenient for frequent commuters. Valid for ten years from the date of purchase or the date of the last recharge, they are available in denominations of to . A 10-percent discount is given, with an additional 10-percent discount for off-peak travel. A new card has a deposit, refundable on its return before expiry if physically undamaged. For women commuters, the Delhi government unsuccessfully proposed a fare-exemption scheme. A common ticketing facility, allowing commuters to use smart cards on Delhi Transport Corporation (DTC) buses and the metro, was introduced on 28 August 2018. Tourist card: Tourist cards can be used for unlimited travel on the Delhi Metro for short periods of time. There are two kinds of tourist cards, valid for one and three days. The cost of a one-day card is and a three-day card is , including a refundable deposit of paid at purchase. National Common Mobility Card: Part of the Indian government's One Nation, One Card policy, the National Common Mobility Card is an inter-operable transport card enabling a user to pay for travel, tolls, shopping and cash. Enabled through RuPay, the NCMC was commissioned on the Airport Express Line on 28 December 2020. In June 2023, DMRC completed the upgrade of its automatic fare collection (AFC) systems to be compliant with NCMC services. QR code based ticketing: A Delhi Metro QR ticket is a mobile-based ticket allowing travel like a token or recharge card. A ticket can be bought online with the RIDLR app. For entry and exit, the QR ticket is scanned at the AFC gates. Similar to mobile-based tickets, paper QR tickets can be bought at a station. Problems As the metro has expanded, high ridership on new trains has led to increasing overcrowding and delays. To alleviate the problem, eight-coach trains have been introduced on the Yellow and Blue Lines and more-frequent trains have been proposed. Infrequent, overcrowded and erratic feeder bus services connecting stations to nearby localities have also been a concern. Although the quality and cleanliness of the Delhi Metro have been praised, rising fares have been criticized; fares are higher than those of the bus services the metro replaced. According to a recent study, Delhi Metro fares are the second-most unaffordable among metros charging less than US$0.5 per ride. Another study finds that Delhi Metro may also have a low ridership problem compared to its size and may not be generating the amount of traffic a metro system generates. Feeder buses DMRC began its feeder bus service in 2007 with a fleet of 117 minibuses on 16 routes. 
In January 2024, it had a fleet of 47 electric feeder buses on five routes to nine metro stations: Kashmere Gate, Gokulpuri, Shastri Park, Laxmi Nagar, East Vinod Nagar - Mayur Vihar-II, Anand Vihar, Dilshad Garden, Vishwavidyalaya, and GTB Nagar. The routes are: MC-127: Kashmere Gate to Harsh Vihar MC-137: Shastri Park to Mayur Vihar Phase-III MC-137 (Mini): Udyog Bhawan to Vanijya Bhawan MC-341: Mayur Vihar Phase-III to Harsh Vihar ML-06: Vishwavidyalaya to Shankarpura Ridership Note that DMRC reports metrics that differ from the daily ridership figures below. DMRC reports "daily passenger journeys" - for example, in 2022–23, DMRC reported average daily passenger journeys of approximately 4.63 million, compared with 5.16 million per day in 2019–20 (pre-COVID). Metro service was suspended on 25 March 2020 due to the COVID-19 pandemic. Operations resumed on 12 September 2020, and the average daily ridership fell to 8.78 lakh (0.88 million) in FY 2020–21. The maximum daily ridership (passenger journeys) of 7.109 million was reported on 13 February 2024. * Includes Rapid Metro Gurgaon ^ From 2019 onwards, the DMRC changed the ridership calculation to count every trip taken by a passenger on a line. This means that a passenger who takes two connections will count three times towards ridership. This differs from the more standard practice, applied in other metro systems, of counting entire journeys. Finances Summary financials The Delhi Metro has been operating with a loss in EBT (earnings before taxes) since 2010, although the loss has shrunk since 2015–16. Its EBITDA (earnings before interest, taxes, depreciation, and amortization) margin declined from 73 percent in FY 2007 to 27 percent in FY 2016–17 before improving to 30 percent in 2017–18. The metro introduced a station naming-rights policy in 2014, awarded through an open e-tendering process, to generate non-fare revenue. Funding and Capitalisation DMRC is owned by the government of the National Capital Territory of Delhi and the government of India. Total debt was in March 2016, and equity capital was . The cost of the debt is zero percent for Union Government and Delhi Government loans, and from 0.01 to 2.3 percent for Japan International Cooperation Agency (JICA) loans. On 31 March 2016, was paid-up capital; the rest is reserves and surplus. Depots Delhi Metro has 15 depots. Some depots, such as Shastri Park and Yamuna Bank, are near their respective at-grade station complexes; others, such as Sarita Vihar and Mundka, are joined indirectly to the main line. The Najafgarh depot is unique in housing trains from the Blue and Grey Lines; the Sarita Vihar depot will house Violet and Golden Line trains in the future. The Phase III Kalindi Kunj and Vinod Nagar depots were built differently due to land-acquisition issues; the former has an extra elevated stabling yard adjacent to the Jasola Vihar - Shaheen Bagh station, and the latter has two sub-depots (one with two floors). An elevated stabling yard was also built adjacent to the Noida Electronic City station, but it is not considered a depot. As part of Phase IV, the Mukundpur depot will be expanded to accommodate the Pink and Magenta Lines without land-acquisition issues. The metro has two rail gauges. Phase I lines have broad-gauge rolling stock, and three Phase II lines have standard-gauge rolling stock.
Trains are maintained at seven depots at Khyber Pass and Sultanpur for the Yellow Line, Mundka for the Green Line, Najafgarh and Yamuna Bank for the Blue Line, Shastri Park for the Red Line, and Sarita Vihar for the Violet Line. Maglev trains were considered for some Phase III lines, but DMRC decided to continue with conventional rail in August 2012. By 31 March 2015, the company had a total of 1,306 coaches (220 trains). In addition to line extensions, two new lines (7 and 8) were proposed in Phase III. Unattended train operation (UTO) will be in 486 coaches (81 six-car trains). An additional 258 broad-gauge (BG) coaches for Lines 1 to 4 and 138 standard-gauge (SG) coaches for Lines 5 and 6 were proposed. At the end of Phase III, there would be 2,188 coaches (333 trains). Except for a few four-car trains on Line 5, 93 percent of the trains would have a six- or eight-car configuration at the end of Phase III. Broad gauge Rolling stock is provided by two major suppliers. Phase I rolling stock was supplied by a consortium of companies (Hyundai Rotem, Mitsubishi Corporation, and MELCO). The coaches look similar to the MTR Rotem EMU, but have only four doors; sliding doors, instead of plug doors, are used. The coaches were initially built in South Korea by Rotem, then in Bangalore by BEML through a technology transfer arrangement. The trains consist of four lightweight stainless-steel coaches with vestibules (permitting movement throughout them) and can carry up to 1,500 passengers, with 50 seated and 330 standing passengers per coach. The coaches are air-conditioned, equipped with automatic doors, microprocessor-controlled brakes and secondary air suspension, and can maintain an average speed of over a distance of . The system is extendable to eight coaches, and platforms have been designed accordingly. Phase II rolling stock is supplied by Bombardier Transportation, which received an order for 614 cars at a cost of about . Although the initial trains were made in Görlitz, Germany and Sweden, the remainder will be built at Bombardier's factory in Savli (near Vadodara). The four- and six-car trains have a capacity of 1,178 and 1,792 commuters each, respectively. Coaches have closed-circuit television (CCTV) cameras with eight-hour backup, chargers for cell phones and laptops, and improved climate control. Standard gauge Standard-gauge rolling stock is manufactured by BEML at its factory in Bangalore, and most of these trains are supplied to BEML by Hyundai Rotem. The four-car trains have a capacity of 1,506 passengers, accommodating 50 seated and 292 standing passengers in each coach. The trains, with CCTV cameras in and outside the coaches, chargers for mobile phones and laptops, improved climate control and microprocessor-controlled disc brakes, will be capable of maintaining an average speed of over a distance of . Airport Express Eight six-car trains supplied by CAF Beasain were imported from Spain. CAF held five-percent equity in the DAME project, and Reliance Infrastructure held the remaining 95 percent before DMRC took over operations. Trains on this line have noise reduction and padded fabric seats. Coaches are equipped with LCD screens for entertainment and flight information. Trains have an event recorder which can withstand high levels of temperature and impact, and wheels have a flange-lubrication system for reduced noise and improved comfort. 
Signaling and telecommunication The metro uses cab signaling with a centralised automatic train control system consisting of automatic operation, protection and signaling modules. A 380 MHz digital trunked TETRA radio communication system from Motorola Solutions is used on all lines to carry voice and data information. For the Blue Line, Siemens supplied the electronic interlocking Sicas, the Vicos OC 500 operation-control system and the LZB 700 M automation-control system. An integrated system with optical fibre cable, on-train radio, CCTV, and a centralised clock and public address system is used for telecommunication during normal operations and emergencies. Alstom supplied the signaling system for the Red and Yellow Lines, and Bombardier Transportation supplied its CITYFLO 350 signaling system for the Green and Violet Lines. The Airport Express line introduced WiFi service at all its stations on 13 January 2012. Connectivity in trains is expected in the future. WiFi service is provided by YOU Broadband and Cable India. In August 2017, Wifi service began at all the 50 stations of the Blue Line. A fully-automated, operator-less train system was offered to the metro by the French technology firm Thales. Environment and aesthetics The metro has received awards for environmentally-friendly practices from organisations including the United Nations, RINA, and the International Organization for Standardization; it is the second metro in the world, after the New York City Subway, to be ISO 14001 certified for environmentally-friendly construction. By March 2023, 64 metro stations, four sections on the central verge between piers, and 12 other Phase I and II locations on the network have rainwater harvesting for environmental protection; all 27 Phase-IV elevated stations will also harvest rainwater, and 52 recharge pits are being constructed for this purpose. It is the world's first railway project to earn carbon credits after being registered with the United Nations under the UN's Clean Development Mechanism, and has earned 400,000 carbon credits with the regenerative braking systems on its trains. DMRC installed the metro's first rooftop solar power plant at the Dwarka Sector-21 station in 2014. The network received 35 percent of its energy from renewable sources by April 2023, which it intends to increase to 50 percent by 2031. Of this, 30 percent comes from the Rewa Ultra Mega Solar park in Madhya Pradesh; four percent (50 MWp) comes from rooftop solar panels, and one percent comes from a waste-to-energy plant in Ghazipur. DMRC has installed solar panels at 142 locations: 15 depots, 93 stations, and 34 other buildings. The metro has been promoted as an integral part of community infrastructure, and artwork depicting the local way of life has been displayed at stations. Students at local art colleges have designed murals at metro stations, and the viaduct pillars of some elevated sections have been decorated with mosaic murals created by local schoolchildren. The metro station at INA Colony has a gallery of artwork and handicrafts from across India, and all stations on the Central Secretariat – Qutub Minar section of the Yellow Line have panels depicting Delhi's architectural heritage. The Nobel Memorial Wall at Rajiv Chowk has portraits of the seven Indian Nobel laureates: Rabindranath Tagore, CV Raman, Hargobind Khorana, Mother Teresa, Subrahmanyan Chandrasekhar, Amartya Sen and Venkatraman Ramakrishnan. 
In popular culture A number of films have been shot in the Delhi Metro; the first was Bewafaa in November 2005. Delhi-6, Love Aaj Kal, PK, and Paa also have scenes filmed inside Delhi Metro trains and stations. Bang Bang! was filmed near the Mayur Vihar Extension metro station in March 2014, and the 2019 film War was filmed in the metro.
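To make the ridership counting note above concrete: from 2019 onwards DMRC counts one ridership unit per line used, rather than one per complete journey. The Python sketch below contrasts the two counting methods on a handful of hypothetical journeys; the journey data are invented for illustration and are not DMRC figures.

```python
# Hypothetical journeys, each given as the sequence of lines used; invented
# examples, not DMRC data, illustrating the counting note in the Ridership section.
journeys = [
    ["Yellow"],                    # direct trip, no interchange
    ["Blue", "Yellow"],            # one interchange
    ["Red", "Pink", "Magenta"],    # two interchanges
]

# Journey-based metric: one count per complete journey.
journey_count = len(journeys)

# Line-trip metric (post-2019 style): one count per line used, so a passenger
# making two connections (three lines) is counted three times.
line_trip_count = sum(len(lines) for lines in journeys)

print(journey_count)    # 3
print(line_trip_count)  # 6
```

The same travel therefore produces a higher headline figure under the line-trip method, which is why the note cautions against comparing it directly with systems that count entire journeys.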
Technology
India
null
30875080
https://en.wikipedia.org/wiki/Abyssinian%20cat
Abyssinian cat
The Abyssinian is a breed of cat with a distinctive "ticked" tabby coat, in which individual hairs are banded with different colours. They are also known simply as Abys. The first members of the breed to be exhibited in England were brought there from Abyssinia (now known as Ethiopia), hence the name. Genetic studies place the breed's origins in Southeast Asia and the coasts of the Indian Ocean, however. It is possible that the breed was introduced to Abyssinia by travelers who had stopped in Calcutta. Once a comparatively obscure breed, the Abyssinian had become one of the top five most popular cat breeds by 2016. The breed's distinctive appearance, seeming long, lean and finely coloured compared to other cats, has been analogized to that of human fashion models. Personality-wise, the cats traditionally display active, curious attitudes in which they frequently follow owners around and encourage play. Their dog-like characteristics also involve a particular sense of affection and desire for interaction. Abys have a distinctive wildcat look with their ticked coat and large erect ears. They are a highly social breed and can be demanding of attention. They do well in multi-cat households due to their social nature. Not a lap cat, Abyssinians are in constant motion, either exploring or playing. History What is thought to be the earliest known designated Abyssinian cat is in an exhibit still residing in the Leiden Zoological Museum in The Netherlands. It was purchased around 1834-1836 from a supplier of small wild cat exhibits as a taxidermy specimen and was labeled by the museum founder as "Patrie, domestica India." The first example of a domesticated Abyssinian, however, involves a cat brought to England in 1868 by the British Lt. General Sir Robert Napier on his return from the Abyssinian War. The cat was given the name "Zula" and won first prize in the December 1871 Crystal Palace cat show. Many modern Abyssinian breeders dispute that Zula was the first domestic Abyssinian, arguing that the existing illustrations of Zula portray the cat as having ears too small for an Abyssinian and a coat too waved and long. The breed was nearly wiped out in the United Kingdom following the Second World War and an outbreak of feline leukaemia virus, resulting in cats being imported from places such as Holland, America, Scandinavia, Australia, and New Zealand. The Abyssinian is one of the oldest established cat breeds, being recognised in 1929 by the Governing Council of the Cat Fancy. The breed was developed in the United Kingdom with references dating back to the 1890s. Description Appearance The Abyssinian is a lithe, fine-boned, muscular, medium-to-large-sized cat. The average weight ranges between , with height ranging between . The head is moderately wedge-shaped, with a slight break at the muzzle, and nose and chin ideally forming a straight vertical line when viewed in profile. They have alert, relatively large pointed ears. The eyes are almond-shaped and are gold, green, hazel or copper depending on coat colour. The legs tend to be long in proportion to a graceful body, with small oval paws; the tail is likewise long and tapering. Abyssinian kittens are born with dark coats that gradually lighten as they mature, usually over several months. The coat is short, and is ideally fine, not soft, dense, close-lying and silky to the touch.
The ticked or agouti effect that is the trademark of the breed—genetically a variant of the tabby pattern—should be uniform over the body, although the ridge of the spine and tail, back of the hind legs and the pads of the paws are always noticeably darker. Each hair has a light base with three or four bands of additional colour growing darker towards the tip. The base colour should be as clear as possible; any extensive intermingling with grey is considered a serious fault. A tendency to white on the chin is common but likewise must be minimal. The typical tabby M-shaped marking is often found on the forehead. The breed's original colour standard is a warm deep reddish-brown base with black ticking, known as "usual" in the United Kingdom, "tawny" in Australia, and "ruddy" elsewhere. Sorrel (also called cinnamon or red), a lighter coppery base with chocolate brown ticking, is a unique mutation of this original pattern. Other variants have been introduced by outcrossing to the Burmese and other shorthaired breeds, notably blue (on a warm beige base) and fawn (on a softer creamy peach base). The less common chocolate and lilac are not recognized in the Cat Fancier's Association (CFA) breed standard but have been granted full champion status in The International Cat Association (TICA) and in the UK. The UK also recognizes the Silver Abyssinian, in which the base coat is a pure silvery white with black (called "usual silver"), blue, cream or sorrel ticking. Various other colour combinations are in development, including the "torbie", in which a patched tortoiseshell pattern in any of these colours is visible under the tabby banding. The breed owes their distinctive coat to a dominant mutant gene known as Ta. In 2007, the first cat to have its entire genome published was an Abyssinian named Cinnamon. Behaviour Veterinarian Joan O. Joshua has written that the "dog-like attachment to the owners" of Abyssinian and Burmese cats causes "greater dependence on human contacts". This stands in contrast to the mere "tolerant acceptance of human company" based around "comforts" that multiple other breeds display. With their interest in playing with their owners combined with their curious intelligence, Abyssinians are sometimes called the "Clowns of the Cat Kingdom". They have soft chirrup-like vocalizations which do not sound like the expected "meow". A study comparing Oriental, Siamese and Abyssinian kittens to Norwegian Forest cat kittens found that the former group was more likely to recede and hide as well as display other 'shy' behaviour. Health Familial renal amyloidosis or AA amyloidosis, a kidney disorder due to a mutation in the AA amyloid protein gene, has been seen in Abyssinians. The Abyssinian has had severe problems with blindness caused by a hereditary retinal degeneration due to mutations in the rdAc gene. However, the prevalence has been reduced from 45% to less than 4% in 2008 in the country of Sweden. An Australian analysis found the Abyssinian to be over-represented in cases of feline infectious peritonitis when compared to the expected frequency based on census data (4.4% versus 1.5%). An American study had similar results with an odds ratio of 8.98. In a review of over 5,000 cases of urate urolithiasis the Abyssinian was significantly under-represented, with only one of the recorded cases belonging to an Abyssinian. The 2008 study "The Ascent of Cat Breeds: Genetic Evaluations of Breeds and Worldwide Random-bred Populations" by Lipinski et al. 
conducted at UC Davis by a team led by the feline geneticist Dr Leslie Lyons, found that the Abyssinian has a low level of genetic diversity, with a heterozygosity value of 0.45 within a range of 0.34–0.69 for all breeds studied, and has genetic markers common to both Southeast Asian and Western breeds, indicating that cats from both Asia and Europe were used to create the breed. The Abyssinian was found to be predisposed to feline atopic dermatitis in a retrospective study of cases of the disease. The Abyssinian is predisposed to psychogenic alopecia. An American study found the Abyssinian to be at increased risk of aortic thromboembolism, with an odds ratio of 6.03. A retrospective study in the US found the Abyssinian to be predisposed to acquired myasthenia gravis, with an odds ratio of 4.97. Mycobacterium avium complex infection is a very rare disease; 10 of 12 recorded cases were Abyssinians. A study of cases of patellar luxation in the USA and in Europe found that 38% (26/69) of Abyssinians had the condition, compared with 1/84 for other breeds. The Abyssinian is the cat breed most commonly affected by progressive retinal atrophy. The condition is caused by two separate mutations in the breed. Early onset PRA is caused by an autosomal dominant mutation in the CRX gene. Late onset PRA is caused by an autosomal recessive mutation in the CEP290 gene. The Abyssinian is one of the breeds more commonly affected by pyruvate kinase deficiency. An autosomal recessive mutation of the PKLR gene is responsible for the condition in the breed.
Biology and health sciences
Cats
Animals
30876044
https://en.wikipedia.org/wiki/Plant%20breeding
Plant breeding
Plant breeding is the science of changing the traits of plants in order to produce desired characteristics. It is used to improve the quality of plant products for use by humans and animals. The goals of plant breeding are to produce crop varieties that boast unique and superior traits for a variety of applications. The most frequently addressed agricultural traits are those related to biotic and abiotic stress tolerance, grain or biomass yield, end-use quality characteristics such as taste or the concentrations of specific biological molecules (proteins, sugars, lipids, vitamins, fibers) and ease of processing (harvesting, milling, baking, malting, blending, etc.). Plant breeding can be performed using many different techniques, ranging from the selection of the most desirable plants for propagation, to methods that make use of knowledge of genetics and chromosomes, to more complex molecular techniques. Genes in a plant are what determine what type of qualitative or quantitative traits it will have. Plant breeders strive to create a specific outcome of plants and potentially new plant varieties, and in the course of doing so, narrow down the genetic diversity of that variety to a specific few biotypes. It is practiced worldwide by individuals such as gardeners and farmers, and by professional plant breeders employed by organizations such as government institutions, universities, crop-specific industry associations or research centers. International development agencies believe that breeding new crops is important for ensuring food security by developing new varieties that are higher yielding, disease resistant, drought tolerant or regionally adapted to different environments and growing conditions. A recent study shows that without plant breeding, Europe would have produced 20% fewer arable crops over the last 20 years, consuming an additional of land and emitting of carbon. Wheat species created for Morocco are currently being crossed with plants to create new varieties for northern France. Soy beans, which were previously grown predominantly in the south of France, are now grown in southern Germany. History Plant breeding started with sedentary agriculture and particularly the domestication of the first agricultural plants, a practice which is estimated to date back 9,000 to 11,000 years. Initially early farmers simply selected food plants with particular desirable characteristics, and employed these as progenitors for subsequent generations, resulting in an accumulation of valuable traits over time. Grafting technology had been practiced in China before 2000 BCE. By 500 BCE grafting was well established and practiced. Gregor Mendel (1822–84) is considered the "father of genetics". His experiments with plant hybridization led to his establishing laws of inheritance. Genetics stimulated research to improve crop production through plant breeding. Selective breeding played a crucial role in the Green Revolution of the 20th century. Modern plant breeding is applied genetics, but its scientific basis is broader, covering molecular biology, cytology, systematics, physiology, pathology, entomology, chemistry, and statistics (biometrics). It has also developed its own technology. Classical plant breeding One major technique of plant breeding is selection, the process of selectively propagating plants with desirable characteristics and eliminating or "culling" those with less desirable characteristics. 
Another technique is the deliberate interbreeding (crossing) of closely or distantly related individuals to produce new crop varieties or lines with desirable properties. Plants are crossbred to introduce traits/genes from one variety or line into a new genetic background. For example, a mildew-resistant pea may be crossed with a high-yielding but susceptible pea, the goal of the cross being to introduce mildew resistance without losing the high-yield characteristics. Progeny from the cross would then be crossed with the high-yielding parent to ensure that the progeny were most like the high-yielding parent (backcrossing). The progeny from that cross would then be tested for yield (selection, as described above) and mildew resistance, and high-yielding resistant plants would be further developed. Plants may also be crossed with themselves to produce inbred varieties for breeding. Pollinators may be excluded through the use of pollination bags. Classical breeding relies largely on homologous recombination between chromosomes to generate genetic diversity. The classical plant breeder may also make use of a number of in vitro techniques such as protoplast fusion, embryo rescue or mutagenesis (see below) to generate diversity and produce hybrid plants that would not exist in nature. Traits that breeders have tried to incorporate into crop plants include: Improved quality, such as increased nutrition, improved flavor, or greater beauty Increased yield of the crop Increased tolerance of environmental pressures (salinity, extreme temperature, drought) Resistance to viruses, fungi and bacteria Increased tolerance to insect pests Increased tolerance of herbicides Longer storage period for the harvested crop Before World War II Gartons Agricultural Plant Breeders in England was established in 1880 by John Garton, who was one of the first to commercialize new varieties of agricultural crops created through cross-pollination; it became a public company in 1898. The firm's first introduction was an oat variety, one of the first agricultural grain varieties bred from a controlled cross, introduced to commerce in 1892. In the early 20th century, plant breeders realized that Gregor Mendel's findings on the non-random nature of inheritance could be applied to seedling populations produced through deliberate pollinations to predict the frequencies of different types. Wheat hybrids were bred to increase the crop production of Italy during the so-called "Battle for Grain" (1925–1940). Heterosis was explained by George Harrison Shull. It describes the tendency of the progeny of a specific cross to outperform both parents. The recognition of the usefulness of heterosis for plant breeding has led to the development of inbred lines that reveal a heterotic yield advantage when they are crossed. Maize was the first species where heterosis was widely used to produce hybrids. Statistical methods were also developed to analyze gene action and distinguish heritable variation from variation caused by environment. In 1933 another important breeding technique, cytoplasmic male sterility (CMS), developed in maize, was described by Marcus Morton Rhoades. CMS is a maternally inherited trait that makes the plant produce sterile pollen. This enables the production of hybrids without the need for labor-intensive detasseling. These early breeding techniques resulted in large yield increases in the United States in the early 20th century. 
Similar yield increases were not produced elsewhere until after World War II, when the Green Revolution increased crop production in the developing world in the 1960s. After World War II Following World War II a number of techniques were developed that allowed plant breeders to hybridize distantly related species, and artificially induce genetic diversity. When distantly related species are crossed, plant breeders make use of a number of plant tissue culture techniques to produce progeny from otherwise fruitless mating. Interspecific and intergeneric hybrids are produced from a cross of related species or genera that do not normally sexually reproduce with each other. These crosses are referred to as wide crosses. For example, the cereal triticale is a wheat and rye hybrid. The cells in the plants derived from the first generation created from the cross contained an uneven number of chromosomes, and as a result the plants were sterile. The cell division inhibitor colchicine was used to double the number of chromosomes in the cell and thus allow the production of a fertile line. Failure to produce a hybrid may be due to pre- or post-fertilization incompatibility. If fertilization is possible between two species or genera, the hybrid embryo may abort before maturation. If this does occur the embryo resulting from an interspecific or intergeneric cross can sometimes be rescued and cultured to produce a whole plant. Such a method is referred to as embryo rescue. This technique has been used to produce New Rice for Africa, an interspecific cross of Asian rice Oryza sativa and African rice O. glaberrima. Hybrids may also be produced by a technique called protoplast fusion. In this case protoplasts are fused, usually in an electric field. Viable recombinants can be regenerated in culture. Chemical mutagens like ethyl methanesulfonate (EMS) and dimethyl sulfate (DMS), radiation, and transposons are used for mutagenesis. Mutagenesis is the generation of mutants. The breeder hopes that mutants with desirable traits can then be bred with other cultivars – a process known as mutation breeding. Classical plant breeders also generate genetic diversity within a species by exploiting a process called somaclonal variation, which occurs in plants produced from tissue culture, particularly plants derived from callus. Induced polyploidy and the addition or removal of chromosomes using a technique called chromosome engineering may also be used. When a desirable trait has been bred into a species, a number of crosses to the favored parent are made to make the new plant as similar to the favored parent as possible. Returning to the example of the mildew resistant pea being crossed with a high-yielding but susceptible pea, to make the mildew resistant progeny of the cross most like the high-yielding parent, the progeny will be crossed back to that parent for several generations (see backcrossing). This process removes most of the genetic contribution of the mildew resistant parent. Classical breeding is therefore a cyclical process. With classical breeding techniques, the breeder does not know exactly what genes have been introduced to the new cultivars. Some scientists therefore argue that plants produced by classical breeding methods should undergo the same safety testing regime as genetically modified plants. 
There have been instances where plants bred using classical techniques have been unsuitable for human consumption, for example the poison solanine was unintentionally increased to unacceptable levels in certain varieties of potato through plant breeding. New potato varieties are often screened for solanine levels before reaching the marketplace. Even with the very latest in biotech-assisted conventional breeding, incorporation of a trait takes an average of seven generations for clonally propagated crops, nine for self-fertilising, and seventeen for cross-pollinating. Modern plant breeding Modern plant breeding may use techniques of molecular biology to select, or in the case of genetic modification, to insert, desirable traits into plants. Application of biotechnology or molecular biology is also known as molecular breeding. Marker assisted selection Sometimes many different genes can influence a desirable trait in plant breeding. The use of tools such as molecular markers or DNA fingerprinting can map thousands of genes. This allows plant breeders to screen large populations of plants for those that possess the trait of interest. The screening is based on the presence or absence of a certain gene as determined by laboratory procedures, rather than on the visual identification of the expressed trait in the plant. The purpose of marker assisted selection, or plant genome analysis, is to identify the location and function (phenotype) of various genes within the genome. If all of the genes are identified it leads to genome sequence. All plants have varying sizes and lengths of genomes with genes that code for different proteins, but many are also the same. If a gene's location and function is identified in one plant species, a very similar gene likely can also be found in a similar location in another related species genome. Doubled haploidy and reverse breeding Homozygous plants with desirable traits can be produced from heterozygous starting plants, if a haploid cell with the alleles for those traits can be produced, and then used to make a doubled haploid. The doubled haploid will be homozygous for the desired traits. Furthermore, two different homozygous plants created in that way can be used to produce a generation of F1 hybrid plants which have the advantages of heterozygosity and a greater range of possible traits. Thus, an individual heterozygous plant chosen for its desirable characteristics can be converted into a heterozygous variety (F1 hybrid) without the necessity of vegetative reproduction but as the result of the cross of two homozygous/doubled haploid lines derived from the originally selected plant. This shortcut has been dubbed 'reverse breeding'. Plant tissue culturing can produce haploid or double haploid plant lines and generations. This cuts down the genetic diversity taken from that plant species in order to select for desirable traits that will increase the fitness of the individuals. Using this method decreases the need for breeding multiple generations of plants to get a generation that is homogeneous for the desired traits, thereby saving much time over the natural version of the same process. There are many plant tissue culturing techniques that can be used to achieve haploid plants, but microspore culturing is currently the most promising for producing the largest numbers of them. Genetic modification Genetic modification of plants is achieved by adding a specific gene or genes to a plant, or by knocking down a gene with RNAi, to produce a desirable phenotype. 
The plants resulting from adding a gene are often referred to as transgenic plants. If genes of the same species or of a crossable plant are used for genetic modification, under the control of their native promoter, then the resulting plants are called cisgenic plants. Sometimes genetic modification can produce a plant with the desired trait or traits faster than classical breeding because the majority of the plant's genome is not altered. To genetically modify a plant, a genetic construct must be designed so that the gene to be added or removed will be expressed by the plant. To do this, a promoter to drive transcription, a termination sequence to stop transcription of the new gene, and the gene or genes of interest must be introduced to the plant. A marker for the selection of transformed plants is also included. In the laboratory, antibiotic resistance is a commonly used marker: plants that have been successfully transformed will grow on media containing antibiotics; plants that have not been transformed will die. In some instances markers for selection are removed by backcrossing with the parent plant prior to commercial release. The construct can be inserted in the plant genome by genetic recombination using the bacteria Agrobacterium tumefaciens or A. rhizogenes, or by direct methods like the gene gun or microinjection. Using plant viruses to insert genetic constructs into plants is also a possibility, but the technique is limited by the host range of the virus. For example, Cauliflower mosaic virus (CaMV) only infects cauliflower and related species. Another limitation of viral vectors is that the virus is not usually passed on to the progeny, so every plant has to be inoculated. The majority of commercially released transgenic plants are currently limited to plants that have introduced resistance to insect pests and herbicides. Insect resistance is achieved through incorporation of a gene from Bacillus thuringiensis (Bt) that encodes a protein that is toxic to some insects. For example, when the cotton bollworm, a common cotton pest, feeds on Bt cotton, it ingests the toxin and dies. Herbicides usually work by binding to certain plant enzymes and inhibiting their action. The enzymes that the herbicide inhibits are known as the herbicide's "target site". Herbicide resistance can be engineered into crops by expressing a version of the target site protein that is not inhibited by the herbicide. This is the method used to produce glyphosate resistant ("Roundup Ready") crop plants. Genetic modification can further increase yields by increasing stress tolerance to a given environment. Stresses such as temperature variation are signalled to the plant via a cascade of signalling molecules which activate transcription factors to regulate gene expression. Overexpression of particular genes involved in cold acclimation has been shown to produce more resistance to freezing, which is one common cause of yield loss. Genetic modification of plants that can produce pharmaceuticals (and industrial chemicals), sometimes called pharming, is a rather radical new area of plant breeding. The debate surrounding genetically modified food during the 1990s peaked in 1999 in terms of media coverage and risk perception, and continues today – for example, "Germany has thrown its weight behind a growing European mutiny over genetically modified crops by banning the planting of a widely grown pest-resistant corn variety." 
The debate encompasses the ecological impact of genetically modified plants, the safety of genetically modified food and concepts used for safety evaluation like substantial equivalence. Such concerns are not new to plant breeding. Most countries have regulatory processes in place to help ensure that new crop varieties entering the marketplace are both safe and meet farmers' needs. Examples include variety registration, seed schemes, regulatory authorizations for GM plants, etc. Breeding and the microbiome Industrial breeding of plants has unintentionally altered how agricultural cultivars associate with their microbiome. In maize, for example, breeding has altered the nitrogen-cycling taxa recruited to the rhizosphere, with more modern lines recruiting fewer nitrogen-fixing taxa and more nitrifiers and denitrifiers. Microbiome studies of breeding lines have shown that hybrid plants share much of their bacterial community with their parents, as seen for Cucurbita seeds and apple shoot endophytes. In addition, the proportional contribution of the microbiome from parents to offspring corresponds to the amount of genetic material contributed by each parent during breeding and domestication. Phenotyping and artificial intelligence Machine learning, and especially deep machine learning, has recently become more commonly used in phenotyping. Computer vision using ML has made great strides and is now being applied to leaf phenotyping and other phenotyping jobs typically performed by human eyes. Pound et al. 2017 and Singh et al. 2016 are especially salient examples of early successful application and demonstration of the general usability of the process across multiple target plant species. These methods will work even better with large, publicly available open data sets. Speed breeding Speed breeding was introduced by Watson et al. 2018. Classical (human performed) phenotyping during speed breeding is also possible, using a procedure developed by Richard et al. 2015. It is highly anticipated that speed breeding and automated phenotyping will, combined, produce greatly improved outcomes (see above). Genomic selection (GS) Next-generation sequencing (NGS) platforms have substantially reduced the time and cost required for sequencing and have facilitated SNP discovery in model and non-model plants. This in turn has led to employing large-scale SNP markers in genomic selection approaches which aim at predicting genomic estimated breeding values (GEBVs) of genotypes in a given population. This method can increase the selection accuracy and decrease the time of each breeding cycle. It has been used in different crops such as maize, wheat, etc. Participatory plant breeding Participatory plant breeding (PPB) involves farmers in a crop improvement programme, with opportunities to make decisions and contribute to the research process at different stages. Participatory approaches to crop improvement can also be applied when plant biotechnologies are being used for crop improvement. Local agricultural systems and genetic diversity are strengthened by participatory programs, and outcomes are enhanced by farmers' knowledge of the quality required and evaluation of the target environment. A 2019 review of participatory plant breeding indicated that it had not gained widespread acceptance despite its record of successfully developing varieties with improved diversity and nutritional quality, as well as greater likelihood of these improved varieties being adopted by farmers. 
This review also found participatory plant breeding to have a better cost/benefit ratio than non-participatory approaches, and suggested incorporating participatory plant breeding with evolutionary plant breeding. Evolutionary plant breeding Evolutionary plant breeding describes practices which use mass populations with diverse genotypes grown under competitive natural selection. Survival in common crop cultivation environments is the predominant method of selection, rather than direct selection by growers and breeders. Individual plants that are favored under prevailing growing conditions, such as environment and inputs, contribute more seed to the next generation than less-adapted individuals. Evolutionary plant breeding has been successfully used by the Nepal National Gene Bank to preserve landrace diversity within Jumli Marshi rice while reducing its susceptibility to blast disease. These practices have also been used in Nepal with bean landraces. In 1929, Harlan and Martini proposed a method of plant breeding with heterogeneous populations by pooling an equal number of F2 seeds obtained from 378 crosses among 28 geographically diverse barley cultivars. In 1938, Harlan and Martini demonstrated evolution by natural selection in mixed dynamic populations as a few varieties became dominant in some locations but almost disappeared in others; poorly-adapted varieties disappeared everywhere. Evolutionary breeding populations have been used to establish self-regulating plant–pathogen systems. Examples include barley, where breeders were able to improve resistance to Rhynchosporium secalis scald over 45 generations. An evolutionary breeding project grew F5 hybrid bulk soybean populations on soil infested by the soybean cyst nematode and was able to increase the proportion of resistant plants from 5% to 40%. At the International Center for Agricultural Research in the Dry Areas (ICARDA), evolutionary plant breeding is combined with participatory plant breeding in order to allow farmers to choose which varieties suit their needs in their local environment. An influential 1956 effort by Coit A. Suneson to codify this approach coined the term evolutionary plant breeding and concluded that 15 generations of natural selection are desirable to produce results that are competitive with conventional breeding. Evolutionary breeding allows working with much larger plant population sizes than conventional breeding. It has also been used in tandem with conventional practices in order to develop both heterogeneous and homogeneous crop lines for low input agricultural systems that have unpredictable stress conditions. Evolutionary plant breeding has been delineated into four stages: Stage 1: Genetic diversity is created, for example by manual crosses of inbreeding species or mixing of cultivars in outcrossing species. Stage 2: Multiplication of seeds Stage 3: Seeds of each cross are then mixed to produce the first generation of the Composite Cross Population (CCP). The entire set of offspring is sown to grow and set seed. As the number of plants in the population increases, a proportion of the harvested seed is saved for sowing. Stage 4: The seed can be used for continued evolutionary plant breeding or as a starting point for a conventional breeding effort. 
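The kind of selection described above, in which better-adapted plants simply contribute more seed to the next sowing, can be illustrated with a minimal numerical sketch. The model below is a standard single-locus haploid selection calculation; the starting frequency, relative seed outputs, and generation count are illustrative assumptions, not measurements from any of the programmes mentioned.

```python
def evolve(p0, w_adapted, w_other, generations):
    """Track the frequency of a better-adapted genotype in a bulk
    population when selection acts only through differential seed
    contribution (standard single-locus haploid selection model)."""
    p = p0
    history = [p]
    for _ in range(generations):
        # Each genotype's share of next season's seed is proportional
        # to its current frequency times its relative seed output.
        mean_fitness = p * w_adapted + (1 - p) * w_other
        p = p * w_adapted / mean_fitness
        history.append(p)
    return history

# Illustrative values only: the adapted genotype sets 10% more viable seed.
trajectory = evolve(p0=0.05, w_adapted=1.10, w_other=1.00, generations=45)
print(f"frequency after 45 generations: {trajectory[-1]:.2f}")
```

Under these assumed values, a 10% seed-output advantage carries the favoured genotype from a 5% to roughly an 80% share of the bulk population over 45 generations, the same qualitative pattern of gradual enrichment described above for the barley and soybean populations.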
Issues and concerns Breeding and food security Issues facing plant breeding in the future include the lack of arable land, increasingly harsh cropping conditions and the need to maintain food security, which involves being able to provide the world population with sufficient nutrition. Crops need to be able to mature in multiple environments to allow worldwide access, which involves solving problems including drought tolerance. It has been suggested that global solutions are achievable through the process of plant breeding, with its ability to select specific genes allowing crops to perform at a level which yields the desired results. One issue facing agriculture is the loss of landraces and other local varieties, whose diversity may contain useful genes for climate adaptation in the future. Conventional breeding intentionally limits phenotype plasticity within genotypes and limits variability between genotypes. Uniformity does not allow crops to adapt to climate change and other biotic and abiotic stresses. Plant breeders' rights Plant breeders' rights is an important and controversial issue. Production of new varieties is dominated by commercial plant breeders, who seek to protect their work and collect royalties through national and international agreements based in intellectual property rights. The range of related issues is complex. In the simplest terms, critics of the increasingly restrictive regulations argue that, through a combination of technical and economic pressures, commercial breeders are reducing biodiversity and significantly constraining individuals (such as farmers) from developing and trading seed on a regional level. Efforts to strengthen breeders' rights, for example, by lengthening periods of variety protection, are ongoing. Intellectual property legislation for plants often uses definitions that typically include genetic uniformity and unchanging appearance over generations. These legal definitions of stability contrast with traditional agronomic usage, which considers stability in terms of how consistent the yield or quality of a crop remains across locations and over time. As of 2020, regulations in Nepal only allow uniform varieties to be registered or released. Evolutionary plant populations and many landraces are polymorphic and do not meet these standards. Environmental stressors Uniform and genetically stable cultivars can be inadequate for dealing with environmental fluctuations and novel stress factors. Plant breeders have focused on identifying crops which will perform under these conditions; one way to achieve this is to find strains of the crop that are resistant to drought conditions with low nitrogen. It is evident from this that plant breeding is vital for future agriculture to survive as it enables farmers to produce stress-resistant crops, hence improving food security. In countries that experience harsh winters such as Iceland, Germany and further east in Europe, plant breeders are involved in breeding for tolerance to frost, continuous snow-cover, frost-drought (desiccation from wind and solar radiation under frost) and high moisture levels in soil in winter. Long-term process Breeding is not a quick process, which is especially important when breeding to ameliorate a disease. The average time from human recognition of a new fungal disease threat to the release of a resistant crop for that pathogen is at least twelve years. 
Maintaining specific conditions When new plant breeds or cultivars are bred, they must be maintained and propagated. Some plants are propagated by asexual means while others are propagated by seeds. Seed propagated cultivars require specific control over seed source and production procedures to maintain the integrity of the breeding results. Isolation is necessary to prevent cross contamination with related plants or the mixing of seeds after harvesting. Isolation is normally accomplished by planting distance, but in certain crops plants are enclosed in greenhouses or cages (most commonly used when producing F1 hybrids). Nutritional value Modern plant breeding, whether classical or through genetic engineering, comes with issues of concern, particularly with regard to food crops. The question of whether breeding can have a negative effect on nutritional value is central in this respect. Although relatively little direct research in this area has been done, there are scientific indications that, by favoring certain aspects of a plant's development, other aspects may be retarded. A study published in the Journal of the American College of Nutrition in 2004, entitled Changes in USDA Food Composition Data for 43 Garden Crops, 1950 to 1999, compared nutritional analysis of vegetables done in 1950 and in 1999, and found substantial decreases in six of 13 nutrients measured, including a 6% decline in protein and a 38% decline in riboflavin. Reductions in calcium, phosphorus, iron and ascorbic acid were also found. The study, conducted at the Biochemical Institute, University of Texas at Austin, concluded in summary: "We suggest that any real declines are generally most easily explained by changes in cultivated varieties between 1950 and 1999, in which there may be trade-offs between yield and nutrient content." Plant breeding can contribute to global food security as it is a cost-effective tool for increasing nutritional value of forage and crops. Improvements in nutritional value for forage crops from the use of analytical chemistry and rumen fermentation technology have been recorded since 1960; this science and technology gave breeders the ability to screen thousands of samples within a small amount of time, meaning breeders could identify a high-performing hybrid more quickly. The genetic improvement was mainly in in vitro dry matter digestibility (IVDMD), resulting in a 0.7–2.5% increase; for just a 1% increase in IVDMD, beef cattle (Bos taurus) showed a 3.2% increase in daily gains. This improvement indicates plant breeding is an essential tool in gearing future agriculture to perform at a more advanced level. Yield With an increasing population, the production of food needs to increase with it. It is estimated that a 70% increase in food production is needed by 2050 in order to meet the Declaration of the World Summit on Food Security. But with the degradation of agricultural land, simply planting more crops is no longer a viable option. New varieties of plants can in some cases be developed through plant breeding that generate an increase in yield without relying on an increase in land area. An example of this can be seen in Asia, where food production per capita has increased twofold. This has been achieved through not only the use of fertilisers, but through the use of better crops that have been specifically designed for the area. 
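As a rough arithmetic check on the scale of that target, the sketch below converts a one-off 70% increase in production into an equivalent compound annual growth rate. Taking 2009, the year of the World Summit on Food Security, as the baseline year is an assumption made here purely for illustration.

```python
# Assumed baseline: the 2009 World Summit on Food Security declaration.
baseline_year, target_year = 2009, 2050
required_total_increase = 0.70          # 70% more food by the target year

years = target_year - baseline_year     # number of growing seasons
annual_rate = (1 + required_total_increase) ** (1 / years) - 1

print(f"required compound growth: {annual_rate:.2%} per year over {years} years")
```

Under that assumption the target works out to roughly 1.3% compound growth per year, a useful sense of scale when weighing breeding gains against the loss of agricultural land mentioned above.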
Role of plant breeding in organic agriculture Some critics of organic agriculture claim it is too low-yielding to be a viable alternative to conventional agriculture, but that poor performance may be in part the result of growing poorly-adapted varieties. It is estimated that over 95% of organic agriculture is based on conventionally adapted varieties, even though the production environments found in organic vs. conventional farming systems are vastly different due to their distinctive management practices. Most notably, organic farmers have fewer inputs available than conventional growers to control their production environments. Breeding varieties specifically adapted to the unique conditions of organic agriculture is critical for this sector to realize its full potential. This requires selection for traits such as: Water use efficiency Nutrient use efficiency (particularly nitrogen and phosphorus) Weed competitiveness Tolerance of mechanical weed control Pest/disease resistance Early maturity (as a mechanism for avoidance of particular stresses) Abiotic stress tolerance (e.g. drought, salinity) Currently, few breeding programs are directed at organic agriculture, and until recently those that did address this sector generally relied on indirect selection (i.e. selection in conventional environments for traits considered important for organic agriculture). However, because the difference between organic and conventional environments is large, a given genotype may perform very differently in each environment due to an interaction between genes and the environment (see gene–environment interaction). If this interaction is severe enough, an important trait required for the organic environment may not be revealed in the conventional environment, which can result in the selection of poorly adapted individuals. To ensure the most adapted varieties are identified, advocates of organic breeding now promote the use of direct selection (i.e. selection in the target environment) for many agronomic traits. There are many classical and modern breeding techniques that can be utilized for crop improvement in organic agriculture despite the ban on genetically modified organisms. For instance, controlled crosses between individuals allow desirable genetic variation to be recombined and transferred to seed progeny via natural processes. Marker assisted selection can also be employed as a diagnostics tool to facilitate selection of progeny that possess the desired trait(s), greatly speeding up the breeding process. This technique has proven particularly useful for the introgression of resistance genes into new backgrounds, as well as the efficient selection of many resistance genes pyramided into a single individual (a minimal sketch of such a marker screen is given below). Molecular markers are not currently available for many important traits, especially complex ones controlled by many genes. List of notable plant breeders Yvonne Aitken Norman Borlaug Luther Burbank Keith Downey Thomas Andrew Knight Niels Ebbesen Hansen Nazareno Strampelli Nikolai Vavilov
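The marker screen referred to above can be sketched as follows. The marker names, allele scores, and seedling records here are hypothetical placeholders chosen only to show the shape of the selection step; they do not come from any real breeding programme.

```python
# Hypothetical marker names linked to two resistance genes being pyramided.
REQUIRED_MARKERS = ("mildew_R1_linked", "rust_R2_linked")

# Each seedling record maps marker name -> number of copies of the
# resistance-linked allele (0, 1 or 2), as scored in the lab.
seedlings = {
    "plant_001": {"mildew_R1_linked": 2, "rust_R2_linked": 0},
    "plant_002": {"mildew_R1_linked": 1, "rust_R2_linked": 1},
    "plant_003": {"mildew_R1_linked": 0, "rust_R2_linked": 2},
}

def carries_all_markers(genotype, required=REQUIRED_MARKERS):
    """Keep a seedling only if it carries at least one resistance-linked
    allele at every required marker locus."""
    return all(genotype.get(marker, 0) >= 1 for marker in required)

selected = [name for name, g in seedlings.items() if carries_all_markers(g)]
print("advance to field trials:", selected)   # only plant_002 here
```

Selection here is on marker genotype alone, which is why the technique speeds up, rather than replaces, subsequent phenotypic evaluation in the target environment.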
Technology
Basics_2
null
30876071
https://en.wikipedia.org/wiki/Electric%20dipole%20moment
Electric dipole moment
The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system: that is, a measure of the system's overall polarity. The SI unit for electric dipole moment is the coulomb-metre (C⋅m). The debye (D) is another unit of measurement used in atomic physics and chemistry. Theoretically, an electric dipole is defined by the first-order term of the multipole expansion; it consists of two equal and opposite charges that are infinitesimally close together, although real dipoles have separated charge. Elementary definition Often in physics, the dimensions of an object can be ignored so it can be treated as a pointlike object, i.e. a point particle. Point particles with electric charge are referred to as point charges. Two point charges, one with charge and the other one with charge separated by a distance , constitute an electric dipole (a simple case of an electric multipole). For this case, the electric dipole moment has a magnitude and is directed from the negative charge to the positive one. A stronger mathematical definition is to use vector algebra, since a quantity with magnitude and direction, like the dipole moment of two point charges, can be expressed in vector form where is the displacement vector pointing from the negative charge to the positive charge. The electric dipole moment vector also points from the negative charge to the positive charge. With this definition the dipole direction tends to align itself with an external electric field (and note that the electric flux lines produced by the charges of the dipole itself, which point from positive charge to negative charge, then tend to oppose the flux lines of the external field). Note that this sign convention is used in physics, while the opposite sign convention for the dipole, from the positive charge to the negative charge, is used in chemistry. An idealization of this two-charge system is the electrical point dipole consisting of two (infinite) charges only infinitesimally separated, but with a finite . This quantity is used in the definition of polarization density. Energy and torque An object with an electric dipole moment p is subject to a torque τ when placed in an external electric field E. The torque tends to align the dipole with the field. A dipole aligned parallel to an electric field has lower potential energy than a dipole making some non-zero angle with it. For a spatially uniform electric field across the small region occupied by the dipole, the energy U and the torque are given by The scalar dot "" product and the negative sign shows the potential energy minimises when the dipole is parallel with the field, maximises when it is antiparallel, and is zero when it is perpendicular. The symbol "" refers to the vector cross product. The E-field vector and the dipole vector define a plane, and the torque is directed normal to that plane with the direction given by the right-hand rule. A dipole in such a uniform field may twist and oscillate, but receives no overall net force with no linear acceleration of the dipole. The dipole twists to align with the external field. However, in a non-uniform electric field a dipole may indeed receive a net force since the force on one end of the dipole no longer balances that on the other end. It can be shown that this net force is generally parallel to the dipole moment. 
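For reference, the elementary relations described in this section can be written compactly in SI notation, with q the magnitude of each charge, d the displacement vector from the negative to the positive charge, and E the external field (the force expression assumes a permanent dipole in a non-uniform electrostatic field):

```latex
p = q\,d, \qquad
\mathbf{p} = q\,\mathbf{d}, \qquad
U = -\,\mathbf{p}\cdot\mathbf{E}, \qquad
\boldsymbol{\tau} = \mathbf{p}\times\mathbf{E}, \qquad
\mathbf{F} = \nabla\!\left(\mathbf{p}\cdot\mathbf{E}\right).
```

The energy expression makes the statement above explicit: U is minimised when p is parallel to E, zero when the two are perpendicular, and maximised when they are antiparallel.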
Expression (general case) More generally, for a continuous distribution of charge confined to a volume V, the corresponding expression for the dipole moment is: where r locates the point of observation and d3r′ denotes an elementary volume in V. For an array of point charges, the charge density becomes a sum of Dirac delta functions: where each ri is a vector from some reference point to the charge qi. Substitution into the above integration formula provides: This expression is equivalent to the previous expression in the case of charge neutrality and . For two opposite charges, denoting the location of the positive charge of the pair as r+ and the location of the negative charge as r−: showing that the dipole moment vector is directed from the negative charge to the positive charge because the position vector of a point is directed outward from the origin to that point. The dipole moment is particularly useful in the context of an overall neutral system of charges, such as a pair of opposite charges or a neutral conductor in a uniform electric field. For such a system, visualized as an array of paired opposite charges, the relation for electric dipole moment is: where r is the point of observation and di = ri − ri, ri being the position of the negative charge in the dipole i, and ri the position of the positive charge. This is the vector sum of the individual dipole moments of the neutral charge pairs. (Because of overall charge neutrality, the dipole moment is independent of the observer's position r.) Thus, the value of p is independent of the choice of reference point, provided the overall charge of the system is zero. When discussing the dipole moment of a non-neutral system, such as the dipole moment of the proton, a dependence on the choice of reference point arises. In such cases it is conventional to choose the reference point to be the center of mass of the system, not some arbitrary origin. This choice is not only a matter of convention: the notion of dipole moment is essentially derived from the mechanical notion of torque, and as in mechanics, it is computationally and theoretically useful to choose the center of mass as the observation point. For a charged molecule the center of charge should be the reference point instead of the center of mass. For neutral systems the reference point is not important, and the dipole moment is an intrinsic property of the system. Potential and field of an electric dipole An ideal dipole consists of two opposite charges with infinitesimal separation. We compute the potential and field of such an ideal dipole starting with two opposite charges at separation , and taking the limit as . Two closely spaced opposite charges ±q have a potential of the form: corresponding to the charge density by Coulomb's law, where the charge separation is: Let R denote the position vector relative to the midpoint , and the corresponding unit vector: Taylor expansion in (see multipole expansion and quadrupole) expresses this potential as a series. where higher order terms in the series are vanishing at large distances, R, compared to d. Here, the electric dipole moment p is, as above: The result for the dipole potential also can be expressed as: which relates the dipole potential to that of a point charge. A key point is that the potential of the dipole falls off faster with distance R than that of the point charge. 
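In the same notation, the far-field potential of the point dipole just described, and the field obtained from its negative gradient, take the standard forms (valid for R much larger than the charge separation):

```latex
\phi(\mathbf{R}) = \frac{1}{4\pi\varepsilon_0}\,
  \frac{\mathbf{p}\cdot\hat{\mathbf{R}}}{R^{2}},
\qquad
\mathbf{E}(\mathbf{R}) = -\nabla\phi(\mathbf{R})
  = \frac{1}{4\pi\varepsilon_0}\,
    \frac{3\,(\mathbf{p}\cdot\hat{\mathbf{R}})\,\hat{\mathbf{R}} - \mathbf{p}}{R^{3}}.
```

The 1/R² fall-off of the potential, compared with 1/R for a point charge, is the faster decay noted above.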
The electric field of the dipole is the negative gradient of the potential, leading to: Thus, although two closely spaced opposite charges are not quite an ideal electric dipole (because their potential at short distances is not that of a dipole), at distances much larger than their separation, their dipole moment p appears directly in their potential and field. As the two charges are brought closer together (d is made smaller), the dipole term in the multipole expansion based on the ratio d/R becomes the only significant term at ever closer distances R, and in the limit of infinitesimal separation the dipole term in this expansion is all that matters. As d is made infinitesimal, however, the dipole charge must be made to increase to hold p constant. This limiting process results in a "point dipole". Dipole moment density and polarization density The dipole moment of an array of charges, determines the degree of polarity of the array, but for a neutral array it is simply a vector property of the array with no information about the array's absolute location. The dipole moment density of the array p(r) contains both the location of the array and its dipole moment. When it comes time to calculate the electric field in some region containing the array, Maxwell's equations are solved, and the information about the charge array is contained in the polarization density P(r) of Maxwell's equations. Depending upon how fine-grained an assessment of the electric field is required, more or less information about the charge array will have to be expressed by P(r). As explained below, sometimes it is sufficiently accurate to take P(r) = p(r). Sometimes a more detailed description is needed (for example, supplementing the dipole moment density with an additional quadrupole density) and sometimes even more elaborate versions of P(r) are necessary. It now is explored just in what way the polarization density P(r) that enters Maxwell's equations is related to the dipole moment p of an overall neutral array of charges, and also to the dipole moment density p(r) (which describes not only the dipole moment, but also the array location). Only static situations are considered in what follows, so P(r) has no time dependence, and there is no displacement current. First is some discussion of the polarization density P(r). That discussion is followed with several particular examples. A formulation of Maxwell's equations based upon division of charges and currents into "free" and "bound" charges and currents leads to introduction of the D- and P-fields: where P is called the polarization density. In this formulation, the divergence of this equation yields: and as the divergence term in E is the total charge, and ρf is "free charge", we are left with the relation: with ρb as the bound charge, by which is meant the difference between the total and the free charge densities. As an aside, in the absence of magnetic effects, Maxwell's equations specify that which implies Applying Helmholtz decomposition: for some scalar potential φ, and: Suppose the charges are divided into free and bound, and the potential is divided into Satisfaction of the boundary conditions upon φ may be divided arbitrarily between φf and φb because only the sum φ must satisfy these conditions. It follows that P is simply proportional to the electric field due to the charges selected as bound, with boundary conditions that prove convenient. In particular, when no free charge is present, one possible choice is . 
Next is discussed how several different dipole moment descriptions of a medium relate to the polarization entering Maxwell's equations. Medium with charge and dipole densities As described next, a model for polarization moment density p(r) results in a polarization restricted to the same model. For a smoothly varying dipole moment distribution p(r), the corresponding bound charge density is simply as we will establish shortly via integration by parts. However, if p(r) exhibits an abrupt step in dipole moment at a boundary between two regions, ∇·p(r) results in a surface charge component of bound charge. This surface charge can be treated through a surface integral, or by using discontinuity conditions at the boundary, as illustrated in the various examples below. As a first example relating dipole moment to polarization, consider a medium made up of a continuous charge density ρ(r) and a continuous dipole moment distribution p(r). The potential at a position r is: where ρ(r) is the unpaired charge density, and p(r) is the dipole moment density. Using an identity: the polarization integral can be transformed: where the vector identity was used in the last steps. The first term can be transformed to an integral over the surface bounding the volume of integration, and contributes a surface charge density, discussed later. Putting this result back into the potential, and ignoring the surface charge for now: where the volume integration extends only up to the bounding surface, and does not include this surface. The potential is determined by the total charge, which the above shows consists of: showing that: In short, the dipole moment density p(r) plays the role of the polarization density P for this medium. Notice, p(r) has a non-zero divergence equal to the bound charge density (as modeled in this approximation). It may be noted that this approach can be extended to include all the multipoles: dipole, quadrupole, etc. Using the relation: the polarization density is found to be: where the added terms are meant to indicate contributions from higher multipoles. Evidently, inclusion of higher multipoles signifies that the polarization density P no longer is determined by a dipole moment density p alone. For example, in considering scattering from a charge array, different multipoles scatter an electromagnetic wave differently and independently, requiring a representation of the charges that goes beyond the dipole approximation. Surface charge Above, discussion was deferred for the first term in the expression for the potential due to the dipoles. Integrating the divergence results in a surface charge. The figure at the right provides an intuitive idea of why a surface charge arises. The figure shows a uniform array of identical dipoles between two surfaces. Internally, the heads and tails of dipoles are adjacent and cancel. At the bounding surfaces, however, no cancellation occurs. Instead, on one surface the dipole heads create a positive surface charge, while at the opposite surface the dipole tails create a negative surface charge. These two opposite surface charges create a net electric field in a direction opposite to the direction of the dipoles. This idea is given mathematical form using the potential expression above. Ignoring the free charge, the potential is: Using the divergence theorem, the divergence term transforms into the surface integral: with dA0 an element of surface area of the volume. 
In the event that p(r) is a constant, only the surface term survives: with dA0 an elementary area of the surface bounding the charges. In words, the potential due to a constant p inside the surface is equivalent to that of a surface charge which is positive for surface elements with a component in the direction of p and negative for surface elements pointed oppositely. (Usually the direction of a surface element is taken to be that of the outward normal to the surface at the location of the element.) If the bounding surface is a sphere, and the point of observation is at the center of this sphere, the integration over the surface of the sphere is zero: the positive and negative surface charge contributions to the potential cancel. If the point of observation is off-center, however, a net potential can result (depending upon the situation) because the positive and negative charges are at different distances from the point of observation. The field due to the surface charge is: which, at the center of a spherical bounding surface is not zero (the fields of negative and positive charges on opposite sides of the center add because both fields point the same way) but is instead: If we suppose the polarization of the dipoles was induced by an external field, the polarization field opposes the applied field and sometimes is called a depolarization field. In the case when the polarization is outside a spherical cavity, the field in the cavity due to the surrounding dipoles is in the same direction as the polarization. In particular, if the electric susceptibility is introduced through the approximation: where , in this case and in the following, represent the external field which induces the polarization. Then: Whenever χ(r) is used to model a step discontinuity at the boundary between two regions, the step produces a surface charge layer. For example, integrating along a normal to the bounding surface from a point just interior to one surface to another point just exterior: where An, Ωn indicate the area and volume of an elementary region straddling the boundary between the regions, and a unit normal to the surface. The right side vanishes as the volume shrinks, inasmuch as ρb is finite, indicating a discontinuity in E, and therefore a surface charge. That is, where the modeled medium includes a step in permittivity, the polarization density corresponding to the dipole moment density necessarily includes the contribution of a surface charge. A physically more realistic modeling of p(r) would have the dipole moment density drop off rapidly, but smoothly to zero at the boundary of the confining region, rather than making a sudden step to zero density. Then the surface charge will not concentrate in an infinitely thin surface, but instead, being the divergence of a smoothly varying dipole moment density, will distribute itself throughout a thin, but finite transition layer. Dielectric sphere in uniform external electric field The above general remarks about surface charge are made more concrete by considering the example of a dielectric sphere in a uniform electric field. The sphere is found to adopt a surface charge related to the dipole moment of its interior. A uniform external electric field is supposed to point in the z-direction, and spherical polar coordinates are introduced so the potential created by this field is: The sphere is assumed to be described by a dielectric constant κ, that is, and inside the sphere the potential satisfies Laplace's equation. 
Skipping a few details, the solution inside the sphere is: while outside the sphere: At large distances, φ> → φ∞ so B = −E∞ . Continuity of potential and of the radial component of displacement D = κε0E determine the other two constants. Supposing the radius of the sphere is R, As a consequence, the potential is: which is the potential due to applied field and, in addition, a dipole in the direction of the applied field (the z-direction) of dipole moment: or, per unit volume: The factor is called the Clausius–Mossotti factor and shows that the induced polarization flips sign if . Of course, this cannot happen in this example, but in an example with two different dielectrics κ is replaced by the ratio of the inner to outer region dielectric constants, which can be greater or smaller than one. The potential inside the sphere is: leading to the field inside the sphere: showing the depolarizing effect of the dipole. Notice that the field inside the sphere is uniform and parallel to the applied field. The dipole moment is uniform throughout the interior of the sphere. The surface charge density on the sphere is the difference between the radial field components: This linear dielectric example shows that the dielectric constant treatment is equivalent to the uniform dipole moment model and leads to zero charge everywhere except for the surface charge at the boundary of the sphere. General media If observation is confined to regions sufficiently remote from a system of charges, a multipole expansion of the exact polarization density can be made. By truncating this expansion (for example, retaining only the dipole terms, or only the dipole and quadrupole terms, or etc.), the results of the previous section are regained. In particular, truncating the expansion at the dipole term, the result is indistinguishable from the polarization density generated by a uniform dipole moment confined to the charge region. To the accuracy of this dipole approximation, as shown in the previous section, the dipole moment density p(r) (which includes not only p but the location of p) serves as P(r). At locations inside the charge array, to connect an array of paired charges to an approximation involving only a dipole moment density p(r) requires additional considerations. The simplest approximation is to replace the charge array with a model of ideal (infinitesimally spaced) dipoles. In particular, as in the example above that uses a constant dipole moment density confined to a finite region, a surface charge and depolarization field results. A more general version of this model (which allows the polarization to vary with position) is the customary approach using electric susceptibility or electrical permittivity. A more complex model of the point charge array introduces an effective medium by averaging the microscopic charges; for example, the averaging can arrange that only dipole fields play a role. A related approach is to divide the charges into those nearby the point of observation, and those far enough away to allow a multipole expansion. The nearby charges then give rise to local field effects. In a common model of this type, the distant charges are treated as a homogeneous medium using a dielectric constant, and the nearby charges are treated only in a dipole approximation. The approximation of a medium or an array of charges by only dipoles and their associated dipole moment density is sometimes called the point dipole approximation, the discrete dipole approximation, or simply the dipole approximation. 
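Written out explicitly, the main results of the dielectric-sphere example above are, in the same notation (sphere of radius R and dielectric constant κ in a uniform applied field E∞, with θ the polar angle measured from the field direction):

```latex
\mathbf{E}_{\text{inside}} = \frac{3}{\kappa + 2}\,\mathbf{E}_{\infty},
\qquad
\mathbf{p} = 4\pi\varepsilon_0 R^{3}\,\frac{\kappa - 1}{\kappa + 2}\,\mathbf{E}_{\infty},
\qquad
\mathbf{P} = 3\varepsilon_0\,\frac{\kappa - 1}{\kappa + 2}\,\mathbf{E}_{\infty},
\qquad
\sigma(\theta) = 3\varepsilon_0\,\frac{\kappa - 1}{\kappa + 2}\,E_{\infty}\cos\theta.
```

The factor (κ − 1)/(κ + 2) is the Clausius–Mossotti factor; for κ > 1 the interior field is smaller than the applied field, which is the depolarizing effect described above.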
Electric dipole moments of fundamental particles Not to be confused with the magnetic dipole moments of particles, much experimental work is continuing on measuring the electric dipole moments (EDM; or anomalous electric dipole moment) of fundamental and composite particles, namely those of the electron and neutron, respectively. As EDMs violate both the parity (P) and time-reversal (T) symmetries, their values yield a mostly model-independent measure of CP-violation in nature (assuming CPT symmetry is valid). Therefore, values for these EDMs place strong constraints upon the scale of CP-violation that extensions to the standard model of particle physics may allow. Current generations of experiments are designed to be sensitive to the supersymmetry range of EDMs, providing complementary experiments to those done at the LHC. Indeed, many theories are inconsistent with the current limits and have effectively been ruled out, and established theory permits a much larger value than these limits, leading to the strong CP problem and prompting searches for new particles such as the axion. We know at least in the Yukawa sector from neutral kaon oscillations that CP is broken. Experiments have been performed to measure the electric dipole moment of various particles like the electron and the neutron. Many models beyond the standard model with additional CP-violating terms generically predict a nonzero electric dipole moment and are hence sensitive to such new physics. Instanton corrections from a nonzero θ term in quantum chromodynamics predict a nonzero electric dipole moment for the neutron and proton, which have not been observed in experiments (where the best bounds come from analysing neutrons). This is the strong CP problem and is a prediction of chiral perturbation theory. Dipole moments of molecules Dipole moments in molecules are responsible for the behavior of a substance in the presence of external electric fields. The dipoles tend to be aligned to the external field which can be constant or time-dependent. This effect forms the basis of a modern experimental technique called dielectric spectroscopy. Dipole moments can be found in common molecules such as water and also in biomolecules such as proteins. By means of the total dipole moment of some material one can compute the dielectric constant which is related to the more intuitive concept of conductivity. If is the total dipole moment of the sample, then the dielectric constant is given by where k is a constant and is the time correlation function of the total dipole moment. In general the total dipole moment have contributions coming from translations and rotations of the molecules in the sample, Therefore, the dielectric constant (and the conductivity) has contributions from both terms. This approach can be generalized to compute the frequency dependent dielectric function. It is possible to calculate dipole moments from electronic structure theory, either as a response to constant electric fields or from the density matrix. Such values however are not directly comparable to experiment due to the potential presence of nuclear quantum effects, which can be substantial for even simple systems like the ammonia molecule. Coupled cluster theory (especially CCSD(T)) can give very accurate dipole moments, although it is possible to get reasonable estimates (within about 5%) from density functional theory, especially if hybrid or double hybrid functionals are employed. 
The dipole moment of a molecule can also be calculated based on the molecular structure using the concept of group contribution methods.
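A simple structure-based estimate of a molecular dipole moment, of the kind alluded to above, just sums partial charges times positions. The sketch below is illustrative only: it assumes TIP3P-style point charges and an idealized water geometry, which are model parameters rather than anything stated in the article.

```python
import math

# Hedged sketch: dipole moment of a rigid water model from point charges.
# Assumed (TIP3P-like) parameters: q_O = -0.834 e, q_H = +0.417 e,
# r(O-H) = 0.9572 angstrom, H-O-H angle = 104.52 degrees.

E_ANGSTROM_TO_DEBYE = 4.803  # 1 e*angstrom in debye (approximate)

def water_dipole(q_h=0.417, r_oh=0.9572, angle_deg=104.52):
    half = math.radians(angle_deg / 2.0)
    # Place O at the origin; both H atoms lie symmetrically about the z axis,
    # so only the z components of q_i * r_i survive the sum.
    mu_z = 2.0 * q_h * r_oh * math.cos(half)      # in e*angstrom
    return mu_z * E_ANGSTROM_TO_DEBYE             # in debye

print(f"{water_dipole():.2f} D")   # about 2.35 D for these model parameters
```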
Physical sciences
Electrostatics
Physics
30876283
https://en.wikipedia.org/wiki/Pinguicula
Pinguicula
Pinguicula, commonly known as butterworts, is a genus of carnivorous flowering plants in the family Lentibulariaceae. They use sticky, glandular leaves to lure, trap, and digest insects in order to supplement the poor mineral nutrition they obtain from the environment. Of the roughly 80 currently known species, 13 are native to Europe, 9 to North America, and some to northern Asia. The largest number of species is in South and Central America. Etymology The name Pinguicula is derived from a term coined by Conrad Gesner, who in his 1561 work entitled Horti Germaniae commented on the glistening leaves: "propter pinguia et tenera folia…" (Latin pinguis, "fat"). The common name "butterwort" reflects this characteristic. Characteristics The majority of Pinguicula are perennial plants. The only known annuals are P. sharpii, P. takakii, P. crenatiloba, and P. pumila. All species form stemless rosettes. Habitat Butterworts can be divided roughly into two main groups based on the climate in which they grow; each group is then further subdivided based on morphological characteristics. Although these groups are not cladistically supported by genetic studies, they are nonetheless convenient for horticultural purposes. Tropical butterworts either form somewhat compact winter rosettes composed of fleshy leaves or retain carnivorous leaves year-round. They are typically located in regions where water is least seasonally plentiful, as overly damp soil conditions can lead to rotting. They are found in areas in which nitrogenous resources are known to be at low levels, infrequent, or unavailable, due to acidic soil conditions. Temperate species often form tight buds (called hibernacula) composed of scale-like leaves during a winter dormancy period. During this time the roots (with the exception of P. alpina) and carnivorous leaves wither. Temperate species flower when they form their summer rosettes, while tropical species flower at each rosette change. Many butterworts cycle between rosettes composed of carnivorous and non-carnivorous leaves as the seasons change, so these two ecological groupings can be further divided according to their ability to produce different leaves during their growing season. If the growth in the summer is different in size or shape from that in the early spring (for temperate species) or in the winter (tropical species), then plants are considered heterophyllous; whereas uniform growth identifies a homophyllous species. This results in four groupings: Tropical butterworts: species which do not undergo a winter dormancy but continue to alternately bloom and form rosettes. Heterophyllous tropical species: species that alternate between rosettes of carnivorous leaves during the warm season and compact rosettes of fleshy non-carnivorous leaves during the cool season. Examples include P. moranensis, P. gypsicola, and P. laxifolia. Homophyllous tropical species: these species produce rosettes of carnivorous leaves of roughly uniform size throughout the year, such as P. gigantea. Temperate butterworts: these plants are native to climate zones with cold winters. They produce a winter-resting bud (hibernaculum) during the winter. Heterophyllous temperate species: species where the vegetative and generative rosettes differ in shape and/or size, as seen in P. lutea and P. lusitanica. Homophyllous temperate species: the vegetative and generative rosettes appear identical, as exhibited by P. alpina, P. grandiflora, and P. vulgaris.
Roots The root system of Pinguicula species is relatively undeveloped. The thin, white roots serve mainly as an anchor for the plant and to absorb moisture (nutrients are absorbed through carnivory). In temperate species these roots wither (except in P. alpina) when the hibernaculum is formed. In the few epiphytic species (such as P. lignicola), the roots form anchoring suction cups. Leaves and carnivory The leaf blade of a butterwort is smooth, rigid, and succulent, usually bright green or pinkish in colour. Depending on species, the leaves are between 2 and 30 cm (1-12") long. The leaf shape depends on the species, but is usually roughly obovate, spatulate, or linear. The leaves can also appear yellow in color, with a soft feel and a greasy consistency. Like all members of the family Lentibulariaceae, butterworts are carnivorous. These plants lure and capture prey by means of sticky, adhesive mucilage secreted by glands located on the leaf's surface. In order to catch and digest insects, the leaf of a butterwort uses two specialized types of gland, which are scattered across the leaf surface (usually only on the upper surface, with the exception of P. gigantea and P. longifolia ssp. longifolia). One is termed a peduncular gland, and consists of a few secretory cells on top of a single stalk cell. These cells produce a mucilaginous secretion which forms visible droplets across the leaf surface. This wet appearance probably helps lure prey in search of water (a similar phenomenon is observed in the sundews). The droplets secrete limited amounts of digestive enzymes, and serve mainly to entrap insects. On contact with an insect, the peduncular glands release additional mucilage from special reservoir cells located at the base of their stalks. The insect will begin to struggle, triggering more glands and encasing itself in mucilage. Some species can bend their leaf edges slightly by thigmotropism, bringing additional glands into contact with the trapped insect. The second type of gland found on butterwort leaves is the sessile gland, which lies flat on the leaf surface. Once the prey is entrapped by the peduncular glands and digestion begins, the initial flow of nitrogen triggers enzyme release by the sessile glands. These enzymes, which include amylase, esterase, phosphatase, protease, and ribonuclease, break down the digestible components of the insect body. The resulting fluids are then absorbed back into the leaf surface through cuticular holes, leaving only the chitin exoskeleton of the larger insects on the leaf surface. The holes in the cuticle which allow for this digestive mechanism also pose a challenge for the plant, since they serve as breaks in the cuticle (waxy layer) that protects the plant from desiccation. As a result, most butterworts live in humid environments. Butterworts are usually only able to trap small insects and those with large wing surfaces. They can also digest pollen which lands on their leaf surface. The secretory system can only function a single time, so that a particular area of the leaf surface can only be used to digest insects once. Unlike many other carnivorous plant species, butterworts do not appear to use jasmonates as a control system to switch on the production of digestive enzymes. Jasmonates are involved in the butterwort's defense against attacking insects, but not in its response to prey.
Of the eight enzymes identified in the digestive secretions of butterworts, alpha-amylase appears to be unique when compared to other carnivorous plants. This finding suggests that butterwort may have co-opted a different set of genes in its development of carnivory. Flowers As with almost all carnivorous plants, the flowers of butterworts are held far above the rest of the plant by a long stalk, in order to reduce the probability of trapping potential pollinators. The single, long-lasting flowers are zygomorphic, with two lower lip petals characteristic of the bladderwort family, and a spur extending from the back of the flower. The calyx has five sepals, and the petals are arranged in a two-part lower lip and a three-part upper lip. Most butterwort flowers are blue, violet or white, often suffused with a yellow, greenish or reddish tint. P. laueana and the newly described P. caryophyllacea are unique in having strikingly red flowers. Butterworts are often cultivated and hybridized primarily for their flowers. The shape and colors of butterwort flowers are distinguishing characteristics which are used to divide the genus into subgenera and to distinguish individual species from one another. Fruit and seed The round to egg-shaped seed capsules open when dry into two halves, exposing numerous small (0.5–1 mm), brown seeds. If moisture is present, the capsule closes, protecting the seed and opening again upon dryness to allow for wind dispersal. Many species have a net-like pattern on their seed surface to allow them to land on water surfaces without sinking, since many non-epiphytic butterworts grow near water sources. The haploid chromosome number of butterworts is either n = 8 or n = 11 (or a multiple thereof), depending on species. The exception is P. lusitanica, whose chromosome count is n = 6. Diet The diet varies with the taxonomy and size of the prey, owing to limits on the plant's ability to retain prey. These size limitations are the main factor determining which prey this carnivorous plant can access. They can also acquire nourishment from pollen and other protein-rich plant parts that become trapped on their leaves; thus, butterworts are both carnivorous and herbivorous. The diet consists of a range of arthropod species; the majority of the prey are winged, flying insects. The luring, seizing, and retaining of prey are the first steps in the feeding process of carnivorous plants; the process ends with the digestion and absorption of nutrients from these food sources. Pinguicula species do not select their prey; they passively accumulate it on their sticky, adhesive leaves. However, their colorful leaves do provide visual attraction, which can increase the likelihood of luring and capturing particular taxa. Pinguicula capture their prey by means of the mucilaginous, sticky substances produced by the stalked glands on the upper surface of their leaves. Once the prey has become trapped by the peduncular glands, the sessile glands produce the enzymes needed for digestion, breaking down the digestible regions of the prey for their nutrients; the resulting fluids are then taken in through cuticular holes on the leaf's surface. Vegetative propagation As well as sexual reproduction by seed, many butterworts can reproduce asexually by vegetative reproduction.
Many members of the genus form offshoots during or shortly after flowering (e.g., P. vulgaris), which grow into new genetically identical adults. A few other species form new offshoots using stolons (e.g., P. calyptrata, P. vallisneriifolia) while others form plantlets at the leaf margins (e.g., P. heterophylla, P. primuliflora). Distribution Butterworts are distributed throughout the northern hemisphere. The greatest concentration of species, however, is in humid mountainous regions of Mexico, Central America and South America, where populations can be found as far south as Tierra del Fuego. Australia and Antarctica are the only continents without any native butterworts. Butterworts probably originated in Central America, as this is the center of Pinguicula diversity – roughly 50% of butterwort species are found here. The great majority of individual Pinguicula species have a very limited distribution. The two butterwort species with the widest distributions, P. alpina and P. vulgaris, are found throughout much of Europe and North America. Other species found in North America include P. caerulea, P. ionantha, P. lutea, P. macroceras, P. planifolia, P. primuliflora, P. pumila, and P. villosa. Habitat In general, butterworts grow in nutrient-poor, alkaline soils. Some species have adapted to other soil types, such as acidic peat bogs (e.g., P. vulgaris, P. calyptrata, P. lusitanica), soils composed of pure gypsum (P. gypsicola and other Mexican species), or even vertical rock walls (P. ramosa, P. vallisneriifolia, and most of the Mexican species). A few species are epiphytes (P. casabitoana, P. hemiepiphytica, P. lignicola). Many of the Mexican species commonly grow on mossy banks, rock, and roadsides in oak-pine forests. Pinguicula macroceras ssp. nortensis has even been observed growing on hanging dead grasses. P. lutea grows in pine flatwoods. Other species, such as P. vulgaris, grow in fens. Each of these environments is nutrient-poor, allowing butterworts to escape competition from canopy-forming species, particularly grasses and sedges. Butterworts need habitats that are almost constantly moist or wet, at least during their carnivorous growth stage. Many Mexican species lose their carnivorous leaves and sprout succulent leaves, or die back to onion-like "bulbs", to survive the winter drought, at which point they can survive in bone-dry conditions. The moisture they need for growing can be supplied by a high groundwater table, high humidity, or high precipitation. Unlike many other carnivorous plants that require sunny locations, many butterworts thrive in part-sun or even shady conditions. Conservation status The environmental threats faced by various Pinguicula species depend on their location and on how widespread their distribution is. Most endangered are the species which are endemic to small areas, such as P. ramosa, P. casabitoana, and P. fiorii. These populations are threatened primarily by habitat destruction. Wetland destruction has threatened several US species. Most of these are federally listed as either threatened or endangered, and P. ionantha is listed on CITES appendix I, giving it additional protection. Botanical history The first mention of butterworts in botanical literature is an entry entitled ("lard herb") by Vitus Auslasser in his 1479 work on medicinal herbs entitled Macer de Herbarium. The name is still used for butterworts in Tirol, Austria.
In 1583, Clusius already distinguished between two forms in his Historia stirpium rariorum per Pannoniam, Austriam: a blue-flowered form (P. vulgaris) and a white-flowered form (Pinguicula alpina). Linnaeus added P. villosa and P. lusitanica when he published his Species Plantarum in 1753. The number of known species rose sharply with the exploration of the new continents in the 19th century; by 1844, 32 species were known. It was only in the late 19th century that the carnivory of this genus began to be studied in detail. In a letter to Asa Gray dated June 3, 1874, Charles Darwin mentioned his early observations of the butterwort's digestive process and insectivorous nature. Darwin studied these plants extensively. S. J. Casper's large 1966 monograph of the genus included 46 species, a number which has almost doubled since then. Many discoveries have been made in recent years, especially in Mexico. Another important development in the history of butterworts was the formation in the 1990s of the International Pinguicula Study Group, an organization dedicated to furthering knowledge of this genus and promoting its popularity in cultivation. Uses Butterworts are widely cultivated by carnivorous plant enthusiasts. The temperate species and many of the Mexican butterworts are relatively easy to grow and have therefore gained a degree of popularity. Two of the most widely grown plants are the hybrid cultivars Pinguicula × 'Sethos' and Pinguicula × 'Weser'. Both are crosses of Pinguicula ehlersiae and Pinguicula moranensis, and are employed by commercial orchid nurseries to combat pests. Butterworts also produce a strong bactericide which prevents insects from rotting while they are being digested. According to Linnaeus, this property has long been known by northern Europeans, who applied butterwort leaves to the sores of cattle to promote healing. Additionally, butterwort leaves were used to curdle milk and form a buttermilk-like fermented milk product called (Sweden) and (Norway). Classification Pinguicula belong to the bladderwort family (Lentibulariaceae), along with Utricularia and Genlisea. Siegfried Jost Casper systematically divided them into three subgenera with 15 sections. A detailed study of the phylogenetics of butterworts by Cieslak et al. (2005) found that all of the currently accepted subgenera and many of the sections were polyphyletic. The diagram below gives a more accurate representation of the correct cladogram. Polyphyletic sections are marked with an *.
                  ┌──── Clade I (Sections Temnoceras*, Orcheosanthus*, Longitubus, Heterophyllum*, Agnata*, Isoloba*, Crassifolia)
              ┌───┤
              │   └──── Clade II (Section Micranthus* = P. alpina)
       ┌──────┤
       │      └──────── Clade III (Sections Micranthus*, Nana)
   ┌───┤
   │   └─────────────── Clade IV (Section Pinguicula)
───┤
   └─────────────────── Clade V (Sections Isoloba*, Ampullipalatum, Cardiophyllum)
Biology and health sciences
Lamiales
Plants
30876400
https://en.wikipedia.org/wiki/Heart%20transplantation
Heart transplantation
A heart transplant, or a cardiac transplant, is a surgical transplant procedure performed on patients with end-stage heart failure or severe coronary artery disease when other medical or surgical treatments have failed. The most common procedure is to take a functioning heart, with or without both lungs, from a recently deceased organ donor (brain death is the standard) and implant it into the patient. The patient's own heart is either removed and replaced with the donor heart (orthotopic procedure) or, much less commonly, the recipient's diseased heart is left in place to support the donor heart (heterotopic, or "piggyback", transplant procedure). Approximately 3,500 heart transplants are performed each year worldwide, more than half of which are in the US. Post-operative survival periods average 15 years. Heart transplantation is not considered to be a cure for heart disease; rather it is a life-saving treatment intended to improve the quality and duration of life for a recipient. History American medical researcher Simon Flexner was one of the first people to mention the possibility of heart transplantation. In 1907, he wrote the paper "Tendencies in Pathology," in which he said that it would be possible one day by surgery to replace diseased human organs – including arteries, stomach, kidneys and heart. Not having a human donor heart available, James D. Hardy of the University of Mississippi Medical Center transplanted the heart of a chimpanzee into the chest of the dying Boyd Rush in the early morning of Jan. 24, 1964. Hardy used a defibrillator to shock the heart into beating. This heart did beat in Rush's chest for 60 to 90 minutes (sources differ), and then Rush died without regaining consciousness. Although Hardy was a respected surgeon who had performed the world's first human-to-human lung transplant a year earlier, author Donald McRae states that Hardy could feel the "icy disdain" from fellow surgeons at the Sixth International Transplantation Conference several weeks after this attempt with the chimpanzee heart. Hardy had been inspired by the limited success of Keith Reemtsma at Tulane University in transplanting chimpanzee kidneys into human patients with kidney failure. The consent form Hardy asked Rush's stepsister to sign did not include the possibility that a chimpanzee heart might be used, although Hardy stated that he did include this in verbal discussions. Xenotransplantation is the technical term for the transplant of an organ or tissue from one species to another. Dr. Dhaniram Baruah of Assam, India, was the first heart surgeon to transplant a pig's heart into a human body; the recipient, however, died soon afterwards. The world's first successful pig-to-human heart transplant was performed in January 2022 by surgeon Bartley P. Griffith of the United States. The world's first human-to-human heart transplant was performed by South African cardiac surgeon Christiaan Barnard utilizing the techniques developed by American surgeons Norman Shumway and Richard Lower. Patient Louis Washkansky received this transplant on December 3, 1967, at the Groote Schuur Hospital in Cape Town, South Africa. Washkansky, however, died 18 days later from pneumonia. On December 6, 1967, at Maimonides Hospital in Brooklyn, New York, Adrian Kantrowitz performed the world's first pediatric heart transplant. The infant's new heart stopped beating after 7 hours and could not be restarted. At a subsequent press conference, Kantrowitz emphasized that he did not consider the operation a success.
Norman Shumway performed the first adult heart transplant in the United States on January 6, 1968, at the Stanford University Hospital. A team led by Donald Ross performed the first heart transplant in the United Kingdom on May 3, 1968. These were allotransplants, the technical term for a transplant from a non-genetically identical individual of the same species. Brain death is the current ethical standard for when a heart donation can be allowed. Worldwide, more than 100 transplants were performed by various doctors during 1968. Only a third of these patients lived longer than three months. The next big breakthrough came in 1983 when cyclosporine entered widespread use. This drug enabled much smaller amounts of corticosteroids to be used to prevent many cases of rejection (the "corticosteroid-sparing" effect of cyclosporine). On June 9, 1984, "JP" Lovette IV of Denver, Colorado, became the recipient of the world's first successful pediatric heart transplant. Columbia-Presbyterian Medical Center surgeons transplanted the heart of 4-year-old John Nathan Ford of Harlem into 4-year-old JP a day after the Harlem child died of injuries received in a fall from a fire escape at his home. JP was born with multiple heart defects. The transplant was done by a surgical team led by Dr. Eric A. Rose, director of cardiac transplantation at New York–Presbyterian Hospital. Drs. Keith Reemtsma and Fred Bowman also were members of the team for the six-hour operation. In 1988, the first "domino" heart transplant was performed, in which a patient in need of a lung transplant with a healthy heart would receive a heart-lung transplant, and their original heart would be transplanted into someone else. Worldwide, about 5,000 heart transplants are performed annually, an increase of 53 percent between 2011 and 2022. The majority of these are performed in the United States (about 4,000 annually). Vanderbilt University Medical Center in Nashville, Tennessee, is currently the largest heart transplant center in the world, having performed a world-record 174 adult and pediatric transplants in 2024 alone. About 800,000 people have NYHA Class IV heart failure symptoms indicating advanced heart failure. The great disparity between the number of patients needing transplants and the number of procedures being performed spurred research into the transplantation of non-human hearts into humans after 1993. Xenografts from other species and artificial hearts are two less successful alternatives to allografts. The ability of medical teams to perform transplants continues to expand. For example, Sri Lanka's first heart transplant was successfully performed at the Kandy General Hospital on July 7, 2017. In recent years, donor heart preservation has improved, and the Organ Care System is being used in some centers in order to reduce the harmful effects of cold storage. During a heart transplant, the vagus nerve is severed, thus removing parasympathetic influence over the myocardium. However, some limited return of sympathetic nerves has been demonstrated in humans. Recently, Australian researchers found a way to almost double the time a donor heart can survive prior to transplantation. Heart transplantation using donation after circulatory death (DCD) was recently adopted and can help reduce waitlist times while increasing transplant rates.
Critically ill patients who are unsuitable for heart transplantation can be rescued and optimized with mechanical circulatory support, and bridged successfully to heart transplantation afterwards with good outcomes. On January 7, 2022, David Bennett, aged 57, of Maryland became the first person to receive a gene-edited pig heart in a transplant at the University of Maryland Medical Center. Before the transplant, Bennett was unable to receive a human heart due to his history of heart failure and an irregular heartbeat, leading surgeons to use the genetically modified pig heart. Bennett died two months later at the University of Maryland Medical Center on March 8, 2022. On April 19, 2023, Stanford Medicine surgeons performed the first beating-heart transplants from cardiac death donors. On December 11, 2024, in Padua, Italy, the world's first heart transplant from a non-beating donor to a fully beating heart was performed. Contraindications Some patients are less suitable for a heart transplant, especially if they have other circulatory conditions related to their heart condition. The following conditions in a patient increase the chances of complications. Absolute contraindications: Irreversible kidney, lung, or liver disease Active cancer if it is likely to impact the survival of the patient Life-threatening diseases unrelated to the cause of heart failure, including acute infection or systemic disease such as systemic lupus erythematosus, sarcoidosis, or amyloidosis Vascular disease of the neck and leg arteries. High pulmonary vascular resistance – over 5 or 6 Wood units. Relative contraindications: Insulin-dependent diabetes with severe organ dysfunction Recent thromboembolism such as stroke Severe obesity Age over 65 years (some variation between centers) – older patients are usually evaluated on an individual basis. Active substance use disorder, such as alcohol, recreational drugs or tobacco smoking (which increases the chance of lung disease) Patients who are in need of a heart transplant but do not qualify may be candidates for an artificial heart or a left ventricular assist device (LVAD). Complications Potential complications include: Post-operative complications include infection and sepsis. The surgery death rate was 5–10% in 2011. Acute or chronic graft rejection Cardiac allograft vasculopathy Atrial arrhythmia Lymphoproliferative malignancies, further worsened by immunosuppressive medication Increased risk of secondary infections due to immunosuppressive medication Serum sickness due to anti-thymocyte globulin Tricuspid valve regurgitation Repeated endomyocardial biopsy can cause bleeding and thrombosis Rejection Since the transplanted heart originates from another organism, the recipient's immune system will attempt to reject it regardless of whether the donor heart matches the recipient's blood type (unless the donor heart is an isograft). As with other solid organ transplants, the risk of rejection never fully goes away, and the patient will be on immunosuppressive drugs for the rest of their life. Usage of these drugs may cause unwanted side effects, such as an increased likelihood of contracting secondary infections or developing certain types of cancer. Recipients can acquire kidney disease from a heart transplant due to the side effects of immunosuppressant medications. Many recent advances in reducing complications due to tissue rejection stem from mouse heart transplant procedures.
People who have had heart transplants are monitored in various ways to test for possible organ rejection. A 2022 pilot study examining the acceptability and feasibility of using video directly observed therapy to increase medication adherence in adolescent heart transplant patients showed promising results, with 90.1% medication adherence compared to the 40–60% typically observed. Higher variability in medication levels can lead to higher rates of organ rejection and other poor outcomes. Prognosis The prognosis for heart transplant patients following the orthotopic procedure has improved over the past 20 years, and as of June 5, 2009, the survival rates were: 1 year: 88.0% (males), 86.2% (females) 3 years: 79.3% (males), 77.2% (females) 5 years: 73.2% (males), 69.0% (females) In 2007, researchers from the Johns Hopkins University School of Medicine discovered that "men receiving female hearts had a 15% increase in the risk of adjusted cumulative mortality" over five years compared to men receiving male hearts. Survival rates for women did not significantly differ based on male or female donors.
Biology and health sciences
Surgery
Health
30876419
https://en.wikipedia.org/wiki/Quantum%20state
Quantum state
In quantum physics, a quantum state is a mathematical entity that embodies the knowledge of a quantum system. Quantum mechanics specifies the construction, evolution, and measurement of a quantum state. The result is a prediction for the system represented by the state. Knowledge of the quantum state, and the rules for the system's evolution in time, exhausts all that can be known about a quantum system. Quantum states may be defined differently for different kinds of systems or problems. Two broad categories are wave functions describing quantum systems using position or momentum variables and the more abstract vector quantum states. Historical, educational, and application-focused problems typically feature wave functions; modern professional physics uses the abstract vector states. In both categories, quantum states divide into pure versus mixed states, or into coherent states and incoherent states. Categories with special properties include stationary states for time independence and quantum vacuum states in quantum field theory. From the states of classical mechanics As a tool for physics, quantum states grew out of states in classical mechanics. A classical dynamical state consists of a set of dynamical variables with well-defined real values at each instant of time. For example, the state of a cannon ball would consist of its position and velocity. The state values evolve under equations of motion and thus remain strictly determined. If we know the position of a cannon and the exit velocity of its projectiles, then we can use equations containing the force of gravity to predict the trajectory of a cannon ball precisely. Similarly, quantum states consist of sets of dynamical variables that evolve under equations of motion. However, the values derived from quantum states are complex numbers, quantized, limited by uncertainty relations, and only provide a probability distribution for the outcomes for a system. These constraints alter the nature of quantum dynamic variables. For example, the quantum state of an electron in a double-slit experiment would consist of complex values over the detection region and, when squared, only predict the probability distribution of electron counts across the detector. Role in quantum mechanics The process of describing a quantum system with quantum mechanics begins with identifying a set of variables defining the quantum state of the system. The set will contain compatible and incompatible variables. Simultaneous measurement of a complete set of compatible variables prepares the system in a unique state. The state then evolves deterministically according to the equations of motion. Subsequent measurement of the state produces a sample from a probability distribution predicted by the quantum mechanical operator corresponding to the measurement. The fundamentally statistical or probabilistic nature of quantum measurements changes the role of quantum states in quantum mechanics compared to classical states in classical mechanics. In classical mechanics, the initial state of one or more bodies is measured; the state evolves according to the equations of motion; measurements of the final state are compared to predictions. In quantum mechanics, ensembles of identically prepared quantum states evolve according to the equations of motion and the results of many repeated measurements are compared to predicted probability distributions. Measurements Measurements, macroscopic operations on quantum states, filter the state.
Whatever the input quantum state might be, repeated identical measurements give consistent values. For this reason, measurements 'prepare' quantum states for experiments, placing the system in a partially defined state. Subsequent measurements may either further prepare the system – these are compatible measurements – or they may alter the state, redefining it – these are called incompatible or complementary measurements. For example, we may measure the momentum of a state along a given axis any number of times and get the same result, but if we measure the position after once measuring the momentum, subsequent measurements of momentum are changed. The quantum state appears unavoidably altered by incompatible measurements. This is known as the uncertainty principle. Eigenstates and pure states The quantum state after a measurement is in an eigenstate corresponding to that measurement and the value measured. Other aspects of the state may be unknown. Repeating the measurement will not alter the state. In some cases, compatible measurements can further refine the state, causing it to be an eigenstate corresponding to all these measurements. A full set of compatible measurements produces a pure state. Any state that is not pure is called a mixed state as discussed in more depth below. The eigenstate solutions to the Schrödinger equation can be formed into pure states. Experiments rarely produce pure states. Therefore, statistical mixtures of solutions must be compared to experiments. Representations The same physical quantum state can be expressed mathematically in different ways called representations. The position wave function is one representation often seen first in introductions to quantum mechanics. The equivalent momentum wave function is another wave-function-based representation. Representations are analogous to coordinate systems or similar mathematical devices like parametric equations. Selecting a representation will make some aspects of a problem easier at the cost of making other things difficult. In formal quantum mechanics (see below) the theory develops in terms of an abstract 'vector space', avoiding any particular representation. This allows many elegant concepts of quantum mechanics to be expressed and to be applied even in cases where no classical analog exists. Wave function representations Wave functions represent quantum states, particularly when they are functions of position or of momentum. Historically, definitions of quantum states used wavefunctions before the more formal methods were developed. The wave function is a complex-valued function of any complete set of commuting or compatible degrees of freedom. For example, one set could be the spatial coordinates of an electron. Preparing a system by measuring the complete set of compatible observables produces a pure quantum state. More commonly, incomplete preparation produces a mixed quantum state. Wave function solutions of Schrödinger's equations of motion for operators corresponding to measurements can readily be expressed as pure states; they must be combined with statistical weights matching experimental preparation to compute the expected probability distribution. Pure states of wave functions Numerical or analytic solutions in quantum mechanics can be expressed as pure states. These solution states, called eigenstates, are labeled with quantized values, typically quantum numbers.
For example, when dealing with the energy spectrum of the electron in a hydrogen atom, the relevant pure states are identified by the principal quantum number , the angular momentum quantum number , the magnetic quantum number , and the spin z-component . For another example, if the spin of an electron is measured in any direction, e.g. with a Stern–Gerlach experiment, there are two possible results: up or down. A pure state here is represented by a two-dimensional complex vector , with a length of one; that is, with where and are the absolute values of and . The postulates of quantum mechanics state that pure states, at a given time , correspond to vectors in a separable complex Hilbert space, while each measurable physical quantity (such as the energy or momentum of a particle) is associated with a mathematical operator called the observable. The operator serves as a linear function that acts on the states of the system. The eigenvalues of the operator correspond to the possible values of the observable. For example, it is possible to observe a particle with a momentum of 1 kg⋅m/s if and only if one of the eigenvalues of the momentum operator is 1 kg⋅m/s. The corresponding eigenvector (which physicists call an eigenstate) with eigenvalue 1 kg⋅m/s would be a quantum state with a definite, well-defined value of momentum of 1 kg⋅m/s, with no quantum uncertainty. If its momentum were measured, the result is guaranteed to be 1 kg⋅m/s. On the other hand, a pure state described as a superposition of multiple different eigenstates does in general have quantum uncertainty for the given observable. Using bra–ket notation, this linear combination of eigenstates can be represented as: The coefficient that corresponds to a particular state in the linear combination is a complex number, thus allowing interference effects between states. The coefficients are time dependent. How a quantum state changes in time is governed by the time evolution operator. Mixed states of wave functions A mixed quantum state corresponds to a probabilistic mixture of pure states; however, different distributions of pure states can generate equivalent (i.e., physically indistinguishable) mixed states. A mixture of quantum states is again a quantum state. A mixed state for electron spins, in the density-matrix formulation, has the structure of a matrix that is Hermitian and positive semi-definite, and has trace 1. A more complicated case is given (in bra–ket notation) by the singlet state, which exemplifies quantum entanglement: which involves superposition of joint spin states for two particles with spin 1/2. The singlet state satisfies the property that if the particles' spins are measured along the same direction then either the spin of the first particle is observed up and the spin of the second particle is observed down, or the first one is observed down and the second one is observed up, both possibilities occurring with equal probability. A pure quantum state can be represented by a ray in a projective Hilbert space over the complex numbers, while mixed states are represented by density matrices, which are positive semidefinite operators that act on Hilbert spaces. The Schrödinger–HJW theorem classifies the multitude of ways to write a given mixed state as a convex combination of pure states. 
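The two-component spin example earlier in this section can be made concrete with a few lines of linear algebra. This is a hedged sketch, not taken from the article: it assumes the usual Pauli-z eigenbasis for the 'up' and 'down' states and simply checks normalization, the Born-rule probabilities, and an expectation value for a superposition.

```python
import numpy as np

# Hedged sketch: a spin-1/2 (two-level) pure state |psi> = a|up> + b|down>.
# Normalization requires |a|**2 + |b|**2 = 1, and |a|**2, |b|**2 are the
# probabilities of measuring 'up' or 'down' along the chosen (z) axis.

up   = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)        # equal weights, relative phase i
psi = a * up + b * down

print(np.vdot(psi, psi).real)                 # 1.0  (normalized)
print(abs(np.vdot(up, psi))**2)               # 0.5  probability of 'up'
print(abs(np.vdot(down, psi))**2)             # 0.5  probability of 'down'

# An observable is a Hermitian operator; its eigenvalues are the possible
# measurement results.  For the Pauli-z operator the eigenvalues are +1, -1.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
print(np.vdot(psi, sigma_z @ psi).real)       # expectation value, here 0.0
```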
Before a particular measurement is performed on a quantum system, the theory gives only a probability distribution for the outcome, and the form that this distribution takes is completely determined by the quantum state and the linear operators describing the measurement. Probability distributions for different measurements exhibit tradeoffs exemplified by the uncertainty principle: a state that implies a narrow spread of possible outcomes for one experiment necessarily implies a wide spread of possible outcomes for another. Statistical mixtures of states are a different type of linear combination. A statistical mixture of states is a statistical ensemble of independent systems. Statistical mixtures reflect the experimenter's degree of knowledge, whereas the uncertainty within quantum mechanics is fundamental. Mathematically, a statistical mixture is not a combination using complex coefficients, but rather a combination using real-valued, positive probabilities of different states . A number represents the probability of a randomly selected system being in the state . Unlike the linear combination case, each system is in a definite eigenstate. The expectation value of an observable is a statistical mean of measured values of the observable. It is this mean, and the distribution of probabilities, that is predicted by physical theories. There is no state that is simultaneously an eigenstate for all observables. For example, we cannot prepare a state such that both the position measurement and the momentum measurement (at the same time ) are known exactly; at least one of them will have a range of possible values. This is the content of the Heisenberg uncertainty relation. Moreover, in contrast to classical mechanics, it is unavoidable that performing a measurement on the system generally changes its state. More precisely: After measuring an observable A, the system will be in an eigenstate of A; thus the state has changed, unless the system was already in that eigenstate. This expresses a kind of logical consistency: If we measure A twice in the same run of the experiment, the measurements being directly consecutive in time, then they will produce the same results. This has some strange consequences, however, as follows. Consider two incompatible observables, and , where corresponds to a measurement earlier in time than . Suppose that the system is in an eigenstate of at the experiment's beginning. If we measure only , all runs of the experiment will yield the same result. If we measure first and then in the same run of the experiment, the system will transfer to an eigenstate of after the first measurement, and we will generally notice that the results of are statistical. Thus: Quantum mechanical measurements influence one another, and the order in which they are performed is important. Another feature of quantum states becomes relevant if we consider a physical system that consists of multiple subsystems; for example, an experiment with two particles rather than one. Quantum physics allows for certain states, called entangled states, that show statistical correlations between measurements on the two particles which cannot be explained by classical theory. For details, see Quantum entanglement. These entangled states lead to experimentally testable properties (Bell's theorem) that allow us to distinguish between quantum theory and alternative classical (non-quantum) models. Schrödinger picture vs. 
Heisenberg picture One can take the observables to be dependent on time, while the state is fixed once at the beginning of the experiment. This approach is called the Heisenberg picture. (This approach was taken in the later part of the discussion above, with time-varying observables , .) One can, equivalently, treat the observables as fixed, while the state of the system depends on time; that is known as the Schrödinger picture. (This approach was taken in the earlier part of the discussion above, with a time-varying state .) Conceptually (and mathematically), the two approaches are equivalent; choosing one of them is a matter of convention. Both viewpoints are used in quantum theory. While non-relativistic quantum mechanics is usually formulated in terms of the Schrödinger picture, the Heisenberg picture is often preferred in a relativistic context, that is, for quantum field theory. Compare with Dirac picture. Formalism in quantum physics Pure states as rays in a complex Hilbert space Quantum physics is most commonly formulated in terms of linear algebra, as follows. Any given system is identified with some finite- or infinite-dimensional Hilbert space. The pure states correspond to vectors of norm 1. Thus the set of all pure states corresponds to the unit sphere in the Hilbert space, because the unit sphere is defined as the set of all vectors with norm 1. Multiplying a pure state by a scalar is physically inconsequential (as long as the state is considered by itself). If a vector in a complex Hilbert space can be obtained from another vector by multiplying by some non-zero complex number, the two vectors in are said to correspond to the same ray in the projective Hilbert space of . Note that although the word ray is used, properly speaking, a point in the projective Hilbert space corresponds to a line passing through the origin of the Hilbert space, rather than a half-line, or ray in the geometrical sense. Spin The angular momentum has the same dimension (M·L²·T⁻¹) as the Planck constant and, at quantum scale, behaves as a discrete degree of freedom of a quantum system. Most particles possess a kind of intrinsic angular momentum that does not appear at all in classical mechanics and arises from Dirac's relativistic generalization of the theory. Mathematically it is described with spinors. In non-relativistic quantum mechanics the group representations of the Lie group SU(2) are used to describe this additional freedom. For a given particle, the choice of representation (and hence the range of possible values of the spin observable) is specified by a non-negative number that, in units of the reduced Planck constant , is either an integer (0, 1, 2, ...) or a half-integer (1/2, 3/2, 5/2, ...). For a massive particle with spin , its spin quantum number always assumes one of the possible values in the set As a consequence, the quantum state of a particle with spin is described by a vector-valued wave function with values in C^(2S+1). Equivalently, it is represented by a complex-valued function of four variables: one discrete quantum number variable (for the spin) is added to the usual three continuous variables (for the position in space). Many-body states and particle statistics The quantum state of a system of N particles, each potentially with spin, is described by a complex-valued function with four variables per particle, corresponding to 3 spatial coordinates and spin, e.g. Here, the spin variables mν assume values from the set where is the spin of the νth particle.
for a particle that does not exhibit spin. The treatment of identical particles is very different for bosons (particles with integer spin) versus fermions (particles with half-integer spin). The above N-particle function must either be symmetrized (in the bosonic case) or anti-symmetrized (in the fermionic case) with respect to the particle numbers. If not all N particles are identical, but some of them are, then the function must be (anti)symmetrized separately over the variables corresponding to each group of identical particles, according to its statistics (bosonic or fermionic). Electrons are fermions with , while photons (quanta of light) are bosons with (although in the vacuum they are massless and can't be described with Schrödinger mechanics). When symmetrization or anti-symmetrization is unnecessary, -particle spaces of states can be obtained simply by tensor products of one-particle spaces, to which we will return later. Basis states of one-particle systems A state belonging to a separable complex Hilbert space can always be expressed uniquely as a linear combination of elements of an orthonormal basis of . Using bra–ket notation, this means any state can be written as with complex coefficients and basis elements . In this case, the normalization condition translates to In physical terms, has been expressed as a quantum superposition of the "basis states" , i.e., the eigenstates of an observable. In particular, if said observable is measured on the normalized state , then is the probability that the result of the measurement is . In general, the expression for probability always consists of a relation between the quantum state and a portion of the spectrum of the dynamical variable (i.e. random variable) being observed. For example, the situation above describes the discrete case as eigenvalues belong to the point spectrum. Likewise, the wave function is just the eigenfunction of the Hamiltonian operator with corresponding eigenvalue(s) , the energy of the system. An example of the continuous case is given by the position operator. The probability measure for a system in state is given by: where is the probability density function for finding a particle at a given position. These examples emphasize the distinction in characteristics between the state and the observable. That is, whereas is a pure state belonging to , the (generalized) eigenvectors of the position operator do not. Pure states vs. bound states Though closely related, pure states are not the same as bound states belonging to the pure point spectrum of an observable with no quantum uncertainty. A particle is said to be in a bound state if it remains localized in a bounded region of space for all times. A pure state is called a bound state if and only if for every there is a compact set such that for all . The integral represents the probability that a particle is found in a bounded region at any time . If the probability remains arbitrarily close to then the particle is said to remain in . Superposition of pure states As mentioned above, quantum states may be superposed. If and are two kets corresponding to quantum states, the ket is also a quantum state of the same system. Both and can be complex numbers; their relative amplitude and relative phase will influence the resulting quantum state. Writing the superposed state using and defining the norm of the state as: and extracting the common factors gives: The overall phase factor in front has no physical effect.
Only the relative phase affects the physical nature of the superposition. One example of superposition is the double-slit experiment, in which superposition leads to quantum interference. Another example of the importance of relative phase is Rabi oscillations, where the relative phase of two states varies in time due to the Schrödinger equation. The resulting superposition ends up oscillating back and forth between two different states. Mixed states A pure quantum state is a state which can be described by a single ket vector, as described above. A mixed quantum state is a statistical ensemble of pure states (see Quantum statistical mechanics). Mixed states arise in quantum mechanics in two different situations: first, when the preparation of the system is not fully known, and thus one must deal with a statistical ensemble of possible preparations; and second, when one wants to describe a physical system which is entangled with another, as its state cannot be described by a pure state. In the first case, there could theoretically be another person who knows the full history of the system, and could therefore describe the same system as a pure state; in this case, the density matrix is simply used to represent the limited knowledge of a quantum state. In the second case, however, the existence of quantum entanglement theoretically prevents the existence of complete knowledge about the subsystem, and it is impossible for any person to describe the subsystem of an entangled pair as a pure state. Mixed states inevitably arise from pure states when, for a composite quantum system with an entangled state on it, the part is inaccessible to the observer. The state of the part is then expressed as the partial trace over . A mixed state cannot be described with a single ket vector. Instead, it is described by its associated density matrix (or density operator), usually denoted ρ. Density matrices can describe both mixed and pure states, treating them on the same footing. Moreover, a mixed quantum state on a given quantum system described by a Hilbert space can always be represented as the partial trace of a pure quantum state (called a purification) on a larger bipartite system for a sufficiently large Hilbert space . The density matrix describing a mixed state is defined to be an operator of the form where is the fraction of the ensemble in each pure state . The density matrix can be thought of as a way of using the one-particle formalism to describe the behavior of many similar particles by giving a probability distribution (or ensemble) of states that these particles can be found in. A simple criterion for checking whether a density matrix is describing a pure or mixed state is that the trace of ρ² is equal to 1 if the state is pure, and less than 1 if the state is mixed. Another, equivalent, criterion is that the von Neumann entropy is 0 for a pure state, and strictly positive for a mixed state. The rules for measurement in quantum mechanics are particularly simple to state in terms of density matrices. For example, the ensemble average (expectation value) of a measurement corresponding to an observable is given by where and are eigenkets and eigenvalues, respectively, for the operator , and "" denotes trace. It is important to note that two types of averaging are occurring, one (over ) being the usual expected value of the observable when the quantum system is in state , and the other (over ) being a statistical (so-called incoherent) average with the probabilities that the quantum system is in those states.
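The purity and entropy criteria just described are straightforward to verify numerically. The following hedged sketch, not taken from the article, builds a pure-state density matrix and an equal statistical mixture for a two-level system and checks Tr(ρ²), the von Neumann entropy, and the trace formula for an expectation value; all function names are illustrative.

```python
import numpy as np

def density(ensemble):
    """rho = sum_s p_s |psi_s><psi_s| for an ensemble of (p, ket) pairs."""
    return sum(p * np.outer(k, k.conj()) for p, k in ensemble)

def purity(rho):
    return np.trace(rho @ rho).real            # 1 for pure, < 1 for mixed

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]               # treat 0*log(0) as 0
    return float(-np.sum(evals * np.log(evals)))

up, down = np.array([1, 0], complex), np.array([0, 1], complex)
plus = (up + down) / np.sqrt(2)

pure  = density([(1.0, plus)])                 # a superposition is still pure
mixed = density([(0.5, up), (0.5, down)])      # 50/50 statistical mixture

sigma_x = np.array([[0, 1], [1, 0]], complex)  # an example observable
for name, rho in [("pure", pure), ("mixed", mixed)]:
    print(name, purity(rho), von_neumann_entropy(rho),
          np.trace(rho @ sigma_x).real)        # <sigma_x>: 1.0 vs 0.0
```

The contrast in the last column illustrates the distinction drawn above: the coherent superposition and the statistical mixture assign the same probabilities to 'up' and 'down', yet they differ in purity, entropy, and the expectation value of an incompatible observable.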
Mathematical generalizations States can be formulated in terms of observables, rather than as vectors in a vector space. These are positive normalized linear functionals on a C*-algebra, or sometimes other classes of algebras of observables. See State on a C*-algebra and Gelfand–Naimark–Segal construction for more details.
Physical sciences
Quantum mechanics
Physics
30876799
https://en.wikipedia.org/wiki/Riemann%20sphere
Riemann sphere
In mathematics, the Riemann sphere, named after Bernhard Riemann, is a model of the extended complex plane (also called the closed complex plane): the complex plane plus one point at infinity. This extended plane represents the extended complex numbers, that is, the complex numbers plus a value for infinity. With the Riemann model, the point at infinity is near to very large numbers, just as the point zero is near to very small numbers. The extended complex numbers are useful in complex analysis because they allow for division by zero in some circumstances, in a way that makes expressions such as well-behaved. For example, any rational function on the complex plane can be extended to a holomorphic function on the Riemann sphere, with the poles of the rational function mapping to infinity. More generally, any meromorphic function can be thought of as a holomorphic function whose codomain is the Riemann sphere. In geometry, the Riemann sphere is the prototypical example of a Riemann surface, and is one of the simplest complex manifolds. In projective geometry, the sphere is an example of a complex projective space and can be thought of as the complex projective line , the projective space of all complex lines in . As with any compact Riemann surface, the sphere may also be viewed as a projective algebraic curve, making it a fundamental example in algebraic geometry. It also finds utility in other disciplines that depend on analysis and geometry, such as the Bloch sphere of quantum mechanics and in other branches of physics. Extended complex numbers The extended complex numbers consist of the complex numbers together with . The set of extended complex numbers may be written as , and is often denoted by adding some decoration to the letter , such as The notation has also seen use, but as this notation is also used for the punctured plane , it can lead to ambiguity. Geometrically, the set of extended complex numbers is referred to as the Riemann sphere (or extended complex plane). Arithmetic operations Addition of complex numbers may be extended by defining, for , for any complex number , and multiplication may be defined by for all nonzero complex numbers , with . Note that and are left undefined. Unlike the complex numbers, the extended complex numbers do not form a field, since ∞ has neither an additive nor a multiplicative inverse. Nonetheless, it is customary to define division on by for all nonzero complex numbers with and . The quotients and are left undefined. Rational functions Any rational function (in other words, is the ratio of polynomial functions and of with complex coefficients, such that and have no common factor) can be extended to a continuous function on the Riemann sphere. Specifically, if is a complex number such that the denominator is zero but the numerator is nonzero, then can be defined as . Moreover, can be defined as the limit of as , which may be finite or infinite. The set of complex rational functions—whose mathematical symbol is —forms all possible holomorphic functions from the Riemann sphere to itself, when it is viewed as a Riemann surface, except for the constant function taking the value everywhere. The functions of form an algebraic field, known as the field of rational functions on the sphere. For example, given the function we may define , since the denominator is zero at , and since as . Using these definitions, becomes a continuous function from the Riemann sphere to itself.
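The extension of a rational function to the Riemann sphere can be illustrated symbolically. This is a hedged sketch, not the article's (unspecified) example: it picks an arbitrary rational function whose numerator and denominator share no common factor, and uses limits to assign values at a pole and at infinity.

```python
import sympy as sp

# Hedged sketch with an arbitrary illustrative rational function
# f(z) = (2*z**2 + 3) / (z**2 - 1).  Extending f to the Riemann sphere means
# assigning the value infinity at the poles z = 1, -1 and the limiting value
# of f(z) as z -> infinity at the point at infinity.

z = sp.symbols('z')
f = (2 * z**2 + 3) / (z**2 - 1)

print(sp.limit(f, z, 1))       # oo: define f(1) = infinity (a pole)
print(sp.limit(f, z, sp.oo))   # 2:  define f(infinity) = 2
```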
As a complex manifold As a one-dimensional complex manifold, the Riemann sphere can be described by two charts, both with domain equal to the complex number plane ℂ. Let ζ be a complex number in one copy of ℂ, and let ξ be a complex number in another copy of ℂ. Identify each nonzero complex number ζ of the first ℂ with the nonzero complex number 1/ζ of the second ℂ. Then the map ζ ↦ 1/ζ is called the transition map between the two copies of ℂ—the so-called charts—glueing them together. Since the transition maps are holomorphic, they define a complex manifold, called the Riemann sphere. As a complex manifold of 1 complex dimension (i.e. 2 real dimensions), this is also called a Riemann surface. Intuitively, the transition maps indicate how to glue two planes together to form the Riemann sphere. The planes are glued in an "inside-out" manner, so that they overlap almost everywhere, with each plane contributing just one point (its origin) missing from the other plane. In other words, (almost) every point in the Riemann sphere has both a ζ value and a ξ value, and the two values are related by ξ = 1/ζ. The point where ζ = 0 should then have ξ-value "1/0"; in this sense, the origin of the ζ-chart plays the role of ∞ in the ξ-chart. Symmetrically, the origin of the ξ-chart plays the role of ∞ in the ζ-chart. Topologically, the resulting space is the one-point compactification of a plane into the sphere. However, the Riemann sphere is not merely a topological sphere. It is a sphere with a well-defined complex structure, so that around every point on the sphere there is a neighborhood that can be biholomorphically identified with ℂ. On the other hand, the uniformization theorem, a central result in the classification of Riemann surfaces, states that every simply-connected Riemann surface is biholomorphic to the complex plane, the hyperbolic plane, or the Riemann sphere. Of these, the Riemann sphere is the only one that is a closed surface (a compact surface without boundary). Hence the two-dimensional sphere admits a unique complex structure turning it into a one-dimensional complex manifold. As the complex projective line The Riemann sphere can also be defined as the complex projective line. The points of the complex projective line can be defined as equivalence classes of non-null vectors in the complex vector space ℂ²: two non-null vectors (z, w) and (u, v) are equivalent iff (u, v) = (λz, λw) for some non-zero coefficient λ. In this case, the equivalence class is written [z : w] using projective coordinates. Given any point [z : w] in the complex projective line, one of z and w must be non-zero, say w ≠ 0. Then by the notion of equivalence, [z : w] = [z/w : 1], which is in a chart for the Riemann sphere manifold. This treatment of the Riemann sphere connects most readily to projective geometry. For example, any line (or smooth conic) in the complex projective plane is biholomorphic to the complex projective line. It is also convenient for studying the sphere's automorphisms, later in this article. As a sphere The Riemann sphere can be visualized as the unit sphere x² + y² + z² = 1 in the three-dimensional real space ℝ³. To this end, consider the stereographic projection from the unit sphere minus the point (0, 0, 1) onto the plane z = 0, which we identify with the complex plane by ζ = x + iy. In Cartesian coordinates (x, y, z) and spherical coordinates (φ, θ) on the sphere (with φ the zenith and θ the azimuth), the projection is ζ = (x + iy)/(1 − z) = cot(φ/2) e^(iθ). Similarly, stereographic projection from (0, 0, −1) onto the plane z = 0, identified with another copy of the complex plane by ξ = x − iy, is written ξ = (x − iy)/(1 + z) = tan(φ/2) e^(−iθ). The inverses of these two stereographic projections are maps from the complex plane to the sphere. 
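The projection formulas just given translate directly into code. The short sketch below, with assumed function names, checks the round trip through the ζ-chart and the consistency of the two charts with the transition map ξ = 1/ζ.

```python
# Sketch of stereographic projection between the unit sphere and the ζ-chart
# (projection from the north pole (0, 0, 1) onto the plane z = 0).
def sphere_to_plane(x, y, z):
    """Map a point of the unit sphere (except (0, 0, 1)) to ζ = (x + iy)/(1 - z)."""
    return complex(x, y) / (1 - z)

def plane_to_sphere(zeta):
    """Inverse map: ζ in C goes back to a point of the unit sphere minus (0, 0, 1)."""
    s = abs(zeta) ** 2
    return (2 * zeta.real / (1 + s), 2 * zeta.imag / (1 + s), (s - 1) / (1 + s))

zeta = 0.3 + 0.4j
x, y, z = plane_to_sphere(zeta)
assert abs(sphere_to_plane(x, y, z) - zeta) < 1e-12     # round trip through one chart
xi = complex(x, -y) / (1 + z)                            # projection from the south pole
assert abs(xi - 1 / zeta) < 1e-12                        # consistent with ξ = 1/ζ
```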
The first inverse covers the sphere except the point (0, 0, 1), and the second covers the sphere except the point (0, 0, −1). The two complex planes, that are the domains of these maps, are identified differently with the plane z = 0, because an orientation-reversal is necessary to maintain consistent orientation on the sphere. The transition maps between ζ-coordinates and ξ-coordinates are obtained by composing one projection with the inverse of the other. They turn out to be ξ = 1/ζ and ζ = 1/ξ, as described above. Thus the unit sphere is diffeomorphic to the Riemann sphere. Under this diffeomorphism, the unit circle |ζ| = 1 in the ζ-chart, the unit circle |ξ| = 1 in the ξ-chart, and the equator of the unit sphere are all identified. The unit disk |ζ| < 1 is identified with the southern hemisphere z < 0, while the unit disk |ξ| < 1 is identified with the northern hemisphere z > 0. Metric A Riemann surface does not come equipped with any particular Riemannian metric. The Riemann surface's conformal structure does, however, determine a class of metrics: all those whose subordinate conformal structure is the given one. In more detail: The complex structure of the Riemann surface does uniquely determine a metric up to conformal equivalence. (Two metrics are said to be conformally equivalent if they differ by multiplication by a positive smooth function.) Conversely, any metric on an oriented surface uniquely determines a complex structure, which depends on the metric only up to conformal equivalence. Complex structures on an oriented surface are therefore in one-to-one correspondence with conformal classes of metrics on that surface. Within a given conformal class, one can use conformal symmetry to find a representative metric with convenient properties. In particular, there is always a complete metric with constant curvature in any given conformal class. In the case of the Riemann sphere, the Gauss–Bonnet theorem implies that a constant-curvature metric must have positive curvature K. It follows that the metric must be isometric to the sphere of radius 1/√K in ℝ³ via stereographic projection. In the ζ-chart on the Riemann sphere, the metric with K = 1 is given by ds² = (2/(1 + |ζ|²))² |dζ|². In real coordinates ζ = u + iv, the formula is ds² = 4/(1 + u² + v²)² (du² + dv²). Up to a constant factor, this metric agrees with the standard Fubini–Study metric on complex projective space (of which the Riemann sphere is an example). Up to scaling, this is the only metric on the sphere whose group of orientation-preserving isometries is 3-dimensional (and none is more than 3-dimensional); that group is called SO(3). In this sense, this is by far the most symmetric metric on the sphere. (The group of all isometries, known as O(3), is also 3-dimensional, but unlike SO(3) is not a connected space.) Conversely, let S denote the sphere (as an abstract smooth or topological manifold). By the uniformization theorem there exists a unique complex structure on S up to conformal equivalence. It follows that any metric on S is conformally equivalent to the round metric. All such metrics determine the same conformal geometry. The round metric is therefore not intrinsic to the Riemann sphere, since "roundness" is not an invariant of conformal geometry. The Riemann sphere is only a conformal manifold, not a Riemannian manifold. However, if one needs to do Riemannian geometry on the Riemann sphere, the round metric is a natural choice (with any fixed radius, though radius 1 is the simplest and most common choice). That is because only a round metric on the Riemann sphere has its isometry group be a 3-dimensional group. 
(Namely, the group known as SO(3) ≅ PSU(2), a continuous ("Lie") group that is topologically the 3-dimensional projective space ℝP³.) Automorphisms The study of any mathematical object is aided by an understanding of its group of automorphisms, meaning the maps from the object to itself that preserve the essential structure of the object. In the case of the Riemann sphere, an automorphism is an invertible conformal map (i.e. biholomorphic map) from the Riemann sphere to itself. It turns out that the only such maps are the Möbius transformations. These are functions of the form f(ζ) = (aζ + b)/(cζ + d), where a, b, c, and d are complex numbers such that ad − bc ≠ 0. Examples of Möbius transformations include dilations, rotations, translations, and complex inversion. In fact, any Möbius transformation can be written as a composition of these. The Möbius transformations are homographies on the complex projective line. In projective coordinates, the transformation f can be written [z : w] ↦ [az + bw : cz + dw]. Thus the Möbius transformations can be described as two-by-two complex matrices with nonzero determinant. Since they act on projective coordinates, two matrices yield the same Möbius transformation if and only if they differ by a nonzero factor. The group of Möbius transformations is the projective linear group PGL(2, ℂ). If one endows the Riemann sphere with the Fubini–Study metric, then not all Möbius transformations are isometries; for example, the dilations and translations are not. The isometries form a proper subgroup of PGL(2, ℂ), namely PSU(2). This subgroup is isomorphic to the rotation group SO(3), which is the group of symmetries of the unit sphere in ℝ³ (which, when restricted to the sphere, become the isometries of the sphere). Applications In complex analysis, a meromorphic function on the complex plane (or on any Riemann surface, for that matter) is a ratio f/g of two holomorphic functions f and g. As a map to the complex numbers, it is undefined wherever g is zero. However, it induces a holomorphic map [f : g] to the complex projective line that is well-defined even where g = 0. This construction is helpful in the study of holomorphic and meromorphic functions. For example, on a compact Riemann surface there are no non-constant holomorphic maps to the complex numbers, but holomorphic maps to the complex projective line are abundant. The Riemann sphere has many uses in physics. In quantum mechanics, points on the complex projective line are natural values for photon polarization states, spin states of massive particles of spin 1/2, and 2-state particles in general (see also Quantum bit and Bloch sphere). The Riemann sphere has been suggested as a relativistic model for the celestial sphere. In string theory, the worldsheets of strings are Riemann surfaces, and the Riemann sphere, being the simplest Riemann surface, plays a significant role. It is also important in twistor theory.
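Because Möbius transformations correspond to 2×2 complex matrices up to a scalar factor, composition can be computed as an ordinary matrix product. The sketch below uses that representation; the names and the INF sentinel are illustrative, and the point at infinity is handled through projective coordinates.

```python
# Möbius transformations as 2x2 complex matrices ((a, b), (c, d)) with ad - bc != 0.
INF = object()  # stands for the point at infinity

def apply_mobius(m, z):
    (a, b), (c, d) = m
    zz, w = (1, 0) if z is INF else (z, 1)          # projective coordinates [z : w]
    num, den = a * zz + b * w, c * zz + d * w
    return INF if den == 0 else num / den

def compose(m1, m2):
    """Matrix product, so apply_mobius(compose(m1, m2), z) == m1(m2(z))."""
    (a, b), (c, d) = m1
    (e, f), (g, h) = m2
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

inversion = ((0, 1), (1, 0))        # z -> 1/z, which swaps 0 and infinity
shift = ((1, 1), (0, 1))            # z -> z + 1
print(apply_mobius(inversion, 0) is INF)            # True
print(apply_mobius(compose(shift, inversion), 2))   # 1/2 + 1 = 1.5
```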
Mathematics
Complex analysis
null
30876867
https://en.wikipedia.org/wiki/Molecular%20cloning
Molecular cloning
Molecular cloning is a set of experimental methods in molecular biology that are used to assemble recombinant DNA molecules and to direct their replication within host organisms. The use of the word cloning refers to the fact that the method involves the replication of one molecule to produce a population of cells with identical DNA molecules. Molecular cloning generally uses DNA sequences from two different organisms: the species that is the source of the DNA to be cloned, and the species that will serve as the living host for replication of the recombinant DNA. Molecular cloning methods are central to many contemporary areas of modern biology and medicine. In a conventional molecular cloning experiment, the DNA to be cloned is obtained from an organism of interest, then treated with enzymes in the test tube to generate smaller DNA fragments. Subsequently, these fragments are then combined with vector DNA to generate recombinant DNA molecules. The recombinant DNA is then introduced into a host organism (typically an easy-to-grow, benign, laboratory strain of E. coli bacteria). This will generate a population of organisms in which recombinant DNA molecules are replicated along with the host DNA. Because they contain foreign DNA fragments, these are transgenic or genetically modified microorganisms (GMOs). This process takes advantage of the fact that a single bacterial cell can be induced to take up and replicate a single recombinant DNA molecule. This single cell can then be expanded exponentially to generate a large number of bacteria, each of which contains copies of the original recombinant molecule. Thus, both the resulting bacterial population, and the recombinant DNA molecule, are commonly referred to as "clones". Strictly speaking, recombinant DNA refers to DNA molecules, while molecular cloning refers to the experimental methods used to assemble them. The idea arose that different DNA sequences could be inserted into a plasmid and that these foreign sequences would be carried into bacteria and digested as part of the plasmid. That is, these plasmids could serve as cloning vectors to carry genes. Virtually any DNA sequence can be cloned and amplified, but there are some factors that might limit the success of the process. Examples of the DNA sequences that are difficult to clone are inverted repeats, origins of replication, centromeres and telomeres. There is also a lower chance of success when inserting large-sized DNA sequences. Inserts larger than 10kbp have very limited success, but bacteriophages such as bacteriophage λ can be modified to successfully insert a sequence up to 40 kbp. History Prior to the 1970s, the understanding of genetics and molecular biology was severely hampered by an inability to isolate and study individual genes from complex organisms. This changed dramatically with the advent of molecular cloning methods. Microbiologists, seeking to understand the molecular mechanisms through which bacteria restricted the growth of bacteriophage, isolated restriction endonucleases, enzymes that could cleave DNA molecules only when specific DNA sequences were encountered. They showed that restriction enzymes cleaved chromosome-length DNA molecules at specific locations, and that specific sections of the larger molecule could be purified by size fractionation. Using a second enzyme, DNA ligase, fragments generated by restriction enzymes could be joined in new combinations, termed recombinant DNA. 
By recombining DNA segments of interest with vector DNA, such as bacteriophage or plasmids, which naturally replicate inside bacteria, large quantities of purified recombinant DNA molecules could be produced in bacterial cultures. The first recombinant DNA molecules were generated and studied in 1972. Overview Molecular cloning takes advantage of the fact that the chemical structure of DNA is fundamentally the same in all living organisms. Therefore, if any segment of DNA from any organism is inserted into a DNA segment containing the molecular sequences required for DNA replication, and the resulting recombinant DNA is introduced into the organism from which the replication sequences were obtained, then the foreign DNA will be replicated along with the host cell's DNA in the transgenic organism. Molecular cloning is similar to PCR in that it permits the replication of DNA sequence. The fundamental difference between the two methods is that molecular cloning involves replication of the DNA in a living microorganism, while PCR replicates DNA in an in vitro solution, free of living cells. In silico cloning and simulations Before actual cloning experiments are performed in the lab, most cloning experiments are planned in a computer, using specialized software. Although the detailed planning of the cloning can be done in any text editor, together with online utilities for e.g. PCR primer design, dedicated software exist for the purpose. Software for the purpose include for example ApE (open source), DNAStrider (open source), Serial Cloner (gratis), Collagene (open source), and SnapGene (commercial). These programs allow to simulate PCR reactions, restriction digests, ligations, etc., that is, all the steps described below. Steps In standard molecular cloning experiments, the cloning of any DNA fragment essentially involves seven steps: (1) Choice of host organism and cloning vector, (2) Preparation of vector DNA, (3) Preparation of DNA to be cloned, (4) Creation of recombinant DNA, (5) Introduction of recombinant DNA into host organism, (6) Selection of organisms containing recombinant DNA, (7) Screening for clones with desired DNA inserts and biological properties. Notably, the growing capacity and fidelity of DNA synthesis platforms allows for increasingly intricate designs in molecular engineering. These projects may include very long strands of novel DNA sequence and/or test entire libraries simultaneously, as opposed to of individual sequences. These shifts introduce complexity that require design to move away from the flat nucleotide-based representation and towards a higher level of abstraction. Examples of such tools are GenoCAD, Teselagen (free for academia) or GeneticConstructor (free for academics). Choice of host organism and cloning vector Although a very large number of host organisms and molecular cloning vectors are in use, the great majority of molecular cloning experiments begin with a laboratory strain of the bacterium E. coli (Escherichia coli) and a plasmid cloning vector. E. coli and plasmid vectors are in common use because they are technically sophisticated, versatile, widely available, and offer rapid growth of recombinant organisms with minimal equipment. If the DNA to be cloned is exceptionally large (hundreds of thousands to millions of base pairs), then a bacterial artificial chromosome or yeast artificial chromosome vector is often chosen. Specialized applications may call for specialized host-vector systems. 
For example, if the experimentalists wish to harvest a particular protein from the recombinant organism, then an expression vector is chosen that contains appropriate signals for transcription and translation in the desired host organism. Alternatively, if replication of the DNA in different species is desired (for example, transfer of DNA from bacteria to plants), then a multiple host range vector (also termed shuttle vector) may be selected. In practice, however, specialized molecular cloning experiments usually begin with cloning into a bacterial plasmid, followed by subcloning into a specialized vector. Whatever combination of host and vector are used, the vector almost always contains four DNA segments that are critically important to its function and experimental utility: DNA replication origin is necessary for the vector (and its linked recombinant sequences) to replicate inside the host organism one or more unique restriction endonuclease recognition sites to serve as sites where foreign DNA may be introduced a selectable genetic marker gene that can be used to enable the survival of cells that have taken up vector sequences a tag gene that can be used to screen for cells containing the foreign DNA Preparation of vector DNA The cloning vector is treated with a restriction endonuclease to cleave the DNA at the site where foreign DNA will be inserted. The restriction enzyme is chosen to generate a configuration at the cleavage site that is compatible with the ends of the foreign DNA (see DNA end). Typically, this is done by cleaving the vector DNA and foreign DNA with the same restriction enzyme or restriction endonuclease, for example EcoRI and this restriction enzyme was isolated from E.coli. Most modern vectors contain a variety of convenient cleavage sites that are unique within the vector molecule (so that the vector can only be cleaved at a single site) and are located within a gene (frequently beta-galactosidase) whose inactivation can be used to distinguish recombinant from non-recombinant organisms at a later step in the process. To improve the ratio of recombinant to non-recombinant organisms, the cleaved vector may be treated with an enzyme (alkaline phosphatase) that dephosphorylates the vector ends. Vector molecules with dephosphorylated ends are unable to replicate, and replication can only be restored if foreign DNA is integrated into the cleavage site. Preparation of DNA to be cloned For cloning of genomic DNA, the DNA to be cloned is extracted from the organism of interest. Virtually any tissue source can be used (even tissues from extinct animals), as long as the DNA is not extensively degraded. The DNA is then purified using simple methods to remove contaminating proteins (extraction with phenol), RNA (ribonuclease) and smaller molecules (precipitation and/or chromatography). Polymerase chain reaction (PCR) methods are often used for amplification of specific DNA or RNA (RT-PCR) sequences prior to molecular cloning. DNA for cloning experiments may also be obtained from RNA using reverse transcriptase (complementary DNA or cDNA cloning), or in the form of synthetic DNA (artificial gene synthesis). cDNA cloning is usually used to obtain clones representative of the mRNA population of the cells of interest, while synthetic DNA is used to obtain any precise sequence defined by the designer. 
Such a designed sequence may be required when moving genes across genetic codes (for example, from the mitochondria to the nucleus) or simply for increasing expression via codon optimization. The purified DNA is then treated with a restriction enzyme to generate fragments with ends capable of being linked to those of the vector. If necessary, short double-stranded segments of DNA (linkers) containing desired restriction sites may be added to create end structures that are compatible with the vector. Creation of recombinant DNA with DNA ligase The creation of recombinant DNA is in many ways the simplest step of the molecular cloning process. DNA prepared from the vector and foreign source are simply mixed together at appropriate concentrations and exposed to an enzyme (DNA ligase) that covalently links the ends together. This joining reaction is often termed ligation. The resulting DNA mixture containing randomly joined ends is then ready for introduction into the host organism. DNA ligase only recognizes and acts on the ends of linear DNA molecules, usually resulting in a complex mixture of DNA molecules with randomly joined ends. The desired products (vector DNA covalently linked to foreign DNA) will be present, but other sequences (e.g. foreign DNA linked to itself, vector DNA linked to itself and higher-order combinations of vector and foreign DNA) are also usually present. This complex mixture is sorted out in subsequent steps of the cloning process, after the DNA mixture is introduced into cells. Introduction of recombinant DNA into host organism The DNA mixture, previously manipulated in vitro, is moved back into a living cell, referred to as the host organism. The methods used to get DNA into cells are varied, and the name applied to this step in the molecular cloning process will often depend upon the experimental method that is chosen (e.g. transformation, transduction, transfection, electroporation). When microorganisms are able to take up and replicate DNA from their local environment, the process is termed transformation, and cells that are in a physiological state such that they can take up DNA are said to be competent. In mammalian cell culture, the analogous process of introducing DNA into cells is commonly termed transfection. Both transformation and transfection usually require preparation of the cells through a special growth regime and chemical treatment process that will vary with the specific species and cell types that are used. Electroporation uses high voltage electrical pulses to translocate DNA across the cell membrane (and cell wall, if present). In contrast, transduction involves the packaging of DNA into virus-derived particles, and using these virus-like particles to introduce the encapsulated DNA into the cell through a process resembling viral infection. Although electroporation and transduction are highly specialized methods, they may be the most efficient methods to move DNA into cells. Selection of organisms containing vector sequences Whichever method is used, the introduction of recombinant DNA into the chosen host organism is usually a low efficiency process; that is, only a small fraction of the cells will actually take up DNA. Experimental scientists deal with this issue through a step of artificial genetic selection, in which cells that have not taken up DNA are selectively killed, and only those cells that can actively replicate DNA containing the selectable marker gene encoded by the vector are able to survive. 
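The digestion and ligation steps described above are exactly what the in silico planning tools mentioned earlier simulate before any bench work. The toy sketch below only models where a single enzyme would cut a single-stranded representation of a hypothetical insert; the sequence, the helper names and the restriction to EcoRI are illustrative simplifications, not a real cloning workflow or the behaviour of any particular software package.

```python
# Toy in silico restriction digest. Only EcoRI is modelled (recognition site
# GAATTC, cut after the first G); real double-stranded digests are staggered.
ECORI_SITE = "GAATTC"
CUT_OFFSET = 1  # cut between G and AATTC

def find_sites(seq, site=ECORI_SITE):
    """Return the 0-based positions of every recognition site in the sequence."""
    return [i for i in range(len(seq) - len(site) + 1) if seq[i:i + len(site)] == site]

def digest(seq, site=ECORI_SITE, offset=CUT_OFFSET):
    """Cut a linear sequence at every site and return the resulting fragments."""
    cuts = [i + offset for i in find_sites(seq, site)]
    bounds = [0] + cuts + [len(seq)]
    return [seq[a:b] for a, b in zip(bounds, bounds[1:])]

insert = "ATGGAATTCCCGGGTTTGAATTCAA"   # hypothetical DNA to be cloned
print(find_sites(insert))             # [3, 17]
print(digest(insert))                 # ['ATGG', 'AATTCCCGGGTTTG', 'AATTCAA']
```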
When bacterial cells are used as host organisms, the selectable marker is usually a gene that confers resistance to an antibiotic that would otherwise kill the cells, typically ampicillin. Cells harboring the plasmid will survive when exposed to the antibiotic, while those that have failed to take up plasmid sequences will die. When mammalian cells (e.g. human or mouse cells) are used, a similar strategy is used, except that the marker gene (in this case typically encoded as part of the kanMX cassette) confers resistance to the antibiotic Geneticin. Screening for clones with desired DNA inserts and biological properties Modern bacterial cloning vectors (e.g. pUC19 and later derivatives including the pGEM vectors) use the blue-white screening system to distinguish colonies (clones) of transgenic cells from those that contain the parental vector (i.e. vector DNA with no recombinant sequence inserted). In these vectors, foreign DNA is inserted into a sequence that encodes an essential part of beta-galactosidase, an enzyme whose activity results in formation of a blue-colored colony on the culture medium that is used for this work. Insertion of the foreign DNA into the beta-galactosidase coding sequence disables the function of the enzyme so that colonies containing transformed DNA remain colorless (white). Therefore, experimentalists are easily able to identify and conduct further studies on transgenic bacterial clones, while ignoring those that do not contain recombinant DNA. The total population of individual clones obtained in a molecular cloning experiment is often termed a DNA library. Libraries may be highly complex (as when cloning complete genomic DNA from an organism) or relatively simple (as when moving a previously cloned DNA fragment into a different plasmid), but it is almost always necessary to examine a number of different clones to be sure that the desired DNA construct is obtained. This may be accomplished through a very wide range of experimental methods, including the use of nucleic acid hybridizations, antibody probes, polymerase chain reaction, restriction fragment analysis and/or DNA sequencing. Applications Molecular cloning provides scientists with an essentially unlimited quantity of any individual DNA segments derived from any genome. This material can be used for a wide range of purposes, including those in both basic and applied biological science. A few of the more important applications are summarized here. Genome organization and gene expression Molecular cloning has led directly to the elucidation of the complete DNA sequence of the genomes of a very large number of species and to an exploration of genetic diversity within individual species, work that has been done mostly by determining the DNA sequence of large numbers of randomly cloned fragments of the genome, and assembling the overlapping sequences. At the level of individual genes, molecular clones are used to generate probes that are used for examining how genes are expressed, and how that expression is related to other processes in biology, including the metabolic environment, extracellular signals, development, learning, senescence and cell death. Cloned genes can also provide tools to examine the biological function and importance of individual genes, by allowing investigators to inactivate the genes, or make more subtle mutations using regional mutagenesis or site-directed mutagenesis. 
Genes cloned into expression vectors for functional cloning provide a means to screen for genes on the basis of the expressed protein's function. Production of recombinant proteins Obtaining the molecular clone of a gene can lead to the development of organisms that produce the protein product of the cloned genes, termed a recombinant protein. In practice, it is frequently more difficult to develop an organism that produces an active form of the recombinant protein in desirable quantities than it is to clone the gene. This is because the molecular signals for gene expression are complex and variable, and because protein folding, stability and transport can be very challenging. Many useful proteins are currently available as recombinant products. These include--(1) medically useful proteins whose administration can correct a defective or poorly expressed gene (e.g. recombinant factor VIII, a blood-clotting factor deficient in some forms of hemophilia, and recombinant insulin, used to treat some forms of diabetes), (2) proteins that can be administered to assist in a life-threatening emergency (e.g. tissue plasminogen activator, used to treat strokes), (3) recombinant subunit vaccines, in which a purified protein can be used to immunize patients against infectious diseases, without exposing them to the infectious agent itself (e.g. hepatitis B vaccine), and (4) recombinant proteins as standard material for diagnostic laboratory tests. Transgenic organisms Once characterized and manipulated to provide signals for appropriate expression, cloned genes may be inserted into organisms, generating transgenic organisms, also termed genetically modified organisms (GMOs). Although most GMOs are generated for purposes of basic biological research (see for example, transgenic mouse), a number of GMOs have been developed for commercial use, ranging from animals and plants that produce pharmaceuticals or other compounds (pharming), herbicide-resistant crop plants, and fluorescent tropical fish (GloFish) for home entertainment. Gene therapy Gene therapy involves supplying a functional gene to cells lacking that function, with the aim of correcting a genetic disorder or acquired disease. Gene therapy can be broadly divided into two categories. The first is alteration of germ cells, that is, sperm or eggs, which results in a permanent genetic change for the whole organism and subsequent generations. This "germ line gene therapy" is considered by many to be unethical in human beings. The second type of gene therapy, "somatic cell gene therapy", is analogous to an organ transplant. In this case, one or more specific tissues are targeted by direct treatment or by removal of the tissue, addition of the therapeutic gene or genes in the laboratory, and return of the treated cells to the patient. Clinical trials of somatic cell gene therapy began in the late 1990s, mostly for the treatment of cancers and blood, liver, and lung disorders. Despite a great deal of publicity and promises, the history of human gene therapy has been characterized by relatively limited success. The effect of introducing a gene into cells often promotes only partial and/or transient relief from the symptoms of the disease being treated. Some gene therapy trial patients have suffered adverse consequences of the treatment itself, including deaths. In some cases, the adverse effects result from disruption of essential genes within the patient's genome by insertional inactivation. 
In others, viral vectors used for gene therapy have been contaminated with infectious virus. Nevertheless, gene therapy is still held to be a promising future area of medicine, and is an area where there is a significant level of research and development activity.
Technology
Biotechnology
null
46326758
https://en.wikipedia.org/wiki/Satellite%20system%20%28astronomy%29
Satellite system (astronomy)
A satellite system is a set of gravitationally bound objects in orbit around a planetary mass object (incl. sub-brown dwarfs and rogue planets) or minor planet, or its barycenter. Generally speaking, it is a set of natural satellites (moons), although such systems may also consist of bodies such as circumplanetary disks, ring systems, moonlets, minor-planet moons and artificial satellites any of which may themselves have satellite systems of their own (see Subsatellites). Some bodies also possess quasi-satellites that have orbits gravitationally influenced by their primary, but are generally not considered to be part of a satellite system. Satellite systems can have complex interactions including magnetic, tidal, atmospheric and orbital interactions such as orbital resonances and libration. Individually major satellite objects are designated in Roman numerals. Satellite systems are referred to either by the possessive adjectives of their primary (e.g. "Jovian system"), or less commonly by the name of their primary (e.g. "Jupiter system"). Where only one satellite is known, or it is a binary with a common centre of gravity, it may be referred to using the hyphenated names of the primary and major satellite (e.g. the "Earth-Moon system"). Many Solar System objects are known to possess satellite systems, though their origin is still unclear. Notable examples include the Jovian system, with 95 known moons (including the large Galilean moons) and the largest overall, the Saturnian System, with 146 known moons (including Titan and the most visible rings in the Solar System alongside). Both satellite systems are large and diverse, in fact, all of the giant planets of the Solar System possess large satellite systems as well as planetary rings, and it is inferred that this is a general pattern. Several objects farther from the Sun also have satellite systems consisting of multiple moons, including the complex Plutonian system where multiple objects orbit a common center of mass, as well as many asteroids and plutinos. Apart from the Earth-Moon system and Mars' system of two tiny natural satellites, the other terrestrial planets are generally not considered satellite systems, although some have been orbited by artificial satellites originating from Earth. Little is known of satellite systems beyond the Solar System, although it is inferred that natural satellites are common. Possible signs of exomoons have been detected around exoplanets such as Kepler-1625b. It is also theorised that rogue planets ejected from their planetary system could retain a system of satellites. Natural formation and evolution Satellite systems, like planetary systems, are the product of gravitational attraction, but are also sustained through fictitious forces. While the general consensus is that most planetary systems are formed from an accretionary disks, the formation of satellite systems is less clear. The origin of many moons are investigated on a case-by-case basis, and the larger systems are thought to have formed through a combination of one or more processes. System stability The Hill sphere is the region in which an astronomical body dominates the attraction of satellites. Of the Solar System planets, Neptune and Uranus have the largest Hill spheres, due to the lessened gravitational influence of the Sun at their far orbits, however all of the giant planets have Hill spheres in the vicinity of 100 million kilometres in radius. 
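The Hill radius behind these figures is, to first order, r_H ≈ a(1 − e)·(m/3M)^(1/3), where a and e are the planet's semi-major axis and eccentricity and m/M its mass relative to the Sun. A rough check with approximate published values (the numbers below are illustrative, not authoritative):

```python
# Rough check of the Hill-sphere sizes discussed above, using
# r_H ≈ a(1 - e) * (m / (3 M))**(1/3). Orbital elements and mass ratios
# are approximate values included only for illustration.
bodies = {
    # name: (semi-major axis in km, eccentricity, planet/Sun mass ratio)
    "Neptune": (4.50e9, 0.009, 5.15e-5),
    "Uranus":  (2.87e9, 0.047, 4.37e-5),
    "Mercury": (5.79e7, 0.206, 1.66e-7),
}

def hill_radius_km(a_km, e, mass_ratio):
    return a_km * (1 - e) * (mass_ratio / 3) ** (1 / 3)

for name, (a, e, q) in bodies.items():
    print(f"{name}: ~{hill_radius_km(a, e, q):,.0f} km")
# Neptune comes out near 1.1e8 km and Uranus near 7e7 km, while Mercury's
# Hill sphere spans only a couple of hundred thousand km, matching the
# contrast drawn in the text.
```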
By contrast, the Hill spheres of Mercury and Ceres, being closer to the Sun, are quite small. Outside of the Hill sphere, the Sun dominates the gravitational influence, with the exception of the Lagrangian points. Satellites are stable at the L4 and L5 Lagrangian points. These lie at the third corners of the two equilateral triangles in the plane of orbit whose common base is the line between the centers of the two masses, such that the point lies behind (L5) or ahead (L4) of the smaller mass with regard to its orbit around the larger mass. The triangular points (L4 and L5) are stable equilibria, provided that the ratio M1/M2 is greater than about 24.96. When a body at these points is perturbed, it moves away from the point, but the factor opposite of that which is increased or decreased by the perturbation (either gravity or angular momentum-induced speed) will also increase or decrease, bending the object's path into a stable, kidney-bean-shaped orbit around the point (as seen in the corotating frame of reference). It is generally thought that natural satellites should orbit in the same direction as the planet is rotating (known as prograde orbit). As such, the terminology regular moon is used for these orbits. However, a retrograde orbit (the opposite direction to the planet's rotation) is also possible; the terminology irregular moon is used to describe known exceptions to the rule, and it is believed that irregular moons have been inserted into orbit through gravitational capture. Accretion theories Accretion disks around giant planets may occur in a similar way to the occurrence of disks around stars, out of which planets form (for example, this is one of the theories for the formation of the satellite systems of Uranus, Saturn, and Jupiter). This early cloud of gas is a type of circumplanetary disk known as a proto-satellite disk (in the case of the Earth-Moon system, the proto-lunar disk). Models of gas during the formation of planets coincide with a general rule for planet-to-satellite(s) mass ratio of 10,000:1 (a notable exception is Neptune). Accretion is also proposed by some as a theory for the origin of the Earth-Moon system, however the angular momentum of the system and the Moon's smaller iron core cannot easily be explained by this. Debris disks Another proposed mechanism for satellite system formation is accretion from debris. Some scientists theorise that the Galilean moons are a more recent generation of moons formed from the disintegration of earlier generations of accreted moons. Ring systems are a type of circumplanetary disk that can be the result of satellites disintegrating near the Roche limit. Such disks could, over time, coalesce to form natural satellites. Collision theories Collision is one of the leading theories for the formation of satellite systems, particularly those of the Earth and Pluto. Objects in such a system may be part of a collisional family, and this origin may be verified by comparing their orbital elements and composition. Computer simulations have been used to demonstrate that giant impacts could have been the origin of the Moon. It is thought that early Earth had multiple moons resulting from the giant impact. Similar models have been used to explain the creation of the Plutonian system as well as those of other Kuiper belt objects and asteroids. This is also a prevailing theory for the origin of the moons of Mars. Both sets of findings support an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit. 
Collision is also used to explain peculiarities in the Uranian system. Models developed in 2018 explain the planet's unusual spin support an oblique collision with an object twice the size of Earth which likely to have re-coalesced to form the system's icy moons. Gravitational capture theories Some theories suggest that gravitational capture is the origin of Neptune's major moon Triton, the moons of Mars, and Saturn's moon Phoebe. Some scientists have put forward extended atmospheres around young planets as a mechanism for slowing the movement of a passing objects to aid in capture. The hypothesis has been put forward to explain the irregular satellite orbits of Jupiter and Saturn, for example. A tell-tale sign of capture is a retrograde orbit, which can result from an object approaching the side of the planet which it is rotating towards. Capture has even been proposed as the origin of Earth's Moon. In the case of the latter, however, virtually identical isotope ratios found in samples of the Earth and Moon cannot be explained easily by this theory. Temporary capture Evidence for the natural process of satellite capture has been found in direct observation of objects captured by Jupiter. Five such captures have been observed, the longest being for approximately twelve years. Based on computer modelling, the future capture of comet 111P/Helin-Roman-Crockett for 18 years is predicted to begin in 2068. However temporary captured orbits have highly irregular and unstable, the theorised processes behind stable capture may be exceptionally rare. Features and interactions Natural satellite systems, particularly those involving multiple planetary mass objects can have complex interactions which can have effects on multiple bodies or across the wider system. Ring systems Ring systems are collections of dust, moonlets, or other small objects. The most notable examples are those around Saturn, but the other three gas giants (Jupiter, Uranus and Neptune) also have ring systems. Other objects have also been found to possess rings. Haumea was the first dwarf planet and trans-Neptunian object found to possess a ring system. Centaur 10199 Chariklo, with a diameter of about , is the smallest object with rings ever discovered consisting of two narrow and dense bands, 6–7 km (4 mi) and 2–4 km (2 mi) wide, separated by a gap of . The Saturnian moon Rhea may have a tenuous ring system consisting of three narrow, relatively dense bands within a particulate disk, the first predicted around a moon. Most rings were thought to be unstable and to dissipate over the course of tens or hundreds of millions of years. Studies of Saturn's rings however indicate that they may date to the early days of the Solar System. Current theories suggest that some ring systems may form in repeating cycles, accreting into natural satellites that break up as soon as they reach the Roche limit. This theory has been used to explain the longevity of Saturn's rings as well the moons of Mars. Gravitational interactions Orbital configurations Cassini's laws describe the motion of satellites within a system with their precessions defined by the Laplace plane. Most satellite systems are found orbiting the ecliptic plane of the primary. An exception is Earth's moon, which orbits in to the planet's equatorial plane. When orbiting bodies exert a regular, periodic gravitational influence on each other is known as orbital resonance. 
Orbital resonances are present in several satellite systems: 2:4 Tethys–Mimas (Saturn's moons) 1:2 Dione–Enceladus (Saturn's moons) 3:4 Hyperion–Titan (Saturn's moons) 1:2:4 Ganymede–Europa–Io (Jupiter's moons) 1:3:4:5:6 near resonances - Styx, Nix, Kerberos, and Hydra (Pluto's moons) (Styx approximately 5.4% from resonance, Nix approximately 2.7%, Kerberos approximately 0.6%, and Hydra approximately 0.3%). Other possible orbital interactions include libration and co-orbital configuration. The Saturnian moons Janus and Epimetheus share their orbits, the difference in semi-major axes being less than either's mean diameter. Libration is a perceived oscillating motion of orbiting bodies relative to each other. The Earth-moon satellite system is known to produce this effect. Several systems are known to orbit a common centre of mass and are known as binary companions. The most notable system is the Plutonian system, which is also dwarf planet binary. Several minor planets also share this configuration, including "true binaries" with near equal mass, such as 90 Antiope and (66063) 1998 RO1. Some orbital interactions and binary configurations have been found to cause smaller moons to take non-spherical forms and "tumble" chaotically rather than rotate, as in the case of Nix, Hydra (moons of Pluto) and Hyperion (moon of Saturn). Tidal interaction Tidal energy including tidal acceleration can have effects on both the primary and satellites. The Moon's tidal forces deform the Earth and hydrosphere, similarly heat generated from tidal friction on the moons of other planets is found to be responsible for their geologically active features. Another extreme example of physical deformity is the massive equatorial ridge of the near-Earth asteroid 66391 Moshup created by the tidal forces of its moon, such deformities may be common among near-Earth asteroids. Tidal interactions also cause stable orbits to change over time. For instance, Triton's orbit around Neptune is decaying and 3.6 billion years from now, it is predicted that this will cause Triton to pass within Neptune's Roche limit resulting in either a collision with Neptune's atmosphere or the breakup of Triton, forming a large ring similar to that found around Saturn. A similar process is drawing Phobos closer to Mars, and it is predicted that in 50 million years it will either collide with the planet or break up into a planetary ring. Tidal acceleration, on the other hand, gradually moves the Moon away from Earth, such that it may eventually be released from its gravitational bounding and exit the system. Perturbation and instability While tidal forces from the primary are common on satellites, most satellite systems remain stable. Perturbation between satellites can occur, particularly in the early formation, as the gravity of satellites affect each other, and can result in ejection from the system or collisions between satellites or with the primary. Simulations show that such interactions cause the orbits of the inner moons of the Uranus system to be chaotic and possibly unstable. Some of Io's active can be explained by perturbation from Europa's gravity as their orbits resonate. Perturbation has been suggested as a reason that Neptune does not follow the 10,000:1 ratio of mass between the parent planet and collective moons as seen in all other known giant planets. 
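The 1:2:4 chain listed above for Io, Europa and Ganymede can be checked directly from their orbital periods. A quick sketch using approximate sidereal periods (values rounded for illustration):

```python
# Verify the near 1:2:4 resonance among the inner Galilean moons using
# approximate sidereal orbital periods in days.
periods = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155}

base = periods["Io"]
for name, p in periods.items():
    print(f"{name}: period ratio to Io ≈ {p / base:.3f}")
# Io 1.000, Europa ≈ 2.007, Ganymede ≈ 4.045: close to the 1:2:4 chain above.
```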
One theory of the Earth-Moon system suggest that a second companion which formed at the same time as the Moon, was perturbed by the Moon early in the system's history, causing it to impact with the Moon. Atmospheric and magnetic interaction Some satellite systems have been known to have gas interactions between objects. Notable examples include the Jupiter, Saturn and Pluto systems. The Io plasma torus is a transfer of oxygen and sulfur from the tenuous atmosphere of Jupiter's volcanic moon, Io and other objects including Jupiter and Europa. A torus of oxygen and hydrogen produced by Saturn's moon, Enceladus forms part of the E ring around Saturn. Nitrogen gas transfer between Pluto and Charon has also been modelled and is expected to be observable by the New Horizons space probe. Similar tori produced by Saturn's moon Titan (nitrogen) and Neptune's moon Triton (hydrogen) is predicted. Complex magnetic interactions have been observed in satellite systems. Most notably, the interaction of Jupiter's strong magnetic field with those of Ganymede and Io. Observations suggest that such interactions can cause the stripping of atmospheres from moons and the generation of spectacular auroras. History The notion of satellite systems pre-dates history. The Moon was known by the earliest humans. The earliest models of astronomy were based around celestial bodies (or a "celestial sphere") orbiting the Earth. This idea was known as geocentrism (where the Earth is the centre of the universe). However the geocentric model did not generally accommodate the possibility of celestial objects orbiting other observed planets, such as Venus or Mars. Seleucus of Seleucia (b. 190 BCE) made observations which may have included the phenomenon of tides, which he supposedly theorized to be caused by the attraction to the Moon and by the revolution of the Earth around an Earth-Moon 'center of mass'. As heliocentrism (the doctrine that the Sun is the centre of the universe) began to gain in popularity in the 16th century, the focus shifted to planets and the idea of systems of planetary satellites fell out of general favour. Nevertheless, in some of these models, the Sun and Moon would have been satellites of the Earth. Nicholas Copernicus published a model in which the Moon orbited around the Earth in the Dē revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), in the year of his death, 1543. It was not until the discovery of the Galilean moons in either 1609 or 1610 by Galileo, that the first definitive proof was found for celestial bodies orbiting planets. The first suggestion of a ring system was in 1655, when Christiaan Huygens thought that Saturn was surrounded by rings. The first probe to explore a satellite system other than Earth was Mariner 7 in 1969, which observed Phobos. The twin probes Voyager 1 and Voyager 2 were the first to explore the Jovian system in 1979. Zones and habitability Based on tidal heating models, scientists have defined zones in satellite systems similarly to those of planetary systems. One such zone is the circumplanetary habitable zone (or "habitable edge"). According to this theory, moons closer to their planet than the habitable edge cannot support liquid water at their surface. 
When effects of eclipses as well as constraints from a satellite's orbital stability are included into this concept, one finds that — depending on a moon's orbital eccentricity — there is a minimum mass of roughly 0.2 solar masses for stars to host habitable moons within the stellar HZ. The magnetic environment of exomoons, which is critically triggered by the intrinsic magnetic field of the host planet, has been identified as another effect on exomoon habitability. Most notably, it was found that moons at distances between about 5 and 20 planetary radii from a giant planet can be habitable from an illumination and tidal heating point of view, but still the planetary magnetosphere would critically influence their habitability.
Physical sciences
Planetary science
Astronomy
26233012
https://en.wikipedia.org/wiki/List%20of%20the%20most%20distant%20astronomical%20objects
List of the most distant astronomical objects
This article documents the most distant astronomical objects discovered and verified so far, and the time periods in which they were so classified. For comparisons with the light travel distance of the astronomical objects listed below, the age of the universe since the Big Bang is currently estimated as 13.787±0.020 Gyr. Distances to remote objects, other than those in nearby galaxies, are nearly always inferred by measuring the cosmological redshift of their light. By their nature, very distant objects tend to be very faint, and these distance determinations are difficult and subject to errors. An important distinction is whether the distance is determined via spectroscopy or using a photometric redshift technique. The former is generally both more precise and also more reliable, in the sense that photometric redshifts are more prone to being wrong due to confusion with lower redshift sources that may have unusual spectra. For that reason, a spectroscopic redshift is conventionally regarded as being necessary for an object's distance to be considered definitely known, whereas photometrically determined redshifts identify "candidate" very distant sources. Here, this distinction is indicated by a "p" subscript for photometric redshifts. The proper distance provides a measurement of how far a galaxy is at a fixed moment in time. At the present time the proper distance equals the comoving distance, since the cosmological scale factor has value one: a(t₀) = 1. The proper distance represents the distance obtained as if one were able to freeze the flow of time (set dt = 0 in the FLRW metric) and walk all the way to a galaxy while using a meter stick. For practical reasons, the proper distance is calculated as the distance traveled by light (set ds = 0 in the FLRW metric) from the time of emission by a galaxy to the time an observer (on Earth) receives the light signal. It differs from the "light travel distance" since the proper distance takes into account the expansion of the universe, i.e. the space expands as the light travels through it, resulting in numerical values which locate the most distant galaxies beyond the Hubble sphere and therefore with recession velocities greater than the speed of light c. Most distant spectroscopically-confirmed objects Candidate most distant objects Since the beginning of the James Webb Space Telescope's (JWST) science operations in June 2022, numerous distant galaxies far beyond what could be seen by the Hubble Space Telescope (z = 11) have been discovered thanks to the JWST's capability of seeing far into the infrared. Previously in 2012, there were about 50 possible objects at z = 8 or farther, and another 100 candidates at z = 7, based on photometric redshift estimates released by the Hubble eXtreme Deep Field (XDF) project from observations made between mid-2002 and December 2012. Some objects included here have been observed spectroscopically, but had only one emission line tentatively detected, and are therefore still considered candidates by researchers. List of most distant objects by type Timeline of most distant astronomical object recordholders Objects in this list were found to be the most distant object at the time of determination of their distance. This is frequently not the same as the date of their discovery. Distances to astronomical objects may be determined through parallax measurements, use of standard references such as Cepheid variables or Type Ia supernovas, or redshift measurement. 
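Converting a measured redshift into the comoving (proper) distance and the light travel time discussed above requires integrating the FLRW relations for an assumed cosmology. The sketch below uses a flat Lambda-CDM model with illustrative parameter values (H0 ≈ 67.7 km/s/Mpc, Ωm ≈ 0.31); published catalogues use their own fitted parameters, so the printed numbers are only indicative.

```python
from math import sqrt

C_KM_S, H0, OM = 299792.458, 67.7, 0.31   # illustrative flat Lambda-CDM parameters

def E(z):
    """Dimensionless Hubble parameter H(z)/H0 for a flat universe."""
    return sqrt(OM * (1 + z) ** 3 + (1 - OM))

def comoving_distance_mpc(z, steps=20000):
    # D_C = (c/H0) * integral_0^z dz'/E(z'), by a simple midpoint rule
    dz = z / steps
    total = sum(1.0 / E((i + 0.5) * dz) for i in range(steps)) * dz
    return C_KM_S / H0 * total

def lookback_time_gyr(z, steps=20000):
    # t_lb = (1/H0) * integral_0^z dz' / ((1 + z') E(z')); 1/H0 in Gyr is about 977.8/H0
    dz = z / steps
    total = sum(1.0 / ((1 + (i + 0.5) * dz) * E((i + 0.5) * dz)) for i in range(steps)) * dz
    return 977.8 / H0 * total

z = 10.0
print(f"comoving distance ≈ {comoving_distance_mpc(z) / 1000:.1f} Gpc")   # roughly 9.6 Gpc
print(f"light travel time ≈ {lookback_time_gyr(z):.1f} Gyr")              # roughly 13.3 Gyr
```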
Spectroscopic redshift measurement is preferred, while photometric redshift measurement is also used to identify candidate high-redshift sources. The symbol z represents redshift. List of objects by year of discovery that turned out to be most distant This list orders the most distant objects by the year the object was discovered, not by the year its distance was determined. Objects may have been discovered without a distance determination and only later found to be the most distant known at that time. However, the object must have been named or described. An object like OJ 287 is excluded even though it was detected as early as 1891 on photographic plates, because it went unrecognized until the advent of radio telescopes.
Physical sciences
Physical cosmology
Astronomy
43201040
https://en.wikipedia.org/wiki/Audio%20coding%20format
Audio coding format
An audio coding format (or sometimes audio compression format) is a content representation format for storage or transmission of digital audio (such as in digital television, digital radio and in audio and video files). Examples of audio coding formats include MP3, AAC, Vorbis, FLAC, and Opus. A specific software or hardware implementation capable of audio compression and decompression to/from a specific audio coding format is called an audio codec; an example of an audio codec is LAME, which is one of several different codecs which implements encoding and decoding audio in the MP3 audio coding format in software. Some audio coding formats are documented by a detailed technical specification document known as an audio coding specification. Some such specifications are written and approved by standardization organizations as technical standards, and are thus known as an audio coding standard. The term "standard" is also sometimes used for de facto standards as well as formal standards. Audio content encoded in a particular audio coding format is normally encapsulated within a container format. As such, the user normally doesn't have a raw AAC file, but instead has a .m4a audio file, which is a MPEG-4 Part 14 container containing AAC-encoded audio. The container also contains metadata such as title and other tags, and perhaps an index for fast seeking. A notable exception is MP3 files, which are raw audio coding without a container format. De facto standards for adding metadata tags such as title and artist to MP3s, such as ID3, are hacks which work by appending the tags to the MP3, and then relying on the MP3 player to recognize the chunk as malformed audio coding and therefore skip it. In video files with audio, the encoded audio content is bundled with video (in a video coding format) inside a multimedia container format. An audio coding format does not dictate all algorithms used by a codec implementing the format. An important part of how lossy audio compression works is by removing data in ways humans can't hear, according to a psychoacoustic model; the implementer of an encoder has some freedom of choice in which data to remove (according to their psychoacoustic model). Lossless, lossy, and uncompressed audio coding formats A lossless audio coding format reduces the total data needed to represent a sound but can be de-coded to its original, uncompressed form. A lossy audio coding format additionally reduces the bit resolution of the sound on top of compression, which results in far less data at the cost of irretrievably lost information. Transmitted (streamed) audio is most often compressed using lossy audio codecs as the smaller size is far more convenient for distribution. The most widely used audio coding formats are MP3 and Advanced Audio Coding (AAC), both of which are lossy formats based on modified discrete cosine transform (MDCT) and perceptual coding algorithms. Lossless audio coding formats such as FLAC and Apple Lossless are sometimes available, though at the cost of larger files. Uncompressed audio formats, such as pulse-code modulation (PCM, or .wav), are also sometimes used. PCM was the standard format for Compact Disc Digital Audio (CDDA). History In 1950, Bell Labs filed the patent on differential pulse-code modulation (DPCM). Adaptive DPCM (ADPCM) was introduced by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973. Perceptual coding was first used for speech coding compression, with linear predictive coding (LPC). 
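The differential idea behind DPCM, transmitting a quantized prediction error rather than the raw samples, can be illustrated in a few lines. This is a toy sketch with an arbitrary fixed step size and a trivial "previous sample" predictor, not the adaptive scheme used by ADPCM or any real codec.

```python
# Toy DPCM: encode quantized differences from the previous reconstructed sample.
def dpcm_encode(samples, step=4):
    encoded, prediction = [], 0
    for s in samples:
        q = round((s - prediction) / step)   # quantized prediction error
        encoded.append(q)
        prediction += q * step               # track what the decoder will reconstruct
    return encoded

def dpcm_decode(encoded, step=4):
    decoded, prediction = [], 0
    for q in encoded:
        prediction += q * step
        decoded.append(prediction)
    return decoded

signal = [0, 3, 9, 14, 18, 20, 19, 15]
codes = dpcm_encode(signal)
print(codes)                 # small integers, cheaper to store than the raw samples
print(dpcm_decode(codes))    # close to, but not exactly, the original (lossy)
```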
Initial concepts for LPC date back to the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966. During the 1970s, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs developed a form of LPC called adaptive predictive coding (APC), a perceptual coding algorithm that exploited the masking properties of the human ear, followed in the early 1980s with the code-excited linear prediction (CELP) algorithm which achieved a significant compression ratio for its time. Perceptual coding is used by modern audio compression formats such as MP3 and AAC. Discrete cosine transform (DCT), developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, provided the basis for the modified discrete cosine transform (MDCT) used by modern audio compression formats such as MP3 and AAC. MDCT was proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987, following earlier work by Princen and Bradley in 1986. The MDCT is used by modern audio compression formats such as Dolby Digital, MP3, and Advanced Audio Coding (AAC). List of lossy formats General Speech Linear predictive coding (LPC) Adaptive predictive coding (APC) Code-excited linear prediction (CELP) Algebraic code-excited linear prediction (ACELP) Relaxed code-excited linear prediction (RCELP) Low-delay CELP (LD-CELP) Adaptive Multi-Rate (used in GSM and 3GPP) Codec 2 (noted for its lack of patent restrictions) Speex (noted for its lack of patent restrictions) Modified discrete cosine transform (MDCT) AAC-LD Constrained Energy Lapped Transform (CELT) Opus (mostly for real-time applications) List of lossless formats Apple Lossless (ALAC – Apple Lossless Audio Codec) Adaptive Transform Acoustic Coding (ATRAC) Audio Lossless Coding (also known as MPEG-4 ALS) Direct Stream Transfer (DST) Dolby TrueHD DTS-HD Master Audio Free Lossless Audio Codec (FLAC) Lossless discrete cosine transform (LDCT) Meridian Lossless Packing (MLP) Monkey's Audio (Monkey's Audio APE) MPEG-4 SLS (also known as HD-AAC) OptimFROG Original Sound Quality (OSQ) RealPlayer (RealAudio Lossless) Shorten (SHN) TTA (True Audio Lossless) WavPack (WavPack lossless) WMA Lossless (Windows Media Lossless)
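Because the MDCT mentioned above is defined by a single formula, a direct (unoptimized) version is easy to write down. The sketch below assumes the textbook definition, producing N coefficients from a block of 2N overlapping input samples; it is meant only to show the structure of the transform, not how MP3 or AAC encoders actually organize their windowed filter banks.

```python
# Direct-from-definition MDCT: 2N input samples -> N coefficients.
# O(N^2) illustrative sketch; real codecs use fast, windowed implementations.
import math

def mdct(x):
    two_n = len(x)          # block of 2N samples (overlaps its neighbours by N)
    n = two_n // 2
    return [
        sum(
            x[i] * math.cos(math.pi / n * (i + 0.5 + n / 2) * (k + 0.5))
            for i in range(two_n)
        )
        for k in range(n)
    ]

block = [0.0, 0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25]   # 2N = 8 samples
print(mdct(block))                                      # N = 4 coefficients
```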
Technology
File formats
null
44934944
https://en.wikipedia.org/wiki/Rampart%20%28fortification%29
Rampart (fortification)
In fortification architecture, a rampart is a length of embankment or wall forming part of the defensive boundary of a castle, hillfort, settlement or other fortified site. It is usually broad-topped and made of excavated earth and/or masonry. Types The composition and design of ramparts varied from the simple mounds of earth and stone, known as dump ramparts, to more complex earth and timber defences (box ramparts and timberlaced ramparts), as well as ramparts with stone revetments. One particular type, common in Central Europe, used earth, stone and timber posts to form a Pfostenschlitzmauer or "post-slot wall". Vitrified ramparts were composed of stone that was subsequently fired, possibly to increase its strength. Early fortifications Many types of early fortification, from prehistory through to the Early Middle Ages, employed earth ramparts usually in combination with external ditches to defend the outer perimeter of a fortified site or settlement. Hillforts, ringforts or "raths" and ringworks all made use of ditch and rampart defences, and they are the characteristic feature of circular ramparts. The ramparts could be reinforced and raised in height by the use of palisades. This type of arrangement was a feature of the motte and bailey castle of northern Europe in the early medieval period. Classical fortifications During the classical era, societies became sophisticated enough to create tall ramparts of stone or brick, provided with a platform or wall walk for the defenders to hurl missiles from and a parapet to protect them from the missiles thrown by attackers. Well known examples of classical stone ramparts include Hadrian's Wall and the Walls of Constantinople. Medieval fortifications After the fall of the Western Roman Empire, there was a return to the widespread use of earthwork ramparts which lasted well into the 11th century; an example is the Norman motte and bailey castle. As castle technology evolved during the Middle Ages and Early Modern times, ramparts continued to form part of the defences, but now they tended to consist of thick walls with crenellated parapets. Fieldworks, however, continued to make use of earth ramparts due to their relatively temporary nature. Elements of a rampart in a stone castle or town wall from the 11th to 15th centuries included: Parapet: a low wall on top of the rampart to shelter the defenders. Crenellation: rectangular gaps or indentations at intervals in the parapet, the gaps being called embrasures or crenels, and the intervening high parts being called merlons. Loophole or arrowslit: a narrow opening in a parapet or in the main body of the rampart, allowing defenders to shoot out without exposing themselves to the enemy. Chemin de ronde or wallwalk: a pathway along the top of the rampart but behind the parapet, which served as a fighting platform and a means of communication with other parts of the fortification. Machicolation: an overhanging projection supported by corbels, the floor of which was pierced with openings so that missiles and hot liquids could be thrown down on attackers. Brattice: a timber gallery built on top of the rampart and projecting forward from the parapet, to give the defenders a better field of fire. Artillery fortifications In response to the introduction of artillery, castle ramparts began to be built with much thicker walling and a lower profile, one of the earliest examples being Ravenscraig Castle in Scotland, built in 1460.
In the first half of the 16th century, the solid masonry walls began to be replaced by earthen banks, sometimes faced with stone, which were better able to withstand the impact of shot; the earth being obtained from the ditch which was dug in front of the rampart. At the same time, the plan or "trace" of these ramparts began to be formed into angular projections called bastions which allowed the guns mounted on them to create zones of interlocking fire. This bastion system became known as the trace italienne because Italian engineers had been at the forefront of its development, although it was later perfected in northern Europe by engineers such as Van Coehoorn and Vauban and was the dominant style of fortification until the mid-19th century. Elements of a rampart in an artillery fortification from the 16th to 19th centuries included: Exterior slope: the front face of the rampart, often faced with stone or brick. Interior slope: the back of the rampart on the inside of the fortification; sometimes retained with a masonry wall but usually a grassy slope. Parapet (or breastwork) which protected and concealed the defending soldiers. Banquette: a continuous step built onto the interior of the parapet, enabling the defenders to shoot over the top with small arms. Barbette: a raised platform for one or more guns enabling them to fire over the parapet. Embrasure: an opening in the parapet for guns to fire through. Terreplein: the top surface or "fighting platform" of the rampart, behind the parapet. Traverse: an earthen embankment, the same height as the parapet, built across the terreplein to prevent it being swept by enfilade fire. Casemate: a vaulted chamber built inside the rampart for protected accommodation or storage, but sometimes pierced by an embrasure at the front for a gun to fire through. Bartizan (also guérite or echauguette): a small turret projecting from the parapet, intended to give a good view to a sentry while remaining protected. Archaeological significance As well as the immediate archaeological significance of such ramparts in indicating the development of military tactics and technology, these sites often enclose areas of historical significance that point to the local conditions at the time the fortress was built.
Technology
Fortification
null
30889569
https://en.wikipedia.org/wiki/Digital%20signal
Digital signal
A digital signal is a signal that represents data as a sequence of discrete values; at any given time it can only take on, at most, one of a finite number of values. This contrasts with an analog signal, which represents continuous values; at any given time it represents a real number within a continuous range of values. Simple digital signals represent information in discrete bands of analog levels. All levels within a band of values represent the same information state. In most digital circuits, the signal can have two possible valid values; this is called a binary signal or logic signal. They are represented by two voltage bands: one near a reference value (typically termed as ground or zero volts), and the other a value near the supply voltage. These correspond to the two values zero and one (or false and true) of the Boolean domain, so at any given time a binary signal represents one binary digit (bit). Because of this discretization, relatively small changes to the analog signal levels do not leave the discrete envelope, and as a result are ignored by signal state sensing circuitry. As a result, digital signals have noise immunity; electronic noise, provided it is not too great, will not affect digital circuits, whereas noise always degrades the operation of analog signals to some degree. Digital signals having more than two states are occasionally used; circuitry using such signals is called multivalued logic. For example, signals that can assume three possible states are processed using three-valued logic. In a digital signal, the physical quantity representing the information may be a variable electric current or voltage, the intensity, phase or polarization of an optical or other electromagnetic field, acoustic pressure, the magnetization of a magnetic storage medium, etc. Digital signals are used in all digital electronics, notably computing equipment and data transmission. Definitions The term digital signal has related definitions in different contexts. In digital electronics In digital electronics, a digital signal is a pulse amplitude modulated signal, i.e. a sequence of fixed-width electrical pulses or light pulses, each occupying one of a discrete number of levels of amplitude. A special case is a logic signal or a binary signal, which varies between a low and a high signal level. The pulse trains in digital circuits are typically generated by metal–oxide–semiconductor field-effect transistor (MOSFET) devices, due to their rapid on–off electronic switching speed and large-scale integration (LSI) capability. In contrast, bipolar junction transistors (BJTs) switch more slowly, generating analog signals resembling sine waves. In signal processing In digital signal processing, a digital signal is a representation of a physical signal that is sampled and quantized. A digital signal is an abstraction that is discrete in time and amplitude. The signal's value only exists at regular time intervals, since only the values of the corresponding physical signal at those sampled moments are significant for further digital processing. The digital signal is a sequence of codes drawn from a finite set of values. The digital signal may be stored, processed or transmitted physically as a pulse-code modulation (PCM) signal. In communications In digital communications, a digital signal is a continuous-time physical signal, alternating between a discrete number of waveforms, representing a bitstream.
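The signal-processing definition above, a signal that is discrete in both time and amplitude, can be illustrated with a small Python sketch. It samples a continuous test function at regular intervals and rounds each sample to the nearest of a small set of levels; the sample rate, level count and test tone are arbitrary values chosen for the example.

```python
# Toy illustration of sampling and quantization (all parameter values arbitrary).
import math

def sample_and_quantize(signal, duration_s, sample_rate_hz, levels):
    """Sample signal(t) uniformly, then round each sample to the nearest of
    `levels` equally spaced values spanning [-1.0, 1.0]."""
    step = 2.0 / (levels - 1)
    codes = []
    for i in range(int(duration_s * sample_rate_hz)):
        t = i / sample_rate_hz
        value = signal(t)                           # continuous-valued sample
        codes.append(round((value + 1.0) / step))   # nearest quantization level
    return codes                                    # finite set of integer codes

analog = lambda t: math.sin(2 * math.pi * 5 * t)    # 5 Hz test tone
digital = sample_and_quantize(analog, duration_s=0.2, sample_rate_hz=100, levels=8)
print(digital)   # a sequence of integers 0..7 - a digital signal
```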
The shape of the waveform depends on the transmission scheme, which may be either a line coding scheme allowing baseband transmission; or a digital modulation scheme, allowing passband transmission over long wires or over a limited radio frequency band. Such a carrier-modulated sine wave is considered a digital signal in literature on digital communications and data transmission, but is considered a bit stream converted to an analog signal in electronics and computer networking. In communications, sources of interference are usually present, and noise is frequently a significant problem. The effects of interference are typically minimized by filtering off interfering signals as much as possible and by using data redundancy. The main advantages of digital signals for communications are often considered to be noise immunity, and the ability, in many cases such as with audio and video data, to use data compression to greatly decrease the bandwidth that is required on the communication media. Logic voltage levels A waveform that switches representing the two states of a Boolean value (0 and 1, or low and high, or false and true) is referred to as a digital signal or logic signal or binary signal when it is interpreted in terms of only two possible digits. The two states are usually represented by some measurement of an electrical property: Voltage is the most common, but current is used in some logic families. Two ranges of voltages are typically defined for each logic family, which are frequently not directly adjacent. The signal is low when in the low range and high when in the high range, and in between the two ranges the behaviour can vary between different types of gates. The clock signal is a special digital signal that is used to synchronize many digital circuits. The image shown can be considered the waveform of a clock signal. Logic changes are triggered either by the rising edge or the falling edge. The rising edge is the transition from a low voltage (level 1 in the diagram) to a high voltage (level 2). The falling edge is the transition from a high voltage to a low one. Although in a highly simplified and idealized model of a digital circuit, we may wish for these transitions to occur instantaneously, no real world circuit is purely resistive and therefore no circuit can instantly change voltage levels. This means that during a short, finite transition time the output may not properly reflect the input, and will not correspond to either a logically high or low voltage. Modulation To create a digital signal, an analog signal must be modulated with a control signal. The simplest modulation, a type of unipolar encoding, is simply to switch on and off a DC signal so that high voltages represent a '1' and low voltages are '0'. In digital radio schemes one or more carrier waves are amplitude, frequency or phase modulated by the control signal to produce a digital signal suitable for transmission. Asymmetric Digital Subscriber Line (ADSL) over telephone wires does not primarily use binary logic; the digital signals for individual carriers are modulated with different-valued logics, depending on the Shannon capacity of the individual channel. Clocking Digital signals may be sampled by a clock signal at regular intervals by passing the signal through a flip-flop. When this is done, the input is measured at the clock edge, and the resulting value is then held steady until the next clock edge. This process is the basis of synchronous logic.
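The on-off form of unipolar encoding and the clocked sampling described above can be combined in a short sketch: a bitstream is turned into a two-level waveform, and a receiver then reads the waveform once per bit period, imitating a flip-flop that holds its decision until the next clock edge. The voltage levels, threshold and timing below are arbitrary illustration values, not taken from any particular logic family.

```python
# Toy unipolar (on-off) encoding plus clocked sampling of the resulting waveform.
# Levels, threshold and samples-per-bit are arbitrary illustration values.

HIGH, LOW = 5.0, 0.0          # assumed logic levels in volts
SAMPLES_PER_BIT = 8           # how finely the "waveform" is drawn

def encode_unipolar(bits):
    """Represent each bit as a stretch of high or low voltage."""
    waveform = []
    for bit in bits:
        waveform.extend([HIGH if bit else LOW] * SAMPLES_PER_BIT)
    return waveform

def sample_at_clock(waveform, threshold=2.5):
    """Imitate a flip-flop: read the waveform once per bit period, in the
    middle of the period, and hold that decision until the next clock."""
    return [
        1 if waveform[i] > threshold else 0
        for i in range(SAMPLES_PER_BIT // 2, len(waveform), SAMPLES_PER_BIT)
    ]

data = [1, 0, 1, 1, 0, 0, 1]
assert sample_at_clock(encode_unipolar(data)) == data   # bits recovered intact
```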
Asynchronous logic, which uses no single clock, also exists; it generally operates more quickly and may use less power, but is significantly harder to design.
Technology
Signal processing
null
29464172
https://en.wikipedia.org/wiki/Weibo
Weibo
Weibo (), or Sina Weibo (), is a Chinese microblogging (weibo) website. Launched by Sina Corporation on 14 August 2009, it is one of the biggest social media platforms in China, with over 582 million monthly active users (252 million daily active users) as of Q1 2022. The platform has been highly successful but has faced criticism for heavy censorship. Sina had gone public on the Nasdaq in 2000. In March 2014, Sina announced a spinoff of Weibo and filed an IPO under the symbol WB. Sina carved out 11% of Weibo in the IPO, with Alibaba owning 32% post-IPO. The company began trading publicly on 17 April 2014. In March 2017, Sina launched Sina Weibo International Version. In November 2018, Sina Weibo suspended its registration function for minors under the age of 14. In July 2019, Sina Weibo announced that it would launch a two-month campaign to clean up pornographic and vulgar information, named "Project Deep Blue" (蔚蓝计划). On 29 September 2020, the company announced it would go private again due to rising tensions between the US and China. Name "Weibo" (微博) is the Chinese word for "microblog". Sina Weibo launched its new domain name weibo.com on 7 April 2011, deactivating and redirecting from the old domain, t.sina.com.cn, to the new one. Due to its popularity, the media sometimes refers to the platform simply as "Weibo," despite the numerous other Chinese microblogging/weibo services including Tencent Weibo (腾讯微博), Sohu Weibo (搜狐微博), and NetEase Weibo (网易微博). However, the latter three have stopped providing services. Background Sina Weibo is a platform based on fostering user relationships to share, disseminate, and receive information. Through the website or the mobile app, users can upload pictures and videos publicly for instant sharing, with other users being able to comment with text, pictures and videos, or use a multimedia instant messaging service. The company initially invited a large number of celebrities to join the platform at the beginning and has since invited many media personalities, government departments, businesses and non-governmental organizations to open accounts for the purpose of publishing and communicating information. To avoid the impersonation of celebrities, Sina Weibo uses verification symbols; celebrity accounts have an orange letter "V" and organizations' accounts have a blue letter "V". Sina Weibo has more than 500 million registered users; out of these, 313 million are monthly active users, 85% use the Weibo mobile app, 70% are college-aged, 50.10% are male and 49.90% are female. There are over 100 million messages posted by users each day. With more than 100 million followers, actress Xie Na holds the record for the most followers on the platform. Despite fierce competition among Chinese social media platforms, Sina Weibo remains the most popular. History After the July 2009 Ürümqi riots, China shut down most domestic microblogging services, including Fanfou, the very first weibo service. Many popular non-China-based microblogging services like Twitter, Facebook, and Plurk have since been blocked. Sina Corporation CEO Charles Chao considered this to be an opportunity, and on 14 August 2009, Sina launched the tested version of Sina Weibo. Basic functions including message, private message, comment and reposting were made available that September. A Sina Weibo–compatible API platform for developing third-party applications was launched on 28 July 2010. 
On 1 December 2010, the website experienced an outage, which administrators later said was due to the ever-increasing numbers of users and posts. Registered users surpassed 100 million in February 2011. Since 23 March 2011, t.cn has been used as Sina Weibo's official shortened URL in lieu of sinaurl.cn. On 7 April 2011, weibo.com replaced t.sina.com.cn as the new main domain name used by the website. The official logo was also updated. In June 2011, Sina announced an English-language version of Sina Weibo would be developed and launched, though content would still be governed by Chinese law. On 11 January 2013, Sina Weibo and Alibaba China (a subsidiary of Alibaba Group) signed a strategic cooperation agreement. With more and more foreign celebrities using Sina Weibo, language translation, especially from Korean, has become an urgent need for Chinese users who wish to communicate with their idols online. In January 2013, Sina Weibo and NetEase.com announced that they had reached a strategic cooperation agreement. When users browse foreign language content, they can now directly obtain translation results through the YouDao Dictionary. The Sina Weibo financial report in February 2013 showed that its total revenue was approximately US$66 million and that the number of registered users had exceeded the 500 million mark. In April 2013, Sina officially announced that Sina Weibo had signed a strategic cooperation agreement with Alibaba. The two sides conducted in-depth cooperation in areas such as user account interoperability, data exchange, online payment, and internet marketing. At the same time, Sina announced that Alibaba, through its wholly owned subsidiary, had purchased the preferred shares and common shares issued by Sina Weibo Company for US$586 million, which accounted for approximately 18% of Weibo's total shares on a fully diluted basis. Ownership On 9 April 2013, Alibaba Group announced that it would acquire 18% of Sina Weibo for US$586 million, with the option to buy up to 30% in the future. Alibaba exercised this option when Weibo was listed on NASDAQ in April 2014. Users According to iResearch's report on 30 March 2011, Sina Weibo had 56.5% of China's microblogging market based on active users and 86.6% based on browsing time over competitors such as Tencent Weibo and Baidu. The top 100 users had over 485 million followers combined. More than 5,000 companies and 2,700 media organizations in China use Sina Weibo. The site is maintained by a growing microblogging department of 200 employees responsible for technology, design, operations, and marketing. Sina executives invited and persuaded many Chinese celebrities to join the platform. Users now include Asian celebrities, movie stars, singers, famous business and media figures, athletes, scholars, artists, organizations, religious figures, government departments, and officials from Hong Kong, Mainland China, Malaysia, Singapore, Taiwan, and Macau, as well as some famous foreign individuals and organizations, including Kevin Rudd, Boris Johnson, David Cameron, Narendra Modi, Toshiba, and the Germany national football team. Sina Weibo has a verification program for known people and organizations. Once an account is verified, a verification badge is added beside the account name. According to research by Sina Corporation, the number of active users reached over 400 million by Q1 2018, making Sina Weibo the 7th platform to reach at least 400 million active users, and daily usage increased by 21%.
In June 2020, Weibo was banned by the Government of India along with 58 other Chinese apps. Following this, Indian Prime Minister Narendra Modi's Weibo account was deactivated. Features Many of Sina Weibo's features resemble those of Twitter. A user may post with a 140-character limit (increased to 2,000 as of January 2016 with the exception of reposts and comments), mention or talk to other people using "@UserName" formatting, add hashtags, follow other users to make their posts appear in one's own timeline, re-post with "//@UserName" similar to Twitter's retweet function "RT @UserName", select posts for one's favorites list, and verify the account if the user is a celebrity, brand, business or otherwise of public interest. URLs are automatically shortened using the domain name t.cn, akin to Twitter's t.co. Official and third-party applications can access Sina Weibo from other websites or platforms. Users may: Submit up to 18 images/video files in every post Send personal messages to followers Follow others and be followed Post "stories" just like on Instagram React to posts using different emojis Receive monetary rewards that can be used in a digital store linked to Weibo View posts identified as hot or popular Display the location they post from Hashtags differ slightly from Twitter's: Sina Weibo uses the double-hashtag "#HashName#" format (the lack of spacing between Chinese characters necessitates a closing tag). Users can own a hashtag by requesting hashtag monitoring; the company reviews these requests and responds within one to three days. Once a user owns a hashtag, they have access to a wide variety of functions available only to them on the condition that they remain active (less than 1 post per calendar week revokes these privileges). Additionally, comments appear as a list below each post. A commenter can also choose to re-post the comment, quoting the whole original post, to their own page. Unregistered users can only browse a few posts by verified accounts. Neither unverified account pages nor comments to posts by verified accounts are accessible to unregistered users. Although often described as a Chinese version of Twitter, Sina Weibo combines elements of Twitter, Facebook, and Medium, along with other social media platforms. Sina Weibo users interact more than Twitter users do, and while many topics that go viral on Weibo also originate from the platform itself, Twitter topics often come from outside news or events. During the COVID-19 outbreak, Weibo also served as a data-collection point for tracking and detecting the spread of the coronavirus. Trending topics Sina Weibo's "trending topics" is a list of current popular topics based partly on tracking user participation and partly on the preference of Weibo staff. Once a topic is trending, it often becomes a heated issue and can have wide-ranging social influence. As such, the list has reshaped how Chinese people relate to the news media. Verification Sina Weibo has a verification policy, much like Twitter's account verification, for confirming the identity of a user (celebrities, organizations etc.). Once a user is verified, a colorful V is appended to their username; individuals receive an orange V, while organizations and companies receive a blue V. A graph and declaration certifying the verification appear on verified user pages.
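The double-hashtag convention and "@UserName" mention format described above can be illustrated with a small text-processing sketch. The patterns below are simplifications chosen for the example; Weibo's actual parsing rules are more involved and are not published in this form.

```python
# Toy extraction of Weibo-style "#topic#" hashtags and @mentions.
# Simplified illustrative patterns, not Weibo's real parsing rules.
import re

DOUBLE_HASHTAG = re.compile(r"#([^#]+)#")       # the closing '#' delimits the topic
MENTION = re.compile(r"@([\w\u4e00-\u9fff]+)")  # Latin or CJK characters after '@'

post = "今天天气不错#北京生活#,推荐给 @小明 #周末去哪儿#"
print(DOUBLE_HASHTAG.findall(post))   # ['北京生活', '周末去哪儿']
print(MENTION.findall(post))          # ['小明']
```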
There are several kinds of verification: personal, college, organization, verification for official accounts (government departments, social media platforms and famous companies), and Weibo Master (linked with phone numbers and followers). To protect the rights and interests of celebrities, Sina Weibo has launched a celebrity authentication system. The celebrity authentication logo is a gold "V" logo after the verified user's name. The certified figures are mainly stars of various industries, business executives and parties involved in major news stories. From 22:00 on 12 June 2020, users who wished to post comments had to have followed the blogger for more than 7 days, except on accounts that had restricted comments to "people I follow". This adjustment lasted for 7 days. Clients Sina produces mobile applications for various platforms to access Sina Weibo, including Android, BlackBerry OS, iOS, Symbian S60, Windows Mobile, Windows Phone and HarmonyOS. Sina has also released a desktop client for Microsoft Windows under the product name Weibo Desktop. International versions Sina Weibo is available in both simplified and traditional Chinese characters. The site also has versions that cater to users from Hong Kong and Taiwan. In 2011, Weibo developed an international edition in English and other languages. On 9 January 2018, the company ran a week-long public test of its English edition. Sina Weibo's official iPhone and iPad apps are available in English. Weibo International supports existing Weibo accounts and allows Facebook accounts to link to the platform; users can also use their mobile phone number (including international mobile phone numbers) to register new accounts. Weibo Stories One of the most recent features of Weibo is Stories. "Weibo's stories" is a video function allowing users to record a video and save it in a separate "Story" menu on their profile page. Weibo VLOG Weibo has also launched a new "Vlog" function. Every video with the hashtag VLOG is available on the main search page under the "VLOG" sub-menu. Weibo interviews Weibo interviews are text-based interviews hosted on the Weibo platform. Users post questions to the person being interviewed via Weibo posts and that person responds in real-time. Posting via text message If a user links their Weibo account to a cell phone number, the user can both make and receive Weibo posts via text message. The user can then upload posts by texting them to 1069 009 009 and they will appear on Weibo in real time. IP address In April 2022, Weibo began displaying users' IP addresses when they post and comment. Super-hashtags A super-hashtag is a centralised way of organising information around a topic. Users can post and discuss within a super-hashtag. Unlike regular hashtags, users can apply to host a super-hashtag, which gives them the authority to audit and hide posts. Once a user subscribes to a super-hashtag, they become a member of that community, and their membership level increases with their contributions to its discussion (such as signing in, posting and commenting). Paid promotion After publishing a post, a user can pay to promote it, increasing its exposure to a wider potential audience. Other services Weilingdi (微领地, literally, micro fief) is another service bundled with Weibo.
Similar to Foursquare, Weilingdi is a location-based social networking website for mobile devices; the site grew out of Sina's 2011 joint venture with GeoSentric's GyPSii. Sina's Tuding (图钉) photo-sharing service, similar to Instagram, is also produced by the same joint venture. Sina Lady Weibo (新浪女性微博) specializes in women's interests. Weibo Data Center enables users to access data analysis about a topic of their choice, Sina Weibo's official data, and demographic information. Sina Weibo has also recently released a desktop version available for free download at its website. Controversies Racism On 2 May 2021, a Weibo account belonging to the Chinese Communist Party's Central Political and Legal Affairs Commission posted an image of the Long March 5B rocket's launch next to a photo of mass cremations of the dead in India as a result of the COVID-19 pandemic with the caption "China lighting a fire versus India lighting a fire". The post was quickly deleted after it faced a massive backlash from users, and the hashtag related to the post was also deleted. According to a report by Human Rights Watch, racist content targeting black people is prevalent on Chinese social media platforms, including Weibo. Censorship In cooperation with internet censorship in China, Sina sets strict controls over the posts on its services. Posts with links using some URL shortening services (including Google's goo.gl), or containing blacklisted keywords, are not allowed on Sina Weibo. Posts on politically sensitive topics are deleted after manual checking. Users with few followers may be able to post on censored topics with relative freedom until they reach a critical mass of followers, which triggers enforced content supervision. Sina Weibo is believed to employ a distributed, heterogeneous strategy for censorship that has a great amount of defense-in-depth, which ranges from keyword list filtering to individual user monitoring. Nearly 30% of the total deletion events occur within 5–30 minutes, and nearly 90% of the deletions happen within the first 24 hours. On 9 March 2010, posts by Chinese artist and activist Ai Weiwei on Sina Weibo appealing for information about the 2008 Sichuan earthquake to be made public were deleted, and his account was closed by the site administrator. Attempts to register accounts with usernames alluding to Ai Weiwei were blocked. On 30 March 2010, Hong Kong singer Gigi Leung blogged about the jailed Zhao Lianhai, an activist and father to a 2008 Chinese milk scandal victim; that post was also deleted by an administrator shortly thereafter. On 16 March 2012, all users of Sina Weibo in Beijing were told to register with their real names. Starting on 31 March 2012, the comment function of Sina Weibo was shut down for three days, along with Tencent QQ. In May 2012, Sina Weibo introduced new restrictions on the content its users can post. In October 2012, Sina Weibo heavily censored discussion of the Foxconn strikes that month. On 4 June 2013, Sina Weibo blocked the terms "Today," "Tonight," "June 4," and "Big Yellow Duck." If a user searched using these terms, a message would appear stating that according to relevant laws, statutes and policies, the results of the search could not be shown. This censorship was implemented because a photoshopped version of Tank Man which swapped all tanks in the photo with the sculpture Rubber Duck had been circulating on Twitter.
According to a BBC News report, the decreasing number of users since 2014 can be attributed both to the crackdown by the Chinese government on the use of aliases to create accounts and to the rising threat from competitor WeChat. On 8 September 2017, Weibo gave an ultimatum to its users to verify their accounts with their real names by 15 September. The platform announced that same month that it would hire 1,000 "supervisors" from among its users to engage in censorship. These supervisors were supposed to report at least 200 pieces of content per month, with those with the best results being rewarded with special prizes, including iPhones and notebooks. On 18 February 2018, Sina Weibo provided a "Comment moderation" function for both head users and official members. Once this feature is enabled, comments are not displayed immediately but instead require approval from moderators. Users can use this feature to prevent illegal content from appearing in their comment sections. In April 2018, Weibo began a crackdown on anime, games, and short videos depicting "pornography, gore, violence and homosexuality". The CCP criticized Weibo's move, following which the company decided to exclude homosexual content from the purge. On 11 June 2020, the Cyberspace Administration of China ordered Weibo to suspend its "trending topics" page for a week. The CAC accused Weibo of "dissemination of illegal information". On 22 February 2022, Horizon News accidentally posted on its Weibo page its instructions not to post anti-Russia content related to the crisis between Russia and Ukraine. In January 2023, Sina Weibo suspended more than 1,000 social media accounts of critics of the Chinese government's response to COVID-19. Fake social media engagement Chinese social media is dominated by a strong influencer and celebrity fandom culture. Celebrities and digital influencers, or key opinion leaders (KOLs), compete fiercely for higher follower counts to attract lucrative brand deals. Despite some efforts undertaken by Weibo to curb fake engagement, the issue remains pervasive due to the incentives for influencers and the advanced nature of fake engagement tools. In 2018, a government crackdown exposed widespread manipulation on Sina Weibo, resulting in the temporary banning of numerous celebrities from its rankings. Notable figures like Wang Sicong were removed from the "hot searches" list, revealing a black market for manipulating rankings. Celebrities and KOLs exploit these tactics to enhance their visibility and suppress unfavorable stories. Weibo acknowledged this problem, listing banned terms and promising increased efforts to manage illegal content. Despite these measures, services offering to boost hashtags into top trending topics for a fee remain prevalent. Weibo is also inundated with fake followers, with 10,000 zombie followers costing around 10 yuan according to a 2019 Caixin report. Celebrity fan clubs act as comprehensive fake social media traffic generators, employing dedicated teams to create content and boost engagement figures. Reports indicate that a significant portion of top influencers have used these services to meet the minimum follower requirements for attracting advertisers. Promotions Weibo paid ads The average organic reach of a post on Weibo is around 10–15%. To attract more followers, three types of paid advertising options are available: Sponsored Post: Promotes to current followers and/or potential followers.
Weibo Tasks: Allows advertisers to pay for other accounts to repost, which in turn reach target audiences. Fensi Tong (粉丝通): The best-known paid advertising option on Weibo; allows more specific targeting options, including interests, gender, location and devices. Advertisers can choose between CPM (cost per mille; 0.5 CNY per thousand impressions) and CPC (cost per engagement; 0.5 CNY per effective engagement). Companies or organizations often use Fensi Tong and pay well-known Sina Weibo users (usually those with more than 1 million followers) to advertise to their followers. Livery airplane On 8 June 2011, Tianjin Airlines unveiled an Embraer E-190 jet in special Sina Weibo livery and named it "Sina Weibo plane" (新浪微博号). It was the first commercial airplane in China to be named after a website. Villarreal CF In January 2012, Sina Weibo also announced that it would sponsor Spanish football club Villarreal CF for its match against FC Barcelona, to increase its fanbase in China. CCTV 2018 New Year's Gala On 5 February 2018, Weibo officially announced that it would become the exclusive partner of the New Media Social Platform of the CCTV Spring Festival Gala in 2018 to attract more Chinese people worldwide to use Weibo. Statistics Sina Weibo's official accounts Weibo's Secretary: 194,144,293 Weibo's Service Center: 180,564,151 Weibo's Staff: 155,444,287 Most popular accounts (individuals) As of 19 April 2019, the following ten individuals managed the most popular accounts (name handle in parentheses) and the number of followers: Xie Na (xiena): 125,742,516 He Jiong (hejiong): 120,013,900 Yang Mi (yangmiblog): 107,601,756 Angelababy (realangelababy): 102,212,814 Chen Kun (chenkun): 93,456,957 Zhao Liying (zhaoliying): 86,690,864 Vicky Zhao (zhaowei): 85,650,051 Jackson Yee (yiyangqianxi): 84,620,416 Yao Chen (yaochen): 83,811,714 Deng Chao (dengchao): 80,972,525 Record-setting posts On 13 September 2013, the unverified handle "veggieg" (widely believed to be Faye Wong) posted a message suggesting that she had divorced her husband. The message was commented on and re-posted more than a million times in four hours. The record was broken on 31 March 2014 by Wen Zhang, who posted a long apology admitting an extramarital affair when his wife Ma Yili was pregnant with their second child. This message was commented on and re-posted more than 2.5 million times in 10 hours. (Ma's response generated 2.18 million responses in 12 hours.) On 22 June 2014, TFBOYS member Wang Junkai was awarded a Guinness World Record title for a Weibo post that was reposted 42,776,438 times. Luhan holds the Guinness World Record for most comments.
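As a back-of-the-envelope illustration of the two Fensi Tong billing models quoted above (0.5 CNY per thousand impressions under CPM, and 0.5 CNY per effective engagement under CPC-style billing), a campaign's cost can be estimated as follows. The traffic and engagement figures in the sketch are invented for the example and are not Weibo data.

```python
# Rough cost comparison of the two Fensi Tong billing models quoted above.
# Impression and engagement counts below are invented for illustration.

CPM_RATE_CNY = 0.5   # per 1,000 impressions (rate quoted in the text)
CPC_RATE_CNY = 0.5   # per effective engagement (rate quoted in the text)

def cpm_cost(impressions):
    return impressions / 1000 * CPM_RATE_CNY

def cpc_cost(engagements):
    return engagements * CPC_RATE_CNY

impressions = 2_000_000          # hypothetical campaign reach
engagement_rate = 0.01           # hypothetical 1% engagement

print(cpm_cost(impressions))                      # 1000.0 CNY under CPM billing
print(cpc_cost(impressions * engagement_rate))    # 10000.0 CNY under CPC billing
```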
Technology
Social network and blogging
null
44687845
https://en.wikipedia.org/wiki/Gas%20giant
Gas giant
A gas giant is a giant planet composed mainly of hydrogen and helium. Jupiter and Saturn are the gas giants of the Solar System. The term "gas giant" was originally synonymous with "giant planet". However, in the 1990s, it became known that Uranus and Neptune are really a distinct class of giant planets, being composed mainly of heavier volatile substances (which are referred to as "ices"). For this reason, Uranus and Neptune are now often classified in the separate category of ice giants. Jupiter and Saturn consist mostly of elements such as hydrogen and helium, with heavier elements making up between 3 and 13 percent of their mass. They are thought to consist of an outer layer of compressed molecular hydrogen surrounding a layer of liquid metallic hydrogen, with probably a molten rocky core inside. The outermost portion of their hydrogen atmosphere contains many layers of visible clouds that are mostly composed of water (despite earlier consensus that there was no water anywhere in the Solar System besides Earth) and ammonia. The layer of metallic hydrogen located in the mid-interior makes up the bulk of every gas giant and is referred to as "metallic" because the very large atmospheric pressure turns hydrogen into an electrical conductor. The gas giants' cores are thought to consist of heavier elements at such high temperatures and pressures that their properties are not yet completely understood. The placement of the Solar System's gas giants can be explained by the grand tack hypothesis. The defining differences between a very low-mass brown dwarf (which can have a mass as low as roughly 13 times that of Jupiter) and a gas giant are debated. One school of thought is based on formation; the other, on the physics of the interior. Part of the debate concerns whether brown dwarfs must, by definition, have experienced nuclear fusion at some point in their history. Terminology The term gas giant was coined in 1952 by the science fiction writer James Blish and was originally used to refer to all giant planets. It is, arguably, something of a misnomer because throughout most of the volume of all giant planets, the pressure is so high that matter is not in gaseous form. Other than solids in the core and the upper layers of the atmosphere, all matter is above the critical point, where there is no distinction between liquids and gases. The term has nevertheless caught on, because planetary scientists typically use "rock", "gas", and "ice" as shorthands for classes of elements and compounds commonly found as planetary constituents, irrespective of what phase the matter may appear in. In the outer Solar System, hydrogen and helium are referred to as "gases"; water, methane, and ammonia as "ices"; and silicates and metals as "rocks". In this terminology, since Uranus and Neptune are primarily composed of ices, not gas, they are more commonly called ice giants and distinct from the gas giants. Classification Theoretically, gas giants can be divided into five distinct classes according to their modeled physical atmospheric properties, and hence their appearance: ammonia clouds (I), water clouds (II), cloudless (III), alkali-metal clouds (IV), and silicate clouds (V). Jupiter and Saturn are both class I. Hot Jupiters are class IV or V. Extrasolar Cold gas giants A cold hydrogen-rich gas giant more massive than Jupiter but below a certain threshold mass will only be slightly larger in volume than Jupiter. For masses above that threshold, gravity will cause the planet to shrink (see degenerate matter).
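The five appearance classes listed above are usually tied to a planet's equilibrium temperature in the Sudarsky scheme. The sketch below maps a temperature to a class using approximate boundary values commonly cited for that scheme; the specific numbers are an assumption made for illustration and are not given in the text above.

```python
# Map an equilibrium temperature to one of the five gas-giant appearance classes
# described above. Boundary temperatures are approximate values assumed from the
# Sudarsky classification literature, not figures stated in this article.

def appearance_class(equilibrium_temp_k):
    if equilibrium_temp_k < 150:
        return "I (ammonia clouds)"
    if equilibrium_temp_k < 350:
        return "II (water clouds)"
    if equilibrium_temp_k < 900:
        return "III (cloudless)"
    if equilibrium_temp_k < 1400:
        return "IV (alkali-metal clouds)"
    return "V (silicate clouds)"

print(appearance_class(110))    # Jupiter-like temperature -> class I
print(appearance_class(1500))   # very hot Jupiter -> class V
```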
Kelvin–Helmholtz heating can cause a gas giant to radiate more energy than it receives from its host star. Gas dwarfs Although the words "gas" and "giant" are often combined, hydrogen planets need not be as large as the familiar gas giants from the Solar System. However, smaller gas planets and planets closer to their star will lose atmospheric mass more quickly via hydrodynamic escape than larger planets and planets farther out. A gas dwarf could be defined as a planet with a rocky core that has accumulated a thick envelope of hydrogen, helium and other volatiles, resulting in a total radius between 1.7 and 3.9 Earth radii. The smallest known extrasolar planet that is likely a "gas planet" is Kepler-138d, which has the same mass as Earth but is 60% larger and therefore has a density that indicates a thick gas envelope. A low-mass gas planet can still have a radius resembling that of a gas giant if it has the right temperature. Precipitation and meteorological phenomena Jovian weather Heat that is funneled upward by local storms is a major driver of the weather on gas giants. Much, if not all, of the deep heat escaping the interior flows up through towering thunderstorms. These disturbances develop into small eddies that eventually form storms such as the Great Red Spot on Jupiter. On Earth and Jupiter, lightning and the hydrologic cycle are intimately linked together to create intense thunderstorms. During a terrestrial thunderstorm, condensation releases heat that pushes rising air upward. This "moist convection" engine can segregate electrical charges into different parts of a cloud; the reuniting of those charges is lightning. Lightning can therefore be used as an indicator of where convection is occurring. Although Jupiter has no ocean or wet ground, moist convection seems to function much as it does on Earth. Jupiter's Red Spot The Great Red Spot (GRS) is a high-pressure system located in Jupiter's southern hemisphere. The GRS is a powerful anticyclone, swirling at about 430 to 680 kilometers per hour counterclockwise around the center. The Spot has become known for its ferocity, even feeding on smaller Jovian storms. Tholins are brown organic compounds formed by exposure to UV irradiation and found on various planetary bodies. The tholins present on Jupiter are drawn up into the atmosphere by storms and circulation; it is hypothesized that tholins ejected in this way become trapped in Jupiter's GRS, causing it to be red. Helium rain on Saturn and Jupiter Condensation of helium creates liquid helium rain on gas giants. On Saturn, this helium condensation occurs at certain pressures and temperatures when helium does not mix in with the liquid metallic hydrogen present on the planet. Regions on Saturn where helium is insoluble allow the denser helium to form droplets and act as a source of energy, both through the release of latent heat and by descending deeper into the center of the planet. This phase separation leads to helium droplets that fall as rain through the liquid metallic hydrogen until they reach a warmer region where they dissolve in the hydrogen. Since Jupiter and Saturn have different total masses, the thermodynamic conditions in the planetary interior could be such that this condensation process is more prevalent in Saturn than in Jupiter. Helium condensation could be responsible for Saturn's excess luminosity as well as the helium depletion in the atmosphere of both Jupiter and Saturn.
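The density argument for Kepler-138d mentioned above can be made concrete with a quick calculation: a body with roughly Earth's mass but a radius 60% larger spreads that mass over about four times Earth's volume. The sketch below uses standard values for Earth's mass and radius and is only an order-of-magnitude illustration of why such a low bulk density points to a thick gas envelope.

```python
# Back-of-the-envelope bulk density for an Earth-mass planet 60% larger than
# Earth, as described for Kepler-138d above. Standard Earth values; illustrative.
import math

EARTH_MASS_KG = 5.972e24
EARTH_RADIUS_M = 6.371e6

def bulk_density(mass_kg, radius_m):
    volume = 4.0 / 3.0 * math.pi * radius_m ** 3
    return mass_kg / volume            # kilograms per cubic metre

rho_earth = bulk_density(EARTH_MASS_KG, EARTH_RADIUS_M)
rho_puffy = bulk_density(EARTH_MASS_KG, 1.6 * EARTH_RADIUS_M)

print(round(rho_earth))   # ~5500 kg/m^3, consistent with a rocky body
print(round(rho_puffy))   # ~1350 kg/m^3, too low for rock, suggesting a gas envelope
```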
Physical sciences
Planetary science
Astronomy